Hashemian Blog
Web, Finance, Technology, Running

JAMstack - How To Make The Web Slower

by @ 10:47 am
Filed under: web — Tags: ,

Starting a few years ago, everywhere you look sites are getting updated with a new approach known as JAMstack. JAM, short for JavaScript, APIs, and Markup, is lauded as more revolutionary, more scalable, speedier, and more flexible than the older style of website development.

Some of these claims may be true, but for the most part the web crowd has drunk the Kool-Aid and jumped on the bandwagon, lest they be accused of being stuck in the '90s.

Think back to how websites used to work. A user requested a page; the server at the other end hunted and gathered the data, formatted the page, and sent it back to the browser. There was a bit of a delay with a blank page, and then immediate fulfillment. Yes, the designs were sometimes ugly, the fonts were small, and there wasn't much white space, but everything was there.

Look at how modern sites work today. There is a shorter initial delay, but the page fills in with a bunch of inane placeholders: a collection of gray boxes, spinning bullets, or wavy bars with nothing in them until the browser goes out and fetches the data to fill them, and at times that job seems to take an eternity.

I used to log in to my banking site, click on links, and the pages would return almost immediately, filled with all the necessary info. Now I log in and wait and wait and wait for each section to fill in while looking at a bunch of useless spinners and gray boxes.

Call me old-fashioned, and I won't deny it, but is this really progress? Couldn't we have just cleaned up the pages with better design, more white space, and more appealing fonts using newer styling techniques, instead of creating this monstrosity?

Of course we all want to brag about how our sites are designed using the latest versions of React, Angular, Node.js, and Flask. Weren't jQuery and CSS good enough for us? Did we really need to push a ton of junk into the browser in the name of client-side rendering?

The difference between the old and new sites wasn't as stark to me until I was tasked with migrating my company's CRM platform from GoldMine to Salesforce.com. Salesforce users can operate the site in two distinct formats: the Classic format and the new JAMstack-style format, known as Lightning Experience, which users are slowly being pushed into.

I started out with the Classic version. It looks dated but it's fast and responsive. It does the job and does it quickly. But knowing that the Classic version will eventually be retired, I forced myself to use the Lightning version. Compared to Classic, Lightning is slow and clunky. The pages start with a big spinner, then switch to smaller spinners, and then more spinners inside of those, and you wait and wait until all the data loads. Then you notice that not all the data has loaded, because as you scroll down you are confronted with more, you guessed it, spinners. But the design is something else. There are bears dancing, tigers prancing, butterflies fluttering their wings in blue skies with puffy clouds roaming everywhere. I don't know if I'm using a CRM platform or looking at a caricatured Bob Ross painting.

Now I have nothing against cartoon characters zipping around. The Lightning Experience is an experience all right, and admittedly the design is actually pretty nice, but it is anything but lightning fast once you've used the Classic version.

You could say that I'm nostalgic for the old days of Perl CGI, and perhaps that's true, a little. But I'm actually more nostalgic for how much faster the web used to be, slow modem baud rates notwithstanding. But this is the state of the web we have today. It has been decided that JAMstack is the way to go, even if it moves at the speed of molasses, or should I say, JAM.

A Better Way to Fight Scam Calls

by @ 8:22 pm
Filed under: social — Tags: ,

The usual advice on handling scam calls has been to not answer unknown caller IDs, or to hang up immediately and not interact, lest the scammers flag the number as live and call it more often.

However, a recent extensive study out of NC State has confirmed what I’d always suspected: scammers pay scant attention to whether and how the calls are answered and blindly keep on calling. They simply fire up their auto-dialers, sequentially calling numbers and connecting the answered calls to agents regardless of past history.

I get plenty of these calls myself, the immediate telltale sign being a local but unknown caller ID. If I have some time to kill, I pick up the call, go through the initial robot IVR qualification steps, and when the live agent finally comes on, I just waste their time.

There are plenty of ways to waste scammers' time. You can hang up after the initial greeting, talk gibberish, ask them to wait a minute and set the phone down, or just play along, pretending that you are interested in whatever junk they are peddling. The scammers are pretty savvy at recognizing time-wasters and hang up rather quickly, but by then you have wasted a bit of their time.

Now if everyone wasted a few seconds of the scammers’ time, that would quickly add up to a substantial loss for them, with the effort no longer justifying the potential gains. That should drive some, if not the majority, of the scam call centers out of business.
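
To put rough numbers on that intuition, here is a back-of-the-envelope sketch in Python. Every figure is an invented assumption for illustration, not a number from the NC State study:

```python
# Hypothetical figures, purely to illustrate how small delays scale.
calls_answered_per_day = 1_000_000   # assumed answered scam calls across all recipients
seconds_wasted_per_call = 20         # assumed agent time burned per answered call

agent_hours_lost = calls_answered_per_day * seconds_wasted_per_call / 3600
print(round(agent_hours_lost))       # 5556 agent-hours per day under these assumptions
```

Even at a modest 20 seconds per call, the aggregate cost to the call centers grows linearly with how many people pick up.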

So, go ahead and answer the scam calls. Every time you waste a scammer’s time, there’s the satisfaction of knowing that it might have saved a vulnerable person from falling victim or at least from being annoyed.

From Sendmail To G Suite Gmail

by @ 1:04 pm
Filed under: email,google — Tags: , ,

In a previous post I covered updating my self-hosted Sendmail program to the newest version, with some additions such as TLS 1.2 and DKIM. Version 8.15.2, the latest available at the time of my update, was nearly five years old.

Interestingly, less than two months after my update, Sendmail released a new version, 8.16.1. It must have been a coincidence that after five years of hibernation Sendmail decided to release a new version only a few weeks after I had gone to the trouble of updating my install, but I wasn’t about to go through the pain of building, testing, and deploying the new version and all its supporting components again.

Sendmail is a rock solid MTA and I could have happily stayed with the 8.15.2 version, but as I had mentioned in that blog post there were bigger concerns about continuing to self-host an email server. I was simply burned out from combating spammers and hackers and since all my emails (including uncaught spam) were forwarded to Gmail, my server’s reputation with Google wasn’t exactly stellar.

The time had come to end self-hosting and migrate to cloud hosting and G Suite was the perfect platform. G Suite by Google is one of several cloud products that companies can migrate their online presence to, including email service for their entire organization.

In my case that decision was even easier since I already had an unused free legacy G Suite account that had been languishing for many years. The legacy account has many limitations compared to the paid versions but it was good enough to proceed with the migration.

I configured hashemian.com as an alias domain for the G Suite account and created two users to handle the 55 or so email addresses. Each user can have a maximum of 30 email aliases, which is why two users were needed. After adding all the aliases to the users, I logged into Gmail with each user account and configured it to forward all incoming emails to my regular Gmail account and then delete the forwarded copies.
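
The user count falls out of simple arithmetic; using the post's own numbers, the 30-alias ceiling dictates the answer:

```python
import math

addresses = 55          # email addresses to cover, per the post
aliases_per_user = 30   # G Suite's per-user alias limit at the time

users_needed = math.ceil(addresses / aliases_per_user)
print(users_needed)     # 2
```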

The final step was to configure the DNS MX records, and emails bound for hashemian.com started to flow to the G Suite users and subsequently to my regular Gmail inbox.
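
For reference, a G Suite MX setup of that era generally looked something like the zone fragment below, with a prioritized set of Google mail hosts. Treat this as a sketch based on Google's published values at the time, not a copy-paste record set:

```
hashemian.com.  IN  MX  1   aspmx.l.google.com.
hashemian.com.  IN  MX  5   alt1.aspmx.l.google.com.
hashemian.com.  IN  MX  5   alt2.aspmx.l.google.com.
hashemian.com.  IN  MX  10  alt3.aspmx.l.google.com.
hashemian.com.  IN  MX  10  alt4.aspmx.l.google.com.
```

The lower the priority number, the earlier a sending server tries that host.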

Since my Sendmail install was no longer used to receive email, I blocked it from all outside traffic to stop all spam attempts from external hosts. Scanning the maillog file proved that all spam activity directed at my server had come to a halt which also had a nice side effect of significantly lowering the stress on my server.

Sendmail isn’t completely gone from my server. It’s still used to send out all internal and web-page-generated messages. At this point I could fully disable it and use a lightweight outbound SMTP program such as ssmtp, msmtp, or nullmailer to submit emails via Gmail’s SMTP relay service. Perhaps some day that will happen, but for now Sendmail is working fine sending outbound messages without much stress on the server, so there’s little reason for me to fully terminate it.
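
As a sketch of what that outbound path could look like without Sendmail, here is a minimal Python example that builds a message and submits it through Gmail's SMTP submission port. The addresses and credentials are placeholders, and Gmail typically requires an app password or the relay service enabled in the admin console:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Assemble a simple plain-text message.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_gmail(msg, user, password, host="smtp.gmail.com", port=587):
    # Connect, upgrade the session with STARTTLS, authenticate, and send.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)

# Placeholder addresses for illustration only.
msg = build_message("root@hashemian.com", "admin@example.com",
                    "nightly report", "all services nominal")
print(msg["Subject"])   # nightly report
# send_via_gmail(msg, "user@hashemian.com", "app-password")  # needs real credentials
```

Tools like msmtp implement essentially this same submission flow as a drop-in sendmail replacement binary.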

After over 10 years with Sendmail, it was time to hand the email service reins over to the cloud, and so far the only regret is not doing it sooner. My server isn’t the only one benefiting; I also have a lot less stress since the migration.

* G Suite is now known as Google Workspace.

Super Organic Raspberries

by @ 7:36 pm
Filed under: health — Tags: ,

Hiking has a lot of benefits, but if done at the right time, in the right place, and with a bit of attention, treasures like this super organic cluster of raspberries can be had for the price of zero.

No rinsing required!

Does It Make Sense To Self-Host Mail Server?

by @ 9:51 pm
Filed under: email,internet — Tags: ,

I have operated hashemian.com for over two decades now, earlier on hosted servers and eventually on my own server. During that time the domain has also been email capable, accepting and delivering emails sent to/from addresses such as [email protected]. This too was originally hosted but was eventually ported to my own server. My product of choice for hosting my own mail server (MTA) has been Sendmail. At one time Sendmail was the king of the hill. It's still in use today, albeit vastly eclipsed by other products such as Exim and Postfix, as can clearly be seen here.

Years ago I would use webmail clients such as SquirrelMail to read emails, but eventually, for the sake of convenience, I configured Sendmail to forward all @hashemian.com emails to my Gmail account. With Gmail I also gained great spam detection, but there are potential adverse effects in forwarding emails. One drawback is that Gmail could, and in fact does, block access from my server, especially if a few too many emails are forwarded. Gmail does not recognize that the forwarded emails are from various original senders and instead assumes all the emails originate from my server, taking punitive measures against what it perceives to be an abusive server. This periodic blacklisting has been happening for years now, and I’m sure it doesn’t bode well for my server’s reputation.

421 4.7.28 [XXX.XXX.XXX.XXX] Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited. Please visit https://support.google.com/mail/?p=UnsolicitedRateLimitError to review our Bulk Email Senders Guidelines. - gsmtp

There are steps I can take to correct, or at least mitigate, this issue. One would be to identify and block spammers from my server, and I do that from time to time when I find egregious activities. It helps, but it’s a manual chore and hardly efficient. Another would be utilizing products such as Fail2ban and SpamAssassin to combat spammers at the network and application levels. But that would mean more work for me in terms of configuring, tweaking, updating, and patching, and I’m too lazy for that. Also, instead of pushing emails to it, I could have Gmail pull emails using IMAP or POP. But that means maintaining another product such as Dovecot and opening ports on my server, inviting additional exploit activity. No thanks, not at this time, even if those ports could be restricted to Gmail’s IP addresses only.
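
For anyone less lazy, a Fail2ban jail for Sendmail can be a few lines of configuration. This is a sketch assuming a Fail2ban version that ships the stock sendmail-reject filter; the log path and thresholds would need tuning for a given setup:

```
# /etc/fail2ban/jail.local (sketch)
[sendmail-reject]
enabled  = true
port     = smtp,submission
logpath  = /var/log/maillog
maxretry = 3
bantime  = 86400
```

With a jail like this, hosts that trip the filter three times are banned at the firewall for a day, cutting the manual chore down considerably.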

Recently I undertook the effort to build from source and update Sendmail to its latest available version, 8.15.2, on my ancient but functional Fedora 14 server. As can be imagined, it wasn’t a simple task, especially since I wanted to bring as many ESMTP features aboard as possible, including support for STARTTLS on TLS 1.2. In some cases that meant hunting around for newer library sources to build into Sendmail. The effort was an eventual success, especially after I installed and started the service and mail began to flow. Then, to build on that momentum, I also added DKIM signing to Sendmail by building and installing dkim-milter.
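
For the curious, enabling STARTTLS in a from-source Sendmail build is done with build-time defines rather than configure flags. A site.config.m4 sketch, assuming OpenSSL headers and libraries are already built; the local paths are illustrative:

```
dnl devtools/Site/site.config.m4
APPENDDEF(`conf_sendmail_ENVDEF', `-DSTARTTLS')
APPENDDEF(`conf_sendmail_LIBS', `-lssl -lcrypto')
dnl Point at a locally built OpenSSL if it's not in a standard location:
APPENDDEF(`conf_sendmail_INCDIRS', `-I/usr/local/openssl/include')
APPENDDEF(`conf_sendmail_LIBDIRS', `-L/usr/local/openssl/lib')
```

Running `sh Build` then picks these up when compiling the sendmail binary.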

I must admit that even though the effort was successful, it wasn’t really cause for celebration. The latest version of Sendmail, while stable and rock solid, is nevertheless five years old now, not as ancient as the kernel it’s running on but still pretty aged as software goes these days. I’m still doubtful I would have felt any better had I switched the MTA to the more modern Exim or Postfix.

Fact is, times have changed, and with cloud services maturing and prices falling, there’s little reason to maintain a server. Sure, there’s the educational aspect to it, and some pride and autonomy, but it can be exhausting to keep up with all the updates and patches when you can spin up a fully loaded droplet on DigitalOcean for $5 or get cheap domain email service on G Suite or Office 365 (soon to be called Microsoft 365).

And with that in mind, I am slowly warming up to moving my domain’s email setup to my G Suite account. It’s an account I registered years ago, and thankfully Google has kept it free so far. It’ll be a bittersweet moment when I shut down Sendmail for the last time (although I may continue to use it for a bit longer for outbound messages), handing over the reins to G Suite. I suppose one concern would be whether, on that same day, Google will flip the existing free G Suite accounts to paid versions.

To be continued…

Building New PHP 7.3 on Old Fedora 14 Linux

by @ 5:36 pm
Filed under: Uncategorized — Tags: , ,

I have mentioned before that this server runs on old hardware and Fedora Core 14. That’s a 9-year-old OS, which in software terms is about three times its life expectancy at this point. Fedora is almost on version 31 now. Keeping an old OS around can be a real hassle, but then again so is updating and upgrading to newer versions and making sure everything still works as before.

Nowadays updating the OS is a relatively simple task. The kernel gets updated almost as easily as any other component, and a reboot finishes the job. But at this point the Fedora 14 running on this server is so old that there are no upgrade paths. The only way is to make a fresh install and then migrate the files, testing everything along the way. And so I have decided to leave the OS alone, and that’s how this server is nearing a decade on the same OS.

I have previously covered the fact that I have patched and upgraded components of the OS over time, from the Apache web server to PHP and Python to TLS to adding an IPv6 stack to this server. As the OS has been out of support for many years, in just about all cases the update job has been done by building and installing from source. With bits and pieces of built components all over the place, the server is now a patchwork hell, but it has been a surprisingly stable one so far.

The latest effort has been to update PHP to version 7.3. This is actually the second PHP update attempt I have made on this server. Last year I upgraded PHP to version 5.6 to head off some security bugs that had been discovered in prior versions, and to make use of some new language features. This time WordPress provided the impetus to make the update. The latest WordPress versions have started to display an alert on the dashboard when running on older PHP versions.

Updating to the latest version of PHP is pretty straightforward on a newer OS. It’s just a matter of making sure the correct repository is configured and then running the install. For example, in the RedHat/CentOS/Fedora world that entails adding the “Remi” repository followed by a “yum install”.
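
On a supported RedHat-family OS the whole exercise might boil down to a few commands along these lines. This is a sketch only; the release RPM URL and repo name vary by OS version, and none of it applies to Fedora 14:

```shell
# Sketch for a supported OS, shown only for contrast with the build below.
yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum-config-manager --enable remi-php73
yum install php php-cli php-common
```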

In my case, however, the standard “yum” option (now “dnf”) was out of the question. I needed to download the PHP 7.3 source and build it myself. Right from the start, “./configure” unleashed a torrent of errors. Some were complaints about older versions of products/libraries in my Fedora 14 OS, while others were about missing libraries. Fixing those errors is like falling into a rabbit hole of iterative updates: one library relies on another, which relies on yet another library with a newer version. That meant I had to download the sources and build the downstream libraries before I could come back up a level. In other cases I had not placed the previously built libraries in standard locations, so I had to actually edit the “configure” file and provide a bit of location guidance in a few areas. I also decided to forgo a few unused extensions, such as “Mapi”, that I could do without.
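
An alternative to editing “configure” directly is pointing it at the non-standard prefixes on the command line. A sketch of the kind of invocation involved; the flags and paths here are illustrative, not my exact build line:

```shell
PKG_CONFIG_PATH=/usr/local/openssl/lib/pkgconfig \
./configure \
    --prefix=/usr/local/php73 \
    --with-openssl=/usr/local/openssl \
    --with-zlib \
    --enable-mbstring \
    --enable-zip
```

Extensions that aren't needed can simply be left off the flag list, which also trims the dependency rabbit hole.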

When “./configure” finally ran successfully, emitting the “Makefile”, I had miraculously passed the first hurdle. The “make” command was up next. This command, following the “Makefile” instructions, builds the executables and libraries of PHP. I was sure there would be errors, and I wasn’t surprised when I saw them. As I was hunting down and correcting the errors, one persistent error kept emanating from a few lines in “php_zip.c”. After some consideration, I decided that I could comment out those lines without harming the overall product as it is used on this server*, and the second hurdle was finally overcome. I had my “php” and “php-cgi” holy grail executables.

The right next step after “make” is “make test”. In this step a number of tests and checks are run against the newly built PHP to make sure it is functioning properly. I knew there was no passing all those tests, given the old Fedora 14 OS. Instead I wanted to test the new PHP against the existing web pages on the server. This is of course not a proper test, as even if it passes, there’s no guarantee future pages or scripts will function as expected. But this is my server and my pleasure, so I headed straight to executing the CLI “php -v”. It was encouraging to see the new version, but a few errors also printed on the screen. At least I finally had PHP 7.3 running on the server.

Turns out those errors were due to missing extensions that PHP couldn’t load. A few of those extensions, such as “mysql” and “mcrypt”, have been deprecated and are no longer part of the PHP 7.3 core. Other errors, such as the “zip” load error, were caused by old library versions, requiring me to build newer versions of those libraries, such as “libssl”, and install them.

The final piece was the “mcrypt” extension, which has been deprecated and its use discouraged. Instead, the “Sodium” or “OpenSSL” extensions are suggested as safer alternatives. My site does make use of “mcrypt” and I need time to update those references, so leaving that extension out was out of the question.

Getting the “mcrypt” extension involved downloading its source from PECL, placing the source files under the “/ext/” directory, then changing into “/ext/mcrypt/” and running “phpize”, then “./configure”, followed by “make”. Of course, even in this case I ended up with errors at the “./configure” step. It turns out I had an older version of “Autoconf” and, surprise, I had to download, build, and install the new “Autoconf” before I could build the “mcrypt” extension.
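
In command form, the mcrypt build steps described above look roughly like this. Paths are illustrative, and the php.ini location depends on the install prefix:

```shell
# Build the mcrypt extension from its PECL source against the new PHP
cd php-7.3.x/ext/mcrypt    # PECL source unpacked here
phpize                     # generate a configure script matched to this PHP
./configure
make && make install
echo "extension=mcrypt.so" >> /usr/local/php73/lib/php.ini
```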

There’s no doubt that I need to migrate this server to a newer OS version, like, yesterday. Even though I’m happy that this site is now running on the latest version of PHP and WordPress has stopped complaining, I realize that this patchwork hell can be pushed only so far before it comes apart. It’s amazing that it has gone as far as it has, and I’m under no illusion about these newly built components being fully stable or secure when deployed in this fashion. I guess what I’m trying to say is: don’t try this at home.

Note: No sooner had I deployed PHP 7.3 on this server than word of several PHP code execution flaws hit the street. While for most people the fix was to simply install the latest released version of PHP, for me it was back to the source download, code alterations, build, and deploy. At least I had a recent roadmap this time to get there faster.

* Altering source code before building the executable could have adverse consequences in terms of security and stability. It is risky.

Dabbling in IPv6

by @ 4:02 pm
Filed under: internet — Tags:

Nowadays many people know of IP addresses: those four numbers (technically known as octets) separated by dots that connect all of our devices to the Internet and allow them to find each other. Of course many don't understand the underlying technologies that make the whole thing work, and they don't really need to.

That IP address that most people know is IPv4, and it is almost 40 years old. Its address space was depleted back in 2011, as in there were no more addresses for the overseeing organization, ICANN, to hand out.

Back in 1999, realizing the dire situation of IPv4, the Internet Engineering Task Force (IETF) developed a new IP scheme known as IPv6. While IPv4 is 32 bits, allowing for about 4.3 billion addresses, IPv6 is 128 bits, allowing for 3.4×10^38 addresses. It’s a big number, so big that it’s doubtful we’ll run out of addresses even in a billion years. For now only a fraction of those addresses has been given out to the public, and even that’s a gigantic pool. Instead of octets, IPv6 addressing uses 8 hextets separated by colons, for example, 2603:3005:6009:b400:0:0:0:61, also represented as 2603:3005:6009:b400::61.
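
Python's standard ipaddress module can illustrate both the zero-compression rule and the size of the space, using the same example address as above:

```python
import ipaddress

addr = ipaddress.IPv6Address("2603:3005:6009:b400:0:0:0:61")
print(addr.compressed)   # 2603:3005:6009:b400::61  (a run of zero hextets collapses to ::)
print(addr.exploded)     # 2603:3005:6009:b400:0000:0000:0000:0061
print(f"{2**128:.1e}")   # 3.4e+38 possible addresses in a 128-bit space
```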

The idea was that, given enough time, IPv6 would supplant IPv4, and while there has been adoption over the past 20 years, it has not been at the anticipated rate. Here’s a Google chart on the IPv6 adoption rate. IPv4 continues to dominate the Internet, with technologies devised to extend its life, the main one being NAT.

Possibly the biggest hindrance to IPv6 adoption is that it does not interoperate with IPv4. Yes, the two can co-exist in what’s known as dual stacking, and they can communicate using translation technologies such as tunnels, but they are separate technologies. Organizations have spent many years and resources architecting their IPv4 networks, and they are understandably hesitant to spend the time and expense to repeat the process, especially considering the continued vast prevalence of IPv4.

Thankfully this site is not an organization with a large investment in its IPv4 network, so when I accidentally discovered that my Comcast Business modem was handing out IPv6 addresses, I was happy to start experimenting.

I have written before about the challenges of running this site on an old FC14 OS, but thankfully FC14 is IPv6 ready: no kernel patching or module loading necessary. With a bit of online learning I was able to spin up a public static IPv6 address on the NIC, and my server was now dual-stacked. As I progressed with the configuration, I noticed that IPv6 has a bit of a learning curve even for the simplest operations. The ping command is ping6, traceroute is traceroute6, iptables rules no longer apply and one must use ip6tables, and there is no ARP table, among other differences.
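
For the record, the dual-stack setup amounted to a handful of commands of roughly this shape. The address is the example from above, the gateway is a placeholder, and all of these need root:

```shell
ip -6 addr add 2603:3005:6009:b400::61/64 dev eth0   # static public IPv6 on the NIC
ip -6 route add default via <gateway> dev eth0       # gateway placeholder
ping6 -c 3 ipv6.google.com                           # IPv6 reachability check
ip6tables -A INPUT -p tcp --dport 80 -j ACCEPT       # iptables rules don't carry over
```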

Spinning up an Apache website on IPv6 took a bit of trial and error, but as I attempted to make the site public, I hit a road bump, a small indication of why IPv6 has been slow to grow. For years I had been using the free DNS service provided by my domain registrar, Network Solutions. But as I proceeded to add an IPv6 host name, I noticed that Network Solutions provided no such mechanism. IPv6 hosts are added as AAAA records rather than the A records used for IPv4 hosts. Reaching out to Network Solutions tech support confirmed that they do not support AAAA records.
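
The distinction is just a record type in the zone. A sketch of the two record shapes side by side, with the IPv4 value masked, the IPv6 value taken from the example above, and the host names purely illustrative:

```
; A record (IPv4) vs AAAA record (IPv6), illustrative values
www.hashemian.com.    IN  A     XXX.XXX.XXX.XXX
ipv6.hashemian.com.   IN  AAAA  2603:3005:6009:b400::61
```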

The only option was to move my DNS hosting to another provider that supported AAAA records, and that search led me to Hurricane Electric. I can’t compliment this company enough. Migrating DNS was easy, and while their interface seems a bit dated and a bit cumbersome (there is no zone import facility), everything worked perfectly save a tiny glitch. Even more impressive, their tech support replied to my email within minutes with helpful answers to quickly overcome the glitch. I was impressed, and no, I am not getting paid to endorse Hurricane Electric; I am just a satisfied user of their free DNS hosting.

You can now browse to the IPv6 site and see for yourself, but only if your device is IPv6 capable. If your provider doesn’t offer IPv6 and you want to experiment with it, Hurricane Electric has a free service for that called Tunnel Broker, which tunnels IPv6 traffic over IPv4. I tested it on a Windows 10 host and it worked flawlessly.

Finally, if you want to see more details on your Internet connection, the whoami page will show you quite a bit of information about your online presence, IPv4 and IPv6 included.

Goodbye to Pingdom

by @ 10:47 pm
Filed under: web — Tags:

I have used the free Pingdom service for nearly 9 years to monitor the health of this site. Over the years it has been a helpful service and at times they would add even more useful features.

Then came 2014, when SolarWinds, a public-then-private-then-public company catering to businesses with its analytics products and services, acquired Pingdom. The writing was on the wall. Soon the free-tier monitoring service started to lose features, and it became less and less useful over time.

Today came the final nail in the coffin: Pingdom will be ending its free service next month. Can't say it was unexpected. I can understand the company's position not to bother with small-time operators, such as this site, who want a free service. I can only imagine the company meeting where the termination decision was made, with the boss' opening kickoff line: "We've had enough of these leeches, let's cut these parasites loose."

Instead, Pingdom opted for an obviously self-serving email masquerading as a customer-serving initiative. I have no expectation of these companies providing free services to users, but they should try to be a bit more honest in their messages instead of this opener, dripping with insincerity.

They could have simply said that they'd want to get paid for the services they provide and leave it at that. As I deleted my account with Pingdom today, I felt some gratitude for the years of free monitoring. Then I promptly migrated to another company, StatusCake, which provides a similar web monitoring service. Yes, it is free, at least for now.

Must admit that it wasn't a sad goodbye though. My philosophy is, never get too attached to anything to let it go.


Hey, It's Me on Google Maps

by @ 9:02 pm
Filed under: google,running-hiking — Tags:

I am a pretty regular user of Google Maps, as well as its satellite and street view features. One wonders if the people walking around with their faces blurred in street view know that they are being featured on the Google service.

Well, I don't have to wonder about that myself. A few months ago I stepped out of my home for a jog in my town and noticed the Google Maps car with its mounted camera driving alongside on the road. Now, the driver may have just been going somewhere or doing a test run, and even if the camera was recording, the footage might not have made it to Google Maps.

But I made a mental note of the encounter and would occasionally check street view to see if I had made it onto Google Maps. For months the street footage remained the same, and then one night this week I noticed a change. Positioning the map to the location of my car encounter, I finally found myself jogging along the sidewalk, blurry-faced but recognizable to myself.

And so for now I get to enjoy the weird satisfaction of being featured on Google Maps, although the distinction is a bit dubious and surely ephemeral. It won’t be long before Google Maps updates street view and any trace of my existence is wiped out at the click of a mouse. But not before I captured a few shots of myself starring in this episode of Google Maps street view 🙂

The GDPR Mess

by @ 4:35 pm
Filed under: business,internet,law,web — Tags: ,

With GDPR (General Data Protection Regulation) in full force since May 25, 2018, one must assume that the privacy and security of users are now fully protected. I think it’s an understatement to call that claim an exaggeration.

GDPR is a European regulation designed to protect the privacy of European citizens, giving them full control over their personal information. For most website operators it translates to getting users’ permission before doing anything with their data and deleting that data upon request.

While on the surface it is a well-intentioned law, little doubt remains that it has morphed into a giant confusion. The fact is, no one really knows all the subtleties of this law, and no one knows how to correctly implement it.

First there was a barrage of emails from companies proclaiming that they had new privacy policies, except who has the time to click on every email and read reams of legalese nonsense?

Now we have the omnipresent, ridiculous popup/slider on sites declaring some inane cookie policy with a button to accept the terms. This site is guilty of that too; you might have noticed a cookie disclaimer sliding up from the bottom of the screen. The popup is just a utility script hosted on some site, and I have no idea how it helps with your privacy and security while you are on this site.

Ironically, your privacy and security were just fine on this site before it showed you the GDPR cookie notice. No data was being collected on you, no cookies were being stored in your browser, and no tracking was being done. Of course, the Google services used on this site do some of those things, and those are separately covered by Google’s privacy policies.

Now, with the introduction of the cookie popup, this site has to use cookies to keep track of the fact that the user has been to the site and accepted the terms. In other words, this site has to tell users that it uses cookies because it uses cookies to tell users that it uses cookies. And now the site hosting the popup code knows about the user too. Moreover, a user who has just arrived at the site is not going to take the time to read all the cryptic nonsense in the privacy policy. Instead s/he is going to accept everything and continue. Now the site can do whatever it wants with the user’s data, and it has explicit permission from the user. That provides a pretty strong incentive to abuse the data without any fear of legal consequences.

Finally, how does the European law expect a small-time blogger to provide its users the same level of privacy provisions as Amazon or Facebook? Those are companies with billions of dollars at their disposal and an army of developers, attorneys, and consultants.

Now comes GDPR with its esoteric rules to confound the small sites, or even worse, shut them down because they didn’t ask a silly question with a checkbox next to it. So much for democratizing the Internet, where the small guys should have a shot at having their voices heard too.

But for now GDPR is here so by all means, read the disclaimer, visit the privacy page and click the stupid button. Don’t worry, your private data is safe with this site, especially since it doesn’t even ask for it.



© 2001-2021 Robert Hashemian   Powered by Hashemian.com