Hashemian Blog
Web, Finance, Technology, Running

Building New PHP 7.3 on Old Fedora 14 Linux

by @ 5:36 pm
Filed under: Uncategorized

I have mentioned before that this server runs on old hardware and Fedora Core 14. That’s a 9-year-old OS, which in software terms is roughly three times its life expectancy at this point. Fedora is almost on version 31 now. Keeping an old OS around can be a real hassle, but then again so is upgrading to newer versions and making sure everything still works as before.

Nowadays updating the OS is a relatively simple task. The kernel gets updated almost as easily as any other component, and a reboot finishes the job. But at this point the Fedora 14 on this server is so old that there are no upgrade paths. The only way is a fresh install followed by migrating the files, testing everything along the way. And so I have decided to leave the OS alone, which is how this server is nearing a decade on the same OS.

I have previously covered how I have patched and upgraded components of the OS over time, from the Apache web server to PHP and Python to TLS to adding an IPv6 stack to this server. As the OS has been out of support for many years, in just about all cases the update job has been done by building and installing from source. With bits and pieces of built components all over the place, the server is now a patchwork hell, but it has been a surprisingly stable one so far.

The latest effort has been to update PHP to version 7.3. This is actually the second time I have made a PHP update attempt on this server. Last year I upgraded PHP to version 5.6 to head off some security bugs that had been discovered in prior versions, and to make use of some new language features. This time WordPress provided the impetus. The latest WordPress versions have started to display an alert on the dashboard when running on older PHP versions:

Updating to the latest version of PHP is pretty straightforward on a newer OS. It’s just a matter of making sure the correct repository is configured and then running the install. For example, in the RedHat/CentOS/Fedora world that entails adding the “Remi” repository followed by a “yum install”.
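On a supported system that recipe looks roughly like the following; the release URLs and package names are assumptions based on the Remi repository’s published instructions for CentOS/RHEL 7, so adjust for your distribution version:

```shell
# Add the EPEL and Remi repositories (CentOS/RHEL 7 URLs assumed)
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm

# Enable the PHP 7.3 repo and install
yum install yum-utils
yum-config-manager --enable remi-php73
yum install php php-cli

php -v   # verify the new version
```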

In my case, however, the standard “yum” option (now “dnf”) is out of the question. I needed to download the PHP 7.3 source and build it myself. Right from the start, “./configure” unleashed a torrent of errors. Some were complaints about older versions of products and libraries in my Fedora 14 OS, while others were about missing libraries. Fixing those errors is like falling down a rabbit hole of iterative updates. One library relies on another, which relies on yet another library at a newer version. That meant I had to download the sources and build the downstream libraries before I could come a level back up. In other cases I had not placed the previously built libraries in standard locations, so I had to actually edit the “configure” script and provide a bit of location guidance in a few areas. I also decided to forego a few unused extensions such as “Mapi” that I could do without.
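As a sketch, the configure step ends up looking something like this; the prefix and library paths here are assumptions standing in for this server’s actual layout, not the exact flags used:

```shell
# Point configure at the hand-built libraries living outside /usr
./configure \
    --prefix=/usr/local/php73 \
    --with-config-file-path=/usr/local/php73/etc \
    --with-openssl=/usr/local/ssl \
    --with-zlib \
    --enable-mbstring \
    --enable-zip
```

Each failed run names the library it could not find or could not stomach; building that dependency and re-running “./configure” is the iterative loop described above.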

When “./configure” finally ran successfully, emitting the “Makefile”, I had miraculously cleared the first hurdle. The “make” command was up next. This command, following the “Makefile” instructions, builds the executables and libraries of PHP. I was sure there would be errors, and I wasn’t surprised when I saw them. As I hunted down and corrected the errors, one persistent error kept emanating from a few lines in “php_zip.c”. After some consideration, I decided that I could comment out those lines without harming the overall product as it is used on this server*, and the second hurdle was finally overcome. I had my “php” and “php-cgi” holy grail executables.

The natural step after “make” is “make test”, which runs a battery of tests and checks against the newly built PHP to make sure it is functioning properly. I knew there was no passing all those tests given the old Fedora 14 OS. Instead I wanted to test the new PHP against the existing web pages on the server. This is of course not a proper test; even if it passes, there is no guarantee that future pages or scripts will function as expected. But this is my server and my pleasure, so I headed straight to executing “php -v” on the CLI. It was encouraging to see the new version, but a few errors also printed on the screen. At least I finally had PHP 7.3 running on the server.

Turns out those errors were due to missing extensions that PHP couldn’t load. A few of those extensions, such as “mysql” and “mcrypt”, have been deprecated and are no longer part of the PHP 7.3 core. Other errors, such as the “zip” load error, were caused by old library versions, requiring me to build and install newer versions of those libraries, such as “libssl”.

The final piece was the “mcrypt” extension, which has been deprecated and its use discouraged. Instead, the “Sodium” or “OpenSSL” extensions are suggested as safer alternatives. My site does make use of “mcrypt” and I need time to update those references, so leaving that extension out was out of the question.

Getting the “mcrypt” extension involved downloading its source from PECL, placing the source files under the “/ext/” directory, then “cd”-ing into “/ext/mcrypt/” and running “phpize”, then “./configure”, followed by “make”. Of course, even in this case I ended up with errors at the “./configure” step. Turns out I had an older version of “Autoconf” and, surprise, I had to download the new “Autoconf”, build it, and install it before I could build the “mcrypt” library.
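The sequence, roughly, with the tarball version and paths being assumptions for illustration:

```shell
# Build the PECL mcrypt extension in-tree against the freshly built PHP
cd php-7.3.9/ext
tar xzf ~/mcrypt-1.0.2.tgz          # source tarball fetched from pecl.php.net
mv mcrypt-1.0.2 mcrypt
cd mcrypt
phpize                              # prepares the build using the PHP dev tools
./configure --with-php-config=/usr/local/php73/bin/php-config
make && make install                # drops mcrypt.so into the extension dir
```

After that, an `extension=mcrypt.so` line in php.ini loads it.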

There’s no doubt that I need to migrate this server to a newer OS version, like, yesterday. Even though I’m happy that this site is now running on the latest version of PHP and WordPress has stopped complaining, I realize that this patchwork hell can be pushed only so far before it comes apart. It’s amazing that it has gone as far as it has, and I’m under no illusion that these newly built components are necessarily stable or secure when deployed in this fashion. I guess what I’m trying to say is: don’t try this at home.

Note: No sooner had I deployed PHP 7.3 on this server than word of several PHP code-execution flaws hit the street. While for most people the fix was to simply install the latest released version of PHP, for me it was back to the source download, code alterations, build, and deploy. At least I had a recent roadmap this time to get there faster.

* Altering source code before building the executable could have adverse consequences in terms of security and stability. It is risky.

Dabbling in IPv6

by @ 4:02 pm
Filed under: internet

Nowadays many people know of IP addresses: those four numbers (technically known as octets) separated by dots (e.g. 173.162.146.61) which connect all of our devices to the Internet and allow them to find each other. Of course many don’t understand the underlying technologies that make the whole thing work, and they don’t really need to.

That IP address that most people know is IPv4 and is almost 40 years old. Its address space was depleted back in 2011, as in there were no more addresses for the overseeing organization, ICANN, to hand out.

Back in 1999, realizing the dire situation of IPv4, the Internet Engineering Task Force (IETF) developed a new IP scheme known as IPv6. While IPv4 is 32 bits, allowing for about 4.3 billion addresses, IPv6 is 128 bits, allowing for 3.4×10^38 addresses. It’s a big number, so big that it’s doubtful we’ll run out of addresses even in a billion years. For now only a fraction of those addresses have been given out to the public, and even that’s a gigantic pool. Instead of octets, IPv6 addressing uses 8 hextets separated by colons, for example, 2603:3005:6009:b400:0:0:0:61, also represented as 2603:3005:6009:b400::61.

The idea was that, given enough time, IPv6 would supplant IPv4, and while there has been adoption over the past 20 years, it has not been at the anticipated rate. Here’s a Google chart on the IPv6 adoption rate. IPv4 continues to dominate the Internet, with technologies devised to extend its life, the main one being NAT.

Possibly the biggest hindrance to IPv6 adoption is that it does not interoperate with IPv4. Yes, the two can co-exist in what’s known as dual stacking, and they can communicate using translation technologies such as tunnels, but they are separate protocols. Organizations have spent many years and resources architecting their IPv4 networks, and they are understandably hesitant to spend the time and expense to repeat the process, especially considering the continued vast prevalence of IPv4.

Thankfully this site is not an organization with a large investment in its IPv4 network, so when I accidentally discovered that my Comcast Business modem was handing out IPv6 addresses, I was happy to start experimenting.

I have written before about the challenges of running this site on an old FC14 OS, but thankfully FC14 is IPv6-ready, no kernel patching or module loading necessary. With a bit of online learning I was able to spin up a public static IPv6 address on the NIC, and my server was now dual-stacked. As I progressed with the configuration I noticed that IPv6 has a bit of a learning curve even for the simplest operations. The ping command is ping6, traceroute is traceroute6, iptables rules no longer apply and one must use ip6tables, and there is no ARP table, among other differences.
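A short cheat sheet of those differences as they appear on a toolchain of this vintage (newer distros fold much of this into the unified `ip` and `nft` commands):

```shell
# IPv4 tool / concept        IPv6 counterpart on an FC14-era box
#   ping <addr>          ->  ping6 <addr>
#   traceroute <host>    ->  traceroute6 <host>
#   iptables ...         ->  ip6tables ...     (a completely separate rule set)
#   arp -a               ->  ip -6 neigh show  (IPv6 replaces ARP with NDP)
```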

Spinning up an Apache website on IPv6 took a bit of trial and error, but as I attempted to make the site public I hit a road bump, a small indication of why IPv6 has been slow to grow. For years I had been using the free DNS service provided by my domain registrar, Network Solutions. But as I proceeded to add an IPv6 host name I noticed that Network Solutions provided no such mechanism. IPv6 hosts are added as AAAA records rather than the A records used for IPv4 hosts. Reaching out to Network Solutions tech support confirmed that they do not support AAAA records.

The only option was to move my DNS hosting to another provider that supported AAAA records, and that search led me to Hurricane Electric. I can’t compliment this company enough. Migrating DNS was easy and while their interface seems a bit dated and a bit cumbersome (there is no zone import facility) everything worked perfectly save a tiny glitch. Even more impressive, their tech support replied to my email within minutes with helpful answers to quickly overcome the glitch. I was impressed, and no, I am not getting paid to endorse Hurricane Electric, just a satisfied user of their free DNS hosting.

You can now browse to the IPv6 site and see for yourself, but only if your device is IPv6-capable. If your provider doesn’t offer IPv6 and you want to experiment with it, Hurricane Electric has a free service for that called Tunnel Broker, which carries IPv6 traffic over an IPv4 connection. I tested it out on a Windows 10 host and it worked flawlessly.

Finally, if you want to see more details on your Internet connection, the whoami page will show you quite a bit of information about your online presence, IPv4 and IPv6 included.

Goodbye to Pingdom

by @ 10:47 pm
Filed under: web

I have used the free Pingdom service for nearly 9 years to monitor the health of this site. Over the years it has been a helpful service and at times they would add even more useful features.

Then came 2014, when SolarWinds, a public-then-private-then-public company catering to businesses with its analytics products and services, acquired Pingdom. The writing was on the wall. Soon the free-tier monitoring service started to lose features, and it became less and less useful over time.

Today came the final nail in the coffin: Pingdom will be ending the free service next month. Can't say it was unexpected. I can understand the company's position not to bother with small-time operators such as this site who want a free service. I can only imagine the company meeting where the termination decision was made, with the boss' opening kickoff line: "We've had enough of these leeches, let's cut these parasites loose."

Instead, Pingdom opted for an obviously self-serving email masquerading as a customer-serving initiative. I have no expectation of these companies providing free services to users, but they should try to be a bit more honest in their messages instead of this opener, dripping with insincerity.

They could have simply said that they'd want to get paid for the services they provide and leave it at that. As I deleted my account with Pingdom today, I felt some gratitude for the years of free monitoring. Then I promptly migrated to another company, StatusCake, which provides a similar web monitoring service. Yes, it is free, at least for now.

Must admit it wasn't a sad goodbye though. My philosophy is: never get too attached to anything, so that it's easy to let it go.

 



Read Financial Markets  |   Home  |   Blog  |   Web Tools  |   News  |   Articles  |   FAQ  |   About  |   Privacy  |   Contact
Donate Bitcoin: 1K9TzBvQ2oaEb4tX9t2vKDtZouMcpfV6QF
paypal.me/rhashemian
© 2001-2019 Robert Hashemian   Powered by Hashemian.com