Hashemian Blog
Web, Finance, Technology, Running

Building New PHP 7.3 on Old Fedora 14 Linux

by @ 5:36 pm
Filed under: Uncategorized — Tags: , ,

I have mentioned before that this server runs on old hardware and Fedora Core 14. That's a 9-year-old OS, which in software terms is about three times its life expectancy at this point. Fedora is almost on version 31 now. Keeping an old OS around can be a real hassle, but then again so is upgrading to newer versions and making sure everything still works as before.

Nowadays updating the OS is a relatively simple task. The kernel gets updated to the new version almost as easily as any other component, and a reboot finishes the job. But at this point the Fedora 14 running on this server is so old that there are no upgrade paths left. The only way is to make a fresh install and then migrate the files, testing everything along the way. And so I have decided to leave the OS alone, and that's how this server is nearing a decade on the same OS.

I have previously covered how I have patched and upgraded components of the OS over time, from the Apache web server to PHP and Python to TLS to adding an IPv6 stack to this server. As the OS has been out of support for many years, in just about all cases the update has been done by building and installing from source. With bits and pieces of built components all over the place, the server is now a patchwork hell, but it has been a surprisingly stable one so far.

The latest effort has been to update PHP to version 7.3. This is actually the second time I have made a PHP update attempt on this server. Last year I upgraded PHP to version 5.6 to head off some security bugs that had been discovered in prior versions, and to make use of some new language features. This time WordPress provided the impetus: its latest versions have started to display an alert on the dashboard urging a PHP upgrade when running on older PHP versions.

Updating to the latest version of PHP is pretty straightforward on a newer OS. It's just a matter of making sure the correct repository is configured and then doing the install. For example, in the RedHat/CentOS/Fedora world that entails adding the "Remi" repository followed by "yum install".
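
On a current CentOS/RHEL-style system the steps would look roughly like this (a sketch only; the repository URL and package names vary by OS version and are not the exact commands used here):

# sketch for a CentOS/RHEL 7 style setup
yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum-config-manager --enable remi-php73    # requires the yum-utils package
yum install php php-cli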

In my case, however, the standard "yum" option (now "dnf") is out of the question. I needed to download the PHP 7.3 source and build it myself. Right from the start "./configure" unleashed a torrent of errors. Some were complaints about older versions of products/libraries in my Fedora 14 OS while others were about missing libraries. Fixing those errors is like falling into a rabbit hole of iterative updates. One library relies on another, which relies on yet another library with a newer version. That meant I had to download the sources and build the downstream libraries before I could come a level back up. In other cases, I had not placed the previously built libraries in standard locations, so I had to actually edit the "configure" file and provide a bit of location guidance in a few places. I also decided to forgo a few unused extensions such as "Mapi" that I could do without.
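
To give a flavor of the process, the goal was a configure command roughly along these lines; the prefix, flags and library paths here are illustrative assumptions, not the exact set this server ended up with:

./configure --prefix=/usr/local/php73 \
    --with-openssl=/usr/local/ssl \
    --with-zlib --with-curl --enable-mbstring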

When "./configure" finally ran successfully, emitting the "Makefile", I had miraculously passed the first hurdle. The "make" command was up next. This command, following the "Makefile" instructions, builds the executables and libraries of PHP. I was sure there would be errors and I wasn't surprised when I saw them. As I was hunting down and correcting the errors, one persistent error kept emanating from a few lines in "php_zip.c". After some consideration, I decided that I could comment out those lines without harming the overall product as it is used on this server*, and the second hurdle was finally overcome. I had my "php" and "php-cgi" holy grail executables.

The natural next step after "make" is "make test". In this step a number of tests and checks are run against the newly built PHP to make sure it is functioning properly. I knew there was no passing all those tests, given the old Fedora 14 OS. Instead I wanted to test the new PHP against the existing web pages on the server. This is of course not a proper test, as even if it passes, that's no guarantee future pages or scripts will function as expected. But this is my server and my pleasure, so I headed straight to executing the CLI "php -v". It was encouraging to see the new version, but a few errors also printed on the screen. At least I finally had PHP 7.3 running on the server.

Turns out those errors were due to missing extensions that PHP couldn't load. A few of those extensions, such as "mysql" and "mcrypt", have been deprecated and are no longer part of the PHP 7.3 core. Other errors, such as the "zip" load error, were caused by old library versions, requiring me to build newer versions of those libraries, such as "libssl", and install them.

The final piece was the "mcrypt" extension, which has been deprecated and its use discouraged. Instead the "Sodium" or "OpenSSL" extensions are suggested as safer alternatives. My site does make use of "mcrypt" and I need time to update those references, so leaving that extension out was out of the question.

Getting the "mcrypt" extension involved downloading its source from PECL, placing the source files under the "/ext/" directory, then doing a "cd" into "/ext/mcrypt/" and running "phpize", then "./configure", followed by "make". Of course, even in this case I ended up with errors at the "./configure" step. Turns out I had an older version of "Autoconf" and, surprise, I had to download the new "Autoconf", build it and install it before I could build the "mcrypt" extension.
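
The sequence went roughly like this; the version number and source path are examples, and it assumes libmcrypt and its headers are already in place:

tar xzf mcrypt-1.0.2.tgz
mv mcrypt-1.0.2 /path/to/php-7.3-src/ext/mcrypt
cd /path/to/php-7.3-src/ext/mcrypt
phpize
./configure
make
make install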

There's no doubt that I need to migrate this server to a newer OS version, like yesterday. Even though I'm happy that this site is now running on the latest version of PHP and WordPress has stopped complaining, I realize that this patchwork hell can be pushed only so far before it comes apart. It's amazing that it has gone as far as it has, and I'm under no illusion about these newly built components: they may not be fully stable or secure when deployed in this fashion. Guess what I'm trying to say here is, don't try this at home.

Note: No sooner had I deployed PHP 7.3 on this server than word of several PHP code execution flaws hit the street. While for most people the fix was to simply install the latest released version of PHP, for me it was back to the source download, code alterations, build and deploy. At least I had a recent roadmap this time to get there faster.

* Altering source code before building the executable could have adverse consequences in terms of security and stability. It is risky.

Dabbling in IPv6

by @ 4:02 pm
Filed under: internet — Tags:

Nowadays many people know of IP addresses, those 4 numbers (technically known as octets) separated by dots (e.g. 173.162.146.61) which connect all of our devices to the Internet and allow them to find each other. Of course many don't understand the underlying technologies that make the whole thing work, and they don't really need to.

The IP address format that most people know is IPv4, and it is almost 40 years old. Its address space was depleted back in 2011, as in there were no more addresses for the overseeing organization, ICANN, to hand out.

Back in 1999, realizing the dire situation of IPv4, the Internet Engineering Task Force (IETF) developed a new IP scheme known as IPv6. While IPv4 is 32 bits, allowing for about 4.3 billion addresses, IPv6 is 128 bits, allowing for 3.4×10^38 addresses. It's a big number, so big that it's doubtful we'll run out of addresses even in a billion years. For now only a fraction of those addresses has been given out to the public, and even that's a gigantic pool. Instead of octets, IPv6 addressing uses 8 hextets separated by colons, for example 2603:3005:6009:b400:0:0:0:61, also represented as 2603:3005:6009:b400::61.

The idea was that, given enough time, IPv6 would supplant IPv4, and while there has been adoption over the past 20 years, it has not been at the anticipated rate. Here's a Google chart on the IPv6 adoption rate. IPv4 continues to dominate the Internet with technologies devised to extend its life, the main one being NAT.

Possibly the biggest hindrance to IPv6 adoption is that it does not interoperate with IPv4. Yes, they can co-exist in what's known as dual stacking, and they can communicate using translation technologies such as tunnels, but they are separate technologies. Organizations have spent many years and resources architecting their IPv4 networks, and they are understandably hesitant to spend the time and expense to repeat the process, especially considering the continued vast prevalence of IPv4.

Thankfully this site is not an organization with a large investment in its IPv4 network, so when I accidentally discovered that my Comcast Business modem was handing out IPv6 addresses, I was happy to start experimenting.

I have written before about the challenges of running this site on an old FC14 OS, but thankfully FC14 is IPv6 ready, no kernel patching or module loading necessary. With a bit of online learning I was able to spin up a public static IPv6 address on the NIC, and my server was now dual-stacked. As I progressed with the configuration I noticed that IPv6 has a bit of a learning curve even for the simplest operations. The ping command is ping6, traceroute is traceroute6, iptables rules no longer apply and one must use ip6tables, and there is no ARP table, among other differences.
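
To give a hypothetical flavor of those differences (the address reuses the earlier example, and the interface name is an assumption):

ip -6 addr add 2603:3005:6009:b400::61/64 dev eth0
ping6 -c 3 ipv6.google.com                       # ping6 instead of ping
traceroute6 ipv6.google.com                      # traceroute6 instead of traceroute
ip6tables -A INPUT -p tcp --dport 80 -j ACCEPT   # ip6tables instead of iptables
ip -6 neigh show                                 # the NDP cache instead of an ARP table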

Spinning up an Apache website on IPv6 took a bit of trial and error, but as I attempted to make the site public, I hit a road bump, a small indication of why IPv6 has been slow to grow. For years I had been using the free DNS service provided by my domain registrar, Network Solutions. But as I proceeded to add an IPv6 host name I noticed that Network Solutions provided no such mechanism. IPv6 records are added as AAAA records rather than the A records used for IPv4 hosts. Reaching out to Network Solutions tech support confirmed that they do not support AAAA records.
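
The difference is easy to see with a couple of dig queries (the hostname is a placeholder):

dig +short A    www.example.com     # IPv4 address record
dig +short AAAA www.example.com     # IPv6 (AAAA) address record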

The only option was to move my DNS hosting to another provider that supported AAAA records, and that search led me to Hurricane Electric. I can't compliment this company enough. Migrating DNS was easy, and while their interface seems a bit dated and cumbersome (there is no zone import facility), everything worked perfectly save a tiny glitch. Even more impressive, their tech support replied to my email within minutes with helpful answers to quickly overcome the glitch. I was impressed, and no, I am not getting paid to endorse Hurricane Electric, just a satisfied user of their free DNS hosting.

You can now browse to the IPv6 site and see for yourself, but you can only access it if your device is IPv6 capable. If your ISP doesn't offer IPv6 and you want to experiment with it, Hurricane Electric has a free service for that called Tunnel Broker for IPv4 to IPv6 translation. I tested that out on a Windows 10 host and it worked flawlessly.

Finally, if you want to see more details on your Internet connection, the whoami page will show you quite a bit of information about your online presence, IPv4 and IPv6 included.

Goodbye to Pingdom

by @ 10:47 pm
Filed under: web — Tags:

I have used the free Pingdom service for nearly 9 years to monitor the health of this site. Over the years it has been a helpful service and at times they would add even more useful features.

Then came 2014, when SolarWinds, a public-then-private-then-public company catering to businesses with its analytics products and services, acquired Pingdom. The writing was on the wall. Soon the free tier monitoring service started to lose features, and it became less and less useful over time.

Today came the final nail in the coffin: Pingdom will be ending the free service next month. Can't say it was unexpected. I can understand the company's position not to bother with small-time operators such as this site who want a free service. I can only imagine the company meeting where the termination decision was made, with the boss' opening kickoff line, "We've had enough of these leeches, let's cut these parasites loose."

Instead Pingdom opted for an obviously self-serving email masquerading as a customer-serving initiative. I have no expectation of these companies providing free services to users, but they should try to be a bit more honest in their messages instead of this opener, dripping with insincerity.

They could have simply said that they want to get paid for the services they provide and leave it at that. As I deleted my account with Pingdom today, I felt some gratitude for the years of free monitoring. Then I promptly migrated to another company, StatusCake, which provides a similar web monitoring service. Yes, it is free, at least for now.

Must admit that it wasn't a sad goodbye though. My philosophy is: never get so attached to anything that you can't let it go.

 

Hey, It's Me on Google Maps

by @ 9:02 pm
Filed under: google,running-hiking — Tags:

I am a pretty regular user of Google Maps as well as its satellite and street view features. One wonders if the people walking around with their faces blurred in street view know that they are being featured on the Google service.

Well, I don't have to wonder about that myself. A few months ago I stepped out of my home for a jog in my town and noticed the Google Maps car with the mounted camera driving alongside on the road. Now, the driver may have just been going somewhere or doing a test run, and even if the camera was recording, the footage might not have made it to Google Maps.

But I made a mental note of the encounter and would occasionally check street view to see if I had made it onto Google Maps. For months the street footage remained the same, and then one night this week I noticed a change. Positioning the map to the location of my car encounter, I finally found myself jogging along the sidewalk, blurry-faced but recognizable to myself.

And so for now I get to enjoy the weird satisfaction of being featured on Google Maps, although the distinction is a bit dubious and surely ephemeral. It won't be long before Google Maps updates street view and any trace of my existence is wiped out at the click of a mouse. But not before I captured a few shots of myself starring in this episode of Google Maps street view 🙂

The GDPR Mess

by @ 4:35 pm
Filed under: business,internet,law,web — Tags: ,

With GDPR (General Data Protection Regulation) in full force since May 25, 2018, one must assume that the privacy and security of users are now fully protected. I think it's an understatement to call that claim an exaggeration.

GDPR is a European regulation designed to protect the privacy of European citizens, giving them full control over their personal information. For most website operators it translates to getting users’ permission before doing anything with their data and deleting that data upon request.

While on the surface it is a well-intentioned law, little doubt remains that it has morphed into a giant confusion. The fact is, no one really knows all the subtleties of this law and no one knows how to implement it correctly.

First there was a barrage of emails from companies proclaiming that they had new privacy policies, but who has the time to click on every email and read reams of legalese nonsense?

Now we have the omnipresent, ridiculous popup/slider on sites declaring some inane cookie policy with a button to accept the terms. This site is guilty of that too; you might have noticed a cookie disclaimer sliding up from the bottom of the screen. The popup is just a utility script hosted on some site, and I have no idea how it helps with your privacy and security while you are on this site.

Ironically, your privacy and security were just fine on this site prior to showing you the GDPR cookie notice. No data was being collected on you, no cookies were being stored in your browser, and no tracking was being done. Of course the Google services used on this site do some of those things, and those are separately covered by Google's privacy policies.

Now with the introduction of the cookie popup, this site has to use cookies to keep track of the fact that the user has been to the site and accepted the terms. In other words, this site has to tell users that it uses cookies because it uses cookies to tell users that it uses cookies. And now the site hosting the popup code knows about the user too. Moreover, the user who has just arrived at the site is not going to take the time to read all the cryptic nonsense in the privacy policy. Instead s/he is going to accept everything and continue. Now the site can do whatever it wants with the user's data, and it has explicit permission from the user. That provides a pretty strong incentive to abuse the data without any fear of legal consequences.

Finally, how does the European law expect a small-time blogger to provide its users the same level of privacy provisions as Amazon or Facebook? Those are companies with billions of dollars at their disposal and an army of developers, attorneys and consultants.

Now comes GDPR with its esoteric rules to confound the small sites, or even worse shut them down because they didn't ask a silly question with a checkbox next to it. So much for democratizing the Internet, where the small guys should have a shot at having their voices heard too.

But for now GDPR is here so by all means, read the disclaimer, visit the privacy page and click the stupid button. Don’t worry, your private data is safe with this site, especially since it doesn’t even ask for it.

Google Finance Unusable Design

by @ 2:37 pm
Filed under: financial,google — Tags: ,

Sometimes one wonders if giant companies like Google ever think about their end users when they redesign their properties. One such case is Google's recent redesign of their financial site, Google Finance.

There are plenty of sites around for people to track their stock portfolios, real or imaginary. But if you are a Google user and interested in stocks, chances are you have been using Google Finance to keep tabs on the market.

Google Finance was never a particularly rich site in terms of data and information. It has been dismal at covering earnings data, options prices, analyst ratings, and much more. But one area where it was decent was the homepage, where one could get a quick glimpse of his/her custom stock portfolio, local and international market indices, currencies, interest rates, and some commodities such as gold and oil.

That is no longer the case. A few months ago Google started to push users to their new and updated finance site. The old site was still available via an unpublished link, but then a few weeks ago the link went offline and now everyone is forced to use the new site.

So what's wrong with the new site? Design-wise it is more polished and modern than the old one, and at the same time it is virtually useless; it simply sucks. What Google failed to understand is that most people interested in the markets want to see as much data crammed into as little space as possible, as timely as possible.

Instead it appears that Google threw a bunch of design-snob interns together with the latest web development tools (AngularJS, no doubt) to create a modern and responsive finance site. There is a ton of white space, no charts, no indices, no commodities, and only a couple of stocks from the portfolio shown on the homepage.

Well, this strategy has backfired and I am far from the only one complaining. Google's own search box suggestions bear witness to that, not to mention #Googlefinance.

Google Search Box

As for me, I opted to escape the hideous new design for Yahoo Finance. Yahoo pages are sometimes burdened with ads and other nonsense, but at least their portfolio page is leaps and bounds ahead of Google Finance's so-called modern/responsive design. Now that I have my custom portfolio on Yahoo there's little chance of going back to Google Finance, even if they did bring back the classic site.

Classic Google Finance

 

New Google Finance

The WHOIS Data Block

by @ 10:01 pm
Filed under: internet — Tags:

The WHOIS service is almost as old as the modern Internet. When you register a domain name, ICANN requires the domain registrar to collect the contact information of the domain holder and make that publicly available. There are a number of sites online that let users query the WHOIS database for various domain names. This site also has such a whois service.

Some registrars give their clients the option of private registration, where the owner's contact info is replaced with generic data to protect the client's identity. That is fine, except that some registrars like GoDaddy and Network Solutions charge fees for such a service, which costs them nothing to operate. Some ethical registrars, such as Google Domains or 1&1, offer this service for free.

By ICANN rules, the owner, administrative and technical contact information for the domain must be kept current, which is why domain owners are contacted annually to verify this data and update it if necessary. There is also another ICANN rule that obligates registrars to make this information publicly available over a web page as well as over the whois service on port 43.

Here's where some domain registrars such as GoDaddy are in violation of the ICANN rule. A whois lookup for a domain registered with GoDaddy reveals that the domain's WHOIS information is delegated to whois.godaddy.com. Querying whois.godaddy.com for that domain over the whois port returns very little contact information when it should return complete contact information according to the ICANN rule. Instead it points users to GoDaddy's own website to get the domain details.
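
This is easy to reproduce with a standard whois client by pointing it at the registrar's server on port 43 (the domain is a placeholder):

whois -h whois.godaddy.com example.com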

One must assume this is a GoDaddy scheme to draw visitors to its site to peddle its products and services. But how does it get away with flouting the ICANN rule? So is it a rule or a suggestion? Some may approve of GoDaddy masking much of the domain contact info over WHOIS as a way to road-block spammers from speedy access. Of course this data can still be had on GoDaddy's own website. But if GoDaddy's intention is to protect its clients from prying eyes, then why does it charge them for private registration that costs $0 to operate?

A Simple Post-HTTP-to-HTTPS SEO Checklist

by @ 3:21 pm
Filed under: google,web — Tags: , ,

With Chrome version 62 arriving next month, Google will begin making good on its promise of warning users when they land on non-secure (non-SSL, non-TLS) sites. This will be subtle at first, with a light gray warning on pages that contain any input forms. The warning will get progressively more prevalent and prominent with every new version of Chrome, and one must imagine other browsers will follow suit.

Another angle from which HTTPS is being pushed is AMP pages. Secure AMP pages are widely preferred by Google over non-secure ones. In fact it seems non-secure AMP pages are not even picked up by Google News. That should give content and news sites a serious dose of inducement to go secure if they want better representation in the mobile world.

This blog has already covered the available options for making a site secure, but once secure, what can sites do to effectively promote their new secure status to search engines and, by extension, to their audience?

Here is a checklist of steps to take once a site is migrated to secure HTTPS/SSL.

  • Test the site with a reputable SSL/TLS utility such as https://www.ssllabs.com/ssltest/ and aim for a high grade.
  • Make sure all pages get a green padlock. For that, all page elements' URLs must be relative or start with https:// or //. Either update them manually, use plugins for CMSs like WordPress, or use server modules, for example mod_substitute for Apache.
  • Use the Content-Security-Policy: upgrade-insecure-requests header or its meta tag equivalent. Not all browsers support this CSP header, but the majority do. It instructs the browser to upgrade all HTTP elements on the page to their HTTPS equivalents.
  • Use canonical headers or link tags to point to the HTTPS versions of your pages (see https://support.google.com/webmasters/answer/139066). The canonical tag points search engines to the most desirable and valid version of a page.
  • Redirect all your HTTP pages to their HTTPS versions. What you want here is a hard or permanent redirect, also known as a 301 redirect. This is best accomplished as a response header code, and for that access to the web server configuration is needed. There are alternative redirection methods, such as the refresh meta tag or the JavaScript location object, but the server response header works best (a quick command-line check for the redirect and the security headers is sketched after this list).
  • Use Google Search Console (previously known as Webmaster Tools) to advise Google of your site and, to some extent, instruct Google on crawling and indexing your pages via Sitemaps. If you already have the non-secure version of your site in Search Console, that's not enough; you must now add the HTTPS version of your site. Search Console is also a great tool for monitoring how Google interacts with your site. It even sends emails if it runs into any issues, such as an inability to crawl your site or finding malware.
  • If you use freebie certs, use a reputable certificate authority. For example, StartSSL certificates are no longer trusted by some browsers, but Let's Encrypt is fast gaining momentum. There are drawbacks, such as the lack of wildcard certificates and shorter validity durations, so it takes a bit more management effort in return for no cost.
  • Utilize HTTP Strict Transport Security (HSTS) policy for your site. This policy instructs browsers to only interact with your site via HTTPS for a specified duration of time. This is strictly a response header field so access to the server configuration is necessary. It is doubtful that HSTS will improve search engine rankings, but it certainly doesn’t hurt and if a site has migrated to HTTPS, HSTS would be a wise security policy.
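
As a quick way to spot-check a few of these items from the command line (the hostname is a placeholder; one would hope to see a 301 with an https Location on the first request, and the HSTS and CSP headers on the second):

curl -sI http://www.example.com/  | grep -i -E "HTTP/|^location:"
curl -sI https://www.example.com/ | grep -i -E "strict-transport-security|content-security-policy"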

Like it or not, migrating to HTTPS is no longer a choice, unless one doesn’t mind being left behind. The prudent way of dealing with it is mapping out an HTTPS migration plan and once secure, taking steps to promote the new secure site.

WordPress Global Replace http to https

by @ 12:00 pm
Filed under: web — Tags: ,

If you are dealing with the pain of migrating your site from non-secure plain http to secure SSL/TLS https, then you are also dealing with the headache of making sure the elements on your pages such as images have https sources instead of http.

The reason is that if your pages are accessed over https but they contain http elements they are considered broken (mixed content) by just about all browsers. The point is that if a page is secure, everything in that page must be secure as well.

Different page elements are treated differently by browsers. As of now, Chrome loads non-secure images in the page, but the URL bar goes grey instead of the reassuring green with the secure padlock. Non-secure scripts are not even loaded, leading to page malfunction in many cases.

If you use WordPress, you know how much of a pain it is to mass-replace all the image sources in your past posts from http to https. You can manually update posts, which will take forever; update the backing MySQL database via SQL, but that's risky; or use a plugin, but that may be overkill.

A quick way to handle this is to add a filter that transforms the http occurrences on the fly. It won't change the actual posts, only how they are sent to the browser. And it's not 100%, but it's good enough for most sites, including this one. And worst case, you can always remove it and you'll be back to where you started.

As for the filter code, it's a one-liner that you add to the functions.php file of your theme. It actually converts the http URLs of image src attributes to protocol-relative ones, and that does the job. Here it is, and good luck:

add_filter('the_content',function($a){return preg_replace("#src=([\"'])http://#i","src=$1//",$a);});

The Long, Hard and Possibly Foolish Path to SSL/TLS Security

by @ 10:57 pm
Filed under: internet,web — Tags: , , , , , ,

... or TLS 1.2 on Fedora Core 14/FC14 and other older Linux versions

With the chorus of secure browsing getting louder and more prevalent, HTTPS migration is becoming inevitable. Going secure is a pretty major undertaking, fraught with numerous pitfalls. It starts with the source files that produce the HTML pages, and it can get ugly: if even one element in a page is called over http rather than https, there's no green padlock. The protocol-relative URL (//) instead of the hard-coded http:// or https:// is quite helpful to that end. That's one side of the equation. The other is the server itself.

I run my site on an older server with an old Fedora Core OS (FC14) and, by extension, an older version of web server software, Apache 2.2.17. Over the years I have updated a few components here and there and fixed and customized a bunch of others, especially after new vulnerabilities have popped up. Updating the server to the latest and greatest version would be a non-trivial task for me. The old hardware may not be sufficiently supportive, much of the OS customization would be lost, migrating the data and config files would be a pain, and there would be downtime as well. Yet FC14 cannot support the newer and safer SSL/TLS technologies considered acceptable by today's browsers.

At my day job, I have access to resources to overcome this problem by fronting the web server with other servers running newer technologies. For example, a combination of HAProxy and Varnish provides excellent web acceleration, load balancing, and SSL termination without making any updates to the core web server. No such luck for a small-time operator such as myself with limited resources, so what to do?

One approach would be to upgrade only the parts of the OS and Apache (the httpd program) that deal with encryption, but there isn't much in terms of online resources dealing with this topic other than the customary advice to upgrade the OS. In the end this became a long process of trial and error, but it was a successful endeavor with a good bit of learning as a bonus. Here's how I did it.

Apache 2.2.17 running on Fedora Core 14 can be configured for SSL; however, it can only provide support for up to TLS 1.0, with older cipher suites and the weak RSA key exchange. I had already patched OpenSSL after the Heartbleed bug became public, but what I needed were newer versions of the libcrypto.so.1.0.0 and libssl.so.1.0.0 libraries used by mod_ssl.so, the module used by httpd to enable SSL.

I downloaded the source and built OpenSSL version 1.0.1u. Building applications from source code in Linux is usually a three-step process: configure, make, and make install. After "make install" the new OpenSSL libraries were placed in an alternate directory, /usr/local/ssl, instead of overwriting their main system counterparts.
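
The build went roughly along these lines (the exact flags may have differed; shared libraries are the important part, since mod_ssl.so links against them):

./config shared --prefix=/usr/local/ssl --openssldir=/usr/local/ssl
make
make install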

The next step was to incorporate the new libcrypto.so.1.0.0 and libssl.so.1.0.0 libraries into mod_ssl.so, and my tool of choice for that was patchelf. I downloaded and built patchelf 0.9.

Before using patchelf I took one step which, in hindsight, I am not sure was necessary. That step was adding the location of these new libraries to a conf file under /etc/ld.so.conf.d and executing the ldconfig command to add this new location to the library cache /etc/ld.so.cache used by the linker.
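
Concretely that amounted to something like this (the conf file name is arbitrary):

echo "/usr/local/ssl/lib" > /etc/ld.so.conf.d/local-openssl.conf
ldconfig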

Here's an example of a command I used to replace one of the libraries in mod_ssl.so; I did the same for the other:

./patchelf --replace-needed libcrypto.so.1.0.0 libcrypto.so.1.0.0 mod_ssl.so

Then I used the ldd command to make sure mod_ssl.so was now linking to these new libraries.
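
A check along these lines confirms the linkage:

ldd mod_ssl.so | grep -E "libssl|libcrypto"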

Following an httpd restart and a check of one of the secure pages, I had the encouraging green padlock in the Chrome browser. Indeed the page was now using TLS 1.2 with a strong cipher. Yet the key exchange was still using the obsolete RSA. The new OpenSSL libraries had certainly made an improvement to mod_ssl.so, but the strong key exchange element was missing.

To overcome that issue I had to rebuild mod_ssl.so with a newer version of the Apache source code, version 2.2.32. The configure step was done with the following parameters:

./configure --enable-mods-shared="ssl"  --with-ssl=/usr/local/ssl

And after make I found the new mod_ssl.so in one of the subdirectories of the source tree. I skipped the make install step to avoid possible complications of the new version installing itself into the system. Interestingly, the new mod_ssl.so was already linked to the new libcrypto.so.1.0.0 and libssl.so.1.0.0 libraries I had created in the previous step. I suppose adding that directory to the library cache had helped with that.

I placed mod_ssl.so in the folder loaded by /etc/httpd/conf.d/ssl.conf and restarted httpd. And it failed to start, with a message about a missing symbol, ap_map_http_request_error! Obviously mod_ssl.so couldn't call into this function of the older httpd (or some library) version.

To fix that error I edited the file modules/ssl/ssl_engine_io.c and replaced the line:

return ap_map_http_request_error(rv, HTTP_INTERNAL_SERVER_ERROR);

with

return HTTP_INTERNAL_SERVER_ERROR;

I admit, this is a blind alteration with possible adverse repercussions, so I don't vouch for it. Executing make once again yielded a new mod_ssl.so, and this time httpd started just fine, now with a strong key exchange added in. Testing the site with SSL Labs gave additional confirmation that SSL encryption was indeed working fine.

If you're wondering, I use Let's Encrypt for free SSL certificates. The recommended utility for obtaining certificates is certbot, but that tool, with its overly complex and finicky Python virtual environment, wouldn't work under FC14. The tool that worked beautifully was getssl. It's a simple and clean, yet powerful and flexible tool written as a single executable bash script. Kudos to the getssl team for creating this robust tool.
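
A typical getssl run boils down to a couple of commands (the domain is a placeholder; the generated config controls where the certificate lands):

./getssl -c www.example.com     # create a default config for the domain
./getssl www.example.com        # obtain and install the Let's Encrypt certificate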

So there you have it for enabling modern SSL/TLS in an older environment, in this case FC14. The prevailing wisdom is to abandon the old OS and start fresh with a newer platform. I don't disagree with that philosophy, and I set out to do just that when I started on this journey. In the end, my way was possibly more difficult and more prone to pitfalls, but ultimately it ended up being more satisfying and more instructive.

I haven't migrated the entire site to HTTPS yet, but you can click secure whoami to view and examine the first secure page.

 
