November 30, 2012
I know it's sacrilegious to some to disable a security feature on a platform, but SELinux (Security-Enhanced Linux) has left me no choice but to do exactly that on my Linux hosts.
SELinux was added to Linux to provide additional security measures beyond what it inherited from Unix. By default, many Linux distros such as Fedora have SELinux built into their kernels and enabled upon install.
The issue is that SELinux can be so restrictive and obsessive about curbing malicious activity that it also hinders normal operations, leading to server stress or errors. Having been bitten by SELinux multiple times, I have vowed to deactivate it every time I install Linux on a host. The one time I forgot to disable it, the Varnish server I had set up for my company nearly died, taking the company's web site along for the ride. Looking inside the messages file, this is the arcane message I saw in prodigious numbers:
setroubleshoot: SELinux is preventing irqbalance from mmap_zero access on the memprotect Unknown. For complete SELinux messages. run sealert -l efce…
I know the security sticklers would accuse me of not setting up SELinux correctly, and for the record, SELinux is very configurable. But my favorite setting for SELinux is disabling it in the /etc/selinux/config file by setting SELINUX=disabled.
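For reference, this is the whole change on a stock Fedora install. One line in the config file does it; the commented setenforce command stops enforcement immediately without waiting for a reboot:

```shell
# /etc/selinux/config -- the edit takes effect on the next reboot
SELINUX=disabled
SELINUXTYPE=targeted

# to stop enforcement right away (permissive mode until reboot):
#   setenforce 0
```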
I don't have the time nor the inclination to learn SELinux's every minutia, which may or may not protect my hosts completely anyway. Old-fashioned file permissions, file ownership, suexec, sudo, suid, running daemons with least privilege, and a good dose of firewalling are good enough for me. Feel free to disagree.
August 13, 2012
After being hit by the Wordpress base64 hack twice within a couple of weeks, it finally dawned on me that the PHP CGI flaw was the culprit. The attack robots (a la Metasploit) combine knowledge of the PHP CGI flaw with the well-known scripts of popular products (Wordpress, Joomla, Drupal, etc.) to penetrate sites, and that's how this site was breached as well.
Most sites run PHP as a module, so those were spared the headache. I used to run PHP as a module many moons ago, but chose to move to CGI for several reasons. Here's an explanation of the difference between the two modes. What's disconcerting is that the CGI vulnerability had been around for some 8 years and no one seemed to have noticed during all that time. It was my PHP version 5.3.5 that did me in, so it was time to move to version 5.3.14. But where to find that php-cgi version for my Fedora install?
I went hunting for php-cgi 5.3.14 all over the net, from user websites to rpm repositories, but no dice for my Fedora version. Sometimes you can use an off version, but then you run the risk of crashes, library mismatches and other problems. So why not download the source from php.net and build it myself? Simple: I'm too lazy and hate building/rebuilding programs. There's the big download, long waits, missing libraries, warnings and errors. I had just come off a build hell with Varnish cache, and I wasn't about to plunge myself into another. But in the end I had no choice.
The Linux build world is a 3-word expression: configure, make, 'make install'. I'll add another word: loop. You download the source code and run configure to adjust the config and make files for your particular platform. Then you run make to build the program(s). Finally, you run 'make install' to install the program(s) on your system. Sounds easy enough until you realize that a million things can and will go wrong, and that's where 'loop' comes in, as you configure and make repeatedly to get things right.
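As it played out for PHP, the loop looked roughly like this. The configure flags below are illustrative, not the exact set I ended up with:

```shell
# fetch and unpack the source, then enter the loop
tar xzf php-5.3.14.tar.gz
cd php-5.3.14

./configure --with-mysql --with-zlib --enable-mbstring  # fails? install the missing -devel package, rerun
make                                                    # errors? adjust flags, configure again, rerun

# the CGI binary ends up in sapi/cgi/php-cgi; a full 'make install' would
# clobber the distro's files, so in my case I copied just that one by hand
```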
Building PHP 5.3.14 didn't disappoint. I ended up running configure and make dozens of times until I finally got the binary I wanted with all the necessary libraries included. Building PHP with make produces a number of binaries, including the main command-line interpreter as well as php-cgi, and it was the latter I was interested in.
I copied the newly-built php-cgi over the old version, tested the site and called it a day. Now I know this sort of installation is a cardinal sin to many techies, no argument there. Think about it: I am running a php-cgi version that is out of sync with the main PHP interpreter on the same machine, and rpm (the package manager) has no clue about which version of PHP is really installed on the system. At the first rpm update, the whole system could come crashing down. I just have to be cognizant of the fact that I have a fragmented PHP installation. Eventually, when I migrate the site to a new Fedora version, all will be back to good. For now, it's good enough to have been hack-free for a couple of months.
July 5, 2012
Surely the hacker must have exploited some Wordpress vulnerability, I thought. A quick search on the web for the Wordpress base64 hack brings up plenty of pages covering such hacking cases.
I started out by examining the MySQL tables, doing a global search for terms such as base64 or eval. The wp_options table had plenty of such entries, and at first it seemed like I had found the hacker's stash. wp_options is where Wordpress and the plugins save their parameter data. There were also lots of entries with the "transient" keyword. In the end they all turned out to be innocuous. Transients can become real nuisances, but that's a different topic.
With the database search behind me, I put the focus on the files. Deep searching for base64_decode and eval produced a number of hits. Here's a simple command to achieve this search:
$ grep -rl base64_decode *
Some of the hits were legitimate, but eventually I ran into two types of files that were obvious hacks. The first type were mostly legitimate index.php files that had been altered with a giant code block right at the top. The blocks were of the eval(base64_decode(long-hex-string)); variety. Removing the block appeared to restore the files to their original form. The other type were small php files with varied names containing one or two lines of code like eval(stripslashes($_REQUEST['a']));. This code basically executes raw code passed in as a parameter to the page: very simple, very effective, and very dangerous.
Armed with that knowledge I went snooping around the site looking for small-sized files and any files that had been altered recently.
$ find . -size -1k -name '*.php'
$ find . -mtime -7 -name '*.php'
The first command returns php files that are 1kb or less in size. The second returns php files that were modified in the last 7 days. I dug through the long list of files, fixed the altered ones by removing the malicious code blocks and then deleted the small dropped-in files.
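The searches can also be rolled into one pass that lists only the recently modified php files actually containing the telltale calls:

```shell
# list PHP files modified in the last 7 days that mention eval( or base64_decode
find . -name '*.php' -mtime -7 -exec grep -lE 'eval\(|base64_decode' {} +
```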
Finally, I upgraded the Wordpress installation to the latest version and everything was back to good, or so I thought. Within about two weeks I was hacked again in almost the same way as the first time. How did I recover from it and plug the hole? Hint: PHP was the actual culprit. Stay tuned …
June 28, 2012
After a bit of investigating I found a number of files, especially those named index.php, that had been altered with a code block at the top starting with something like "eval(base64_decode(…" followed by a long string of hex numbers. I decoded the hex string and ended up with a php code block that looked pretty devious, with references to Chinese sites.
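Decoding these blobs to see what they do is a one-liner. The payload below is a made-up stand-in for illustration (the real blocks ran to thousands of characters and decoded to far nastier code), but the mechanics are the same:

```shell
# decode a (hypothetical) payload to reveal the php it would run
echo 'ZXZhbCgkX1JFUVVFU1RbJ2EnXSk7' | base64 -d
# prints: eval($_REQUEST['a']);
```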
Do a search for the Wordpress base64 hack and you'll find thousands of sites addressing this issue. Wordpress doesn't exactly have a good security reputation, but its latest versions were thought to be more secure. In this case, though, Wordpress wasn't at fault. It turns out the real culprit was php-cgi 5.3.5 and a nasty security hole stretching back some 8 years. The bug would allow an attacker to view the source code of a page, run arbitrary code, and generally be a pain in the a$$.
I suspect that in some cases where Wordpress (or Joomla or Drupal or phpMyAdmin) has been hacked, the true culprit has been php-cgi. The reason these popular programs are targets is that their structures and operations are well-known to all, including the hackers. All it takes to exploit vulnerable sites is to write a simple script targeting known pages and let it loose on the internet. The robot crawls around infecting sites as it finds them, and those infected sites then infect their users by extension. An old concept, but pretty neat in a distorted sort of way.
There are some questions to be answered here. Why was such a gaping hole allowed to remain in PHP for 8 years? Why was it publicized before it could be plugged? Why do some sites still use CGI? After all, this vulnerability didn't affect PHP run as a module. Those are good questions, and there are plenty of discussions about them online, so no need to rehash them here.
What might be useful is explaining what I did to plug this hole on my site. Stay tuned …
August 25, 2011
Sometimes I'm so tempted to do this: Block China Web Traffic IP Addresses and Chinese Hackers.
Of course if everyone blocked everyone else indiscriminately that would go against the spirit of the Internet.
What's needed is for the ISPs to get off their lazy and greedy butts and block attacks at their sources.
Certainly a bunch of zombies (unwitting users with infected machines) will be caught in the dragnet too, but they can be contacted and urged to clean up their machines before they're allowed back on.
It'll be good for us, it'll be good for them, it'll be good for the Internet.
August 14, 2011
To the couple of visitors of this website, I'm sorry for the 2-day outage earlier this week. It was a DDOS (distributed denial of service) attack and I never found out who was behind it and why.
The problem started in the early morning hours with an outage alert from the remote monitoring service. The site was down and the server wasn't even responding to SSH login. Jumping directly on the server, I could already tell something was wrong by the loud sound of the fan. Indeed the load was in the 40's when it usually hovers around 0.25 and inbound traffic utilization was at saturation levels.
Having been wrong before when blaming server issues on attacks, I did what every server admin does at the first sign of trouble: reboot. No dice; the server load soon went sky-high again. So I blocked outside connections to Apache and started running some simple tests to check the server's health. CPU, RAM and IO checked out fine under some local test load. No, this was something else. The logs finally revealed the problem:
-- possible SYN flooding on port 80. Sending cookies.
Looking at the connections (using netstat), there were hundreds of SYN_RECV records hanging around from various IP's. Obviously the server was under a SYN flood DDOS attack. Using iptables to block the offending IP's was no help. Most likely the IP addresses were spoofed, and combating them was like fighting a tidal wave.
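Tallying the half-open connections per source makes the flood obvious. Here's the pipeline run against a small made-up snapshot in `netstat -ant` format (the real table held hundreds of entries); the "Sending cookies" in the log refers to SYN cookies, toggled via the net.ipv4.tcp_syncookies sysctl:

```shell
# a few sample rows in `netstat -ant` format (proto recv-q send-q local foreign state)
cat > /tmp/conns.txt <<'EOF'
tcp 0 0 10.0.0.5:80 203.0.113.9:51234 SYN_RECV
tcp 0 0 10.0.0.5:80 203.0.113.9:51235 SYN_RECV
tcp 0 0 10.0.0.5:80 198.51.100.7:40000 SYN_RECV
tcp 0 0 10.0.0.5:80 192.0.2.44:52000 ESTABLISHED
EOF

# count SYN_RECV entries per remote IP, busiest sources first
awk '$6 == "SYN_RECV" {split($5, a, ":"); print a[1]}' /tmp/conns.txt |
    sort | uniq -c | sort -rn

# check whether SYN cookies are on (the kernel was already sending them here):
#   sysctl net.ipv4.tcp_syncookies
```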
The attack continued throughout the day with no relief, and finally in the evening I contacted my ISP to see if they could rescue me. I didn't have much hope, but I almost lost it when the technician asked: "Huh? You have a sink flow attack? Could you spell that?" So much for tech support.
My best option was to lay low, take the abuse and hope the attacker(s) would get bored and move on. And that's exactly what they did. Almost as fast as it started, the attack stopped in the wee hours of the second day and I could finally bring the server back online.
Moral of the story: DDOS attacks are tough to combat even for big shops. Small guys like me don't stand a chance against them. The best solution is to wait them out and hope the attacker moves on. Small sites also aren't lucrative enough to get expert support from their ISP's. The best that can be hoped for is to ask the ISP for a new set of IP's, and even then there's no certainty that'll stop the attackers.
As for the attacker(s) and their intent, it remains a mystery. Perhaps it was a script kiddie rolling through a bunch of victim hosts, someone testing an attack platform or algorithm, or a mistake in specifying a domain or IP in the attack vector. This site is just too small for bragging rights or boosting egos. There are much tastier targets out there for attackers to prove their expertise and flaunt their skills. Then again, why use your smarts to attack sites instead of doing something constructive?
June 6, 2011
My admiration to Google for standing up for what's right. Even in the face of Chinese retaliation, Google has gone public with the revelation that the hacking activity of Gmail accounts had a Chinese connection.
The allegations are as yet uncorroborated, but Google deserves much credit for standing up to China when there's evidence of wrongdoing.
We'll see how far Google is willing to go on this issue before it permanently damages its prospects in China. But for me, Google's stance has elevated its stature and image.
China paper warns Google may pay price for hacking claims - Technology & science - Security - msnbc.com.
December 14, 2010
Today, out of curiosity, I downloaded the hacked Gawker files from The Pirate Bay. I'm not sure if I broke any laws by doing that, but I was only interested in checking out their PHP source files. You can learn a lot by looking at production code other than your own.
While my intentions were harmless, I'm sure many others downloaded the files for more sinister purposes. I was blown away by the size and scope of the membership file dumps. There are thousands and thousands of records of login names, passwords and emails. One of the first things the bad guys will do is try breaking into the members' bank, email, Facebook, Twitter, Amazon, and eBay accounts, since many people tend to use the same password everywhere online.
I hope people change their passwords quickly enough to mitigate the damage from the criminals, but one kind of damage will be hard to contain: the sheer number of valid emails that spammers will promptly exploit.
Granted, most emails appear to mysteriously land in spammers' databases almost as soon as they're created. Nevertheless, even those users who guard their emails tooth and nail had better be ready. If they had a Gawker account, they will be getting valuable offers from a number of spammers real soon.
May 16, 2010
The email came at night, but it wasn't completely unexpected. In a terse missive, Amazon accused me of violating their Terms of Service (TOS) and terminated my account. Reasons given: copying pages and links to other sites and search engines. In other words, spamming other sites with specific Amazon links tagged with my ID to collect commissions.
I have operated my two sites (hashemian.com and padfly.com) for over a decade with a couple of different associate and affiliate programs. I probably have too many ads on my pages, but I have been careful to stay on the ethical and moral side of the fence. Fairness and respect for my visitors have always trumped making a quick buck, or a large sum for that matter. A good reputation is worth way more to me than money.
I have never copied a page nor parts nor links containing my Amazon account data anywhere outside of my own sites - never, not even once. There have also been no schemes to push any links onto search engines. My sites are crawled and pages are indexed normally by search engines. But Amazon simply accused me of being unethical and took punitive steps.
So how did I know that I'd be receiving a termination notice from Amazon at some point? This past Christmas season there was a marked increase in sales and therefore higher commissions in my Amazon account. I attributed that to the season, luck and some validation after years of being online. As the months rolled on, the sales stayed strong, and I became certain that Amazon would not be pleased and would eventually pull the plug.
For a long time I have suspected that Amazon disapproves of any associate who wields too much selling power. Such an associate can materially influence sales numbers, and that's not welcome news to Amazon. So Amazon has created a clever TOS for its associates program that allows it to terminate anyone at any time. Why even have a TOS when the program is free? It protects Amazon against possible lawsuits, such as those over discriminatory practices. The TOS rules are nitpicky enough that at no time are any of its associates in complete compliance. One link appearing on another site is enough to violate the TOS. I'm certain I was in violation from day one, but it took them 6 years to suspend my account.
As long as associates make paltry earnings from the program, Amazon is willing to let the violations slide. But when an associate surpasses certain figures, a quick notice of TOS violation is given and the associate is terminated. No one but Amazon knows what those figures are or how they are applied, but they do exist and they are applied. And that's how I was terminated from Amazon associates.
The most damaging part of the notice to me was the accusation of being unethical, a simple and cold assault on my reputation. Now, I realize that no one cares about my situation and people would just dismiss this as another scammer's rant. I don't mind. People don't know me, so why should they believe my story?
But people should at least believe this part. As part of my account termination, Amazon also seized all commissions earned. They would also continue to keep future commissions from any sales related to my links. It's not much money, but if these were indeed ill-gotten gains, then a responsible company and an ethical corporate citizen would not keep them, nor would it keep any profits from the sales. It would at the least donate them to a good cause: a charity fighting hunger and poverty, educational programs for underprivileged children, or organizations combating diseases such as cancer. Instead, Amazon simply and silently pockets the money for itself.
If cops busted a suspected drug dealer, would it be right for them to kill him and pocket whatever money they found on him? Would it be OK if they sold the rest of his stash on the streets and kept the profits? It's an exaggerated comparison, but I don't think that would be right. I don't know, maybe I have a warped perception of ethics.
March 25, 2009
A couple of years ago a few sites started collecting answers to a few personal questions. The idea was to strengthen security by integrating personal questions into the authentication process. It would also help unlock accounts in case users forgot their passwords. After all, the questions were private enough that only the account owner would know the answers.
Nowadays it seems like every site is requesting personal and private information as a means of beefing up security. But I wonder if the security proposition is still valid.
You've seen these questions before:
- What is your mother's maiden name?
- What is your favorite pet name?
- What street did you grow up on?
- What was the name of your elementary school?
- What city were you born in?
- What was your first car model?
With so many sites storing so much personal information about you, are your privacy and security still assured? What guarantees do you really have that these responses will remain private and out of reach of prying eyes? Who knows what kinds of people have access to these responses? Are the responses encrypted? Are they shielded from the companies' personnel? Are they safe from hackers and snoops? Besides, how secure can these responses be when so many people choose to reveal personal information on their blogs, forums, or Facebook accounts?
Most likely these responses are given less protection than login names and passwords, as they are generally the second line of defense in authenticating users. Once site operators have access to these private responses, it wouldn't be too difficult for one bad apple to use them to gain access to your other accounts. Some guesswork and social engineering is involved, but since when has that stopped determined account thieves?
Maybe I'm just too paranoid, but it seems to me that the enhanced security gained through personal responses is just an illusion, and the convenience of password recovery is not worth the risk. In fact, it may be worse than the traditional login and password alone. At least then you are not giving away personal details about your life to some faceless site, nor will your accounts be compromised on the basis of a few answers that may be easily obtained on Google.
© 2001-2013 Robert Hashemian