December 8, 2012
Recently someone asked me if I had received their text message. I am a Google Voice user and had not received it. Worse yet, the person hadn't received a bounce either, so apparently the text had just vanished without a trace.
It turns out that the person had attached a photo to the text message, which changes the format from SMS to MMS, and Google Voice isn't very amenable to MMS.
Investigating the matter further, I found this post from over a year ago outlining Google's roadmap for supporting MMS universally. While I verified that MMS support sort of exists for Sprint phones, MMS messages from other carriers' subscribers continue to get dropped.
Hey Google, it's been long enough. What's the hangup?
November 30, 2012
I know it's sacrilegious for some to disable a security feature on a platform, but SELinux (Security-Enhanced Linux) has left me no choice but to do exactly that.
SELinux was added to Linux to give it additional security measures beyond what it inherited from Unix. By default, many Linux distros such as Fedora ship with SELinux built into their kernels and enabled upon install.
The issue is that SELinux can be so restrictive and obsessive about curbing malicious activity that it can also hinder normal operations, leading to server stress or errors. Having been bitten by SELinux multiple times, I have vowed to deactivate it every time I install Linux on a host. The one time I forgot to disable it, the Varnish server I had set up for my company nearly died, taking the company's web site along for the ride. Looking inside the messages file, this arcane message is what I saw in prodigious numbers:
setroubleshoot: SELinux is preventing irqbalance from mmap_zero access on the memprotect Unknown. For complete SELinux messages. run sealert -l efce…
I know the security sticklers would accuse me of not setting up SELinux correctly, and for the record, SELinux is very configurable. But my favorite setting for SELinux is disabling it in the /etc/selinux/config file by setting SELINUX=disabled.
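For what it's worth, the change is a one-liner. The sketch below operates on a stand-in copy of the file under /tmp so it can run anywhere; on a real host you would edit /etc/selinux/config itself, and the change takes effect at the next reboot:

```shell
# Demo against a stand-in file; on a real host, edit /etc/selinux/config.
cat > /tmp/selinux-config <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Flip enforcing -> disabled (permanent after the next reboot).
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config

# For immediate relief on a live host (permissive until reboot): setenforce 0
grep '^SELINUX=' /tmp/selinux-config
```

On a running system, `setenforce 0` only switches to permissive mode; the config-file edit is what keeps SELinux off across reboots.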
I have neither the time nor the inclination to learn SELinux's every minutia, which may or may not protect my hosts completely anyway. Old-fashioned file permissions, file ownership, suexec, sudo, suid, running daemons with least privilege, and a good dose of firewalling are good enough for me. Feel free to disagree.
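The old-fashioned approach boils down to small, explicit steps like the sketch below (the paths are hypothetical, and the demo runs unprivileged under /tmp; on a real host the docroot would belong to a dedicated low-privilege user rather than root):

```shell
# Minimal sketch of classic permission hygiene (hypothetical paths).
install -d -m 755 /tmp/demo-docroot          # docroot: world-traversable
echo '<html></html>' > /tmp/demo-docroot/index.html
chmod 644 /tmp/demo-docroot/index.html       # owner-writable, world-readable only
stat -c '%a' /tmp/demo-docroot/index.html    # show the resulting mode
```

Nothing exotic: each file is readable by the web server but writable only by its owner, which covers a surprising share of what SELinux would otherwise police.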
November 18, 2012
Two weeks ago, when nature was flexing its muscle in the northeastern USA in the form of hurricane Sandy, I was away from home on a business trip. When the alert email from my monitoring company arrived in my inbox informing me of the web server outage, I wasn't surprised. Power had been lost, the UPS had run out of juice and the server had gone silent.
But with the return of power, the server (which was set up to spring back to life) didn't come back, and with me awaiting a return flight, the outage went on a few more days. Diagnosing the server after returning home turned out to be a futile exercise. There were no hints as to why the server had failed to properly boot. So I just powered it up, repaired a few corrupt databases and thought that was the end of it.
But the server wasn't its old self; it kept crashing with ever-increasing frequency. Eventually I decided that the server had suffered critical, yet unidentified, damage to its hardware, and last weekend I reluctantly replaced it with a newer box. Restoring a server is no walk in the park.
Over a year ago, when I decided to host this site on my own server, I knew the risks of self-hosting. The storm and the ensuing issues certainly proved some of those risks: lost traffic, user inconvenience, and a drop in Google ranking. A more robust setup might have averted some of that, but this website isn't at the point to justify that level of operation. Still, I don't regret self-hosting. A hosted service can never match this level of control, and in the end it may not be that much more reliable either.
October 21, 2012
October 16, 2012
Can't say I'm shocked to get the news of Citibank's Vikram Pandit quitting (Citigroup CEO Vikram Pandit resigns | Reuters). Evidently Citibank's online banking has been an absolute disaster recently, with numerous outages, slow response times, and unscheduled downtime.
Telephone customer service and tech support have also been horrendous during these outages. Customers were told that the site would be back online shortly, only to have no access for entire days. Understandably, the service reps are frustrated having to deal with irate customers, and they have, in turn, become rude and abusive to the callers.
The CEO's abrupt resignation may not have been directly related to Citibank's online problems, but such prolonged issues are manifestations that the bank is in a state of chaos and disarray and that no one is minding the store. That's a shame, as Citibank was once a decent bank to do business with.
October 10, 2012
I was so happy when 101.9 in New York finally switched from the news format to alternative rock a few months ago. I only listen to that station and NPR in my car.
Now CBS has bought the station and will be simulcasting vapid sports talk with their AM station. Is there really a need for more boring talk radio, and sports talk of all things? In New York, Another Setback for Rock Radio.
There's plenty of that on the AM dial, but no good alternative rock stations around, other than perhaps 104.1 WMRQ in Hartford, CT and that's too far.
I sure will miss the music on 101.9 and to the guys that made it happen, if only for a short period, thank you.
October 8, 2012
I used to think that the Internet was the great equalizer in the business world. A small guy with programming skills and a big drive sets up a new site and offers a novel service. The service goes viral and the small guy becomes a small company and builds and expands his way to success. The small guy pulls off an IPO or gets acquired and retires to the tropics. It's a happy ending that some have indeed experienced.
But what I have learned is that without some early connections and some cash infusion the small guy can quickly and quietly wither away, no matter how much effort he puts into his novel idea and no matter how many users he attracts. He's destined for a quick failure unless he gets some serious support behind him and fast.
How do I know this? Having operated this very site for some 12 years has given me plenty of lessons to that end. I operate this site as a hobby from the corner of my condo. While the free utilities offered here have a decent number of users, who I assume find them useful, and while I never looked to this site as a means of financial success, this site is in fact too small to succeed. Take these cases:
- For a number of years this site was hosted on various web hosting services such as 1&1, and every few months there was a warning that I'd be kicked off the service because the site was exceeding usage quotas. So, like a nomad, I kept moving the site from one hosting company to another. A financially secure company would have had no issues paying for more resources.
- A couple of years ago Amazon Associates (an affiliate network) I was using for this site accused me of cheating and shut down my account, depriving the site of a small stream of revenue. According to Amazon, I had published URLs with my associate account on other sites, violating their terms of service. URLs had in fact been copied to other sites, but not by me. Page-scraping and content-stealing robots had done that. A large site most likely would never have been suspended. In my case, my appeals of innocence fell on deaf ears at Amazon.
- A few years ago I operated a URL shortening service much like tinyurl and bitly. One day a spammer used the links in a widespread spamming operation, and suddenly the domain registrar, GoDaddy, cut off the domain registration, claiming that it was spamvertised. It took over two months to convince GoDaddy of my innocence and get the domain back. I shut the service down promptly. This would never have happened to bit.ly or goo.gl.
- Recently a service on this site fell victim to a Nigerian phishing operation to collect bank information from unsuspecting victims. For days my ISP hounded me about this, nearly cutting off my services. That would have never happened to a customer with deep pockets, but I ended up discontinuing the service to guard against possible service termination or potential legal consequences.
- The latest headache came in the form of a DDoS, paralyzing this site. An outside site using one of the widget services from this site came under attack and the attack spilled over to this site causing capacity issues. I had to resort to all sorts of traffic blocking filters to partially mitigate the effects. This would have been a non-event for a larger site, but for this site it meant lengthy periods of slow performance and outages.
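The traffic-blocking filters from that last episode were roughly of this shape. This is an illustrative iptables sketch, not the exact rules I used; the chain name, port, and rate numbers are made up for the example, and the commands require root on the server:

```shell
# Illustrative rate-limiting of new HTTP connections (numbers are arbitrary).
iptables -N WIDGET_LIMIT
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j WIDGET_LIMIT
iptables -A WIDGET_LIMIT -m limit --limit 25/second --limit-burst 100 -j ACCEPT
iptables -A WIDGET_LIMIT -j DROP
```

Rate limiting like this blunts a flood without taking the site fully offline, though legitimate users get caught in the drop bucket during heavy bursts, which is why the mitigation was only partial.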
The Internet, a great equalizer? Hardly. Great ideas can only go so far, and without serious financial backing they are destined for failure and eventual oblivion. I can't imagine how many great innovations have died premature deaths without that all-important cash infusion.
September 15, 2012
Got an email of apology from GoDaddy for the outage they had earlier this week. It sure was a real pain for many and no doubt many lost business over it.
The hosting customers received a one-month credit for their trouble. The rest of us, who have a domain or two with GoDaddy, only got the apology email, and it was laying it on pretty thick:
We let you down and we know it. We take our responsibilities — and the trust you place in us — very seriously. I cannot express how sorry I am to those of you who were inconvenienced.
Ok, fine, he's sorry and traumatized. Now how about extending the domains for a year to go along with the words?
September 5, 2012
August 13, 2012
After being hit by the WordPress base64 hack twice within a couple of weeks, it finally dawned on me that the PHP CGI flaw was the culprit. Attack robots (a la Metasploit) use knowledge of the PHP CGI flaw together with the well-known scripts of popular products (WordPress, Joomla, Drupal, etc.) to penetrate sites, and that's how this site was breached as well.
Most sites run PHP as a module, so they were spared the headache. I used to run PHP as a module many moons ago but chose to move to CGI for several reasons. Here's an explanation of the difference between the two modes. What's disconcerting is that the CGI vulnerability had been around for some 8 years and no one seemed to have noticed during all that time, and it was my PHP version 5.3.5 that did me in. It was time to move to version 5.3.14. But where to find that php-cgi version for my Fedora install?
I went hunting for php-cgi 5.3.14 all over the net, from user websites to rpm repositories, but no dice for my Fedora version. Sometimes you can use an off version, but then you run the risk of crashes, library mismatches and other problems. So why not download the source from php.net and build it myself? Simple: I'm too lazy and hate building/rebuilding programs. There's the big download, the long waits, the missing libraries, the warnings and errors. I had just come out of a build hell with Varnish Cache, and I wasn't about to plunge myself into another. But in the end I had no choice.
The Linux build world is a 3-word expression: configure, make, 'make install'. I'll add another word: loop. You download the source code and run configure to adjust the config and make files for your particular platform. Then you run make to build the program(s). Finally, you run 'make install' to install the program(s) on your system. Sounds easy enough, until you realize that a million things can and will go wrong, and that's where 'loop' comes in, as you configure and make repeatedly to get things right.
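The loop looks roughly like this for PHP 5.3.14. The configure flags here are illustrative only; the real set depends on which extensions the site needs and which -devel libraries are installed, and chasing those down is exactly where the looping happens:

```shell
# The canonical build loop, sketched for PHP 5.3.14 (flags are illustrative).
tar xjf php-5.3.14.tar.bz2
cd php-5.3.14
./configure --prefix=/usr/local/php53 --with-mysql --with-zlib
make            # an error here usually means a missing -devel package:
                # install it and loop back to configure

# 'make install' would shadow the distro-packaged PHP; the CGI binary
# alone ends up at:
#   sapi/cgi/php-cgi
```

Skipping 'make install' and plucking out just the one binary is what leads to the fragmented-install trade-off described below.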
Building PHP 5.3.14 didn't disappoint. I ended up running configure and make tens of times until I finally got the binary I wanted with all the necessary libraries included. Building PHP with make renders a number of binaries, including the main command-line program (the interpreter) as well as php-cgi, and it was the latter I was interested in.
I copied the newly-built php-cgi over the old version, tested the site and called it a day. I know this sort of installation is a cardinal sin to many techies; no argument there. Think about it: I am running a php-cgi version that is out of sync with the main PHP interpreter on the same machine, and rpm (the package manager) has no clue about which version of PHP is really installed on the system. At the first rpm update, the whole system could come crashing down. I just have to be cognizant of the fact that I have a fragmented PHP installation. Eventually, when I migrate the site to a new Fedora version, all will be back to good. For now, it's good enough to have been hack-free for a couple of months.
© 2001-2013 Robert Hashemian