Amazon and salesforce.com outages grabbed headlines this week. Google's blueprint in Harper's for its massive new data center in Oregon will fuel even more scrutiny of SaaS and utility computing and the SLAs associated with them.
The general assumption I find among us bloggers is that these providers have to be up 99.9999% of the time, and that it is a major crisis when any of them stumbles - because supposedly that is what corporate America has always gotten and continues to get.
Who started that myth? Other than for some sensitive and global apps, there is plenty of downtime in corporate data centers and in those of big outsourcers like IBM and EDS.
A regular year - unlike a leap year like this one - has 525,600 minutes. To meet 99.99% uptime, a system can be down only about 52 minutes - less than an hour - for the entire year. I can tell you most corporate data centers have scheduled downtimes which exceed that every month, if not every week.
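To put those numbers in context, here is a quick back-of-the-envelope sketch in Python - nothing assumed beyond a 365-day year - showing the downtime budget at each level of "nines":

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

    def downtime_budget(uptime_pct):
        # Minutes the system may be down per year at a given uptime percentage
        return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

    for sla in (90.0, 99.0, 99.9, 99.99, 99.9999):
        print("%8.4f%% uptime -> %10.1f minutes of downtime a year"
              % (sla, downtime_budget(sla)))

At 99.99% the budget is about 52 minutes a year; at the 99.9999% bloggers toss around, it is barely half a minute.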
Few companies have data centers which need to service users across more than 7-8 time zones. One based in Switzerland could service most of Western and Eastern Europe; one based in Singapore would cover a swath from India to New Zealand - if their operations are even that geographically diverse. The majority find 16 hours of weekday coverage and 8 on weekends more than adequate. That works out to 96 of 168 hours a week, so with all 525,600 minutes as the denominator, 90% uptime is good enough for many.
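The coverage math is just as simple - a small sketch, assuming the 16-hours-weekday plus 8-hours-weekend window above:

    HOURS_PER_WEEK = 7 * 24  # 168 hours in a week
    coverage = 16 * 5 + 8 * 2  # 96 hours of required weekly coverage
    print("Required coverage: %d of %d hours = %.0f%% of the week"
          % (coverage, HOURS_PER_WEEK, 100.0 * coverage / HOURS_PER_WEEK))

Measured against all 525,600 minutes, even 90% availability leaves a comfortable margin over the roughly 57% of minutes users actually need the system.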
Of course, some applications are global and some critical ones need to be up constantly. But those are a small percentage of the overall portfolio, and folks like IBM and CSC charge a king's ransom to support them.
It's nice to push Amazon and Google and others for high availability...but the big kahunas in the corporate world don't get it consistently, even with their large budgets...