Overnight, around 4:40 AM EST, I received a call from a co-worker explaining that the damnedest, strangest thing had happened: we had lost connectivity to the entire data center.  Not just a particular network segment being unresponsive, not just high latency from a switch upgrade; no, a total outage.  Thankfully, before I settled on driving down to the data center at 5 AM [and consequently getting stuck in Atlanta “rush hour” traffic for an hour to cover the 2 miles back], our pings began receiving replies.  Everything’s up, right? Wrong.

Recall that yesterday all of the servers were taken down to replace the big, bulky, integrated network cards with dinky, albeit still expensive, D-Link PCIe cards purchased over the weekend.  My policy with kernel upgrades is to run the new kernel once for a week.  If there is a system lockup, the server automatically reboots and we are back on the old kernel.  Usually this works, unless the older kernel doesn’t have the driver for the new network card built in. Oops.  Good timing nevertheless.
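
For the curious, there is no magic behind that policy: the known-good kernel stays the permanent boot default, the new kernel only gets a one-shot boot entry, and a panic timeout (or a hardware watchdog) reboots a locked-up box so it lands back on the old kernel by itself.  Here is a rough sketch of how that can be wired up; it assumes GRUB 2 with GRUB_DEFAULT=saved, and the menu entry title is purely a placeholder:

    #!/usr/bin/env python3
    # Sketch of the one-week "trial boot" policy described above.
    # Assumptions (not from the post): GRUB 2 with GRUB_DEFAULT=saved, a menu
    # entry titled as below, and panic=30 on the kernel command line (or a
    # hardware watchdog) so a lockup reboots the machine unattended.
    import subprocess

    NEW_KERNEL_ENTRY = "Linux 2.6.24 (test)"  # hypothetical GRUB menu entry title

    def trial_boot(entry: str) -> None:
        # Boot the new kernel exactly once; the saved default still points at
        # the old, known-good kernel, so any unattended reboot falls back to it.
        subprocess.run(["grub-reboot", entry], check=True)
        subprocess.run(["reboot"], check=True)

    if __name__ == "__main__":
        trial_boot(NEW_KERNEL_ENTRY)

Last night’s gotcha, of course, is that the fallback only helps if the old kernel can still drive the hardware in the box; a one-shot default is useless when the known-good kernel has no driver for the freshly installed NIC.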

We spent the remaining 30 minutes logging into each server through the on-board VNC console to manually reboot and bring up the new kernel again.  I always knew the DRACs would come in handy.  Beyond the reboot there were small, server-specific fix-ups to do on each one, chiefly bringing the primary and secondary DNS servers back up.  I can’t say when things returned to normal, because at that hour everything was a blur, with me mostly foaming at the mouth.
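
For anyone stuck doing the same dance: the DRACs also speak racadm over the management network, so the power cycle can be scripted from a single workstation instead of clicking through a console session per box.  A minimal sketch, assuming the racadm CLI is installed locally and using placeholder addresses and credentials:

    #!/usr/bin/env python3
    # Sketch of an out-of-band mass power cycle through Dell DRACs.
    # Assumptions (not from the post): the racadm CLI is installed locally,
    # the DRACs are reachable over a management network, and the addresses
    # and credentials below are placeholders, not real ones.
    import subprocess

    DRACS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]  # hypothetical DRAC IPs
    USER, PASSWORD = "root", "changeme"                 # placeholder credentials

    def power_cycle(drac_ip: str) -> bool:
        # Ask the DRAC to hard power-cycle its host; this works even when the
        # OS on the box is wedged or the production network is unreachable.
        result = subprocess.run(
            ["racadm", "-r", drac_ip, "-u", USER, "-p", PASSWORD,
             "serveraction", "powercycle"],
            capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == "__main__":
        for ip in DRACS:
            print(ip, "ok" if power_cycle(ip) else "FAILED")

A hard power cycle is blunter than the console reboot we actually did, so it is best kept for boxes that cannot be reached any other way.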

“What the hell happened?”  Those are your exact thoughts, right?  Mine too, along with the thousands of other customers at Gnax.  I can say for sure it was a power outage; apparently something internal to the power distribution scheme, downstream of the backup generators.  Everyone lost power.  Every single server.  All 5,000.  Yes, that is what we call a major screw-up.   I don’t know when we’ll get a clear story on the cause, because all of the staff, owner included, are scrambling to reboot servers, fsck filesystems, replace access switches (yes, the power outage knocked one of them out), and answer the torrent of tickets and phone calls streaming in… and we’re all wondering: what the hell just happened, Gnax?

To say I’m disappointed doesn’t scratch the surface of how I feel about the entire situation.  Between this event and the inbound GigE links that perpetually suck (PCCWBTN and Global Crossing specifically), I am beginning to question the design competence behind this data center.  Here’s hoping this isn’t a sign of things to come.  Reboots are a pain, and cleaning up after an outage is a nightmare.

Once I know for certain what happened I’ll update this entry.  No idea when that will be, though, since the people who have an answer are busy regretting their jobs right now.

* Gnax thread (needs registration) – “What the hell just happened?”
* WHT thread (publicly accessible) – “GNAX DOWN!?”

Update: 1:00 PM EST (GMT-0500)
Straight from the horse’s mouth:

At approximately 4:45 AM EST the NAP suffered a power outage from Georgia Power lasting approximately 10 seconds.

The generators fired and came online 15 seconds after the initial outage, and the load was transferred to them. They ran for 30 minutes while we monitored the incoming power quality from Georgia Power, at which point the load was transferred back to utility.

One of the UPSes that serves part of the facility suffered a battery failure on 2 different redundant strings, which caused it to drop the load. We installed a second redundant string approximately 9 months ago to minimize the possibility of this type of situation. The batteries in the 2 strings are set up in parallel, meaning each string is capable of carrying the full load for up to 5 minutes.

All it takes is 1 battery in a string to fail for the entire string to fail. This is the same in all UPS systems, and it is the reason we installed the second string, on the manufacturer's advice.

The original string's batteries are 1.5 years old and were installed new. The second string is 9 months old and was installed new.

A single battery in the second string failed after 3 batteries in the first string failed.

We turned the generators back on to avoid an interruption during troubleshooting and maintenance, and MGE sent a tech onsite within an hour to troubleshoot, at which point we discovered the battery issue. We replaced the batteries within an hour of diagnosis and brought the system back online and out of maintenance bypass.

The load is currently protected and all batteries have been tested again.

Both sets of batteries have been maintained and tested by MGE direct service every 6 months under a preventive maintenance (PM) plan that they recommended for proper maintenance and operation.

It was extremely rare and unforeseen to have something like this happen.

We are purchasing our own battery tester and will set up a monthly PM on the batteries that we will conduct ourselves, in addition to the 6-month PM that MGE does on the UPS as well as the batteries. We are also researching a real-time battery monitoring system that can predict battery failure.

Batteries are the weakest link in the system, and we feel we properly followed recommended engineering and maintenance practices on these systems. However, that does not assure 100%, as we found out today in a very rare incident.

Other unplanned events that continued to affect service during the outage:
One of the main Metro Ethernet switches that carries the links of our backbone went offline during the outage, and during that power-induced reboot we lost connectivity to half of our backbones. We have our backbones split in half, with half going out the east side and half out the west side of the building, taking diverse paths across redundant switches to the final interconnect points.
The switch was unstable when it came back online due to a GBIC that died, and for some odd reason it rebooted itself several times, roughly every 10 minutes. We replaced the GBIC with a spare we keep onsite.

This caused half the backbones to go up and down and placed a large CPU load on our various core routers due to the BGP table loads going on. This is very CPU-intensive, and when there is a lot of up and down it can appear that the network is completely down (it is, if you are on a link that is flapping), but in fact the entire network was not down, only impacted. This settled down once the switch was stabilized.

We split our backbones up over several different redundant backbone routers.

Once this switch was brought back online and stabilized, the network stabilized as well.

An access switch that serves 16 servers also died, and we replaced it with a spare once we found the issue. We keep spares on site for every piece of network gear we have.

An APC PDU that was only 6 months old, dual-fed from 2 different power sources (including the newer UPS), failed and did not come back. We replaced it with an onsite spare. It was bizarre to say the least, and of course it powered one of our 3 main DNS clusters, so we lost DNS capacity for an hour.

Most of the issues currently going on are related to server hardware that did not do well in a power-reboot situation or needs an fsck. We are actively working on them and will not rest until all is well.

Many customers in the facility do have A and B feeds from our power. We offer this through different UPS systems, different power panels, and different transformers. Some very early customers that purchased A and B feeds when we only had one UPS system at the NAP are on the same UPS and as such lost power. Those customers will be offered a free move of their B feed to the newer UPS to increase their power diversity; they simply need to open a ticket.

What are we doing on power in the future?

We have another UPS from MGE on order as of 4 weeks ago that is due to be delivered in mid-February and will increase the diversity of the power in the facility. We plan on having 2 battery strings on it as well.

We are in the process of installing another set of 5 Cummins generators and another 3000-amp transformer, which will further diversify our generator and transformer plant. This will be completed in mid-February; construction is underway now, and we took delivery of the switchgear and generators 2 weeks ago. 4 UPSes will be moved to the new power feed and generators to diversify the power source to the UPS plant. This will give us 100% redundancy on the A/B feeds at that point.

We installed a redundant B feed to our Metro Ethernet gear and 2 dual-fed APC PDUs at our TELX cabinet after TELX suffered a complete UPS failure at 56 Marietta 4 months ago. This turned out to be good, because there was another complete failure of their B UPS 4 weeks ago, but we were not affected since we had a redundant feed from them. That outage affected all customers on the second floor. We would have lost more than 50% of our network had we not been on dual-fed APCs and dual power feeds at the building, which would have been bad.

We are increasing the battery PM schedule from every 6 months to monthly.

We are researching a battery monitoring system for the strings.

We will be taking a fuel delivery this week to restock our main fuel supply.

We are examining in depth the abnormalities on one of our 4 core metro switches this morning, and if we do not find an RFO (reason for outage) from the manufacturer we will look at replacing it or upgrading to a different, more robust solution, which has been in our long-term plan but may get moved up.

We will be doing another power examination of our core switching routers (currently 6 of them, all with dual-fed power) and our core Metro Ethernet switches (currently 4 of them) to make sure that our power feeds are truly redundant and that no legacy circuits are there to affect them.

We will be examining our on-site spares inventory to make sure we are still at correct levels, since we used some items this morning.

We apologize for the outage caused by the failure of the primary and backup batteries, and we will continue to provide the best service at an excellent price.
The MGE tech who handles all of the major accounts in Atlanta, including Coke and several others, told us that this was a very freak occurrence with negligible odds of happening; in his opinion we have done everything right on our maintenance, PM, and battery redundancy, he would have done the same thing, and there was really nothing he would have recommended differently at this point.

We are still going to make the changes I mentioned above, though.

As indicated on their forum, there are still hundreds of servers down waiting on direct intervention from an on-site tech 8 hours later.  Thank goodness for the DRACs: we had all of our servers back up within the first 30 minutes.

7 thoughts on “Power Outage at Gnax”

  • January 9, 2008 at 10:54 pm GMT-0500

    What I do find impressive is that they actually seem to know what happened. At my “day job” they’ve had some pretty serious issues (thankfully, nothing like this), and there tends to be a lot of confusion for a couple of days, part of which, unfortunately, seems to be caused by folks who are trying to spin things so that it doesn’t sound like it’s their fault.

    When my company moved to (their own) new data center this year, one of the things they did was go to out-of-band management for every single piece of gear in there. In the racks, they have dual-power (A and B feed, as GNAX described) intelligent PDUs that are both on separate out-of-band networks solely for management. Of course, nothing guarantees something bizarre couldn’t occur there, but then again, our data center is purely for in-house use, and we’ve always got something like a half-dozen spares (or maybe more) in the shop “just in case”.

    I am going to take the GNAX explanation and run it by a couple of people here with the question, “Could this possibly happen to us?” I’m not responsible for the data center itself, but I am responsible for several key infrastructure components that enable our workforce to do their jobs. As a big “customer” of our data center, I want to know if I’ve got anything to worry about.

    Thanks for posting that, Matt.

  • Pingback: Apis Networks Community Updates » Data center power outage/kernel upgrade tonight

  • Pingback: Royal Pingdom » Mother Nature’s assault on electricity and the Internet

  • October 17, 2008 at 11:05 am GMT-0500

    I feel compelled to correct some misinformation in the original post.

    The outage affected 1 UPS. We have many UPSes in the facility, so while there was an outage it did not affect all of the servers in the DC, only a portion of them. The original post states that all of the servers lost power, and that is factually incorrect.

  • October 17, 2008 at 11:24 am GMT-0500

    The outage affected a significant number of systems at Gnax’s facility, judging from the 11 pages of (fruitless) discussion. Although there are certain unavoidable, extraordinary circumstances, common issues such as electrical strikes should be manageable. Our servers were housed at EV1 from 2002 until early 2007 and the experience was flawless. The worst things we saw were infrequent network blips… and that includes Hurricane Rita’s landfall.

    Jeff, I understand it is your position to protect the reputation of Gnax; however, I have a responsibility of transparency to our customers. Until you adopt a continuous-improvement methodology, we will continue to experience the occasional outage at your facility. And until that happens, I will continue to post these outages on the blog. I wouldn’t be in a position to assume the worst if the track record with Gnax were impeccable, but, sadly, it is not.

  • October 17, 2008 at 9:22 pm GMT-0500

    I think your post is quite off base; we are in a process of continuous improvement. Have you been by to see the facility, or asked me about what we have done since your original post? I suspect not, since you posted something that is completely wrong and without merit. If you really want to be transparent with your customers, then I would suggest you ask me for comment and information before posting statements that are simply incorrect. We appreciate your business, value you as a client, and want the correct information to be out there.

    When was the last outage you had, how many have you had, and what was their nature? Your statement that you will continue to experience occasional outages here is also, in my opinion, completely off base and without merit, and you certainly don’t have the facts.

    Please let me know what services you have with us. If it is colo, do you have redundant power and redundant network drops? If it is dedicated, we will be releasing a 100% uptime server in a week, and I would highly encourage you to purchase it if avoiding downtime is important to you. It is redundant to the point that we guarantee it against most outages, no questions asked, with a full month’s credit for ANY downtime. I would not do this if I did not have full confidence in our infrastructure.

    Thanks for allowing me to post our information, and have a great weekend.

  • October 18, 2008 at 9:58 am GMT-0500

    Jeff,
    This is not the place to discuss sensitive information. Please e-mail me at msaladna@apisnetworks.com if you wish to discuss this further.
