The Cloud / IaaS industry has grown rapidly in recent years, and providers have been steadily hardening their systems. Outages are always unfortunate, but by and large the cloud has shown that it is more resilient than pretty much any on-premises computing setup. Nonetheless, outages happen – and we are adding a new blog post type for these events – the “Down Report” – where we plan to dissect and rate what went wrong, with a particular focus on the lessons learnt for the affected provider, for the industry, but most importantly for their customers. 
 

To make the effort a little more fun, we assign ‘Cloud Load Toads’ to the overall event and to each circumstance. We mean no disrespect to the ‘load toads’ who work valiantly in the world’s air forces, but we liked the suggestion of our colleague Alan Lepofsky (@alanlepo), who came up with the term ‘Cloud Load Toad’.
 

On the ‘Cloud Load Toad’ scale, which goes from 1 (bad but OK, can happen) to 5 (very bad, should never ever happen), we rate the severity of the event overall and of the circumstances that led to it.

AWS S3 Down in US-EAST-1

First of all, kudos to AWS, which published its post mortem (see here) within about 48 hours of the event, faster than usual judging from other downtime events in the past. But then every cloud outage is different; the root cause here – manual error – is easier to establish than, say, troubleshooting a battery fire that destroys its very evidence (think Samsung).

But let’s dissect the post mortem report:
We’d like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected.

MyPOV – Certainly production and billing systems need to be connected, and in many scenarios the production system can create issues through the load it triggers for the billing system. But a production system should never be able to be stopped by an administrative system such as billing. Production should be kept running; billing can be worried about later. It is likely (my speculation) that the S3 billing system uses S3 as well, creating a potential recursive dependency. Needless to say, these systems should be isolated. 
Rating: 3 Cloud Load Toads
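To make the isolation point concrete, here is a minimal sketch – hypothetical names throughout, not AWS’s actual architecture – of how a production request path can emit usage events for billing without ever depending on the billing pipeline being healthy:

    # Sketch: production emits billing/usage events asynchronously; a slow or
    # broken billing pipeline never blocks the request path. Names are hypothetical.
    import json, queue, threading, time

    billing_queue = queue.Queue(maxsize=10000)   # in-process buffer drained by a worker
    local_spool = []                             # fallback if even the buffer is full

    def record_usage(event: dict) -> None:
        """Called from the production request path; must never raise."""
        try:
            billing_queue.put_nowait(event)
        except queue.Full:
            local_spool.append(event)            # degrade gracefully, reconcile later

    def send_to_billing(event: dict) -> None:
        # placeholder for a call to the (isolated) billing system
        print("billing event:", json.dumps(event))

    def billing_worker() -> None:
        while True:
            event = billing_queue.get()
            try:
                send_to_billing(event)
            except Exception:
                local_spool.append(event)        # billing trouble stays on the billing side
            finally:
                billing_queue.task_done()

    threading.Thread(target=billing_worker, daemon=True).start()
    record_usage({"op": "GET", "bytes": 1024, "ts": time.time()})
    billing_queue.join()                         # only here so the example flushes before exiting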
 
 
At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process.

MyPOV – Related to the above: it is now obvious that the billing system uses S3 as well. Drinking your own champagne is good, but when the champagne maker’s own mistake spoils the batch, not only the customers but the champagne maker gets food poisoning – not what you want to have happen. But humans can make mistakes.
 
Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests. The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects. Removing a significant portion of the capacity caused each of these systems to require a full restart. While these subsystems were being restarted, S3 was unable to service requests.

MyPOV – Kudos to AWS for transparency. But any attendee of its re:Invent user conference knows how much the vendor prides itself on not letting humans make mistakes, and on putting key / vital processes into code. That approach and philosophy certainly wasn’t followed here. It would be good to chat with AWS CTO Werner Vogels about this one… I am sure enough people in Seattle are pondering how, in the future, typos and manual human error should not be able to take systems down. Of course, we still need a kill switch for the humans… 
Rating: 4 Cloud Load Toads.
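For illustration only – this is not AWS’s actual tooling – a capacity-removal command with the kind of guardrails discussed here might refuse oversized inputs and default to a dry run, so that a single mistyped argument cannot take out a subsystem:

    # Hypothetical ops command; thresholds and flag names are assumptions.
    import argparse, sys

    MAX_REMOVAL_FRACTION = 0.05    # assumed limit: never remove more than 5% of a fleet per step
    MIN_REMAINING_SERVERS = 100    # assumed floor below which removal is always refused

    def remove_capacity(fleet_size: int, to_remove: int, execute: bool) -> None:
        if to_remove / fleet_size > MAX_REMOVAL_FRACTION:
            sys.exit(f"refusing: {to_remove} of {fleet_size} servers exceeds the "
                     f"{MAX_REMOVAL_FRACTION:.0%} single-step limit")
        if fleet_size - to_remove < MIN_REMAINING_SERVERS:
            sys.exit("refusing: removal would drop the fleet below minimum capacity")
        if not execute:
            print(f"DRY RUN: would remove {to_remove} of {fleet_size} servers")
            return
        print(f"removing {to_remove} servers...")   # real tooling would act here

    if __name__ == "__main__":
        p = argparse.ArgumentParser()
        p.add_argument("--fleet-size", type=int, required=True)
        p.add_argument("--remove", type=int, required=True)
        p.add_argument("--execute", action="store_true",
                       help="without this flag the tool only prints its plan")
        args = p.parse_args()
        remove_capacity(args.fleet_size, args.remove, args.execute)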
 
 
Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.

MyPOV – AWS suggests writing critical processes to span regions. Its own website – amazon.com – and subsidiary zappos.com did not go down, and were probably coded correctly. The question is (and sorry if I have not read the fine print): could an AWS client still use US-EAST-1 services such as EC2, EBS and AWS Lambda if pointed at S3 stores in other regions, or does an S3 failure take the whole region out? This is a deeply critical issue for any IaaS tech stack in an IaaS data center. So, did customers have a chance here? A question to follow up with AWS. Not Rated.
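While we wait for an answer, customers can hedge on their own side. A minimal client-side sketch, assuming the bucket is already replicated to a second region (bucket names and regions are hypothetical): if a read from the primary region fails, fall back to the replica instead of failing the request.

    # Requires boto3; bucket names and regions are placeholders.
    import boto3
    from botocore.exceptions import ClientError, EndpointConnectionError

    PRIMARY = ("us-east-1", "my-app-bucket")
    FALLBACK = ("us-west-2", "my-app-bucket-replica")   # kept in sync via replication

    def get_object_with_fallback(key: str) -> bytes:
        for region, bucket in (PRIMARY, FALLBACK):
            s3 = boto3.client("s3", region_name=region)
            try:
                return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            except (ClientError, EndpointConnectionError) as exc:
                print(f"S3 read from {bucket} ({region}) failed: {exc}")
        raise RuntimeError(f"object {key!r} unavailable in all configured regions")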


 
S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected. The index subsystem was the first of the two affected subsystems that needed to be restarted. By 12:26PM PST, the index subsystem had activated enough capacity to begin servicing S3 GET, LIST, and DELETE requests. By 1:18PM PST, the index subsystem was fully recovered and GET, LIST, and DELETE APIs were functioning normally. The S3 PUT API also required the placement subsystem. The placement subsystem began recovery when the index subsystem was functional and finished recovery at 1:54PM PST. At this point, S3 was operating normally. Other AWS services that were impacted by this event began recovering. Some of these services had accumulated a backlog of work during the S3 disruption and required additional time to fully recover.

MyPOV – AWS describes well that things break all the time, and that they can even go down. But IaaS providers need to be certain they can come back up, and part of coming back up is knowing how long it will take. S3 has been very popular, which makes it harder to take it down and test (or simulate) how long recovery takes – but that is certainly something AWS could and should have done, and known. When you run IT and do not know, more or less for sure, when a system that is down will come back up, you are in a bad spot as an IT professional. 
 
 
Rating: 4 Cloud Load Toads
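The fix here is not exotic: run recovery drills and measure. A sketch of such a drill – the service unit name and health URL are placeholders – that restarts a subsystem in a test environment and records how long it takes to report healthy again:

    # Placeholder unit name and health endpoint; numbers collected over repeated
    # drills become a recovery-time budget you can actually promise.
    import subprocess, time, urllib.request

    HEALTH_URL = "http://localhost:8080/healthz"

    def wait_until_healthy(timeout_s: int = 3600) -> float:
        start = time.monotonic()
        while time.monotonic() - start < timeout_s:
            try:
                if urllib.request.urlopen(HEALTH_URL, timeout=2).status == 200:
                    return time.monotonic() - start
            except OSError:
                pass
            time.sleep(5)
        raise TimeoutError("service did not recover within the drill window")

    subprocess.run(["systemctl", "restart", "index-subsystem"], check=True)
    print(f"time to healthy: {wait_until_healthy():.0f}s")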
 
 
We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future.

MyPOV – This section reads as if there was a software tool – but one without sufficient safeguards, so it let a dangerous input through. That, of course, is not good. Granted, it is hard to simulate and test with systems of this scale – but that is not a good enough answer. 
Rating: 3 Cloud Load Toads
 
We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. We employ multiple techniques to allow our services to recover from any failure quickly. One of the most important involves breaking services into small partitions which we call cells. By factoring services into cells, engineering teams can assess and thoroughly test recovery processes of even the largest service or subsystem. As S3 has scaled, the team has done considerable work to refactor parts of the service into smaller cells to reduce blast radius and improve recovery. During this event, the recovery time of the index subsystem still took longer than we expected. The S3 team had planned further partitioning of the index subsystem later this year. We are reprioritizing that work to begin immediately.

MyPOV – Kudos to AWS for transparency, for explaining that it has a solution and for committing to getting better going forward. A textbook response that all vendors with an outage should share – not all have.
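For readers unfamiliar with the cell idea, a toy illustration (not AWS’s implementation): customers are hashed into a fixed set of independent cells, so an operational mistake in one cell only touches its share of customers, and recovery can be exercised one cell at a time.

    # Toy example: 16 independent cells is an arbitrary assumption.
    import hashlib

    CELLS = [f"cell-{i:02d}" for i in range(16)]

    def cell_for_customer(customer_id: str) -> str:
        digest = hashlib.sha256(customer_id.encode()).hexdigest()
        return CELLS[int(digest, 16) % len(CELLS)]

    # A failure in this customer's cell leaves the other cells untouched.
    print(cell_for_customer("customer-42"))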


 
From the beginning of this event until 11:37AM PST, we were unable to update the individual services’ status on the AWS Service Health Dashboard (SHD) because of a dependency the SHD administration console has on Amazon S3. Instead, we used the AWS Twitter feed (@AWSCloud) and SHD banner text to communicate status until we were able to update the individual services’ status on the SHD. We understand that the SHD provides important visibility to our customers during operational events and we have changed the SHD administration console to run across multiple AWS regions.

MyPOV – This is probably the worst finding: an overly optimistic implementation of the key dashboard for AWS’s overall status. It should never have a single point of failure, yet we see this happening over and over in outages. Vendors need to learn not to rely on their own services to communicate with clients during an outage – as those services may not be available. A cardinal mistake (see here for another outage with the same issue)… yet vendors keep making it. 
Rating: 5 Cloud Load Toads
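The lesson generalizes to any provider (and to customers’ own status pages): publish status through a path that does not depend on the platform being reported on. A minimal sketch, with hypothetical endpoints, that pushes status to two independent hosts so at least one stays reachable during an outage:

    # Hypothetical endpoints: e.g. a static host at a second provider plus a
    # second-region site; neither shares infrastructure with the monitored service.
    import json, time, urllib.request

    STATUS_ENDPOINTS = [
        "https://status-backup.example-cdn.com/push",
        "https://status.us-west-2.example.com/push",
    ]

    def publish_status(component: str, state: str) -> None:
        payload = json.dumps({"component": component, "state": state,
                              "ts": time.time()}).encode()
        for url in STATUS_ENDPOINTS:
            req = urllib.request.Request(url, data=payload,
                                         headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=3)
            except OSError as exc:
                print(f"status push to {url} failed: {exc}")   # keep trying the others

    publish_status("object-storage", "degraded")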

 
 
Finally, we want to apologize for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.

MyPOV – Kudos for acknowledging and owning the issue. No blame game and no scapegoating (which is often seen here too, the most common scapegoat being the network / network provider).
 

A pretty severe event

Doing the tally across the Cloud Load Toads – assuming I did the math right – I count 19 toads in total across 5 rated circumstances, which brings the overall event to 3.8 Cloud Load Toads. I am sure AWS will be the first to agree that this wasn't an insignificant event. But let's look at the lessons learnt – not least because customers could have architected their workloads to avoid the downtime.
 
 

Lessons for IaaS Customers

Here are the key aspects for customers to learn from the AWS S3 outage:

Have you built for resilience? Sure, it costs, but all major IaaS providers offer strategies for avoiding single-location / single-data-center failures. Way too many prominent internet properties chose not to do so – and if 'born on the web' properties miss this, it is key to check that regular enterprises do not. Uptime has a price; make it a rational decision. Now is a good time to get the budget / investment approved, where warranted and needed.
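As one concrete and relatively cheap starting point for S3 specifically, cross-region replication can be switched on per bucket. A minimal sketch, assuming the IAM replication role already exists and the destination bucket lives in another region with versioning enabled; all names, ARNs and account IDs are placeholders:

    # Requires boto3; names, ARNs and account IDs are placeholders.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Replication requires versioning on the source (and destination) bucket.
    s3.put_bucket_versioning(
        Bucket="my-app-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Replicate all objects to a bucket in a second region.
    s3.put_bucket_replication(
        Bucket="my-app-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
            "Rules": [{
                "ID": "replicate-all",
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::my-app-bucket-replica"},
            }],
        },
    )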

Ask your IaaS vendor a few questions: Enterprises should not be shy about asking their IaaS providers the following:
  • Do you run your systems by hand or with software?
     
  • Could the same issue that happened with AWS S3 in US-EAST-1 happen to you?
     
  • How do you test your operational software?
     
  • When was the last time you took your most popular services down (and brought them back up)?
     
  • What is the expected uptime of your most popular services?
     
  • When did you last run the test behind that uptime expectation, and how much has system usage increased since then?
     
  • How can we code for resilience – and what does it cost?
     
  • What kind of remuneration / payment / cost relief can be expected in the event of downtime?
     
  • What single points of failure should we be aware of?
     
  • How are your operations consoles built?
     
  • How do you communicate in a downtime situation with customers?
     
  • How often and when do you refresh your older data centers and servers?
     
  • How often have you reviewed and improved your operational procedures in the last 12 months? Give us a few examples of how you have increased resilience.


And some key internal questions that customers of IaaS vendors have to ask themselves:
  • What are your customer / employee communication tools?
     
  • When your IaaS vendor goes down, so may your customer- and employee-facing apps. How do you communicate then?
     
  • Make sure to learn from AWS's mistake – do not rely on the same point of failure / architecture as your production systems to communicate, as it will not be available. Simple, but always good to check, and better yet to monitor (see the sketch below). 
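A minimal sketch of that out-of-band check (all URLs hypothetical), meant to run from a vantage point outside your primary IaaS provider: it probes both the customer-facing app and the channels you would use to communicate during an outage.

    # Run this from outside your primary provider (another cloud, an office box,
    # a third-party monitor). URLs are placeholders.
    import urllib.request

    CHECKS = {
        "customer-app":   "https://app.example.com/healthz",
        "status-page":    "https://status.example.com",       # must not share infra with the app
        "support-portal": "https://support.example-elsewhere.com",
    }

    def probe(name: str, url: str) -> bool:
        try:
            ok = urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:
            ok = False
        print(f"{name:15s} {'OK' if ok else 'UNREACHABLE'}  {url}")
        return ok

    if __name__ == "__main__":
        if not all([probe(n, u) for n, u in CHECKS.items()]):
            raise SystemExit("at least one communication path is down")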
 

MyPOV

Outages are always unfortunate. The key thing is to learn from them, and knowing AWS, they will be ruthless in addressing the issues (and hopefully update customers and analysts on progress). Kudos for a fast post mortem, for taking responsibility and for sharing first strategies to avoid another occurrence.

On the concern side, AWS needs to ask itself how it recycles and reviews architecture and servers. US-EAST-1 is a behemoth that remains hugely popular, but it may need more rejuvenation than AWS has planned for. In the race for cloud location dominance it is possible that vendors stretch aging infrastructure beyond the breaking point. Of course, it is easy to armchair-quarterback everything afterwards, but this remains an area to watch.

Overall, hopefully plenty of lessons learnt all around – for AWS, for other IaaS providers, and for customers.