Sticking with tradition, the day 2 keynote of the AWS reinvent conference was delivered by the AWS CTO, Werner Vogels. Vogels - or @Werner, as a devoted developer community refers to him by his Twitter handle - was once again at his best, leading through the 100 minute keynote in front of 9000 clouderati.




As with any day 2 keynote, it was interesting to see how Vogels built on what we learnt from Jassy yesterday. And he kicked off, consistently, by pointing to AWS' pace of innovation - a slide seen throughout the conference. And to give AWS credit - it has been innovative, adding features at a rapid pace, even though it has been growing and getting more complex as a technology platform and organization.





With 243 functions and features delivered so far throughout 2013, AWS is at a record pace of innovation, even by its own standards. And Vogels tied things back to Amazon WorkSpaces and Amazon AppStream from yesterday's keynote as key product deliverables of 2013. It was good to see Vogels acknowledge that the amount of new features can also be overwhelming - but this will not stop AWS, as rapid delivery is in the DNA of the division.




Philosophy Part 4 - Reverse engineer, starting with the customer

(Parts 1 - 3 are in yesterday's post here). As we have heard from Jassy (and in every other session we attended at reinvent), AWS is customer focused and customers drive the innovation agenda. Vogels now went into how the development teams achieve this, which is almost like reverse engineering, starting with the customer requirements.





Vogels said that the AWS teams start with a pseudo press release that describes what the new product / feature is all about. From there they write the FAQ on the product / feature. Next are the use cases, and then the user documentation. The goal of the process is that the desired features are delivered and not lost in a traditional, e.g. waterfall, approach to product development. At Constellation we advise customers to start with the end in mind - this is pretty much the same philosophy applied to product development.



Philosophy Part 5 - Keep it small 

The other key success factor of the AWS development philosophy is to keep the teams small - Vogels introduced the 2 Pizza rule, which postulates that a team should be small enough that two pizzas will feed it for dinner. Additionally, the AWS teams work autonomously, own their product's road map and work decoupled from product launch schedules.




The other noticeable practice is that the product teams are in constant contact with customers, working with them on product direction and requirements. Speed is the most important factor for AWS, so the teams work autonomously next to each other. They release their products when they are ready - as fast as possible. The key benefit AWS goes after: the sooner customers have the product in their hands, the sooner AWS can start improving it.

In our view that is a laudable approach, but AWS needs to take account of the fact that its customers use multiple of its products, and often these need to work well together. With the AWS teams siloed and working on getting their products delivered as fast as possible, it is possible that the customer becomes the AWS system architect - a scenario we think AWS management will want to avoid.




Vogels' showcase for this was the RDS team, which has continuously innovated based on customer requirements - and the key RDS feature released at reinvent was PostgreSQL support. And with that we were at one of the two major product announcements of Day 2.




The announcement drew spontaneous applause from the crowd and Vogels was visibly happy about the new product.

And Vogels took a stab at the competition, too - the old guard, as AWS management describes them, being technology driven rather than customer driven, adding technology as it pleases, which leads to unnecessary complexity - vs. AWS, which only adds features that customers request.




Vogels even went so far as to refer to lean principles in the AWS development process - ensured by focusing only on what customers request. So AWS and its customers form an epic collaboration relationship.



Netflix award winners

Next up were Netflix Chief Product Officer Neil Hunt and Chief Cloud Architect Adrian Cockcroft on stage, talking about how Netflix has been building its platform on top of AWS. And to Netflix's credit, it has contributed many of its platform components to open source.




Netflix has realized that the developer community is key and created the NetflixOSS Cloud Prize, which awards a winner $10k in cash, $5k in AWS credit and a trip to reinvent. And the 10 winners have truly built innovative software. Remarkably, this was completely merit based and not political - gentlemen from IBM and Eucalyptus, for example, won 2 of the 10 prizes.



Philosophy Part 6 - The Power of Innovation

Vogels made clear that all innovations AWS provides are there to be around forever. They can't be lost and need to be maintained. And Vogels postulated 5 principles around which AWS innovation anchors:


  1. Performance
  2. Security (interesting - that was #1 for Jassy yesterday)
  3. Reliability 
  4. Cost (no one has ever said: I wish AWS would be a little more expensive)
  5. Scale


And Vogels believes that if AWS works hard on all 5 of these dimensions, then AWS customers will do well. The rest of the keynote was structured along these 5 principles.



It is all about IO, stupid (or performance)

Interestingly, Vogels then mentioned that it is all about storing and serving the data of AWS applications - that is what matters to customers, and with that it matters to the division. And for storage the most important KPI is IO performance, and IO needs to be consistent. He then quoted the famous statement that disks are becoming the new tape. But random IO makes it very hard to get consistent performance out of these systems. So AWS is moving to SSD to provide consistent, random IO.

Instagram is the example: by moving to SSD, it was able to move data 20 times faster between middle tiers and backend servers. So now AWS uses SSD, too - and announced the new I2 instances, which on the lower specs are cheaper than the HI1 instances (those gave 120k IOPS).




Not surprisingly, AWS uses these instances itself, and Vogels' example to illustrate consistent performance was of course DynamoDB. Consequently, we saw a flat performance chart for average DynamoDB latency. And to aid performance consistency further, AWS announced the availability of global secondary indexes for DynamoDB.
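To make the announcement a bit more concrete, here is a sketch of what a table definition with a global secondary index looks like. The table and index names are our own illustrative example, not from the keynote; the dict mirrors the parameters of DynamoDB's CreateTable API and would in practice be passed to an AWS SDK call (e.g. boto3's `client("dynamodb").create_table(**table_spec)`).

```python
# Hypothetical gaming leaderboard table: primary key on UserId, plus a
# global secondary index so the app can also query by GameTitle + TopScore -
# a key that is completely different from the table's own key.
table_spec = {
    "TableName": "GameScores",
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    "KeySchema": [{"AttributeName": "UserId", "KeyType": "HASH"}],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "GameTitleIndex",
            # The GSI's own key schema - independent of the table's key
            "KeySchema": [
                {"AttributeName": "GameTitle", "KeyType": "HASH"},
                {"AttributeName": "TopScore", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
            # Each GSI carries its own provisioned throughput
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    ],
}
print(table_spec["GlobalSecondaryIndexes"][0]["IndexName"])
```

The point of the feature is exactly this decoupling: queries by a non-key attribute no longer require a scan or a second, manually maintained table.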




And then it was up to Parse CEO and co-founder Ilya Sukhar to provide a showcase for consistent performance. Parse markets itself as a cloud of its own - with key mobile, push, storage and analytical capabilities. Sukhar showed lines of Objective-C code - the first code seen this reinvent, certainly welcome by the audience. The business event that created the showcase for AWS that Parse represents happened when Parse went on Facebook and its app volume jumped from a few hundred to 160k.




And AWS also helped Parse make MongoDB performance consistent using PIOPS - which cut baseline latency in half, spikes disappeared, and overall Parse is now scaling much better, as memory warm up time has been cut by 80%. And finally, one of the main benefits for Parse was that its developers could focus completely on the customer and did not have to worry about infrastructure. And lastly Sukhar mentioned the peace of mind for him as a CEO - knowing that the infrastructure can scale with AWS and is no longer something he has to worry about.



Philosophy Part 7 - Flip the Security Model

In the past it was up to customers to increase security on their data, e.g. by turning on encryption. Vogels wants to turn this around and said that in the near future AWS customers will have to explicitly request not to have their data encrypted. Encryption and other security measures will be the new normal - getting less will be something customers have to request. Vogels' example was that a few years ago there was a discussion that https would be too expensive - but today it's standard. Along the same lines he thinks that security standards under cost and performance scrutiny today will be standard sooner rather than later. And AWS may be an active change agent in this process.

Specifically for AWS this means that IAM and IAM roles become more important. And this has been achieved pretty well for S3, said Vogels. But how to do this in real databases - which data is accessible to whom - remained a challenge, and for that the fine grained access control of DynamoDB is the showcase. For instance, mobile applications can access DynamoDB directly - no longer requiring a proxy tier to separate customers. And then there is now support for SAML 2.0. Only now - which surprised me a bit - but better late than never.
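A minimal sketch of what such fine grained access control looks like as an IAM policy (table name, account id and allowed actions are our own illustrative assumptions): the `dynamodb:LeadingKeys` condition restricts a federated mobile user to the items whose partition key equals their own user id, which is what removes the need for a proxy tier between app and database.

```python
import json

# Illustrative IAM policy for DynamoDB fine grained access control.
# The ${www.amazon.com:user_id} variable is substituted at request time
# for a user federated via web identity (here: Login with Amazon).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Caller may only touch items whose hash key is their own id
                    "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attached to an IAM role assumed by the mobile app, this policy makes the table itself multi-tenant safe.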

Along these lines, Redshift gets encryption, and thanks to a dual key system only the customer - not AWS (or other partners) - has access to the encrypted data.





Reliability

And of course reliability is achieved with the availability zones. And AWS sees usage maturing, with customers even using different regions for their availability zones. The Japanese earthquakes and hurricane Sandy are the recent events that make businesses consider spreading availability zones across regions.





And with AWS adding snapshot copy for Redshift, customers get the capability to secure their data warehouse easily across regions. And even more importantly, RDS will allow cross region read replicas. This makes migration between regions easier by allowing customers to spread copies across regions. This gives customers many options for backup - starting from simple backup, to a pilot light approach, to a warm standby solution, and ultimately to a multi-site solution like the one Netflix is pursuing.




Cost

As storage and database usage are key cost drivers for AWS customers, Vogels went over the tiered capabilities of AWS for both storage and IO.
 
 

Equally, compute needs to be part of the cost optimization - and there Vogels stressed how important the spot market is. Customers shrewdly taking advantage of the spot market are Hungama for transcoding; Pinterest, which manages its front end operation on it and was able to reduce cost by 75%; and finally Vimeo, which differentiates between free and paid accounts: free accounts are transcoded on spot instances, while paid accounts are transcoded on dedicated instances. And the final example was Cycle Computing - which can use all of AWS' compute capacity - procuring 1.21 PFlops with over 16k instances and 264 years of compute to calculate compound formulas.




And the stunning revelation by Vogels in this case was that the cost of running that massive compute job was $33k - versus procuring the compute in a traditional on premise delivery, which would have cost the client $68M.

And Vogels also announced the G2 instances, which leverage the NVIDIA 'Kepler' GPUs, have 1536 CUDA cores and are great at encoding and streaming video.




Vogels confirmed that these G2 instances are the backbone of the Amazon AppStream product that Jassy announced yesterday. But AWS does not stop there - it also announced a new flagship compute instance, the C3. It runs Ivy Bridge processors and is an SSD based platform.



 


And AWS would not be AWS if it did not offer a range of different configuration options.



Scale 

The showcase for scale was the Dutch company WeTransfer, which transfers artist wallpapers and other attachments that are too large to send via email. And the success of the company is creating a massive scaling problem, as a week in 2013 sees the same amount of transfers as a month in 2012. And needless to say - they solved that with AWS.




Next up was Mike Curtis, VP of Engineering at Airbnb. Not surprisingly, Airbnb is experiencing massive subscriber growth, reaching 4M subscribers in January 2013. And about 150k people stay with Airbnb hosts on any given night. Again, AWS solved the scalability problems for the company. Even more convincingly, Curtis said that anytime AWS has something they could use, Airbnb uses it and does not look further.

Airbnb went from 24 EC2 instances in 2010 to over 1000 in 2013. Photos are key for guests, as they pick their host property through them - and photo storage has gone from 294GB in 2010 to 50TB in 2013.



 

Most amazingly, Airbnb can run all this infrastructure with a 5 FTE operations team.



AWS and the Internet of Things

Next, Vogels went over the many applications of sensor data and real world machines that AWS enables customers to work with. From the Nest thermostat, Illumina dumping sequencing data into S3, and Tata Motors instrumenting trucks to predict preventive maintenance, to collaborating with GE on the industrial cloud, to capturing sensor data from smartphones with startups like Human, which motivates users to be active for 30 minutes a day - it's all happening on AWS.




The combination of the offline with the online world is the common thread of these applications, said Vogels. And then he got a little geeky and social, showing a lifelogging application coming from Sweden - presenting the narrative of his last 72 hours in Las Vegas, as the device takes a picture every x minutes.




The showcase for massive real world to AWS connection was then Dropcam, with CEO and co-founder Greg Duffy on stage. And Duffy made the great point that it was not about the hardware but the software - so Dropcam did not have to build a camera, but a camera web service. And interestingly, Dropcam is the largest video service on the web - with more data uploaded per minute than YouTube.




And as expected - when moving to AWS, usage started to go up massively. A main reason for Dropcam to move to AWS was the free inbound transfer of data. Then Duffy walked the audience through the Dropcam architecture - as expected, compression starts on the camera, and Dropcam makes heavy use of Scala, Python and PostgreSQL, along with DynamoDB.



 
Where Dropcam gets really powerful - and a little bit of a concern from a privacy perspective (1984 anyone?), too - is that it is enabling real time video analysis - of course using AWS EC2 to process the massive video load.
 
Then it was back to Vogels to go over some interesting products built on AWS that connect the real world with AWS. The examples came from the transport world with Moovit and OneBusAway.
 
 
One more was mBuilder, which puts sensors onto construction sites to monitor e.g. temperatures - their data gets streamed back to AWS and allows efficient management of the construction site.
 
All that data creates a logging challenge and a data storage challenge - as you cannot afford loss of data. Vogels quoted Netflix's Cockcroft: Netflix is actually a log generating application that just happens to stream movies. And that queued up the next topic - and last but key announcement - around realtime.
 

AWS gets serious with realtime

Vogels did a good job of talking about how it matters less and less what happened yesterday or even 15 minutes ago (AWS CloudTrail maybe?) - it all comes back to finding out what happens right now, to drive real time insights.
 
Next he went in a smart way through the deficiencies of the current technology at hand - Hadoop, Storm, Kafka, AMQP et al. - which all work, but are hard to maintain at scale and tough to configure.
 
 
The showcase was echo, which helps resolve URLs and detect spam with massive inputs of 1000 average and 13k peak records per second, and outputs of 1100 average and 7000 peak per second.
 
But AWS wants to make it easier - and launched Kinesis (Greek for a movement in reaction to a stimulus):
 
 
And of course AWS makes it massively scalable, allowing it to process TBs of data, while staying reliable and, most importantly, being simple to use. In a later session we learnt that AWS provides a Java based client library - called the KCL - that makes it easy to administer and ramp these complex systems up and down.
 
 
As to be expected, Kinesis ramps up and down gracefully, incrementing system throughput in units (later we learnt they are called shards) of 1 MB/s in and 2 MB/s out, with each shard able to process 1000 transactions per second.
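The per-shard figures above translate into a simple capacity calculation: a stream needs enough shards to cover its ingest rate, its egress rate, and its record rate, whichever is the binding constraint. A minimal sketch (the function name is ours, not an AWS API):

```python
import math

# Per-shard capacity as stated in the keynote:
# 1 MB/s in, 2 MB/s out, 1000 records (transactions) per second.
SHARD_IN_MBPS = 1.0
SHARD_OUT_MBPS = 2.0
SHARD_RECORDS_PER_SEC = 1000

def shards_needed(in_mbps: float, out_mbps: float, records_per_sec: int) -> int:
    """Smallest shard count that covers all three per-shard limits."""
    return max(
        math.ceil(in_mbps / SHARD_IN_MBPS),
        math.ceil(out_mbps / SHARD_OUT_MBPS),
        math.ceil(records_per_sec / SHARD_RECORDS_PER_SEC),
        1,  # a stream always has at least one shard
    )

# Example: 5 MB/s ingest, 8 MB/s egress, 13,000 records/s -
# here the record rate is the binding constraint.
print(shards_needed(5, 8, 13000))  # 13
```

Since shards can be added and removed while the stream runs, this calculation can be redone continuously as load changes - which is what the graceful ramp up and down amounts to.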
 
 
Streams into Kinesis can be scaled, and so can backend Kinesis applications. What is most important is that AWS plays more and more on the synergistic platform field, building and integrating new offerings like Kinesis into the overall AWS platform. So Kinesis can leverage DynamoDB, Redshift, Storm, S3, EMR and RDS - Kinesis apps can be deployed to EC2, Auto Scaling is enabled, and Kinesis streams can even be combined.
 
 
Kinesis was the only Amazon product demoed in the keynote - showing how important AWS considers this new capability. And the demo, done by AWS's Khawaja Shams, was really impressive - analysing the Twitter firehose, persisting each tweet and analyzing it from a content and popularity perspective. Persistence was achieved with DynamoDB and analysis done in Redshift. The demo was all about looking up Mars as a planet - but Mars is a popular term on Twitter not as a planet, but as the last name of an artist whose first name is ... Bruno. A funny way to demo the very powerful Kinesis capabilities.
 
Next, Shams showed where people are tweeting about Mars from - a good demo of the new PostgreSQL capability with its inherent GIS features, showing a US map with the tweet activity.
 
But the most important takeaway from the Kinesis demo was that this application was built in around 5 days (with, of course, 2 very smart people involved) - and the cost to run it in production was.... $5.
 
And then it was up to Vogels to close out the keynote - going quickly over all the announcements: PostgreSQL support in RDS, Amazon Kinesis, cross region replicas, Redshift snapshot copies, global secondary indexes, and the new C3 and I2 instances.
 
 
 
As it had to be - the most applause was garnered by Vogels announcing Deadmau5 as the musical guest for the AWS party.
 

MyPOV

AWS keeps innovating at a very fast pace - no doubt. The good news - and that was made abundantly clear - is that AWS does so with the customer in mind. And it keeps providing new value from innovation to all its constituents - AWS CloudTrail caters to security concerns shared across the client side, PostgreSQL support in RDS caters to developers, Amazon AppStream is geared towards developers building compute intensive apps, and Amazon Kinesis towards the real time analysis needs of both enterprises and ISVs. Only Amazon WorkSpaces is a new market entry.
 
All that happens while Amazon makes the backbone of its infrastructure stronger. We noted in our takeaways from yesterday that AWS has not added any regions - but there must have been a ton of fiber put into the existing data centers. And you need some physical stability on the data center side to achieve this - as we think these links are the backbone of new services such as the cross region reads and replicas, Redshift snapshot copy and global secondary indexes. Even if the competition wants these features, it just takes calendar time to put all that fiber in the ground and connect it to their data centers to replicate similar capabilities on their infrastructure.
 
Lastly, AWS was very proud of its new hardware instances, and while the new C3 and I2 instances are very powerful, this is the area where AWS is less strong relative to the competition. But they are probably fine with that - as it's not high end hardware that wins the cloud wars, but value added capabilities and real world network speeds at attractive prices.
 
Overall we think AWS has moved the yardstick further out for the competition to catch up to - it will be interesting to see how the real world actions of the usual suspects will materialize as a reaction to this year's reinvent announcements.
 
Lastly, the new Amazon Kinesis offering is the most exciting product in our view - as it moves the realization of the real world much closer into software, at a fraction of the cost previously imaginable. Can't wait to see Kinesis apps being built.

-------------
A collection of key tweets from the keynote can be found on Storify here.

And you can watch the replay of the keynote here: