So let’s dissect Urs Hölzle’s blog post – which pretty much serves as a Google press release:
[…]Industry-leading, simplified pricing
The original promise of cloud computing was simple: virtualize hardware, pay only for what you use, with no upfront capital expenditures and lower prices than on-premise solutions. But pricing hasn’t followed Moore's Law: over the past five years, hardware costs improved by 20-30% annually but public cloud prices fell at just 8% per year.
MyPOV – This is a new mantra for cloud pricing. While the original promise was to pay only for what you use – down to the minute or even second – Google is now looking at the underlying mechanism that makes all computing more affordable: Moore's Law. And kudos to Google for calling out the profit accumulation most providers have been entertaining to a certain point, as cloud price reductions have not kept step with the cost reductions seen in hardware. In an industry already feeling the cost pinch from Amazon's retail DNA, it is now Google pointing out that the existing cost reduction drive may not even have been aggressive enough. And we knew already that Google is serious, as it had dropped its consumer pricing for 100 GB of storage below the Google Cloud price for the same amount of storage – until today. We'll look into the commercial dynamics of how we think Google enables this price reduction later.
We think cloud pricing should track Moore’s Law, so we’re simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%:
- Compute Engine reduced by 32% across all sizes, regions, and classes.
- App Engine pricing is drastically simplified. We've lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 33%. In addition, many services, including SNI SSL and PageSpeed are now offered to all applications at no extra cost.
- Cloud Storage is now priced at a consistent 2.6 cents per GB. That’s roughly 68% less for most customers.
- Google BigQuery on-demand prices reduced by 85%.
MyPOV – Google is showing the application of Moore's Law and significantly reducing prices. The good folks up in Seattle will check if this is factual – but it looks to me like the biggest price reduction we have seen in the public cloud. Where AWS follows its retail DNA of smallish cost reductions – mimicking the always-on-sale strategy seen at some brick-and-mortar retailers – Google is giving away a full year of cost savings. And that's the most interesting insight here, as anything beyond a 30% price reduction suggests that Google may have pocketed some extra profits, too. There is nothing negative about that, by the way – being price competitive while keeping a good margin to protect yourself against upcoming price wars is a very viable, and probably the only viable, public cloud pricing and business strategy. Some colleagues have already pointed out that Google is now the most cost-effective cloud provider in the highly demanded high-memory instance category. If Google keeps that cost leadership, it will create a very viable alternative to Amazon for the next generation of compute-intensive, in-memory applications. And needless to say – the storage reductions make it cheaper for enterprises building on and using the cloud than it costs end users to consume the same storage. It has to be like that to foster and grease an ISV ecosystem.
In addition to lower on-demand prices, you’ll save even more money with Sustained-Use Discounts for steady-state workloads. Discounts start automatically when you use a VM for over 25% of the month. When you use a VM for an entire month, you save an additional 30% over the new on-demand prices, for a total reduction of 53% over our original prices.
MyPOV – This is probably the most innovative move by Google on the commercial side of the public cloud in a long time – if not ever. The key benefit of the cloud, elasticity of load, becomes a disadvantage when the load stabilizes so much that an originally elastic load becomes a static one. Ultimately that's a good sign for software vendors, as they want to grow their business, and hand in hand with growth comes a more stable load profile. In technical reality that load profile – always assuming a neatly scaling application architecture – shows up as VMs becoming static, meaning they run 24x7. And the commercial consequence is that such a VM becomes more expensive than the same load run on dedicated, non-virtualized infrastructure. There are numerous cases of software vendors starting out in the public cloud but, once loads had stabilized, moving their load to an on-premise, dedicated data center environment. Google (and all other public cloud vendors) don't want to see that – so major credit to Google for making this commercially less attractive to do. And to a certain point it is fair – less needs to happen at a cloud provider when VMs become dedicated, so passing along some of these cost savings to customers for their 'loyalty' is actually good business sense. The usage threshold of 25% and the maximum additional saving of 30 percentage points are parameters I'd say we will see more action on in the future. And I leave it to some tech pundits to speculate on the underlying Google architecture – what savings Google sees and how much of them the 30 percentage points pass along.
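The commercial mechanics can be sketched with a simple tiered billing model. Note that the per-bracket rates below are my own assumption, chosen only to be consistent with the stated parameters (discounts start above 25% monthly usage, and a full month yields an additional 30% off the on-demand price) – Google's actual schedule may differ.

```python
# Sketch of a sustained-use discount calculation. The bracket rates are
# an assumption chosen so that discounts begin above 25% monthly usage
# and a full month of use yields 30% off the on-demand price.

# (upper bound of usage bracket, fraction of on-demand rate billed)
BRACKETS = [(0.25, 1.00), (0.50, 0.80), (0.75, 0.60), (1.00, 0.40)]

def effective_rate(usage_fraction: float) -> float:
    """Blended fraction of the on-demand rate for a VM that runs
    `usage_fraction` of the month (0.0 to 1.0)."""
    if usage_fraction <= 0.0:
        return 0.0
    billed, prev = 0.0, 0.0
    for upper, rate in BRACKETS:
        if usage_fraction <= prev:
            break
        # Bill only the slice of usage that falls into this bracket.
        billed += (min(usage_fraction, upper) - prev) * rate
        prev = upper
    return billed / usage_fraction

# A VM used 25% of the month pays full on-demand rate; a full-month VM
# pays a blended 70% of on-demand, i.e. an additional 30% discount.
# Combined with the 32% on-demand cut: 0.68 * 0.70 = 0.476, roughly
# the quoted 53% total reduction over the original prices.
```

Under this model, someone doing the math will notice the blended rate falls stepwise as usage crosses each bracket boundary – which is exactly the incentive to keep a VM running rather than migrating a stabilized load off the cloud.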
Finally, it confirms Google's commitment to the VM – there are (at least for now) no visible ambitions or plans for anything in the bare metal field. And clearly Google is not interested in reserving instances for a multi-year deal. Enterprises like these options, though, as they give them cost certainty. But Google will rightly argue that an enterprise can gain similar certainty with three years of sustained usage – with the upside (in contrast to Amazon) that price reductions (Moore's Law anyone?) will take the cost down over the three years. An argument I expect enterprises will be open to – after some explaining.
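That argument can be illustrated with a back-of-the-envelope sketch; the 20% annual price cut below is an illustrative assumption, not an announced rate.

```python
# Sketch: 3-year cost of a sustained-use VM whose on-demand price
# falls each year (Moore's-Law-style cuts), versus a fixed-price
# 3-year reserved commitment. All figures are illustrative assumptions.

def three_year_sustained_cost(monthly_price: float,
                              annual_price_cut: float = 0.20) -> float:
    """Total cost over 36 months when the monthly price drops by
    `annual_price_cut` at the start of each new year."""
    total = 0.0
    price = monthly_price
    for month in range(36):
        if month and month % 12 == 0:
            price *= (1 - annual_price_cut)  # yearly price reduction
        total += price
    return total

# With assumed 20%/year cuts, three years cost about 2.44x the
# first-year bill, instead of 3x at a flat price - the gap a fixed
# 3-year commitment would have to beat.
```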
With our new pricing and sustained use discounts, you get the best performance at the lowest price in the industry. No upfront payments, no lock-in, and no need to predict future use.
MyPOV – The key emphasis has to be on 'no need to predict future use'. As Churchill said, predictions are always tricky, especially concerning the future [freely quoted]. And many cloud users find themselves in that situation at the beginning of the billing cycle: how many dedicated instances will we need for the next month? Google takes away that challenge, which will be greatly appreciated. It also moves the value proposition from 'pay by the glass' closer to 'all you can eat'. Someone will do the math and keep load on a VM for some minutes or even hours longer in order to lock in the full discount. An easier decision to make than predicting what you need.
Making developers more productive in the cloud
We’re also introducing features that make development more productive:
- Build, test, and release in the cloud, with minimal setup or changes to your workflow. Simply commit a change with git and we’ll run a clean build and all unit tests.
- Aggregated logs across all your instances, with filtering and search tools.
- Detailed stack traces for bugs, with one-click access to the exact version of the code that caused the issue. You can even make small code changes right in the browser.
We’re working on even more features to ensure that our platform is the most productive place for developers. Stay tuned.
MyPOV – Needless to say, making developers more productive is a major draw to specific clouds. And Google has picked an attractive first round of DevOps / debugging functions to get the attention of the development community. Having seen a lot of troubled software products, I find the automated unit tests a valuable feature. It is probably also a self-preservation mechanism for Google – so nothing too crazy can happen through the code. But it is also good to see that Google extends the same services its internal developers have to its cloud customers.
Introducing Managed Virtual Machines
You shouldn't have to choose between the flexibility of VMs and the auto-management and scaling provided by App Engine. Managed VMs let you run any binary inside a VM and turn it into a part of your App Engine app with just a few lines of code. App Engine will automatically manage these VMs for you.
MyPOV – Well, this should really be called 'App Engine managed VMs'. With this Google addresses a long-term critique and weakness of Google App Engine: that you could not break out of it. And as much as that is intended from a stability perspective, it limits the scope of the apps that can be built on App Engine. Now developers can access C libraries and local (Google calls them native) resources. But App Engine remains in charge, as it enables and controls the managed VM.
Expanded Compute Engine operating system support
We now support Windows Server 2008 R2 on Compute Engine in limited preview and Red Hat Enterprise Linux and SUSE Linux Enterprise Server are now available to everyone.
MyPOV – This is a huge win for Google, which before operated only the two more exotic Linux variants. Now decision makers not only have access to the two most popular enterprise Linux distributions, RHEL and SUSE, but also to Windows Server 2008 R2. Both will face less concern from corporate IT decision makers, as well as from mainstream-minded CTOs at application vendors. Lastly, given the managed VM capability, a number of local resource options that developers are familiar with and rely on become available for App Engine.
Real-Time Big Data
BigQuery lets you run interactive SQL queries against datasets of any size in seconds using a fully managed service, with no setup and no configuration. Starting today, with BigQuery Streaming, you can ingest 100,000 records per second per table with near-instant updates, so you can analyze massive data streams in real time. Yet, BigQuery is very affordable: on-demand queries now only cost $5 per TB and 5 GB/sec reserved query capacity starts at $20,000/month, 75% lower than other providers. […]
MyPOV – An aggressive move by Google in the BigData field – the other providers being the usual suspects. The key takeaway, though, is that Google wants a piece of the fast-growing pie of BigData apps being built for the cloud.
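For perspective, a quick break-even sketch using the prices quoted above (the workload figures are illustrative assumptions):

```python
# Break-even between BigQuery on-demand and reserved pricing, using
# the prices quoted above: $5 per TB scanned on demand, and
# $20,000/month for 5 GB/sec of reserved query capacity.
ON_DEMAND_PER_TB = 5.0          # USD per TB scanned
RESERVED_PER_MONTH = 20_000.0   # USD per month

def monthly_cost_on_demand(tb_scanned: float) -> float:
    """On-demand bill for a given monthly scan volume (assumed workload)."""
    return tb_scanned * ON_DEMAND_PER_TB

# Scan volume at which the reserved tier starts paying for itself.
break_even_tb = RESERVED_PER_MONTH / ON_DEMAND_PER_TB  # 4000.0 TB/month
```

Below roughly 4,000 TB scanned per month, on-demand is the cheaper option; above it, the reserved tier wins – before even considering the guaranteed 5 GB/sec of query capacity.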
A truly landmark point for the cloud, with Google laying down its cards. Reports say that Hölzle and team only switched focus to the public cloud in January this year – if true, a lot has been done in little time, and the competition is warned. We will have to see if Hölzle's team will be able to look beyond its current largest customer – Google itself – for the next quarters, but the ambitions and hints during the event are there.
Historically – in statements made by Marissa Mayer (when still at Google) – Google has been the only cloud provider to openly state that excess capacity should be given to outside developers (thanks to colleague @ReneBuest for reminding me recently). All other cloud providers have – despite my probing – never admitted to it. And maybe Google won't anymore either – but it's good business practice. If a cloud provider has a very elastic cloud, why not commercialize excess capacity at very attractive, near-cost rates? If the bulk of the load is higher margin, you are still running a formidable business (check Google's latest earnings). And as long as you grow your cloud capacity faster than public cloud demand, that spare capacity is unlikely to see bottlenecks, given the scale at which Google operates.
Over at RightScale, Hassan Hosseini has done a detailed comparison between Google and Amazon. Google mostly comes out on top. It's interesting that in the most price-attractive scenario – three years of sustained usage on Google versus a three-year commitment on Amazon – AWS comes out slightly ahead, but of course only with the three-year commitment. It's interesting because the three-year commitment comes closest to a cloud user owning their hardware – so it's probably the closest price to the real cost of operating a cloud infrastructure.
All in all it is very good news for overall cloud adoption. Enterprises will benefit from better and more cost-effective software, application vendors will have more options for where to move load, and developers have a cloud provider that has cachet and cares for them. On the flip side, Google's approach is a developer-centric cloud – many more things need to happen for Google to move traditional loads, such as existing commercial databases and enterprise applications, to the cloud. Hoping for all these applications to be rebuilt on the technologies available in Google's cloud will not be a viable medium-term strategy. But for now this is a great step by Google; we are eager to measure the stride length of the next one – same cadence, shorter or longer. For sure the startup audience is listening.
Here is the first of many videos of the event:
Lastly – this is the week of the cloud: yesterday Cisco announced its Intercloud, tomorrow Amazon has a cloud event, and on Thursday Microsoft has announced a press conference. I am sure the price strategists in Seattle and Redmond are crunching some numbers.
More on Google:
- A tale of two clouds - Google and HP - read here
- Why Google acquired Talaria - efficiency matters - read here