Results

Event Report - Workforce Vision 2017 - After restart and takeoff, time for the boosters

We had the opportunity to attend the Workforce Software Vision 2017 user conference, held in New Orleans from March 13th to 15th, 2017. The event was well attended, with over 260 attendees managing a collective 500k workers.

 
 



Here is my video of the event:

 

No time to watch? Here is the one-slide update:

 

If you want more details – read on:

Focus on Product and more – Always good to see vendors focusing on product. In the case of Workforce Software, CEO Morini shared that the vendor will add 80 more developers – a substantial investment for a 600-employee company. Likewise, Workforce Software has shown progress on the partner side (more large SIs are on board), the reseller relationship with SAP appears to be going well, and lastly the vendor unveiled a new implementation methodology to help customers go live more quickly.


 
Morini introduces the connected Workforce


Roadmap Transparency – In the past, Workforce (like many other HCM vendors) was not a poster child for transparency; luckily for customers, this has changed, and Workforce shared a three-year roadmap – with the usual caveats, but quite a difference from what was shared two years ago (the last user conference I was able to attend). No surprise – unification is a key theme across the coming releases, along with some vertical capabilities and, most importantly, a much-needed UI improvement.

 
Workforce Software 2016 in review


Hard work – will it be enough? – No question, Workforce has made a lot of progress, but the question remains: can the vendor catch up with the 800-pound gorilla of the industry, Kronos? In general, the speed of vendors in Workforce Management is increasing, with a strong focus on innovation, so Workforce’s task is not getting easier. There is plenty of room to differentiate, but the vendor now needs to accelerate delivery across the board – hence the blog post title.
 
Broady opens Workforce Vision 2017
 

MyPOV 

Good progress by the new Workforce Software management team, no doubt. With more investment in product, a focus on implementation speed, more partners, successful reseller relationships, and more – the vendor is executing the right strategies. Now they have to materialize and make a difference in the near future.

On the concern side, Workforce Software has to bring together multiple platforms at different levels, from architecture and data centers all the way to the UI. And it needs to deliver the next generation of its product, taking advantage of cloud, microservices etc. – and for all the talk of engagement, it must improve its user experience. The days of clumsy screens for power users are numbered. To be fair, the vendor has realized that and plans a UI overhaul, architecture change and other improvements.

Whether it will all be enough to close the distance to the market leader is too early to tell, but there is no doubt Workforce Software has positioned itself much better than where the vendor was a few years ago. We will have to check in again – stay tuned.



More on Workforce Software:
 
  • News Analysis - WorkForce Software Announces Global Reseller Agreement with SAP - read here
  • Progress Report - WorkForce Software powers into more Workforce Management - but needs to watch the Fundamentals - read here


More on Workforce Management:
 
  • Event Report - Kronos KronosWorks - Solid progress and big things loom - read here
  • Progress Report - Ceridian makes good progress, the basics are done now its about next gen capabilities - read here
  • Event Report - Kronos KronosWorks - New Versions, new UX, more mobile - faster implementations - read here
  • Event Report - Ceridian Insights - Momentum and Differentiation Building - read here
 


Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here.

Concur Gets Deeper Into Traveler Risk Management

Constellation Insights

SAP's Concur subsidiary is expanding its play in traveler risk management, a key area of innovation in a world increasingly marked by political unrest, extreme weather and terrorism. Here are the details from its announcement:

Concur Risk Messaging will capture unrivaled traveler location data via Concur Travel & Expense, Concur Mobile, Concur TripLink, TripIt from Concur, supplier e-receipts and more, providing travel managers immediate and unparalleled visibility into employees that may be at risk. Concur Active Monitoring, powered by HX Global, will offer 24/7 monitoring, proactive communication capabilities, and assistance coordination. This enables businesses to deliver on their commitment to ensure employee safety and well-being as they travel, across time zones and outside of business hours.

While Concur has offered traveler risk management capabilities for years, the new offering broadens the feature set through the partnership with HX Global. Concur's system uses more granular travel data than that provided by a GDS (global distribution system) such as Sabre; GDSs are used by transportation providers, hotels and travel agencies to make reservations. For example, Concur can use an employee's expense receipts and card purchases to piece together a location data trail. Concur can also pull in HR system data for a fuller view of the employee.

Overall, it's a growing business for Concur. The company says it sent more than 10 million alerts to travelers last year, and the number of Concur users who were alerted grew from 151,000 to 1.3 million over the course of the year.

Concur's Fusion user conference is ongoing in Chicago this week. I'll be there and plan to dig deeper into Concur's new travel risk management offering. 

24/7 Access to Constellation Insights
Subscribe today for unrestricted access to expert analyst views on breaking news.


CEN Member Chat with R "Ray" Wang on Dynamic Leadership

R "Ray" Wang, founder of Constellation Research, shares his views on what it takes to be a dynamic leader and explains why it's valuable. For those who want regular access to content like this, consider joining the Constellation Executive Network.

<iframe src="https://player.vimeo.com/video/208403091?badge=0&autopause=0&player_id=0" width="832" height="720" frameborder="0" title="CEN Member Chat with R &quot;Ray&quot; Wang on Dynamic Leadership" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>

Mar 22: Join Microsoft CIO Jim Dubois and Mott Macdonald’s Simon Denton on how to achieve success with Office 365


Working with various organizations worldwide, I’ve had the great opportunity to help facilitate sustainable Office 365 adoption. On March 22, I’m excited to host a webinar along with Microsoft CIO Jim Dubois, Mott Macdonald’s Simon Denton and Microsoft FastTrack’s Sharon Liu as we unpack what it takes to achieve success with Office 365.

 

Jim will share advice on how to enable digital transformation with Office 365, and Simon will walk through how he helped his colleagues boost productivity by moving to Office 365. Lastly, Sharon will demo the FastTrack resources for starting your adoption and achieving your goals.

What are you waiting for? Register now!

Inside Intel's $15.3 Billion Bet On Mobileye

Constellation Insights

Intel has already been a key player in the IoT (Internet of Things) market but is looking to significantly strengthen its hand by plunking down $15.3 billion for Mobileye, maker of software, specialized chips and cameras for self-driving cars.

The Israeli company has been in business for 17 years, and has 25 partnerships with automakers. It began working with Intel last year and had already announced plans to launch fully autonomous vehicles in conjunction with BMW and Intel by 2021. Intel plans to create a global autonomous vehicle division based in Israel that combines its existing operations with Mobileye. 

With Mobileye, Intel gains software for each of the three main "pillars" of autonomous driving: mapping, environment sensing and driving policy. Mobileye develops a series of proprietary chips called EyeQ, upon which its software is deployed. Intel sees synergies between Mobileye's specialized tech and its own high-end chips, estimating that self-driving cars could generate in the neighborhood of 4,000 GB of data per day—information that needs to be processed in real-time in order to keep the vehicles moving safely down the road.

While Mobileye is focused on autonomous vehicles, the acquisition speaks to Intel's broader ambitions in IoT and the new wave of computing, says Constellation Research VP and principal analyst Andy Mulholland.

"Intel is actively riding the shift from the traditional computer chip market to the new markets, where an ever increasing number of devices require a processor chip," he says. "Intel has worked steadily over recent years to introduce a new generation of chips that combine low power consumption, low cost, and specialized functionality."

This new generation of chips effectively requires Intel to rewrite Moore's law, from a focus on doubling the capacity of a chip every eighteen months towards providing the same capacity but at half the cost every eighteen months, Mulholland adds. "A big part of this challenge is to understand exactly how the processing power will be demanded, and this increases the need for specific market expertise," he says. "Clearly, self-driving cars are likely to be a huge marketplace, and introduce very specific processing requirements, making the acquisition of Mobileye a logical move."

"Compared to the 'tab for the fab,' as the investment in the design and production of a new chipset is known, the Mobileye acquisition price could be seen as a good buy to get a world-leading chipset right at first release," Mulholland notes.

Intel expects the deal to close within about nine months. 



Google Next - Day 1 Summary


Google Cloud Next 17 Recap

Google Cloud is adding must-have enterprise features and scaling the business to meet data platform, machine learning and AI demand. Here’s a progress report.

<iframe src="https://player.vimeo.com/video/208189440?badge=0&autopause=0&player_id=0" width="1280" height="720" frameborder="0" title="Google Cloud Next 17 Recap" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>

Down Report - Human error takes AWS S3 down in US-EAST-1 - and it is felt - 3.8 Cloud Load Toads

The Cloud / IaaS industry has grown rapidly in recent years, and providers have been solidifying their systems along the way. Outages are always unfortunate, but by and large the cloud has shown that it is more resilient than pretty much any on-premises computing setup. Nonetheless, outages happen – so we are adding a new blog post type for these events, the “Down Report”, where we plan to dissect and rate what has gone wrong, with a focus on the lessons learnt for the affected provider, the industry, but most importantly their customers.
 

To make the effort a little more fun, we assign ‘Cloud Load Toads’ to the overall event and each circumstance. We mean no disrespect to the ‘load toads’ that work valiantly in the world’s air forces, but we liked the suggestion of our colleague Alan Lepofsky (@alanlepo), who came up with the term ‘Cloud Load Toad’.
 

On the ‘Cloud Load Toad’ scale, which goes from 1 (bad but OK, can happen) to 5 (very bad, should never ever happen), we rate the severity of the event overall and of the events that led to it.

AWS S3 Down in US-EAST-1

First of all, kudos to AWS, which published the post mortem (see here) about 48 hours after the event – faster than usual, judging from other downtime events in the past. But then, each cloud outage is different; the root cause here – manual error – is easier to establish than, e.g., troubleshooting a battery fire that destroys its very evidence (think Samsung).

But let’s dissect the post mortem report:
We’d like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected.

MyPOV – Certainly production and billing systems need to be connected, and in many scenarios the production system can create issues with the load triggered for the billing system. But a production system should never be able to be stopped by an administrative system, like a billing system. Production should be kept running, billing can be worried about later. It is likely that the S3 billing system (my speculation) is using S3, too – creating a potential recursive dependency. Needless to say – these systems should be isolated. 
Rating: 3 Cloud Load Toads
 
 
At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process.

MyPOV – Related to the above, it is now obvious that the billing system is also using S3. It is good to drink your own champagne, but when it goes bad because of a mistake by the champagne maker, not only the customers but also the champagne maker gets food poisoning – not what you want to have happen. But humans can make mistakes.
 
Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests. The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects. Removing a significant portion of the capacity caused each of these systems to require a full restart. While these subsystems were being restarted, S3 was unable to service requests.

MyPOV – Kudos to AWS for transparency. But any attendee of its re:Invent user conference knows how much the vendor prides itself on not letting humans make mistakes, putting key / vital processes into code instead. Certainly, that approach and philosophy weren’t followed here. It would be good to chat with AWS CTO Werner Vogels about this one… I am sure enough people in Seattle are pondering how to ensure that, in the future, typos and manual human error do not take systems down. Of course, we still need a kill switch for the humans…
Rating: 4 Cloud Load Toads.
 
 
Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.

MyPOV – AWS advises customers to write critical processes to span regions. Its own website – amazon.com – and subsidiary zappos.com did not go down, and were probably coded correctly. The question is (and sorry if I have not read the fine print): could an AWS client still use US-EAST-1 services like EC2, EBS, AWS Lambda etc. if pointed to other S3 stores, or does an S3 failure take the whole region out? This is a deeply critical issue for any IaaS tech stack in an IaaS data center. So, did customers have a chance here? A question to follow up on with AWS. Not Rated.


 
S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected. The index subsystem was the first of the two affected subsystems that needed to be restarted. By 12:26PM PST, the index subsystem had activated enough capacity to begin servicing S3 GET, LIST, and DELETE requests. By 1:18PM PST, the index subsystem was fully recovered and GET, LIST, and DELETE APIs were functioning normally. The S3 PUT API also required the placement subsystem. The placement subsystem began recovery when the index subsystem was functional and finished recovery at 1:54PM PST. At this point, S3 was operating normally. Other AWS services that were impacted by this event began recovering. Some of these services had accumulated a backlog of work during the S3 disruption and required additional time to fully recover.

MyPOV – AWS describes well that things break all the time and can even go down. But IaaS providers need to be certain they can come back up, and part of coming back up is understanding how long it will take. S3 has been very popular, so it is harder to take it down and test (or simulate) the time it needs to come back – but certainly something AWS could and should have done and known. When you run IT and don’t know, more or less for sure, when a system that is down will come back up, the IT professionals are in a bad spot.
 
 
Rating: 4 Cloud Load Toads
 
 
We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future.

MyPOV – This section reads as if there was a software tool – but it malfunctioned. That, of course, is not good. Granted, it is hard to simulate and test with systems of this scale – but not a good enough answer.
Rating: 3 Cloud Load Toads
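The safeguard AWS describes – refusing a removal that would take a subsystem below its minimum required capacity – boils down to a simple guard. A minimal sketch of the idea (hypothetical names, not AWS's actual tooling):

```python
class CapacityError(Exception):
    """Raised when a removal would violate the minimum-capacity floor."""


def remove_capacity(active, to_remove, minimum):
    """Remove servers from the active pool, but refuse any request that
    would leave fewer than `minimum` servers running - the safeguard AWS
    says it added after the incident. A throttle (removing capacity more
    slowly) would be layered on top of a check like this."""
    remaining = [s for s in active if s not in to_remove]
    if len(remaining) < minimum:
        raise CapacityError(
            f"refusing removal: {len(remaining)} servers would remain, "
            f"minimum is {minimum}")
    return remaining
```

With a guard like this in the tool, the mistyped input from the playbook would have been rejected instead of cascading into a full restart.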
 
We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. We employ multiple techniques to allow our services to recover from any failure quickly. One of the most important involves breaking services into small partitions which we call cells. By factoring services into cells, engineering teams can assess and thoroughly test recovery processes of even the largest service or subsystem. As S3 has scaled, the team has done considerable work to refactor parts of the service into smaller cells to reduce blast radius and improve recovery. During this event, the recovery time of the index subsystem still took longer than we expected. The S3 team had planned further partitioning of the index subsystem later this year. We are reprioritizing that work to begin immediately.

MyPOV – Kudos to AWS for transparency, explaining that it has a solution and committing to get better going forward. School book response that all vendors with an outage should share – not all have.
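The "cell" idea AWS describes is essentially deterministic partitioning of the keyspace, so a restart of one cell touches only a fraction of objects. A toy sketch of the concept (not S3's actual scheme):

```python
import hashlib


def cell_for(key: str, num_cells: int) -> int:
    """Map an object key to one of `num_cells` independent partitions
    ("cells"). A failure or restart of one cell then affects only
    roughly 1/num_cells of the keyspace - the "blast radius"."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_cells
```

More cells mean a smaller blast radius and faster per-cell recovery, which is exactly the further partitioning work AWS says it is now reprioritizing.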


 
From the beginning of this event until 11:37AM PST, we were unable to update the individual services’ status on the AWS Service Health Dashboard (SHD) because of a dependency the SHD administration console has on Amazon S3. Instead, we used the AWS Twitter feed (@AWSCloud) and SHD banner text to communicate status until we were able to update the individual services’ status on the SHD. We understand that the SHD provides important visibility to our customers during operational events and we have changed the SHD administration console to run across multiple AWS regions.

MyPOV – This is probably the worst finding: a too-optimistic implementation of the key dashboard showing AWS’s overall status. It should never have a single point of failure, yet we see this happening over and over in outages. Vendors need to learn not to rely on their own services to communicate with clients in an outage situation – as those services may not be available to respond, a cardinal mistake (see e.g. another outage issue here)... and yet vendors keep doing so.
Rating: 5 Cloud Load Toads

 
 
Finally, we want to apologize for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.

MyPOV – Kudos for acknowledging and owning the issue. No blame game and scapegoating (which are often seen here, too – the most common scapegoat being the network / network provider).
 

A pretty severe event

When doing the tally across the cloud load toads – assuming I did the math right – I count 19 total toads across 5 rated events, bringing the overall event to 3.8 cloud load toads. I am sure AWS will be the first to agree that this wasn't an insignificant event. But let's look at the lessons learnt – not least that customers could have coded their loads to avoid the downtime.
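For the record, the toad math checks out (the "Not Rated" finding is excluded):

```python
# The five rated findings above, in order of appearance.
ratings = [3, 4, 4, 3, 5]
total = sum(ratings)            # 19 toads
average = total / len(ratings)  # 19 / 5 = 3.8
print(total, average)           # 19 3.8
```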
 
 

Lessons for IaaS Customers

Here are the key aspects for customers to learn from the AWS S3 outage:

Have you built for resilience? Sure, it costs, but all major IaaS providers offer strategies for avoiding single location / data center failures. Way too many prominent internet properties chose not to do so – and if ‘born on the web’ properties miss this, it is key to check that regular enterprises do not. Uptime has a price; make it a rational decision. Now is a good time to get budget / investment approved, where warranted and needed.
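To make "building for resilience" concrete, here is a minimal sketch of a primary/fallback pattern across regions. It is hypothetical: `read_object` is a stand-in for a real cross-region read, and a real setup also needs the data replicated to the fallback region.

```python
def with_fallback(primary, fallback, retries=2):
    """Try the primary region's operation; on repeated failure, fail
    over to the fallback region. `primary` and `fallback` are callables
    (e.g. wrapping object reads against stores in different regions)."""
    last_error = None
    for _ in range(retries):
        try:
            return primary()
        except Exception as e:  # in practice: catch specific, retryable errors
            last_error = e
    try:
        return fallback()
    except Exception:
        # Fallback also failed - surface the original regional error.
        raise last_error

# Usage (hypothetical helpers):
# data = with_fallback(
#     lambda: read_object("bucket-us-east-1", key),
#     lambda: read_object("bucket-us-west-2", key))
```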

Ask your IaaS vendor a few questions: Enterprises should not be shy about asking IaaS providers whether they have done a few things:
  • Do you run your systems by hand or with software?
     
  • Could the same issue that happened with AWS S3 in US-EAST-1 happen to you?
     
  • How do you test your operational software?
     
  • When did you last take your most popular services down?
     
  • What is the expected up time of your most popular services?
     
  • When did you last run that test of expected uptime, and how has system usage increased since then?
     
  • How can we code for resilience – and what does it cost?
     
  • What kind of remuneration / payment / cost relief can be expected with a downtime?
     
  • What single point of failure should we be aware of?
     
  • How are your operation consoles built?
     
  • How do you communicate in a downtime situation with customers?
     
  • How often and when do you refresh your older datacenters, servers?
     
  • How often have you reviewed and improved your operational procedures in the last 12 months? Give us a few examples of how you have increased resilience.


And some key internal questions that customers of IaaS vendors have to ask themselves:
  • What are your customer / employee communication tools?
     
  • When your IaaS vendor goes down, so may your customer and employee facing apps. How do you communicate then?
     
  • Make sure to learn from AWS’s mistake – do not rely on the same point of failure / architecture as the production systems, as it will not be available. Simple, but always good to check, and better even to monitor.
 

MyPOV

Outages are always unfortunate. The key thing is to learn from them, and knowing AWS, it will be ruthless in addressing the issues (and hopefully update customers and analysts on progress). Kudos for a fast post mortem, for taking responsibility and for sharing first strategies to avoid another occurrence.

On the concern side, AWS needs to ask itself how it recycles and reviews architecture and servers. US-EAST is a behemoth that is nonetheless popular, but it may need more rejuvenation than AWS has planned. In the cloud location monopoly race, it is possible that vendors stretch aging infrastructure beyond the breaking point. Of course, afterwards it is easy to armchair-quarterback everything, but this remains an area to watch.

Overall hopefully plenty of lessons learnt all around, for AWS, other IaaS providers and customers.

Google Cloud Invests In Data Services and ML/AI, Scales Business

Google Cloud is adding must-have enterprise features and scaling the business to meet data platform, machine learning and AI demand. Here’s a progress report.

Google’s reputation for big data, machine learning and artificial intelligence innovation is richly deserved. The challenge for Google Cloud – which exposes that innovation to enterprises through the Google Cloud Platform (GCP) – isn’t scaling the technology so much as scaling the business.

Last week’s Google Cloud Next '17 event in San Francisco was a case in point. The event was sold out, with more than 10,000 attendees packing the Moscone Center’s West hall. Many conference sessions were overbooked and had long standby lines. You got the feeling Google could have easily filled another massive conference hall.

Google Cloud Next 17 Recap from Constellation Research on Vimeo.

So what is Google Cloud doing to meet fast-growing demand? For starters, parent company Alphabet Inc. is investing big money in the business — more than $30 billion, according to Chairman Eric Schmidt, a keynoter at the event. But Google Cloud is not only hiring on all fronts and expanding the ecosystem, it’s also doubling down on tech investments in data centers and network improvements; new big data, machine learning (ML) and artificial intelligence (AI) services; and enterprise-oriented migration, administrative, governance and security features. Here’s a closer look at the business- and data-tech-oriented announcements.

Business-Side Investments

You may get started on a public cloud through clicks and credit card payments, but it takes lots of people-centric interaction and hand holding to move an entire business into the cloud at scale. That’s why Google Cloud saw the largest headcount increase of any Alphabet business in 2016, and it expects to add at least 1,000 more salespeople in the first six months of this year. What’s more, Google Cloud’s professional services team has seen 5X growth over the last year, and partnerships with systems integrators and resellers are quickly multiplying.

At Google Cloud Next, Diane Greene, senior VP of Google Cloud, announced a significant new partnership with SAP, adding to a growing list of enterprise software partners. SAP executive board member Bernd Leukert joined Greene on stage to highlight certification of SAP Hana as well as plans for joint work on enterprise-grade data governance and compliance in the cloud. The more such partnerships Google can forge with blue-chip enterprise software vendors, the better.


Of course, the big draw for many companies to Google is its expertise in ML and AI. This point was validated onstage at Google Next by C-level customer guests from Colgate-Palmolive, Disney, Home Depot, HSBC and Verizon. And bolstering the company’s case, Google Cloud Chief Scientist Fei-Fei Li announced the acquisition of Kaggle, a company that has been a magnet to more than 850,000 data scientists through its famed Kaggle competitions.

Bringing Kaggle inside the Google Cloud ecosystem “combines the world’s largest data science community with the world’s most powerful machine learning cloud,” wrote Kaggle CEO Anthony Goldbloom. It also gives Kaggle and all those data scientists access to (and, it’s hoped, familiarity and comfort using) Google Cloud infrastructure, scalable training and deployment services and the ability to store and query large data sets.

MyPOV on business-side investments: Google Cloud clearly has big momentum. But whatever it’s investing in people and customer support, the company could probably double the effort and still not match the scale of its chief rival, Amazon Web Services (AWS). Amazon does not break out AWS headcount from its far more labor-intensive retail business, so it’s not an apples-to-apples comparison, but Amazon is many times larger than Google, with nearly 350,000 employees and plans to hire more than 100,000 more full-time employees over the next 18 months.

Suffice it to say that Google Cloud needs to stay aggressive on workforce expansion. My sense is that it’s growing as fast as it can without creating the sort of internal chaos that could negatively impact customer experience. Partnerships with systems integrators and high-scale vendors like SAP and acquisitions such as the Kaggle deal are smart ways to grow the ecosystem without putting all the pressure on internal development.

Data-Platform Investments  

Google is investing on many technology fronts, but my focus is on data platforms, ML and AI, so I’m not going to get into the details of the three new data center regions, app developer news or the G Suite announcements (see Alan Lepofsky’s blog). There were also many infrastructure and security announcements, but the one most relevant to my data-to-decisions research is the new Data Loss Prevention (DLP) API. Now in beta, the DLP API promises to automatically discover, classify and redact sensitive information such as Social Security numbers, credit card numbers, phone numbers and more.
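To illustrate the kind of redaction the DLP API automates: the naive regex sketch below is emphatically not the DLP API itself, which uses far more robust detection across many more information types, but it shows the discover-classify-redact pattern.

```python
import re

# Naive patterns for two kinds of sensitive data; the pattern names are
# illustrative labels, not the service's actual identifiers.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace each match with a [TYPE] placeholder, leaving the rest
    of the text untouched."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name}]", text)
    return text
```

The managed service does this classification without hand-written patterns, which is the point of exposing it as an API.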

Together with security key and encryption developments, Google is addressing cloud security concerns that persist no matter how many times actual breaches show conventional corporate data centers to be far more vulnerable than clouds.

Google Cloud’s Data Loss Prevention API, now in beta, is designed to automatically discover, classify and redact sensitive data.

As for those data platform, ML and AI announcements, here’s a rundown of the highlights:

Cloud ML Engine goes GA. This managed machine learning service based on Google’s open-sourced TensorFlow ML framework is now generally available. The service features automatic hyperparameter tuning and tools for job management and resource utilization. In beta are GPU-based training and online prediction, which promise state-of-the-art performance at scale.
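Hyperparameter tuning of the sort Cloud ML Engine automates can be sketched in miniature as a search over a parameter space. The sketch below uses simple random search against a synthetic scoring function; the managed service runs real training jobs and uses more sophisticated optimization, so treat every name and number here as an illustrative assumption.

```python
import random

def train_and_score(learning_rate, batch_size):
    """Stand-in for a training job: a synthetic score with a known optimum."""
    return -(learning_rate - 0.1) ** 2 - ((batch_size - 64) / 1000) ** 2

def random_search(trials=50, seed=7):
    """Sample the parameter space and keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "batch_size": rng.choice([16, 32, 64, 128, 256]),
        }
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search()
```

The value of the managed service is that each "trial" is a full distributed training run, scheduled, scored and compared for you.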

Cloud Video Intelligence API. Now in private beta, this service is designed to search and automatically tag large libraries of videos. The service spots entities within videos and notes the timecode of their appearance, making sense of videos and large collections of videos in automated fashion. Developers need no knowledge of machine learning or deep learning, says Google. They simply invoke the API.
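The output shape the Video Intelligence API promises, entities plus the timecodes at which they appear, can be illustrated with a small sketch that collapses a per-second label stream (hand-written stand-ins here, not real API output) into annotated segments:

```python
from itertools import groupby

def to_segments(labels_per_second):
    """Collapse a per-second label stream into (entity, start_sec, end_sec) segments."""
    segments = []
    index = 0
    for entity, run in groupby(labels_per_second):
        length = len(list(run))
        segments.append((entity, index, index + length - 1))
        index += length
    return segments

# Made-up per-second detections for a short clip.
timeline = ["dog", "dog", "dog", "beach", "beach", "dog", "dog"]
segments = to_segments(timeline)
# segments: [('dog', 0, 2), ('beach', 3, 4), ('dog', 5, 6)]
```

Segment-level annotations like these are what make a large video library searchable: a query for "beach" can jump straight to second 3 of this clip.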

Cloud Dataprep. This service, now in beta, promises to bring self-service data preparation to structured and unstructured information. The no-code, point-and-click interface provides a guided approach to joining data and automated data-transformation suggestions.

Google BigQuery Data Transfer Service. Also in beta, this managed data-import service will launch with support for moving data from DoubleClick, Google AdWords and YouTube, but Google Cloud execs say they’re keen on providing managed, automated migration options from other high-volume data sources. Stay tuned.

Cloud SQL for PostgreSQL. This managed service, now in beta, supports the popular, enterprise-standard, open-source relational database, opening the door for a world of compatible enterprise applications.   

MyPOV on the tech announcements: Google Next announcements were heavy on betas and light on general releases, although less than a month ago the company announced the GA of its Cloud Spanner service. Spanner is Google’s globally distributed relational database service. It’s unique (and distinguished from global NoSQL database services) in offering consistency for demanding financial services, advertising, retail, supply chain and other applications that require synchronous replication.

Aside from its big data, ML and AI depth, another differentiator for Google Cloud is the open-source nature of Google Cloud Platform services. Cloud Dataproc, for example, is based on Apache Spark and Apache Hadoop, and the batch-and-stream-data processing capabilities of Cloud Dataflow were open sourced as Apache Beam. This means you could run all these technologies on premises, in other clouds or in hybrid scenarios, diminishing vendor lock-in. It’s a contrast with AWS services that are unique to that vendor, although I don’t recall any customers discussing actual hybrid deployments or moving between cloud and on-premises deployments.

The more practical and real differentiator with Google Cloud Platform – and the one that multiple customers at Google Cloud Next actually talked about – is the managed nature of its services. With BigQuery, for example, you load data and issue queries, and Google automatically makes all the necessary moves to ensure adequate compute and storage capacity. With unmanaged cloud services you have to think about, plan and take care of all the provisioning. And if you get it wrong, you either overpay for capacity or suffer from poor performance. This has been Google Cloud’s ace in the hole, but it’s a card the vendor is now playing up with every customer and at every opportunity.
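The overpay-or-underperform tradeoff is easy to put in numbers. Here is a toy cost model with entirely made-up prices and demand, sketching why a spiky analytics workload favors pay-per-use over provisioning for peak around the clock:

```python
def fixed_provisioning_cost(peak_units, hours, price_per_unit_hour):
    """Provision for peak demand 24/7, whether the capacity is used or not."""
    return peak_units * hours * price_per_unit_hour

def managed_cost(demand_per_hour, price_per_unit_hour):
    """Pay only for the capacity actually consumed each hour."""
    return sum(demand_per_hour) * price_per_unit_hour

# Made-up demand: an analytics workload busy 2 hours out of every 24.
demand = ([10, 10] + [1] * 22) * 30  # 30 days of hourly demand units
price = 0.50  # hypothetical dollars per unit-hour

fixed = fixed_provisioning_cost(peak_units=10, hours=len(demand), price_per_unit_hour=price)
managed = managed_cost(demand, price)
# fixed: 3600.0 vs. managed: 630.0 for this synthetic workload
```

Real BigQuery pricing works differently (per-query bytes scanned plus storage), but the shape of the argument is the same: the spikier the demand, the more fixed provisioning overpays.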

Related Reading:
Cloud BI and Analytics Options Aren’t Just for Cloud Data
AWS Analytic and AI Services Are No Surprise, But They Will Succeed
Strata + Hadoop World Highlights Long-Term Bets on Cloud



Google Next: Analysis of G Suite News

From March 8-10, Google held its Google Next conference, where it shared news about Google Cloud, G Suite and more. My primary focus for the conference was G Suite (formerly known as Google Apps), Google's personal productivity and team collaboration platform. My colleagues Doug Henschen and Holger Mueller focused on the Google Cloud Platform (GCP) side, and you can see their reviews here and here.

My Quick Take: After several years of working on architecture improvements for Drive and Hangouts, G Suite is doing several things right for its enterprise customers. However, given Google's past reputation for innovation, I'm disappointed that most of the announcements were catch-up with features/applications that other vendors already offer in this space.

Below is a video in which I provide my full review and analysis of the G Suite news. I'm using the Acrossio video player, which allows you to jump back and forth to annotated moments of the video, as well as add your own annotations and comments. So if you're just interested in a specific section, find it in the conversation stream on the right side of the player, click and the video will start playing from there.

 

There were several announcements which you can read about here, including:

  • Team Drives - Shared team folders
  • Drive File Stream - files hosted in the cloud appear in Windows File Manager or Mac Finder as if they were local on your hard drive. (Early Adopter Program for G Suite Enterprise, Business and Education customers)
  • Vault for Drive - Audit, compliance, governance.  (DLP was announced earlier in Jan)
  • Acquisition of AppBridge - for migrating content to G Suite
  • Hangouts Meet - video / web conferencing
  • Hangouts Chat - 1:1 and persistent group chat rooms. Available to G Suite Early Adopter customers.
  • General Availability of Jamboard - white-boarding device and applications
  • Gmail Add-ons - A new integration platform for Gmail that works across web and mobile devices (in Developer Preview)

Is your organization using G Suite? If so, what do you think of these announcements? If not, what are you using and how do you think it compares to G Suite?

 
