Cloudera Transitions, Doubles Down on Data Science, Analytics and Cloud

Cloudera has restructured amid intensifying cloud competition. Here’s what customers can expect.

Cloudera’s plan is to lead in machine learning, to disrupt in analytics and to capitalize on customer plans to move into the cloud.

It's a solid plan, for reasons I'll explain, but that didn't prevent investors from punishing the company on April 3 when it offered weaker-than-expected guidance for its next quarter. Despite reporting 50-percent growth for the fiscal year ended January 31, 2018, Cloudera's stock price subsequently plunged 40 percent.

Cloudera's narrative, shared at its April 9-10 analyst and influencer conference, is that it has restructured to elevate customer conversations from tech talk with the CIO to a C-suite and line-of-business sell about digital transformation. That shift could bring slower growth (albeit still double-digit) in the short term, but executives say it's a critical transition for the long term. Investors seem spooked by the prospect of intensifying cloud competition, but here's why Cloudera expects to keep and win enterprise-grade customers.

It Starts With the Platform

Cloudera defines itself as an enterprise platform company, and it knows enterprise customers want hybrid and multi-cloud options. Cloudera’s options now range from on-premises on bare metal to private cloud to public cloud on infrastructure as a service to, most recently, Cloudera Altus public cloud services, available on Amazon Web Services (AWS) and Microsoft Azure.

Supporting all these deployment modes is, of course, something that AWS and Google Cloud Platform (GCP) don’t do and that Microsoft, IBM, and Oracle do exclusively in their own clouds. The key differentiator that Cloudera is counting on is its Shared Data Experience. SDX gives customers the ability to define and share data access and security, data governance, data lifecycle management and deployment management and performance controls across any and all deployment modes. It’s the key to efficiently supporting both hybrid and multi-cloud deployments. Underpinning SDX is a shared data/metadata catalog that spans deployment modes and both cloud- and on-premises storage options, whether they are Cloudera HDFS or Kudu clusters or AWS S3 or Azure Data Lake object stores.

As compelling as public cloud services such as AWS Elastic MapReduce may sound from the standpoint of simplicity, elasticity and cost, Cloudera says enterprise customers are sophisticated enough to know that harnessing their data is never as simple as using a single cloud service. In fact, the variety of services, storage and compute variations that have to be spun up, connected and orchestrated can get quite extensive. And when all those per-hour meters are running, the collection of services can also get surprisingly expensive. When workloads are sizeable, steady and predictable, many enterprises have learned that it can be much more cost-effective to handle them on-premises. If they like cloud flexibility, perhaps they'll opt for a virtualized private-cloud approach rather than going back to bare metal.

With more sophisticated and cost-savvy customers in mind, Cloudera trusts that SDX will appeal on at least four counts:

  • Define once, deploy many: IT can define data access and security, data governance, data lifecycle, and performance management and service-level regimes and policies once and apply them across deployment models. All workloads share the same data under management, without having to move data or create copies and silos for separate use cases.
  • Abstract and simplify: Users get self-service access to resources without having to know anything about the underlying complexities of data access, deployment, lifecycle management and so on. Policies and controls enforce who sees what, which workloads run where and how resources are managed and assigned to balance freedom and service-level guarantees.
  • Provide elasticity with choice: With its range of deployment options, SDX gives enterprises more choice and flexibility than a cloud-only provider in terms of how they meet security, performance, governance, scalability and cost requirements.
  • Avoid lock-in: Even if the direction is solidly public cloud, SDX gives enterprises options to move workloads between public clouds and to negotiate better deals knowing they won’t have to rebuild their applications if and when they switch providers.
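SDX itself is a product, not a public spec, but the "define once, deploy many" idea above can be illustrated with a small, purely hypothetical sketch: a single policy definition applied uniformly across several deployment targets. All names and structures here are invented for illustration and are not the actual SDX API:

```python
# Hypothetical illustration of "define once, deploy many": one policy
# definition is applied to every deployment target. Nothing here is the
# actual SDX API; all names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Policy:
    """Access/governance rules, defined once by IT."""
    table: str
    allowed_roles: set

@dataclass
class Deployment:
    """A deployment target: bare metal, private cloud, or public cloud."""
    name: str
    policies: dict = field(default_factory=dict)

    def apply(self, policy: Policy) -> None:
        # Register the shared policy on this target.
        self.policies[policy.table] = policy

    def can_read(self, role: str, table: str) -> bool:
        policy = self.policies.get(table)
        return policy is not None and role in policy.allowed_roles

# Define the policy once ...
pii_policy = Policy(table="customers", allowed_roles={"analyst", "dba"})

# ... and apply it across all deployment modes.
targets = [Deployment("on-prem"), Deployment("aws"), Deployment("azure")]
for t in targets:
    t.apply(pii_policy)

# Every target now enforces the same rules, without copying data or
# maintaining per-environment policy silos.
print(all(t.can_read("analyst", "customers") for t in targets))   # True
print(any(t.can_read("marketing", "customers") for t in targets)) # False
```

The point of the sketch is the shape of the design, not the implementation: change the policy in one place and every deployment mode picks up the change, which is what makes hybrid and multi-cloud operations tractable.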

MyPOV on SDX

The Shared Data Experience is compelling, though at present it’s three parts reality and one part vision. The shared catalog is Hive and Hadoop centric, so Cloudera is exploring ways to extend the scope of the catalog and the data hub. Altus services are generally available for data engineering, but only recently entered beta (on AWS) for analytics deployments and persisting and managing SDX in the cloud. General availability of Cloudera Analytics and SDX services on Azure is expected later this year. Altus Data Science is on the roadmap, as are productized ways to deploy Altus services in private clouds. For now, private cloud deployments are entirely on customers to manage. In short, the all-options-covered rhetoric is a bit ahead of reality, but the direction is clear.

Machine Learning, Analytics and Cloud

Cloudera is counting on these three growth areas, so much so that it last year appointed general managers of each domain and reorganized with dedicated product development, product management, sales and profit-and-loss responsibility. At Cloudera's analyst and influencers conference, attendees heard presentations by each of the new GMs: Fast Forward Labs founder Hilary Mason on ML, Xplain.io co-founder Anupam Singh on analytics, and Oracle and VMware veteran Vikram Makhija on Cloud.

Lead in Machine Learning. The machine learning strategy is to help customers develop and own their ability to harness ML, deep learning and advanced analytical methods. They are “teaching customers how to fish” using all of their data, algorithms of their choice and running workloads in the deployment mode of their choice. (This is exactly the kind of support executives wanted at a global bank based in Denmark, as you can read in my recent “Danske Bank Fights Fraud with Machine Learning and AI” case study report.)

Cloudera last year acquired Mason’s research and consulting firm Fast Forward Labs with an eye toward helping customers to overcome uncertainty on where and how to apply ML methods. The Fast Forward team offers applied research (meaning practical, rather than academic), strategic advice and feasibility studies designed to help enterprises figure out whether they’re pursuing the right problems, setting realistic goals, and gathering the right data.

On the technology side, Cloudera’s ML strategy rests on the combination of SDX and the Cloudera Data Science Workbench (CDSW). SDX addresses the IT concerns from a deployment, security and governance perspective while CDSW helps data scientists access data and manage workloads in self-service fashion, coding in R, Python or Scala and using analytical, ML and DL libraries of their choice.

MyPOV on Cloudera ML. Here, too, it's a solid vision with pieces and parts that have yet to be delivered. As mentioned earlier, Altus Data Science is on the roadmap (not even in beta), as are private-cloud and Kubernetes support. Also on the roadmap are model-management and automation capabilities that enterprises need at every stage of the model development and deployment lifecycle as they scale up their modeling work. Here's where Azure Machine Learning and AWS SageMaker, to name two, are steps ahead.

I do like that Cloudera opens the door to any framework and draws the line at data-scientist coding with CDSW, leaving visual, analyst-level data science work to best-of-breed partners such as Dataiku, DataRobot, H2O and RapidMiner.

Disrupt in Analytics. It was eye-opening to learn that Cloudera gets the lion's share of its revenue from analytics -- more than $100 million out of the company's fiscal year 2018 total of $367 million in revenue. One might think of Cloudera as being mostly about big, unstructured data. In fact, it's heavily about disrupting the data warehousing status quo and enabling new, SQL-centric applications with the combination of the Impala query engine, the Kudu table store (for streaming and low-latency applications), and Hive on Apache Spark.

Cloudera analytics execs say they're having a field day optimizing data warehouses and consolidating dedicated data marts (on Netezza and other aging platforms) now seen as expensive silos requiring redundant infrastructure and copies of data. With management, security, governance and access controls and policies established once in SDX, Cloudera says IT can support myriad analytical applications without moving or copying data. That data might span AWS S3 buckets, Azure Data Lakes, HDFS, Kudu or all of the above.

The new news in analytics is that Cloudera is pushing to give DBA types all the performance-tuning and cost-based analysis options they're used to having in data warehousing environments. Cloudera already offered its Analytic Workbench (also known as HUE) for SQL query editing. What's coming, by midyear, is a consolidated performance analysis and recommendation environment. Code-named Workload 360 for now, this suite will provide end-to-end guidance on migrating, optimizing and scaling workloads. To be delivered as a cloud service, it combines Navigator Optimizer (tools acquired with Xplain.io) with the workload analytics capabilities introduced with Altus. Think of it as a brain for data warehousing that will help companies streamline migrations, meet SLAs, fix lagging queries and proactively avoid application failures.
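The details of Workload 360 weren't public at the time of writing, but the core idea of workload analytics, mining query logs for queries that threaten an SLA, can be sketched in a few lines of illustrative Python. The log format and threshold below are invented for illustration; this is not Workload 360 code:

```python
# Illustrative sketch of one workload-analytics task: scan query-log
# records and flag queries whose runtime threatens an SLA. The log
# format and SLA threshold are invented, not Cloudera's.

from collections import defaultdict
from statistics import mean

# (query_id, runtime_seconds) records, as might be parsed from engine logs.
query_log = [
    ("q1", 2.0), ("q1", 2.2), ("q1", 9.5),   # q1 is degrading but within SLA
    ("q2", 0.4), ("q2", 0.5),
    ("q3", 30.0), ("q3", 31.0),              # q3 is consistently slow
]

SLA_SECONDS = 10.0

def flag_lagging_queries(log, sla):
    """Return query ids whose worst observed runtime exceeds the SLA,
    mapped to their mean runtime for triage."""
    runtimes = defaultdict(list)
    for qid, seconds in log:
        runtimes[qid].append(seconds)
    return {
        qid: round(mean(times), 2)
        for qid, times in runtimes.items()
        if max(times) > sla
    }

print(flag_lagging_queries(query_log, SLA_SECONDS))  # {'q3': 30.5}
```

A real product would add recommendations (partitioning, statistics, engine choice) on top of this kind of detection, but the detection step itself is just aggregation over logs.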

MyPOV on Analytics. Workload management tools are a must for heavy duty data warehousing environments, so this analysis-for-performance push is a good thing. Given the recent push into autonomous database management, notably by Oracle, I would have liked to have heard more about plans for workload automation.

Cloudera also didn't have much to say about the role of Hive and Spark for analytical and streaming workloads, but I suspect they are significant. I've also talked to Cloudera customers (read "Ultra Mobile Takes an Affordable Approach to Agile Analytics") that tap excess relational database capacity to support low-latency querying rather than relying on Impala, Hive or a separate Kudu cluster. Hive, Spark and conventional database services fall into the category of practical, cost-conscious options that may not drive additional Cloudera analytics revenue, but Cloudera's is an open platform that gives customers plenty of options.

Capitalize on the Cloud. As noted above, SDX and the growing Altus portfolio are at the heart of Cloudera's cloud plans. Enough said about the pieces still to come or missing. I see SDX as compelling, and it's already helping customers efficiently run myriad data engineering and analytic workloads in hybrid scenarios. But as a practical matter, many companies aren't that sophisticated and are choosing to keep things simple with binary choices: X data and use case on-premises, Y data and use case in the cloud. Indeed, one of Cloudera's customer panel guests acknowledged the importance of avoiding cloud lock-in; nonetheless, he said his firm is weighing the "simplicity" of Google Cloud Platform-native services against the data/application portability tradeoffs.

MyPOV on Cloudera Cloud. Binary thinking is not the way to harness the power of using all your data, and it can lead to overlaps, redundancies and the need to move and copy data. Nonetheless, handling X on-premises and Y in the cloud may be seen as the simpler and more obvious way to go, particularly if there are natural application, security or organizational boundaries. Cloudera has to execute on its cloud vision, develop a robust automation strategy and demonstrate to enterprises, with plenty of customer examples, that the SDX way is a simpler, more cost-effective path and a better driver of innovation than binary thinking.

Related Reading:
Nvidia Accelerates AI, Analytics with an Ecosystem Approach
Danske Bank Fights Fraud With Machine Learning and AI
Ultra Mobile Takes an Affordable Approach to Agile Analytics

Adobe Acquires Sayspring to Bring Voice Interaction to their AI Platform

Adobe has announced the acquisition of Sayspring, makers of a natural language platform for interacting with devices like Amazon Echo and Google Home/Assistant. 

MyPOV: People are becoming accustomed to using their voice to interact with devices like their phones, tablets and ambient speakers (Echo, Home, etc.); soon we will see a similar level of comfort interacting with our business application software. Using voice commands is a very quick and natural way to find and create content, automate tasks, or look up people and information. It will be interesting to see how Adobe enhances its Sensei platform, which provides AI features to its Document, Creative, and Customer Experience Cloud platforms, using Sayspring's existing assets, and even more so how it leverages Sayspring's talented team to build new voice interfaces directly into Adobe software.


Event Report - Globoforce Workhuman 2018

Want to read on? Here you go:
 
Outstanding Speaker Lineup - WorkHuman stands out on the conference circuit with an exceptional speaker lineup. The pre-conference alone could easily have served as the full external speaker lineup for any other well-funded vendor conference. And in the main conference, name another event that puts Salma Hayek, Amal Clooney and Ashley Judd ('relegated' to panelist on a MeToo panel) on stage within 24 hours. It is by design: Globoforce wants the WorkHuman conference to be not about product, but about thought leadership, inspiration and purpose.
 
[Image: The Globoforce WorkHuman Cloud]
 
 
Globoforce launches WorkHuman Cloud - Amid the great speaker lineup, it was a challenge to draw attention to what really mattered to the user community: the launch of a new Globoforce product, not surprisingly called the WorkHuman Cloud. It's a suite of five reward/recognition (or, if you prefer, performance management) capabilities built on a single platform. All the usual suite benefits apply: single sign-on, UI consistency (some work left), common foundation, etc. It's an almost overdue move by Globoforce, which has been a multi-module/product vendor for quite some time. Adoption is not much of a concern; in true SaaS vendor style, existing customers are already 'on' the platform with their existing products.
 
[Image: Globoforce WorkHuman Employee Dashboard]
 
 
Impressive Customer Stories - WorkHuman stands out from regular conferences in that it gives more stage time to customers than the average event. Nothing is more powerful than having customers share how they implemented a product and how it helped them create benefits and favorable outcomes. Many impressive, educational and sometimes even inspirational success stories were shared at WorkHuman. No surprise, as we know that a well-implemented rewards and recognition system can have a substantial positive impact on the performance of an enterprise.
 
[Image: Globoforce WorkHuman My Life Events]
 
 

MyPOV

 
WorkHuman stands out as a remarkably different conference. Customers and prospects of Globoforce clearly enjoy the format and vote with increasing attendance. The success shows a lack of vendor-independent, big-picture (work human!) events that serve the HR community. Clearly something that user group conferences should do, but clearly are not doing. Good to see the product progress by Globoforce, which has changed and improved the user experience and, most importantly, created a suite of products, the next milestone of maturation for any software vendor.
 
On the concern side, Globoforce could be a little concerned about how to top this conference in 2019. I would be. And it was remarkable, and deeply surprising, how hard it was for the audience to pay attention to the product updates during the keynote, which were ... substantial. Too much motivation and inspiration makes the product message dull, and at the end of the day, users attend user conferences to learn ... about the product. Inspirational speakers are great, but they are not the argument that convinces the rest of the enterprise to implement, upgrade or invest further in a software product.
 
But for now, Globoforce has set up one of the best HR "un-conferences" on the circuit, probably the best for a vendor. As with anything, success comes with repercussions... and I can't wait to see how WorkHuman 2019 will shape up. Stay tuned.
 
 

Musings - Why splitting Windows is Nadella's first major mistake

On March 29th Microsoft shared that the head of its Windows team, Terry Myerson, was leaving and that, as a consequence, the Windows team was going to be split into two large development teams under Rajesh Jha and Scott Guthrie (see Nadella's memo here; kudos for transparency).

Here is what we don't know: Did Myerson quit, or was he compelled to leave because progress on Windows did not meet the board's / shareholders' expectations? Or did he leave knowing his team would be split, with no interest in other roles, hanging in there, etc.? Those are missing pieces that may surface – or not – and change the analysis here.

So why could this be a major mistake for Microsoft and its users? Here are my musings:

Windows was finally 'fixed'. You can't blame Microsoft for not investing in Windows. And with Windows 10, Microsoft had finally fixed practically all the sins of the past and pulverized the skeletons in the closet from the fast-paced '90s that still had traces in the Windows source code until Windows 10. And Windows 10 has been steadily growing, even though it may have hit a slower pace or a temporary backlash (see ComputerWorld here). Yes – Microsoft was no longer on track to get to 1B Windows 10 devices in 2018, but since when does that faze people… all market adoption projections need to be taken with a grain of salt. And with Windows 7's end-of-life date only a few years away, and PC stores carrying only and always Windows 10 devices, that gap would have been addressed sooner rather than later.

Major Platform – with no leader? According to StatCounter (see here), Windows is in a neck-and-neck race with Android for overall platform leadership. And that's not even a fair comparison: different platforms, monetization, sales channels, purchase prices and so on… The real comparable competition is Apple's OS X, and that's hovering well under 10%... so despite all the 'Hello, I'm a Mac' advertisements of years past, Apple's OS X hasn't gained much ground on Windows. Would Apple split OS X? I don't think so. Would anyone split up responsibility for a platform with well over 1B installs? Everybody wants one; they look for leaders and teams to get them there… And a platform with no leader typically ceases to be a platform only a few quarters in. Let's watch the next major Microsoft conference, which is Build in May. I expect some chaperoning by Nadella, and then Jha and Guthrie merging the messages with their existing and new assets. And then watch for the cracks to appear… first small, then bigger, then visible, then obvious…

Platform Morphing beats Platform Abandonment. You don't split a platform, even when it is old. You renovate it (see e.g. IBM with z/OS), you re-platform it (see Microsoft with Windows 10), you innovate (see Apple's OS X) or you morph it to where it needs to be and evolve towards. Nadella is right that in the very long run, the PC is dead. And certainly, the cloud and the edge are showing more growth. But the industry has not come up with an alternative to the PC - yet. You can call Chromebooks something other than a PC, but in form factor and connectivity they are practically the same device. This is where Windows may have to morph, maybe to a browser-based OS, and Microsoft very much has the assets (and the ambition) in play with Edge. And a micro Edge browser could very well work on the IoT edge. Wait – we even have a perfect branding headline – Microsoft Edge for the IoT Edge – with all the good Windows DNA, should that IoT edge platform need to get a little beefier. Wait, there is also Windows Server… so morph, position… even "embrace and extend" – remember that? Why not anymore in 2018?

Warning – Major Brand Implosion. I searched the interwebs for a bit on Windows brand value… with no success. But it must be out there… (please let me know if you find it). What is the #1 brand associated with Microsoft? Windows, then Office. Ask anyone. Why give that up? Yes, it may be old, but so are the affluent, aging populations of the first world – and they have known Windows for their whole computing lifetime. It may not be the snazziest brand and may need some maintenance… but in the B+ brand area this is just destruction of brand value… again – you morph a brand, you don't… split it and make it disappear (I know Microsoft will of course argue this, but let's watch what happens to the Windows brand in the next 48 months).
 
What's the platform message? Microsoft tried with the Universal Windows Platform (UWP). A very attractive value proposition for developers. Yes, the mobile part fell flat, but Microsoft has successfully provided tools to run on iOS and Android, and a great testing capability with Xamarin. Developers still have to build for and on Windows devices... so what is the message to the developer community with the Windows split? At the moment I can't come up with good answers for that... it will be interesting to see Microsoft address it at Build in May in Seattle.
 
What does it mean for the future of computing? Microsoft has done remarkable footwork with the HoloLens, which runs Windows 10. I called it the first 'headable' PC. Will Windows 10 slim down to a more device-centric OS? What about the synergies of running the same apps in a familiar OS? More questions that don't bode well if we see a fragmentation of Windows going forward.
 

MyPOV

Certainly a bold move by Nadella, probably his boldest. I am sure he has major shareholder (aka Bill Gates, Steve Ballmer) support. Both of those two have dedicated decades of their lives to making Windows what it is today. They may know something that we don't know, and I am happy to correct this blog… when I turn out to be wrong. I can't imagine Ballmer giving Nadella a hard time for not moving fast enough to split up Windows... but hey, maybe. For now I am pretty comfortable with the POV… what is yours? Please share!

 

[April 10th 2018] Needless to say, Microsoft wants to stress some points here: Windows remains an important part of Microsoft's future, in combination with the Microsoft 365 offerings. Windows also powers the devices on the "intelligent" edge. Microsoft states that customers have been asking to get Office, Windows and devices closer together for a better experience. Fair enough; make up your mind. The first data point will be ... Build. Or any major announcements before.

 


Event Report - IBM Think 2018 - IBM is back...

We had the opportunity to attend IBM's Think conference, held in Las Vegas, across the MGM and Mandalay Bay properties, from March 19th till 22nd 2018. As it took place in the busiest week of the spring conference season, I could make it for only one day, invited to the IBM partner conference that was happening in parallel.

Prefer to watch – here is my event video … (if the video doesn't show up – check here)

Here is the 1 slide condensation (if the slide doesn't show up, check here):

Want to read on? Here you go:

IBM brings Partner program to the 21st century – I had the opportunity to attend the PartnerWorld program events, and it was good to learn that the partner program is making strides into the 21st century: simplifications for partners doing business with IBM, cutting down the incentive system from a triple-digit to a single-digit number of incentives, and giving partners sandboxes to evaluate, create and sell joint offerings. To a certain point I am surprised this is only happening now - but better now than later or never. Talking to partners, the top concern remains channel conflict with IBM's direct sales force, while the top wish was for IBM to build up dedicated partner pre-sales capacity (in North America). Common concerns, fears and wishes from partners towards their product / platform vendor.

[Image: Teltsch in the Partner Keynote]


ICP picks up speed – IBM has repositioned its hybrid cloud offering, formerly Bluemix Local, as IBM Cloud Private (ICP), providing basically the same value proposition as before: a platform for enterprises to build their next-generation applications on. Apart from the development tools, ICP has a twist on operations management and monitoring, an important aspect of a hybrid cloud. Being able to securely scale code and monitor next-generation applications is important for enterprises. And it being 2018, Kubernetes is a key ingredient, and ICP leverages Kubernetes to achieve workload portability across clouds. The product team gave me a private preview demo of what is coming later in the year, and it looks promising in terms of capability and usability, especially for IBM shops, and potentially beyond.
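Kubernetes enables that portability because the same declarative manifest can be applied, unchanged, to any conformant cluster, whether it runs on-premises or in a public cloud. A minimal, generic example (the app name and image below are placeholders for illustration, not an ICP artifact):

```yaml
# Generic Kubernetes Deployment: the same manifest can be applied to any
# conformant cluster (on-premises or public cloud) with `kubectl apply`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.13   # placeholder image
          ports:
            - containerPort: 80
```

Because the manifest describes desired state rather than infrastructure specifics, moving the workload between clusters is a matter of re-applying it, which is the portability story ICP is leaning on.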

[Image: Teltsch in conversation with Wylie]

IBM ML comes to Swift - IBM has been working closely with Apple on Swift, pretty much since the launch of Apple's new programming language. The joint solutions are built on the IBM platform, and given the IBM push on cognitive / Watson / ML / AI, it's key that Swift developers building applications in and for the IBM ecosystem can leverage IBM AI / ML services in their iOS applications. Something joint customers expected, and IBM has now (finally) delivered. I had some good conversations with early adopters.

MyPOV

After a one-year hiatus with no conferences and events, IBM is back with a user conference. It has consolidated the many separate conferences IBM used to run into one single event, which is of course a major challenge for all involved... but a good change in my view. Customers could not afford to attend 4-5 conferences a year, even if they were fully bought into the IBM offerings... moreover, customers had to connect the dots between the various offerings, which at times were not synced... as each conference would plan its product and announcement cycles around its individual dates... Think makes a difference here, aligning messaging and, likely over time, product release cycles... that makes it easier for customers and prospects to get an overview, at a coordinated point in time, of the many IBM products, offerings and services. It was good to see the focus of the new partner management regime on making it easier and simpler for partners to do business with (or for?) IBM. Always a good true north for a partner organization.

On the concern side, IBM needs to learn how to put on a mega conference that maximizes value for attendees and the return on event dollars. It has massive experience with single-property events in Las Vegas; name a casino and IBM has been at it. Multi-property events in Las Vegas are hard for all vendors who have outgrown a single property, but IBM can do better at connecting them. And on the product innovation side, it felt at times that announcements could have been made earlier but were held for Think. Understandable, but a fine line to walk for any vendor. It will be good to see what IBM can create and deliver in the next 12 months, giving better insight into the innovation power that IBM can harness.

But for now, it is good to see a single event aligning all messaging, product, offering and services cycles. For a first combined event, Think 2018 was a good start. Stay tuned.


Also - check out a Twitter Moment of IBM Think 2018 here

 


Event Report - Oracle HCM World 2018 Dallas - Steady Progress

We had the opportunity to attend Oracle's HCM World conference, held from March 20th till 22nd in Dallas. Taking place in the busiest week of the spring conference circuit, the conference had good analyst and influencer representation and was overall well attended (Oracle claimed 2200+).


Prefer to watch – here is my event video … (if the video doesn't show up – check here)


Here is the 1 slide condensation (if the slide doesn't show up, check here):


Want to read on? Here you go:

Oracle keeps delivering in HCM – Since its early beginnings, the Oracle HCM product has done well and is growing at an almost metronomic takt rate. New capabilities are added to the product, and Oracle sees customer adoption of them in the following 6-12 months. Customers, partners and the overall ecosystem are now on the twice-yearly announcement schedule of HCM World in spring and OpenWorld in fall, and in between, customer adoption manifests itself in go-lives. The result is likely the most comprehensive single-platform HCM suite in the market.

[Image: Oracle HCM Cloud Spring 2018 Release Highlights]

 


Good DNA in Spring 2018 Release – Along the same lines, the Spring 2018 release of Oracle HCM Cloud is a rich release that pushes the boundaries. Oracle replaced and strengthened the onboarding capabilities in the suite. Given the best-practice uncertainty that currently plagues performance management, Oracle has (wisely) opted for a suite of performance management tools. People leaders can choose from four different approaches to address performance management in their enterprise. The good news is that they can even choose different best practices / flavors of performance management across the enterprise, sliced and diced by operating company, division, people type, etc. The right approach, and it will be interesting to see adoption. And last but not least, more AI; no conference goes without AI in 2018… Oracle pointed out that it has always had some form of 'intelligence', but now the offerings are serious, and it's good to see Oracle speak of AI (vs. the tad misaligned 'adaptive intelligence' term of the past).

 

 

 

 

Oracle HCM Cloud HCM World Holger Mueller Constellation Research
The new Oracle HCM Cloud UX paradigm

A new UX - Having been a critic of the Oracle HCM UI for a long time, I find the new UX a welcome first step in the right direction. Not surprisingly, Oracle opts for the mobile focus of the largest user population and for high-volume transactions. The new UX looks modern and easy to use, and it has some of the key inner workings a UX in 2018 should have: it's responsive, can be used across devices and form factors, and takes an aggressive stance on defaulting and suggesting entries. At the core is a newsfeed paradigm that users are familiar with from consumer websites. The newsfeed manages to collapse menu structures and to surface the relevant information at the right time. That is not easy to get right - think of the challenges Facebook has had getting this UX right - but from what we saw of the Oracle HCM newsfeed, it's a well-working first implementation in an Oracle enterprise application. All of this is welcome to busy enterprise users, who have a real job to do and can't afford to be held up too long by administrative systems (like any HCM system). The next step is to check in on customer feedback, rollout plans and roadmap.

Oracle HCM Cloud HCM World Holger Mueller Constellation Research
Oracle has invested to get the Newsfeed right

MyPOV

A good HCM World for Oracle: the product keeps progressing, and customers and the ecosystem are positive. Customers tap more and more into the suite benefits, adding modules after originally going live on more administrative functions such as ESS / MSS and payroll. Needless to say, the suite-level benefits are tangible and create productivity for users as well as HR departments, all leading to a more positive stance towards the product, in this case Oracle HCM Cloud.

On the concern side, the event seemed smaller than last year's. Maybe the timing and the location in Dallas did not help, but in general you expect a growing attendee number. Equally, in the ecosystem we saw less partner activity... it looks like Deloitte (for the North American SIs) and Infosys (for the Indian SIs) have captured the pole position. Next year's HCM World will be a data point on how much Oracle can activate users and prospects to come to an event like this one. For the record, happy users do not need to travel to user conferences as much, as they know what's coming and are busy implementing and using the software.

But overall a good event for Oracle customers and prospects. Oracle HCM Cloud is probably the most complete, single platform, single code base HCM Suite out there. Stay tuned.


Nvidia Accelerates Artificial Intelligence, Analytics with an Ecosystem Approach

Nvidia Accelerates Artificial Intelligence, Analytics with an Ecosystem Approach

Nvidia’s GTC 2018 event spotlights a play book that goes far beyond chips and servers. Get set for next era of training, inferencing and accelerated analytics.

“We're not a chip company; we're a computing architecture and software company.”

This proclamation, from Nvidia co-founder, president and CEO Jensen Huang at the GPU Technology Conference (GTC), March 26-29 in San Jose, CA, only hints at this company’s growing impact on state-of-the-art computing. Nvidia’s physical products are accelerators (for third-party hardware) and the company’s own GPU-powered workstations and servers. But it’s the company’s GPU-optimized software that’s laying the groundwork for emerging applications such as autonomous vehicles, robotics and AI while redefining the state of the art in high-performance computing, medical imaging, product design, oil and gas exploration, logistics, and security and intelligence applications.

Jensen Huang, co-founder, president and CEO, Nvidia, presents the sweep of the
company's growing AI Platform at GTC 2018 in San Jose, Calif.

On Hardware

On the hardware front, the headlines from GTC built on the foundation of Nvidia’s graphical processing unit advances.

  • The latest upgrade of Nvidia’s Tesla V100 GPU doubles memory to 32 gigabytes, improving its capacity for data-intensive applications such as training of deep-learning models.
  • A new NVSwitch interconnect fabric enables up to 16 Tesla V100 GPUs to share memory and simultaneously communicate at 2.4 terabytes per second, five times the bandwidth and performance of industry-standard PCI switches, according to Huang. Coupled with the new, higher-memory V100 GPUs, the switch greatly scales up computational capacity for deep-learning models.
  • The DGX-2, a new flagship server announced at GTC, combines 16 of the latest V100 GPUs and the new NVSwitch to deliver two petaflops of computational power. Set for release in the third quarter, it’s a single server geared to data science and deep-learning that can replace 15 racks of conventional CPU-based servers at far lower initial cost and operational expense, according to Nvidia.

If the “feeds and speeds” stats mean nothing to you, let’s put them into the context of real workloads. SAP tested the new V100 GPUs with its SAP Leonardo Brand Impact application, which delivers analytics about the presence and exposure time of brand logos within media to help marketers calculate returns on their sponsorship investments. With the doubling of memory to 32 gigabytes per GPU, SAP was able to use higher-definition images and a larger deep-learning model than previously used. The result was higher accuracy, with a 40 percent reduction in the average error rate yet with faster, near-real-time performance.

In another example, based on a FAIRSeq neural machine translation model benchmark test, training that took 15 days on Nvidia’s six-month-old DGX-1 server took less than 1.5 days on the DGX-2. That’s a 10x improvement in performance and productivity that any data scientist can appreciate.

On Software

Nvidia’s software is what’s enabling workloads, particularly deep learning workloads, to migrate from CPUs to GPUs. On this front, Nvidia unveiled TensorRT 4, the latest version of its deep-learning inferencing (a.k.a. scoring) software, which optimizes performance and, therefore, reduces the cost of operationalizing deep learning models in applications such as speech recognition, natural language processing, image recognition and recommender systems.

Here’s where the breadth of Nvidia’s impact on the AI ecosystem was apparent. Google, for one, has integrated TensorRT 4 into TensorFlow 1.7 to streamline development and make it easier to run deep-learning inferencing on GPUs. Huang’s keynote included a visual demo showing the dramatic performance difference: TensorFlow-based image recognition peaking at 300 images per second without TensorRT, boosted to 2,600 images per second with TensorRT integrated with TensorFlow.

Nvidia also announced that Kaldi, the popular speech recognition framework, has been optimized to run on its GPUs, and the company says it’s working with Amazon, Facebook and Microsoft to ensure that developers using ONNX-compatible frameworks, such as Caffe 2, CNTK, MXNet and PyTorch, can easily deploy using Nvidia deep learning platforms.

In a show of support from the data science world, MathWorks announced TensorRT integration with its popular MATLAB software. This will enable data scientists using MATLAB to automatically generate high-performance inference engines optimized to run on Nvidia GPU platforms.

On Cloud

The cloud is a frequent starting point for GPU experimentation and it’s an increasingly popular deployment choice for spikey, come-and-go data science workloads. With this in mind, Nvidia announced support for Kubernetes to facilitate GPU-based inferencing in the cloud for hybrid bursting scenarios and multi-cloud deployments. Executives stressed that Nvidia’s not trying to compete with a Kubernetes distribution of its own. Rather, it’s contributing enhancements to the open-source community, making crucial Kubernetes modules available that are GPU optimized.
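To make the Kubernetes angle concrete, here is a minimal, hypothetical pod spec requesting a GPU through the standard nvidia.com/gpu resource that Nvidia's Kubernetes device plugin exposes. The pod name and container image below are illustrative assumptions for this sketch, not details announced at GTC:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference-demo          # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: inference
    # illustrative image; any CUDA/GPU-enabled inference image would work
    image: nvcr.io/nvidia/tensorrt:18.03-py2
    resources:
      limits:
        nvidia.com/gpu: 1           # schedule onto a node with a free Nvidia GPU
```

On a cluster running the Nvidia device plugin, the scheduler places this pod only on nodes advertising nvidia.com/gpu capacity, which is what makes the hybrid-bursting and multi-cloud inferencing scenarios described above practical.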

The ecosystem-support message was much the same around Nvidia GPU Cloud (NGC). Rather than offering competing cloud compute and storage services, NGC is a cloud registry and certification program that ensures that Nvidia GPU-optimized software is available on third-party clouds. At GTC Nvidia announced that NGC software is now available on AWS, Google Cloud Platform, Alibaba’s AliCloud, and Oracle Cloud. This adds to the support already offered by Microsoft Azure, Tencent, Baidu Cloud, Cray, Dell, Hewlett Packard, IBM and Lenovo. Long story short, companies can deploy Nvidia GPU capacity and optimized software on just about any cloud, be it public or private.

In an example of GPU-accelerated analytics, this MapD geospatial analysis shows six years of shipping traffic - 11.6 billion records without aggregation - along the West Coast.

MyTake on GTC and Nvidia

I was blown away at the range and number of AI-related sessions, demos and applications in evidence at GTC. Yes, it’s an Nvidia event and GPUs were the ever-present enabler behind the scenes. But the focus of GTC and of Nvidia is clearly on easing the path to development and operationalization of applications harnessing deep learning, high-performance computing, accelerated analytics, virtual and augmented reality, and state-of-the art rendering, imaging or geospatial analysis.

Analyst discussions with Huang, Bill Dally, Nvidia’s chief scientist and SVP of Research, and Bob Pette, VP and GM of pro visualization, underscored that Nvidia has spent the last half of its 25-year history building out its depth and breadth across industries ranging from manufacturing, automotive, and oil and gas exploration to healthcare, telecom, and architecture, engineering and construction. Indeed, Nvidia Research placed its bets on AI – which will have a dramatic impact across all industries – back in 2010. That planted the seeds, as Dally put it, for the depth and breadth of deep learning framework support that the company has in place today.

Nvidia can’t be a market maker entirely on its own. My discussions at GTC with accelerated analytics vendors Kinetica, MapD, Fast Data and BlazingDB, for example, revealed that they’re moving beyond a technology-focused sell on the benefits of GPU query, visualization and geospatial analysis performance. They’re moving to a vertical-industry, applications and solutions sell catering to oil and gas, logistics, financial services, telcos, retail and other industries. That’s a sign of maturation and mainstream readiness for GPU-based computing. In one of my latest research reports, “Danske Bank Fights Fraud with Machine Learning and AI,” you can read about why a 147-year-old bank invested in Nvidia GPU clusters on the strength of convincing proof-of-concept tests around deep-learning-based fraud detection.

Of course, there’s still work to do to broaden the GPU ecosystem. At GTC Nvidia announced a partnership through which its open sourced deep learning accelerator architecture will be integrated into mobile chip maker Arm’s Project Trillium platform. The collaboration will make it easier for internet-of-things chip companies to integrate AI into their designs and deliver the billions of smart, connected consumer devices envisioned in our future. It was one more sign to me that Nvidia has a firm grasp on where its technology is needed and how to lay the groundwork for next-generation applications powered by GPUs. 

Related Reading:
Danske Bank Fights Fraud with Machine Learning and AI
How Machine Learning & Artificial Intelligence Will Change BI & Analytics
Amazon Web Services Adds Yet More Data and ML Services, But When is Enough Enough?


Event Report - ADP Meeting of the Minds 2018 - Stay the course

Event Report - ADP Meeting of the Minds 2018 - Stay the course

We had the opportunity to attend ADP's 25th Meeting of the Minds (MOTM) user conference, held in Orlando at the Waldorf Astoria / Hilton from March 18 to 23, 2018.

Take a look at the event video first (if it does not show up - please check here):

Here is the one-slide condensation (if the slide doesn't show up, check here):

Event Report - ADP Meeting of the Minds 2018 - Stay the course from Holger Mueller

Want to read on? Here you go:

If you want to learn more about the keynote, key tweets are collected in this Twitter Moment here.

MyPOV

A good event for ADP customers and prospects. ADP keeps delivering at a steady pace and creates value for its customer base. The focus on diversity and inclusion is high on people leaders' agendas, so it is no surprise that ADP also focuses on this important topic. It is also good to see the first TMBC assets making it to the mainstream North American customer base that ADP is targeting with its Meeting of the Minds conference.

On the concern side, ADP is moving at a conservative, maybe too slow, speed. One year is enough time for most vendors to create integrated value from an acquisition like TMBC. And ADP announced its new payroll product, Pi, back at the HR Tech conference in the fall - so an update for the MOTM attendees would have been timely. Not to mention ADP's new HR core system, Lifion, which ADP advertises for talent publicly but chose not to mention in Orlando. It's always good for enterprise software vendors to be conservative and quality focused, and ADP customers certainly expect that, but vendors can't be too slow rolling out differentiating capabilities either.

Overall a good event, customers and ecosystem are happy with the progress. A lot of new innovation should see the light at ADP MOTM 2019, fingers crossed. Stay tuned.

Monday's Musings: Designing Five Pillars For Level 1 Artificial Intelligence Ethics

Monday's Musings: Designing Five Pillars For Level 1 Artificial Intelligence Ethics

 

Focus On Humanizing AI

As organizations begin their journey into artificial intelligence (AI), ethics often enters the design process. While achieving a uniform set of ethics may seem insurmountable, some design points will help facilitate the humanization of artificial intelligence and provide appropriate checks and balances. Constellation has identified design pillars for Level 1 AI. Level 1 AI is defined as machine learning proficiency (see Figure 1).

Figure 1.  Five Levels of Artificial Intelligence Requires Different Design Points

The five pillars include (see Figure 2):

  1. Transparent.  Algorithms, attributes, and correlations should be open to inspection for all participants.
  2. Explainable.  Humans should be able to understand how AI systems come to their contextual decisions.
  3. Reversible.  Organizations must be able to reverse the learnings and adjust as needed.
  4. Trainable.  AI systems must have the ability to learn from humans and other systems.
  5. Human-led.  All decisions should begin and end with human decision points.

Figure 2. Five Pillars For Level 1 AI Ethics Focus On Humanizing AI

The Bottom Line.  Instill The Five Design Pillars For AI Ethics In All Projects

Prospects of universal AI ethics seem slim. However, the five design pillars will serve organizations well beyond the social fads and fears. The goal: build controls that will identify biases, show attribution, and enable course correction as needed.

Domo Focuses Its Cloud-Based Analytics Message, Adds Predictive Options

Domo Focuses Its Cloud-Based Analytics Message, Adds Predictive Options

Domo insists its platform is aimed at business people, not data analysts. Here’s the appeal to CXOs and line-of-business types.

The key message at Domopalooza 2018, March 13-15 in Salt Lake City, was that Domo is a platform for business, not a tool for techies. I’ve heard this platform messaging before, but it made sense for cloud-based analytics vendor Domo to emphasize its new “for the good of the company” slogan to try to set itself apart from competitors Tableau, Microsoft Power BI and Qlik.

“Domo is an operating system that lets you run your business on your phone,” declared Domo CEO and founder Josh James in his opening keynote.

It’s an apt description, though Domo is most often used to run the marketing and sales aspects of businesses. Once adopted, Domo usage tends to expand and, in some cases, go companywide. Customers I spoke to at Domopalooza were extending their deployments into finance, customer support, supply chain management and other operational areas. Broad use is most common among corporate customers (meaning those with less than $1 billion in annual revenue), but enterprise customers including Target, United Health Group, Telus and L’Oreal are expanding their Domo footprints.

These announcements from Domopalooza 2018 will see beta release in spring and general availability this summer.

To recap some basics (from my Domopalooza 2017 analysis), Domo is a cloud-based, multi-tenant platform onto which you can load diverse data at scale. There are more than 500 connectors to common data sources, and Domo’s Magic ETL supports integration and transformation. Domo’s Vault back end and infrastructure runs primarily on Amazon Web Services, but it’s also available on Azure for customers (such as Target) that aren’t comfortable storing data on AWS.

Domo introduced a Bring Your Own Key encryption option last year. That won over many customers with demanding security requirements. The company is also rolling out a federated data-access option for those who want to retain certain data on-premises without loading it or copying it into the cloud. Performance with this remote-query option depends on the bandwidth and latency of the customer’s data-center connections and query engine.

Once data is loaded into Domo, admins expose data sets with appropriate access privileges. Business intelligence/data analyst types tend to build out the initial cards and pages (akin to data visualizations and dashboards), but it’s common for business types to create and edit their own cards. Once users learn the platform, departmental and line-of-business power users often develop cards, pages and even “Beast Mode” custom calculations. There are hundreds of prebuilt and templated cards and pages available.

Domo execs say it’s not uncommon for CXOs to be big Domo users. Jeremy Andrus, CEO at Traeger Grills and a keynote guest, said he’s the number-one user at his company, which is a $450-million maker of barbeque grills. Andrus said he looks at insights on revenue, day sales, margins, channel productivity and marketing efficiency on a daily basis, usually from his phone. At larger companies the buyers and biggest users are typically line-of-business leaders. A keynote panel of media customers brought together marketing, advertising and operations vice presidents from Domo customers ESPN, The New York Times, Univision and The Washington Post.

Upgrades Announced at Domopalooza

The major announcements at Domopalooza fell into four categories: storytelling, Mr. Roboto, certified content and data center. Most of these upgrades are expected to see beta release this spring with general availability expected by summer. Here’s a quick summary of what’s in store:

Storytelling: Highlights include auto-suggested page layouts, more templated page layouts, custom charting capabilities, and support for guided, interactive analyses.

Mr. Roboto: This is Domo’s intelligence, alerting and machine learning layer. Upgrades here include natural-language generation, automated predictions, forecast alerts, and anomaly and correlation detection in third-party data. Also planned is R- and Python-based data science integration.

Mobile views of coming Mr. Roboto capabilities including, left to right, natural language generation, forecasting and correlation.

Certified content: Coming certification capabilities for cards, data sets and Beast Mode custom calculations will beef up governance and compliance capabilities with granular controls. This will help analysts and administrators ensure sound data and sound, sanctioned analyses, but it’s not a one-way street. Business users will be able to submit the new cards they develop for certification. Beefed up statistics for admins will reveal who’s using what data, cards and Beast Modes and whether there are overlaps, redundancies or inconsistencies.

Data center: These upgrades will help admin and data-management types with data cleansing and validation. Here, too, beefed-up statistics will help admins understand and tag data and then track usage. Collaboration and group controls will help with managing data sets at scale.

MyPOV on Domopalooza 2018 and Domo’s Direction

I assumed Domopalooza’s move from the Grand America Hotel in 2017 to the far larger Salt Palace Convention Center for 2018 would mean a much bigger event, but attendance was roughly the same as last year at around 3,000 people. The extra space made things more commodious and comfortable, but last year’s event seemed to have a bit more energy.

As for the keynotes, Domo leaned heavily on fireside chats with celebrity guests. I prefer hearing from customers, particularly innovators. On that note, Simone Knight, VP, Marketing Strategy and Media Intelligence at Univision, was fantastic both as a guest and in leading the media panel. Ben Schein, Senior Director, Enterprise Data, Analytics and BI at Target, made a reprise appearance, updating the details of the retailer’s massive Domo footprint. Target now has 800 billion (with a B) rows of data in Domo, and usage now averages 3,000 weekly users, up from 1,500 weekly users last year.

As for the announcements, every customer I talked to was eager to adopt the new features. The certifications, storytelling and data admin upgrades were described as must-haves that can’t come soon enough. The Mr. Roboto capabilities are nice-to-haves that will drive innovation. If you read my January report on “How Machine Learning and Artificial Intelligence will Change BI and Analytics,” you know that Domo is among the leading vendors I detailed that are investing in ML and AI. I like the platform approach of Mr. Roboto, which will enable customers and partners to work with APIs and add their own code and customized capabilities.

Domo creates early anticipation for features by holding a “Sneak Peek” and customer-wish-list session at the end of every Domopalooza. The upside is that Domo’s direction is very customer driven. The downside is that it might be 14 to 18 months between initial public discussions about features and upgrades and general availability. That can make the process seem slow when, in fact, it’s just more open. Companies that are more secretive about their development work for many months before announcing new features run the risk of getting too little input and facing surprises in the beta and release stages.

Domo is still maturing. It started as a platform designed to run in the cloud and give business users web and mobile access to insights. The market messaging is now in line with that original vision and it’s building out the deeper levels of management control and customization capabilities that the developers and administrators are demanding as deployments scale up and out.

Related Reading:
MicroStrategy Makes Case for Agile Analytics on its Enterprise Platform
Tableau Conference 2017: What’s New, What’s Coming, What’s Missing
Qlik Plots Course to Big Data, Cloud and ‘AI’ Innovation
