
Digital Business Distributed Business and Technology Models Part 3a; Distributed Service Management (Technology)


Service Management is a very broad term, and in the framework of Digital Business it has a particularly broad and crucial role. A further complication is that the role is split between the Service Management of the enabling technologies, overlaid by the Service Management of distributed Business interactions/transactions. In the short term the Distributed Service Technology Management will dominate, and it must support and integrate both on-premise, Enterprise-owned infrastructure and the use of external ‘as a Service’ Cloud Service provider infrastructure.

However, for a Digital Business, as defined in Part 1 of this series, trading in the Digital Economy also requires the implementation of an external Distributed Service Business Management capability. Part 3b of this series provides a briefing on the requirement and, in particular, the potential role of Blockchain technologies.

The preceding Part Two, see appendix for details, focused on the first layer of a simple four-layer abstracted framework defining the technologies supporting a Digital Enterprise. Dubbed ‘Dynamic Infrastructure’, this base layer provides on-demand access to networking and computational resources with the necessary Service Management directly associated with provisioning. The Distributed Service Technology Management layer sits above this layer to provide and manage a wide range of sophisticated functions that enable the delivery of Business value.

The separation between the Service Management of the Dynamic Infrastructure and the Distributed Service Technology Management may seem odd, but it is important. The provisioning infrastructure may be provided by the enterprise, but increasingly a large portion will be provided ‘as a Service’ by market leading vendors such as AWS, Google or Microsoft, amongst many others. Enterprises can expect to operate across both Private and Public Infrastructure, and as such their Distributed Service Technology Management must operate seamlessly across both.

Distributed Service Technology Management should be understood as referring to those functions that must operate in an independent manner above the Dynamic Infrastructure layer. The overused term ‘Platform’ is frequently applied to these capabilities, but the differences between various Platforms are so large as to render the term meaningless as a requirement definition.

The comparison of ‘Platform’ products is difficult due to the wide range of functions contained in this layer, some of which are very focused on a particular aspect. In addition, most Platforms are continually developing in line with deployment experience and market demands. Discussions on standards may be actively underway, but it will take both time and market maturity before they have significant impact. The definition of an IoT Platform started around the connectivity of sensors with associated functionality for data management, but today Platforms are increasingly seen as an integral part of CAAST (Clouds, Apps, AI, Services & Things). High-function Platforms from leading technology vendors support the integrated operation of these technologies as the enabler for Digital Business.

Specialized Platforms, particularly as part of final mile IoT connectivity, are still required, and as a further complication these are usually designed to connect into the sophisticated high-function Platforms. Such wide diversity in capabilities makes the term ‘Platform’ effectively meaningless as a capability definition. To gain an insight into the number of products defined as an ‘IoT Platform’, visit a product-listing site such as Postscapes.

Platforms can be broken into four major groupings, a methodology that allows the positioning of major technology vendors to be more readily identified in alignment with their core market focus;

  1. Enterprise operated Dynamic Infrastructure; examples; Cisco, Dell, HPE
  2. Cloud Services providers+; examples; AWS, Google, Microsoft, Salesforce, SAP
  3. IoT ‘final mile’ focused; examples; Labellum, PTC, ThingWorx
  4. Open Source Development*; examples; AllJoyn, GE Predix, OpenIoT

+IBM, Salesforce and SAP all offer Platforms that connect to their respective Clouds, but their focus is on providing Business Apps, not Cloud capacity. They have been included to avoid questions that would occur if their names were omitted.

*See a list of 21 Open Source Projects here

As is usually the case in the initial stages of a new technology market, vendor proprietary solutions are likely to provide the most attractive solutions for the requirements of first-generation deployments. Such deployments tend to be focused and do not require the full range of functions that will be required later when maturity and scale drive product selection. Not unnaturally there will be concern about vendor lock-in and/or restrictions on the development of a fully functional Distributed Services Technology Management layer, but this may be less concerning than it might seem.

For any Digital Enterprise, the successful implementation of an independent Distributed Service Technology Management layer, able to integrate ‘any to any’ combinations of Private or Public Dynamic Infrastructure provision into advanced operational Services in support of the higher business layers, is a crucial success factor.

A great deal of Technology attention is focused upon the architecture and standards necessary to achieve this, as by definition the Distributed Service Management layer must be based on standardized principles to ensure ‘open’ operation. Leading technology vendors are active in addressing the requirement for standards. Almost all references to IoT Architecture are in reality references to the Architecture of the Distributed Services Technology Management layer, and have relatively little to contribute to the remaining three layers of the Digital Business framework.

If the number of Platforms available in the market, each with different features, is confusing, the confusion is made worse when the number of communities developing architectural models and standards is added into consideration. This is not the place to examine, or even list, each individually. This blog is aimed at providing an informed overview to build understanding of the necessary considerations for enterprise deployment and product evaluation.

Commercially sponsored standards activities often have a scope, or point of view driven by the market positioning and products of particular vendors. This often fragments the overall architecture required as well as making it difficult to use for objective evaluation. Those charged with managing the introduction of the Distributed Services Technology Management into their enterprise need a comprehensive future framework to help them ensure the various tactical deployment choices will come together in a cohesive transformation of capabilities.

Perhaps the best example of an independent approach, but with a scope covering the entire architectural framework, comes from the IoT Forum. This body took over the work of the EU on IoT Architecture and extended the reach to be global, as well as to be more inclusive, with a series of events held around the world. EU funding has reduced reliance on technology vendor sponsorship, enabling the production of a detailed report on what is required, and why, as an introduction to the ‘Architectural Reference Model’, or ‘ARM’.

The IoT Forum work on the ARM provides an excellent background to understanding this complex requirement, as well as offering a strategic definition as a longer-term target for the development of an Enterprise Distributed Services Technology Management layer. Current deployment requirements can be assessed against this framework to establish requirement definitions for product choice. This is particularly useful given the lack of reliable standards to guide choice.

The work on Architectural Frameworks by various bodies across the Technology industry currently provides valuable guidance on incorporating the first standards. However, in determining how the Technology elements will support interworking, internally and externally, it is easy to lose sight of the real question: the technology is there to support and enable the Distributed Services Business Management capabilities.

The Digital Business Enterprise only exists because it is part of the Digital Economy conducting business through exchanging Services with its industry ecosystem of partners. In this continuously dynamic model with ever changing Business partners and transactions a distributed, and decentralized, commercial transaction recording capability is a necessity.

In a decentralized, distributed Digital Business ecosystem operating in a loosely coupled, stateless format, existing forms of transaction management, based on predefined, closely coupled relationships and managed state, cannot be applied. The huge interest in Blockchain technology stems from its potential to provide this new and radically different capability.
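
The core idea behind that capability can be illustrated with a minimal, hypothetical sketch of a hash-linked transaction record (illustrative only, not any specific Blockchain product; the record fields and function names are invented): each entry carries the hash of the previous entry, so every ecosystem partner holding a copy can independently detect later tampering.

```python
# Minimal sketch of a tamper-evident, hash-linked transaction record.
# Illustrative only: real Blockchain platforms add consensus, digital
# signatures and peer-to-peer distribution, all omitted here.
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a transaction record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def append_transaction(ledger: list, transaction: dict) -> None:
    """Link a new transaction to the hash of the previous entry."""
    previous = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"transaction": transaction, "previous_hash": previous}
    ledger.append({**body, "hash": record_hash(body)})


def verify(ledger: list) -> bool:
    """Any copy holder can independently re-check the whole chain."""
    previous = "0" * 64
    for entry in ledger:
        body = {"transaction": entry["transaction"],
                "previous_hash": entry["previous_hash"]}
        if entry["previous_hash"] != previous or entry["hash"] != record_hash(body):
            return False
        previous = entry["hash"]
    return True


ledger: list = []
append_transaction(ledger, {"buyer": "A", "seller": "B", "service": "X"})
append_transaction(ledger, {"buyer": "B", "seller": "C", "service": "Y"})
print(verify(ledger))  # True; altering any earlier entry makes this False
```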

It should be noted that BitCoin, often quoted as an example of Blockchain, is not indicative of the overall capabilities that can be developed using Blockchain technologies. BitCoin is a particular implementation that uses the technology in a certain manner, with corresponding limitations.

Part 3b of this series provides a briefing on decentralized Distributed Services Business management.

 

Summary; Background to this series

This is the third part in a series on Digital Business and the Technology required to support the ability of an Enterprise to do Digital Business. It provides an explanation for the adoption of the simple definition shown in the diagram below to classify the technology requirements, rather than attempting any form of conventional detailed Architecture, together with a fuller explanation of the Business requirements.

[Diagram: the simple four-layer classification of Digital Business technology]

Part One - Digital Business Distributed Business and Technology Models;

Understanding the Business Operating Model

Part Two - Digital Business Distributed Business and Technology Models;

The Dynamic Infrastructure

 

 

 


Machine Learning Is The New Proving Ground For Competitive Advantage

  • 50% of organizations are planning to use machine learning to better understand customers in 2017.
  • 48% are planning to use machine learning to gain greater competitive advantage.
  • Top future applications of machine learning include automated agents/bots (42%), predictive planning (41%), sales & marketing targeting (37%), and smart assistants (37%).

These and many other insights are from a recent survey completed by MIT Technology Review Custom and Google Cloud, Machine Learning: The New Proving Ground for Competitive Advantage (PDF, no opt-in, 10 pp.). Three hundred and seventy-five qualified respondents participated in the study, representing a variety of industries, with the largest share coming from technology-related organizations (43%). Business services (13%) and financial services (10%) respondents are also included in the study. Please see page 2 of the study for additional details on the methodology.

Key insights include the following:

  • 50% of those adopting machine learning are seeking more extensive data analysis and insights into how they can improve their core businesses. 46% are seeking greater competitive advantage, and 45% are looking for faster data analysis and speed of insight. 44% are looking at how they can use machine learning to gain enhanced R&D capabilities leading to next-generation products.
If your organization is currently using ML, what are you seeking to gain?

  • In organizations now using machine learning, 45% have gained more extensive data analysis and insights. Just over a third (35%) have attained faster data analysis and increased the speed of insight, in addition to enhancing R&D capabilities for next-generation products. The following graphic compares the benefits organizations who have adopted machine learning have gained. One of the primary factors enabling machine learning’s full potential is service oriented frameworks that are synchronous by design, consuming data in real-time without having to move data. enosiX is quickly emerging as a leader in this area, specializing in synchronous real-time Salesforce and SAP integration that enables companies to gain greater insights, intelligence, and deliver measurable results.
If your organization is currently using machine learning, what have you actually gained?

  • 26% of organizations adopting machine learning are committing more than 15% of their budgets to initiatives in this area. 79% of all organizations interviewed are investing in machine learning initiatives today. The following graphic shows the distribution of IT budgets allocated to machine learning during the study’s timeframe of late 2016 and 2017 planning.
What part of your IT budget for 2017 is earmarked for machine learning?

  • Half of the organizations (50%) are planning to use machine learning to better understand customers in 2017. 48% are adopting machine learning to gain a greater competitive advantage, and 45% are looking to gain more extensive data analysis and data insights. The following graphic compares the benefits organizations adopting machine learning are seeking now.
If your organization is planning to use machine learning, what benefits are you seeking?

  • Natural language processing (NLP) (49%), text classification and mining (47%), emotion/behavior analysis (47%) and image recognition, classification, and tagging (43%) are the top four projects where machine learning is in use today. Additional projects now underway include recommendations (42%), personalization (41%), data security (40%), risk analysis (41%), online search (41%) and localization and mapping (39%). Top future uses of machine learning include automated agents/bots (42%), predictive planning (41%), sales & marketing targeting (37%), and smart assistants (37%).
  • 60% of respondents have already implemented a machine learning strategy and committed to ongoing investment in initiatives. 18% have planned to implement a machine learning strategy in the next 12 to 24 months. Of the 60% of respondent companies who have implemented machine learning initiatives, 33% are in the early stages of their strategies, testing use cases. 28% consider their machine learning strategies as mature with between one and five use cases or initiatives ongoing today.

Microsoft Moves Up the IoT Value Chain with Hardware Co-Innovation


Constellation Insights

While many new connected devices are being made by established companies, countless more are getting hatched by startups, many of which have only a handful of employees and limited technical and financial resources.

In a bid to help out, Microsoft has opened up IoT and AI Insider Labs in Redmond, Wash., and Shenzhen, China, with another set to open in Munich next month. The labs are staffed by Microsoft experts who help participating companies fine-tune their hardware, debug device drivers, develop related software and, perhaps most importantly, learn how to achieve last-mile connections at scale. In other words, the labs are about completing the journey from proof-of-concept to production-ready.

Microsoft hasn't made much fuss about the labs until now. This week, it published a lengthy feature that takes readers inside the labs. Here's a key excerpt from the piece:

Companies of all sizes can work in the labs at no cost. They get access to Microsoft technology and its engineers’ expertise in machine learning, AI and the cloud, all in one-stop shops. During stints that typically span from one to three weeks, visiting development teams learn how to refine their product architecture, unblock technical issues and build the skills to create a full-stack IoT solution.

Four-person, full-time teams of engineers versed in custom hardware, embedded software, industrial design, secure telecommunications and cloud development walk invited guests through sprint planning, tooling and testing – tasks that typically require a company to pay six or seven vendors. Ultimately, the labs help large enterprises and tiny startups alike scale and accelerate their IoT solutions to market.

Given the dubious quality of IoT security practices, particularly in the consumer device market, that part of the lab's work is especially welcome. 

Naturally, there's more to Microsoft's effort than benevolence; the longer-term goal is getting more devices ready to connect to and consume Azure cloud services. By aligning and co-innovating with IoT device makers early on, there's a much better chance of gaining their business on Azure. 

The program is open to any company, and overall it is a smart move by Microsoft.

"It is normally considered good business practice to invest in moving up the value chain from your current position," says Constellation Research VP and principal analyst Andy Mulholland. "Currently many tech vendors' investment seems to be in the direction of AI, and for most of the last year it's certainly been focused on the business value part of IoT. The result has been to leave the sensors, and final mile, services of IoT in something of a vacuum, yet everything up the value chain depends on the availability of good-quality sensing and data flow management. Microsoft is making an excellent move to encourage and breathe more support into what seems to have become a 'Cinderella' market."



Oracle Looks to Differentiate from AWS, Azure with Cloud Converged Storage


Constellation Insights

Oracle has designs on taking significant share from the likes of Amazon Web Services and Microsoft Azure in the IaaS (infrastructure as a service) market, and is now touting what it calls an industry-first offering as a differentiator. Here are the details from Oracle's announcement:

Oracle today unveiled the industry's first Cloud Converged Storage, representing the first time a public cloud provider at scale has integrated its cloud services with its on-premises, high performance NAS storage systems. Oracle ZFS Cloud software, included in the latest Oracle ZFS Storage Appliance release, enables organizations to easily and seamlessly move data and/or applications to the cloud to optimize value and savings, while eliminating the need for external cloud gateways and avoiding the costs of software licenses and cloud access licenses--AKA "cloud entrance taxes"—charged by legacy on-premises vendors for the right to access the public cloud from their infrastructure platforms.

Oracle claims its Cloud Converged Storage setup results in an 87 percent lower total cost of ownership when compared to "one industry competitor."

Again and again, Oracle's announcement focuses on the benefits of linking on-premises and cloud storage from the same vendor. (It's worth noting that Oracle's public cloud storage is built with ZFS appliances.)

Oracle's approach removes the burden on users to do their own on-premises-to-public-cloud integration, to manage environments with different security requirements, support teams, industry standards and skill sets, and to struggle with end-to-end visibility, diagnostics and support.

In addition, on-premises NAS (network attached storage) vendors don't have public clouds, while public cloud vendors don't have on-premises NAS systems, Oracle claims. While this is true to an extent—helped out by the fact that Dell and Hewlett-Packard both scuttled their public cloud offerings in 2015—one can argue that IBM has offered rough equivalents to Oracle's new offering for some time.

Use cases for Cloud Converged Storage include backup and recovery, dev and test, archiving and workload migration, Oracle says. Oracle is also shipping some new features aimed at improving the performance of its database when used in conjunction with its storage technology. Intelligent Storage Protocol 2.0 can increase OLTP (online transaction processing) performance by up to 19 percent and RMAN backup performance by up to 33 percent, without the need for administrators to do anything, according to a statement.

Every vendor engages in chest-beating about the raw power of their products and Oracle isn't acting any differently here. What should perk up the ears of ZFS customers is the idea of those "cloud entrance taxes" going away, as well as no need for a gateway.

On the other hand, if you're not currently a ZFS storage shop, Cloud Converged Storage doesn't make much sense without a significant investment in on-premises hardware. Oracle's announcement didn't speak to any incentives related to ZFS hardware acquisition. 

In addition, Cloud Converged Storage is tightly coupled to Oracle's public cloud; don't expect the company to create similarly seamless tie-ins to the likes of AWS or Azure. ZFS customers can still integrate with other public cloud services but this will require a separate gateway. Once the numbers get crunched, ZFS customers may find it makes more financial, as well as technical, sense to stick with Oracle's public cloud.

In hindsight, Oracle's Cloud Converged Storage announcement was inevitable following last year's release of the ZS5 version of the ZFS appliance. The system was originally  designed for IaaS but Oracle kept on-premises deployments in mind, as Constellation VP and principal analyst Holger Mueller writes in an in-depth report accessible here.

It's not clear how much impact Cloud Converged Storage will have on Oracle's bottom line, but its messaging, focusing on tight integration and the removal of unfriendly additional costs, may have a ripple effect in the market. 


 

 


New Google Open Source Project Portal Is A Gift to Enterprises


Constellation Insights

Google is the canonical example of a company that found commercial success through the use of open-source software. It's undertaken more than 2,000 open source projects over its 18-year history, and in turn has released millions of lines of code to the public.

But such a sprawling array of open-source efforts can be difficult for interested third parties to navigate. To this end, Google has launched a new portal that ties together all of its open-source projects while also providing a window into its internal practices around open source. Here are the details from a blog post by Will Norris of Google's Open Source Program Office:

This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.

Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.

These docs explain the process we follow for releasing new open source projects, submitting patches to others' projects, and how we manage the open source code that we bring into the company and use ourselves. But in addition to the how, it outlines why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.

Making it easier for companies to get a big-picture view of what Google is doing in open source makes plenty of sense, says Constellation Research VP and principal analyst Holger Mueller. Next steps could see Google do more with regard to unification of project documentation as well as the development of synergies across different projects, he adds. 

Norris cautions that in Google's view, the documentation doesn't constitute a "how-to" guide for an open source software strategy, as Google's approach has been informed by its own experiences. That being said, an enterprise struggling with how to develop an open source framework could do worse than to follow Google's lead—far worse. 

Google is involved with an industry group, TODO, that counts Red Hat, Facebook, IBM, Microsoft, Netflix and many other prominent tech companies as members. TODO members work together to develop best practices and common tooling around open source; Google's new open source portal urges visitors to check out TODO's work, but the value of the documentation Google has released shouldn't be downplayed, despite Norris's caveat. 



Event Report - SAP Ariba Live - The quest to make Procurement awesome


We had the opportunity to attend SAP Ariba’s Live user conference in Las Vegas, held from March 21st to 23rd, 2017, at the Cosmopolitan. The event was well attended, with over 3200 attendees, good partner representation and influencer selection.
 
 
 
So take a look at my musings on the event here: (if the video doesn’t show up, check here)
 
 
No time to watch – here is the 1-2 slide condensation (if the slide doesn’t show up, check here):

 
 
Want to read on? 

Here you go: Always tough to pick the takeaways – but here are my Top 3:

Ariba is on a roll – A few years ago it seemed that Ariba (add an SAP in front every time you read Ariba going forward) may have been in the state of being the sleeping beauty of enterprise software. Always there – but not going anywhere. That has changed in the past year, and Ariba now has the momentum to show it: Even for SAP, adding 10 of the Top 100 Global Businesses in 12 months is quite a feat – both a testament to the market position that Ariba has achieved and to the attractiveness of what Ariba has recently provided and plans to offer soon. And suppliers will pay attention when hearing that Ariba added US$300B in network volume in 2016. Similar to e-commerce, we can see network synergies playing in favor of Ariba.
 
SAP Ariba Live Holger Mueller Constellation Research
Atzberger - Make Procurement Awesome

Functionality Push – Last year Ariba unveiled the Guided Buying approach, both a simplification on the product side and a usability improvement for the UX. The combination has worked well for Ariba and proven popular with customers. Spot Purchases were announced, too, and are available now; the upcoming implementation at Latin American trading powerhouse Mercado Libre is a proof point that Ariba has built good functionality. When asked for the reasons for selecting Ariba, Mercado Libre first mentioned the synergy effects of using SAP already – something that bodes well for further sales of Spot Purchases into the SAP install base.

 
 
 
SAP Ariba Live Holger Mueller Constellation Research
Ariba 2016 Momentum

That the combination of UX improvement and simplification is working out for Ariba is also seen in the plans to bring the same concept to bear on the Supplier side with the Light Account: during the keynote, we saw a demo of onboarding a new supplier in 2 minutes, something as unthinkable as it is unachievable in today’s business practice.

 
SAP Ariba Live Holger Mueller Constellation Research
McDermott & Atzberger Q&A

Blockchain meets Purchasing – In a sign of the times, this was also a conference with a Blockchain announcement, and SAP picked Hyperledger for its first foray into distributed ledger technology. Certainly a good choice; SAP will likely also have to support other blockchain technologies, but Hyperledger is a good start. And few places lend themselves more to the blockchain scenario than purchasing, so it’s good to see SAP innovating.
 
 

MyPOV

It is remarkable how fast SAP Ariba is moving, especially when one considers (which was not much part of the public talks at the conference) that Ariba is in the midst of a major re-platforming endeavor – moving off Oracle and onto SAP HANA. Usually vendors take a noticeable pause while undergoing exercises like these – not so much SAP Ariba. It is good to see the vendor doubling down on things that work, e.g. the UX improvements and the overall process simplifications, while at the same time innovating with blockchain, team productivity software (Microsoft Teams was shown) and speech recognition. Still a tall order – making traditionally boring administrative software like Procurement awesome… to get there, Ariba has shown the willingness to partner and be an open platform, and has embraced lofty goals such as diversity or, even more ambitious, the quest to end modern-age slavery, a topic near and dear to the heart of Ariba boss Alex Atzberger.
On the concern side, Ariba has to deliver a lot while a lot is happening. Never an easy scenario for any vendor, so we will be keeping an eye, especially on the platform side, on how Ariba progresses in the next quarters.
Finally, it was good to see that SAP seems to have found the right length of ‘leash’ for the Ariba subsidiary – not so short as to stop innovation, allowing freedom (e.g. manifested in Ariba using angular.js and not Fiori) while retaining leverage (e.g. with HANA). That SAP CEO McDermott said in a Q&A (see a Storify Tweet Story here) that he would be open to, for example, integrating with perennial co-opetitor Oracle speaks to the flexibility and pragmatism that is now lived at the top of both companies, always a good sign for customers.
 

Want to learn more? Check out the Storify collection below (if it doesn’t show up – check here). Don't miss the Day #2 keynote Storify collection here.

 

 

Domo Climbs Enterprise Ladder in Cloud Business Intelligence


Domo has graduated from analytics startup to enterprise contender, breaking new ground in cloud-scale deployments. Here’s a look inside the fast-growing company.

Domo is enterprise ready. That’s the key takeaway Domo wanted to project at Domopalooza, held March 22-23 in Salt Lake City. The event drew more than 3,000 attendees and saw keynote appearances from enterprise-scale customers including Target, GE Digital, UnitedHealth Group and Univision.

Domo has surpassed the 1,000-customer mark, and more than half of its revenue now comes from $1 billion-revenue-plus enterprise customers, according to company executives. At Domopalooza, CEO and founder Josh James announced that the company has reached a $120-million-annual-revenue run rate. That’s a fraction of the $827 million in revenue rival Tableau reported in 2016 and a rounding error compared to Microsoft’s revenue (although that company doesn’t break out PowerBI revenue, which is Domo’s competition). Nonetheless, given Domo’s claimed 100% growth rate and the list of enthused customers at Domopalooza, it’s time for a closer look.

Domo CEO Josh James focused mostly on interviewing big customers during
his keynote time at Domopalooza 2017.

Previously co-founder and CEO of Omniture, James founded Domo in 2010 shortly after selling his old company to Adobe for $1.8 billion. The Domo executive team is loaded with Omniture veterans, and they tell the story that James came up with the idea for Domo because he was so frustrated with the incumbent tools available for business insight when he was the CEO at Omniture. The intent was to build an agile, cloud-based analytics platform in the mold of Omniture, but designed to handle diverse business data sources (beyond the Web, mobile and social data analyzed in Omniture).

Domo runs in Amazon’s cloud, and it includes components to capture, prepare and visualize data and then engage in collaboration and optimize business decisions. The platform’s back-end data warehouse, the Domo Business Cloud, scales up at cloud speed and handles diverse data sources, including semi-structured and sparse data. At Domopalooza the company announced that its data store has surpassed 26 petabytes, making it the largest cloud-based analytical data store of its kind, according to James.

Domo’s largest customer, Target, spoke to the platform’s scalability during a keynote interview. Target loads data on every item and every transaction from about 2,000 stores into the Domo Business Cloud at 15-minute increments, explained Ben Schein, Director of BI & Analytics. Where store-level reporting was previously updated once a week, Schein said Domo brings near-real-time insight into store operations, purchasing and stocking trends to 1,500 to 1,700 users per week. James acknowledged that Target helped Domo learn how to scale, harden and mature its platform.

To capture data, Domo has created more than 400 pre-built connectors to popular data sources. A data-transformation tool called Magic lets users join and blend data through a drag-and-drop interface. The company partners with data-integration vendors like Informatica and Talend to support more sophisticated ETL work.

Domo’s front-end data-analysis environment combines pages, cards and applications. Pages are analogous to dashboards and cards are individual visual analyses. Pages and cards are mobile first, meaning you build them once and they dynamically render for phone, tablet or desktop viewing. The company has more than 1,000 applications, which are pre-built, but customizable visual analyses, such as a Social Index app for benchmarking brand popularity and net promoter scores or the Sales Forecast app, which measures predictions against actuals and quotas, with drill-down analysis of rep and manager performance. There’s also a SQL-like “Beast Mode” that enables power users to develop custom transformations and analyses.

Ease of deployment and administration are big selling points. The back end is entirely managed by Domo. When you add more data or more challenging analyses, Domo adds storage and compute nodes automatically. Pricing is based entirely on the number of users, not storage or compute capacity. The pricing model is designed to encourage customers to load more data and build more cards and pages.

Agile analysis is another selling point. At the event, a GE Digital executive showed off a company-wide performance dashboard she built “in one day,” complete with slick, graphical formatting created through an Adobe Illustrator plug-in to Domo. A merchandising executive from Target described how her team reviewed a prototype dashboard in the morning and got back a revise with all requested changes by the end of that day. And an executive from Sephora Southeast Asia said her company got started with Domo late last fall and had a dashboard available within two weeks — just in time for Black Friday performance analysis.

Domopalooza saw four key product announcements:

Analyzer upgrades. The Analyzer is where users do their slicing, dicing and page and card building. Top announcements here included a Data Lineage Inspector that shows where the data used in an analysis comes from and how it was transformed or altered. Data “slicer” buttons can now be added to cards to support guided analysis to the most sought-after views of data. And a new period-over-period analysis feature supports time-based comparisons that previously required Beast Mode customization (a generic sketch of what such a comparison computes follows this list of announcements).

Business-in-a-Box. This collection of pre-built, role-based dashboards is designed to support rapid delivery of the most-asked-for insights across sales, marketing, finance, operations, IT and other business functions. It’s set for release this spring.

Domo Everywhere. Also due this spring, Domo Everywhere is the company’s entry into embedded analytics, white-label licensing and publishing. The offering provides ways for customers to make Domo analytics available within their own software, through Web services or on websites under their own brand.

Mr. Roboto. Attendees got a sneak peek at a few of the advanced analytics, machine learning and natural language understanding capabilities of this offering. I was told it will be a layer of capabilities  within the platform, not a bolt-on module. Release dates weren’t offered, so it’s not something I expect to see fleshed out until late 2017 or perhaps Domopalooza 2018.
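
As a rough, generic illustration of what the period-over-period comparison mentioned under the Analyzer upgrades computes, here is a small pandas sketch; the data, column names and periods are invented, and this is not Domo's Analyzer or Beast Mode syntax.

```python
# Hypothetical period-over-period comparison on invented monthly revenue data.
# Generic pandas code, not Domo syntax; shown only to illustrate the concept.
import pandas as pd

sales = pd.DataFrame({
    "month": pd.period_range("2016-07", periods=6, freq="M"),
    "revenue": [120, 135, 150, 148, 160, 172],
})

# Compare each period with the prior period.
sales["prior_period_revenue"] = sales["revenue"].shift(1)
sales["pct_change_vs_prior"] = sales["revenue"].pct_change() * 100

print(sales)
```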


Domo customers shared wished-for feature ideas during Domopalooza’s
open-mike closing session. Audience members raised hands (and Domo
execs guessed percentages) to express their interest in each feature.

MyPOV on Domo’s Course  

I came away from Domopalooza impressed by the scale of Domo’s largest deployments and the enthusiasm of its customers. A highlight of the event was the closing general session, during which Domo previewed coming new features and then turned the mike over to customers to share feature requests. Each request was briefly discussed in a back-and-forth with Domo executives. The request was then listed on a slide (see photo above) for all to see and the audience was then asked to show their interest by a show of hands (or clapping or hoots and hollers). I’ve seen these sorts of sessions at other events, but you don’t see companies with poor customer satisfaction doing it for fear of initiating a bitch fest.

The turning point that Domo is now navigating is the same one that the likes of Tableau and Qlik ran into a few years ago, namely enterprise-grade maturity. Domo execs acknowledged that they’re now facing demands from IT for governance features and administrative capabilities for managing many users.

The Data Lineage Inspector, for example, is just a start on the data governance capabilities customers want. During the closing general session a customer asked for a card-certification feature whereby analyses could have a visual check mark or seal of approval indicating certified status. The indicator would automatically change if data sources or analyses were altered. Domo execs said they are working on such a governance scheme, and by a show of hands there was keen interest (approximated at 91% by Domo's chief product officer and session leader, Catherine Wong).

Domo offers three hybrid deployment options for large or regulated customers that don’t want to put everything in the public cloud. A Federated Query feature that’s very new will let customers query data in place, but performance is dependent upon the bandwidth of the connections and the compute power of each source. A second option lets companies put the Domo Business Cloud data layer behind the corporate firewall while leaving the analysis layer in the cloud. A third option is running the entire Domo platform as a dedicated instance on AWS or Azure or as a private cloud instance behind a corporate firewall.

Domo is facing these enterprise challenges earlier in its lifecycle than did some of its competitors. That’s partly due to the maturation of the market and partly due to the experience of the Domo team rooted in Omniture. The upshot is that Domo is maturing quickly and punching above its actual weight.



Down Report – Power failure takes Azure services down - 3 Cloud Load Toads


 
We continue our series on IaaS downtimes – and other availability issues; see our Down Report on the recent AWS downtime here.

 
 
 

Kudos to Microsoft for sharing the issue, customer impact, workaround, root cause, mitigation and next steps on the Azure Status History (see here).
 
 
So let’s dissect the information available in our customary style:
RCA - Storage Availability in East US
Summary of impact: Beginning at 22:19 UTC Mar 15 2017, due to a power event, a subset of customers using Storage in the East US region may have experienced errors and timeouts while accessing storage accounts or resources dependent upon the impacted Storage scale unit. As a part of standard monitoring, Azure engineering received alerts for availability drops for a single East US Storage scale unit. Additionally, data center facility teams received power supply failure alerts which were impacting a limited portion of the East US region. Facility teams engaged electrical engineers who were able to isolate the area of the incident and restored power to critical infrastructure and systems. Power was restored using safe power recovery procedures, one rack at time, to maintain data integrity. Infrastructure services started recovery around 0:42 UTC Mar 16 2017. 25% of impacted racks had been recovered at 02:53 UTC Mar 16 2017. Software Load Balancing (SLB) services were able to establish a quorum at 05:03 UTC Mar 16 2017. At that moment, approximately 90% of impacted racks were powered on successfully and recovered. Storage and all storage dependent services recovered successfully by 08:32 UTC Mar 16 2017. Azure team notified customers who had experienced residual impacts with Virtual Machines after mitigation to assist with recovery.

MyPOV – Good summary of what happened, a power failure / power event. Good to see that customers were notified. Power events can always be tricky to recover from, and it looks like Azure management erred on the side of caution, bringing up services rack by rack and then adding services like SLB later. But the downtime for affected customers was long: best case – for those in the first 25% of racks recovered – around four and a half hours, and worst case 10 hours+. It is remarkable that it took Azure technicians roughly 2 hours and 20 minutes to get the power back. Microsoft needs to (and says it will) review power restore capabilities and find ways to bring storage back quicker. Luckily for customers and Microsoft this happened overnight, with possibly lesser effect on customers… but that said, we don’t know what kind of load was running on the infrastructure.
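
The durations above can be reproduced from the UTC timestamps quoted in Microsoft's RCA; a quick, illustrative calculation (timestamps copied from the status history text above):

```python
# Recompute the outage durations from the UTC timestamps quoted in the RCA.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
incident_start = datetime.strptime("2017-03-15 22:19", fmt)  # power event begins
recovery_start = datetime.strptime("2017-03-16 00:42", fmt)  # infrastructure recovery starts
racks_25pct    = datetime.strptime("2017-03-16 02:53", fmt)  # 25% of impacted racks recovered
full_recovery  = datetime.strptime("2017-03-16 08:32", fmt)  # storage services recovered

print("Power restoration took:", recovery_start - incident_start)  # 2:23:00
print("Best-case downtime:    ", racks_25pct - incident_start)     # 4:34:00
print("Worst-case downtime:   ", full_recovery - incident_start)   # 10:13:00
```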

Rating: 3 Cloud Load Toads


 
Customer impact: A subset of customers using Storage in the East US region may have experienced errors and timeouts while accessing their storage account in a single Storage scale unit. Virtual Machines with VHDs hosted in this scale unit shutdown as expected during this incident and had to restart at recovery. Customers may have also experienced the following:
- Azure SQL Database: approx. 1.5% customers in East US region may have seen failures while accessing SQL Database.
- Azure Redis Cache: approx. 5% of the caches in this region experienced availability loss.
- Event Hub: approx. 1.1% of customers in East US region have experienced intermittent unavailability.
- Service Bus: this incident affected the Premium SKU of Service Bus messaging service. 0.8% of Service Bus premium messaging resources (queues, topics) in the East US region were intermittently unavailable.
- Azure Search: approx. 9 % of customers in East US region have experienced unavailability. We are working on making Azure Search services to be resilient to help continue serving without interruptions at this sort of incident in future.
- Azure Site Recovery: approx. 1% of customers in East US region have experienced that their Site Recovery jobs were stuck in restarting state and eventually failed. Azure Site Recovery engineering started these jobs manually after the incident mitigation.
- Azure Backup: Backup operation would have failed during the incident, after the mitigation the next cycle of backup for their Virtual Machine(s) will start automatically at the scheduled time.

MyPOV – Kudos to Microsoft for giving insight into the percentage of customers affected. It looks like Azure Storage units are using mixed load – across Azure services. That has pros and cons, e.g. co-location of customer data and mixed, averaged load profiles – but it also means that a lot of services are affected when a storage unit goes down.

Rating 2 Cloud Load Toads

 
 
Workaround: Virtual Machines using Managed Disks in an Availability Set would have maintained availability during this incident. For further information around Managed Disks, please visit the following sites. For Managed Disks Overview, please visit https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview. For information around how to migrate to Managed Disks, please visit: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-migrate-to-managed-disks.
- Azure Redis Cache: although caches are region sensitive for latency and throughput, pointing applications to Redis Cache in another region could have provided business continuity.
- Azure SQL database: customers who had SQL Database configured with active geo-replication could have reduced downtime by performing failover to geo-secondary. This would have caused a loss of less than 5 seconds of transactions. Another workaround is to perform a geo-restore, with loss of less than 5 minutes of transactions. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities.

MyPOV – Good to see Microsoft explaining how customers could have avoided the downtime. But the Managed Disk option only applies to VMs affected by the storage outage. Good to see the Redis Cache option – the question is, though, how efficient (and costly) that would have been; cache synching is chatty and therefore expensive. More importantly, good to see the Azure SQL option; that is key for any transactional database system that needs higher availability. Again, enterprises will have to balance costs and benefits.
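
For the Azure SQL option, a planned failover to the geo-secondary comes down to a single T-SQL statement executed against the secondary server; below is a minimal, hypothetical sketch using pyodbc (server name, database name, credentials and driver version are placeholders, and a real runbook would also repoint application connection strings).

```python
# Sketch of a planned failover to a geo-secondary for an Azure SQL Database
# configured with active geo-replication, per the workaround quoted above.
# Server name, database name, credentials and ODBC driver are placeholders.
import pyodbc

SECONDARY_SERVER = "myserver-secondary.database.windows.net"  # placeholder
DATABASE_NAME = "mydb"                                         # placeholder

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    f"SERVER={SECONDARY_SERVER};DATABASE=master;"
    "UID=admin_user;PWD=example_password",                     # placeholders
    autocommit=True,  # ALTER DATABASE cannot run inside a user transaction
)

# Run against the secondary server: promotes the geo-secondary to primary.
# A planned FAILOVER synchronizes first (loss of < 5 seconds of transactions
# per the Microsoft guidance quoted above).
conn.cursor().execute(f"ALTER DATABASE [{DATABASE_NAME}] FAILOVER;")
```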

Of more concern is that the other 4 services affected by the outage seem to have no Azure-provided workaround, in case customers needed one and decided to implement (and pay for) it. No workaround for Event Hub and Service Bus is not a good situation, especially since event and bus infrastructures are used to make systems more resilient. Azure Search seems to lack a workaround, too, affecting customers using those services. It’s not clear what the statistic means, though: was Search itself not available, or could the information in the affected storage units not be searched? That is an important distinction. The Azure Site Recovery impact isn’t good either, but kudos to Microsoft for starting those jobs manually. Manual starts can only be a workaround, though, as they don’t scale, e.g. in a greater outage. The failure of Azure Backup is probably the least severe, but in the case of power failures, which may not be contained and might cascade, it is of equally substantial severity, as customers lose backup capability to protect them from potential further outages.

Rating: 2 Cloud Load Toads (with a workaround it would be 1, with no workaround 3 – the maximum; as we don’t have full clarity here, we use 2 as the average).
 
 
Root cause and mitigation: Initial investigation revealed that one of the redundant upstream remote power panels for this storage scale unit experienced a main breaker trip. This was followed by a cascading power interruption as load transferred to remaining sources resulting in power loss to the scale unit including all server racks and the network rack. Data center electricians restored power to the affected infrastructure. A thorough health check was completed after the power was restored, and any suspect or failed components were replaced and isolated. Suspect and failed components are being sent for analysis.

MyPOV – It is always ironic how a cheap breaker can affect a lot of business. I am not a power specialist / electrician, but reading this: if one power panel fails and load has to be transferred, the system should still keep operating. Maybe something was not considered in the redundant design versus the remaining throughput capacity – not a good place to be.

Rating: 5 Cloud Load Toads
 
 
Next steps: We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future, and in this case it includes (but is not limited to):
- The failed rack power distribution units are being sent off for analysis. Root cause analysis continues with site operations, facility engineers, and equipment manufacturers.
- To further mitigate risk of reoccurrence, site operations teams are evacuating the servers to perform deep Root Cause Analysis to understand the issue
- Review Azure services that were impacted by this incident to help tolerate this sort of incidents to serve services with minimum disruptions by maintaining services resources across multiple scale units or implementing geo-strategy.

MyPOV – Kudos for the hands-on next steps. The key question (which I am sure Microsoft is asking) is though: how many other storage system power units, or Azure power units overall, may have the same issue, and when will they be fixed and given the right capacity / redundancy so this event cannot repeat? And then there is the question of standardization: is this a local, site-specific event, are other data centers set up differently – or the same – and can the same incident be avoided with higher certainty?

Out of curiosity – there was another event in Storage provisioning, a software defect, only 37 minutes before (you can find it on the Azure status page, right below the above incident) … and these two events may have been connected. The potential connection is obvious: when there is a storage failure in one location, customers (and IaaS technicians) may scramble to open storage accounts at the same or other locations; if they cannot, urgently needed ad hoc remediation and workarounds cannot happen. There may be a connection, or there may not. But when hardware goes down together with the software that manages accounts for that hardware, that is an unfortunate – and hopefully highly unlikely – combination of events.

 

(Luckily) a mostly minor event

Unless someone was an affected party, this was a minor cloud down event – and luckily only a minor one, as power failures can quickly propagate and create cascading effects. Unfortunately, for some of the services there is no easy workaround, or no workaround at all: when they go down, they are down. Apart from Microsoft's lessons learned, this is the larger concern going forward. I count a total of 12 toads, averaging 3 Cloud Load Toads for this event.
 
 

Lessons for Cloud Customers

Here are the key aspects for customers to learn from the Azure outage:

Have you built for resilience? Sure, it costs, but all major IaaS providers offer strategies for avoiding single location / data center failures. Way too many prominent internet properties did not choose to do so – and if ‘born on the web’ properties miss this, it is key to check that regular enterprises do not miss it. Uptime has a price; make it a rational decision. Now is a good time to get budget / investment approved, where warranted and needed.
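
As one small, hypothetical example of what ‘building for resilience’ can mean at the application level, the sketch below tries a secondary regional endpoint when the primary fails; the endpoint URLs and timeout are invented, and a real design would add health checks, data replication and DNS/traffic-manager failover.

```python
# Toy client-side failover across two regional endpoints of the same service.
# Endpoint URLs are hypothetical; this illustrates the idea only.
import urllib.request
import urllib.error

REGION_ENDPOINTS = [
    "https://api-eastus.example.com/orders",  # primary region (hypothetical)
    "https://api-westus.example.com/orders",  # secondary region (hypothetical)
]


def fetch_with_failover(path: str = "", timeout: float = 3.0) -> bytes:
    """Try each regional endpoint in turn; raise only if all regions fail."""
    last_error = None
    for endpoint in REGION_ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this region is unavailable, try the next one
    raise RuntimeError("All regions unavailable") from last_error
```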

Ask your IaaS vendor a few questions: Enterprises should not be shy about asking IaaS providers whether they have done a few things:
  • How do you test your power system equipment?
  • How much redundancy is in the power system?
  • What are the single points of failure in the data center being used?
  • When have you tested / taken off components of the power system?
  • How do you make sure your power infrastructure remains adequate as you are putting more load through it (assuming the data center gets more utilized)?
  • What is the expected time to be back up in case of a power failure?
  • How can we code for resilience – and what does it cost?
  • What kind of remuneration / payment / cost relief can be expected with a downtime?
  • What other single point of failure should we be aware of?
  • How do you communicate in a downtime situation with customers? 
  • How often and when do you refresh your older datacenters, power infrastructure / servers?
  • How often have you reviewed and improved your operational procedures in the last 12 months? Give us a few examples of how you have increased resilience.

And some key internal questions, customers of IaaS vendors have to ask themselves:
  • How do you and how often do you test your power infrastructure?
  • How do you ensure your power infrastructure keeps up with demand / utilization?
  • How do you communicate with customers in case of power failure?
  • How do you determine which systems to bring up and when?
  • How do you isolate power failures, and at what level, to minimize downtime?
  • Make sure to learn from AWS’s (recent) and Microsoft’s mistakes – what is your exposure to the same kind of event?


Overall MyPOV

Power failures are always tricky. IT is full of anecdotes of independent power supplies not starting – even in the case of formal tests. But IaaS vendors need to do better and learn from what went wrong with Azure. There may be a commonality with the recent AWS downtime: IaaS vendors can become the victims of their own success. AWS saw more usage of S3 systems; Microsoft may have seen more utilization of the servers attached to the failing power system setup. And CAPEX demands flow into opening new data centers versus refreshing and upgrading older data centers.
 
There is learning all around for all participants – customers using IaaS services, and IaaS providers. Redundancy always comes at a cost, and the tradeoff regarding how much redundancy an enterprise and an IaaS provider want and need will differ from use case to use case. The key aspect is that redundancy options exist, that tradeoffs are made ideally in a fully aware state of the repercussions, and that they get revisited on a regular basis.
 
Ironically, over the next few years more minor IaaS failures like this one may help push cloud resiliency up to the level where it should be, for both IaaS vendors and IaaS-consuming enterprises – as long as all parties keep learning and then act appropriately.

Stanford, MIT Researchers Develop System for Private Web Queries


Constellation Insights

There have long been options for users seeking more privacy as they browse the web, from the anti-tracking search engine DuckDuckGo to the Tor secure browser. Now teams of researchers from Stanford and MIT have developed a system they say can enable users to make website database queries—such as to look up flights or find Yelp reviews—in anonymity.

This is important because website queries can reveal a great deal of information about a visitor, as the paper's lead author noted to MIT's news service:

“The canonical example behind this line of work was public patent databases,” says Frank Wang, an MIT graduate student in electrical engineering and computer science and first author on the conference paper. “When people were searching for certain kinds of patents, they gave away the research they were working on. Stock prices is another example: A lot of the time, when you search for stock quotes, it gives away information about what stocks you’re going to buy. Another example is maps: When you’re searching for where you are and where you’re going to go, it reveals a wealth of information about you.”

Wang and his co-authors will present the system in a paper this week at the USENIX Symposium on Networked Systems Design and Implementation. 

The system is called Splinter, an aptly chosen name given how it is architected. The user runs a Splinter client that splits each query into shares and sends them to different servers hosting the same database; the client then combines the servers' responses and returns the result. According to the paper, user privacy is preserved as long as at least one server is trustworthy.
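
To illustrate the share-splitting idea only (this is not Splinter's actual FSS-based protocol), here is a toy Python sketch of a classic two-server scheme: the client sends each replica a random-looking selection vector, and only by combining both answers can it recover the requested record, while neither server alone learns which record was asked for. The tiny in-memory database and all names are hypothetical.

```python
import secrets

# Toy two-server "split query" sketch (simplified; not Splinter's real protocol).
# Both servers hold the same database; neither sees which record the client wants.
DB = [b"flight LHR-SFO", b"flight JFK-LAX", b"flight CDG-NRT", b"flight SIN-SYD"]
REC_LEN = max(len(r) for r in DB)
DB = [r.ljust(REC_LEN, b"\0") for r in DB]   # pad records to equal length

def server_answer(db, selector_bits):
    """Each server XORs together every record whose selector bit is 1."""
    acc = bytes(REC_LEN)
    for rec, bit in zip(db, selector_bits):
        if bit:
            acc = bytes(a ^ b for a, b in zip(acc, rec))
    return acc

def client_split_query(index, n):
    """Split the query for `index` into two shares, one per server."""
    share_a = [secrets.randbits(1) for _ in range(n)]   # looks random to server A
    share_b = list(share_a)
    share_b[index] ^= 1            # differs only at the wanted position
    return share_a, share_b

i = 2                                   # the record the client secretly wants
qa, qb = client_split_query(i, len(DB))
ans_a = server_answer(DB, qa)           # response from server A
ans_b = server_answer(DB, qb)           # response from server B
record = bytes(a ^ b for a, b in zip(ans_a, ans_b)).rstrip(b"\0")
assert record == DB[i].rstrip(b"\0")    # client recombines the shares locally
```

Splinter's real protocol uses function secret sharing to support richer queries (filters, aggregates, top-k) far more efficiently, but the privacy intuition is the same: as long as the servers do not collude, no single one of them sees enough to reconstruct the query.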

Splinter isn't the first idea of its kind, of course, but it promises much better performance and faster results through the use of a recently developed cryptographic primitive, Function Secret Sharing (FSS), as the paper notes:

For example, systems based on Private Information Retrieval ... require many round trips and high bandwidth for complex queries, while systems based on garbled circuits have a high computational cost. These approaches are especially costly for mobile clients on high-latency networks.

FSS is up to an order of magnitude quicker than previously developed systems and can often answer queries with only one network roundtrip, the paper adds.

The researchers tested Splinter using an academic dataset from Yelp, a public flight database and a public traffic database from New York City, and achieved no greater than a 1.6 second response time across all three applications. 

Overall, Constellation sees Splinter as a welcome tool for end users in an age where their personal data is increasingly mined for commercial gain without enough transparency or returned value. Still, both the broad data ecosystem a service like Splinter would need in order to be relevant and its commercial viability seem some way off. MIT's Wang offered this somewhat optimistic prediction:

“We see a shift toward people wanting private queries,” Wang says. “We can imagine a model in which other services scrape a travel site, and maybe they volunteer to host the information for you, or maybe you subscribe to them. Or maybe in the future, travel sites realize that these services are becoming more popular and they volunteer the data. But right now, we’re trusting that third-party sites have adequate protections, and with Splinter we try to make that more of a guarantee.”

MIT and Stanford's work appears to be very innovative, says Constellation Research VP and principal analyst Steve Wilson. "It's great to see new twists on Secret Sharing as a class of security techniques," he says. "Some of these things are provably secure in a mathematical sense, which is super valuable these days."

However, "I can't help but express some cautions," Wilson adds. "They call this a privacy solution, but really it's a secrecy solution. It stops people seeing what you're up to; it keeps your affairs hidden, but at some point you need to reveal yourself, and that's when true privacy kicks in. You need protection against misuse of your personally identifying information when someone has it.

"So in this case, there will be a splinter server—a point at which your database query gets splintered, farmed out, and the responses reassembled," he continues. "Users have to trust the splinter server to not abuse their personal information."

At this stage, "Splinter may end up becoming freeware, a gift from academia, but is it sustainable?" Wilson says. It could be very compute-intensive to run, although the researchers said their tests using Amazon Web Services found the costs to be fairly nominal.

Still, who pays? "The question of whether consumers will pay for privacy protection is vexed," Wilson says. "Consumers are usually shown to be unwilling to pay much of a premium for privacy preserving services."

The Bottom Line

"Privacy services which insert themselves into the information supply chain like this are a bit like bodyguards," Wilson says. "Perfectly understandable, but you cannot imagine a real-life situation where there is so much crime going on that everyone is encouraged to get a bodyguard. No, privacy is a public good, we all need it, it needs to be systemic, and not remedial in nature."


UK Terror Attacks Revive Encryption Backdoor Debate, But the Debate Is Changing


Constellation Insights

Last week's UK terror attack in London left more than 50 people injured and four dead. The attack shocked the world, not least because it was committed not with a sophisticated weapon but by a single man with a car and a knife. The attacker, Khalid Masood, was shot dead by police, but his methods won't soon be forgotten.

It has emerged that Masood connected to the popular messaging service WhatsApp just two minutes before the attack. Like other apps such as Signal, WhatsApp uses end-to-end encryption to secure messages. UK Home Secretary Amber Rudd has renewed calls for tech companies to create backdoors into their products in order to aid law enforcement agencies investigating crimes. In remarks to the BBC, Rudd said:

We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don't provide a secret place for terrorists to communicate with each other.

It used to be that people would steam open envelopes or just listen in on phones when they wanted to find out what people were doing, legally, through warranty.

But in this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp.

Rudd said she planned to meet with technology companies to make her case. WhatsApp said it is cooperating with authorities.

At the same time as authorities are seeking ways into encrypted services, a fresh privacy promise is spreading throughout Silicon Valley. It's best summed up by the statement "We can't see your data," says Constellation Research VP and principal analyst Steve Wilson. This idea, that messaging or storage providers could not access or decrypt a customer's data even if they wanted to, was popularized by Apple in its dispute with the FBI.

The theme recurred throughout IBM's Interconnect event last week, Wilson notes. "For one thing, there is a strong move to pervasive encryption of data both in motion and at rest, with encryption keys controlled by the client," he says. "Under these arrangements, even if a warrant is served on a cloud provider like IBM, they might not be able to furnish copies of client data without the client's permission."
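
As a concrete illustration of the "we can't see your data" model, here is a minimal Python sketch of client-side encryption with a customer-held key. It uses the widely available cryptography library's Fernet recipe; the upload_to_cloud helper is a hypothetical stand-in for any object-storage call, not IBM's or any vendor's actual API.

```python
from cryptography.fernet import Fernet

def upload_to_cloud(name, blob):
    """Hypothetical storage call: the provider only ever receives ciphertext."""
    print(f"storing {len(blob)} opaque bytes as {name!r}")

key = Fernet.generate_key()      # generated and held by the customer, never by the provider
cipher = Fernet(key)

plaintext = b"quarterly forecast - confidential"
ciphertext = cipher.encrypt(plaintext)   # encrypt before anything leaves the client
upload_to_cloud("forecast.bin", ciphertext)

# Only the key holder can recover the data later:
assert cipher.decrypt(ciphertext) == plaintext
```

Under this arrangement a provider served with a warrant can hand over only ciphertext; whether and when the data is decrypted remains a decision for whoever controls the key.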

IBM’s new Blockchain as a Service is premised on the same principles, he adds. "I haven’t seen such a focus on cryptography standards and certification for many years." IBM is advocating for FIPS 140 and Common Criteria as benchmarks for cloud security and blockchain operations, while its Bluemix High Security Business Network for the blockchain service has EAL 5+ security certification and FIPS 140 Level 3 cryptographic key storage.

"These are the highest levels of security available outside defense departments, which indicates how seriousness IBM is taking encryption," Wilson says. "Clearly this is a doubled-edged sword. Governments should welcome IBM’s and other cloud provider’s security standards, even if the logical consequences are uncomfortable."

IBM also emphasized security containerization as the means for countering insider threats. As Wilson discusses in his research report, "Protecting Distributed Private Ledgers," private blockchains operate with much smaller consensus pools than their big public forebears. "This makes them intrinsically less tamper-resistant," Wilson says. "They also have particular exposure to rogue insiders at the host data centers. Recognizing this, IBM stressed that their private blockchains feature containerized key management, so that even the most trusted systems administrators can’t get at the keys nor the contents of a client’s ledger."

Speakers amplified the point by reminding attendees "that most notorious of all insiders, Edward Snowden, was quite a lowly admin, and look what he got away with," Wilson says. 

"Now IBM didn’t quite put it this way, but in my opinion they could say to their clients, 'Hey, you don’t need to trust us,' insofar as the most critical elements of a client’s hosted system are beyond reach of the operator. As my favorite proverb goes, 'It’s good to trust but it’s better not to.' I think I’m seeing the back of trust."

It remains to be seen what type of consensus law enforcement agencies around the world and the tech industry can come to over data access. What's clear, evidenced by the trends Wilson highlights, is that the privacy debate is getting more complicated all the time.
