Results

AI - Enterprise Scale Intelligence with C3 IoT

Business expectations for the deployment of AI, together with associated Digital Business technologies, may run higher than the reality of what their Technology staff currently has the experience to deliver. Pilots, even commercially successful deployments, are by necessity focused on targeted, contained deliveries. The technologies, methods and experience will all have been chosen to reflect the requirements of that single project.

The likely outcome, absent further work, will be a dysfunctional Enterprise unable to integrate all of its resources, assets and intelligence at an Enterprise level in a cohesive manner that optimizes responses to markets and operating conditions. Sadly, as has too often been the case, Business Management will be frustrated by the failure of Technology investments to deliver on expectations.

Why is this happening again? From the start of Enterprise IT in the mid-eighties, through the succession of Internet, Web, Mobility, Social Tools and Apps, localized project successes have had a nasty tendency to turn into Enterprise-level barriers. Yet eager Business managers and technology leaders are once again embarking on project-level deployments, often supported by C-level executives. Why is the apparently accepted Enterprise Business Strategy of Digital Business transformation failing to set an appropriate Enterprise Digital Technology strategy?

Once again there is a Business and IT alignment issue, but this time it is not occurring in the usual manner. Digital Business operating models focus on ‘unbundling’, or decentralization, of activities to support flexibility in what business is done and how. Unsurprisingly, Technology managers believe they should align to this business model by optimizing deployments project by project.

It is perfectly understandable, even a necessity at one level, given the new diversity of requirements for ‘Systems of Engagement’ (as distinct from the ‘Systems of Record’ provided by current IT systems). However, to make this work there has to be a parallel, and conscious, Digital Business Technology strategy for the use of data from these new systems to develop Enterprise ‘Systems of Intelligence’ (Business Insights from Machine Learning and AI).

The words ‘use of data’ are the key to understanding what is required. This is not a call for an Enterprise Data strategy, or even for Big Data analytics as commonly understood in support of current IT systems and their associated Business Intelligence analytics. Nor, for that matter, will current methods of Enterprise Architecture deliver integrity and integration in this new Digital Business and Technologies environment. A Digital Business and Technology strategy requires three individual elements, combined and integrated at the Enterprise level, to be successful:

Enterprise IT Systems of Record (existing Enterprise Applications) create a centralized record of transactions as ‘stateful’ historic data in defined formats, arising from an architecture of closely coupled Enterprise Applications. Enterprise-scale challenges here usually refer to the size of the resulting databases, together with their management and analytics.

Digital Business Systems of Engagement (Apps from IoT, Mobility, Social tools, etc.) refer to loosely coupled integration of sets of activities at the Edge of the Business, producing huge flows of ‘stateless’ data on real events as they happen. This data is the foundation of the new era of Digital Business and has little in common with Systems of Record. The challenge of scale here refers to data volumes flowing from connected devices and systems at levels that overwhelm traditional methods of storage and analytics.

Enterprise Systems of Intelligence use ‘stateless’ data to provide the fundamental capabilities that enable Digital Business insights, read-and-react responses, and the introduction of Machine Learning and AI. Digital Business stateless data has the power to corrupt Enterprise IT stateful data, and the role of Systems of Intelligence is to provide an integration and processing capability that delivers the genuine innovation Machine Learning and AI bring to Enterprise operations.

The Constellation Research Digital Transformation Survey of October 2017 identified that, whilst Enterprise CxO management has a clear commitment to enacting a Digital Transformation, there is a distinct lack of clarity as to what this actually means. The reality is that many Enterprises are still struggling to come to terms with the successful deployment of IoT-driven projects, modernization of their existing IT systems to run on the Cloud, or integration of Social CRM.

These ‘Digital Strategies’, driven by a very real sense of the speed with which market and competitive change is occurring, are creating intense activity and investment. Unfortunately, they are also a breeding ground for potential internal Digital Disruption of operations, which will see Enterprise management struggling to reconnect and rebuild their internal operational coherence in order to face the all too real external Digital Disruption of their markets.

A genuine joint Enterprise Business and Technology strategy is required to ensure Systems of Engagement develop to empower the Enterprise Systems of Intelligence that are the key to Digital Transformation of Enterprise capabilities.

In this, the Digital Technology era, a Cloud Platform would appear to be the immediate answer, but as in so many other aspects of the Digital Transformation, some care needs to be taken over the casual use of terminology. The traditional definition of a ‘Platform’ refers to the capability to provide a set of common technology functions that simplify the tasks of integration and processing; as such, it is the route commonly chosen by Technology management. Though inherently a technology choice, many platforms offer some additional Business value, but the technology simplification is usually offset by constraints on Business requirements.

N.B. Internal Integration Platforms should not be confused with external Business Ecosystems Platforms, though similar care must be exercised in choices. See Constellation blog Open Business Ecosystems or Closed Technology Platforms.

Systems of Intelligence constantly discover and create innovative business insights by continually refining dynamic, unpredictable event inputs against historic data. Much of this new value comes from finding previously unforeseen, or unplanned, relationships in the data flows, a very different processing activity from traditional Systems of Record deployments using data integration and analytical tools.

This new requirement, combining simplified abstraction of core technology elements with complex integration, processing and automation of responses, is more closely aligned to the definition of a Software Engine than to an Integration Platform. As Software development has been simplified over recent years, often through the use of a Platform in one form or another, the thoughtful, in-depth functional specification that evokes the requirement for an ‘Engine’ to power a series of complex software elements has become sharply less common.

The following description of a Software Engine fits the requirement for Systems of Intelligence more closely, whereas the requirements for Systems of Engagement integration and connectivity are well suited to Platform technology.

An ‘Engine’ is defined as an application program that coordinates the overall operation of other programs. Systems of Intelligence are the conglomeration of a dynamic environment of Business-defined values and insights, integrated regardless of technology definitions and formats into cohesive, optimized Enterprise and/or Edge outputs.

Perhaps it would be better to think of an Enterprise as requiring a Digital Business Engine to operate successfully as an intelligent Digital Enterprise, by providing the necessary continuous business ‘agility’ from Systems of Intelligence.

The first wave of Digital Pioneers, or early adopters, have already reached the stage where these issues have become clear and the need to find a solution has become pressing. Interestingly, many of the standard business-case references for Digital Transformation have started deployments with C3 IoT.

The following is based on a briefing given by C3 IoT to Constellation Research.

The C3 IoT Platform™ is a platform as a service (PaaS) for the design, development, deployment, and operation of next-generation AI and IoT applications and business processes. The applications apply advanced machine learning to recommend actions based on real-time analysis of petabyte-scale data sets, dozens of enterprise and extraprise data sources, and telemetry data from tens of millions of endpoints.

C3 IoT provides a suite of pre-built, cross-industry applications, developed on its platform, that facilitate IoT business transformation for organizations in energy, manufacturing, aerospace, automotive, chemical, pharmaceutical, telecommunications, retail, insurance, healthcare, financial services, and the public sector. C3 IoT cross-industry applications are highly customizable and extensible. Pre-built applications are available for predictive maintenance, sensor health, enterprise energy management, capital asset planning, fraud detection, CRM, and supply network optimization. Customers can also use the C3 IoT Platform to build and deploy their own custom applications.

Operationalizing IoT is much harder than it looks. Many IoT platform development efforts to date – internal development projects as well as industry-giant development projects – are attempts to develop a solution via acquisition of multiple piece parts or from the many independent software components that are collectively known as the open-source Apache Hadoop stack. Despite the marketing claims surrounding these projects, a close examination suggests that there are few examples, if any, of enterprise production-scale, elastic cloud, big data, artificial intelligence, and machine learning IoT applications that have been successfully deployed in any vertical market except for applications addressed with the C3 IoT Platform.

Companies often expect that installing the Apache Hadoop open-source stack will enable them to establish a “data lake” and build from there. However, the investment and skill level required to deliver business value from this approach quickly escalates when developers face hundreds of disparate software components in various stages of maturity, designed and developed by more than 350 different contributors using different programming languages while providing incompatible software interfaces. A loose collection of independent, open-source projects is not a true platform, but rather a set of independent technologies that need to be somehow integrated and maintained by developers. Instead, companies need a comprehensive AI and IoT application development platform. To avoid this increasingly common pitfall, C3 IoT leverages a model-driven architecture approach.

Model-Driven Architecture

The architecture requirements for the Systems of Intelligence that are the key to the digital transformation of enterprises are uniquely addressed through a model-driven architecture. Model-driven architectures define software systems using platform-independent models, which are then translated into one or more platform-specific implementations. The C3 IoT Platform is a proven model-driven architecture.

This architecture abstracts application and machine learning code from the underlying platform services and provides a domain-specific language (annotations and expressions) to support highly declarative, low-code application development.

The model-driven approach provides an abstraction from the underlying technical services (for example, queuing services, streaming services, ETL services, data encryption, data persistence, authorization, authentication) and simplifies the programming interface required to develop AI and IoT applications to a Type System interface.

The model is used to represent all layers of an application, including the data interchange with source systems, application objects and their methods, data aggregates on those objects, complex features representing business and application logic, AI and machine learning algorithms that use those features, and the application user interface. Each of these layers is also accessible as a microservice.
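To make this concrete, here is a minimal sketch, in Python, of what a declarative, model-driven application definition might look like. It is an invented illustration of the concept, not C3 IoT's actual domain-specific language; every name and field below is hypothetical.

```python
# Hypothetical sketch of a model-driven, declarative application definition.
# Not C3 IoT's actual DSL: it only illustrates describing an application as
# a model that a platform engine interprets and provisions.

SMART_METER_MODEL = {
    "type": "SmartMeter",
    "source": {"system": "scada_feed", "mapping": {"meter_id": "id", "kwh": "reading"}},
    "fields": {"id": "string", "reading": "float", "timestamp": "datetime"},
    "aggregates": {"daily_kwh": "sum(reading) group by day"},
    "features": {"usage_anomaly": "zscore(daily_kwh, window=30)"},
    "ml": {"classifier": "MeterFraudModel", "inputs": ["usage_anomaly"]},
}

def deploy(model: dict) -> None:
    """A platform 'engine' would translate each declared layer into concrete
    services: ETL jobs, storage schemas, feature pipelines, scoring endpoints."""
    for layer in ("source", "fields", "aggregates", "features", "ml"):
        print(f"provisioning {layer}: {model[layer]}")

deploy(SMART_METER_MODEL)
```

The point of the pattern is that the developer states what the application is, and the platform decides how to realize it on the underlying services.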

The C3 IoT Platform is a proven example of a model-driven AI platform, allowing small teams of five to ten application developers and data scientists to collaboratively develop, test and deploy large-scale production AI applications in one to three months. The platform is proven across 30 large-scale deployments in industries including energy, manufacturing, aerospace and defense, healthcare, financial services, and more. A representative large-scale deployment processes AI inferences at a rate of a million messages per second against a petabyte-sized unified, federated cloud data image aggregated from 15 disparate corporate systems and a 40-million-sensor network. Global 1000 organizations have successfully used the platform to reach full-scale production in six months and to complete enterprise-wide digital transformations with more than 20 AI applications in 24- to 36-month timeframes.

Time to market is critical as next-generation computing platforms emerge. C3 IoT is a proven, scalable, production and development environment with dozens of large-scale IoT applications deployed in the market, managing tens of millions of smart, sensor-enabled devices. The time-to-market advantage of a proven, scalable architecture can be leveraged to gain early network effects and competitive differentiation in the next big wave of computing and industrial automation.

C3 Type System – a detailed review provided by C3 IoT

The C3 Type System is a data object-centric abstraction layer that binds the various C3 IoT Platform components, including infrastructure and services. It is both sufficient and necessary for developing and operating complex predictive analytics and IoT applications in the cloud.

The C3 Type System is the medium through which application developers and data scientists access the C3 IoT Platform, the C3 Data Lake, C3 Applications, and their own applications and microservices. Examples of C3 Types include data objects (e.g., customer, product, supplier, contract, or sales opportunity) and their methods, application logic, and machine learning classifiers.

The C3 Type System allows programs, algorithms, and data structures – written in different programming languages, with different computational models, making different assumptions about the underlying infrastructure – to interoperate without knowledge of the underlying physical data models, data federation and storage models, interrelationships, dependencies, or the bindings between the various structural platform or cloud infrastructure services and components (e.g., RDBMS, NoSQL, ETL, Spark, Kafka, SQS, Kinesis, object models, classifiers, data science tools, etc.). The C3 Type System provides RESTful interfaces and programming language bindings to all underlying data and functionality.

Leveraging the C3 Type System, application developers and data scientists can focus on delivering immediate value, without the need to learn, integrate, or understand the complexities of the underlying systems. The C3 Type System enables programmers and data scientists to develop and deploy production big data, predictive analytics, and IoT applications in one-tenth the time at one-tenth the cost of alternative technologies.

To improve manageability, Types support multiple object inheritance, allowing objects to inherit characteristics from one or more other objects. For example, a mixed-use building might have characteristics of both a residential and commercial use building.
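As an analogy only, the mixed-use building example can be expressed with Python's multiple inheritance. C3 Types are declarative platform objects rather than Python classes, so treat this purely as an illustration of the inheritance idea; the class and method names are invented:

```python
# Illustration of multiple inheritance using the mixed-use building example
# from the text; names are invented.

class ResidentialBuilding:
    def occupancy_types(self):
        return ["residential"]

class CommercialBuilding:
    def occupancy_types(self):
        return ["commercial"]

class MixedUseBuilding(ResidentialBuilding, CommercialBuilding):
    # Inherits characteristics from both parent types.
    def occupancy_types(self):
        return (ResidentialBuilding.occupancy_types(self)
                + CommercialBuilding.occupancy_types(self))

print(MixedUseBuilding().occupancy_types())  # ['residential', 'commercial']
```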

The Type System, through inherent dataflow capabilities, automatically triggers the appropriate processing of data changes by tracing implicit dependencies between objects, aggregates, analytic features, and machine learning algorithms in a directed acyclic graph.
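The dataflow behavior can be sketched generically: given a dependency graph, a change to one node triggers recomputation of everything downstream of it, in topological order. The sketch below shows the general technique with Python's standard graphlib module; it is not C3's implementation, and the node names are invented:

```python
# Generic DAG dataflow: when an input changes, recompute all dependents in
# dependency order. Uses Python 3.9+'s standard-library graphlib.
from graphlib import TopologicalSorter

# node -> set of nodes it depends on
DEPS = {
    "raw_reading": set(),
    "daily_kwh": {"raw_reading"},
    "anomaly_feature": {"daily_kwh"},
    "fraud_score": {"anomaly_feature"},
}

def downstream_of(changed: str) -> list:
    """Return every node affected by a change, in recomputation order."""
    order = list(TopologicalSorter(DEPS).static_order())
    affected = {changed}
    for node in order:
        if DEPS[node] & affected:   # depends on something already affected
            affected.add(node)
    return [n for n in order if n in affected and n != changed]

print(downstream_of("raw_reading"))
# -> ['daily_kwh', 'anomaly_feature', 'fraud_score']
```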

The Type System is accessible through multiple programming language bindings (i.e. Java, JavaScript, Python, Scala, and R) and Types are automatically accessible through RESTful interfaces allowing interoperability with external systems.
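For a sense of what RESTful access to a Type might look like from outside the platform, here is a hypothetical call using Python's requests library; the endpoint URL, path and payload shape are invented for illustration and are not C3's published API:

```python
# Hypothetical REST call against a type-system endpoint. Only requests.post
# is real; the URL and payload are invented placeholders.
import requests

resp = requests.post(
    "https://platform.example.com/api/1/SmartMeter/fetch",
    json={"filter": "reading > 100", "limit": 10},
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
print(resp.json())
```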

Model-Driven Architecture Abstracts Underlying Platform Services Through a Simple Type System Interface

Summary

The C3 IoT Platform (and the associated C3 Type System) is a unique high-productivity, low-code application PaaS for rapidly developing and deploying AI and IoT applications at scale across an enterprise. The C3 IoT Platform has been developed and hardened through numerous large-scale deployments over 9 years at an investment of $300 million. The C3 IoT Platform is proven to support Enterprise Digital Transformations.

Capitalizing on the potential of AI and IoT requires a new kind of technology stack that can handle the volume, velocity, and variety of big data and operationalize machine learning at scale. Existing attempts to build an IoT technology stack from open-source components have failed—due to the complexity of integrating hundreds of software components developed with disparate programming languages and incompatible software interfaces. C3 IoT has successfully developed a comprehensive technology stack from scratch for the design, development, deployment, and operation of next-generation applications and business processes.

With this approach, developing AI applications requires less code to be written, and less code to be debugged and maintained, significantly reducing delivery risk and total cost of ownership. Using the C3 IoT Platform, a company’s investment in application code is abstracted from underlying infrastructure and platform services and future-proofed against rapidly evolving software technologies, avoiding lock-in to those technologies. Further, the C3 Type System provides enterprise leverage: any published code (entity, method, aggregate, feature, ML algorithm, UI) is instantly available to the rest of the organization.

C3 IoT believes its platform is the industry’s only IoT platform proven in full-scale production, with hundreds of millions of sensors under management and more than 20 enterprise customers reporting measurable ROI, including improved fraud detection, increased uptime as a result of predictive maintenance, improved energy efficiency, and stronger customer engagement.

 


Digital Transformation Digest: Target Buys Shipt for Same-Day Delivery, NVIDIA Trains AI At Construction Site Safety, Microsoft Pushes Azure Cost Envelope Again

Constellation Insights
 

Target buys same-day shipping startup: Hoping to both fend off Amazon's challenge and keep pace with rival Walmart, Target is buying same-day delivery platform provider Shipt for $550 million. Target says Shipt will give it same-day delivery at half of its stores by early in the new year, and in every major market by late next year.

Shipt will become a subsidiary of Target but will run independently and still seek deals with other retailers. At first, Target customers can get same-day deliveries of grocery items, home products, electronics and some other categories, with an expansion coming over the next couple of years.

The startup, based in Alabama and San Francisco, has a network of 20,000 personal shoppers who fulfill customer orders in 72 markets. As of right now, Shipt's coverage area is centered in the South, Mid-Atlantic, part of the Midwest and Texas. Memberships are $99 for an annual plan, or $14 for a month-to-month plan. Deliveries are free on orders over $35 and can be completed in less than one hour, according to Shipt's site. Delivery drivers can earn around $25 per hour.

POV: Target's move comes just a few months after it acquired Grand Junction, another delivery-related startup. Grand Junction's platform has a different focus, however. It can be used to coordinate and optimize deliveries performed by a retailer's own employees, as well as connect them to some 700 local contract carriers. Shipt's model has more of a gig economy makeup, but it will be interesting to see how Target blends the two companies' capabilities going forward.

Meanwhile, Target's ambitious-sounding plans for increased Shipt availability sound like a must, given its absence in major markets such as the Northeast and California. While there will be an awareness gap at first, Target's retail footprint, digital channels and loyalty programs could close it rather quickly. Overall, same-day delivery is becoming table stakes for many retailers, and Target is showing willingness to make serious investments in it.

But Target's competitors are hardly standing still. Walmart, which is experimenting with rapid delivery on multiple fronts, recently acquired Parcel, a New York delivery startup with a different twist than either Shipt or Grand Junction. Parcel uses its own employees and leased trucks to make deliveries from a warehouse in Brooklyn to locations around the city. For Walmart, buying Parcel was a way to test out same-day delivery in one of the most difficult markets to maneuver in from a logistics standpoint.

Meanwhile, Amazon unsurprisingly isn't letting off the gas either. On the same day as Target's announcement, Amazon said that free same-day and one-day shipping for its Prime members has been expanded to 8,000 U.S. cities and towns.

NVIDIA, Komatsu eye AI for safer jobsites: Chipmaker NVIDIA, which has moved deeper into software and artificial intelligence in recent years, is working with construction equipment manufacturer Komatsu on technology aimed at making work sites safer places. Here are the key details from their announcement:

The partnership – described at GTC Japan by NVIDIA founder and CEO Jensen Huang – will focus on Komatsu using NVIDIA GPUs to visualize and analyze entire construction sites. The NVIDIA® Jetson AI platform will serve as the brain of heavy machinery deployed on these sites, enabling improved safety and productivity.

NVIDIA GPUs will communicate with drones and cameras on the construction sites, acting as an AI platform for analysis and visualization. SkyCatch will provide drones to gather and map 3D images for visualizing the terrain at the edge. OPTiM, an IoT management-software company, will provide an application to identify individuals and machinery in footage collected from surveillance cameras. Both of these Komatsu partners are also members of NVIDIA’s Inception program for AI startups.

At the center of the collaboration is NVIDIA Jetson, a credit-card sized platform that delivers AI computing at the edge. Working in tandem with NVIDIA cloud technology, Jetson will power cameras mounted on Komatsu’s construction equipment and enable 360-degree views to readily identify people and machines nearby to prevent collisions and other accidents.

NVIDIA and Komatsu will hit at other pain points in the construction industry beyond safety. In Japan, construction workers are in high demand because of the country's aging population; to that end, Komatsu and NVIDIA's objective is also to make job sites more productive.

POV: The project, which builds upon jobsite safety measures Komatsu has been working on since 2015, has many moving pieces. This reflects both its complexity and the breadth of NVIDIA and Komatsu's plans. Construction is just the latest industry move for NVIDIA, which has pushed strongly into autonomous vehicles, healthcare and robotics.

NVIDIA's Jetson developer kit for embedded applications uses the same Kepler GPU core found in supercomputers. GPUs contain thousands of smaller cores that are well-suited for massive parallel processing of deep learning workloads.

NVIDIA has been working on AI for nearly 10 years and has developed a vast set of related libraries and frameworks. This year, it introduced NVIDIA GPU Cloud, a unified software stack that runs on a PC, a more powerful DGX system, or in the cloud.

A related focus has been on education; NVIDIA has said it will train 100,000 developers on deep learning this year. The deal with Komatsu represents the type of broad industry use case for AI that can start putting those developers to work sooner rather than later.

Microsoft pushes Azure cost envelope again: If there is a perennial trend in the IaaS and PaaS industry, it's a steady stream of cost cuts as rivals seek to remain competitive. This week, Microsoft introduced four new capabilities and pricing changes for Azure, with the most prominent being Azure Policy.

The service, now in public preview, allows customers to apply governance over their Azure resources:

Azure Policy allows you to turn on built-in policies or build your own custom policies to enable company-wide governance. For example, you can set your security policy for your production subscription once and apply that policy to multiple subscriptions.
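For illustration, Azure Policy definitions are authored as JSON with an "if"/"then" policy rule. The sketch below expresses a hypothetical rule, denying resources created outside approved regions, as a Python dict mirroring that documented structure:

```python
# A minimal Azure Policy rule shape, shown as a Python dict for illustration
# (real policies are JSON). This hypothetical rule denies resources outside
# two approved regions.
import json

policy_rule = {
    "if": {
        "field": "location",
        "notIn": ["eastus", "westeurope"],
    },
    "then": {"effect": "deny"},
}

print(json.dumps(policy_rule, indent=2))
```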

Microsoft is also expanding support for its Cost Management service to Azure Virtual Machine Reserved Instances, cutting prices on Dv3 VMs in some regions, and making its Azure Archive low-cost storage generally available.

POV: None of the announcements is earth-shattering on its own, but they all hold some importance for Azure customers. Microsoft wants to grow Azure workloads and corresponding spend as much as possible, and price cuts are one way to entice more consumption. But a steady focus on factors such as cost optimization and good governance also drives more value for customers and demands similar efforts from competitors.


Digital Transformation Digest: Box Eyes Bigger Enterprise Consulting Role, How Chaos Can Prevent Cloud Outages, IBM's Enterprise Chatbot Exchange

Constellation Insights

Box believes content management is a crucial component of digital transformation projects, and to that end it has launched a new consulting service called Transform. The service provides customers who buy in with a dedicated and long-term consultant who will work with them on issues much broader than implementing Box software. Here's how Box describes Transform:

Integrating digital initiatives to accelerate organization-wide transformation: Box Transform provides enterprises with a strategic IT advisor to help them go beyond traditional file sharing practices, including implementing paperless strategies, digitizing time consuming processes like HR onboarding, building out custom applications with Box Platform, and retiring costly legacy infrastructure like network file shares.

Deploying agile methodologies: Projects through Box Transform consist of iterative sprints that include phases of planning, execution and retrospection, with the goal of increasing efficiency and speed to results, so that processes are reimagined in months, not years.

Developing long term content strategies: Box’s team of product and domain experts helps you develop strategic and actionable content strategy roadmaps to centralize content layers and support ROI goals.

POV: Pricing wasn't disclosed for Transform services, which build upon the consulting practice Box formed in 2013. That was a telling development in Box's push toward enterprise business, and Transform represents the next natural step on that journey. The question is whether Box can make a compelling enough case to customers that Transform can truly deliver value outside the company's core domain expertise. Issues around identity management and compliance with respect to content are two areas where Box consultants could do so, notes Constellation VP and principal analyst Alan Lepofsky.

Box, founded in 2005, does have the benefit of having helped thousands of customers move technology and processes to the cloud and has been steadily notching up more large enterprise deals, including with AstraZeneca and General Electric. Still, it's a crowded market for digital transformation consulting services and Box has its work cut out for it.

Gremlin targets cloud outages with chaos engineering: Outages are a fact of life in the cloud, and they're not only inconvenient but incredibly costly to both providers and the customers who rely on their services. Now a startup just out of stealth called Gremlin wants to make outages a thing of the past—or at least dramatically less frequent—through a technique called chaos engineering.

It's led by Kolton Andrus, a former engineer at Amazon and Netflix, the latter of which is known for its use of chaos engineering. Netflix built a tool called Chaos Monkey that randomly takes parts of its production system offline to see how the rest of it responds, giving engineers insight into how to build more resiliency. Chaos Monkey is now part of a larger family of Netflix tools known as Simian Army.

Gremlin's tool acts in a similar manner; the company refers to it as an "engineering flu shot," wherein companies can "safely inject failure into systems in order to proactively identify and fix unknown faults."

The distributed systems found in cloud services are inherently problematic, Gremlin contends in its announcement:

Previously, software ran in a controlled, bare metal environment that introduced few variables, making it possible for engineering teams to identify potential risk and failures before they occurred. Within the last decade, systems have shifted to the cloud and become distributed with microservices and serverless methodologies, which introduced new dependencies on services outside of one’s control - creating complexity for any team of engineers to fully understand. This makes failure and outages inevitable.

Gremlin’s tool allows engineers to see how the system will behave in the face of failure, validates that defenses will work to prevent outages, minimizes the blast radius to allow for safe experimentation in production, and saves time and resources for engineering teams.
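The core idea can be conveyed with a toy example: wrap a service call so that latency and failures are injected at random, then verify that callers degrade gracefully. This is a generic sketch of the technique, not Gremlin's product; all names are invented:

```python
# Toy chaos experiment: randomly inject latency and failure into a call
# so that fallback behavior can be exercised and verified.
import random
import time

def chaotic(func, failure_rate=0.2, max_delay=2.0):
    """Wrap a callable with random added latency and random failures."""
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0, max_delay))  # inject latency
        if random.random() < failure_rate:        # inject failure
            raise ConnectionError("chaos: injected fault")
        return func(*args, **kwargs)
    return wrapper

@chaotic
def fetch_recommendations(user_id):
    return ["item-1", "item-2"]

# A resilient caller survives injected faults by falling back to a default.
try:
    print(fetch_recommendations("u42"))
except ConnectionError:
    print([])  # graceful fallback keeps the page rendering
```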

POV: Gremlin has raised a total of $8.75 million in seed and Series A funding, so it's certainly early days for the company in that respect. But it has already managed to sign up some high-profile early customers, including Expedia and Twilio. Gremlin has clearly taken inspiration from companies that understand massively scalable systems, and its tool may quickly find a place in many enterprises' operations.

IBM launches Bot Asset Exchange: Big Blue is hoping to draw more chatbot developers to its Watson Conversation service through a new portal called the Bot Asset Exchange. Here are the key details from IBM's announcement:

The Bot Asset Exchange leverages open source development to help conversational interface developers, including bot, voicebot, IoT, and virtual reality developers easily discover, quickly configure, and simply deploy bots. Users can find domain-specific conversation logic ready for them to use, leverage the creativity of a community of bot builders to discover innovative ways others have built bots, or create and contribute their own bot conversation logic.

Using the platform tools and participating in the community is incentivized by a point system that offers rewards, recognition and prizes such as IBM-branded merchandise, tickets to IBM events like Index – San Francisco, one-on-one meetings with the Digital Business Group’s product team, and social media mentions.

POV: The portal is already stocked with a healthy number of industry-specific bots, with use cases including IT support, travel booking, online banking, equity trading, personal finance and property management. The bots are dependent on Watson Conversation for the back-end conversation processing but there are no restrictions on how they can be designed or where they can be published. The site is in its early stages but is well-designed; the challenge will be raising awareness, getting a critical mass of developers engaged, and ensuring good governance over the bot content.


Digital Transformation Digest: Microsoft Quantum Appeal to Developers, Google Cloud's Ecosystem Traction, Internet Pioneers' Net Neutrality Hail Mary

Constellation Insights

Microsoft looks to seed quantum developer base: In September, Microsoft laid out its quantum computing vision, which included the unveiling of Q#, a new programming language. Now Microsoft is hoping to get its large base of Visual Studio developers working on quantum projects with the release of a counterpart toolkit.

Classical computers are binary, storing bits as either a one or a zero. But quantum systems take advantage of the behavior of subatomic particles, which can hold multiple states simultaneously in a phenomenon known as superposition, and that stands to give quantum systems vast amounts of processing power.

The new toolkit is "deeply integrated" with Microsoft's Visual Studio IDE, comes with a set of libraries and tutorials, and also includes a quantum simulator that runs on a laptop. For bigger quantum projects, Microsoft has a simulator that runs on Azure. Any quantum applications written with the kit will be future-proofed in a sense, as they'll work on general-purpose quantum hardware now under development at Microsoft.
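Superposition itself can be illustrated in a few lines of plain Python: applying a Hadamard gate to a qubit in the |0> state produces equal probabilities of measuring 0 or 1. This is a generic state-vector sketch for intuition, not Microsoft's Q# toolkit:

```python
# Tiny state-vector simulation: a Hadamard gate puts a |0> qubit into an
# equal superposition of 0 and 1.
import math

ket0 = [1.0, 0.0]  # the |0> state
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [sum(H[i][j] * ket0[j] for j in range(2)) for i in range(2)]
probs = [amp ** 2 for amp in state]
print(probs)  # ~[0.5, 0.5]: equal chance of measuring 0 or 1
```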

POV: Microsoft has been working on quantum computing for more than 10 years. Still, it is playing catchup somewhat to rivals such as IBM, which has already pledged to bring commercial quantum systems to market within just a few years. Google says it is getting close to reaching "quantum supremacy," referring to a quantum system that can complete a task faster than the world's most powerful classical supercomputers.

But giving millions of developers access to quantum tooling and educational resources now, consumable from within the familiar confines of Visual Studio, is a long and smart play from Microsoft. There are fundamental conceptual differences in programming a quantum system that the vast majority of developers will need to wrap their heads around. The toolkit and language, which will surely be continuously refined, provide an abstraction layer that gives developers a head start.

Google Cloud Platform plays catchup on MSP ecosystem: In a move that both suggests increased interest from enterprises and acknowledges their needs, Google Cloud Platform is steadily increasing the number of managed service providers in its orbit.

In March, Google announced that Rackspace would be GCP's first MSP; today, that number has grown to 12. While not a jaw-dropping total, it's nonetheless progress for GCP's ecosystem and a sign that partners are becoming more willing to make their own substantial investments in building out GCP practices. Here's how Google describes the MSP program:

From hands-on support to the ongoing operation of customer workloads, these partners offer proactive services to both large and small cloud adopters. With their staff of dedicated technical experts, MSPs can tackle high-touch projects, covering engagement to migration and execution, to post-planning and ongoing optimization. Specifically, Google Cloud MSPs offer at minimum:

Consulting, assessment, implementation, monitoring and optimization services 

24x7x365 support with enterprise-grade SLAs 

L1, L2, L3 tiered support models 

Certified support engineers

One big addition to the ranks is Accenture. The rest is a mix of smaller companies: Cascadeo, Claranet, Cloudreach, DoIT International, Go Reply, Pythian, RightScale, SADA Systems, Sutherland and Taos.

POV: A robust MSP ecosystem is a proof point that a platform has matured and has market traction. As for GCP, 12 MSPs on board is certainly better than one, but to compete for more enterprise business Google will need to grow the ecosystem significantly, both from an expertise and geographic availability perspective. Google says more MSP partner announcements are coming soon.

Internet pioneers throw a Net Neutrality hail Mary: Later this week, the U.S. Federal Communications Commission's board is expected to overturn net neutrality regulations along party lines. The vote seems inevitable (although net neutrality proponents have a number of options to pursue next), but a group of 21 well-known technologists are asking members of Congress to step in at the eleventh hour.

The group, which includes Internet pioneer Vint Cerf and Apple co-founder Steve Wozniak, has written a letter to members of the House and Senate committees responsible for technology-related matters, with the rather tart title, "Internet Pioneers and Leaders Tell the FCC: You Don’t Understand How the Internet Works." Here is an excerpt from the letter:

This proposed Order would repeal key network neutrality protections that prevent Internet access providers from blocking content, websites and applications, slowing or speeding up services or classes of service, and charging online services for access or fast lanes to Internet access providers’ customers.

It is important to understand that the FCC’s proposed Order is based on a flawed and factually inaccurate understanding of Internet technology. These flaws and inaccuracies were documented in detail in a 43-page-long joint comment signed by over 200 of the most prominent Internet pioneers and engineers and submitted to the FCC on July 17, 2017.

Despite this comment, the FCC did not correct its misunderstandings, but instead premised the proposed Order on the very technical flaws the comment explained. The technically-incorrect proposed Order dismantles 15 years of targeted oversight from both Republican and Democratic FCC chairs, who understood the threats that Internet access providers could pose to open markets on the Internet.

POV: Net neutrality bars ISPs from slowing legal Internet traffic based on payments or other considerations. The rules passed in 2015 determined that Internet service should be governed under Title II of the Communications Act, a law that dates to 1934. Opponents argue that the rules overreach and are anticompetitive.

As for the group's letter, it's highly unlikely to have any effect on the vote but does serve to bring attention to the issue. It's a sure bet that once the FCC votes, net neutrality proponents will file a lawsuit and it's conceivable that the rule change will be stayed by the court pending an outcome. This debate is far from over.


3 Ways Employees Will Benefit From Digital Transformation in 2018

From Baby Boomers to Gen Z, today’s workplace contains a mixture of generations. Although each has grown up with very different technological and cultural experiences, all face similar challenges at work, like information overload and having to stay up-to-date with technology that’s constantly changing. But all is not lost! The future of work is an exciting one which will leverage new tools, technologies and techniques to help people get work done.

At Constellation Research, three of the top areas we’re tracking around employees in the digital workplace are:
1. using technology to augment how teams accomplish work,
2. using data to guide actions and prioritize projects and
3. using technology to encourage more creativity among teams. 

Here are some of the things we’re observing.

Augmenting our ability to get more done

No longer a thing of the future, AI is already all around us in a big way—powering the voice input on our phones or the content in our news streams.

While conversations about AI often turn to science fiction, the reality for knowledge workers is that AI is already enhancing how they work, and will continue to do so. We’re already seeing email clients that recommend replies, calendars that automate meeting scheduling, and video services that transcribe content.

The way we create, consume and interact with content is also changing. Legacy whiteboards in meeting rooms are being replaced by large, intelligent and interactive screens that allow people to collaborate whether they're in the same room or across the world. Augmented and virtual reality are moving beyond science fiction (and gaming) to mainstream use cases such as education, product design and retail. While today’s headsets may be cumbersome, soon augmented reality will be everywhere, turning any clear surface into a potential display.

In addition, new input methods including voice dictation and gesture recognition (hands and face) are allowing us to interact with our devices in new ways. I actually wrote a lot of this post by speaking out loud to my phone. 

Using data to derive insights and guide actions

How many miles have you flown this year? How many steps have you taken today? Our personal lives are filled with measurements of our accomplishments and actions. Everything is quantified. But can you say the same for work?

Imagine if you could understand which social media posts are most effective or which meetings lead to more customer wins. We don’t always have the information we need at work to help us be more effective employees. In order to provide employees with meaningful information, data needs to be collected and patterns need to be discovered. But the fragmentation of work across social networks, file sharing, web conferencing and business applications creates quite a challenge.

The solution requires charting the interactions between people, content and devices. These collections are called “graphs” in computer science, and they reveal things like who people work with and what content they interact with. This information can be used to discover patterns, leading to insights about the way people work. In turn, this data can help employees better determine what work should be prioritized and what can be postponed.
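A minimal sketch of such a work graph, with invented names and data, shows how simple traversals surface who collaborates with whom:

```python
# Toy "work graph": edges connect people to the content they touch;
# traversing shared content reveals collaborators. Data is invented.
from collections import defaultdict

edges = [
    ("alice", "Q3-deck.pptx"), ("bob", "Q3-deck.pptx"),
    ("alice", "budget.xlsx"), ("carol", "budget.xlsx"),
]

touched_by = defaultdict(set)
for person, item in edges:
    touched_by[item].add(person)

def collaborators(person):
    """People who worked on any of the same content."""
    return {p for people in touched_by.values()
            if person in people for p in people} - {person}

print(collaborators("alice"))  # {'bob', 'carol'}
```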

Everyone becomes a storyteller

Think about the types of content people use at work: email, chat, documents, spreadsheets, presentations. Compare that to your personal life which is probably dominated by photos and videos. Wouldn’t it be nice if we had a similar level of fun and creativity at work? 

In the past, creating compelling graphics or videos was limited to professionals. Today, almost anyone with a camera phone can start creating highly visual content. Most camera applications provide lenses, filters, stickers and other digital tricks to enhance pictures. Some take gorgeous panoramic images and some even create 360-degree content. Conversations in group messaging applications now include emojis and animated gifs. Photo-sharing sites can automatically create collages from our best images.

These advances in storytelling are starting to show up in the workplace as well, enabling marketers to create more effective presentations, financial workers to create visually informative spreadsheets and sales people to pitch products with more engaging content. The days of boring content at work are coming to an end.

Delivering in the digital workplace

We’ve witnessed incredible advancements in the tools we use at work over the past 20 years. However, these pale in comparison to what the next decade will be like. The future of work is going to empower employees regardless of skillset or seniority.

If you're ready to embrace the changes and become a digital employee, have your holographic assistant connect with mine so we can discuss this further! ...Or at least take advantage of some of the auto-scheduling features cropping up in your Calendar app.


Digital Transformation Digest: Kubernetes Can't Be Contained, Cisco Eyes Cloud Cost Management, Oracle and MongoDB Set for Earnings Showdown

Constellation Insights

Kubernetes can't be contained: Even if you're not a DevOps type, it's likely you've heard of Kubernetes, the open-source container orchestration platform that has become the industry standard in just a couple of years. Kubernetes originated at Google, which had used it and previous incarnations of the idea to run its own operations. That's undoubtedly one reason for Kubernetes' rapid start out of the gate when Google open-sourced it in 2014.

Containers are lightweight packages that include everything an application needs to execute—binaries, config files and so forth—so they can run the same across different environments and systems. Kubernetes handles the job of deploying and managing armies of containers, which offer benefits for developers, IT operations staff as well as end-users in the form of stability and performance.
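For a concrete taste of what deploying containers at scale involves, the sketch below asks a cluster to keep three replicas of a container running, using the official Kubernetes Python client. It assumes the client is installed and a working kubeconfig is available; the image and names are arbitrary examples:

```python
# Minimal sketch: declare a Deployment of three nginx replicas and submit it
# with the official `kubernetes` Python client. Kubernetes then converges the
# cluster toward this desired state.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # the orchestrator keeps three copies alive
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```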

Kubernetes is overseen by the Cloud Native Computing Foundation, which the Linux Foundation formed in 2015. The CNCF now has 13 other open-source projects under its purview, many of which are focused on container-related functions. The group has 160 members representing every top enterprise technology vendor; this week, Salesforce joined the CNCF in another prominent addition.

Another momentum data point: This week's KubeCon event in Austin, Texas drew more than 4,000 attendees, well over three times the number who showed up one year ago.

It's not difficult to read the tea leaves, says Constellation VP and principal analyst Holger Mueller.

"Enterprises want portability and containers give them that," he says. "The more support for a container there is, the more they want it. So the flywheel is working for Kubernetes." Last week at its re:Invent conference, Amazon Web Services announced its own distribution of Kubernetes, even though it offers a homegrown container orchestration system. There's only one way to interpret that, Mueller says: "The war is over. Kubernetes has won."

Cisco buys Cmpute.io for managing cloud spend: This week, Cisco quietly struck a deal to buy Cmpute.io, a Bangalore company focused on helping enterprises manage their cloud spending. Cisco's Rob Salvagno gave the rationale for the acquisition in a blog post:

Cmpute.io’s software solution analyzes cloud-deployed workloads and consumption patterns, and identifies cost-optimization strategies. The solution helps customers right-size their cloud workload instances, minimize overprovisioning, and avoid paying for resources that don’t deliver business value.

With a multicloud strategy, customers need to budget, buy, and consume differently. Cmpute.io’s technology added to existing Cisco solutions will help our customers optimize their cloud consumption to ensure optimal business value.
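A generic illustration of the kind of right-sizing analysis described above: flag instances whose observed utilization never justifies their provisioned size. The thresholds and data here are invented, not Cmpute.io's algorithm:

```python
# Toy right-sizing check: chronically idle instances are candidates for a
# smaller (cheaper) size. Utilization data and thresholds are invented.
instances = [
    {"id": "i-01", "vcpus": 16, "avg_cpu_pct": 7.5},
    {"id": "i-02", "vcpus": 4,  "avg_cpu_pct": 68.0},
]

for inst in instances:
    if inst["avg_cpu_pct"] < 15:  # chronically idle -> overprovisioned
        suggested = max(1, inst["vcpus"] // 4)
        print(f"{inst['id']}: downsize {inst['vcpus']} -> {suggested} vCPUs")
    else:
        print(f"{inst['id']}: sized appropriately")
```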

Cmpute.io's team and technology will be rolled into Cisco's CloudCenter group.

POV: Terms of the deal weren't disclosed, so the price tag was likely on the smaller side. The more important thing to note is how Cisco's move ties into a broader cloud industry trend, where the market has become somewhat binary. Amazon Web Services and Microsoft Azure hold the lead in IaaS, with Google, IBM and Oracle trying to build up their market share. Cisco, HPE and other players that attempted to launch a public IaaS but were compelled to fold their hand under too-stiff competition are trying to make money by helping customers manage multi-cloud spend, which is arguably a roundabout way to compete with the IaaS leaders.

Oracle and MongoDB's dueling earnings: This is the time of year when enterprise tech industry news slows down for a bit, but there are still some notable items to watch. One is Oracle's Q2 earnings report, which is due out Dec. 14.

The quarter is typically one of Oracle's slower ones, yet the numbers, when they're released, could be telling: Q2 is the first quarter in which customers could take advantage of new programs geared toward convincing them to adopt Oracle's IaaS and PaaS.

The programs include a BYOL (bring your own license) option for Oracle's database and middleware, wherein customers can transfer their existing on-premises licenses to Oracle's IaaS. Those who move database licenses there can run them "at a fraction of the old PaaS price," Oracle said at the time.

Oracle also rolled out universal credits for PaaS and IaaS, which it described as follows:

Customers have one simple contract that provides unlimited access to all current and future Oracle PaaS and IaaS services, spanning Oracle Cloud and Oracle Cloud at Customer. Customers gain on-demand access to all services plus the benefit of the lower cost of pre-paid services. Additionally, they have the flexibility to upgrade, expand or move services across datacenters based on their requirements.

Go here for Constellation VP and principal analyst Holger Mueller's deep-dive on Oracle's new programs. Other metrics to watch in Oracle's Q2 include trend shifts in the sale of new on-premises licenses, how well Oracle SaaS is selling across different categories, and mentions of "all-in" customer wins, particularly for Oracle cloud services.

Oracle has a long and rich history, with a market capitalization of more than $200 billion. In a bit of counterpoint, NoSQL database vendor MongoDB will issue its first earnings report since going public in October. The tech unicorn has been posting heavy losses as many hot startups do, but eyes will be closely watching MongoDB's numbers, particularly any guidance it provides on future quarters. MongoDB's leadership has long positioned the company as an alternative to Oracle's database; to that end, some of the closest observers of its numbers may be watching from a certain set of towers in Redwood Shores.


Digital Transformation Digest: EU Says Luxury Brands Can Block Amazon Sales, Inside Dell's Q3

Constellation Insights

EU Court rules that luxury brands can block Amazon sales: A landmark ruling has come down from the European Union Court of Justice, which said that luxury product makers can block their wares from being sold by third parties on marketplace sites such as Amazon and eBay.

"The quality of luxury goods is not simply the result of their material characteristics, but also of the allure and prestigious image which be stows on them an aura of luxury," the court said. "That aura is an essential aspect of those goods in that it thus enables consumers to distinguish them from other similar goods. Therefore, any impairment to that aura of luxury is likely to affect the actual quality of those goods."

German cosmetics and fashion manufacturer Coty, which owns brands such as Calvin Klein and Covergirl, brought the action to the ECJ after an authorized distributor began selling its wares on Amazon. Coty and other luxury brands, such as LVMH, are loath to be associated with the likes of Amazon, with its emphasis on low prices and mass availability. To that end, LVMH has created 24 Sevres, a glossy e-commerce site where shoppers can select products from more than 150 high-end designers and get them via 2-day shipping.

POV: "This is all about brand protection and a part of preventing counterfeit goods," says Constellation VP and principal analyst Cindy Zhou. Luxury brands have taken similar issues to court in the United States as well, such as when Tiffany sued warehouse club Costco over rings labeled "Tiffany" in stores. Costco argued that it was not selling counterfeit rings but rather using the word as a general description of the type of setting used in the rings. A court ruled against Costco, ordering it to pay Tiffany more than $19 million.

"Luxury brands are all about differentiation and curating their upscale image," Zhou says. "As Amazon and other sites have a third-party seller network, it is difficult for the brands to control their distribution."

The ECJ's ruling offers protection to the LVMHs of the world from brand dilution in the EU's 28 countries, but it won't have any effect on the massive consumer goods market in the U.S. Nor is it any guarantee across the EU, as the notion of a luxury good can be a bit fluid. For example, Coty brands such as Cover Girl and Clairol are widely available at a reasonable cost in drug and department stores everywhere, hardly confined to tony retail boutiques. But overall, the ruling provides interesting fodder in a time of rapid change for cross-border trade, e-commerce, marketing and customer engagement.

Dell Technologies Q3 results—the highlights: This week Dell Technologies reported its third-quarter results, posting revenue of $19.6 billion but a net loss of $941 million. The full numbers are available here, but in this post we'll mostly focus on comments executives made during a conference call, and how they reflect on the broader market.

One major aspect of the historic Dell-EMC merger that created Dell Technologies was the potential for supply chain synergies, which would not only lower the vendor's cost but also improve product availability and service. Here's how Jeff Clarke, VP of products and operations, described the state of the state:

We've seen tremendous efficiency in the supply chain particularly through cycle time improvement, lead time improvement to our customer base and managing our working capital initiatives through our facilities most notably in the form of inventory. So I think we are well along the path of managing our other cost outside the commodity and the supply chain on the product side.

Memory prices, which affect so many tech products, companies and end-users, have been rising with little end in sight, but Dell's supply chain footprint is helping it compete:

You've seen what we've gone through which is the longest inflationary period that I can recall in memory in a decade plus. And that's a byproduct of two things; one, there hasn’t been any new DRAM capacity been brought online and then the consumption of DRAM is at the highest rates we've seen.

We have DRAM, so as much as I have said DRAM is going up in cost we have it. And we're getting a value for having it. And whether that's in our PC business, on our server business I think that is something customers are coming to us for knowing we have supply and they're obviously paying for it.

Dell introduced flexible consumption models across its product line earlier this year. They allow companies to scale usage up or down as needed while avoiding major up-front costs. The number of flexible consumption deals in Q3 dropped compared to Q2, suggesting initial customer enthusiasm for the model is waning. That's not necessarily the case, Dell CFO Tom Sweet said:

I think that things are going to vary. They are complex, they're multi-year, they take time to negotiate with and typically the larger customers, the global customers that are negotiating these types of arrangements and so I think we are going to see some variability. I think all things being equal it's entirely conceivable that we'll see an uptick in a flexible consumption models in Q4 just given natural end of year sort of activity both from a customer and from a Dell Technologies perspective.

Dell has a massive array of products, some of which will likely be consolidated over time. Clarke acknowledged a need to lower complexity for customers:

[Y]ou know, complexity doesn't mean less products, complexity can mean the number of offers per product, how many countries we offer our product in. Interestingly we treat the 180th country the same way we treat the largest country in the world. It's not clear to me the 180th country in the world needs all of the entire storage breadth of our portfolio and we can make that less complex.

The next big news out of Dell will come in about a month at the Consumer Electronics Show in Las Vegas.


Digital Transformation Digest: Microsoft IoT Central Eases App Development, UPS's Holiday Crunch Could Spark Drone Debate, and More

Constellation Insights

Microsoft IoT Central enters public preview: If your enterprise wants to empower line-of-business users to build IoT applications quickly, Microsoft says it has the answer in the form of IoT Central, which is now in public preview. 

IoT Central is a counterpart to Azure IoT Suite, which offers deeper customization capabilities and access to underlying services, with the tradeoff being a need for skilled developers. Both services provide quickstart application templates. IoT Central's browser-based Application Builder environment employs a wizard-like approach for creating models of IoT devices, setting application logic and parameters, and testing via simulation before live deployment.

IoT Central taps into a rising tide of interest in low-code platforms aimed at empowering citizen developers. (Microsoft has some experience in this area already with PowerApps.)

While the focus is on simpler scenarios, IoT Central applications can scale out to millions of devices, Microsoft says. It leverages multiple components of Azure IoT Suite, such as Azure IoT Hub for device connectivity, but runs as a fully managed service. (For a deeper dive into IoT Central's technical details, go here.) Microsoft is offering a 30-day free trial for up to 10 devices; preview pricing for larger deployments is set at $0.50 per device per month, so a 1,000-device deployment, for example, would run $500 per month.
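To make the device side concrete, here is a minimal sketch of how a device might push telemetry into the underlying Azure IoT Hub using Microsoft's Python device SDK. The connection string and payload fields are placeholders, and IoT Central's own device onboarding happens in the browser rather than in code:

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

# Placeholder connection string; a real one comes from device provisioning.
CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

# Send one telemetry message; field names here are illustrative only.
client.send_message(Message(json.dumps({"temperature": 22.5, "humidity": 41})))

client.shutdown()
```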

POV: The speed and simplicity Microsoft is promising IoT Central will deliver could be compelling for many enterprises, particularly those that have struggled to get IoT applications out quickly with other solutions (including Azure IoT Suite). But buying decisions will require a thorough side-by-side assessment of the feature sets for IoT Suite and IoT Central with regard to customization capabilities and pricing. For its part, Microsoft says IoT Central offers a "medium" degree of customizability.

Still to come are important features for IoT Central such as pre-built integrations with enterprise apps. Microsoft says those are coming for Dynamics, Salesforce and other products.

"It's always good to see vendors making things simpler," says Constellation Research VP and principal analyst Holger Mueller. At Build 2016, Microsoft held a workshop for Mueller and other analysts in which they connected a Raspberry Pi to Azure IoT. The task was ultimately manageable but required a lot of steps and had some stressful moments. "It's good to see it simplified even further."

UPS faces holiday crunch—again: This year's holiday shopping season saw an uptick in online sales so strong that UPS had to delay some deliveries by one or two days, the Wall Street Journal reports. In an attempt to keep up with demand, the company has mandated that its delivery drivers work up to 70 hours over an eight-day period. While that's allowed under federal law, the Teamsters union, which represents drivers, is planning demonstrations and potential legal actions if the mandate isn't reversed.

UPS has had these holiday congestion issues before, despite hiring thousands of additional seasonal workers as driver helpers and distribution center staff. It expects to deliver 750 million packages between the U.S. Thanksgiving holiday and December 31, a rise of 5 percent over last year, according to industry publication Freightwaves.

POV: As the world's largest package delivery company, UPS is a crucial bellwether in the rapidly changing world of logistics. It is currently in contract negotiations with the Teamsters, who are working under a five-year pact that expires July 31. The new deal will most likely be for a similar term.

While the bulk of negotiations will concern working hours, wages and benefits, talks could also broach the role of self-driving trucks and other autonomous forms of delivery in UPS's operations. The company has been testing drones for years, including one earlier this year that launches from a truck's roof, delivers a package to a location adjacent to the driver's route, then returns to its docking station.

The drones have huge potential for UPS's bottom line, as they could save many millions in fuel otherwise burned by trucks driving those routes and would also provide obvious operational efficiencies. They could also have a negative effect on the UPS driver fleet's livelihood. The Teamsters have said they are "closely monitoring" the development of drones at UPS and, based on their statement, won't be avid proponents of them.

Overall, the next few years will be interesting times at UPS as it juggles the challenges of meeting consumer demand, negotiating with its workforce and managing the rollout of new technologies.

Researchers predict most software will be written by computers in 2040: Scientists at the U.S. Department of Energy's Oak Ridge National Laboratory argue in a newly released paper that by 2040, the bulk of software code will be written by machines, with humans playing a highly diminished role:

The combination of machine learning, artificial intelligence, natural language processing, and code generation technologies will improve in such a way that machines, instead of humans, will write most of their own code by 2040. This poses a number of interesting challenges for scientific research, especially as the hardware on which this Machine Generated Code will run becomes extremely heterogeneous. Indeed, extreme heterogeneity may drive the creation of this technology because it will allow humans to cope with the difficulty of programming different devices efficiently and easily.

The authors cite a variety of projects and technologies that already exist for machine-generated code, including the Defense Advanced Research Projects Agency's (DARPA) Probabilistic Programming for Advancing Machine Learning and Microsoft's DeepCoder. In the future, "if a human does need to write some code, they may find that they spend more time using autocomplete and code recommendation features than writing new lines on their own," they write.
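For a flavor of what "machines writing code" means in practice today, here is a toy, brute-force sketch of the inductive program-synthesis idea that systems like DeepCoder build on: search compositions of small primitives until one matches the supplied input/output examples. DeepCoder adds a learned model to guide that search; everything below, including the primitive set, is purely illustrative:

```python
from itertools import product

# A tiny vocabulary of list-transforming primitives to compose.
PRIMITIVES = {
    "inc":    lambda xs: [x + 1 for x in xs],
    "double": lambda xs: [x * 2 for x in xs],
    "sort":   lambda xs: sorted(xs),
    "rev":    lambda xs: xs[::-1],
}

def synthesize(examples, max_depth=3):
    """Return the first pipeline of primitives consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(xs, names=names):       # bind names to avoid late closure capture
                for n in names:
                    xs = PRIMITIVES[n](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return " | ".join(names)
    return None

# Find a program mapping [3, 1, 2] -> [6, 4, 2]; prints "double | sort | rev".
print(synthesize([([3, 1, 2], [6, 4, 2])]))
```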

POV: The paper is worth a read (h/t to the Register) but is framed as speculative, and perhaps rightfully so. Its authors give no specific reason for the 2040 prediction, and barely get into important software development topics such as requirements gathering, testing and security. But the paper's focus on the very real problem of increasing hardware heterogeneity, and how machine-generated code could help mitigate it, is a provocative one.


Event Report - Pivotal SpringOne 2017 - It’s all about PCF – and some Spring

We had the opportunity to attend SpringOne, Pivotal's yearly user conference for the Spring developer community, held in San Francisco December 4th to 7th, 2017, at Moscone West. With about 3,000 attendees it is the best-attended Pivotal conference ever, a proof point for the popularity of Pivotal's products.

[I am writing this blog post after attending the analyst summit on Monday, and attending Tuesday at the conference – more news is coming out and I may revise my judgement at that point.]

 

Prefer to watch – here is the video summary (if it doesn’t show up – find it on my YouTube Channel here).
 


Here is the one-slide condensation (if the slide doesn’t show up, check here):
 


Want to read on? Here you go: 
 
Pivotal Cloud Foundry (PCF) 2.0 is here – Pivotal decided that so much substantial work has happened on Cloud Foundry recently that it is worth revving the release number, making it PCF 2.0. And certainly the addition of Kubernetes support with PKS (announced in August, now GA), the announcement of serverless capabilities (Pivotal Function Service – PFS), and the integration of the VMware NSX-T stack for networking and security all add up to a lot of new functionality. Support for Microsoft Azure Stack, more support for Windows containers (like auto-scaling) and access to Google Cloud Platform services add further capabilities … and partners are flocking to PCF, the most prominent being IBM, adding Open Liberty as an embedded server option for Spring Boot, commercial support for the IBM WebSphere Application Server Liberty Buildpack in PCF and better integration with a whole plethora of IBM products and services.

 
Mee opens SpringOne


Pivotal moves into Serverless – No surprise, Pivotal announced its serverless plans, which will supposedly materialize across 2018. Serverless is powerful for enterprises building and operating their next-generation applications, and in order to keep enterprises and developers happy, Pivotal had to come up with its own serverless alternative. It looks well architected, with the usual Pivotal suite integration (RabbitMQ) but also with Apache Kafka, to manage the events that wake up the serverless capabilities. But it is early days - so stay tuned for more on this in the coming months.
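PFS itself was not yet released, but the event-driven pattern described — a message on a broker "wakes up" a function — can be sketched with the RabbitMQ Python client. The queue name and payload shape below are assumptions for illustration, not PFS's actual programming model:

```python
import json
import pika  # RabbitMQ client; pip install pika

# The "function": invoked once per message arriving on the queue.
def handle_event(ch, method, properties, body):
    event = json.loads(body)
    print(f"function invoked with event: {event}")

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="order-events")  # hypothetical queue name
channel.basic_consume(queue="order-events",
                      on_message_callback=handle_event,
                      auto_ack=True)
channel.start_consuming()  # blocks, dispatching each message to the handler
```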

 
If Onesie Then Emojis


Partners, partners… did I mention partners? – Enterprise software ecosystems have a fine nose when it comes to identifying a vendor that has momentum, and vendors that can partner tend to flock to that vendor. Pivotal is no exception. Accenture launched a joint business unit with Pivotal, signaling the engagement of the large system integrators. The IaaS side is well represented with Google and Microsoft. Developer tools integration is happening with IBM, Microsoft and many more. Tech stack integration is happening with IBM and others. The Dell EMC keiretsu was there with Dell EMC (an on-premises version to run PCF), Virtustream (running PCF for you) and VMware (NSX most prominently). And many startups, e.g. Datadog, Solace, etc., were present as well. Year over year – since SpringOne in Las Vegas last year – I'd say the ecosystem has doubled in presence and efforts.

 
And yes - a new Spring Banner is unveiled
 

MyPOV

Pivotal is on a roll when it comes to Cloud Foundry and Spring. Enterprises want (and need) to build next-generation applications fast, and they naturally look for frameworks (Spring) to help them on a platform (Cloud Foundry). This created the “2nd spring for Spring”, as I stated a year ago (see here), for what otherwise was a developer community slowly fading away… not anymore, and good to see the revival. Serverless is an important innovation for Pivotal to keep new types of workloads in the fold… we now have to see how it materializes a few quarters from now. Almost ironically there were more Cloud Foundry announcements than Spring announcements – at least on the first day when I attended. The audience did not seem bothered; enterprises and developers know that at the end of the day the platform comes first.

On the concern side, while it was almost refreshing not to hear a mention of Machine Learning / AI at a conference in 2017, it still means that Pivotal will have to give its customers and users a solution in this important space. But fair enough, serverless first. Equally, the Big Data / Hadoop relationship with the leading vendors is not yet mended, given Pivotal's database history… but better to fix this sooner than later… and the only IaaS holdouts yet to come to the same level as Google, IBM and Microsoft are AWS and Oracle… but it is likely that Pivotal is out to get those players in 2018. The ecosystem and success that Pivotal has been able to create around Cloud Foundry is too much of an attraction not to be part of the party.

Overall great momentum for Pivotal, with good innovation and announcements for both Cloud Foundry and Spring. Excited and eager customers and increased partner interest are all good signs that all is well in Pivotal land. Stay tuned.

Want to learn more? Check out the Storify collection below (if it doesn’t show up – check here). And a Storify collection of the analyst summit can be found here.
 
Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here.

Digital Business Transformation: Technology outside of IT – Applying Digital Technology to practical, everyday business activities

Not a title designed to upset the IT department, just a reflection on the manner in which Technology has pervaded most aspects of current life to become a practical tool. Think of a modern car: a highly sophisticated technology environment of interconnected devices interacting with, and responding to, a wide range of events within a largely self-contained ‘enterprise’ system. It’s certainly not part of the IT role, nor are the Technologies necessarily the same, and there is no alignment with the Business functionality IT supports.

As the article ‘Your car may be the most powerful computer you own’ makes clear, the functionality is focused on using Technology to ‘Read and Respond’, completely different in almost every way from the role of IT in providing internal administrative processes and transaction records. Using the increasingly popular terminology: Systems of Record equates to the role and technologies of IT, whereas Systems of Engagement feeding data to Systems of Intelligence in Digital Business are something entirely different!

Computational power got cheap, programming got easier, and more engineers understood how to make use of these changes. An Apple Watch has more processing power than an Apple iPhone 4, which in turn has more power than a Cray supercomputer of the 80s (http://pages.experts-exchange.com/processing-power-compared/). Add the boost in connectivity brought by the Internet, with acceptance in use by the population at large, and the resulting transformation in functionality equates to the arrival of ‘Digital’ in everyday life.

Amazing changes, which have resulted in us as individuals no longer seeing Technology as contained within the IT department. In fact, we have all got rather good at ‘innovating’: every time you load and start to use an App, it’s a personal innovation in some aspect of your life. Digital Technology is merely a convenient name for a group of new technologies, CAAST (Clouds, Apps, AI, Services and Things), that in combination enable an activity to use technology beneficially.

Hardly surprising that many of the innovations are straightforward practical applications of these capabilities to everyday issues, like Triax Spot-r in the Construction industry. Big Enterprises might get headlines around Business Transformation, but transformation of industry-sector working practices is much more widespread. These changes in turn get adopted to evolve current Enterprise activities, so the spread of the ‘Digital Revolution’ continues as a series of quiet success stories.

The success of Triax Spot-r is an excellent example: less a Technology story and much more a practical capability addressing a series of issues confronting ‘on the job’ management. In fact, deployment success is coming from construction workers and their management realizing how to get even greater value than the original business proposition.

“Our networked devices worn by every worker on the site provide real-time location visibility and keep you informed of safety incidents as they occur.” – Triax

Technically, Triax's Spot-r system integrates a number of the new Technologies in an imaginative manner: it combines a proprietary wireless mesh network with wearable devices (see photo of a belt clip) that include an accelerometer, gyroscope and altimeter to give previously unavailable ‘real-time’ data. Triax Spot-r's original basic proposition was to automatically check workers in and out of the job site and give notice of potential and actual site safety incidents picked up by its sensors (slipping, tripping, jumping and falls), to aid site supervisors in fulfilling their legal obligations to manage a safe site.
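What might the incident logic look like on the device side? Here is a minimal, illustrative sketch of a threshold-based fall heuristic of the kind accelerometer data can support. The thresholds, sample format and logic are assumptions for illustration, not Triax's actual algorithm:

```python
import math

FREE_FALL_G = 0.4   # near-zero total acceleration while falling
IMPACT_G    = 2.5   # spike on landing

def detect_fall(samples):
    """samples: list of (ax, ay, az) in g. True if free fall is followed by impact."""
    falling = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax**2 + ay**2 + az**2)
        if magnitude < FREE_FALL_G:
            falling = True                 # near weightlessness: likely falling
        elif falling and magnitude > IMPACT_G:
            return True                    # hard landing after free fall
    return False

# At rest (~1 g), brief free fall, then a hard landing:
print(detect_fall([(0, 0, 1.0), (0.1, 0.1, 0.2), (1.8, 1.2, 2.0)]))  # True
```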

Spot-r also provides the geolocation of potentially or actually injured workers to improve response times for providing aid. The inclusion of a self-alert button allows workers to report unsafe conditions, site hazards or other potential injuries in real time, directly from their work area. All good personal safety features that encourage workers to want to wear a Spot-r device.

Business value is delivered to site supervisors and managers by a cellular wireless mobile dashboard (see photo) connected to the Triax cloud service. In addition to the direct real-time safety information, all data is aggregated on each worker’s time & attendance, location, subcontractor activity, and any incident types.

An Open API allows developers to add further functionality and integration, such as with Procore Technologies' project management platform. Here the integration automatically sends Spot-r worksite data, including man-hours and safety incidents, to Procore for accident, timecard, manpower and daily construction reports. Not surprisingly, Insurance companies appreciate Triax Spot-r as a means of cutting the cost of injuries on construction sites, and that offers site operators a further incentive in the form of lower insurance premiums.
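As a concrete illustration, an integration like this can boil down to a single authenticated POST per event. This is a hedged sketch only — the endpoint URL, token and payload fields below are invented placeholders, not Triax's or Procore's documented API:

```python
import requests  # pip install requests

SPOTR_API = "https://api.example.com/v1/incidents"   # placeholder URL

# Hypothetical incident payload; field names are assumptions.
incident = {
    "worker_id": "W-1042",
    "type": "fall",
    "zone": "floor-3-east",
    "occurred_at": "2017-12-07T14:32:00Z",
}

resp = requests.post(
    SPOTR_API,
    json=incident,
    headers={"Authorization": "Bearer <api-token>"},  # placeholder token
    timeout=10,
)
resp.raise_for_status()
print("incident forwarded:", resp.status_code)
```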

Spot-r offers a classic win-win proposition for site operators, supervisors, managers, even the workers, and above all is a valuable aid to fulfilling safety legislation compliance. It is made possible by a clever combination of the new Digital Technologies; it’s not part of Enterprise IT, and neither does it demand a business transformation. It’s a great example of the large number of adopt-and-deploy moves across many different industry sectors, driven entirely by practical business cases from ‘hands-on’ business managers: the quiet and little-publicized take-up of Digital technology to provide tangible, immediate improvement to existing operations.

A conversation with an early-adopter site manager on a large New York site with 400 construction workers produced an enthusiastic endorsement, reeling off a multiplicity of ways that Spot-r was significantly improving operations. The outcome was that Spot-r was scheduled to be rolled out on all the other construction sites his employer was operating, the benefits of a tangible increase in site and personal safety having overcome even the initial Union and worker resistance to change common to the construction industry sector.

Not surprisingly, Spot-r has attracted a lot of attention, with Business Insurance, an industry-sector publication, selecting the product for an innovation award. The benefits are simple to grasp, deployment is straightforward and, importantly, with this type of low-cost solution the message is easily spread by Social media, including a YouTube clip. A wide range of Building, Facilities Management and Construction publications have also been quick to cover the product and its contribution to safety and improved site management.

All in all, Triax Spot-r makes an excellent example of the kind of practical implementation that is the reality of ‘Digital Technology’: straightforward, direct improvements to current activities providing an evolutionary, non-threatening way to gain direct benefit, unlike the big strategic reports with their focus on large-scale, high-risk business model transformations. As an example, Construction Global featured in its July 2017 edition an in-depth study of the future of the construction industry by Balfour Beatty, a leading global construction company, entitled Innovation 2050. Clearly aimed at Board-level strategic thinking, it presented ten well-thought-through ‘big’ conclusions that would ‘transform’ Building and Construction.

No doubt the conclusions are a necessary ‘wake-up call’ to Boards and senior management, but is it right to take the report as recommending Boards make immediate ‘revolutionary change’ through enterprise-wide ‘Transformation’? That is a high-risk path, and noticeably few, if any, of these reports offer direct recommendations for ‘practical’ deployment activities featuring particular products. It seems much more likely that the Building and Construction sector (and other Industry sectors) will evolve along a graceful path of ever-increasing ‘evolutionary’ deployments identified by operational Business managers spotting opportunities.

That’s not to say there is no value in reports such as Innovation 2050; there is a need to make sure senior management is encouraging adoption, rather than hindering it, with an eye to the future. In much the same way, the role of IT lies in making sure that, at an Enterprise level, Technology works for the entire Enterprise and doesn’t support individual business deployments at the expense of the whole. The big picture of the longer-term impact of Digital Technology transforming markets into Digital Business is important, but equally so is keeping attention focused on encouraging ‘hands-on’ managers and supervisors to identify and deploy practical Digital upgrades.

 

Addendum:

https://www.constellationr.com/blog-news/increasing-digital-competitiveness-your-current-business-model-lessons-ge-and-industrie-40
