
Google Cloud files EC complaint vs. Microsoft Azure: What it means for enterprises

Google Cloud's move to file a complaint with the European Commission over Microsoft's alleged anti-competitive business practices with Azure highlights how the cloud battle is moving to courtrooms and regulators. 

The big three hyperscalers (Amazon Web Services, Microsoft Azure and Google Cloud) were already locked in a sometimes chippy battle for market share. With its EC complaint, Google Cloud is upping the ante to include regulators.

Google Cloud has also made similar appeals to regulators in the UK, where cloud market competition is under review. Google Cloud's parent, Alphabet, has separately been targeted by EU regulators over its search business.

In a blog post, Google Cloud said Microsoft's licensing terms push European customers to Azure over competitor clouds if they want to preserve their Windows Server license pricing. Google Cloud alleges that Microsoft marks up its licensing costs if customers use other clouds.

In its UK testimony in July, Google Cloud argued that Microsoft's licensing practices, designed to push customers to Azure, have the biggest impact on enterprises with legacy ties to the software giant.

Google Cloud said:

“Like many others, we have attempted to engage directly with Microsoft. We have kicked off an industry dialogue on fair and open cloud licensing. And we have advocated on behalf of European customers and partners who fear retaliation in the form of audits or worse if they speak up. Unfortunately, instead of changing its practices, Microsoft has struck one-off deals with a small group of companies.”

While these antitrust complaints take time to play out, there are a few takeaways worth pondering.

Cloud costs are a big concern. Google Cloud's EU complaint against Azure is just the latest indicator that cloud costs are an issue. Akamai, which has its own cloud infrastructure-as-a-service business, launched Project Cirrus to migrate third-party public cloud workloads to its own infrastructure. Akamai says the move cut its public cloud costs by 40% in year one, with 70% savings projected in year two.

Microsoft could alter its pricing practices ahead of EC action, assuming the Google Cloud complaint goes anywhere.

AI workloads will bring cloud costs under even more scrutiny. The playing field for AI workloads in the cloud is a little broader with the rise of specialist providers, but the big three dominate here too. The race for AI workloads is well underway and public cloud costs are likely to rise for most enterprises. Those costs are a big reason why on-premises AI infrastructure will be in the mix.

While the big three cloud players duke it out, Oracle Cloud may be a winner. After all, Oracle now has positioned its databases in all three hyperscalers and can be a beneficiary as enterprises move workloads around. Oracle has mastered the art of co-opetition in many ways.

 


Smartsheet to go private in deal valued at $8.4 billion

Blackstone and Vista Equity Partners are taking Smartsheet private in an all-cash deal valued at $8.4 billion, or $56.40 a share.

The price is a 41% premium to Smartsheet's volume-weighted average closing price over the 90 trading days ending July 17. In recent months, numerous reports said Smartsheet was in talks to go private.

Mark Mader, CEO of Smartsheet, said the deal will "accelerate our vision of modernizing work management for enterprises." Blackstone and Vista Equity Partners said Smartsheet will benefit from the two firms' scale and network of portfolio companies; Vista, for instance, is focused on enterprise software, data and technology.

Constellation Research analyst Liz Miller said:

"Smartsheet going private is an interesting move as the very idea of what work and project management means today, especially in this age of AI where work is being forever changed by automation. The interesting differentiation with Smartsheet isn’t just their capacity to help manage, automate and optimize work and projects across the enterprise but also their past acquisitions like Brandfolder."

Smartsheet will have a 45-day go-shop period, expiring Nov. 8, during which the company can solicit other acquisition offers.

For the second quarter, Smartsheet reported revenue of $276.4 million, up 17% from a year ago. Annual recurring revenue was $1.09 billion. Smartsheet reported earnings of $7.9 million, or 6 cents a share.

Smartsheet, which competes with Asana and monday.com, had 2,056 customers with ARR of more than $100,000. The company projected fiscal 2025 revenue of $1.116 billion to $1.121 billion.

 


Google Cloud rolls out new Gemini models, AI agents, customer engagement suite

Google Cloud launched a series of updates, including new Gemini 1.5 Flash and 1.5 Pro models with a 2-million-token context window, grounding with Google Search, premade Gems for Gemini in Google Workspace and a series of AI agents designed for customer engagement and conversation.

The updates, outlined at a Gemini at Work event, come as generative AI players increasingly focus on agentic AI. Google is looking to drive Gemini throughout its platform. The pitch from Google Cloud is that its unified stack can enable enterprises to tap into multiple foundational models including Gemini, create agents with an integrated developer platform and deploy AI agents with grounding in enterprise data on optimized infrastructure.

Google Cloud's agent push was noted recently by Google Cloud CEO Thomas Kurian at an investment conference, where he cited a series of use cases in telecom and other industries. Kurian said Google Cloud is introducing new applications for customer experience and customer service. "Think of it as you can go on the web, on a mobile app, you can call a call center or be at a retail point of sale, and you can have a digital agent assist you in searching for information, finding answers to questions using either chat or voice calls," said Kurian.

Google Cloud is showcasing more than 50 customer stories and case studies for Gemini deployments, including a big push into customer engagement.

During his Gemini at Work keynote, Kurian said customer agents will focus on real-world engagement, natural interaction with voice and understanding the information needed to give a correct answer. "Customer agents can interact in natural ways without having to navigate menus and traverse systems," he said. "Agents can synthesize all the information you want and your data privately and securely."

Duncan Lennox, VP & GM of applied AI at Google Cloud, said "the enterprise is shaping up to be one of the most impactful transformations that I've seen in my career." Lennox added that AI agents have the potential "to revolutionize how businesses operate, how people interact with technology and even solve some of the world's biggest challenges."

Lennox said a Google Cloud survey found that 61% of organizations are using GenAI in production and increasingly looking to drive returns. Lennox argued that agents are going to enable new applications and experiences in the enterprise.

As for the news, Google Cloud outlined the following:

Vertex AI (all GA unless otherwise noted)

  • New Gemini 1.5 Flash and 1.5 Pro models with a 2-million-token context window, double what was previously available.
  • Controlled generation, which lets you dictate the output format you want from the model (a sketch covering this and grounding follows the list).
  • Prompt Optimizer in preview.
  • Prompt Management SDK.
  • GenAI Evaluation Service.
  • Distillation for Gemini models and supervised fine-tuning for Gemini 1.5 Pro and Flash.
  • Chirp v2.
  • Imagen 3 editing and tuning in preview.
  • Grounding with Google Search, with dynamic retrieval.
  • Multimodal function calling.
  • Expanded machine learning processing in North America, EMEA and Japan/Asia Pacific.
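
Two of the items above, grounding with Google Search and controlled generation, are easy to picture in code. The following is a minimal sketch, assuming the Vertex AI Python SDK (google-cloud-aiplatform); the project ID, prompts and JSON schema are illustrative placeholders, not details from the announcement.

```python
# Minimal sketch: grounding with Google Search plus controlled (JSON) output
# using the Vertex AI Python SDK. Project, prompts and schema are placeholders.
import vertexai
from vertexai.generative_models import (
    GenerationConfig,
    GenerativeModel,
    Tool,
    grounding,
)

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

# Grounding: let the model retrieve and cite Google Search results.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())
grounded = model.generate_content(
    "Summarize Google Cloud's latest Gemini announcements.",
    tools=[search_tool],
)

# Controlled generation: constrain the response to a JSON schema.
config = GenerationConfig(
    response_mime_type="application/json",
    response_schema={
        "type": "object",
        "properties": {
            "product": {"type": "string"},
            "status": {"type": "string"},
        },
        "required": ["product", "status"],
    },
)
structured = model.generate_content(
    "Describe one announcement as product/status.",
    generation_config=config,
)
print(grounded.text)
print(structured.text)  # JSON matching the schema above
```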

Google Workspace

  • Premade Gems in Gemini for brainstorming, writing social media posts and coding.
  • Custom Knowledge in Gems to carry out repetitive tasks with specific instructions.
  • Vids, generally available by the end of the year. Vids can start with a single prompt and guide users to tell a story.
  • Summarize and Compare PDFs.
  • Gemini in Chat.
  • Gemini for Workspace Certifications.

Customer Engagement with Google AI

  • 1.5 Flash for Customer Engagement with Google AI.
  • Deterministic and Generative Conversational Agents in preview.
  • Agent Assist Coaching Model in preview.
  • Agent Assist Summarization in preview.
  • Agent Assist Smart Reply.
  • Agent Assist Translation in preview.

What's an agent?

With the term agent being used extensively, Erwan Menard, Director of Product Management at Google AI, was asked in a briefing how the company segments agentic AI.

The question is a good one considering that in just the last two weeks, Salesforce, Workday, Microsoft, HubSpot, ServiceNow and Oracle have all talked about AI agents, likely overloading CxOs who have spent the last 18 months trying to move genAI from pilot to production. Other genAI front runners, such as Rocket, Intuit and JPMorgan Chase, have mostly taken the DIY approach and are now evolving their strategies.

Menard said there are three flavors of agents across the Google Cloud portfolio. First, there are pre-built agents embedded into experiences in Google Workspace. Then there are Google pre-built agents designed for customer engagement platforms. And then there are agents being built by enterprises using Google Cloud.

The Gemini at Work event will feature a hefty dose of companies that are building agents on Google Cloud. The genAI use cases going to production the fastest are ones that are built into existing applications and those aimed at contact centers, HR and other environments, said Menard.

Menard said:

"We think of AI agents as systems that use an AI technique to push you goals and complete tasks on behalf of users. An agent basically understands your intent, turns it into action. That's how we think of the word agent."

Google Cloud sees agentic AI revolving around two dimensions: complexity and agency. On the complexity side, agentic AI will revolve around workflows that span multiple systems.

"As we try to get more business impact--let's say a task specific agent that would execute a task on your behalf--we're going to go into workflows that we want to automate, and so we need to interact with many more systems," said Menard.

Agency will also be critical to agentic AI deployments. Agency refers to "the ability of the agent to learn, make decisions, proactively take action and achieve a desired goal with minimal human supervision," said Menard.

Based on what Google Cloud has seen with enterprise customers, Menard said enterprises will likely follow this progression (a toy sketch follows the list):

  1. Task specific agents will become an early focus.
  2. Then there will be assistants that can help a human accomplish a task faster.
  3. Multi-agent systems will then emerge to take a complex task and address it end to end.
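
Menard's definition and progression can be made concrete with a toy sketch: a task-specific agent that turns one intent into one action, plus a minimal multi-agent router that covers a broader task end to end. All names and the keyword routing below are illustrative assumptions, not a Google Cloud API.

```python
# Toy sketch of the progression: task-specific agents (stage 1) composed by a
# simple multi-agent system (stage 3). Names are illustrative, not an API.
from typing import Callable

def lookup_order(query: str) -> str:
    return f"Order status for '{query}': shipped"  # stub action

def issue_refund(query: str) -> str:
    return f"Refund initiated for '{query}'"  # stub action

class TaskAgent:
    """Stage 1: one intent, one action, minimal agency."""
    def __init__(self, name: str, action: Callable[[str], str]):
        self.name, self.action = name, action

    def run(self, query: str) -> str:
        return self.action(query)

class MultiAgentSystem:
    """Stage 3: route a request across several task agents end to end."""
    def __init__(self, agents: dict):
        self.agents = agents

    def run(self, intent: str, query: str) -> str:
        # A production system would infer intent with a model; we route by key.
        return self.agents[intent].run(query)

system = MultiAgentSystem({
    "status": TaskAgent("status", lookup_order),
    "refund": TaskAgent("refund", issue_refund),
})
print(system.run("status", "order 1234"))
print(system.run("refund", "order 1234"))
```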

Menard said:

"That's kind of the paradigm we're operating in terms of the agents that are being offered and going to production. Clearly, there are important decision factors for customers around the surface where the agent will be presented. Do I build a new surface and attract users, or do I meet the users where they are? Second is the skill set the customers have. Do you invest in building an agent or go with a pre-built agent, or a total DIY approach where you handpick the orchestration framework and all the different elements?"

Timeline to production will also be critical for enterprises, added Menard. "You could very much start with a pre-built experience to confirm the need and the benefit, and then decide to decompose it with the DIY approach to iterate further on your agent," he said, noting that Google Cloud's stack enables all levels and approaches to AI agents. "All of these are not conflicting but different expectations from customers."

Pilot to production takeaways

Speaking after the keynote, Kurian outlined a few takeaways based on what Google Cloud is seeing from customers as they move from pilots to production. Here are the big themes:

Timelines. "Cycle time isn't driven by models," said Kurian. "This is an actual software project and timelines depend on the systems the models interact with."

Kurian said a use case like using Gemini to create content for ad campaigns may take 8 to 12 weeks. Enhancing search on a commerce site with conversational AI could take 4 to 8 weeks. Leveraging search and AI conversations in a contact center could take up to 6 months, depending on the mix of modern systems, legacy IT and APIs already in place. Projects where a company has to tap into an old PBX could take longer.

"A lot of it depends on whether you have to change the organization," explained Kurian. "It's not a technology problem alone. If a project doesn't require changes to an organization or workflow then it's faster."

Kurian said Google Cloud has a maturity model that it has shared with systems integrators so they don't go in with a big-bang project. "There's a sequence to deliver value and we're often dealing with time windows," he said.

Change management and workflows. Kurian's comments on timelines highlight how important change management is with generative AI. Regulation, workflows, processes and technology debt and culture are all factors to consider.

Kurian said enterprises need to keep change management and processes in mind as they deploy AI agents. These processes are critical, and now that Google Cloud's Gemini models have memory, they can wait for an asset or a step before taking action.

"It's not a big bang. It's deliver the technology and methodology to deliver an AI solution," said Kurian.

Business value. Kurian also noted that genAI projects need to deliver value whether it's efficiency or revenue growth. The slew of Google Cloud customers noted at the Gemini at Work event have all seen business value. The metrics will differ by company, but the blueprint is the same. Create value quickly and then expand from there.


AI projects remain work in progress, pilots, say CxOs

Do-it-yourself AI projects are alive and well: CxOs are trying multiple approaches to generative AI, getting some projects to production and looking for better returns, according to a pop-up attendee survey at Constellation Research's AI Forum.

In the survey of 35 CxOs at the Constellation Research AI Forum, it's clear the genAI playbook is far from solidified. Forty-three percent of respondents were from companies with revenue of more than $1 billion, and the pop-up survey aimed to capture what CxOs are doing directionally. Salesforce CEO Marc Benioff has argued that do-it-yourself genAI doesn't make sense in the long run because there's too much work involved and most enterprises won't keep up. That argument has a lot of merit, but it remains to be seen how genAI projects play out; T-Mobile and OpenAI, for instance, announced a partnership to build custom applications.

Respondents indicated they were using multiple approaches to build AI capabilities. The majority (79%) said they were developing home-grown AI services on hyperscale cloud services, and 48% were also using open-source frameworks and large language models. Many of these efforts included AI embedded in packaged applications they already used, such as Salesforce, Adobe, Oracle and SAP.

Automation was the biggest reason to implement AI, followed by cutting-edge capabilities at No. 2 and operational efficiency at No. 3.


ROI, however, was a bit elusive. Forty-five percent of respondents said they have yet to see ROI from AI technology, while 31% said their deployments were delivering returns. All respondents implementing AI said there was room for improvement, whether they were seeing ROI or not. The majority said their AI investment would increase in 2025.

Investment priorities included data lakes, predictive analytics, natural language processing and image recognition.

As for functions, CxOs at AI Forum said they have scaled AI projects in employee productivity, back office, IT and sales and marketing.

Other items of note:

  • 40% of CxOs at the AI Forum noted they didn't have the human capital to successfully implement AI.
  • CxOs were recruiting, internal peer networking, training existing employees and partnering with companies and universities to fill talent gaps.
  • Roles are changing as managers aim to acquire data expertise and restructure business models. Managers are also networking to gain knowledge.
  • CEOs, CTOs and CIOs are generally leading the AI charge.
  • Operational efficiency and revenue growth are the areas driving the most ROI for AI projects.
  • Trust, budget and data quality are the three challenges limiting ROI.
  • OpenAI, Meta's Llama and Anthropic models were the most popular, in that order.

Sam's Club CEO Nicholas on AI, frictionless commerce, focus on members

Chris Nicholas, President and CEO of Sam's Club, said artificial intelligence is enabling the company to "take 100 million tasks out of our clubs" even as it adds more associates.

The game plan for Sam's Club, a unit of Walmart, is to leverage technology to enable employees to solve customer problems and drive engagement, said Nicholas, speaking at the Constellation Research AI Forum in New York.

Nicholas said:

"We will take 100 million tasks out of our clubs, and we will have more associates. Why? What are they doing? They are solving members problems. They are driving engagement. They are building our E commerce business. They are connecting on services that we're offering. What it does is it opens up the aperture of people, and efficiency needs to be some kind of like negative connection."

Nicholas said technology needs to help people build careers and solve problems, and then find the next round of opportunities. He added:

"Going forward, the superpower of a business to win from a business perspective is going to be in the connections with people. It just is. I know you can have great nuance with the tone of a generative AI system, and that will solve the problems I don't want my associates to solve. Then they can spend time solving the more deeply connected, more empathic solutions."

Other takeaways from Nicholas:

AI and technology implementations need to start with design thinking, the problems that need to be solved and the customer journey. Personalization, member value and experiences are addressed "only by technology and the application of data assimilated through artificial intelligence," said Nicholas.

More than a third of Sam's Club shopping visits are using Scan and Go technology for frictionless checkouts. "The adoption curve is rapid and once people have done it, they never go back,” said Nicholas.

Computer vision boosting inventory. Nicholas said:

"We were literally asking people to walk around the club, look for pallets and write on a clipboard if they could see a palette or not. This still happens in most places. Nobody was doing inventory. Terrible, terrible idea, and they're doing it every day. What we did was we had these floor scrubbers, and we said, 'how about put cameras on the floor scrubbers?' We take 23 million images a day of where every palette is, everywhere, multiple times a day, so that we know what's in stock, what's not in stock, where the palettes are, what needs to come down, when and how. By doing that, you just enabled the associate with the app to say, what's the next best task? What we realized through that is that the quality of computer vision imagery getting was so high that we've put it to other applications."

Focus on the work and people, not the technology. Nicholas said enterprises can get carried away with technology, data, AI and new applications, but "the real value is in the actual work that people are doing." "If you take friction away from their lives and empower them, you don't need market adoption," said Nicholas. "You just need to make it easy and adoption will happen."


Boomi CEO Lucas: AI agents will outnumber your human employees soon

Boomi CEO Steve Lucas said AI agents will outnumber the people in your business in less than three years.

"The digital imperative is how do I work with agents? The number of agents will outnumber the number of humans in less than three years," said Lucas, speaking at Constellation Research's AI Forum in New York. "It will be overwhelming. It will be fast and we're not prepared."

Lucas said he's challenging his CIO to rid Boomi of expense reports. It's 2024, he argued, and it's time to put AI to work so humans never have to approve an expense report.

"The question everyone should be looking at is how can I augment every single business process that I have today with AI," said Lucas. "Try a process, and find a way AI can augment it and reduce human time consumption. No. 2 is how can you automate my business and eliminate the need for humans in specific areas."

Constellation Research analyst Holger Mueller said:

"The three year horizon is probably too conservative. If the trend continues that agents can be built by low code and no code tools and citizen developers. We will reach that point easily by end of 2025. The hunger for more automation powered by a nothing and also with AI is unstoppable. Give business users the tools and they will go fish."

Lucas said he wasn't talking about reducing jobs as much as redundancies. The big issue will be orchestrating agents, he added. In a nutshell, Lucas sees layers of agents all checking for hallucinations, accounting compliance and other issues.

Boomi is working on a registry to manage agents. "We are building what we call an Active Directory for agents, where you can register them, track them, understand the decisions, revoke authority, and do it in real time. If you don't have that, it doesn't matter what agents you invent. It doesn't matter what you do with AI. You have to have insight, transparency, explainability, control," said Lucas.
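
Boomi hasn't published this design, so the following is only a hypothetical sketch of what the capabilities Lucas describes (register, track decisions, revoke authority in real time) could look like; every name here is an illustrative assumption.

```python
# Hypothetical sketch of an "Active Directory for agents" as Lucas describes
# it: register agents, log their decisions, revoke authority in real time.
# This is not Boomi's implementation; all names are illustrative.
from datetime import datetime, timezone

class AgentRegistry:
    def __init__(self) -> None:
        self.agents = {}     # agent_id -> {"owner": str, "active": bool}
        self.audit_log = []  # append-only trail of (timestamp, agent, decision)

    def register(self, agent_id: str, owner: str) -> None:
        self.agents[agent_id] = {"owner": owner, "active": True}

    def record_decision(self, agent_id: str, decision: str) -> None:
        if not self.agents.get(agent_id, {}).get("active"):
            raise PermissionError(f"{agent_id} is not authorized")
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), agent_id, decision)
        )

    def revoke(self, agent_id: str) -> None:
        self.agents[agent_id]["active"] = False  # takes effect immediately

registry = AgentRegistry()
registry.register("expense-approver-01", owner="finance")
registry.record_decision("expense-approver-01", "approved expense report #42")
registry.revoke("expense-approver-01")  # later decisions now raise an error
```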

"Under Steve Lucas' watch, Boomi has accelerated its pace of innovation and is now leading the way among integration platform as a service (iPaaS) vendors toward AI orchestration and the development of agentic applications," said Constellation Research analyst Doug Henschen.

Other takeaways from Lucas' talk:

Data strategy and quality remain job one. Lucas said "the rule hasn't changed. It's garbage in, garbage out." He added that he's "amazed at how few companies are really prepared across the complex landscape of apps, databases, APIs and models that they don't know where the data comes from to feed these systems."

Boomi is feeding information into Llama 3 models internally. "We give it everything from all of our product, pricing and packaging data, which changes frequently," explained Lucas. "We feed it Slack channel information from all of our employee, customer success and support conversations. It knows everything. Why? A challenge I'd put in front of you is to build an agent that is better than any human in a role."

AI improvement. Lucas said his bet is that AI can get 10% incrementally better every year. "If AI gets just 10% better each year we may not reach artificial general intelligence, but we will get close," said Lucas. "Whether it is truly conscious or not won't matter, it will do most of our jobs better than we will as defined today. This is the AI Big Bang, the real version."

Will we be talking about AI a year from now? Lucas said the context of AI conversations will change, but the topic will remain. "We are going to need agents, registries and protocols. We will need whole protocols for agents to define how they communicate so we humans can even understand what we're talking about. What does that stack look like from data all the way up to development? Agents will communicate at speeds we cannot even understand."


An Origin Story

What exactly makes any data valuable?

In my previous blog, The Future of Data Protection, I started to look at what it is that makes data valuable. I think this is the best way to frame the future of data protection. In each application, we must know where the value in a piece of data lies if we are to protect it.

There are so many different things that might matter about a piece of data and thus make it valuable:

  • Authorship, including the authority or reputation of the author(s).
  • Evidence, references, peer review, repeatability and so on.   
  • In the case of identifiable (personal) data, the individual’s consent to have the data processed.
  • Details of the data collection process, ethics approval, or instrumentation as applicable.
  • The algorithms (including software version numbers) used in analytics or automated decisions.
  • Data processing system audits.
  • Sometimes the locality or jurisdiction where data has been held is important.
  • As data is added to over time, who were the contributors, and what were their affiliations?
  • The release of data to the public or specific users may need specific approvals.
  • What rights or conditions attach to released data as to further use or distribution?

A lot of this boils down to origin. Where did a given piece of data come from? 

This simple question is inherently difficult to answer for most data, because raw data of course is just ones and zeros, able to be copied ad infinitum for near zero cost.

But several interesting approaches are emerging for telling the story behind a piece of data; that is, conveying the origins of data. These are some of the first examples of the solutions category I call Data Protection Infostructure.
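
One way to make the origin question concrete is to attach a structured provenance record to a piece of data, capturing the value-relevant attributes listed above. A minimal sketch follows; the field set is an illustrative assumption, not an established standard.

```python
# Minimal sketch of a provenance record for a piece of data, capturing the
# value-relevant attributes discussed above. The schema is illustrative only.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    author: str                         # who created the data
    collected_via: str                  # collection process / instrumentation
    consent: bool                       # data subject's consent, if personal
    jurisdiction: str                   # where the data has been held
    algorithm_version: str              # software used in processing
    contributors: list = field(default_factory=list)
    license_terms: str = "unspecified"  # conditions on further use/distribution

record = ProvenanceRecord(
    author="Dr. A. Example",
    collected_via="clinical survey (ethics approval attached)",
    consent=True,
    jurisdiction="EU",
    algorithm_version="analytics-pipeline 2.3.1",
    contributors=["research assistant B"],
)
print(record)  # travels with the data so downstream users can assess value
```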

Proof of personhood

How can we tell human authors and artists from robots?  Or new bank account applicants from bots? The rise of Generative AI and synthetic identities has driven the need to know if we are dealing with a person or an automaton.

Identity crime is frequently perpetrated using stolen personal data. To fight this, we need to know not just the original source of identification data but also the source of each presentation.  In other words, what path did a piece of important data take to get to where it needs to be used?

A sub-category of Data Protection Infostructure is emerging around proof of personhood.

Delivering this sort of assurance in a commercially sustainable way is proving harder than it looks. Only recently, an especially promising start-up, IDPartner Systems, led by digital identity veteran Rod Boothby, was unexpectedly wound up.

Content Provenance

A conceptually elegant capability with plenty of technical precedents is to digitally sign important content at the source, to convey its provenance. That’s how code signing works.

The Coalition for Content Provenance and Authenticity (C2PA) is developing a set of PKI-based standards with which content creators can be endorsed and certified with individual signing keys. C2PA will be implemented within existing authority and reputation structures such as broadcast media licensing, journalist credentialing, academic publishing and peer review.
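
Code signing gives a feel for the mechanics. Below is a minimal sketch of signing content at the source and verifying it downstream, using Ed25519 from the Python cryptography library; C2PA itself layers certified identities and rich manifests on top of this basic primitive.

```python
# Minimal sketch of sign-at-source provenance with Ed25519
# (pip install cryptography). C2PA adds certificates and manifests on top.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator: generate a key pair and sign content at capture/creation time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"photo bytes straight from the camera sensor"
signature = private_key.sign(content)

# Consumer: verify the content against the creator's public key.
try:
    public_key.verify(signature, content)
    print("Provenance verified: content unmodified since signing.")
except InvalidSignature:
    print("Content was altered after signing.")
```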

Similar proposals are in varying stages of development for watermarking generative AI outputs and digitally signing photographic images immediately after capture, within camera sensors.

Confidential Computing

The path taken by any important data today can be complicated.

For example, powerful AI-based image processing is now built into many smartphone cameras; the automatic manipulation of regular photographs can be controversial, even if it’s not intended to mislead.

And the importance of Big Data and AI in all sorts of customer management and decision support systems has led to strengthened consumer protections (most notably in the European Union’s AI Act) to provide algorithmic accountability and explainability.

So, data now flows through complex and increasingly automated supply chains. Signing important data “at the source” isn’t enough when it goes through so many perfectly legitimate processing stages before reaching a consumer or a decision maker. Data may be transformed by AI systems that have been shaped by vastly greater volumes of training data. Moreover, those AI models may be evolving in real time, so the state of an algorithm or software program might be just as important to a computation as the input data was.  

And we haven’t even touched on all the cryptographic key management needed for reliable signing and scalable verification.

For these reasons and more, there is an urgent need to safeguard data value chains in their entirety — from the rawest of raw data, before it leaves the silicon, through all processing and transformations. We are approaching a point in the growth of computing where every change to every piece of data needs to be accounted for.
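
A toy way to see what accounting for every change could mean: commit each processing step to a hash that also covers the full history before it, so a verifier can detect tampering anywhere in the chain. This only illustrates the bookkeeping idea; real confidential computing relies on hardware enclaves and remote attestation.

```python
# Toy hash chain over a data value chain: each step's hash commits to the
# data and to everything before it. Real confidential computing uses hardware
# enclaves and attestation; this only shows the bookkeeping idea.
import hashlib

def step_hash(prev_hash: str, step_name: str, data: bytes) -> str:
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(step_name.encode())
    h.update(data)
    return h.hexdigest()

head = step_hash("genesis", "raw-sensor-capture", b"raw pixels")
for name, data in [("denoise", b"cleaned pixels"), ("ai-enhance", b"enhanced pixels")]:
    head = step_hash(head, name, data)

print("chain head:", head)  # changes if any earlier step is tampered with
```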

Such a degree of control might seem fanciful but the Confidential Computing movement has the vision and, moreover, the key technology players that are needed to fundamentally harden every single link in the data supply ecosystem.  


 


Enterprises leading with AI plan next genAI, agentic AI phases

Enterprises that are leaders in artificial intelligence are evolving quickly and laying out plans for their next phases. The next phase of genAI deployments revolves around integrating AI into the business, optimizing processes, and improving agility and scale. You should watch these companies closely for the road ahead.

When I think of leading AI companies, I'm usually looking for the buy side of the enterprise equation. These companies are deploying AI as well as integrating it into their business. For instance, Rocket spent years deploying its data platform and working with machine learning and AI models. When generative AI hit the enterprise, Rocket had all the DNA to move quickly and leverage the technology to deliver better customer experiences.

Intuit was in a similar position and has seen its bets on data science and then AI pay off. It has an Investor Day coming up that will feature a lot of strategy and AI. JPMorgan Chase is another player that has moved to the next phase of its genAI strategy; the upshot is that it is now integrating AI operations into its various business units.


With that backdrop, it's prudent to think of customer stories and case studies as living documents. Consider Rocket. Constellation Insights documented Rocket's use of AI and strategy (PDF) in May. Since then, the company has named a new Chief Technology Officer and outlined its next steps in its AI progression. Perhaps the biggest takeaway from Rocket is that there are no overnight AI successes. Rocket has spent $500 million over the last five years on Rocket Logic, its proprietary loan origination system that uses AI to streamline income verification, document processing and underwriting.

Simply put, AI isn't merely lift and shift. The work is never done.

Rocket CEO Varun Krishna said the company's "super stack" of technology is critical. "What makes our super stack special from a technology perspective? We have created a groundbreaking new architecture. It's data powered, humanity driven, and self-learning. This engine fuels every aspect of our ecosystem and we've spent years perfecting it. It's now driving efficiency, velocity and experience across the company," he said.

Heather Lovier, Chief Operating Officer at Rocket, said the technology stack has been applied to every process at Rocket. In underwriting, the company has "taken complex processes and the categories of income, property, asset and credit and broken them down into hundreds of thousands of discrete tasks in order to apply automation and AI." That approach also extends into the experience layer. Lovier said it's a game of inches and continual improvement with AI. "We've been working on AI long before it was sexy," she said.

During Rocket's investor day, CTO Shawn Malhotra, who started in May after being Head of Engineering and Product Development at Thomson Reuters, laid out the plan. First, Malhotra outlined how Rocket's previous technology decisions left it in good shape for genAI. "AI is not new. It's been around for a while and it's been powerful for a while. Rocket delivered its first production AI models back in 2012," said Malhotra. "We now have more than 200 AI models in production adding real value for our business and our clients. It's important to remember that AI is not a what, it's a how that enables powerful outcomes. Those outcomes are what matter to our clients and our business."

For Rocket, those outcomes revolve around using AI to enhance the "entire homeownership journey" with more seamless processes that are efficient and personalized. This strategy means AI touches every customer touchpoint. Rocket works with Amazon Web Services and Anthropic to meld third-party models and proprietary systems. "We're always going to focus on our secret sauce and our proprietary AI models, but then we're going to deeply partner with the world's best to get great large language models across domains," said Malhotra. "We're not going to just buy this from them. We're going to co-create."

GenAI is also about productivity. Rocket said closing times for refinancing and home equity loans are now 30% to 45% faster than industry benchmarks. The company also uses AI to process more than 300,000 transcripts from client calls weekly to extract data points.
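
Rocket hasn't published its pipeline, but transcript data-point extraction is straightforward to sketch. The following is a minimal, hypothetical example using the Anthropic Python SDK (Rocket partners with Anthropic, per above); the model name, prompt and extracted fields are assumptions.

```python
# Hypothetical sketch of LLM-based data-point extraction from a call
# transcript using the Anthropic Python SDK (pip install anthropic).
# Model name, prompt and field names are placeholder assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

transcript = "Client: I'd like to refinance. My income is around $95,000..."

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{
        "role": "user",
        "content": (
            "Extract loan-relevant data points from this call transcript as "
            "JSON with keys: intent, stated_income, property_type.\n\n"
            + transcript
        ),
    }],
)
print(message.content[0].text)  # e.g. {"intent": "refinance", ...}
```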

Malhotra said the next phase of AI for Rocket is about accelerating experiences in the homeownership journey. GenAI will be deployed more in marketing automation, customer interactions and servicing. Rocket also plans to double its use of AI in software development and customer-facing operations.

"We're helping our developers produce more code automatically in the last 30 days. We estimate that 27% of our code over a quarter was written by the AI. That's a good start, but soon, we're going to be doubling the number of developers who are using these tools," said Malhotra.

AI-powered chat across all digital platforms was another key theme. Malhotra said the models will be improved to better blend human empathy into interactions.

He outlined the following AI services that are being enhanced or rolled out:

Rocket Data Platform: Malhotra said the data platform needs continual improvement, with new features such as ID resolution, more data ingestion and democratized access via natural language.

Rocket Exchange: A platform with 122 proprietary models powered by 6TB of data that allows the company to accurately price mortgage-backed securities in less than a minute.

Rocket Logic Assistant: A personal assistant for mortgage bankers, Rocket Logic Assistant transcribes calls, auto-completes mortgage applications, and extracts key data points. This helps bankers focus on relationship-building and customer service rather than manual tasks. Rocket plans to expand its capabilities and automate more of the mortgage process.

Pathfinder and Rocket Logic Synopsis: These tools use generative AI to assist customer service agents by reducing the time needed to resolve customer queries. This has led to a 68% reduction in the time to resolve client requests.

Rocket Navigator will be built out so non-technical team members can leverage AI and contribute to product development.

Rocket said it is increasingly focused on developing AI tools to improve mid-funnel experiences with a focus on personalization and long-tail engagement. These tools will deliver bespoke guidance, educational content and tailored recommendations to potential homebuyers over time.

The company also hinted that it wants to develop AI-powered real estate tools to help homebuyers search for properties, get insights and automate more of the home-buying process beyond mortgages. This effort is likely to be built on data from Rocket Homes and other properties in its ecosystem.



The Future of Data Protection

I recently released my latest Constellation ShortList™ for “Data Protection Infostructure”. In this blog post and the next, I drill into what these sorts of solutions are seeking to do.

“Data protection” in many parts of the world is simply synonymous with data privacy.  For instance, the General Data Protection Regulation (GDPR) is a data privacy regime; it is very specifically about limiting the flow of personal information, a special class of data.  Further, Europeans tend to refer to privacy regulators as Data Protection Authorities, and the conventional privacy compliance tool is the Data Protection Impact Assessment.  

So “Data Protection” in Europe has a narrow, even technical, meaning.

Now, regular readers will know I am a huge fan of regulating the collection, use and disclosure of personal information. A great many problems of the digital era, from Surveillance Capitalism to Deep Fakes, can be tackled by more strenuous and creative application of the regular privacy rules featured in most legal systems.

Nevertheless, there is more to data protection than privacy. Privacy by its nature is restrictive. I’d like to spark a broader discussion about what it is about data that needs protecting. We could begin by asking, What is it that makes data valuable?  

First let’s review how security professionals think about data.

Conventional wisdom in data security is that threats to information assets can be viewed in three dimensions: Confidentiality, Integrity and Availability (“C-I-A”). Different asset classes need protection to different degrees along each of these dimensions. For instance, patient information needs to be especially confidential, but medical records also need high availability to be useful at the point of care, and high integrity (error resistance) to keep patients safe.

On the other hand, historical employee records — often retained for legal reasons for seven years or more — might not need to be highly available, so archiving on magnetic tape or even paper is worthwhile to keep personal data away from hackers.

But the “C-I-A” perspective is missing so many of the richer dimensions that make data valuable.

Consider three current hot topics:

  1. Identity Theft is generally perpetrated by data thieves who acquire personal data and use it to impersonate their victims. The problem is that automated identification systems can’t tell if personal data is being presented by the individual concerned or by an imposter (see also my analysis of data breaches).
  2. Deep Fakes are images or audio that look or sound like real people but have actually been synthesised artificially (typically by Generative AI) instead of recording the real thing.
  3. And speaking of AI, there is increasing interest in the history of how models are trained. What sort of training data was used? Was it broad and deep enough to be free of bias? Were people in that data aware that it would be used to train AIs?

Availability, Integrity and Confidentiality are not useful ways to think about safeguarding data in any of these cases. Think about how most LLMs today are trained on "public domain" data. No matter where you stand on the question of creators' intellectual property rights, we would all agree it's too late to make the artworks in question confidential.

Instead of "C", "I" or "A", stakeholders across these and similar examples may want assurances that:

  • personal data submitted by a purported individual opening an account or applying for a job was really presented by that person
  • creative works used to train an AI model have been licensed for use
  • medical data used to train a diagnostic tool has been audited for bias and came from patients who gave informed consent
  • the science behind a diagnostic tool has been properly evaluated, and
  • software used to generate a particular result was version controlled and can be wound back to an earlier release if bugs are found.

From one digital use case to another, there will be different aspects or qualities of the data concerned that make the data fit for purpose — or in other words, valuable.

In my next blog, I will focus on one such dimension that’s missing from the traditional C-I-A picture: the origins of data.

 


Constellation Energy, Microsoft ink nuclear power pact for AI data center

Constellation Energy said that it is restarting Three Mile Island (TMI) Unit 1 and will sell about 835 megawatts of power to Microsoft for AI workloads.

In a release, Constellation Energy said the deal with Microsoft is its largest power purchase agreement. TMI Unit 1 is adjacent to TMI Unit 2, which shut down in 1979 and is being decommissioned. TMI Unit 1 is an independent facility and hasn't been impacted by Unit 2. Constellation Energy bought TMI Unit 1 in 1999.

Nuclear power has seen a resurgence in interest as the electricity grid strains under data center workloads due to generative AI. In addition, technology giants are trying to find a way to power AI workloads and hit carbon neutral goals. Simply put, hyperscale cloud nuclear deals may become more commonplace. In January, Amazon Web Services acquired a data center attached to Talen Energy's nuclear plant. Talen Energy will sell power to AWS.

Generative AI driving interest in nuclear power for data centers

In recent weeks, the drumbeat behind nuclear power as a solution for AI data center needs has picked up. Oracle CTO Larry Ellison talked up nuclear-powered data centers on the company's earnings conference call. Meanwhile, OpenAI CEO Sam Altman is chairman of Oklo, which is touting mini-nuclear reactors that can scale with data centers.

Constellation Energy has advocated for co-locating data centers at nuclear power plants as a way to build out infrastructure for AI quickly.

For Constellation Energy, the Microsoft deal is big. Five years ago, the power company shut down Three Mile Island Unit 1 due to poor economics. In a statement, Joe Dominguez, CEO of Constellation Energy, said "powering industries critical to our nation’s global economic and technological competitiveness, including data centers, requires an abundance of energy that is carbon-free and reliable every hour of every day, and nuclear plants are the only energy sources that can consistently deliver on that promise."

Earlier this week, Microsoft announced a partnership with BlackRock, Global Infrastructure Partners and MGX to invest in data centers and energy infrastructure to power AI.

Speaking on Constellation Energy's second quarter earnings call last month, Dominguez said:

"We're continuing to do well in our discussions and negotiations with data center companies. The simple fact is that data centers are coming and they're essential to America's national security and economic competitiveness. And it's absolutely critical that the U.S. not fall behind it. Time is of the essence. We simply cannot wait years for the data centers that are going to bring transformations."

Dominguez added that sustainability is also playing a role in nuclear and renewable energy demand:

"We're seeing more evidence of our customers, not just data center customers, but customers as a whole, evolving in their sustainability journeys from buying annual clean energy products to starting to match their hourly consumption with clean energy."

Bottom line: Nuclear power is likely to play a big role in the AI factory buildout.

 
