Zoho makes big AI move with launch of Zia LLM, pack of AI agents

Zoho has launched its own large language model, Zia LLM, along with 40 pre-built Zia Agents, a no-code agent builder in Zia Agent Studio and a Model Context Protocol (MCP) server that connects its AI actions with third-party agents. Together, the launches signal that Zoho aims to democratize and differentiate with an AI strategy built on right-sized models of its own, optimizing costs and passing the savings on to customers.

For Zoho, the series of launches fleshes out its agentic AI strategy with the aim of democratizing various use cases, workflows and automation for enterprises of all sizes.

CEO Mani Vembu said Zoho's goal was to build foundational AI internally to better provide value and an integrated approach that "allows us to bring customers around the world cutting edge toolsets at a lower cost."

Zoho's AI strategy prioritizes privacy and value. Its AI models across the Zoho platform aren't trained on consumer data and don't retain customer information. The goal is to use right-sized models that don't break the bank.

Zia LLM was trained and built entirely in India using Nvidia's platform. The foundational model was trained with Zoho product use cases in mind and can handle structured data extraction, summarization, RAG and code generation.

In addition, Zia LLM is a family of three models with 1.3 billion, 2.6 billion and 7 billion parameters, with competitive performance against comparable open source models. Zoho plans to mix and match models for the right context and power-to-performance balance.

Zoho also announced two automatic speech recognition (ASR) models, for English and Hindi, that are optimized for low compute resources. Zoho plans to support more languages in the future.

According to Zoho, it will still support multiple LLM integrations on its platform, including OpenAI's ChatGPT, Llama and DeepSeek, but reckons Zia LLM will offer a better privacy profile since customer data remains on Zoho servers. Part of the cost equation for Zoho customers will be leveraging Zia LLM and AI agents without sending data to cloud providers.

Raju Vegesna, Chief Evangelist at Zoho, said the company isn't initially charging for its LLM or agents until it has a better view of usage and operational costs. "If there is a big operational resource needed for intensive tasks, we may price it, but for now we don't know what it looks like, so we're not charging for anything," he said.

So far, Zia LLM has been deployed in Zoho data centers in the US, India and Europe. The model is being tested for internal use cases across Zoho's app and service portfolio. Zoho said Zia LLM will be available in the months ahead and feature regular updates to increase parameter sizes by the end of 2025.

Zoho said it is also planning to launch a reasoning language model (RLM).

“It’s good to see Zoho charting its unique course into the AI era and now adding its in-house Zia models,” said Constellation Research analyst Holger Mueller. “With its focus on privacy and cost effectiveness, in-house built LLMs are the right strategy for Zoho. Now Zoho has to show that it can keep up with the LLM competition.”

Why build your own LLM? B2B models are different

Zoho decided to build its own LLM from scratch for multiple reasons:

  • Investing in its own LLM would give Zoho downstream benefits that improve its platform and enable new features.
  • Zoho already was having success with dozens of AI models that weren't LLM-based.
  • The company wanted control of the LLM layer since it would be a core part of the platform and the company would need to tweak it continually. "We don't like black boxes," said Vegesna.
  • Cost to performance is critical for enterprises and B2B software providers. By developing its own LLM, Zoho doesn't have to pass on additional costs to customers.

Although Zoho started LLM development within the last two years, two developments accelerated the pace. First, Zoho partnered with Nvidia. "B2B models are different than B2C and part of the technical partnership was about knowledge sharing," said Vegesna, who said Nvidia was more experienced with B2C. "With B2B, you don't worry about broader concepts as much. You narrow things down because you don't need the biggest model for every single task."

Vegesna said open source models such as Llama and DeepSeek, both supported by Zoho, also provided insights that improved development after it was underway; work on Zia LLM started before those open source options appeared.

Zoho also had insights on how to develop Zia LLM from its own visibility into how APIs were used. Narrow models and non-LLMs were often used. Vegesna said Zoho is focused on using the right model for the right costs and optimizing for workflows.

"For the majority of use cases, narrow to smaller models will do the job," said Vegesna.
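The right-sizing idea can be sketched as a trivial router that picks the smallest adequate model for a task. This is purely illustrative; the tier assignments below are assumptions for the sake of the example, not Zoho's actual routing logic:

```python
# Illustrative only: map task types to the smallest model tier that
# plausibly handles them. The assignments are assumptions, not Zoho's.
MODEL_TIERS = {
    "1.3B": {"extraction", "summarization"},
    "2.6B": {"rag"},
    "7B": {"code_generation"},
}

def pick_model(task: str) -> str:
    """Return the smallest model tier registered for a task type."""
    for tier in ("1.3B", "2.6B", "7B"):  # check smallest first
        if task in MODEL_TIERS[tier]:
            return tier
    return "7B"  # fall back to the largest model for unknown tasks

print(pick_model("summarization"))  # → 1.3B
```

Routing the common, narrow tasks to the small models is where the cost savings come from; only unknown or heavyweight work falls through to the 7B model.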

Having observability into its own platform enabled Zoho to optimize Zia LLM for the most common scenarios, which should keep costs low. Vegesna said Zoho will continue to let customers use third-party LLMs, and the company hosts top open source models such as Llama, DeepSeek and Alibaba's Qwen.

"The customer will decide on the models used and Zoho LLMs will be an option," said Vegesna. "We have customers that don't want to rely on third party LLMs and we saw many of them taking open source models and optimizing them for their environments. Now that we have that core technology, we can play the long game."

The agentic AI play

Zoho's strategy for AI agents is to offer dozens of prebuilt agents that can perform actions based on enterprise roles such as sales development, customer support and account management. The company's 40 prebuilt agents will be native in Zoho Marketplace and available for quick deployment in Zoho apps.

Zia Agents can be used within a Zoho app, across the company's stack of 55 applications or customized to specific use cases.

A few of the prebuilt Zia Agents include:

  • A new version of Ask Zia, which is a conversational assistant for data engineers, analysts and data scientists, but can democratize information for business users. Ask Zia is set up to address pain points faced by each persona.
  • Customer Service Agent, which processes incoming customer requests, understands context and answers directly or offloads to a human. This agent will be integrated into Zoho Desk.
  • Deal Analyzer, which provides insights on win probability and next-best actions.
  • Revenue Growth Specialist, which looks for opportunities to upsell and cross-sell existing customers.
  • Candidate Screener, which identifies candidates for job openings based on role, skills, experience and other attributes.

Ask Zia agents for finance teams and customer support teams will be added.

Building and connecting agents

A big part of Zoho's AI agent plan is Zia Agent Studio, which was announced earlier this year, but has been revamped to be fully prompt-based with an option for low code.

Zia Agent Studio can build agents that can be deployed autonomously, triggered with rule-based automation or called into customer conversations.

Zoho is betting that its ecosystem of 130 million users, 55 apps and its own developers can fuel the Agent Marketplace to cover multiple use cases. Agent Marketplace now has a dedicated section for AI agents.

The company said its MCP server is designed to work across multiple applications and runs natively in Zia Agent Studio. At launch, Zoho has exposed an early library of actions from more than 15 Zoho applications.
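Under the hood, MCP uses JSON-RPC 2.0: a client invokes a server-exposed action with a `tools/call` request. A minimal Python sketch of what such a call might look like (the `zoho_crm.create_lead` action name and its arguments are hypothetical; Zoho has not published its action catalog):

```python
import json

def make_mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical action name for illustration only.
msg = make_mcp_tool_call(1, "zoho_crm.create_lead",
                         {"last_name": "Doe", "company": "Acme"})
print(msg)
```

Because the request shape is standardized, any MCP-aware agent, Zoho's or a third party's, can discover and call the same actions without bespoke integration code.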

Zoho said each Zia Agent will be assigned a unique ID and mapped as a digital employee so enterprises can analyze and audit performance and workflows with guardrails.

According to Zoho, Agent2Agent (A2A) protocol support will be added to enable collaboration with agents on other platforms.

General availability for Zia LLM will be at the end of 2025. Zia Agents, Zia Agent Studio, Agent Marketplace and Zoho MCP Server are being rolled out to early access customers with general availability at the end of the year.

Going forward, Zoho outlined the following roadmap:

  • Scale Zia LLM model sizes with parameter increases through 2025.
  • Expand available languages used by the speech-to-text models.
  • Introduce a reasoning language model (RLM).
  • Add skills to Ask Zia with a focus on finance teams and customer support teams.
  • Add support for the Agent2Agent protocol.

AI Standards, HPE's Strategy, and DAM | ConstellationTV Episode 109

New ConstellationTV drop! 👀 In episode 109, co-hosts Liz Miller and Holger Mueller unpack summer's tech news landscape, including HPE's evolution and the intersection of #AI, networking, and #cloud technologies... 

Next, Holger explains the emerging AI protocol standards reshaping inter-agent communication. Learn how these frameworks prevent vendor lock-in and create more interoperable AI ecosystems. 🤔 

Wrap it up with a CR #CX Convo with Adobe's Shelly Chiang about AI transforming Digital Asset Management (DAM) from a storage tool to an intelligent, strategic #content engine. Discover how modern DAM supports creativity, brand consistency, and global scalability. 🌎 

Watch the full episode & subscribe to never miss a technology update!

00:00 - Meet the Hosts
01:17 - Enterprise Tech News
11:44 - AI Standards Discussion
16:55 - CR CX Convo with Shelly Chiang
30:23 - Bloopers!


Intuit starts to scale AI agents via AWS

Intuit Chief Data Officer Ashok Srivastava, Ph.D., said the company is now deploying AI agents across its platform, GenOS and products.

Srivastava walked through AI agents deployed on Intuit's platform, which is built on the AWS stack. "Two weeks ago, we formally launched our agent experiences," said Srivastava, speaking during the AWS Summit New York keynote.

We've detailed Intuit's data and generative AI journey. The company has been able to ride inflection points by getting its data architecture right and then leveraging AI. Now Intuit is looking to scale AI agents.

Srivastava kept with the practical AI theme at AWS Summit New York and said that enterprises shouldn't "become enamored with technology" and focus on business goals and outcomes.

He said:

"Don't get caught up in the technology. Use AI only where it's necessary and use rules. Measure your return on investment. Make progress and empower small teams to invest."

A few takeaways from Srivastava:

  • Intuit is focused on providing human experts because they complement its AI offerings.
  • GenOS, Intuit's generative AI operating system, has a runtime designed to orchestrate agents and models.
  • The interface with software will be conversational.
  • Agents are already driving returns and cash flow improvements for small business customers on Intuit.

AWS launches Bedrock AgentCore, custom Nova models

Amazon Web Services launched Amazon Bedrock AgentCore, a set of tools designed to deploy and operate AI agents at scale. AgentCore includes a secure serverless runtime, access to tools and support for open-source frameworks.

In the big picture, AWS is aiming to be the best place to build and run AI agents that can carry out tasks with minimal human involvement. AWS is also looking to give enterprise customers tools that can give them stability in a rapidly changing AI environment.

During a keynote at AWS Summit New York, Swami Sivasubramanian, AWS VP of Agentic AI, laid out the cloud provider's approach to agentic AI. The four pillars to AWS' agentic AI strategy revolve around embracing agility, ensuring security and trust, reliability and scalability and observability.

Those pillars will be critical for enterprises given that AI agents are software systems that feature foundational models, complete tasks, take actions, plan, remember context and learn with minimal oversight. AWS' argument is that the fundamentals of building AI systems are as critical as the near weekly advances in model capabilities.

According to Sivasubramanian, the fundamental frameworks and approaches will matter even more as AI agents scale. There will be billions of AI agents working alongside humans in multiple settings and that scale will bring excitement, complexity and a bevy of concerns.

He said:

"We are focused on making our agentic AI data set accessible to every organization by combining rapid innovation with a strong foundation of security, reliability, and operational excellence. Our approach accelerates progress by building on proven principles."

In the end, Sivasubramanian's talk in New York had a lot to do with the balance of innovation and fundamentals as well as foundational approaches that can change models and underlying technologies. To AWS, a strong foundation and approach enable and accelerate innovation rather than constrain it.

There's also a reality check behind AWS' rather practical approach: Enterprise adoption of AI agents will trail the technology advances and vendor marketing speak.

AWS is setting itself up for AI agent production systems where stability matters and models for most use cases are good enough to last a while. As agentic AI becomes more enterprise ready, basics such as identity, authentication and stability matter.

The goal for AWS is to leverage Amazon Bedrock AgentCore and its partner ecosystem to enable enterprises to go from experiments to production with AI agents designed to run mission-critical business processes. That progression is what has enterprises nervous, Sivasubramanian said.

Here's a look at Amazon Bedrock AgentCore:

  • AgentCore features a secure serverless runtime with session isolation. The agent runtime provides dedicated compute environments for AI agents with session, service and memory isolation leveraging AWS' Nitro abstraction layer.
  • Access to tools and capabilities so AI agents can execute workflows with the right permissions, context and controls.
  • The use of any model or open source framework.
  • Identity services to manage an AI agent's permissions and authenticate it.
  • Built-in checkpointing and recovery for interruptions.
  • Observability that's built in for internal and third-party AI agents.
  • AgentCore Gateway for integration with other agents and various systems.

Early customers in private beta for Amazon Bedrock AgentCore include Autodesk, Cisco and Workday.

Other items from AWS Summit New York include:

Customization for Amazon's Nova models. AWS announced the ability to customize its Nova models for enterprise use cases in SageMaker AI. AWS will provide optimization recipes, model distillation and customization to balance cost and performance. Amazon has launched eight Nova models in six months.

"Over 10,000 customers are already using Amazon Nova. What really matters is that these models have real world impacts," said Rohit Prasad, SVP and Head Scientist for AGI at Amazon.

Nova will also get customization on-demand pricing for inference.

AI agent availability on AWS Marketplace. Customers will be able to buy AI agents and tools within AWS Marketplace. These agents can be acquired with standardized central billing and license management via AWS.

Sivasubramanian said the aim is to make agents easy to deploy. “Now you can test and run AI agent solutions from a range of vendors, then quickly push to production and scale,” he said.

Updates to Amazon Connect and AWS Transform. Both will get specialized AI agents.

Amazon Kiro, an AI-powered developer environment designed to speed agentic systems from concept to production. Kiro takes specifications and designs and turns them into code.


Anthropic follows enterprise software industry playbook, hires Smith as commercial chief

Anthropic is best known for its Claude large language model (LLM), but its enterprise software ambitions are clear as the company builds out its go-to-market team.

The company launched Claude for Financial Services and hired Paul Smith, an alum of ServiceNow, Microsoft and Salesforce. Smith recently stepped down as president of global customer and field operations at ServiceNow. ServiceNow hired him from Salesforce in 2020. ServiceNow CEO Bill McDermott said on the company's first quarter conference call that Smith "scaled our global go-to market organization and together we built a world class team and methodically nurtured the right leaders to take us to 2030 and beyond."

Enterprise software companies typically go horizontal and then drill down into industries. Once you land a big customer in one vertical, others often follow. The go-to-market playbook for enterprise computing has worked repeatedly for the likes of Salesforce, SAP, ServiceNow and Microsoft. Cloud providers are following the same path with targeted offerings for multiple industries.

Now Smith will be expected to do the same for Anthropic, which has emerged as the enterprise and B2B AI player relative to more consumer LLM players. OpenAI tries to straddle the line between business and consumer, but leans toward the latter. 

Anthropic's features, which include positioning Claude as a work collaboration partner, indicate it is more about business. The tight partnership with AWS is another enterprise data point.

Daniela Amodei, President of Anthropic, said the hiring of Smith will "strengthen our commercial organization and help more businesses worldwide become AI-native when he starts later this year.”

Smith will already have some of the enterprise parts at Anthropic, which has launched offerings designed for specific industries. At ServiceNow, Smith oversaw expansions into financial services, public sector and telecom to name a few.

Here's a look at the moves from Anthropic that are positioning the company as an enterprise software player.

  • Anthropic launched Claude for Financial Services, which aims to give finance pros a tool to unify data and feeds with research in a single interface. The aim is to leverage Claude in trading systems, proprietary models, analysis and compliance. Model Context Protocol (MCP) connectors will tie financial data and market intelligence together.
  • The Department of Defense awarded Anthropic a contract to advance AI in defense operations.
  • Anthropic launched Claude for Education with integrations with learning management systems.
  • Anthropic has also targeted workers with collaboration features and workspaces.

With Smith on board, expect more industry rollouts followed by further scaling of the sales organization.


Frontier AI companies land Department of Defense deals

The US Department of Defense has awarded contracts to Anthropic, Google, OpenAI and xAI with a ceiling of $200 million to each vendor to leverage AI models for national security.

The award, announced by the DoD's Chief Digital and Artificial Intelligence Office (CDAO), amounts to a major win for frontier AI companies looking to advance in the public sector.

Google Cloud has the most built-out public sector business of the four companies awarded contracts by the DoD.

According to CDAO, the awards to the AI companies are aimed at developing "agentic AI workflows across a variety of mission areas."

Chief Digital and AI Officer Dr. Doug Matty said in a release that the awards are part of a strategy to implement commercial AI tools first. “Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our joint mission essential tasks in our warfighting domain as well as intelligence, business, and enterprise information systems," said Matty.

In a Google Cloud post, the company said the DoD can deploy using its Contiguous United States (CONUS) infrastructure for AI. This infrastructure leverages Google Cloud tools like TPUs, Agentspace and broader offerings in a separate Google Public Sector cloud.

Anthropic noted that it is building out its public sector efforts and gaining traction by leveraging Claude at the Lawrence Livermore National Laboratory and in US defense workflows with Palantir. Anthropic also has a government version of Claude called Claude Gov for national security customers built on top of AWS infrastructure.

For its part, OpenAI recently launched its government initiatives and has had a tailored version of ChatGPT for US government agencies since January. OpenAI for Government promises expertise and custom models for the Defense Department. xAI launched Grok For Government alongside the DoD contract.

Here's a look at the DoD's AI readiness framework.


AWS launches Kiro, an IDE powered by AI agents

Amazon Web Services launched Kiro, an integrated development environment (IDE) that uses AI agents to move from prompt to prototype to production.

In a blog post launching Kiro, AWS executives Nikhil Swaminathan and Deepak Singh explained that the last step, production, is where applications often fall over.

Kiro is in line with AWS' approach to creating tools to make deployment easier. Kiro is an IDE that allows you to go from concept to prototype quickly via conversations about specifications and designs.

"As a user, you interact with it, and it creates these specifications and designs which then make for very reliable, robust code over time," said Singh, who noted that Kiro rhymes with efforts like Amazon Connect, an AI-based customer service system, and AWS Transform, which modernizes applications with AI agents.

Kiro was launched ahead of AWS Summit in New York, which is expected to feature a heavy dose of AI agent news. "Kiro is great at ‘vibe coding’ but goes way beyond that—Kiro’s strength is getting those prototypes into production systems with features such as specs and hooks," said Swaminathan and Singh.

Here's a look at Kiro components:

  • Kiro specs, which are artifacts that are useful for refactor work and upfront planning. Specs are designed to guide AI agents to better implementations.
  • Kiro hooks, which are automations that trigger an agent to execute a task in the background.

The AWS blog post walks through a few examples of Kiro specs and hooks and how they can move an application along to production.

In addition, Kiro includes code editor features as well as Model Context Protocol (MCP) support, agentic AI chat for coding, various plugins, steering rules and context generated from documentation.

According to AWS, Kiro is part of a broader vision to make software easier to build, make it enterprise-ready, eliminate technical debt and preserve institutional knowledge.

Constellation Research analyst Holger Mueller said:

"Software development and coding are not the same anymore in the era of genAI and it starts with AI agents plugging into the IDE, the 'couch in the developer living room'. The challenge is to find the right balance between in the background vs. in the face - to establish the coveted 'vibe' setup. We will see in a few weeks if Kiro got that right."


AI is your new co-founder, core and creative and engineering muse

Artificial intelligence is going to rewrite how businesses operate. To thrive, enterprises will need to create new categories instead of chasing existing markets, drop the obsession with transformation and add-on approaches, treat AI as a co-founder and cut the latency from idea to prototype to near zero.

Those are some of the takeaways from the latest DisrupTV episode, which was essentially a college course in an hour.

Christopher Lochhead, 13-time #1 bestselling co-author and "godfather" of Category Design, and Sunil Karkera, Founder & Chief Engineer at Soul Of The Machine, were on DisrupTV providing a glimpse of what it'll take to build an AI-native company.

Here are some of the takeaways from DisrupTV Episode 403.

Avoid the "Existing Market Trap" in AI investments. Lochhead said in doing the research for "The Existing Market Trap: (a Primer) Escaping The 13 Deadly Sins that Destroy Companies, Careers and Portfolios," the data indicates that about $13 trillion in AI startup investments is at risk because too many companies are competing in existing markets rather than creating new ones.

Lochhead said:

"You can't create a new thing and let it be positioned as an old thing. Roughly $13 trillion is about to be lost inside the existing market trap. We have so many AI vendors today chasing existing AI markets. Everybody wants to be ChatGPT, everybody wants to be Claude. Everybody is chasing Nvidia. The big ah-ha here is the companies that will win are companies that choose not to compete in existing markets, but create their own."

AI is core, not an add-on. Successful companies view AI as foundational rather than supplementary, said Lochhead. "AI represents the greatest mega-category-creating technology in the history of humanity, and yet people still suffer from Google Brain. They treat AI like it's Google on steroids. They don't realize they should be creating every single element of their business with AI," said Lochhead. "If you look at the vendors, most of them have the wrong lens on AI."

Lochhead explained that you have to listen to the words and what they're telling you. When a vendor calls AI a companion, copilot or assistant, the company is really saying they're selling software and AI is an add on. "AI is not an add-on to a thing. It is the thing," he said.

Karkera has taken that AI-first approach and run with it to build a company with digital and human labor. Soul of the Machine has been able to scale with a few dozen humans to do what would have taken hundreds. Soul of the Machine also charges differently with its new model.

Karkera said:

"Pricing is gross margin and milestone and outcome based. We're not charging for head count and equal in time. We're not running a spreadsheet for that. There's a big shift in how things are done. Our biggest team members are actually agents. We consciously use them, we drive them, we plan them and they provide so much augmentation to creativity and engineering."

The "Stop, Change, Start" approach and AI. Lochhead said people need to think of businesses and careers as pre-AI things that have run their course. AI is a co-founder of your career and company, and when you build with AI that's how you should treat it. Designing new categories now begins with AI as a co-founder. "Our belief is that if you are not using proprietary AI to do your work you're out of your mind," said Lochhead, who added that companies that don't use AI from the ground up will fall into the existing market trap. OpenAI and Nvidia created new markets and didn't chase existing demand.

"AI is essentially a giant 'stop, change, start,'" said Lochhead. "We're not transforming old businesses."

Karkera's company, Soul of the Machine, is an example of creating new categories. Karkera is a TCS and Wipro veteran and now is designing his startup to be an AI-native disrupter. "AI definitely is our co-founder," he said. "AI is the new soul of the new machine we're building."

"Vibe Creating" meets "Vibe Coding." New collaborative approaches with AI are transforming work: "Just like we've all heard the term vibe coding, which is how you have a conversation with AI to build software, you converse with software about a set of outcomes that you want to create, and the AI goes and builds that," said Lochhead, who added that AI democratizes technical skills.

Karkera said the AI democratization of technology is real. "I have non engineers in my team who are doing amazing engineering right now. It's basically a function of how much you can stretch your brain in terms of the creativity side, more than how much of algorithmic knowledge you have taught from school," he said.

Creative functions hit the same themes. Every creative effort needs to include AI.

Lochhead explained Vibe Creating.

"There's a vibe creating framework that goes like this: Puke prompt, partner, know. What most people understand is we've all been taught for our whole careers that when you interact with tech, you need to be precise. So when you're working with a spreadsheet, you need to have precise numbers. When you're building a document or a business plan or PowerPoint, you want to be precise. Garbage in, garbage out. We've all heard that a million times. Guess what? That's not how AI works."

Lochhead sat down with his AI, Lucy, said he wanted to work on his new book about the existing market trap and gave it a thesis. The thoughts were put into Lucy, and Lochhead went back and forth.

Forward deployed engineering. Karkera said Soul of the Machine's approach is to put engineers at the source of ideas. It's working since Soul of the Machine is landing enterprises at the expense of much bigger consulting firms.

Karkera explained.

"The forward deployed engineer or creator is right where the ideas originate, and the latency between idea and concept is zero. Because of this, you're actually creating and seeing while it's being created. That's a completely new way of doing things. There was an idea that was sketched out on paper by one of my customers. I actually took a picture of it and used AI to convert it to a real working prototype within 15 to 20 minutes. Then I added the customer's design system and it became an application by the end of the day."


The disconnect between tech euphoria and CFOs is jarring

The view from the C-suite is increasingly gloomy as executives navigate policy, inflation and an economy that plays like a reality show with two-week story arcs. It's hard to plan when conditions change every other day.

Yet, technology companies--including a few that barely have revenue--live in a world full of unicorns and rainbows.

Deloitte's CFO Signals survey found that one in three financial chiefs thought it was a good time to take risks. That reading was the lowest since the third quarter of 2024 and well below the 60% of CFOs in the first quarter who said it was a good time to take risks.

The Deloitte CFO survey follows data from Duke University’s Fuqua School of Business and the Federal Reserve Banks of Richmond and Atlanta. The Duke CFO Survey, which closed June 6, found 40% of respondents said tariffs and trade policy were a pressing concern in the second quarter. That percentage was on par with the second quarter of 2020 when there was a pandemic, supply chain disruptions and inflation.


Is the sky falling? Not if you live in the land of AI, quantum, enterprise technology and Wall Street traders.

Yes, we know the line in enterprise technology is that investing in AI and new proofs of concept will boost productivity and save the day. These technology projects and efficiency gains--courtesy of AI agents--will offset inflation, tariffs and whatever else is thrown at companies.

However, the disconnect between CFOs and technology companies is jarring. There is a possibility that CFOs are worrywarts, but it's more likely that technology companies are overconfident in what is an emerging bubble in various categories.

There's also some evidence that technology firms are using Wall Street euphoria to better position themselves for turbulence. See: AI's boom and the questions few ask

Here are a few mileposts to consider.

Emerging tech companies are using stock runs to reposition. Who will be the bag holder?

Quantum computing companies are busy fortifying balance sheets as their stocks continue to surge. The goal for these quantum companies is obvious: Raise cash because quantum computing is going to be a long game.

IonQ is playing the stock run beautifully. The company priced a $1 billion equity offering consisting of more than 14 million shares at $55.49 apiece plus pre-funded warrants. The seven-year warrants convert at $99.88 a share.
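Rough back-of-envelope math on how that $1 billion splits between common shares and pre-funded warrants, assuming roughly 14 million shares (the exact count wasn't disclosed beyond "more than 14 million"):

```python
# Approximate split of IonQ's ~$1B raise, using the figures reported above.
SHARE_PRICE = 55.49          # offering price per common share
SHARES_SOLD = 14_000_000     # "more than 14 million" shares; exact count not given
TOTAL_RAISE = 1_000_000_000  # headline size of the offering

share_proceeds = SHARE_PRICE * SHARES_SOLD      # ~$777M from common shares
warrant_portion = TOTAL_RAISE - share_proceeds  # remainder implied for pre-funded warrants

print(f"Common shares: ${share_proceeds / 1e6:,.0f}M")
print(f"Pre-funded warrants (implied): ${warrant_portion / 1e6:,.0f}M")
```

In other words, the common shares cover roughly three-quarters of the raise, with pre-funded warrants making up the balance.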

IonQ acquires Oxford Ionics for $1.07 billion, gets quantum-on-a-chip technology

All that mumbo jumbo aside, the biggest takeaway is that IonQ will have $1.68 billion in cash to commercialize its quantum offerings. IonQ is also a fan of acquisitions.

Meanwhile, D-Wave raised $400 million with an at-the-market equity offering. Quantum Computing Inc. closed a private placement of stock to raise $200 million. D-Wave and Quantum Computing now have cash and cash equivalents of $815 million and $350 million, respectively.

More stock and debt fun with CoreWeave

CoreWeave bought Core Scientific for $9 billion in stock. Is CoreWeave getting into bitcoin mining? Nope. CoreWeave depended on Core Scientific for data center capacity. By buying Core Scientific, CoreWeave gets to save on leases and control more of its stack.

Shares of CoreWeave have surged roughly 300% since going public three months ago. Michael Intrator, CEO of CoreWeave, was asked repeatedly by analysts why the company had to buy Core Scientific when the existing partnership worked well.

"Owning Core Scientific's high performance data center infrastructure enables us to significantly enhance operating efficiencies and de-risk our future expansion. By controlling the foundational layer of our AI cloud platform, we will enhance the scale, performance and expertise we provide and our customers need to unleash the full potential of artificial intelligence," he said.

There's also a cost of capital issue. Intrator said consolidating Core Scientific allows it to more efficiently finance data centers and lower the cost of capital. "We think it is a pretty material step function in the efficiency of our balance sheet," said Intrator.

CoreWeave as of March 31 had $8.8 billion in debt with cash and equivalents of $1.3 billion.

AI talent free agency goes crazy

Is there anyone that Meta CEO Mark Zuckerberg won't hire? He's building his superintelligence dream team and I'd love to be a fly on the wall to see the egos clash in those early meetings.

Super teams and supergroups sometimes work, but more often than not they fail. Management, teamwork and a common cause matter. Zuck has hired a bunch of mercenaries. And who can blame these AI folks for taking up to a reported $100 million in cash and stock? Bloomberg reported that Meta hired Ruoming Pang, who ran Apple's AI models team, with a deal valued at more than $200 million over several years.

OpenAI CEO Sam Altman has countered with his own hiring of AI engineers. Altman has said AI missionaries will beat mercenaries in the chase for superintelligence.

Anyone who's a big sports fan knows this drill. Pro teams pay out massive contracts to players and many turn out to be busts. This AI talent frenzy will be no different.

Everyday hardware becomes more valuable on AI dreams

And Zuckerberg isn't done. He's also reportedly buying a 5% stake in EssilorLuxottica, which owns Ray-Ban, Oakley and other brands, to control distribution of glasses. Meta and EssilorLuxottica have a successful partnership for smart glasses.

Google has also partnered with Warby Parker. Meta obviously wanted to cut rivals off from partnering with EssilorLuxottica.

Watch what tech executives do, not say

Insider sales at tech companies have been picking up as executives cash out. Nvidia executives are selling and the transaction data for the last 60 days is littered with technology companies.

Barchart.com's tracking data for insider trading shows a lot of red. In fact, a spot check of insider buys appears to revolve around non-tech companies.

You can play around with the data, but there are multiple big insider sales at Atlassian, Nvidia, Dell, Rubrik and Oracle to name a few.

Vendors are busy counting demand well into the future

Oracle CEO Safra Catz told employees the company is off to a strong start in fiscal 2026 and the company signed multiple large cloud deals "including one that is expected to contribute more than $30 billion in annual revenue starting in FY28." OpenAI is reportedly Oracle's big contract.

Now that ability to predict future demand is awesome. Way to go Oracle! A cynic would note that a lot can change in two years. OpenAI is a juggernaut today, but who knows how it is positioned two years from now or whether it’ll be able to pay Oracle.

And while we're talking about future AI demand, there's also an agentic AI fatigue emerging among CxOs. Why? Vendors are talking up AI agents that don't quite work yet, promising ROI that's not there yet and raising prices. CxOs are skeptical, pushing back and about 30 seconds from being really pissed off.

Simply put, it's hard to value demand for technology vendors two years from now when CxOs don't have certainty for more than two weeks. Something has to give.

The view from Constellation Research chief distiller Esteban Kolsky

The CFO sentiment is paired with the board sentiment. A recent survey of board members by PwC found that nearly 40% of them are choosing a de-risking path (what I call a wait-and-see approach) for the first time in many years. Indeed, the share willing to take on risk has never gone below 50% and usually sits in the mid-60% range.

Among the many signals we monitor, board sentiment has always been the most bullish in a down market (never waste a good crisis is a good mantra for boards to impose risk discipline on executives). When questioned on reasons for their bearish behavior, the number one word that arose was uncertainty. We live in a constant state of chaos, with myriad geopolitical hotspots and economic instability. The lack of an explainable vision for the US role in global trade, the dichotomy between economic policies and tax-and-spend initiatives, and declining consumer sentiment all bring a thicker air of uncertainty to board discussions.

Leading indicators that had the economy returning to pre-pandemic levels and investment returning to enterprise technology before the year began have all turned negative.

Does this mean there are no opportunities?

Hardly. There has been no better moment for organizations to clean out their technology stacks and make plans for AI-enhanced, cloud-native ecosystems with optimization models implemented on private platforms. The investments in quantum mentioned above are an indicator, as are investments in AI, enterprise architecture and data.

Next worry area? Lack of talent – and Meta seems to be equally concerned about that. We will find a way out of that, hopefully without spending billions, soon. It’s going to be a very interesting summer, and a fast-and-furious Q3.


Why Apple should buy Perplexity and possibly keep going

Apple needs to jump start its AI strategy and the only way it's going to get there is through acquisitions. Here's why a purchase of Perplexity makes sense and why Apple may want to keep shopping.

Bloomberg reported last month that Apple was discussing a purchase of the AI search startup.

Let's look at why an Apple purchase of Perplexity, reportedly valued at about $14 billion, would make sense.

Apple needs an AI strategy and story. Let's state the obvious: Apple is an AI laggard. Apple's miss on AI is as bad as Microsoft's fail on mobile. I don't think Perplexity will be a magic AI elixir for Apple, but CEO Tim Cook needs a story. Dropping $20 billion on Perplexity is worth it just due to the market cap bump and storyline.

Apple can and should pay up for AI plays because it has to. Would Apple's acquisition of Perplexity be its largest ever? Sure, but Apple has a $3 trillion market cap. Apple bought Beats for about $3 billion in 2014, when it had a market cap of roughly $600 billion based on news reports at the time. Apple could take $100 billion and spread its bets. There's no shame in buying an AI strategy. If Salesforce can pay $27.7 billion for Slack in 2020, Apple can drop some dough on AI plays. Some will work and others won't.

Like any sports team trying to climb out of the basement, Apple needs to overpay for talent.

Perplexity is a media company that will need content licensing and advertising. If you use Perplexity Pro you get the feel that it's part LLM, part Google News, part search, part answer engine and service. I could see Perplexity folding into Apple News easily, leveraging the advertising business and becoming another $20 a month service to keep you in the ecosystem. Perplexity is even cooking up a browser.

And Apple has relationships with publishers that can keep Perplexity out of court perpetually.

Apple needs an AI platform for developers. Yes, Apple has a devoted developer following but needs to hedge its bets and keep them interested. AI is that hedge. Perplexity would hopefully rev up the developer base.

The Google relationship with Apple may not last. One of the big arguments against an Apple purchase of Perplexity is that it would jeopardize search revenue from Google. First, those funds may not last due to either regulation or competition. Besides, Apple and Google have been competitors and business partners for years.

 
