Results

Workday layers generative AI features throughout HCM, Adaptive Planning

Workday launched a series of HCM and Adaptive Planning updates along with generative AI tools for enterprises, managers and developers.

Workday's generative AI rollout and updates across its finance and human resources applications come as ERP vendors are racing to add features that streamline work, bolster productivity and deliver real-time insights. Rivals SAP and Oracle recently outlined generative AI updates across applications.

The company outlined the updates, strategy and approach to generative AI at its Workday Rising annual customer conference.

Executives reiterated that Workday's approach to generative AI is to build its models on the 625 billion transactions processed on its platform. Workday said it will use generative AI to create job descriptions on the fly, analyze and correct contracts to bolster revenue recognition, create knowledge management articles, streamline collections and use text-to-code models embedded in Workday App Builder. Additional generative AI capabilities will be deployed throughout the platform. Workday has said it plans to "offer generous usage based entitlements" for customers that opt in to generative AI functionality. That approach could resonate with enterprise buyers, who are about to get hit with a bevy of generative AI upsells and add-ons.

Aneel Bhusri, Co-Founder and Co-CEO of Workday, said during a keynote that the company has more than 10,000 customers. He outlined Workday's AI strategy. "AI is treated like any other feature in Workday. Turn it on and it's ready to go," he said.

Workday is also focused on clean data and trust. "Transparency and understanding how the models are used mitigates risk," said Bhusri. 

Here's a look at the Workday Rising news:

  • Customers with Workday Human Capital Management (HCM) and Adaptive Planning will have a user interface that delivers workforce planning across finance and HR. The interface within Workday HCM will enable workforce planners to update and create new positions and have them reflected in financial and headcount planning in Adaptive Planning. Workday Adaptive Planning will also have an automated headcount reconciliation process to the position level. All changes can be viewed through a cost impact lens.
  • Workday said Manager Insights Hub is available within Workday HCM. Manager Insights Hub uses AI and machine learning to deliver personalized recommendations that surface opportunities for employees based on skills and interests. Workday HCM will also get Flex Teams, which enables managers to identify talent and assemble teams using Workday Skills Cloud.
  • In addition to generative AI features to hone financial and work processes, Workday launched Workday AI Gateway within its Workday Extend program to provide developers with tools to build apps on Workday AI. Workday Extend was also updated with a no-code and low-code toolset and services to leverage skills analysis, sentiment, document intelligence and forecasts leveraging machine learning and AI. Developers will also be able to access multiple AWS AI services within Workday Extend. The company said Workday Extend Essentials and Workday Extend Professional will be available in late 2023 with AWS AI services available to Workday Extend Professional in the first half of 2024. 
  • Workday Adaptive Planning will have a new user experience based on generative AI and natural language queries. Officials said Workday Adaptive Planning users will be able to surface data, find contextually relevant insights and garner recommended actions based on conversational text. Workday Adaptive Planning will also get new tools for what-if scenarios, automated communication and report scheduling, performance upgrades for dashboards and machine learning enabled forecasts.
  • Workday also launched the Workday AI Marketplace, a curated set of partners with AI models and applications launching in the second quarter of 2024. Accenture, Amazon, Vertex and others are early partners. 

Most of the aforementioned features from Workday begin rolling out to customers within the next 6 to 12 months.

Workday also previewed some interface concepts based on generative AI queries with a focus on "simple and smart experiences."

SAP launches Joule generative AI copilot, goes to 2-year S/4HANA Cloud major release cycle

SAP launched Joule, a generative AI copilot that will be embedded throughout SAP's cloud applications to deliver insights based on the company's platform data and third-party sources. SAP is trying to move its customers to S/4HANA Cloud and dangling innovations such as Joule to prod enterprises off of on-premises deployments.

In addition, SAP said it is moving to a two-year major S/4HANA release cycle, with enhancements via feature packs every six months and seven years of maintenance. S/4HANA 2023 will be released Oct. 11. SAP also said it will make new innovations available for the SAP S/4HANA Private Edition.

The Joule launch, delivered at SAP's Rise Into the Future event, kicks off what will be a series of generative AI announcements at SAP conferences such as SuccessConnect, SAP Spend Connect Live, SAP Customer Experience Live and SAP TechEd throughout October and November.

Joule will be available in SAP SuccessFactors and SAP Start later this year, with SAP S/4HANA Cloud public edition following in early 2024. Further updates on Joule's integration with SAP's platform will come at the company's conferences in the weeks ahead. Pricing remains to be seen. Enterprises will have to start budgeting for Microsoft's Copilot add-ons as well as other monetization models from core vendors including ServiceNow, Salesforce, Adobe and Google.

The idea of generative AI across cloud ERP platforms isn't exactly new. For instance, Oracle at its Cloud World conference outlined generative AI additions across its cloud platform, multiple services and apps covering customer service, analytics and marketing and sales. Salesforce also has a broad generative AI push.

SAP's bet is that Joule will stand out amid the copilot sprawl in enterprise software due to its ability to tap data in mission critical systems, its knowledge of multiple processes and its ability to learn from nearly 300 million enterprise users of SAP cloud applications.

Constellation Research analyst Holger Mueller said:

"SAP keeps innovating around the fringes of ERP core, with key innovations in the core being limited to the green ledger – which is still out. SAP needs to realize two things: It needs to have compelling innovation in the core to make the upgrade to S/4 HANA a ‘no brainer’ business case – and at the same time offer a migration of the strategic extensions that customers will need. And as a planning tool – SAP needs to offer a roadmap for the next 3-5 years – so customers can plan their upgrade strategy."

According to SAP, Joule will be embedded in applications across human resources, finance, supply chain, procurement, and customer experience. Employees will be able to use natural language and get contextual insights. SAP said Joule will be able to flag supply chain issues, surface underperforming regions and generate content.

Here are a few screens illustrating how Joule will be integrated into ERP processes. 

At Sapphire in May, SAP outlined its generative AI ecosystem and partnerships with Microsoft, Google Cloud and IBM as well as investments in Aleph Alpha, Anthropic and Cohere.


MongoDB steps up generative AI rollout across platform

MongoDB launched a series of new features across its platform that illustrates that the company is staying aggressive on adding generative AI tools while remaining focused on developers.

The company outlined a trio of announcements at its MongoDB.local conference in London. Data platforms including Databricks, MongoDB and Snowflake all have different focuses but are beginning to overlap in areas as they race to add generative AI features. See: MongoDB launches Atlas Vector Search, Atlas Stream Processing to enable AI, LLM workloads | Constellation ShortList™ Hybrid-Cloud and MultiCloud NoSQL Databases

Here's a look at MongoDB's moves.

  • MongoDB launched four new features designed to make developers more productive with AI. The company launched MongoDB Relational Migrator, which converts SQL to MongoDB Query API syntax to automate migrations from relational databases, MongoDB Compass to generate queries and aggregations from natural language, MongoDB Atlas Charts for data visualizations from natural language, and a new chatbot in MongoDB Documentation to answer technical questions. All of the aforementioned features are available in preview except for the MongoDB Documentation chatbot, which is available now.
  • For MongoDB Atlas Vector Search, the company added tools to query contextual data and performance improvements to bolster generative AI apps. The company also said it integrated MongoDB Atlas Vector Search with Confluent Cloud to give developers data streams from multiple sources. And MongoDB announced that Dataworkz, Drivly, ExTrac, Inovaare Corporation, NWO.ai, One AI, and VISO Trust are customers of MongoDB Atlas Vector Search, which was announced in June.
  • MongoDB launched MongoDB Atlas for Edge to give enterprises the ability to build data applications across devices, on-premises data centers and the cloud. AWS and Cloneable were cited as partners and customers of the effort. MongoDB said the goal was to enable edge infrastructure with low enough latency to run AI applications and make Internet of Things devices actionable even in rough conditions.
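
Relational Migrator's SQL-to-Query-API conversion boils down to mapping SQL clauses onto MongoDB query documents. As a rough illustration only -- not Relational Migrator's actual logic or output -- here is a toy translation of a narrow `SELECT ... WHERE` pattern into a `find`-style collection, filter and projection; the table and column names are made up:

```python
import re

def translate_simple_select(sql: str):
    """Translate 'SELECT cols FROM table WHERE col = value' into a
    MongoDB-style (collection, filter, projection) triple.
    Toy sketch only -- real migration tools handle full SQL grammars."""
    pattern = (r"SELECT\s+(?P<cols>[\w,\s*]+)\s+FROM\s+(?P<table>\w+)"
               r"(?:\s+WHERE\s+(?P<field>\w+)\s*=\s*'?(?P<value>\w+)'?)?")
    m = re.match(pattern, sql.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported statement")
    cols = [c.strip() for c in m.group("cols").split(",")]
    # SELECT * means "all fields", i.e. no projection document at all.
    projection = None if cols == ["*"] else {c: 1 for c in cols}
    filt = {}
    if m.group("field"):
        value = m.group("value")
        filt[m.group("field")] = int(value) if value.isdigit() else value
    return m.group("table"), filt, projection

collection, filt, projection = translate_simple_select(
    "SELECT name, city FROM customers WHERE country = 'UK'")
print(collection, filt, projection)
# With pymongo this would run as: db[collection].find(filt, projection)
```

The point is not the parser but the shape of the output: a filter document and a projection document instead of SQL clauses, which is exactly the translation MongoDB is now automating.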

Constellation Research analyst Doug Henschen put the MongoDB announcements in perspective. He said:

"MongoDB is staying aggressive and focused on developer needs, which is what has helped to set its platform apart from single-cloud offerings and to win developer adoption and loyalty. All three announcements are important. What impresses me about the Vector Search announcement is that they’re not just pointing to a new generative AI-focused feature and patting themselves on the back. MongoDB is sharing the real-world customer examples of Dataworkz, Drivly, ExTrac, Inovaare Corporation, NWO.ai, One AI, and VISO Trust along with plenty of details on how they’re innovating with generative AI. That makes the announcement come across as much more real and it will help to inspire other customers to start experimenting. On the MongoDB Atlas for the Edge announcement, the company has long had mobile database and sync capabilities, but the announcement makes it clear that they’re driving more consistency and continuity to simplify edge-to-cloud use cases with two-way interactivity. Finally, the generative AI capabilities – NL query and aggregation, NL data visualization, chat-based product support, and SQL translation to the MongoDB Query API syntax – aren’t terribly surprising, and all but the chat feature are still in preview, but it’s a good set of capabilities that shows that MongoDB isn’t being conservative about infusing its products and services with generative AI capabilities."
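
The Atlas Vector Search capability Henschen highlights is exposed as an aggregation stage. Below is a minimal sketch of a `$vectorSearch` pipeline; the index name, field path and query embedding are hypothetical, and actually running it requires an Atlas cluster with a vector search index defined:

```python
# Stand-in for an embedding produced by a real model; dimensions would
# normally match the vectors stored in the collection.
query_embedding = [0.12, -0.07, 0.33]

pipeline = [
    {
        "$vectorSearch": {
            "index": "docs_vector_index",   # hypothetical index name
            "path": "embedding",            # field holding stored vectors
            "queryVector": query_embedding,
            "numCandidates": 100,           # candidates considered per query
            "limit": 5,                     # results returned
        }
    },
    # Keep only the fields the application needs, plus the similarity score.
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# With pymongo and an Atlas connection this would run as:
# results = db.documents.aggregate(pipeline)
print(len(pipeline))
```

The design choice worth noting is that vector search lives inside the same aggregation framework as everything else, so semantic retrieval can be composed with ordinary filters and projections.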

AWS invests up to $4 billion in Anthropic: It's all about AWS Trainium and Inferentia

Amazon Web Services' (AWS) $4 billion investment in Anthropic may be all about the chips--AWS Trainium and Inferentia--for training models.

The deal between Anthropic and AWS puts Anthropic's foundational models, known as Claude and Claude 2, on Amazon Bedrock and makes AWS the primary cloud provider for mission critical workloads. For the record, Anthropic's Claude didn't have an opinion on AWS Trainium and Inferentia. "I don't have a personal opinion on AWS Trainium processors since I'm an AI assistant without subjective experiences," said Claude. 
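
For developers, putting Claude on Amazon Bedrock means invoking it through Bedrock's runtime API rather than Anthropic's own endpoint. A minimal sketch, assuming boto3, AWS credentials and Bedrock model access are in place (the prompt text is made up):

```python
import json

def build_claude_request(user_prompt: str, max_tokens: int = 300) -> dict:
    """Build the request body Bedrock's Claude models expected at launch:
    a Human/Assistant-formatted prompt plus sampling parameters."""
    return {
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    }

def invoke_claude(user_prompt: str) -> str:
    """Call Claude 2 through Amazon Bedrock. Requires boto3, AWS
    credentials, and Bedrock model access enabled in the account."""
    import boto3  # imported here so the payload builder works offline
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps(build_claude_request(user_prompt)),
    )
    return json.loads(response["body"].read())["completion"]

body = build_claude_request("Summarize our Q3 incident postmortems.")
print(body["max_tokens_to_sample"])
```

The payload format matters here: Bedrock keeps each model family's native request schema, so swapping foundation models means changing the body, not just the model ID.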

But here's the item that may have the most long-term impact: "The two companies will also collaborate in the development of future Trainium and Inferentia technology."

That quote took me back to August and Amazon CEO Andy Jassy's take on AWS' home-grown processors. He said Nvidia supply has been scarce and price performance will matter with running large language models. "We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future," said Jassy.

Should Anthropic be able to train its foundational models on AWS' proprietary chips, it'll have huge ramifications.

Anthropic also rounds out the AWS generative AI strategy. AWS will offer compute instances from Nvidia as well as AWS Trainium and Inferentia and up the stack offer a broad selection of foundational models via Amazon Bedrock. At the top of the stack, AWS customers can customize models with proprietary data and fine tuning.

Will IonQ make quantum computing enterprise relevant in 2025?

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

IonQ, seen as one of the major players in quantum computing, is arguing that enterprise relevance for its nascent market will arrive in 2025--well before most observers are expecting.

That argument, made during IonQ's Analyst Day Sept. 19, is notable and may surprise folks who are betting that quantum computing will have enterprise relevance in a decade or so. Chuckle if you will, but I'll argue it's worth hearing IonQ out as it develops its #AQ 64 quantum system for 2025 commercial deployments.

First, let's state the obvious--quantum computing is in the early stage. In some ways, quantum computing is a fascinating market that could be the next big thing after generative AI, cloud, mobile, Internet and personal computing. It's not a question of IF quantum computing takes off, but WHEN.

You can see Constellation Research Shortlists from Holger Mueller to get the lay of the quantum computing land.

IonQ's financials tell the tale of early-stage markets. For instance, IonQ has an estimated $52.5 million in bookings for fiscal 2023, but year-to-date revenue through June 30 was $9.8 million with a net loss of $71.05 million. IonQ's second quarter revenue was $5.5 million, up from $2.6 million a year ago. The company does have more than $500 million in cash to figure things out, and executives noted IonQ will be self-sufficient without raising more dough.

Whether IonQ is worth a market capitalization of more than $3 billion remains to be seen.

CEO Peter Chapman made the argument that IonQ can be one of those generational companies, but in the meantime, it's hiring a lot of talent from the likes of Nvidia, Microsoft, Amazon, Oracle, Uber and Apple. IonQ has also partnered with QuantumBasel to establish a European quantum data center housing its #AQ 35 and #AQ 64 systems and signed a memorandum of understanding with South Korea's Ministry of Science and ICT.

Here's a look at my takeaways from IonQ's Analyst Day. I sat through five hours of presentations and 103 slides, so you didn't have to.

The big picture. Quantum computing will be important, but the big question is when. Chapman argued that quantum computing will be enterprise relevant and solve real problems when its #AQ 64 system scales up in 2025. Chapman said there's a big difference between solving real problems and waiting until quantum supremacy or quantum advantage to do anything. "We do not care about quantum advantage or quantum supremacy," said Chapman. "The test for me is this. Can I solve a customer problem with a better mousetrap at a better price? Our goal is to build quantum computers that solve problems for customers."

IonQ CFO Thomas Kramer added that "it's too late to start a quantum computing company if you wait for quantum supremacy."

IonQ is pragmatic. A deep dive into the company's production and engineering plans revolved around usage of common parts, modularity and rack-mounted systems. Sure, IonQ's presentation was as quantum geeky as the rest of the field, but the company is homed in on solving problems and setting the stage for on-premises deployments, servicing and maintaining systems as easily as possible.

There's a solid roadmap and IonQ has customer references and use cases. IonQ outlined four customers where it has helped develop algorithms that can scale in future quantum systems. The general idea: Develop the algorithms now for quantum so when the compute lands you'll be ready to roll. Those customers were also heavy hitters: Airbus, Hyundai, GE Research and Air Force Research Laboratory.

The company also is self-aware. IonQ has added strong executives, outlined a strong plan and has matured a lot since its October 2021 special purpose acquisition company (SPAC) IPO. Executives noted that IonQ has evolved from an academic and research-driven organization to one that is focused on engineering. The next evolution will be moving from an engineering-focused org to a product-focused one. That evolution to be product focused will be "the next phase over the next several years," said Chapman.

Today, IonQ is engineering driven and that means tackling issues like error correction. Chapman and his team noted that #AQ 64 may not require error correction since it'll be able to mitigate issues ahead of time. "Error mitigation is a statistical approach to remove errors before systems are delivered to the customer," said Chapman. IonQ is pursuing both options at this phase of #AQ 64 development.

IonQ is more of a services firm today. IonQ is public but still an early-stage company that talks in terms of bookings and interest without actual sales. IonQ looks like more of a services firm as it develops products, algorithms and its ecosystem. The company will sell hardware and has multiple revenue options, but today it's helping customers with know-how, proofs of concept, applications and use cases. This approach isn't surprising and there's precedent. Palantir and C3 AI were more consulting and services firms before becoming more product focused.

Manufacturing and supply chain are big unknowns. IonQ is building its manufacturing facility in Seattle, but it remains to be seen if it can deliver quantum systems at scale. IonQ has to build out its supply chain, source components, vertically integrate as needed and decide what parts it needs to create itself. IonQ's Seattle Manufacturing Hub and Data Center is set to start manufacturing in the fourth quarter.

Software will be critical. IonQ said that its software approach will be critical for everything from reducing noise in quantum systems and mitigating errors to connecting to the broader ecosystem that'll include quantum processors, GPUs and CPUs. In addition, software will need to be developed for quantum systems while still working with classical computing.

What does the revenue model look like in the future? Executives walked through the go-to-market approach and future revenue streams. Production systems for commercial, government and academia will create hardware revenue, but there will also be access agreements, usage-based and work-completed models. Broadly speaking, IonQ revenue drivers in the future include:

  • Application co-development where IonQ partners with companies to develop end-to-end quantum systems.
  • Partner cloud access via hyperscale cloud providers.
  • Preferred computing agreements.
  • Dedicated hardware. "We are seeing sustained interest from multiple parties," said Kramer. "Hardware will be sold and quantum will run on-premises."
  • Apps and software.

Chapman added that there's a lot of interest in quantum networking and that has potential too. IonQ could also be involved with designing products with quantum systems. "At some point in the future, we'll be doing designs and getting royalties for things like battery design and drug discovery," said Chapman. "If 10 to 15 years from now our only source of revenue is systems sales, we somehow failed."

Final thought. It's easy to argue that IonQ will simply be roadkill for much larger players including IBM, Google, Nvidia and a bevy of others. Then again, IonQ has a pragmatic approach and focus that potential rivals don't have. For now, track the quantum computing space in a future file.

HOT TAKE: Salesforce Announces Airkit.ai Intention and Advances Promise for Easy Trusted AI

The dust hasn’t even settled from breaking down the mega-campground known as Dreamforce, and already we see Salesforce following through on a promise made in those crowded halls of Moscone Center: AI should be simple, trusted and available for all to deploy in meaningful ways. Salesforce has announced its intention to acquire Airkit.ai, a low-code/no-code bot builder that has been hot in the customer service and contact center space with its easy-to-deploy-and-manage AI-powered agents.

Airkit.ai’s claim to fame has largely grown around a belief that AI-empowered bots should do far more than spit back FAQ and simple help responses. Instead, Airkit.ai has encouraged users to think beyond answers and into more proactive, rich and resolution-driving engagements. This has found a natural sweet spot in commerce, where the outcome of a bot experience is measured in positive (and profitable) experiences as opposed to engagement deflection as the purpose of a self-service motion.

What We Know About The Deal: Not much.

In the press release officially launching the intention news, Salesforce declined to provide any terms or dollar amounts for the deal and noted that it would not be disclosing any further details about the acquisition. What we DO know is that this is not the first time Salesforce has acquired a company from Airkit.ai’s founders, Adam Evans and Stephen Shikian. In 2014, the duo sold their previous company, RelateIQ, to Salesforce for $390 million, making them part of this Salesforce boomerang trend that has seen former employees AND former entrepreneurs make their way back to the fold. For what it’s worth, Airkit.ai was not an unknown quantity to Salesforce as it was part of the Salesforce Ventures portfolio that, as of late, has been hyper-focused on AI solutions and tools. In fact, Salesforce Ventures was part of Airkit.ai’s initial funding back in 2020. Airkit.ai will become part of Service Cloud and will continue to be led by Evans, who was both Airkit.ai and RelateIQ’s Co-Founder and CTO.

Interestingly, both RelateIQ and Airkit.ai represent building block pieces for Salesforce. When it was time to advance their goal of automation, RelateIQ was a great pickup to leverage unstructured data across things like social networks, chats and calendars to automate the sales process. Now, in this "Data + AI Era" for Salesforce, Airkit.ai helps build AI-powered customer service agents that learn from everything from business policies to customer data from transactions or previous engagements.

The acquisition is expected to close in the second half of Salesforce’s fiscal year 2024.

What This Means for Salesforce: Bots everywhere!

In the early days of Salesforce’s Einstein strategy, bots played a central, if not exclusive, role with the introduction of multi-channel and multilingual “Einstein bots” that could automate common tasks and answer common questions. But in this new world of AI and data-enriched, contextual and personalized engagement as the table stakes of a profitable customer experience, these simple answer-focused, deflection-as-outcome bots wouldn’t necessarily be the best of breed. In fact, according to Airkit.ai, leading edge experience-driven brands need to “ditch” these bots that center around “deflection and containment KPIs” in favor of intelligent experiences that embrace “resolution as the ultimate mark of success.”

With Data Cloud in place and the addition of the Einstein Platform and Einstein Studio on top of the Salesforce Platform, the company is now ready to supercharge their customers’ capacity to not just deliver results through AI-powered bots, but have the actual data and AI infrastructure in place to ensure that sales and service teams can safely, quickly and easily deploy digital cross-channel assistants. This is as much of a play for Salesforce’s Service Cloud as it is for the growing portfolio of smart features in Commerce Cloud, as Airkit.ai offers proactive service and commerce engagements that keep customers happy and coming back for more.

The Bottom Line: A Nice Little Pickup That Could Deliver Big Applications

This pickup makes good on the promise that deploying AI tools and experiences shouldn’t require a costly cadre of data scientists and prompt engineers to get an engagement up and rolling. While Airkit.ai will find its initial home under the Service Cloud banner, it is unlikely that the functionality of Airkit.ai’s easy-to-configure-and-deploy bots will stay exclusively in those cloud walls. Expect to see functional use cases ready for deployment right out of the gate as this acquisition closes, with bots and engagements tailored for service, sales and marketing. But I’m also excited to see how David Schmaier and team weave in that strong history of industry-centered expertise and products to extend the Airkit.ai use case far beyond familiar customer service or contact center storylines as Salesforce dives into industry-specific bots that are proactive and contextual to a customer or employee's journey.

IT incidents, response hurdles drive up enterprise costs, says Constellation Research survey

Enterprises are being barraged by IT incidents, face a shortage of skilled personnel and lack the time to follow best practices or automate response processes, according to a new Constellation Research report. And major IT incidents aren't cheap.

In fact, 53% of enterprises have seen anywhere from 3 to 10 major IT incidents in the past 12 months, up from 47% in the previous year. More shocking is that 56% of respondents say at least 50% of incidents could have been avoided with best practices. In addition, 55% of enterprises say major IT incidents have cost their organizations less than $100,000, with 27% putting expenses at $100,000 to $500,000 and 12% citing costs of $500,000 to $1 million.

Those are some of the findings of Constellation Research's recent report, "An Executive Guide to Faster Incident Resolution" by Andy Thurai. The report is based on a survey by Constellation Research and Dimensional Research of more than 300 respondents. A third of respondents were incident responders, a third were their direct managers and another third were budget holders of the incident response unit.

Thurai's report is timely given Cisco's $28 billion acquisition of Splunk and its expansion into the observability, security and AI markets. One key finding in the survey is that 40% of respondents take 10 to 30 minutes just to identify an incident (not resolve it).

One big reason for this time to identify an incident is enterprises have siloed IT observability tools. Other enterprises have legacy tools that are slow. And assuming observability tools work well, users can quickly run into alert fatigue and miss critical incidents.

Simply put, a combined Cisco-Splunk observability portfolio makes sense as enterprises look to consolidate vendors in the field.

It remains to be seen whether Cisco with Splunk can help enterprises respond to IT incidents. In the meantime, here are a few not-so-fun facts from Thurai's report.

  • 46% of respondents said automating resolution response to IT incidents was the biggest area to improve.
  • 36% said they could improve incident identification and resolution.
  • 49% of all incidents are straightforward and responses could be automated.
  • 64% said AI would be critical to identifying the root cause of incidents.
  • 34% said the top reasons for major IT incidents were manual processes and human error.

Microsoft's Copilot enterprise upsell begins Nov. 1, Copilot fatigue will follow

Microsoft 365 Copilot will be available to enterprises Nov. 1 in a move that will test the limits of the add-on approach to cloud services and create a new condition: Copilot fatigue.

At an event allegedly focused on Microsoft Surface hardware, the software and cloud giant outlined plans to roll out Copilot to Windows 11 with more than 150 new features. Microsoft is also adding OpenAI's latest DALL-E model to Bing and updating Bing Chat Enterprise.

But the real experiment begins Nov. 1. Enterprises will have to start game planning for Microsoft's $30 per user per month add-on for Microsoft 365 E3, E5, Business Standard and Business Premium customers. Do you simply add all of your employees? Pick a few core functions? Wait and see? Another possibility: Enterprises will have to start budgeting for Microsoft's Copilot add-ons as well as other monetization models from core vendors including ServiceNow, Salesforce, Adobe and Google.
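
The back-of-the-envelope math shows why the game planning matters; the seat counts below are hypothetical, and $30 is Microsoft's announced list price:

```python
def annual_copilot_cost(seats: int, per_seat_monthly: float = 30.0) -> float:
    """Annual list-price cost of a per-user-per-month add-on."""
    return seats * per_seat_monthly * 12

# Hypothetical rollout scenarios at $30 per user per month:
for seats in (100, 1_000, 10_000):
    print(f"{seats:>6} seats -> ${annual_copilot_cost(seats):,.0f} per year")
```

At 10,000 seats that's $3.6 million a year at list price, before any other vendor's generative AI add-ons are counted.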

Add it up and you can easily see how generative AI add-ons are going to be like your streaming subscriptions. You ditched cable for streaming only to find that all you created was a DIY cable bundle. Enterprises ditched perpetual software licensing for the cloud only to find more per-seat subscription charges for software they may or may not use. The really scary thought: Generative AI (Copilot) may just give Clippy more brainpower and scale.

Microsoft's Copilot fiesta in Windows is one area where these new use cases have gone too far. Do I really want copilots in Paint and Notepad? Those two programs are popular because they're kind of dumb. If I wanted smart, I'd use Photoshop and Word. Sometimes you just don't want or need the overkill. I feel the same way about appliances: I prefer dumb and dependable. Your operating system should be the same way.

Multiply Microsoft's Copilot plans by similar efforts from Google (Duet AI everywhere) and other helpful models offering insights and recommendations at every turn, and I'm already fatigued.

Next step: HR sessions talking about copilot fatigue. In a few years, we'll have therapy sessions about copilots not allowing us to think, being annoying and simply helping us too much. You can almost hear the rank-and-file workers telling HR reps that their copilots hallucinate, lie, are too demanding and won't shut up. Even worse: That copilot in FP&A almost got someone fired.

Copilot fatigue will happen. You heard it here first.  


Cisco acquires Splunk in $28 billion observability, AI and cybersecurity play

Cisco said it will acquire Splunk for $28 billion, or $157 a share, in a deal that will give the networking giant a big play in security, AI and observability.

On a conference call, CEO Chuck Robbins said the Splunk deal (statement, Cisco blog) will make Cisco one of the largest software companies. It also transforms Cisco by positioning the company in the growth markets of AI, security and observability. "The value of data only increases. That's why this deal makes sense," said Robbins. "Our combined capabilities will provide an end-to-end data platform."

Robbins added that the two companies' product lines are complementary and provide "full observability for the entire IT stack." He added that Cisco has a big opportunity to expand into AI-driven networks.

Splunk CEO Gary Steele will join Cisco's executive team and report to Robbins. "Uniting with Cisco represents the next phase of Splunk's growth journey," said Steele, adding that the two companies have compatible cultures and the scale to invest and expand market reach.

Among the key details:

  • Splunk will add $4 billion in ARR to Cisco.
  • Cisco expects the deal to be cash flow positive and gross margin accretive in the first year after the close and non-GAAP accretive in the second year.
  • The deal won't impact Cisco's stock buyback or dividend program.
  • The acquisition is expected to close in the third quarter of calendar 2024.

Observability is becoming a hot space. For instance, New Relic recently went private in a deal valued at $6.5 billion.

"There's a natural synergy when you can handle threat detection and security with AI. That's what you get with Cisco and Splunk," said Constellation Research CEO Ray Wang. "Customers get better network security and Splunk gets a key home and Cisco has a better story that drives AI valuation."

Robbins said the combination of Cisco's security business and data flow from Splunk can enable the company to solve more enterprise issues. "We also think there's an opportunity to expand our global presence," said Robbins. Steele added that international expansion can also boost Splunk and expand go-to-market opportunities. Although the companies said there isn't much product overlap, Cisco does have its own observability platform in the same areas as Splunk. 

The combination of Cisco and Splunk may also help enterprises looking to improve incident response. In a recent research report, Constellation Research analyst Andy Thurai found that 57% of incident response teams have more issues than they can handle.

Here are the Constellation Research Shortlists where Splunk appears.

Constellation Research's take

Given Cisco's reach in the enterprise technology market, there are a bevy of analysts covering the company. Here's how the Constellation Research team assessed the deal.

Andy Thurai, the Constellation Research analyst covering the AIOps and observability markets, offered the following takeaways:

  • The first thought that comes to mind is that this deal is a very natural fit. Cisco entered the observability race with its acquisitions of AppDynamics and ThousandEyes and has been trying to build a full-stack observability (FSO) platform for a while. Adding Splunk to the mix brings true full-stack observability capabilities across APM, DEM, logs and other observability signals, plus SIEM, to add to Cisco's own network monitoring.
  • Given that there is very little product overlap, Cisco gets a big customer base and potential upsell opportunity. A big TAM expansion.
  • Cisco's XDR is well-established and has been around for a while. Adding Splunk SIEM to the mix, assuming the companies can integrate soon, will be a huge boost. It will be hard to integrate two big platforms with considerable technical debt built over the years.
  • Cisco is known for channel selling. It can push Splunk and the FSO platform through the channel when it is ready.
  • Some nervous customers are already reaching out and asking for opinions and strategies on what to do. The pricing strategy will be a mess for a while. Splunk has recently moved to mostly consumption-based pricing. Cisco needs to figure out how to integrate this pricing model with its model soon.
  • Though the deal rationale suggests taking advantage of AI, security, and observability, I don't see it as much in AI. I see the synergies in security and observability. Neither company is a leading player in applied AI in their solutions. Splunk is ahead of Cisco on that front, but both need to catch up.
  • Cisco took a while to integrate AppDynamics and ThousandEyes after those acquisitions. I hope this integration goes more smoothly. Splunk was already struggling to digest too many acquisitions of its own: TruStar, TwinWave, Phantom Cyber and Metafor on the security side, and Flowmill, Rigor, Plumbr, SignalFx, Omnition and VictorOps on the observability side. My advice would be to dump the smaller, less useful ones to concentrate on the bigger goal.
  • While the acquisition price seems high, this gives Cisco an opportunity to expand its TAM. The observability market is growing and if Cisco and Splunk can integrate their platforms soon they can win on observability. Overall, this deal is good for both companies.  

Constellation Research analyst Dion Hinchcliffe said enterprise buyers will be thinking about vendor consolidation and pricing:

"Cisco's acquisition of Splunk is a bold move with the potential to reshape a key new sector of the IT industry. While there is a good bit of skepticism about Cisco's ability to preserve Splunk's culture of innovation, its massive customer base and global reach will likely help Splunk achieve even greater growth. CIOs will be watching closely to see if Cisco can address their growing concerns about vendor consolidation, with resulting higher prices, and see if they deliver a successful outcome for both companies' customers."

Constellation Research analyst Doug Henschen said:

"Cisco tried to get into enterprise software by acquiring Composite Software a few years back and it didn't work out. Splunk and AppDynamics, in contrast, are a closer fit for Cisco, being focused on network and log data analysis, not enterprise data integration like Composite (now part of TIBCO)."

Holger Mueller, Constellation Research analyst, added:

"Cisco long ago realized its future cannot be the network alone. This is its boldest move yet to understand what is happening in the network, and on the network. If executed right, it will be a better and more vertically integrated Cisco than before, thus creating value for CxOs."

 


Amazon's new Alexa devices get new LLM, generative AI spin

Amazon held an event to highlight new devices, and a large language model (LLM) and generative AI tutorial broke out.

In a sign that products are becoming more about algorithms and generative AI than hardware design, the star of Amazon's Devices and Services event was an LLM that makes Alexa more conversational, absorbs real-time information and reliably makes the right API calls. Amazon's LLM is also proactive and can use the company's Visual ID to know when you're about to talk and to read body language. Amazon's Echo devices are a consumer use case with an enterprise-grade LLM behind them.

Simply put, the new LLM should negate the need to repeatedly say "Alexa." Interactions should also become less transactional. Dave Limp, outgoing chief of Amazon's devices unit, said the new proprietary LLM is built on five foundational capabilities.

  • Conversation. Amazon has drawn on nine years of data about what makes a conversation. Conversations are built on words, body language, eye contact and gestures. That's why the model is built so Alexa can recognize those cues on screened devices.
  • Real-world applications. Alexa isn't just a chat box in a browser. As a result, it has to interact with APIs and make correct choices.
  • Personalization. An LLM in the home must be personalized to you and your family.
  • Personality. A more conversational Alexa will be able to have more opinions to go with jokes.
  • Trust. Performance with privacy matters.

Limp's demo with Alexa and its new LLM illustrated some key upgrades. Alexa was able to stop and start a conversation and remember previous context. Amazon said the revamped Alexa will roll out early next year.

Rohit Prasad, senior vice president and head scientist of Amazon Artificial General Intelligence, said what makes Alexa's LLM unique is that it "doesn't just tell you things but does things."

As a result, Amazon tuned the LLM for voice as well as multiple points of context. Real-time connections to APIs will make Alexa better integrated into the smart home and more natural. Key points about Alexa's new LLM upgrades:

  • Alexa's automatic speech recognition (ASR) system has been revamped with new machine learning models, algorithms and hardware, and is moving to a large text-to-speech (LTTS) model trained on thousands of hours of audio data instead of the hundreds of hours used previously.
  • The ASR model is a multibillion-parameter model trained on short, goal-oriented speech as well as long-form conversations. Amazon said the large ASR model will move from CPUs to hardware-accelerated processing and use frames of input speech based on 30-millisecond snapshots of the speech signal frequency spectrum.
  • Alexa's new speech-to-speech model is LLM based and produces output directly from input speech. The move will enable Alexa to laugh and have conversational tools.
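As a rough illustration of what 30-millisecond framing means in practice, here's a minimal sketch of an ASR front end slicing a waveform into frames. Amazon hasn't published these implementation details; the 16 kHz sample rate and 10-millisecond hop used below are common ASR defaults assumed purely for illustration.

```python
def frame_signal(samples, sample_rate=16000, frame_ms=30, hop_ms=10):
    """Slice a waveform into overlapping fixed-length frames.

    Each frame covers frame_ms of audio; consecutive frames start
    hop_ms apart, so neighboring frames overlap (a typical ASR
    front-end arrangement, not Amazon's published design).
    """
    frame_len = int(sample_rate * frame_ms / 1000)  # 480 samples at 16 kHz
    hop_len = int(sample_rate * hop_ms / 1000)      # 160 samples at 16 kHz
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop_len)]

one_second = [0.0] * 16000          # one second of silence at 16 kHz
frames = frame_signal(one_second)   # 98 frames of 480 samples each
```

Each of those short frames would then be converted to a frequency-spectrum snapshot (e.g., via an FFT) before being fed to the acoustic model, which is what the "30-millisecond snapshots of the speech signal frequency spectrum" detail above refers to.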

While the models stole the show, Amazon launched a series of features including automatic lighting, call translation and emergency assist services to go along with hardware including the Echo Show 8, Echo Hub and new Fire TV Sticks and Fire TVs with generative AI updates. 
