
How Baker Hughes used AI, LLMs for ESG materiality assessments


Baker Hughes' Marie Merle Caekebeke admits she was a bit skeptical about artificial intelligence, but she wanted a way to speed up environmental, social and governance (ESG) materiality assessments so her team could focus on the big picture and stakeholder needs at the energy technology company.

"I was actually quite pleasantly surprised,” said Caekebeke, Sustainability Executive – Strategic Engagement, Baker Hughes. "I wanted the individuals on my team to take ownership of sustainability and to move the needle on progress. I felt that we could leverage a machine, but the decisions will be made by individuals."

Caekebeke, a 2023 SuperNova Award winner in the ESG category, started with a C3 AI pilot to parse 3,500 stakeholder documents in nine weeks and train natural language processing and large language models (LLMs) to identify and label paragraphs aligned to ESG topics via more than 1,700 training labels. The project quickly went to production, saving roughly 30,000 hours against the two-year manual cycle previously needed to complete the ESG materiality assessment. Today, Baker Hughes' sustainability executives can be more proactive with stakeholders.

Baker Hughes is an energy technology company that specializes in oil field services and equipment and industrial energy technology. The company aims to be a sustainability pioneer that minimizes environmental impact and maximizes social benefits.

Speaking on Baker Hughes' third-quarter earnings conference call, CEO Lorenzo Simonelli said the company sees strong orders for natural gas markets and electric machinery. The company's plan revolves around delivering financial results while investing in the future, said Simonelli.

"We are focused on our strategic framework of transforming our core to strengthen our margin and returns profile, while also investing for growth and positioning for new frontiers in the energy transition," said Simonelli, who noted that the company is working through three time frames. In 2027, Baker Hughes expects to focus on investing to solidify the company's presence in new energy and industrial sectors with an emphasis on decarbonization in 2030.

Simonelli added that Baker Hughes' execution over the coming years will position it to compete in carbon capture, usage and storage (CCUS), hydrogen, clean power and geothermal. "We expect decarbonization solutions to be a fundamental component, and in most cases, a prerequisite for energy projects, regardless of the end market. The need for smarter, more efficient energy solutions and emissions management will have firmly extended into the industrial sector," said Simonelli, who said Baker Hughes will focus on industry-specific use cases.

Baker Hughes is projecting new energy orders will grow to $6 billion to $7 billion in 2030 from $600 million to $700 million in 2023.

With that backdrop, Baker Hughes' sustainability team has to keep tabs on emerging trends and topics across multiple sources and ultimately customize the insights for various stakeholders, said Caekebeke. In other words, materiality assessments for ESG will become more of a living document.

"The sustainability space is shifting so quickly that I wanted more strategic engagements with our stakeholders," she said. "We're always going to have customer conversations; we're always going to have investor conversations and speak to our employees as well. But I wanted something to supplement it and look at those topics that matter to our stakeholders, weigh information and make sense for our assessment."

The project

Baker Hughes publishes a biennial ESG assessment that informs strategy at the company and creates a listening exercise for internal and external stakeholders. With the assessment, Baker Hughes aligns its strategic priorities and commercial strategy.

Caekebeke said the project started by weighting sources and information by trustworthiness. For instance, filings with the Securities and Exchange Commission (SEC), sustainability reports and annual reports had a higher weighting than something like social media, where "everyone is a sustainability expert," she said. Reports from customers, competitors, investors and NGOs were also included.

In nine weeks, the data collection was complete and then Caekebeke's team focused on stakeholder expectations by role and what kind of decisions needed to be made. The lens of the project wasn't about automation as much as it was priorities. "We have a strong sustainability team, and I had enough humans and employees," explained Caekebeke. "It wasn't about running out of sweat equity as much as it was wanting individuals on my team focusing on implementation and change rather than manual tasks."

Baker Hughes, a long-time C3 AI customer, already had a strong partnership, systems in place and data. Caekebeke said C3 AI is a "progress partner" and more strategic vendor. "We reached out to see what C3 AI had and then continued to build together a solution that would demonstrate the ROI and then create sound data to make decisions on," she added.

Previously, Baker Hughes manually collected interviews, surveys, and documents on around 50 topics. At first, Caekebeke's team took a subset of those topics for a pilot. The team also narrowed down the list of stakeholders in the pilot. Baker Hughes collected employee insights and feedback from community resource groups within the company. Once KPIs, users and objectives were defined and the pilot proved the use case worked, the C3 AI application expanded topics, targeted a full list of stakeholders, and went into production.

One critical project consideration was identifying topics and aligning them to roles. "I wanted a tool that would be nimble enough that if I wanted to run a report only on emissions, I could do that. If I wanted to run a report only on just transition and how environmental justice was playing a part, especially after a key event, I could do that too," said Caekebeke.

Using C3 AI as a platform, Baker Hughes was able to train LLMs to fill gaps in the ESG materiality process (see the sketch after this list), including:

  • Parsing 3,500 stakeholder documents to produce more than 400,000 paragraphs.
  • Training natural language processing machine learning pipelines to identify and label paragraphs aligned with ESG topics and training labels.
  • Deploying a workflow to compute time series ESG materiality scores for source documents at the paragraph, document, stakeholder and stakeholder group levels.
  • Configuring an interface to visually represent ESG scores, analysis, evidence packages and benchmarks.
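
At its core, the labeling step is multi-label text classification followed by a score roll-up. The snippet below is a hypothetical, heavily simplified sketch of that idea using scikit-learn; it is not Baker Hughes' or C3 AI's actual pipeline, and the topics, paragraphs and roll-up logic are invented for illustration.

```python
# Hypothetical sketch of ESG paragraph labeling and a naive materiality roll-up.
# Not the C3 AI implementation; topics and texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# A handful of illustrative labeled paragraphs (the real project used 1,700+ training labels).
train_paragraphs = [
    "We reduced Scope 1 emissions by 12% through methane abatement.",
    "Our workforce programs expanded apprenticeships in local communities.",
    "The board adopted a new supplier code of conduct and audit process.",
]
train_topics = [["emissions"], ["workforce", "community"], ["governance"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(train_topics)  # one binary column per ESG topic

# TF-IDF features feeding one binary classifier per topic.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(train_paragraphs, y)

# Score new stakeholder paragraphs, then roll paragraph scores up to a document-level score.
new_paragraphs = [
    "Regulators are tightening methane disclosure requirements for operators.",
    "Community groups raised concerns about local hiring commitments.",
]
paragraph_scores = model.predict_proba(new_paragraphs)  # shape: (paragraphs, topics)
document_scores = paragraph_scores.mean(axis=0)         # naive roll-up per topic

for topic, score in zip(binarizer.classes_, document_scores):
    print(f"{topic}: {score:.2f}")
```

In the production system described above, comparable scores were also computed as time series at the document, stakeholder and stakeholder-group levels rather than a single average.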

The returns boiled down to time. A human would have taken 790 hours to analyze the volume of content for the ESG materiality report, while the C3 AI ESG application took less than an hour and was able to focus on the 10% of relevant content. The manual process required nearly 30,000 hours and a 2-year cycle time to complete the ESG assessment without AI.

What's next

Caekebeke said the AI-driven ESG materiality process will enable Baker Hughes to keep better tabs on new topics, impacts of events, legislation and policy around the world. "We work in over 120 countries. We have 55,000 employees so we have a broad reach. And so, it is important to really look across the world at what's happening," she said.

Going forward, the plan is to use more AI to drive decisions faster with more data transparency. Caekebeke said using AI is also likely to curb unconscious bias in ESG materiality assessments.

"When we were looking at the way that we did it manually, you will have some stakeholders that answer the surveys and then ones that don't. If you make an analysis, you read through their information and then essentially translate that into what you think you're hearing. But there's that kind of unconscious bias that we all have as we're reading through it," said Caekebeke "An engine doesn't really have that bias."

Caekebeke is also betting that the C3 AI ESG application will connect dots between environmental impacts and social issues.

"Where communities are marginalized, they are feeling the deepest impact of climate change. Those areas are also where you have human rights violations and people that are not making a fair wage," said Caekebeke. "It's about looking at ESG holistically and leveraging AI to look at it so you could draw some parallels."

Baker Hughes released its sustainability framework in April and the goal is to use the lessons from the C3 AI tool to deploy the strategy across the organization. "Moving forward in 2024 is about making sure that the deployment of our sustainability strategy is well understood and that initiatives are pushed all the way to the deepest level," said Caekebeke. "My vision is that I want everyone to have that same focus on sustainability, understand the value and understand our environmental and social footprint. For 2024, it will be a deeper engagement with employees all from the top to the bottom, across regions where we work and also across the functions."

Lessons learned

Caekebeke said the project surfaced a few lessons learned about the intersection of ESG and AI. Here's the breakdown.

Get high level support from executives. Baker Hughes leadership supported the effort and that helped overcome concerns about using AI. "There's a lot of skepticism around AI. Some people love it. Some people are nervous. There should be a bit of both," she said.

Governance is critical. Caekebeke said governance should be laid out in advance of pilots and deployment.

Have a strong partner. Caekebeke said that C3 AI worked closely with her team to customize the application and produce something that works with transparency. Training models requires collaboration and back and forth between customer and vendor teams.

Time is a core metric. "We are mindful of the fact that as sustainability requirements are increasing, people have less time," she said.

Start small. There are so many metrics to follow in ESG, but it's critical to narrow them down to the ones that are risks to your enterprise. "It's easier actually to build that up than to go the other way around. A lot of the times we want to please every stakeholder you know, and it's important to listen, but then you have to prioritize," said Caekebeke.

Efficiency and optimization are also sustainability. Internal stakeholders need to realize that "when you make something efficient, you're also making it more sustainable," said Caekebeke.

Keep iterating. "I was an AI skeptic. And I was really surprised to see the efficiency of the tool to the point where we're now in production phase, and we're working on the next iteration," said Caekebeke. "Pick the three or four things you want to do this year and then the next phase, so you have measurable projects from year to year. Just incremental steps in the right direction will really help the company move forward."


Microsoft uses Oracle Cloud Infrastructure for Bing conversational workloads

Microsoft is using Oracle Cloud Infrastructure for its Microsoft Bing generative AI searches.

Oracle announced the multi-year agreement with Microsoft in a press release. What we don't know is whether Microsoft is using Oracle Cloud to handle overflow Bing workloads or shifting them entirely due to efficiency and/or the procurement of Nvidia GPUs.

The two companies recently outlined a partnership. Oracle also fired up Nvidia-powered instances and apparently has been able to procure GPUs. Oracle and Microsoft have a history of partnership announcements that seem to be refreshed often. For instance, Oracle and Microsoft outlined an interoperability partnership between clouds in 2019. In 2022, the two companies announced the general availability of Oracle Database Service for Microsoft Azure.

Also: Oracle adds vector search to Oracle Database 23c, melds generative AI, transactional data | Oracle's Q1 better than expected and Ellison loves generative AI

Although Microsoft CEO Satya Nadella and Oracle CTO Larry Ellison may seem like odd bedfellows, both companies have joint customers and mutual rivals in AWS and Google Cloud.

The Oracle Cloud-Bing announcement could also simply be a headliner use case for enterprises. Microsoft is using Oracle Cloud along with its Azure AI infrastructure for inferencing for Bing and managed services such as Azure Kubernetes Service to orchestrate workloads. The connection also uses Oracle Interconnect for Microsoft Azure.

Constellation Research analyst Holger Mueller said that "generative AI has the potential to change the cloud market landscape." Time will tell if the Microsoft-Oracle partnership for Bing conversational workloads is a data point for cloud leadership changes.

Mueller made the following points about reading the tea leaves with the Oracle and Microsoft partnership. 

  • It is a milestone because no cloud vendor has ever moved internal workloads, or any workloads, to a partner/competitor.
  • It is a sign that Microsoft may be maxed out on capacity.
  • It is a sign that Microsoft needs to, and does, charge customers more for capacity than it wants to spend on it internally.
  • Oracle gets the workload of the second-largest search engine for generative AI search and evidently has the capacity.
  • Oracle seems to get Nvidia chips at a better rate than Microsoft.


OpenAI launches GPTs as it courts developers, models for use cases

OpenAI launched new developer tools, models and GPTs designed for specific use cases.

On its first developer day, OpenAI moved to expand its ecosystem, enable developers, and leverage the popularity of its models so they can be customized.

For OpenAI, the set of announcements brings it closer to where enterprises are going--smaller large language models (LLMs) and generative AI that is tailored to tasks. These task-specific models--called GPTs--will roll out today to ChatGPT Plus and Enterprise users. Constellation Research analyst Holger Mueller said:

"Buried in the press release in a side sentence is what is the biggest challenge in enterprises adopting LLMs. Important information is in OLTP systems and can't be accessed by LLMs. If the OpenAI GPTs capability to access OlTP databases work - we will enter the next generative AI era." 

In a blog post, OpenAI said:

"Since launching ChatGPT people have been asking for ways to customize ChatGPT to fit specific ways that they use it. We launched Custom Instructions in July that let you set some preferences, but requests for more control kept coming. Many power users maintain a list of carefully crafted prompts and instruction sets, manually copying them into ChatGPT. GPTs now do all of that for you."

While OpenAI's ChatGPT is being used for context specific use cases in Microsoft productivity applications, the company is also looking to put its own mark on its models. The game plan for OpenAI is to build a community and launch a GPT Store, which will feature GPTs across a broad range of categories. Developers will get a cut of the proceeds from the GPT Store.

Key points about GPTs:

  • Your chats with GPTs are not shared with developers. If a GPT uses third-party APIs, you control what data can be sent to the API and what can be used for training.
  • Developers can use plug-ins and connect to real-world data sources.
  • Enterprises can use internal-only GPTs with ChatGPT Enterprise. These GPTs can be customized for use cases, departments, proprietary data and business units. Amgen, Bain and Square are early customers.
  • OpenAI also launched Copyright Shield, which will indemnify customers across ChatGPT Enterprise and the developer platform.

That backdrop of GPTs complements a bevy of other OpenAI launches that are more aligned with where the LLM market is headed.

Related: Why generative AI workloads will be distributed locally | Software development becomes generative AI's flagship use case | Enterprises seeing savings, productivity gains from generative AI | Get ready for a parade of domain specific LLMs

Here's the breakdown; a hedged code sketch of the new request parameters follows the list.

  • OpenAI launched GPT-4 Turbo, which is up to date through April 2023 and has a 128k context window. Input tokens are priced 3x cheaper and output tokens 2x cheaper relative to GPT-4.
  • GPT-4 Turbo will also be able to generate captions and analyze real-world images in detail and read documents with figures.
  • DALL-E 3 has been updated to programmatically generate images and designs. Prices start at 4 cents per image generated.
  • The company improved its function calling features, which let developers describe functions of their apps or external APIs to models. Developers can now call multiple functions in a single message, with improved accuracy.
  • GPT-4 Turbo supports OpenAI's new JSON mode.
  • OpenAI added reproducible outputs to its models for more consistent returns. Log probabilities will also be released so developers can improve features.
  • The company released the Assistants API in beta for developers to build agent-like experiences in applications. These assistants use Code Interpreter to write and run Python code, Retrieval to leverage knowledge outside of OpenAI models and Function calling.
  • Developers will also get text-to-speech APIs to generate human-quality speech.
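
For developers, several of these items surface as new request parameters rather than new products. The snippet below is a hedged sketch assuming the openai Python SDK v1 chat completions interface; the model name, the get_order_status tool and the prompts are placeholders for illustration, not details from OpenAI's announcement.

```python
# Sketch of function calling, JSON mode and the seed parameter in one place.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Improved function calling: describe an app function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical application function, not an OpenAI API
        "description": "Look up the status of a customer order",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

tool_response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # placeholder for the GPT-4 Turbo preview model name
    messages=[{"role": "user", "content": "Where is order 4812?"}],
    tools=tools,
)
print(tool_response.choices[0].message.tool_calls)

# JSON mode plus the new seed parameter for more reproducible outputs.
json_response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user",
               "content": "List three ESG topics as a JSON object under the key 'topics'."}],
    response_format={"type": "json_object"},
    seed=42,
)
print(json_response.choices[0].message.content)
```

When the model opts to call the declared tool, the application is expected to run the function itself and return the result in a follow-up message; the sketch stops at inspecting the model's response.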

As for pricing, the new models are designed to lower costs for developers.

Data to Decisions Innovation & Product-led Growth Future of Work Tech Optimization Next-Generation Customer Experience Digital Safety, Privacy & Cybersecurity openai ML Machine Learning LLMs Agentic AI Generative AI Robotics AI Analytics Automation Quantum Computing Cloud Digital Transformation Disruptive Technology Enterprise IT Enterprise Acceleration Enterprise Software Next Gen Apps IoT Blockchain Leadership VR Chief Information Officer Chief Data Officer Chief Executive Officer Chief Technology Officer Chief AI Officer Chief Analytics Officer Chief Information Security Officer Chief Product Officer

Palo Alto Networks acquires Talon Cyber Security, Dig Security

Palo Alto Networks said it has acquired Talon Cyber Security, an enterprise browser security startup. The move comes days after the company acquired Dig Security.

Talon Cyber Security aims to address attacks via unmanaged devices with its Talon Enterprise Browser. The browser will be combined with Palo Alto Networks' Prisma SASE platform to protect unmanaged endpoints that connect to SaaS enterprise applications.

Dig Security is a startup focused on data security posture management, or DSPM.

Terms of the deals weren't disclosed, but TechCrunch put the figure at $400 million for Dig Security. Talon Cyber Security reportedly went for $625 million.

The two deals highlight how Palo Alto Networks plans to acquire startups that can help build out its platform. 

According to Palo Alto Networks, generative AI adoption will require enterprises to take control of sensitive data across cloud services, databases, vector databases and platform as a service. Dig Security's technology gives enterprises the ability to discover, classify, monitor and protect sensitive data wherever it resides on the cloud.

Like Talon's technology, Palo Alto Networks said Dig Security's DSPM platform will be integrated into its Prisma Cloud.

There's a race among Palo Alto Networks, CrowdStrike, Zscaler and a host of others to create next-gen security platforms powered by AI.


Ignorance of AI is no excuse

Understanding and explaining the workings of artificial brains—particularly deep neural networks—has been a problem for a decade or so. Some AI entrepreneurs seem almost to boast that they don't know how their creations work, as if mysteriousness is proof of real intelligence. But algorithmic transparency is being mandated in new European legislation so that individuals have better recourse when they are adversely affected by robots miscalculating their credit or health insurance risks.

I want to discuss another reason regulators have for getting inside the black box of AI: accountability under data privacy regimes.

The power of conventional privacy laws

Large language models (LLMs) and generative AI are making it hard to tell fact from fiction. Some commentators, with great care, call this an existential threat to social institutions and social order. Naturally, there are calls for new regulations. Such reforms could take many years.

But I see untapped power to regulate AI in the existing principles-based privacy laws that prevail worldwide, a famous example being Europe's General Data Protection Regulation (GDPR).

I have written elsewhere about the “superpower” of orthodox data privacy laws. These are based on the idea of personal data, broadly defined as essentially any information which may be associated with an identifiable natural person. Data privacy laws such as the GDPR (not to mention 162 national statutes) seek to restrain the collection, use and disclosure of personal data.

Generally speaking, these laws are technology neutral; they are blind to the manner in which personal data is collected.

This means that when algorithms produce data that is personally identifiable, those algorithms and their operators are in scope for privacy laws in most places around the world.

Surprise!

Time and time again, technologists are taken by surprise by the privacy obligations of automated personal data flows:

  • In 2011, German regulators found that Facebook’s photo tag suggestions violated privacy law. The company was ordered to cease facial recognition and delete its biometric data sets. Facebook prudently went further, suspending tag suggestions worldwide for many years. See also this previous analysis of tag suggestions as a form of personal data collection.  
  • The counter-intuitive Right to be Forgotten (RTBF) first emerged as such in the 2014 European Court of Justice case Google Spain v AEPD and Mario Costeja González. Often misunderstood, the case was not about "forgetting" anything in general but specifically about de-indexing web search results. The narrow scope serves to highlight that personal data generated by algorithms (for that's what search results are) is covered by privacy law. In my view, search results are not simple replicas of objective facts found in the public domain; they are the outcomes of complex Big Data processes.

What’s next?

The legal reality is straightforward. If personal data comes, by any means, to be held in an information system, then the organisation in charge of that system may be deemed to have collected that personal data and thus is subject to applicable data privacy laws.

As we have seen, privacy commissioners have thrown the book at analytics and Big Data.

AI may be next.

Being responsible for personal data, no matter what

If a large language model acquires knowledge about identifiable people—whether by deep learning or the gossip of simulacra—then that knowledge is personal data and the model’s operators may be accountable for it under data privacy rules.

Neural networks represent knowledge in weird and wonderful ways, quite unlike regular file storage and computer memory. It is notoriously hard to pinpoint where these AIs store their data.

But here’s the thing: privacy law probably doesn’t care about that design detail, because the effect still amounts to collection of personal data.

If a computer running a deep learning algorithm has inferred or extracted or uncovered or interpolated fresh personal data about individuals, then its operator has legal obligations to describe the data collection in a privacy policy, justify the collection, limit the collection to a specific purpose, and limit reuse of the collected personal data. In the privacy laws I have read, there is nothing to indicate that an information system based on neural networks will be treated any differently from one written in COBOL and running on a mainframe.

Privacy law usually gives individuals the right to request a copy of all personal data that a company holds about them.  In some jurisdictions, individuals have a qualified right to have personal data erased.

I am not a lawyer but I can't see that owners of deep learning systems holding personal data can excuse themselves from technology-neutral privacy law just because they don't know exactly how the data got there. Nor can they logically get around the right to erasure by appealing to the sheer difficulty of selectively removing knowledge that is distributed throughout a neural network. Such difficulty may be seen as the result of their own design and decision-making.

And if selective erasure of specific personal data is impossible with these black boxes, then the worst case scenario for the field of AI may be that data protection regulators rule the whole class of technology to be non-compliant with standard privacy principles.

Are you getting prepared for AI? 

Constellation is developing new AI preparedness tools to help organisations evaluate the regulatory and safety implications of machine learning. Get in touch if you'd like to know more about this research, or to exchange views.


Why generative AI workloads will be distributed locally

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

Generative AI workloads have been dominated by Nvidia, a massive cloud buildout and compute that comes at a premium. I'm willing to bet that in a year, we'll be talking about distributed compute for model training and more workloads on edge devices ranging from servers to PCs to even smartphones.

On earnings conference calls, generative AI is still a common theme, but there's a subtle shift toward the commoditization of the compute behind large language model (LLM) training and a hybrid approach that leverages devices built with generative AI-capable processors.

"There's a market shift towards local inferencing. It's a nod to both the necessity of data privacy and an answer to cloud-based inference cost," said Intel CEO Pat Gelsinger on the company's third quarter earnings conference call.

Here's a quick tour of what's bubbling up for local compute powered generative AI.

Amazon CEO Andy Jassy said:

"In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change."

Indeed, the change coming for generative AI is going to revolve around local compute that's distributed.

Here's why I think we may get to distributed model training sooner than the industry currently thinks:

  • Enterprises are building out generative AI infrastructure that often revolves around Nvidia, which needs competition but right now has an open field and the margins to prove it.
  • The generative AI price tag is tolerated today because the low-hanging productivity gains are still being harvested. If you can improve software development productivity by 50%, who is going to sweat the compute costs? Pick your use case and the returns are there at the moment.
  • But those easy returns are likely to disappear in the next 12 months. There will be more returns on investment, but compute costs will begin to matter.
  • Companies will also gravitate to smaller models designed for specific use cases. These models, by the way, will need less compute.
  • Good enough processors and accelerators will be used to train large language models (LLMs)--especially for cases where a fast turnaround isn't required. Expect AWS' Inferentia and Trainium to garner workloads as well as AMD GPUs. Intel, which is looking to cover the spectrum of AI use cases, can even benefit.
  • The good enough model training approach is likely to extend to leveraging edge devices for compute. For privacy and lower costs, smartphones, PCs and other edge devices are going to be equipped and ready to leverage local compute.

Ultimately, I wouldn't be surprised if we get to a peer-to-peer or Hadoop/MapReduce-ish approach to generative AI compute.
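
To make the "local inferencing" idea concrete, the sketch below runs a small open model on a laptop-class machine with the Hugging Face transformers library instead of calling a hosted API. The model choice (distilgpt2) and the prompt are arbitrary examples, not recommendations.

```python
# Minimal local-inference illustration: a small open model running on local hardware.
from transformers import pipeline

# Downloads a ~350MB model once, then all inference happens on the local device.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Summarize why local inference can cut costs:",
    max_new_tokens=40,
    do_sample=False,  # deterministic output for repeatable demos
)
print(result[0]["generated_text"])
```

Swap in a larger quantized model and an accelerator-equipped edge device and you get the hybrid pattern described above: private, lower-cost inference close to the data, with the cloud reserved for training and the heaviest workloads.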


Disrupting yourself can hurt even if it makes sense, just ask Paycom

A common mantra among technology companies is that it's better to disrupt yourself than let a competitor do it. On the whiteboard that mantra makes long-term sense. In reality, a business model transition can crush your stock.

Just ask Paycom, a human capital management software provider that created Beti, a service that dramatically reduces payroll errors and drives value for customers. The concept is shockingly simple: Give employees access to payroll to fix errors before the checks are cut.

Paycom shares were hammered this week because Beti is doing away with unscheduled payroll runs and error fixes. What's wrong with generating returns for customers? A more perfect payroll means fewer billable items for Paycom, which charged for corrections and unscheduled payrolls.

Here’s a look at the carnage (hint: lower right on chart is where “PAYC” wound up.)

To its credit, Paycom is playing the long game with customers. Sure, the business model transition hurts a bit, but other companies have gone through it. Think about how software companies transitioned from licenses to subscriptions after Adobe led the way.

Beti launched in 2021 and now two-thirds of Paycom's customer base is using it. Paycom is rolling out Beti in Mexico to complement the US and Canada.

Paycom CEO Chad Richison explained the Beti effect.

"For most employees, the value of the perfect payroll is oftentimes immeasurable. If their check is perfect, they don't need to borrow money from a friend or family member to get through the weekend or make a bill payment. How do you measure the value of that?

We're getting better and better at helping employers measure the full value available to them when payrolls are perfect. A portion of that value is easy to calculate because it's the value they receive by the elimination of after-the-fact payroll errors that require correction payroll runs, manual checks, voided checks, direct deposit reversals, additional wires, tax adjustments, W2Cs, et cetera, et cetera.

Perfect payrolls eliminate these common after-the-fact payroll corrections that would otherwise be billable. So the more employees do their own payroll, the greater the savings delivered to the client from Paycom future billings, which results in lower related revenue recognized by Paycom."

Paycom isn't hurting. The company delivered third-quarter revenue of $406 million, up 22% from a year ago, with net income of $75 million, or $1.30 a share. Non-GAAP earnings were $1.77 a share. Third-quarter sales missed guidance because Beti was working too well. In addition, the outlook was light as Paycom projected fourth-quarter revenue between $420 million and $425 million.

In other words, the more customers use Beti, the more value they receive at the expense of Paycom revenue. Paycom is expecting revenue growth in 2024 between 10% and 12%. A customer that had to run 19 payrolls a quarter due to errors can now run 13.

The big question: Should Paycom have disrupted its own services business? It's a conundrum faced by many companies and the decision isn't easy. Here are a few thoughts about answering that question.

Paycom's long game. Richison said that he's focused on "the client value and the differential between what they're paying and what they're actually achieving." Overall, I don't think you can go wrong providing value to customers.

If Paycom didn't launch Beti someone else would have. I'd argue that payroll errors aren't a feature but a bug. Some startup would have disrupted the error-ridden payroll process with something similar to Beti anyway. Paycom would have lost the services revenue and customers too over time. Shortly after Beti launched, Richison said on Paycom's fourth quarter 2021 conference call:

"For years, I have been predicting the end of the old model whereby HR and payroll personnel’s routine of inputting data for employees is replaced by a self-service model that provides employees direct access to the database.

The old model is dying and that is good for both the business and the employee."

Here's a fun fact: Paycom's fourth quarter revenue in 2021 was $285 million, well below the projected $420 million or so two years later.

Don't forget word of mouth. Paycom received a good amount of attention as shares fell this week. Enterprise buyers who dig a bit will quickly find a "man bites dog" headline as Paycom sacrificed revenue for customer value. Something tells me this Paycom stock tale may wind up being good marketing.

Paycom is adding new customers. Should Beti ramp even further, Paycom will have more enterprise customers on its platform. These customers tend to buy other services later if the vendor delivers.

"New business sales as well as cross-selling within our base has always been a mitigating factor to any type of transition shift, we make like this," said Richison. "New business sales remained strong. In fact, most of the calls we get in are about Beti. We've got our first enterprise rep and they're only targeting deals that have greater than 25,000 employees. And they've got plenty of leads."

Could Paycom have managed this transition better? Perhaps. But Beti appears to be a hit. In other words, take the win with a bit of pain. Disrupt yourself or be disrupted.


Google Cloud CEO Thomas Kurian on DisrupTV: Generative AI will revamp businesses, industries

Google Cloud CEO Thomas Kurian said companies are starting to reimagine their businesses for artificial intelligence one process at a time. The big question is what industries hit scale first. 

Speaking on DisrupTV, Kurian said "in virtually every field, we're seeing people take the persona that they had, and then creating a digital version of that persona using AI and this is happening incredibly quickly."

"The common theme is taking the skills of the model that represents what human beings can do, but now creating a digital persona and assisting a function," he said.

Kurian cited use cases across industries including healthcare, insurance, cosmetics and media to name a few. Many of these industries started out with AI use cases that revolved around classification and categorization. The next phase for AI was prediction using models based on different parameters. Generation is the new phase.

"Generation is the next skill as you train a model with a set of inputs and it can generate output," said Kurian. "You can now put together and automate a complete workflow for the whole company. And this is happening in many places. We see it happening at scale at many around the world."

Related: Cloud customers still optimizing spend and should forever | Google Cloud delivered third quarter revenue of $8.4 billion, up 22% from a year ago, with an operating profit of $266 million

Constellation Research CEO Ray Wang asked Kurian whether incumbent companies or startups had advantages in generative AI. Kurian said:

"We're very early in the market. To do AI well, we need high quality data sets to fine tune the state-of-the-art models. For high quality datasets, you need state of the art models, and you obviously need the infrastructure to serve the models. But then just as importantly, you need to integrate these into the application surface.

The companies that succeed will have capabilities for state-of-the-art models that are driven by high quality datasets that they own, and the ability to activate these models within an account, within the context of an application surface.

There will obviously be disruptors and they will take a function of process that was done in one way and fundamentally change it. You will see disruptions in different industries, where the fundamental business model itself may change.

"The winners are always those that solve a fundamental problem materially."

Going forward, generative AI will be democratized and simplified. AI will be an enabler for new technologies and access. In other words, we'll all become programmers to some degree. Kurian said:

"The code generation model will generate the skeleton code and create the environment. You can say 'hey, I want to support a million and a half users with less than 10 seconds latency of 10 milliseconds latency and I need to guarantee four nines of availability' and the system will do it for you. There's no reason that that cannot be done with where models are. And by doing that, we again change how widely accessible these things are. We are also very encouraged by the fact that we can make these models work not just for people in affluent countries, but also people in emerging markets."

Kurian also touched on other topics.

Security. Kurian said AI is already detecting threats, but also prioritizing them. "We built a model that can look at the threats that are emerging in your infrastructure, which one needs priority and how does that threat affect you," he said. "Models don't have that emotional bias. So, they can look at many more patterns to detect what's going on, what is the attack, the attack surface. It can remediate it and automate the creation of the rulebook to resolve the problem. We're applying AI to the whole spectrum."

AI-driven cybersecurity attacks. "We also see AI being used by bad actors to create new types of threats and we're also building our platform to thwart new types of threats," he said.

Running Google Cloud. Kurian said Google had great products but needed to build out enterprise capabilities as an independent unit.

"We're the fifth largest software company in the world, which is a long, you know, huge credit to the team. But when we looked at it, we felt we needed to do four things really well. You need to take great technology but convert it into solutions that people can use. Just having technology that's not accessible is a challenge.

Second, we need to build a great go to market function. What kind of structure do you have? How do you focus? We started with a set number of industries and countries.

It's an ecosystem game. It's not your company against another company. It's your ecosystem. So, we made decisions very early to partner. We started with 100 partners; today there are 100,000 partners. And part of that is we wanted to bring that ecosystem so that people realize it's a bigger pie that they are creating, not slicing off the same pie. That's the third one.

And then you have to do something really well. It's just like sports. In order to play really well, you have to do the grunt work of training. We have to do a lot of the things below the surface of the water such as the systems, the legal contracting, and the frameworks to be more efficient as an organization. Those were all put in place so that you can go faster. Unless you have a strong core, you can't really play well. We've been super fortunate that we've been blessed with such a great team of people that have done so much of the work to get us where we are today."


Apple Q4 better than expected, but Mac sales sink

Apple's fourth quarter results were better than expected, but revenue was down for the fourth consecutive quarter. Mac sales in the quarter were weaker than expected but may get a lift from new MacBook Pro models on tap.

The company reported fourth quarter earnings of $1.46 a share on revenue of $89.5 billion, down 1% from a year ago.

Wall Street was expecting Apple to report fourth quarter earnings of $1.39 a share on revenue of $89.28 billion.

Here's a breakdown of Apple fourth quarter results by product line and their targets via LSEG.

  • iPhone revenue: $43.8 billion; Estimate: $43.81 billion
  • Mac revenue: $7.61 billion; Estimate: $8.63 billion
  • iPad revenue: $6.44 billion; Estimate: $6.07 billion
  • Wearables, Home and Accessories revenue: $9.33 billion; Estimate: $9.43 billion
  • Services revenue: $22.31 billion; Estimate: $21.35 billion

In a statement, CEO Tim Cook said the company has a strong lineup for the holiday season including new Macs. Overall, Apple's game plan is to monetize its user base with services and subscriptions.

For fiscal 2023, Apple reported net income of $96.99 billion, or $6.13 a share, on revenue of $383.29 billion.

Other items to know:

  • China revenue in the fourth quarter was $15.08 billion, down from $15.47 billion a year ago.
  • Americas revenue was $40.11 billion, up from $39.8 billion a year ago.
  • Europe and Japan revenue was down slightly from a year ago.

Constellation Research CEO Ray Wang said:

"Despite continued revenue decline, Apple is a digital giant and flight to safety stock in good times and bad. The macro conditions are elongating the iPhone replacement cycle. China is the challenge as iPhone 15 sales slow and Huawei has revamped its offerings. The elongation of iPhone replacement cycles is the headwind. 

The new Mac lineup provides cost savings and higher margins and could revitalize sales. The vertically integrated strategy is working for Mac, iPhone, Watch, and ultimately Vision Pro. Services is the bright spot. All eyes on the holiday forecast for the December quarter."


Palantir's commercial business scales with help of AI boot camps

Palantir's commercial annual revenue run rate is closing in on the company's government business as Palantir Artificial Intelligence Platform (AIP) gains enterprise traction.

The upcoming parity point between commercial revenue and Palantir's core government business is worth watching. In the third quarter, Palantir commercial revenue grew 23% to $251 million with US revenue growing 33% to $116 million. Government revenue grew 12% to $308 million in the third quarter.

Overall, Palantir reported third quarter earnings of $72 million, or 3 cents a share, on revenue of $558 million, up 17% from a year ago. Adjusted earnings were 7 cents a share. The results handily topped expectations.

Palantir has been trying to grow its enterprise revenue base for years and now has 181 commercial customers, up 37% from a year ago. As for the outlook, Palantir is projecting 2023 revenue between $2.216 billion and $2.22 billion.

Related: J.D. Power, Palantir team up on generative AI apps for auto value chain

On a conference call with analysts, Ryan Taylor, Palantir's Chief Revenue Officer, said the company closed 80 deals including 12 worth $10 million or more across 11 industries.

Taylor said:

"We're also seeing the acceleration of larger deals and shorter times to conversion and expansion, including a multiyear deal in excess of $40 million with one of the largest home construction companies in the U.S. to start up pilot and converted all within Q3. This growth is in part due to AIP's continued transformation of the way we partner with and deliver value for our customers, and we expect AIP's impact to continue to intensify."

Palantir has seen traction with "AIP boot camps," which deliver real workflows on customer data in 5 days or less. That approach is driving contract expansions. "We're on track to conduct boot camps for more than 140 organizations by the end of November, nearly half of those are taking place this month alone, which is more than the number of U.S. commercial pilots we conducted all of last year," said Taylor. "We almost tripled the number of AIP users last quarter and nearly 300 distinct organizations have used AIP since our launch just five months ago. We will continue investing meaningfully in boot camps as our go-to-market strategy for AIP."

Of course, Taylor noted that Palantir's government business remains strong and should accelerate going forward.

Shyam Sankar, Palantir's CTO, said AIP boot camps are driving the point home that you can't use LLMs without tools to provide algorithmic reasoning. Sankar recently gave a talk on the topic.

According to Palantir CEO Alex Karp, the commercial success isn't surprising if you zoom out and consider the company's military experience. Karp said:

"AIP and U.S. commercial is not only is disrupting the market, it's setting a standard that I don't believe any other software company will be able to reach partly because they misunderstood the value of LLMs and their relative importance, and lack of importance, partly because they don't have decades of experience on the frontline as we do in the military with managing the core ways in which you make these things precise, the way in which you provide governance."

Karp added that AIP enables enterprises to manage LLMs and "basically pen test your enterprise." "My view of what we should do is build products that are so good that the competition stops competing, whether that's in commercial or on the battlefield and that's what we're doing. And that's what we're seeing in AIP," said Karp.
