Celonis acquires Symbio, aims to meld AI, process intelligence in one package

Celonis said it will acquire Symbioworld GmbH, a business process management software vendor, in a move that brings together process mining and AI-driven process modeling.

Terms of the deal weren't disclosed. Celonis and Symbioworld have also launched a version of Symbio's Process Cockpit in beta with select customers that will meld process insights and data in one experience. (Disclosure: I used to work for Celonis.)

The process automation and management market is aiming to make process mining more accessible via AI, generative AI and various customer experiences.

The game is to meld process models, KPIs, insights and workflows into one experience. For instance, Microsoft bought Minit in 2022 and then folded that process mining technology into Power Automate. SAP acquired Signavio and has its platform as well as Datasphere as a wrapper with additions such as LeanIX. UiPath has built out its automation platform that puts process mining together with other technologies including its AutoPilot generative AI. Celonis has its execution management system. Ultimately, generative AI has the potential to build process models as well as make them more accessible to a broader range of users. Rest assured there will be more process copilots and assistants in the near future.

In a statement, Celonis said Symbio would help it "provide managers and employees with a unified end-to-end process experience." Ultimately, Celonis is looking to use its process mining technology to surface how processes actually run and then leverage Symbio to continually improve them with a "process first approach."

Symbio's core products include:

  • Navigator, which guides employees in daily work via an AI Copilot.
  • Action Center, a part of Navigator that simplifies task tracking and compliance.
  • Process Cockpit, which gives enterprises a holistic view of processes, metrics and optimizations.
  • Business Manager, which maps processes via AI into a flow.

Celonis and Symbio initially collaborated on Process Cockpit, but the companies noted that enhanced process visibility is available in Navigator, which is embedded into Microsoft Teams. The companies added that the combination of generative AI assistants, process data and process knowledge can provide intelligence to a broader base of users.

How Wayfair's tech transformation aims to drive revenue while saving money

Wayfair saw breakneck growth three years ago and an ensuing hangover that required a focus on operating margins and execution, but a technology transformation has the company thinking big again. The to-do list: build out a flexible technology infrastructure, drive revenue while saving the business money, and leverage years of experience in data analytics, artificial intelligence, and machine learning to create generative AI use cases.

During the COVID-19 pandemic, Wayfair became a home goods retailing giant as loyal customers and new ones outfitted home offices and decor. The numbers tell the tale. In 2020, Wayfair revenue surged to $14.1 billion from $9.1 billion in 2019, and the company ended the year with 31 million active customers and 61 million orders delivered. In 2021, Wayfair revenue was $13.7 billion with 27 million active customers and 52 million orders delivered. By 2022, Wayfair revenue was $12.2 billion with 22 million active customers and 40 million orders delivered.

Niraj Shah, CEO of Wayfair, said 2023 was about returning to profitability, becoming more efficient, and investing in growth. On Wayfair's third quarter earnings conference call, Shah said, "I'm confident the overarching theme of 2023 will be execution. Our team came into this year with a plan, a plan to see our core recipe return to form, to return our business to profitability, and to continue pushing our major growth initiatives forward."

Indeed, Wayfair is delivering positive free cash flow and improving its active customer metric. Shah said Wayfair's plan is to "nail the basics--driving customer and supplier loyalty and cost efficiency." He added that Wayfair is focusing on what it can control, such as costs, amid a volatile economy and moving the needle on active customers, order values, repeat orders, and market share.

The average Wayfair shopper places about two orders per year totaling about $540. "This shopper isn't someone that's typically refitting an entire room or house but going through their home item by item, project by project making small updates on a much more frequent cadence," said Shah. "If our customers stay in their homes for longer, we're well-positioned to be their retailer of choice the next time they decide that they'd like a new lamp for the living room or want a new set of chairs for their dining table."

Shah said Wayfair's growth trajectory revolves around its specialty and luxury brands, international efforts, physical retail, supplier advertising, and professional services. Wayfair will be vigilant about the returns on these efforts relative to the investment thesis. "Even with a turbulent macro, we remain committed to being adjusted EBITDA profitable in good times and bad," said Shah.

Technology and business alignment

For Wayfair Chief Technology Officer Fiona Tan, a Constellation Research BT150 member, business alignment with information technology is critical--especially as conditions change. In an interview with Constellation Insights at Connected Enterprise, Tan said Wayfair's migration to the cloud paid off well during the COVID-19 pandemic since it was able to scale up when demand spiked.

Nevertheless, Wayfair, founded in 2002, invested heavily in supply chain, logistics, and scaling as demand surged in 2020. Wayfair wasn't alone as many pandemic winners invested in people, technology, and infrastructure only to scale back later.

"As we've seen the category pullback, it's been a good forcing function to make sure that we’re going back to really being proficient with our costs, not only from a technology perspective, but also from an operational perspective," said Tan. "We've been able to focus on growth and profitability in a home market that's still depressed. The fact that we're able to grow our market share in this environment will really set us up well when people start buying more for their home again."

Tan said her team's goal is to work with the business to ensure Wayfair has the "right platform and infrastructure from a technology perspective to enable us to continue to grow in a flexible, scalable, and lean way."

As of Wayfair's Investor Day in August, the company said it had more than 2,000 people focused on technology out of more than 13,000 employees.

Tan added that Wayfair is fortunate that it is a digitally native company when it comes to business and operations. "I think a lot of our stakeholders also realize that tech enablement of the business is super important," said Tan. "We're always very focused on the customer experience and what outcomes we're trying to drive."

Wayfair's platform also has to accommodate a model that aggregates suppliers on the back end, retails the product to the customer, and fulfills the orders. This orchestration means Wayfair has to be skilled at merchandising and curating products, fulfilling orders, and shipping and handling expensive, large items.

Tan's technology organization is improving customer experience by helping to deliver the right products at the right price at the right time, leveraging available technologies, and using analytics powered by artificial intelligence and machine learning. Her team also works on marketing technology and generative AI use cases powered by Wayfair's first party data.

In addition, Wayfair has to continually cut technical debt. "We've been doing a lot of work to pay down some of our technical debt," said Tan. "We also see some cool things that were written a long time ago with old code that can potentially be reused."

The tech, data, and customer journey platform

Tan said the technology platform behind Wayfair's business is built around data-driven insights, AI, and machine learning to create a personalized shopping journey and seamless experience for customers and suppliers. The main characteristic of Wayfair's technology platform is that it must be flexible.

"Everything we do with the tech platform has been custom built to enable our end-to-end customer journey," said Tan, who also said Wayfair's tech transformation is currently in flight and critical to scaling revenue going forward. "Our tech transformation encompasses both organization and technology improvements and we want to make sure that we're going to be able to build a world class platform while creating an environment where our teams can do their best work."

To date, Wayfair has moved from a monolithic model to cloud native microservices on Google Cloud. The move has enabled it to deploy and scale capabilities faster while being easier to maintain, she said. Moving to the cloud and a microservices model has enabled Wayfair to integrate machine learning, AI, and generative AI into the platform for everything from forecasting to marketing to UX.

Tan added that Wayfair is using Google's large language models for semantic searches and machine learning to help customers find products when they aren't exactly sure what they’re looking for. "There is a considerable amount of machine learning that's powering the customer journey," said Tan.
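
Tan didn't detail the implementation, so the sketch below is illustrative only: it shows the embed-and-rank pattern behind semantic search, with a toy bag-of-words encoder standing in for whichever Google embedding model Wayfair actually uses.

```python
import numpy as np

VOCAB = ["lamp", "light", "chair", "sofa", "dining", "outdoor", "reading"]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words encoder standing in for a real embedding model;
    returns a unit vector so cosine similarity is a plain dot product."""
    vec = np.array([float(w in text.lower()) for w in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

catalog = [
    "brass arc floor lamp to light a living room",
    "mid-century walnut dining chair",
    "outdoor wicker sectional sofa",
]

def semantic_search(query: str, docs: list[str], top_k: int = 2):
    q = embed(query)
    scores = np.stack([embed(d) for d in docs]) @ q  # cosine similarities
    best = np.argsort(-scores)[:top_k]
    return [(docs[i], round(float(scores[i]), 2)) for i in best]

# A vague, intent-style query still ranks the right product first.
print(semantic_search("something to light my reading corner", catalog))
```

In production the encoder would be a learned model, so a query like "cozy reading corner" lands near lamps even without shared keywords; the ranking mechanics stay the same.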

On the supplier side, Tan said Wayfair uses machine learning to help suppliers automate and manage pricing, inventory merchandising, and post-service needs. "With machine learning, we are actually able to extract, augment, and correct the information that we get from our suppliers," said Tan.

Technology drives experience

At Wayfair's Investor Day, Steve Conine, Co-Founder and Co-Chairman with Shah, said technology impacts every facet of the company's business including experience. "We have a massive product catalog and one of the things about home is it is shopped differently. It's very emotive and people often can't tell you exactly what they want, but need to be led to a solution," said Conine. "We've invested a lot in trying to figure out how do you help rationalize a massive catalog and make it exciting for consumers to shop."

Conine, who was CTO for Wayfair's first decade, added, "80% of ecommerce is really the operation side of it and it is very easy for us to throw up a website that promises everything’s in stock and going to ship same day and be delivered to you seamlessly. It is very hard to durably deliver that and make it seamless to a customer. You come on and it feels just like the shopping experience you expect. To make that happen is really what differentiates great retailers from mediocre retailers or retailers that go out of business."

According to Paul Toms, Wayfair Chief Marketing Officer, the customer experience is designed to address multiple life stages, budgets, and ages. "It could be everything for your first dorm room, really reflecting who you are and who you want to be when you go away to college or could be an empty nester couple who are redecorating their child's room and turning it into a guest room. And anything in between and frankly after," said Toms.

Data informs how Wayfair approaches different emotional states, financial needs and constraints, and life stages. "We want them to have good feelings about Wayfair early so that as they grow in their spending power, size of their home, or size of their family that they're thinking about Wayfair as being there for them along that journey," said Toms, who noted that Wayfair has 85 million contacts and can disintermediate media companies and talk to people directly.

Wayfair has a proprietary attribution and measurement platform called Themis, which aggregates customer history and offline activity and allocates incremental revenue to each touchpoint such as Google, Instagram, Facebook, Pinterest, and TV. Toms said Themis is powered by more than 4,000 machine learning models and 5 years of continuous investment.
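
Wayfair hasn't disclosed how Themis allocates credit, so the following is a toy illustration of the core idea: splitting an order's revenue across the touchpoints a customer hit. The per-channel incrementality weights are invented stand-ins for what Themis' thousands of models would estimate.

```python
# Toy allocation of revenue across marketing touchpoints. The
# incrementality weights are hypothetical; in Themis they would come
# from the platform's 4,000-plus machine learning models.
TOUCHPOINT_WEIGHTS = {
    "google": 0.35,
    "instagram": 0.15,
    "facebook": 0.20,
    "pinterest": 0.10,
    "tv": 0.20,
}

def allocate(order_revenue: float, touched: list[str]) -> dict[str, float]:
    """Split revenue across the channels a customer touched, in
    proportion to each channel's estimated incremental contribution."""
    weights = {ch: TOUCHPOINT_WEIGHTS[ch] for ch in touched}
    total = sum(weights.values())
    return {ch: round(order_revenue * w / total, 2) for ch, w in weights.items()}

# Allocating $540 of revenue (Wayfair's average annual customer spend).
print(allocate(540.0, ["google", "instagram", "tv"]))
# {'google': 270.0, 'instagram': 115.71, 'tv': 154.29}
```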

Once the connection to a customer is made, Wayfair's merchandising engine kicks in where the retailer along with supplier partners identify the highest potential SKUs, merchandise them well with images and reviews, and show those products across various touchpoints.

"It starts with identifying the best items and the highest potential skews in our catalog. This is not just today's winners, but tomorrow's winners. We do this with both algorithms and merchants. So, we have human eyes on all of these products," explained Liza Lefkowski, Vice President of Curated Merchandising, Brands and Store at Wayfair.

Lefkowski, speaking at the company's Investor Day, added that Wayfair is opening physical locations to serve as retail labs where it has staged floor sets for every department.

Pricing also matters and Wayfair tests and models price levels to create optimal margins while competing in the market.

Once a sale closes, Wayfair's proprietary supply chain takes over. "We built that supply chain because we truly believe it is core. It is a key enabler for the business. And without that supply chain, we are not able to provide the quality, the convenience, and the cost base for the business to be successful," said Thomas Netzer, Wayfair's chief operating officer.

Wayfair's proprietary supply chain capabilities include in-house logistics, partnership management, and information management. Wayfair has also invested in robotics and algorithms designed to cut costs with "profit aware sorts," which prioritize products with low costs and low incident rates delivered at high speeds. Wayfair has more than 70 initiatives in supply chain designed to save more than $500 million.
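
Wayfair didn't define "profit aware sorts" beyond that sentence; one plausible reading, sketched below with hypothetical weights, is a sort key that trades off fulfillment cost, incident rate and delivery speed.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    sku: str
    fulfillment_cost: float   # dollars to pick, pack, and ship
    incident_rate: float      # historical damage/return probability
    days_to_deliver: int

def profit_aware_key(s: Shipment) -> float:
    # Hypothetical scoring: lower cost, fewer incidents, and faster
    # delivery all push a shipment toward the front of the sort.
    return s.fulfillment_cost + 100 * s.incident_rate + 5 * s.days_to_deliver

queue = [
    Shipment("sofa-123", 80.0, 0.08, 7),
    Shipment("lamp-456", 12.0, 0.01, 2),
    Shipment("desk-789", 45.0, 0.03, 4),
]
for s in sorted(queue, key=profit_aware_key):
    print(s.sku)   # lamp-456, desk-789, sofa-123
```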

Closing the experience is a sales and service team that is broken up into specialties to guide customers to the best items for their homes. Wayfair has more than 3,000 sales and service agents behind more than 15 million interactions a year. Machine learning and AI drive everything from intent identification to channel routing, agent matching, and follow-ups.

What's next?

Tan said the biggest item on her 2024 to-do list is enabling Wayfair's transformation with new capabilities such as experiences powered by generative AI and large language models.

"I think there are going to be some interesting changes to how customers interact with their user experiences," said Tan, adding that the home category requires new experiences. "It's an interesting category where you can filter with visual searching and conversational searching together. I think that's interesting for us because it's both text and imagery on Wayfair."

At Wayfair's Investor Day, Tan said the company is looking at generative AI "in a very pragmatic way." Specifically, generative AI use cases are framed around experiences and productivity. Wayfair also has a framework for generative AI use cases based on whether there will be a human in the loop to evaluate output before going to a customer or supplier.

One proof of concept revolves around customer sales and service and generated response text. Results have shown a reduction in call resolution time as well as improvements in agent productivity and customer satisfaction, said Tan. Another effort revolves around developer productivity using GitHub Copilot as well as Google's Duet AI. "We are excited about how some of this code completion and generative code can actually help accelerate our tech transformation efforts," she said.

Tan also said that generative AI can play a role in the customer design experience. There are tests with customer facing generative AI in controlled environments. "We believe that some form of conversational search or AI guided assisted shopping will become prevalent in the next couple of years," said Tan. "Our goal is always to make sure that our search experience is the best in the home category. So, customers come to us first and come back whenever they want to search for something for their home."

Generative AI can also be used for the design experience: customers can combine text and images to imagine spaces and style preferences, creating photorealistic rooms just by uploading a picture of a room.

"We believe that we have multiple distinct advantages when it comes to our ability to win using generative AI. We have a very large, rich, and proprietary first party data set that is unique to us and can be used to fine tune and train these foundational LLMs specific to our use cases," said Tan.

According to Tan, Wayfair can leverage pre-trained models as a service and then tailor them to use cases. "Our ability to figure out good use cases for this capability is going to be important," she said.

Wayfair's technology strategy in recent years has revolved around digesting an investment surge and then becoming more efficient. Going forward, Wayfair's technology focus will be more on experience and the art of the customer journey. "We'll really be focused more on the art and that'll be an emphasis as well," said Tan.

In any case, Tan said Wayfair's technology platform will be ready.

GitHub plans to infuse Copilot throughout its platform

GitHub is doubling down on its copilot strategy with the launch of Copilot Chat, Copilot Enterprise and a series of tools designed to enhance software development productivity and collaboration.

At GitHub Universe 2023, GitHub outlined its latest offerings and roadmap. The company, owned by Microsoft, is copilot-happy just like its parent. However, GitHub Copilot is aimed at what is among the top generative AI use cases--software development productivity. GitHub noted a study from last year citing a 55% boost in developer productivity with copilots.

Real-world developer productivity gains are on par with GitHub's findings. Speaking on a panel at Constellation Research's Connected Enterprise, Pauline Yang, a partner at Altimeter Capital, laid out the software development economics. Yang said:

"One of the big use cases that we've seen really take off is developer productivity. If you talk to CTOs, they have all these different metrics--how happy their developers are, how much more pull requests are they getting, or how more productive their senior engineers are. We believe that a lot of companies are becoming software companies, even if you're not selling software, and the costs of engineers right now are so high that 40% productivity gains with your engineers is massive and so is happiness of paid developers. All of those gains are economic value."

Given that backdrop, it's not surprising that GitHub CEO Thomas Dohmke said the company is "re-founded on Copilot." In a blog post, Dohmke said: "It is our guiding conviction to make it easier for developers to do it all, from the creative spark to the commit, pull request, code review, and deploy—and to do it all with GitHub Copilot deeply integrated into the developer experience."

To that end, GitHub announced the following:

  • GitHub Copilot Enterprise, which can be personalized for an enterprise's code base including private code. This private code personalization capability was also cited by Amazon CEO Andy Jassy as a big use case for Amazon CodeWhisperer. GitHub Copilot Enterprise will be available in February for $39 per user per month. Copilot Business is $19 per user per month.
  • GitHub Copilot Chat, which will be generally available in December as part of the GitHub Copilot subscription. It will be powered by OpenAI's GPT-4, offer inline chat within the context of code editing, support slash commands for big tasks, and be available in the JetBrains suite. GitHub Copilot Chat will also be integrated into the GitHub website and the company's mobile app.
  • GitHub Copilot Partner Program, an ecosystem that will add third-party developer tools to GitHub Copilot. There are more than 25 debut partners including Datadog, Postman and DataStax.
  • GitHub Advanced Security, which gains an AI-powered vulnerability prevention system. The AI-driven tools are available in preview and will be included in GitHub Advanced Security subscriptions.
  • GitHub Copilot Workspace, an early demo from GitHub Next. GitHub said its next-generation Copilot Workspace will be delivered in 2024.

How Baker Hughes Used AI and LLMs for ESG Materiality Assessments | 2023 SNA Award Winners

Baker Hughes’ Marie Merle Caekebeke, Sustainability Executive – Strategic Engagement, was initially skeptical about AI. Now she’s thinking next phases and leveraging AI to make ESG more strategic. Here’s what she learned from a project with C3 AI.

Watch the On Insights interview on YouTube: https://www.youtube.com/watch?v=mREEJVBeyv4

How Baker Hughes used AI, LLMs for ESG materiality assessments

Baker Hughes' Marie Merle Caekebeke admits she was a bit skeptical about artificial intelligence, but she wanted a way to speed up environmental, social and governance (ESG) materiality assessments so her team could focus on the big picture and stakeholder needs at the energy technology company.

"I was actually quite pleasantly surprised,” said Caekebeke, Sustainability Executive – Strategic Engagement, Baker Hughes. "I wanted the individuals on my team to take ownership of sustainability and to move the needle on progress. I felt that we could leverage a machine, but the decisions will be made by individuals."

Caekebeke, a 2023 SuperNova Award winner in the ESG category, started with a pilot with C3 AI to parse 3,500 stakeholder documents in nine weeks and train natural language processing and large language models (LLMs) to identify and label paragraphs aligned to ESG topics via more than 1,700 training labels. The project quickly went to production and eliminated the nearly 30,000 hours and two-year cycle time the manual ESG materiality assessment previously required. Today, Baker Hughes' sustainability executives can be more proactive with stakeholders.

Baker Hughes is an energy technology company that specializes in oil field services and equipment and industrial energy technology. The company aims to be a sustainability pioneer that minimizes environmental impact and maximizes social benefits.

Speaking on Baker Hughes' third-quarter earnings conference call, CEO Lorenzo Simonelli said the company sees strong orders for natural gas markets and electric machinery. The company's plan revolves around delivering financial results while investing in the future, said Simonelli.

"We are focused on our strategic framework of transforming our core to strengthen our margin and returns profile, while also investing for growth and positioning for new frontiers in the energy transition," said Simonelli, who noted that the company is working through three time frames. In 2027, Baker Hughes expects to focus on investing to solidify the company's presence in new energy and industrial sectors with an emphasis on decarbonization in 2030.

Simonelli added that Baker Hughes' execution over the coming years will position it to compete in carbon capture, usage and storage (CCUS), hydrogen, clean power and geothermal. "We expect decarbonization solutions to be a fundamental component, and in most cases, a prerequisite for energy projects, regardless of the end market. The need for smarter, more efficient energy solutions and emissions management will have firmly extended into the industrial sector," said Simonelli, who said Baker Hughes will focus on industry-specific use cases.

Baker Hughes is projecting new energy orders will grow to $6 billion to $7 billion in 2030 from $600 million to $700 million in 2023.

With that backdrop, Baker Hughes' sustainability team has to keep tabs on emerging trends and topics across multiple sources and ultimately customize the insights for various stakeholders, said Caekebeke. In other words, materiality assessments for ESG will become more of a living document.

"The sustainability space is shifting so quickly that I wanted more strategic engagements with our stakeholders," she said. "We're always going to have customer conversations; we're always going to have investor conversations and speak to our employees as well. But I wanted something to supplement it and look at those topics that matter to our stakeholders, weigh information and make sense for our assessment."

The project

Baker Hughes publishes a biennial ESG assessment that informs strategy at the company and creates a listening exercise for internal and external stakeholders. With the assessment, Baker Hughes aligns its strategic priorities and commercial strategy.

Caekebeke said the project started by weighting sources and information that is trustworthy. For instance, filings with the Securities and Exchange Commission (SEC), sustainability reports and annual reports had a higher weighting than something like social media where "everyone is a sustainability expert," she said. Reports from customers, competitors, investors and NGOs were also included.

In nine weeks, the data collection was complete and then Caekebeke's team focused on stakeholder expectations by role and what kind of decisions needed to be made. The lens of the project wasn't about automation as much as it was priorities. "We have a strong sustainability team, and I had enough humans and employees," explained Caekebeke. "It wasn't about running out of sweat equity as much as it was wanting individuals on my team focusing on implementation and change rather than manual tasks."

Baker Hughes, a long-time C3 AI customer, already had a strong partnership, systems in place and data. Caekebeke said C3 AI is a "progress partner" and more strategic vendor. "We reached out to see what C3 AI had and then continued to build together a solution that would demonstrate the ROI and then create sound data to make decisions on," she added.

Previously, Baker Hughes manually collected interviews, surveys, and documents on around 50 topics. At first, Caekebeke's team took a subset of those topics for a pilot. The team also narrowed down the list of stakeholders in the pilot. Baker Hughes collected employee insights and feedback from community resource groups within the company. Once KPIs, users and objectives were defined and the pilot proved the use case worked, the C3 AI application expanded topics, targeted a full list of stakeholders, and went into production.

One critical project consideration was identifying topics and aligning them to roles. "I wanted a tool that would be nimble enough that if I wanted to run a report only on emissions, I could do that. If I wanted to run a report only on just transition and how environmental justice was playing a part, especially after a key event, I could do that too," said Caekebeke.

Using C3 AI as a platform, Baker Hughes was able to train LLMs to fill gaps in the ESG materiality process including:

  • Parsing 3,500 stakeholder documents to produce more than 400,000 paragraphs.
  • Training natural language processing (NLP) machine learning pipelines to identify and label paragraphs to align with ESG topics and training labels.
  • Deploying a workflow to compute time series ESG materiality scores for source documents at the paragraph, document, stakeholder and stakeholder group levels.
  • Configuring an interface to visually represent ESG scores, analysis, evidence packages and benchmarks.
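
Neither Baker Hughes nor C3 AI has published implementation details, but the workflow above maps onto a classify-then-aggregate pattern. A minimal sketch, with a hypothetical keyword-based `label_paragraph()` standing in for the trained NLP and LLM pipelines:

```python
from collections import defaultdict

def label_paragraph(text: str) -> list[str]:
    """Hypothetical classifier standing in for the trained models;
    returns the ESG topics a paragraph discusses, if any."""
    topics = {"emissions": ["emission", "carbon"], "human rights": ["wage", "rights"]}
    return [t for t, kws in topics.items() if any(k in text.lower() for k in kws)]

documents = [
    {"stakeholder": "investor", "paragraphs": ["Carbon emission targets tightened."]},
    {"stakeholder": "ngo", "paragraphs": ["Fair wage practices remain a concern."]},
]

# Materiality score per (stakeholder, topic): here just a count of labeled
# paragraphs; the real system weights by source trustworthiness and time.
scores: dict[tuple[str, str], int] = defaultdict(int)
for doc in documents:
    for para in doc["paragraphs"]:
        for topic in label_paragraph(para):
            scores[(doc["stakeholder"], topic)] += 1

print(dict(scores))
# {('investor', 'emissions'): 1, ('ngo', 'human rights'): 1}
```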

The returns boiled down to time. A human would have taken 790 hours to analyze the volume of content for the ESG materiality report, while the C3 AI ESG application took less than an hour and was able to focus on the 10% of relevant content. The manual process required nearly 30,000 hours and a 2-year cycle time to complete the ESG assessment without AI.

What's next

Caekebeke said the AI-driven ESG materiality process will enable Baker Hughes to keep better tabs on new topics, impacts of events, legislation and policy around the world. "We work in over 120 countries. We have 55,000 employees so we have a broad reach. And so, it is important to really look across the world at what's happening,” she said.

Going forward, the plan is to use more AI to drive decisions faster with more data transparency. Caekebeke said using AI is also likely to curb unconscious bias in ESG materiality assessments.

"When we were looking at the way that we did it manually, you will have some stakeholders that answer the surveys and then ones that don't. If you make an analysis, you read through their information and then essentially translate that into what you think you're hearing. But there's that kind of unconscious bias that we all have as we're reading through it," said Caekebeke "An engine doesn't really have that bias."

Caekebeke is also betting that the C3 AI ESG application will connect dots between environmental impacts and social issues.

"Where communities are marginalized, they are feeling the deepest impact of climate change. Those areas are also where you have human rights violations and people that are not making a fair wage," said Caekebeke. "It's about looking at ESG holistically and leveraging AI to look at it so you could draw some parallels."

Baker Hughes released its sustainability framework in April and the goal is to use the lessons from the C3 AI tool to deploy the strategy across the organization. "Moving forward in 2024 is about making sure that the deployment of our sustainability strategy is well understood and that initiatives are pushed all the way to the deepest level," said Caekebeke. "My vision is that I want everyone to have that same focus on sustainability, understand the value and understand our environmental and social footprint. For 2024, it will be a deeper engagement with employees all from the top to the bottom, across regions where we work and also across the functions."

Lessons learned

Caekebeke said the project surfaced a few lessons learned about the intersection of ESG and AI. Here's the breakdown.

Get high level support from executives. Baker Hughes leadership supported the effort and that helped overcome concerns about using AI. "There's a lot of skepticism around AI. Some people love it. Some people are nervous. There should be a bit of both," she said.

Governance is critical. Caekebeke said governance should be laid out in advance of pilots and deployment.

Have a strong partner. Caekebeke said that C3 AI worked closely with her team to customize the application and produce something that works with transparency. Training models requires collaboration and back and forth between customer and vendor teams.

Time is a core metric. "We are mindful of the fact that as sustainability requirements are increasing, people have less time," she said.

Start small. There are so many metrics to follow in ESG, but it's critical to narrow them down to the ones that are risks to your enterprise. "It's easier actually to build that up than to go the other way around. A lot of the times we want to please every stakeholder you know, and it's important to listen, but then you have to prioritize," said Caekebeke.

Efficiency and optimization are also sustainability. Internal stakeholders need to realize that "when you make something efficient, you're also making it more sustainable," said Caekebeke.

Keep iterating. "I was an AI skeptic. And I was really surprised to see the efficiency of the tool to the point where we're now in production phase, and we're working on the next iteration," said Caekebeke. "Pick the three or four things you want to do this year and then the next phase, so you have measurable projects from year to year. Just incremental steps in the right direction will really help the company move forward."

Microsoft uses Oracle Cloud Infrastructure for Bing conversational workloads

Microsoft is using Oracle Cloud Infrastructure for its Microsoft Bing generative AI searches.

Oracle announced the multi-year agreement with Microsoft in a press release. What we don't know is whether Microsoft is using Oracle Cloud to handle overflow Bing workloads or to run them outright for reasons of efficiency and/or procurement of Nvidia GPUs.

The two companies recently outlined a partnership. Oracle also fired up Nvidia-powered instances and apparently has been able to procure GPUs. Oracle and Microsoft have a history of partnership announcements that seem to be refreshed often. For instance, Oracle and Microsoft outlined an interoperability partnership between clouds in 2019. In 2022, the two companies announced the general availability of Oracle Database Service for Microsoft Azure.

Also: Oracle adds vector search to Oracle Database 23c, melds generative AI, transactional data | Oracle's Q1 better than expected and Ellison loves generative AI

Although Microsoft CEO Satya Nadella and Oracle CTO Larry Ellison may seem like odd bedfellows, both companies have joint customers and mutual rivals in AWS and Google Cloud.

The Oracle Cloud-Bing announcement could also simply be a headliner use case for enterprises. Microsoft is using Oracle Cloud along with its Azure AI infrastructure for inferencing for Bing and managed services such as Azure Kubernetes Service to orchestrate workloads. The connection also uses Oracle Interconnect for Microsoft Azure.

Constellation Research analyst Holger Mueller said that "generative AI has the potential to change the cloud market landscape." Time will tell if the Microsoft-Oracle partnership for Bing conversational workloads is a data point for cloud leadership changes.

Mueller made the following points about reading the tea leaves with the Oracle and Microsoft partnership. 

  • It is a milestone because no cloud vendor has ever moved internal workloads--or any workloads--to a partner/competitor.
  • It is a sign that Microsoft may be maxed out on capacity.
  • It is a sign that Microsoft charges customers more for capacity than it wants to pay for internally.
  • Oracle gets the workload of the second-largest search engine for generative AI search and evidently has the capacity.
  • Oracle seems to get Nvidia chips at a better rate than Microsoft.

OpenAI launches GPTs as it courts developers, models for use cases

OpenAI launched new developer tools, models and GPTs designed for specific use cases.

On its first developer day, OpenAI moved to expand its ecosystem, enable developers, and leverage the popularity of its models so they can be customized.

For OpenAI, the set of announcements brings it closer to where enterprises are going--smaller large language models (LLMs) and generative AI that is tailored to tasks. These task-specific models--called GPTs--will roll out today to ChatGPT Plus and Enterprise users. Constellation Research analyst Holger Mueller said:

"Buried in the press release in a side sentence is what is the biggest challenge in enterprises adopting LLMs. Important information is in OLTP systems and can't be accessed by LLMs. If the OpenAI GPTs capability to access OlTP databases work - we will enter the next generative AI era." 

In a blog post, OpenAI said:

"Since launching ChatGPT people have been asking for ways to customize ChatGPT to fit specific ways that they use it. We launched Custom Instructions in July that let you set some preferences, but requests for more control kept coming. Many power users maintain a list of carefully crafted prompts and instruction sets, manually copying them into ChatGPT. GPTs now do all of that for you."

While OpenAI's ChatGPT is being used for context specific use cases in Microsoft productivity applications, the company is also looking to put its own mark on its models. The game plan for OpenAI is to build a community and launch a GPT Store, which will feature GPTs across a broad range of categories. Developers will get a cut of the proceeds from the GPT Store.

Key points about GPTs:

  • Your chats with GPTs are not shared with developers. If a GPT uses third-party APIs, you have control over what data can be sent to the API and what can be used for training.
  • Developers can use plug-ins and connect to real-world data sources.
  • Enterprises can use internal-only GPTs with ChatGPT Enterprise. These GPTs can be customized for use cases, departments, proprietary data and business units. Amgen, Bain and Square are early customers.
  • OpenAI also launched Copyright Shield, which will indemnify customers across ChatGPT Enterprise and the developer platform.

That backdrop of GPTs complements a bevy of other OpenAI launches that are more aligned with where the LLM market is headed.

Related: Why generative AI workloads will be distributed locally | Software development becomes generative AI's flagship use case | Enterprises seeing savings, productivity gains from generative AI | Get ready for a parade of domain specific LLMs

Here's the breakdown.

  • OpenAI launched GPT-4 Turbo, which is up to date through April 2023 and has a 128k context window. Input tokens are priced 3x cheaper and output tokens 2x cheaper relative to GPT-4.
  • GPT-4 Turbo will also be able to generate captions and analyze real-world images in detail and read documents with figures.
  • DALL-E 3 has been updated to programmatically generate images and designs. Prices start at 4 cents per image generated.
  • The company improved its function calling features, which let developers describe an app's functions or external APIs to models. Developers can now have a single message call multiple functions, and calling accuracy has improved.
  • GPT-4 Turbo supports OpenAI's new JSON mode.
  • OpenAI added reproducible outputs to its models for more consistent returns. Log probabilities will also be released so developers can improve features.
  • The company released the Assistants API in beta for developers to build agent-like experiences in applications. These assistants use Code Interpreter to write and run Python code, Retrieval to leverage knowledge outside of OpenAI models and Function calling.
  • Developers will also get text-to-speech APIs to generate human-quality speech.
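
For developers, several of these items surface as parameters on the chat completions endpoint. A minimal sketch using the OpenAI Python SDK (v1.x) that combines the GPT-4 Turbo preview model, JSON mode and the new `seed` parameter; the prompt content is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview, 128k context
    response_format={"type": "json_object"},  # new JSON mode
    seed=42,                                  # reproducible outputs (best effort)
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'answer' and 'confidence'."},
        {"role": "user", "content": "What is GPT-4 Turbo's context window?"},
    ],
)
print(response.choices[0].message.content)
```

Note that JSON mode requires instructing the model to produce JSON somewhere in the messages, which is why the system prompt spells it out.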

As for prices, the new models lower costs across the board.

Palo Alto Networks acquires Talon Cyber Security, Dig Security

Palo Alto Networks said it has acquired Talon Cyber Security, an enterprise browser security startup. The move comes days after the company acquired Dig Security.

Talon Cyber Security aims to address attacks via unmanaged devices with its Talon Enterprise Browser. The browser will be combined with Palo Alto Networks' Prisma SASE platform to protect unmanaged endpoints that connect to SaaS enterprise applications.

Dig Security is a startup focused on data security posture management, or DSPM.

Terms of the deal weren't disclosed, but TechCrunch put the figure at $400 million for Dig Security. Talon Cyber Security reportedly went for $625 million.

The two deals highlight how Palo Alto Networks plans to acquire startups that can help build out its platform. 

According to Palo Alto Networks, generative AI adoption will require enterprises to take control of sensitive data across cloud services, databases, vector databases and platform as a service. Dig Security's technology gives enterprises the ability to discover, classify, monitor and protect sensitive data wherever it resides on the cloud.

As with Talon's technology, Palo Alto Networks said Dig Security's DSPM platform will be integrated into its portfolio--in Dig's case, Prisma Cloud.

There's a race among Palo Alto Networks, CrowdStrike, Zscaler and a host of others to create next-gen security platforms powered by AI.

Ignorance of AI is no excuse

Understanding and explaining the workings of artificial brains—particularly deep neural networks—has been a problem for a decade or so. Some AI entrepreneurs seem almost to boast they don't know how their creations work, as if mysteriousness is proof of real intelligence. But algorithmic transparency is being mandated in new European legislation so that individuals have better recourse when they are adversely affected by robots miscalculating their credit or health insurance risks.

I want to discuss another reason regulators have for getting inside the black box of AI: accountability under data privacy regimes.

The power of conventional privacy laws

Large language models (LLMs) and generative AI are making it hard to tell fact from fiction. Some commentators, with great care, call this an existential threat to social institutions and social order. Naturally there are calls for new regulations. Such reforms could take many years.

But I see untapped power to regulate AI in the existing principles-based privacy laws that prevail worldwide, a famous example being Europe's General Data Protection Regulation (GDPR).

I have written elsewhere about the “superpower” of orthodox data privacy laws. These are based on the idea of personal data, broadly defined as essentially any information which may be associated with an identifiable natural person. Data privacy laws such as the GDPR (not to mention 162 national statutes) seek to restrain the collection, use and disclosure of personal data.

Generally speaking, these laws are technology neutral; they are blind to the manner in which personal data is collected.

This means that when algorithms produce data that is personally identifiable, those algorithms and their operators are in scope for privacy laws in most places around the world.

Surprise!

Time and time again, technologists are taken by surprise by the privacy obligations of automated personal data flows:

  • In 2011, German regulators found that Facebook’s photo tag suggestions violated privacy law. The company was ordered to cease facial recognition and delete its biometric data sets. Facebook prudently went further, suspending tag suggestions worldwide for many years. See also this previous analysis of tag suggestions as a form of personal data collection.  
  • The counter-intuitive Right to be Forgotten (RTBF) first emerged as such in the 2014 European Court of Justice case Google Spain v AEPD and Mario Costeja González. Often misunderstood, the case was not about "forgetting" anything in general but specifically about de-indexing web search results. The narrow scope serves to highlight that personal data generated by algorithms (for that's what search results are) is covered by privacy law. In my view, search results are not simple replicas of objective facts found in the public domain; they are the outcomes of complex Big Data processes.

What’s next?

The legal reality is straightforward. If personal data comes, by any means, to be held in an information system, then the organisation in charge of that system may be deemed to have collected that personal data and thus is subject to applicable data privacy laws.

As we have seen, privacy commissioners have thrown the book at analytics and Big Data.

AI may be next.

Being responsible for personal data, no matter what

If a large language model acquires knowledge about identifiable people—whether by deep learning or the gossip of simulacra—then that knowledge is personal data and the model’s operators may be accountable for it under data privacy rules.

Neural networks represent knowledge in weird and wonderful ways, quite unlike regular file storage and computer memory. It is notoriously hard to pinpoint where these AIs store their data.

But here’s the thing: privacy law probably doesn’t care about that design detail, because the effect still amounts to collection of personal data.

If a computer running a deep learning algorithm has inferred or extracted or uncovered or interpolated fresh personal data about individuals, then its operator has legal obligations to describe the data collection in a privacy policy, justify the collection, limit the collection to a specific purpose, and limit reuse of the collected personal data. In the privacy laws I have read, there is nothing to indicate that an information system based on neural networks will be treated any differently from one written in COBOL and running on a mainframe.

Privacy law usually gives individuals the right to request a copy of all personal data that a company holds about them.  In some jurisdictions, individuals have a qualified right to have personal data erased.

I am not a lawyer, but I can't see that owners of deep learning systems holding personal data can excuse themselves from technology-neutral privacy law just because they don't know exactly how the data got there. Nor can they logically get around the right to erasure by appealing to the sheer difficulty of selectively removing knowledge that is distributed throughout a neural network. Such difficulty may be seen as the result of their own design and decision-making.

And if selective erasure of specific personal data is impossible with these black boxes, then the worst case scenario for the field of AI may be that data protection regulators rule the whole class of technology to be non-compliant with standard privacy principles.

Are you getting prepared for AI? 

Constellation is developing new AI preparedness tools to help organisations evaluate the regulatory and safety implications of machine learning. Get in touch if you'd like to know more about this research, or to exchange views.

Why generative AI workloads will be distributed locally

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

Generative AI workloads have been dominated by Nvidia, a massive cloud buildout and compute that comes at a premium. I'm willing to bet that in a year, we'll be talking about distributed compute for model training and more workloads on edge devices ranging from servers to PCs to even smartphones.

On earnings conference calls, generative AI is still a common theme, but there's a subtle shift toward the commoditization of the compute behind large language model (LLM) training and a hybrid approach that leverages devices built with generative AI-capable processors.

"There's a market shift towards local inferencing. It's a nod to both the necessity of data privacy and an answer to cloud-based inference cost," said Intel CEO Pat Gelsinger on the company's third quarter earnings conference call.

Here's a quick tour of what's bubbling up for local compute powered generative AI.

Amazon CEO Andy Jassy said:

"In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change."

Indeed, the change coming for generative AI is going to revolve around local compute that's distributed.

Here's why I think we may get to distributed model training sooner than the industry currently thinks:

  • Enterprises are building out generative AI infrastructure that often revolves around Nvidia, which needs competition but right now has an open field and the margins to prove it.
  • The generative AI price tag is tolerated today because the low-hanging productivity gains are still being harvested. If you can improve software development productivity by 50%, who is going to sweat the compute costs? Pick your use case and the returns are there at the moment.
  • But those easy returns are likely to disappear in the next 12 months. There will be more returns on investment, but compute costs will begin to matter (see the back-of-envelope sketch after this list).
  • Companies will also gravitate to smaller models designed for specific use cases. These models, by the way, will need less compute.
  • Good enough processors and accelerators will be used to train LLMs--especially for cases where a fast turnaround isn't required. Expect AWS' Inferentia and Trainium to garner workloads as well as AMD GPUs. Intel, which is looking to cover the spectrum of AI use cases, can even benefit.
  • The good enough model training approach is likely to extend to leveraging edge devices for compute. For privacy and lower costs, smartphones, PCs and other edge devices are going to be equipped and ready to leverage local compute.
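
To see why the economics could tip toward local compute, here's the back-of-envelope comparison promised above; every number is a hypothetical assumption, not a measured cost.

```python
# Illustrative-only comparison of cloud vs. local inference cost per
# million tokens; all inputs below are hypothetical assumptions.
cloud_gpu_hour = 4.00                 # rented accelerator, $/hour
cloud_tokens_per_hour = 2_000_000     # assumed throughput on that GPU
local_device_cost = 1_500             # amortized AI-capable edge device, $
local_device_lifetime_tokens = 5_000_000_000  # tokens over device lifetime

cloud_cost_per_m = cloud_gpu_hour / (cloud_tokens_per_hour / 1_000_000)
local_cost_per_m = local_device_cost / (local_device_lifetime_tokens / 1_000_000)

print(f"cloud: ${cloud_cost_per_m:.2f}/M tokens, local: ${local_cost_per_m:.2f}/M tokens")
# cloud: $2.00/M tokens, local: $0.30/M tokens
```

The point isn't the specific figures; it's that an edge device whose cost is amortized over its lifetime, and whose compute would otherwise sit idle, can undercut metered cloud inference for a large class of workloads.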

Ultimately, I wouldn't be surprised if we get to a peer-to-peer or Hadoop/MapReduce-ish approach to generative AI compute.
