
C3.ai's next move: Convert generative AI pilots to production deals

C3.ai's fiscal second quarter results showed promise, but the biggest challenge facing the company is abundantly clear: It has to convert pilots to production to start collecting consumption revenue at scale.

If pilots with customers (prospective or existing) were revenue, C3.ai would be putting up stronger sales gains. Part of the issue with C3.ai's second quarter is expectations: if the company is going to live up to its billing as an enterprise generative AI juggernaut, it should have stronger growth.

Another wrinkle is that C3.ai transitioned its revenue model from licensing to consumption-based pricing. That shift should mean more predictable revenue in the future, but it creates near-term turbulence. C3.ai used to live on large enterprise subscriptions that were lumpy and now has a model more in line with cloud providers and companies like Snowflake.

That transition, which appears to be largely complete, is highlighted in C3.ai's investor deck.

C3.ai reported second quarter revenue of $73.2 million, up 17% from a year ago. Subscription revenue was up 12% from a year ago. C3.ai reported a net loss of 59 cents a share and a non-GAAP net loss of 13 cents a share.

The company projected third quarter revenue of $74 million to $78 million, up 11% to 17% from a year ago.

In many ways, C3.ai is in the right place at the right time. Generative AI has taken off and C3.ai is a key player. I recently detailed a C3.ai project at Baker Hughes illustrating a sustainability generative AI use case (PDF).

Also see: Constellation ShortList™ Artificial Intelligence and Machine Learning Cloud Platforms | C3 AI launches domain-specific generative AI models, targets industries | Get ready for a parade of domain specific LLMs | C3 AI CEO Tom Siebel: Generative AI enterprise search will have wide impact

The challenge is that C3.ai's go-to-market ground game is a work in progress. C3.ai has been building out its partner network with AWS becoming a big channel as the two companies target overlapping industries. C3.ai's qualified pipeline with AWS more than doubled in the second quarter. C3.ai also has partnerships with Google Cloud, Microsoft, Booz Allen and others.

There's clearly interest in what C3.ai can offer. Bookings in the second quarter were up 100% from a year ago, new agreements were up 148% and pilots grew 177%.

Yet these deals start small and have yet to deliver in full. C3.ai is more a steady climb than a revenue rocket ship at this point. The company launched with massive deals from a small number of customers--often US government agencies and large enterprises--and now pursues a land-and-expand motion driven by generative AI.

CEO Tom Siebel said the company is converting pilots into production projects. For instance, C3.ai converted two Department of Defense logistics pilots into projects. Siebel said:

"In Q2, we closed 62 agreements, including 36 pilots and trials. Our new pilot count is up 270% from a year ago. Notably, 20 of these were generative AI pilots, a 150% increase from Q1. With the lower entry price points of our pilots, we are more easily able to land new accounts. With our pilots, we are engaging customers across a diverse set of industries in this quarter. Our pilots came from manufacturing, federal, defense, aerospace, pharmaceuticals and other industries."

Over time, C3.ai's business is going to look different. Nearly half of the company's revenue comes from federal, defense and aerospace customers. The pilots and trials underway cover many more industries.

What remains to be seen is how long it takes C3.ai to develop its revenue growth flywheel. Siebel said the sales cycle for generative AI use cases can be as fast as 24 hours, with a live application in a month or two at low price points. These customers will ultimately expand their deals as C3.ai delivers value.

"The standard pilot that we have for generative AI and the enterprise is like $250,000. You can get the C3 Generative AI: AWS Marketplace Edition that’s free for 14 days," said Siebel.

Siebel did say that while C3.ai is seeing interest in generative AI applications, decisions are taking longer as enterprises put governance around AI. He said:

"Virtually every company in the last 3 to 6 months has created a new AI governance function as part of its decision-making process. These AI governance functions assess and approve those AI applications that will be allowed to be installed in the enterprise. This has candidly added a step to the decision process in AI. You might have heard it here first, but you will be hearing this from every AI vendor in the next few quarters. Take it to the bank."

Overall, Siebel said more AI governance is a good development even if it lengthens the sales cycle. See: The Urgent Case for a Chief AI Officer

Siebel added that the C3 AI Platform will gain traction because it can "solve the disqualifying hobgoblins that are preventing the adoption of generative AI in government, in defense, intelligence, in the private sector."

Those issues include answers from large language models (LLMs) that aren't easily traced and can leak intellectual property or create other liabilities. Enterprises will also move toward an LLM-agnostic strategy.

Siebel said:

"I don’t think anybody wants to hook their wagon on to any given LLM today with all the innovation that’s going on in the market and to be dependent on any LLM provider. Our solution is LLM agnostic and addresses every one of those hobgoblins that prevent the installation of generative AI in the enterprise. It took 14 years and $2 billion of software engineering for us to be ready for this."


Recognising digital government excellence

Service NSW wins the 2023 Supernova Award for Digital Safety, Governance and Privacy  

In something of a rarity, Constellation Research has recognised a major government for innovation.   

New South Wales (NSW) is the largest state of Australia. Within a federation rather like the USA and Canada, Australian state governments are responsible for hospitals, schools, community colleges, policing, most land-based public transport, land management, real estate registration, births, deaths & marriage registration and so on.  

Service NSW is the customer-facing delivery arm of the New South Wales state government. Service NSW is a one-stop shop for citizens and businesses dealing with most state G2C and G2B services, such as driver licensing, local business compliance, trade licensing, vehicle registration, traffic violations, and home building, among others. Service NSW operates a network of physical service centres and omnichannel online services.

Declaration: The author has been contracted from time to time by the government of NSW as a privacy adviser. He undertook some of the privacy impact assessments mentioned in this article.

Mobile first but with care

A key asset of the agency is the MySNSW mobile app.

The Service NSW technology posture has shifted to mobile-first over the past several years, but with great attention to access and fairness across the community (more on this below).

MySNSW includes Australia’s first Digital Driver Licence, released in 2019. The app has become a multi-function digital wallet, including vehicle registration, small business registration, boating, fishing and trade licenses, home building permits, seniors card and so on.

As well as presenting credentials, the mobile app is able to read and verify selected credentials from other MySNSW wallets, enabling citizens to check important bona fides of other people. Every NSW citizen is able to check car registrations, trade licenses and Working-With-Children Checks, among others. I will explain a little later how this capability proved crucial in the government’s pandemic response.

Interoperability pilots are underway with Australian federal government credentials. The technology baseline is in the process of being upgraded to cryptographically verifiable credentials.

Service NSW innovation

The Supernova Award was awarded to Service NSW with particular reference to two extensions of the mobile app in the COVID-19 pandemic, which exemplified how digital technology can empower individuals and protect their digital safety and privacy.

COVID-19 contact tracing

Early in the pandemic, Australian public health authorities mandated that most public venues instigate customer and visitor check-in to support contact tracing.

At first, there was no standard for this, so each venue approached the record keeping requirement differently. Very quickly, third party mobile apps emerged to help automate visitor records. QR codes overnight became synonymous with COVID check-in.

But where was this data being sent and how was it being safeguarded? The NSW government perceived that Australians would have greater confidence in contact tracing if public health authorities handled check-in data, since community acceptance of COVID management was generally high. Service NSW therefore built a standard COVID check-in function into the MySNSW app, with government-issued venue QR codes routing check-in records directly to the state's health authorities.

COVID Stimulus Program

Pandemic lockdowns ordered by the authorities created enormous economic challenges for citizens and businesses. The NSW Government resolved to help mitigate the impact through an economic stimulus program targeting the tourism and hospitality sectors.

How could a large amount of liquid funds be distributed securely and quickly to citizens? Service NSW had the answer, leveraging the agency’s existing banking relationships with businesses and the high level of customer engagement with the MySNSW mobile app.

Four $25 cash-equivalent vouchers were allocated to every adult in NSW, redeemable at qualified food outlets and entertainment venues.

Service NSW was already assisting businesses with COVID-Safe training, compliance and customer communications. All public businesses in NSW were required to demonstrate COVID-Safe practices and be registered as such. Service NSW built on that footprint to quickly generate venue- and geolocation-specific QR codes for every registered business.

Remember that the mobile app can read credentials. So, with a software update, businesses were suddenly able to use MySNSW to read and process the $25 COVID Stimulus vouchers presented by customers.

Results

Within 16 weeks of the day the government decided to allocate COVID Stimulus funds to citizens through the digital channel, money was flowing to customers.

I rate this performance as world’s best practice in digital delivery.

The product team conducted business and customer UX research, upgraded the mobile software, integrated it with the B2G payments channel, undertook a privacy impact assessment, completed user acceptance testing, and deployed the solution in just 16 weeks.

The successful COVID stimulus vouchers program subsequently served as a prototype for the delivery of further state government family assistance such as subsidies for school holiday and children’s physical fitness programs.

MyPOV

The Supernova Award highlights Service NSW’s best-of-breed digital transformation program. The government’s mobile app has proved to be a robust and agile platform for delivering multiple waves of meaningful digital customer services. Privacy, safety and good governance are embedded in the government’s product development.

Further reading

More details about the MySNSW app are available at the Supernova awards page.

Get ready for the 2024 Supernova awards here

 


Big Idea: The Future of AI and Biology: De-Mystifying Benefits, Risks, and Opportunities Ahead


AI and biology have the potential to greatly benefit humanity in several ways. Here are some key illustrative opportunities to consider:

* Accelerated research and development: AI can analyze vast amounts of biological data and identify patterns and correlations that humans may miss. This can lead to faster and more accurate discoveries in fields such as drug development, disease diagnosis, and genetic engineering.

* Precision medicine: By combining AI algorithms with biological research, personalized medicine can become a reality. AI can analyze an individual's genetic information, medical history, and lifestyle factors to provide tailored treatment plans and preventive measures.

* Improved agricultural practices: AI can help optimize crop yields, reduce the use of pesticides and fertilizers, and enhance sustainable farming practices. By analyzing data on soil composition, weather patterns, and plant genetics, AI can provide insights to improve crop productivity and address food security challenges.

* Environmental conservation: AI can assist in monitoring and protecting ecosystems by analyzing data from sensors, satellites, and drones. This can help identify endangered species, track deforestation, and mitigate the impacts of climate change.

* Enhanced disease surveillance: AI can analyze large-scale data from various sources, including social media, to detect and track disease outbreaks in real-time. This can enable early intervention and help prevent the spread of infectious diseases.

That said, there has been both excessive fear and unwarranted hype surrounding the topics of AI and biology. Specifically, the idea that artificial intelligence (AI) will increase the risks associated with biotechnology misuse, such as creating harmful pathogens or promoting bioterrorism, overlooks three important factors.

Firstly, AI can only use data that already exists. If the data is available, it can be used by humans without the need for AI. Therefore, controlling access to data or AI won't prevent the misuse of biological research, as the data can still be found and used by human experts.

Secondly, governments usually prevent misuse of biotechnology by focusing on the preparatory actions taken by those intending to create bioweapons. This approach can also be applied to AI. For example, when steam engines led to a rise in train robberies, the solution wasn't to stop using steam engines, but to improve security measures. Similarly, we need to develop early warning systems and detection methods to identify if biological research is being used for harmful purposes.

Thirdly, AI often makes mistakes and can produce inaccurate results, so any AI used in biotechnology will need to be checked by a human expert. This means that AI doesn't replace the need for human knowledge and expertise. Even if an AI can suggest new ways to create pathogens or biological materials, these suggestions still need to be tested and reviewed by human experts.

It’s worth considering the significant benefit that has already occurred at the intersection of biology and data science over the last two decades. For instance, two COVID-19 mRNA vaccines were designed on a computer and then printed using a nucleotide printer. This technology significantly sped up the vaccine development process.

In the future, AI can continue to benefit biological research and biotechnology, but it's important to ensure that AI models are trained correctly. This involves focusing on data curation and using the right training approaches for AI models of biological systems.

Moreover, it's important to remember that AI systems are not all the same. They use different approaches and models, and they are only as good as the data they are trained on. Current AI systems are neither conscious nor anywhere close to the Artificial General Intelligence (AGI) sought by some. They are good at detecting patterns and solving problems, but they are not capable of working across a wide range of problems without extensive training data.

To address the global challenges of our time, such as climate change and food security, we need to use both AI and biological research. For example, bacteria generated through computational means can consume methane, a potent greenhouse gas, and return nitrogen to the soil, improving agricultural yields. We need to focus on using these technologies to address important issues, while also ensuring that they are used ethically. By approaching AI with a balanced perspective, we can harness its potential while mitigating risks. It is essential to foster a culture of responsible and informed engagement with AI, promoting its beneficial applications while addressing concerns and ensuring ethical practices.

In summary: the integration of AI and biology holds immense potential for advancing scientific knowledge, improving healthcare outcomes, and addressing global challenges. It is crucial for communities to recognize and support these advancements to harness their benefits for the betterment of humanity.


Google launches Gemini, its ChatGPT rival, adds AI Hypercomputer to Google Cloud

Alphabet's Google has launched Gemini, its most powerful model designed to compete with OpenAI's ChatGPT, in three sizes: Gemini Ultra, focused on complex tasks; Gemini Pro, an all-purpose model; and Gemini Nano, which is aimed at on-device usage.

The Gemini 1.0 rollout will catch some folks by surprise given that there were reports that Gemini would be pushed to early 2024. With the introduction of Gemini, all three hyperscale cloud providers have announced or upgraded models in recent days. Amazon Web Services outlined Amazon Q at re:Invent. Microsoft is upgrading Copilot with the latest from OpenAI.

"These are the first models of the Gemini era and the first realization of the vision we had when we formed Google DeepMind earlier this year. This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company," said Alphabet CEO Sundar Pichai.

In a blog post, Demis Hassabis, CEO of Google DeepMind, said the company set out to make Gemini multimodal from the start. Typically, large language models (LLMs) have had different modalities stitched together.

Hassabis said:

"Until now, the standard approach to creating multimodal models involved training separate components for different modalities and then stitching them together to roughly mimic some of this functionality. These models can sometimes be good at performing certain tasks, like describing images, but struggle with more conceptual and complex reasoning.

We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness. This helps Gemini seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models — and its capabilities are state-of-the-art in nearly every domain."

Google cited a bevy of benchmarks for Gemini and said the model "exceeds current state-of-the-art results on 30 of the 32 widely used academic benchmarks used in LLM research and development."

For instance, Gemini Ultra scored 90% on MMLU (massive multitask language understanding), which is based on a combination of 57 subjects, world knowledge and problem solving. Google said its approach to Gemini enables it to think more carefully before answering difficult questions.

In a chart, Google outlined Gemini benchmarks vs. GPT-4. Gemini 1.0 fared well, but lagged in HellaSwag, a benchmark for commonsense reasoning about everyday tasks. Both models scored about 53% on challenging math problems, which is still better than I'd do. Overall, Gemini 1.0 benchmarks are slightly better than the GPT-4 comparison.

While these benchmarks for models are interesting, enterprises are likely to get a bit of déjà vu from the semiconductor benchmark battles. Read the fine print and conditions, and realize that in real-world use cases a slightly better benchmark score may not matter.

According to Google, Gemini Ultra excels in "several coding benchmarks." Coding ability is perhaps the use case with the most returns for LLMs as developer productivity has been a game changer for enterprises.

Google also said that Gemini has built-in safety evaluations including safety classifiers to identify, label and sort out content that's toxic. Gemini Ultra is currently completing trust and safety checks before rolling out broadly. Bard Advanced will launch with Gemini Ultra early in 2024.

Gemini will be used to upgrade Google's Bard and Gemini Nano will power generative AI features on Pixel 8 Pro. 

Infrastructure on Google Cloud

No model is complete without an announcement about training infrastructure and in-house processors.

Google trained Gemini on its in-house Tensor Processing Units v4 and v5e. Google is launching Cloud TPU v5p, which is aimed at training AI models.

Each Cloud TPU v5p pod is composed of 8,960 chips, with an inter-chip interconnect delivering 4,800 Gbps per chip.

Those TPUs will be part of Google Cloud's upcoming AI Hypercomputer, a supercomputer that will include integrated hardware, open software and flexible consumption models.


Constellation Research analyst Andy Thurai said:

"Google is taking on competitors in three major categories -- OpenAI/Microsoft ChatGPT with their Gemini, AWS and NVIDIA on infrastructure with new TPU chips to become the AI training platform, and IBM/HP/Oracle with the AI Hypercomputer.

Gemini was delayed due to concerns about the readiness, safety concerns, and especially the problems with non-English queries. It is clear that Google wants to be careful with Gemini since it will be embedded in multiple products. That caution is another reason the live version got replaced with "virtual demos." But Google can't afford to wait with Gemini and lose mindshare as OpenAI/Microsoft and AWS continually release models.

Gemini has a few differentiators. First, Gemini is multimodal from the ground up. Technically, this LLM could cross the boundary limitations of modalities. Second, Google also released three model sizes rather than one size fits all categories. Third, there are safety guardrails to avoid any toxic content.

Bottom line: Google is trying to become a one stop shop for large enterprises to train their massive LLMs and run on Google Cloud."


GitLab sees traction as it competes against GitHub, Atlassian

GitLab is on a $559 million revenue run rate exiting the third quarter with 8,175 base customers. While that quarter is garnering Wall Street attention, enterprises should focus on the competitive dynamics moving GitLab's results as its DevSecOps platform gains traction.

The company reported third-quarter revenue of $149.7 million, up 32% from a year ago, with a net loss of $1.84 a share. Non-GAAP earnings for the third quarter checked in at 9 cents a share.

GitLab's portfolio includes GitLab Duo, a suite of AI tools, and a platform for agile development, automated software delivery, source code management, security and compliance, and continuous integration and delivery.

Constellation Research analyst Holger Mueller said:

"Gitlab is on a path to break half a billion in revenue for the first time in its history, fueled by software powered enterprises. Whether the security angle of its DevOps product is the key driver remains to be seen, as the market and the competition is growing."

Here's a look at why GitLab's third quarter was better than expected.

Jira switching to cloud-only model opens the door. GitLab CEO Sid Sijbrandij said Atlassian Jira customers are evaluating platforms and GitLab is competitive. He said:

"Atlassian’s decision to stop support for its server offering is making customers reconsider what product they use for Enterprise Agile Planning. We are focused on making it easier for these customers to move to GitLab SaaS and self-managed. We recently launched a new Enterprise Agile Planning SKU. Now, GitLab Ultimate customers can easily bring non-technical users into the platform.

Because of that server offering being deprecated, a lot of customers are taking stock of where they're at. It is a natural point for enterprises to evaluate."

AWS and Google Cloud are strong partners and motivated to replace Microsoft's GitHub. Sijbrandij said GitLab won a third-quarter deal with a European telecom against GitHub in partnership with AWS.

With Google Cloud, GitLab is being built into the console. Sijbrandij cited a handful of customer wins against GitHub as enterprises aimed to centralize platforms. Sijbrandij said:

"We've got strong partnerships with both Google and AWS. And they're interested because we help their customers move to their cloud faster. We help them accelerate moving those workloads. And an interesting thing we did with Google recently, we announced at Google Next that they'll be integrating GitLab into their development console for GCP. I think that's a really interesting development that will pay off as many things over the longer term. I think it speaks to the strength of these partnerships together with the AWS example."

Compliance and governance. Sijbrandij said security and compliance integration has become a key selling point. GitLab appears to be landing customers like Lockheed Martin and Carfax because it builds security and compliance directly into workflows.

The ability to set controls and governance frameworks that are integrated into developer workflows resonates with buyers. Sijbrandij said:

"With point solutions, developers have to wait on security teams to identify vulnerabilities. Or, if they have access to a security scanner to assess their code, they need to manually copy and paste the scanner results back into their development tool. The result is code that isn’t scanned at the time it’s written. This increases the time required to detect and resolve vulnerabilities."

GitLab has GitLab Dedicated, a single-tenant SaaS platform that gives customers data isolation and residency. Sijbrandij added:

“GitLab is the only DevSecOps platform that brings together security, compliance & governance, AI, and enterprise agile planning. Enterprises face complexity from all directions in the form of rapidly increasing user expectations, more advanced cyber-attacks, and more strict industry regulations. We believe they need GitLab to help them navigate this complexity and realize business value. Our platform improves engineering productivity, reduces software spending."

DevSecOps is a critical category as AI workloads grow. Sijbrandij said that AI and integrating it with developer workflows is also driving sales.

"AI needs to be throughout the life cycle and for multiple things, like only 25% of the time of a developer spends on coding, 75% is other tasks. And as developers get more productive, they write more code, you need to also increase the productivity of security and operations. So, we're focused on making it work throughout the life cycle," he said.

Expansion beyond developers. GitLab's enterprise planning suite is attracting usage beyond developers. Business users that have to interact with developers and engineers can use portfolio and project management tools without getting bogged down.

 


Microsoft updates Copilot with GPT-4 Turbo

Microsoft Copilot is being upgraded to OpenAI's GPT-4 Turbo model as the software giant rolls out a series of generative AI updates. The move comes as Microsoft and OpenAI move to put the Sam Altman kerfuffle in the rear-view mirror.

In a blog post, Microsoft said Copilot will soon be able to answer with OpenAI's latest model, which is being tested and will roll out more widely in the weeks ahead.

What remains to be seen is whether these model updates affect usage or become a simple behind-the-scenes rollout. With model choice becoming more of a concern, enterprises are likely to wonder about model updates and what it would take to swap out foundational models if needed.

For now, the Copilot model updates land between the OpenAI soap opera and AWS' launch of its Q generative AI assistant that runs across its services.

In addition to the GPT-4 Turbo upgrade, Copilot will get OpenAI's new DALL-E 3 model for images and capabilities that combine GPT-4 with vision, Bing image search and web search data.

Microsoft also said it is testing Code Interpreter, a service that will provide more accurate calculations, coding, data analysis and visualization. Code Interpreter is in limited testing too. Bing will also get Deep Search, which will use GPT-4 to provide better search results for complex topics.

Also see: How Generative AI Has Supercharged the Future of Work | Generative AI articles | Why you need a Chief AI Officer | Software development becomes generative AI's flagship use case | Enterprises seeing savings, productivity gains from generative AI | Work in a generative AI world will need critical, creative thinking


IBM, Meta form AI Alliance: What can it accomplish?

IBM and Meta have formed the AI Alliance with 50 founding members aiming to provide "open and transparent innovation" in artificial intelligence. The catch is that many of the big names--Google, Nvidia and Microsoft--driving the AI advances so far aren't among the founding members.

The concept behind the AI Alliance is notable in that the group wants to bring international players, academics, researchers, governments and developers together to provide an open science and technology ecosystem. The tension around technologies like generative AI revolves around for-profit motives vs. doing what's right for humanity. Security expert Bruce Schneier just penned a long missive about AI and trust worth a read.

IBM and Meta also have a history of contributing to open source as well as initiatives such as the Open Compute Project. However, it remains to be seen what the AI Alliance ultimately accomplishes without some key AI leaders.

Also see: How Generative AI Has Supercharged the Future of Work | Generative AI articles | Why you need a Chief AI Officer | Software development becomes generative AI's flagship use case | Enterprises seeing savings, productivity gains from generative AI | Work in a generative AI world will need critical, creative thinking

The founding members and collaborators include the following: AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth College, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University and others.

Constellation Research analyst Andy Thurai was quick to note who was missing. He said:

"Major A list players such as AWS, Databricks, Snowflake, Dataiku, Data Robot, Domino Data Lab, Scale.AI, NVIDIA, Microsoft, OpenAI, Google, Anthropic, Cohere, AI 21 Labs and many others are missing from the list."

As with any alliance, federation or task force, the jury is out until something actually happens. The AI Alliance indicated that it will be action oriented and focus on "fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness."

Focus areas for the AI Alliance include:

  • Developing benchmarks and evaluation standards for responsible development of AI systems, as well as tools for safety, security and trust.
  • Advancing an ecosystem of open foundation models with diversity on many fronts.
  • Creating an AI hardware accelerator ecosystem.
  • Supporting AI skills and research.
  • Developing educational content and resources.

Meta and IBM's experience in open source as well as open hardware could come in handy for kicking off the AI Alliance. Simply put, the group is worth watching pending further developments.

Thurai added that it's unclear what the AI Alliance will ultimately accomplish:

"There are many tools and software components, both open source and commercial already available focused on AI Alliance initiatives. Depending on what is measured there are standards out there. For example, tools like GAIA, MLPerf, and other tools to measure F1 score, model performance, accuracy, and drift are available. I am not sure what the alliance is offering at this point. There are some promises made which are yet to be seen. Tools like Fairlearn, AI fairness 360, Accenture Fairness Tool, etc. to measure model fairness are already available. Some popular benchmarking tools include DAWNBench, GLUE, and SQuAD."
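To make one of the metrics Thurai mentions concrete: the F1 score is simply the harmonic mean of precision and recall, computed from confusion-matrix counts. A minimal sketch (the function name and example counts are my own, not from any particular tool):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from raw confusion-matrix counts.

    precision = tp / (tp + fp); recall = tp / (tp + fn);
    F1 is the harmonic mean of the two.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 8 true positives, 2 false positives, 4 false negatives:
# precision = 0.8, recall = 2/3, F1 = 8/11 ≈ 0.727
```

Libraries like Fairlearn and the benchmarks Thurai names layer far more on top, but this is the arithmetic underneath.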


New Quantum Computing Road Map Towards Utility | Interview with IBM Partner Heather Higgins

It's safe to say quantum computing is hard. That's why IBM has expanded its latest quantum road map to include a holistic view of scale, quality and speed, shifting from an era of discovery to an era of utility.

IBM's newly launched road map offers:

  • An optimal balance point between scale, quality and speed, including error-mitigation run-time, to unlock business utility.
  • Accelerated error-correction techniques, offering a faster path to modularity and quantum-centric supercomputing.
  • Broader accessibility of quantum systems and problems, through abstracting and simplifying techniques.
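To give "error mitigation" some substance: one widely used technique, zero-noise extrapolation, runs a circuit at deliberately amplified noise levels and extrapolates the measured expectation value back to the zero-noise limit. A toy sketch in plain Python, with a made-up linear noise model standing in for real circuit runs (this is an illustration of the general technique, not IBM's implementation):

```python
def zero_noise_extrapolate(expectation_at_scale, scales=(1.0, 2.0, 3.0)):
    """Estimate the zero-noise expectation value by fitting a line to
    measurements taken at amplified noise scales and reading off the
    intercept at scale = 0 (Richardson-style extrapolation).

    expectation_at_scale: callable mapping a noise scale to a measured
    expectation value (a stand-in here for executing the real circuit).
    """
    xs = list(scales)
    ys = [expectation_at_scale(s) for s in xs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Least-squares slope of expectation value vs. noise scale.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # Intercept at scale = 0 is the mitigated estimate.
    return mean_y - slope * mean_x

# Toy noise model: ideal value 1.0, decaying linearly with noise scale.
noisy_measurement = lambda scale: 1.0 - 0.12 * scale
```

Under this linear model the extrapolation recovers the ideal value exactly; real hardware noise is messier, which is why run-time error mitigation is a road-map item rather than a solved problem.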

In an interview with Constellation analyst Holger Mueller, IBM Quantum Partner Heather Higgins advises C-level decision-makers to start preparing now for quantum technology through a purposeful investment approach and strategy that fits organizational needs. Holger agrees that paying attention to the rapid acceleration of quantum technology could solve key problems for enterprises.

Watch the interview on cloud_convos: https://www.youtube.com/embed/2OUosUHDNwc

IBM launches latest Heron quantum processor, IBM Quantum System Two

IBM launched its IBM Quantum System Two as well as its next-generation quantum processor as the company touts what's possible today with quantum computing.

At the company's IBM Quantum Summit in New York, Big Blue launched IBM Quantum Heron, a 133-qubit processor designed to deliver higher performance and lower error rates than its predecessor.

Three Heron processors will power the IBM Quantum System Two, which is the company's first modular quantum computer. Modular quantum processors are being pursued by a bevy of industry players as quantum computing is likely to blend with supercomputing at first. Quantum System Two is located in Yorktown Heights, NY.

IBM also outlined its development roadmap to 2033 with a focus on the quality of gate operations, quantum circuits and running at scale.

Constellation Research CEO Ray Wang said:

“Quantum computers need quantum chips and IBM Quantum System Two addresses the need for faster and more precise error mitigation and correction before we achieve mass adoption. The path to the Blue Jay system by 2033 has been one of IBM's most consistent achievements in computing.”

IBM demonstrated how its 127-qubit IBM Quantum Eagle processor can power systems used as scientific tools to solve chemistry, physics and materials problems.

With IBM Quantum Heron, IBM plans to leverage a 5x performance boost and improved error rates to build out a fleet of systems. These systems will mostly be available to customers via cloud platforms. In many respects, quantum computing is entering a more accessible era.

Meanwhile, IBM outlined a Quantum Development Roadmap with future processor plans and systems. Here's the roadmap (larger format).

IBM also detailed a software stack that will use generative AI and watsonx to make quantum software programming easier. IBM is taking Qiskit 1.0 and adding Qiskit Patterns, which enables developers to create code more easily. Combined with Qiskit Runtime and Quantum Serverless, IBM is looking to create an entire quantum computing stack for multiple scenarios.


MongoDB Atlas Vector Search, Atlas Search Nodes generally available

MongoDB announced the general availability of MongoDB Atlas Vector Search and MongoDB Atlas Search Nodes in the latest in a series of announcements to enable developers to more easily build generative AI applications.

The general availability of MongoDB Atlas Vector Search and Atlas Search Nodes follows a June launch in which the company set out to enable AI and large language model (LLM) workloads.

Customers using MongoDB Atlas Vector Search and Atlas Search Nodes include AT&T Cybersecurity, Pathfinder Labs and UKG.

Since that launch MongoDB has had a steady cadence of launches. For instance, at AWS re:Invent, the company said it would integrate MongoDB Atlas Vector Search with Amazon Bedrock and announced it would optimize Amazon CodeWhisperer suggestions for MongoDB developers.

MongoDB steps up generative AI rollout across platform

Those horizontal services are complemented by MongoDB's focus on industry-specific use cases for healthcare, the public sector, manufacturing and automotive as Atlas ramps.

MongoDB, along with companies such as Databricks and Snowflake, is aiming to help developers and enterprises leverage real-time data to build generative AI applications. The argument is that AI applications can't be built efficiently without strong data strategies. Atlas Vector Search and Atlas Search Nodes are designed to give enterprises the capability to search real-time data efficiently.

Key points about MongoDB's two additions:

MongoDB Atlas Vector Search is an integrated vector database for MongoDB. Developers can use one API to build generative AI applications across AWS, Microsoft Azure and Google Cloud without duplicating and synchronizing data. MongoDB Atlas Vector Search also enables enterprises to use retrieval-augmented generation (RAG) with pre-trained foundation models.

Atlas Vector Search integrates with LLM frameworks and model providers including LangChain, LlamaIndex, OpenAI, Cohere, Hugging Face and Nomic.

MongoDB Atlas Search Nodes provide dedicated infrastructure for workloads that use MongoDB Atlas Vector Search and Atlas Search, independent of the database's operational nodes. This workload isolation improves performance at scale and optimizes costs.
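At query time, Atlas Vector Search is driven through a `$vectorSearch` aggregation stage. A minimal sketch of what such a pipeline looks like; the index name, field path and query embedding below are hypothetical placeholders, and real values depend on how the Atlas Search index and collection are configured:

```python
def build_vector_search_pipeline(index, path, query_vector,
                                 limit=5, num_candidates=100):
    """Build a MongoDB Atlas $vectorSearch aggregation pipeline.

    index, path and query_vector are placeholders for illustration;
    they must match an actual Atlas Search index and embedding field.
    """
    return [
        {
            "$vectorSearch": {
                "index": index,                   # Atlas Search index name
                "path": path,                     # field holding embeddings
                "queryVector": query_vector,      # embedding of the query
                "numCandidates": num_candidates,  # ANN candidates to scan
                "limit": limit,                   # results to return
            }
        },
        # Surface the similarity score alongside each document.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

# Hypothetical usage with a pymongo collection handle:
#   results = collection.aggregate(
#       build_vector_search_pipeline("embedding_index", "embedding", qvec))
```

The pipeline runs like any other aggregation, which is what lets one API serve generative AI workloads across the cloud providers MongoDB lists.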
