Results

Salesforce rolls out Sales GPT, Service GPT

Salesforce rolled out the latest installment of generative AI across its clouds with the launch of Sales GPT and Service GPT.

The news follows Salesforce's rollout of AI Cloud this month as well as the Einstein GPT Trust Layer, which enables customers to keep data secure while leveraging large language models. 

Also see: What Movies Get Wrong…and Salesforce Gets Right…About AI | Salesforce launches AI Cloud, aims to be abstraction layer between corporate data, generative AI models | Salesforce launches Marketing GPT, Commerce GPT, aims to connect generative AI to ROI

Sales GPT includes the following:

  • Auto-generated sales emails personalized for customers based on data.
  • Call summaries and transcription for sales calls and follow-up actions.
  • A sales assistant to summarize each step of the sales cycle including account research, meeting prep and contract drafts.

Service GPT and Field Service GPT will include the following generative AI tools:

  • Service replies that are auto-generated with real-time data and personalization.
  • Work summaries of service cases and customer engagements.
  • Knowledge articles that are auto-generated and updated based on real-time data.
  • Mobile work briefings for field service team appointments and summarization of issues.

Sales GPT and Service GPT are expected to be generally available this year.


Synthetic biology: What you need to know

Synthetic biology promises to be disruptive across multiple industries because it can be used to develop new biological parts and devices while reengineering existing systems.

DisrupTV caught up with three experts in the field on a recent episode. The roundtable captured some of the promise and peril with synthetic biology, a multidisciplinary field of science focused on reengineering organisms.

Here's a look at the key themes from the DisrupTV discussion and what you need to know.

What is synthetic biology? "Synthetic biology is the ability to design biological systems," said Dr. Megan Palmer, Senior Director for Public Impact at Ginkgo Bioworks and Adjunct Professor of Bioengineering at Stanford University. "We've been able to cut and paste DNA because we know the underlying code biology runs on for the last 50 years or so. Humans have been modifying biology for selective breeding of plants and animals for much longer than that."

"Now scientists and engineers are developing even better tools to be able to read, write, edit and evolve biological systems in ways that are easier, faster, more precise and predictable," said Palmer.

The upshot is that scientists and engineers are unlocking this ability to partner with biology in new ways.

The promise of synthetic biology. Palmer said she considers biology already the most powerful technology on the planet and the ability to program it means "we can use biology to manufacture nearly everything that is currently made with petrochemicals in ways that are more sustainable." Palmer said synthetic biology can impact multiple sectors in the economy such as health, food and manufacturing. There's even potential for data storage using DNA.


Use cases for synthetic biology. Panelists noted a bevy of use cases ranging from space travel to food production to biomanufacturing and sustainability. Dr. Divya Chander, Anesthesiologist, Neuroscientist, and Data Scientist, said synthetic biology could play a role in space travel and "developing astronaut resilience." Chander said:

"We as humans aren't really very good at traveling in space because of microgravity in the radiation environment. There is a possibility of using gene editing tool, which are part of the synthetic biology toolkit, and turn genes on and off to give us more tolerance. Synthetic biology could also enable food production systems in space to improve nutrition and use fewer inputs like water or pesticides. We can even engineer our plants in space to do things like scrub toxins from the environment. CO2 is a big thing for astronauts, but also could do similar things on earth."

Chander said this gene editing could also be good for patients who are undergoing chemotherapy for cancer. With biomanufacturing, synthetic biology could print drugs that are more targeted.

Dr. David Bray, Distinguished Fellow at the Stimson Center and Business Executives for National Security, said sustainability will be a big use case for climate change. "I'm a big believer that the only way we're going to deal with climate change is with synthetic biology," said Bray, who noted that the technology could address the following:

  • Clean water.
  • Sustainable supply chains.
  • Preventing pandemics before they start.

"We can organize ourselves in new ways so we can biologize industry instead of just industrializing biology," said Bray.

Chander said other use cases for synthetic biology include:

  • Addressing chronic diseases such as cancer and extending longevity.
  • Editing biologic machinery to address things like antibiotic resistance.
  • Biomanufacturing targeted pharmaceuticals.

Growing the ecosystem and next generation. Palmer said iGEM, a non-profit focused on advancing synthetic biology through education and competition, has gone a long way toward developing community in the industry. Palmer said iGEM competitions have encouraged students to design modular biological machines to solve problems. "Thousands of students across dozens of countries every year are developing biological innovations that are cool technologies, but also bake in social responsibility, safety and security into designs," said Palmer.

Risks with synthetic biology. Bray said it's promising that biology technologies are being democratized, but there will need to be some guardrails. "You know unleashed and craziness is going to happen, but we have multiple revolutions happening in parallel," said Bray, who added that the combination of AI and synthetic biology could be powerful for good uses and bad. "We are going to need the equivalent of smoke detectors for the biological space," he said. "Technologies like synthetic biology will be a tremendous force for good, but we also need to be ready for when some people try to use it for not so good purposes."

According to the US Government Accountability Office, synthetic biology presents safety and security concerns, including the potential for biological and chemical weapons, product tampering, environmental effects and challenges with public acceptance.

Ethics will be critical, said Palmer. Synthetic biology will require transparency and permission to use an individual's data. Society as a whole will have to think through ethics and biological data best practices, she said. Data trust will also be a key concept since individuals will be data producers. If you are going to read from or write to someone's brain, you'll need full consent. The GAO noted that regulatory frameworks will be needed to address future applications of synthetic biology.

Multiple disciplines needed. The panelists said synthetic biology requires multiple skills and sits at the intersection of data science, biology, ethics and technology. Depending on the use case, industry expertise will also be needed.


Databricks Data + AI Summit: LakehouseIQ, Lakehouse AI and everything announced

Databricks infused its Lakehouse Platform with generative AI capabilities, adding tools that enable customers to leverage large language models (LLMs), federate and govern data, and deploy knowledge engines that learn corporate cultures.

The company outlined the news at its Data + AI Summit. Databricks' news follows announcements from Snowflake and MongoDB designed to land more workloads and enable customers to leverage generative AI. Leading up to Databricks' conference, the company announced the acquisition of MosaicML and launch of Lakehouse Apps. The big takeaway is that data platforms and generative AI capabilities are converging.

Constellation Research analyst Doug Henschen summed up the data platform game: "All three (Snowflake, Databricks and MongoDB) want customers to do as much as possible on their platforms so they are invading each other’s turf. But their original (and still predominant) dance partners are data warehouse for Snowflake, data science for Databricks and developers for MongoDB."

Here's a roundup of Databricks' enhancements to its platform.

  • The company said it will make it easier to deploy and manage LLMs with Lakehouse AI additions that offer monitoring and governance for LLM development. Databricks is adding Vector Search, a collection of open-source models, LLM-optimized Model Serving, MLflow 2.5 with LLM tools and Lakehouse Monitoring.
  • Lakehouse AI will unify the AI lifecycle from data collection and preparation to model development. Databricks Vector Search will manage and automatically create vectors in Unity Catalog. Databricks AutoML will feature a low-code approach to fine-tuning LLMs. And Databricks will curate a list of open-source models in its marketplace.
  • Databricks outlined MLflow 2.5, a new release of the Linux Foundation open-source project MLflow. Updates include MLflow AI Gateway, which allows developers to swap out backend models and switch between LLM providers, and MLflow Prompt Tools, a no-code set of visual tools.
  • Databricks Lakehouse Monitoring will monitor and manage data and AI assets within Lakehouse.
  • LakehouseIQ adds a natural language interface to the Lakehouse Platform. LakehouseIQ uses generative AI to understand company specific jargon, data usage and organizational structure to answer questions within the context of a business. The goal is to democratize data analytics across a corporation. According to Databricks, LakehouseIQ will learn from signals embedded in corporate data including schemas, documents, queries, popularity, lineage, notebooks and dashboards.
  • Lakehouse Federation in Unity Catalog will include query federation across data assets and platforms outside of Databricks. Databricks is also offering governance outside of its platform via Unity Catalog.
  • Delta Lake 3.0, the latest contribution to Linux Foundation's Delta Lake project, will add Universal Format (UniForm), which will allow data stored in Delta to be read as if it were Apache Iceberg or Apache Hudi (see the sketch after this list).
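
To make UniForm concrete, here is a minimal PySpark sketch of how a table might opt in. It assumes Databricks' announced approach of enabling UniForm through Delta table properties; the table name and the exact property key are illustrative assumptions based on the announcement, not documented syntax, and it requires a Spark session with Delta Lake configured.

```python
# A minimal sketch: enabling Delta Lake 3.0's Universal Format (UniForm) on a
# table so Iceberg-compatible engines can read it. The table name and property
# key below are illustrative assumptions, not Databricks' documented example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("uniform-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_orders (order_id BIGINT, amount DOUBLE)
    USING DELTA
    TBLPROPERTIES (
        -- assumed key: tells Delta to also maintain Iceberg-readable metadata
        'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```

The point of UniForm is that the same physical data stays in Delta while other engines read it through Iceberg metadata, so no copy or migration is needed.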

Doug Henschen's take:

  • Databricks is, first and foremost, a platform for data scientists and it’s used by many of its 10,000+ customers as a platform for a significant chunk, if not a majority, of their data. Databricks is doing everything it can do to enable those customers to innovate with their data using AI, ML, and analytics, and it’s doing a great job of it.
  • Databricks has spent the last three years building up the warehouse side of its Lakehouse platform, but this year the generative AI tsunami has rightly refocused Databricks on what has always been its greatest strength: data science, including ML and AI. It’s very clear to me that Databricks customers are building AI models with Databricks today, and there’s a deep well of capabilities that are already generally available, plus emerging capabilities that are in public preview and very close to becoming GA.
  • What was very clear during today’s Databricks keynote is how far along Databricks customers such as JPMorgan Chase, JetBlue Airways and Rivian are in building innovative ML, AI and even generative AI capabilities using Databricks. A bunch of new enablers were announced today, several of which are already in public preview and are expected to go GA this year.

 


Driving Equality Through Accessibility: Building an Inclusive Digital Future

How do we build an inclusive #digital future? 🫱🏽‍🫲🏾 Watch Impact TV Episode 2 to unpack the barriers and opportunities for #accessibility equality from the following subject experts...

0:00: Impact TV introduction with co-hosts R "Ray" Wang, founder of Constellation Research, and Teresa Barreira, CMO of Publicis Sapient.

04:15: Alison Walden, VP and Accessibility Lead at Publicis Sapient, partners with clients to create #inclusive digital experiences. She highlights how many teams don't understand the variety of ways people access experiences online, and don't provide adequate access. Accessibility must be driven top-down at companies through #awareness, mandatory #training, measurement criteria, and hiring specialists. "Do it, do it right, do it right now."

15:40: Frances West, founder of FrancesWest & Co and former Chief Accessibility Officer at IBM, explains the increased attention around accessibility: 1) the rise in inclusive social movements, including people with disabilities, 2) the increased use of #technology during the pandemic, 3) the growing demographic of people aged 50+ who have difficulty with digital experiences, and 4) the increasing global legislation around accessibility #rights. Technology should be designed to include "edge users" with an intuitive simplicity. We should lead with accessibility, not add it as an afterthought.

31:27: David Bray, PhD, Distinguished Fellow at the Stimson Center, describes how we've advanced in user-friendly websites, but many #apps don't have built-in accessibility. Statistics say 1 in 4 people will have an accessibility challenge, so it is relevant for everyone. Any forward-leaning business has to prioritize accessibility to remain relevant in the #AI era. How are your #developers using code to bake in accessibility from the start? The best way for orgs to stay ahead is to remain a learning environment about user experience. When doing usability testing, make sure to include a diverse set of people. We have a human obligation to think about accessibility: involve customers, stakeholders, and citizens.

Accessibility is a human right.

Stay tuned for another episode of Impact TV in the coming weeks!

Watch the episode: https://www.youtube.com/embed/KSmSv7uIaxY

What Movies Get Wrong…and Salesforce Gets Right…About AI

The voice was calm yet determined. Frank was dead. Dave remained.

“Open the pod bay doors, HAL.”

“I’m sorry Dave. I’m afraid I can’t do that.”

The artificial intelligence aboard the Discovery One spacecraft envisioned in Arthur C. Clarke’s short story, The Sentinel, and Stanley Kubrick’s movie 2001: A Space Odyssey had been listening in and wasn’t having any of what Dave had in mind. The heuristically programmed algorithm was designed to solve problems quickly…and people were the problem.

HAL 9000 is a delicious villain. In fact, HAL was named the 13th greatest movie villain of all time by the American Film Institute. The cool indifference of HAL is haunting. But sadly, HAL, and other nihilistic machines like him, have become the baseline of awareness about AI for FAR too many people.

While conversations start with the innovation and change AI can usher in, they inevitably turn to the danger of the machines taking over. From discussions around ethical AI to the capacity for sentience, there is a sense that AI, left unchecked or allowed to read lips, will try to take over and be the downfall of humanity. There is never an in-between.

But what does AI mean for the average, everyday marketing team? In the early days of OpenAI’s ChatGPT, headline after headline bragged that generative AI would eventually “replace marketers” because of its ability to generate ad campaign copy, slogans and email subject lines in seconds. A variation on the HAL theme to be sure, but still, the script has the sentient super-villain machine with a touch of blood lust rising to rid the world of agency copywriters and marketing managers.

Before ChatGPT shows us the pod bay doors, let’s take a step back and consider if we got our movie references all wrong. What if AI in marketing is less Space Odyssey and more Devil Wears Prada?

As the tale goes, the devil boss, Miranda, has a new assistant, Andy, the protagonist of the book and movie. There is a moment during a glamorous charity gala when a swanky donor approaches to greet the hostess. Andy leans in and whispers the name of the guest, along with a couple of key factoids, just in time for Miranda to throw her arms up with all the warmth and recognition of an old friend.

Andy, not HAL, is the AI Marketing needs. And Andy…or rather Salesforce’s version of her…is called Marketing GPT and was purpose-built to lean in and whisper exactly what a marketer needs to engage and interact in the most personal and profitable way. Trained to not just understand customers, conversations, or engagement, but trained to also understand a specific business, Marketing GPT draws intelligence from Data Cloud and relies on a new trust layer to ensure that this isn’t just a story of the right message to the right customer at the right time…but the right model to deliver the right personalization and contextualization to the right marketer.

Digging into the Marketing GPT announcements, let’s focus in on a couple highlights that stood out (at least stood out to me):

  • Segment Creation: Imagine just asking your marketing tools to create a new audience segment. Marketers understand that the question is rarely the problem…instead it is all the preparation required to even get to the point of asking. With Segment Creation, both sides of that audience opportunity equation are addressed with AI, bringing the data together and giving marketers the opportunity to interrogate that data differently, all using natural language.
  • Segment Intelligence for Data Cloud: This is where marketing’s work proves impact and real, tangible business value by connecting the first-party data marketers rely upon for deeper engagement with revenue data and third-party paid media data. This isn’t just about “more metrics.” Instead, Segment Intelligence is about obtaining a truly comprehensive view into audience engagement. Knowing how someone engaged with initiatives is great…knowing how that connected to the business and revenue is even better.

There are other AI-super-powered capabilities in this initial introduction of Marketing GPT including integrating generative AI tools into everything from email content creation (with auto-generated copy recommendations that can be included in testing and engagement campaigns) to integrations with the creative upstart Typeface to create contextual visual assets that are aligned with approved brand voice, style guides and messaging.

Another announcement of note comes from the Salesforce Commerce GPT introduction. While the solution is packed with AI-powered assistive tools, including Commerce Concierge for personalized, engaging shopping experiences and Dynamic Product Descriptions that automatically fill in missing catalog data for merchants, it is the inclusion of Goals-Based Commerce that had me leaning in to learn more. This is not just about delivering the capacity for growth. It is about productive and efficient growth. With the Goals-Based Commerce tool, brands can set targets and goals based on what is top of mind for the business (and let’s be honest, those details can change minute to minute even while all still pointing toward profitability) and get AI-powered recommendations and even automations to help reach those goals. It connects Data Cloud, Einstein AI and Salesforce Flow to quickly move from goals to outcomes.

While AI is important to business, trusting AI is critical to us all. This was the message told time and again at both Connections and Salesforce AI Day. So HOW does Salesforce make Marketing GPT a built-for-enterprise, safe AI solution? It truly starts and stops with data.

Salesforce AI Cloud is billed as a cloud-based, end-to-end AI solution that supports multiple models, prompts and training data sets. A purpose-built suite of capabilities, AI Cloud works to deliver trusted, real-time generative experiences across applications and workflows, with a focus on super-charging CRM. Einstein sits at the heart of AI Cloud and, according to Salesforce, now powers over 1 trillion predictions per week across Salesforce applications. Thanks to AI Cloud, organizations can tap into multiple trusted large language models in an environment that is open and extensible. Customers will have access to multiple models to match the right model to the right task, be it third-party LLMs, Salesforce’s proprietary LLM (developed by Salesforce AI Research) or a customer’s own custom LLM. See: Salesforce launches AI Cloud, aims to be abstraction layer between corporate data, generative AI models

Initial LLM providers include AWS, Anthropic and Cohere. Salesforce had previously announced an extensive partnership with OpenAI and the APIs to access the GPT-4 model. Salesforce has also announced a partnership and integration with Google’s Vertex AI, adding yet another bring-your-own-model capability into the mix (Salesforce had previously announced the ability to bring models from Amazon SageMaker) directly through the newly announced Einstein GPT Trust Layer.

Why is this “trust layer” so important? This is what brings us back to the trust factor. By bringing these models in, be they internal (via Google Vertex), from Salesforce or from a third party like OpenAI, a customer’s data remains within the boundaries the customer has established as trusted. The Trust Layer is intended to house the identity and governance controls so that company data is not exposed to or retained by a model in the ways many organizations fear. Instead, once a query is run on a customer’s system, data (including data that has been aggregated and harmonized in Salesforce Data Cloud) is retrieved, masked and fed to the model via a secure gateway to generate the response. The prompt is not retained by the model, and within seconds responses are delivered back, routed through what Salesforce calls “toxicity detection” and finally audited and logged for visibility.
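
To make that sequence concrete, here is a simplified Python sketch of the flow Salesforce describes. Every function in it is a hypothetical stand-in, stubbed purely for illustration, not a Salesforce API; only the ordering of the steps (retrieve, mask, generate through a secure gateway, screen, audit) comes from Salesforce's description.

```python
# Illustrative sketch of a "trust layer" style flow. All functions are
# hypothetical stubs for illustration; none of this is Salesforce's API.
import re

def retrieve_from_data_cloud(customer_id: str) -> dict:
    """Stub: pretend to pull harmonized customer data (e.g. from Data Cloud)."""
    return {"name": "Jane Doe", "email": "jane@example.com", "last_order": "2023-05-01"}

def mask_pii(record: dict) -> dict:
    """Stub: mask sensitive fields before the prompt leaves the trust boundary."""
    masked = dict(record)
    masked["email"] = re.sub(r".+@", "***@", masked["email"])
    return masked

def call_llm_via_gateway(prompt: str, context: dict) -> str:
    """Stub: stands in for the secure gateway call to whichever LLM is configured.
    Per Salesforce, the prompt is not retained by the model provider."""
    return f"Drafted reply for {context['name']} about order {context['last_order']}."

def detect_toxicity(text: str) -> bool:
    """Stub: stands in for the toxicity-detection screen on the response."""
    return False

def audit_log(customer_id: str, prompt: str, response: str) -> None:
    """Stub: record the exchange for visibility and governance."""
    print(f"[audit] customer={customer_id} prompt={prompt!r} response={response!r}")

def grounded_generation(customer_id: str, user_prompt: str) -> str:
    context = mask_pii(retrieve_from_data_cloud(customer_id))   # retrieve, then mask
    response = call_llm_via_gateway(user_prompt, context)        # generate via gateway
    if detect_toxicity(response):                                # screen the output
        response = "Response withheld by content filter."
    audit_log(customer_id, user_prompt, response)                # audit and log
    return response

print(grounded_generation("001XX000003DHPh", "Draft a follow-up email about the latest order."))
```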

The promise here is that enterprises can secure, govern and orchestrate AI in a more constructive and intentional way. This is not a new concept. Trust and “enterprise-ready” offerings, tools and promises are cropping up everywhere from Adobe (with the guardrails around their suite of generative AI models in Adobe Firefly), to Microsoft’s Azure OpenAI Service (which only addresses safety and moderation of text and image generation using OpenAI models) and Nvidia’s open-source toolkit, NeMo Guardrails, that takes aim at toxic content.

But Salesforce arguably feels a responsibility to push innovation forward and to take the lead on having the tough ethics and security conversations in AI. For Salesforce, the Einstein GPT Trust Layer is a critical, if not mandatory move.

Marketing GPT tools are entering pilot quickly this summer (as early as June), and many are expected to reach general availability by October 2023, including Segment Creation and Segment Intelligence for Data Cloud. Other tools are on similar timelines, with Dynamic Product Descriptions expected to be GA in July 2023 and Goals-Based Commerce expected by February 2024. This is a welcome departure for Salesforce, which has earned a reputation for longer aspiration-to-availability timelines.

Yes…there were a TON of GPT labeled announcements made at Salesforce Connections (prompting some of us in attendance to just add GPT to the end of every proper name available.) But there was also a lot of excitement around the prospect of having that well trained personal business assistant whispering all the just-right details into our ears just in the moment we need it to make an amazing impression on our customers and prospects. It is a welcome shift in narrative from the machines ready to replace marketers to a safe, purpose-built, enterprise-ready and trained AI empowering and adding to a marketer’s success.

 


Snowflake, Nvidia team up to enable custom enterprise generative AI apps

Snowflake and Nvidia said they're integrating Snowflake Data Cloud and Nvidia NeMo, a platform for large language models (LLMs), so enterprises can build custom generative AI applications.

The news, outlined during the kickoff of Snowflake Summit 2023, enables Snowflake customers to combine their proprietary data with foundational LLMs within Snowflake Data Cloud. Snowflake said it will host and run NeMo in its Data Cloud and include NeMo Guardrails, which ensures applications line up with business-specific topics, safety and security.
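
For context on what a guardrails layer looks like in practice, here is a minimal sketch using Nvidia's open-source NeMo Guardrails toolkit. The topic rules, model choice and example prompt are illustrative assumptions; the Snowflake-hosted integration described above will come with its own configuration, and running this requires an LLM provider key.

```python
# A minimal sketch of wiring up NeMo Guardrails to keep a generative app on
# business-specific topics. Rules and model settings below are illustrative.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask off topic
  "what do you think about the election?"

define bot refuse off topic
  "I can only answer questions about your account data and our products."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# Messages follow the familiar chat format; the rails intercept off-topic asks.
print(rails.generate(messages=[{"role": "user", "content": "What do you think about the election?"}]))
```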

Also see: Snowflake launches Snowpark Container Services, linchpin to generative AI strategy

Vendors have been racing to enable enterprises to combine their data with LLMs in a secure way. Salesforce has a trust layer to keep customer data cordoned from LLMs and Oracle is planning a similar service. Enterprise technology buyers have been wary of the compliance and privacy issues with building generative AI applications. Meanwhile, Snowflake rivals MongoDB and Databricks are also targeting LLM data workloads. Databricks doubled down on LLMs with the $1.3 billion acquisition of MosaicML.

With Nvidia NeMo, Snowflake customers can use their accounts to create custom LLMs for chatbots, search and summarization while keeping proprietary data separate from LLMs.

Snowflake CEO Frank Slootman said the partnership with Nvidia will add high performance machine learning and AI to Snowflake's platform. Nvidia CEO Jensen Huang said the partnership with Snowflake will "create an AI factory" for generative AI enterprise applications.

For enterprises, the Snowflake and Nvidia alliance may make it easier to tune custom LLMs for specialized use cases. This approach was outlined recently by Goldman Sachs CIO Marco Argenti.

Snowflake Data Cloud offers industry specific versions across financial services, manufacturing, healthcare, retail and other verticals. With Nvidia, Snowflake's bet is generative AI applications will proliferate across industries.



Why your quantum computing vendors are going to look familiar

Your quantum computing vendors may look a lot like your cloud, data center and supercomputing providers today, as Microsoft, IBM and Intel all had quantum-related announcements in recent days.

The big question is whether smaller quantum vendors will be able to deliver the breakthroughs that can propel them to the big leagues. Constellation Research's Shortlists for quantum computing platforms, software and full stack providers include a mix of traditional vendors and startups.

Recent events include:

Now it's not like startups are being lapped. IonQ last week announced its IonQ Forte system was commercially available. IonQ has a partnership with Dell Technologies and is available on all three major cloud providers (AWS, Google Cloud, Azure). The company just raised its 2023 bookings forecast to $45 million to $55 million. In its first quarter, IonQ had revenue of $4.3 million.

But overall, it's telling that quantum computing seems to be driven by established enterprise technology players with the biggest R&D budgets. It's really hard to sneak up on big tech these days.

Constellation Research analyst Holger Mueller said the verdict is still out on whether the big enterprise tech players will all pivot to quantum. CIOs could also explore Quantinuum, which was formed by the combination of Honeywell Quantum and Cambridge Quantum and technically isn't an IT vendor. He said:

"Clear trend: It will be the first enterprise tech that will be practially only availalble in the cloud. With that every cloud vendor needs to play to remain relevant. But we are still in basic tech phase. Who will win? It is VHS vs Betamax." 

Short version: You're not quite ready to buy into quantum computing at scale just yet.

Kirk Bresniker, Hewlett Packard Labs Chief Architect and HPE Fellow, said in a tech talk at HPE Discover that quantum computing will require decades of hard engineering work to become mainstream. However, quantum computing will have a role in a hybrid supercomputing approach.

"HP Labs is here to partner and apply engineering expertise to make this process real," said Bresniker. He acknowledged that quantum computing is still early in its development--akin to vacuum tubes in old classical computers--but can accelerate. "We're looking to partner to give enterprises a better set of information so they can reason over this quantum future. You want to make reasoned investments in these technologies over time," he said.

His architecture slide is worth checking out from a vision perspective.

Bresniker said HPE is betting that supercomputing will evolve with an architecture that includes CPUs, GPUs, various accelerators and quantum computing to tackle problems.

For now, quantum computing is worth experiments and use cases in select industries. TCS recently noted how it is using IBM Quantum infrastructure for financial advisor scenarios. Financial services and life sciences are obvious areas for quantum computing.

For now, quantum computing is clearly in the press release stage, but there seems to be consensus around the following:

  • Quantum computing will likely be consumed through the cloud.
  • Select industries should explore use cases.
  • Key metrics on how to measure efficiency and performance are being debated.
  • Quantum computing will be part of what's emerging as a hybrid supercomputing approach.
  • Projections about how quantum computing will scale are often based on assumptions of breakthroughs. However, you can't predict breakthroughs.

In the meantime, enjoy the parade of quantum computing announcements flying by.


Databricks adds on to the Lakehouse, acquires MosaicML for $1.3 billion

Databricks said it will acquire MosaicML, which is a generative AI platform specializing in large language models (LLMs), for $1.3 billion.

The news lands as data platforms such as Snowflake, Databricks and MongoDB race to provide ways for enterprises to build their own generative AI models while keeping corporate data secure. The data platform game is focused on fast training of LLMs and models with strong data governance.


MosaicML is best known for its MPT LLMs. For instance, MosaicML has more than 3.3 million downloads of MPT-7B and MPT-30B LLMs.

Databricks will take MosaicML and integrate its models into Databricks Lakehouse. According to Databricks, MosaicML will enable customers to train LLMs in hours not days and for "thousands of dollars, not millions."

Constellation Research analyst Doug Henschen said:

“Databricks has spent the last few years building up the house side of its Lakehouse platform, but the company’s beginnings were as a data science platform. It can’t afford to lose its distinction and differentiation as a platform for data science, so the acquisition of MosaicML makes complete sense. What’s more, it’s a good fit in terms of company culture and location.”

The $1.3 billion price tag is inclusive of retention packages. Databricks said it expects the entire MosaicML team to join the company. Retaining MosaicML's team will be critical as Databricks integrates and scales the combined platform.

In a blog post MosaicML said it started talking to Databricks about partnerships, but it became clear the effort would scale better combined. MosaicML said:

“Generative AI is at an inflection point. Will the future rely mostly on large generic models owned by a few? Or will we witness a true Cambrian explosion of custom AI models that are built by many developers and companies from every corner of the world? MosaicML’s expertise in generative AI software infrastructure, model training, and model deployment, combined with Databricks’ customer reach and engineering capacity, will allow us to tip the scales in the favor of the many.”


IBM acquires Apptio for $4.6 billion, wants to optimize, automate your IT

IBM said it has acquired Apptio, which makes IT management and optimization software, for $4.6 billion. Big Blue said the move will bolster its IT automation offerings.

Vista Equity Partners bought Apptio in 2018 for $1.94 billion.

IBM said it will combine Apptio with its IT automation software, including Turbonomic, Instana and AIOps, as well as its Watsonx AI platform. Enterprises are increasingly looking to automate their IT operations and maximize financial returns (FinOps). IBM said that Apptio will bring anonymized IT spending data to provide insights.

Big Blue noted that it is at the early stages of integrating Apptio and developing roadmaps. 

Constellation Research CEO Ray Wang said:

"It’s sign of the times. Companies want to know how to manage their cloud budgets and Apptio is one of the tools with cost management tools and technology portfolio management or FinOPs. IBM is betting that customers will want to buy software to manage cloud costs and tech spending."

Apptio has more than 1,500 customers and has integrations with multiple IT vendors including Amazon Web Services, Microsoft Azure, Google Cloud Platform, ServiceNow, Salesforce, Oracle, SAP and others.

Also see: IBM launches Watsonx, an AI platform with open source models, governance

With the Apptio purchase, IBM will own three core SaaS offerings:

  • ApptioOne, which tracks hybrid cloud spend management and optimization.
  • Apptio Cloudability, which provides public cloud spend management and optimization visibility.
  • Apptio Targetprocess, which aligns IT projects with business outcomes.

IBM said the plan is to scale Apptio's products via Red Hat, IBM's portfolio of software and AI products and IBM Consulting. 

Here's a look at Apptio's platform. 


MongoDB launches Atlas Vector Search, Atlas Stream Processing to enable AI, LLM workloads

MongoDB added Atlas Vector Search and Atlas Stream Processing to its MongoDB Atlas platform along with other enhancements as it aims to be the top choice for data application developers.

The news, announced at its MongoDB.local NYC developer conference, highlights the race for enterprise developers looking to create modern applications that can readily incorporate generative AI capabilities at scale.

MongoDB's announcements come days after Databricks launched Lakehouse Apps to broaden its development platform ambitions. In addition, Snowflake will unveil updates at its Snowflake Summit next week. Snowflake CEO Frank Slootman last month promised "significant product announcements" at Snowflake Summit.

Dev Ittycheria, CEO of MongoDB, said during a keynote that developers spend most of their time working with data instead of creating software. Multiple clouds, endpoints and data stores have also made development more complicated. Streaming data technologies are also heterogeneous. "AI is about building smarter and more intelligent applications," said Ittycheria. "There has been an explosion of AI companies running and building apps on MongoDB. We believe there are 1,500 companies building AI workloads on MongoDB today."

MongoDB Atlas Vector Search will bring generative AI capabilities to the Atlas platform by enabling highly relevant information retrieval and personalization.

Doug Henschen, analyst at Constellation Research, put Atlas Vector Search in context:

"Vector Search isn't a generative AI capability on its own, it's an enabler for companies interested in developing their own generative AI capabilities. In announcing this feature, which is entering public preview, Mongo DB is joining a group of leading-edge data platform companies that have recently made, or are about to make, vector-search-related announcements."

In addition, MongoDB Atlas Stream Processing will surface high-velocity streams of complex data. Atlas Stream Processing, which is in private preview, will enable enterprises to leverage large language models (LLMs) and process streams of real-time data in one unified experience.

Henschen said Atlas Stream Processing is a key addition for MongoDB. He said:

"Atlas Stream Processing is the most important announcement at this week’s event, with Vector Search being the second most important announcement in my book. Low-latency workloads and requirements are only becoming more prevalent, so MongoDB really had to step up on this front if it is to live up to the company’s billing as a "developer data platform." Rival data platforms associated with analytics, such as Snowflake and Databricks, have already addressed real-time needs, so MongoDB is filling a competitive gap."

Vector Search and Stream Processing are likely to appeal to developers building AI-based applications. MongoDB said Beamable, Pureinsights, Anywhere Real Estate and Hootsuite are building next-gen applications with the new Atlas capabilities.

To round out the Atlas updates, MongoDB also added Atlas Search Nodes with dedicated resources for search workloads, efficiency improvements with MongoDB Time Series collections and new Atlas Data Federation for queries and isolating workloads on Microsoft Azure.

For MongoDB, the race to build enterprise-grade generative AI apps is an opportunity to grow its multi-cloud developer data platform. Although enterprises aren't scaling LLMs and generative AI applications yet, the interest is there.

To capitalize on the generative AI and LLM interest, MongoDB is looking to address the following with Atlas Vector Search (a minimal query sketch follows the list):

  • Provide the flexibility to store and process different types of data. LLMs require data in the form of vectors to represent data types such as text, images and audio.
  • Store vectors so LLMs can use them without needing a specialized database that lacks integration with existing technology stacks.
  • Enable developers to deploy new workloads such as semantic search, text and image search and personalized product recommendations in one platform.
  • Provide developers with the ability to augment pre-trained generative AI models with their own data.
  • Integrate open-source frameworks such as LangChain and LlamaIndex so developers can access LLMs from partners.
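
As referenced above, here is a minimal sketch of the kind of query this enables: embed the user's question, then run a vector similarity stage inside an ordinary MongoDB aggregation pipeline. The connection string, index name, field names and the "$vectorSearch" stage syntax are assumptions for illustration (the feature was entering public preview at the time), and the embedding function is stubbed rather than calling a real model.

```python
# Illustrative sketch of a semantic search query against Atlas Vector Search.
# Index name, field names, and stage syntax are assumptions, not MongoDB's
# official example; the embedding call is stubbed.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["support"]["articles"]

def embed(text: str) -> list[float]:
    """Stub: call whichever embedding model you use (OpenAI, Vertex AI, etc.)."""
    return [0.0] * 1536  # placeholder vector

query_vector = embed("How do I reset my password?")

results = collection.aggregate([
    {
        "$vectorSearch": {                # assumed stage name from the preview
            "index": "article_embeddings",
            "path": "embedding",           # field holding each document's vector
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"title": 1, "body": 1, "_id": 0}},
])

for doc in results:
    print(doc["title"])
```

The design point MongoDB is making is that the vectors live alongside the operational documents, so developers do not need a separate specialized vector database or a second technology stack.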

With Atlas Stream Processing, MongoDB is looking to do the following:

  • Provide developers with real-time streaming data from IoT devices, browsing and inventory feeds to create real-time experiences and optimize on the fly.
  • Leverage streaming data without specialized programming languages, APIs and drivers.
  • Give developers one interface to extract insights from streaming data across multiple data types, connectors and technologies.

While Atlas Vector Search and Atlas Stream Processing were the headliners, MongoDB had a series of other launches. Here's the breakdown.

  • MongoDB Atlas Search Nodes give developers dedicated resources so enterprises can scale search workloads independent of the database.
  • MongoDB Time Series collections provide options to modify data that has already been ingested. Time Series collections will also improve storage efficiency and query speeds.
  • The company is adding Microsoft Azure support to MongoDB Atlas Online Archive and Atlas Data Federation to go with Amazon Web Services. Support for Microsoft Azure Blob Storage means MongoDB customers can work with Azure and AWS datasets.
  • MongoDB launched MongoDB Relational Migrator, a tool that streamlines the process of migrating applications and legacy databases.
  • Google Cloud Vertex AI LLMs will integrate with MongoDB Atlas so developers can use Google Cloud foundational models across MongoDB Atlas Vector Search.
  • The company outlined MongoDB Atlas for Industries, which is a set of integrated tools for vertical use cases. The first industry targeted by MongoDB Atlas for Industries is financial services.
  • MongoDB also outlined additional programming language support for deploying MongoDB Atlas on AWS, the Kotlin Driver for MongoDB for server-side applications and more streamlined functionality for Kubernetes and Python.

Bottom line: MongoDB sent the message that developers can leverage the Atlas platform for generative AI capabilities. Henschen said:

"I think the 5% of companies that are innovators and the next 20% to 25% of companies that are fast followers will be the ones that are most interested in these features. It promises to make MongoDB stickier for developers at these companies as they now know they can turn to MongoDB, a tool they already know and love, for help in developing generative AI capabilities."
