How a writing-based culture can rewrite work

Adam Nathan, CEO of Almanac, said modern work is broken and needs to be fixed by reframing remote work, creating writing-based cultures and processes, and providing enough space for the magic of human collaboration.

Nathan's approach, outlined on DisrupTV Episode 328, is worth a listen starting at the 40-minute mark. Here are some of the takeaways.

Work itself is broken. "It was broken before Covid but since then it's very clear that where we work has changed but how we work hasn't. Teams are experiencing a ton of burnout, a ton of chaos at work. People are just not being able to get stuff done," said Nathan.

Creating a modern work method. Nathan said Almanac conducted 5,000 interviews with organizations that have mastered collaboration. The research informed Almanac's modern work method. "We largely found that regardless of what a team does, the purpose of the company or its location, the teams that are working the fastest and delivering the most value work with a lot more structure, more transparency and don't wait for meetings," said Nathan.

That tired remote work debate. "I think there has been a very loud push, especially in the New York Times and from what I call old white guys on Twitter, to return to the office. But if you actually look at the data, remote work as a percent of the workforce has actually continued to grow even after the end of Covid-era restrictions in September 2022. If you look at white collar professional jobs, before the pandemic about 22% of the workforce was working in a remote or hybrid fashion. Today that number is 66%. I don't love this debate between office versus remote. It's a tiring one if you think about remote work as internet work," said Nathan. He added:

"Internet work is a disruptive and inexorable trend. Just like our consumer lives have moved from shopping in person to e-commerce and hanging out in person to social media, the same thing is happening to work. Working on the internet is not going away anytime soon. I think the questions we're asking are almost all the wrong ones. Theory and data don't support this idea that life is going to return to how it was."


Advantages of internet work. "For workers, there's obviously flexibility and freedom. You can work when you want and where you want, and that gives people a lot more time to get into focus and flow and to balance their lives better between work, family, friends and hobbies. I think that's why CEOs and owners are pushing back so much," said Nathan. "There's another chapter in this tension between capital and labor and who controls the leverage. Labor has gotten broad new freedoms, and for CEOs, people who own real estate and maybe some elected officials, there's discomfort with how this new normal is going to work out. There's not the same sense of control anymore over people's time and location. For the last 50 to 70 years managers have been managing by presence, by butts in seats and meeting attendance. I don't think that was at all correlated with effectiveness, growth or value delivery."

Well managed teams do well remote or in person (the office hides dysfunction). "The other silly thing is that remote is not a place--it's the absence of one. What we've seen in the data is that remote and networked work really expose how teams are functioning. Well-managed teams tend to do better in remote settings because they already have good systems, structures and processes in place," said Nathan. "There's a high trust level. Teams that were dysfunctional don't have the theater of the office to cover it up, so all the dysfunction is exposed. These teams often face a choice: improve how they're working, or revert and ignore it by going back into the office. There are bosses that clearly don't know how to operate in a distributed environment and would prefer the control an office creates."

Culture of writing. Amazon is well known for requiring employees to draft a memo before any meeting. Bridgewater is another example of an organization with high performance and a culture and decision-making process based on writing. "There's this misconception that the only way to get stuff done in stressful environments is to get everyone together, create a lot of chaos and move really fast," explained Nathan. "In the Marines, slow means smooth and smooth means fast. A lot of organizations we've interviewed and observed are calm working environments. Everything feels really smooth, everyone's really calm and yet they're moving extremely fast in part due to a culture of writing."

How to get there? Nathan said high performing organizations start with a doc before a meeting. "Sometimes the doc obviates the need for a meeting. Even when there is a meeting, everyone has read it beforehand and commented. It makes the synchronous time they're spending more effective," he said.

Another move is to identify which recurring meetings aren't useful anymore. "What happens in organizations is that back-to-back meetings are just an accumulation of things that were once useful," said Nathan. Use documents to cancel those meetings, and store the documents so teams can find answers easily.

Generative AI's impact on writing cultures. "I think the main thing LLMs are doing right now is producing fuzzy first drafts. It's the average of everything out there to give you an answer. Now we have a better chat interface that's going to let us look over a larger amount of information much faster and produce a better outcome," said Nathan. "I think the productivity curve of what we can do with writing is going to move up and to the right."

"What happens in the future is there are going to be some people who are going to really be able to exploit this technology to their advantage and some people who fall behind. Teamwork and collaboration are still a deeply human exercise. The human brain is constantly rewiring based on interactions it has with other people." 

The magic of human collaboration. "What makes collaboration so magical is we don't know what will happen when we get together to work on a problem together. We might see AI almost like a collaborator in some ways but LLMs are just looking at past information, decisions, and knowledge," said Nathan. "I think the magic of human collaboration will always be there and what we do together might be more elevated because we have better technology to automate the overhead work."


Rivian: AI, data power customer experiences

Rivian is betting that data, AI and machine learning will continually improve its customer experience and detect and prevent future vehicle issues.

Speaking at the Databricks Data + AI Summit, Wassym Bensaid, SVP of Software Development at Rivian, walked through how the electric vehicle manufacturer created an architecture that enables it to ingest telemetry data from vehicles, boost battery life and roll out new features with over-the-air (OTA) updates.

Bensaid said that the data ecosystem for EV makers goes well beyond autonomy and other technologies that grab headlines. "With software defined vehicles and an amazing hardware platform you can have all-in-one vehicles," he said. "Everything at Rivian is data driven from our supply chain to manufacturing to the customer relationship."

Rivian said it produced 13,992 vehicles in the second quarter and said it is on track to produce 50,000 for the year.

Speaking at an investor conference June 15, Rivian CFO Claire McDonough said the EV maker is looking to own "the full end-to-end ownership experience for commercial customers and for consumers as well." On the commercial side, Rivian counts Amazon as its flagship customer with a purpose-built delivery vehicle connected to fleet management software called FleetOS.

Data from the Rivian vehicle is shared with Amazon's back-end software system to improve efficiency and make life easier for drivers with perks like cooled seats.

For consumers, McDonough outlined:

"What we started with was how to create a seamless transaction experience. If you go online and you buy a Rivian, you can purchase, right, insurance, financing, trade in your vehicle in about 6 minutes. It's that convenient and fast. Some of the early investments we’ve made have really ensured that we had this integrated experience for our customers with services like financing and insurance. Over time, we’re constantly updating our vehicles with over-the-air updates. And we’ve added incremental drive modes and feature sets to the vehicles. Over time, we will have the opportunity to create features that can be bundled or paid features for consumers. But right now, we’re really excited about offering continuous value accretion for Rivian owners, who have seen the range of their vehicles increase over the lifespan of their ownership and true enhancements to some of the new drive modes that we’ve offered as well."

The architecture

Bensaid said Rivian initially struggled with data silos and a sprawl of systems, data types and tools. Rivian also had a team of experts focused on its data strategy that ultimately became a bottleneck. As a result, Bensaid said Rivian moved to democratize access to data while ensuring security, privacy and governance.

Rivian used Databricks and its Lakehouse platform to build a new architecture on top of data lakes that could scale. Bensaid said Rivian also uses Databricks Unity Catalog to create one version of the truth.

The EV maker has automated more than 95% of its Databricks provisioning workflows. Rivian's stack includes a data and analytics layer that runs through Rivian technology, cloud, product development and operations, products and services.

"We're using data and AI to achieve and unlock business outcomes," said Bensaid.

Ultimately, Rivian plans to improve the customer experience and become more predictive about maintenance and performance. "Imagine a world where a vehicle will self-monitor its health and schedule its own appointments to deliver an amazing experience," said Bensaid.
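
To make that idea concrete, here is a toy sketch of a self-monitoring loop. The threshold, metric and function names are hypothetical illustrations of the concept, not Rivian's actual system:

```python
# Hypothetical sketch (not Rivian's actual system): a vehicle self-monitors
# a health score derived from telemetry and requests its own service
# appointment when recent readings drift below a threshold.
from statistics import mean

HEALTH_THRESHOLD = 0.85  # assumed minimum acceptable health score


def needs_service(health_readings, threshold=HEALTH_THRESHOLD):
    """Flag the vehicle for service if its average recent health score is too low."""
    return mean(health_readings) < threshold


def check_vehicle(vin, readings):
    # A real system would call a scheduling service; this just returns a message.
    if needs_service(readings):
        return f"service appointment requested for {vin}"
    return f"{vin}: healthy"
```

A production pipeline would learn thresholds from fleet-wide data rather than hard-coding them, but the shape of the loop (telemetry in, decision out) is the same.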

 


How JetBlue is leveraging AI, LLMs to be 'most data-driven airline in the world'

JetBlue is using artificial intelligence and machine learning across its business, and is actively applying generative AI to internal operations and, ultimately, revenue-producing products.

Speaking at the Databricks Data + AI Summit, Sai Ravuru, Senior Manager of Data Science and Analytics at the airline, walked through how the company is using Databricks Lakehouse on multiple fronts. Databricks launched a series of new additions to its platform and said it will acquire MosaicML.

"Over the last two years, we've made investments in data science and data refinement so raw data is continuously hydrated and reliable," said Ravuru. He said that AI and machine learning teams at JetBlue work alongside data scientists. "AI/ML scouts for the next use case before handing off to the data science team," explained Ravuru, who noted the goal for JetBlue is to be the most data-driven airline.

Ravuru said that data touches every part of JetBlue's business including operations, commercial and support functions. JetBlue is creating a unified digital twin of its business with cross-team collaboration and process-driven data science fueled by data from multiple systems.

Databricks Lakehouse ingests data and powers modeling across JetBlue's data footprint.

 

The airline has leveraged Databricks' platform to create an ecosystem of models called BlueSky to enable decision making. "The BlueSky product was built from scratch internally," said Ravuru. "It is a continually refreshed network with embedded LLM and real-time components for frontline staff."

BlueSky serves as JetBlue's AI-driven operating system.

Ravuru also said that JetBlue has created a unified LLM called BlueBot that uses open-source models complemented by corporate data integrated with BlueSky. BlueBot can be used by all teams at JetBlue since access to data is governed by role. For instance, the finance team may see data from SAP and regulatory filings, a new employee may just be served FAQs, and operations would see maintenance information, explained Ravuru.

"BlueBot brings crew members much closer to data and insights without change management," he said.
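
Role-governed access of the kind Ravuru describes can be sketched in a few lines. The role names and data sources below are illustrative (loosely echoing his examples), not JetBlue's actual implementation:

```python
# Hypothetical sketch (not JetBlue's implementation): restrict which document
# sources an internal chatbot may retrieve from, based on the user's role.
ROLE_SOURCES = {
    "finance": {"sap", "regulatory_filings"},
    "new_hire": {"faq"},
    "operations": {"maintenance", "faq"},
}

DOCS = [
    {"source": "sap", "text": "Q2 expense ledger"},
    {"source": "faq", "text": "How do I reset my badge?"},
    {"source": "maintenance", "text": "A321 hydraulic checklist"},
]


def retrieve(role, docs=DOCS):
    """Return only the documents the given role is allowed to see."""
    allowed = ROLE_SOURCES.get(role, {"faq"})  # unknown roles default to FAQs only
    return [d for d in docs if d["source"] in allowed]
```

The filtered result would then be passed as context to the LLM, so the model can only ever summarize documents the user was entitled to read in the first place.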

JetBlue is using Databricks for generative AI use cases that are experimental as well as production.

What's next? JetBlue is looking at LLMs to create new revenue channels so customers "can book from BlueBot or plan trips better." In addition, JetBlue is looking at efficiency gains by using LLMs to provide the "technical operations team with WebMD style diagnoses for each and every aircraft."


Salesforce rolls out Sales GPT, Service GPT

Salesforce launched the latest installment of generative AI across its clouds: Sales GPT and Service GPT.

The news follows Salesforce's rollout of AI Cloud this month as well as the Einstein GPT Trust Layer, which enables customers to keep data secure while leveraging large language models. 


Sales GPT includes the following:

  • Auto-generated sales emails personalized for customers based on data.
  • Call summaries and transcription for sales calls and follow-up actions.
  • A sales assistant to summarize each step of the sales cycle including account research, meeting prep and contract drafts.

Service GPT and Field Service GPT will include the following generative AI tools:

  • Service replies that are auto-generated with real-time data and personalization.
  • Work summaries of service cases and customer engagements.
  • Knowledge articles that are auto-generated and updated based on real-time data.
  • Mobile work briefings for field service team appointments and summarization of issues.

Sales GPT and Service GPT are expected to be generally available this year.


Synthetic biology: What you need to know

Synthetic biology promises to be disruptive across multiple industries because it can be used to develop new biological parts and devices while reengineering existing systems.

DisrupTV caught up with three experts in the field on a recent episode. The roundtable captured some of the promise and peril with synthetic biology, a multidisciplinary field of science focused on reengineering organisms.

Here's a look at the key themes from the DisrupTV discussion and what you need to know.

What is synthetic biology? "Synthetic biology is the ability to design biological systems," said Dr. Megan Palmer, Senior Director for Public Impact at Ginkgo Bioworks and Adjunct Professor of Bioengineering at Stanford University. "For the last 50 years or so, we've been able to cut and paste DNA because we know the underlying code biology runs on. Humans have been modifying biology through selective breeding of plants and animals for much longer than that."

"Now scientists and engineers are developing even better tools to be able to read, write, edit and evolve biological systems in ways that are easier, faster, more precise and predictable," said Palmer.

The upshot is that scientists and engineers are unlocking this ability to partner with biology in new ways.

The promise of synthetic biology. Palmer said she considers biology already the most powerful technology on the planet and the ability to program it means "we can use biology to manufacture nearly everything that is currently made with petrochemicals in ways that are more sustainable." Palmer said synthetic biology can impact multiple sectors in the economy such as health, food and manufacturing. There's even potential for data storage using DNA.


Use cases for synthetic biology. Panelists noted a bevy of use cases ranging from space travel to food production to biomanufacturing and sustainability. Dr. Divya Chander, Anesthesiologist, Neuroscientist, and Data Scientist, said synthetic biology could play a role in space travel and "developing astronaut resilience." Chander said:

"We as humans aren't really very good at traveling in space because of microgravity and the radiation environment. There is a possibility of using gene editing tools, which are part of the synthetic biology toolkit, to turn genes on and off to give us more tolerance. Synthetic biology could also enable food production systems in space to improve nutrition and use fewer inputs like water or pesticides. We can even engineer our plants in space to do things like scrub toxins from the environment. CO2 is a big thing for astronauts, but we could do similar things on Earth as well."

Chander said this gene editing could also be good for patients who are undergoing chemotherapy for cancer. With biomanufacturing, synthetic biology could print drugs that are more targeted.

Dr. David Bray, Distinguished Fellow at the Stimson Center and Business Executives for National Security, said sustainability will be a big use case for climate change. "I'm a big believer that the only way we're going to deal with climate change is with synthetic biology," said Bray, who noted that the technology could address the following:

  • Clean water.
  • Sustainable supply chains.
  • Preventing pandemics before they start developing.

"We can organize ourselves in new ways so we can biologize industry instead of just industrializing biology," said Bray.

Chander said other use cases for synthetic biology include:

  • Addressing chronic diseases such as cancer and extending longevity.
  • Editing biologic machinery to address things like antibiotic resistance.
  • Biomanufacturing targeted pharmaceuticals.

Growing the ecosystem and next generation. Palmer said iGEM, a non-profit focused on advancing synthetic biology through education and competition, has gone a long way toward developing community in the industry. Palmer said iGEM competitions have encouraged students to design modular biological machines to solve problems. "Thousands of students across dozens of countries every year are developing biological innovations that are cool technologies, but also bake social responsibility, safety and security into their designs," said Palmer.

Risks with synthetic biology. Bray said it's promising that biology technologies are being democratized, but there will need to be some guardrails. "You know, unleashed, craziness is going to happen, but we have multiple revolutions happening in parallel," said Bray, who added that the combination of AI and synthetic biology could be powerful for good uses and bad. "We are going to need the equivalent of smoke detectors for the biological space," he said. "Technologies like synthetic biology will be a tremendous force for good, but we also need to be ready for when some people try to use it for not so good purposes."

According to the US Government Accountability Office, synthetic biology presents safety and security concerns such as biological and chemical weapons, product tampering, environmental effects and public acceptance.

Ethics will be critical, said Palmer. Synthetic biology will require transparency and permission to use an individual's data. Society as a whole will have to think through ethics and biological data best practices, she said. Data trust will also be a key concept since individuals will be data producers. If you are going to read or write from someone's brain, you'll need full consent. The GAO noted that regulatory frameworks will be needed to address future applications of synthetic biology.

Multiple disciplines needed. The panelists said synthetic biology sits at the intersection of data science, biology, ethics and technology, and will need people with skills across all of them. Depending on the use case, industry expertise will also be needed.


Databricks Data + AI Summit: LakehouseIQ, Lakehouse AI and everything announced

Databricks infused its Lakehouse Platform with generative AI capabilities, adding tools that let customers leverage large language models (LLMs), federate and govern data, and use knowledge engines that learn corporate cultures.

The company outlined the news at its Data + AI Summit. Databricks' news follows announcements from Snowflake and MongoDB designed to land more workloads and enable customers to leverage generative AI. Leading up to Databricks' conference, the company announced the acquisition of MosaicML and launch of Lakehouse Apps. The big takeaway is that data platforms and generative AI capabilities are converging.

Constellation Research analyst Doug Henschen summed up the data platform game: "All three (Snowflake, Databricks and MongoDB) want customers to do as much as possible on their platforms so they are invading each other’s turf. But their original (and still predominant) dance partners are data warehouse for Snowflake, data science for Databricks and developers for MongoDB."

Here's a roundup of Databricks' enhancements to its platform.

  • The company said it will make it easier to deploy and manage LLMs with Lakehouse AI additions that offer monitoring and governance for LLM development. Databricks is adding Vector Search, a collection of open-source models, LLM-optimized Model Serving, MLflow 2.5 with LLM tools and Lakehouse Monitoring.
  • Lakehouse AI will unify the AI lifecycle from data collection and preparation to model development. Databricks Vector Search will manage and automatically create vectors in Unity Catalog. Databricks AutoML will feature a low-code approach to fine tuning LLMs. And Databricks will curate a list of open-source models in its marketplace.
  • Databricks outlined MLflow 2.5, a new release of the Linux Foundation open-source project MLflow. Updates include MLflow AI Gateway, which allows developers to swap out backend models and switch between LLM providers, and MLflow Prompt Tools, a no-code set of visual tools.
  • Databricks Lakehouse Monitoring will monitor and manage data and AI assets within Lakehouse.
  • LakehouseIQ adds a natural language interface to the Lakehouse Platform. LakehouseIQ uses generative AI to understand company specific jargon, data usage and organizational structure to answer questions within the context of a business. The goal is to democratize data analytics across a corporation. According to Databricks, LakehouseIQ will learn from signals embedded in corporate data including schemas, documents, queries, popularity, lineage, notebooks and dashboards.
  • Lakehouse Federation in Unity Catalog will include query federation across data assets and platforms outside of Databricks. Databricks is also offering governance outside of its platform via Unity Catalog.
  • Delta Lake 3.0, the latest release of the Linux Foundation's Delta Lake project, will add Universal Format (UniForm), which will allow data stored in Delta to be read as if it were Apache Iceberg or Apache Hudi.
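
To illustrate the provider-swapping idea behind MLflow AI Gateway, a route configuration along the following lines maps a stable route name to an interchangeable backend model. This is a sketch only: the exact schema and key names vary by MLflow version, so treat the specifics as assumptions rather than a verbatim config.

```yaml
# Sketch of an MLflow AI Gateway config: the route name stays stable while
# the backing provider/model can be swapped without changing client code.
routes:
  - name: chat
    route_type: llm/v1/chat
    model:
      provider: openai          # swap to another supported provider here
      name: gpt-3.5-turbo
      config:
        openai_api_key: $OPENAI_API_KEY
```

Because applications call the route by name, switching LLM providers becomes a one-line config change rather than a code change.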

Doug Henschen's take:

  • Databricks is, first and foremost, a platform for data scientists and it’s used by many of its 10,000+ customers as a platform for a significant chunk, if not a majority, of their data. Databricks is doing everything it can do to enable those customers to innovate with their data using AI, ML, and analytics, and it’s doing a great job of it.
  • Databricks has spent the last three years building up the warehouse side of its Lakehouse platform, but this year the generative AI tsunami has rightly refocused Databricks on what has always been its greatest strength: data science, including ML and AI. It’s very clear to me that Databricks customers are building AI models with Databricks today, and there’s a deep well of capabilities behind them, some generally available and others in public preview and very close to becoming GA.
  • What was very clear during today’s Databricks keynote is how far along Databricks customers such as JPMorgan Chase, JetBlue Airways and Rivian are in building innovative ML, AI and even generative AI capabilities using Databricks. A bunch of new enablers were announced today, several of which are already in public preview and are expected to go GA this year.

 


Driving Equality Through Accessibility: Building an Inclusive Digital Future

How do we build an inclusive #digital future? Watch Impact TV Episode 2 to unpack the barriers and opportunities for #accessibility equality with the following subject experts:

0:00: Impact TV introduction with co-hosts R "Ray" Wang, founder of Constellation Research, and Teresa Barreira, CMO of Publicis Sapient.

04:15: Alison Walden, VP and Accessibility Lead at Publicis Sapient, partners with clients to create #inclusive digital experiences. She highlights how many teams don't understand the variety of ways people access experiences online, and don't provide adequate access. Accessibility must be driven top-down at companies through #awareness, mandatory #training, measurement criteria, and hiring specialists. "Do it, do it right, do it right now."

15:40: Frances West, founder of FrancesWest&Co and former Chief Accessibility Officer at IBM, explains the increased attention around accessibility: 1) the rise in inclusive social movements, including people with disabilities, 2) the increased use of #technology from the pandemic, 3) the growing demographic of people aged 50+ who have difficulty with digital experiences, and 4) the increase in global legislation around accessibility #rights. Technology should be designed to include "edge users" with an intuitive simplicity. We should lead with accessibility, not add it as an afterthought.

31:27: David Bray, PhD, Distinguished Fellow at the Stimson Center, describes how we've advanced in user-friendly websites, but many #apps don't have built-in accessibility. Statistics say 1 in 4 people will have an accessibility challenge, so it is relevant for everyone. Any forward-leaning business has to prioritize accessibility to remain relevant in the #AI era. How are your #developers using code to bake in accessibility from the start? The best way for orgs to stay ahead is to remain a learning environment about user experience. When doing usability testing, make sure to include a diverse set of people. We have a human obligation to think about accessibility: involve customers, stakeholders, and citizens.

Accessibility is a human right.

Stay tuned for another episode of Impact TV in the coming weeks!

Watch the episode on YouTube: https://www.youtube.com/embed/KSmSv7uIaxY

What Movies Get Wrong…and Salesforce Gets Right…About AI

The voice was calm yet determined. Frank was dead. Dave remained.

“Open the pod bay doors, HAL.”

“I’m sorry Dave. I’m afraid I can’t do that.”

The artificial intelligence aboard the Discovery One spacecraft, envisioned in Arthur C. Clarke’s short story The Sentinel and Stanley Kubrick’s movie 2001: A Space Odyssey, had been listening in and wasn’t having what Dave had in mind. The heuristically programmed algorithmic computer was designed to solve problems quickly…and people were the problem.

HAL 9000 is a delicious villain. In fact, HAL was named the 13th greatest movie villain of all time by the American Film Institute. The cool indifference of HAL is haunting. But sadly HAL, and other nihilistic machines like him, have become the baseline of awareness about AI for FAR too many people.

While conversations start with the innovation and change AI can usher in, they inevitably turn to the danger of the machines taking over. From discussions around ethical AI to the capacity for sentience, there is a sense that AI, left unchecked or allowed to read lips, will try to take over and be the downfall of humanity. There is never an in-between.

But what does AI mean for the average, everyday Marketing team? In the early days of OpenAI's ChatGPT, headline after headline claimed generative AI would eventually "replace marketers" because of its ability to generate ad campaign copy, slogans and email subject lines in seconds. A variation on the HAL theme to be sure, but still, the script has the sentient super-villain machine with a touch of blood lust rising to rid the world of agency copywriters and marketing managers.

Before ChatGPT shows us the pod bay doors, let's take a step back and consider whether we got our movie references all wrong. What if AI in marketing is less 2001: A Space Odyssey and more The Devil Wears Prada?

As the tale goes, the devil boss, Miranda, has a new assistant, the protagonist of the book and movie, Andy. There is a moment during a glamorous charity gala when a swanky donor approaches to greet the hostess. Andy leans in and whispers the name of the guest, along with a couple of key factoids, just in time for Miranda to throw her arms up with all the warmth and recognition of an old friend.

Andy, not HAL, is the AI Marketing needs. And Andy…or rather Salesforce’s version of her…is called Marketing GPT and was purpose-built to lean in and whisper exactly what a marketer needs to engage and interact in the most personal and profitable way. Trained to not just understand customers, conversations, or engagement, but trained to also understand a specific business, Marketing GPT draws intelligence from Data Cloud and relies on a new trust layer to ensure that this isn’t just a story of the right message to the right customer at the right time…but the right model to deliver the right personalization and contextualization to the right marketer.

Digging into the Marketing GPT announcements, let's focus on a couple of highlights that stood out (at least to me):

  • Segment Creation: imagine just asking your marketing tools to create a new audience segment. Marketers understand that the question is rarely the problem…instead it is all the preparation that is required to even get to the point of asking. With Segment Creation, both sides of that audience opportunity equation are addressed with AI, bringing the data together and giving marketers the opportunity to interrogate that data differently, all using natural language.
  • Segment Intelligence for Data Cloud: This is where marketing's work proves impact and real, tangible business value, by connecting the first-party data marketers rely upon for deeper engagement with revenue data and third-party paid media data. This isn't just about "more metrics." Instead, Segment Intelligence is about obtaining a truly comprehensive view into audience engagement. Knowing how someone engaged with initiatives is great…knowing how that connected to the business and revenue is even better.

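To make the natural-language idea concrete, here is a toy sketch of how a plain-English request might resolve into a structured audience filter. This is emphatically not Salesforce's Marketing GPT API; every name here is hypothetical, and a simple keyword lookup stands in for the LLM step that would interpret the request against a customer's data model.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """Hypothetical structured audience segment (illustrative only)."""
    name: str
    filters: dict = field(default_factory=dict)

def create_segment(request: str) -> Segment:
    """Map a plain-English request to structured filters.

    A real system would hand `request` to an LLM grounded in the
    customer's own data model; this keyword lookup stands in for it.
    """
    filters = {}
    if "high-value" in request:
        filters["lifetime_value"] = {"gt": 1000}
    if "lapsed" in request:
        filters["days_since_last_order"] = {"gt": 90}
    return Segment(name=request, filters=filters)

seg = create_segment("high-value lapsed customers")
# The marketer asks one question; the system handles the data prep,
# producing filters on lifetime value and recency behind the scenes.
```

The point of the sketch is the division of labor the bullet describes: the marketer supplies intent in natural language, and the tooling does the data preparation that usually stands between the question and the answer.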
There are other AI-super-powered capabilities in this initial introduction of Marketing GPT, from generative AI tools for email content creation (with auto-generated copy recommendations that can be included in testing and engagement campaigns) to integrations with the creative upstart Typeface to create contextual visual assets aligned with approved brand voice, style guides and messaging.

Another announcement of note comes from the Salesforce Commerce GPT introduction. While the solution is packed with AI-powered assistive tools, including Commerce Concierge for personalized, engaging shopping experiences and Dynamic Product Descriptions that automatically fill in missing catalog data for merchants, it is the inclusion of Goals-Based Commerce that had me leaning in to learn more. This is not just about delivering the capacity for growth. It is about productive and efficient growth. With the Goals-Based Commerce tool, brands can set targets and goals based on what is top of mind for the business (and let's be honest, those details can change minute to minute while still all pointing towards profitability) and get AI-powered recommendations, and even automations, to help reach those goals. It connects Data Cloud, Einstein AI and Salesforce Flow to quickly move from goals to outcomes.

While AI is important to business, trusting AI is critical to us all. This was the message told time and again at both Connections and Salesforce AI Day. So HOW does Salesforce make Marketing GPT the built-for-enterprise, safe-AI solution? It truly starts and stops with data.

Salesforce AI Cloud is billed as a cloud-based, end-to-end AI solution that supports multiple models, prompts and training data sets. A purpose-built suite of capabilities, AI Cloud works to deliver trusted, real-time generative experiences across all applications and workflows, with a focus on super-charging CRM. Einstein sits at the heart of AI Cloud and, according to Salesforce, now powers over 1 trillion predictions per week across Salesforce applications. Thanks to AI Cloud, organizations can tap into multiple trusted large language models in an open and extensible environment, optimizing the right model for the right task, be it a third-party LLM, Salesforce's proprietary LLM (developed by Salesforce AI Research) or a customer's own custom LLM. See: Salesforce launches AI Cloud, aims to be abstraction layer between corporate data, generative AI models

Initial LLM partners include AWS, Anthropic and Cohere. Salesforce had previously announced an extensive partnership with OpenAI and the APIs to access the GPT-4 model. Salesforce has also announced a partnership and integration with Google's Vertex AI, adding yet another bring-your-own-model capability into the mix (Salesforce had previously announced the ability to bring models from Amazon SageMaker), directly through the newly announced Einstein GPT Trust Layer.

Why is this "trust layer" so important? This is what brings us back to the trust factor. By bringing in these models, be they internal (via Google Vertex), from Salesforce or from a third party like OpenAI, a customer's data remains within the boundaries the customer has established as trusted. The Trust Layer is intended to house the identity and governance controls so that company data is not handed over to a model, as many organizations fear. Instead, once a query is run on a customer's system, data (including data that has been aggregated and harmonized in Salesforce Data Cloud) is retrieved, masked and fed to the model via a secure gateway to generate the response. The prompt is not retained by the model, and in seconds responses are delivered back, routed through what Salesforce calls "toxicity detection" and finally audited and logged for visibility.
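The mask-then-route step is the heart of that flow, and it is easy to illustrate in miniature. Salesforce has not published the Trust Layer's internals, so the sketch below is purely illustrative: sensitive values (here, just email addresses) are swapped for placeholder tokens before a prompt leaves the customer's boundary, and the mapping is kept locally so the response can be restored after the model replies.

```python
import re

# Illustrative pattern for one kind of sensitive value; a real
# trust layer would cover many more field types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(prompt: str) -> tuple[str, dict]:
    """Replace emails with placeholder tokens.

    Returns the masked prompt plus a local token-to-value mapping
    that never leaves the customer's boundary.
    """
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(repl, prompt), mapping

def unmask(text: str, mapping: dict) -> str:
    """Re-insert the original values into the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask("Draft a reply to jane@example.com about her renewal.")
# masked == "Draft a reply to <PII_0> about her renewal."
# Only the masked prompt would go to the model; unmask() runs locally.
```

The design point, under those assumptions, is that the model only ever sees placeholders, so a provider retaining or training on prompts still never holds the raw customer data.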

The promise here is that enterprises can secure, govern and orchestrate AI in a more constructive and intentional way. This is not a new concept. Trust and "enterprise-ready" offerings, tools and promises are cropping up everywhere, from Adobe (with the guardrails around its suite of generative AI models in Adobe Firefly) to Microsoft's Azure OpenAI Service (which addresses safety and moderation of text and image generation using OpenAI models) and Nvidia's open-source toolkit, NeMo Guardrails, which takes aim at toxic content.

But Salesforce arguably feels a responsibility to push innovation forward and to take the lead on having the tough ethics and security conversations in AI. For Salesforce, the Einstein GPT Trust Layer is a critical, if not mandatory move.

Marketing GPT tools enter pilot this summer (as early as June), and many are expected to reach general availability by October 2023: Segment Creation and Segment Intelligence for Data Cloud, for example, are both slated for GA by October, while Dynamic Product Descriptions is expected to be GA in July 2023 and Goals-Based Commerce by February 2024. This is a welcome departure for Salesforce, which has earned a reputation for longer aspiration-to-availability timelines.

Yes…there were a TON of GPT-labeled announcements made at Salesforce Connections (prompting some of us in attendance to just add GPT to the end of every proper name available). But there was also a lot of excitement around the prospect of having that well-trained personal business assistant whispering all the just-right details into our ears at the very moment we need them to make an amazing impression on our customers and prospects. It is a welcome shift in narrative: from machines ready to replace marketers to a safe, purpose-built, enterprise-ready and trained AI empowering and adding to a marketer's success.

 


Snowflake, Nvidia team up to enable custom enterprise generative AI apps


Snowflake and Nvidia said they're integrating Snowflake Data Cloud and Nvidia NeMo, a platform for large language models (LLMs), so enterprises can build custom generative AI applications.

The news, outlined during the kickoff of Snowflake Summit 2023, enables Snowflake customers to combine their proprietary data with foundational LLMs within Snowflake Data Cloud. Snowflake said it will host and run NeMo in its Data Cloud and include NeMo Guardrails, which helps keep applications aligned with business-specific topics, safety and security.

Also see: Snowflake launches Snowpark Container Services, linchpin to generative AI strategy

Vendors have been racing to enable enterprises to combine their data with LLMs in a secure way. Salesforce has a trust layer to keep customer data cordoned from LLMs and Oracle is planning a similar service. Enterprise technology buyers have been wary of the compliance and privacy issues with building generative AI applications. Meanwhile, Snowflake rivals MongoDB and Databricks are also targeting LLM data workloads. Databricks doubled down on LLMs with the $1.3 billion acquisition of MosaicML.

With Nvidia NeMo, Snowflake customers can use their accounts to create custom LLMs for chatbots, search and summarization while keeping proprietary data separate from LLMs.

Snowflake CEO Frank Slootman said the partnership with Nvidia will add high performance machine learning and AI to Snowflake's platform. Nvidia CEO Jensen Huang said the partnership with Snowflake will "create an AI factory" for generative AI enterprise applications.

For enterprises, the Snowflake and Nvidia alliance may make it easier to tune custom LLMs for specialized use cases. This approach was outlined recently by Goldman Sachs CIO Marco Argenti.

Snowflake Data Cloud offers industry specific versions across financial services, manufacturing, healthcare, retail and other verticals. With Nvidia, Snowflake's bet is generative AI applications will proliferate across industries.



Why your quantum computing vendors are going to look familiar


Your quantum computing vendors may look a lot like your cloud, data center and supercomputing providers today, as Microsoft, IBM and Intel all had quantum-related announcements in recent days.

The big question is whether smaller quantum vendors will be able to deliver the breakthroughs that can propel them to the big leagues. Constellation Research's Shortlists for quantum computing platforms, software and full stack providers include a mix of traditional vendors and startups.


Now it's not like startups are being lapped. IonQ last week announced its IonQ Forte system was commercially available. IonQ has a partnership with Dell Technologies and is available on all three major cloud providers (AWS, Google Cloud, Azure). The company just raised its 2023 bookings forecast to $45 million to $55 million. In its first quarter, IonQ had revenue of $4.3 million.

But overall, it's telling that quantum computing seems to be driven by established enterprise technology players with the biggest R&D budgets. It's really hard to sneak up on big tech these days.

Constellation Research analyst Holger Mueller said the verdict is still out on whether the big enterprise tech players will all pivot to quantum. CIOs could also explore Quantinuum, formed by the combination of Honeywell Quantum Solutions and Cambridge Quantum, which technically isn't an IT vendor. He said:

"Clear trend: It will be the first enterprise tech that will be practically only available in the cloud. With that, every cloud vendor needs to play to remain relevant. But we are still in the basic tech phase. Who will win? It is VHS vs Betamax."

Short version: You're not quite ready to buy into quantum computing at scale just yet.

Kirk Bresniker, Hewlett Packard Labs Chief Architect and HPE Fellow, said in a tech talk at HPE Discover that quantum computing will require decades of hard engineering work to become mainstream. However, quantum computing will have a role in a hybrid supercomputing approach.

"HP Labs is here to partner and apply engineering expertise to make this process real," said Bresniker. He acknowledged that quantum computing is still early in its development--akin to vacuum tubes in old classical computers--but can accelerate. "We're looking to partner to give enterprises a better set of information so they can reason over this quantum future. You want to make reasoned investments in these technologies over time," he said.

His architecture slide is worth checking out from a vision perspective.

Bresniker said HPE is betting that supercomputing will evolve with an architecture that includes CPUs, GPUs, various accelerators and quantum computing to tackle problems.

For now, quantum computing is worth experiments and use cases in select industries. TCS recently noted how it is using IBM Quantum infrastructure for financial advisor scenarios. Financial services and life sciences are obvious areas for quantum computing.

For now, quantum computing is clearly in the press release stage, but there seems to be consensus around the following:

  • Quantum computing will likely be consumed through the cloud.
  • Select industries should explore use cases.
  • Key metrics on how to measure efficiency and performance are being debated.
  • Quantum computing will be part of what's emerging as a hybrid supercomputing approach.
  • Projections about how quantum computing will scale are often based on assumptions of breakthroughs. However, you can't predict breakthroughs.

In the meantime, enjoy the parade of quantum computing announcements flying by.
