Results

Truly Confusing? Truly Different. Truly Zoho.

It is difficult to describe Zoho. You can use the terminology you might use to describe any other organization and feel like you are failing. You can talk about culture, corporate social responsibility, innovation or sustainability until you realize how big the gap is between “what Zoho means” and “what everyone else means.” You can try, but in the end, you are left with a sense that you failed to accurately and fairly describe Zoho. At least that’s what happens to me.

Analysts Take on India: Truly Zoho 2023

In early 2023 I joined a rogue gaggle of industry analysts to trek to Zoho’s campus just outside of Chennai for an event dubbed Truly Zoho. Panel after panel of Zoho leaders shared an insider’s view, and we analysts tried to accurately and fairly describe what we were hearing, seeing and experiencing. I’ve read article after article beautifully sharing the experience…but for some reason I was struggling. It wasn’t because there wasn’t plenty to share. I was struggling to document things in a way that was fair, accurate and, well, truly about Zoho.

Here is where I landed: Talking about Zoho is easy. Understanding Zoho is an entirely different experience and endeavor.

As a company, Zoho is outright defiant in its individuality. What do you do when, ethically, you do not believe in tracking users or consumers with cookies? Build your own infrastructure and cloud to guarantee that privacy is a baseline expectation and core to the business value of every product and offering. When a lack of opportunity and R&D investment is holding a country back, what next? You invest in rural revival to bring globally in-demand skills and innovation to India, despite the world’s assumption that innovation only happens in places like Silicon Valley.

For some, this brazen, maverick nature is frustratingly confusing. "How can you scale this?" "How will you keep up this pace of growth?" "You can't possibly mean you're building that from scratch?" "You can't do that."

These are all statements those of us who follow Zoho are used to hearing. I’ve heard people say, with earnest concern, that Zoho might not know what they are doing. They can’t possibly understand where their decisions will lead. There is an earnest worry that a group of good people will learn a hard lesson.

None of this is an accident. It is, however, the outcome of hundreds if not thousands of experiments. Zoho is happy to be home to teams of dreamers willing to experiment. Unlike other organizations, where experiments are isolated or contained to reduce risk, Zoho removes any assumption that a failed experiment is a total failure. Failures are valued lessons, not grounds for termination. If an idea bubbles up and aligns with a customer’s need or request, teams are empowered to try…empowered to experiment.

One early and lasting experiment: finding a new way to identify, educate and train the next generation of experimenters. For 17 years, Zoho Schools of Learning (informally called Zoho University by some) has seen over 1,400 graduates advance across technology, design and business. Built as an alternative to traditional college or university programs that often exclude students from far-flung rural villages across India, Zoho Schools focuses on the often-overlooked student who may not have the means to attend university but has the curiosity and will to learn and experiment.

This is most noticeable in Zoho Schools’ boot-camp-style career re-entry program for women looking to return to work after a career break. During the Truly Zoho sessions, we had the opportunity to hear from women who had left the technology workforce. Most of these women told an all-too-familiar tale of leaving work to start or raise a family. The Marupadi program provides intensive, immersive retraining that empowers these women to make a comeback, brushing up on the latest technologies and skills during a full-time three-month program. After a supervised internship in which graduates are placed with mentors who help guide them back into a role, Marupadi graduates are invited to interview for full-time roles with Zoho.

While meeting the leaders of Zoho was an insightful glimpse into how and why Zoho exists today, it was the chance to meet with the students at Zoho Schools and especially the students and teachers at Kalaivani Kalvi Maiyam, the rural school teaching children as young as 2, that gave me the chance to see what Zoho will be in the future.

Zoho has not just existed but thrived by rejecting a berth in the global game of business dominance. It isn't that they don't want to play a game on the global stage...they just want us to come and play THEIR game. They want the rest of us, the rest of modern business, to stand up and fight for the future of innovation and experimentation. It is a bold and brazen dare: start a school, invest in tomorrow’s research and development, make the choice to sacrifice profit in order to power progress.

Sacrifice profit??? Zoho’s leaders decided to sacrifice growth to make a bold promise: nobody would be laid off as the world grappled with the threat of a global financial recession. For months we have seen headline after headline announcing layoffs. In order to appease Wall Street, investors, backers or shareholders, companies have made tough decisions to lay people off, cut back on research investments and implement austerity measures to keep ledgers in the black and ensure growth percentages did not fall. Zoho decided that the growth velocity it had consistently enjoyed over several years could slow so that people could be prioritized.

Even everyday management decisions defy traditional business thinking. Decision making is pushed down to the teams and individuals closest to where those decisions turn into actions, especially when those decisions directly impact a customer’s experience with Zoho.

For those heading to an upcoming Zoholics event (I myself will be heading to the Austin whistlestop) these are the things I urge you to keep in mind:

  • Ask why. It is OK if you think (possibly more than once) that what you see or what you think about Zoho doesn’t make sense. Instead of trying to fit Zoho or their technologies into a pre-existing mold, take a chance and jump into a conversation around WHY a new technology makes sense.
  • Ask the strange questions. Ask the questions other vendors might think are irrelevant including where a Zoho employee is from or what path brought them to Zoho. The answers are as relevant to WHY a tool or solution exists as the market or technology itself.
  • Ask what. In a world where words (especially buzzwords) are freely batted around, it can be easy to let things gloss over. Instead of assuming a phrase is being used for the buzz, ask what Zoho means. I especially encourage you to ask this anytime someone mentions privacy…trust me…they have a very intentional and foundational point of view on this.

Perhaps the most important advice is this: suspend your disbelief. Just like my time in India, it will be totally worth it to learn who Zoho truly is.

 


FinancialForce Unleashes Spring '23 Release, Strengthening Opportunity-to-Renewal


Finding new ways to improve opportunity-to-renewal is core to any services business's growth.

FinancialForce has long bet its business on the belief that it could streamline opportunity-to-renewal for people- and software-centered businesses better than any other vendor. In delivering their Spring '23 release, they're proving how adept they are at delivering new features on a faster release cadence of three major releases a year. Out of its workforce of 1,000 people, FinancialForce has 400 full time employees in DevOps, engineering, product management, and quality, and nearly 100 outside resources in R&D.

FinancialForce's overarching goal with the Spring '23 release is to strengthen the customer's ability to excel at opportunity-to-renewal. The feature refresh for Spring '23 includes 18 different areas of their platform, with the most, eight, being in Services CPQ. Dan Brown, Chief Product and Strategy Officer at FinancialForce, says, "Opportunity-to-renewal is core to companies that deliver services. It's an area that has been dramatically underserved by classic vendors in this space. Most are fairly product-centric, and that tends to hold companies that are service-oriented back."

Services-as-a-Business is gaining traction

FinancialForce's Spring '23 release shows how Services-as-a-Business is closing gaps and improving the opportunity-to-renewal process. Tight labor markets, spiraling costs and prices due to inflation, and blind spots in opportunity-to-renewal cycles continually jeopardize services revenue. As a result, professional services and software companies relying on service revenue risk losing Annual Recurring Revenue (ARR) and seeing reduced Customer Lifetime Value for every account. The Spring '23 release provides a more granular, 360-degree view across eight core areas of the opportunity-to-renewal process to help services businesses meet new growth challenges.

"Our new Spring '23 release is designed to give organizations the kind of certainty they need in these very uncertain economic times," said Scott Brown, President, and Chief Executive Officer at FinancialForce. "Given the pace at which market and business conditions change, services businesses need confidence in their ability to manage estimates, skills and resources, and solve complex problems. This new release gives organizations a complete, customer-centric view of their business to turn continuous disruption into a competitive edge."

The Spring '23 release doubles down on Services CPQ and Resource Management, the areas where the majority of new features have been added.

Improving Services CPQ process performance protects margins

FinancialForce is prioritizing Services CPQ, first introduced in the Winter '22 release, to help customers gain more control over their margins and time management. The number and depth of new features in this area, and Dan Brown's insights into how popular Services CPQ has become with enterprise accounts, demonstrate that prioritization. FinancialForce's enterprise accounts are adopting Services CPQ to save time during sales cycles by providing their prospects with the visibility to identify resources available for quoting work, their billable rate, skills, and previous experience.

Dan Brown said that "in (quote) estimation, you now can reach into your PSA (Professional Services Automation) system and identify the resource that you're going to quote, what's their billable rate, what's their skills, what's their capabilities. A big issue our customers have is that the As Quoted versus the As Delivered are almost always materially very different."

He continued, emphasizing, "And that's where you end up with margin erosion, that's where you end up with revenue leakage for our customers. Now with Services CPQ, the As Quoted and As Delivered features are tightly linked together. And that has driven enormous improvements.”

Scott Brown added, “When I was a customer, this was a big pain point. For me, the capability to connect your pre-sales activities to your post-sale delivery is a real game changer for us."

Underscoring how vital Services CPQ is to FinancialForce's opportunity-to-renewal strategy, the Spring ‘23 Customer Overview notes that "with usability improvements in Services CPQ, support for additional pricing and costing scenarios, and streamlined estimate export for correct Statements of Work, services teams will be able to create accurate and competitive proposals faster, leading to higher win rates on projects, with much lower risk profiles."

Among the many enhancements to Services CPQ are usability enhancements to the Estimate Builder, helping to reduce errors in As Quoted and As Delivered Results.

New features to optimize resources and projects

Additional goals of the Spring '23 release are to provide customers with improved workflows for optimizing resources and streamlining project management. Given how every professional services firm and software company today is under pressure to continually find new ways to optimize resources and do more with less, the timing of the Resource Optimizer enhancements and the introduction of Resource Manager Work Planner is excellent. FinancialForce now allows assigning multiple resources to a project, integrates with MS Outlook and Google Calendar, and supports mass deletion of past utilization results. FinancialForce also delivers task-based scheduling of held resource requests.

The Spring '23 release is designed to help enterprises optimize resources from small-scale to multi-location projects by adding Resource Work Planner and Enhanced Skills Maintenance that can scale across multiple global locations.

How FinancialForce's Spring '23 Release Strengthens Opportunity-to-Renewal

"This new release gives organizations a complete, customer-centric view of their business to turn continuous disruption into a competitive edge," remarked Scott Brown during a recent briefing. FinancialForce aims to help services businesses more efficiently monetize their time and resources by concentrating their development efforts across opportunity-to-renewal.

The release shows how services companies are looking to real-time financial analytics, including new risk management features, as guardrails to keep their businesses on track to margin and profit goals. The Spring '23 release shows FinancialForce's view of the opportunity-to-renewal process and what strengths it can offer customers, from a new Scheduling Risk Dashboard that provides early intervention and project course corrections in real time, to streamlined estimate exports for accurate Statements of Work (SOWs).

The following table uses the opportunity-to-renewal process as a framework to put the new release into context. It compares each phase of the opportunity-to-renewal process, how FinancialForce defines its role, how the Spring '23 release strengthens each area, and what the people- and software-oriented benefits are, along with leading customer references. You can also download a copy of the Opportunity-to-Renewal Process comparison here.


Monday's Musings: Is ChatGPT Hype or The Future Of CX?


ChatGPT or Generative AI Is This Year’s POC And Shiny Object

While generative AI has been around for some time, ChatGPT has captured the hearts and minds of the general population by highlighting tangible possibilities of what AI can accomplish in both the consumer and enterprise worlds. Generative AI can create chat responses, designs, and other new content, including deep fakes and synthetic data. Neural network techniques such as generative adversarial networks (GANs), variational autoencoders (VAEs), and transformers are used to create original content based on prompts.

On the language side, GPTs (generative pre-trained transformers) generate conversational text using deep learning. These models are first pre-trained on a large corpus of text; that pre-training allows a model built for one machine learning task to be adapted to another. Transformers, a type of neural network, map the relationships among all the data sources, such as text and sentence patterns.
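To make the transformer idea a bit more concrete, here is a minimal, illustrative sketch of scaled dot-product attention, the mechanism that lets a transformer weigh the relationships among tokens. This is a toy in pure Python, not any vendor's implementation, and the token vectors are made-up numbers for illustration:

```python
import math

def softmax(scores):
    """Convert raw scores into positive weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token vectors.

    Each token's output is a weighted mix of all value vectors,
    with the weights set by how well its query matches every key.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this token's query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to the attention weights
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three hypothetical 2-d token embeddings (self-attention: Q = K = V)
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)
```

In a real GPT this step runs across many attention heads and layers over learned embeddings; the sketch only shows why every token's output depends on every other token in the context.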

For images, diffusion models allow images to be created from text prompts. Diffusion models are trained by applying random noise to a set of training images and learning to remove it, so that a desired image can be generated from noise. Common approaches include DALL-E, also from OpenAI, Dreambooth by Google, Imagen, Lensa, Midjourney, and Stable Diffusion.
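The "apply noise, then learn to remove it" idea can be sketched numerically. Below is a toy version of the forward (noising) half of a DDPM-style diffusion process; the schedule constants and three-"pixel" image are my own illustrative assumptions, and a real model would additionally train a network to reverse this process:

```python
import math
import random

def alpha_bar(t, T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta) over a linear noise schedule.

    Close to 1.0 for small t (image barely noised); approaches 0
    as t -> T (image destroyed into nearly pure noise).
    """
    prod = 1.0
    for i in range(t):
        beta = beta_start + (beta_end - beta_start) * i / (T - 1)
        prod *= 1.0 - beta
    return prod

def noisy_sample(x0, t, rng):
    """Closed-form forward diffusion: x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps."""
    abar = alpha_bar(t)
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for x in x0]

rng = random.Random(0)
pixels = [0.2, 0.8, 0.5]                    # a hypothetical 3-pixel "image"
slightly_noisy = noisy_sample(pixels, 10, rng)    # mostly still the image
mostly_noise = noisy_sample(pixels, 900, rng)     # mostly Gaussian noise
```

Image generation runs this in reverse: start from pure noise and let a trained denoiser step back toward a clean image, steered by the text prompt.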

The more organizations interact with these AI systems, the quicker the AI systems will improve their rate of learning.

Move Beyond The Hype And Start With Five Use Cases

Constellation Research sees five emerging use cases for generative AI in CX among an infinite permutation of possibilities:

  1. Marketing. Diffusion models will dynamically generate content, provide translation capability, and run A/B and experimentation tests for user experiences. Personalization models will gain greater context, enabling hyper targeting for campaigns, ad networks, and polling with ChatGPT.
  2. Sales. Sales specific tasks such as pipeline reviews, scheduling meetings, install base analysis, and forecasting will move from manual to automated. Ticklers and alerts will reach out to sales reps to remind them to follow-up on actions.
  3. Service. Crawlers inside one’s internal systems can scan knowledge bases, augment case history, and hasten issue resolution. The AI can create new case tickets, augment missing information, and predict customer satisfaction.
  4. Commerce. Speed of product catalog creation will improve as diffusion models will take prompts from regulatory requirements enabling faster global rollouts of new products and services content. ChatGPT models will serve as the front end interface for order capture.
  5. Customer success. Generative AI will identify accounts with low adoption and automatically flag at-risk customers based on their level of interaction so the frequency of engagement can be increased. Expect dynamic polling to generate surveys based on parameters such as dollar value, length of relationship, past interactions, and customer satisfaction.

Choose When to Design For Machine Scale And When To Add Human Scale

Organizational success requires more than large language models or better algorithms. CX leaders will need to identify the largest corpus of data available, the customer experience questions to be answered, and what skills are required to keep up with human scale in a machine world. In core CX processes such as campaign to lead, lead to order capture, order capture to order fulfillment, order fulfillment to order completion, incident to resolution, and others, there will be opportunities for generative AI to provide missing content along the way.

Along the way every leader must determine which CX journeys are fully automated, augmenting the machine with a human, augmenting a human with a machine, or instead requiring a human touch (see Figure 1).
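As a purely hypothetical illustration of that triage, a CX team might encode the four delivery modes as a simple decision rule. The inputs, thresholds, and mode names below are my own assumptions for the sketch, not Constellation's framework verbatim:

```python
def journey_mode(machine_confidence, stakes):
    """Map a CX journey to one of four delivery modes.

    machine_confidence: how reliably the AI handles this journey (0-1).
    stakes: how costly a mistake is for the customer (0-1).
    The 0.8 / 0.5 / 0.4 thresholds are illustrative assumptions only.
    """
    if machine_confidence >= 0.8 and stakes < 0.5:
        return "fully automated"
    if machine_confidence >= 0.8:
        return "machine augmented by a human"
    if machine_confidence >= 0.4:
        return "human augmented by a machine"
    return "human touch"

journey_mode(0.9, 0.2)   # e.g. routine order-status chat -> "fully automated"
journey_mode(0.3, 0.9)   # e.g. contract dispute escalation -> "human touch"
```

The value of writing the rule down, even crudely, is that it forces a team to decide per journey where the machine stops and the human begins.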

Figure 1. The Four Questions Every CX Leader Will Ask In Their Journeys

Source: Constellation Research, Inc.

The Bottom Line: Generative AI Is Here To Stay

Despite the massive amounts of hype, pragmatic use cases for generative AI will emerge. Given today’s labor shortages and need to improve time to market, expect more pragmatic use cases to emerge. Those organizations who fail to build a generative AI strategy will continue to fall behind. Those who adopt early, will have an opportunity to deliver on exponential growth and more meaningful customer experiences.

Your POV

What are you doing with ChatGPT and generative AI? Which use case will you start with?

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

  • Developing your metaverse and digital business strategy
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.

Disclosures

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions. However, happy to correct any errors upon email receipt.

Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.

Copyright © 2001 – 2023 R Wang and Insider Associates, LLC All rights reserved.

Contact the Sales team to purchase this report on an à la carte basis or join the Constellation Executive Network.


How Generative AI Has Supercharged the Future of Work

In today's fast-paced and data-driven business world, generative AI is now in the midst of transforming the way companies innovate, operate, and work. With proof points like ChatGPT, generative AI will soon enough have a significant competitive impact on revenue as well as bottom lines. With the power of AI that can help people broadly synthesize knowledge, then rapidly use it to create results, businesses can automate complex tasks, accelerate decision-making, create high-value insights, and unlock capabilities at scale that were previously impossible to obtain.

Most industry research agrees with this, such as a major study that recently determined that businesses in countries that widely adopt AI are expected to increase their GDP by 26% by 2035. Moreover, the same study predicts that the global economy will benefit by a staggering $15.7 trillion in both revenue and savings by 2030 thanks to the transformative power of AI. As a knowledge worker or business leader, embracing generative AI technology can deliver a wide range of new possibilities for an organization, helping them stay competitive in an ever-changing marketplace while achieving greater efficiency, innovation, and growth.

While many practitioners are focusing on industry-specific AI solutions for sectors like financial services or healthcare, the broadest and most impactful area of AI will be in general-purpose capabilities that quickly enable the average professional to get their work done better and faster. In short, helping knowledge workers work more effectively to achieve meaningful outcomes for the business. It's in this horizontal domain that generative AI has dramatically raised the stakes in the last six months, garnering widespread attention for the seemingly immense promise it holds to boost productivity as it blazes a fresh technology trail toward bringing the full weight of the world's knowledge to bear on any individual task.

Generative AI, Large Language Models, Foundation Models, AI Apps, and the Future of Work

Delivering the Value of Generative AI While Navigating the Challenges

In my professional opinion, the ability for generative AI to produce useful, impressively synthesized text, images, and other types of content almost effortlessly based on a few text cues has already become an important business capability worthy of providing to most knowledge workers. In my research and experiments with the technology, many work tasks will benefit from between a 1.3x to 5x gain in speed alone. There are other less quantifiable benefits related to innovation, diversity of input, and opportunity cost that come into play as well. Generative AI can also provide particularly high value types of content such as code or formatted data, which normally require extensive expertise and/or training to create. It also has the capability to conduct advanced-level reviews of complex, domain-specific materials including legal briefs and even medical diagnoses.

In short, the latest generative AI services have proven that the capability is now at a tipping point and is ready to deliver value in a widespread, democratized way to the average worker in many situations.

Not so fast, says a chorus of cautionary voices pointing out the many underlying challenges. AI is a potent technology that cuts both ways, and therefore a little advance preparation is required to use it while avoiding the potential issues, which generally are:

  • Data bias: Generative AI models are only as good as the data they are trained on, and if the data contains inherent biases, the model will replicate those biases. This can lead to unintended consequences, such as perpetuating undesirable practices or excluding certain groups of people.
  • Model interpretability: Generative AI models can be complex and their results difficult to interpret, which can make it challenging for businesses to understand how they arrived at a particular decision or recommendation. This lack of explainability can lead to mistrust or skepticism, particularly in high-stakes decision-making scenarios, although this is likely to be addressed over time.
  • Cybersecurity threats: Like any technology that processes and stores sensitive data, generative AI models can be vulnerable to cyber threats such as hacking, data breaches, malicious attacks, or more insidiously, input poisoning. Businesses must take appropriate measures to protect their AI systems for work and their data from these risks.
  • Legal and ethical considerations: The use of generative AI may raise legal and ethical concerns, particularly if it is used to make decisions that impact people's lives, such as hiring or lending decisions. Businesses must ensure that their use of AI aligns with legal and ethical standards and does not violate privacy or other rights. Others have noted that some generative AI systems used today can violate privacy laws, which countries like Italy have already taken action over.
  • Overreliance on AI: Overreliance on generative AI models over time can lead to a loss of human judgment and decision-making, which can be detrimental in situations where human intervention has to be resumed, yet the expertise is now lost. Businesses must ensure that they strike the right balance between the use of AI and human expertise.
  • Maintenance and sustainability: Generative AI models require ongoing maintenance and updates to remain effective, which can be time-consuming and expensive. As businesses scale up their use of AI, they must also ensure that they have the resources and infrastructure to support their AI systems, especially as they begin to build their own foundation models for their enterprise knowledge. Making sure resource-intensive large language models don't consume excessive energy will be a significant issue as well.
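On the data-bias point, one common first check is simply to compare outcome rates across groups. Here is a minimal sketch of the selection-rate ratio (the rule-of-thumb behind the "four-fifths" test used in hiring analysis); the decision data is hypothetical, and a real bias audit would go far beyond a single summary number:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs.

    Returns the fraction of positive outcomes per group.
    """
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest.

    Values below 0.8 are a common rule-of-thumb flag for
    adverse impact, warranting a deeper review of the model.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical AI decisions: group A selected 3/4, group B only 1/4
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(decisions)   # 0.25 / 0.75 -> flag for review
```

The point of even a crude check like this is that it can run continuously against production decisions, turning "monitor for bias" from a policy statement into a measurable alert.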

Succeeding with General Purpose AI in the Workplace

However, the siren song of AI's benefits -- everything from raw task productivity to strategically wielding knowledge more effectively -- will grow louder as more proof points emerge that today's generative AI solutions can genuinely deliver the goods. This will require organizations to begin putting the necessary operational, management, and governance safeguards into place as they climb the AI adoption maturity curve.

Some of the initial moves virtually all organizations should make this year as they situate generative AI in the digital workplace and roll it out to workers include:

  • Clear AI guidelines and policies: Establish clear guidelines and policies on how the AI tools should be used, including guidelines around data privacy, security, and ethical considerations. Make sure these policies are communicated clearly to workers and are easily accessible.
  • Education and training: Provide workers with comprehensive education and training on how to use the AI tools effectively and safely. This includes training on the technologies and solutions themselves, as well as on any relevant legal and ethical considerations that they are required to follow. Digital adoption platforms can also be particularly useful in broadly accelerating situated use of AI tools at work.
  • AI governance structures: Establish clear governance structures to oversee the use of AI tools within the organization. This includes assigning responsibility and providing budget for overseeing AI systems, establishing clear lines of communication, and ensuring that there are appropriate checks and balances in place.
  • Oversight and monitoring: Establish processes for ongoing oversight and monitoring of the AI tools to ensure that they are being used by workers effectively and safely. This includes monitoring the performance of the AI systems, monitoring compliance with policies and guidelines, ensuring consistent models are being used across the organization, and monitoring for any potential biases or ethical concerns.
  • Collaboration and feedback: Encourage collaboration and feedback among workers who are using the AI tools, as well as between workers and management. This includes creating channels for workers to provide feedback and suggestions for improvement, sharing of best practices on using AI, as well as fostering a culture of collaboration and continuous learning on AI skills.
  • Create clear ethical guidelines: Companies should establish clear ethical guidelines for the use of AI tools in the workplace, based on principles such as transparency, fairness, and accountability. These guidelines should be communicated to all workers who use the AI tools.
  • Conduct ethical impact assessments: Before deploying AI tools, companies should conduct ethical impact assessments to identify and address potential ethical risks and ensure that the tools are aligned with responsible practices as well as the company's ethical principles and values.
  • Monitor for AI bias: Companies should regularly monitor AI tools for bias, both during development and after deployment. This includes monitoring for bias in the data used to train the tools, as well as bias in the outcomes produced by the tools.
  • Provide transparency: Companies should provide transparency around the use of AI tools, including how they work, how decisions are made, and how data is used. This includes providing explanations for the decisions made by AI tools and making these explanations understandable to workers and other stakeholders.
  • Ensure compliance with regulations: Companies should ensure that the use of AI tools is compliant with all relevant regulations, including data privacy laws and regulations related to discrimination and bias across the AI tool portfolio.
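The bias-monitoring practice above can be made concrete with a simple statistical check. The sketch below computes a demographic-parity ratio across two groups of model outcomes; the data layout and the 0.8 "four-fifths" review threshold are illustrative assumptions, not a complete fairness audit.

```python
# Minimal sketch of a bias-monitoring check: compare positive-outcome
# rates across groups (demographic parity). The group data and the 0.8
# threshold are illustrative assumptions.

def positive_rate(outcomes):
    """Fraction of outcomes that are positive (True)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def parity_ratio(group_a, group_b):
    """Ratio of the lower group's positive rate to the higher one's.
    A ratio below ~0.8 is a common flag for further review."""
    rate_a, rate_b = positive_rate(group_a), positive_rate(group_b)
    lo, hi = min(rate_a, rate_b), max(rate_a, rate_b)
    return lo / hi if hi else 1.0

# Example: model decisions (True = favorable) for two groups
group_a = [True, True, True, False]    # 75% favorable
group_b = [True, False, False, False]  # 25% favorable

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}", "-> review" if ratio < 0.8 else "-> ok")
```

A check like this would run both on training data and on live outcomes after deployment, per the monitoring practice above.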

While the totality of this list may seem a tall order, most organizations already have many of these pieces in place from departmental AI efforts. In addition, if they have developed an enterprise-wide ModelOps capability, it is a particularly good home for much of this AI oversight, in close conjunction with appropriate internal functions including human resources, legal, and compliance.

Related: See my exploration of ModelOps and how it helps organizations build a consistent, cost-effective AI capability with safety and ethics built in.

The Core Focus for Enabling AI in the Workplace: Foundation Models

Organizations looking at providing their workforce with AI-enabled tools will generally be looking at solutions that are powered by an AI model that is able to easily produce useful results without significant effort or training on the part of the worker. While the compliance, bias, and safety issues mentioned above may seem to be a significant hurdle, the reality is that most AI models already have basic protections and safety layers, while many of the others can be provided centrally through an appropriate AI or analytics Center of Excellence or ModelOps capability. 

Large language models (LLMs) are particularly interesting as the basis for AI workplace tools because they are powerful foundation models trained on a tremendous amount of open textual knowledge. Vendors of LLM-based work tools are generally going down one of several roads: the majority are building on an existing proprietary model that is specially tuned/optimized for the behaviors or results they desire, or they allow model choice, enabling businesses to utilize language or foundation models they have already vetted. Some take a middle road, starting with well-known, highly capable models such as OpenAI's GPT-4 and adding their own special sauce on top.

While there will always be AI tools for the workplace based on lesser-known, less-established AI frameworks and models, right now the most compelling results tend to come from the better-known LLMs. While this list is always changing, the leading foundation models currently, with varying degrees of industry adoption, are (in alphabetical order):

It's also important to keep in mind that while some enterprises will seek to work directly with LLMs and other foundation models to create their own custom AI work tools, the majority of organizations will start with easy-to-use, business-grade apps that already have an AI model embedded within them. Nevertheless, understanding which AI models sit underneath which worker tools is very helpful for understanding their capabilities, supporting properties (like safety layers), and generally known risks.

The Leading AI Tools for Work

The following is a list of AI-enabled tools that primarily use some form of foundation model to synthesize or otherwise produce useful business content and insights. I had a tough choice to make on whether to include the full gamut of generative AI services including images, video, and code. But those are covered in sufficient detail elsewhere online and in any case, they focus more on specific creative roles.

Instead, I sought to focus on business-specific AI work tools based on foundation models that were primarily text-based and more horizontal in nature, and thus would be a good basis for a broad rollout to more types of workers:

Here are some of the more interesting solutions for AI tools that can be used broadly in work situations (in alphabetical order):

  • Bard - Google's entry into the LLM-based knowledge assistant market.
  • ChatGPT - The general purpose knowledge assistant that started the current generative AI craze.
  • ChatSpot - Content and research assistant by Hubspot for marketing, sales, and operations.
  • Docugami - AI for business document management that uses a specialized business document foundation model.
  • Einstein GPT - Content, insights, and interaction assistant for the Salesforce platform.
  • Google Workspace AI Features - Google has added a range of generative AI features to their productivity platform.
  • HyperWrite - A business writing assistant that accelerates content creation.
  • Jasper for Business - A smart writing creator that helps keep workers on-brand for external content.
  • Microsoft 365 Copilot/Business Chat - AI-assisted content creation and contextual user data-powered business chatbots.
  • Notably - An AI-assisted business research platform.
  • Notion AI - Another business-ready entry in the popular content and writing assistant category.
  • Olli - Enterprise-grade analytics/BI dashboards created using AI.
  • Poe by Quora - A knowledge assistant chatbot that uses Anthropic's AI models.
  • Rationale - A business decision-making tool that uses AI.
  • Seenapse - An AI-assisted business ideation tool.
  • Tome - An AI-powered tool for creating PowerPoint presentations.
  • WordTune - A general purpose writing assistant.
  • Writer - An AI-based writing assistant.

As you can see, writing assistants tend to dominate AI tools for work, since they are generally the easiest to create using LLMs, as well as the most general purpose. However, a growing number of AI tools cover many other aspects of generative work as well, some of which you can see emerging in the list above.

In future coverage for AI and the Future of Work, I'll be exploring vertical AI solutions based on LLMs/foundation models for legal, HR, healthcare, financial services, and other industries/functions. Finally, if you have an AI for business startup that a) primarily uses a foundation model in how it works, b) has paying enterprise customers, and c) you would like to be added to this list, please send me a note. You are welcome to contact me for AI-in-the-workplace vendor briefings or client advisory as well.

My Related Research

How Leading Digital Workplace Vendors Are Enabling Hybrid Work

Every Worker is a Digital Artisan of Their Career Now

How to Think About and Prepare for Hybrid Work

Why Community Belongs at the Center of Today’s Remote Work Strategies

Reimagining the Post-Pandemic Employee Experience

It’s Time to Think About the Post-2023 Employee Experience

Research Report: Building a Next-Generation Employee Experience

Revisiting How to Cultivate Connected Organizations in an Age of Coronavirus

How Work Will Evolve in a Digital Post-Pandemic Society

A Checklist for a Modern Core Digital Workplace and/or Intranet

Creating the Modern Digital Workplace and Employee Experience

The Challenging State of Employee Experience and Digital Workplace Today

The Most Vital Hybrid Work Management Skill: Network Leadership


Don’t Kill Innovation, But Apply Guardrails For AI

On March 29, 2023, over 1,100 notable signatories signed the Open Letter from the Future of Life Institute asking for a moratorium on AI development. This wake-up call to society highlights the need for tech policy to catch up with technology and brings awareness to the pervasive impact AI will have on society for decades to come. Elon Musk, an early backer of OpenAI and one of the most notable signatories, has been advocating for a pause. Per the letter, “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”

With OpenAI just releasing its next powerful LLM (large language model), GPT-4, these AI experts and industry executives have suggested a six-month pause on the development of any models more powerful than GPT-4, citing risks up to and including an AI apocalypse. For context on model size, the current GPT-3 uses 175 billion parameters; OpenAI has not disclosed GPT-4's parameter count, and the widely circulated claims of 100 trillion parameters remain unconfirmed (see Figure 1).

Figure 1. Size of GPT-4 LLM vs GPT-3 LLM

Understand What Can Go Wrong With AI

  1. AI will fall into the wrong hands

Technologies can be used for good or for evil; that power lies in the hands of the user. For example, bad actors can create phishing emails personalized to each individual that sound so legitimate you can't resist clicking, only to compromise your system by exposing it to malware.

Other examples include scammers cloning the voice of an elderly couple's grandson to trick them into sending money, claiming he was in jail and needed bail. The scary part was that the scammers used just a few spoken sentences from a YouTube video to clone his voice almost perfectly.

  2. Humans find it harder to distinguish originals from fakes

    Programs such as Midjourney can create deceptive, realistic, yet fake images. Samples float around the internet, from the Pope wearing a funky puffer jacket, to Donald Trump being arrested and manhandled by police, to a realistic fake Tom Cruise video (from a few years ago). While even educated, sophisticated minds struggle to separate real content from fake, less media-savvy audiences will believe much of what they see on the Internet or on TV. Radical groups on both sides of the political spectrum have already started using this to create dangerous propaganda that could lead to undesirable results: governments could be toppled, political games played, and the masses convinced, as AI can help create fake news on a massive scale.

    One requirement that could be imposed on AI companies is to provide an option for content verification using cryptographic or other strong, trustworthy methods. This could provide a way to differentiate fake from real; if companies can't provide that, then arguably they shouldn't be allowed to produce that content in the first place. Content authenticity will remain a challenge as AI proliferates disinformation at exponential scale.
     
  3. A six-month, voluntary, unenforceable moratorium will do little to halt progress in AI

The moratorium proposed by Musk and others calls for a six-month pause, and it is not clear what this arbitrary period would accomplish. First of all, is the pause and/or ban only for US-based companies, or is it applicable worldwide? If it is worldwide, who is going to enforce it? If it is not enforced properly and worldwide, forcing US companies to abandon their efforts for six months will simply help other nations pull ahead in the AI arms race.

What happens after 6 months?  Would regulatory bodies have caught up by then?

Interestingly enough, asking to pause AI experiments is like polluting factories calling for a pause on emissions regulation while continuing to pollute, because they can't properly measure or mitigate their pollution and would otherwise face the risk of shutdown. A pause is not going to make governments or regulatory bodies move any faster on this issue; it is the equivalent of “kicking the can down the road.”

 

Apply Guardrails For AI Ethics And Policy

  1. Deploy risk mitigation measures for AI


    At this point, OpenAI and other vendors offer their LLMs on a "use at your own risk" basis. While ChatGPT has some basic guardrails and safety measures for specific questions and topics, it has been jailbroken by many users, leading to unpleasant answers and behaviors (such as a chatbot declaring love for a New York Times reporter, or reportedly encouraging a married father's suicide). Enterprises that want to use AI need to understand the associated risks and have a plan to mitigate them. If your business gets hurt by using these tools in real business use cases, the vendors will take no responsibility; you will be on your own. Are you ready to accept that risk?

    Conduct a business risk assessment of AI usage in specific use cases, implement proper security, and keep humans in the loop making the actual decisions with AI in a helping role. More importantly, the system needs to be field-tested before it goes into production, with extensive tests demonstrating that AI-produced results consistently match human-produced results. Make sure there are no biases in the data or the decision-making process. Capture data snapshots, models in production, data provenance, and decisions for auditing purposes. Invest in explainable AI to provide a clear explanation of why a decision was made by the AI.

    Although generative AI can help create realistic-looking documents, marketing brochures, content, or code, humans should spend time reviewing the output for accuracy and bias. In other words, instead of being trusted completely, AI should be used to augment work, if at all, with strong guardrails on what is accepted and expected.

    There is also a major security risk in using LLMs, as they are not properly secured today; current security systems are not ready to handle the newer AI solutions. There is a strong possibility of IP leaking through simple attacks on LLMs, as researchers have demonstrated, because many of these systems have weak security protocols in place.

 

  2. Use ChatGPT and other LLMs with caution

    Keep in mind that most LLMs, including ChatGPT, are still in beta. They not only use the material provided to them, but may store it and retrain their models on that data. Unless specific policies prevent it, employees may get careless and leak confidential information. In a classic case, Samsung employees used ChatGPT to fix faulty code and to transcribe a meeting, which turned out to be a colossal mistake: confidential Samsung data, such as semiconductor equipment measurement data and product yield information, was exposed to ChatGPT. Employees should immediately be given guidelines on what constitutes acceptable use of ChatGPT and other LLMs.

 

  3. Understand ChatGPT is not a thinker or a decision-intelligence system

    People assume that ChatGPT and other LLMs understand the world automatically and will make decisions that end humanity. They tend to forget that an LLM is merely a large language model trained on the world's publicly available data. If something has not happened or been written about before, the model can't supply information about it without context; it takes human brains to make subjective decisions. Even with that information, the current iteration is very error-prone. It is the humans using these systems who will put them to good or bad use.
     

Bottom line: Don’t kill Innovation

How society deploys guardrails for AI should not be about stopping a certain technology or company, unless those companies truly go rogue. There was a similar outcry about personal data collection when IoT first became popular, yet wearing Fitbits and sharing data through self-quantification technologies is now a common and accepted practice, with the right privacy policies and permission requests in place.

Policymakers, technology companies, and ethicists should shift their focus to building the guardrails within which these systems operate. The focus should be on regulation, security, oversight, and governance. Define what is acceptable and what is not. Define how much of a decision can be automated versus requiring human involvement. At the end of the day, AI and analytics are here to stay; pausing for six months is not going to let the safeguards catch up.

 

 


Glossary of terms commonly used in Incident Response, Observability, and AIOps

The following are the common terms used in Incident Management, AIOps, and Observability practice areas:

Alert Fatigue

Alert fatigue occurs when on-call personnel and incident responders receive an overwhelming number of alerts or notifications (in volume or in frequency), causing them to ignore, dismiss, or become desensitized to some highly critical ones. The result can be support personnel missing the right alert and failing to take the right action, or taking an inappropriate action that makes matters worse.

To mitigate this, many enterprises use AIOps or Incident Management solutions that reduce or group redundant, irrelevant, or non-critical alerts by:

  1. Prioritizing alerts so only highly critical ones reach the on-call support personnel.
  2. Grouping alerts so notifications related to a specific event are bundled together for analysis.
  3. Reducing noise, or dynamically filtering, so only the critical alerts that need immediate attention surface.
  4. Suppressing false alarms.
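As a rough sketch of how the filtering and grouping steps above might look in practice (the alert fields and severity levels are illustrative assumptions, not any particular tool's schema):

```python
# Minimal sketch of alert noise reduction: drop low-severity alerts,
# then group the survivors by the service they relate to. The
# "service"/"severity" fields are illustrative assumptions.
from collections import defaultdict

SEVERITY_ORDER = {"info": 0, "warning": 1, "critical": 2}

def reduce_and_group(alerts, min_severity="warning"):
    threshold = SEVERITY_ORDER[min_severity]
    grouped = defaultdict(list)
    for alert in alerts:
        if SEVERITY_ORDER[alert["severity"]] < threshold:
            continue  # noise reduction: filter out low-priority alerts
        grouped[alert["service"]].append(alert)  # group related alerts
    return dict(grouped)

alerts = [
    {"service": "db", "severity": "critical", "msg": "replica down"},
    {"service": "db", "severity": "info", "msg": "slow query"},
    {"service": "api", "severity": "warning", "msg": "latency spike"},
]
print(reduce_and_group(alerts))
```

Real AIOps tools layer correlation, time windows, and learned models on top of this kind of filtering, but the core idea is the same.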

Effective solutions here can reduce burnout among on-call personnel and SREs and help them resolve critical incidents faster.

ChatOps

ChatOps is a collaborative communication tool and process commonly used in incident management; the physical war room has evolved into a virtual collaboration channel. Generally, as soon as an incident is identified and acknowledged, one of the first steps is to create a collaboration channel. This channel centralizes all communications and assets about the incident and holds information about its progress, status, plans, and resolution (if any), allowing anyone involved to get the necessary status and information in real time. Anyone who needs the information, or who can assist in solving the incident, can be invited to the channel as needed. On-call systems, alert/notification tools, and chatbots are often included in this category as well.
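To make the channel-creation step concrete, here is a small sketch that derives a dedicated channel name from an incident; the `inc-<id>-<slug>` naming convention is an illustrative assumption rather than a standard:

```python
# Sketch of a ChatOps convention: derive a dedicated channel name from
# an incident. The "inc-<id>-<slug>" pattern is an illustrative
# assumption; most chat tools require lowercase names without spaces.
import re

def incident_channel_name(incident_id, summary, prefix="inc"):
    """Build a chat-channel name, e.g. 'inc-1042-db-outage'."""
    slug = re.sub(r"[^a-z0-9]+", "-", summary.lower()).strip("-")
    return f"{prefix}-{incident_id}-{slug}"[:80]  # many tools cap name length

print(incident_channel_name(1042, "DB Outage"))  # -> inc-1042-db-outage
```

An incident bot would typically create this channel via the chat platform's API and invite the on-call responders automatically.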

Incident

An incident is unplanned downtime, or an interruption, that partially or fully disrupts a service or degrades the quality of service delivered to users. A major incident rises to the level of a crisis. When an incident affects the quality of service delivered to customers, it becomes a costly issue, because most service providers have service-level agreements (SLAs) with their consumers that often have penalties built in. The longer an incident remains unresolved, the more it costs an organization.

Organizations that manage incidents well expect and prepare for major digital incidents and handle them effectively when they happen. They use a mix of open-source, commercial, and homegrown tools that blend well together. Most of those organizations also successfully implement the following processes.

Incident Acknowledgement

Once an incident alert/notification is generated, it needs to be acknowledged by someone from the support or SRE team or by the service owner. An acknowledgment is not a guarantee that the incident will be fixed soon, but it is an early indication of how alert the incident teams are and how quickly they can get to incidents. And while a user may have taken responsibility for the incident, this doesn't mean it has been escalated to the right person yet. This acknowledgment mechanism is common in most on-call alerting/notification tools: if there is no acknowledgment, the tool will continue to escalate, or try to find the right person, until the incident is acknowledged.

Incident Commander

An Incident Commander (IC), or Incident Manager, is the member of the IT team responsible for managing a coordinated critical incident response, especially when the incident is an emergency or a crisis. An IC has ultimate control and final say on all incident decisions. He or she is also responsible for inviting the right personnel and escalating the incident to other teams as needed, and is ultimately accountable for the efficient and quick resolution of the incident.

Incident Lifecycle

The incident lifecycle is the duration of an incident from occurrence to resolution. Post-mortem analysis, and fixing the underlying issue so the incident never repeats, are not part of the incident lifecycle, but they are important adjacent steps that must be performed to avoid recurrence. Likewise, if a specific incident occurs regularly and a solution is known, the fix should be automated; the automation itself sits outside the incident lifecycle but will shorten the lifecycle of future incidents.

MTTA (Mean Time To Acknowledge)

Mean time to acknowledge measures how long it takes to acknowledge an incident. It reflects the efficiency and responsiveness of the responders and gives customers confidence that the enterprise is aware its services are down and is working on the problem. As soon as the first acknowledgment happens, the status outlets must be updated as well: status pages, email alerts, pager notifications, etc.

At a high level, MTTA is calculated by dividing the total time it has taken to acknowledge all incidents by the number of total incidents over the sampling period. 
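That calculation can be sketched in a few lines; the incident records and timestamps below are illustrative:

```python
# Sketch of the MTTA calculation described above: total acknowledgment
# delay across all incidents, divided by the incident count. The
# incident records here are illustrative.
from datetime import datetime

def mtta_minutes(incidents):
    """Mean time to acknowledge, in minutes."""
    total = sum((i["acknowledged"] - i["created"]).total_seconds()
                for i in incidents)
    return total / len(incidents) / 60

incidents = [
    {"created": datetime(2023, 4, 1, 9, 0),
     "acknowledged": datetime(2023, 4, 1, 9, 4)},    # 4 minutes
    {"created": datetime(2023, 4, 1, 12, 0),
     "acknowledged": datetime(2023, 4, 1, 12, 10)},  # 10 minutes
]
print(f"MTTA: {mtta_minutes(incidents):.1f} minutes")  # -> MTTA: 7.0 minutes
```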

MTTI (Mean Time To Innocence)

Mean time to innocence is a metric used to establish that a team or person is not associated with an incident. When an incident happens, it has become common practice to invite everyone deemed even remotely associated with it into the incident collaboration channel. This wastes a lot of time and resources, and it often makes it harder to identify the root cause or solve the incident efficiently because there are “too many cooks in the kitchen,” each offering advice, knowledge, and wisdom that is neither useful nor relevant.

Many organizations therefore measure mean time to innocence (MTTI), letting teams and personnel who are not directly responsible, or who cannot help solve the incident, leave the collaboration channel. The innocent parties can then stay productive in their regular jobs rather than waste time on an unplanned outage that is unrelated to them and about which they have no knowledge.

However, care should be taken when measuring this metric and having teams participate in the practice. Teams can start to blame each other in order to prove their innocence, with the responsible party becoming defensive or denying responsibility outright. While the blame game plays out, the collaborative mindset, and the focus on customers and on solving the unplanned outage, can get lost.

MTTR (Mean Time To Resolution)

Mean time to resolution is the average time it takes to resolve an incident. Resolution combines identifying the incident, identifying the root cause, and fixing the issue: in other words, the time it takes to bring the service back to normal operation. Note that resolving the current incident doesn't guarantee such events won't happen in the future.
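As with MTTA, the calculation is straightforward to sketch; the incident records and timestamps below are illustrative:

```python
# Sketch of the MTTR calculation: average elapsed time from incident
# creation to resolution. The incident records here are illustrative.
from datetime import datetime

def mttr_minutes(incidents):
    """Mean time to resolution, in minutes."""
    total = sum((i["resolved"] - i["created"]).total_seconds()
                for i in incidents)
    return total / len(incidents) / 60

incidents = [
    {"created": datetime(2023, 4, 1, 9, 0),
     "resolved": datetime(2023, 4, 1, 10, 30)},  # 90 minutes
    {"created": datetime(2023, 4, 2, 14, 0),
     "resolved": datetime(2023, 4, 2, 14, 30)},  # 30 minutes
]
print(f"MTTR: {mttr_minutes(incidents):.0f} minutes")  # -> MTTR: 60 minutes
```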


SVB Collapse, Generative AI Trends, Empowerment Culture | CRTV Episode 54

ConstellationTV Episode 54 features Constellation analysts Liz Miller and Holger Mueller analyzing SVB & ChatGPT-4, then Liz shares highlights from #TrulyZoho23 and Holger sits down with Jon Reed of diginomica to discuss Workday's #AI and #ML Summit.

Listen here:

 

1:43: Technology News with Liz and Holger
14:40: Truly Zoho 2023 Recap
24:37: Workday AI & ML Summit Analysis with Jon Reed, co-founder of diginomica
37:25: CRTV Bloopers

 

On ConstellationTV: https://www.youtube.com/embed/49oR2WoxWk8

Einstein GPT Analysis, MLOps Best Practices & ShortList Highlights | CRTV Episode 53

A brand new episode of ConstellationTV dropped today! Here's what you'll find in episode 53...

1:08 - Analyst co-hosts Dion Hinchcliffe and Doug Henschen discussing the latest #tech news, including Salesforce's latest announcement of Einstein GPT and generative #AI and #BI trends.

11:33 - Updates from Hannah Hock on our upcoming event, Ambient Experience Summit.

13:00 - Key takeaways from the 2023 Mobile World Congress in Barcelona

17:25 - #MLOps best practices from Andy Thurai

23:00 - New 2023 #ShortList recaps

Learn more from Constellation analysts at constellationr.com or reach out at [email protected].

On ConstellationTV: https://www.youtube.com/embed/F7zHmQF-q0Q

An Update on IBM Cloud for the CIO

Recently I had the opportunity to take a deep dive into IBM Cloud to get a sense of its current capabilities. My takeaway is that the platform has matured well and has come a long way since its early days, when it was known as Bluemix and was a scrappy contender among the fast-growing hyperscalers.

Somewhere along the way, IBM Cloud continued its journey but didn't quite get the same attention from developers and organizations as the dazzling new Internet cloud firms. That's unfortunate, because IBM Cloud has evolved considerably into an enterprise cloud contender of significance. I make the case below that most CIOs and IT execs should revisit the platform and its current capabilities.

Before we take a deeper dive into its differentiation and potential, no re-introduction to IBM Cloud would be complete without noting that IBM brings modern cloud services to organizations with its most renowned classic product sensibilities fully intact: deep industry expertise, overall stability, an understanding of the unique needs of large enterprises, and a profound respect for security and compliance. In fact, IBM Cloud has some of the most extensive cloud compliance certifications in the industry.

A Modern Take on IBM Cloud Platform for the CIO in 2023

Sizing Up IBM Cloud Today

IBM Cloud has grown quite a bit in the last decade and has notched some notable achievements along the way. Its evolving, maturing footprint sports a number of vital proof points:

  • IBM Cloud has 17 major cloud service categories, with almost 200 individual services across AI/ML, analytics, blockchain, compute, databases, integration, IoT, networking, quantum computing, security, and storage
  • Customers of IBM Cloud can now run their workloads in over 46 data centers across 9 regions and 27 availability zones on 5 continents
  • IBM Cloud's customer base now includes tens of thousands of businesses from startups to Fortune 500 firms, with extensive adoption notably in financial services, manufacturing, travel, hospitality, construction, healthcare, and education

Comparing IBM Cloud to the Hyperscalers

When comparing IBM Cloud with the big cloud hyperscalers like AWS, Azure, and Google Cloud, there are several important differentiations to consider. One of the most significant differentiators is IBM's focus on hybrid cloud solutions. IBM has a long history of providing enterprise-grade IT solutions, and their cloud offerings are no exception. IBM Cloud is designed to work seamlessly with on-premises infrastructure, providing a consistent experience across the entire IT environment. This is particularly important for businesses that have invested heavily in on-premises hardware and software, up to and including mainframes, and are looking to leverage the cloud for additional capacity or functionality. IBM's hybrid cloud approach enables businesses to move workloads between their on-premises infrastructure and the cloud without having to completely overhaul their IT infrastructure, especially if they are significant IBM customers already.

Another differentiation of IBM Cloud is its focus on data security and compliance. IBM has a wealth of experience in providing enterprise-grade security solutions, and this expertise is evident in its cloud offering. IBM Cloud provides robust security features, including identity and access management, network security, and data encryption. Additionally, IBM Cloud complies with a wide range of industry-specific regulations and standards, such as HIPAA, GDPR, and PCI DSS, making it an ideal choice for businesses operating in heavily regulated industries as well as the public sector. It holds many national and regional certifications as well.

IBM Cloud also offers a range of industry-specific solutions, including Watson Discovery and IBM Cloud for Financial Services. These solutions leverage IBM's expertise in financial domains and regulated industries to provide businesses with powerful tools for analyzing data, making informed decisions, and running the business in the cloud. IBM Cloud also has solutions for retail, government, health, academia, and gaming.

Finally, IBM Cloud's pricing model is another differentiation from the cloud hyperscalers. While the hyperscalers typically offer a pay-as-you-go pricing model, IBM Cloud offers more flexible pricing options, including reserved instances and dedicated hosts. This can be particularly advantageous for businesses that have predictable workloads or that require dedicated infrastructure, and these options can be particularly appealing to CIOs as the operational costs of cloud have grown considerably in recent years.

Enterprise Developer Attraction

Much is made of developer interest in the hyperscalers, but in my long experience IBM has one of the largest and most engaged developer communities in enterprise IT. A great many business developers around the world are on the "IBM track," and they remain interested in developing new skills to keep up with the evolving IBM Cloud story.

ISVs and VARs also have experienced developers who want access to IBM's global customer base and can use their experience in both legacy IBM technology and the latest IBM Cloud developments to create compelling new solutions in the market. I still attend enthusiastic developer conferences for IBM legacy tech, like DB2, which remains very popular in many quarters around the world. For organizations that have these developers, IBM Cloud can propel them to adopt, build skills, and innovate. Then there are developers who actually prefer alternatives to the main clouds, especially when those alternatives have capabilities or engineering qualities not found elsewhere.

IBM Cloud and the CIO Perspective

These days I fairly often run into CIOs who have signed large all-in cloud contracts with one of the Big Three, only to soon find they are completely beholden to a cloud giant, with all their eggs in one basket and little flexibility in pricing or control over placing workloads where they make the most overall sense. IBM Cloud can serve as a strong and capable fourth alternative, acting both as a hedge against the hyperscalers and as a strong core cloud partner with the many unique strengths and characteristics explored above.

In short, IBM Cloud isn't just for IBM customers, but for organizations that need a serious enterprise-class cloud with maximum flexibility, a deep understanding of the needs of large and sophisticated organizations, modern cloud-native features, and the global footprint they need for education, support, and compliance. I find that IBM Cloud is the most significant enterprise cloud that many CIOs still don't put on their shortlist, when they probably should keep their options open to more qualified cloud alternatives. In my analysis, IBM Cloud is a capable option for IT departments both as a primary cloud provider and as a way to maximize their cloud options, choices, and needed capability/vendor mix.


What to Expect from Generative AI in Analytics and Business Intelligence

I'm wrapping up research for a fresh Market Overview report on analytics and business intelligence (BI) products. As in other arenas, the timeliest, most potentially game-changing trend in this space is the introduction of generative AI capabilities. We've already seen announcements from Microsoft, Salesforce/Tableau and ThoughtSpot, and we'll undoubtedly see more. Here's a quick rundown on what to expect.

First off, it's noteworthy that everything is in private preview or limited public preview at this time, which should tell you something. Even a Natural Language to DAX code feature that Microsoft announced for Power BI way back in 2021 is still not technically generally available. On March 16 the company announced Microsoft 365 Copilot, which brings new generative AI capabilities to Microsoft productivity apps, Power Apps and Power Automate (though no new features specific to Power BI). I'm expecting a bevy of Power BI-related news during the Microsoft Build event in May.

On March 7, Salesforce introduced Einstein GPT in preview. Einstein GPT will be open to using multiple large language models, including OpenAI’s GPT-3, as well as Anthropic, Cohere and perhaps others. Salesforce has demoed use cases for sales, service and marketing and has discussed possible analytical use cases, including better NL query and explanations, better data storytelling and sentiment analysis. Release dates are not available, and I wouldn't anticipate general availability until the second half of 2023.

ThoughtSpot dove into the generative AI world with its March 7 announcement of ThoughtSpot Sage. Sage combines ThoughtSpot’s search experience with GPT-3 and, in the future, possibly other large language models. ThoughtSpot says the integration offers advantages over generative AI alone because GPT-3 on its own has limitations: it lacks business context, doesn't handle complex, multi-dimensional schemas well, gets confused when there are lots of columns, doesn't handle temporal functions well and, without specific training, generates generic SQL rather than platform-specific SQL. Sage, which is in preview but expected in 1H 2023, is said to overcome these limitations. ThoughtSpot says Sage will improve the company’s NL query and NL explanations and will be able to generate new search data models based on NL input.
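The schema-grounding idea behind approaches like this can be sketched simply: rather than asking an LLM for SQL cold, the product prepends the table schema and target dialect to the prompt so the model has the business context it otherwise lacks. The function below is my own illustrative sketch of that prompt-construction step, not ThoughtSpot's actual implementation; the prompt shape, table and column names are all hypothetical.

```python
# Illustrative sketch of schema-grounded NL-to-SQL prompt construction.
# Not any vendor's actual implementation; names and prompt shape are made up.

def build_grounded_prompt(question: str, table: str,
                          columns: dict[str, str], dialect: str) -> str:
    """Wrap a natural-language question with schema and dialect context."""
    schema = ", ".join(f"{name} ({dtype})" for name, dtype in columns.items())
    return (
        f"You are a {dialect} SQL generator.\n"
        f"Table: {table}\n"
        f"Columns: {schema}\n"
        f"Question: {question}\n"
        f"Return only a valid {dialect} query."
    )

prompt = build_grounded_prompt(
    question="total revenue by region last quarter",
    table="sales",
    columns={"region": "varchar", "revenue": "decimal", "order_date": "date"},
    dialect="Snowflake",
)
print(prompt)
```

Passing the dialect explicitly is what steers the model away from the "generic SQL" problem noted above, and the enumerated columns reduce confusion over wide tables.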

Among other vendors, I've seen multiple Sisense blog posts on generative AI, but no formal product announcement. I've had conversations with several other vendors who say they're working on the technology but aren't ready to reveal their plans.

To sum things up in my report, I created the graphic above to explain the types of capabilities to expect. In my view, generative AI has the potential to transform many aspects of analytics/BI products and related administrative and analysis tasks. Again, all the generative AI features that have been announced in the analytics and BI space remain in preview, so caution is advised. Vendors are universally insisting that data-privacy concerns have been addressed and that humans remain in the loop to review AI-generated text or code before it is used. My concern would be that humans will review suggested content for a day, a week or maybe even a month, but will then just fall into the habit of hitting send/share on a chart or natural language explanation. If it's generated code, at least it will (presumably) face the usual QA of rigorous dev and test processes.

Another question that has yet to be answered is just how expensive generative capabilities will be. Large language models are notoriously resource intensive, particularly in the training phase. Customers will want to know which LLMs vendors are using, where those models are running, how their own data is used and, perhaps most importantly, how much it will cost if they turn on generative AI features and see a flood of adoption. Perhaps the cost can be justified if it delivers a dramatic increase in productivity and breakthrough outcomes.
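A rough back-of-the-envelope calculation shows why the cost question matters once adoption floods in. All of the numbers below — query volume, tokens per query, and per-token price — are hypothetical assumptions for illustration, not any vendor's actual pricing.

```python
# Back-of-the-envelope estimate of LLM usage cost for generative BI features.
# Query volume, token counts, and prices are hypothetical, for illustration only.

def monthly_llm_cost(queries_per_day: int, tokens_per_query: int,
                     price_per_1k_tokens: float, days: int = 30) -> float:
    """Estimated monthly spend for a given natural-language query volume."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * price_per_1k_tokens

# Assume 500 NL queries/day across an org, ~1,500 tokens each (schema context
# plus response), at a hypothetical $0.02 per 1,000 tokens.
cost = monthly_llm_cost(500, 1500, 0.02)
print(f"Estimated monthly cost: ${cost:,.2f}")
```

The point is the scaling: cost grows linearly with adoption, so a feature that is cheap in a pilot can become a meaningful line item once every business user starts asking questions in natural language.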

I'm updating my Cloud-Based Analytics and BI ShortList and publishing our first-ever Embedded Analytics ShortList in conjunction with this upcoming report. I'm holding off on an update of our Augmented BI and Analytics ShortList, last published in August 2022, precisely because there are so many generative AI capabilities now in preview that will need a thorough vetting. Hopefully at least a few of them will be generally available in time for our Q3 2023 update.
