The Future of Hybrid Cloud 2023 | Virtual Event Recap
https://vimeo.com/818405155
It is difficult to describe Zoho. You can use terminology you might use to describe any other organization and feel like you are failing. You can talk about culture, corporate social responsibility, innovation or sustainability until you realize how wide the gap is between what Zoho means and what everyone else means. You can try, but in the end, you are left with a sense that you failed to accurately and fairly describe Zoho. At least that’s what happens to me.
In early 2023 I joined a rogue gaggle of industry analysts to trek to Zoho’s campus just outside of Chennai for an event dubbed Truly Zoho. Panel after panel of Zoho leaders shared an insider’s view and we analysts tried to accurately and fairly describe what we were hearing, seeing and experiencing. I’ve read article after article beautifully sharing the experience…but for some reason I was struggling. It wasn’t because there wasn’t plenty to share. I was struggling to document things in a way that was fair, accurate and, well, truly about Zoho.
Here is where I landed: Talking about Zoho is easy. Understanding Zoho is an entirely different experience and endeavor.
As a company, Zoho is outright defiant in their individuality. What do you do when, ethically, you do not believe in tracking users or consumers with cookies? Build your own infrastructure and cloud to guarantee that privacy is a baseline expectation and core to the business value of every product and offering. What do you do when a lack of opportunity and R&D investment is holding a country back? You invest in rural revival to bring globally in-demand skills and innovation to India, despite the world's assumption that innovation only happens in places like Silicon Valley.
For some, this brazen, maverick nature is frustratingly confusing. "How can you scale this?" "How will you keep up this pace of growth?" "You can't possibly mean you're building that from scratch?" "You can't do that."
Those of us who follow Zoho are used to hearing all of these. I’ve heard people say, with earnest concern, that Zoho might not know what they are doing. They can’t possibly understand where their decisions will lead. There is genuine worry that a group of good people will learn a hard lesson.
None of this is an accident. It is, however, the outcome of hundreds if not thousands of experiments. Zoho is happy to be home to teams of dreamers willing to experiment. Unlike other organizations where experiments are isolated or contained to reduce risk, Zoho removes any assumption that a failed experiment is a total failure. Failings are valued lessons, not grounds for termination. If an idea bubbles up and aligns with a customer’s need or request, teams are empowered to try... empowered to experiment.
One early and lasting experiment: finding a new way to identify, educate and train the next generation of experimenters. For 17 years, Zoho Schools of Learning (informally called Zoho University by some) has seen over 1,400 graduates advance across technology, design and business. Built as an alternative to traditional college or university programs that can often exclude students from far-flung rural villages across India, Zoho Schools focuses on the often-overlooked student who may not have the means to attend university but has the curiosity and will to learn and experiment.
This is most noticeable in Zoho Schools’ boot-camp-style career re-entry program for women looking to return to work after a career break. During the Truly Zoho sessions, we had the opportunity to hear from women who had left the technology workforce. Most of these women told an all too familiar tale of leaving work to start or raise a family. The Marupadi program provides an intensive, immersive retraining experience that empowers these women to make a comeback, brushing up on the latest technologies and skills during a full-time three-month program. After a supervised internship in which graduates are paired with mentors who help guide them back into a role, Marupadi graduates are invited to interview for full-time roles with Zoho.
While meeting the leaders of Zoho was an insightful glimpse into how and why Zoho exists today, it was the chance to meet with the students at Zoho Schools and especially the students and teachers at Kalaivani Kalvi Maiyam, the rural school teaching children as young as 2, that gave me the chance to see what Zoho will be in the future.
Zoho has not just existed but thrived by rejecting a berth in the global game of business dominance. It isn't that they don't want to play a game on the global stage...they just want us to come and play THEIR game. They want the rest of us, the rest of modern business, to stand up and fight for the future of innovation and experimentation. It is a bold and brazen dare: start a school, invest in tomorrow’s research and development, make the choice to sacrifice profit in order to power progress.
Sacrifice profit??? Zoho’s leaders decided to sacrifice growth to make a bold promise: nobody would be laid off as the world grappled with the threat of a global financial recession. For months we have seen headline after headline announcing layoffs. To appease Wall Street, investors, backers or shareholders, companies have made tough decisions to lay people off, cut back on research investments and implement austerity measures to keep ledgers in the black and growth percentages from falling. Zoho decided that the growth velocity they had consistently enjoyed over several years could slow so that people could be prioritized.
Even everyday management decisions defy traditional business thinking. Decision-making is pushed down to the teams and individuals closest to where those decisions turn into actions, especially when those decisions directly impact a customer’s experience with Zoho.
For those heading to an upcoming Zoholics event (I myself will be heading to the Austin whistlestop) these are the things I urge you to keep in mind:
Perhaps the most important advice is this: suspend your disbelief. Just like my time in India, it will be totally worth it to learn who Zoho truly is.
Finding new ways to improve opportunity-to-renewal is core to any services business's growth.
FinancialForce has long bet its business on the belief that it could streamline opportunity-to-renewal for people- and software-centered businesses better than any other vendor. In delivering their Spring '23 release, they're proving how adept they are at delivering new features on a faster release cadence of three major releases a year. Out of its workforce of 1,000 people, FinancialForce has 400 full-time employees in DevOps, engineering, product management, and quality, and nearly 100 outside resources in R&D.
FinancialForce's overarching goal with the Spring '23 release is to strengthen the customer's ability to excel at opportunity-to-renewal. The feature refresh for Spring '23 includes 18 different areas of their platform, with the most, eight, being in Services CPQ. Dan Brown, Chief Product and Strategy Officer at FinancialForce, says, "Opportunity-to-renewal is core to companies that deliver services. It's an area that has been dramatically underserved by classic vendors in this space. Most are fairly product-centric, and that tends to hold companies that are service-oriented back."
Services-as-a-Business is gaining traction
FinancialForce's Spring '23 release shows how Services-as-a-Business is closing gaps and improving the opportunity-to-renewal process. Tight labor markets, spiraling costs and prices due to inflation, and blind spots in opportunity-to-renewal cycles continually jeopardize services revenue. As a result, professional services and software companies relying on service revenue risk losing Annual Recurring Revenue (ARR) and seeing reduced Customer Lifetime Value for every account. The Spring '23 release provides a more granular, 360-degree view across eight core areas of the opportunity-to-renewal process to help services businesses meet new growth challenges.
"Our new Spring '23 release is designed to give organizations the kind of certainty they need in these very uncertain economic times," said Scott Brown, President, and Chief Executive Officer at FinancialForce. "Given the pace at which market and business conditions change, services businesses need confidence in their ability to manage estimates, skills and resources, and solve complex problems. This new release gives organizations a complete, customer-centric view of their business to turn continuous disruption into a competitive edge."

The Spring '23 release doubles down on Services CPQ and Resource Management, the areas where the majority of new features have been added.
Improving Services CPQ process performance protects margins
FinancialForce is prioritizing Services CPQ, first introduced in the Winter '22 release, to help customers gain more control over their margins and time management. The number and depth of new features in this area and Dan Brown's insights into how popular Services CPQ has become with enterprise accounts demonstrate that prioritization. FinancialForce's enterprise accounts are adopting Services CPQ to save time during sales cycles by providing their prospects with the visibility to identify resources available for quoting work, their billable rate, skills, and previous experience.
Dan Brown said that "in (quote) estimation, you now can reach into your PSA (Professional Services Automation) system and identify the resource that you're going to quote, what's their billable rate, what's their skills, what's their capabilities. A big issue our customers have is that the As Quoted versus the As Delivered are almost always materially very different."
He continued, emphasizing, "And that's where you end up with margin erosion, that's where you end up with revenue leakage for our customers. Now with Services CPQ, the As Quoted and As Delivered features are tightly linked together. And that has driven enormous improvements.”
Scott Brown added, “When I was a customer, this was a big pain point. For me, the capability to connect your pre-sales activities to your post-sale delivery is a real game changer for us."
Underscoring how vital Services CPQ is to FinancialForce's opportunity-to-renewal strategy, the Spring ‘23 Customer Overview notes that "with usability improvements in Services CPQ, support for additional pricing and costing scenarios, and streamlined estimate export for correct Statements of Work, services teams will be able to create accurate and competitive proposals faster, leading to higher win rates on projects, with much lower risk profiles."

Among the many enhancements to Services CPQ are usability enhancements to the Estimate Builder, helping to reduce errors in As Quoted and As Delivered Results.
New features to optimize resources and projects
Additional goals of the Spring '23 release are to provide customers with improved workflows for optimizing resources and streamlining project management. Given how every professional services firm and software company today is under pressure to continually find new ways to optimize resources and do more with less, the timing of the Resource Optimizer enhancements and the introduction of the Resource Manager Work Planner is excellent. FinancialForce now allows multiple resources to be assigned to a project, integrates with MS Outlook and Google Calendar, and supports mass deletion of past utilization results. FinancialForce also delivers task-based scheduling of held resource requests.

The Spring '23 release is designed to help enterprises optimize resources from small-scale to multi-location projects by adding Resource Work Planner and Enhanced Skills Maintenance that can scale across multiple global locations.
How FinancialForce's Spring '23 Release Strengthens Opportunity-to-Renewal
"This new release gives organizations a complete, customer-centric view of their business to turn continuous disruption into a competitive edge," remarked Scott Brown during a recent briefing. FinancialForce aims to help services businesses more efficiently monetize their time and resources by concentrating their development efforts across opportunity-to-renewal.
The release shows how services companies are looking to real-time financial analytics, including new risk management features, as guardrails to keep their businesses on track to margin and profit goals. The Spring '23 release shows FinancialForce's view of the opportunity-to-renewal process and what strengths it can offer customers, from a new Scheduling Risk Dashboard that provides early intervention and project course corrections in real time, to streamlined estimate exports for accurate Statements of Work (SOWs).
The following table uses the opportunity-to-renewal process as a framework to put the new release into context. It compares each phase of the opportunity-to-renewal process, how FinancialForce defines their role, how the Spring '23 release strengthens each area, and what the people- and software-oriented benefits are, along with their leading customer references. You can also download a copy of the Opportunity-to-Renewal Process comparison here.

While generative AI has been around for some time, ChatGPT has captured the hearts and minds of the general population by highlighting tangible possibilities of what AI can accomplish in both the consumer and enterprise worlds. Generative AI can create chat responses, designs, and other new content, including deep fakes and synthetic data. Neural network techniques such as generative adversarial networks (GANs), variational autoencoders (VAEs), and transformers underpin the models that create original content based on prompts.
On the language side, GPTs (generative pre-trained transformers) generate conversational text using deep learning. These models are first pre-trained on a large corpus of text; that pre-training allows knowledge learned on one machine learning task to transfer to others. Transformers, a type of neural network, map the relationships among elements of the data, such as words and sentence patterns.
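At its core, a GPT generates text autoregressively: predict a probability distribution over the next token, pick one, append it, and repeat. The sketch below is purely illustrative; the hand-built bigram table stands in for what a real transformer with billions of trained parameters would compute.

```python
# Toy next-token probabilities. A real GPT computes these with a trained
# transformer over a huge vocabulary; this lookup table is illustrative only.
BIGRAM_MODEL = {
    "h": {"e": 0.9, "i": 0.1},
    "e": {"l": 0.8, "r": 0.2},
    "l": {"l": 0.4, "o": 0.6},
    "o": {".": 1.0},  # "." acts as a stop token here
}

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    """Greedy autoregressive decoding: repeatedly append the most
    probable next token until a stop token or the length limit."""
    text = prompt
    for _ in range(max_new_tokens):
        probs = BIGRAM_MODEL.get(text[-1])
        if probs is None:
            break  # unknown context: nothing to predict
        next_token = max(probs, key=probs.get)  # argmax; chat models sample instead
        if next_token == ".":
            break
        text += next_token
    return text

print(generate("h"))  # -> helo
```

Production systems replace the greedy argmax with temperature-controlled sampling, which is what gives ChatGPT its varied, conversational output.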
For images, diffusion models allow images to be created from text prompts. During training, random noise is progressively added to a set of training images; the model learns to reverse that process, removing noise step by step to create a desired image. Common offerings include DALL-E from OpenAI, Imagen and Dreambooth from Google, Lensa, Midjourney, and Stable Diffusion.
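The bookkeeping behind that forward (noising) process is simple to sketch. Under a standard variance-preserving schedule, the fraction of original image signal surviving after t steps is the cumulative product of (1 - beta); the beta values below are a hypothetical linear schedule, not taken from any particular model.

```python
def alpha_bar(betas):
    """Cumulative signal retention of the diffusion forward process:
    alpha_bar_t = product over s <= t of (1 - beta_s). As t grows,
    the image dissolves toward pure noise (alpha_bar -> 0)."""
    out, prod = [], 1.0
    for beta in betas:
        prod *= 1.0 - beta
        out.append(prod)
    return out

# A hypothetical linear noise schedule: 10 steps from 0.01 to 0.10.
schedule = [0.01 + 0.01 * t for t in range(10)]
retention = alpha_bar(schedule)

# Signal retention shrinks monotonically toward zero as noise accumulates.
assert all(a > b for a, b in zip(retention, retention[1:]))
```

Training teaches a network to undo one of these noise steps at a time; generation then starts from pure noise and applies the learned denoiser repeatedly, steered by the text prompt.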
The more organizations interact with these AI systems, the faster those systems will improve.
Constellation Research sees five emerging use cases for generative AI in CX among a near-infinite set of possibilities:
Organizational success requires more than large language models or better algorithms. CX leaders will need to identify the largest corpus of data available, the customer experience questions to be answered, and the skills required to keep up with human scale in a machine world. In core CX processes such as campaign-to-lead, lead-to-order-capture, order-capture-to-order-fulfillment, order-fulfillment-to-order-completion, incident-to-resolution, and others, there will be opportunities for generative AI to provide missing content along the way.
Along the way, every leader must determine which CX journeys are fully automated, which augment the machine with a human, which augment a human with a machine, and which instead require a human touch (see Figure 1).
Figure 1. The Four Questions Every CX Leader Will Ask In Their Journeys

Source: Constellation Research, Inc.
Despite the massive hype, pragmatic use cases for generative AI will emerge, accelerated by today’s labor shortages and the need to improve time to market. Organizations that fail to build a generative AI strategy will continue to fall behind. Those that adopt early will have an opportunity to deliver exponential growth and more meaningful customer experiences.
What are you doing with ChatGPT and generative AI? Which use case will you start with?
Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:
Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.
Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions; however, we are happy to correct any errors upon email receipt.
Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.
Copyright © 2001 – 2023 R Wang and Insider Associates, LLC All rights reserved.
Contact the Sales team to purchase this report on an à la carte basis or join the Constellation Executive Network.
In today's fast-paced, data-driven business world, generative AI is transforming the way companies innovate, operate, and work. With proof points like ChatGPT, generative AI will soon have a significant competitive impact on revenue as well as bottom lines. With the power of AI to help people broadly synthesize knowledge, then rapidly use it to create results, businesses can automate complex tasks, accelerate decision-making, create high-value insights, and unlock capabilities at scale that were previously impossible to obtain.
Most industry research agrees with this, such as a major study that recently determined that businesses in countries that widely adopt AI are expected to increase their GDP by 26% by 2035. Moreover, the same study predicts that the global economy will benefit by a staggering $15.7 trillion in both revenue and savings by 2030 thanks to the transformative power of AI. As a knowledge worker or business leader, embracing generative AI technology can deliver a wide range of new possibilities for an organization, helping them stay competitive in an ever-changing marketplace while achieving greater efficiency, innovation, and growth.
While many practitioners are focusing on industry-specific AI solutions for sectors like financial services or healthcare, the broadest and most impactful area of AI will be general-purpose capabilities that enable the average professional to get their work done better and faster. In short, it helps knowledge workers work more effectively to achieve meaningful business outcomes. It's in this horizontal domain that generative AI has dramatically raised the stakes in the last six months, while garnering widespread attention for the seemingly immense promise it holds to boost productivity by bringing the full weight of the world's knowledge to bear on any individual task.
In my professional opinion, the ability of generative AI to produce useful, impressively synthesized text, images, and other types of content almost effortlessly from a few text cues has already become an important business capability worth providing to most knowledge workers. In my research and experiments with the technology, many work tasks benefit from a 1.3x to 5x gain in speed alone. There are other less quantifiable benefits related to innovation, diversity of input, and opportunity cost that come into play as well. Generative AI can also produce particularly high-value types of content, such as code or formatted data, which normally require extensive expertise and/or training to create. It also has the capability to conduct advanced-level reviews of complex, domain-specific materials, including legal briefs and even medical diagnoses.
In short, the latest generative AI services have proven that the capability is now at a tipping point and is ready to deliver value in a widespread, democratized way to the average worker in many situations.
Not so fast, say a chorus of cautionary voices that point out the many underlying challenges. AI is a potent technology that cuts both ways, and therefore a little advance preparation is required to use the technology while avoiding the potential issues, which generally are:
However, the siren song of the benefits that AI can bring -- everything from raw task productivity to strategically wielding knowledge more effectively -- will grow louder as more proof points emerge that today's generative AI solutions can genuinely deliver the goods. This will require organizations to begin putting the necessary operational, management, and governance safeguards into place as they climb the AI adoption maturity curve.
Some of the initial moves virtually all organizations should make this year as they situate generative AI in the digital workplace and roll it out to workers include:
While the totality of this list may seem a tall order, most organizations already have many of these pieces in place from departmental AI efforts. In addition, if they have developed an enterprise-wide ModelOps capability, that is a particularly good home for a large part of these AI oversight practices, in close conjunction with appropriate internal functions including human resources, legal, and compliance.
Related: See my exploration of ModelOps and how it helps organizations have a consistent, cost-effective AI capability with safety and ethics built-in
Organizations looking to provide their workforce with AI-enabled tools will generally seek solutions powered by an AI model that can easily produce useful results without significant effort or training on the part of the worker. While the compliance, bias, and safety issues mentioned above may seem a significant hurdle, the reality is that most AI models already include basic protections and safety layers, while many of the others can be provided centrally through an appropriate AI or analytics Center of Excellence or ModelOps capability.
Large language models (LLMs) are particularly interesting as the basis for AI workplace tools because they are powerful foundation models that have been trained on a tremendous amount of open textual knowledge. Vendors for LLM-based work tools are generally going down one of several roads: The majority of them are building on an existing proprietary model that is specially tuned/optimized for certain behaviors or results they desire, or they are allowing model choice, enabling businesses to utilize language or foundation models they have already vetted. Some are also taking the middle road by starting with well-known, highly-capable models such as OpenAI's GPT-4, and adding their own special sauce to them on top.
While there will always be AI tools for the workplace based on lesser-known and less-established AI frameworks and models, right now the most compelling results tend to come from the better-known LLMs. While this list is always changing, the leading foundation models currently known, with varying degrees of industry adoption, are (in alphabetical order):
It's also important to keep in mind that while some enterprises will seek to work directly with LLMs and other foundation models to create their own custom AI work tools, the majority of organizations are going to start with easy-to-use, business-grade apps that already have an AI model embedded within them. Nevertheless, understanding which AI models are underneath which worker tools is very helpful in understanding their capabilities, supporting properties (like safety layers), and general known risks.
The following is a list of AI-enabled tools that primarily use some form of foundation model to synthesize or otherwise produce useful business content and insights. I had a tough choice to make on whether to include the full gamut of generative AI services including images, video, and code. But those are covered in sufficient detail elsewhere online and in any case, they focus more on specific creative roles.
Instead, I sought to focus on business-specific AI work tools based on foundation models that were primarily text-based and more horizontal in nature, and thus would be a good basis for a broad rollout to more types of workers:
Here are some of the more interesting solutions for AI tools that can be used broadly in work situations (in alphabetical order):
As you can see, writing assistants tend to dominate AI tools for work, since they are generally the easiest to create using LLMs, as well as the most general-purpose. However, a growing number of AI tools cover many other aspects of generative work, some of which you can see emerging in the list above.
In future coverage for AI and the Future of Work, I'll be exploring vertical AI solutions based on LLMs/foundation models for legal, HR, healthcare, financial services, and other industries/functions. Finally, if you have an AI for business startup that a) primarily uses a foundation model in how it works, b) has paying enterprise customers, and c) you would like to be added to this list, please send me a note. You are welcome to contact me for AI-in-the-workplace vendor briefings or client advisory as well.
My Related Research
How Leading Digital Workplace Vendors Are Enabling Hybrid Work
Every Worker is a Digital Artisan of Their Career Now
How to Think About and Prepare for Hybrid Work
Why Community Belongs at the Center of Today’s Remote Work Strategies
Reimagining the Post-Pandemic Employee Experience
It’s Time to Think About the Post-2023 Employee Experience
Research Report: Building a Next-Generation Employee Experience
Revisiting How to Cultivate Connected Organizations in an Age of Coronavirus
How Work Will Evolve in a Digital Post-Pandemic Society
A Checklist for a Modern Core Digital Workplace and/or Intranet
Creating the Modern Digital Workplace and Employee Experience
The Challenging State of Employee Experience and Digital Workplace Today
The Most Vital Hybrid Work Management Skill: Network Leadership
On March 29, 2023, over 1,100 notable signatories signed the Open Letter from the Future of Life Institute asking for a moratorium on AI development. This wake-up call to society highlights the need for tech policy to catch up with technology and brings awareness to the pervasive impact AI will have on society for decades to come. An early backer of OpenAI and one of the most notable signatories, Elon Musk has been advocating for a pause. Per the letter, “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”
With OpenAI just releasing its next powerful LLM (Large Language Model), GPT-4, these AI experts and industry executives have suggested a 6-month pause on the development of any models more powerful than GPT-4, citing fears of an AI apocalypse. For context on model size: GPT-3 uses 175 billion parameters, while OpenAI has not disclosed GPT-4's parameter count; a widely circulated but unconfirmed estimate puts it at 100 trillion (see Figure 1).
Figure 1. Size of GPT-4 LLM vs GPT-3 LLM

Technologies can be used for good or for evil. That power lies in the hands of the user. For example, bad actors can create phishing emails personalized to each individual that sound so legitimate that recipients can’t resist clicking -- only to compromise their systems by exposing them to malware.
In another example, scammers cloned the voice of an elderly couple’s grandson to trick them into sending money, claiming he was in jail and needed bail. The scary part was that the scammers used just a few spoken sentences from a YouTube video to clone his voice almost perfectly.
The moratorium proposed by Musk and others calls for a 6-month pause, though it is unclear what this arbitrary period will accomplish. First of all, is the pause and/or ban only for US-based companies, or is it applicable worldwide? If it is worldwide, who is going to enforce it? If it is not enforced properly and worldwide, forcing US companies to abandon their efforts for the next six months will simply help other nations pull ahead in the AI arms race.
What happens after 6 months? Would regulatory bodies have caught up by then?
Interestingly enough, asking to pause AI experiments is like polluting factories calling for a pause on emissions regulation while continuing to pollute, because they can’t properly measure or mitigate their pollution without facing the risk of shutdown. A pause is not going to make government or regulatory bodies move any faster to solve this issue. It is the equivalent of kicking the can down the road.
At this point, OpenAI and other vendors are offering their LLMs on a "use at your own risk" basis. While ChatGPT has some basic guardrails and safety measures for specific questions and topics, it has been jailbroken by many, leading to unpleasant answers and behaviors (such as professing love for a NY Times reporter or encouraging a married father toward suicide). Enterprises that want to use AI need to understand the associated risks and have a plan to mitigate them. If your business is harmed while using these models in real business use cases, the vendors will take no responsibility, and you will be on your own. Are you ready to accept that risk?
Conduct a business risk assessment of AI usage in specific use cases, implement proper security, and keep humans in the loop making the actual decisions, with AI in an assistive role. More importantly, AI needs to be field-tested before it goes into production, with extensive tests proving that AI-produced results match human-produced results every single time. Make sure there are no biases in the data or the decision-making process. Be sure to capture data snapshots, models in production, data provenance, and decisions for auditing purposes. Invest in explainable AI to provide a clear explanation of why a decision was made by AI.
Although generative AI can help create realistic-looking documents, marketing brochures, content, or code, humans should spend time reviewing the output for accuracy and bias. In other words, instead of trusting AI completely, it should be used to augment work, with strong guardrails on what is accepted and expected.
There is also a major security risk in using LLMs, as they are not yet properly secured. Current security systems are not ready to handle the newer AI solutions. There is a strong possibility of IP and other sensitive information leaking through simple attacks on LLMs, as researchers have demonstrated; many of these systems have weak security protocols in place.
How society deploys guardrails for AI should not be about stopping a particular technology or company, unless those companies truly go rogue. There was a similar outcry about personal data collection when IoT first became popular, yet wearing a Fitbit and sharing that data through self-quantification technologies is now a common and accepted practice, given the right privacy policies and permission requests.
Policymakers, technology companies, and ethicists should shift their focus to building the guardrails within which these systems operate. The focus should be on regulation, security, oversight, and governance. Define what is acceptable and what is not. Define how much of the decision-making can be automated versus how much requires human involvement. At the end of the day, AI and analytics are here to stay. Just pausing development for six months is not going to let the safeguard measures catch up.
The following are the common terms used in Incident Management, AIOps, and Observability practice areas:
Alert fatigue occurs when on-call personnel and incident responders receive an overwhelming number of alerts or notifications (in volume, frequency, or both), causing them to ignore, dismiss, or become desensitized to notifications, including highly sensitive and critical ones. At times this results in support personnel missing the right alert and failing to take the right action, or taking an inappropriate action that makes matters worse.
To mitigate this, many enterprises use AIOps or incident management solutions that reduce or group redundant, irrelevant, or non-critical alerts.
These solutions can reduce burnout among on-call personnel and SREs and help them resolve critical incidents faster and more efficiently.
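The grouping idea behind these solutions can be sketched in a few lines. This is a minimal illustration, not how any particular AIOps product works: it assumes alerts arrive as simple dicts with hypothetical `service`, `type`, and `timestamp` keys, and collapses alerts of the same kind that arrive within a short window into one notification.

```python
from collections import defaultdict
from datetime import timedelta

def group_alerts(alerts, window_minutes=5):
    """Collapse alerts sharing a service and alert type within a short
    time window into a single grouped notification (a sketch, assuming
    alerts are dicts with 'service', 'type', and 'timestamp' keys)."""
    window = timedelta(minutes=window_minutes)
    groups = defaultdict(list)  # (service, type) -> list of buckets
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = (alert["service"], alert["type"])
        # Extend the open bucket if the last alert in it is within the
        # dedup window; otherwise start a new bucket for this key.
        if groups[key] and alert["timestamp"] - groups[key][-1][-1]["timestamp"] <= window:
            groups[key][-1].append(alert)
        else:
            groups[key].append([alert])
    # One notification per bucket instead of one per raw alert.
    return [bucket for buckets in groups.values() for bucket in buckets]
```

Real products add correlation across services, topology awareness, and ML-based noise scoring on top of simple time-window grouping like this.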
ChatOps is a collaborative communication tool and process commonly used in incident management. Physical war rooms have evolved into virtual collaboration channels. Generally, as soon as an incident is identified and acknowledged, one of the first steps is to create a collaboration channel. This centralizes all communications and assets about the incident, along with information about its progress, status, plans, and resolution (if any), allowing anyone involved to get the status and necessary information in real time. Anyone who needs that information, or who can assist in solving the incident, can be invited to the channel as needed. On-call systems, alert/notification tools, and chatbots are often included in this category as well.
An incident is an unplanned downtime or interruption that partially or fully disrupts a service, delivering a lesser quality of service to users. A severe incident is a crisis or major incident. When an incident starts to affect the quality of service delivered to customers, it becomes a business issue, because most service providers have service-level agreements (SLAs) with their consumers that often carry built-in penalties. The longer an incident remains unresolved, the more it costs the organization.
High-performing organizations expect and prepare for major digital incidents and handle them well when they happen. They use a mix of open-source, commercial, and homegrown tools that blend well, and most of them also successfully implement the following processes.
Once an incident alert/notification is generated, it needs to be acknowledged by someone on the support or SRE team, or by the service owner. Acknowledgment is not a guarantee that the incident will be fixed soon, but it is an early indication of how alert the incident teams are and how quickly they can get to incidents. And while someone has taken responsibility for the incident, that doesn't mean it has been escalated to the right person yet. This acknowledgment mechanism is common in most on-call alerting/notification tools: if there is no acknowledgment, the on-call tool will continue to escalate or try to find the right person until the incident is acknowledged.
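The escalate-until-acknowledged behavior described above can be sketched as a simple loop. This is a toy model, not any vendor's implementation: `notify` and `is_acknowledged` stand in for hypothetical callbacks into a paging system, and the escalation chain is just an ordered list of responders.

```python
import time

def page_until_acknowledged(escalation_chain, is_acknowledged, notify,
                            ack_window_seconds=300):
    """Page each responder in an ordered escalation chain in turn until
    someone acknowledges the incident (a sketch; `notify` and
    `is_acknowledged` are hypothetical paging-system callbacks).
    Returns the responder on whose watch the ack arrived, or None."""
    for responder in escalation_chain:
        notify(responder)
        # Give this responder a window to acknowledge before escalating.
        deadline = time.monotonic() + ack_window_seconds
        while time.monotonic() < deadline:
            if is_acknowledged():
                return responder
            time.sleep(1)
    return None  # chain exhausted; incident remains unacknowledged
```

Production on-call tools layer schedules, overrides, and multiple contact methods (push, SMS, phone) onto this basic escalate-on-timeout loop.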
An Incident Commander (IC), or Incident Manager, is the member of the IT team responsible for managing a coordinated response to a critical incident, especially when the incident is an emergency or crisis. The IC has ultimate control and final say on all incident decisions, is responsible for inviting the right personnel and escalating the incident to the necessary teams, and is ultimately accountable for the efficient and quick resolution of the incident.
The incident lifecycle is the duration of an incident from its occurrence to the time it is resolved. Post-mortem analysis and fixing the underlying issue so the incident never repeats are not part of the incident lifecycle, but they are important adjacent steps that must be performed to avoid recurrence. Likewise, if a specific incident occurs regularly and a solution is known, the fix should be automated; that automation sits outside the incident lifecycle, but it shortens the lifecycle of future incidents.
Mean time to acknowledge is a measure of how long it takes to acknowledge an incident. It shows the efficiency and responsiveness of the responders and gives customers confidence that the enterprise is aware the service is down and is working on it. As soon as the first acknowledgment happens, the status channels must be updated as well: status pages, email alerts, pager notifications, and so on.
At a high level, MTTA is calculated by dividing the total time taken to acknowledge all incidents by the total number of incidents over the sampling period.
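That calculation is a straightforward average. A minimal sketch, assuming each incident is recorded as a hypothetical (created_at, acknowledged_at) pair of timestamps:

```python
def mean_time_to_acknowledge(incidents):
    """MTTA = total acknowledgment delay across all incidents divided by
    the number of incidents in the sampling period. Each incident is a
    (created_at, acknowledged_at) pair of datetimes (an assumed shape);
    returns the mean delay in seconds."""
    if not incidents:
        return 0.0
    total = sum((acked - created).total_seconds()
                for created, acked in incidents)
    return total / len(incidents)
```

For example, two incidents acknowledged after 4 and 6 minutes yield an MTTA of 5 minutes (300 seconds).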
Mean time to innocence is a metric used to prove that a person or team is not responsible for, or associated with, an incident. When an incident happens, it has become common practice to invite everyone deemed even remotely associated with it to the incident collaboration channel. This wastes a lot of time and resources, and it often becomes harder to identify the root cause or solve the incident efficiently because there are "too many cooks in the kitchen," each offering advice, knowledge, and wisdom that is neither useful nor relevant.
Many organizations now measure mean time to innocence (MTTI) so that teams and personnel who are not directly responsible, or who cannot help solve the incident, can leave the collaboration channel. This allows the innocent parties to remain productive in their regular jobs rather than waste time on an unplanned outage that is unrelated to them and about which they have no knowledge.
However, care should be taken when measuring this metric and having teams participate in the practice. Teams can start blaming each other in order to prove their innocence, and the guilty party may become defensive or deny responsibility entirely. While the blame game plays out, the collaborative mindset and the focus on customers and on resolving the unplanned outage can get lost.
Mean time to resolve is the average time it takes to resolve an incident. Resolution covers identifying the incident, identifying the root cause, and fixing it; in other words, it is the time it takes to bring the service back to normal operation. Resolving the current incident doesn't guarantee similar events won't happen in the future.
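MTTR is computed the same way as MTTA, just over a different interval: from when the incident starts to when normal service is restored. A minimal sketch, assuming incidents are recorded as hypothetical dicts with `started_at` and `resolved_at` timestamps:

```python
from datetime import timedelta

def mean_time_to_resolve(incidents):
    """MTTR = average elapsed time from incident start to restoration
    of normal service. Each incident is a dict with 'started_at' and
    'resolved_at' datetimes (an assumed shape); still-open incidents
    (resolved_at is None) are excluded from the average."""
    resolved = [i for i in incidents if i.get("resolved_at")]
    if not resolved:
        return timedelta(0)
    total = sum((i["resolved_at"] - i["started_at"] for i in resolved),
                timedelta(0))
    return total / len(resolved)
```

Excluding open incidents is one common convention; some teams instead track them against the current time, which inflates MTTR until the incident closes.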
ConstellationTV Episode 54 features Constellation analysts Liz Miller and Holger Mueller analyzing SVB & ChatGPT-4; then Liz shares highlights from #TrulyZoho23, and Holger sits down with Jon Reed of diginomica to discuss Workday's #AI and #ML Summit.
Listen here:
1:43: Technology News with Liz and Holger
14:40: Truly Zoho 2023 Recap
24:37: Workday AI & ML Summit Analysis with Jon Reed, co-founder of diginomica
37:25: CRTV Bloopers
On ConstellationTV <iframe width="560" height="315" src="https://www.youtube.com/embed/49oR2WoxWk8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
A brand new episode of ConstellationTV dropped today! Here's what you'll find in episode 53...
1:08 - Analyst co-hosts Dion Hinchcliffe and Doug Henschen discuss the latest #tech news, including Salesforce's announcement of Einstein GPT and generative #AI and #BI trends.
11:33 - Updates from Hannah Hock on our upcoming event, Ambient Experience Summit.
13:00 - Key takeaways from the 2023 Mobile World Congress in Barcelona
17:25 - #MLOps best practices from Andy Thurai
23:00 - New 2023 #ShortList recaps
Learn more from Constellation analysts at constellationr.com or reach out at [email protected].