
New Release: Constellation Research Releases Q3 2022 ShortList™ Updates Naming Top Vendors Across Major Technology Sectors

We are thrilled to reveal the latest updates to the Constellation ShortList™ portfolio.

The Constellation ShortList™ portfolio highlights the key players to consider for investments across all of our coverage areas, including HR tech, healthcare, AI, marketing, customer experience, analytics, machine learning, and more. We update the lists every six to twelve months, depending on the category, so that they keep pace with rapidly changing requirements and customer needs.

Today, we released these 34 new and updated lists:

Each offering meets the threshold criteria as determined by our analysts through client inquiries, partner conversations, customer references, vendor selection projects, market share, and internal research. These reports are part of Constellation’s open research library and are free to download. For more information, visit https://www.constellationr.com/shortlist.

We know the ShortList™ reports are starting points in your vendor selection process. If you would like to take advantage of our expertise with software vendor selection, contract negotiations, and partner selection, please reach out to [email protected].

If there’s a coverage area you think we should add, please let us know with a short note to [email protected].

Be sure to check back next Wednesday for the final updates for the quarter.


In a totally synthetic metaverse, what does “authentic” even mean?

Despite all the hype, we don’t know what the metaverse will eventually look like. But whatever its eventual form, questions of identity should be ironed out now, at the formative stage.

We can assume that the metaverse will be a richer and more complete form of virtual reality, or augmented reality, than anything we’ve seen so far.  

The main technology enablers will be more compute power, better mobile technology, better connectivity, and better AI to create lifelike experiences.

Let me say up front, though, that I don’t believe there’s any fundamental call for decentralisation technology, shared ledger, blockchain, or NFTs. Indeed, none of that is even helpful here.

These new VR/AR platforms will be centrally managed for any number of commercial and performance reasons. Any decentralisation technology in a metaverse hosted by data companies will only be for show and to allow them to brag “We’re on the blockchain”.

Far more important will be the authenticity and reliability of data about people and other entities in the metaverse. And by “data about people” I mean much more than “identity”.

Yes, it’s important that we can know reliably that the animated avatar we’re chatting with does indeed represent our friend Sarah Turner. But we might also need to know that her avatar is currently under her proper control, that her physical location is where she says it is, and that her pleas to send money are genuine.

Data about people and things will need to be radically more reliable than in Web1 and Web2 today. That will be an enormous challenge.

Indeed, in a wholly synthetic metaverse, what does “authentic” even mean?

Some of the questions we need to answer are deeply philosophical. So far there are few answers.

What does “identity” mean in an unreal world? Will we have to agree on what counts as a “real” identity under the covers? Will there always be biological or “legal” identities behind every metaverse entity? What happens when metaverse entities create completely synthetic digital children? Will there be levels of identity that bottom out somewhere?

I want to shift the focus away from these mind-bending puzzles about identity and focus on data and the truth. That in turn will lead us to consider issues such as data quality and data protection.

If you’re dealing with a digital entity in the metaverse: What do you need to know about it? Where will you get the knowledge you need? How will you be sure the knowledge is true, or at least true enough?

These questions are subtly different from identity, and I’ve tried to frame the questions independently of “reality”.

What is “real” in the metaverse anyway? Does it even matter? Maybe “real” only means “we are confident this entity has a corresponding physical twin” — although at this stage it isn’t clear to me that physical twins will always be needed for every kind of entity. 

What we definitely need, though, is agreement on the sources of truth for data. We therefore need properly reliable data supply chains.

If you or your metaverse entity have some data about another entity, then you need to know where that data has come from, whether it has been processed, and by whom, using what algorithm.

You’ll need to know its age, and any terms and conditions for its use. In many cases it will be sensible to have warranties over data, so that the data supplier will accept responsibility for its quality.

We therefore need more than algorithmic transparency. We need complete data supply chain traceability.
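To make the idea concrete, here is a minimal sketch in Python of what provenance metadata travelling with a single data item might look like. The schema and field names (source, processors, terms_of_use, warranty) are my own illustration of the concept, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DataProvenance:
    """Hypothetical provenance record attached to a piece of data about an entity."""
    source: str                                       # where the data originated
    processors: list = field(default_factory=list)    # who transformed it, and with what algorithm
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    terms_of_use: str = "unspecified"
    warranty: Optional[str] = None                    # does the supplier stand behind its quality?

    def age_days(self) -> float:
        return (datetime.now(timezone.utc) - self.collected_at).total_seconds() / 86400

# Example: a location claim about an avatar, processed by a (made-up) geocoding step.
location_claim = DataProvenance(
    source="mobile-os-location-service",
    processors=[{"by": "example-geocoder", "algorithm": "reverse-geocode-v2"}],
    terms_of_use="display only; no retention beyond 24 hours",
    warranty="supplier accepts liability for accuracy within 50 metres",
)
print(f"Location claim is {location_claim.age_days():.2f} days old")
```

The particular fields matter less than the principle: every consumer of the claim can see where it came from, how it was processed, how old it is, and what terms attach to it.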

In most cases, if not all, virtual entities in a metaverse will need to have corresponding cryptographic keys, with assurance of the quality of those keys, and of their ongoing custody or possession.

Maybe one day fully autonomous virtual entities will have their own keys — but I hope that for the foreseeable future, those entities will be anchored to real-world high-quality cryptographic keys.
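As a rough illustration of what anchoring an entity to a real-world key could look like, here is a sketch using Ed25519 signatures from the Python cryptography library. The claim format is invented for the example; the point is simply that a claim made by an avatar can be verified against a key held by an accountable real-world custodian.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A real-world custodian (e.g. the person behind an avatar) holds the private key.
custodian_key = Ed25519PrivateKey.generate()
avatar_public_key = custodian_key.public_key()

# The avatar asserts a claim; the custodian's key signs it.
claim = b"avatar:sarah-turner|request:send-money|amount:100"
signature = custodian_key.sign(claim)

# Anyone holding the published public key can check the claim really came from the custodian.
try:
    avatar_public_key.verify(signature, claim)
    print("Claim verified: signed by the key anchored to this avatar")
except InvalidSignature:
    print("Claim rejected: signature does not match")
```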

Web3 actions are going to get richer, faster, deeper, more automated and more impactful very quickly, so we need to protect the human users with more energy and clarity than we do today on Web1 and Web2.

We therefore need to very gently ease our way into these new augmented and virtual realities. We need sound, transparent and accountable anchors between the virtual and the physical.

Let’s assume that fully synthetic realities are a long way off and that, at least for now, every metaverse entity will have a corresponding real-world twin — some individual or other legal entity that can be held responsible for things that happen in the metaverse.

The challenges here are both conceptual and technical.

Conceptually, we didn’t think clearly enough about identity in Web1 and Web2. We over-egged identity, making it too complicated online.

Technically, we didn’t have the proper focus and determination to harden identity data against theft. Identity and authentication are treated in a piecemeal fashion in every use case, with no consistency.

Identity as a technology has become costly and confusing, and thanks to all the soft spots and weak links, it sets people up to fail at the hands of identity thieves.

We can solve the conceptual and technical issues at the same time by framing the problem in terms of verifiable claims and attributes.

Our approach to payment card data illustrates this approach.

The payments world has worked hard on a consistent set of cryptographic standards so that all cards are verifiable and all card-present transactions are digitally signed.

Mobile wallets now have the same security as cards conforming to the EMV standard (Europay, Mastercard, Visa), thanks to secure elements in handsets. New web browser payment protocols are bringing consistency and cryptographic strength to internet payments.

If we treated digital identity as another form of data, we could make all personal data just as secure as payment cards, and just as portable. We should.

Ultimately, I envision a new taxonomy of metadata so that all data in the metaverse, about all the things both “real” and “virtual”, comes with signals about its origin, terms and conditions, reliability, warranty, and so on.


News Analysis: Major Announcements From Splunk .conf22 Bring Observability and Security to the Forefront


Observability and security have come to the forefront of IT service delivery, a convergence that was long overdue. This was the urgent theme of the 2022 Splunk conference in Las Vegas.

I had the privilege of attending Splunk .conf22 as an analyst. Below are some noteworthy announcements and their potential impact on enterprises. My apologies for the delayed write-up: it has been a busy event season, and I have been parsing the big themes and trends for prospects and buyers.

Like Peanut Butter and Jelly, Observability and Security Go Together

When a service goes down, it takes time to figure out when it went wrong, what went wrong, why it went wrong, and what to do to fix it. This is where observability practices and solutions help DevOps, IT support, and site reliability engineers (SREs) figure out the when, what, and why, and incident management helps fix it. If it is an operational incident, then DevOps, SREs, support teams, and incident response teams need to engage to quickly mitigate the situation and get IT running normally for the business as soon as possible. If it is a security event, then security teams need to engage to protect vital data and system assets.

 

Observability, AIOps, and incident management all play a major part in identifying and fixing an incident as soon as, or sometimes even before, it happens. I recently analyzed 11 vendors that provide offerings in this area in my Incident Management in the Cloud Era Market Overview report, which can be found here.

 

Observability depends on full-fidelity, high-quality data

Splunk has been in the observability space for many years now. What sets it apart from some other vendors is its recognition that data needs to be consumed the way the customer provides it. Announcements about full-fidelity data, never losing any IT operations data, federated search across hybrid and cloud locations, and the use of inexpensive storage options such as Amazon S3 all set Splunk apart.

 

Splunk realizes data will not live in a single location and is taking steps to cope with distributed data. The vendor put together a compelling infrastructure operations strategy to help large customers: highly scalable, high-fidelity data; data shareability across teams (both operations and security); automatable incident intelligence; and so forth. With the combination of SignalFx, Omnition, Splunk Infrastructure Monitoring, Splunk APM, Splunk Real User Monitoring (RUM), Splunk Log Observer, Splunk On-Call, Splunk Observability Cloud, Rigor, and Plumbr, the vendor has assembled a solid set of tools. This combination can appeal to security operations center (SOC), network operations center (NOC), DevOps, IT service management (ITSM), DevSecOps, and SRE teams and potentially can enable them all to work together.

 

Splunk offers a decent set of AIOps use case implementations such as forecasting, predictive analytics, outlier detection, and event clustering. I would like to see a lot more AIOps use cases as Splunk continues to mature. You can find a full set of AIOps use cases in my recent report A CIO’s Guide to AIOps.
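To give a flavor of what one of those use cases involves under the hood, here is a minimal outlier-detection sketch in Python. It is not Splunk's implementation; it simply flags metric samples that drift more than a few standard deviations from a trailing window, using made-up response-time data.

```python
import statistics

def flag_outliers(samples, window=20, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations from the trailing window."""
    outliers = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9   # avoid division by zero on flat metrics
        z = (samples[i] - mean) / stdev
        if abs(z) > threshold:
            outliers.append((i, samples[i], round(z, 1)))
    return outliers

# Synthetic response times (ms) with one injected latency spike
response_times = [120 + (i % 5) for i in range(40)] + [950, 121, 119, 123]
print(flag_outliers(response_times))   # -> flags the 950 ms sample
```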

 

The Bottom Line: Customers Are One Incident Away From Bankruptcy

Fast-moving innovation comes with reliability problems for any digital enterprise. As I have written before, enterprises should be more prepared than they are to deal with major incidents. It is only a matter of time before an incident happens. Splunk is one of the few vendors that serve both the innovation and the reliability sides of the equation. By providing high-fidelity observability data, along with removing the barriers between findings and action, the vendor helps customers achieve faster digital innovation.

 

Splunk, with its large ecosystem and implementation partners, is a company to watch. If you are in the market for observability, AIOps, and incident management solutions, Splunk definitely is worthy of consideration.

 

My full view of all the major announcements and my deeper analysis can be found in this 12-page report, available here (free for Constellation subscribers).

Random Musings from the conference

  • I kept hearing “.com” constantly, which reminded me of circa 2000, until I figured out that people were dropping the “f” from .conf!
  • The opening act’s light, song, and dance sequence was one of the most mesmerizing I have ever seen. I was really in awe after watching the show, and it took me a while to recover from tweeting and taking pictures of it 😊 Light Balance is a Ukrainian LED dance troupe and an America’s Got Talent 2017 performer. Watch them here: https://www.youtube.com/watch?v=E8Ecz_sntDo; they are impressive!

  • I was told Splunk restricted in-person attendance to about 5,000 (though it felt like more), and apparently about 20,000 people attended virtually.
  • As with many events, these in-person gatherings seem to be Covid super-spreaders, so please exercise caution and safety if you attend in person!
  • As a foodie, I love Vegas events as I get to pig out! Even though I walked 10,000+ steps every day, it still seems to catch up after multiple trips. Need to lose some weight.
  • Splunk has an enviable nickname for its employees/users/practitioners, “Splunkers,” which kinda sounds cool!
  • The T-shirt bit from their chief T-shirt officer, Shelly, of the Splunk T-shirt Co. was great. Splunk, if you are listening, I want one of those Invisibili-T shirts!
  • Interesting to see how much money technology companies spend on Formula 1 racing: Splunk, Oracle, Dell, HPE, Alteryx, DataRobot ... the list goes on. The amounts are mind-boggling. I don’t get it, but oh well.

 


2022 SuperNova Award Finalists Announced

We are excited to announce the finalists for our 12th annual SuperNova Awards! This year’s frontrunners include outstanding leaders and teams implementing disruptive and innovative technology initiatives within their industries and communities.

The SuperNova Awards represent the largest digital transformation case study library in the world. With over 250 case studies on business transformation, the Constellation Research library gives Constellation Executive Network subscribers the world’s largest case study collection for practitioners. The submissions highlight many of the top technology projects of the past year across industries, geographies, and companies around the globe.

Now, it’s time to narrow down the finalists and choose the winners. But to do that, we need your help! The polls will be open from August 8 to September 2, so be sure to cast a vote for your favorite nominee in each category before the deadline.

The results will be announced at Constellation’s Connected Enterprise on October 26, 2022. Constellation analysts will reveal the winners during the SuperNova Awards Gala, held on the third night of Connected Enterprise.

Here are this year’s finalists…
 

2022 SuperNova Award Finalists

 

AI and Augmented Humanity

Data to Decisions


Digital Safety, Governance, and Privacy

ESG & Sustainability

Future of Work: Employee Experience

Future of Work: Human Capital Management

Marketing Transformation

Next-Generation Customer Experience

Tech Optimization and Modernization

The 2022 SuperNova Award judges, a panel of technology thought leaders and journalists, selected finalists who demonstrated success in implementing leading-edge business models and emerging technologies in their organizations.

Were you named a finalist? Learn how to register for Connected Enterprise, get more information about public polling, and more. Check the SuperNova Award Finalist Resources page here.

 


News Analysis: Big Tech Earnings in Q2 2022 Show Tech Rebound Among The MATANA Stocks


Not All Big Tech Companies Are Created Equal

The MATANA stocks continue to do well despite a confluence of crises. Microsoft, Apple, Tesla, Alphabet, Nvidia, and Amazon collectively showed that big tech digital giants who:

  1. Build the biggest networks
  2. Disintermediate customer account control
  3. Compete for data supremacy
  4. Maximize digital monetization
  5. Execute with a long term mindset

will continue to deliver on both growth and margins. These companies build data-driven digital networks (DDDNs) that serve as 100-year multi-sided platforms.

Source: Fox Business

Digital Giants Continue To Show Outsized Growth

As the NASDAQ finds a floor at 11,000 and tech company forecasts for Q3 2022 reassure investors that growth is still in play, the digital giants have shown what's required to grow both market share and margins. Five trends emerged from this quarter's tech earnings:

  1. Not all digital advertising is created equal. Search-based digital advertising is more bulletproof than social. Companies such as Meta (Facebook), Snap, and Twitter carry more risk, while Amazon grew digital advertising revenue via commerce to $8.76B. Search-based and commerce-based digital advertising appear to be more recession-proof.
  2. Enterprise cloud growth continues to surprise many. The tech industry is in the third inning of a nine-inning market battle for cloud dominance. The public cloud providers have shown that even amidst intense competition, growth remains in high double digits. For example, Microsoft Azure grew 46%, Google Cloud grew revenue 47%, and market leader Amazon grew 33% year over year for the quarter.
  3. Successful digital giants build multi-sided platform businesses with data-driven digital networks (DDDNs). Apple showed it was still able to grow both iPhone sales and services sales. Apple added 160 million paid subscriptions over the past 12 months, bringing its total to 860 million paid subscriptions for services, and services revenue grew 12% to $19.6 billion.
  4. One-trick ponies do not fare well in a recession. Guidance from Apple, Amazon, Google, and Microsoft shows the dichotomy between diversified and single monetization models. Digital giants have diversified business models and monetize more than one area across ads, search, goods, services, subscriptions, and memberships. The one-trick-pony monetization models of Meta (Facebook) and Netflix show lethargy in growth.
  5. Enterprise tech companies remain undervalued. While much attention is focused on the mega-cap digital giants, especially consumer-oriented stocks, the mega-cap enterprise stocks remain very attractive. Nvidia, Oracle, Adobe, Salesforce.com, IBM, ServiceNow, Snowflake, and Atlassian have positioned themselves for a lot of upside. Cybersecurity stocks such as Palo Alto Networks, Fortinet, and CrowdStrike remain recession-proof as well.

The Bottom Line: MATANA Digital Giants Dominate Return To Tech Rotation

Move over FAANG; focus on the digital giants. During the world's biggest margin call (as my friend Keith Fitzgerald has proclaimed), overall tech stocks are mostly trading within 10% to 15% of historical lows. With a strong dollar, a weak Europe, a zero-COVID locked-down China, and high interest rates, the quest for alpha will prove to be even harder. Investors taking a longer-term view of 12 to 18 months may want to buy at historic lows in the cycle. Given that the world has psyched itself into a recession, few sectors will have the 20% to 40% growth rates of big tech, and now may be a strategic time to buy back into the market.

 

Your POV

Do you think we hit the bottom for big tech? Are you diving back in or are you dollar cost averaging? Where do you think we will be during the recession?

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

  • Developing your metaverse and digital business strategy
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.

Disclosures

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions; however, we are happy to correct any errors upon email receipt.

Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or, for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.

Copyright © 2001 – 2022 R Wang and Insider Associates, LLC All rights reserved.

Contact the Sales team to purchase this report on an a la carte basis or join the Constellation Executive Network.


The C-Suite has Trust Issues with AI

This post was originally published in Harvard Business Review.

Despite rising investments in artificial intelligence (AI) by today’s enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring gut-level decisions based on field experience over AI-assisted ones.

AI has been adopted widely for tactical, lower-level decision-making in many industries — credit scoring, upselling recommendations, chatbots, or managing machine performance are examples where it is being successfully deployed. However, its mettle has yet to be proven for higher-level strategic decisions — such as recasting product lines, changing corporate strategies, re-allocating human resources across functions, or establishing relationships with new partners.

Whether it’s AI or high-level analytics, business leaders are not yet ready to stake their business on machine-made decisions in any profound way. An examination of AI activities among financial and retail organizations by Amit Joshi and Michael Wade of IMD Business School in Switzerland finds that “AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare.”

More than two in three executives (67%) responding to a Deloitte survey say they are “not comfortable” accessing or using data from advanced analytic systems. Even in companies with strong data-driven cultures, 37% of respondents express discomfort. Likewise, 67% of CEOs in a KPMG survey indicate they often prefer to make decisions based on their own intuition and experience rather than on insights generated through data analytics. The study confirms that many executives lack a high level of trust in their organization’s data, analytics, and AI, with uncertainty about who is accountable for errors and misuse. Data scientists and analysts also see this reluctance among executives — a recent survey by SAS finds 42% of data scientists say their results are not used by business decision makers.

When will executives be ready to take AI to the next step, and trust it enough to act on more strategic recommendations that will impact their business? There are many challenges, but there are four actions that can be taken to increase executive confidence in making AI-assisted decisions:

  • Create reliable AI models that deliver consistent insights and recommendations
  • Avoid data biases that skew recommendations by AI
  • Make sure AI provides decisions that are ethical and moral
  • Be able to explain the decisions made by AI rather than treating them as a black box

Create reliable models

Executive hesitancy may stem from negative experiences, such as an AI system delivering misleading sales results. Almost every failed AI project has a common denominator: a lack of data quality. In the old enterprise model, structured data predominated; it was classified as it arrived from the source, making it relatively easy to put to immediate use.

While AI can use quality structured data, it also relies on vast amounts of unstructured data to create machine learning (ML) and deep learning (DL) models. That unstructured data — videos, images, audio, text, and logs — is easy to collect in raw form but unusable until it is properly classified, labeled, and cleansed so that AI systems can create and train models before they are deployed in the real world. As a result, data fed into AI systems may be outdated, irrelevant, redundant, limited, or inaccurate. Partial data fed into AI/ML models will provide only a partial view of the enterprise. AI models may also be constructed to reflect the way business has always been done, without the ability to adjust to new opportunities or realities, such as the supply chain disruptions caused by the global pandemic. This means data needs to be fed in real time so that models can be created or changed in real time.

It is not surprising that many data scientists spend half their time on data preparation, which remains the single most significant task in creating reliable AI models that deliver proper results. To gain executive confidence, context and reliability are key. Many AI tools are available to help with data preparation, from synthetic data generation to data debiasing to data cleansing, and organizations should consider using them to provide the right data at the right time to build reliable AI models.

Avoid data biases

Executive hesitancy may be grounded in ongoing, and justifiable, concern that AI results are leading to discrimination within their organizations or are affecting customers. Similarly, inherent AI bias may be steering corporate decisions in the wrong direction. If proper care is not taken to cleanse the data of biases, the resulting AI models will always be biased, resulting in a “garbage in, garbage out” situation. If an AI model is trained on biased data, the model will be skewed and will produce biased recommendations.

The models and the decisions can only be as unbiased as the data they are built on. Bad data can, knowingly or unknowingly, contain implicit biases — racial, gender, origin, political, social, or other ideological biases. In addition, other forms of bias that are detrimental to the business may also be present. There are roughly 175 identified human biases that need attention. This needs to be addressed through analysis of incoming data for biases and other negative traits. As mentioned above, AI teams spend an inordinate amount of time on data formats and quality, but little time on eliminating biased data.

Data used in higher-level decision-making needs to be thoroughly vetted to assure executives that it is proven, authoritative, authenticated, and from reliable sources. It needs to be cleansed of known discriminatory practices that can skew algorithms.

If data is drawn from questionable or unvetted sources, it should either be eliminated altogether or should be given lower confidence scores. Also, by controlling the classification accuracy, discrimination can be greatly reduced at a minimal incremental cost. This data pre-processing optimization should concentrate on controlling discrimination, limiting distortion in datasets, and preserving utility.
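One widely used pre-processing technique along these lines is reweighing: assigning each (group, outcome) combination a sample weight so that the training data no longer encodes a statistical dependence between a protected attribute and the label. The sketch below is illustrative only, with invented column names and a synthetic dataset, and assumes pandas is available.

```python
import pandas as pd

def reweigh(df, protected="gender", label="approved"):
    """Return a per-row weight so the protected attribute and label look statistically independent."""
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for group, p_group in (df[protected].value_counts() / n).items():
        for outcome, p_outcome in (df[label].value_counts() / n).items():
            mask = (df[protected] == group) & (df[label] == outcome)
            observed = mask.sum() / n
            if observed > 0:
                # weight = expected frequency if independent / observed frequency
                weights[mask] = (p_group * p_outcome) / observed
    return weights

# Tiny, entirely synthetic dataset for illustration
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,   0,   1,   1,   1,   1,   0,   1],
})
data["weight"] = reweigh(data)
print(data)   # under-represented (group, outcome) combinations get up-weighted
```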

It is often assumed — erroneously — that AI’s mathematical models can eventually filter out human bias. The risk is that such models, if run unchecked, can result in additional unforeseen biases — again, due to limited or skewed incoming data.

Make decisions that are ethical and moral

Executive hesitancy may reflect the fact that businesses are under pressure as never before to operate morally and ethically, and AI-assisted decisions need to reflect those ethical and moral values as well. This is partly because companies want to be seen as operating with integrity, and partly because of the legal liabilities that may arise from wrong decisions that can be challenged in court, especially since a decision that was AI-made or AI-assisted will go through an extra layer of scrutiny.

There is ongoing work within research and educational institutions to apply human values to AI systems, converting these values into engineering terms that machines can understand. For example, Stuart Russell, professor of computer science at the University of California at Berkeley, pioneered a helpful idea known as the Value Alignment Principle that essentially “rewards” AI systems for more acceptable behavior. AI systems or robots can be trained to read stories, learn acceptable sequences of events from those stories, and better reflect successful ways to behave.

It’s critical that work such as Russell’s is imported into the business sector, because AI has enormous potential to skew decision-making that impacts lives and careers. Enterprises need enough checks and balances to ensure that AI-assisted decisions are ethical and moral.

Be able to explain AI decisions

Executives may be wary of acting on AI decisions when there is a lack of transparency. Most AI decisions don’t have explainability built in. When a decision is made and an action is taken that risks millions of dollars for an enterprise, or that affects people’s lives and jobs, saying “AI made this decision, so we are acting on it” is not good enough.

The results produced by AI, and the actions taken based on them, cannot be opaque. Until recently, most systems were programmed to explicitly recognize and deal with predetermined situations. Traditional, non-cognitive systems hit a brick wall when they encounter scenarios for which they were not programmed. AI systems, on the other hand, have some degree of critical thinking capability built in, intended to more closely mimic the human brain. As new scenarios arise, these systems can learn, understand, analyze, and act on the situation without the need for additional programming.

The data used to train algorithms needs to be maintained in an accountable way — through secure storage, validation, auditability, and encryption. Emerging methods such as blockchain and other distributed ledger technologies also provide a means for immutable, auditable storage. In addition, a third-party governance framework needs to be put in place to ensure that AI decisions are not only explainable but also based on facts and data. At the end of the day, it should be possible to show that a human expert, given the same data set, would have arrived at the same results, and that the AI didn’t manipulate them.
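One lightweight way to get tamper-evident auditability without committing to a blockchain is a hash-chained log of the data and decisions that flow through a model. The sketch below is purely illustrative; the record contents and class name are invented.

```python
import hashlib
import json

class AuditLog:
    """Append-only log in which each entry commits to the hash of the previous one."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        prev = "GENESIS"
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if entry["prev"] != prev or entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"event": "training_data_ingested", "dataset": "loans_2022_q2", "rows": 125000})
log.append({"event": "model_decision", "model": "credit_risk_v3", "decision": "deny", "confidence": 0.81})
print("audit trail intact:", log.verify())   # altering any earlier record breaks the chain
```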

Data-based decisions by AI are almost always probabilistic rather than deterministic. Because of this, there is always a degree of uncertainty when AI delivers a decision, and there has to be an associated confidence score on the reliability of the results. It is for this reason that most systems cannot, will not, and should not be fully automated; humans need to stay in the decision loop for the near future. Relying on machine-based decisions is hardest in sensitive industries such as healthcare, where a 98% probability is not good enough.
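In practice this often reduces to a simple routing rule: act automatically only above a confidence threshold, and send everything else to a human reviewer. A minimal sketch, with invented thresholds and payloads:

```python
def route_decision(prediction: str, confidence: float, threshold: float = 0.98):
    """Auto-apply AI decisions only above the confidence threshold; otherwise escalate to a human."""
    if confidence >= threshold:
        return {"action": prediction, "handled_by": "ai", "confidence": confidence}
    return {"action": "escalate", "handled_by": "human-review-queue", "confidence": confidence}

# A 98%-confident call still goes to a human if the bar is set higher, as it might be in healthcare.
print(route_decision("approve_treatment", confidence=0.98, threshold=0.995))
print(route_decision("approve_refund", confidence=0.99, threshold=0.95))
```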

Things get complex and unpredictable as systems interact with one another. “We’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it,” according to David Weinberger, Ph.D., an affiliate of the Berkman Klein Center for Internet and Society at Harvard University. No matter how sophisticated decision-making becomes, critical thinking from humans is still needed to run today’s enterprises. Executives still need to be able to override or question AI-based output, especially when the process is opaque.

Tasks to raise executive confidence

Consider the following courses of action when seeking to increase executives’ comfort levels in AI:

  • Promote ownership of and responsibility for AI beyond the IT department, extending it to anyone who touches the process. A cultural change will be required to make the ethical decisions needed to survive in the data economy.
  • Recognize that AI (in most situations) is simply code that makes decisions based on prior data and patterns with some guesstimation of the future. Every business leader — as well as employees working with them — still needs critical thinking skills to challenge AI output.
  • Target AI to areas where it is most impactful and refine these first, which will add the most business value.
  • Investigate and push for the most impactful technologies.
  • Ensure fairness in AI through greater transparency, and maximum observability of the decision-delivery chain.
  • Foster greater awareness and training for fair and actionable AI at all levels, and tie incentives to successful AI adoption.
  • Review or audit AI results on a regular, systematic basis.
  • Take responsibility, own decisions, and course-correct if a wrong decision is ever made, without blaming it on AI.

Inevitably, more AI-assisted decision-making will be seen in the executive suite for strategic purposes. For now, AI will be assisting humans in decision-making to perform augmented intelligence, rather than a unicorn-style delivery of correct insights at the push of a button. Ensuring that the output of these AI-assisted decisions is based on reliable, unbiased, explainable, ethical, moral, and transparent insights will help instill business leaders’ confidence in decisions based on AI for now and for years to come.

I have also written on related topics: see "8 Tips for Building An Effective AI System Infrastructure" and "It is Our Responsibility to Make Sure Our AI is Ethical and Moral," both of which may be worth reading as well.

 


News Analysis: Ready For The Industrial Metaverse?


Customers Can Expect Physics-Based And Photorealistic Digital Twins Today

On June 29, 2022, Siemens, a venerable digital industrial, and NVIDIA, one of the leaders in the metaverse, announced a collaboration to create an industrial metaverse. The goal: accelerate the adoption and mainstream use of industrial automation. At the core of this relationship are Siemens' digital transformation platform, Siemens Xcelerator, and NVIDIA's Omniverse™, a 3D design and collaboration platform. Siemens brings physics-based digital models to the partnership, while NVIDIA brings real-time AI to increase decision velocity. As NVIDIA states, "Omniverse is a multi-GPU scalable virtual world engine that enables teams to connect 3D design and CAD applications for collaborative design workflows and allows users to build physically accurate virtual worlds for training, testing and operating AI agents such as robots and autonomous machines." NVIDIA boasts 25,000 companies using its metaverse platform.


Source: NVIDIA

As part of the partnership, Omniverse joins the Siemens Xcelerator open partner ecosystem. Roland Busch, CEO of Siemens, emphasized how the integration of data from mechanical, electrical, software, plant systems, IoT, and edge networks will drive the convergence of IT and OT. Siemens' Xcelerator brings a curated data marketplace and a common set of APIs to the digital twin platform.

The NVIDIA Omniverse delivers a rich, AI-powered virtual world engine. Coupled with NVIDIA AI, Siemens has access to a powerful computational engine to render rich digital representations in real time. In addition, one of the benefits of the NVIDIA Omniverse is its adoption of, and standardization on, Pixar's Universal Scene Description (USD) to create photorealistic simulations of everything from an integrated circuit to an entire factory. NVIDIA's CEO, Jensen Huang, noted at the press conference at Siemens HQ that these tools start you "down the journey of the industrial metaverse".
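For readers unfamiliar with USD, the sketch below shows the flavor of the format using Pixar's open-source usd-core Python bindings (independent of Omniverse itself). The stage and prim names are illustrative, not taken from any Siemens or NVIDIA product.

```python
# pip install usd-core   (Pixar's open-source USD Python bindings)
from pxr import Usd, UsdGeom, Gf

# Describe a tiny "factory cell" as a USD stage that USD-aware tools can open.
stage = Usd.Stage.CreateNew("factory_cell.usda")
UsdGeom.Xform.Define(stage, "/FactoryCell")

pedestal = UsdGeom.Cube.Define(stage, "/FactoryCell/RobotPedestal")
pedestal.GetSizeAttr().Set(1.0)                          # a 1-unit cube
pedestal.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.0, 0.5))   # position it within the cell

stage.SetDefaultPrim(stage.GetPrimAtPath("/FactoryCell"))
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())             # human-readable .usda scene description
```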

BMW Creates A Digital Twin With iFACTORY

During the press conference, Jensen Huang and Roland Busch joined Dr. Milan Nedeljković, member of the Board of Management of BMW Group AG, to share how BMW Group has progressed in its digital twin efforts.


Source: Siemens

Nedeljković shared how iFACTORY would enable the carmaker to make its factories more "lean, green, and digital". BMW sees a faster concept-to-market opportunity through its joint partnership with Siemens and NVIDIA and expects the digital factory in Debrecen, Hungary to be up and running by 2025. Other customers in attendance included AT&T, GT Rail UK, and Merck Group.

 

The Bottom Line: Industrial Metaverse Will Accelerate Digital Transformation

While most small and medium-sized entities may not be able to afford these digital twins, companies that have NX or Tecnomatix platforms may gain access later in the year. The availability of digital twins will offer customers highly immersive experiences and allow real-time decisions in both the physical and digital worlds. Key benefits include the ability to:

  • Create environments for rich, immersive simulations
  • Improve productivity for operations
  • Optimize processes across the product lifecycle
  • Access real-time performance data
  • Rapidly turn around industrial IoT solutions
  • Deliver greater decision velocity from data to decisions

With partners such as Unity doubling down on digital twins, and other 3D engine companies and digital industrials in hot pursuit, customers and prospects can expect many more alternatives to arise for delivering data-driven digital networks (DDDNs) that accelerate digital transformation.

Your POV

Do you trust Siemens and NVIDIA to deliver an open platform for collaboration? Will you jump in to create your next digital twin? Does this announcement pique your interest in the metaverse economy?

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

  • Developing your metaverse and digital business strategy
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.


Snowflake Summit 2022: Educating the Masses on Cutting-Edge Innovation

The gap between Snowflake’s bold and innovative vision and its product and customer realities was on display at Snowflake Summit 2022.

Attend any big annual enterprise tech show, and you’re likely to hear about the vendor’s far-reaching vision for innovation. That’s why we’re shown those “forward-looking statements” disclaimer slides that nobody ever bothers to read.

So we had Snowflake’s product announcements and presentations at Snowflake Summit 2022 in Las Vegas on the one hand, and, on the other, the realities of the company’s cross-cloud platform and its customer base as of the June 13-16 event.

The most palpable reality, evidenced by the packed Caesars Forum Conference Center, was the scale of Snowflake’s success. The event drew 7,000-plus attendees -- a massive increase over the 2,000-plus attending the company’s last live Summit in 2019. The numbers also spoke for themselves: Snowflake became the fastest enterprise software company to reach $1 billion in revenue last year, and its fiscal year 2022 (ended January 31) closed at $1.2 billion in revenue, as reported by CEO Frank Slootman during his opening keynote. The company now has more than 6,300 customers -- more than several of its well-known independent competitors combined.

The reality is that Snowflake’s highly automated, low-touch, cross-cloud platform has attracted a great big tent of organizations that love the Data Cloud’s ease of deployment, scaling, and use. But Snowflake is not just an easy-to-use alternative to running data warehouses on-premises. Or so executives including Slootman, co-founder and President of Product Benoit Dageville, and Senior VP of Product Christian Kleinerman kept telling Summit attendees.

Snowflake co-founder and President of Product, Benoit Dageville, kicks off Snowflake Summit 2022

“Disrupting analytics” was the company’s vision way back in 2014, Dageville explained. That mission was succeeded in 2018 by “disrupting collaboration” through data sharing and the Snowflake Data Marketplace. And the new mission, introduced at Snowflake Summit 2022, is “disrupting app development.”

This new vision, detailed largely by Kleinerman in a series of product announcements, is about building modern applications that might include transactional as well as analytical capabilities. Mind you, Snowflake is not going after traditional Oracle Database type workloads so much as after modern, cloud-native apps blending transactional and analytical requirements.

Here are some of the key components announced, along with their development and release status:

  • Hybrid Tables (in private preview) are a new Snowflake table type supporting fast, single-row operations and suitable for both operational and analytical queries.
  • Unistore (in development and powered by Hybrid Tables) brings together transactional and analytical data to support transactional applications that can also instantly query historical data for analytical context.
  • Native Applications (in private preview) is a framework for building apps using Snowflake functions including UDFs, stored procedures, and (in-development) integrations with the Streamlit low-code development framework, acquired by Snowflake in March. Running on Snowflake takes advantage of the Data Cloud’s management and governance capabilities.

Importantly, the Native App Framework is tied to the Snowflake Marketplace (renamed from “Data Marketplace” because it’s now also about apps, models, and more). Using the Marketplace, companies will be able to distribute and monetize their apps with built-in commerce capabilities (which, along with Snowflake’s community and networking power, differentiate the company from competitors that have added data sharing support, often through third-party marketplaces). Apps also offer the advantage of securely harnessing Snowflake data without copying it or exposing it to app users.

Snowflake Senior VP of Product, Christian Kleinerman, details the big announcements at Snowflake Summit 2022

The promise of Native Apps was certainly palpable during Wednesday’s “Building in the Data Cloud - Machine Learning and Application Development” keynote. Private-preview customer 84.51°, a retail data science, insights and media company owned by Kroger, presented on an app it’s developing that will securely blend transactional grocery store sales and inventory data with privacy-safeguarded customer loyalty card data to deliver insights on customer buying trends to store chains and consumer goods companies. App users won’t be able to see or touch the encrypted transactional sales data or the customer loyalty card data used inside the app, but they will get the trend insights derived from these data. Similarly, LiveRamp, a data-enablement, measurement and marketing company, is building an app that will help with identity resolution while respecting data privacy safeguards.

I felt a bit sorry for these innovators, however, as the room was more than half empty by the time they presented. The content was quite compelling, I thought, but attendees either had competing breakout sessions or it wasn’t (yet) relevant for the less sophisticated Snowflake customers. It didn’t help that the first hour of the keynote was dedicated to Snowflake’s data science capabilities. Here the announcements included:

  • External Table access to on-premises object stores (entering private preview by the end of June) will enable Snowflake users to bring Parquet files from on-premises object stores (with S3-compatible APIs) into Snowflake analyses without moving that data. This was a big ask among firms that were either not ready or unwilling to move data into the Snowflake Data Cloud, perhaps for data-residency reasons. (This feature is clearly relevant to many customers, not just those interested in data science.)
  • Iceberg Tables (in development) will introduce support for Apache Iceberg to supplement Snowflake-native tables. This open-source choice opens up the platform to multiple tools in addition to Snowflake, such as Spark, Flink, and Trino. Performance promises to be close to that of Snowflake-native tables without forgoing the governance capabilities they provide (features for Iceberg Tables such as encryption and replication are said to be in development).
  • Snowpark for Python (in public preview) adds to the Java and Scala support previously offered by Snowpark. To Snowflake’s credit, the offering includes popular Python open-source libraries (and package-management and update capabilities) through an integration with Anaconda. (A minimal usage sketch follows this list.)
  • Snowflake Worksheets for Python (in private preview) will support building Python user-defined functions in conjunction with Snowpark for Python.
  • Large-Memory Instances (in development) will support demanding data science workloads, such as feature engineering and model training on large datasets.
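To give a sense of what coding against Snowpark for Python looks like, here is a minimal sketch. The connection parameters and the SALES table are placeholders, and the point to note is that the DataFrame operations are pushed down and executed inside Snowflake rather than on the client.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder credentials -- substitute your own account details.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# Build a DataFrame pipeline; the heavy lifting runs inside Snowflake.
sales = session.table("SALES")
summary = (
    sales.filter(col("REGION") == "EMEA")
         .group_by("PRODUCT")
         .agg(sum_("AMOUNT").alias("TOTAL_AMOUNT"))
)
summary.show()
```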

Most of the big crowd there for the day-two keynote stayed through the first 45 minutes or so, but seats gradually started to empty once the demos involving data scientists and data engineers coding in Python commenced. If you’re not a data scientist, I suppose a demo involving Python coding is not likely to be terribly compelling. By the time the app development half of the keynote began, the crowd was asked, “How many of you are data scientists, data engineers or data developers?” I’d guess less than 20% of remaining attendees raised their hands.

This apparent gap in interest left me wondering: why did Snowflake scarcely mention the SQL Machine Learning feature also announced at the Summit? SQL ML is in the same vein as what several rival vendors offer as either “AutoML” features or built-in data-science algorithms designed for in-database execution. These features are implemented through simple SQL commands, so SQL ML would certainly seem to be very relevant to the mainstream crowd at Snowflake Summit. I’m guessing Snowflake didn’t highlight SQL ML because the (private-preview) feature only supports time-series forecasting at this point.

This brings me to the gap between Snowflake Summit announcements and the realities of what’s generally available today. Snowflake’s development stages typically last four to six months. So something just announced as “in development” isn’t likely to be generally available until a year to 18 months later. Something just entering “private preview” is likely to be eight to 12 months away, and “now in public preview” generally means GA is four to six months away.  

Almost all vendors pre-announce capabilities (some more aggressively than others). My sense is that Snowflake’s data science and app-development announcements are aimed at the larger and more cutting-edge customers (and attracting more of them) that will be invited into private previews. The keynote talks were about preparing the rest of the crowd for capabilities that won’t see general availability for at least another year.

Doug's Analysis

I came away from the Summit impressed by a bold and forward-looking company that has a fast-growing base of enthusiastic customers. The cutting-edge innovators taking part in those private previews include the likes of GEICO, CapitalOne, WarnerMedia, JetBlue, AT&T and Fidelity Investments, all of which presented at Snowflake Summit.

I also came away recognizing that Snowflake is sometimes in the position of having to backfill very mainstream, highly requested features, as Slootman acknowledged the company had to do early in his three-year tenure to step up governance capabilities. Over the last year to 18 months, Snowflake also has been stepping up cost-control features, including consumption analytics, budget guard rails and optimization features, with more such features said to be in the works.

Don’t get me wrong: a good annual customer conference should get customers dreaming about cutting-edge capabilities, but I heard more whoops and hollers when Kleinerman introduced new budget and resource group features and new account replication and pipeline replication features than I did at any point during the data science and data apps presentations.

Forward-looking vision and leadership are important for every vendor, but there also has to be a balance of crowd-pleasing upgrades and pain fixes tied to core functionality. Snowflake delivered a mix of both types of announcements at the Summit, but at times it seemed like the band leader was getting a little too far out ahead of the parade.

Related Research:
Market Overview: What to Look for in Analytical Data Platforms for a Cloud-Centric World
Trend Report: What to Consider When Choosing a Cloud-Centric Analytical Data Platform
ThoughtSpot Rides the Wave of Customer Cloud Transitions

 

 


The CUBE Appearance: Day 1 Keynote Analysis | Snowflake Summit 2022

Doug Henschen, VP & Principal Analyst, Constellation Research, sits down with Lisa Martin and Dave Vellante as they kick off Day 1 at Snowflake Summit 2022 at the Caesars Forum Convention Center in Las Vegas, NV.

Watch the Day 1 keynote analysis: https://www.youtube.com/embed/H7Y3OXcxigM

ConstellationTV Episode 35

ConstellationTV is here to bring you the latest on what is disrupting and reshaping business and technology. In every episode, you’ll hear from our fellow analysts, from leaders across our network of business transformation experts and influencers, and from cutting-edge vendors.

ConstellationTV is a twice-monthly Web series with Constellation Research analysts, broadcast via LinkedIn & Twitter. The show airs live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday. Follow us on Twitter @CRTV_Show & #CRTVShow.

Watch ConstellationTV Episode 35: https://player.vimeo.com/video/718039693