Results

Combining Two Modern Practices Propels E-Commerce Success: Product Content Syndication (PCS) and Product-to-Consumer (P2C)


As I noted last year when describing the emerging Product-to-Consumer (P2C) category, the e-commerce sector has matured to the point where it has accumulated "ever-growing overhead in time, resources, and management attention on making the many moving pieces -- product catalogs, commerce systems, feeds, channels, and marketplaces -- fit together and properly operational in a way that is truly sustainable as a business." E-commerce is projected to be a $7.3 trillion global industry by 2023, but only those prepared to evolve and modernize their ecosystems will thrive in an ever-more digitally sophisticated operational environment.

This is particularly true of the activity that is the lifeblood of e-commerce: the process of optimizing and maximizing product content to create sales. This product content syndication, or PCS, is increasingly seen as central to driving growth. The flow of product data has become both a technical and strategic advantage for omnichannel sales in today’s fast-paced e-commerce space, particularly in competitive environments. In short, having the best, richest, and most accurate set of product listings is now of pre-eminent importance for capturing market share and attracting buyers.

Related Research Report: The New E-Commerce Category of P2C Management

Creating a Winning Product Content Syndication Approach

Since a rich variety of product content, combined with a maximal distribution strategy for that content, makes the difference between merely surviving and actually competing, e-commerce managers are looking beyond the more static product information management approach of the last decade toward more dynamic methods that understand the individual details and optimal operating needs of the full end-to-end ecosystem.

Product Content Syndication with Product-to-Consumer as the Organizing Model for E-Commerce and Digital Business

To help navigate the choices, I've researched some of the top means for syndicating product content and determined which methods are the best and most capable in a new research report, Driving E-Commerce Growth With Product Content Syndication. As noted in the report, syndication puts product content into motion and is therefore the key step that brings an e-commerce ecosystem to life. Selecting the approach and technical solution for PCS is thus a critical decision and a major determining factor in the ultimate success of a digital business and all of its often far-flung constituent elements.

Key Takeaway: Make PCS an Integral Part of an E-Commerce Strategy

Without a more systemic and contextual approach -- I found the overall approach of using P2C management to be the most effective -- every kind of digital business, from traditional e-commerce stores to the hot new D2C channels emerging from manufacturers, faces the same management challenge when it comes to keeping product content updated and optimally used across its ecosystem: not just to maximize the product content, but to fundamentally transform engagement with the market in a far more dynamic, intelligent, and compelling way.

The syndication options outlined in my report range from the most basic and fundamental all the way up to the most holistic vision currently available to e-commerce firms. Brands, retailers, and stores will succeed primarily through effective management of the product content ecosystem. In fact, it is by making PCS a fully integrated aspect of an e-commerce strategy that they can properly realize the insights and knowledge contained within the feedback loops that link them to the market. Harnessing this feedback with PCS in a contextual, data-driven way is how to deliver the highest-impact results.

The best sustainable strategy for succeeding with product content is to contextually address each and every channel via automation. E-commerce firms and digital businesses that can do this from a holistic strategy and a matching platform will be in a better position to seize opportunity and survive rapid shifts in the market. Digital business has moved away from the simplistic models of years past to much more deeply integrated systems that can cope with today’s operating requirements, however sophisticated. For most organizations, developing and operating a more robust, enlightened, and sustainable PCS strategy will be essential to their long-term growth, maturity, and success.

Additional Reading

The Strategic New Digital Commerce Category of Product-to-Consumer (P2C) Management

The P2C Management Vendor ShortList for 2021

Realizing a Decisive Advantage in Digital Commerce Through Economic Flexibility

How Headless Revolutionized Content Management

The Future of Enterprise Content Management

A New Digital Experience Maturity Model for Improved Business Outcomes

How CXOs Can Attain Minimum Viable Digital Experience for Customers, Employees, and Partners 

To Strategically Scale Digital, Enterprises Must Have a Multicloud Experience Integration Stack


Tableau Gets Back on the Conference Circuit in a Time of Change


It was Albert Einstein who said, “the measure of intelligence is the ability to change.” With apologies to Einstein, I’d say the ability to change is also the measure of a good business intelligence (BI) platform and vendor. I had changing customer needs and expectations very much in mind when I attended Tableau Conference 2022 (TC22), May 16-19, in Las Vegas.

In one sense, the TC22 reunion of the “data fam” was a trip back in time, from the 1990s/2000s rock-and-pop-hit keynote soundtrack, to the return of the “Devs on Stage” and “IronViz” sessions, to the packed and playful “Data Village” opening night reception, complete with playground equipment and Elvis impersonators. It was also nostalgic seeing more than 5,000 people streaming into the Mandalay Bay Convention Center (about one-third of the event’s record attendance, but it felt like a TC, just like it used to be).

Outward appearances notwithstanding, Tableau executives acknowledged from the start of TC22 that it’s a time of change for Tableau -- and for BI and analytics more broadly. Much of the change for Tableau is related to its absorption into Salesforce, which acquired the company two-and-a-half years ago. Mind you, the Tableau name is absolutely NOT going away – it’s the analytics unit within Salesforce, just as Mulesoft has retained its identity and integration role since its acquisition by Salesforce. But the one-million-customer-strong Tableau community is definitely gaining closer ties with the 16-million-customer strong Salesforce Community.

CEO Mark Nelson said during his opening keynote that Tableau can now draw on Salesforce as “a superpower.” A concrete example was the announcement of a coming Model Builder feature for Tableau. The no-code predictive-modeling capability is based on Salesforce’s Einstein Discovery, and it will be integrated into Tableau by the end of 2022.

In his Tableau Conference keynote, CEO Mark Nelson says Tableau is drawing on Salesforce as a "superpower."

Another sign of Salesforce’s superpower influence and contributed strengths has been Tableau’s emphasis on adding enterprise capabilities since the acquisition. At TC21, held virtually in November, enterprise-oriented announcements included a Centralized Row-Level Security feature and a Connected Applications feature. The latter enables administrators to set up trusted relationships with external services.

The enterprise push continued at TC22 with the introduction of Advanced Management capabilities, including Customer-Managed Encryption Keys, an Activity Log feature that tracks how individuals are using Tableau, and an Admin Insights feature that uses Tableau analyses to help admins track dataset usage, license adoption, and visualization load times.

What’s in a Name

Some TC announcements seemed more about branding than dramatically new capabilities (a Salesforce influence?), but sometimes it’s important to get names right. For example, the company announced the rebranding of Tableau Online as Tableau Cloud -- in line with Salesforce naming conventions. The name change is justified in that 70% of Tableau customers now deploy first (and often only) in the cloud. So it’s not about taking the Tableau experience “online,” the roots of the old name; it’s about delivering a cloud-first Tableau. That evolution will certainly require more than the cloud-centric accelerators that were added to the Tableau Exchange as part of the Tableau Cloud unveiling. For example, I’m hoping Tableau Cloud will soon take advantage of the Salesforce superpower known as Hyperforce, which would give it the ability to run with consistency across multiple public clouds. Tableau execs say support for Hyperforce is on the roadmap, but target release dates have yet to be announced.   

Another brand change was the unveiling of CRM Analytics, which is the new name for what was previously called Tableau CRM (and before that Einstein Analytics and before that Wave Analytics). The change to CRM Analytics was actually announced in April, but the move was explained more deeply to analysts at the Tableau Conference. The new name was greeted by customers with a “sigh of relief,” according to one executive, because the product has always been a part of Salesforce and was not developed by Tableau. The naming helps make it super clear that you choose CRM Analytics when you want to bring insights and predictive guidance into sales and service rep Salesforce workflows.

Increasingly, CRM Analytics will be available through even more targeted applications, such as the Revenue Intelligence application, introduced in February, and through five industry-focused versions of Revenue Intelligence – for the Salesforce Financial Services, Manufacturing, Consumer Goods, Communications, and Energy and Utilities clouds – set to be released this summer.

Long Live Dashboards

Despite all the talk of dashboards being dead – talk mostly heard at Tableau-rival events – it was clear from the armies of analysts at TC22 and the rowdy throng at the IronViz competition that data visualizations and drillable dashboards are still very much in demand (something natural-language-search-centric vendor ThoughtSpot acknowledged at its May 9-12 Beyond event, which I also attended.)

The whole self-service movement came about because organizations wanted to make it easier to create pre-defined views of business conditions without having to wait in line for IT to create new reports. All the better that visualization-centric dashboards also supported lower-latency monitoring and drill-down analysis, unlike the static reports that they often replaced.

To promote easier consumption by a broad base of business users, Tableau introduced a Data Orientation Pane feature that drew big applause during the popular “Devs On Stage” session. The purpose of the pane is to guide new and novice users in the use of a dashboard by providing descriptions, links to resources (such as how-to-use videos), a list of data sources, and details on which fields are being used, which filters are in effect, and what outliers might indicate.

Seeing Tableau With a Beginner's Mindset

In another sign of change afoot at Tableau, there were presentations on new initiatives led by new executives who are taking a look at Tableau offerings with a beginner’s mindset. For example, a Data Fabric initiative led by Amazon/AWS veteran Volker Metten, now a Tableau VP, Product Management, is aimed at better aligning capabilities including Tableau Prep, Tableau Catalog, Virtual Connections, Row-Level Security and Governance Rules. The goal is to ease data access and data usage while ensuring that the right data gets to the right people.

Another initiative led by Metten is working toward greater consistency and interoperability among Tableau products and capabilities that have matured at different rates. Tableau Prep, for example, currently has one version for Tableau Server and a slightly different version for Tableau Cloud, so the company plans to align the two. The team also wants to ensure that Prep will be able to draw data from Salesforce and publish transformed data directly into CRM Analytics – a capability expected to be available before the end of this year.

Francois Ajenstat, Tableau's Chief Product Officer, announces Tableau Model Builder, which is being built on Salesforce Einstein Discovery.

As for changes within the larger BI/analytics market, there’s a big push today for what’s variously called “actionable analytics” or “actionable insights.” As I’ve often observed in my research and on social media, the problem that Tableau helped to solve 15 years ago was that of having too much data and not enough insight. Now that self-service capabilities are pervasive, the challenge is often having too many insights, sometimes conflicting insights, and not enough clarity on what actions to take.

At TC22, the announcements geared to improving and clarifying insight included the Data Orientation Pane and new “Data Stories” automated plain-language explanations. This natural-language-explanation capability, based on the Narrative Science acquisition, will be showing up within Tableau as well as within Salesforce applications, bringing insight closer to the action. The coming Model Builder feature for Tableau will go even further to move the needle toward predictions and recommended actions.

These examples are clearly just a start on a journey that will bring more change. As long-time Tableau Chief Product Officer Francois Ajenstat told analysts at TC22, Tableau has gone through “many, many” changes since its founding in 2003 and its initial public offering in 2013. Most notably, there was an executive leadership change when Tableau’s stock slumped in 2016 (due largely to the introduction of Microsoft Power BI), yet Tableau managed to reach new heights.

At TC22 we saw a company that is benefitting from the marketing muscle, extensive tech talent, and intellectual property portfolio of its new corporate parent. And because there is new blood and open discussion and exploration of BI/analytics market changes by Tableau executives, I think we also saw a company that is embracing change rather than clinging to the glory days of the self-service era.


The CUBE Appearance: Couchbase Application Modernization Event


A "power panel" of analysts including Tony Baer, Doug Henschen and Sanjeev Mohan join Dave Vellante of The CUBE for coverage of the Couchbase Application Modernization event.

Video: https://www.youtube.com/embed/FqSVJH_a0PY

Monday's Musings: Decision Velocity Will Determine Winners and Losers In A Digital Age


Everybody Wants To Rule The World 

Speed Provides Exponential Advantage

Speed has always been a critical success factor in winning wars on the battlefield. You need to move troops faster, reach targets more quickly, and strike with speed and precision. However, what is often not talked about is how the speed with which decisions are made plays a role in claiming victory. Alexander the Great’s success on the battlefield is often credited to the rapid decision-making capabilities of his armies. Enabled by trust and a decentralized command structure, his troops were able to beat their enemies by “out-decisioning” them. In most cases, his opponents had bureaucratic decision architectures, where minor decisions would travel up multiple levels of command before traveling back down to be executed. In the 330s BC, that could mean it took days to make a decision on the battlefield. Such a centralized control and detailed micro-management approach was no match for Alexander the Great’s nimble teams. British military strategist J. F. C. Fuller, writing on Alexander the Great, explained, “Time was his constant ally; he capitalized every moment, never pondered on it, and thereby achieved his ends before others had settled on their means.”1

The speed of decision making plays a similar role in the age of digital giants. Any organization that can make decisions twice as fast or one hundred times faster than its competitors will decimate them. Time is a friend to those who can make faster, more accurate decisions. While the human brain may take minutes to make a decision and it can take hours for a decision to work through an internal organizational structure, in the digital world machines and artificial intelligence engines can make a decision in milliseconds. Whoever masters these automated decisions at high velocity will have an exponential advantage over those who don’t.

To succeed, businesses must achieve decision velocity: First you have to amass a huge number of users and collect rich data and insights about their interactions—what I call data supremacy. Then you must train artificial intelligence to recognize patterns in that data and automate decisions, processes, and tasks based on those patterns. The higher the number of users, the higher the number of interactions, the higher the amount of data, the higher the quality of insights that AI can learn from, the higher the level of automation of your decisions in your organization. The higher the level of automation of the organization’s decisions, the higher chances you’ll rule your market.
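The loop described above -- accumulate interaction data, learn the patterns in it, then automate the decision so it executes at machine speed -- can be sketched in a few lines. This is a purely illustrative toy under invented assumptions: the user segments, actions, and conversion outcomes are made up and do not come from any system discussed here.

```python
from collections import Counter

# Toy interaction log: (user_segment, action_taken, converted)
interactions = [
    ("new", "show_discount", True),
    ("new", "show_discount", True),
    ("new", "show_fullprice", False),
    ("returning", "show_fullprice", True),
    ("returning", "show_discount", False),
    ("returning", "show_fullprice", True),
]

# "Train": estimate a conversion rate per (segment, action) pair from the data.
trials = Counter()
wins = Counter()
for segment, action, converted in interactions:
    trials[(segment, action)] += 1
    wins[(segment, action)] += converted

def decide(segment, actions=("show_discount", "show_fullprice")):
    """Automated decision: pick the action with the best observed
    conversion rate for this segment -- a constant-time lookup,
    i.e. a millisecond-scale decision instead of a meeting."""
    def rate(action):
        n = trials[(segment, action)]
        return wins[(segment, action)] / n if n else 0.0
    return max(actions, key=rate)

print(decide("new"))        # new visitors converted better with a discount
print(decide("returning"))  # returning visitors converted at full price
```

The point of the sketch is the shape of the loop, not the model: more interactions sharpen the rate estimates, which in turn improve every subsequent automated decision -- the compounding effect the paragraph above describes.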

It All Starts With Quality Data - Lots of It

Data is the foundation and the first priority for every organization’s growth and development. You must find and harvest all relevant sources of data and control, if not own, the upstream raw data sources. On the downstream side, you must control access to how the data is shared, monetized, and controlled.  This means identifying where the biggest pools of quality data reside and understanding how data is consumed inside the organization.

However, the battle for data is often misunderstood. Many think data supremacy is only about accumulating the greatest troves of data. But having the most data does not necessarily mean you win. This is a battle for the most insight from well-curated, highly contextual data. Quality trumps quantity. The real goal is to understand the relationships among data. You want to learn how data points interact with one another and what patterns arise from these interactions.

Where does the raw data come from? Successful organizations mine their organizations top to bottom, harvesting data from enterprise transactional systems like their accounting systems, supply chain, operations, and performance data. Then they pair their baseline back office data with front office data that includes customer interactions from sales, marketing, service, and commerce. They also mine “machine-generated data”—log files from equipment—and external sources such as social media feeds and feedback surveys.

The next source of data organizations rely on is user-generated. Every organization gets excited whenever users provide data on their own, whether through an online resume, a social profile, a customer account for a website, payment information, location data when they “check in” to a restaurant or shop, or photos that can be used for facial recognition and image recognition. The more organizations drive engagement with their users, the richer the data sets they collect and the more opportunities they have to find insight in the data.

These insights come from correlations, associations, and relationships—their “interactions”—among all the data produced and captured. Successful organizations are masters at identifying “signal intelligence,” the meaningful patterns or trends that emerge from the cacophony of data interactions. And they use this signal intelligence to make all sorts of “precision decisions,” from how much to charge for a product, to what customers ought to be targeted for what marketing campaign, to what product should be recommended to what customers.
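As a deliberately tiny illustration of turning data "interactions" into a precision decision, the sketch below counts which products are co-purchased and recommends the item most often bought alongside a given product. The baskets and product names are invented for this example, not drawn from any real dataset.

```python
from itertools import combinations
from collections import Counter

# Toy purchase baskets -- the "interactions" among data points.
baskets = [
    {"camera", "sd_card"},
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"laptop", "mouse"},
]

# Count how often each pair of products is bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(product):
    """Precision decision: recommend the item most frequently
    co-purchased with `product` in the historical data."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None

print(recommend("camera"))  # "sd_card"
```

The co-occurrence counts are the "signal intelligence" here: a pattern extracted from raw interaction data that directly drives a market-facing decision.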

Thus, the combination of good analytics, automation, and AI will help organizations improve decision velocity and carry the learnings forward throughout the enterprise.


Your POV

Have you organized your enterprise to optimize for decision velocity? Ready to move from data to decisions?

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org. Please let us know if you need help with your strategy efforts. Here’s how we can assist:

  • Developing your metaverse and digital business strategy
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales.

Disclosures

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website. * Not responsible for any factual errors or omissions.  However, happy to correct any errors upon email receipt.

Constellation Research recommends that readers consult a stock professional for their investment guidance. Investors should understand the potential conflicts of interest analysts might face. Constellation does not underwrite or own the securities of the companies the analysts cover. Analysts themselves sometimes own stocks in the companies they cover—either directly or indirectly, such as through employee stock-purchase pools in which they and their colleagues participate. As a general matter, investors should not rely solely on an analyst’s recommendation when deciding whether to buy, hold, or sell a stock. Instead, they should also do their own research—such as reading the prospectus for new companies or for public companies, the quarterly and annual reports filed with the SEC—to confirm whether a particular investment is appropriate for them in light of their individual financial circumstances.

Copyright © 2001 – 2022 R Wang and Insider Associates, LLC All rights reserved.

Contact the Sales team to purchase this report on an à la carte basis or join the Constellation Executive Network


News Analysis: Inside Disney's Earnings and Streaming Wars Among A Tech Market Rout


Disney+ Logo 

Disney's Performance Shows Strength and Depth Of Portfolio

Quality balance sheets and predictable revenues are key to sustaining stock prices during the current market rout. Investors care more about future forecast guidance than current quarterly performance. While Disney's Q1 2022 showed 23% YoY gains with $1.4 billion in operating profit, guidance has been muted in spite of the near-pandemic comparables.

Disney+ Continues To Grow While Netflix Falters

Subscriber growth slowed in Q1, but Disney's streaming offering still grew revenue 5% and added 7.9M subscribers for a total of 137.7M. Disney+ as a standalone offering is the clear #3 in the market. When the complete Disney streaming offerings are tabulated, they now surpass Amazon Prime with 205 million total subscribers.
 
Good news for investors: Disney is contemplating a new ad-supported subscription tier and continued international expansion. International expansion will definitely drive down average revenue per user. However, the streaming player faces additional headwinds as content libraries are pulled back; lack of content availability may affect near-term subscriber adds. Further, costs are up as Disney plans $32B in content spend.

Figure 1. The Key Streaming Players

Netflix 221.8M
Amazon Prime 200M
Disney 137.7M
HBO Max 73.8M
Paramount+ 56M
Hulu 45.3M
Discovery+ 22M
Apple+ 20M

Parks

The park business showed massive demand for revenge travel. Disney doubled its revenues to $6.8B as hotel, cruise, and concessions showed growth. Disney’s parks business is a shining light for reopening, but inflation will impact Disney later in the year as labor, energy, and supply chain costs eat at profit margins. Disney could see more growth upside if Asia finally opens up; Hong Kong is open, but Shanghai remains closed.

Studios

Movie openings will be a bright spot, though, as this is one revenue stream with room to grow as Americans flock to movie theaters for openings this summer. Disney could see upside with future box office hits.

The Bottom Line

Meanwhile, the culture wars continue to roil Disney internally, with 200 employees protesting a move to Florida and the backlash-driven war with the state continuing. Overall, Disney has weathered the streaming wars well during lockdown and is poised for success as more re-openings occur. Add potential metaverse opportunities, and expect Disney to move from media giant to tech giant in the next five years.

Your POV

Who do you think will win the streaming wars? Where do you see Disney in the future of the metaverse?



ThoughtSpot Rides the Wave of Customer Cloud Transitions


The return to live tech conferences is heating up, and so, too, is competition in the always-competitive business intelligence (BI) and analytics market. At its May 9-12 Beyond.2022 event held in Las Vegas, ThoughtSpot made it clear that it is accelerating growth by focusing on companies that are moving to modern cloud data platforms offered by the likes of Snowflake, Databricks, Google and AWS.

An innovator of natural language querying and AI-augmented BI and analytics, 10-year-old ThoughtSpot made a cloud transition of its own from 2020 into 2021 by moving from selling software that customers had to deploy to offering its software as a service. Many of ThoughtSpot’s largest customers still self-manage ThoughtSpot’s platform in their own data centers, but the vendor’s double-digit growth is now centered around SaaS and companies moving to those hot cloud data warehouses and lakehouses.

To land new enterprise customers, ThoughtSpot is going to market in lockstep with partners including Snowflake and Databricks. Executives from both companies joined the Beyond.2022 keynote and were there to rub elbows with joint ThoughtSpot customers including Disney Streaming, Verizon, Merck, CapitalOne and others.

ThoughtSpot has aligned itself with cloud data platforms and touts a "new playbook" for insight-enabling a broader base of business users.

The big news at Beyond.2022 was that ThoughtSpot is now also going after midmarket customers with new “Team Edition” and “Pro Edition” offers priced at $95 and starting at $2,500 per month, respectively. Both editions include unlimited users but differ in their limits on data capacity and user groups: the Team edition’s only limitations are the scope of data that team members can explore and the number of groups they can create for administrative purposes, while the Pro edition supports up to five such user groups. The Team and Pro editions are delivered through self-service cloud deployment and community support and would not have been possible before ThoughtSpot’s switch to SaaS.

ThoughtSpot has long favored unlimited-seat subscription approaches rather than pricing per seat because the focus is on reaching as many users as possible – particularly business users who tend to take to its natural language query and computer-augmented analysis approaches. The subscription approach is a key differentiator versus competitors such as Tableau and Power BI, and it also applies at the enterprise level, where ThoughtSpot is moving to consumption-based (per-query) pricing, again with unlimited seats and no pricing barriers to analyzing more data (other than the cloud platform’s storage costs and query consumption).

ThoughtSpot’s unlimited-users, no-shelfware approach appeals to high-scale customers including Verizon. Presenting during the Beyond.2022 keynote, Dr. Ansar Kassim of Verizon said the company’s ThoughtSpot deployment now surpasses 10,000 users – well beyond the analyst class and into the business-user mainstream.

Inviting customers to reimagine the next “decade of data,” ThoughtSpot co-founder and chairman Ajeet Singh encouraged customers and would-be customers to “throw out the old playbook” and move to a “true self-service” approach in which business users can explore and analyze data for themselves. That’s done through the vendor’s search-driven interface, which is focused on uncovering what happened (a.k.a. descriptive analytics) and the SpotIQ interface, which brings AI and augmented capabilities to bear to uncover exceptions and patterns and why they occurred (a.k.a. diagnostic analytics).

ThoughtSpot's search-driven and augmented SpotIQ interfaces are geared to intuitive data exploration, but it also delivers dashboard-like "Liveboards" offering pre-defined views of a business.

ThoughtSpot executives acknowledged that customers still want SQL tools for developers and the pre-defined views of the business provided by dashboards. What’s more, many customers don’t want to have to support two vendors to get everything they need. Toward that end, ThoughtSpot demoed stepped-up dashboarding capabilities and new tools for SQL-savvy developers including a data workspace, new ELT templates, and prebuilt “data block” connectors to popular data sources. Conventional, pixel-perfect reporting remains a gap in the ThoughtSpot portfolio, though many companies now favor dashboards coupled with exception-based alerting and mobile, ad hoc analysis over scheduled delivery of static PDF reports.  

Another initiative that seems to be working for the company is its 18-month-old “ThoughtSpot Everywhere” embedded program, which is aimed at helping software and SaaS companies to seamlessly weave ThoughtSpot analytical capabilities into their own offerings. On this front ThoughtSpot offers both the consumption-based (per-query) model and a capacity-based (per-row) pricing model. 

Whether a relatively small, 700-employee challenger like ThoughtSpot can make serious inroads against giants like Tableau and Microsoft Power BI remains to be seen. The jury is also out on whether ThoughtSpot can successfully meet all BI and analytics needs for the majority of its customers (I talked to a few who are keeping legacy reporting platforms in place). Nonetheless, I came away from Beyond.2022 with a sense of a focused company that is poised to ride the coattails of the larger trends toward cloud data platforms and the desire to enable business users to use insights to drive action and make better decisions.

Related Reading:
Market Overview: What to Look for in Analytical Data Platforms for a Cloud-Centric World
Big Idea: Why Data-Driven Innovators Choose Embedded Analytics
Trend Report: It’s Time to Prepare for the E in Environmental, Social, and Governance Initiatives

 


FinancialForce's Spring 2022 Release Defines the Future of FP&A In Services

Economic uncertainty sends shock waves throughout businesses, with service organizations bearing the brunt. The recent drastic drop-off in Netflix subscribers is a case in point. Services CFOs say there is an urgent need to track how well their overarching planning strategies linking finance and operations perform. However, getting the data to analyze has been challenging for even the largest services businesses.

As a result, CFOs need Financial Planning & Analysis (FP&A) integrated with operational planning applications to make it easier to track plan performance across all P&Ls and financials. FinancialForce's decision to launch a fully featured FP&A application on its ERP Cloud platform shows it reads the services market clearly and listens to its customers' CFOs on what matters most.

 

CFOs Want To Know The Financial Impact Of Every Planning Decision

 

Even during economic stability, finance teams struggle to get operations planning teams the data they need to predict the financial outcomes of decisions. Line-of-business leaders look to finance for accurate, detailed information on the financial implications of every planning decision. By having FP&A use the same data that accounting, reporting, and planning use, CFOs, COOs, and their teams get greater visibility and control over every aspect of budgeting and forecasting.
 

One of FP&A's greatest shortcomings in the past was relying on siloed financial data alone, with little visibility into operational planning. Financial teams need access to all available data across finance and operations to do their jobs well and create accurate forecasts. Getting FP&A right on any ERP platform needs to start with the goal of delivering integrated business planning. Sales management and their teams also need visibility into FP&A reporting and analysis to manage revenue. FinancialForce's long experience on the Salesforce platform, combined with the integration expertise that Salesforce's MuleSoft acquisition brought to the company four years ago, will increase the probability of its FP&A solution gaining adoption.
 

Services companies' CFOs are grappling with new economic uncertainties every week. As a result, they're most interested in getting greater visibility and control over the planning process, including version control, more automated multi-planning options, and more real-time enterprise-wide collaboration, all on a single platform. FinancialForce's DevOps and product management teams deserve credit for identifying these challenges and including them in their FP&A application delivered in the Spring 2022 release. 

FinancialForce's long-awaited FP&A solution enables analysts to create multiple what-if scenarios using calculation rules and mass functions, create dynamic plans and stress-test assumptions and better anticipate their return by area and investment.
 

The Future Of FP&A Is An Integrated Cloud

Service organizations are quicker to migrate to the cloud than their product-based counterparts. That's because their procurement, order-to-cash, and supply chain management workflows tend to be less complex than those of product-based businesses. Services organizations also need financial management, procure-to-pay, and Professional Services Automation (PSA) all on the same platform to support operational planning with FP&A.
 

FinancialForce's Multi-X functionality is expanded in the Spring 2022 release to simplify the consolidation of financial statements and meet the needs of multi-entity organizations. In the latest release, it's possible to record taxes due from intercompany tax transactions, accelerating the intercompany process for taxation and reporting. The Spring 2022 release also streamlines the creation of multi-company sales invoices and simplifies consolidated financial statement preparation with consolidation group structure capabilities.
 

Multi-X enables the recording and sharing of financial data across a multi-tier or multi-entity business.
 

New localization features that are essential to running a global business were added, including support for Switzerland, Denmark, Finland, and Austria, as well as enhanced business operations in Germany and Australia. In addition, Multi-X adds multi-company invoicing and advanced invoice consolidations for multi-revenue billing. Calculating and recording tax on intercompany transactions and enabling the cash matching process across companies are also supported.
 

FP&A's future is an integrated cloud, further validated by FinancialForce's launch of ERP Cloud, Professional Services Cloud, and enhancements to its Customer Success solutions. "In today's business environment, organizations must be able to respond to disruptions quickly while continuing to innovate and deliver tangible outcomes to their customers," said Dan Brown, Chief Product and Strategy Officer at FinancialForce. "Our Spring 2022 release gives our customers a richer toolset to help pursue their primary goal, delivering exceptional customer outcomes while improving the customer experience across the opportunity-to-renewal journey."
 

New Professional Services (PS) Cloud additions in the Spring 2022 release include customer-requested improvements to skills and resource management, services estimating, and project management capabilities. FinancialForce's customers have also requested improved resource management to scale their efforts to train and retain their workforce. As a result, the Spring 2022 Release adds intelligent automation to the staffing process by enabling auto-assignment of resource requests that meet specific criteria and an expanded capability to model ideal staffing scenarios across a project, opportunity, or region. These enhancements improve PS Cloud's resource optimization capabilities and enable resource managers to deploy ever larger and more complex teams efficiently and cost-effectively.

 

Conclusion

 

Services organizations are looking for cloud-based professional services ERP systems that deliver greater forecast accuracy, faster forecasting and budgeting, and improved accountability, visibility, and control. Integrated clouds are the future of FP&A for all these reasons, and for the need all services organizations have to improve revenue and operations performance. In addition, given today's growing economic uncertainty, CFOs also want better predictability and better risk management strategies while supporting more collaboration. All these factors combined are defining the future of FP&A in an integrated cloud, which is what FinancialForce has been building for more than a decade on the Salesforce platform.

 

 

 


Atlassian Outage - Thoughts on What to Do When Your Provider Goes Down

[With comments from Holger Mueller]

Update: 4/29/2022

Atlassian hired a new CTO: Rajeev Rajan, from Meta (and formerly Microsoft). While this by itself is not the complete answer, it is a solid first step – bringing in someone with a strong enterprise engineering background to address some of these issues. As I stated in the conclusion of this blog, "Atlassian cloud has growing pains." Let's hope he is the answer, and that Atlassian continues to address the issue at hand to instill confidence in its customers.

--------------------------------------------------------

The latest Atlassian outage goes to show that every cloud provider is prone to unplanned downtime sooner or later. While every company strives to achieve that unicorn status of zero downtime, it is almost impossible to achieve that in the face of “Unknown Unknowns.” Especially with the need and demand for “always-on,” there are more opportunities than ever for things to break, and incidents do not wait for a convenient time.

What actually happened?

On April 4th, a small portion of Atlassian customers (roughly 400 of about 226,000) experienced an outage across a number of Atlassian Cloud services: Jira Software, Jira Work Management, Jira Service Management, Confluence, Opsgenie, Statuspage, and Atlassian Access. While the number of customers affected was low, those customers lost access to all of their Atlassian services, and the number of affected users could run into the hundreds of thousands. Enterprises that depend on the Atlassian cloud suite for DevOps, enterprise service management, or incident management were at a standstill until the issues were resolved. (Per Atlassian, the issues were finally resolved on 4/18/22, after two weeks.)

The outage, which began on April 4th, took almost two weeks to resolve for some customers. The timing couldn’t have been worse: the executive team was pitching how great its cloud services are, and how it is building an enterprise sales and service model for large enterprise customers, at the Atlassian Team ’22 conference in Vegas during April 5-7. All this was happening while Atlassian executives were on stage talking about how they are building a resilient cloud bar none.

Why is it bad?

This outage from Atlassian is pretty bad for a few reasons.

  1. They were putting on a big show in Vegas, talking about the strategy, direction, vision, and mission for their cloud services while those cloud services were down. I was attending the conference in person, and my tweets and many others’ were met with angry replies from customers. Bad publicity.

  2. Atlassian as a company has a primary solution set mainly focused on helping customers prepare for exactly such unplanned outages: agile development, the DevOps cycle, issue/bug tracking, incident management, ITSM, incident response, Statuspage, etc.

  3. The issue was self-inflicted. The damage was not due to misconfiguration, hacking, or a dependency on another provider’s services. As the Atlassian CTO stated, two critical errors were committed. First, due to a communication gap, the entire cloud sites of certain customers, with all their apps, were deactivated instead of one specific app. Second, the scripts deleted the data “permanently” – an option intended for compliance reasons – rather than “temporarily recoverable.” The combination of these two errors led to the colossal mishap.

  4. They took a long time to respond with something meaningful. Granted, they were all busy with the big show in Vegas. Atlassian claims it figured out the issue and root cause within hours, but the messaging to customers and the status pages were not clear on the situation. Affected customers received only a cryptic “your service is down” message with no ETA. Only about a week later did the Atlassian CTO write a detailed post on what went wrong. Until then, customers were scrambling to figure out what had happened and were patching the gap with spreadsheets, Word docs, and other collaboration tools like Slack.

  5. Atlassian announced that it will no longer sell new licenses for on-prem server installations (though it will continue its Data Center offerings) and will discontinue support for the on-prem server edition in 2024 for all existing customers, effectively forcing server customers to move to the cloud within two years. That makes sense on Atlassian’s part – maintaining only the SaaS and Data Center versions rather than a fragmented solution set.

  6. Incidentally, the Atlassian CTO stated in his last blog post (before this fiasco): "At our engineering Town Hall meeting, I announced that we were in a "Five-Alarm Fire" due to poor reliability and cloud operations. Our customers needed to trust that we could provide the next level of reliability, security, and operational maturity to support our business transition to the cloud in the coming years." That particularly called for reliability, security, and operational maturity to support the transition to the cloud. With this incident, they failed in two of those three categories, unfortunately.

  7. Finally, more importantly, Atlassian claims an SLA of 99.99% for Premium and 99.95% for enterprise cloud products. They also claim a 6-hour RTO (recovery time objective) for tier 1 customers. Unfortunately, neither held up this time.

Why did it happen?

Rather than me trying to paraphrase, you can read Atlassian CTO’s blog that explains what happened in detail here.

What now?

Accept that no cloud service is invincible. High-profile outages are becoming more and more common. Even the mighty AWS had their US-East region down for many hours recently. Unplanned downtimes are expected and will happen at the most unfortunate time – holidays, nights, weekends, or during flagship events. The following steps, while not a complete solution, can help mitigate the situation somewhat:

  1. SLAs: Most cloud-based SaaS SLAs are written with four or five 9s (such as 99.999%). While those contracts won’t stop incidents from happening, they at least give some financial recourse when such events occur. It might be preferable to write large SaaS contracts around business outage costs rather than technical outage and data-loss costs, but most SaaS vendors may not accept such language. The higher the penalty for such incidents – or for long resolution times – the faster they get attended to. In events like this one, where the vendor restores a few customers per batch, you want to be first in line, and your contract needs to reflect that.
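
  To put those 9s in perspective, here is a quick back-of-the-envelope calculation (the function below is illustrative, not drawn from any vendor's contract) of how much downtime each SLA tier permits per year:

  ```python
  def allowed_downtime_minutes(sla_percent, period_minutes=365 * 24 * 60):
      """Minutes of downtime a given SLA percentage permits over the period (default: one year)."""
      return period_minutes * (1 - sla_percent / 100)

  # Even "three nines" allows well under a day per year; a two-week outage
  # (~20,160 minutes) blows through every one of these budgets.
  for sla in (99.9, 99.95, 99.99, 99.999):
      print(f"{sla}% -> {allowed_downtime_minutes(sla):.1f} minutes/year")
  ```

  Four nines – the SLA Atlassian claims for Premium – allows roughly 53 minutes of downtime per year, which is why a multi-week outage is so far outside contractual bounds.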

  2. Have a backup option. Ideally, have a backup solution from a different provider or on a different cloud for such occasions. That can be expensive, and multi-cloud solutions are easier said than done, but if your business is that critical, those options must be considered.

  3. Have a plan for such extended downtime. When long outages happen, part of the issue is the lost productivity of your employees, partners, and services. Your business cannot be at a standstill because your service provider is down. Whether it is a backup service or document-based notes, there has to be a plan in place ahead of time to act on.

  4. A lot of Atlassian competitors used this opportunity to pitch their solutions on Twitter, LinkedIn, and other social media, claiming this would NEVER happen to their product lines. Don’t jump from the frying pan into the fire in the hour of immediate need just because of this incident. However, it is worth evaluating other worthy offerings to see if they might fit your business model better.

  5. As discussed in my Incident Management report, “Break Things Regularly” and see how your organization responds. Most digital enterprises make a lot of assumptions about their services and breaking things regularly is a great exercise for validating those assumptions. A couple of options discussed in my report involved either breaking things and seeing how long it will take for support/SRE/resiliency teams to fix it (Chaos monkey theory-based), or creating game-day exercises (from AWS well-architected principles) to make teams react to a controlled exercise to create a “muscle memory” to react fast in such situations. Assumption is a dangerous thing in the digital economy. You are one major incident away from disaster, which can happen anytime.

  6. Measure what matters. If you are just checking the “health” of your services and your providers’ services, your customers will unearth a lot of incidents before your SRE team can. I discuss many instrumentation, observability, and real-world customer-situation monitoring ideas in my Incident Management report.

  7. Review the SaaS vendor’s resiliency, backup, failover, restoration, architecture, data protection, and security measures in detail. A mere claim of x hours of restoration time is not good enough. If you have architected a reliable solution on-prem or on another cloud, make sure the SaaS vendor’s plan and design at the very least match or exceed your capabilities.

  8. When deleting “permanently,” make sure it is a staged delete, even if the deletion is for compliance reasons. A gestation period of 24 or 48 hours helps ensure the deletion didn’t do more damage than intended.
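
  That staged-delete idea can be sketched as a simple soft-delete pattern (the names `SiteRecord`, `soft_delete`, and the 48-hour window are my illustrative assumptions, not Atlassian's implementation):

  ```python
  from dataclasses import dataclass
  from datetime import datetime, timedelta, timezone
  from typing import Optional

  GRACE_PERIOD = timedelta(hours=48)  # gestation window before a purge becomes final

  @dataclass
  class SiteRecord:
      site_id: str
      deleted_at: Optional[datetime] = None  # set on soft delete, cleared on restore

  def soft_delete(site: SiteRecord) -> None:
      """Mark the site deleted; data stays recoverable until the grace period lapses."""
      site.deleted_at = datetime.now(timezone.utc)

  def restore(site: SiteRecord) -> bool:
      """Undo a soft delete if the grace period has not yet expired."""
      if site.deleted_at and datetime.now(timezone.utc) - site.deleted_at < GRACE_PERIOD:
          site.deleted_at = None
          return True
      return False

  def purge_eligible(site: SiteRecord) -> bool:
      """Only sites past the grace period may be permanently destroyed."""
      return (site.deleted_at is not None
              and datetime.now(timezone.utc) - site.deleted_at >= GRACE_PERIOD)
  ```

  The point of the pattern: nothing is physically destroyed until `purge_eligible` turns true, so a runaway script has a day or two in which its damage is fully reversible.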

  9. Before you execute a script that performs mass operations, test it many times to confirm it produces the intended results. Triple-check mass scripts, delete operations, and any major modifications.
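
  One common safeguard, sketched here with hypothetical names, is a dry-run mode that reports what a mass operation would do before anything is actually changed:

  ```python
  def run_mass_operation(item_ids, operation, dry_run=True):
      """Apply `operation` to every item; in dry-run mode, only report the plan."""
      planned = list(item_ids)
      if dry_run:
          # Surface the blast radius for human review before anything is touched.
          print(f"[DRY RUN] would affect {len(planned)} items, e.g.: {planned[:5]}")
          return []
      processed = []
      for item in planned:
          operation(item)
          processed.append(item)
      return processed
  ```

  Running once with `dry_run=True` and sanity-checking the reported count (one app, or 400 entire sites?) before flipping the flag is exactly the kind of check that would have caught the scope error in this incident.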

  10. Finally, as discussed in my report, take ownership and communicate well – customers appreciate it. While such incidents do happen occasionally, how the vendor communicates with customers, how soon it fixes the incident, how detailed its postmortem is, and, most importantly, what it does to prevent such incidents in the future matter more than the incident itself.

Bottomline

Atlassian cloud has growing pains. It may be a tough pill to swallow, but they need to go back to the drawing board and reassess the situation. They need to take a hard look at their cloud architecture, processes, and operations – and, more importantly, at their mandate to convert all customers to the cloud or Data Center by 2024. It is a shame, as I like their suite of products: a solid product line that has performed well for large enterprise customers on the on-prem Data Center version for many years.

They also need to automate more of their cloud operations, such as restoring deleted customer sites in one batch rather than in painfully small batches. They should have fully automated rollbacks for any changes, whether configuration, functions, features, or code. If something doesn’t work, they should be able to roll back to a previous version automatically within hours – not days or weeks.

This happens when companies grow too big, too fast. An added complexity in Atlassian’s case is the list of acquisitions it is trying to bring together into one cloud platform.

This too shall pass. How they respond to this event – putting new processes, controls, approvals, automation, and, more importantly, automated rollbacks in place – will tell whether they can be trusted going forward. Once the picture is clear, enterprises can decide whether it is worth continuing with Atlassian or looking at some worthy alternatives.

It is too soon to tell at this point.

PS: I had a call with their head of engineering, Mike Tria, on 4/19/2022, who addressed some of these concerns and explained in detail some of the measures they are taking to fix this issue so it won’t happen again, including staged permanent deletes, operational quality, mass auto-rollbacks, customer restoration across product lines, etc. He also discussed at length what they are doing for the customers who cleverly instantiated parallel instances while waiting for the issues to be resolved, and how those can be merged back into the main service.

I was assured by Atlassian that this incident will be reviewed in detail and the measures they are taking going forward will be addressed in detail in a PIR (Post-mortem Incident Report) that is scheduled to be released soon (before the end of the month per Atlassian).

 

 

 

 
