
Confidence in computing

I recently wrote about the inaugural Confidential Computing Summit, a milestone in the development of this new field.
In this piece I provide more context on the Confidential Computing movement and reflect on its potential for all computing.
Acknowledgement and Declaration: I was helped in preparing this article by Manu Fontaine, founder of new CCC member Hushmesh, who did attend the summit. I am a strategic adviser to Hushmesh.

Down to security basics; all the way down

Confidential Computing is essentially about embedding encryption and physical security at the lowest levels of computing machinery, in order to better protect the integrity of information processing. The CC movement is a logical evolution and consolidation of numerous well understood hardware-based security techniques.
Generally speaking, information security measures should be implemented as far down the technology stack as possible; as they say, near to the “silicon” or the “bare metal”. By carrying out cryptographic operations (such as key generation, hashing, encryption and decryption) in firmware or in wired logic, we enjoy faster execution, better tamper resistance and above all, a smaller attack surface.
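To make the principle concrete, here is a minimal sketch in Python (using the open-source cryptography package) of the idea that keys should be generated and used inside a sealed boundary, so that only signatures, never key material, cross it. The SecureElement class is a hypothetical stand-in for a smartcard, TPM or TEE; real hardware exposes a similarly narrow interface in firmware or silicon rather than in application code.

```python
# Toy illustration of the "keys stay behind the boundary" principle.
# SecureElement is hypothetical; real smartcards, TPMs and TEEs do this
# in firmware or wired logic, not in Python.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class SecureElement:
    def __init__(self) -> None:
        # Key is generated inside the boundary and never exported.
        self._private_key = Ed25519PrivateKey.generate()

    def public_key_bytes(self) -> bytes:
        # Only the public half ever leaves the boundary.
        return self._private_key.public_key().public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )

    def sign(self, message: bytes) -> bytes:
        # Applications pass data in and get signatures out;
        # they never see the private key itself.
        return self._private_key.sign(message)


element = SecureElement()
signature = element.sign(b"transaction record #1001")
print(len(signature), "byte signature; the private key never left the element")
```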
The basics are not new. Many attempts have been made over decades to standardize hardware and firmware-based security, and make these measures ubiquitous to software processes and general computing. Smartcards led the way.
Emerging in Europe in the 1990s, the whole point of a smartcard was to provide a stand-alone, compact, well-controlled computer, separated from the network and regular computers, where critical functions could be carried out safely. Cryptography was of special concern; smartcards were capable enough to offer a complete array of signing, encryption, and key management tools, critical for retail payments, telephony, government ID and so on.

The smarts

So the discipline of smartcards led to clearer thinking about security for microprocessors in general, and spawned a number of special purpose processors.
  • From the early 1990s, the digital Global System for Mobile communications (GSM) cell phone system featured SIM cards — subscriber identification modules — essentially cryptographic smartcards holding each individual’s master account number in a digital certificate signed by her provider. The start and end of each new phone call are digitally signed in the SIM, providing secure metadata to support billing (which arguably makes the SIM the world’s first cryptographically verifiable credential).
  • In 2003, Bill Gates committed Microsoft to smartcard authentication, writing to thousands of customers in an executive email that “over time we expect most businesses will go to smart card ID”.
  • ARM started working on the TrustZone security partition for its microprocessor architecture sometime before 2004.
  • Trusted Platform Modules (TPMs) were conceived as security co-processors for PCs and all manner of computers, to uplift cyber safety across the board (if only adoption were as widespread as anticipated).
  • NFC (near field communications) chip sets enable smartphones to emulate smartcards and thus function as payment cards. Security is paramount or else banks wouldn’t countenance virtual clones of their card products. But security was weaponized in the first round of “wallet wars” around 2010, with access to the precious NFC secure elements throttled, and Google forced to engineer a compromise “cloud wallet”.
Now, security wasn’t meant to be easy, and hardware security especially so!
Standardization of smartcards, trusted platform modules and the like has been tough going, for all sorts of reasons which need not concern us right now.
Strict hardware-based security is also unforgiving. The FIDO Alliance originally adopted a strenuous key management policy where private authentication keys were never to leave the safety of approved chips. But the impact on users when their personal devices need to be changed out is harsh, and so FIDO has pivoted — very carefully mind you — to “synchronized” private keys in the cloud, a solution branded Passkeys.

TEE time!

The Confidential Computing Consortium (CCC) is a relatively new association comprising hardware vendors, cloud providers and software developers aiming to “accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards”.
The CCC is certainly not the only game in town, with the long running Trusted Computing Group (TCG, est. 2003) continuing to develop standards for the important Trusted Platform Module (TPM) architecture. Membership of these groups overlaps. I do not mean to compare or rank security industry groups; I merely take this opportunity to report on the newest thinking and developments.
So TEEs sit at the centre of Confidential Computing.
The CCC offers the following definition:
Confidential Computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data.
So Confidential Computing crucially goes beyond conventional encryption of data at rest and in transit, to protect data in use.
Attestation of the computing machinery is a central idea. This is the means by which any user or stakeholder can tell that a processing module is operating correctly, within its specifications, with up-to-date parameters and code. The CCC updated its definition of confidential computing, not long before the CCC Summit, to make attestation essential.
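To illustrate what attestation involves, here is a rough, hypothetical sketch in Python (the names and report format are mine, not any vendor's): a verifier checks that a signed "quote" from a TEE really comes from a trusted hardware key, and that the reported measurement (a hash of the loaded code) matches an expected value. Real schemes such as Intel SGX/TDX, AMD SEV-SNP or TPM quotes add certificate chains and richer report formats, but the core check is the same.

```python
# Minimal attestation-verification sketch (illustrative only; the quote
# layout and key handling below are simplified assumptions).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# The measurement we expect: a hash of the approved enclave code.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved enclave binary v1.4").hexdigest()


def verify_quote(quote: dict, hardware_root_key: Ed25519PublicKey) -> bool:
    """Accept the enclave only if the quote is genuine and reports the expected code."""
    signed_payload = (quote["measurement"] + quote["nonce"]).encode()
    try:
        # 1. Was the quote signed by the trusted hardware root?
        hardware_root_key.verify(quote["signature"], signed_payload)
    except InvalidSignature:
        return False
    # 2. Is the environment running exactly the code we expect?
    #    (The nonce guards against replay of an old quote.)
    return quote["measurement"] == EXPECTED_MEASUREMENT
```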

There’s more to CC than meets the eye

Confidential Computing as a field has yet to register with most IT professionals. I find that if people know anything at all about CC, they tend to see it in terms of secure storage, data vaults, and “hardened” or “locked down” computers.
But there is so much more to it.
At Constellation Research we have always taken a broad view of digital safety, beyond data privacy and cybersecurity. Safety must also mean confidence, even certainty for practical purposes. We believe stakeholders must have evidence for believing that a system is safe. Safety is about both rules and tools.
Security and privacy are always context dependent. Safety is judged relative to benchmarks, so we need to know the specifics behind calling a system fit for purpose.
What are the conditions in which a system is safe? What has it been designed for? What standards apply and who determined they are being followed? And do we know the detailed history of a system, from its boot-up through to the present minute?
This type of thinking leads to the need for finer grained signals to help users be confident that a system is safe and that given information is reliable. Data today has a life of its own, created from complex algorithms, training sets and analytics, typically with multiple contributions over time. We often need to know the story behind the data.
This is where CC comes in, with its explicit focus on traceability, accountability and evidence (see the CCC’s April 2023 blog Why is Attestation Required for Confidential Computing?).
With Confidential Computing we should be able to account for the entire life story of all important devices and all important data, and make those details machine readable and verifiable.

Recapping the CC Summit

As reported, the #CCSummit on June 29 featured a breadth of topics and perspectives.
  • The provenance of machine learning training data, algorithmic transparency and the pedigree of generative AI products are all excellent CC use cases.
  • Intel Chief Privacy Officer Xochitl Monteon argued for protecting data through its entire lifecycle in a CC ecosystem.
  • Google’s Head of Product for Computing and Encryption Nelly Porter explained how CC strengthens digital sovereignty in emerging economies.
  • Opaque Systems founder Raluca Ada Popa advocated for “Privacy-preserving Generative AI” including secure enclaves to protect machine learning models in operation.

Reflections: Can all computing be Confidential Computing?

Well, perhaps not all, but Confidential Computing should be the norm for most computing in future.
However, in my opinion the label “confidential” is limiting. Of course, some things need to be kept secret but the real deal with CC is certainty about the cryptographic state of our IT. Admittedly that’s a bit of a mouthful but let’s be clear about the requirement.
Cryptography is now so critical in digital infrastructure that it has to be a given. Cryptography is ubiquitous, and it is not just encryption for secrecy that matters; cryptography for authentication is actually far more pervasive. Digital signatures, website authentication, code signing, device pedigree, version numbering and content watermarking are all part of the digital fabric. These techniques all rest on cryptographic processors operating properly without interference, and cryptographic keys being generated faithfully and distributed to the proper holders.
Yet as Hushmesh founder Manu Fontaine observes, “Cryptography is unforgiving but people are unreliable”.
That is, cryptography can’t be taken for granted – not yet.
If cryptography is to be a given, we must automate as much of it as possible, especially the attestation of the state of the machinery, to put certainty beyond the reach of tampering and human error.
Hushmesh has one of the most innovative applications for Confidential Computing. They have re-thought the way cryptographic relationships (usually referred to as bindings) are formed between users, devices and data, and turned to CC to automate the way these relationships are formed, so that users and data are fundamentally united instead of arbitrarily linked.

No room for error

Botnet attacks show us that the most mundane devices (all devices these days are computers) can become the locus of gravely serious vulnerabilities.
The scale of the IoT and the ubiquity of microcontrollers (MCUs) and field-upgradable software mean that even light bulbs actually need what we used to call “military grade” security and reliability.
The military comparisons are obsolete. We really need to shift the expectation of consumer grade security and make serious encryption the norm everywhere.
The state of all end-points in cyberspace needs to be standardized, measurable, locked down, and verifiable. So many end-points now generate data and send messages back into the network. As this data spreads, we need to know where it’s really come from and what it means, not only to protect against harm but to maximize the value and benefits data can bring.

Privacy and data control

Remember that privacy is more to do with controlling personal data flows than confidentiality. A rich contemporary digital life requires data sharing, not data hiding.
A cornerstone of data privacy is disclosure minimization. A huge amount of extraneous information today is disclosed as circumstantial evidence collected in a vain attempt to lift confidence in business transactions, to support possible forensic activities, to try and deter criminals. Think about checking into a hotel: in many cases the clerk takes a copy of your driver licence just in case you turn out to be a fraudster.
If data flows such as payments by credit card were inherently more reliable, merchants wouldn’t need superfluous details like the card verification value (CVV).
Better reliability of core data would help stem the superfluous flow of personal information. Reliability here boils down to data signing, to mark its origin and provenance.
MyPOV: the most important primitive for security and privacy is turning out to be data signing. All important data should be signed at the origin and signed again at every important step as it flows through transaction chains, to enable us to know the pedigree of all information and all things.
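As a hedged sketch of what "signed at the origin and signed again at every important step" could look like, the Python fragment below has each party append its contribution and sign the running record, so a verifier can replay the chain and see who touched the data and in what order. The record format and actor names are illustrative only; standards such as C2PA, W3C Verifiable Credentials and in-toto formalise the same idea.

```python
# Illustrative provenance chain: each step signs its payload plus a hash of
# everything that came before it (the format is a made-up example, not a standard).
import hashlib
import json

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def add_step(chain: list, actor: str, payload: dict, key: Ed25519PrivateKey) -> None:
    prev_hash = hashlib.sha256(json.dumps(chain, sort_keys=True).encode()).hexdigest()
    record = {"actor": actor, "payload": payload, "prev": prev_hash}
    public_key = key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
    )
    chain.append({
        "record": record,
        "signature": key.sign(json.dumps(record, sort_keys=True).encode()).hex(),
        "signer": public_key.hex(),
    })


chain: list = []
sensor_key, analytics_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
add_step(chain, "sensor-042", {"reading": 21.7, "unit": "C"}, sensor_key)
add_step(chain, "analytics-service", {"rolling_mean": 21.5}, analytics_key)
# A verifier can now walk the chain, re-hashing and checking each signature,
# to establish the pedigree of the final figure.
```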
 

Why you need to connect your hiring to data, outcomes pronto

Mike Fitzsimmons, Cofounder & CEO of Crosschq, has started four companies and his biggest headache across industries was the same: Making good hires.

Speaking on DisrupTV Episode 331, Fitzsimmons said:

"This is not my first rodeo in starting a tech company, but it is my first one in the HR tech space," said Fitzsimmons. "I started the company with my co-founder out of pure frustration on just how damn hard it is to hire people. You can't hide from the math."

And the math: "45% of the hires at companies never get ROI positive for the companies that made the hire," said Fitzsimmons. "It's insane. It's terrible for talent. Terrible for companies. And it's terrible for everybody."

Crosschq's mission is to make the hiring process linked to outcomes. HR is one of the few corporate functions not linked to outcomes. The Crosschq platform is aimed at increasing the quality of hire, boosting recruiter efficiency and improving hiring intelligence.


Fitzsimmons said that since hiring decisions haven't been tied to an outcome, enterprises never get smarter. "We have failed our talent acquisition leaders because we have given them KPIs and goals to put butts in seats quickly," he explained. "We haven't created a machine that enables us to make sure we're putting the right person in the right place every single time."

Indeed, Crosschq, founded in 2018, has struck a nerve. It has more than 400 customers and counts GGVCapital, Bessemer Venture Partners, Slack and SAP among its investors. The company also has integrations with Workday, SAP SuccessFactors, Teamable, Greenhouse, SmartRecruiters, iCIMS and Jobvite to name a few.


To improve hiring, you need data from every step of the process, including:

  • Everything known when a hire was made.
  • How long did the person last?
  • Performance.
  • Impact on culture.
  • Engagement.

"Breaking all that down has been historically difficult because there's a big wall between talent acquisition and the rest of the organization," said Fitzsimmons. "You have core HCM and then all this stuff scattered around. It's an integration nightmare."

The challenge is to close that talent hiring gap in software and processes. Fitzsimmons said the importance of processes can't be overstated. For instance, two years ago companies were hiring at a rapid clip and now they're cutting back.

"It's all about data and driving impact. The magic opens up once you start to connect these dots and realize you weren't doing it right all along," he said. HR has been different because it hasn't been performance based. "You can't just spend $20 million on Indeed and not have an idea what that led to in terms of the impact on the company."

The cultural change is creating the connection between ROI and hiring talent. An ROI mindset to hiring talent yields some interesting items, according to Fitzsimmons. Consider the following tips that are sprinkled around Crosschq's blog and reports on hiring talent and quality of hire:

  • Understand the ROI of the places where you source talent. For instance, agencies are the most volatile and it's where companies spend the most money. Recruiting agencies will send B-level talent that interviews well because they know they can place them again in 18 months.
  • Companies that rely on internal referrals often get lower quality hires. If you remove the financial incentives for internal referrals, the quality of hire goes up.
  • There's only a 9% correlation between an interview score and quality of hire. The move is to understand which interviewers are not good predictors of success in the organization.
  • Only three of the eight standard third party assessments were correlated with success.
  • Pay attention to LinkedIn embellishment if not outright fraud. There's a correlation between hires that aren't totally truthful and success.
  • Think in terms of progress instead of perfection. It's a journey and connecting the data flows between hiring and outcome is a start. From there, you can build a foundation to improve program optimization, skills and competency and talent selection.

Confidential but in the limelight

One of the most consequential fields in digital technology saw its first major public event recently, with the inaugural Confidential Computing Summit held in San Francisco on June 29. I was not able to attend but I have been following closely the emergence of this vital industry and the Confidential Computing Consortium (CCC). Here I offer some observations and reflections on what should become foundational to the digital economy.

In a companion piece to follow, I will go into more detail on the history of hardware-based security industry initiatives, and reasons why Confidential Computing is critical way beyond confidentiality.

Acknowledgement and Declaration: I was helped in preparing this article by Manu Fontaine, founder of new CCC member Hushmesh, who attended the summit. I am a strategic adviser to Hushmesh.

The Confidential Computing mission

Confidential Computing is essentially about embedding encryption and physical security throughout computing for better data protection and integrity of information processing.

The Confidential Computing Consortium (CCC) is a relatively new association comprising hardware vendors, cloud providers and software developers aiming to “accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards”.

Confidential Computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data. Reference: CCC.

So Confidential Computing crucially goes beyond conventional encryption of data at rest and in transit, to protect data in use.

If you are at all aware of Confidential Computing, you might have the impression that it’s all about secure cloud and data clean rooms. These are important applications for sure but there’s so much more, as the CC Summit proved.

The #CCSummit

About 250 people attended the one-day #CCSummit at the San Francisco Marriott Marquis. I am told the atmosphere was intense! Sponsorships and attendance were both double the organisers’ expectations.

I was impressed by the breadth of the agenda and the speakers’ perspectives.

  • As with any tech conference at the moment, there was lots of AI. And rightly so, as the provenance of machine learning is one of the hottest topics in tech today and the potential for CC to improve accountability for digital artefacts is obvious.
  • Yet privacy was the bigger concern by design for the event, as it is a prime driver for Confidential Computing. It was good to see so many facets of privacy being fleshed out, not just the confidentiality concerns of CC.
  • Intel Chief Privacy Officer Xochitl Monteon provided a valuable privacy tutorial within her keynote Confidential Computing as a Cornerstone for Cybersecurity Strategies and Compliance, stressing how legislated data privacy now protects over 70% of the world’s population. Monteon argued for protecting data through its entire lifecycle in a CC ecosystem, because otherwise businesses are being crushed by formal data flow impact assessments. Contrary to popular belief, privacy regimes do not ban data flows — they restrain them.
  • Localisation of data processing to particular jurisdictions is a recurring issue in data protection. Location is another one of those signals which we increasingly rely on in data processing, and with its deep hardware connections, CC is going to be beneficial here. Nelly Porter, Google’s Head of Product for Computing and Encryption, was eloquent on the merits of digital sovereignty for emerging economies.
  • Academic and entrepreneur Raluca Ada Popa from UC Berkeley advocated for “Privacy-preserving Generative AI” using CC to protect queries with end-to-end encryption, and further, to protect commercially sensitive machine learning models by running them in secure enclaves.
  • Rolfe Schmidt from Signal Messenger described innovative use of attested TEEs to execute end-to-end encryption on behalf of end users, in cases where the ideal of keeping all sensitive data on the user’s device is not practical.
  • And there was plenty of discussion of Confidential Computing’s safe place, data clean rooms.

Privacy and data control

To appreciate the full potential for Confidential Computing in privacy and data protection, let’s think beyond confidentiality. Privacy is more to do with controlling personal data flows than confidentiality.

The Confidential Computing summit has helped to set the scene for a richer approach to privacy enhancing technologies (PETs). As Associate Professor Raluca Ada Popa explained in her keynote, CC takes PETs well beyond Differential Privacy (which compromises data quality) and Homomorphic Encryption (which protects data in use for many applications but with major performance trade-offs).
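
For readers unfamiliar with the trade-off Popa was pointing at, here is a minimal sketch (assumptions mine) of differential privacy's noise injection: the released statistic is deliberately perturbed, which protects individuals but degrades accuracy, especially at strong privacy settings or on small groups. Confidential Computing sidesteps that particular trade-off by computing on the true data inside an attested enclave instead.

```python
# Laplace-mechanism sketch: stronger privacy (smaller epsilon) means noisier answers.
import numpy as np

rng = np.random.default_rng(seed=7)
salaries = rng.normal(90_000, 15_000, size=50)   # 50 fictional records

true_mean = salaries.mean()
# Crude sensitivity bound: how much one record can move the mean of bounded data.
sensitivity = (salaries.max() - salaries.min()) / len(salaries)

for epsilon in (0.1, 1.0, 10.0):
    noisy_mean = true_mean + rng.laplace(scale=sensitivity / epsilon)
    print(f"epsilon={epsilon:>4}: true={true_mean:,.0f}  released={noisy_mean:,.0f}")
```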

At Constellation Research we have always taken a broad view of digital safety, beyond data privacy and cybersecurity. What draws me to Confidential Computing is the possibility of safeguarding entire data supply chains, protecting the properties that make data valuable: clear permissions, authorisations, originality, demonstrated regulatory compliance, peer review and so on. Confidential Computing can provide the story behind the data.

 


Why Chegg is using Scale AI to develop proprietary LLMs

Chegg is betting that a partnership with Scale AI can provide a new student experience over the next two semesters and develop proprietary large language models (LLMs) that can create personalized study tools. The goal: Develop generative AI tools that leverage Chegg's differentiated data and get to market fast.

The partnership was announced as Chegg reported second quarter earnings. The two companies have been piloting the new AI experience for students.

Generative AI has been a key topic for education technology providers. In the first quarter, Chegg shares took a beating over generative AI concerns, but the company did launch its CheggMate generative AI service and a partnership with OpenAI. Generative AI is being built into the education technology stack with some efforts available in the fall.

Chegg's new experience will start rolling out this fall. Chegg CEO Dan Rosensweig said that the Scale AI partnership is accelerating the company's generative AI deployment. He said:

"The new Chegg will combine the best of generative AI, with Chegg's proprietary high-quality solutions and demonstrated ability to improve student outcomes. They can expect to see a much simpler conversational user interface, personalized learning pathways, more in-depth content and the ability to transform it automatically into innovative study tools such as practice tests, study guides and flash cards."

In addition, Chegg is building its own LLMs with training data provided by its proprietary data sets and more than 150,000 subject matter experts, said Rosensweig. Chegg has a learning taxonomy and a history of data from schools, classes and professors.

Andrew Brown, Chegg CFO, said the company’s decision to develop its own LLMs revolves around differentiating its service and creating "a truly differentiated and better experience with students at a lower cost." Brown added that completely relying on third parties for generative AI technology would have been too expensive.

For Rosensweig, the role of proprietary LLMs is to improve accuracy and engagement. Rosensweig, who noted that Chegg will still use ChatGPT, said:

"One of the really cool things that we'll be able to do differently than anybody else would be able to do is take the 100 million-plus questions that we have and all the data we've been able to collect and create completely personalized learning experiences on a per user basis based on knowing not only the history of that particular student, but others that have gone to that school, that class and with that professor. So that is not something that any generalist AI can do or frankly, anybody else in the education space could do because we have the largest direct-to-consumer list."

The plan for Chegg and Scale AI is to deploy a rolling launch to cover all 26 categories.

Chegg reported second quarter earnings of $24.6 million on revenue of $182.9 million, down 6% from a year ago. Non-GAAP earnings were 28 cents a share, a penny a share lower than estimates.

The company ended the quarter with 4.8 million subscribers, down 9% from a year ago. For the seasonally slow third quarter, Chegg projected revenue in the range of $151 million to $153 million.

 

 


Nvidia fleshes out generative AI vision from PC, workstation to cloud

Nvidia at Siggraph outlined an AI vision where developers will create, test and optimize generative AI models and large language models (LLMs) on a PC and workstation and then scale them via data centers or the cloud.

Not surprisingly, this vision includes a heavy dose of Nvidia GPUs. PC makers already highlighted that systems were on deck for generative AI training and workloads.

The two headliners during CEO Jensen Huang's keynote were Nvidia RTX workstations as well as Nvidia AI Workbench. Nvidia AI Workbench is a toolkit to enable developers to create, test and customize models on a PC or workstation and then move them to deploy in data centers, public clouds or Nvidia DGX Cloud.

AI Workbench includes a simplified interface with models housed at Hugging Face, GitHub and Nvidia NGC that can be combined with custom data and shared. AI Workbench will be included in systems from Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro.

To go along with AI Workbench, Nvidia launched Nvidia AI Enterprise 4.0, its enterprise software platform for production deployments. AI Enterprise 4.0 includes Nvidia NeMo, Triton Management Service, Base Command Manager Essentials as well as integration with public cloud marketplaces from Google Cloud, Microsoft Azure and Oracle Cloud.

As for the Nvidia RTX workstations, the systems will include Nvidia's RTX 6000 Ada Generation GPUs, AI Enterprise and Omniverse Enterprise software. These systems will include up to four RTX 6000 Ada Generation GPUs, each with 48GB of memory, for up to 5,828 TFLOPS of AI performance and 192GB of GPU memory. These systems will be announced by OEMs in the fall.

Among other Nvidia items from Siggraph:

  • The company announced Nvidia OVX servers with the new Nvidia L40S GPU, which is designed for AI training and inference, 3D designs, visualization and video processing. The Nvidia L40S will be available starting in the fall. ASUS, Dell Technologies, GIGABYTE, HPE, Lenovo, QCT and Supermicro will offer OVX systems with L40S GPUs.
  • Nvidia launched a new release of Nvidia Omniverse for developers and enterprises using 3D tools and applications. Omniverse uses the OpenUSD framework and adds generative AI features. Additions include modular app building, new templates, better efficiency and native RTX spatial integration. Nvidia also launched new Omniverse Cloud APIs.
  • The company also rolled out frameworks, resources and services to speed up adoption of OpenUSD (Universal Scene Description). OpenUSD is a 3D framework that connects software tools, data types and APIs for building virtual worlds. The APIs include ChatUSD, an LLM copilot that lets developers ask questions and generate code; RunUSD, which translates OpenUSD files to create rendered images; DeepSearch, an LLM for semantic search through untagged assets; and USD-GDN Publisher, which publishes OpenUSD experiences to Omniverse Cloud in a click.

Constellation Research’s take

Constellation Research analyst Andy Thurai said:

“Nvidia NeMO is an end-to-end framework for building foundational models that can be a pain to build. Nvidia AI workbench will allow for the cloning of AI projects and allow developers a workspace to build LLMs.

The new service and the interface will allow users to train or retrain models from Hugging Face in the Nvidia DGX cloud whether on a public cloud platform such as GCP or Azure or a Nvidia private cloud. Notoriously missing was AWS.

Nvidia claims the compute power today is built for older technologies and workloads. According to Nvidia, modern workloads must be run on the newer chips such as the Nvidia GPUs and Grace Hopper super chips. To that end, the Dual GH200 (a combination of Grace CPU and Hopper GPU into a Grace Hopper) is one of the most powerful processors ever. In that combination, Nvidia aims to get a lower capital expense for the processing power required (lower capex) and lower operational costs with energy consumption and a much faster inference thereby reducing the opex. If this dream were to come true, Nvidia can kill the mighty Intel's x86 business by demonstrating that Grace Hopper can process AI technologies, particularly the AI training workloads, with 20x less power and 12x less cost than the comparable CPU-based processing technologies.

In short, Nvidia has claimed the AI workloads, both training and inferencing, must be run by Nvidia based chips to be more efficient than rivals Intel and AI chip companies like SambaNova Systems."


RingCentral launches RingCX, names new CEO

RingCentral launched RingCX, a native contact center platform currently in beta, in a strategy shift that minimized partner NICE. RingCX is a native platform that will include RingCentral's unified communications tools with contact center capabilities as well as generative AI.

In a move that's aimed at moving customers across touchpoints and expanding its market, RingCentral is launching RingCX even as it has a partnership with NICE. RingCentral has RingCentral MVP and RingCentral Contact Center, which is powered by NICE.

Vlad Shmunis, CEO of RingCentral, said the company will continue to invest in the NICE partnership, but needed a more native approach. "In listening to our customers, we’ve recognized an additional need for a native intelligent contact center solution that would be better suited towards addressing simpler use cases," he said.

RingCX will launch with more than 1,000 features including integrated communications and messaging across customer service use cases. Features include:

  • Skills-based routing.
  • Dashboards with analytics, real-time data and pre-built reports.
  • Integration with Salesforce and Zendesk at launch, with HubSpot, Microsoft Dynamics and ServiceNow to be added soon.
  • Virtual agents powered by Google Dialogflow.
  • Real-time AI transcription and post call summaries.
  • AI-driven assistance, quality management and conversation analytics.

The launch of RingCX comes amid a busy news day for RingCentral.

  • RingCentral named Tarek Robbiati, former CFO at Hewlett Packard Enterprise, as CEO succeeding Shmunis effective Aug. 28.
  • RingCentral reported second quarter revenue of $539 million, up 11% from a year ago with a net loss of 23 cents a share. Non-GAAP earnings were 83 cents a share.
  • For the third quarter, RingCentral said revenue will be between $552 million to $556 million, up 8% to 9%, with non-GAAP earnings of 75 cents a share to 78 cents a share.
  • The company projected 2023 revenue between $2.19 billion to $2.2 billion, up 10% to 11%, with non-GAAP earnings of $3.11 a share to $3.25 a share.
  • Last week, RingCentral acquired assets from Hopin.


Constellation Research's take

Here's what Constellation Research analyst Liz Miller had to say about recent RingCentral developments:

"So it’s official…now EVERYONE is a contact center player. It always feels like RingCentral is at the center of a good number of CCaaS rumors…at least once a quarter someone unleashes a rumor that they are going to buy/takeover/merge with 8x8. But, in the wake of every rumor someone reminds the crowd that RingCentral’s partnership with NICE has been rock solid. That is until now. While RingCentral has made it clear that their partnership with NICE is still both strategic and important to their vision and growth…it is also clear that RingCentral customers want more options and integrations to create a single pane of collaboration across UCaaS and CCaaS tools and communications.

All of these moves to a “more unified, unified-communications strategy” point to a trend I’ve been tracking…this convergence in communications isn’t just about wiring, clouds and where calls start…but is a bigger shift to consolidate around collaboration be it between employees and teams or brands and their customers.

Now add conferences and events (Hopin), video and webinars, voice calls, chats, bots, contact centers, collaboration….make no mistake…this is all about collaboration in every channel with every constituent from employees, to partners, to customers."

 


Five9 acquires Aceyus, aims to expand analytics, enterprise reach

Five9 said it will acquire Aceyus, which ingests data from multiple customer experience systems and provides insights and analytics. Separately, Five9 reported second quarter earnings and projected $908 million to $910 million in 2023 revenue.

Aceyus provides call center analytics, customer journey data, omni-channel reporting and multiple integrations. Terms of the deal weren't disclosed.

According to Five9, Aceyus will provide its CX platform with contextual data across disparate systems. Aceyus integration catalog will also boost Five9's data lake. Five9 said Aceyus will also bolster its AI and automation portfolio.

Mike Burkland, Five9 CEO, said Aceyus and Five9 have multiple joint accounts. "The addition of Aceyus will extend our platform to further facilitate the migration of large enterprise customers to the cloud and to leverage contextual data to deliver personalized experiences," said Burkland in a statement.

Separately, Five9 reported second quarter revenue of $222.9 million, up 18% from a year ago. Five9 reported a second quarter net loss of $21.7 million. Non-GAAP earnings for the second quarter were $37.4 million, or 52 cents a share.

In the second quarter, Five9 reported enterprise subscription revenue growth of 28%. About 87% of Five9's revenue is enterprise.

As for the outlook, Five9 projected third quarter revenue of $223.5 million to $224.5 million with non-GAAP earnings of 42 cents a share to 44 cents a share. For 2023, Five9 projected non-GAAP earnings of $1.79 a share to $1.83 a share on revenue of $908 million to $910 million.

Constellation Research's take

Constellation Research analyst Liz Miller handicapped Five9's recent developments. Here's Miller's take:

Five9 Buys Aceyus: The opportunity here is to accelerate the shift from on-prem to cloud, especially when it comes to all that messy, complex and crazy customer data that delivers that robust, personalized and highly contextual experience that Five9 looks to deliver in an agile, cloud-first experience. Not only is the contact center a treasure trove of critical (and often untapped) customer data, it also sits at the front line of CX delivery. By harmonizing that gold found in the true voice of the customer with the stores of customer data that can be brought in from any number of CX outposts, we get a new, even more potent fuel for CX-centric AI applications. But the danger here is that this runs the risk of creating another silo of customer data that behaves just like a single use appliance for a single department or function. With Aceyus, the vision is to ingest and harmonize complex data…which many out there will recognize as the siren’s song of the enterprise Customer Data Platform (CDP). So the question here is will this vision assume that all of the teams at the front lines of real CX (eg: sales, service AND marketing) will all have their own CDP? Will this Five9 Aceyus offering sit next to, above or below an Amperity, Salesforce Data Cloud (eg Genie), Adobe Real-Time CDP or Segment implementation?

Five9 Earnings: These results are really a testament to a movement started almost 2 years ago to march up-market. The item to focus on here is the partner-driven growth that is behind this continued acceleration. Over 60% of Five9’s international implementations are being done by partners and there is phenomenal demand from partners to continue advancing, especially with the new innovations around AI. One note from the earnings indicates that 15 partners achieved over $1M in bookings for the quarter. Five9 had not always been touted as the “best” partner in the contact center market…but that has changed and they are actively investing in making sure that partners are happy and that channel growth and bookings are healthy.


Zoom's customer data terms for training AI may be just the beginning

Zoom updated its terms of service to give the platform the right to use some customer data for training its AI models. Should customers enable Zoom's generative AI features, they'll have to sign a consent form to train models with customer content.

However, Zoom said that it will not use audio, video or chat content for training models without consent.

These new terms are outlined in a blog post and it's likely that other tech providers may have similar efforts in the future. The trade-off will be sharing your data vs. generative AI pricing.

In the post, Zoom noted the following:

  • Service generated data--telemetry and diagnostic data--is Zoom's data and can be used to train models to improve the experience.
  • Meeting recordings are owned by the customer and Zoom has a license to it to deliver the service.
  • Zoom's new generative AI features--Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose--are currently a free trial. Zoom account owners control whether to enable those features, but if they do, user content will be used to train AI models. No third-party model training will be allowed.

Constellation Research analyst Dion Hinchcliffe said:

"Zoom certainly touched upon a major nerve of marketplace fears when its recently updated Terms of Service granted it an essentially unlimited license to all user content (video, audio, text) that passes through its platform. The license is for a litany of uses, ranging from product improvement to training its generative artificial intelligence models. The big concern of course, is that customer IP and people's private information will get stored in such models, where it could be misused. There are also all sorts of very thorny issues with regulatory regimes like HIPAA that are implicated and likely violated by these license clauses as well.

For its part, Zoom has tried to clarify what it actually does with its license, both in a blog post and in boldface in the new terms of service. Essentially, the company does not apparently use this license without prior consent from the user within the app, yet the terms of service still grant Zoom the license regardless. Given the major uproar this change in terms has caused, this is going to be a widely watched test case -- and there will no doubt be others -- that will pave the early path for how vendors and the market negotiate this very sensitive subject. The view from this analyst, at least, is that vendors should go out of their way to take the high road with customer data. Those that don't establish and maintain very high levels of trust with customers regarding their data will not enjoy the fruits of the coming AI revolution."

A few quick thoughts:

  • Zoom's terms are likely to cause a kerfuffle today, but over time they'll be standard.
  • AI model training and data sharing may lead to discounts as both vendors and customers weigh cost vs. the value of data.
  • While collaboration is the theme with Zoom's terms, this data vs. licensing cost will become more interesting with more mission critical data from CRM and ERP systems.
  • Zoom’s terms will be more of an enterprise issue. Small business customers may not care. On an individual basis we’re all used to being the product (Facebook is exhibit A).
  • It's likely vendors are going to wait and see how customers react to Zoom's terms before doing anything similar.

 


Education gets schooled in generative AI

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly.

Generative AI, which appears to be initially loved by students and loathed by educators, is coming to education as it's embedded in courseware as well as learning management systems.

Recent earnings reports from Pearson, Coursera and Instructure, the company behind the dominant Canvas learning management system, all featured a heavy dose of generative AI and product talk. At a high level the takeaways are:

  • Courseware will get generative AI and large language models (LLMs) that will highlight proprietary IP and datasets from education companies.
  • Learning management systems will embed generative AI and create alliances with companies that are disruptors.
  • Education will leverage generative AI for personalized learning experiences and create content at scale.

Here's a tour of what's happening in the education technology stack.

Pearson: Sees AI value in proprietary data

Pearson is best known for its courseware and typically known as an education publisher. CEO Andy Bird said generative AI will create turbulence in education, but the technology is likely to be a "long term positive" for Pearson's business in higher education and across the portfolio.

"We believe the value of our proprietary IP and datasets will increase over time. We have deep AI experience and expertise across the whole company. We're starting to introduce new AI enabled products across the business," said Bird, speaking on the company’s earnings conference call. "What this interest does demonstrate is the real value to be had of owning your own intellectual property. We're also continuing to monitor legal and legislative developments very closely."

Bird, who noted Pearson is early in its AI journey, said there are positive implications of embedding generative AI into its higher-ed courseware. "We've been working with one goal in mind, namely, how to improve the learning experience for both faculty and student," said Bird. "We're not interested in utilizing this technology merely to provide students with a shortcut to an answer. When we tested different LLMs with a question from Campbell's Biology, they often didn't get it correct. So, we believe delivering products that are reliable, accurate and trustworthy is paramount."

Tony Prentice, Pearson's chief product officer, said the company is embedding generative AI into its Pearson+ and MyLab & Mastering study tools. The takeaway: Pearson can leverage its content library and surface insights and educational opportunities with generative AI.

Instructure: Working AI into Canvas

Instructure, the company behind learning management system Canvas, said it will partner and embed generative AI throughout its platform. CEO Steve Daly said the goal is to leverage generative AI "to empower educators to meet students where they are in their educational journey."

Canvas has launched an emerging AI marketplace that will give educators access to AI tools that are integrated into Canvas and ensure privacy and security, said Daly.

At its InstructureCon 2023 conference, Instructure outlined the following advances in its platform:

  • A partnership with Khan Academy, which will bring Khan Academy AI-tools and content to Canvas. The two companies are looking for design partners and early adopters for the 2024-2025 school year.
  • AI-assisted course templating layouts to improve efficiency, reduce administrative tasks and pace assignments.
  • In-context student support with the integration of AI writing tutor Khanmigo, part of Khan Academy.
  • AI tools to surface analytics and insights.

Coursera: Multiple AI plays

Coursera is worth watching in the education stack for a few reasons. First, it's a higher-ed play. But it also has a big enterprise training business as well as a consumer unit. Coursera is aiming to not only use generative AI for content, but also to reskill employees and democratize knowledge about AI.

In the first quarter, Coursera outlined where AI fits in the reskilling space. Now Coursera has Coursera Coach, "a virtual learning partner, powered by generative AI and grounded in our expert content."

Speaking about Coursera Coach, CEO Jeff Maggioncalda said:

"It is designed to allow learners to ask questions and receive personalized explanations and answers, get personalized evaluations and feedback on their submissions, receive context-relevant examples and practice questions. And discover quick video lecture summaries and resources to better understand a specific concept. We launched a beta version of Coach to millions of Coursera Plus subscribers during the quarter and continue to be excited about the early feedback."

The company also launched Coursera ChatGPT, a plug-in to surface and personalize Coursera's catalog.

Maggioncalda said Coursera ChatGPT is "like an academic counselor." "The ChatGPT plugin allows learners using GPT-4 to identify recommended content and credentials based on the subject or career field the learner says they’re interested in exploring. It’s one example of the initiatives we are working on related to generative AI and reimagining the personalized discovery experience," said Maggioncalda.

Coursera also has a machine learning translation effort to localize content at scale. In the second quarter, Coursera delivered subtitle translation for 2,000 courses in seven different languages.


Why Llama 2, open-source models change the LLM game

Meta recently open-sourced Llama 2 and made it free for research and commercial uses. The move quickly put Llama 2 on the open-source leaderboard for large language models (LLMs) and spurred enterprises to give it a spin.

While the move is notable, there are a bevy of nuances for enterprises to consider. Here's a look at moving parts to consider as enterprises put Llama 2 through its paces as they craft generative AI use cases. This research note is based in part on a transcription of a CRTV discussion between Constellation Research's Larry Dignan and Andy Thurai.

Why Llama 2 matters. Thurai said OpenAI captured the imagination of the enterprise, but there's a need for open-source large language models (LLMs). "Everybody is going crazy about OpenAI, but what people don't realize is that after your proof of concept there is a usage model that adds up," said Thurai. "It can get quite expensive, so companies are looking for alternatives with open-source models."

Thurai said:

"Meta's offering is the first that is fully open-sourced and free to use commercially, truly democratizing AI foundational models. It is easier to retrain and fine-tune these models at a much cheaper cost than massive LLMs. Meta also released the code and the training data set freely. And wider availability can make this popular sooner. It is available on Azure (through Azure AI model catalog), on Hugging Face, AWS (via Amazon Sagemaker Jumpstart), and even Alibaba Cloud."

Llama 2's sizes. Llama 2 is also interesting for enterprises because it can be used for small language models or specialized models, said Thurai. Llama 2 also offers more parameters and sizes. "There are three primary variations: 7 billion parameters, 13 billion and 70 billion," explained Thurai. "These are comparatively much smaller models than ChatGPT, but more accurate." Those sizes quickly put Llama 2 on the Hugging Face leaderboards.
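
As a rough sketch of how a team might take the smallest variant for a spin, the snippet below loads the gated meta-llama/Llama-2-7b-chat-hf checkpoint through Hugging Face transformers. It assumes access to the model has already been granted under Meta's license, a Hugging Face login, and a GPU with enough memory; it is a starting point, not a production recipe.

```python
# Quick trial of the 7B chat variant via Hugging Face transformers.
# Assumes: license accepted on huggingface.co, `huggingface-cli login` done,
# and transformers + accelerate + torch installed with a suitable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Explain in two sentences why smaller open LLMs can be cheaper to fine-tune."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping model_id for the 13 billion or 70 billion parameter checkpoints is the same call, just heavier on memory and compute.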

 

Is Llama 2 open source? Thurai said there has been a good amount of debate about whether Meta's language model is open source. Usually, open-source software is available for anyone to use without restrictions. Llama 2 has conditions about commercial use, said Thurai. "The average enterprise isn't likely to hit that commercial use number so it's not much of a restriction. Meta put restrictions in because it doesn't want other companies to use Llama 2 against the company in a competitive situation," said Thurai.

Will enterprises use Llama 2? Thurai said enterprises will try Llama 2 for pilots and proof of concept projects, but beyond that point usage is debatable. "It's tough to say what will happen, you have to read the rules carefully to ensure Meta doesn't come after you for licensing infringement," said Thurai.

Using alternatives. Thurai said Llama 2 is worth exploring but the Falcon LLM is popular as is MosaicML, which now falls under Databricks' umbrella. Open-source models should be in the enterprise mix, but it's worth knowing the vendor business models. "The money is in helping you train your own models," said Thurai. For now, enterprises should try alternatives with an eye toward costs. After all, most companies won't have the resources to grab open-source models and train them with proprietary data. Managed model training will also be important.

What's next? Thurai said enterprises are exploring multiple LLM options and it's too early to tell where they'll land. Some enterprises will lean toward proprietary models with industry-specific use cases. Others will fine tune open-source models. "There will be a lot of variations," said Thurai.

Thurai also said there will be a divergence between proprietary and open source LLMs. "I expect to see more divergence, and more closed garden models like GPT-4 and Bard grow alongside models like Falcon and Llama 2," he said.

Thurai added:

"I expect more LLMs to come to market. Some will try to go big, and some will try to go small. But in order to differentiate, I expect domain-specific small language models to appear that will be very specific to industry verticals.

I also expect a lot of smaller companies to provide the missing pieces to make usage of LLMs much better. I also expect more open-source models to hit the market soon."
