Results

Why Llama 2, open-source models change the LLM game

Meta recently open-sourced Llama 2 and made it free for research and commercial uses. The move quickly put Llama 2 on the open-source leaderboard for large language models (LLMs) and spurred enterprises to give it a spin.

While the move is notable, there are plenty of nuances for enterprises to consider. Here's a look at the moving parts as enterprises put Llama 2 through its paces and craft generative AI use cases. This research note is based in part on a transcription of a CRTV discussion between Constellation Research's Larry Dignan and Andy Thurai.

Why Llama 2 matters. Thurai said OpenAI captured the imagination of the enterprise, but there's a need for open-source large language models (LLMs). "Everybody is going crazy about OpenAI, but what people don't realize is that after your proof of concept there is a usage model that adds up," said Thurai. "It can get quite expensive, so companies are looking for alternatives with open-source models."
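Thurai's point about usage-based pricing adding up is easy to see with a back-of-envelope calculation. The sketch below is illustrative only: the per-1K-token prices are assumptions in the ballpark of published API list prices at the time, not figures from the discussion.

```python
def monthly_api_cost(requests_per_day: int, prompt_tokens: int, completion_tokens: int,
                     prompt_price_per_1k: float = 0.03,
                     completion_price_per_1k: float = 0.06) -> float:
    """Rough monthly bill for a hosted LLM API billed per token.

    The default per-1K-token prices are illustrative assumptions; real
    pricing varies by model and changes frequently.
    """
    per_request = (prompt_tokens / 1000) * prompt_price_per_1k \
                + (completion_tokens / 1000) * completion_price_per_1k
    return per_request * requests_per_day * 30  # ~30 billing days per month

# A modest production workload: 10,000 requests a day,
# 1,000 prompt tokens and 500 completion tokens per request.
print(f"~${monthly_api_cost(10_000, 1_000, 500):,.0f}/month")  # ~$18,000/month
```

At that run rate the bill dwarfs a typical proof-of-concept budget, which is the gap open-source alternatives aim to close.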

Thurai said:

"Meta's offering is the first that is fully open-sourced and free to use commercially, truly democratizing AI foundational models. It is easier to retrain and fine-tune these models at a much cheaper cost than massive LLMs. Meta also released the code and the training data set freely. And wider availability can make this popular sooner. It is available on Azure (through Azure AI model catalog), on Hugging Face, AWS (via Amazon Sagemaker Jumpstart), and even Alibaba Cloud."

Llama 2's sizes. Llama 2 is also interesting for enterprises because it can be used for small language models or specialized models, said Thurai. Llama 2 also comes in multiple parameter sizes. "There are three primary variations: 7 billion parameters, 13 billion and 70 billion," explained Thurai. "These are comparatively much smaller models than ChatGPT, but more accurate." Those sizes quickly put Llama 2 on the Hugging Face leaderboards.
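The parameter counts translate directly into hardware requirements, which is why the smaller variants are cheaper to fine-tune and serve. A rough rule of thumb (an illustrative assumption, not a figure from the discussion): each parameter stored in 16-bit precision takes two bytes, before accounting for activations or optimizer state.

```python
def fp16_weights_gb(params_billion: float) -> float:
    # Two bytes per parameter in fp16/bf16; decimal gigabytes.
    return params_billion * 1e9 * 2 / 1e9

for size in (7, 13, 70):
    print(f"Llama 2 {size}B: ~{fp16_weights_gb(size):.0f} GB of weights in fp16")
```

By this estimate the 7B model's weights fit on a single high-memory GPU, while the 70B model needs several; fine-tuning adds further memory overhead on top of the weights.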

 

Is Llama 2 open source? Thurai said there has been a good amount of debate about whether Meta's language model is open source. Usually, open-source software is available for anyone to use without restrictions. Llama 2 has conditions about commercial use, said Thurai. "The average enterprise isn't likely to hit that commercial use number so it's not much of a restriction. Meta put restrictions in because it doesn't want other companies to use Llama 2 against the company in a competitive situation," said Thurai.

Will enterprises use Llama 2? Thurai said enterprises will try Llama 2 for pilots and proof-of-concept projects, but beyond that point usage is debatable. "It's tough to say what will happen; you have to read the rules carefully to ensure Meta doesn't come after you for licensing infringement," said Thurai.

Using alternatives. Thurai said Llama 2 is worth exploring, but the Falcon LLM is popular, as is MosaicML, which now falls under Databricks' umbrella. Open-source models should be in the enterprise mix, but it's worth knowing the vendors' business models. "The money is in helping you train your own models," said Thurai. For now, enterprises should try alternatives with an eye toward costs. After all, most companies won't have the resources to grab open-source models and train them with proprietary data. Managed model training will also be important.

What's next? Thurai said enterprises are exploring multiple LLM options and it's too early to tell where they'll land. Some enterprises will lean toward proprietary models with industry-specific use cases. Others will fine tune open-source models. "There will be a lot of variations," said Thurai.

Thurai also said there will be a divergence between proprietary and open source LLMs. "I expect to see more divergence, with closed garden models like GPT-4 and Bard growing alongside open models like Falcon and Llama 2," he said.

Thurai added:

"I expect more LLMs to come to market. Some will try to go big, and some will try to go small. But in order to differentiate, I expect domain-specific small language models to appear that will be very specific to industry verticals.

I also expect a lot of smaller companies to provide the missing pieces to make usage of LLMs much better. I also expect more open-source models to hit the market soon."


Salesforce launches Einstein Studio: What you need to know

Salesforce launched Einstein Studio, which allows customers to bring their large language models (LLMs) from services such as AWS' Amazon SageMaker and Google Cloud Vertex AI via Salesforce Data Cloud.

With Einstein Studio, enterprises can bring their own LLMs to Salesforce Data Cloud without moving data. Salesforce's Einstein Studio, which is generally available, can use custom LLMs along with Einstein GPT's LLMs and remove the need for Extract, Transform and Load (ETL).

The theme of model choice has become a key topic among enterprise buyers. AWS has also hit the bring-your-own-model theme in recent days.

Salesforce Einstein Studio has a control panel to manage AI models, zero-ETL integration, connections to Salesforce data and no-code tools.

Here's what Einstein Studio means for customers, according to Constellation Research analysts Liz Miller and Andy Thurai.

Bring your own model approaches. Miller said:

"Einstein Studio is intended to be the point of model centralization and governance for a Salesforce customer. The goal is to be a center point and NOT the whole or only model. Larger enterprise customers were already developing models in everything from Sagemaker to Vertex. This move is in line with Salesforce's more partnership-forward and ecosystem friendly approach. Similar to how Data Cloud invites those lakehouse connections across Snowflake and Databricks, BigQuery and beyond, Einstein Studio is issuing an invitation for all AI models to find a spot in the Salesforce ecosystem."

What's in it for Salesforce? Thurai said:

"This move by Salesforce is rather brilliant. The problem is that providing your own set of tools to create AI models is not easy. You have to constantly update the platform almost daily. For the true data scientists who are used to other tools, Salesforce provides options such as AWS SageMaker and Google Vertex as well as other AI model training platforms to train their own models or use existing ones and fine-tune them with custom data. The familiarity of the platform combined with the power of custom data makes this unique."

Model sprawl. Miller said:

"Most businesses are drowning in models that are under development and even more that have never made it out of development. Einstein Studio is Salesforce attempting to unlock AI investments faster by providing a fast, cost effective and readily available on-ramp for AI into business use cases."

Zero-ETL. "Zero-ETL means there will be no data movement. Data movement has been a major issue when it comes to model training," said Thurai. "Data needs to be moved around, managed, maintained, cataloged, secured and governed. Keeping the data where it is and creating the models where it is convenient is somewhat unique. Many vendors want you to create the model in their platform so you can be more tethered to them."

What's in it for customers? Miller said the bring-your-own-model approach from Salesforce means it's a vendor that's committed to moving with the market. Salesforce is moving to "deliver usable AI tools and solutions that bring AI models out of the development cycles and into view for many average business users."

"Right now, business users in sales, marketing and service are being promised a lot in this age of AI. But there is very limited runway to prove that these tool investments will yield real, profitable, meaningful CX results," said Miller. "Salesforce is building customers' glide paths to AI success."


AWS closes generative AI narrative gap, plays on choice, Trainium, Inferentia chips

Amazon CEO Andy Jassy fleshed out Amazon Web Services' generative AI narrative, providing a multi-level view that also includes a good dose of the company's own silicon.

When it comes to generative AI, AWS has been caught between Microsoft Azure and Google Cloud Platform, two companies that have app-layer stories to tell. What's an app-layer story? Think co-pilots galore. That co-pilot-led narrative is dominated by tech vendors that play on the application layer--Microsoft, Google and Salesforce to name a few.

Jassy's argument on Amazon's earnings conference call is that generative AI also means a lot of large language model (LLM) training. That training is why Nvidia is the prom queen at the generative AI dance.

But Jassy outlined two key points. Nvidia GPUs won't be the only option for model training. Yes, there will be AMD, but there will also be AWS' custom silicon. To AWS, generative AI comprises three layers, and so far the application layer has received all the buzz. AWS is going to play in the lower layers--compute and models as a service.

He said (emphasis mine):

"At the lowest layer is the compute required to train foundational models and do inference or make predictions. Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there's only been one viable option in the market for everybody and supply has been scarce.

That, along with the chip expertise we've built over the last several years, prompted us to start working several years ago on our own custom AI chips for training called Trainium and inference called Inferentia that are on their second versions already and are a very appealing price performance option for customers building and running large language models. We're optimistic that a lot of large language model training and inference will be run on AWS' Trainium and Inferentia chips in the future."

Training models isn't cheap and those with the infrastructure are going to fare well. Not everyone needs a luxury training processor.

The middle layer of the generative AI game will be LLMs as a service. Managed services are the AWS specialty in the cloud. Jassy said:

"We think of the middle layer as being large language models as a service. Stepping back for a second, to develop these large language models, it takes billions of dollars and multiple years to develop. Most companies tell us that they don't want to consume that resource building themselves. Rather, they want access to those large language models, want to customize them with their own data without leaking their proprietary data into the general model, have all the security, privacy and platform features in AWS work with this new enhanced model and then have it all wrapped in a managed service."

Jassy's comments line up with what we've heard repeatedly from CXOs. Some enterprises are looking at private cloud options for training. Some are thinking about going on-premises for training. Others want model choice including smaller LLMs that are use case specific. Choice isn't a bad thing and it's highly likely that not every enterprise is going to play along with OpenAI tolls.

This LLM choice mantra was seen last week when AWS outlined Bedrock at AWS Summit New York.

These first two layers are where AWS will play, said Jassy. "If you think about these first 2 layers I've talked about, what we're doing is democratizing access to generative AI, lowering the cost of training and running models, enabling access to large language model of choice instead of there only being one option," said Jassy.

What about the apps? Jassy said AWS is an enabler with services like CodeWhisperer. "Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers' experience. But while we will build a number of these applications ourselves, most will be built by other companies, and we're optimistic that the largest number of these will be built on AWS," he said.

For good measure, Jassy reiterated what many vendors and customers have been saying. Without a good data strategy, you don't have AI, generative or otherwise. And by the way, AWS is embedded in a bunch of data management plays.

See: JPMorgan Chase: Digital transformation, AI and data strategy sets up generative AI | Goldman Sachs CIO Marco Argenti on AI, data, mental models for disruption 

Add it up and AWS laid out its generative AI case and got back into the perception and mindshare game. Yes, AWS growth rates are stable (12% in the second quarter) and that's slower than its rivals. But there's also a much bigger base. AWS is also seeing a lot of cost optimization from customers. In the end, it's likely those generative AI workloads are going to boost AWS, which is "starting to see some good traction with our customers' new volumes."

Naturally, analysts want to know how AWS is going to monetize generative AI. The short answer is AWS is going to see more volume. What's unclear is whether there will be some add-on model. My guess is probably not. For starters, the upcharge for generative AI is an approach for SaaS companies that is going to get tired quickly.

Jassy, however, said it's way too early. "I think we're in the very early stages there. We're a few steps into a marathon in my opinion. I think it's going to be transformative, and I think it's going to transform virtually every customer experience that we know. But I think it's really early," said Jassy. "I think most companies are still figuring out how they want to approach it. They're figuring out how to train models. They want to -- they don't want to build their own very large language models. They want to take other models and customize them, and services like Bedrock enable them to do so. But it's very early. And so, I expect that will be very large, but it will be in the future."

Takeaways

Here's what AWS accomplished on Amazon's earnings conference call:

  1. AWS provided a more nuanced view of generative AI that plays to its core strengths--developers and enterprise builders. By not going co-pilot happy, AWS' narrative was almost refreshing.
  2. Outlined how important custom silicon will be. Nvidia infrastructure isn't cheap and CXOs will be looking for whatever processors get the training job done.
  3. Put Jassy back into a familiar role: AWS lead singer.
  4. Gave enterprise buyers a sermon more in line with their current thinking on generative AI.
  5. Allayed concerns from Wall Street. I noted the narrative gap when it came to the cloud and generative AI players and how AWS got lost. Analysts will be on the bandwagon again based on Amazon's second quarter results. Why does Wall Street matter? CXOs watch a lot of CNBC and Bloomberg too.



Apple Q3 services revenue shines

Apple's fiscal third quarter revenue was down 1% from a year ago, iPhone and iPad sales were light relative to expectations and services revenue surged.

For the third quarter, Apple reported earnings per share of $1.26 with revenue of $81.8 billion.

Wall Street was expecting Apple to report fiscal third quarter revenue of $81.7 billion with earnings of $1.19 a share. As usual, iPhone revenue was expected to carry the quarter.

Going into the quarter, analysts were also interested in Apple's traction in India as well as the company's ability to sell services to the iPhone installed base. Apple iPhone 15 sales won't land until November/December.

In a statement, CEO Tim Cook said third quarter services revenue hit a record in the June quarter with more than 1 billion paid subscriptions.

By the numbers:

  • Apple's China revenue was $15.76 billion, up from $14.6 billion a year ago.
  • iPhone sales were $39.7 billion in the third quarter, down from $40.67 billion a year ago.
  • Mac sales were $6.84 billion, down from $7.38 billion a year ago.
  • iPad sales in the third quarter were $5.79 billion, down from $7.22 billion a year ago.
  • Wearables revenue was $8.38 billion, up from $8.08 billion a year ago.
  • Services revenue in the third quarter landed at $21.2 billion, up from $19.6 billion a year ago.

Estimates for iPhone sales were $39.9 billion, with iPad and Mac delivering $6.4 billion and $6.6 billion, respectively. Services revenue was expected to come in at $20.8 billion. All estimates were from Refinitiv.

 

Speaking on a conference call with analysts, Cook said Apple saw record sales in India as well as other countries. India is a closely watched growth region for Apple. Cook said the macroeconomic picture remains mixed and that Apple continues to "manage deliberately and innovate relentlessly."

Key comments include:

  • Cook said he was "thrilled" by the response to Vision Pro.
  • Mac sales were down 7% in the quarter, but Apple completed the transition to its own processors.
  • iPad revenue was down 20% in the quarter due to a difficult comparison to a year ago when the iPad Air launched. Back to school season can be a tailwind to iPad.
  • Apple Watch sales were in line with the company's expectations.
  • Services saw a sequential acceleration in the quarter with Apple Care, Apple TV and Apple Pay showing strength. Cook also said Apple Card is riding Apple Pay momentum.

Amazon Web Services posts Q2 revenue growth of 12%, teases AI launches

Amazon Web Services sales in the second quarter were up 12% from a year ago to $22.1 billion with operating income of $5.4 billion.

Overall, Amazon reported second quarter net income of $6.7 billion, or 65 cents a share, on revenue of $134.4 billion. The earnings include a $200 million gain from the valuation of Rivian.

Wall Street was expecting Amazon to report second quarter non-GAAP earnings of 35 cents a share on revenue of $131.5 billion. AWS revenue was expected to be about $21.8 billion.

Leading up to Amazon's report, analysts were focused on AWS growth and cost cutting. Google Cloud grew at a 28% clip in the second quarter and Microsoft Azure grew 26% in its latest quarter from smaller bases. Microsoft doesn't break out Azure revenue. 

By segment:

  • North American e-commerce sales were up 11% in the second quarter to $82.5 billion with operating income of $3.2 billion.

  • International e-commerce sales were $29.7 billion, up 10% from a year ago, with an operating loss of $900 million.

  • AWS had operating income of $5.4 billion on revenue of $22.1 billion.

In a statement, CEO Andy Jassy said the company has continued to lower its cost on the retail side and said the following about AI:

"Our AWS growth stabilized as customers started shifting from cost optimization to new workload deployment, and AWS has continued to add to its meaningful leadership position in the cloud with a slew of generative AI releases that make it much easier and more cost-effective for companies to train and run models (Trainium and Inferentia chips), customize Large Language Models to build generative AI applications and agents (Bedrock), and write code much more efficiently with CodeWhisperer."

As for the outlook, Amazon projected third quarter sales between $138 billion and $143 billion, or up 9% to 13% from a year ago. Operating income is expected to be between $5.5 billion and $8.5 billion. 


Meta's Llama 2 and what that means for GenerativeAI

What's happening with Meta's latest Llama 2 #LLM and its new partnership with Microsoft? What does this mean for the future of #GenerativeAI?

Constellation analyst Andy Thurai sits down with Constellation Editor in Chief Larry Dignan to discuss trends and pitfalls of large language models, and how #enterprises will use #LLMs and #AI technology in the future...

Watch on ConstellationTV Insights: https://www.youtube.com/embed/5a6z9KfneGQ

SAP user group DSAG rips S/4HANA innovation plans, maintenance increases

User groups are not happy about SAP's plan to offer the latest innovations to cloud customers only.

During SAP's second quarter earnings call, SAP CEO Christian Klein said:

"SAP's newest innovations and capabilities will only be delivered in SAP public cloud and SAP private cloud using RISE with SAP as the enabler. This is how we will deliver these innovations with speed, agility, quality and efficiency. Our new innovations will not be available for on-premises or hosted on-premises ERP customers on hyperscalers."

The upshot is that SAP wants its customers to migrate to S/4HANA.

In a release, DSAG, the German-speaking SAP user group, blasted the plan to make innovations such as AI and the green ledger available only to customers using SAP S/4HANA Cloud, Public Edition or SAP S/4HANA Cloud, Private Edition via GROW-with-SAP or RISE-with-SAP contracts.

DSAG said:

"On-premises customers cannot benefit from major innovations such as artificial intelligence (AI) and green ledger. This also applies to larger function modules and extensions based on the Business Technology Platform (BTP). At the same time, SAP plans to increase maintenance fees. From the point of view of DSAG, SAP is leaving numerous loyal customer companies in the lurch with this approach."

According to DSAG, SAP's cloud-only plans will hit on-premises customers, customers hosting on hyperscalers and managed services providers hard. DSAG also blasted SAP's 30% price increase for new innovations.

Thomas Henzler, DSAG Board Member for Licenses, Service & Support, called the price increase and SAP operating model a "real showstopper and a big disappointment." DSAG advised companies to reevaluate planned S/4HANA implementations.

DSAG's key gripes include:

  • SAP's focus on cloud ERP customers creates a two-tier system.
  • On-premises customers have invested in S/4HANA and will lack sustainability and AI innovations.
  • SAP didn't justify the move beyond it being a good business decision for the company.
  • Sustainability innovations saw price increases before the products were ready.
  • All innovations for S/4HANA clouds should be available to on-premises customers.
  • The value of maintenance fees from SAP is unclear.

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Price increases for maintenance and support are in the basic enterprise software strategy book of vendors – especially when they want and need to move customers to new offerings. But customers know this and are sensitive to any pressure tactics like this – so SAP needs to be very careful. Otherwise customers may be motivated to start talking to SAP competitors – something SAP does not want and frankly cannot afford. SAP needs to convert the clear majority of its install base to S/4HANA to remain the SAP as we know it – the leading ERP vendor on this planet."


Unity Software scales digital twin business

Unity Software is best known for its game and virtual reality development platforms, but the company also has a fast-growing business focused on digital twin creation.

Digital twins, data-driven virtual representations of physical items, are seeing strong gains for Unity, which delivered second quarter revenue of $533.5 million, well ahead of Wall Street expectations for $518 million. The company reported a net loss of $192 million, or 51 cents a share, and adjusted EBITDA of $99 million.

While Unity has been getting a lot of press as a platform aligned with Apple's Vision Pro launch, the digital twin business is maturing rapidly. The digital twin business sits inside Unity Industries, a part of the company's Create Solutions unit, which had second quarter revenue of $193 million, up 17% from a year ago.

Unity sells Unity Industry for $4,950 per year per license. Unity Industry, which provides metaverse, digital twin and augmented and virtual reality creation tools, is the fastest growing Unity SKU ever launched, the company said.

Speaking on a conference call with analysts, Unity CEO John Riccitiello touched on digital twin momentum multiple times. "We're getting great traction in the market, and we're now focused on delivering repeatable solutions, high-margin solutions that deliver ratable revenue streams and consumption revenue streams," said Riccitiello, who added Unity has cut back on professional services to rely on partners like Capgemini and Booz.

Riccitiello said that Unity is looking to embed generative AI into its digital twin platform that includes data ingestion, creation, visualization, simulation and collaboration tools. "An agent inside of a digital twin can predict what's going to happen next or run scenarios or run simulation. We feel great about that. That's ratable and consumption revenue," he said.

Unity is making money three ways in digital twins. First, there's the services part where a manufacturer, automaker or city is looking to get running on Unity via services or partners. The second revenue driver is the seats sold. Unity has combined all the tools into one suite. And finally, there is ratable revenue when the customer has the digital twin running and using Unity Cloud for simulation, rendering or computer vision. That's a cloud model.

Riccitiello said:

"We've been supply constrained, not pushing a lot for demand, and it's the sectors that you know. We're very keen on government. Another area where there's large interest is manufacturing, particularly the automotive industry, but also specialty manufacturers. There's a fair amount of interest on the retail side. And there's a lot of architecture engineering and construction. What we're working to do between now and the end of the year is to bring it down to a handful of turnkey solutions that we can scale rapidly. We think that's the best way to scale this business."

Indeed, Unity Industries generated roughly $58 million in revenue. Not bad for a company known for gaming and metaverse applications.


Qualcomm outlook light, but preps on-device generative AI processors

Qualcomm said it will roll out a series of generative AI products at its Snapdragon Summit in October. The company is betting that its mobile processor units will be a must have for AI use cases on edge computing and devices.

The idea of using local compute power to run AI models has been picking up. Enterprises are pondering whether they can offload AI workloads to devices instead of running them on their dime.

Cristiano Amon, CEO of Qualcomm, made the comments about AI use cases on the company's fiscal third quarter earnings conference call. Qualcomm outlined a collaboration with Microsoft to enable on-device generative AI applications. The aim would be to make generative AI more affordable and private. Qualcomm also is collaborating with Meta on running AI models like Llama 2 on Snapdragon devices.

"Our next-generation PC platform with integrated custom Orion CPUs and a significantly upgraded AI engine remains on track for commercial readiness. We look forward to sharing more information at our Snapdragon Summit in October," said Amon.

Amon said:

"Everything that is real-time AI and requires context and low latency and personalization applies to all of our markets. We're going to announce a new set of products that will be generative AI compliant across use cases. If our customers and partners come up with new use cases and you have a gen AI capable smartphone, it creates an upgrade cycle.

"GenAI on device needs a different computing platform that can enable AI continuously at low power."

While Amon's comments foreshadow the future focus for Qualcomm, the company has plenty of issues with its core business. Qualcomm's third quarter earnings topped estimates, but the outlook for the fourth quarter fell short of expectations. Handset chip sales fell 25% from a year ago.

Qualcomm reported third quarter non-GAAP earnings of $1.87 a share, above the $1.81 a share estimate. Revenue for the quarter was $8.44 billion, lower than the $8.5 billion expected for the third quarter. Net income was $1.60 a share. As for the outlook, Qualcomm said it expected non-GAAP earnings between $1.80 a share and $2 a share. Sales for the fourth quarter will be between $8.1 billion to $8.9 billion. That wide range was short of estimates calling for non-GAAP earnings of $1.91 a share on revenue of $8.7 billion.

Among the takeaways:

  • Amon said Android device demand will be flat and Qualcomm is looking more toward expanding its reach into the mid-tier.
  • Net income for the third quarter was down 52% from a year ago.
  • Automotive revenue was up 13% in the third quarter.
  • Amon said inventory among OEMs is bloated and "would be an issue through the end of the calendar year."
  • Qualcomm gained share in Android devices. Qualcomm said Apple bought components earlier in the year, but will see a pickup in the next quarter. 
  • IoT revenue was down 24%.
  • Handset units in 2023 will be down at least high-single digit percentage rates relative to 2022. That forecast reflects the economy and slow recovery in China.
  • Qualcomm will cut costs and restructure in the first half of fiscal 2024.

RingCentral buys Hopin Events, Sessions: Here's what it means

RingCentral is buying Hopin Events, an event management platform for virtual and hybrid events, and Hopin Sessions, a personalized engagement system, as the company increasingly goes toe-to-toe with Zoom Video Communications.

Zoom has been encroaching on RingCentral's turf with virtual meetings, Zoom Phone and contact center products. Zoom Spaces is for conference room systems and Zoom Events provides an event platform. RingCentral with the Hopin assets will now face off with Zoom Events.

Hopin will keep StreamYard, Superwave and Streamable and replace CEO Johnny Boufarhat with Badri Rajasekar, Hopin's Chief Technology and Product Officer.

More:

RingCentral said in a statement that the acquisition includes technology, customer relationships and engineering, product and sales employees. RingCentral's plan is to expand into interactive events.

Terms of the deal weren't disclosed.

Hopin Events is a platform that can host events including conferences, multi-track sessions, sponsor booths and registration tools. RingCentral will take Hopin Events and combine it with RingCentral Video, RingCentral Rooms and RingCentral Webinar.

Constellation Research's take

Constellation Research analyst Liz Miller follows Unified Communications- and Contact Center-as-a-service. Here's what she had to say about RingCentral's move in a research note on the fly:

"The Unified Communications space is trying to keep things spicy with Zoom and RingCentral flexing to bring more capabilities and channels into the UCaaS and communications ecosystem. Zoom, ever since that failed acquisition of Five9, has had its eyes on rounding out their CCaaS ecosystem and aggregating the tools, capabilities, and partnerships to advance on their promise of a contact center cloud offering on top of their UCaaS tool collection. RingCentral has been forging partnerships to extend native connectivity with platforms and ensure that CCaaS is actually part of their larger unified communications story. So now, events and event engagement tools enter the fray.

Hopin is a leader in the modern event management space, delivering solid solutions for virtual events like webinars and large-scale hybrid events. Between their Events and Sessions offerings, enterprises can host those large-scale user conferences, festivals and fairs on the same platform they host side meetings, webinars or video calls. Now with RingCentral, event pros can connect those live event meetings at conferences, follow-up calls and conversations to webinars, demos and customer success meetings.

For RingCentral, where this gets interesting is when you look at their announcements across the year for expanded partnerships (AWS and Avaya to name a few). You also can’t quite ignore the rumor mill that has RingCentral going deeper with the likes of 8x8 to round out CCaaS offerings and tap into the XCaaS motion that 8x8 has been advancing. The goal here seems clear: More intentionally connect unified communications and contact tools, workflows and modes of bi-directional collaboration."
