Acquisitions, TPU Chips, Zendesk | ConstellationTV Episode 78

ConstellationTV episode 78 is here! Watch co-hosts Liz Miller and Holger Mueller analyze the latest #enterprise tech news (Google TPU chips and acquisition news: HubSpot rumors, Salesforce/Informatica).

Then hear from Constellation analysts live at #GoogleCloudNext and conclude with analysis from Liz on Zendesk Relate 2024. Watch until the end for bloopers!

0:00 - Introduction
1:50 - Enterprise #technology news coverage
14:30 - Google Cloud Next key takeaways
25:36 - Zendesk Relate 2024 analysis
31:25 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts. Tune in live at 9:00 a.m. PT / 12:00 p.m. ET every other Wednesday!

On ConstellationTV: https://www.youtube.com/embed/qqgzD2RhY5U

77% of CxOs see competitive advantage from AI, says survey

Seventy-seven percent of CxOs believe AI will give their companies a competitive advantage, but 91% of companies will determine that they don't have enough data to achieve the level of precision needed for trust, according to a Constellation Research and Dialpad survey.

The survey is based on responses from more than 1,000 senior executives in the US, Canada, UK, Australia and New Zealand about their AI initiatives.

Key takeaways from the survey include:

  • 77% of leaders believe AI will give them competitive advantage.
  • 75% of respondents believe AI will have a significant impact on their roles in the next three years.
  • 54% are concerned about AI regulation.
  • 38% are moderately to extremely concerned about AI.
  • 72% of CxOs plan to reskill workers for AI.
  • 69% of respondents are already using AI at work.
  • 33% of executives say their companies are using two AI solutions, 15% are using three and 9.5% are using four or more.

The survey also focused on early adopters who are applying AI to analytics, work automation, content creation, forecasting and insights. These CxOs are betting that the Return on Transformation Investments (RTI) with AI will come from efficiency, revenue and growth, compliance and risk, and proactive monitoring.

While security, data leakage, generative AI hallucinations and cost are key concerns for CxOs, a lack of high-quality, high-volume data has emerged as a longer-term concern. Ninety-one percent of companies will determine they don't have enough data to achieve a level of precision they trust. For now, however, 66.5% of CxOs believe their team is getting enough data to power AI efforts.

Related: Middle managers and genAI | Why you'll need a chief AI officer | Enterprise generative AI use cases, applications about to surge | CEOs aim genAI at efficiency, automation, says Fortune/Deloitte survey

UnitedHealth sees $1.35 billion to $1.6 billion hit in 2024 due to Change Healthcare cyberattack

UnitedHealth Group has tallied up the costs from its Change Healthcare cyberattack including direct response, funding to care providers and lost revenue as the incident sucked out $3 billion of cash flow in the first quarter.

For 2024, UnitedHealth said the tab for the Change Healthcare cyberattack could be as high as $1.6 billion.

In February, UnitedHealth's Change Healthcare unit was hit with a ransomware attack. Because Change Healthcare processes claims and handles other financial workflows, prescriptions couldn't get filled and physicians ran low on cash.

On March 27, UnitedHealth said Change Healthcare could process medical claims, but its update page notes that some payer processes are being restored through April. The company also said it has provided more than $6 billion in advance funding and interest-free loans to care providers.

The Change Healthcare costs are couched in non-GAAP results and pro forma noise; you have to scroll to the bottom of UnitedHealth's first-quarter earnings release to get a feel for them. Here's the breakdown:

  • $1.22 billion: First quarter net loss for UnitedHealth, but some of that was due to the sale of a subsidiary.
  • $279 million: Business disruption impacts to UnitedHealth's Optum unit, which houses Change Healthcare. Business disruption impacts refer to revenue lost during the cyberattack.
  • $593 million: Total direct response costs due to the cyberattack. Costs attributed to the Optum unit were $363 million.
  • $872 million: Total UnitedHealth costs related to the Change Healthcare attack.
  • $1.35 billion to $1.6 billion: Total cyberattack hit for 2024 as projected by UnitedHealth, or $1.15 a share to $1.35 a share.
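
The projected range can be sanity-checked against the per-share figures with quick arithmetic. The share count below is derived from the company's own projections, not taken from its filings:

```python
# Hedged sanity check: divide UnitedHealth's projected 2024 cyberattack cost
# range by the projected per-share hit to see the implied share count.
low_cost, high_cost = 1.35e9, 1.6e9   # projected total hit, USD
low_eps, high_eps = 1.15, 1.35        # projected per-share hit, USD

implied_low = low_cost / low_eps      # shares implied at the low end
implied_high = high_cost / high_eps   # shares implied at the high end

print(f"Implied shares: {implied_low/1e9:.2f}B to {implied_high/1e9:.2f}B")
```

Both ends of the range imply roughly 1.2 billion shares, so the dollar range and the per-share range are internally consistent.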

Adjusted for the Change Healthcare fiasco, UnitedHealth reported earnings of $6.91 per share on revenue of $99.8 billion. Both figures topped Wall Street estimates. UnitedHealth's adjusted figures included the revenue hit to Change Healthcare and excluded direct response costs.

Broadcom CEO Tan says VMware customers can get support extensions amid subscription transition

Broadcom CEO Hock Tan penned another missive to VMware customers arguing that the software vendor has lowered the price of VMware Cloud Foundation, poured money into research and development, benefited partners and will complete the transition to subscriptions.

Tan also noted that Broadcom is working to extend support contracts for VMware customers struggling with the transition to subscription pricing.

The latest blog from Tan comes as Reuters reported the European Commission has received complaints about Broadcom's VMware pricing changes and the regulator sent requests for information to Broadcom.

Broadcom has had a steady cadence of blogs that appear to be aimed at allaying VMware customer concerns. To date, Nutanix has been the biggest beneficiary of VMware customer angst. It's unclear whether Broadcom's blog barrage is hitting the mark, but the missives collectively acknowledge that VMware customers may be a smidge disgruntled.

Here's a look at the key points from Tan in order of importance.

Broadcom acknowledges that "fast-moving (VMware) change may require more time." Tan wrote:

"We continue to learn from our customers on how best to prepare them for success by ensuring they always have the transition time and support they need. In particular, the subscription pricing model does involve a change in the timing of customers' expenditures and the balance of those expenditures between capital and operating spending. We heard that fast-moving change may require more time, so we have given support extensions to many customers who came up for renewal while these changes were rolling out. We have always been and remain ready to work with our customers on their specific concerns."

Customers can keep their existing perpetual licenses for vSphere. Tan wrote:

"It is important to emphasize that nothing about the transition to subscription pricing affects our customers’ ability to use their existing perpetual licenses. Customers have the right to continue to use older vSphere versions they have previously licensed, and they can continue to receive maintenance and support by signing up for one of our subscription offerings.

To ensure that customers whose maintenance and support contracts have expired and choose to not continue on one of our subscription offerings are able to use perpetual licenses in a safe and secure fashion, we are announcing free access to zero-day security patches for supported versions of vSphere, and we’ll add other VMware products over time."

VMware is standardizing the pricing metric across cloud providers on per-core licensing to match its end-customer licensing. Tan said this standardization will allow enterprises to seamlessly move VMware Cloud Foundation from on-premises to cloud and back if needed.

Akamai adds Nvidia GPUs to cloud network, initially targeting media

Akamai has added Nvidia GPUs to its distributed cloud network, adding a service optimized for processing video content at the edge.

The cloud service, announced at the National Association of Broadcasters' (NAB) conference, is powered by Nvidia RTX 4000 Ada Generation GPUs.

Akamai has been steadily building out its distributed cloud infrastructure for multiple use cases including AI and machine learning workloads that require low latency near data.

According to Akamai, the Nvidia RTX 4000 instances can process video frames up to 25x faster than CPU-based encoding and transcoding. Although the Akamai Nvidia-powered instances are initially aimed at the media industry, the company is also betting on other use cases.

These use cases include:

  • Virtual reality and augmented reality content.
  • Generative AI and machine learning workloads for training and inference at the edge.
  • Data analytics and scientific computing.
  • Gaming and graphics rendering.
  • High-performance computing.

Akamai said it will continue to add GPU instances optimized for specific industries.

Foundation model debate: Choices, small vs. large, commoditization

Foundation model debates (large language models, small language models, orchestration, enterprise data and choices) are surfacing in ongoing enterprise buyer discussions. The challenge: You may need a crystal ball and architecture savvy to avoid previous mistakes such as lock-in.

In recent days, we have seen the following:

Dion Hinchcliffe: Enterprises Must Now Cultivate a Capable and Diverse AI Model Garden

Now that's a lot to talk about considering how enterprises need to plow ahead with generative AI, leverage proprietary data and pick a still-in-progress model orchestration layer without being boxed in. The dream is that enterprises will be able to swap models as they improve. The reality is that swapping models may be challenging without the right architecture.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

Will enterprise software vendors use proprietary models to lock you in? Possibly. There is nothing in enterprise vendor history that would indicate they won't try to lock you in.

The crystal ball says that models are likely to be commoditized at some point. There will be a time where good enough is fine as enterprises toggle between cost, speed and accuracy. Models will be like compute instances where enterprises can simply swap them as needed.

It's too early to say that LLMs will go commodity, but there's no reason to think they won't. Should that commoditization occur, platforms that can create, manage and orchestrate models will win. However, there is a boom market for models for the foreseeable future.

AWS Vice President of AI Matt Wood noted that foundation models today are "disproportionately important because things are moving so quickly." Wood said: "It's important early on with these technologies to have that choice, because nobody knows how these models are going to be used and where their sweet spot is."

Wood said that LLMs will be sustainable because they're going to be trained in terms of cost, speed and power efficiency. These models will then be stacked to create advantage.

Will these various models become a commodity?

"I think foundational models are very unlikely to get commoditized because there is just so much utility for generative AI. There's so much opportunity," said Wood, who noted that LLMs that initially boil the AI ocean are being split into prices and sizes. "You're starting to see divergence in terms of price per capability. We're talking about task models; I can see industry focus models; I can see vertically focused models; models for RAG. There's just so much utility and that's just the baseline for where we're at today."

He added:

"I doubt these models are going to become commoditized because we haven't yet built a set of criteria that helps customers evaluate models, which is well understood and broadly distributed. If you're choosing a compute instance, you can look at the amount of memory, the number of CPUs, the number of cores and networking. You can make some determination of how that will be useful to you."

In the meantime, your architecture needs to ensure that you aren't boxed in as models leapfrog each other in capabilities. Rapid advances in LLMs mean that you’ll need to hedge your bets.
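
One hedged way to keep that flexibility is a thin abstraction layer, so application code never depends on a specific vendor SDK. The sketch below is illustrative only: the `ChatModel` interface, the registry and the stand-in providers are hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface the application codes against."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class EchoModel:
    """Stand-in 'provider' so the sketch runs without any vendor SDK."""
    name: str
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

# A registry lets you swap models per task (cost vs. speed vs. accuracy)
# without touching application code.
REGISTRY: dict[str, ChatModel] = {
    "cheap-fast": EchoModel("small-model"),
    "accurate": EchoModel("large-model"),
}

def answer(task: str, prompt: str) -> str:
    return REGISTRY[task].complete(prompt)

print(answer("cheap-fast", "Summarize this contract."))
```

Swapping the model behind "accurate" then means changing one registry entry, not rewriting application code, which is exactly the hedge against leapfrogging models.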

Constellation Research analyst Holger Mueller noted:

"We are only at the beginning of the foundation model era. The layering of public LLMs that are up to speed on real-world developments and logically merged with industry knowledge, SaaS packaging, and functional enterprise domain-specific models is going to be crucial for gen AI success. Bridging the gap from real-world awareness to fitting an enterprise makes genAI workable and effective."

Anthropic CEO Amodei on where LLMs are headed, enterprise use cases, scaling

Anthropic CEO Dario Amodei said large language model personality is starting to matter, argued costs to train models will come down and agents that act autonomously will need more scale and reliability.

Those were some of the takeaways from Amodei, who spoke at Google Cloud Next.

Model personalities will start to matter. Amodei covered the launch of Claude 3 and said a lot of effort was put into making the large language model personable. He said:

"One thing that we worked particularly hard on was the personality of the model. We've had this kind of chat paradigm of models for a while, but how engaging the model was hasn't had as much attention as reasoning capabilities. How much does it sound like a human? How warm and natural is it to talk to? We had an entire team devoted to making sure that Claude 3 is engaging."

Models need families. Amodei said the strategy for Claude 3 was to create a family of models. "Opus is the largest one. Sonnet is the smaller one, but faster and cheaper. Haiku is very fast and very cheap," he said. "Enterprises have different needs. Opus is very good at performing difficult tasks where you have to do exact calculations and those calculations have to be accurate. Sonnet is the workhorse model in the middle. I'm excited about Haiku because it outperforms almost all of its intelligence class while being fast and cheap."

More Anthropic: AWS ups its investment in Anthropic as giants form spheres of LLM influence | Constellation ShortList™ Cloud AI Developer Services

Costs of training and inference. Amodei said costs for training and inference are coming down and will continue to fall, but more will be spent on training models. He said:

"I think the cost of training a particular model is going to go down very drastically but the models are so economically valuable that the amount of money that's spent on training is going to continue growing exponentially. We'll eat up all the efficiency gains at least at the higher end of models. Within Anthropic we measure things in units we call effective compute. I think that is going to go up 10x per year. That can't last forever, and no one knows for sure how long it'll last, but that's where we are right now."

How LLMs will develop over next few years. Amodei said model intelligence will come from pure scale. Future reliability and ability to handle specialized tasks will come from more scale and multi-modality with images, video and audio inputs. There will also be interactions with the physical world, maybe even robotics.

Hallucinations will also be a key challenge. "We have substantial teams to reduce the amount of hallucination present in models," said Amodei.

"The final thing I expect to see in the next year or two is agents, models acting in the world," he added. "We've seen lots of instantiations of agents so far, but we haven't seen anything yet."

Enterprise use cases. Amodei said as models get smarter and trained for longer, they become much better at coding tasks. Healthcare and biomedicine will also be key use cases as well as finance and legal uses. "These use cases often involve reading long documents which Claude 3 has gotten better at relative to previous models," said Amodei.

Corporate use cases appear to be split evenly between creating internal tools to make employees more productive and customer facing uses. Consumer-facing companies will enable users to do more sophisticated tasks by coupling APIs.

Amodei said the cost of models for these use cases will become less of an issue since they'll be right sized for the task at hand.

The importance of prompt engineering. Amodei said enterprises should spend time with prompt engineers to test models and make sure they work as expected.

He said:

"We are still trying to figure out how our own models work. A large language model is a very complicated object. When we deploy it, there's no way for us to figure out everything that it's capable of ahead of time. One of the most important things we do is just providing good prompt engineering support. It sounds simple, but 30 minutes with a prompt engineer can often make an application work when it wasn't before, or get better at handling errors.

I always recommend to an enterprise customer just meet with one of our prompt engineers for half an hour. It might completely transform your use case. There's a big difference between demos and actual deployment."

Safety and reliability. Anthropic recently published a paper on jailbreaking models. Amodei also said partnerships with Google Cloud revolve around security and reliability. Enterprises need both reliability and security to scale deployments of generative AI.

Amodei said short term concerns for models revolve around bias and misleading answers when important decisions need to be made in industries like finance, insurance, credit and legal. Overall, Amodei said his concern is how models will become increasingly powerful. He said:

"I think it's going to be possible for folks to misuse models. I worry about misuse of biology. I worry about cyberattacks. We have something called a responsible scaling plan that's designed to detect those threats, which honestly are not really very present today. We're only starting to see the beginning of them. So, every time we release a new model, we run it through this. We run tests to see if we are getting any closer to the world where we would be worried about these risks being present in models. And so far, the answer has always been no, but they're a little bit better at these tasks than they were before. Someday, the answer will be yes, and then we have a prescribed set of safety procedures that we'll take on the model. The other side of the risks comes as models become more autonomous."

When models become agents and more autonomous, they can take actions without humans overseeing them. "I think there will be very substantial risks in this area, and we'll have to have policies. We'll have to mitigate them," said Amodei, who noted that enterprises will ask about those concerns as much as they do data privacy and hallucinations today.

Large custom models. Amodei said enterprises won't face a choice in the future between a small custom model and a large general one. The correct fit will be a large custom model. LLMs will be customized for biology, finance and other industries.

What needs to happen beyond LLMs to create agents that take actions on your behalf? Amodei said "it's kind of an unexplored frontier." He said:

"One of my guesses is that if you want an agent to act in the world it requires the model to engage in a series of actions. You talk to a chat bot, it only answers and maybe there's a little follow-up. With agents you might need to take a bunch of actions, see what happens in the world or with a human and then take more actions. You need to do a long sequence of things and the error rate on each of the individual things has to be pretty low. There are probably thousands of actions that go into that. Models need to get more reliable because the individual steps need to have very low error rates. Part of that will come from scale. We need another generation or two of scale before the agents will really work."
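
Amodei's reliability point compounds multiplicatively: if each step of an agent run succeeds independently with probability p, a run of n steps succeeds with probability p^n. The numbers below are illustrative, not Anthropic's:

```python
# Success probability of an n-step agent run when each step succeeds
# independently with probability p: p ** n.
def run_success(p: float, n: int) -> float:
    return p ** n

# Even 99.9% per-step reliability leaves only about a 37% chance of a
# clean 1,000-step run; 99% per-step makes it essentially zero.
for p in (0.99, 0.999, 0.9999):
    print(f"p={p}: 1,000-step success = {run_success(p, 1000):.1%}")
```

This is why Amodei ties agent viability to driving per-step error rates very low, and to another generation or two of scale.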

Microsoft raises Dynamics 365 prices starting Oct. 1

Microsoft said it is raising the prices for its Dynamics 365 enterprise resource planning and customer relationship management applications. 

The company said that Dynamics 365 hasn't seen a price increase in more than five years. The price changes go into effect Oct. 1 and range from an additional $10 to $15 a month per user for most apps to $30 more for select apps.

Microsoft's Dynamics 365 price increases apply to cloud and on-premises versions. US government list prices will increase 10% on Oct. 1, 2024 and then see a smaller increase on Oct. 1, 2025 to be on par with commercial pricing. These price increases don't appear to affect small business customers.

Copilot capabilities delivered in Dynamics 365 are included in the core SKUs; if Copilot is part of Dynamics 365, it isn't charged as extra. The separately licensed Copilot for Service and Copilot for Sales (both generally available) and Copilot for Finance (in preview) products are compatible with multiple CRM systems, including Salesforce, and carry per-seat, per-month pricing.

Here's a look at the changes.

Product | Price before Oct. 1, 2024 | Price as of Oct. 1, 2024
Microsoft Dynamics 365 Sales Enterprise | $95 | $105
Microsoft Dynamics 365 Sales Device | $145 | $160
Microsoft Dynamics 365 Sales Premium | $135 | $150
Microsoft Relationship Sales | $162 | $177
Microsoft Dynamics 365 Customer Service Enterprise | $95 | $105
Microsoft Dynamics 365 Customer Service Device | $145 | $160
Microsoft Dynamics 365 Field Service | $95 | $105
Microsoft Dynamics 365 Field Service Device | $145 | $160
Microsoft Dynamics 365 Finance | $180 | $210
Microsoft Dynamics 365 Supply Chain Management | $180 | $210
Microsoft Dynamics 365 Commerce | $180 | $210
Microsoft Dynamics 365 Human Resources | $120 | $135
Microsoft Dynamics 365 Project Operations | $120 | $135
Microsoft Dynamics 365 Operations – Device | $75 | $85
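
For a quick read on how steep the increases are, the percentage change for each row of the table above can be computed directly (prices are copied from the table; the SKU labels are shortened here for brevity):

```python
# Percentage increase per Dynamics 365 SKU, using the list prices above
# (USD per user per month, before vs. after Oct. 1, 2024).
prices = {
    "Sales Enterprise": (95, 105),
    "Sales Device": (145, 160),
    "Sales Premium": (135, 150),
    "Relationship Sales": (162, 177),
    "Finance": (180, 210),
    "Supply Chain Management": (180, 210),
    "Human Resources": (120, 135),
    "Operations - Device": (75, 85),
}

for sku, (before, after) in prices.items():
    pct = (after - before) / before * 100
    print(f"{sku}: ${before} -> ${after} (+{pct:.1f}%)")
```

The increases cluster between roughly 9% and 17%, with the $180 SKUs (Finance, Supply Chain Management, Commerce) taking the largest jump at about 16.7%.
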

HOT TAKE: Cisco completes Splunk Acquisition - Constellation’s Take.

Last week, executives from Cisco and Splunk, including Liz Centoni, Jeetu Patel and Tom Casey, held a 45-minute roundtable where the combined entity outlined its plans for Cisco's observability future. General opportunities and high-level customer observability pain points were discussed, but customers still seek concrete action plans and specific execution details from the merger, and the market wanted more information about how these major observability platforms would come together. The full video of the roundtable is available here: https://www.youtube.com/watch?v=PsZ2z66i6JI

Tom Casey from Splunk has taken over product ownership of the Cisco #Observability solution strategy. This move aims to reduce leadership-alignment friction by eliminating competing priorities across #O11y divisions and driving a unified platform. The collaboration between Cisco and Splunk has the potential to provide visibility from the network to the application level, which other observability vendors lack. However, the details of how this will be accomplished are not yet clear.

Splunk has been working hard to integrate its recent acquisitions, including SignalFx, Omnition, Rigor, Flowmill, Plumbr and VictorOps, into its observability platform. With Cisco acquiring Splunk, Splunk's initial direction of keeping SignalFx as the Splunk observability cloud while keeping cloud logs on the Splunk Platform (it was difficult to re-architect and merge them completely) might change. We still don't know which platform the incoming observability products, such as AppDynamics, ThousandEyes and FSO (Full Stack Observability), will move into or merge with. Splunk has also diverted investment from or decommissioned some acquisitions, including VictorOps and Incident Intelligence, to simplify things. Although engineering and support teams still maintain those solutions, the product and strategy teams were eliminated, indicating the future of these products may be short-lived.

Given Cisco's history of integrating observability products such as AppDynamics and ThousandEyes alongside its own organic observability platform, FSO, and the time Cisco took to streamline operations, field teams and pricing into a combined solution, Constellation expects this new collaboration to take even longer to come to fruition. Many existing Splunk and AppDynamics customers have expressed concerns about how the integration will unfold. For example, they worry about getting the right recommendations from field and solution teams given the many overlapping solutions. Customers are also very nervous about the combined Cisco observability pricing structure going forward, and whether they will pay a double-dip fee to Cisco; that structure has not been fully disclosed yet.

The combination of multiple platforms, add-ons, suites, packaging, overlapping features and licensing models may confuse customers and field teams until a unified pricing structure and full-stack unified platform take shape. The overlaps include DEM (synthetic monitoring and RUM), APM (distributed tracing), metric stores, tracing stores, session replay, infrastructure monitoring and log capabilities, along with Splunk's own powerful query language (SPL), which Cisco's observability solutions lack. Cisco should proactively and clearly explain these outcomes to customers and execute against specific, defined milestones.

Furthermore, both companies claim the acquisition is meant to catch up with AI demand. Yet neither is a leader in infusing AI into its observability or AIOps solutions, and other vendors are ahead of Cisco/Splunk with generally available AI use cases. For instance, Splunk AI Assistant (formerly SPL Copilot), introduced at .conf23, is still in preview and covers a very basic use case: a natural-language interface that produces SPL (Splunk's query language) for observability data searches. Cisco's AI does not perform any observability-related tasks yet. It will be interesting to see how many AI use cases they can support quickly to catch up with the market.

Since a significant portion of Splunk's revenue comes from annual recurring revenue (ARR), the deal could help Cisco accelerate its shift to an ARR model, which it has been pursuing for the last few years.

Constellation POV

Based on our conversations with existing Splunk and Cisco customers and Splunk ex-employees, Constellation believes the integration faces many challenges. Constellation expects the combined entity to take at least two years to complete post-merger integration in a manner where users will see the benefits.

Although the Cisco/Splunk team has said all the right things so far, execution will be critical, and it could be painful and slow, which may cost it some large accounts that are already experimenting with competing solutions. Constellation believes the overall merger will bring benefits to customers and partners, but be prepared for a much longer-than-expected post-merger integration, given the different architectures, consumption models, data collected, cultures and technical debt accrued over the years.

At first glance, the idea of combining security with observability seems to be a good one, and it aligns well with Splunk's ongoing mission before the acquisition. Bottom line: while this high-level strategy sounds promising, it needs more detail before it can be fully understood and its value realized.

HOT TAKE: Adobe’s Frame.io Serves Up a Reimagined Version and I’m Gloating

When Adobe acquired Frame.io, it was chalked up as just another Creative Cloud solution that was so niche and specialized only people with expensive cameras and the agencies that hire them would reap the rewards. But in the wake of the announcement in 2021, I blogged a hot take:

“Imagine what happens when Adobe pulls the best of the best from BOTH Workfront AND Frame.io to reimagine what collaboration for creativity and experience really works like. Only time will tell how far collaboration will connect the two sides of the Adobe coin…If anything can bridge that gap in a meaningful way, it just might be collaboration and workflows.”

I WAS RIGHT. IT IS HAPPENING!

That gloat felt good. Now back to the news at hand.

Adobe’s Frame.io V4 takes collaboration to the next level, focused on the work creative professionals must sync, share, comment on and coordinate to create new experiences. From will.i.am creating a new music video to a brand marketer creating a new story-driven transmedia campaign, V4 has both the asset and the process covered. Much like the other updates and modernizations across Creative Cloud, the reimagination of Frame.io has me feeling the rage only true jealousy can bring on.

Let me explain: Many moons ago, I worked on a rebrand for a cosmetic product that required an extensive shoot involving multiple models with unique-yet-natural looks to satisfy a year-long campaign involving photo and video assets. The shoot was booked with an agency, a videographer, a casting agent and a photographer in Cape Town, South Africa…I was in Campbell, California. Let the creative chaos games begin. Briefs were shared; mood boards, storyboards and concept briefs passed around for what felt like lifetimes.

As these types of creative jobs go…the shoot happened while I was sound asleep thanks to time zones, so when I got the test shots 48 HOURS later, I had to send that “delicate” email of “The brief clearly outlined casting and I approved the first round of models. Why are all the test shots you sent back of totally different models in completely different scenes, nowhere near what was outlined on the boards?”

Days would be lost in the name of collaboration. Chaos was the norm in the name of asset and file sharing. Budget was lost to misinterpretation.

This new version of Frame.io turns that entire chaotic scenario into a streamlined workflow centered around an easy-to-view-and-review interface, common centralized asset storage and intentionally uncomplicated processes that consolidate the work of creation. I’m secure enough to admit that how elegantly Frame.io reframes the chaos makes me more than a little jealous. It takes hold of the process from casting through to file transfer and sharing, delivers a single pane for commenting and collaborating and intentionally works to accelerate the process with alerts and aggregated comment drawers for smooth signoffs and approvals.

Version 4 also comes with a new single metadata framework that underpins everything, allowing all assets, data and collaborators to come together in a single, unified platform. Now every piece of the process can exist as metadata on an asset or file. Loved working with an actor you met in casting…that lives on that video. Want to view only the dailies for a given scene or actor…yup…that’s metadata that can live with an asset and be easily searched. Frame.io extends the power of this metadata framework with Collections, which aggregate and segment assets by that metadata.
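To make the idea concrete, here is a minimal sketch of what metadata-driven Collections look like in principle. This is an illustrative toy model, not the actual Frame.io API; the `Asset` class, the metadata fields and the `collection` helper are all hypothetical names invented for the example.

```python
# Hypothetical illustration of metadata-on-assets and Collection-style
# filtering, in the spirit of Frame.io V4. Not the real Frame.io API.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    metadata: dict = field(default_factory=dict)

def collection(assets, **criteria):
    """Return the subset of assets whose metadata matches every criterion."""
    return [a for a in assets
            if all(a.metadata.get(k) == v for k, v in criteria.items())]

# Dailies tagged with scene and actor metadata (illustrative values).
assets = [
    Asset("daily_001.mp4", {"scene": "beach", "actor": "Model A"}),
    Asset("daily_002.mp4", {"scene": "studio", "actor": "Model B"}),
    Asset("daily_003.mp4", {"scene": "beach", "actor": "Model B"}),
]

# "View only the beach-scene dailies" becomes a metadata query,
# not a hunt through folders and email threads.
beach_dailies = collection(assets, scene="beach")
```

The point of the sketch: once every attribute of the process lives as metadata on the asset itself, a Collection is just a saved query over that metadata rather than another folder to maintain.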

Let’s follow the bouncing ball of my gloating once more: close your eyes and imagine just how powerful search becomes as this metadata framework extends beyond Frame.io into, for argument’s sake, a Digital Asset Management (DAM) solution like Adobe Assets or a workflow and work management solution like Workfront.

Don’t worry…you won’t have to imagine for long, as Frame.io’s integration with Workfront is expected to be released later this year, enabling a new unified review-and-approval workflow between cross-functional teams. For marketers, agencies and brand leaders, we are talking about visibility and work that connects CAMERA to CAMPAIGN! That’s where this is heading!

Frame.io V4 beta is rolling out in stages for Free and Pro customers on web, iPhone and iPad throughout 2024, with Team and Enterprise customers expected to get the V4 update later in the year. In a video blog announcing V4, Frame.io’s founder, Emery Wells, also shared a simplified pricing model for the new version.

This is the fourth iteration of Frame.io since the product launched in 2015 and the biggest update the company has ever introduced, reimagining the platform from the ground up while remaining grounded in its customers’ asks and innovations. Clearly, this whole “expand workflows to cover casting, scouting, and dailies review” push makes me mutter like an old lady under my breath with that “BACK IN MY DAY” lament. But it really can’t be overstated just how much this work needs this overhaul. We need to reimagine the work and workflows of creatives and creators with tools that don’t just start and stop with outputs and assets but truly connect the totality of this work we call creation.

 

Image generated by Adobe Firefly (and my sick prompt skillz)
