Results

Atlassian: A look at its system of work strategy, enterprise uptake, AI approach

Atlassian is exiting its second quarter on a $5 billion annual revenue run rate as its "System of Work" strategy is landing large enterprises that want to connect their technology and business teams.

The company's goal is to hit a $10 billion annual revenue run rate, and its ability to leverage AI across its platform is resonating with large enterprises. Atlassian landed a record number of deals with more than $1 million in annual contract value in the second quarter.

Atlassian CEO Mike Cannon-Brookes said on an earnings conference call:

"Our cloud platform with AI threaded throughout is delivering. With more than 20 years of data and insights on how software, IT, and business teams plan, track, and deliver work, we're uniquely positioned to help teams across every organization on the planet work better together. Today, more than 1 million monthly active users are utilizing our Atlassian Intelligence features to unlock enterprise knowledge, supercharge workflows, and accelerate their team collaboration. We're seeing a number of AI interactions increase more than 25x year-over-year."

Atlassian reported a second quarter net loss of $38.2 million, or 15 cents a share, on revenue of $1.29 billion, up 21% from a year ago. Non-GAAP second quarter earnings were 96 cents a share, 20 cents ahead of expectations.

As for the outlook, Atlassian projected third quarter revenue of $1.34 billion to $1.35 billion with cloud revenue growth of 23.5%. For fiscal 2025, Atlassian projected revenue growth of 18.5% to 19% with cloud revenue growth of 26.5%.

The strategy

Atlassian's strategy is to expand broadly into a system of work. This work system has the potential to break down information silos and align software development, service management and work management to create what could be referred to as an alignment engine.

In a nutshell, Atlassian wants to be the system enterprises use to plan, track and execute every collaboration workflow wall-to-wall.

Here's Atlassian's plan, based on its Investor Day last year. The company said:

"As business becomes more complex, we’re seeing a rapid rise in teams like HR, marketing, and finance working with their counterparts in development and IT. Management wants to see a single system of work, teams want seamless collaboration."

This is a bit of a different spin on the work operating system category occupied by Monday.com, Asana and Smartsheet, but the efforts rhyme. Like ServiceNow, Atlassian sees its platform as something that can go well beyond software development and service management into every corporate function. In fact, Atlassian sees a $67 billion total addressable market with $18 billion of that sum sitting in its existing customer base.

Atlassian is betting it can win because its software team collaboration tools put it in a digital transformation and AI pole position. The company can also land and expand. More importantly, Atlassian has one platform where it can leverage AI across its products and 1,800 marketplace partners to extend it.

Atlassian acquires Loom for $975 million, will add asynchronous video to platform

On the product front, Atlassian is doing the following:

  • Doubling down on IT service management with Jira Service Management.
  • Moving its entire customer base to the cloud. In the second quarter, cloud revenue was $847 million, up 30%. Data center revenue, where enterprises manage Atlassian software themselves, was $362.3 million, up 32%.
  • Growing large enterprise accounts.
  • Layering AI and Atlassian Intelligence throughout the platform. Atlassian is in a natural position to turbocharge software team processes with AI.
  • Using Rovo, a human-AI collaboration tool that disperses knowledge to teams and their workflows, to land more cross-functional enterprise teams. Rovo is Atlassian's AI agent play, and uptake is in its early days. Atlassian said customers are mostly in the Rovo proof-of-concept stage.

Atlassian launches Rovo, consolidates Jira Work Management, Jira Software | Atlassian Rovo AI additions go GA with consumption pricing on deck

Can Atlassian move upstream?

Cannon-Brookes said in a shareholder letter that 10% of Atlassian's revenue in the second quarter was from large customers. The big question on the earnings call was whether Atlassian can expand in large enterprises.

According to Cannon-Brookes, the combination of Jira Work Management and Jira Software gives the company the opportunity to touch more employees and use cases in large companies.

Atlassian is betting that large enterprises will gravitate toward platforms that can enable innovation.

Cannon-Brookes said:

"The CIOs and CEOs I speak to continue to want to form a deeper strategic relationship with Atlassian, not because of any single product we have, but because of our R&D speed, the innovation we're delivering. AI is just the latest example of that, but also the breadth of the platform, the amount of things they can see it improving from their goals all the way down to the day-to-day work that they do."

Atlassian did say there's some uncertainty in the macroeconomic environment, but the risks are manageable.

Atlassian on AI agent hype, multiple foundation models

Another item that remains to be seen is how large enterprises react to the barrage of AI agents being tossed at them. Atlassian's Rovo is also in that mix.

Cannon-Brookes was asked about the competitive dynamic in agentic AI. He had some interesting things to say.

"There's no doubt we've been through these technology transformations before. And when we go through them, you run through the hype cycle up and down, and there are certain words that mean something and then mean nothing and then end up meaning something. I think agents is probably squarely in that camp. The word is used everywhere suddenly for all sorts of things that I would argue aren't agents, but you can't control how the world uses a word."

Atlassian's definition of an AI agent goes like this:

  • AI agents have a goal, are aimed at outcomes, and have "some sort of personality."
  • They have a set of knowledge and can take action.
  • There are control parameters.
  • And AI agents act like a virtual teammate.

"Atlassian agents are unique in that they can basically anywhere that a human being can be used in our software, an agent can do the same sorts of things. You can assign them issues, you can give them certain sets of knowledge, you can give them permission to certain actions. So that's pretty differentiated to other people who are building either a chatbot or fundamentally just something they're calling an agent," said Cannon-Brookes.

The ultimate barometer for enterprise AI vendors is the ability to pivot R&D. Cannon-Brookes said the AI market is moving quickly and vendors have to go with it.

"Our ability to build, deploy, get customer feedback and learn in a loop is really important in order to navigate these transitions," said Cannon-Brookes. "Anyone who tells you they know where this is going to be three years from now is a fool. What I can tell you is that we have to be able to learn really fast and move really fast and take the latest and greatest innovations and deploy them and get them to customers quickly. That is the best strategic path to gain that value over time."

In addition, enterprises will need vendors that rely on multiple models. Agentic AI is going to depend on a series of different models. "Atlassian Intelligence needs to be able to keep adapting modern models as fast as possible. Again, we're running more than 30 models from more than seven different vendors today. We continue to evaluate new models," said Cannon-Brookes. "It's also about all the data you have, the quality, the ability to search and ability to connect it."

"Ultimately, customers and users don't use an AI model, they use a piece of software, they use some high-level technology to interact with an agent," he added.


Bringing Zero Trust to SAP RISE: Zscaler and SAP Partner for Secure Cloud Migrations

For enterprises running legacy software on-premise, modernizing applications for the cloud is often a complex and risk-laden endeavor. One of the biggest hurdles in cloud adoption—especially in multicloud and hybrid deployment models—is security. Traditional perimeter-based security approaches are no longer sufficient to protect modern workloads, leaving enterprises exposed to rapidly evolving sophisticated cyber threats.

Recognizing the need for security to be embedded in cloud modernization efforts, Zscaler, a leader in cloud security and an SAP partner, now offers its Zero Trust Network Access (ZTNA) service natively integrated within RISE with SAP. Delivered through the Zscaler Zero Trust Exchange™ platform, Zscaler Private Access™ (ZPA™) for SAP enables enterprises with on-premise ERP workloads to migrate to the cloud securely and efficiently—eliminating the complexity and risks associated with traditional VPN-based access.

Source: Zscaler

Why This Matters to SAP Customers

SAP RISE enables enterprises to transition from legacy on-premise deployments to cloud-based SAP solutions, helping them become more agile and competitive. However, moving mission-critical workloads to the cloud introduces new security challenges, such as securing user access, preventing lateral movement, and protecting against cyber threats in a perimeter-less environment.

Zero Trust architecture addresses this challenge. Instead of relying on network-based security models, Zero Trust enforces least-privileged access, ensuring that only authenticated and authorized users can access SAP workloads—whether they are in private data centers, public clouds, or hybrid environments. Zscaler’s integration with SAP RISE helps customers:

  • Reduce security risks by eliminating implicit trust and verifying every access request dynamically.
  • Enable secure hybrid work by allowing employees, partners, and suppliers to securely access SAP applications from anywhere without relying on VPNs or exposing networks.
  • Improve compliance and governance by ensuring consistent security policies and real-time threat protection across cloud workloads.

Zscaler’s Role in SAP RISE: How the Integration Works

Zscaler brings its Zero Trust Exchange platform to SAP RISE, offering:

  • Secure access to SAP applications: Users can securely connect to SAP workloads without exposing applications to the internet.
  • Microsegmentation and threat containment: By enforcing Zero Trust policies, the integration minimizes lateral movement risks, reducing the impact of potential breaches.
  • End-to-end visibility and policy enforcement: Enterprises can enforce uniform security policies across SAP applications, regardless of where they are deployed.
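To make those Zero Trust mechanics concrete, here is a minimal, vendor-neutral Python sketch of a least-privileged access decision. It is not ZPA's actual policy engine or API; the identity, device-posture and per-application entitlement checks are illustrative stand-ins for the principles described above.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        groups: set[str]
        device_compliant: bool     # e.g. patched, disk encrypted
        app: str                   # a specific application, never a network segment

    # App-level entitlements: users are brokered to named apps, not onto the network.
    ENTITLEMENTS = {
        "sap-erp-prod": {"sap-finance", "sap-basis"},
        "sap-fiori": {"sap-finance", "sap-sales", "sap-basis"},
    }

    def authorize(req: AccessRequest) -> bool:
        """Every request is verified dynamically; nothing is implicitly trusted."""
        if not req.device_compliant:
            return False                                  # posture check fails -> deny
        allowed_groups = ENTITLEMENTS.get(req.app, set())
        return bool(req.groups & allowed_groups)          # least privilege, per app

    req = AccessRequest("j.doe", {"sap-finance"}, device_compliant=True, app="sap-erp-prod")
    print(authorize(req))  # True: access to one app only, limiting lateral movement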

The integration aligns with SAP’s strategy of helping enterprises modernize with secure, cloud-based solutions while ensuring business continuity, resilience, and compliance.

A Step Forward in Secure Cloud Modernization

As enterprises embark on their cloud transformation journeys, security must be a foundational element rather than an afterthought. The shift to cloud-based and hybrid environments brings significant operational benefits but also introduces new security challenges that traditional perimeter-based models cannot address. A Zero Trust approach, which enforces least-privileged access and dynamic security policies, is becoming essential for protecting modern workloads.

Partnerships such as the one between SAP and Zscaler demonstrate how security can be integrated into cloud transformation initiatives, giving enterprises the confidence to modernize without compromising protection. By embedding security into migration strategies from the outset, organizations can reduce risk, improve resilience, and accelerate digital transformation with greater trust and agility. 


 


Apple Q1 strong, but China and iPhone revenue falls

Apple's first quarter results were better than expected as Mac, iPad and services revenue gained from a year ago. But iPhone and wearables revenue declined, and China sales also took a hit in the quarter.

The company, which is betting that Apple Intelligence can drive an upgrade cycle, reported first quarter earnings of $2.40 a share on revenue of $124.3 billion.

Wall Street was expecting Apple to report earnings of $2.35 a share on revenue of $124.03 billion.

CEO Tim Cook said Apple reported its best quarter ever and added that Apple Intelligence will be available in more languages in April.

By the numbers:

  • iPhone revenue in the first quarter was $69.14 billion, down from $69.7 billion a year ago. Wall Street analysts were looking for $71 billion in iPhone sales.
  • Those flattish iPhone sales come as sales in Greater China for the first quarter were $18.5 billion, down 11% from $20.82 billion a year ago.
  • Mac sales in the first quarter were $8.99 billion, up 15% from $7.78 billion a year ago.
  • iPad revenue in the first quarter was $8.09 billion, up 15% from $7.02 billion a year ago.
  • Wearables revenue (mostly Apple Watch) in the first quarter was $11.75 billion, down from $11.95 billion a year ago.
  • Services revenue in the first quarter surged to $26.34 billion, up from $23.12 billion a year ago.
  • Sales in the Americas were $52.65 billion, up from $50.43 billion in the first quarter a year ago.
  • Apple saw revenue gains in Europe, Japan and the rest of Asia Pacific.

As for the outlook, Kevan Parekh, Apple’s CFO, said the stronger US dollar will be a headwind. The company expects second quarter revenue to grow in the low- to mid-single digit range.  

Cook said on the earnings conference call that Apple has more than 2.35 billion active devices. Cook was asked about Apple Intelligence and demand. He said:

"We did see that the markets where we had rolled out Apple Intelligence that the year-over-year performance on the iPhone 16 family was stronger than those where Apple intelligence was not available."

Regarding China, Cook said:

"Over half of the decline that we experienced was driven by change in channel inventory from the beginning to the end of the quarter. And on the Apple Intelligence side, we have not rolled out in China. And it's the most competitive market in the world."

Cook was also asked about cost of compute and DeepSeek's impact.
 
"In general, I think innovation that drives efficiency is a good thing. And that's what you see in that model. Our tight integration of silicon and software will continue to serve us very well. We do things on the device and we do things in the private cloud. From a CapEx point of view, we've always taken a very prudent and deliberate approach to our expenditure."


Intel Q1 outlook light, Q4 better-than-expected

Intel is still on the hunt for a CEO, but the company's fourth quarter results were better-than-expected even as sales fell from a year ago in every division except for network and edge computing.

The company reported a fourth quarter net loss of 3 cents per share on revenue of $14.3 billion, down 7% from a year ago. Non-GAAP earnings were 13 cents a share.

Wall Street was expecting Intel to report non-GAAP fourth quarter earnings of 12 cents a share on revenue of $13.83 billion.

The financial report was the first since Pat Gelsinger retired as CEO in December to be replaced by co-CEOs David Zinsner and Michelle (MJ) Johnston Holthaus.

Intel reported a 2024 loss of $18.8 billion, or $4.38 a share, on revenue of $53.1 billion, down 2% from the previous year.

As for the outlook, Intel said it expects first quarter revenue of $11.7 billion to $12.7 billion with breakeven earnings per share on a non-GAAP basis. Wall Street was looking for non-GAAP first quarter earnings of 9 cents a share.

Holthaus, who is also CEO of Intel Products, said the fourth quarter was a "positive step" and the company is simplifying its portfolio. Zinsner, who is also CFO, said the first quarter outlook "reflects seasonal weakness magnified by macro uncertainties, further inventory digestion and competitive dynamics."

By the numbers:

  • Intel's Client Computing Group fourth quarter revenue was $8 billion, down 9% from a year ago. Intel said it is on track to ship more than 100 million AI PCs by the end of 2025.
  • The Data Center and AI unit delivered fourth quarter revenue of $3.4 billion, down 3% from a year ago.
  • Network and Edge had revenue in the fourth quarter of $1.6 billion, up 10% from a year ago.
  • Intel Foundry revenue in the fourth quarter was $4.5 billion, down 13%.

 


Oracle scales Oracle Database@Google Cloud

Oracle and Google Cloud said the companies will add eight new regions for Oracle Database@Google Cloud.

The companies also said they will add new features, including the general availability of cross-region disaster recovery and database replication for Oracle Autonomous Database Serverless on Oracle Database@Google Cloud. Support for single-node Oracle Database deployments on Oracle Exadata Database Service on Dedicated Infrastructure for Oracle Database@Google Cloud has also been added.

Oracle's cloud, AI plans are a master class in co-opetition

Oracle and Google Cloud announced a partnership last year with plans to scale up rapidly.

According to the companies, Oracle Database@Google Cloud will add more regions over the next 12 months including:

  • U.S. Central 1 (Iowa)
  • North America-Northeast 1 (Montreal)
  • North America-Northeast 2 (Toronto)
  • Asia-Northeast 1 (Tokyo)
  • Asia-Northeast 2 (Osaka)
  • Asia-South 1 (Mumbai)
  • Asia-South 2 (Delhi)
  • South America-East 1 (Sao Paulo).

Oracle said it will double capacity in Google Cloud regions in London, Frankfurt and Ashburn.

Constellation Research analyst Holger Mueller said:

"Oracle keeps doubling down on what works--and that is putting its Exadata machines in other cloud vendors' data centers. It is good news for customers who can keep using their database with the tools and AI they want to use. It is good news for both vendors as they generate cloud revenue. The reprieve on R&D from not having to build a highly scalable transactional RDBMS (at Microsoft and AWS, lesser at Google Cloud) is the R&D angle to this development."


DOJ sues to thwart HPE's acquisition of Juniper Networks

The Department of Justice sued to block Hewlett Packard Enterprise's planned acquisition of Juniper Networks, arguing that the combination would hamper competition in the enterprise wireless networking market.

In a statement and lawsuit, the DOJ said:

"HPE and Juniper are successful companies. But rather than continue to compete as rivals in the WLAN marketplace, they seek to consolidate — increasing concentration in an already concentrated market."

HPE announced the acquisition more than a year ago. The DOJ argues that a combined HPE-Juniper, along with Cisco, would represent 70% of the wireless local area network (WLAN) market. According to the DOJ complaint, HPE couldn't beat Juniper's Mist AI capabilities, so it chose to acquire its rival.

HPE said the DOJ lawsuit "is fundamentally flawed" and an "overreaching interpretation of antitrust laws." HPE said that its networking portfolio is complementary to Juniper's products. The company said:

"The DOJ’s claim that the WLAN market is composed of three primary players is substantially disconnected from market realities. As customers shift to AI and cloud-driven business strategies for secure, unified technology solutions to protect their data, barriers to entry have decreased and expansion and competition for WLAN has intensified. As such, WLAN is an extremely competitive market with a broad set of players, all of whom are fighting for business and winning bids in competitive RFP processes. The transaction will not impede the ability of other WLAN vendors to vigorously compete."

HPE also noted that its proposed acquisition of Juniper has been approved by antitrust regulators in 14 jurisdictions including the European Union.

Constellation Research analyst Holger Mueller said:

“If anybody would have thought that the Trump administration would be more lenient on M&A – here is the counterproof. A dominant position in the WLAN market is far from being achieved, but given the bad track record tech enterprises have in lawsuits with the DOJ, this is not good news for HPE investors.”

 

 


ServiceNow aims for 'Goldilocks' software model, SaaS industry likely to follow

ServiceNow took the plunge and is pivoting its business model to a hybrid approach that blends seats, subscriptions and consumption.

The move, outlined on ServiceNow’s fourth quarter earnings call, isn't surprising given that agentic AI needs to be priced into plans somehow while enterprises still need budget predictability. ServiceNow isn't the first mover here, but it is likely to trigger a rush to a hybrid software model. Salesforce has been signaling a move to a consumption-based business model with Agentforce, and Microsoft has launched pay-as-you-go agents.

For enterprises, the big question is whether this hybrid business model is going to be a win. At the very least, enterprises will have to manage their agentic AI consumption to keep costs in line. Like the move to cloud computing, enterprises can plan on getting hit with a few zingers. SaaS providers will need to add more transparency into consumption just like hyperscale cloud providers do.

Speaking on ServiceNow's earnings conference call, CEO Bill McDermott portrayed the seat-subscription-consumption model as a Goldilocks scenario for the vendor and customers.

McDermott said:

"Our goal is to combine both subscription and consumption pricing. Customers can start with a base subscription, which they like. They want that flag in the ground, so they can predict their spend and their current ROI schemes. But then, they obviously want to take advantage of agentic AI and yet at the same time, the industry is early in its formation. We're actually innovating faster than they have deployed it. So they want to scale with us in harmony and in partnership.

With our Pro Plus version, they'll get access to our agentic AI agents and will give them a meter based pricing methodology where they will take out the soul crushing business process work that is tedious and complex that people actually don't even want to do. Agentic AI agents will do that for them. They will see a very nice ROI on that. And by definition, if the meter is running up, that means they're using it and deriving financial gain from it, and they're happy to pay and share with us the profits.

It's the Goldilocks model where you get it both ways."

Amit Zavery, ServiceNow's product chief, elaborated on the consumption model. "It's not completely like pay as you go per meet per individual assist. It's really packs of assist in a way. It's subscription pricing and we are giving them some flexibility and the ability for customers to see value instantly," he said.

The AI agent pack approach rhymes with how Adobe prices Firefly: you get tiers of credits. Salesforce has floated the idea of $2 per resolved conversation, but it's unclear whether that's a trial balloon or not. If you buy that an AI agent is a human replacement, $2 per resolved issue makes sense. Over time, though, agentic AI is likely to be more of a feature and process automation play than a human replacement.
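To make the blended model concrete, here is a minimal Python cost sketch. The seat price, pack size and pack price are hypothetical numbers chosen for illustration, not ServiceNow, Salesforce or Adobe pricing.

    import math

    def hybrid_bill(seats: int, assists_used: int,
                    seat_price: float = 100.0,      # hypothetical per-seat subscription
                    pack_size: int = 10_000,        # hypothetical assists per pack
                    pack_price: float = 5_000.0) -> dict:
        """Blend a fixed seat subscription with prepaid packs of agent 'assists'.

        Usage is rounded up to whole packs, which is what keeps spend more
        predictable for buyers than pure pay-as-you-go metering."""
        packs_needed = math.ceil(assists_used / pack_size) if assists_used else 0
        subscription = seats * seat_price
        consumption = packs_needed * pack_price
        return {
            "subscription": subscription,
            "consumption": consumption,
            "total": subscription + consumption,
            "cost_per_assist": (consumption / assists_used) if assists_used else 0.0,
        }

    # Example: 500 seats and 42,000 agent assists in a month.
    print(hybrid_bill(seats=500, assists_used=42_000))

The buyer-side takeaway is that the effective cost per assist falls as usage fills each pack, so tracking utilization becomes part of managing the budget.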

Salesforce President and Chief Operating Officer Brian Millham outlined the consumption model in December at a Barclays investor conference. He said:

"As we think about the consumption world, it's very different than going out and selling a customer 500 licenses of Sales Cloud or Service Cloud. We're convincing them that Agentforce is the future. They're buying Agentforce from us, but we'll monetize it through a consumption model going forward.

New capabilities that we have on pay as you go, giving people insights into how they're using the product, term commits like AWS where they make a commitment to usage over time, but you've got to burn through that during the term of the agreement. We think this is additive to a model that we've had forever, which is name license plus this consumption model will really drive some growth going forward."

For ServiceNow, and any SaaS vendor, the trick will be getting agentic AI adoption, use cases and value that can be shared. ServiceNow isn't forgoing subscription revenue with a hard pivot to consumption, but it will take time to build up the additional revenue stream.

A few observations:

  • This hybrid approach makes sense for the vendor and the buyer, but it will be an adjustment. Enterprises will want more visibility and transparency, but SaaS vendor deals have become murky.
  • Enterprises won't be totally new to consumption models since AWS, Google Cloud and Microsoft Azure have trained enterprises on consumption models. Databricks and Snowflake are also consumption based. 
  • The consumption bookkeeping will be challenging if a company takes a multi-vendor approach to AI agents.
  • There will be tension with customers since SaaS vendors have already gobbled up too much of the operating expense budget.
  • To track this consumption, it's likely that SaaS vendor deals will be procured through cloud marketplaces. For instance, enterprises may choose to monitor consumption through one dashboard via AWS or another hyperscaler.
  • This model won't be Goldilocks for every enterprise, but it's the approach that'll become the norm for the foreseeable future.
  • It's unclear what the agentic AI value equation turns out to be. I'm not sure the digital labor argument will hold up, especially if consumption surges to the point where AI agents are comparable to human per-hour costs.

AWS, Microsoft Azure, IBM watsonx.ai add DeepSeek models via custom import

Amazon Web Services said enterprises and developers can take DeepSeek's R1 model for a spin on Amazon Bedrock via its Custom Model Import feature. IBM also said it will add DeepSeek R1 models to watsonx.ai via its Custom Foundations Model feature and Microsoft Azure made a similar move. 

DeepSeek, a Chinese AI startup that has torched the valuations of US AI stocks such as Nvidia, has released models that can perform as well as pricier foundation models for a fraction of the cost.

That price compression has spurred a flurry of opinions about how DeepSeek may affect the broader market. Price compression is highly likely.

DeepSeek: What CxOs and enterprises need to know

For AWS, which started with an LLM-agnostic strategy, adding something like DeepSeek to Amazon Bedrock isn't a concern. In a community article, AWS said Bedrock's Custom Model Import feature can be used to leverage DeepSeek. AWS is also holding a webinar on deploying DeepSeek models on Bedrock.

Key items in the walkthrough include:

  • The Custom Model Import feature allows you to use externally fine-tuned models on Bedrock's infrastructure.
  • Your DeepSeek R1 model should be based on a supported architecture, such as Llama 2, Llama 3, Llama 3.1, Llama 3.2, or Llama 3.3.
  • Prepare your model files in the Hugging Face format and store them in Amazon S3.
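For a rough sense of what that import step can look like, here is a minimal Python sketch using boto3's Bedrock control-plane client. The bucket, IAM role and model names are placeholders, and the details should be checked against AWS's walkthrough; treat this as a sketch of the Custom Model Import flow rather than a verified recipe.

    import boto3

    # Control-plane client for Bedrock model management (not bedrock-runtime).
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Start a Custom Model Import job pointing at model files (Hugging Face
    # format, safetensors weights) already staged in S3. All names below are
    # placeholders, not real resources.
    response = bedrock.create_model_import_job(
        jobName="deepseek-r1-distill-import",
        importedModelName="deepseek-r1-distill-llama-8b",
        roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
        modelDataSource={
            "s3DataSource": {
                "s3Uri": "s3://example-bucket/deepseek-r1-distill-llama-8b/"
            }
        },
    )
    print(response["jobArn"])

Once the job completes, the imported model gets its own ARN and can be invoked through the bedrock-runtime client like any other Bedrock model.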

Holger Mueller, Constellation Research analyst, said:

"AWS wastes no time to keep its 'Switzerland' status when it comes to being home for all LLMs - large and small - as it supports DeepSeek in AWS Cloud. With CISOs probably concerned about any enterprise access - there is likely interest in the AI / Data Science community." 

More:

IBM followed AWS with a similar approach, adding DeepSeek R1 models to watsonx.ai via its Custom Foundations Model feature. The feature is similar to what AWS has in Bedrock, but IBM said DeepSeek R1 models can be based on Llama or Qwen architectures. Qwen models are created by Alibaba.

The watsonx.ai workflow is similar to the custom import on Bedrock. 

Developers need to prepare the DeepSeek model files and bring them into IBM Cloud Object Storage. From there, the model needs a config.json file and must be in safetensors format before deployment.
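A generic pre-flight check along those lines might look like the following Python sketch. It only inspects a local model directory before upload (config.json present, weights in safetensors format) and deliberately avoids any IBM-specific API calls; the directory path is a placeholder.

    from pathlib import Path

    def check_model_dir(model_dir: str) -> None:
        """Sanity-check a Hugging Face-style model directory before uploading
        it to object storage for a custom model import."""
        path = Path(model_dir)
        if not (path / "config.json").is_file():
            raise FileNotFoundError("config.json is missing from the model directory")

        safetensors = list(path.glob("*.safetensors"))
        if not safetensors:
            raise ValueError("no .safetensors weight files found; convert the weights first")

        legacy_bins = list(path.glob("pytorch_model*.bin"))
        if legacy_bins:
            print(f"warning: {len(legacy_bins)} legacy .bin weight file(s) alongside safetensors")

        print(f"OK: {len(safetensors)} safetensors shard(s) and config.json found")

    check_model_dir("./deepseek-r1-distill-llama-8b")  # placeholder path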

Microsoft said DeepSeek R1 is available in the Azure AI Foundry catalog and on GitHub. In a blog post, Microsoft emphasized that the DeepSeek model was put through its paces.

"DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. With Azure AI Content Safety, built-in content filtering is available by default, with opt-out options for flexibility. Additionally, the Safety Evaluation System allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently deploy AI solutions."

 


Meta on DeepSeek, custom silicon, AI optimizing engineering and business

Meta reported strong fourth quarter results, but the earnings call was much more interesting as CEO Mark Zuckerberg and CFO Susan Li riffed on custom silicon, developing Llama 4 and why building AI infrastructure matters.

The company reported fourth quarter revenue of $48.4 billion, up 21% from a year ago, with net income of $20.84 billion. For 2024, Meta raked in net income of $62.36 billion on revenue of $164.5 billion.

Holger Mueller, analyst at Constellation Research, said Meta is set up financially to invest heavily in AI. 

"Things are going well for Meta, as its business is fundamentally healthy. Despite all the investments, Zuckerberg’s enterprise was able to grow revenue year over year by over $30 billion, but grew profit at the same time $23 billion. Meta earns three quarters on an additional dollar of revenue today and that KPI did not look that favorable in the past. Zuckerberg can keep investing into AI, the metaverse, which could be accelerated by AI, and content creation."

And Meta will invest.

Here's a look at the key takeaways on Meta's investment strategy for AI:

Meta looks at AI as a personalization tool that will have different use cases for each individual. "We believe that people don't all want to use the same AI. People want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world," said Zuckerberg. 

Open-source models will win starting with Llama 4. Zuckerberg said: "I think this will very well be the year when Llama and open source become the most advanced and widely used AI models. Llama 4 is making great progress in training. Llama 4 Mini is done and looking good too. It's going to be novel, and it's going to unlock a lot of new use cases."

DeepSeek helps the open-source cause and will bring costs down, but Zuckerberg expects Llama to win. "As Llama becomes more used it's more likely that silicon providers and other APIs and developer platforms will optimize their work more for that and basically drive down the costs of using it," said Zuckerberg. "The new competitor, DeepSeek from China, makes it clear there's going to be an open source standard globally. I think for our kind of own national advantage, it's important that it's an American Standard. We want to build the AI system that people around the world are using. If anything, some of the recent news has only strengthened our conviction that this is the right thing for us to be focused on."

It's too early to know the DeepSeek impact on demand for AI infrastructure. "It's probably too early to really have a strong opinion on what this means for the trajectory around infrastructure and capex and things like that. There are a bunch of trends that are happening here all at once," said Zuckerberg. "I continue to think that investing very heavily in capex and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that."

Meta wants AI that will replicate a mid-level engineer. "This is going to be a profound milestone," said Zuckerberg. "Our goal is to advance AI research and advance our own development internally. And I think it's just going to be a very profound thing."

Llama will provide engineering throughput. Li said:

"We expect that the continuous advancements in Llama's coding capabilities will provide even greater leverage to our engineers, and we are focused on expanding its capabilities to not only assist our engineers in writing and reviewing our code, but to also begin generating code changes to automate tool updates and improve the quality of our code base."

The monetization plan for models has nothing to do with licensing or consumption. Zuckerberg noted Meta's plan for AI glasses and investments in AI infrastructure that will improve ads and apps. He said this year will see more growth in Reels on Facebook and Instagram regardless of what happens to TikTok.

Meta AI has more than 700 million active monthly users and updates are planned to deliver more personalized content and monetization efficiency. Meta CFO Susan Li said:

"In the second half of 2024 we introduced an innovative new machine learning system in partnership with Nvidia called Andromeda. This more efficient system enabled a 10,000x increase in the complexity of models we use for ads retrieval, which is the part of the ranking process where we narrow down a pool of 10s of millions of ads to the few 1,000 we consider showing someone. The increase in model complexity is enabling us to run far more sophisticated prediction models to better personalize which ads we show someone. This has driven an 8% increase in the quality of ads that people see."

Meta's capital spending is focused on scaling the footprint and increasing efficiency of workloads. "We're pursuing efficiencies by extending the useful lives of our servers and associated networking equipment. Our expectation going forward is that we'll be able to use both our non AI and AI servers for a longer period of time before replacing them, which we estimate will be approximately five and a half years. This will deliver savings in annual capex and resulting depreciation expense, which is already included in our guidance," said Li. "We're pursuing cost efficiencies by deploying our custom silicon MTIA in areas where we can achieve a lower cost of compute by optimizing the chip to our unique workloads."

Custom silicon is being used for ranking and recommendation inference workloads for ads and organic content. "We expect to further ramp adoption of MTIA for these use cases throughout 2025 before extending our custom silicon efforts to training workloads for ranking and recommendations next year," said Li. "We're also very invested in developing our own custom silicon for unique workloads where off-the-shelf silicon isn't necessarily optimal, and specifically because we're able to optimize the full stack to achieve greater compute efficiency, and performance per cost and power."

Over time, MTIA is going to take on GPU workloads and training. "Next year, we're hoping to expand MTIA to support some of our core AI training workloads, and over time, some of our Gen AI use cases," said Li. 


IBM Q4 better than expected, genAI business surges

IBM delivered better-than-expected fourth quarter results and said its generative AI business including consulting and software is now a $5 billion business, up from $3 billion in the third quarter.

The company reported fourth quarter earnings of $2.98 billion, or $3.11 a share, on revenue of $17.6 billion, up 1% from a year ago. Non-GAAP earnings were $3.92 a share, 15 cents better than Wall Street estimates.

IBM CEO Arvind Krishna said the company is "well-positioned for 2025 and beyond" with annual revenue growth of at least 5%.

For 2024, IBM reported net income of $6 billion, or $6.42 a share, on revenue of $62.8 billion.

By the numbers for the fourth quarter:

  • IBM software revenue of $7.9 billion was up 10% from a year ago with Red Hat revenue up 16%. IBM said that automation revenue was up 15% and data and AI up 4%.
  • Consulting revenue was down 2% to $5.2 billion with the business transformation unit faring the best, but still down 1% in the fourth quarter. Technology consulting revenue was down 7%.
  • Infrastructure revenue was down 7.6% to $4.3 billion.

Krishna made the following points on an earnings conference call:

  • "Our AI portfolio is tailored to meet the diverse needs of enterprise clients, enabling them to leverage a mix of models, IBMs, their own, open models from Hugging Face, Meta and Mistral. IBM's Granite models designed for specific purposes are 90% more cost-efficient than larger alternatives."
  • "We are looking forward to a regulatory environment that is a bit more rational and a bit more pro-competition. So I think what that implies for us is that we think reasonable deals have a very good chance of getting through in a reasonable amount of time and not being held up for years. With that context, we are going to lean in more."
  • "DeepSeek was a point of validation. We have been very vocal for about a year that smaller models and more reasonable training times are going to be essential for enterprise deployment of large language models. We have been down that journey ourselves for more than a year. We see as much as 30 times reduction in inference costs using these approaches. As other people begin to follow that route, we think that this is incredibly good for our enterprise clients." DeepSeek: What CxOs and enterprises need to know
