
City of Los Angeles Bets on Google Workspace with Gemini as It Preps for Big Events

The City of Los Angeles will deploy Google Workspace with Gemini across its workforce of 27,500 employees as part of its broader digital transformation strategy.

Google Public Sector announced the deal at its Google Public Sector Summit in Washington, DC. The City of Los Angeles will leverage Google's AI for communications, project planning and, ultimately, public services. The win in Los Angeles comes as Google Public Sector lands other state and local government customers.

For instance, Google announced that the Maryland Department of Information Technology (DoIT) will roll out Google Workspace with Gemini to nearly 43,000 employees as usage grows across 59 state agencies. Maryland already had 12,500 state employees actively using Google Workspace with Gemini, making the DoIT account a classic land-and-expand deal.

Los Angeles' Google Public Sector usage comes as the city is rolling out a broader digital transformation effort called SmartLA 2028. LA is preparing for the 2026 World Cup, 2027 Super Bowl and 2028 Olympic and Paralympic Games.

At a high level, SmartLA 2028 addresses multiple layers of the resident journey:

  • Smart city infrastructure to embed technology such as 5G, IoT and fiber into LA's physical assets.
  • Data tools and practices to work across departments and deliver government-to-resident and government-to-business services.
  • Digital services and applications to deliver services.
  • Connectivity and digital inclusion efforts.
  • Governance and coordination of investments across LA departments.

Ted Ross, CIO for the City of Los Angeles, said Google Public Sector and Google Workspace will be key to "modernizing the city’s approach to constituent services."

Here's a look at where Google Workspace with Gemini fits into LA's technology plans.

  • Workspace is being used to improve communication and information sharing with residents. LA creates multilingual content, public announcements and emergency communications via Workspace and uses Gemini to rewrite that content at the most accessible reading level (a minimal sketch of this workflow follows this list).
  • Workflows are being revamped across LA's 45 departments with Gemini summaries and data analysis.
  • City employees are using NotebookLM to analyze grant documents for project funding.
  • AI training for LA's workforce.
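
The rewriting workflow in the first bullet can be pictured with a minimal sketch against Google's public Gemini API. This is an illustration only; the model name, prompt and use of the google-generativeai Python SDK are assumptions, not details of LA's Workspace deployment.

```python
# Hypothetical sketch: rewriting a public announcement at an accessible
# reading level and translating it, via the google-generativeai SDK.
# Model name and prompt are assumptions, not details of LA's deployment.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

announcement = (
    "Due to scheduled maintenance, the Department of Water and Power will "
    "interrupt service in Council District 4 between 10 p.m. and 4 a.m."
)

prompt = (
    "Rewrite the following city announcement at roughly a 6th-grade reading "
    "level, then provide a Spanish translation:\n\n" + announcement
)

response = model.generate_content(prompt)
print(response.text)
```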

Speaking on a panel, Ross outlined the Gemini for Government plans and offered some best practices. Here's a look:

Use cases are not hard to find. Ross said that in government, use cases abound in areas where AI can improve efficiency. Information dissemination and analysis are big ones. "In emergency management, AI has the ability to synthesize real-time information from utilities, cities, counties, states and multiple jurisdictions," said Ross. This information also has to be multilingual.

In addition, managers have been able to leverage NotebookLM and query it for new grant opportunities.

Ross added that it helps to think through AI use cases in terms of personas. "Think from the perspective of personas like the broad workforce, managers, front lines," said Ross.

Don't scrimp on training. "I'm a huge fan of training and giving employees an intro to AI," said Ross, who added that the training and use of AI is critical to employee engagement. "AI is a once in a generation shift of how people are computing and you have to train the workforce so you can launch them into the future and build AI fluency. Make the investment in training now."

Contractual protections. If you're adopting AI in the public sector, make sure you are using tools with contractual protections, said Ross. Because the City of LA was already using Google Workspace, adding Gemini made sense since it fell under the same contract.

The roadmap ahead. Ross said using AI for transportation modeling is a big focus for the city. "We're getting into predictive modeling to see what happens when people go to an event," said Ross, who noted that the 2028 Olympics will be like having a Super Bowl every day for two weeks. "That includes multilingual traveling assistance."


Nvidia's GTC Washington DC news barrage lands amid US vs China AI backdrop

Nvidia CEO Jensen Huang laid out a series of announcements covering an investment in Nokia to meld 6G and AI, quantum computing connectivity with GPUs and a vision to build AI infrastructure as a means for national security.

At Nvidia GTC Washington, Huang laid out a bevy of news items. Nvidia's GTC coincides with Google Cloud's Public Sector Summit in Washington DC. The backdrop of AI competition with China was also hard to avoid.

The need for US-based infrastructure was a big theme as Huang talked about the 6G and AI intersection and an investment in Nokia. Huang said wireless technology is largely deployed on foreign technology. "Our fundamental communication technology, built on foreign technology, and that has to stop — and we have an opportunity to do that," said Huang, who said it's time to get back in the game.

Nvidia is building an AI-native stack for 6G with Nvidia ARC-Pro. Nokia will put Nvidia's wireless AI technology in its future base stations. Nvidia will invest $1 billion in Nokia as a way to layer AI into the transition from 5G to 6G. Nvidia also partnered with Palantir and CrowdStrike and expanded ties with Google Cloud.

In addition, Nvidia said its AI Aerial platform will add multimodal integrated sensing and communications over 6G. Nvidia is also partnering with Booz Allen, Cisco, MITRE, ODC and T-Mobile to build an American AI-RAN stack.

On the quantum front, Nvidia launched NVQLink, a high-speed interconnect that lets quantum processors connect to GPU supercomputers. The company has 17 quantum labs and nine scientific labs in the fold.

According to Nvidia, NVQLink gives quantum computing researchers a system for the control algorithms needed for large-scale quantum error correction. "In the near future, every NVIDIA GPU scientific supercomputer will be hybrid, tightly coupled with quantum processors to expand what is possible with computing," said Huang.
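
NVQLink itself is an interconnect, but the hybrid pattern Huang describes, where classical GPU-accelerated code orchestrates a quantum processor in a tight loop, can be sketched with Nvidia's existing CUDA-Q Python API. This is a minimal illustration of the hybrid programming model, not NVQLink-specific code.

```python
# Illustrative hybrid quantum-classical loop using Nvidia's CUDA-Q Python API.
# Sketches the programming pattern only; it does not use NVQLink itself.
import cudaq

@cudaq.kernel
def bell():
    # Prepare a 2-qubit Bell state and measure both qubits.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Classical side: sample the kernel (on a simulator or attached QPU) and
# feed the counts into downstream post-processing, e.g. error-correction logic.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # roughly half '00' and half '11'
```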

On the manufacturing front, Nvidia also outlined a "mega" Nvidia Omniverse Blueprint to expand libraries for building factory-scale digital twins and physical AI systems for robotics. Nvidia is also working with the U.S. Department of Energy’s national labs to develop AI factory buildouts using Nvidia Omniverse.

The Mega Nvidia Omniverse Blueprint can simulate robot fleets and includes technology for designing and simulating factory digital twins.
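
Omniverse digital twins are authored in OpenUSD, so the kind of factory scene the blueprint targets can be sketched with the open pxr Python bindings. This is a generic USD example for illustration, not the Mega blueprint's own APIs.

```python
# Minimal OpenUSD sketch of authoring a factory digital-twin scene.
# Generic pxr usage for illustration; not the Mega Omniverse Blueprint APIs.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# Root transform for the factory and a hypothetical robot placed inside it.
factory = UsdGeom.Xform.Define(stage, "/Factory")
robot = UsdGeom.Xform.Define(stage, "/Factory/Robot_01")
robot.AddTranslateOp().Set(Gf.Vec3d(12.0, 4.0, 0.0))

stage.GetRootLayer().Save()
```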

China vs US provides context

Huang's talk landed as the US and China engage in trade talks. The two sides are also in the middle of an AI battle.

At a panel at Constellation Research's Connected Enterprise, experts noted the following:

  • The AI battle between the US and China is really just one front. Bio, quantum and robotics are other areas.
  • Depending on how the China and US relationship goes there could be global destabilization.
  • Roughly half of the world's AI researchers are from China.
  • China dominates in renewable energy and is leading in nuclear reactor construction to power AI.
  • China controls the critical mineral supply chain.

"I think we're trying to handicap a three-dimensional chess game and the rules aren't fully developed. It's a very fluid situation and we're approaching it from different perspectives," said George Chanos, founder and CEO of Uvolution.io. "In my view the US and China are marching towards a finish line of singularity."

The big question is what the No. 2 player will do when a winner, real or perceived, is declared, said Chanos. He added:

"If you have boolean Manhattan Projects you have two players moving towards an end game that they think can give them global supremacy. But that's not going to be a gentlemanly type of endeavor. It's going to get heated, and when number two feels that number one is approaching the finish line, I don't know that number two is going to allow number one to cross that finish line. I think the instability that we're seeing around the world today is in large part due to this looming conflict potential."

Constellation Research CEO R "Ray" Wang noted that China is "basically going after US AI and industrial complex by giving up everything free with open source. We've been in an economic war for the last 15 years."

 


Scaling AI Innovation Globally: DataMasque's Journey with Synthetic Data and AWS Marketplace

We are LIVE from the Amazon Web Services (AWS) Startup Partner Summit. R "Ray" Wang and Bob O'Donnell sat down with Grant de Leeuw, CEO & co-founder of DataMasque, to discuss how high-fidelity synthetic data is empowering regulated industries to unlock AI and ML innovation while protecting sensitive data.

DataMasque's “marketplace-first” strategy, leveraging the AWS Marketplace, enabled global growth and rapid US success even before building a local sales force. Now, they’re leading the shift from generative to agentic AI with cutting-edge in-flight and API-based data masking.
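
DataMasque's product APIs aren't detailed here, but the general idea behind deterministic masking of regulated data can be illustrated with a short, generic Python sketch; the field names and keyed-hash approach below are assumptions for illustration, not DataMasque's implementation.

```python
# Generic illustration of deterministic data masking with a keyed hash.
# Not DataMasque's API; field names and approach are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # placeholder masking key

def mask_value(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "balance": 1042.17}
masked = {k: mask_value(v) if k in {"name", "email"} else v for k, v in record.items()}
print(masked)  # identifiers tokenized, balance preserved for analytics
```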

Watch the full interview to learn more about a startup paving the way for responsible, scalable AI.

Video: https://www.youtube.com/watch?v=zeiNYDrYyBw

Microsoft owns 27% of OpenAI worth $135 billion

Microsoft's stake in OpenAI has a number: $135 billion, or roughly 27% of the AI company.
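
As a back-of-envelope check, a $135 billion stake at roughly 27% implies an overall OpenAI valuation of about $500 billion:

```python
# Back-of-envelope implied valuation from the disclosed stake.
stake_value_usd = 135e9   # Microsoft's stake
ownership = 0.27          # roughly 27%

implied_valuation = stake_value_usd / ownership
print(f"Implied OpenAI valuation: ${implied_valuation / 1e9:.0f}B")  # ~$500B
```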

The disclosure was made as part of OpenAI's recapitalization that solidifies its structure as a public benefit corporation (PBC).

OpenAI and Microsoft previously said they had an understanding about the moving parts in the partnership. The valuation of Microsoft's stake in OpenAI also comes in handy since the software giant reports earnings Wednesday. Questions about the spending and losses related to Microsoft's OpenAI investment were starting to percolate.

In its annual report, Microsoft listed $4.7 billion in OpenAI-related expenses in an “other net” line that included other items. Microsoft also hasn’t disclosed a carrying value for its OpenAI stake.

Microsoft's 27% stake is in OpenAI Group PBC, the for-profit arm now controlled by the nonprofit OpenAI Foundation, and is down from a roughly 32.5% interest in the previous for-profit structure. The OpenAI Foundation holds a stake in OpenAI Group PBC valued at about $130 billion.

Nevertheless, the new agreement gives Microsoft plenty of upside should OpenAI deliver on its growth projections. Microsoft also gets some protection if OpenAI falls short: it can now pursue AGI on its own, remains a key compute provider (though without right of first refusal) and retains rights to OpenAI's models through 2032.

Microsoft and OpenAI also increasingly compete.

The other moving part worth noting is that Microsoft remains OpenAI's lead frontier model partner. Microsoft also has exclusive IP rights and Azure API exclusivity until artificial general intelligence (AGI).

OpenAI's new deal with Microsoft has a few other wrinkles worth noting:

  • Once AGI is declared by OpenAI, the claim has to be verified by independent experts.
  • Microsoft's IP rights for models and products are extended through 2032 and now include models post-AGI.
  • Microsoft's IP rights to research will remain until AGI is verified or through 2030, whichever comes first. Research IP doesn't include model architecture, model weights, inference code, fine-tuning code or any IP related to data center hardware and software.
  • Microsoft doesn't have IP rights to OpenAI's consumer hardware.
  • OpenAI can jointly develop some products with third parties even though API products are exclusive to Azure.
  • Microsoft can pursue AGI independently or with other partners.
  • OpenAI will purchase an incremental $250 billion of Azure services, and Microsoft no longer has right of first refusal to be OpenAI's compute provider.
  • OpenAI can provide API access to US government national security customers.

 


Adobe expands GenStudio, launches custom Firefly models, Firefly Foundry

Adobe expanded GenStudio with new AI tools and integrations, outlined custom Firefly tools so customers can add brand knowledge, and added Creative Cloud and Experience Cloud enhancements with the aim of solving the AI last mile issue for enterprises.

When it comes to marketing, content production and brand experiences, Adobe is looking to address a common problem: it's easier than ever to create content and come up with ideas, but enterprises struggle to scale quality and stay brand safe.

The other takeaway from Adobe's slew of announcements at Max is that the company's AI strategy is less about layering AI on top of its products and more about embedding it throughout its platforms.

Adobe's expansion of GenStudio highlights the strategy.

The company said Adobe GenStudio will see new AI capabilities spanning GenStudio for Performance Marketing, Firefly Creative Production, Firefly Services and Firefly Custom Models. Adobe also launched Adobe Firefly Foundry, which gives enterprises access to Adobe's models and expertise to create proprietary generative AI models, and Firefly Design Intelligence, a tool co-created with The Coca-Cola Company to scale brand compliance.

Varun Parmar, general manager of Adobe GenStudio and Firefly Enterprise, said the unified platform brings together AI and marketing workflows where "teams can assemble, activate and optimize content for any channel all in one place."

Here's a look at what Adobe announced at Max as part of the GenStudio expansion.

  • Firefly Custom Models, which gives enterprises the ability to train Firefly on select images to create custom brand content for marketing use cases. Firefly Custom Models rides along with Firefly Foundry, a set of services to give companies the ability to create exclusive models.
  • Adobe Firefly Foundry provides the ability to generate AI models that are trained on existing IP and tuned. Adobe Firefly Foundry models can support all asset types including images, video, audio, vector and 3D for use in marketing and media workflows.
  • Firefly Creative Production for Enterprise expands the web app with tools to resize and reframe assets, reusable production workflows, and integrations with Adobe Experience Manager Assets and Frame.io.
  • Adobe Content Production Agent, a genAI app for scaling on-brand ads, emails and other content across channels. The agent is available in beta in GenStudio for Performance Marketing.
  • Adobe Firefly Services, a set of APIs for creative workflows. The Content Authority API, in beta, embeds verifiable digital credentials (a generic sketch of that pattern follows this list).
  • GenStudio for Performance Marketing now integrates with major advertising platforms including Amazon Ads, Innovid, Google, LinkedIn and TikTok.
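
On the verifiable credentials point above, content credentials generally boil down to signing a manifest of the asset plus its provenance metadata and checking that signature later. Here is a generic, hypothetical sketch of the pattern, not Adobe's Content Authority API.

```python
# Generic sketch of embedding and verifying a content credential:
# sign a hash of the asset plus provenance metadata, verify it later.
# Illustrative only; this is not Adobe's Content Authority API.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # placeholder key

def credential_for(asset_bytes: bytes, metadata: dict) -> dict:
    manifest = {"asset_sha256": hashlib.sha256(asset_bytes).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify(asset_bytes: bytes, credential: dict) -> bool:
    manifest = credential["manifest"]
    if manifest["asset_sha256"] != hashlib.sha256(asset_bytes).hexdigest():
        return False  # asset was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

asset = b"rendered-campaign-image-bytes"
cred = credential_for(asset, {"tool": "GenStudio", "generated_by_ai": True})
print(verify(asset, cred))  # True; False if the asset or manifest is tampered with
```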

For creatives, Adobe outlined the following:

  • Firefly Boards enhancements with Presents and Generative Text Edit.
  • Firefly Generate Soundtrack and Generate Speech for long and short form videos.
  • A new Firefly Video Editor that's lightweight.
  • Creative Cloud gets Firefly Image Model 5 that features 4MP photorealism, natural language and layer-based editing.
  • Creative Cloud gets more than 100 improvements including a preview of Photoshop AI Assistant and Project Moonlight.
  • New partner models from ElevenLabs and Topaz and access to Google and OpenAI models.

 


Amazon cuts 14,000 corporate jobs

Amazon said it will cut 14,000 corporate jobs as it restructures and looks to leverage AI to boost efficiency.

The move isn't surprising given many enterprises have said they aren't hiring and are looking to AI to automate more jobs and processes.

Amazon CEO Andy Jassy has said the company wants to operate like a startup and continue to cut costs and improve margins. Jassy also noted that the company is looking to increase the ratio of individual contributors to managers. Amazon has 1.54 million total employees.

According to Amazon, the restructuring will make the company stronger by "further reducing bureaucracy, removing layers, and shifting resources to ensure we’re investing in our biggest bets and what matters most to our customers’ current and future needs."

The company said it will continue to hire in some areas while trimming other roles. The broader trend is that enterprises are looking at AI to take on more roles. These AI transformation efforts mostly revolve around augmenting human workers, but the outcome is that you simply need fewer people to run a company.

Beth Galetti, SVP of People Experience and Technology at Amazon, said:

"This generation of AI is the most transformative technology we’ve seen since the Internet, and it's enabling companies to innovate much faster than ever before (in existing market segments and altogether new ones). We’re convicted that we need to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business."

Perhaps the biggest takeaway is that Amazon isn't unique. JPMorgan Chase CFO Jeremy Barnum said on the company's most recent earnings call that the bank is being disciplined with expenses.

"We're going to have a very strong bias against having the reflective response to any given need to be to hire more people and feeling a little bit more confident on our ability to put that pressure on the organization because we know that even if we can't always measure it that precisely, there are definitely productivity tailwinds from AI," said Barnum. "You can assume that we're going to be pushing hard on all fronts to extract as much productivity out of the organization as possible."


Building an AI-Fluent Organization and the Rise of the AI Factory

Kristie Grinnell, SVP & CIO of TD SYNNEX, shares with Constellation Editor in Chief Larry Dignan how TD SYNNEX is building an “AI-fluent” organization, leveraging AI agents to drive both revenue growth and productivity, and pioneering an “AI factory” to manage tech debt and boost agility across its global supply chain.

Learn about their data strategy, value-driven approach to use cases, and how change management and employee empowerment are at the heart of their AI journey.

00:00 - Meet Kristie Grinnell 
00:28 - TD Synnex’s AI Strategy & Vision 
02:14 - Driving Revenue Growth & Productivity with AI 
03:12 - Tackling Tech Debt: The AI Factory 
04:35 - Supply Chain Agility & Plug-and-Play Agents 
06:07 - Data Strategy & Building a Global Data Lake 
08:10 - Value-Driven Approach to AI Use Cases 
10:13 - Change Management & Employee Enablement

Video: https://www.youtube.com/watch?v=OrV9slkLPjM

Future of Staffing: How Siemens Leverages AI and Certinia

Check out this Dreamforce interview with Siemens' Kurt Kuelz and Constellation Research’s R "Ray" Wang. Kuelz shares how Siemens is transforming global workforce management by leveraging AI-powered solutions from Certinia—driving strategic staffing, improving employee satisfaction, and moving toward true digital transformation.

Key takeaways:
✅ Automated staffing for the right resource at the right time
✅ Skills matching using advanced competency models
✅ Early warning and optimization with predictive analytics

Learn how Siemens is focusing on value realization—not just experimentation—bringing a win-win for both the business and employees.

Video: https://www.youtube.com/watch?v=dn43WjSQpoY

Celebrating 15 Years of Constellation Connected Enterprise (CCE)


What an incredible journey our founder & CEO, R "Ray" Wang, began 15 years ago with CCE! To celebrate, we brought together voices from across the Constellation community—clients, partners, friends, and alumni—to share their appreciation for over a decade of innovation, insight, and connection that culminates every year in Half Moon Bay.

This milestone is more than just a number. It’s about the family we’ve built, the bold questions we’ve asked, and the impact we’ve made together. From lifelong friendships to industry-shaping ideas, CCE remains a gathering place for thought leaders, disruptors, and visionaries.

A heartfelt thank you to everyone who shared their appreciation (listed in order of appearance):

Kenny Lauer, Meyer Sounds
Byron Reese, scissortail.AI
Cindy Zhou, KnowBe4
Jonathan Becher, San Jose Sharks
Loni Stark, Adobe
Vala Afshar, Salesforce
Brent Leary, CRM Essentials
Tracey Cesen, ForeverHuman.AI
Andrea Chin, ForeverHuman.AI
Indy Cho, Costco
Adam Gunther, Equifax
Kate Carruthers (FGIA, MAICD)
Steve Lucas, Boomi
John Nosta, NostaLab
Lara Druyan, Venture Partner
Sandra Lo, Zoho
Ravi Kumar S, Cognizant
Andrew Nebus, ASRC Federal
Gurvinder Singh Sahni, Altimetrik
Heather Clancy, Trellis Group
Andy Weinstein, GODOT
Paul Greenberg, The 56 Group
Dr. Janice Presser, Teaming Science
Jonathan Feldman, Wake County
Tricia Wang, Advanced AI Society
Joseph Hughes, EY
Rohit Gupta, Auditoria.AI
David Bray, PhD, The Stinson Center
Aiaz Kazi, rtZen.AI
Soon Yu, Author, Podcaster
Alan Lepofsky, Netomi
Jon Reed, Diginomica
The Constellation Research analyst, sales, and marketing teams!

Your reflections, memories, and enthusiasm make CCE truly special. Here’s to many more years of collaboration, inspiration, and "right kind of trouble"! 👏

Video: https://www.youtube.com/watch?v=oHQQveikp_c

Qualcomm outlines AI accelerators, eyes inferencing

Qualcomm is entering the AI accelerator market with a focus on inferencing in a move that will give enterprises more options beyond Nvidia, AMD and custom hyperscale cloud processors.

The company, which has been eyeing data center workloads, launched the Qualcomm AI200 and AI250 AI accelerators with what it calls "industry-leading total cost of ownership (TCO)."

Qualcomm also said its AI chips will be compatible with leading AI frameworks and software and will offer a new memory architecture designed for AI workloads. Qualcomm said it will have an annual cadence for its AI inference roadmap.

According to Qualcomm, it will offer the AI200 and AI250 as accelerator cards and racks. The effort builds off its neural processing unit (NPU) products and aims to deliver high performance per dollar per watt. Humain, a Saudi Arabia AI company, will be the first customer of Qualcomm's AI accelerators. The companies will integrate Humain's Allam AI models with Qualcomm's platform.
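
Qualcomm hasn't published the inputs behind its TCO claim, but performance per dollar per watt is a straightforward ratio; the sketch below shows how such a comparison is computed using entirely hypothetical numbers.

```python
# How a performance-per-dollar-per-watt comparison is typically computed.
# All numbers below are hypothetical placeholders, not Qualcomm figures.
def perf_per_dollar_per_watt(tokens_per_second: float, price_usd: float, watts: float) -> float:
    return tokens_per_second / (price_usd * watts)

rack_a = perf_per_dollar_per_watt(tokens_per_second=2.0e6, price_usd=3.0e6, watts=160_000)
rack_b = perf_per_dollar_per_watt(tokens_per_second=1.6e6, price_usd=3.5e6, watts=120_000)
print(f"Rack A is {rack_a / rack_b:.2f}x rack B on this metric")
```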

The AI200 and AI250 will be commercially available in 2026 and 2027, respectively.

Holger Mueller, an analyst at Constellation Research, said:

"Qualcomm has proven its expertise in chip performance over and over, more recently with the launch of its new Snapdragon series - and it certainly has a right to play in the up and coming AI inference marketg. Good to see the partnerhship with Humain, but breaking into the cloud data center is (very) hard - as we have seen from Dell, HPE and before the Microsoft partnership - for Nvidia."

Here's a look at the key points:

  • Qualcomm AI200 is optimized for model performance, inference and AI workloads. The rack system supports 768 GB of LPDDR per card for higher memory capacity and lower cost.
  • Qualcomm AI250 features a memory architecture based on near-memory computing and can deliver 10x higher effective memory bandwidth and lower power consumption. Qualcomm said it is pushing toward disaggregated AI inferencing.
  • Both racks have direct liquid cooling, PCIe, Ethernet and confidential computing.
  • Rack-level power consumption is 160 kW.
  • The stack supports leading machine learning frameworks, inference engines, generative AI frameworks, and LLM / LMM inference optimization techniques like disaggregated serving (sketched after this list).
  • The systems provide seamless model onboarding and one-click deployment of Hugging Face models via Qualcomm Technologies’ Efficient Transformers Library and Qualcomm AI Inference Suite.
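
The disaggregated serving technique mentioned in the frameworks bullet splits a request into a compute-heavy prefill phase (processing the prompt) and a bandwidth-bound decode phase (generating tokens), which can run on separate pools of accelerators. Below is a simplified, framework-agnostic sketch of the idea, not Qualcomm's software stack.

```python
# Simplified, framework-agnostic sketch of disaggregated LLM serving:
# prefill (prompt processing) and decode (token generation) run on
# separate worker pools and hand off the KV cache between them.
from dataclasses import dataclass

@dataclass
class PrefillResult:
    kv_cache: dict      # attention key/value state, abstracted here
    last_token: str

def prefill_worker(prompt: str) -> PrefillResult:
    # Compute-heavy pass over the whole prompt; runs on the prefill pool.
    words = prompt.split()
    return PrefillResult(kv_cache={"len": len(words)}, last_token=words[-1])

def decode_worker(state: PrefillResult, max_new_tokens: int) -> list:
    # Memory-bandwidth-bound loop that generates one token at a time,
    # reusing the transferred KV cache; runs on the decode pool.
    tokens = []
    for i in range(max_new_tokens):
        tokens.append(f"<tok{i}>")   # placeholder for real sampling
        state.kv_cache["len"] += 1
    return tokens

state = prefill_worker("Summarize the council meeting notes")
print(decode_worker(state, max_new_tokens=4))
```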

