Canva acquires Affinity in move to better target designers

Canva said it will acquire Affinity, a UK company with a design suite that competes with Adobe's Creative Cloud.

The timing of the Canva-Affinity deal is notable given Adobe Summit kicks off this week.

In a blog post, Canva Co-Founder and COO Cliff Obrecht said Affinity's photo editing and design software is used by more than 3 million people. Canva plans to scale that reach by pitching Affinity to the 175 million people who have used its software.

Obrecht said:

"While our last decade at Canva has focused heavily on the 99% of knowledge workers without design training, truly empowering the world to design includes empowering professional designers too. By joining forces with Affinity, we’re excited to unlock the full spectrum of designers at every level and stage of the design journey."

Affinity's software is available on Windows, Mac and iPad. The company has 90 employees. The stack includes:

  • Affinity Designer, a vector-based graphics application for illustrations, art, graphics and brand design.
  • Affinity Photo, an image editor to cover a wide range of use cases.
  • Affinity Publisher, a layout application for Web, publications and marketing content.

Version 2 of the Affinity suite for individuals goes for $164.99, but is on sale for $114.99. The company runs on a license model without subscriptions. A universal license for multiple business users is $109.24 per license currently. Each application in the Affinity family is also sold separately.

Canva has a free version of its software with paid tiers for teams and enterprises. Canva Pro is $119.99 a year for one person, a team plan for five people runs $300 a year, and enterprise plans require a minimum of 100 people.

Enterprise generative AI use cases, applications about to surge

If 2023 was the year of generative AI pilots, 2024 will be about moving to production and 2025 will likely be warp speed. Why? The generative AI building blocks are falling into place.

In recent weeks, several mileposts have highlighted where enterprise generative AI is headed.

  • Nvidia GTC highlighted how the software building blocks for generative AI are in place. The company launched Blackwell GPUs, but Nvidia Inference Microservices (NIMs) will ultimately be just as important. NIMs are pre-trained AI models packaged and optimized to run across the CUDA installed base.
  • SAP, ServiceNow, Cohesity, CrowdStrike, Snowflake, NetApp, Dell, Adobe and a bevy of others are rallying behind NIMs.
  • Nvidia's AI Enterprise 5.0 will include NIMs along with capabilities that speed up development, enable private LLMs and let teams create copilots and generative AI applications quickly with API calls.
  • Palantir held its AIPCon meetup and customers outlined how they delivered value quickly. The use cases ranged from supply chain to defense to logistics to smarter workflows among field workers. Palantir has been using its AI Platform (AIP) to land, generate value and then expand.
  • C3 AI held its Transform event where Baker Hughes highlighted how they used C3 AI's platform to optimize sourcing and inventory along with value delivered to the US Department of Defense, Con Edison, GSK and others. C3 AI's formula rhymes with Palantir's approach.
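
The NIM idea above boils down to calling a packaged, pre-optimized model over a standard API. A minimal sketch, assuming a NIM container serving an OpenAI-compatible endpoint on localhost:8000 (the URL and model name here are illustrative assumptions, not details from the article):

```python
import json
import urllib.request

# Hypothetical local NIM endpoint; deployed NIMs expose an OpenAI-compatible API.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completion payload for a NIM."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def call_nim(payload: dict) -> dict:
    """POST the payload to the NIM and return the parsed JSON completion."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("meta/llama3-8b-instruct", "Summarize our Q2 supply-chain risks.")
# call_nim(payload) would return the completion once a NIM container is running.
```

The OpenAI-compatible surface is the point: applications written against one chat-completions API can swap in a locally hosted NIM without rework.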

Taken as a whole, the generative AI use cases today are delivering value, but won't set the technology world on its ear. Frankly, some of the use cases sit at the intersection of AI, process mining and data science and you'd be hard pressed to declare the implementations as solely artificial intelligence.

This post first appeared in the Constellation Insight newsletter, which features bespoke content weekly and is brought to you by Hitachi Vantara.

Jensen Huang's keynote highlighted where generative AI use cases are going to go. First, the sheer pull of Nvidia's ecosystem--AWS, Microsoft Azure, Google Cloud Platform, Oracle Cloud Infrastructure, data platforms such as Databricks and Snowflake and enterprise software vendors--will put NIMs on the map. AI Enterprise 5.0 will be ubiquitous.

And priced at $4,500 per GPU for AI Enterprise, there's a big market opportunity for Nvidia, but nothing that breaks the bank. The cash register for Nvidia is still the GPU. That said, the software math for Nvidia is compelling--especially if Nvidia has 1 million GPUs in the field attached to AI Enterprise.
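
That software math is easy to check:

```python
# Back-of-the-envelope check of the AI Enterprise opportunity cited above.
price_per_gpu = 4_500        # USD per GPU for AI Enterprise
attached_gpus = 1_000_000    # the hypothetical attached installed base from the article
software_revenue = price_per_gpu * attached_gpus
print(f"${software_revenue / 1e9:.1f}B a year")  # prints "$4.5B a year"
```

Meaningful software revenue, but still a fraction of what the GPUs themselves bring in.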

Related: Nvidia GTC 2024 Is The Davos of AI | Will AI Force Centralized Scarcity Or Create Freedom With Decentralized Abundance? | AI is Changing Cloud Workloads, Here's How CIOs Can Prepare

Simply put, Nvidia is flooding the zone for generative AI use cases. Speaking to industry analysts, Huang was asked about enterprise use cases. He said:

"We have two avenues to take AI into enterprise. One avenue is people build up applications in the IT department. We have business application developers writing applications for forecasting and supply chain management. We have to create these AI modules and AI libraries for them. Business applications are just AI applications. Somebody's going to go off and build.

The other avenue is through the enterprise IT platforms, and I think that they're all sitting on a goldmine. They created tools and you can now create AI copilots to go use those tools. You're gonna have SAP create copilots and they're gonna get better and better. Instead of hiring 100 business application developers, you have 100 and another 500 that are APIs."

Platforms appear to be the primary vendor goal at the moment. Palantir said on its fourth quarter earnings conference call that the company has covered nearly 200 use cases coming from its AIP Bootcamps. Palantir CTO Shyam Sankar said AIP is enabling the company "to integrate so many types of new data, video conferences, incident response calls, Slack rooms, PDFs, images, video, audio, and exploit them through the power of LLMs and ontology."

Related: Palantir posts strong Q4, sees enterprise traction in US | Palantir's commercial business scales with help of AI boot camps

Sankar said the real data that defines a process is in the conversations rather than in the enterprise system. "What's in the enterprise process system is a lousy latent representation of this reality," said Sankar. "With AI and LLMs, you can't think your way through it. You have to get your hands dirty and work in anger to get use cases into production. In AIP, we have built a platform to deliver proof, not just proofs of concept, to our customers."

C3 AI CEO Tom Siebel said during one of his Transform 2024 talks that if you fast forward three years, you'll find that the entire enterprise application stack will be transformed. AI applications will be predictive and prescriptive and save billions of dollars.

"Let's fast forward three years March 2027. No CEO in the world will be able to withstand a board meeting where he or she was standing up without reporting what customer churn was, what device failure was, and the level of fraud.  When the tools are in place to prevent the failures, prevent the customer churn and make sure you can deliver the products on time it's big," said Siebel.

C3 AI as of Transform 2024 has deployed more than 47 use cases in generative AI across multiple industries.

Related: How Baker Hughes used AI, LLMs for ESG materiality assessments | C3 AI launches domain-specific generative AI models, targets industries | C3.ai's next move: Convert generative AI pilots to production deals

The bet here is that we're going to see a lot more enterprise use cases soon, but the real business value will be at the intersection of generative AI, process transformation, automation, scale and speed. It's also worth noting that enterprises are planning to allocate money to generative AI even if they haven't scaled funding yet. Deloitte's first quarter CFO Signals survey found that 64% of North American CFOs are looking to adopt generative AI with a focus on IT, business operations, customer service, finance and sales and marketing.

CFOs aren't allocating budgets toward generative AI yet, says Deloitte

Sixty-two percent of CFOs say their organizations are allocating less than 1% of corporate budgets to generative AI next year, according to Deloitte's CFO Signals survey for the first quarter. Another 37% of CFOs expect 1% to 10% of budgets to be allocated to generative AI.

The findings, based on 116 respondents, are notable because they highlight how actual enterprise movement on generative AI has trailed headlines and vendor proclamations. Consumer companies plan to allocate more than 5% of their budgets to generative AI. Another notable takeaway is that 58% of CFOs say their boards are somewhat or very much encouraging genAI adoption in the enterprise.

Related: Accenture: Enterprises focused on transformation, data foundation, genAI and punting on smaller projects | Data leaders bullish on generative AI, but multiple challenges remain, says Informatica | Here's why generative AI disillusionment is brewing

Here's the breakdown from the report.

The budgets may move once the returns on generative AI become clearer. Seventy percent of CFOs expect a 1% to 10% increase in productivity from using genAI, with 13% of CFOs seeing higher gains. Productivity is the return metric of choice among CFOs. CFOs from larger companies expect the biggest generative AI productivity gains.

For instance, 9% of CFOs from companies with more than $10 billion in revenue expect productivity gains of more than 20% from genAI, compared with 5% of all CFOs surveyed.

Going forward, CFOs are valuing generative AI on workforce productivity and cost savings. A big chunk of CFOs surveyed, 24%, are uncertain how to value generative AI or had no measurement.

Across the enterprise, CFOs say IT, business operations, customer service, finance and sales and marketing are the top functions ripe for generative AI transformation.

The generative AI data from the CFO Signals survey come amid other key themes. Other takeaways include:

  • 40% of CFOs say now is a good time to take greater risks and the remainder are risk averse.
  • 65% of CFOs say they believe the US equity markets are overvalued.
  • 42% of CFOs say they were more optimistic about their own companies' financial prospects.
  • 59% of CFOs saw North American economic conditions as good or very good, but only 12% of CFOs saw Europe that way. Just 3% of CFOs saw China's economic conditions as good.

Accenture: Enterprises focused on transformation, data foundation, genAI and punting on smaller projects

Enterprises are betting on generative AI and digital transformation at the expense of other IT projects, but scaling AI is difficult and more foundational work is needed, according to Accenture CEO Julie Sweet.

Sweet, speaking on the company's second quarter earnings call, said Accenture saw 39 clients with quarterly bookings topping $100 million. The company also had more than $600 million in generative AI bookings to reach $1.1 billion in generative AI sales for the first half.

That's the good news. The bad news is enterprises are prioritizing large transformation projects that convert to revenue more slowly. Sweet said:

"We see clients continuing to prioritize investing in large-scale transformations which convert to revenue more slowly, while further limiting discretionary spending particularly in smaller projects. We also saw continued delays in decision-making and a slower pace of spending.

Our clients are navigating an uncertain macro-environment due to economic, geopolitical, and industry-specific conditions. And in response, we are seeing them thoughtfully prioritize larger transformations, building out their digital core to partnering, to improve productivity, to free-up more investment capacity to focus on growth and other initiatives with near-term ROI."

Revenue was flat for the second quarter even though Accenture saw mid-single digit growth or higher in six of its 13 industries.

Overall, Accenture reported second-quarter revenue of $15.8 billion, flat from a year ago, with earnings of $1.71 billion, or $2.63 a share.

Accenture's outlook for the third quarter fell short of expectations. The company projected third quarter revenue between $16.25 billion and $16.85 billion, below Wall Street estimates of $17 billion. Full-year revenue growth will be between 1% and 3%. Analysts were looking for growth of 2% to 5%.

Sweet said enterprises are now "near universal recognition of the importance of AI," but "most clients are coming to grips with the investments needed to truly implement AI across the enterprise and nearly all are finding it difficult to scale, because the AI technology is a small part of what is needed."

Indeed, Sweet said companies with strong data and digital cores are moving quickly. Laggards are investing in digital core and new processes. "We are working closely with our ecosystem partners to help our clients understand the right data and AI backbone that is needed and how to achieve tangible business value," said Sweet, who noted that 2024 budgets were just recently set and there's caution about the economy.

Here's a look at some of the enterprise technology spending takeaways from Accenture:

  • Enterprises pulled back on spending for Accenture services and smaller projects at the beginning of the year.
  • Accenture is focused on market share and meeting customers where they are.
  • Cloud, data and AI are leading priorities.
  • Companies are substituting projects instead of adding to budgets.
  • Foundational data projects are necessary, and those transformation projects are heavier lifts.
  • Companies are dialing back services because they are more discretionary. Large transformations are happening because the need to replatform is critical.

Sweet said:

"You can't just jump to the great data foundation. You need to be in the cloud. You have to have modern platforms. The clients during these higher bookings rate are making big transformations oftentimes to be ready to put in the data foundation. Only 40% of workloads are in the cloud and 20% of those roughly haven't been modernized. Many of our clients haven't put in the platform--if you don't have the major ERP platforms that are modern, you don't create a data foundation to fuel GenAI. You've got to build the digital core. And there's a lot more to go."

iPaaS Primer: How the Integration Platform as a Service is Evolving

iPaaS vendors are filling out their capabilities with API management, workflow automation, AI/ML, and, on the cutting edge, GenAI.

I’ve been covering the path from data to decisions for nearly nine years here at Constellation Research, and it’s a path that invariably starts with integration – integrating data sources and data-generating applications so organizations can connect business processes, gain insight, make decisions, and act. With the steady rise of the cloud over these last nine years, the integration platform as a service (iPaaS) has come to the fore. Here’s a closer look at the latest trends in iPaaS, which is one of the three core markets I cover, along with analytical data platforms (data lakes, data warehouses and lakehouses), analytics/BI and citizen data science capabilities including artificial intelligence (AI), machine learning (ML) and generative AI (GenAI).  

iPaaS have emerged as the cloud-based platforms for connecting databases, applications and mission-critical systems both in the cloud and from on-premises environments to the cloud. It’s not just about connecting sources to targets, as in the batch-oriented extract/transform/load (ETL) days of yore. Integration is increasingly a two-way street, with updates and data streams sent to AI models, source systems, automated business processes, and data platforms.

iPaaS have helped organizations move on from brittle, hard-coded, point-to-point integrations. The iPaaS becomes the consistent intermediary between points of integration, facilitated by the platform’s hundreds of out-of-the-box connectors to popular apps and data sources (all of which are maintained by the vendor). The work of connecting sources and systems becomes much more accessible to non-IT types by way of drag-and-drop and point-and-click interfaces. What’s more, the components of integrations created with the iPaaS are modular and can be reused to quickly assemble new integrations. When systems change, components can be quickly updated across all integrations in which they are used, helping teams work faster and be more productive.

When the iPaaS emerged more than a decade ago, vendors typically came out of the data-integration or application-integration arena, but what Constellation calls a next-generation iPaaS has to be able to do it all. Many iPaaS vendors also address business-to-business integration and the electronic data interchange (EDI) requirements seen in supply chain environments. In addition to offering hundreds of prebuilt connectors and templates for common integration flows, iPaaS typically provide monitoring, alerting and debugging capabilities to keep tabs on and troubleshoot integrations, pipelines and jobs.

As detailed below, the main areas where iPaaS vendors are stepping up are:

API management. Connecting cloud apps and data sources is all about using application programming interfaces (API) that abstract away complexity and promote agility and flexibility. Unfortunately, APIs also introduce a new source of complexity in the form of API sprawl. Here’s where API management capabilities come in. iPaaS vendors are stepping up with 1. API lifecycle management capabilities, 2. Unified control planes for wrangling all those APIs, and 3. Governance frameworks to ensure that APIs are tracked and managed.
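
The governance piece can be reduced to a toy sketch: maintain a registry of APIs under lifecycle management and flag any endpoint observed in traffic that was never registered, which is exactly the sprawl problem described above. The endpoint names are invented for illustration:

```python
# Toy API-governance check: surface "shadow" endpoints that escaped
# lifecycle management (API sprawl).
registered = {"/v1/orders", "/v1/customers"}          # APIs under governance
observed_calls = [                                    # endpoints seen in traffic
    "/v1/orders", "/v1/legacy/export", "/v1/customers", "/v2/orders",
]

unmanaged = sorted(set(observed_calls) - registered)
print(unmanaged)  # endpoints a unified control plane would flag for review
```

A real control plane would do this continuously against gateway logs, but the set difference is the heart of the idea.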

Workflow and automation. Organizations continue to face pressure to do more with fewer people, so workflow and automation capabilities are on the rise. It makes sense to automate wherever possible. Where there’s any doubt about next steps, use the iPaaS to create a workflow with humans in the loop for exception handling. Where there is confidence about exactly what an event or an analytic threshold or a prediction means, choose straight-through automation without unnecessary human intervention.
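
The routing rule described above (straight-through automation when the outcome is certain, a human in the loop otherwise) can be sketched in a few lines; the confidence threshold is an arbitrary assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    confidence: float  # how certain the system is about the correct next step

def route(event: Event, threshold: float = 0.9) -> str:
    """Automate when confident; otherwise queue for human exception handling."""
    if event.confidence >= threshold:
        return "automate"
    return "human_review"

print(route(Event("invoice_matched", 0.97)))   # automate
print(route(Event("unusual_discount", 0.60)))  # human_review
```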

AI/ML. As the name suggests, an iPaaS is a cloud-based platform provided as a service. That puts vendors in the position to provide recommendations based on observable integration patterns. The customer’s private data remains secure and unseen by the vendor, but leading iPaaS vendors are learning from the metadata patterns and graphs of interactions behind the scenes in order to suggest appropriate data sources, pre-existing integrations, and/or next-best integration steps to users. These recommendations help save time and enhance productivity for professional and novice users alike.
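
A minimal sketch of that recommendation idea, assuming the vendor sees only anonymized metadata about which systems customers connect (the system names and history are invented): count which targets historically follow a given source across integration graphs and suggest the most common.

```python
from collections import Counter

# Hypothetical anonymized metadata: (source, target) pairs from past integrations.
history = [
    ("salesforce", "snowflake"),
    ("salesforce", "slack"),
    ("salesforce", "snowflake"),
    ("sap", "snowflake"),
]

def suggest_targets(source: str, n: int = 2) -> list[str]:
    """Suggest the n most common next-best targets for a given source system."""
    counts = Counter(dst for src, dst in history if src == source)
    return [dst for dst, _ in counts.most_common(n)]

print(suggest_targets("salesforce"))  # ['snowflake', 'slack']
```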

GenAI. The latest innovation in iPaaS is the use of GenAI, which is being used to design and deploy new integrations and to explain and document new or existing integrations. GenAI will make the iPaaS accessible to an even broader swath of users through natural language interfaces, and it will help organizations to modernize legacy integrations by explaining, recreating, and optimizing code created by people who have long since left an organization.

Streaming capabilities. The pace of business is always accelerating, so it’s a must to consider low-latency data integration. A next-gen iPaaS should address streaming requirements.

To summarize, modern iPaaS are benefitting professional integrators and tech-savvy business users alike. Using an iPaaS enhanced with augmented capabilities including AI/ML and GenAI, tech-savvy business types can create integrations for themselves rather than having to wait in line for IT to do the work. For the professionals, an iPaaS can accelerate and scale up their integration work, enabling them to:

  • Create, monitor, maintain and modify integrations much more quickly and productively.
  • Validate, troubleshoot and optimize integrations created by the tech-savvy business types.
  • Explain, document and streamline legacy integrations and code.

Recommendations

If there’s a risk in investing in an iPaaS, it’s that the platform might not support all the types of integration or the scale of integration that the organization will need. A next-generation iPaaS is one that is complete and able to serve as the companywide standard. If you can do it all with one platform you’ll get much more out of the investment, both in terms of the technology and the training of people, and there will be no need for point solutions.

Look beyond the next integration project to consider the breadth of integration requirements in recent history and in the foreseeable future. Do you have on-premises requirements? Will you need to work with more than one public cloud? Are investments anticipated in new enterprise apps, such as ERP or CRM systems? What are the workflow and automation requirements?

On the cutting edge, if an iPaaS vendor doesn’t have an AI/GenAI strategy by this point – let alone GenAI-based features in preview – I’d say it’s time to cut them from your short list.

Costs and licensing regimes are crucial. Does the platform you are considering offer modularity? As noted above, a complete iPaaS is a future-proof choice, but if you don’t have plans to use subsets of capabilities, is it possible to add them (and pay for them) only as and when needed? What subscription models are available? Is it per user, per connection or capacity based? The more choices available the better, as the model that makes sense today may get expensive as the number of users or integrations multiplies.

To give you a head start on your tech selection process, I recently updated my Constellation ShortList™ for Integration Platform as a Service. If you don’t see a candidate you are considering on my ShortList, feel free to contact me at [email protected] for an advisory consultation. I wish you the best of success in your technology selection process.

GitHub Elevates Code Scanning to the Next Level By Offering to Auto Fix the Code

In a major advancement for developer productivity and security, GitHub has announced “code scanning autofix,” a new feature powered by GitHub Copilot and CodeQL. Starting today, it will be available in public beta for all GitHub Advanced Security customers. This AI-driven tool helps developers identify and fix vulnerabilities in their code with suggested fixes, streamlining the development process and improving code security. Here’s how it works.

Scanning code is crucial for preventing security breaches and maintaining a strong software supply chain. Vulnerabilities in code can be exploited by malicious actors to gain unauthorized access to systems or steal sensitive data. By proactively identifying and fixing these vulnerabilities, developers can significantly reduce the risk of attacks.

Image courtesy: GitHub

Features such as autofix make life easier for developers of all skill levels. Novice programmers can leverage the suggested fixes to learn from experts and improve their coding practices. Experienced developers can benefit from the automation, allowing them to focus on more complex tasks. Ultimately, any developer working on a codebase with potential vulnerabilities can benefit from this new feature.
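
To make the idea concrete, here is the kind of flaw code scanning with CodeQL flags and the shape of fix autofix proposes: a SQL injection remedied with a parameterized query. This is an illustrative sketch, not output from the GitHub feature itself:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: string concatenation lets attacker-controlled input alter the query.
    return conn.execute("SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Autofix-style remedy: a parameterized query neutralizes injected input.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection leaks every row: [(1,)]
print(find_user_safe(conn, payload))    # fixed version matches nothing: []
```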

As AI-driven tools continue to mature, code scanning tools will become even more sophisticated. In addition, we can expect to see code scanning tools become more and more integrated directly into the development process. This will make it easier for developers to scan their code for vulnerabilities early and often, an ongoing desire from CIOs and CISOs we work with. 

Micron Technology: More AI, more memory, more demand ahead

Micron Technology CEO Sanjay Mehrotra said artificial intelligence workloads are boosting demand for memory chips, with AI-optimized GPU systems faring well and AI PCs on the way.

In prepared remarks with Micron Technology's second quarter results, Mehrotra said AI server demand for high-bandwidth memory, data center solid-state drives and DDR5 is boosting prices. He said:

"We expect DRAM and NAND pricing levels to increase further throughout calendar year 2024 and expect record revenue and much improved profitability now in fiscal year 2025."

Mehrotra's argument is that Micron is well positioned for edge and data center inference workloads.

"We are in the very early innings of a multiyear growth phase driven by AI as this disruptive technology will transform every aspect of business and society. The race is on to create artificial general intelligence, or AGI, which will require ever-increasing model sizes with trillions of parameters. On the other end of the spectrum, there is considerable progress being made on improving AI models so that they can run on edge devices, like PCs and smartphones, and create new and compelling capabilities. As AI training workloads remain a driver of technology and innovation, inference growth is also rapidly accelerating. Memory and storage technologies are key enablers of AI in both training and inference workloads."

Micron said it is seeing the following tailwinds:

  • Its high-memory offerings with better bandwidth are seeing demand due to better power consumption.
  • Micron is making progress qualifying its memory products with multiple customers. The company recognized its first revenue from HBM3E, which will be part of Nvidia's H200 Tensor Core GPUs, in the fiscal second quarter.
  • Data center SSD revenue hit a record for Micron in calendar 2023.
  • The PC market is expected to grow modestly and accelerate due to AI PC demand, which uses more memory.
  • AI will also drive smartphone memory specs over time.

For its second quarter, Micron reported net income of $793 million, or 71 cents a share, on revenue of $5.82 billion, up 58% from a year ago. Non-GAAP earnings were 42 cents a share.

Wall Street was expecting a non-GAAP second quarter loss of 24 cents a share on revenue of $5.35 billion.

As for the outlook, Micron said third quarter revenue will be $6.6 billion give or take $200 million with non-GAAP earnings of about 45 cents a share, give or take 7 cents. Wall Street was expecting third quarter non-GAAP earnings of 9 cents a share on $6 billion in sales.

Layoffs, DXPs, and Zoho Customer Feedback | ConstellationTV Episode 76

🎬 ConstellationTV episode 76 just dropped! This week, hilarious analyst duo Liz Miller and Holger Mueller unpack enterprise #tech news (Oracle/Microsoft partnerships, impending layoffs and Adobe's new AI assistant).

Then Liz explains why Pantheon Platform made the 2024 #DXP ShortList and Holger hears from Rob O'Brien of ITV Studios about his experience using Zoho #technology. Watch until the end for bloopers!

0:00 - Introduction
1:30 - #Enterprise tech news coverage (partnerships,#layoffs and #AI)
13:49 - Let's Talk About #DXPs with Liz Miller
32:06 - #ZohoDay2024 interview with ITV Studios
42:05 - Bloopers!

ConstellationTV is a bi-weekly Web series hosted by Constellation analysts, tune in live at 9:00 a.m. PT/ 12:00 p.m. ET every other Wednesday!

On ConstellationTV: https://www.youtube.com/embed/GQq9OkVkIys

.Lumen outlines bringing computer vision headset to blind

The best use cases are sometimes so obvious. During Nvidia GTC 2024, Cornel Amariei, CEO of .Lumen, walked through a headset for the visually impaired that will scale better than a guide dog, using sensors and AI technologies that are used in cars.

"We have today over 300 million people who are visually impaired, and this number is increasing greatly. But if you check what solutions are out there for them, there are only two solutions for their mobility, and they're 1,000s of years old--a guide dog and the white cane," explained Amariei.

Amariei explained how .Lumen's headset includes spatial navigation AI to understand the pedestrian world the same way a self-driving car would. The headset also includes a non-visual feedback interface that uses haptics to guide the blind.

"Rather than pulling your hand as a guide dog would, we actually pull your head," he explained. "We tested with over 300 blind individuals, and I would argue it's actually more intuitive than a guide dog pulling your hand. It's all possible because of the latest in self-driving, robotics and artificial intelligence powered by Nvidia."

The technology behind the headset includes two RGB cameras, two depth cameras, infrared sensors, and an inertial measurement unit, with the ability to use GPS in some use cases. The data is processed on the headset itself, which runs machine learning models and computer vision pipelines.

Amariei added that .Lumen is optimizing for battery life and other features. He said that the headset can be used with a white cane or guide dog as well as by itself. Approval from the Food and Drug Administration is expected next year, and the device will be available in the second half of 2024.



Microsoft names Suleyman head of consumer AI, Microsoft AI

Microsoft is shoring up its consumer Copilot efforts with the addition of Mustafa Suleyman and Karén Simonyan to lead a new group called Microsoft AI. Suleyman and Simonyan were two of three co-founders of Inflection.ai.

Suleyman was also a co-founder of DeepMind, which Google acquired in 2014.

Inflection developed a large language model, Inflection 2.5, which powered Pi, a personal AI pitched as more conversational than rivals. The addition of Suleyman and Simonyan creates a group focused solely on consumer AI products and research, notably the Bing and Edge Copilots. In June, Inflection announced $1.3 billion in funding led by Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and new investor Nvidia. The company said at the time that it was building one of the largest AI clusters, with 22,000 Nvidia H100 Tensor Core GPUs.

In a blog post, Microsoft CEO Satya Nadella said:

“Mustafa will be EVP and CEO, Microsoft AI, and joins the senior leadership team (SLT), reporting to me. Karén is joining this group as Chief Scientist, reporting to Mustafa...Several members of the Inflection team have chosen to join Mustafa and Karén at Microsoft. They include some of the most accomplished AI engineers, researchers, and builders in the world. They have designed, led, launched, and co-authored many of the most important contributions in advancing AI over the last five years. I am excited for them to contribute their knowledge, talent, and expertise to our consumer AI research and product making."

Microsoft's consumer generative AI team will report to Suleyman.

Nadella was sure to note that Kevin Scott continues as CTO and EVP of AI, and that Rajesh Jha remains EVP of Experiences and Devices, in charge of Copilot for Microsoft 365.

A few takeaways:

  • Microsoft was sure to note that "our AI innovation continues to build on our most strategic and important partnership with OpenAI," but it's clear there's some diversification going on with Inflection as well as the Mistral AI partnership.
  • By calling out Scott and Jha, Microsoft is signaling Copilot stability to enterprises.
  • Microsoft has led the generative AI wave for its consumer applications, but hasn't moved the market share needle vs. Google with Edge, Bing and Microsoft Advertising.
  • The stakes for talent are obviously high as Nadella noted that "there is no franchise value in our industry and the work and product innovation we drive at this moment will define the next decade and beyond."
  • Microsoft is doubling down on homegrown AI development.
