How Google Public Sector and NASA aim to bring generative AI to aircraft ground traffic control

Google Public Sector and NASA are training AI models to understand the speech, context and instructions needed to get airplanes from the runway to their gates as efficiently and safely as possible.

Today's airport surface management process revolves around quirky acronyms, aviation vocabulary and human voice traffic that's far from perfect. If NASA Aeronautics Research Institute's (NARI) partnership with Google Public Sector pans out, that process could be augmented with data, speech-to-text instructions and automation.

NASA Aeronautics Research Institute (NARI) is focused on cutting-edge aeronautics research and operational strategies. NARI connects industry, government, and academia to NASA with a focus on autonomous, high-speed, and electric aircraft. "We're the bridge between NASA researchers in aeronautics and the external community, which can be the FAA, other agencies, universities and industry," said Dr. Krishna Kalyanam, Deputy Director, NARI. "NARI was also set up to seed foundational early-stage research that may pan out and turn into a larger project funded by the government."

NARI's priorities include Advanced Air Mobility (AAM), Wildland Fire Initiatives, Shaping Tomorrow's Aviation Systems and providing a collaborative infrastructure for partners to work with NASA.

We caught up with Dr. Kalyanam at the Google Public Sector Summit in Washington, DC to talk about the research project.

The project. Dr. Kalyanam said the goal of the research project is to apply speech-to-text models to ground traffic control as planes land and taxi on the tarmac. If the process of getting planes across the airport surface to the gate can be optimized, airlines can improve safety and their cost structure. The research looked into whether voice traffic over the radio can be turned into taxi instructions with 100% accuracy so they can be absorbed by automation and provide another layer of instruction for pilots.

"You land and you have to get off the concrete. You don't want aircraft on the runway and the sooner you can get an aircraft to its destination, the more planes you can get on the runway," said Dr. Kalyanam. "As soon as you land, you're getting instructions to the gate assigned to you by your dispatcher. All instructions are provided by the ground controller. It's 'take this route. Turn here. And here.'"

Challenges. The biggest challenge with the project, according to Dr. Kalyanam, was that voice traffic between pilots and the control tower has its own vocabulary as well as poor radio quality.

"Say you're running into some bad weather and need instructions to the gate. Today that's full end-to-end speech. The information could be augmented by text, visual and other inputs to go along with voice that can be converted to a route that's communicated digitally," said Dr. Kalyanam. "Once digital it can be displayed on a map or directly ingested into route planning."

Another challenge is that instructions to pilots use a unique vocabulary, including terms like "Roger" and "Wilco." Humans can easily fill in the gaps when interpreting this speech; models need to be trained on over-the-air voice traffic to pick up the vocabulary.

The goal. By digitizing the voice traffic over radio, directions can be given via moving maps, text, and color codes. That data can also be used to optimize routes and improve efficiency. "Once you digitize the information you have all this information in one place that can be optimized," said Dr. Kalyanam. "There are 100 tasks that are needed between the time the plane lands, people get off and the plane is ready to take off again."

Dr. Kalyanam said this research could also apply to autonomous aircraft and refueling. "The traditional processes are mostly human-centric," he said. "Some of these things can be automated, but at the least you can make it easier for humans to perform tasks.”

He added that the motivation of the research is to provide a secondary source of information for the pilots.

Training models. Google Public Sector used multiple models for training, but training was done on a minimum data set of 10 hours of voice instructions. Google's base models were already trained on general English conversations but had to be customized for the aviation vocabulary and use case. The models would pick up voice instructions, transcribe them according to ground control's acronyms and vocabulary, and create digitized instructions.

"It's almost like learning a new language," said Dr. Kalyanam. "There are words that we will never use in English because they mean completely different things. You need to get the right context. If you hear “Dealt” it most probably means “Delta” with the ‘-ah’ sound clipped. Sometimes you can’t hear parts of what is being said. You're training models to be as perfect as they can be in an imperfect environment."
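Dr. Kalyanam's "Dealt"/"Delta" example is essentially a vocabulary-constrained correction problem. As a minimal illustration (this is a sketch, not NASA's or Google's actual pipeline, and the lexicon and function names are hypothetical), a transcript post-processor can snap clipped or garbled tokens to a known aviation lexicon:

```python
import difflib

# Hypothetical aviation lexicon; a real system would carry the full ICAO
# phonetic alphabet plus airport-specific taxiway and runway names.
LEXICON = ["delta", "charlie", "lima", "roger", "wilco",
           "taxi", "runway", "via", "hold", "short"]

def correct_token(token, cutoff=0.6):
    """Snap a possibly clipped token to the closest lexicon entry."""
    matches = difflib.get_close_matches(token.lower(), LEXICON, n=1, cutoff=cutoff)
    return matches[0] if matches else token.lower()

def correct_transcript(raw):
    """Apply per-token correction to a raw transcript string."""
    return " ".join(correct_token(t) for t in raw.split())

print(correct_transcript("taxi via charlie hold short dealt"))
# → taxi via charlie hold short delta
```

A production system would also use phonetic similarity and airport-layout context rather than string distance alone, but the principle is the same: constrain the model's output to the domain vocabulary.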

Google Public Sector and NASA worked with retired controllers to verify the ground truth as well as the voice commands and how the models performed. "The goal was to capture the taxi instruction with 100% accuracy," said Dr. Kalyanam. "There may be a conversation, but you want to be able to know that the pilot can't turn left from Charlie to Lima and then use that information. There's local knowledge about the airport layout that can be used to fix errors.”

Complicating the training effort is the reality that every airport is different, and the models will need to know local context--say, the differences between the airports in Dallas and Tampa. Models may need more fine-tuning based on the location of the airport, and that fine-tuning will be even more important if this research is applied to international airports.

Working with Google Public Sector. Dr. Kalyanam said partnering with Google Public Sector made sense given Google's experience in AI, speech-to-text use cases and mapping. "We had some internal stuff we developed over the years, but the speech-to-text expertise was with Google," said Dr. Kalyanam. "Google has the models and has done the research."

Dr. Kalyanam added that Google Public Sector also had the engineers available to test multiple models and configurations for the audio. "Not one single model works best," said Dr. Kalyanam. "It took a lot of experimentation. This is custom engineering work. It's a good partnership since we don't have access to what's inside the box, but we can provide feedback so Google can build something. We also have retired controllers and pilots for model validation."

Metrics. Although it's early in the research process, Dr. Kalyanam said time saved and reduced mishaps will be core metrics. "If you end up in the wrong place it's a lot of time wasted because aircraft normally do not go in reverse," he said. "There's a lot of opportunity with digitized data. If you didn't make a turn, automation can alert you and give you new instructions. I think this process can be made simpler and hopefully less prone to error."

What's next? NASA and Google Public Sector are looking to publish their research and work with the FAA and the aviation industry. "There's a lot of interest in this research," said Dr. Kalyanam. "This is exploratory research, so we are ready to accept some failures. We are trying to prove this concept and maybe we'll simulate it in one airport and see how it adapts. We do the research, crunch the numbers and work with the FAA and industry to mature the technology for use."

BT150 Spotlight: Sunitha Ray on the difference between enterprise AI and genAI

Sunitha Ray, Field Operations CTO at Shopify, says there's a big difference between enterprise AI and generative AI and business leaders need to know the use cases and potential returns on investment for each category.

Ray, a Constellation Research BT150 member, was VP of IT at Shark Ninja and has a unique view of AI since she has been on both the sell side and buy side of enterprise spending.

In our chat, we covered the difference between generative AI and enterprise AI, how to think about returns and the need for reskilling.

Here are the takeaways from my chat with Ray.

Differentiating between enterprise AI and generative AI. Before taking on the role at Shopify, Ray was the VP of IT at Shark Ninja and led the artificial intelligence team and genAI projects.

"I differentiate between genAI and enterprise AI at this point. Enterprise AI is about optimizations and figuring out solutions to problems," explained Ray. "We did a project where we designed the supply chain network, plotted optimal manufacturing plant and distribution center locations based on customer service levels we wanted to meet. That's enterprise AI."

GenAI is more about getting access to all types of data and then creating something new, said Ray. "GenAI has a lot of use cases, but for corporate use cases it has a long way to go for ROI," said Ray. "Enterprise AI can still be leveraged more effectively."

Where generative AI works well. Ray said genAI has great use cases, but they tend to be in marketing and personalization. Images and text can be generated on the fly to personalize goods and offers for consumers. At Shopify, the company is leveraging genAI to give merchants imagery, product catalog and personalization options inside the platform.

Overspending? "We are big believers in genAI, but I feel like the amount being invested may be disproportionate to the returns that companies will see in the next year or two," said Ray.

Ray said that unless enterprises see clear returns from generative AI, they will pull back on investments. "There will be a huge wave of benefits coming in, but if companies are not seeing returns early, they may pull back on funding," said Ray. "I don't want to be negative, but there has been a lot of investment already and ultimately the C-suite will be looking at the bottom line."

Enterprises should also be honest about their AI readiness. One big reason genAI projects have stumbled is data strategy, said Ray, who noted investing in data strategy first will ensure better AI results.

The difference in leadership on the vendor side vs. the buy side. Ray said she's excited to be on the sell side with Shopify, which is a leading platform with a lot of AI.

Ray said:

"The big difference between the buy side and sell side is buy side is always about managing constraints and managing resources. Sometimes you may not always make the best decisions. You might compromise because you don't have the budget, people on board and the right resources."

"On the sell side, you don't have those constraints because companies are always trying to make their product superior and provide better total cost of ownership to customers."

DIY vs. buy decisions. Ray said DIY is predominant in AI projects today because consulting companies are still building out practices and enterprises are also honing skills. "When everything is changing so rapidly, companies are scrambling to reskill and start frameworks to generate use cases, have workshops and implement," said Ray.

Bridging genAI skill gaps to improve genAI projects. "What I would do differently if starting off with genAI today is to have a readiness workshop instead of jumping in," said Ray. "Are we ready as an organization to invest and create value from AI? Most companies would probably say no, but that doesn't mean you don't start. I would have parallel tracks for data strategy and AI."

Ray said enterprises should also start with baselines to track progress and then prioritize use cases. "One of my favorite ways of prioritization is the effort vs. impact metrics. How much effort do you put in and how much impact can you get with minimal effort? Take those use cases to senior management," she explained.
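Ray's effort-vs-impact screen reduces to a simple ranking. As a toy sketch with made-up scores (these use cases and numbers are illustrative, not Ray's actual scoring model):

```python
# Hypothetical use cases scored 1-5 on business impact and delivery effort.
use_cases = [
    {"name": "marketing copy drafts", "impact": 4, "effort": 1},
    {"name": "supply chain network design", "impact": 5, "effort": 4},
    {"name": "support ticket summarization", "impact": 3, "effort": 2},
]

def prioritize(cases):
    """Rank use cases by impact per unit of effort, highest first."""
    return sorted(cases, key=lambda c: c["impact"] / c["effort"], reverse=True)

# Highest impact-per-effort first: the cases to take to senior management.
for c in prioritize(use_cases):
    print(f'{c["name"]}: {c["impact"] / c["effort"]:.2f}')
```

The point of the exercise is less the arithmetic than forcing honest effort estimates before use cases reach the executive team.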

Final word. "I'm very excited about generative AI. I just want to make sure companies have the necessary guardrails to make sure projects don't fail. I see AI being a total game changer for most organizations," she said.

Ray added that enterprises should also lean into employee reskilling over the next two to three years. "The transformation is going to happen in the next two to three years and it's going to be exciting to see how it changes corporate structure and industries overall," she said.

How ResMed’s data prowess sets it up for AI, sleep health market expansion

ResMed is best known for its CPAP devices, medical devices and masks, but it's also a software company with a treasure trove of sleep and breathing data. The plan for ResMed: Leverage artificial intelligence, generative AI and machine learning to grow its market.

The company’s digital health data set includes:

  • 28 million patients in ResMed's AirView software ecosystem.
  • 26 million medical devices with 100% cloud connectivity.
  • 20 billion nights of sleep health and breathing health data in the cloud across 140 countries.
  • 150 million accounts in ResMed's residential care software ecosystem.
  • 8.3 million patients with ResMed's myAir patient app.

And a newly launched generative AI digital concierge called Dawn will provide more interactions. ResMed is an example of how enterprises are increasingly using proprietary data to create unique AI applications and new products and services.

The bet is that ResMed has a massive untapped market that includes 2.3 billion people with sleep and breathing disorders.

On its investor day, ResMed CEO Michael Farrell said:

"The overlap between a person who suffocates every night with sleep apnea and who also has a psychological reason that they cannot sleep. It is incredibly difficult to treat insomnia. If you suffocate as well, it becomes a double wheeled problem. We don't know how many of our patients are not adherent to CPAP because they have insomnia. But that overlap is significant. And ResMed is investing in digital health on both sides of that.

The other overlap there is what's called overlap syndrome, which is chronic obstructive pulmonary disease and sleep apnea. You have difficulty breathing because of the geometry of the upper airway. But then in addition to that, you have lung disease. These are some of the most difficult patients to treat, and ResMed has the technologies, the bilevels, the ventilators, but also the digital health technology that can help physicians take care of these patients."

ResMed plans to expand into adjacent markets including insomnia, chronic obstructive pulmonary disease or COPD, neuromuscular disease and other chronic conditions. And ResMed is also working to make its core medical devices smaller and more comfortable.

Data driven

The idea that ResMed could expand its market is quite a turnabout considering many Wall Street analysts thought the company’s total addressable market would be shrinking until recently.

ResMed’s approach to data has helped it navigate a volatile 18 months as investors were concerned about how GLP-1 drugs used to treat obesity would impact ResMed sales of its medical equipment. The thinking behind the stock volatility was that lower obesity would hamper sales.

ResMed's approach to the GLP-1 threat was to analyze the data to continually test whether investor fears were warranted. Farrell said on the company's first quarter earnings call that the data so far shows that patients on GLP-1 are more likely to start sleep apnea therapy and wear their CPAP (continuous positive airway pressure) devices.

"We've designed a real-world data analysis that now equals 989,000 subjects, who received both a prescription for a GLP-1 medication and a prescription for positive airway pressure therapy," said Farrell. "The results from this analysis are clear. People prescribed a GLP-1 and PAP therapy have 10.8 percentage points more likelihood or propensity to commence positive airway pressure therapy."

GLP-1 prescription and PAP prescription patients are also more likely to adhere to long-term therapy based on ResMed's analysis of ReSupply data.

Indeed, ResMed's first quarter revenue was up 11% and the company saw strong demand for its medical devices, masks, accessories and residential care software. A program called ReSupply keeps patient supply sales flowing.

The data flywheel

ResMed has a vast amount of sleeping and breathing data on patients, but the company is also getting an assist from consumer wearable devices, which are increasingly flagging sleep apnea issues.

Farrell said Samsung's latest Galaxy Watch and Apple's new Apple Watches are detecting sleep issues. Google's Fitbit and Garmin are also tracking sleep health. "We believe that these technologies will help drive more patients to seek out information regarding their sleep health and breathing health," said Farrell. "ResMed's obligation is to help these sleep health and breathing health consumers find their own pathway to appropriate diagnosis and treatment for sleep apnea."

The consumer wearable market is likely to drive the funnel for ResMed in the future. ResMed executives said the company plans to integrate with Apple HealthKit and other platforms and pursue strategic partnerships.

ResMed's data lake is one of the "deepest and most profound location of medical data on the planet," according to Farrell. That common data platform will continue to be an asset that unlocks value with de-identified data.

"What have we done with that? We've lowered costs. We lowered the cost of setting a patient up on positive airway pressure by 50% through the digital pathways. We've increased adherence, up to 87% from patients who are using myAir app on top of the doctor using AirView and the full connectivity," said Farrell. "What real-world data is going to come forward over the next five years, what are we going to do with the exponential technology that is generative AI and how are we going to take it to the next level?"

Farrell said patients will also create their own personalized data sets as they combine sleep health data with cardiovascular, diabetes and other data and then work better with health systems. "I think the outcomes will be there," said Farrell. "We see the person in the center. This is patient centric."

AI plans

ResMed is already seeing early returns from its Dawn generative AI assistant. After a few months, Farrell said about 25% of visitors have initiated a session with Dawn.

These sessions have reduced the volume of direct-to-live human contact center queries by 40%.

That example highlights half of ResMed's two-pronged AI strategy. AI will drive productivity in the company. Internally, ResMed is using AI to automate operations and processes involving health providers, insurers and the supply chain, said Bobby Ghoshal, Chief Commercial Officer, SaaS at ResMed.

"Our plan is to further infuse automation and AI across this entire process and specifically, to reduce friction on the patient intake side around documentation, authorization and billing," said Ghoshal.

ResMed is also betting that AI will drive revenue through new products and services as well as personalized experiences.

Hemanth Reddy, ResMed's Chief Strategy Officer, said the company's 2040 strategy is to expand its market and use its data assets to harness the latest advancements in AI.

"We're going to connect our solutions much more deeply as one single integrated health technology ecosystem across an individual's patient journey. In doing so, we're going to drive much more personalized and digital-enabled pathways," said Reddy.

ResMed's plan is to benchmark itself against successful technology companies in terms of product management and speed.

However, ResMed knows its core strengths and where AI fits in. "ResMed is not going to be the world's best at AI. That's going to be Amazon and Microsoft and Google. But we are going to be the world's best at applying generative AI to the world's biggest data lake. I actually call it a data well of sleep health and breathing health information on the planet," said Farrell.

The ResMed stack

ResMed primarily uses Amazon Web Services for its data, AI and machine learning backbone. ResMed built its Intelligence Health Signals (IHS) platform on AWS so its data science team could build and deploy models.

In a 2022 case study, ResMed detailed its use of Amazon SageMaker for its artificial intelligence and machine learning platform. The company's data lake is also built on AWS and connects to SageMaker via AWS Glue.

Here's a look at ResMed's architecture circa 2022.

Based on job listings, Snowflake is a key vendor for ResMed. The company also leverages open-source technologies as well as Terraform from HashiCorp, now owned by IBM.

Agentic AI without process optimization, orchestration will flop

Agentic AI is a hot topic in enterprise technology, but without process automation and orchestration the vision is unlikely to be realized. CxOs sifting through the marketing hype of agentic AI should keep process optimization and orchestration in the forefront of planning.

In recent weeks, the flow of agentic AI news hit a fever pitch and it appears that vendors have gone from launching AI agents in their platforms to catapulting to "agent of agents." Many of these plans are short on process automation and orchestration. At this moment, agentic AI is a game of executing tasks autonomously within a vendor's platform. First, you got the data silos. Then you got the dime-a-dozen copilots within your applications. And now you're getting AI agents that aren't going to operate across platforms and processes.

Here's what you'll need to make agentic AI work.

  • A vendor that is more of a neutral party and has connectors into multiple systems. Think UiPath and Celonis, potentially Boomi in the future, and ServiceNow today.
  • A platform that can operate horizontally across these systems. Amazon Q for Business would be an example, as is Google Cloud. A hyperscale cloud provider makes the most sense in this horizontal AI agent context.
  • Process mining and optimization capabilities. This process knowhow appears to be the missing ingredient in most of these agentic AI visions. Microsoft has process optimization capabilities as does SAP via DataSphere/Signavio and a partnership with UiPath. ServiceNow also has process optimization that rides along with workflows.
  • Orchestration ability because there will be more agents than humans in short order. Building AI agents won't be a problem. Managing them will be.
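To make the orchestration point concrete, here is a deliberately minimal sketch (all names hypothetical, not any vendor's actual product) of a registry-style orchestrator that routes tasks to agents and keeps an audit trail, which is the governance layer the list above argues will matter more than the agents themselves:

```python
class Orchestrator:
    """Routes tasks to registered agents and records every dispatch."""

    def __init__(self):
        self.agents = {}     # capability name -> agent callable
        self.audit_log = []  # (capability, payload, result) tuples

    def register(self, capability, agent):
        self.agents[capability] = agent

    def dispatch(self, capability, payload):
        if capability not in self.agents:
            raise LookupError(f"no agent registered for {capability!r}")
        result = self.agents[capability](payload)
        # Central logging is what makes agent sprawl governable.
        self.audit_log.append((capability, payload, result))
        return result

orch = Orchestrator()
# A stand-in "agent"; in practice this would wrap an LLM or RPA bot.
orch.register("invoice_extraction", lambda doc: {"total": 120.0, "source": doc})
print(orch.dispatch("invoice_extraction", "invoice-001.pdf"))
```

Real platforms add scheduling, retries, cross-system connectors and human-in-the-loop approvals on top, but the core value is the same: one place that knows which agent did what, with which data.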

With that backdrop here's a look at some of the agentic AI developments worth watching.

The agentic AI ERP play

Enterprise resource planning (ERP) platforms may be playing a home game in the agentic AI race since they can connect data and context across multiple functions. SAP CEO Christian Klein was obviously talking up his company's Joule genAI and agent technology during the company's third quarter earnings call, but he has a point. He said:

"While many in the software industry talk about AI agents these days, I can assure you, Joule will be the champion of them all. So far, we have added over 500 skills to Joule and we are well on track to cover 80% of the most frequent business and analytical transactions by the end of this year. And in Q3 alone, several hundred customers licensed Joule."

Microsoft can also play the ERP to AI agent game. The company launched 10 out-of-the-box AI agents. In Microsoft's view, Copilots are how you'll interact with the agents that will work on behalf of an individual, team or function to execute on processes.

The biggest issue with the ERP-focused AI agent plays--or CRM, HR or any other enterprise acronym of your choice--is that you're still locked in with a vendor.

Nevertheless, the combination of AI agents and well-structured data within ERP systems is likely to lead to quick returns. I am surprised that Microsoft didn't connect the dots more between its AI agents in Dynamics 365 and Power Automate.

The other wrinkle in this agentic AI ERP parade is ServiceNow's just announced partnership with Rimini Street along with its Workflow Data Fabric. The subtext: Maintain your legacy ERP system, abstract it with the Now Platform, save with third party maintenance and reinvest the savings in AI automation.

ServiceNow CEO Bill McDermott said on the company’s third quarter earnings call that enterprises want to avoid previous mistakes with ERP platforms and AI agent sprawl.

"The C-suite is looking to us to prevent a mess with AI," said McDermott. "Leaders see the risk that every vendor's bots and agents will scatter like hornets fleeing the nest. Enterprises trust us to be the governance control tower."

The neutral party, orchestration, automation play

At UiPath Forward, UiPath pivoted from robotic process automation to AI agent building and orchestration. UiPath made its name with RPA, process mining and task mining and then created an automation platform.

The new vision for UiPath rhymes with Microsoft's copilot-to-agents approach except the bridge is from RPA bots to agents. UiPath previewed Agent Builder, forged a partnership with Anthropic and set a vision that combines RPA, automation, robots and people to automate end-to-end processes. UiPath's play is that processes don't run in one system so you need a horizontal platform across the enterprise to be an agent conductor.

UiPath CEO Daniel Dines said:

"We can go end to end process automations. We can reduce a lot of human input into processes. We can make humans only the decision makers into a real process. I think many business applications will offer capabilities to create agents. We are very happy to orchestrate them."

Dines said RPA and generative AI aren't a zero-sum game. "Our robots will provide the tools to the agent to connect to all of these platforms," he said. "Robots are low skilled. Agents are more highly skilled employees."

While UiPath's conference was underway in Las Vegas, Celonis held its Celosphere event in Munich. Celonis has focused on process intelligence with its platform and then feeds into various AI models with its digital twins of enterprises.

Celonis has taken an ingredient brand approach. It's worth noting that one session at Celosphere focused on the combination of Celonis Process Intelligence and Amazon Bedrock, which is also likely to play a big role in building AI agents and orchestrating them.

Celonis launched Celonis AgentC, a suite of AI agent tools, integrations and partnerships. Celonis is looking to embed its Process Intelligence into AI agents to add business context. Celonis' first platform integrations include Microsoft Copilot Studio, IBM watsonx Orchestrate, Amazon Bedrock Agents and open-source environments like CrewAI.

Here’s a graphic on how this Process Intelligence integration works with Microsoft Copilot Studio.

"You can now power AI agents with process intelligence," said Celonis co-CEO Alex Rinke. "This is AI that knows how your business flows."

The sales playbook for UiPath and Celonis has been cribbed by multiple agentic AI vendors that have focused on use cases, enterprise functions and industry applications.

ServiceNow is also playing the role of the broad neutral party that can connect various models, workflows and systems. The company has already layered agentic AI into its Now Platform and for that matter could acquire either UiPath or Celonis.

Lingering questions in an evolving landscape

This riff on agentic AI, the role of automation and process optimization is a work in progress because vendor strategies--and yours for that matter--are being cooked up as we speak.

Among the key questions:

  • Can Salesforce leverage MuleSoft to take Agentforce beyond front-office functions?
  • How many first movers in agentic AI will find themselves buried in agent sprawl?
  • Will the neutral parties today ultimately be acquired? SAP and UiPath are already cozy partners. Aside from the trillion-dollar valuation club no vendor is too big to be acquired.
  • Can integrators be the agentic AI orchestrators? For instance, Infosys CEO Salil Parekh said the company has been focused on small language models, use cases and processes to create multi-agent frameworks to automate work. "We have a multi-agent framework where the agents are doing--a set of agents are doing full solutions to certain business processes or certain functions," he said.
  • What role will hyperscale cloud providers take on? AWS doesn't have any applications in this fight and is truly horizontal. It could have a big role in building and orchestrating agents in the background. Ditto for Google Cloud.
  • Does ServiceNow emerge as the agentic AI point guard for enterprises? McDermott is betting that way. "We intend to be the control point that governs the deployment of agentic AI across the enterprise," he said.

With so many agentic AI moving parts, CxOs may want to lump agentic AI plans with broader process transformation and automation strategies. Agentic AI looks great, but keep process, automation and orchestration top of mind.

How autonomous vehicles could change how cities are designed

BT150 member Dr. Jonathan Reichental said the impact of autonomous vehicles on smart city design is underappreciated.

Reichental is CEO of Human Future, an advisory, investment and education firm. He has previously served as chief information officer at O'Reilly Media and City of Palo Alto. He has written a series of books on smart cities and has created online education content for LinkedIn Learning.

We covered a lot of ground--AI, Internet of things and city operating systems--in a wide-ranging discussion about generative AI and where it fits into cities. Here's a look at the takeaways.

GenAI questions abound. Reichental said generative AI has been the focus of conversation in the tech sector for nearly two years now, but there are plenty of organizations that are figuring out strategies. "A lot of my work is education and clients are asking how they should think about generative AI," he explained. "Sometimes that's the hardest question."

Those incoming questions highlight how it's still early in the generative AI game and organizations are pondering the risks, rewards and use cases. Even as those questions are being asked, many organizations have already adopted AI because their employees have brought it to work. "Employees are bringing their own AI to work," said Reichental.

Applying AI to solve problems. Reichental said organizations should start by identifying a problem to solve and then find the right solution. "I'm old school. Let's look at the problem first and figure out the right solution for it," he said. "Let's not start with the solution and AI before the problem you're trying to solve. These are valuable conversations and activities that I'm seeing right now."

Do-it-yourself approaches vs. buying AI capabilities. Reichental said the public sector is more focused on buying AI capabilities off the shelf. Every cloud SaaS vendor has AI capabilities and in the public sector it is normal to wait for those tools to be integrated. "In the public sector we use traffic management systems, permitting systems and legislative systems and vendors are enabling AI," said Reichental. "Some cities in the world are progressive and building their custom AI solutions, but that's more rare."

Reichental added that he was an advocate for cities moving into the cloud because they shouldn't be in the data center business. AI is similar. "You should focus on your core competencies and subscribe to technology," he said. "Public service is really a world of constraints. Too many projects and not enough time, money or talent and you have to operate within that. There are only a few big cities in the world that have the capacity to pull off a custom genAI project."

To Reichental, smart cities' core competencies are providing educational services, health, transportation, energy and public safety. Technology and AI can make delivery of those services faster with less bureaucracy. "There's a lot of momentum behind the digitization of government and AI is just really a big part of that," he said.

Autonomous vehicles will transform cities. Reichental said that all of the hubbub about AI is overshadowing autonomous vehicles, which have the potential to transform how cities are designed. "Autonomous vehicles and drones can transform the landscape of cities. This is a really big deal," said Reichental. "Cities can completely change how they are designed. Do we need a grid system? How about traffic lights or parking spaces and parking lots? Cities have been built for most of the 20th and 21st century to reflect the needs of cars we drive."

To Reichental, cities will transform from designs that accommodate car ownership to ones built around on-demand electric autonomous vehicles. Reichental said autonomous vehicles will likely have a faster impact on cities than most observers expect today.

Some city design possibilities:

  • Buildings can be planned for various uses. Perhaps a multi-story parking lot can be converted into housing.
  • Instead of tearing down buildings, there can be planned conversions to other uses.
  • Engineering of cities can change without the burden of accommodating car ownership and focus on green spaces, pedestrian areas and gardens. "It's already happening," said Reichental.

Internet of things vision realized. Reichental said the Internet of things will also have a big impact on cities since the sensors and systems are already deployed. What has been missing is the AI for dynamic traffic and energy management. "As we deploy these lower cost sensors in the urban landscape cities will be able to respond better. AI will be a part of that," he said.

Silos and that elusive city operating system. I asked Reichental whether we'd ever see a city platform that integrates everything a city does. The short answer is no. He said:

"There is a graveyard of failed city operating systems and platforms. What everyone has to recognize is that the private sector is centered and focused on delivering to the marketplace. A city is going to have 20 different departments and every one of these is a completely different business and doing its work in a different way. How similar is the fire service in the city to planning or permitting or legal? There are data standards and best practices, but I think the silos are going to continue. Maybe we'll be surprised."


Infosys leans into small language models for industry use cases

Infosys said it is leaning into small language models as it aims to leverage its industry data sets to target use cases.

Speaking on Infosys' second quarter earnings conference call, CEO Salil Parekh outlined the strategy:

"We are working with clients to deploy enterprise generative AI platforms, which become the launch pad for clients’ usage of different use cases in generative AI. We are building a small language model leveraging industry and Infosys’ data sets. This will be used to build generative AI applications across different industries. We have launched multi-agent capabilities to support clients in deploying agent solutions using generative AI."

Parekh said Infosys has "some very good data sets" that can be used to train small language models. These models will be built for clients by industry and added to business applications and combined with Infosys' Topaz cloud platform.

To that end, Infosys launched Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM to spearhead a rollout of small language models. The banking-focused genAI effort was developed using Nvidia AI Enterprise and Nvidia AI Foundry with Sarvam AI and will be integrated into Infosys offerings.

Infosys added that it is working with Nvidia to develop NIM Agent Blueprints.


Zoho focuses LLM efforts on Nvidia architecture

Zoho said it will build narrow, use-case-focused language models for its platform on Nvidia's architecture after seeing a 60% increase in throughput and a 35% reduction in latency compared to the open-source frameworks it used previously.

The company, which offers a broad suite of business applications, has been building its AI stack and features in its portfolio.

Zoho said that it will use Nvidia's NeMo, a part of Nvidia AI Enterprise, and GPUs for its models. Zoho said it has spent more than $10 million on Nvidia technology and plans to invest another $10 million in the next year.

In a statement, Ramprakash Ramamoorthy, Director of AI at Zoho, said the company is focused on developing large language models (LLMs) designed for business use cases and integrated into its stack.

Zoho's focus has been on using smaller models that are more use case focused and cost efficient. Zoho, which uses multiple models, doesn't train its models on customer data.

According to Zoho, its LLM efforts will revolve around multimodal, vision and speech capabilities. The company said it is testing NVIDIA TensorRT-LLM.

Constellation Research analyst Holger Mueller said:

"This is a good move by Zoho and it's not surprising. There is no alternative to Nvidia when it comes to on-premises AI. It's a great validation of the Nvidia stack as Zoho tried alternate solutions, and does not shy away from stating that Nvidia is more efficient. The question is whether an Nvidia stack can deliver at an attractive SMB price point."


HOT TAKE: Epicor's pickup of Acadia extends its "last mile" go-to-market optimization

Epicor has announced it has acquired Acadia Software, which provides connected worker solutions for manufacturing and other supply chain industries. Terms were not disclosed, but Epicor noted in a statement that the new technology will augment Epicor's ability to equip frontline workers with the knowledge, tools and intelligent task management needed to promote a safer and more optimized work environment.

“Frontline workers need the digital tools and knowledge necessary to perform their roles efficiently and safely,” Epicor CEO Steve Murphy said in a statement. “The acquisition of Acadia furthers Epicor’s commitment to helping businesses across the make, move, and sell industries move beyond simply telling workers what to do, but showing them how to do it effectively to drive stronger productivity and efficiency.”

On the surface this seems like just another workflow addition to Epicor's portfolio. But its specific design for frontline workers is unique, and it can add value for manufacturing and related industries as they look to hire more effectively (read: save costs by ramping less experienced workers at lower wages, and retain employees to reduce turnover costs) while also improving the overall quality of product and customer experience. With more engaged and knowledgeable workers, products are manufactured with fewer errors and delivered on time. That drives cost efficiencies as well as higher customer satisfaction, which opens up expansion and cross-sell/upsell growth opportunities - in short, improving both top- and bottom-line metrics.

Epicor also laid out a list of benefits in its acquisition announcement: 

  • Real-Time, Actionable Insights: Acadia’s platform is designed to integrate easily with existing enterprise systems, allowing businesses to dynamically combine workforce performance data with other operational metrics.
  • Skills Management and Development: Acadia provides tools that help workers quickly adopt new processes, software and equipment, fostering employee growth, skills development and career progression.
  • Driving Continuous Improvement: Aligned with Epicor’s focus on helping businesses optimize operations and achieve sustainable growth, Acadia enables workers to identify inefficiencies, suggest improvements and execute tasks according to best practices.

Epicor users with large shop-floor employee bases should evaluate Acadia's functionality once Epicor announces formal pricing for the integrated feature set. The functionality should be part of an overarching workforce transformation strategy that includes packaged and other AI solutions to deliver the right knowledge, task management and more to the right frontline worker at the right time, and to provide telemetry for continuous improvement of the "who, what, when, and where" around the workforce.


IBM Q3 mixed, AI bookings surge, infrastructure sales take hit

IBM's third quarter was mixed: sales fell short of estimates, earnings were better than expected and the company said its generative AI bookings reached $3 billion.

The company reported third quarter earnings of $2.30 a share on revenue of $15 billion, up 1% from a year ago. Wall Street was expecting IBM to report third quarter non-GAAP earnings of $2.23 a share on revenue of $15.08 billion.

CEO Arvind Krishna said the company was set up well in software with revenue growth consistent with the third quarter. Krishna said the company saw "a reacceleration in Red Hat" and good traction with its models, which deliver good price for performance.

Constellation Research analyst Holger Mueller said IBM's results were hampered by pension costs, but is showing traction in software. Mueller said:

"IBM is becoming more of a software company with 45% of revenue coming from applications. If the trend continues, we will see IBM passing the 50% milestone next year. Good things happen when a former product developer is made CEO, and knows how to leverage IBM Research."

By the numbers:

  • IBM's software revenue was $6.5 billion, up 9.7% from a year ago. Data and AI revenue was up 5%, Red Hat growth was 14% and automation was up 13%.
  • Consulting revenue in the quarter was $5.2 billion, down 0.5%.
  • Infrastructure revenue in the third quarter fell 7% from a year ago to $3 billion. IBM Z revenue was down 19%.


Streamlining Proof of Delivery with Robotic Process Automation | Ring Container Technologies

SuperNova Finalist Jaime Zepeda of Ring Container Technologies discusses how the company used Infor's robotic process automation (RPA) solution to streamline their proof of delivery process.

Ring Container Technologies, a leading manufacturer of packaging solutions, was facing challenges with managing signed bill of lading documents across their 20 global shipping sites. The documents were being stored in various formats, making it difficult to quickly retrieve and provide to customers when needed. Zepeda explains how Ring leveraged Infor's RPA capabilities to automate the capture and linkage of these signed documents directly to the corresponding transactions in their Infor ERP system. This allowed them to standardize the process across sites and save their employees valuable time previously spent searching for these documents.
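The capture-and-linkage step Zepeda describes can be illustrated with a minimal sketch. To be clear, this is hypothetical: Infor's RPA product has its own tooling, and the filename convention, the `index_signed_bols` helper and the caller-supplied ERP hook below are all assumptions made for illustration, not Ring's actual implementation.

```python
import re
from pathlib import Path

# Hypothetical filename convention: BOL_<transaction_id>_<site>.pdf
BOL_PATTERN = re.compile(r"BOL_(?P<txn_id>\d+)_(?P<site>\w+)\.pdf$")

def index_signed_bols(scan_dir: str) -> dict[str, Path]:
    """Map ERP transaction IDs to the signed bill-of-lading files found on disk."""
    index = {}
    for pdf in Path(scan_dir).glob("*.pdf"):
        match = BOL_PATTERN.match(pdf.name)
        if match:  # skip files that don't follow the naming convention
            index[match.group("txn_id")] = pdf
    return index

def link_to_erp(index: dict[str, Path], erp_attach) -> int:
    """Attach each document to its transaction via a caller-supplied ERP hook.

    In a real bot, erp_attach would wrap an ERP API call; here it is injected
    so the linkage logic stays testable and system-agnostic.
    """
    linked = 0
    for txn_id, pdf in index.items():
        erp_attach(txn_id, pdf)
        linked += 1
    return linked
```

The point of the sketch is the standardization Zepeda highlights: once every site drops signed documents into a shared convention, one bot can index and link them for all 20 locations instead of each site keeping its own ad hoc filing scheme.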

The interview also touches on additional use cases Ring is exploring for RPA, such as automating the payables process and integrating data from external systems into its ERP.

Learn how a leading manufacturing company uses innovative technologies like RPA to drive efficiency and better serve their customers.

On Insights (video interview): https://www.youtube.com/embed/qHpKtgCBY7s