Generative AI is the topic du jour on earnings conference calls and in technology press releases, but enterprise customers are wary of data security, compliance and hype. There's a generative AI rocket ship ahead, but the timing of liftoff is debatable.

Speaking at Domino Data Lab's Rev 4 conference in New York City, Jan Zirnstein, Director of Data Science at Honeywell Connected Enterprise, said the company has been looking at generative AI use cases but questions remain.

"Generative AI has tipped the public perception of what AI is, but tipped it a little too far," said Zirnstein. "There's nothing in the actual training model and architecture that's tied to truth and factual correctness. We're looking at use cases tied to where factualness isn't imperative like saving time on the creative side. There are also use cases on the summarization side."

Zirnstein said generative AI can speed up software development, but there's also a chance that the technology can simply scale poor code.

Neil Constable, head of quantitative research and investments at Fidelity, said at Rev 4 that there are multiple data safety issues to consider with generative AI. "If you use ChatGPT and think what you put in won't show up in some future version, you're sadly mistaken," said Constable. Nevertheless, Constable said enterprises should explore generative AI, but "a lot of work should go into looking at what you should and shouldn't do."

He said it's worth bringing in smaller models and learning how to fine-tune them. "There's a lot of proprietary data I'd like to throw into it," said Constable. "When trained properly there's the ability to use generative AI across the organization but only internally. The data security issue is no joke."

These concerns were echoed by CEOs speaking on earnings conference calls in recent weeks. The big issue is transparency into how large language models and transformer architectures work.

Those security concerns are why vendors like Salesforce are pushing a trust layer. "Large customers must maintain data compliance as a critical part of their governance, while using generative AI and LLMs. This is not true in the consumer environment, but it is true for our customers, our enterprise customers who demand the highest levels of this capability," said Salesforce CEO Marc Benioff on the company's earnings conference call.

He added:

"Where customers who for years have used relational databases as the secure mechanism of their trusted data, they already have that high level of security to the row and cell level. We all understand that. And that is why we have built our GPT trust layer into Einstein GPT. The GPT trust layer gives connected LLM secure real time access to data without the need to move all of your data into the LLM itself."

Beyond Nvidia, however, no tech vendor has meaningfully raised guidance based on generative AI demand. Yes, hyperscale cloud providers are ramping up generative AI infrastructure, but the other layers in the tech stack aren't benefiting just yet.

C3 AI CEO Tom Siebel said there are inbound calls about AI. He said:

"I do not believe that it's an overstatement to say that there is no technology leader, no business leader and no government leader, who is not thinking about AI daily. AI chipmakers like NVIDIA are accelerating production to try to keep up with the very real demand that's out there. And all of this is being accelerated by the advent of generative AI.

The interest in AI and in applying AI to business and government processes has never been greater. Business inquiries are increasing, the opportunity pipeline is growing, demand is increasing."

But Siebel also noted that enterprise customers' interest won't translate into revenue right away. "In terms of applying AI to enterprise we're in the first half of the first inning. This is an embryonic market," he said. "We're going to see where this goes in the next few years."
