Results

AI150 Interview: Inizio Medical's Matt Lewis on AI, humans' ability to change, life sciences


Larry Dignan, Editor in Chief of Constellation Insights, sits down with Matt Lewis, Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical, to discuss the intersection of life sciences, mental health and AI. Lewis highlights the challenges of AI adoption, emphasizing trust, locus of control and self-efficacy as critical human factors.

He noted that up to 80% of AI adoption success depends on these psychological considerations. Lewis also discusses the regulatory hurdles in life sciences and the potential of AI to transform industries, notably through early disease detection and mental health support. Despite concerns, he expressed optimism about AI's future benefits, particularly in improving health outcomes.

Full Video Transcript: (Disclaimer: this transcript has been lightly edited for clarity and may still contain errors)

Hi, I'm Larry Dignan from Constellation Insights, and we're here with Matt Lewis. He's part of Constellation Research's AI 150. Hi Matt. Thanks for joining us.

Hey, Larry, thanks so much for having me. It's a great honor to be here.

So you're playing at the intersection of life sciences, mental health and AI. I guess you want to explain your role and kind of how you're viewing things?

Yeah, sure. It's a really interesting time. Generally, it's an interesting time to be alive and to be in this space. I'm currently serving as Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical, and I'm also the executive sponsor of Inizio Medical's mental health Employee Resource Group. Medical is a 1,000-person division of folks that put our medical affairs proposition forward, and the mental health ERG is all the folks who are challenged with mental health and well-being issues, either themselves or, you know, their children or their parents as caregivers, and the rest. So my world is kind of a mix: how do we leverage emerging technologies like generative artificial intelligence, machine learning, deep learning, NLP and the rest to speed time to commercialization, but also, how do we support the mental health and well-being needs of our colleagues and counterparts and the people with whom we work, so that they can live good lives and enjoy what they're doing?

So how do you see generative AI contributing to mental health? There are some looming things — in the therapist profession, a lot of folks are aging out. There's cost, there's comfort, there's a bunch of things. How do you see generative AI playing in that space?

Yeah, it is one of these things where I think a lot of people recognize there's a technological imperative to adopt and put AI into their world. I went to a block party recently, and people at the pool were asking, how do I use ChatGPT to make my life better? So it's a real thing that actual people ask about. But in the boardroom and in the corporate corridors, people ask the same question, and they don't really know how to make sense of it. In one of the early meetings we had when I first stepped into the role of Chief AI Officer — and I'd been the Chief Data and Analytics Officer for years before that — one of the first questions someone asked me was, are robots going to take my job? That's literally what they asked. Underlying that question is fear and anxiety, which are really mental health concerns. It's not really a question of whether they know the technology or understand its merits, benefits and features, but whether they feel safe and secure that their organization will support them in their learning journey as technology is adopted across the enterprise. And a lot of people who work in these environments really don't give proper consideration to the psychological, cognitive or affective concerns of knowledge workers in an environment that is rapidly changing.

They almost think, oh, well, they're just talking. They're just saying things like "robots are going to take my job," but they don't really give it proper consideration. As it turns out, the research literature on the adoption of artificial intelligence suggests that up to 80% of successful adoption in the enterprise comes down to what are called human factors, and most of the human factors literature is about the psychological considerations of adoption. What we think about in that space is really how someone shows up to their role, and their role as a counterpart to AI. If they don't show up in a way that is intentional, if they don't show up in a way that is welcoming, if they don't try to collaborate with AI, lots of bad things happen — for themselves as individuals and professionals, and for the companies with which they work. So at the core of it, it really is a human factor, a mental consideration, that determines whether AI adoption is successful or not. But a lot of people just pooh-pooh it away and say, oh, they're just talking nonsense.

What are some of those human factors? If you were to rank them — I know there's change management, there's culture, there's a bunch of things — but I guess...

What would you rank as the top three human factors that people need to think about?

Yeah, I think the first one that comes to mind — it's talked about a lot, but it has a number of subcomponents — is definitely trust. And it's not necessarily just related to artificial intelligence, but to any emerging technology; honestly, any decision that we as humans make ultimately has trust at the core. We're not going to hire a plumber to come fix the toilet in our home if we don't trust that they're going to do a good job and not put our family or ourselves at risk while they're in our home. It doesn't matter if it's a basic thing like that, or whether we're going to use a service like ChatGPT that takes our data and potentially sends it off to the cloud and back to OpenAI in California. Fundamentally, if humans don't trust the service or the professional that's providing them value, they won't work with them. And there are so many aspects of how trust is moderated or mediated in a relationship, professionally or personally, that if you don't satisfy it, nothing really works.

Another big consideration in the human factors environment is what's called locus of control, which is how individuals perceive themselves within the broader network in which they work or live.

People who tend to consider themselves part of a broader system, network or community end up working better with generative AI than those who consider themselves to be a lone wolf, really calling all the shots. There's also very interesting research suggesting that if you remind people of their spiritual and religious commitments immediately before prompting a generative model, they're better at working with generative AI than if they just go in blind. The reason is that if humans are reminded that they're not alone in the world, and that there's a connection — either to nature, or to the earth, or to a deity — they're able to connect with the generative counterpart and partner directly. Whereas if they approach generative AI without that prompt, no pun intended, they do so from a more hostile perspective and don't try to collaborate. And the outputs suck. They're much worse than if they come at it from a position of vulnerability, if you will. So locus of control is probably the second biggest thing.

The third biggest thing is what might be called, in the psychological and adult learning literature — which is where I did most of my doctoral work — self-efficacy: the intention to act, or the confidence someone has in their ability to actually perform a task. A lot of people, even very highly technical people — people with medical degrees, PhDs, PharmDs — don't rate themselves highly here.

These are the people with whom I work regularly, both at Inizio and across the life sciences ecosystem. They don't rate their own skills with deep tech very highly — with emerging tech, with generative artificial intelligence, with blockchain, VR, anything like that. So when they show up in those environments and are asked to take on the role of thought partner to AI, they don't do well — not because they're not capable, but largely because they're not confident. They are competent, but they're not confident, and as a result their work suffers because they don't believe in themselves. And this shows up not just in highly technical people, but in our athletes and our Olympians. How many times did you watch the Olympics recently and hear Simone Biles talking about the twisties? The twisties are a self-efficacy thing writ large. People don't believe in what they can do, even though they're super competent. They just don't show up when it matters.

So does this mean — I'm just thinking aloud about how this affects the future of work.

So are the people who thrive and adapt going to have to have those three qualities — have that trust level and be willing to collaborate? And if so, what does that mean for the workforce, especially folks who are kind of the lone wolf and want control?

Yeah, I think there are a lot of things all happening at the same time, and it's hard to pull it apart and piece it back together. Part of it is that the world in which we are working now is not really set up for the type of transformative change that's happening as a result of generative AI. We're working in 2024, but generative AI is making possible a 2031 type of work. There are certain structures, processes and systems that would make 2031 work possible; we just happen not to be living in that environment yet. For example, with any content that originates digitally, it would be preferable to know its origin — to know, for example, when people view this interaction between you and me, that we're taping it live right now on Friday, September 6, at 11:26 a.m. Eastern, with its origin recorded directly at that point. Most people can't do that; when they watch it, they'll only see the asset itself. In 2031, almost guaranteed, every piece of media that emerges into the world will have some type of watermark or transparency stamp on it, either from C2PA on the media side or the equivalent in healthcare and life sciences.

And you can derive what's called provenance, and be able to say, okay, the origination of this was actually a prompt someone made to a generative platform, and then a human edited it or annotated it or labeled it or curated it, and then later it went back through a generative platform and emerged into the ecosystem as the final deliverable. You can see the whole chain — the logic chain, if you will.
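The prompt-edit-regenerate lifecycle described here can be pictured as a hash-linked list of provenance records. The Python sketch below is a minimal illustration of that idea only; it is not the actual C2PA format (real C2PA manifests are cryptographically signed structures embedded in the media file), and all field names here are invented for the example.

```python
import hashlib
import json

def record_step(chain, action, actor):
    """Append a provenance step, linking it to the hash of the prior step.

    Illustrative only: real C2PA manifests are signed binary structures,
    not this simplified JSON chain.
    """
    prev_hash = chain[-1]["hash"] if chain else None
    entry = {"action": action, "actor": actor, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

# The lifecycle described above: prompt -> human edit -> regeneration -> publish
chain = []
for action, actor in [
    ("generated_from_prompt", "gen-ai-platform"),
    ("edited", "human-annotator"),
    ("regenerated", "gen-ai-platform"),
    ("published", "publisher"),
]:
    record_step(chain, action, actor)

# Verify the chain: each step must reference the hash of the one before it
assert all(chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
```

Because each record hashes the previous record's hash, tampering with any earlier step invalidates every later link, which is what makes the "logic chain" auditable end to end.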

But since people don't have that — all they have is what existed in 2014 or 2004 — they tend to approach it with a healthy degree of skepticism, and they don't trust it, understandably. Our systems and our processes haven't caught up with our technology yet. They will eventually, and when they do, the trust level will increase dramatically. And when that happens, you won't really have to ask people to do things they're not ready for. For those of us who are deep in on AI and in this world, we know that world is coming. It just hasn't appeared yet.

For the other two areas, locus of control and self-efficacy, I think you're going to start seeing true platforms — I don't mean consumer platforms like ChatGPT and Gemini, but true AI platforms being built right now, mostly in the startup ecosystem — that focus on things like: how do we encourage people, in the real world and in our companies, to show up as their best selves, in ways that let them contribute fully in the workforce, amplify the value of the work they're doing, and hopefully enjoy the work more? Because it's not fun to do work where you're not able to contribute the things that make you happy, where the things you really enjoy doing are confined to your personal time. We've all been there.

On the self-efficacy side, I think a lot of people feel they can't do what's required of them because they haven't been trained, or their company hasn't given them the skills necessary to contribute in an equitable manner, when that's actually not necessary. I've been in artificial intelligence for 15 years, and there was a time, back in the late 2000s and early 2010s, when you really needed a full team of 20, 30, 50, 100 PhDs in machine learning to build a single model and keep it running over a sustained period. It's just not like that now. If you want to see value from generative AI, all you really need to do is identify a pressing problem that exists in your life or at work, find a platform or application that can solve it, and then use it long enough to either get really frustrated that it's not working well, or find something else that does work, and then figure out the guardrails for how to progress it forward. That's really all it takes.

If you don't have that type of experience, you really can't contribute in a world transformed by generative AI, and I think that's the gap between having the confidence to play in this space and not having it. I've seen it firsthand in my teams, and I've seen it in client environments. It's really about building experience and growing with the technology, as opposed to running away from it, like that first person who asked about the robots a couple of years ago. I think we're going to see a lot more of that in the days to come.

So in terms of enterprises, a lot of vendors talk about the trust of generative AI. Usually they're talking about corporate data, or keeping private data secure. But really, the whole trust thing needs to be solved before any of this other stuff gets going, correct?

Yeah, for sure. I think there are a number of aspects to trust that people either recognize but don't spend the time to really fix, or that they recognize are important but assume will be someone else's problem down the road — issues their successors will inherit later, not problems the current leadership will have to deal with, problems that will happen three to five years from now. But it's just not true.

These are problems of today. For example, when I do keynotes and speak at conferences, I'll use examples of some of the activist boards that are claiming existing companies are not actively including generative AI in their plans and current marketing activities. When you look at companies like Disney, or the large blue-chip companies that have not been proactive in adopting generative artificial intelligence, the activists argue these types of organizations could potentially transform the entire way they communicate with their customers — turning what is essentially a very anachronistic model into more of an engagement model, using generative artificial intelligence. And really what's at issue is not so much the business model, but how the leadership considers what their business is, and how customers trust that organization for the value they accrue — whether they come to Disney, for example, just for a theme park, or for a streaming platform, or for a broader experience that leverages the insights of all the activities that undergird the whole corpus that Disney supports.

Getting to that latter consideration requires a real shift in how current leadership thinks about what the business really exists to do, and how it communicates with all its stakeholders. The failure to do that in the near term is encouraging a number of startups in the generative AI ecosystem to try to solve that same problem — generative AI experiences using content for family audiences, if you will — on a shoestring budget, and to pull those eyeballs away from legacy enterprises. So it really is an actual issue today, not an issue that will exist three or five years from now. And it is fixable and solvable, but not necessarily by just throwing more software across the enterprise. It's fixable by taking a hard look at who the organization is and who it wants to be, and how it can speak vulnerably about where it can create value in the ecosystem, given what it's already done historically. I think any group that isn't willing to do that is going to face external threats from organizations and entrepreneurs that are willing to do it themselves.

So in terms of life sciences — we've talked a lot about the various challenges with trust and the psychology of generative AI and working with it. Is life sciences a harder nut to crack, or is it about on par with other industries?

Yeah, it isn't harder in the sense that it's not possible. I think it's harder for the same reasons all the regulated industries are challenging: ultimately, when you're interacting with consumers, you first have to pass through the regulatory considerations of, at least in this country, the Food and Drug Administration, and its counterparts in Europe, Asia and other markets. When you're actually talking about getting a drug or a device or a digital therapeutic to someone with a health condition, you can't directly change what you're doing without first getting the say-so of a regulatory body. That's different than if you change the flavor of Coke and want to put it on shelves. It's a lot easier to do that than to adjust a drug that your mother is taking — just much harder.

That's not to say that a lot of the so-called back-office or operational aspects of communicating and commercializing novel science aren't already being transformed by generative AI — they are. Historically, AI has always had a very strong foundation in life sciences and in healthcare, especially in areas like research and development and post-launch marketing, and a number of other areas that are not as close to the regulatory schema. Because those areas aren't held to the same considerations, there's a lot that is either full of friction — that just doesn't work the way it should — or where there are places to make things more efficient or effective, or to ensure that people doing the work find it engaging, so they stay with it long enough to bring a novel intervention to market. So there are a tremendous number of use cases. On the Inizio side, we've partnered with McKinsey.

I've worked with other consultants as well. There are hundreds of use cases within life sciences and medical intervention from a generative AI perspective. The challenge is not finding things to do with generative AI — we could be here all day thinking of things that could be done. The challenge is aligning those to the priorities of your specific business, from a resource, time, people and financial perspective, and finding the people, internally and externally, who are committed to seeing it through and who, from a human factors standpoint, actually want to do it and want to see its outcomes benefit the organization. Because if they don't want to do it, and you just build the solution or the platform and throw it over the fence, they will actively resist it, and it won't benefit the organization. You'll get the results people always talk about — that 90% of AI projects fail. That is a true statement, but probably 90% of those failures are human-factor driven. It's not the software or the model — the models are great now; they weren't always — it's largely that the people adopting them have no interest in seeing them work, and they do everything possible to sabotage them once they're actually in their world.

It's almost comical. Generative AI is this new whiz-bang technology, and the models are really cool and all that, but at the end of the day, like any IT project, it totally depends on the human factors and whether people are into it or not. Whether it's data analytics, ERP — pick your acronym — if the troops resist it, it's not going to work.

Yeah. There used to be this acronym — you and I are probably old enough to remember it.

I don't know if everyone viewing this will remember it. I've had a beard for 26 years, but it didn't always have gray in it, and I used to have all the hair on my head. Back in the late '90s and early 2000s, there used to be this acronym in the space called PEBCAK: the problem exists between the chair and the keyboard. You'd get all these issues — people couldn't figure out how to use email, they couldn't run filters or tag messages — and there was no problem with Lotus Notes or with Outlook. The problem was the person using the software; it was almost always the person. But you couldn't tell the Senior Vice President "the issue was you," so they used this PEBCAK acronym for what would now be called human factors. That's really where human factors research came from. It is the person that's the problem, but rather than blame the person, you need to think about their motivation, their mental health, their psychology, their interests, their training — you mentioned strategic enablement earlier. People have a lot of experience and expertise when we ask them to do things. And in life sciences especially, and in a lot of the economy, it's a very difficult time. We've just come out of the pandemic, which was a lot of difficulty for a lot of people, and a lot of organizations are restructuring in a difficult economic climate. Those two changes alone are more than a lot of people can handle from a change perspective.

And then you're throwing generative AI on top of them and saying, hey, the whole way you've done knowledge work your entire career is shifting — from you using software to the software talking to you and telling you what you should do. And a lot of people are like, I can't handle that. So it's a realistic, understandable situation. But rather than cast blame on the people involved, the human factors community is really trying to make sense of why it is this way, and to stand up solutions that make it better. Because the generative AI wave is not slowing down; it's just going to continue to wash across our shores. If we can help people figure out whether they need to grab a surfboard and ride the wave, or run for the hills and away from it, fine. But just telling them they shouldn't stand there and get hit by the wave is not helpful, right?

All right. Is there anything I didn't ask that I should have? Or any final points you want to make?

I'll just say that there is a lot of concern these days about generative AI, and I think it's definitely appropriate for people to be asking good questions and to be thoughtful and considerate about what is at risk, and what the potential dangers and concerns are in the space. But I'll also say there's a tremendous opportunity for good as a result of generative AI. I've honestly never been more excited about our collective future as a result of any technology in the 27 years I've been working in life sciences. There are more and more true examples of what generative AI can do to help identify diseases early and to help people who are suffering improve their actual health today. Even if all AI research stopped today, and we only had access to the models that exist today and nothing ever improved — which won't happen — we could do so much good for humanity with just what's been discovered in the last two years that it would be a major benefit for society. But that won't happen. What will happen, more likely, is that over the next two, three, five years we'll see so much benefit for society — hopefully for human health, for mental health, and for all of us as people — that the balance of the risks and the benefits will even itself out, I think, and hopefully we'll start seeing why some of us are so passionate about the space.

All right. Thanks for joining us.

Thank you so much.

On Insights <iframe width="560" height="315" src="https://www.youtube.com/embed/jITo-_vS3iY?si=GDYYhD45Vbh5C-Br" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

AI 150's Matt Lewis on GenAI adoption, psychology and life sciences


Inizio Medical's Matt Lewis, Global Chief Artificial and Augmented Intelligence Officer, has been named to Constellation Research's AI 150 list and sits at the intersection of AI, life sciences, human change management and mental health.

I caught up with Lewis to talk about AI's role in mental health, human factors in AI adoption, life sciences and how trust is key for workers to collaborate with AI. The big challenge with AI adoption is that generative AI has made it possible to do 2031 work, but the infrastructure, people and processes aren't in place yet.

Here are a few of the highlights from our wide-ranging chat.

Fear, anxiety and AI. Enterprises realize that it is imperative to adopt generative AI and AI broadly, but there is a lot of fear and anxiety around adoption. "One of the first questions asked is 'will robots take my job?'" said Lewis. "That question is about anxiety, a mental health concern, and whether they feel safe and secure in their organization. (Enterprises) really don't give proper consideration to the kind of psychological or cognitive or affective concerns of knowledge workers in an environment that is rapidly changing."

Human factors in AI adoption. Psychology is just one human factor that will determine whether AI is successful or not. Trust in AI systems is critical, as people are reluctant to engage if they don’t feel confident in the technology. Cultural factors, change management, and transparency also play crucial roles in how AI is accepted within organizations. "Any decision that we as humans make ultimately has trust at the core," said Lewis, who noted control is another big issue.

He added:

"It's like our systems and our processes haven't caught up with our technology yet, but they will eventually, and when they do, the trust level will increase dramatically. For those of us that are deep in AI we know that world is coming. It just hasn't appeared yet."

Challenges with AI in life sciences. Lewis said the life sciences sector faces unique challenges, but AI isn't necessarily harder to integrate relative to other industries. He said:

"There are hundreds of use cases within life sciences and medical intervention from a generative AI perspective. The challenge is not finding things to do with genAI as much as it is aligning to the priorities of your specific business, both from a resource, time and people and financial perspective, as people committed to seeing it through."

Optimism about AI's impact on society. Lewis was optimistic about AI's potential to improve mental health and societal well-being. While challenges exist, Lewis believes that generative AI will contribute significantly to both the present and future of healthcare and human services. He said:

"Even if all the AI research stopped today and we only had access to the models that existed today we could do so much good for humanity with just what's been discovered in the last two years. The next two to three to five years will see so much benefit for society for human health and mental health. Yes, there are risks, but the benefits will be there too."


How Google Cloud is monetizing AI


Google Cloud CEO Thomas Kurian said the company is increasingly monetizing AI based on consumption models, volumes of data and selling agents to line of business executives over IT.

Speaking at Goldman Sachs Communacopia and Technology conference, Kurian touched on a big theme in software and cloud services: AI monetization. Many technology companies are pondering consumption models for AI agents, say $2 per customer service inquiry resolved. Other tech vendors are pondering value-based models.
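To make the pricing distinction concrete, the toy Python sketch below computes a bill under two metering schemes mentioned in this article: per unit of agent work and per volume of data processed. The $2-per-resolved-inquiry figure comes from the example above; the per-GB rate and usage numbers are invented purely for illustration and are not Google Cloud's actual prices.

```python
def consumption_bill(resolved_inquiries: int, rate_per_resolution: float = 2.00) -> float:
    """Consumption model: the bill scales with units of work the AI agent
    completes (e.g., resolved customer-service inquiries)."""
    return resolved_inquiries * rate_per_resolution

def data_volume_bill(gb_processed: float, rate_per_gb: float = 0.10) -> float:
    """Data-volume model: the bill scales with data ingested/processed.
    The $0.10/GB rate is hypothetical, not a real published price."""
    return gb_processed * rate_per_gb

# Hypothetical month: 1,500 resolved inquiries plus 800 GB of data processed
monthly = consumption_bill(1_500) + data_volume_bill(800.0)
print(f"${monthly:,.2f}")  # prints "$3,080.00"
```

The contrast with value-based models is that consumption pricing meters inputs and activity, while value-based pricing would instead peg the charge to a business outcome (e.g., a share of cost savings), which is harder to measure but aligns vendor and customer incentives.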

As AI has been integrated throughout the Google Cloud portfolio, Kurian said the company has altered its model. Historically, cloud projects were controlled by IT, but AI adoption is driven by business initiatives.

"We monetize it in different ways, and we're focused on three important things: winning new customers, winning more projects within the customer, upselling new products," said Kurian, who added that the company is focused on industries and line of business. "Many of these solutions are not bought in the IT organization. They are bought by the Head of Customer Service, the Head of Commerce, the Head of Finance, and so you have to learn to sell outside of the IT organization."

Here are some of the monetization comments from Kurian.

Security. Google Cloud is monetizing volumes of data used to detect and respond to threats. "We're seeing growth because we have helped people speed up how quickly they can use our tools to detect and respond to threats. We've seen a 4x increase in customer adoption, 3 times the volume of data ingested, an 8 times increase in threat hunting. We monetize this based on the volume of data we're processing and the number of threat hunts or queries that are happening on the system," said Kurian.

Democratizing analytics. Data agents have become popular as a way to "migrate data, stage it, aggregate it, visualize it, it even builds you charts and spreadsheets," said Kurian. As a result, BigQuery has seen 80% more machine learning operations in the last six months and 13x growth in multimodal data due to Gemini models. "Because we've opened up analysis from being the domain of people who know SQL, Python, etc. It also drives a lot more end user subscription growth because we can sell more seats in an organization," said Kurian. "We see growth in our analytical platform BigQuery."

Google Workspace. Kurian said Google Cloud is introducing new applications for customer experience and customer service. "Think of it as you can go on the web, on a mobile app, you can call a call center or be at a retail point of sale, and you can have a digital agent, help you assist you in searching for information, finding answers to questions using either chat or voice calls," said Kurian.

The bet with Google Workspace is that it will stand out because it can handle web, mobile, point-of-sale and call center in one system with multimodal data. These call center use cases are monetized based on value, said Kurian. He said:

"We monetize based on the value we save for users, either the costs we're displacing or the reach expansion we're giving their agents. We've seen growth across all the dimensions, the adoption of agents, digital agents, the volume of traffic going to these agents, etc. Examples of customers using our customer experience platform. If you call Verizon, you're talking to our chat system and our call system, 60% containment rate, high rate of call deflection.

If you drive a General Motors vehicle and you hit OnStar, you're talking to our conversational AI system."

Partner led deals. Google Cloud's recent Accenture expansion is an example of how the company is leveraging integrators. "We're not a big consulting shop. We're not a services organization. So, we work with a broad partner ecosystem. Because we don't conflict with the partner ecosystem, we've invested in them. We've invested in technology, commercial incentives, training and certifications, as well as go-to-market incentives," said Kurian.

AI Agents. AI agents will be specialized with different models. "We have people building insurance agents, research analysis agents, customer service agents of their own," said Kurian. "We've also provided packaged agents, a data agent, a cybersecurity agent, a collaboration agent for helping you write things and increasingly, we are specializing in them by industry. So, for example, an insurance agent is different than a nursing agent. All three monetized in different ways."

Other items from Kurian on Google Cloud include:

  • Google Cloud now offers close to 1 gigawatt of water cooling in its data centers.
  • Google Cloud has seen 10x growth from a year ago in AI training workloads.
  • Ford Motor is using Google Cloud's deep learning services to build simulations for virtual wind tunnels to replace computational fluid dynamics.
  • 45% of the Fortune 500 projects with Google Cloud and Accenture have gone live.

Oracle CTO Ellison talks AWS partnership with Garman, the need for autonomous security

Oracle CTO Larry Ellison hit the stage with AWS CEO Matt Garman to talk about their multicloud partnership and optimization efforts. Ellison also talked about Oracle's autonomous security efforts to prevent ransomware, identity theft and other attacks.

Speaking during his Oracle CloudWorld keynote, Ellison followed up the AWS announcement with a few more details and a lot of strategy for Oracle. His main argument is that seamless multicloud technology is happening now.

"We lost the idea that customers could buy technology from many different companies, and those technologies work gracefully together and we're entering a new phase where services on different clouds work gracefully together," said Ellison. "The clouds are becoming open. They're no longer walled gardens, but customers will have choices and can use multiple clouds together."

Garman said Oracle and AWS have multiple joint customers. He said:

"We'll take an example of somebody like a Best Buy, who runs many of their systems inside of AWS. Best Buy has recommendations for their customers and lots of interesting e-commerce applications, and they run their core database workloads on Oracle. This is a great solution for them."

Garman said that the integration between AWS and Oracle will result in a native experience to the point where Oracle database backs up to AWS S3. The prices through Oracle Cloud and AWS are the same for databases.

Ellison added:

"We think this (Oracle and AWS) dramatically expands the market. It's what customers have asked for a very long time. My friend Jamie Dimon has got huge commitments at JPMorgan Chase. Every time he saw me, he asked when we were going to have AWS. Now I finally have an answer."

State Street CTO Andy Zitney said the deal will be a big gain for the company, a big Oracle Exadata and AWS customer. "We were starting down the journey of integrating the clouds, and this comes right at the perfect time to expedite that and make it easier for us," said Zitney. "It will help us accelerate our digital transformation."

AWS and Oracle's first region will be in Virginia, but Zitney noted that State Street needs global coverage and can prioritize locations. Zitney said State Street's current footprint with AWS is 43, but the plan is to scale that down. "We're thinking the number will be 30 to 40 in the end," said Zitney. "With this I won't need on-prem as much as I used to."

Ellison added that Oracle's multicloud strategy doesn't end with AWS, Microsoft Azure and Google Cloud. He noted that Oracle embeds its database services in Fujitsu as well as NTT.

"The interesting thing about this multicloud world is whichever is your primary cloud you can reach out to other clouds, infrastructure, and mix and match the applications and the services you want," said Ellison.

Cyber defense robots

Ellison's other big topic was cybersecurity.

"Oracle is using AI to build cyber defense robots and autonomous systems to defend against identity theft," said Ellison. "We are using AI to prevent that from happening. We can use AI to dramatically improve cloud security. The cyberwars are getting worse not better."

Ellison said the CIA was among Oracle's first customers and data security is critical. "We look at four pillars of security at Oracle. One is data security. Make sure under no circumstance can people steal, look at your data, steal your data, or lock you out and away from your own data. We have to have absolute data security," he said.

Ellison also added that Oracle's other pillars for security are application and network security.

The pitch from Ellison is that cybersecurity systems need to be autonomous. "What we need to do is a set of cyber robots, defense robots, to stop these attacks," he said. "We can do a better job of protecting our data if the database system that is managing that data is fully autonomous. And this is something that Oracle has been working on for a very long time."

Like most cybersecurity experts, Ellison said human error is largely to blame for many attacks. Autonomous systems are simply safer.

As a result, Oracle is moving all of its applications to Autonomous Database by 2025. "No human labor, no human error. It's really interesting that it is the most economical way to do things and the safest way to do things," said Ellison. "We're moving not to save money, but to secure data better."

Other security takeaways from Ellison:

  • Biometric security needed. Oracle won't have any passwords in the future because biometrics will replace them. Ellison added that biometric databases have multiple use cases, especially credit cards. Passport control, secure school entry and prescription pickups are also good use cases for biometric security.
  • Autonomous code generation will also be more secure. "When the application generator generates the code, we don't generate security vulnerabilities. A computer program is writing the code; it will not make that mistake," he said.
  • Zero Trust Packet Routing (ZPR) is another pillar. "The solution to the problem is you really have to separate network security from network configuration," said Ellison, who noted that ZPR is being rolled out in Oracle Cloud. 

"Let's build an all-new system that's responsible for network security, and that all-new system will authorize certain paths through the network for certain users to use certain services, look at certain data, and only authorized paths are allowed. No other paths will be allowed. It's a brand new network security system that is separate from network configuration, rather than blended with network configuration."

Robots are designed to inspect every packet every second, he added. 
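
Conceptually, the ZPR model Ellison describes is a default-deny allowlist of user-to-service paths. A toy sketch of that idea (not Oracle's implementation; all names here are made up):

```python
# Toy default-deny path authorization in the spirit of Zero Trust Packet Routing.
# Policy entries are (user, service) pairs; any path not listed is dropped.
ALLOWED_PATHS = {
    ("analyst", "reporting-db"),
    ("app-server", "orders-db"),
}

def authorize(user: str, service: str) -> bool:
    """Allow traffic only on an explicitly authorized user-to-service path."""
    return (user, service) in ALLOWED_PATHS

print(authorize("analyst", "reporting-db"))  # True
print(authorize("analyst", "orders-db"))     # False: path never authorized
```

The key property is that the policy lives apart from network configuration: rewiring the network cannot accidentally open a path that was never authorized.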

Microsoft claims hybrid quantum breakthrough with Quantinuum, partners with Atom Computing

Microsoft and Quantinuum said they have created 12 highly reliable logical qubits by combining Azure Quantum's qubit-virtualization system with Quantinuum's H2 trapped-ion quantum computer. Microsoft also said it would work with Atom Computing to add a new quantum system to Azure Quantum.

The companies said they also demonstrated reliable quantum computing by integrating it with AI models and high-performance computing (HPC). That development highlights how quantum systems, HPC and classical compute will ultimately be used together. Previously: Quantinuum, Microsoft claim quantum reliability breakthrough

Microsoft and Quantinuum also said they produced the 12 logical qubits with good fidelity and lower error rates. Here's what Microsoft said in a blog post about error correction.

"Microsoft and Quantinuum demonstrated several fault-tolerant computations with the improved logical qubits. On eight logical qubits, the teams successfully conducted five rounds of repeated error correction. Furthermore, the eight logical qubits were used to perform a fault-tolerant computation during error correction, successfully demonstrating the combination of logical entangling operations with multiple rounds of quantum error correction. The eight logical qubits exhibited a circuit error rate of 0.002, which is 11 times better than the corresponding physical qubits’ circuit error rate of 0.023. To our knowledge, this is the first demonstration of computation and error correction being beneficially combined, and it showcases the ability of these logical qubits to perform increasingly deeper quantum computations reliably, paving the way to fault-tolerant quantum computing."
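
The "11 times better" figure is simply the ratio of the two circuit error rates quoted above, rounded down:

```python
# Ratio of physical to logical circuit error rates from the Microsoft quote.
physical_error = 0.023  # physical qubits' circuit error rate
logical_error = 0.002   # logical qubits' circuit error rate
print(round(physical_error / logical_error, 1))  # 11.5, reported as "11 times better"
```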

On the hybrid quantum computing front, Microsoft and Quantinuum said they applied HPC, AI and quantum hardware to chemistry problems. To that end, Microsoft Azure will include Quantinuum's InQuanto computational chemistry offering and integrate it into Azure Quantum Elements.

Microsoft's partnership with Atom Computing has yielded logical qubits and systems that are being optimized. Microsoft said it will apply its qubit-virtualization system to Atom Computing's second-generation systems within Azure Elements.

Generative AI driving interest in nuclear power for data centers

Nuclear-powered data centers are on the horizon (and here in some cases), but deployments will take time and likely extend into 2026 or 2027. What's driving the nuclear data center concept? Generative AI workloads and an electricity grid that's currently under strain.

Oracle CTO Larry Ellison dropped this nugget on the company's first quarter earnings call:

"Let me say something that's going to sound really bizarre. Well, I probably -- you'd probably say, well, he says bizarre things all the time. So why is he announcing this one? It must be really bizarre. So, we're in the middle of designing a data center that's north of the gigawatt that has -- but we found the location and the power place. We look at it, they've already got building permits for three nuclear reactors. These are the small modular nuclear reactors to power the data center. This is how crazy it's getting. This is what's going on."

It's not that crazy considering that nuclear power is starting to get its due because of the need for sustainable energy and the reality that so-called AI factories are going to need a lot more power. Is it any wonder that OpenAI founder Sam Altman also happens to be Chairman of Oklo, which specializes in fast fission reactors that can run on fresh fuel and recycled waste?

Oklo is publicly traded, but is pre-revenue. The general idea is that Oklo would build these mini-nuclear power plants attached to data centers handling AI workloads. On a recent second quarter earnings call, CEO Jacob DeWitte said:

"When we talk about providing power directly to energy users, these sizes offer a good entry point to a number of different markets, and these projects can be quite large when they aggregate together. The reality too is that data centers are making up a vast majority of the market opportunity we see in front of us. While the numbers are very large around those opportunities, especially around the larger scale AI purpose data centers, these projects are not being deployed all at once at a one gigawatt or multi-gigawatt scale. Instead, they're ramping into it. It's phased growth through a development process."

Oklo recently announced deals with Equinix, Wyoming Hyperscale and Diamondback Energy.

Oklo's shareholder letter and deck, which highlight its reactors and commercialization plan, are worth a read to get up to speed.

Barron's also highlighted Oklo on its cover this week, along with more established nuclear plays including Constellation Energy, Duke Energy and Vistra. Bill Gates' TerraPower has also built out nuclear facilities. Amazon founder Jeff Bezos backs General Fusion, a nuclear company in British Columbia. TAE Technologies, a nuclear fusion startup, raised $250 million in a venture capital round that included Google in 2022.

And in January, Amazon Web Services acquired a data center attached to Talen Energy's nuclear plant. Talen Energy will sell power to AWS.

Talen Energy CEO Mac McFarland said:

"At Talen, we have come up with one creative cost-effective solution by co-locating a 1-gigawatt AWS data center campus next to our Susquehanna nuclear plant. Everyone seems interested in our efforts, our colleagues in the IPP space, regulated utilities and RTOs. And the issue now sits at FERC’s doorstep. In the investment community, our deal created excitement about increased demand and incremental value creation across the entire power sector, attracting new investors."

Bottom line: AI is going to tax data center infrastructure and the grid. It increasingly looks like a nuclear power renaissance may occur due to AI workloads.

Oracle CloudWorld 2024: Oracle HeatWave Lakehouse, GenAI agents, Zettascale supercluster, Intelligent Data Lake

Oracle is going after the lakehouse market with HeatWave to go along with a bevy of generative AI features including HeatWave GenAI and HeatWave on AWS. Oracle also launched its Intelligent Data Lake and genAI apps across its platform. Oracle also announced a zettascale cloud computing cluster with Nvidia's Blackwell platform.

At Oracle CloudWorld in Las Vegas, Oracle introduced HeatWave Lakehouse, HeatWave GenAI, HeatWave on AWS and HeatWave MySQL enhancements. MySQL became part of Oracle via the Sun Microsystems acquisition in 2009. In 2020, Oracle launched HeatWave, a cloud-native in-memory query accelerator designed to speed online analytical processing (OLAP) to deliver real-time analytics and other complex queries within the MySQL database as a managed service in Oracle Cloud Infrastructure (OCI).

The HeatWave news landed a day after Oracle announced a deal with Amazon Web Services (AWS) for Oracle Database@AWS. With the move, Oracle has database partnerships with all of the hyperscale cloud providers. 

Here's a look at the HeatWave announcements:

HeatWave on AWS. Oracle said it is launching HeatWave GenAI and HeatWave Lakehouse on AWS as well as OCI. The core pitch is that Oracle HeatWave can deliver better performance. Oracle said users can automate vector store creation and vector embedding, use large language models in-database running on CPUs as well as Amazon Bedrock and have natural language conversations with documents in Amazon S3.

According to Oracle, HeatWave vector processing offers better price performance than Snowflake, Databricks and Google BigQuery.

Research: Oracle MySQL HeatWave Grows, Adds Lakehouse Support

HeatWave Lakehouse will give AWS users insights on structured, semi-structured and unstructured data in Amazon S3, offer native JavaScript support and provide the ability to predict the right set of indexes for OLTP workloads. The AWS moves round out Oracle's multicloud strategy.

HeatWave GenAI will give customers multi-lingual support to load documents in 27 languages into HeatWave Vector Store. Oracle is also adding optical character recognition (OCR) support to HeatWave Vector Store, LLM inference batch processing and automatic vector store updates.
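
Under the hood, a vector store answers questions like these by comparing embeddings. A generic, self-contained sketch of the retrieval step (an illustration of the concept, not HeatWave's actual interface; the embeddings below are made up):

```python
# Minimal sketch of vector-store retrieval: embed documents, embed the
# question, and return the nearest document by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings; in practice an embedding model produces these vectors.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.7, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # embedding of "how do I get my money back?"

ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked[0])  # "refund policy"
```

Features like OCR support and automatic vector store updates extend the same pipeline upstream: more document types get embedded, and the index stays current without manual reloads.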

HeatWave Lakehouse will give customers the ability to query data in object storage at the same speed as database queries. Users can write results to object storage and use HeatWave for MapReduce applications. Oracle said it is also adding automatic change propagation to HeatWave Lakehouse.

HeatWave MySQL will get the ability to optimize query plans and improve performance, integration with OCI Ops Insights and bulk ingest.

HeatWave AutoML will be able to build, train and explain machine learning models in HeatWave without additional costs. Oracle said HeatWave AutoML is getting the ability to store and process larger models, model topics, manage data drift and add semi-supervised log anomaly detection.

All OCI accounts get access to a standalone HeatWave instance in OCI home regions with 50 GB of storage and 50 GB of backup storage for an unlimited time.

Constellation Research's take on HeatWave additions

Constellation Research analyst Holger Mueller said it was critical that Oracle added genAI and lakehouse functionality to HeatWave.

"Oracle has to make it easy enough for Oracle DB customers to stay with their Oracle databases. If Oracle succeeds it will keep customers using its databases. If Oracle made it hard, customers would look for database and lakehouse alternatives, which would not be a good outcome for Oracle. Lakehouse is critical; Oracle has been absent from it, so it's very much needed. The multicloud deployments of HeatWave are as well."

"For HeatWave GenAI, the update is significant. Oracle had to add vector support, and this release is all about making it easier for developers to use vector capabilities inside HeatWave. Oracle added JavaScript support, which is a big step. Basically, Oracle needs to make sure that the data content in HeatWave is available and it is easy for developers to use the vector support. If the latter succeeds, the future of HeatWave in the AI era is set."

Constellation Research analyst Doug Henschen said:

"Oracle's latest announcements on MySQL HeatWave step up the competition in the data warehouse/data lakehouse market, particularly with AWS. While AWS continues to focus Aurora on transactional needs, Redshift on analytical needs and SageMaker on data science, the combination of HeatWave on AWS, HeatWave Lakehouse, HeatWave AutoML, and HeatWave GenAI brings together a compelling set of capabilities on a single platform."

Oracle Supercluster on Nvidia Blackwell

Oracle announced a zettascale cloud computing cluster with Nvidia's Blackwell platform. OCI said it is taking orders for the AI supercomputer, which has up to 131,072 Nvidia Blackwell GPUs available.

According to Oracle, the AI supercluster has 2.4 zettaFLOPS of peak performance and can outperform the Frontier supercomputer. Oracle said:

"OCI Superclusters are orderable with OCI Compute powered by either NVIDIA H100 or H200 Tensor Core GPUs or NVIDIA Blackwell GPUs. OCI Superclusters with H100 GPUs can scale up to 16,384 GPUs with up to 65 ExaFLOPS of performance and 13Pb/s of aggregated network throughput. OCI Superclusters with H200 GPUs will scale to 65,536 GPUs with up to 260 ExaFLOPS of performance and 52Pb/s of aggregated network throughput and will be available later this year. OCI Superclusters with NVIDIA GB200 NVL72 liquid-cooled bare-metal instances will use NVLink and NVLink Switch to enable up to 72 Blackwell GPUs to communicate with each other at an aggregate bandwidth of 129.6 TB/s in a single NVLink domain."
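
A quick back-of-envelope check shows the headline numbers are internally consistent with per-GPU peak rates (low-precision, sparse peaks; the arithmetic is ours, not Oracle's):

```python
# Back-of-envelope check on the per-GPU peak implied by Oracle's figures.
zetta = 2.4e21           # 2.4 zettaFLOPS claimed for the Blackwell supercluster
blackwell_gpus = 131_072
per_gpu = zetta / blackwell_gpus
print(f"{per_gpu:.2e}")  # ~1.83e16, i.e. ~18 petaFLOPS per Blackwell GPU

h100_exa = 65e18         # 65 ExaFLOPS across 16,384 H100 GPUs
per_h100 = h100_exa / 16_384
print(f"{per_h100:.2e}") # ~3.97e15, ~4 petaFLOPS per H100 (FP8 sparse peak)
```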

Nvidia Blackwell GPUs on the supercluster are available in the first half of 2025. 

Oracle Intelligent Data Lake, GenAI analytics on Oracle Data Intelligence Platform

Oracle launched Oracle Intelligent Data Lake to go with its Oracle Data Intelligence Platform.

As Mueller noted, Oracle's plan is to surround its databases with all of the key components that would attract existing customers to other platforms.

Oracle said that Intelligent Data Lake will be a core component to the Data Intelligence Platform. Oracle is looking to provide a unified experience by combining orchestration, data warehouses, analytics and AI within the Data Intelligence Platform, which runs on OCI.

The Data Intelligence Platform will include integration with Oracle Autonomous Data Warehouse, Oracle Analytics Cloud, HeatWave, AI services and third-party services.

According to Oracle, the Intelligent Data Lake will enter "limited availability in 2025."

In addition, Oracle is embracing open standards with its Intelligent Data Lake, which will support data catalogs, Apache Spark, Apache Flink, Jupyter Notebooks and open-source standards such as Kafka, Delta Lake, Iceberg and Parquet.

Oracle also said its Oracle Analytics Cloud AI Assistant is available and Autonomous Database will support retrieval augmented generation (RAG) and new no-code tools as part of Data Studio.

Oracle Fusion Data Intelligence, HCM, SCM apps

Oracle Fusion Data Intelligence has intelligent applications for Oracle Fusion Cloud Human Capital Management (HCM) and Oracle Fusion Cloud Supply Chain & Manufacturing (SCM) designed to recommend actions. Fusion Data Intelligence combines data, analytics, and prebuilt AI and machine learning models to create actionable insights for enterprises.

The company added the following:

  • Oracle Cloud HCM gets a People Leader Workbench app that aims to align business and financial goals for HR and finance execs.
  • Oracle Cloud SCM gets Supply Chain Command Center to help enterprises respond to changing conditions across the supply chain network.
  • Fusion Data Intelligence gets operational reporting as well as an AI-powered developer assistant.
  • ERP, HCM, SCM and CX analytics get new AI and machine learning features.

GenAI Agents with RAG

Oracle announced the general availability of OCI GenAI Agents with RAG capabilities. OCI GenAI Agents access Oracle Database 23ai Vector Search and add an automation layer. 

The company said the use cases for OCI GenAI Agents include call center optimization, legal research, revenue operations and HR recruiting functions. 

In addition, OCI GenAI will be able to access Meta Llama 3.1 models as well as Cohere's Command R, Command R+ and Embed models. OCI Data Science will be able to tap into OCI Ampere A1 as well as models from Hugging Face. 

Oracle also added document understanding, support for more than 100 languages, vision and speech capabilities as well as Code Assist, which is in beta. 
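
The RAG pattern behind these agents is straightforward: retrieve relevant records from a vector index, then hand them to the model as grounding context. A schematic sketch in which `search_index` and `call_llm` are hypothetical stand-ins, not OCI APIs:

```python
# Schematic RAG pipeline: retrieve, then generate with retrieved context.
# `search_index` and `call_llm` are placeholders, not OCI API calls.

def search_index(question: str, k: int = 3) -> list[str]:
    """Stand-in for a vector search (e.g., Oracle Database 23ai Vector Search)."""
    corpus = {  # document -> made-up relevance score for this question
        "HR policy: employees accrue 1.5 vacation days per month.": 0.92,
        "Legal: contracts renew annually unless cancelled in writing.": 0.31,
        "Support: reset passwords via the self-service portal.": 0.12,
    }
    return [doc for doc, _ in sorted(corpus.items(), key=lambda kv: -kv[1])[:k]]

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call (e.g., a Llama or Cohere model on OCI)."""
    return f"[answer grounded in {prompt.count('CONTEXT:')} context block(s)]"

def answer(question: str) -> str:
    context = "\n".join(f"CONTEXT: {d}" for d in search_index(question))
    return call_llm(f"{context}\nQUESTION: {question}\nAnswer using only the context.")

print(answer("How many vacation days do I accrue?"))
```

The use cases above (call centers, legal research, HR) all follow this shape; what changes is the corpus being indexed and the policies around what the agent may do with the answer.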

Generative development

Oracle launched generative development (GenDev), which combines Oracle Database 23ai, multiple Oracle services and support for LLMs.

The company also launched Autonomous Database Nvidia GPU support with GPU-enabled Python packages, Data Studio AI enhancements, Autonomous Database for Developers and a version with container images. 

Oracle also announced Autonomous Database Select AI, which supports synthetic data in test instances. 

Cloud optimization

Oracle launched Cloud Success Navigator, a platform that optimizes cloud and AI deployments. Cloud Success Navigator, which is available at no cost for 12 months, includes:

  • Preconfigured environments for specific enterprise processes. 
  • Best practices from Oracle's library of business processes, process flows, learning content and guides. 
  • Deployment guidance by roles. 
  • Recommended actions for cloud quality. 
  • Dashboards for implementation status and milestones. 

Oracle Fusion Cloud Applications

Oracle outlined a series of AI tools across finance, supply chain, HR, sales, marketing and service functions in the Oracle Fusion Cloud Applications Suite. Here's the breakdown:

  • Oracle Cloud ERP gets predictive cash forecasting tools via AI models that operate across multiple time frames as well as narrative reporting tools, variance explanations and commentary. Transaction records will be automated.
  • Oracle Cloud HCM gets a set of bespoke AI skills that combines skills data with enterprise and third-party data.
  • Oracle Cloud SCM adds a new smart operations workbench and assisted authoring in Oracle Order Management.
  • Oracle Cloud CX has generative AI tools to answer contract questions, write emails and summarize quotes and proposals.

Oracle Unity Customer Data Platform (CDP), part of Oracle Fusion Cloud CX, will get an account profile explorer to spot revenue opportunities, buying group and opportunity scoring, native Oracle Analytics Cloud integration and industry optimized templates, data models and attributes. 

ServiceNow's Xanadu release adds AI Agents, RaptorDB Pro, genAI enhancements

ServiceNow launched the Xanadu version of its Now Platform, which includes AI Agents that can autonomously perform tasks without human intervention, RaptorDB Pro, a new back-end database that improves performance, industry-specific features, and a new integrated development environment (IDE).

Those high-level additions are part of hundreds of new features launched in the Xanadu release. Here's a look at the Xanadu release.

AI Agents and generative AI

Agents are the hot commodity in generative AI and ServiceNow previewed its direction with autonomous AI approaches at its Knowledge conference earlier this year.

AI Agents in ServiceNow's platform are designed to automate tasks and make decisions based on all enterprise data across multiple roles and industries. 

ServiceNow said it will integrate agentic AI throughout its platform with use cases including IT, customer service, procurement, HR and other enterprise functions. In November, ServiceNow will launch AI agents in its Customer Service Management (CSM) and IT Service Management offerings. 

Jon Sigler, senior vice president of platform and AI at ServiceNow, said the company's strategy revolves around enabling AI agents to "work autonomously in the background, handling tasks, managing processes and collaborating with employees rather than just serving them." 

ServiceNow is also providing an AI Skill Kit so enterprises can customize skills to their workflows and needs. ServiceNow has been posting strong revenue growth with a practical approach to generative AI and a focus on use cases. ServiceNow recently announced the acquisition of Raytion, a genAI search tool that will be integrated into the Now Platform. Boomi and ServiceNow also formed a strategic partnership that will blend Boomi's application programming interface management and automation platform with ServiceNow's Now Platform.

"There is a critical need for automation and AI solutions that deliver value and drive better business outcomes without needing to invest in new technology stacks," said Dorit Zilbershot, vice president of product management, AI at ServiceNow. "Our vision is to leverage AI Agents that can understand the environment and all available data across the enterprise and make decisions and take actions."

These AI Agents can do the following:

  • Autonomously diagnose issues, make decisions and execute actions based on predefined policies and enterprise data.
  • Tap into all available data in an enterprise including historical cases, knowledge articles, workflows and policies.
  • Summarize past interactions with customers.
  • Generate email replies to customer support cases based on all relevant data.
  • Automate routine tasks such as troubleshooting, providing discounts and handling customer queries.
  • Suggest actions that can improve processes.
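The capabilities above follow a common retrieve-decide-act pattern: gather relevant enterprise data, decide within predefined policy, then execute or escalate. Below is a minimal generic sketch of that loop in Python; the function and field names are illustrative placeholders, not ServiceNow APIs.

```python
# Generic agentic-AI loop: retrieve context, decide within policy, act.
# All names here are illustrative placeholders, not ServiceNow APIs.

def handle_case(case, knowledge_base, policies):
    # Gather every relevant piece of enterprise data for this case:
    # the customer's history plus knowledge articles tagged with the topic.
    context = {
        "history": knowledge_base.get("history", {}).get(case["customer"], []),
        "articles": [a for a in knowledge_base.get("articles", [])
                     if case["topic"] in a.get("tags", [])],
    }
    # Decide on an action; a production agent would call an LLM here,
    # constrained by the predefined policies.
    if case["topic"] in policies.get("auto_resolve_topics", []):
        action = "resolve"
    else:
        action = "escalate"
    # Execute and return a summary of what was done and what was consulted.
    return {"case_id": case["id"], "action": action,
            "context_items": len(context["history"]) + len(context["articles"])}
```

The key design point the bullets imply is that the agent acts only inside guardrails ("predefined policies") while drawing on all available data, rather than free-running.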

In addition, ServiceNow has launched new generative AI updates for email reply generation, data visualization and various workflows. Generative AI additions to Xanadu include Now Assist genAI tools for security operations, sourcing and procurement.

Key items include:

  • Integration with Microsoft Copilot is generally available.
  • Now Assist for SecOps lets teams triage interactions with genAI summarization and Q&A to prioritize incident response.
  • Now Assist for Sourcing and Procurement Operations streamlines processes for submitting requests, review and compliance.
  • Now Assist Skill Kit enables partners and customers to create custom genAI skills for their use cases with multiple large language models.
  • Now Assist for IT Service Management summarizes change requests and related data.
  • Now Assist for HR Service Delivery (HRDS) engages employees and managers to complete tasks like approvals, training and goal setting with LLM prompts.

RaptorDB Pro

Xanadu will include RaptorDB Pro, which can improve transaction times by as much as 53% and deliver answers to queries 27x faster.

RaptorDB Pro adds a column data store architecture optimized for large datasets that improves speed and efficiency of querying data. ServiceNow said the addition of RaptorDB Pro is abstracted away from customers so they don't have to interact with the database or adjust workflows.
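The reason a column store speeds up analytic queries is that an aggregate only has to read the columns it references, with better cache locality and compression as a side effect. A generic illustration of the idea (plain Python, not RaptorDB internals):

```python
# Row store: each record holds every field, so an aggregate walks whole rows.
rows = [{"id": i, "status": "open", "priority": i % 5} for i in range(1000)]
row_sum = sum(r["priority"] for r in rows)  # touches all fields of all rows

# Column store: each field lives in its own contiguous array, so the same
# aggregate reads just one array instead of every record.
columns = {
    "id": list(range(1000)),
    "status": ["open"] * 1000,
    "priority": [i % 5 for i in range(1000)],
}
col_sum = sum(columns["priority"])  # touches only the needed column

assert row_sum == col_sum  # same answer, far less data scanned
```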

According to ServiceNow, RaptorDB Pro will include a unified Knowledge Graph that connects events, operations and people data. Pat Casey, CTO and executive vice president, DevOps at ServiceNow, said RaptorDB will be useful for data processing, AI inferencing and analytics. "Customers can scale their workflows with speed, connectivity, and personalization on the ServiceNow platform," said Casey.

Other key points:

  • RaptorDB Standard includes improvements to ServiceNow's current database and is available to new customers today and all customers next year.
  • RaptorDB Pro is the premium version of the new database and available to new and existing customers.
  • ServiceNow is rolling out the RaptorDB Lighthouse Program to select customers.
  • Early next year, ServiceNow plans to launch personalized workflows via Knowledge Graph with availability in March.

Industry enhancements

The Xanadu release will expand Now Assist into telecom, media and technology, financial services, public sector and retail.

Among the industry additions:

  • Now Assist for Banking integrates with Dispute Management.
  • Retail Operations and Retail Service Management aim to unify workflows between store associates, corporate headquarters, customers and technicians.
  • Now Assist for Telecom, Media and Technology helps agents understand service problems quickly and resolve issues by using genAI to summarize issues and diagnose them.
  • Now Assist for Insurance summarizes claims, provides context and minimizes mistakes.
  • Now Assist for Public Sector Digital Services gives government employees relevant case history, summaries and insights to make decisions.

Productivity enhancements

ServiceNow's Now Platform Xanadu release adds more tools to streamline workflows and processes.

For developers, the Now Platform gets a new IDE that aims to enable developers to collaborate across teams. "This is a new way of developing on our platform and the IDE allows developers to easily create and modify ServiceNow applications in just minutes," said Amy Lokey, Chief Experience Officer.

In addition, Xanadu adds updates to AIOps and Service Reliability Management and Guided Self-Service in Employee Center.

Chris Bedi, chief customer officer and interim chief product officer at ServiceNow, noted that proactive service delivery can lower costs, reduce risks and scale business transformation.

Key items include:

  • The new IDE enables subject matter experts and developers to collaborate as applications are built.
  • ServiceNow launched Enterprise Architecture, an expansion of ServiceNow Application Portfolio Management, in a move to align IT teams and business owners to simplify operations.
  • AIOps in IT Operations Management now has Event Management to group and escalate alerts while assessing impact.

 


Oracle databases everywhere with AWS partnership, strong Q1 results

Oracle and Amazon Web Services announced a strategic partnership to complete CTO Larry Ellison's multicloud trifecta. Oracle first partnered with Microsoft Azure, then Google Cloud and now AWS. Oracle also delivered strong cloud revenue growth in the first quarter. 

The two cloud providers launched Oracle Database@AWS, which allows customers to access Oracle Autonomous Database on dedicated data centers and Oracle Exadata Database Service within AWS.

Oracle said Oracle Database@AWS will unify Oracle Cloud Infrastructure (OCI) and AWS for database administration, billing and customer support. Enterprises will also be able to connect enterprise data in Oracle to applications running on Amazon EC2, analytics services and AI services including Amazon Bedrock. Procurement will be simplified through AWS Marketplace so enterprises can purchase Oracle Database services using their AWS commitments and existing Oracle licenses.

In addition, Oracle CTO Larry Ellison and AWS CEO Matt Garman will talk about the partnership on Tuesday at Oracle CloudWorld. The partnership is part of Oracle's broader multicloud strategy. Oracle Database@Google Cloud, which was announced last quarter, is now generally available.

"We are seeing huge demand from customers that want to use multiple clouds," said Ellison. "To meet this demand and give customers the choice and flexibility they want." Garman added that the partnership will give joint customers more flexibility and scalability.

Ellison said Oracle's cloud business is benefiting from its Nvidia GPU clusters as well as autonomous database. "Our large and loyal customer base understand and appreciate the many technical advantages of using the Oracle database, and those customers wanted us to find a way to make the very latest and best Oracle technology available on other cloud," said Ellison. "We believe our cloud partnerships with AWS and Microsoft and Google will turbocharge the growth of our database business for years to come."

Oracle Database@AWS will be available in preview later this year with broader availability in 2025. For Oracle, the AWS deal is big since it now can offer its database services across every hyperscale cloud.

Key items about the partnership include:

  • Direct access between Oracle database services and AWS will be provided via a low latency network connection.
  • AWS customers will be able to use Oracle Autonomous Database seamlessly.
  • Zero-ETL integration will run between Oracle Database services and AWS Analytics services.
  • Oracle database services will seamlessly be integrated with Amazon S3.
  • Oracle Database@AWS can be launched via the Amazon Management Console, Command Line Interface and AWS CloudFormation.

Fidelity, Best Buy and State Street were among the joint customers touting the partnership.

Ellison said the AI race is a marathon not a sprint and it requires all parts of the data center. 

"This race goes on forever to build a better neural network. And the cost of that training gets to be astronomical. When I talk about building gigawatt or multi-gigawatt data centers, I mean these AI models, these frontier models are going to -- the entry price for a real frontier model from someone who wants to compete in that area is about $100 billion. Let me repeat, around $100 billion. That's over the next four, five years for anyone who wants to play in that game. That's a lot of money. And it doesn't get easier."

"Let me say something that's going to sound really bizarre. Well, I probably -- you'd probably say, well, he says bizarre things all the time. So why is he announcing this one? It must be really bizarre. So we're in the middle of designing a data center that's north of the gigawatt that has -- but we found the location and the power place. We look at it, they've already got building permits for three nuclear reactors. These are the small modular nuclear reactors to power the data center. This is how crazy it's getting. This is what's going on."

Separately, Oracle announced first quarter earnings of $1.03 a share on revenue of $13.3 billion, up 7% from a year ago. Non-GAAP earnings in the first quarter were $1.39 a share, 6 cents a share better than estimates.

By the numbers:

  • First quarter cloud revenue was $5.6 billion, up 21% from a year ago.
  • Cloud infrastructure revenue was $2.2 billion, up 45% from a year ago.
  • SaaS revenue was $3.5 billion, up 10% from a year ago.
  • Cloud ERP revenue was $900 million, up 16% and NetSuite first quarter revenue was also $900 million, up 20%.
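As a quick sanity check on those growth rates, the implied year-ago figures can be derived by dividing current revenue by one plus the growth rate (reading the infrastructure figure as 45% growth). These are back-of-the-envelope numbers, not Oracle's reported comparisons:

```python
# Back out the approximate year-ago revenue (in $ billions) implied by a
# reported growth rate. Illustrative arithmetic, not Oracle's reported figures.
def year_ago(current_billion, growth_pct):
    return current_billion / (1 + growth_pct / 100)

print(round(year_ago(5.6, 21), 2))  # total cloud: ~4.63
print(round(year_ago(2.2, 45), 2))  # cloud infrastructure: ~1.52
print(round(year_ago(3.5, 10), 2))  # SaaS: ~3.18
```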

Oracle CEO Safra Catz said cloud services are now the company's largest business and operating income and earnings growth are accelerating.

Ellison added that Oracle now has 162 cloud data centers. In the first quarter, Oracle signed 42 additional GPU contracts valued at $3 billion. Oracle has 7 OCI regions live at Microsoft with 24 more being built and 4 with Google Cloud with another 14 on tap.

As for the outlook, Catz said Oracle's database business will accelerate with all of the hyperscalers in the mix. Catz said:

  • Oracle spent $2.3 billion in capital expenditures in the first quarter. Fiscal 2025 capex will be double 2024. 
  • "We remain very confident and committed to full year total revenue growth growing double digits and full year total cloud infrastructure revenue growing faster than last year."
  • Total revenue in the second quarter will grow 7% to 9%. Cloud revenue will grow 23% to 25%. 
  • Non-GAAP earnings in the second quarter will be between $1.42 a share to $1.46 a share. 

Constellation Research's take

Constellation Research analyst Holger Mueller said:

"Oracle had another very good quarter. Its Q1 is traditionally a sluggish quarter dating back in the on premises days, and the strength shows Oracle's transition to services and subscriptions. All regions grew as well which demonstrates the global appeal of the Oracle offering portfolio.  Oracle is building a lot of datacenters, buying a lot of GPUs and now partnering with AWS along with Google and Microsoft. All of those partnerships require CAPEX to put the Exadata machines where they need to be for an attractive customer experience.

Oracle invested close to $8 billion in CAPEX in Q1, but at a lower rate of using free cash flow (it was 47% now down to 41% - but Oracle added almost $1.5B to cash flow). With a build out of a total of 31 datacenters planned for Microsoft (7 live, 24 to be built), 18 planned for Google (4 live and 14 to be built), and an amount to be built for AWS, Oracle will have to keep investing. 

No wonder infrastructure revenue grows with 45%, with Oracle Cloud ERP growing a respectable 16% and NetSuite revenue growth at 20%. Those are very good growth rates during uncertain overall economic times. Oracle has never been closer to the original Larry Ellison vision of becoming the IBM of the 21st century and providing all that enterprises need across the stack – from the hardware to SaaS – with a strong commitment to compete in the multicloud. 

Lastly it is clear that competition to unseat Oracle as the leading transactional, mission critical relational database has folded, AWS was the last competitor standing. The future is coopetition now. Oracle can’t rest on its laurels, as it needs to keep enterprise data in its databases, delivering strong on vector search (bringing AI to the database) and strong and easy uptake of RAG (allowing data to stay in the database for validation)." 


Apple iPhone 16 event: How an AirPod Pro hearing aid stole the innovation show

Apple’s iPhone 16 event added a bevy of hardware upgrades in a bid to entice customers to upgrade devices even though Apple Intelligence will be delivered via software updates over the next year. Apple, however, does have a knack for useful killer apps and AirPod Pro as a clinical grade hearing aid will sell well.

Based on innovation, I'd rank the event this way.

  • AirPod Pro becomes a hearing aid. Given the sheer number of people with untreated hearing loss looking to avoid stigma, Apple will quickly have a hit--once it gets FDA approval. 
  • Apple launches A18 Pro, a chip that the company claims is the most powerful on any smartphone.
  • Apple launches A18 processor that is supposed to rival PC chips.
  • Apple Watch has new displays that are able to be viewed better at an angle.
  • Generative emoji creation. Not the most groundbreaking feature, but sticky for the base.
  • Camera control with haptic features to enable framing a shot, making adjustments and accessing Visual Intelligence.
  • Spatial audio that will set Apple up for its Vision Pro ecosystem. Today, spatial audio will make mixing audio and video easier.
  • Satellite service on iPhone enhancements with roadside assistance, find device and other features across geographies.

Constellation Research CEO Ray Wang said the iPhone 16 cycle is a "significant release" that sets up "a super cycle of replacement that is shifting from 5G to AI." "We are halfway through the 5G cycle and the AI replacement cycle is driven by Apple Intelligence," said Wang. "We're seeing about 230 million iPhones a year replaced so year one for AI begins in 2025."

But innovation doesn’t necessarily pay Apple’s bills. What follows is a recap of the Apple iPhone 16 event by priority to the company's financials and most customers. Preorders for most products announced start today, with delivery Sept. 20.

iPhone 16 Pro, A18 Pro

Apple CEO Tim Cook said the company's flagship has new enhancements including the largest displays at 6.3" and 6.7".

According to Apple, iPhone 16 Pro is designed for Apple Intelligence, leverages Siri for genAI tasks and personal semantic intelligence and provides suggestions for better photos. The device also has the longest battery life.

iPhone 16 Pro features A18 Pro, which is faster and more efficient than A18. Built on 3-nanometer technology, it includes a 16-core neural engine, a 6-core CPU, 2x faster ray tracing, machine learning features, advanced media features, and new image and video signal processors.

Spatial audio on iPhone 16 Pro will set Apple up for the future with Vision Pro, but today it will enable better audio and video mixing. Apple also added some fun features for voice memos.

iPhone 16 Pro starts at $999.

iPhone 16, A18 and Apple Intelligence

Cook said the new iPhone has been designed from the ground up for Apple Intelligence, which isn't expected to fully roll out until 2025. iPhone 16 comes in 6.1" and 6.7" versions.

The biggest item in iPhone 16 is Apple Silicon for models, AI and machine learning. Apple launched A18 with a 16-core neural engine, 16% more bandwidth for generative models, 3-nanometer technology and a 6-core CPU, and it is 30% more energy efficient. A18 also includes a new GPU that uses 35% less power than A16.

Apple claimed that the A18 rivals high-end desktop processors.

The company touted Apple Intelligence integration throughout iPhone 16, but many features aren't expected to be available at launch. However, iOS will get writing tools, tone adjustment and proofreading. Apple Intelligence will also be able to create new emojis and images on the fly.

Visual Intelligence will enable customers to take a picture and get contextual information from Google and ChatGPT via iPhone 16's new camera control.

Next month, Apple Intelligence will be available in beta, with a rollout in non-English languages next year. The upshot: Upgrade your phone and we'll give you Apple Intelligence later.

iPhone 16 starts at $799, iPhone 16 Plus starts at $899 and Apple said there are trade-in deals to be had.

Apple Watch

Apple Watch gets a larger screen, fast charging (80% charged in 30 minutes) and a wide-angle OLED display for brighter off-angle viewing. Apple claims the screen is now large enough to type on, 30% larger than Series 6 and bigger than the current Apple Watch Ultra.

Available Sept. 20 starting at $399.

Apple Watch Ultra 2

Apple Watch Ultra gets GPS enhancements, running zones and power meters for cycling as well as automatic stroke detection in swimming. Many of the features added to Apple Watch Ultra are already in Garmin watches.

Apple Watch Ultra 2 starts at $799.

Apple AirPods 4

AirPods will enable you to respond to Siri with a simple yes or no nod. The charging case is USB-C with 30 hours of total battery life. AirPods will get active noise cancellation for the first time as well as adaptive audio that adapts to environmental noise around you. Other items include conversational awareness, wireless charging for the case and transparency mode. Apple also updated AirPods Max headphones at $549.

AirPod Pro

AirPod Pro will include features to prevent hearing loss and boost awareness of the health issue. AirPod Pro will have hearing protection via machine learning while preserving audio range. AirPod Pro will also include hearing tests with an app on the iPhone based on data from the Apple Hearing Study and a hearing aid feature. Apple said that it expects FDA approval for its Hearing Aid feature soon and the feature will be delivered via a software update.
