Amazon Web Services is using its Nova models for tailored use cases, including cybersecurity. Other takeaways from a chat with Amazon and AWS chief information security officers included the combination of physical and cybersecurity and how humans and AI write code differently.
AWS recently launched Amazon Nova Premier, its most capable LLM. AWS launched Nova models last year and has been courting developers.
Eric Brandwine, VP and Distinguished Engineer at Amazon, said:
"We are very proud of the work that we've done with Nova, and we are absolutely using it internally. One of the things that we can do, because we have this AI organization, is fine-tune the model for different use cases, and so we've been able to come up with Nova variants that are tuned to specific security workloads, and that has shown significant dividends."
At AWS re:Inforce 2025 during an analyst Q&A, Brandwine was speaking on a panel with AWS CISO Amy Herzog and Amazon CISO CJ Moses. At Amazon, security chiefs rotate among units. For instance, Moses and Herzog swapped roles.
- AWS re:Inforce 2025: How customers are using AWS security building blocks
- AWS re:Inforce 2025: SecurityHub, AI and proactive defense
Herzog said Nova is an example of Amazon building tools and building blocks. Models are no different. "Choice is so deeply ingrained that it might not be top of mind to talk about one versus the other," said Herzog. "You have a job and there are a bunch of different models that you could choose from for that job. You pick the best one."
Other topics:
Physical security. Moses said physical security falls under him as CISO. "We did it for the reasons of making sure that we have the best visibility across all of those areas," said Moses. "A piece of information about a workplace incident will become the information that we need to stack on to other things to determine where we have a scrambled employee that potentially could become an insider."
Non-obvious data connections can pay off, such as tying cybersecurity and freight intelligence to an incident in a building. "We actually use the data, because the worst thing you can do is have intelligence and not actually act on it," said Moses. "And the whole idea for us is to make sure we're not siloed with that data, and secondarily, that we're able to act on it."
AWS is AWS' largest security customer. Security is required just to run a cloud. "The amount that you invest in security to secure an online retailer is very different from what you invest to secure a cloud. And so we've got all of these smart, clever people. They're operating with different constraints. They have different creative ideas, and we get to go reap them all and apply them across the company," said Moses.
Security's different lens. Herzog said security is a prerequisite, and companies can't get carried away with new technologies that may hurt their cybersecurity posture.
Herzog said:
"If developer productivity goes up by this amount and we need to keep pace with it, what does that mean without lowering the security bar? What does that look like? What ideas do you have? Recognize the changes that are happening, but then really keep the outcomes that we want to achieve--protecting our customers at speed and at scale."
Solving problems never ends. Brandwine said that internally AWS talks about the security ratchet. "It always gets tighter," he said. "It's a travesty to spend time solving a problem we've already solved before, or relearning an old lesson. So we have this deep investment in automation, automated reasoning, in using existing techniques and new techniques. We reason about our services. We say this will always be true, and then we make the machine make that always true, so we can spend our time on the new things. When you solve a problem, you're not free. You just go work on the next problem."
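Amazon's internal automated-reasoning tooling isn't public, but the pattern Brandwine describes, stating an invariant once and having machines re-verify it on every change rather than humans relearning the lesson, can be illustrated with a minimal sketch. The policy format and `bucket_is_private` invariant below are hypothetical examples, not AWS internals:

```python
# Hypothetical sketch: encode a security invariant as code so an automated
# check (e.g., a CI gate) enforces it continuously, instead of a human
# re-auditing configurations by hand. Not Amazon's actual tooling.

def bucket_is_private(policy: dict) -> bool:
    """Invariant: no statement grants Allow access to every principal."""
    for statement in policy.get("Statement", []):
        if statement.get("Effect") == "Allow" and statement.get("Principal") == "*":
            return False
    return True

def enforce(policies: dict) -> list:
    """Return the names of any policies that violate the invariant."""
    return [name for name, p in policies.items() if not bucket_is_private(p)]

# Example configurations (illustrative only).
policies = {
    "logs":   {"Statement": [{"Effect": "Allow",
                              "Principal": {"AWS": "arn:aws:iam::123456789012:root"}}]},
    "public": {"Statement": [{"Effect": "Allow", "Principal": "*"}]},
}

violations = enforce(policies)
print(violations)  # a CI gate would fail the build if this list is non-empty
```

Once an invariant like this is mechanized, engineers stop re-solving that problem and, as Brandwine puts it, "go work on the next problem."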
Why AWS and Amazon don't talk about security more in public. "If you're bringing things up to customers that they can't act on or do anything about directly themselves, you're essentially fear mongering," said Moses. "We don't believe in unnecessarily worrying our customers, especially when those things are things that are within our control. The industry itself does a good enough job on (fearmongering) that we don't need to add to the flames. We'd rather be the ones that are putting the flames out."
AI security is just security. Herzog said, "you can't just separate genAI from the rest of the conversation." "The playbook is the same as always. What are you trying to accomplish?" said Herzog. "There are definitely technical challenges that we are starting to get ahead of, where we might be in a few years. But I think that's a different conversation."
Brandwine said:
"There are absolutely interesting novel attacks against LLMs, and some of these have been applied to commercially deployed services. But the vast majority of LLM problems that have been reported are just traditional security problems with LLM products. You've got to get the fundamentals right. You've got to pay attention to traditional deterministic security."
Secure code and AI vs. human. Brandwine said Amazon has multiple checks on AI-generated code. One thing to watch is that AI and humans write code differently. "We're getting significant success internally, but what we're finding is that the way that the human would write the code is not necessarily the way that the model would write the code. And if you want the model to evolve the code, you might want to structure it a bit differently," said Brandwine.