Is there value for AI after GenAI?

February 1, 2026

Welcome to a new edition of The Board: Distillation Aftershots (*).

This blog post is a copy of the Distillation Aftershots newsletter, which shares curious and interesting insights and data points distilled from enterprise technology to identify what's notable. Subscribe for weekly delivery to your inbox every Sunday morning, just in time for brunch.

In this issue, we will discuss what happens after GenAI. LLMs and GenAI took over in 2025, even “evolving” into Agentic AI. The problem? LLMs have limited abilities, like any other single-solution model, and we quickly found the limits of what GenAI could do. The question then becomes: is there something else we can do with AI if GenAI is not the answer we thought it was? That's today's newsletter.

First, my take.

I started doing AI in 1984. Back then, it was nowhere near what it is today: far more theoretical, and most of the work was done on the early elements of intelligence, cognition and interpretation, not execution. We did a lot of work on linguistics and symbolism, and we applied it via compilers and single-process execution (automation, like we have today, was not on the agenda; neither was complex execution). The reason I am bringing this up is that most of what we worked on and learned back then still applies today. Compared to what we used to do, GenAI is nothing but a simple, language-only model; it is not AI.

There is a lot more to AI these days, and we are seeing a lot of different, very interesting things. I have been writing in my report about moving past LLMs and GenAI as the starting point for creating value from AI. GenAI was a very small first step that got the interest going: now, let's make it shine. The links below show you some of the latest things I uncovered about AI, but there is a lot more out there.

Inference is making a comeback via Active Inference, a great evolution that takes you beyond learned and trained models. So are SLMs (small language models, or what we used to call simply models, since they are not only about language: they are single-problem-focused models), adversarial models (two or more models compete and only the best solution is kept, shortening the time and effort it takes to get a better answer), circle-of-experts and chain-of-experts models that refine LLMs (if you want to optimize those), and modern decision trees. And much, much more.
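To make the adversarial-models idea concrete, here is a minimal sketch in Python. Every function in it (model_a, model_b, score) is a hypothetical stand-in I made up for illustration, not any vendor's API: the point is only the shape of the loop, where several models answer the same prompt and only the best-scoring answer survives.

```python
# A sketch of the "adversarial models" idea: run two or more candidate
# models on the same prompt and keep only the answer a scoring function
# prefers. model_a, model_b, and score are hypothetical stand-ins, not
# any vendor's API.

from typing import Callable

def model_a(prompt: str) -> str:
    return f"[model A answer to: {prompt}]"  # stand-in model call

def model_b(prompt: str) -> str:
    return f"[model B answer to: {prompt}]"  # stand-in model call

def score(answer: str) -> float:
    # Stand-in judge: in practice a verifier model, unit tests, or a
    # domain-specific checker would score each candidate answer.
    return float(len(answer))

def best_of(prompt: str, models: list[Callable[[str], str]]) -> str:
    # Generate one answer per model and keep only the highest-scoring one.
    candidates = [m(prompt) for m in models]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of("summarize last quarter's incidents", [model_a, model_b]))
```

The judge is the hard part in practice: the competition only buys you a better answer if the scoring function actually knows what better means for your use case.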

The bottom line: GenAI (and its ugly cousin, Agentic AI, including the many “solutions” that were found to be lacking in security and governance and were quickly discarded or reduced in scope) was a good opening line. It's time to optimize AI and move beyond GenAI, find what works for your enterprise, and explore new ways to use it.

Here are some reading resources:

  1. What if AI were just another technology? This article from Columbia University explores just that.
  2. Mustafa Suleyman is the CEO of Microsoft AI. He also co-founded DeepMind, the lab behind what later became Gemini. When he talks, we listen. He is still, IMO, a little too focused on LLMs, but he posted this with a great question: why are we limiting AI by correlating it with what we did in the past? There is a great rabbit hole to chase here.
  3. This is a fun read: the end of basic prompting. Quoting: “A new paper from MIT CSAIL introduces a simple but powerful shift: instead of forcing AI to answer once, make it reason like a system that can inspect, decompose, and verify its own work before committing.” A minimal sketch of that decompose-and-verify loop follows this list.
  4. As you probably inferred by now (small pun intended), there are two camps: doom and gloom (AI will kill us all) and boundless optimism (we haven't seen anything yet). A good place to start exploring this question, and another rabbit hole to chase, is this article by Quartz presenting a third option: can we have a conversation that ends up in a different place?
  5. Dario Amodei, Anthropic's CEO, has earned his right to be controversial. When he wrote this blog post, in which he poses a question about where GenAI can go, he made some interesting points. His search is not about different models but about finding how to make LLMs better. Quoting: “This is the effort to understand, down to individual neurons, why large language models do what they do.”
  6. Andrej Karpathy dropped some notes on X after “Claude coding quite a bit these last few weeks”. Basically? It won't replace programmers, and it can do something, but only if you know what that something is. Vibe coding has a long way to go for enterprise adoption.
  7. One more. A white paper published this week discusses what LLMs can and cannot do and which use cases matter. Great detail, good insights. Also from Anthropic.
  8. And finally, for fun, this short YouTube video shows why our agents perform horribly: we give them unclear instructions; they do what we tell them.  Like a dad making a peanut butter sandwich following his kids’ instructions.
  9. And, because I mentioned it above, check out Active Inference – from one of the brightest people I know, Denise Holt.
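As promised in item 3, here is a minimal sketch of the decompose-and-verify loop, in the same hypothetical style as the earlier snippet: split a task into sub-questions, draft an answer to each, check it, and retry only what fails before committing. The decompose, answer, and verify functions are stand-ins for illustration, not the MIT CSAIL system itself.

```python
# A sketch of the decompose-and-verify loop: split a task into parts,
# draft an answer per part, verify each draft, and retry only the
# failures before committing to a final result. All three helper
# functions are hypothetical stand-ins for real model/verifier calls.

def decompose(task: str) -> list[str]:
    # Stand-in: a real system would ask a model to split the task.
    return [f"{task} (part {i})" for i in range(1, 3)]

def answer(subtask: str) -> str:
    # Stand-in for a model call that drafts an answer.
    return f"[draft answer to: {subtask}]"

def verify(subtask: str, draft: str) -> bool:
    # Stand-in check: in practice this would be a verifier model,
    # unit tests, or domain rules.
    return subtask in draft

def solve(task: str, max_retries: int = 2) -> list[str]:
    results = []
    for sub in decompose(task):
        draft = answer(sub)
        for _ in range(max_retries):
            if verify(sub, draft):
                break
            draft = answer(sub)  # retry only the failing piece
        results.append(draft)
    return results

if __name__ == "__main__":
    for line in solve("plan the quarterly capacity review"):
        print(line)
```

The shift, as the quoted paper puts it, is structural: the model is no longer judged on its first answer, but on a checked and corrected one.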

What’s your take? We are fostering a community of executives who want to discuss these issues in depth. This newsletter is but a part of it. We welcome your feedback and look forward to engaging in these conversations.

If you are interested in exploring the full report, discussing the Board’s offering further, or have any additional questions, please contact me at [email protected], and I will be happy to connect with you.

(*) A normal distillation process produces byproducts: primary, simple ones called foreshots, and secondary, more complex and nuanced ones called aftershots. This newsletter highlights remnants from the distillation process, the “cutting room floor” elements, and shares insights to complement the monthly report.