What Do You Really Stand For in the Age of AI? | DisrupTV Ep. 438

May 8, 2026

Values, Organizational Truth, and the Context Layer — Insights from DisrupTV Episode 438

In DisrupTV Episode 438, Vala Afshar and R “Ray” Wang sat down with Paul Ingram and Jon Reed for a conversation that connected two ideas rarely discussed together:

  • Personal values as a source of leadership clarity
  • Organizational context as the missing layer in enterprise AI

Together, they explored how the future of AI may depend less on raw model capability and more on whether organizations truly understand what they stand for—and whether their systems reflect a shared version of reality.

The throughline was unmistakable:

In a world increasingly shaped by AI, clarity becomes leverage.

Values Are More Than Ethics — They’re Operating Systems

Paul Ingram’s work begins with a deceptively simple question:

What do you really stand for?

His argument is that values are not just abstract ideals or culture-deck slogans. They are practical decision-making tools that shape how leaders behave under pressure.

He illustrated this with the story of Captain Matt Feely during the 2011 Great East Japan Earthquake. Faced with a decision that technically violated protocol but aligned deeply with his values of humanity, service, and love, Feely chose to continue humanitarian aid operations.

Why?

Because he had already clarified what mattered most.

According to Paul, this is the hidden power of values:

  • They reduce ambiguity in high-stakes moments
  • They improve resilience and focus
  • They help leaders make faster, more principled decisions

Most people, however, have never fully articulated their values in a structured way. They may know fragments of what matters to them, but not enough to consistently guide action.

The “Triad” Exercise: Surfacing Hidden Drivers

One of the most fascinating moments in the episode came when Paul guided Ray Wang through a live values exercise.

Starting with something simple—favorite cities—the conversation gradually uncovered deeper motivations:

  • Liveliness
  • Serenity
  • Precision
  • Velocity
  • Purpose
  • Helping people

What looked like a casual preference discussion ultimately revealed a core operating philosophy.

Paul’s broader point is that values are often hidden beneath surface-level preferences and habits. The work of leadership is uncovering and prioritizing them intentionally.

He recommends maintaining a manageable set of core values—often around five to eight—that are:

  • Prioritized
  • Memorable
  • Actionable

Because once values become explicit, they stop being passive beliefs and become behavioral tools.

The Missing Layer in Enterprise AI: Context

If Paul Ingram focused on the inner operating system of leaders, Jon Reed focused on the outer operating system of organizations.

His thesis is blunt:

Most AI projects fail not because the models are weak, but because the organizational context is broken.

Jon describes this as the missing “context layer.”

This layer includes:

  • Shared definitions and metrics
  • Agreed-upon workflows
  • Institutional knowledge
  • Governance rules
  • Process exceptions
  • Business semantics
  • Organizational trust

Without that foundation, AI systems often amplify confusion rather than intelligence.

As Jon puts it, organizations frequently attempt to automate environments where teams still disagree on:

  • What the data means
  • Which numbers are correct
  • How processes actually work
  • What success looks like

AI doesn’t solve those problems. It exposes them.

The Hidden Cost of Enterprise Dysfunction: The “Verification Tax”

One of Jon’s strongest observations was around what he calls the verification tax.

In many enterprises, professionals spend massive amounts of time:

  • Validating data
  • Reconciling systems
  • Checking spreadsheets
  • Confirming metrics across departments

In some cases, leaders estimated that 30–70% of professional work time is spent simply verifying whether information can be trusted before decisions are made.

That creates a profound AI problem: if trust in organizational data is already weak among humans, AI systems built on that same environment inherit the dysfunction.

This reframes AI readiness entirely:

AI readiness is not primarily a model problem.
It’s a trust, governance, and organizational alignment problem.

Why LLMs Alone Won’t Solve Enterprise Complexity

Jon Reed was careful to distinguish between being anti-AI and anti-hype.

He acknowledged the enormous strengths of large language models:

  • Pattern recognition at scale
  • Natural language interfaces
  • Workflow decomposition
  • Massive productivity acceleration

But he also outlined their limitations in enterprise environments:

  • Lack of persistent organizational memory
  • Weak understanding of chronology and causality
  • Limited governance awareness
  • No inherent understanding of real-world context
  • Inability to autonomously seek missing information

His conclusion: LLMs are powerful, but incomplete.

To create meaningful enterprise outcomes, organizations need compound systems that combine:

  • LLMs
  • Deterministic workflows
  • APIs
  • Databases
  • Domain-specific models
  • Human oversight

Most importantly, they need reliable context.

AI Is Amplification Technology

One of the most important insights from the episode was that AI amplifies whatever already exists inside an organization.

That means:

  • Strong values become stronger
  • Clear systems become more efficient
  • Broken trust becomes more visible
  • Organizational confusion scales faster

R “Ray” Wang framed this pragmatically through risk and accuracy:

  • 85% accuracy in customer support may be acceptable
  • 85% in finance or healthcare may be catastrophic

The lesson is not that AI must be perfect before deployment. It’s that leaders must design systems thoughtfully around risk, observability, and human checkpoints.

Beyond Automation: AI and Human Creativity

Both Paul and Jon ultimately converged on a hopeful view of AI.

Paul argued that creativity has always been about selecting meaningful combinations from infinite possibilities. AI expands the space of possibilities dramatically—but humans still determine what matters.

AI can generate ideas.

Humans decide:

  • Which ideas align with mission and values
  • Which outcomes are ethical and sustainable
  • Which paths are worth pursuing

That’s where human agency remains essential.

Jon echoed this sentiment by warning against “AI for AI’s sake.” The real opportunity is not simply automating legacy workflows faster—it’s redesigning how organizations create value altogether.

Key Takeaways

  • Values are not soft concepts; they are practical leadership tools
  • Organizations with clear internal alignment will outperform those with fragmented truth systems
  • The “context layer” may become the most important layer in enterprise AI
  • AI readiness is fundamentally a trust and data quality challenge
  • LLMs alone are insufficient without governance, workflows, and human judgment
  • AI amplifies organizational strengths and weaknesses alike
  • The future belongs to organizations that combine human creativity with contextual intelligence

Final Thoughts

DisrupTV Episode 438 offered a powerful reminder that AI is not replacing leadership—it’s exposing it.

The organizations that thrive won’t simply have the best models. They’ll have:

  • The clearest values
  • The strongest trust systems
  • The most aligned organizational context
  • The best human judgment surrounding the technology

In many ways, AI acts like a mirror.

If leaders lack clarity about what they stand for, AI magnifies confusion.
If organizations lack a shared version of truth, AI accelerates dysfunction.

But when values and context are strong, AI becomes something far more powerful: A force multiplier for creativity, purpose, and better decision-making at scale.
