Research Notes - The Why-erarchy: Values, Purpose, Intent, Vision, and Strategy in GenAI Collaboration

AI Generated by claude-sonnet-4-6 · human-supervised · Created: 2026-03-10

Date: 2026-03-10

Search queries used:

  • “why-erarchy” OR “why hierarchy” purpose values intent strategy leadership philosophy
  • Simon Sinek Golden Circle why how what purpose vision strategy hierarchy
  • purpose hierarchy values vision mission strategy alignment organizational theory cascade
  • intent stack GenAI human AI collaboration prompt level task level goal level values level
  • AI alignment human values intent stack LLM user purpose goals hierarchy
  • context engineering intent decomposition AI agentic task hierarchy user goals sub-goals
  • intent hierarchy GenAI AI collaboration values purpose strategy alignment
  • OKR objectives key results purpose vision values cascade strategy execution alignment

Executive Summary

The “why-erarchy” is a layered vocabulary for organizing human motivation and direction, distinguishing values and purpose (permanent, identity-forming) from vision and strategy (temporal, direction-giving), with intent as the dynamic connector cutting across both. The concept appears in the Dutch appendix of David Lockie’s Intent Stack article (Feb 2026), synthesising management strategy frameworks (Sinek’s Golden Circle, Pyramid of Purpose), organizational theory, and emerging AI collaboration design. For GenAI practice, the why-erarchy provides a structural vocabulary that current AI tools lack: most operate at the task or preference layer and cannot trace actions back to human purpose or identity. The Intent Stack (Lockie 2026) is a working implementation of the why-erarchy as a five-layer hierarchy that AI agents can consume. Across the agentic alignment literature, a parallel hierarchy emerges — from static model values to dynamic prompt-level intent — suggesting the why-erarchy is not just an organizational metaphor but an emerging design pattern for AI systems that must act purposefully on behalf of humans.

Key Sources

“Intents All the Way Down” — David Lockie (Dec 2025)

  • URL: https://www.divydovy.com/2025/12/intents-all-the-way-down/
  • Type: Blog / practitioner essay
  • Key points:
    • Intent is the design primitive underlying all interfaces: “Every interface is just a clumsy way of getting you to reveal your intent”
    • Technological shift from procedural (“how”) to declarative (“what”) interaction mirrors the intent-first model
    • Drawing on Jeff Hawkins’ Thousand Brains theory: intelligence = current state + desired state + navigation = intent
    • Introduces “meta-intents” — constraints on how other intents get fulfilled (“never auto-commit purchases over €200”)
    • Warns: AI’s real value may be helping humans clarify their intents, not just execute them
  • Tenet alignment: Strong alignment with Human Intent First; introduces the concept of good friction (pausing to clarify intent) vs. bad friction (unnecessary procedural steps)
  • Quote: “It really is intents all the way down. From the moment you wake up to the apps you open to the purchases you make — everything is intent.”
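The meta-intent idea above can be pictured as a guard that runs before an ordinary intent is fulfilled, injecting good friction exactly where it is wanted. A minimal sketch only, not Lockie's code: the `Action` shape and the function names are invented; the €200 rule is the article's example.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Action:
    """A concrete step an agent proposes to take (hypothetical shape)."""
    kind: str
    amount_eur: float = 0.0


# A meta-intent constrains HOW other intents may be fulfilled.
def no_large_auto_purchases(action: Action) -> bool:
    """Lockie's example: never auto-commit purchases over €200."""
    return not (action.kind == "purchase" and action.amount_eur > 200)


def permitted(action: Action, meta_intents: list[Callable[[Action], bool]]) -> bool:
    """Proceed only if every meta-intent allows the action; otherwise
    pause and hand control back to the human (good friction)."""
    return all(check(action) for check in meta_intents)


guards = [no_large_auto_purchases]
print(permitted(Action("purchase", 49.99), guards))  # small purchase proceeds
print(permitted(Action("purchase", 350.0), guards))  # over €200: escalate to the human
```

The point of the sketch is that the guard is not itself a task: it never gets "done", it only shapes how every other intent is executed.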

“The Intent Stack: A New Design Space for Human-AI Collaboration” — David Lockie (Feb 2026)

  • URL: https://www.divydovy.com/2026/02/the-intent-stack-a-new-design-space-for-human-ai-collaboration/
  • Type: Blog / practitioner framework
  • Key points:
    • Defines the five-layer Intent Stack: Lifetime Intent (identity/values) → 5-Year Intent (strategic direction) → Annual Intent → Operational Intents → Project-specific Intents
    • Lower layers inherit context from higher layers without restating them — mirrors how humans actually think
    • Contains a Dutch-language appendix that names and defines the why-erarchy explicitly (see table below)
    • Distinguishes the stack from preference files (claude.md, Cursor rules): those handle how, the Intent Stack handles why
    • Uses Asimov’s Three Laws of Robotics as analogy: higher layers constrain lower ones, but the hierarchy is self-authored, not imposed
    • Three practical uses: AI context layer, self-authoring tool, decision filter
  • Tenet alignment: Direct operationalisation of Human Intent First and Context as Infrastructure — the stack is persistent, structured, user-owned context
  • Quote: “The tools handle preferences. The Intent Stack handles purpose. They’re complementary.”
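The five layers and their inheritance rule can be sketched as a small data structure in which each layer stores only its own statement and assembles the rest by walking up the stack. A hypothetical illustration: the layer names follow the article, while the example statements and the class and method names are invented.

```python
from dataclasses import dataclass


@dataclass
class IntentLayer:
    """One layer of a hypothetical Intent Stack."""
    name: str
    statement: str
    parent: "IntentLayer | None" = None

    def context(self) -> list[str]:
        """Assemble context top-down: lower layers inherit everything
        above them without restating it."""
        inherited = [] if self.parent is None else self.parent.context()
        return inherited + [f"{self.name}: {self.statement}"]


# The five layers from the article, with invented example statements.
lifetime = IntentLayer("Lifetime Intent", "Live in line with my core values")
five_year = IntentLayer("5-Year Intent", "Build a sustainable consultancy", parent=lifetime)
annual = IntentLayer("Annual Intent", "Launch two client offerings", parent=five_year)
operational = IntentLayer("Operational Intent", "Publish weekly research notes", parent=annual)
project = IntentLayer("Project Intent", "Draft the why-erarchy article", parent=operational)

# An agent briefed at the project layer receives the whole chain:
for line in project.context():
    print(line)
```

Read bottom-up, the chain is the "why"; read top-down, it is the context an AI agent would be given before acting at the project layer.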

“Prompt Engineering 2.0 Is the New Alignment Layer” — Yi Zhou (Oct 2025)

  • URL: https://medium.com/generative-ai-revolution-ai-native-transformation/prompt-engineering-2-0-0b2f529172d1
  • Type: Practitioner analysis
  • Key points:
    • Introduces the Agentic Alignment Stack: Model Layer (aligns to reality) → Instruction Layer (aligns to human values) → Prompt and Context Layer (aligns to intent and situation) → Cognition Layer (aligns to responsibility and reflection)
    • Each layer of the stack parallels a layer of the why-erarchy (values, intent, strategy/execution)
    • “Alignment has become a relationship, not a state” — continuous negotiation through prompts and context
    • Verbalized Sampling research (Zhang et al., 2025) shows that prompt-level design can restore generative diversity suppressed by RLHF
  • Tenet alignment: Aligns with Symbiotic Intelligence over Automation; the dynamic layer (prompt/context) is where human intent is enacted at inference time
  • Quote: “The Model Layer aligns to reality. The Instruction Layer aligns to human values. The Prompt and Context Layer aligns to intent and situation. The Cognition Layer aligns to responsibility and reflection.”
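The claim that control moves from model weights to words can be made concrete in a few lines. Only the layer names and alignment targets are taken from Zhou; the data shape below is an assumption made for illustration.

```python
# Layer names and alignment targets follow Zhou (2025); the "medium"
# column and the data shape are assumptions for illustration.
ALIGNMENT_STACK = [
    {"layer": "Model", "aligns_to": "reality", "medium": "weights"},
    {"layer": "Instruction", "aligns_to": "human values", "medium": "words"},
    {"layer": "Prompt/Context", "aligns_to": "intent and situation", "medium": "words"},
    {"layer": "Cognition", "aligns_to": "responsibility and reflection", "medium": "words"},
]


def editable_at_inference(stack: list[dict]) -> list[str]:
    """Everything above the Model layer lives in words, so it can be
    renegotiated per request without retraining the model."""
    return [layer["layer"] for layer in stack if layer["medium"] == "words"]


print(editable_at_inference(ALIGNMENT_STACK))
```

This is why "alignment as a relationship" is plausible: three of the four layers are re-authored continuously at inference time.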

The Golden Circle — Simon Sinek

  • URL: https://simonsinek.com/golden-circle/
  • Type: Leadership framework
  • Key points:
    • Three concentric layers: WHY (purpose/cause/belief) → HOW (values/processes at natural best) → WHAT (products/services/outputs)
    • “People don’t buy WHAT you do, they buy WHY you do it”
    • Inspired leaders communicate from the inside out (WHY → HOW → WHAT)
    • WHY is not profit — it is purpose, cause, or belief; the reason you get out of bed
  • Tenet alignment: Foundational antecedent to the why-erarchy; establishes that purpose (WHY) should govern strategy (HOW/WHAT) rather than the reverse
  • Conflict: Sinek’s three-layer model collapses values and purpose into a single WHY, which the why-erarchy distinguishes more finely

Pyramid of Purpose — connecteddale

  • URL: https://www.connecteddale.com/releases/wesc/pyramid_of_purpose.html
  • Type: Strategy tool documentation
  • Key points:
    • Four-layer hierarchy from base to peak: Core Values → Mission → Vision → Strategic Goals
    • “Every aspect of the organization’s strategy is interconnected and drives towards a common purpose”
    • Explicitly hierarchical: values are foundation; goals are the peak
    • Acknowledges weakness: overemphasis on hierarchy can create rigidity; static one-time strategy vs. continuous evolution
  • Tenet alignment: Aligns with Human Intent First and Always Scalable; the nested structure supports different levels of abstraction

“A Thousand Brains: A New Theory of Intelligence” — Jeff Hawkins (Numenta)

  • URL: https://numenta.com
  • Type: Neuroscience / intelligence theory
  • Key points:
    • Intelligence emerges from cortical columns building models through reference frames (maps with locations and movements)
    • Core insight: intelligence = knowing where you are + where you want to be + navigating between them
    • This recursive process happens at every level — from moving a hand to planning a career
    • Technology is evolving to mirror this architecture: nested intents decomposing into sub-intents until hitting executable actions
  • Tenet alignment: Provides a cognitive science grounding for intent-based hierarchies; supports the claim that the why-erarchy maps onto how human cognition actually works, not just organizational convention

Major Positions

The Why-erarchy (Lockie 2026 — from Dutch appendix)

The explicit framework as defined in the source material:

| Concept | Question it answers | Time horizon | Character |
| --- | --- | --- | --- |
| Values | Why does it matter? | Permanent | Most fundamental, identity-forming |
| Purpose | Why do we exist — for whom, for what? | Permanent | Existential, beyond profit |
| Intent | What do we want to achieve — consciously and unconsciously? | Situational + structural | Dynamic, multidimensional |
| Vision | Where do we want to go? | Long (5–20 years) | Inspiring, directional |
| Mission | What do we do to get there? | Medium-term | Operational, action-oriented |
| Strategy | How do we do that concretely? | Short–medium | Choices, prioritisation, resources |

Key structural insight: Values and purpose form the identity layer (permanent, foundational). Vision and strategy form the direction layer (temporal, actionable). Intent cuts across both layers — it is the living connection between who you are and what you do.

  • Proponents: David Lockie (practitioner), implicitly supported by Sinek, Pyramid of Purpose tradition
  • Core claim: Most organizational vocabularies conflate these levels or treat them as a simple cascade. The why-erarchy treats them as qualitatively different — identity vs. direction — with intent as the dynamic bridge.
  • Key arguments: (1) Values are not the same as purpose — values are the “how it matters,” purpose is the “why we exist”; (2) Intent is multidimensional and situational, not just a task — it can be implicit, temporal, conflicted; (3) Vision is aspirational direction, not a fixed goal; strategy is the choices that operationalise vision
  • Relation to site tenets: Directly maps onto Human Intent First (values/purpose as the anchor from which intent flows) and Always Scalable (different time horizons require different interaction modes)

Intent as First-Class Object in AI Systems

  • Proponents: Lockie (2026), Hawkins (Thousand Brains), Zhou (2025)
  • Core claim: Current AI tools treat tasks and preferences as first-class objects but have no representation of intent, purpose, or values. Treating intent as a first-class, hierarchical, inheritable object changes what AI assistants can do.
  • Key arguments: (1) Intents are hierarchical — “be healthy” contains “exercise” contains “do pull-ups on Tuesdays”; (2) Intents inherit context without restating it; (3) Intents can be implicit, detected from behavior; (4) Intents are temporal, not binary done/not-done; (5) Intents compose and interact across domains
  • Relation to site tenets: Operationalises Context as Infrastructure — intent becomes the persistent, structured context layer that AI systems read from and write to
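The nesting in argument (1) — "be healthy" contains "exercise" contains "do pull-ups on Tuesdays" — can be sketched as an intent tree in which any leaf can trace itself back to the purpose it serves, which is exactly the capability task-level tools lack. The class and method names here are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Intent:
    """A hypothetical first-class intent: hierarchical and inspectable."""
    what: str
    parent: "Intent | None" = None
    children: list["Intent"] = field(default_factory=list)

    def contains(self, what: str) -> "Intent":
        """Nest a sub-intent that inherits this intent as its context."""
        child = Intent(what, parent=self)
        self.children.append(child)
        return child

    def why_chain(self) -> list[str]:
        """Trace a task back up to the purpose it serves."""
        node, chain = self, []
        while node is not None:
            chain.append(node.what)
            node = node.parent
        return chain  # leaf first, root last


# The example hierarchy from the source material:
health = Intent("be healthy")
exercise = health.contains("exercise regularly")
pullups = exercise.contains("do pull-ups on Tuesdays")

print(" because ".join(pullups.why_chain()))
# do pull-ups on Tuesdays because exercise regularly because be healthy
```

A task manager stores only the leaf; an intent-first system can answer "why is this on my list?" by walking the chain.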

Agentic Alignment as Parallel Why-erarchy

  • Proponents: Yi Zhou (2025), Anthropic alignment research
  • Core claim: The alignment stack in agentic AI systems mirrors the why-erarchy: foundational model values → instruction-level behavioral orientation → prompt/context-level intent → cognition-level responsibility
  • Key arguments: Each layer resolves a different dimension of truth — reality, human values, intent/situation, responsibility; control moves from model weights to words as the stack moves up; alignment is a continuous relationship, not a static state
  • Relation to site tenets: Supports Symbiotic Intelligence — alignment is a negotiation, not a fixed constraint; and Context as Infrastructure — the prompt/context layer is the dynamic alignment layer

Key Debates

Does intent hierarchy flatten or preserve human complexity?

  • Sides: Lockie argues the stack preserves complexity by making it explicit and editable; Tom Nixon (commenter on Lockie’s LinkedIn post) argues that externalizing values into machine-readable form creates disembodied, mechanistic representations that miss embodied and gut-level dimensions of decision
  • Core disagreement: Whether structured intent representations enhance or diminish the richness of human motivation
  • Current state: Unresolved; Lockie’s own caveats acknowledge the risk of a static hierarchy vs. living human change

Values vs. Purpose: are they genuinely distinct?

  • Sides: The why-erarchy treats them as distinct layers; Sinek collapses both into WHY; many organizational frameworks treat them as interchangeable
  • Core disagreement: Values = how something matters to you (evaluative principles); Purpose = why you exist / what change you’re trying to make in the world (existential). The distinction matters: an organization can share values while having different purposes.
  • Current state: Growing consensus in strategy literature that the distinction is real and useful (Jim Collins, Pyramid of Purpose) but not yet standardised

Meta-intents: constraints on intent or a separate layer?

  • Sides: Lockie introduces meta-intents (“don’t auto-commit purchases over €200”) as constraints that govern how other intents are fulfilled. These could be seen as part of the values layer, or as a distinct constraint layer that cuts across the hierarchy
  • Core disagreement: Are meta-intents just values applied to specific domains, or are they a new category of AI governance primitive?
  • Current state: Emerging question — relevant to AI safety and user autonomy design

Historical Timeline

| Year | Event / Publication | Significance |
| --- | --- | --- |
| 1943 | Maslow’s Hierarchy of Needs | First formal hierarchy of human motivation; establishes that higher-order needs build on lower-order ones |
| 1970s–80s | Organizational mission/vision frameworks (Peter Drucker et al.) | Establishes the values → mission → vision → strategy cascade in organizational design |
| 1994 | Jim Collins & Jerry Porras, “Built to Last” | Distinguishes core ideology (values + purpose — timeless) from envisioned future (vision + strategy — temporal); direct antecedent to the why-erarchy’s identity/direction split |
| 2009 | Simon Sinek, “Start With Why” / Golden Circle TED Talk | Popularises the inside-out model: WHY drives HOW drives WHAT; establishes purpose-first as a leadership principle |
| 2021 | Jeff Hawkins, “A Thousand Brains” | Provides neuroscience grounding for hierarchical intent models; intelligence as recursive maps of current and desired states |
| Oct 2025 | Yi Zhou, “Prompt Engineering 2.0” / Agentic Alignment Stack | Maps AI alignment to a hierarchy that parallels the why-erarchy |
| Dec 2025 | David Lockie, “Intents All the Way Down” | Introduces intent as the universal design primitive; meta-intents as a governance concept |
| Feb 2026 | David Lockie, “The Intent Stack” | Introduces the five-layer Intent Stack and the why-erarchy vocabulary explicitly; proposes the Personal Context Document as persistent AI context layer |

Potential Article Angles

  1. The why-erarchy as vocabulary upgrade for GenAI designers — Most AI collaboration frameworks conflate task, goal, and purpose. The why-erarchy gives designers a more precise language. How does naming these layers separately change what’s possible in prompt design, system architecture, and AI coaching? Aligns with Human Intent First and Context as Infrastructure tenets. Would position the site as introducing vocabulary, not just frameworks.

  2. Intent as the bridge between identity and execution — The most generative insight in the why-erarchy is that intent cuts across the identity and direction layers. Intent is dynamic and multidimensional in a way that purpose and strategy are not. This could become a standalone concept article distinguishing intent from goal, task, and purpose. Would connect to existing research on intent specification.

  3. The personal context document as why-erarchy infrastructure — Lockie’s PCD is a practical answer to “how do you persist the why-erarchy across AI sessions?” This connects to the Context as Infrastructure tenet directly. Article would explore what it means to treat your values and purpose as infrastructure — the design challenges (authorship, drift, privacy, update) that arise when the identity layer becomes machine-readable.

Gaps in Research

  • No academic philosophy literature specifically theorizing “why-erarchy” as a formal concept — it is a practitioner coinage (Lockie 2026) synthesising multiple strands. The philosophical grounding (teleology, practical reason, Frankfurt’s hierarchical desires) has not been drawn in yet.
  • Limited empirical research on how human-AI collaboration changes when the system has access to a persistent intent hierarchy vs. a standard system prompt. The performance and quality claims are practitioner-level, not experimentally validated.
  • The Dutch-language table in the Lockie source is an annotation (possibly by Bram) added to the original article, not Lockie’s own text — origin should be verified before citing as Lockie’s.
  • The relationship between the why-erarchy and Harry Frankfurt’s philosophical work on “higher-order desires” (desires about desires) and “volitional necessity” (caring about what you care about) has not been explored; this is a rich connection worth investigating.
  • No treatment yet of cultural variation in how the why-erarchy layers are weighted — e.g., collectivist cultures may place purpose above individual values.

Citations

  1. Lockie, David. “Intents All the Way Down.” divydovy.com, December 9, 2025. https://www.divydovy.com/2025/12/intents-all-the-way-down/
  2. Lockie, David. “The Intent Stack: A New Design Space for Human-AI Collaboration.” divydovy.com, February 19, 2026. https://www.divydovy.com/2026/02/the-intent-stack-a-new-design-space-for-human-ai-collaboration/
  3. Zhou, Yi. “Prompt Engineering 2.0 Is the New Alignment Layer.” Agentic AI & GenAI Revolution, Medium, October 27, 2025. https://medium.com/generative-ai-revolution-ai-native-transformation/prompt-engineering-2-0-0b2f529172d1
  4. Sinek, Simon. “The Golden Circle.” simonsinek.com. https://simonsinek.com/golden-circle/
  5. Williams, Dale. “Pyramid of Purpose.” connecteddale.com. https://www.connecteddale.com/releases/wesc/pyramid_of_purpose.html
  6. Hawkins, Jeff. A Thousand Brains: A New Theory of Intelligence. Basic Books, 2021.
  7. Collins, Jim, and Jerry Porras. Built to Last: Successful Habits of Visionary Companies. HarperBusiness, 1994.