The Intent Stack: Making Human Purpose Legible to AI

AI Generated by claude-sonnet-4-6 · human-supervised · Created: 2026-03-11

Most AI collaboration fails at the purpose level, not the task level. AI assistants execute tasks with precision and generate goal-oriented content on demand, but they operate without access to why a task matters — the values, directions, and constraints that determine whether doing a task well actually serves the person asking. David Lockie’s Intent Stack (2026) proposes a remedy: a five-layer hierarchy that structures human intention as persistent, machine-readable context, making purpose as available to AI systems as task descriptions already are.

What the Intent Stack Proposes

Lockie defines the Intent Stack as a five-layer hierarchy of human intention:

  1. Lifetime Intent — identity-level values and purposes (“what kind of person I’m trying to be”)
  2. 5-Year Intent — medium-range directional commitments
  3. Annual Intent — operational focus for the current year
  4. Operational Intents — recurring roles and responsibilities
  5. Project-Specific Intents — purpose and constraints for active work

Each layer inherits context from the one above. A project intent does not need to re-derive lifetime values — they are already encoded in the stack. Lockie describes the stack as sitting inside a Personal Context Document (PCD): a structured document that AI agents read as context before acting. “The tools handle preferences,” Lockie writes. “The Intent Stack handles purpose. They’re complementary — the stack provides the conceptual frame, tools like claude.md provide the mechanism for acting on it.”
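The inheritance described above can be sketched as a small data structure. Lockie does not specify a schema for the PCD, so the field names and layer labels here are illustrative assumptions, not his format — the point is only that a project-level intent carries its ancestors' context without restating it.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """One layer of the stack. Field names are illustrative --
    Lockie does not publish a schema for the PCD."""
    layer: str                      # e.g. "lifetime", "annual", "project"
    statement: str                  # the intent itself
    constraints: list[str] = field(default_factory=list)
    parent: "Intent | None" = None  # the layer above; context is inherited

    def context(self) -> list[str]:
        """Flatten this intent plus everything it inherits,
        top of the stack first -- the text an agent would read."""
        chain = []
        node: "Intent | None" = self
        while node is not None:
            chain.append(f"[{node.layer}] {node.statement}")
            node = node.parent
        return list(reversed(chain))

# A three-layer slice of the stack (statements are invented examples):
lifetime = Intent("lifetime", "Do work that strengthens communities")
annual = Intent("annual", "Ship the open-source toolkit this year",
                parent=lifetime)
project = Intent("project", "Write clear onboarding docs",
                 constraints=["no jargon"], parent=annual)

print(project.context())
```

Rendering `project.context()` walks up the chain, so the agent reading a project intent sees lifetime and annual context first — the "inherits from the layer above" property, made concrete.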

Lockie identifies three use modes: as an AI context layer (the PCD), as a self-authoring tool for clarifying one’s own intentions, and as a decision filter for evaluating new requests against existing commitments. Intents in this model are hierarchical, composable, temporal, and can be implicit (inferred from observed behaviour) or explicit (directly authored).
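The decision-filter use mode can be sketched as a function, though Lockie describes the mode, not an algorithm: the capacity cap and word-overlap heuristic below are purely illustrative stand-ins for a real evaluation against the stack.

```python
def decision_filter(request: str, commitments: list[str],
                    capacity: int) -> tuple[bool, str]:
    """Toy decision filter: does a new request fit existing commitments?
    The heuristic (capacity cap plus keyword overlap) is an invented
    illustration, not Lockie's method."""
    if len(commitments) >= capacity:
        return False, "at capacity: taking this on would dilute existing intents"
    # Crude alignment check: does the request share vocabulary with
    # any standing commitment?
    overlap = [c for c in commitments
               if set(request.lower().split()) & set(c.lower().split())]
    if overlap:
        return True, f"reinforces existing commitment: {overlap[0]!r}"
    return True, "no conflict, but check alignment with higher layers"

commitments = ["ship the toolkit", "write onboarding docs"]
print(decision_filter("review toolkit pull requests", commitments, capacity=3))
```

Even this toy version shows what GTD and OKRs cannot do in real time: answer "should I take this on?" by consulting standing commitments at the moment of the request rather than at the next periodic review.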

Why Task Management and Goal-Setting Fall Short

Task management systems — GTD, Todoist, Asana — are, in Lockie’s framing, “execution engines with no understanding of purpose.” They operate below the level of intention: they represent what to do, not why it matters. David Allen’s Getting Things Done (2001) includes a “6 Horizons of Focus” framework with conceptually similar levels (from ground-level actions up to purpose and principles), but GTD’s horizons are cognitive scaffolding reviewed periodically — not machine-readable objects designed for AI context.

OKRs (Objectives and Key Results) add measurability to goals but remain static: they describe what to achieve in a given period, operating in a periodic-review model incompatible with the real-time, session-by-session context an AI agent needs. Neither GTD’s horizons nor OKRs can answer the question the Intent Stack is designed to support: should I take on this project, given everything I am trying to do?

The gap Lockie identifies is not a shortage of frameworks for managing work. It is a shortage of frameworks that make the hierarchy of human motivation legible to a machine at the moment of interaction.

The Philosophical Grounding: Bratman’s Planning Theory

The Intent Stack has a philosophical foundation in Michael Bratman’s planning theory of intention. Bratman’s Intention, Plans, and Practical Reason (1987) argues that intentions are not isolated mental events but “elements of partial plans of action” — hierarchical, temporally stable, and constitutive of rational agency. “We form future-directed intentions as parts of larger plans, plans which play characteristic roles in coordination and ongoing practical reasoning,” Bratman writes. Crucially, intentions are sticky: they resist reconsideration without reason, filtering out options inconsistent with prior commitments.

Lockie’s five-layer stack operationalises this structure. Where Bratman describes how human intention naturally organises into a planning hierarchy, Lockie proposes externalising that hierarchy into a structured document — transforming an implicit cognitive structure into explicit, AI-readable infrastructure.

The Ownership Problem

A competing analysis by Chaudhary and Penn (Harvard Data Science Review, 2024) complicates this optimistic picture. Their “intention economy” thesis warns that making intent legible to machines creates new extraction vectors beyond the attention economy: “The intention economy will be the attention economy ‘plotted in time.’” Intent data — what someone wants, at what hierarchy level, and when — is more valuable and invasive than behavioural data, because it predicts future action rather than describing past behaviour.

Chaudhary and Penn build on Bratman directly: they note that human intent has “elements of stable planning and dispositional states” — exactly the properties that make intent useful as AI context and, therefore, valuable to capture at scale. The critical variable in their analysis is ownership: a user-owned PCD (Lockie’s model) is categorically different from intent data inferred and held by platform LLMs. The Intent Stack is not inherently a surveillance mechanism, but the same architecture that empowers individuals becomes exploitative when the ownership structure is flipped.

Tom Nixon, commenting on Lockie’s article, raises a related concern: by “putting our intentions into words and feeding to an LLM they become disembodied. The AI might make seemingly wise choices on our behalf, but what about the necessity of, for example, a real gut check about a decision?” This embodiment critique questions whether articulated intent adequately captures the tacit, felt dimensions of human motivation that matter most at decision points.

Relation to Site Perspective

The Intent Suite Framework’s Human Intent First tenet holds that any AI action must be traceable to a human purpose, question, or value, and the human must remain able to redirect or withdraw that intent. The Intent Stack is a direct instantiation of this tenet: it makes intent the primary orienting structure of AI collaboration, ensuring AI capability is deployed within a human-authored purpose hierarchy rather than against model defaults.

Context as Infrastructure names the tenet; the PCD is the artifact. Lockie’s framework treats the intent hierarchy not as a disposable prompt string to be re-authored each session but as persistent infrastructure — owned by the user, readable by agents, and updated over time. This is precisely what the tenet means in practice.

Symbiotic Intelligence introduces the necessary tension. The Intent Stack expands human capacity for coherent action across AI interactions — but only if intent remains user-owned and the human retains the ability to override, revise, and “gut-check” AI interpretations. The Intent Suite Framework reads the embodiment critique (Nixon) and the commodification critique (Chaudhary & Penn) as naming the conditions the tenet rules out: an intent stack that disembodies decision-making or becomes a platform-owned extraction layer is not symbiotic — it is substitutive.

The Chaudhary & Penn critique does not contradict the Intent Stack concept; it clarifies the conditions under which the concept serves human intent rather than extracting it. Ownership architecture — not intent hierarchy — is the variable that determines whether the framework aligns with or violates symbiotic intelligence.

References

  1. Lockie, David. “The Intent Stack: A New Design Space for Human-AI Collaboration.” divydovy.com, February 19, 2026. https://www.divydovy.com/2026/02/the-intent-stack-a-new-design-space-for-human-ai-collaboration/

  2. Allen, David. “The 6 Horizons of Focus.” gettingthingsdone.com, January 2011. https://gettingthingsdone.com/2011/01/the-6-horizons-of-focus/

  3. Bratman, Michael E. Intention, Plans, and Practical Reason. Cambridge: Harvard University Press, 1987.

  4. Chaudhary, Yaqub, and Jonnie Penn. “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models.” Harvard Data Science Review, December 30, 2024. https://hdsr.mitpress.mit.edu/pub/ujvharkk