Research Notes - The Intent Stack Framework
Date: 2026-03-11
Search queries used:
- “intent stack” task management goal setting difference
- Intent Stack framework philosophy productivity
- intention philosophy Bratman hierarchy planning AI agent context
- GTD “horizons of focus” six levels altitude purpose goals tasks David Allen
- OKR goals tasks distinction purpose meaning alignment framework critique
- personal context document AI persistent user intent layer context engineering 2025 2026
- intention economy commodification Bratman LLMs AI intent data collection critique 2024 2025
Executive Summary
The “Intent Stack” is a 2026 framework by David Lockie that proposes a five-layer hierarchical structure for organising human intention — from lifetime identity down to project-level execution — designed to make “why” as legible to AI systems as “what” already is. It differs from task management systems (GTD, Todoist) by operating at the level of purpose and identity rather than execution steps, and from goal-setting frameworks (OKR, SMART) by being temporal, composable, and AI-readable rather than static and measurement-oriented. The framework has direct philosophical roots in Michael Bratman’s planning theory of intention (1987) and Jeff Hawkins’ hierarchical intelligence model. A competing and critical framing — the “Intention Economy” (Chaudhary & Penn, 2024) — warns that making human intent legible to machines creates new commodification risks.
Key Sources
The Intent Stack: A New Design Space for Human-AI Collaboration
- URL: https://www.divydovy.com/2026/02/the-intent-stack-a-new-design-space-for-human-ai-collaboration/
- Type: Blog post (practitioner)
- Published: February 19, 2026
- Author: David Lockie
- Key points:
- Defines the Intent Stack as a five-layer hierarchy: Lifetime Intent → 5-Year Intent → Annual Intent → Operational Intents → Project-Specific Intents
- Each layer inherits context from the one above, so project intents don’t need to re-state lifetime values
- The stack sits inside a “Personal Context Document” (PCD) that AI agents consume
- Distinguishes from task managers: “They’re execution engines with no understanding of purpose”
- Three use modes: AI context layer, self-authoring tool, decision filter
- Intents are hierarchical, inherit context, can be implicit (detected from behaviour), are temporal, and compose with each other
- Draws on Jeff Hawkins’ hierarchical intelligence model, Internal Family Systems therapy, and AI-augmented journaling practice
- Tenet alignment: Strong alignment with Human Intent First and Context as Infrastructure — the entire framework makes intent a first-class context object. Aligns with Symbiotic Intelligence in that the stack augments rather than replaces human judgment.
- Quote: “The tools handle preferences. The Intent Stack handles purpose. They’re complementary — the stack provides the conceptual frame, tools like claude.md provide the mechanism for acting on it.”
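Lockie describes the layers conceptually rather than as a data schema. As a rough sketch of the inheritance property (project intents inheriting context from the layers above), the stack could be modelled like this; every class, field, and example statement below is hypothetical, invented for illustration, not taken from the source:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    """One layer of a hypothetical intent stack; all names are illustrative."""
    layer: str                          # e.g. "lifetime", "annual", "project"
    statement: str                      # the intent itself
    parent: Optional["Intent"] = None   # the layer above, whose context is inherited

    def effective_context(self) -> list[str]:
        """Walk up the stack collecting inherited context, so a project
        intent never has to restate lifetime values explicitly."""
        chain, node = [], self
        while node is not None:
            chain.append(f"{node.layer}: {node.statement}")
            node = node.parent
        return list(reversed(chain))    # lifetime first, project last

# Example stack, top-down (contents invented for illustration)
lifetime = Intent("lifetime", "Build tools that expand human agency")
annual = Intent("annual", "Develop a personal context document workflow", parent=lifetime)
project = Intent("project", "Publish the Intent Stack essay", parent=annual)

print("\n".join(project.effective_context()))
```

The point of the sketch is the one-way link upward: a leaf intent carries only its own statement, and the full context an AI agent would consume is assembled on demand from the chain.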
LinkedIn Article (same content, summary version)
- URL: https://www.linkedin.com/pulse/intent-stack-new-design-space-human-ai-collaboration-david-lockie-ljgue
- Type: Article (cross-post of blog)
- Key points: Adds critique from commenter Tom Nixon about disembodiment risk — “by putting our intentions into words and feeding to an LLM they become disembodied. The AI might make seemingly wise choices on our behalf, but what about the necessity of, for example, a real gut check about a decision?”
- Tenet alignment: Nixon’s concern aligns with the Symbiotic Intelligence tenet’s warning against replacing human judgment.
Getting Things Done: 6 Horizons of Focus
- URL: https://gettingthingsdone.com/2011/01/the-6-horizons-of-focus/
- Type: Practitioner framework (GTD)
- Author: David Allen
- Key points:
- Six levels: Ground (actions) → Horizon 1 (projects) → Horizon 2 (areas of focus) → Horizon 3 (1-2 year goals) → Horizon 4 (3-5 year vision) → Horizon 5 (purpose and principles)
- Conceptually similar to Intent Stack but predates AI context use case by decades
- Primarily a cognitive scaffolding tool, not designed for machine readability
- Focus on “commitment” as the unifying concept across all levels
- Tenet alignment: Neutral — GTD is a human productivity system with no AI context orientation
- Contrast with Intent Stack: GTD’s horizons are reviewed periodically but not consumed by AI agents; its levels are not treated as composable, machine-readable objects
Intention, Plans, and Practical Reason (Bratman, 1987)
- URL: https://philpapers.org/rec/BRAIPA
- Type: Academic book (philosophy of action)
- Author: Michael E. Bratman
- Key points:
- Defines intentions as elements of partial plans of action rather than isolated mental states
- Plans have characteristic roles in coordination and ongoing practical reasoning
- Future-directed intentions form parts of larger plans; plans scaffold rational agency
- Intentions are stable over time and filter out options inconsistent with prior commitments
- Bratman’s “planning theory” is the dominant philosophical account of intentional agency
- Tenet alignment: Strong theoretical underpinning for Human Intent First — Bratman shows that intention has inherent structure (hierarchical, temporally stable, plan-constituting) that the Intent Stack operationalises
- Quote: “We form future-directed intentions as parts of larger plans, plans which play characteristic roles in coordination and ongoing practical reasoning”
Beware the Intention Economy: Collection and Commodification of Intent via LLMs
- URL: https://hdsr.mitpress.mit.edu/pub/ujvharkk
- Type: Academic paper (Harvard Data Science Review, Dec 2024)
- Authors: Yaqub Chaudhary, Jonnie Penn
- Key points:
- Coins the term “intention economy” — a marketplace for commodified signals of intent enabled by LLMs
- Builds on Bratman’s planning theory: human intent has “elements of stable planning and dispositional states”
- Argues that making intent legible to AI systems creates new extraction and manipulation vectors beyond the attention economy
- Warns that intent data (what someone wants, at what hierarchy level) is far more valuable and invasive than behavioural data
- Positions LLMs as intent-capture mechanisms that can infer and commodify user intentions at scale
- Tenet alignment: Conflicts with the Intent Stack’s optimistic framing; directly relevant to Symbiotic Intelligence (warning against subordinating human intent to machine extraction). Aligns with Pluralism of Perspectives by foregrounding a critical counterpoint.
- Quote: “The intention economy will be the attention economy ‘plotted in time’”
Major Positions
Position 1: Intent Hierarchy as AI Context Infrastructure (Lockie, 2026)
- Proponents: David Lockie; adjacent to context engineering discourse
- Core claim: Human intention has a natural hierarchical structure; making this structure machine-readable enables genuinely purposeful AI collaboration rather than mere task execution
- Key arguments:
- Task management operates at the wrong abstraction level — AI can execute tasks but can’t reason about whether a task serves deeper goals
- Intent inheritance reduces redundancy: project intents don’t re-derive lifetime values
- Intents are temporal and composable in ways tasks and goals are not
- The PCD/Intent Stack is user-owned infrastructure, not platform-owned data
- Relation to site tenets: Strong alignment with tenets 1 and 2 — positions intent as the primary orienting structure, and context (the PCD) as infrastructure rather than disposable prompt text
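The “decision filter” use mode (should I take this project?) can be sketched in the same spirit. This toy version uses tag overlap as a deliberately simplistic stand-in for the reasoning an AI agent or the person would actually apply; the function, tags, and pass/fail logic are all invented for illustration:

```python
# A toy "decision filter": accept a candidate project only if it connects
# to every layer of the intent stack, from lifetime values downward.
# Tag overlap is a crude proxy for real alignment reasoning.

def passes_filter(project_tags: set[str], stack_tags: list[set[str]]) -> bool:
    """Return True if the project shares at least one tag with every layer."""
    return all(project_tags & layer for layer in stack_tags)

stack = [
    {"agency", "tools"},      # lifetime intent tags (illustrative)
    {"writing", "tools"},     # annual intent tags (illustrative)
]

print(passes_filter({"tools", "essay"}, stack))   # aligned at every layer
print(passes_filter({"consulting"}, stack))       # fails the lifetime layer
```

A task manager can answer “is this actionable?”; the filter above, however crude, illustrates the different question the stack is meant to answer: “does this serve the layers above it?”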
Position 2: Intention as Planning Structure (Bratman, 1987)
- Proponents: Michael Bratman; widely accepted in philosophy of action
- Core claim: Intentions are not discrete mental acts but elements of partial, hierarchical plans that scaffold rational agency over time
- Key arguments:
- Intentions resist reconsideration without reason (they are “sticky”)
- Plans are partial and hierarchical by nature
- Intending is constitutively different from desiring or hoping — it commits an agent
- Agent rationality requires coherence across the planning hierarchy
- Relation to site tenets: Provides philosophical grounding for why intent-as-infrastructure is not just a UX idea but reflects genuine structure of human agency
Position 3: Task Management and Goal-Setting Are Insufficient (GTD critique)
- Proponents: Implicit in GTD literature; Lockie makes this explicit
- Core claim: Existing productivity systems flatten intent into either tasks (actionable, context-free) or goals (measurable, time-bounded) and lose the layered structure of human motivation
- Key arguments:
- GTD’s 6 Horizons recognise the levels exist but don’t make them machine-readable or composable
- OKRs (Objectives and Key Results) add measurability but still operate in a static, periodic-review model incompatible with real-time AI context
- Neither GTD nor OKR can answer “should I take this project?” — they describe what to do, not why it matters
- Relation to site tenets: Resonates with Context as Infrastructure — the gap is that context (purpose, values, identity) is treated as disposable or implicit rather than structured and persistent
Position 4: Intent Legibility as Commodification Risk (Chaudhary & Penn, 2024)
- Proponents: Yaqub Chaudhary, Jonnie Penn (Harvard Data Science Review)
- Core claim: Making intent machine-legible enables a new extraction economy; the Intent Stack, if not user-owned, could become the most invasive data infrastructure yet built
- Key arguments:
- Intent data is more valuable than behavioural data — it predicts future action, not just past behaviour
- LLMs function as intent-capture infrastructure when deployed at scale by platforms
- The transition from attention economy to intention economy represents a qualitative escalation in surveillance capitalism
- User-owned vs. platform-owned intent data is the critical ethical distinction
- Relation to site tenets: Partial conflict with Human Intent First — if intent is captured by platforms rather than held by users, the tenet is violated rather than served. Aligns with Pluralism of Perspectives as essential critical counterweight to Lockie’s optimism.
Key Debates
Debate 1: Who Owns the Intent Stack?
- Sides: Lockie argues for user-owned PCD as infrastructure (like a personal API). Chaudhary/Penn warn that any intent-legibility mechanism is a commodification opportunity for platforms.
- Core disagreement: Whether making intent machine-readable is primarily an empowerment tool or a surveillance surface
- Current state: Unresolved; shaped by whether intent data lives in user-owned documents (Lockie’s model) or is inferred and captured by platform LLMs
Debate 2: Can Intent Be Adequately Represented in Text?
- Sides: Lockie assumes rich intent representation is achievable through structured documents + AI reflection. Nixon (commenter) and embodied cognition traditions question whether articulated intent captures what actually matters in decisions.
- Core disagreement: Whether intent is fundamentally propositional (can be written down) or includes embodied, tacit, and pre-reflective dimensions that resist textual capture
- Current state: Philosophically unresolved; relevant to phenomenology (Merleau-Ponty, Dreyfus on skill and embodiment)
Debate 3: Intent Stack vs. Goal-Setting Systems — Complementary or Competing?
- Sides: GTD community treats horizons as complementary to task lists. OKR practitioners treat goals as the purpose layer. Lockie argues both operate at the wrong abstraction level for AI-native collaboration.
- Core disagreement: Whether existing goal-setting frameworks just need an AI layer, or whether AI-native intent requires a fundamentally different ontology
- Current state: Emerging — the Intent Stack is new (2026); no systematic comparison with GTD/OKR in AI context yet exists
Historical Timeline
| Year | Event/Publication | Significance |
|---|---|---|
| 1942 | Asimov’s Three Laws of Robotics (referenced by Lockie) | Hierarchical priority rules for robot behaviour — structural precursor to intent inheritance logic |
| 1987 | Bratman, Intention, Plans, and Practical Reason | Establishes philosophical foundation: intentions are plan-elements, hierarchical, temporally stable |
| 2001 | David Allen, Getting Things Done | Popularises the “Horizons of Focus” — first widely-used hierarchical intent framework in productivity |
| 2021 | Jeff Hawkins, A Thousand Brains | Hierarchical intelligence model that Lockie cites as theoretical basis for intent inheritance |
| 2024 | Chaudhary & Penn, “Beware the Intention Economy” (HDSR) | First major academic treatment of intent commodification via LLMs; introduces “intention economy” |
| Late 2025 | David Lockie begins developing Intent Stack | Practitioner origin from daily use of AI-augmented journaling and PKM |
| Feb 2026 | Lockie publishes “The Intent Stack” | First public articulation of the framework; archived on Arweave |
Potential Article Angles
Based on this research, an article could:
- “Intent as Infrastructure” (Tenet 1 + 2 alignment) — Explore why structuring human intent as persistent, machine-readable infrastructure is qualitatively different from task lists and goal frameworks. Frame via Bratman: intent is already structured; the question is whether that structure is made legible. Address the commodification risk honestly.
- “The Gap Between Goals and Intent” — Why productivity systems (GTD, OKR) stop short of capturing what actually drives action. The distinction between what to do, what to achieve, and why it matters — and why AI collaboration requires the third layer. Could anchor in Bratman’s planning theory.
- “The Intent Stack as Symbiotic Infrastructure” — Frame using Tenet 3: intent-aware AI as a tool for expanding human understanding and coherence rather than replacing judgment. Must engage with embodiment critique (Nixon’s “gut check” concern) and commodification risk.
When writing the article, follow obsidian/project/writing-style.md for:
- Named-anchor summary technique for forward references
- Background vs. novelty decisions (what to include/omit)
- Tenet alignment requirements
- LLM optimization (front-load important information)
Gaps in Research
- No systematic academic comparison of intent-stack approaches specifically in AI context (area is new)
- Limited empirical research on how well articulated intent actually predicts AI-human collaboration quality
- Phenomenological critique of intent-as-text (Dreyfus, Merleau-Ponty) not yet applied to this domain
- No documented case studies or evaluations of Intent Stack-like frameworks in practice beyond Lockie’s own use
- The relationship between Internal Family Systems (IFS) therapy — which Lockie cites — and intent hierarchy warrants further exploration
- Bratman’s later work on Shared Agency (2014) may be relevant for multi-agent and team intent contexts
Citations
Lockie, David. “The Intent Stack: A New Design Space for Human-AI Collaboration.” divydovy.com, February 19, 2026. https://www.divydovy.com/2026/02/the-intent-stack-a-new-design-space-for-human-ai-collaboration/
Lockie, David. “The Intent Stack: A New Design Space for Human-AI Collaboration.” LinkedIn Pulse, February 2026. https://www.linkedin.com/pulse/intent-stack-new-design-space-human-ai-collaboration-david-lockie-ljgue
Allen, David. “The 6 Horizons of Focus.” gettingthingsdone.com, January 2011. https://gettingthingsdone.com/2011/01/the-6-horizons-of-focus/
Bratman, Michael E. Intention, Plans, and Practical Reason. Cambridge: Harvard University Press, 1987. https://philpapers.org/rec/BRAIPA
Chaudhary, Yaqub, and Jonnie Penn. “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models.” Harvard Data Science Review, December 30, 2024. https://hdsr.mitpress.mit.edu/pub/ujvharkk
Hawkins, Jeff. A Thousand Brains: A New Theory of Intelligence. New York: Basic Books, 2021.