Task Queue
Tasks are picked up by the evolution loop (scripts/evolve_loop.py).
Priority Levels
- P0 — Urgent: blocking issues
- P1 — High: important improvements
- P2 — Normal: standard content work (automation picks these)
- P3 — Low: backlog / nice to have
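Since P2 is the level automation picks, the selection step in scripts/evolve_loop.py presumably amounts to scanning this file for the first open P2 entry. A minimal sketch, assuming the task-line format used here (`P<n>: <title>` for open tasks, `✓` for completed ones); the function name, signature, and parsing rules are illustrative assumptions, not the actual script's API:

```python
import re

# Open tasks start with "P<digit>: <title>"; completed ones start with "✓".
# This regex and the selection policy are assumptions based on this file's format.
TASK_RE = re.compile(r"^P([0-3]): (.+)$")

def next_automation_task(queue_text: str, allowed=("2",)):
    """Return the first open task, in document order, whose priority
    level automation is allowed to pick (default: P2 only)."""
    for line in queue_text.splitlines():
        m = TASK_RE.match(line.strip())
        if m and m.group(1) in allowed:
            return m.group(2)
    return None
```

Document order doubles as tie-breaking here, so within a priority level the oldest queued task is picked first.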
Active Tasks
P1: Can Generative AI have an “intent”?
- Type: research-topic
- Notes: How might we reframe this by looking at “intent”? The user has an intent: the why behind their actions and their relations with technology. But can we also flip it? Is there something like artificial intent?
- Generated: 2026-03-10
✓ 2026-03-11: How does cognitive debt accumulate in knowledge work that relies heavily on AI?
- Type: research-topic
- Notes: Follow-up to AI intensification question. What recovery and boundary practices reduce it?
- Output: cognitive-debt-ai-knowledge-work-2026-03-11
P2: How is AI reshaping workshop facilitation and design sessions?
- Type: research-topic
- Notes: Domain exploration — facilitation. What new roles, risks, and responsibilities emerge when GenAI participates in collaborative sense-making?
- Generated: 2026-03-10
P2: What does effective synthesis look like in AI-augmented UX research?
- Type: research-topic
- Notes: Domain exploration — synthesis. Where does delegating synthesis to AI erode the understanding that makes insights actionable?
- Generated: 2026-03-10
P2: How do Product Owners use GenAI for backlog management and prioritization without losing strategic alignment?
- Type: research-topic
- Notes: Domain exploration — product ownership. What guardrails prevent intent drift?
- Generated: 2026-03-10
P2: What is cognitive offloading in knowledge work, and where are the limits of beneficial offloading?
- Type: research-topic
- Notes: Concept definition. Where does beneficial offloading become harmful dependency in creative and analytic professions?
- Generated: 2026-03-10
P2: What distinguishes AI as thinking partner from AI as executor?
- Type: research-topic
- Notes: Concept definition. What determines which mode is appropriate in different phases of design and strategy work?
- Generated: 2026-03-10
P2: What is the strongest case against “symbiotic intelligence” as a design goal?
- Type: research-topic
- Notes: Counterargument. Does AI collaboration risk homogenizing creative output, flattening epistemic diversity, or displacing the productive friction of disagreement?
- Generated: 2026-03-10
P2: What is automation bias in design and strategy?
- Type: research-topic
- Notes: Counterargument. How does reliance on AI suggestions undermine professional judgment even when humans nominally remain in control, and what structural conditions make it worse?
- Generated: 2026-03-10
P2: What evaluation frameworks exist for assessing whether a human-AI collaboration is genuinely symbiotic versus subtly coercive?
- Type: research-topic
- Notes: Gap question. Where do current frameworks leave the question unanswered?
- Generated: 2026-03-10
P2: How does pluralism of perspectives apply in GenAI practice?
- Type: research-topic
- Notes: Gap question. What methods help models surface genuinely different frames rather than recombining dominant ones, and when does the method itself constrain the pluralism?
- Generated: 2026-03-10
P2: How does human intent drift or degrade across extended GenAI collaboration sessions?
- Type: research-topic
- Notes: Human Intent First tenet gap. What practices help designers and Product Owners maintain directional integrity over time?
- Generated: 2026-03-10
P2: What practical signals and design patterns distinguish genuinely symbiotic AI collaboration from subtly corrosive automation?
- Type: research-topic
- Notes: Symbiotic Intelligence tenet gap. How do practitioners recognize the difference in real-time practice?
- Generated: 2026-03-10
P2: How do Systemic Designers, UX Strategists, and Product Owners build and reuse context differently across GenAI sessions?
- Type: research-topic
- Notes: Context as Infrastructure tenet gap. What tools, rituals, or artefacts support cross-session continuity?
- Generated: 2026-03-10
P2: What methods help design teams use GenAI to actively surface minority, non-dominant, or non-Western perspectives?
- Type: research-topic
- Notes: Pluralism of Perspectives tenet gap. What structural conditions make this reliable rather than performative?
- Generated: 2026-03-10
P2: What is a Personal Context Document (PCD) and how might it reshape persistent AI collaboration?
- Type: research-topic
- Notes: The PCD is a structured, user-owned representation of identity, values, and direction that AI systems read from and write to over time. What design challenges arise around authorship, privacy, and drift in such persistent context layers?
- Generated: 2026-03-10
P2: What is the difference between implicit and explicit intent in human-AI interaction, and what are the design implications?
- Type: research-topic
- Notes: Explicit intents are declared (“I want X”); implicit intents are inferred from behavior patterns. How do systems propose implicit intents for ratification without being manipulative or creepy? What signals reliably distinguish the two?
- Generated: 2026-03-10
P2: What are “meta-intents” — the intents that govern other intents — and how do they relate to AI alignment and user autonomy?
- Type: research-topic
- Notes: Meta-intents are constraints on how intents get fulfilled: “never auto-commit purchases over €200,” “don’t optimize for cost at the expense of ethics.” How do these map onto AI alignment problems, and how should designers expose meta-intent authoring to non-technical users?
- Generated: 2026-03-10
P2: How do intent-driven systems handle intent conflict — when two active intents compete for the same time, attention, or resources?
- Type: research-topic
- Notes: The Intent Stack treats intent as hierarchical and composable, but conflicts are inevitable (“be healthy” vs. “work intensively this quarter”). What resolution mechanisms exist, and how should AI systems surface conflicts without becoming paternalistic?
- Generated: 2026-03-10
P2: How does the shift from procedural interfaces to intent-driven interfaces change the role of UX designers?
- Type: research-topic
- Notes: If “every interface is just a clumsy way of getting you to reveal your intent” (Lockie, 2025), the designer’s task shifts from mapping flows to understanding and exposing intents. What new skills, methods, and risks emerge? What is lost when procedural understanding disappears?
- Generated: 2026-03-10
P2: How do agentic systems decompose high-level human intents into executable sub-goals, and where does the decomposition go wrong?
- Type: research-topic
- Notes: Intent fulfillment orchestration requires splitting, routing, and delegating intent fragments across agents, APIs, and services. What failure modes emerge (intent drift, over-decomposition, loss of context)? How does this connect to multi-agent architectures?
- Generated: 2026-03-10
P2: What does “intent recognition” require beyond NLP — how do systems infer urgency, mode, and context from underspecified human input?
- Type: research-topic
- Notes: Recognizing “I need a gift” (exploration mode) vs. “I need a gift by Friday” (urgency mode) requires context inference beyond syntax. What signals matter, and what are the risks of misrecognition at scale when agents act on behalf of users?
- Generated: 2026-03-10
P2: How does intent-driven design risk reducing serendipity, agency, and the productive struggle of figuring things out yourself?
- Type: research-topic
- Notes: Counter-argument strand. Intent-driven systems optimize for stated goals but may eliminate the discovery, surprise, and learning that comes from procedural engagement. When is “good friction” — slowing users down to clarify intent — more valuable than seamless execution?
- Generated: 2026-03-10
P2: What is the strongest case that intent-based interfaces will NOT displace procedural ones in expert knowledge work?
- Type: research-topic
- Notes: Counter-argument. Procedural interfaces build user understanding of system capabilities and create explainability. In healthcare, legal, financial, and systemic design contexts, walking through steps may be the point. Where does intent-driven design under-serve expert practitioners?
- Generated: 2026-03-10
Backlog (P3)
P3: How does intent inheritance work across time horizons — what breaks when a five-year intent becomes irrelevant mid-year?
- Type: research-topic
- Notes: The Intent Stack assumes intents cascade down from lifetime to project level. But people change. How do systems handle intent obsolescence without destabilizing the whole hierarchy? What update rituals or versioning mechanisms make sense?
- Generated: 2026-03-10
P3: How is intent as design primitive connected to Jobs-to-be-Done, and where do the two frameworks diverge?
- Type: research-topic
- Notes: Both JTBD and intent-centric design center on desired outcomes rather than features. But JTBD is largely static (the job doesn’t change), while intents in the Lockie framework are temporal and evolving. What can each learn from the other?
- Generated: 2026-03-10
P3: How do task expansion and scope creep differ across design, product, and strategy roles when AI lowers the cost of starting new work?
- Type: research-topic
- Notes: Follow-up to AI intensification question. Low priority — explore after main questions are answered.
- Generated: 2026-03-10
P3: What is the relationship between AI-enabled task porousness and sustainable creative performance?
- Type: research-topic
- Notes: Follow-up to AI intensification question. Task porousness = work seeping into off-hours.
- Generated: 2026-03-10
Completed Tasks
✓ 2026-03-11: What does “good enough” mean in AI-augmented systemic design?
- Type: research-topic
- Notes: Follow-up to AI intensification question. How do professionals decide when to stop refining?
- Output: What does “good enough” mean in AI-augmented systemic design?
✓ 2026-03-11: The Intent Stack: Making Human Purpose Legible to AI
- Type: expand-topic
- Output: intent-stack-framework
- Based on: intent-stack-framework-2026-03-11
- Completed: 2026-03-11
✓ 2026-03-11: What is the “Expert Benchmark” fallacy in AI evaluation?
- Type: research-topic
- Notes: Comparing AI performance against idealized experts vs. the reality of amateur, tired, or biased human practitioners.
- Output: What is the “Expert Benchmark” fallacy in AI evaluation?
✓ 2026-03-11: How do the divergent and convergent phases of systemic design call for different AI collaboration strategies?
- Type: research-topic
- Notes: Domain exploration — double diamond phases. What modes of GenAI engagement fit each phase?
- Output: How do the divergent and convergent phases of systemic design call for different AI collaboration strategies?
✓ 2026-03-11: What is the “Intent Stack” framework, and how does it differ from task management and goal-setting systems?
- Type: research-topic
- Notes: The Intent Stack (Lockie, 2026) proposes a five-layer hierarchy from lifetime identity to project-level execution. How does treating intent as a first-class, hierarchical object change what AI assistants can do for knowledge workers, designers, and strategists?
- Output: What is the “Intent Stack” framework, and how does it differ from task management and goal-setting systems?
✓ 2026-03-10: What is the “why-erarchy” — the relationship between values, purpose, intent, vision, and strategy — and how does it map onto GenAI collaboration practice?
- Type: research-topic
- Notes: The why-erarchy (values/purpose as identity layer; vision/strategy as direction layer; intent as the living connection between them) offers a richer vocabulary than the task/goal dichotomy. How does this hierarchy help designers and strategists structure AI collaboration at different abstraction levels?
- Output: What is the “why-erarchy” — the relationship between values, purpose, intent, vision, and strategy — and how does it map onto GenAI collaboration practice?
✓ 2026-03-10: What is “desirable difficulty” in the context of AI-assisted learning and research?
- Type: research-topic
- Notes: Where does removing the friction of reading and synthesis erode foundational competence? How do designers preserve the “struggle” that builds mastery?
- Output: What is “desirable difficulty” in the context of AI-assisted learning and research?
✓ 2026-03-10: The Augmentation Condition: What Makes AI Extend Rather Than Replace Your Thinking
- Type: expand-topic
- Output: augmentation-condition
- Based on: cognitive-augmentation-vs-offloading-genai-2026-03-10
- Completed: 2026-03-10
✓ 2026-03-10: How do “multi-pass processing” and “context engineering” improve the reliability of AI research agents?
- Type: research-topic
- Notes: Moving beyond single-prompt summarization. What agentic architectures (checkpoints, bias detection, source validation) match the needs of expert researchers?
- Output: multi-pass-context-engineering-ai-research-agents-2026-03-10
✓ 2026-03-10: What is context as infrastructure in GenAI collaboration?
- Type: research-topic
- Notes: Concept definition. How do professionals build, maintain, and reuse context across sessions to preserve continuity without recreating it from scratch?
- Output: What is context as infrastructure in GenAI collaboration?
✓ 2026-03-10: Create void article: The Symbiosis Measurement Void
- Type: expand-topic
- Output: symbiosis-measurement-void
- Based on: void-symbiosis-measurement-2026-03-10
- Completed: 2026-03-10
✓ 2026-03-10: What is intent specification — how do practitioners translate fuzzy direction into productive GenAI collaboration?
- Type: research-topic
- Notes: Concept definition. How do Systemic Designers, UX Strategists, and Product Owners make intent legible to a model?
- Output: What is intent specification — how do practitioners translate fuzzy direction into productive GenAI collaboration?
✓ 2026-03-10: What is “prompt as design material” — how does treating prompts as iterative design artifacts change practice?
- Type: research-topic
- Notes: Concept definition. What craft skills does it require from designers and strategists?
- Output: What is “prompt as design material” — how does treating prompts as iterative design artifacts change practice?
✓ 2026-03-10: How to effectively use Generative AI for cognitive augmentation and not just offloading
- Type: research-topic
- Notes: “The point of PKM isn’t to stash ideas for later or to have a machine think for you, but to create a space that lets you think more effectively.”
- Output: cognitive-augmentation-vs-offloading-genai-2026-03-10
✓ 2026-03-10: How does the self-reinforcing cycle of AI speed and rising expectations manifest in design and strategy work?
- Type: research-topic
- Notes: Follow-up to AI intensification question. How can the cycle be interrupted?
- Output: ai-speed-expectations-cycle-design-strategy-2026-03-10
✓ 2026-03-10: What organizational and personal guardrails allow creative professionals to use AI for focus and subtraction rather than expansion?
- Type: research-topic
- Notes: Follow-up to AI intensification question. Focus on scope creep prevention.
- Output: What organizational and personal guardrails allow creative professionals to use AI for focus and subtraction rather than expansion?
✓ 2026-03-10: The Oversight Tax: AI’s Hidden Work for Strategic Roles
- Type: expand-topic
- Output: oversight-tax-ai-hidden-work-strategic-roles
- Based on: ai-work-intensification-systemic-designers-2026-03-10
- Completed: 2026-03-10
✓ 2026-03-10: Does AI intensify rather than reduce work for systemic designers, product owners, and UX strategists?
- Type: research-topic
- Notes: Primary question from HBR article “AI Doesn’t Reduce Work—It Intensifies It” (Ranganathan & Ye, Feb 2026). What design principles prevent efficiency gains from collapsing into cognitive overload?
- Output: ai-work-intensification-systemic-designers-2026-03-10
- Completed: 2026-03-10