Human-AI Collaboration
| Title | Created | Modified |
|---|---|---|
| Most AI collaboration fails at the purpose level, not the task level. AI assistants execute tasks with precision and generate goal-oriented content on demand, but they operate without access to why a task matters — the values, directions, and constraints that determine whether doing a task well actually serves the person asking. David Lockie’s Intent Stack (2026) proposes a remedy: a five-layer hierarchy that structures human intention as persistent, machine-readable context, making purpose as available to AI systems as task descriptions already are. | 2026-03-11 | 2026-03-11 |
| Research: What does “good enough” mean in AI-augmented systemic design? Date: 2026-03-11 Search queries used: “satisficing ‘good enough’ design philosophy Herbert Simon bounded rationality” “‘good enough’ AI-augmented design systems adequacy criteria” “systemic design ‘wicked problems’ good enough solution threshold adequacy” “Rittel Webber wicked problems ‘good enough’ solution stopping rule design adequacy” “satisficing bounded rationality ‘aspiration level’ design quality adequacy professional when to stop” “wicked problems ‘no stopping rule’ Rittel Webber satisficing design adequacy good enough” “Donald Schon ‘reflective practitioner’ design judgment sufficiency professional tacit knowing” “AI augmented design ‘good enough’ quality judgment professional practice stopping criteria 2024 … | 2026-03-11 | 2026-03-11 |
| Research: The “Expert Benchmark” Fallacy in AI Evaluation Date: 2026-03-11 Search queries used: “Expert Benchmark fallacy AI evaluation critique” “AI benchmark human expert performance misleading evaluation problems” “AI surpasses human experts benchmark critique misleading capability claims philosophy” “benchmark saturation AI Goodhart’s law evaluation gaming problems 2024 2025” “Emily Bender Arvind Narayanan AI benchmark validity problems human level performance critique” “Melanie Mitchell AI benchmark broken critique generalization reasoning” Executive Summary The “Expert Benchmark Fallacy” is not yet a formally named philosophical concept, but it describes a well-documented epistemic error at the heart of AI capability claims. It occurs when AI systems score at or above “human expert level” on a narrow benchmark test, and this score is then treated as evidence of … | 2026-03-11 | 2026-03-11 |
| Research: How Does Cognitive Debt Accumulate in Knowledge Work That Relies Heavily on AI? Date: 2026-03-11 Search queries used: “cognitive debt AI knowledge work automation skill atrophy” “cognitive offloading AI tools skill atrophy knowledge workers” “extended mind theory AI cognitive offloading Clark Chalmers critique” “MIT ‘Your Brain on ChatGPT’ cognitive debt research 2025” “automation bias AI dependency knowledge workers decision making” “extracted cognition OR cognitive atrophy AI professionals expertise erosion 2024 2025” “Microsoft study AI critical thinking knowledge workers 2025 cognitive offloading” “‘hollowed mind’ OR ‘extracted mind’ AI cognition philosophy Synthese 2025” Executive Summary Cognitive debt is a term coined by MIT Media Lab researchers (2025) to describe the long-term neural and behavioral costs that accumulate when AI systems … | 2026-03-11 | 2026-03-11 |
| Research: Divergent and Convergent Phases of Systemic Design — Different AI Collaboration Strategies Date: 2026-03-11 Search queries used: “double diamond design divergent convergent phases AI collaboration strategies” “systemic design AI generative AI divergent thinking exploration” “convergent thinking AI decision-making design synthesis AI tools” “human-AI collaboration creative process design phases modes of engagement” “AI role design research discovery phase sensemaking qualitative research” “AI fixation design creativity divergent thinking risks constraints” “systemic design double diamond convergent synthesis AI evaluation decision support” “AI generative creativity divergent convergent thinking research 2024 cognitive load fixation risks” “human creativity LLMs divergent convergent thinking homogenization conformity 2024 2025” Executive Summary The Double Diamond (Design … | 2026-03-11 | 2026-03-11 |
| Research: How to Effectively Use Generative AI for Cognitive Augmentation and Not Just Offloading Date: 2026-03-10 Search queries used: “cognitive augmentation vs cognitive offloading generative AI research 2025” “AI cognitive augmentation extended mind theory philosophy Andy Clark” “generative AI active engagement vs passive delegation thinking skills 2025” “Microsoft study AI critical thinking erosion knowledge workers 2025” “desirable difficulty interleaving learning AI assistance productive struggle research” “AI as thinking partner Socratic method active retrieval spaced repetition metacognition 2025” “cognitive offloading philosophy definition benefits limitations Risko Gilbert 2016” “PKM personal knowledge management AI augmentation thinking tools” Executive Summary The central tension in human-AI collaboration is between cognitive offloading — using AI to reduce mental effort — and … | 2026-03-10 | 2026-03-11 |
| Research: Does AI Intensify Rather Than Reduce Work for Systemic Designers, Product Owners, and UX Strategists? Date: 2026-03-10 Search queries used: “AI work intensification systemic designers UX strategists product owners labor paradox” “automation paradox AI knowledge workers more work not less cognitive load” “AI design tools UX research overhead new skills required designers 2025” “Jevons paradox AI knowledge work design product management expanded scope” “systemic design AI role expansion service design futures thinking AI tools overhead” “product owner AI scope creep requirements validation overhead AI-generated user stories” “technology labor intensification philosophical critique Marx automation paradox Braverman” “UX strategist AI tools role expansion ethical review prompt engineering new competencies 2025” “second-order effects AI adoption knowledge workers decision quality … | 2026-03-10 | 2026-03-11 |
| The “Symbiotic Intelligence over Automation” tenet requires that symbiosis be distinguishable from sophisticated substitution — but this distinction cannot be reliably verified. The question of whether AI collaboration builds human capability or hollows it out hits four structural barriers that prevent clean resolution even in principle. More data, longer studies, or better instruments will not close this gap. It is a permanent limit on what the framework can confirm about itself. | 2026-03-10 | 2026-03-10 |
| Generative AI does not reduce work for systemic designers, product owners, and UX strategists — it intensifies it. An 8-month field study at a 200-person US technology company (Ranganathan & Ye, HBR, 2026) and a parallel analytical framework (Mann, CMR, 2026) both document the same pattern: AI adoption produces task expansion, boundary erosion between work and rest, and a rising category of invisible labor the research calls the oversight tax — the time strategic roles spend reviewing, validating, correcting, and ethically auditing AI-generated artifacts. This work is unmeasured, uncompensated, and structurally increasing. | 2026-03-10 | 2026-03-10 |
| AI augmentation of human thinking is not automatic. Multiple studies confirm that passive reliance on generative AI correlates with measurable decline in critical thinking capacity, while some collaborative modes preserve or expand it. The difference is not whether AI is used but how the human-AI interaction is structured. Three conditions — sustained self-confidence, Socratic interaction mode, and system-level constraints on availability — are what separate mind-extending from mind-replacing AI use. | 2026-03-10 | 2026-03-10 |
| Research: The Why-erarchy — Values, Purpose, Intent, Vision, and Strategy in GenAI Collaboration Date: 2026-03-10 Search queries used: “why-erarchy” OR “why hierarchy” purpose values intent strategy leadership philosophy Simon Sinek Golden Circle why how what purpose vision strategy hierarchy purpose hierarchy values vision mission strategy alignment organizational theory cascade intent stack GenAI human AI collaboration prompt level task level goal level values level AI alignment human values intent stack LLM user purpose goals hierarchy context engineering intent decomposition AI agentic task hierarchy user goals sub-goals intent hierarchy GenAI AI collaboration values purpose strategy alignment OKR objectives key results purpose vision values cascade strategy execution alignment Executive Summary The “why-erarchy” is a layered vocabulary for organizing human motivation and direction, distinguishing values and purpose (permanent, identity-forming) … | 2026-03-10 | 2026-03-10 |
| Research: Intent Specification — How Practitioners Translate Fuzzy Direction into Productive GenAI Collaboration Date: 2026-03-10 Search queries used: intent specification GenAI collaboration practitioners translate direction prompt engineering intent alignment design practice LLM collaboration translating fuzzy requirements AI specification UX design strategy intent specification human AI collaboration design thinking sensemaking vague direction GenAI prompt clarification techniques practitioners UX product design 2024 2025 intent alignment specification LLM prompting system design 2024 2025 research HCI “intent specification” OR “intent clarification” GenAI design strategy practitioner strategies making intent legible AI prompting context specification design product 2025 context engineering intent specification human AI collaboration practitioners 2025 UX professionals perceptions AI-assisted design vibe coding intent expression natural language 2025 … | 2026-03-10 | 2026-03-10 |
| Research: Context as Infrastructure in GenAI Collaboration Date: 2026-03-10 Search queries used: “context as infrastructure GenAI collaboration knowledge management” “context engineering LLM persistent context sessions AI workflow” “context window management AI collaboration professional knowledge workers designers” “"context as infrastructure" OR "contextual infrastructure" AI language model” “extended mind theory Andy Clark distributed cognition context AI collaboration philosophy” “knowledge work context continuity cross-session AI collaboration design strategy professionals” “context amnesia LLM stateless sessions professional knowledge workers continuity rituals artefacts” Executive Summary Context as infrastructure is the practice of treating the information that grounds an AI collaboration — sources, histories, constraints, domain knowledge, role definitions — as a maintained, … | 2026-03-10 | 2026-03-10 |
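The Intent Stack and why-erarchy notes above both describe layered, machine-readable purpose context. As a rough illustration of what "persistent, machine-readable" could mean in practice, the sketch below encodes the why-erarchy's five layers (values, purpose, vision, strategy, intent) as a reusable session preamble. The class, field, and method names are hypothetical illustrations, not taken from Lockie's published framework:

```python
from dataclasses import dataclass

# Hypothetical sketch: layer names follow the why-erarchy vocabulary;
# everything else (class, fields, rendering format) is an assumption.
@dataclass
class IntentStack:
    values: list[str]   # permanent, identity-forming commitments
    purpose: str        # why the work matters at all
    vision: str         # the future state the work points toward
    strategy: str       # the current bet on how to get there
    intent: str         # what this specific task should achieve

    def as_context(self) -> str:
        """Render the stack as a persistent preamble for an AI session."""
        sections = [
            "## Values\n" + "\n".join(f"- {v}" for v in self.values),
            f"## Purpose\n{self.purpose}",
            f"## Vision\n{self.vision}",
            f"## Strategy\n{self.strategy}",
            f"## Intent for this task\n{self.intent}",
        ]
        return "\n\n".join(sections)

stack = IntentStack(
    values=["augment human judgment, never replace it"],
    purpose="keep purpose-level context available to AI collaborators",
    vision="AI sessions that inherit direction, not just task descriptions",
    strategy="encode intent once, reuse it across sessions",
    intent="draft a research summary that reflects these constraints",
)
print(stack.as_context())
```

The point of the sketch is only that purpose-level context can be a durable artifact prepended to every session, rather than something re-explained ad hoc in each prompt.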
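The "context as infrastructure" note frames grounding information as a maintained artifact rather than throwaway prompt text. A minimal sketch of that idea, assuming a plain JSON file as the persistence layer; the `ContextStore` class and its file format are illustrative inventions, not a published tool:

```python
import json
from pathlib import Path

class ContextStore:
    """Illustrative sketch: persist the sources, constraints, and role
    definitions that ground an AI collaboration, so each new (stateless)
    session can be re-grounded with the same preamble."""

    def __init__(self, path: str = "project_context.json"):
        self.path = Path(path)
        # Reload maintained context if it survives from earlier sessions.
        self.entries: dict[str, str] = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def set(self, key: str, value: str) -> None:
        """Update one context entry and persist the whole store to disk."""
        self.entries[key] = value
        self.path.write_text(json.dumps(self.entries, indent=2))

    def preamble(self) -> str:
        """Render all maintained context as a session-opening preamble."""
        return "\n".join(f"[{k}] {v}" for k, v in sorted(self.entries.items()))

store = ContextStore("/tmp/demo_context.json")
store.set("role", "systemic designer reviewing AI-generated artifacts")
store.set("constraint", "flag ethical issues; never silently fix them")
print(store.preamble())
```

The design choice worth noting is that the store survives the session: the "continuity rituals" the research queries mention become a file that outlives any single context window.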