Knowledge-Work

Most AI collaboration fails at the purpose level, not the task level. AI assistants execute tasks with precision and generate goal-oriented content on demand, but they operate without access to why a task matters — the values, directions, and constraints that determine whether doing a task well actually serves the person asking. David Lockie’s Intent Stack (2026) proposes a remedy: a five-layer hierarchy that structures human intention as persistent, machine-readable context, making purpose as available to AI systems as task descriptions already are.
Created 2026-03-11 · Modified 2026-03-11
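The layered idea above — purpose expressed as persistent, machine-readable context — can be sketched in a few lines. This is a minimal illustration, not Lockie’s actual framework: his five layers are not named in the summary, so the fields below (values, directions, constraints) are taken only from the wording above, and the JSON serialization is an assumption.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class IntentContext:
    """A persistent, machine-readable record of why a task matters.

    Field names mirror the summary above (values, directions,
    constraints); the real Intent Stack defines five named layers,
    which are not reproduced here.
    """
    purpose: str
    values: list = field(default_factory=list)
    directions: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

    def to_system_context(self) -> str:
        # Serialize so the record can be prepended to any AI session,
        # making purpose as available as the task description.
        return json.dumps(asdict(self), indent=2)

intent = IntentContext(
    purpose="Reduce onboarding friction for new analysts",
    values=["clarity over completeness"],
    directions=["prefer subtraction to expansion"],
    constraints=["no new tools this quarter"],
)
print(intent.to_system_context())
```

The point of the sketch is only that intent becomes an artifact the AI receives every time, rather than something restated ad hoc per prompt.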
Research: What does “good enough” mean in AI-augmented systemic design? Date: 2026-03-11 Search queries used: “satisficing ‘good enough’ design philosophy Herbert Simon bounded rationality” “‘good enough’ AI-augmented design systems adequacy criteria” “systemic design ‘wicked problems’ good enough solution threshold adequacy” “Rittel Webber wicked problems ‘good enough’ solution stopping rule design adequacy” “satisficing bounded rationality ‘aspiration level’ design quality adequacy professional when to stop” “wicked problems ’no stopping rule’ Rittel Webber satisficing design adequacy good enough” “Donald Schon ‘reflective practitioner’ design judgment sufficiency professional tacit knowing” “AI augmented design ‘good enough’ quality judgment professional practice stopping criteria 2024 …
Created 2026-03-11 · Modified 2026-03-11
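Simon’s satisficing, central to the note above, has a simple algorithmic core: accept the first option that meets an aspiration level instead of searching for an optimum. A minimal sketch — the scoring function, the candidates, and the threshold are all illustrative stand-ins for whatever adequacy criteria a design team adopts:

```python
def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level.

    Herbert Simon's satisficing: stop at the first 'good enough'
    candidate rather than optimizing over all of them.
    """
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing was good enough: raise effort or lower the bar

drafts = ["rough sketch", "annotated wireframe", "tested prototype"]
fidelity = {"rough sketch": 0.3, "annotated wireframe": 0.6, "tested prototype": 0.9}

chosen = satisfice(drafts, fidelity.get, aspiration=0.5)
# stops at "annotated wireframe" even though a higher-scoring option exists
```

Note the stopping rule is supplied by the human (the aspiration level), which is exactly what wicked problems, per Rittel and Webber, do not provide on their own.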
Research: How Does Cognitive Debt Accumulate in Knowledge Work That Relies Heavily on AI? Date: 2026-03-11 Search queries used: “cognitive debt AI knowledge work automation skill atrophy” “cognitive offloading AI tools skill atrophy knowledge workers” “extended mind theory AI cognitive offloading Clark Chalmers critique” “MIT ‘Your Brain on ChatGPT’ cognitive debt research 2025” “automation bias AI dependency knowledge workers decision making” “extracted cognition OR cognitive atrophy AI professionals expertise erosion 2024 2025” “Microsoft study AI critical thinking knowledge workers 2025 cognitive offloading” “‘hollowed mind’ OR ’extracted mind’ AI cognition philosophy Synthese 2025” Executive Summary Cognitive debt is a term coined by MIT Media Lab researchers (2025) to describe the long-term neural and behavioral costs that accumulate when AI systems …
Created 2026-03-11 · Modified 2026-03-11
Research: Divergent and Convergent Phases of Systemic Design — Different AI Collaboration Strategies Date: 2026-03-11 Search queries used: “double diamond design divergent convergent phases AI collaboration strategies” “systemic design AI generative AI divergent thinking exploration” “convergent thinking AI decision-making design synthesis AI tools” “human-AI collaboration creative process design phases modes of engagement” “AI role design research discovery phase sensemaking qualitative research” “AI fixation design creativity divergent thinking risks constraints” “systemic design double diamond convergent synthesis AI evaluation decision support” “AI generative creativity divergent convergent thinking research 2024 cognitive load fixation risks” “human creativity LLMs divergent convergent thinking homogenization conformity 2024 2025” Executive Summary The Double Diamond (Design …
Created 2026-03-11 · Modified 2026-03-11
Research: How to Effectively Use Generative AI for Cognitive Augmentation and Not Just Offloading Date: 2026-03-10 Search queries used: “cognitive augmentation vs cognitive offloading generative AI research 2025” “AI cognitive augmentation extended mind theory philosophy Andy Clark” “generative AI active engagement vs passive delegation thinking skills 2025” “Microsoft study AI critical thinking erosion knowledge workers 2025” “desirable difficulty interleaving learning AI assistance productive struggle research” “AI as thinking partner Socratic method active retrieval spaced repetition metacognition 2025” “cognitive offloading philosophy definition benefits limitations Risko Gilbert 2016” “PKM personal knowledge management AI augmentation thinking tools” Executive Summary The central tension in human-AI collaboration is between cognitive offloading — using AI to reduce mental effort — and …
Created 2026-03-10 · Modified 2026-03-11
Research: Does AI Intensify Rather Than Reduce Work for Systemic Designers, Product Owners, and UX Strategists? Date: 2026-03-10 Search queries used: “AI work intensification systemic designers UX strategists product owners labor paradox” “automation paradox AI knowledge workers more work not less cognitive load” “AI design tools UX research overhead new skills required designers 2025” “Jevons paradox AI knowledge work design product management expanded scope” “systemic design AI role expansion service design futures thinking AI tools overhead” “product owner AI scope creep requirements validation overhead AI-generated user stories” “technology labor intensification philosophical critique Marx automation paradox Braverman” “UX strategist AI tools role expansion ethical review prompt engineering new competencies 2025” “second-order effects AI adoption knowledge workers decision quality …
Created 2026-03-10 · Modified 2026-03-11
Research: What organizational and personal guardrails allow creative professionals to use AI for focus and subtraction rather than expansion? Date: 2026-03-10 Search queries used: “creative professionals AI guardrails focus subtraction not expansion productivity” “AI content abundance problem creative work curation subtraction guardrails” “organizational policy AI use creative teams editorial judgment curation over generation” “essentialism subtraction design principle AI tools creative constraint intentional boundaries” “Wharton AI creativity convergence homogenization ideas similar outputs 2025” “newsroom AI policy editorial judgment human curation guardrails 2025 2026” “Leidy Klotz subtract subtraction bias design thinking less is more” Executive Summary The default trajectory of AI in creative work is expansion: more drafts, more options, more content at lower marginal cost. Yet research shows this …
Created 2026-03-10 · Modified 2026-03-11
Generative AI does not reduce work for systemic designers, product owners, and UX strategists — it intensifies it. An 8-month field study at a 200-person US technology company (Ranganathan & Ye, HBR, 2026) and a parallel analytical framework (Mann, CMR, 2026) both document the same pattern: AI adoption produces task expansion, boundary erosion between work and rest, and a rising category of invisible labor the research calls the oversight tax — the time strategic roles spend reviewing, validating, correcting, and ethically auditing AI-generated artifacts. This work is unmeasured, uncompensated, and structurally increasing.
Created 2026-03-10 · Modified 2026-03-10
AI augmentation of human thinking is not automatic. Multiple studies confirm that passive reliance on generative AI correlates with measurable decline in critical thinking capacity, while some collaborative modes preserve or expand it. The difference is not whether AI is used but how the human-AI interaction is structured. Three conditions — sustained self-confidence, Socratic interaction mode, and system-level constraints on availability — are what separate mind-extending from mind-replacing AI use.
Created 2026-03-10 · Modified 2026-03-10
Research: Multi-Pass Processing and Context Engineering for AI Research Agent Reliability Date: 2026-03-10 Search queries used: “multi-pass processing AI research agents reliability” “context engineering LLM agents reliability 2025” “context engineering definition AI agents structured prompting” “multi-pass LLM reasoning iterative refinement research agent accuracy” “context rot LLM long context degradation attention mechanism 2024 2025” “Anthropic multi-agent research system how we built it 2025” Executive Summary Context engineering has emerged as the successor to prompt engineering for agentic AI systems: rather than crafting individual instructions, it involves curating what information enters a model’s bounded attention budget at each inference step. Multi-pass processing — iterative loops where agents revise, compress, or hand off context between inference calls — is the primary architectural mechanism …
Created 2026-03-10 · Modified 2026-03-10
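The multi-pass mechanism the note describes — iterative loops where agents revise or compress context between inference calls to stay within a bounded attention budget — can be sketched abstractly. The pass functions and the compression step below are placeholders; in a real agent the compressor would itself be a model-driven summarization call:

```python
def multi_pass(task, passes, budget, compress):
    """Run successive passes, compressing carried context to fit a budget.

    Each pass receives the (possibly compressed) context from the
    previous one — the core move of context engineering: curating
    what enters the model's bounded attention at each step.
    """
    context = ""
    for do_pass in passes:
        context = do_pass(task, context)
        if len(context) > budget:
            context = compress(context, budget)
    return context

def draft(task, ctx):
    return ctx + f"[draft of {task}]"

def critique(task, ctx):
    return ctx + "[critique: tighten scope]"

def naive_compress(text, budget):
    # Stand-in for an LLM summarization call: keep the most recent material.
    return text[-budget:]

result = multi_pass("survey", [draft, critique, draft], budget=40,
                    compress=naive_compress)
```

The budget here is character length for simplicity; a real system would count tokens and compress semantically rather than truncating, but the loop structure is the same.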
Research: Desirable Difficulty in AI-Assisted Learning and Research Date: 2026-03-10 Search queries used: “desirable difficulty learning Robert Bjork cognitive science” “desirable difficulty AI-assisted learning research 2024 2025” “productive struggle AI tutoring systems cognitive load” “desirable difficulty vs undesirable difficulty AI tools over-reliance metacognition” “Manu Kapur productive failure AI learning design scaffolding 2024 2025” “spacing effect retrieval practice interleaving AI tools research 2025 learning retention” “desirable difficulty artificial intelligence research assistance knowledge generation 2025” Executive Summary “Desirable difficulty” is a term coined by cognitive psychologist Robert Bjork (UCLA) describing learning conditions that feel harder in the short term but produce superior long-term retention and transfer. Core techniques include spaced practice, …
Created 2026-03-10 · Modified 2026-03-10
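Of the techniques the note lists, spaced practice has the most mechanical core: the retrieval interval expands after each successful recall and resets on failure, which is what makes later retrievals feel harder. A minimal sketch — the growth factor is illustrative, not Bjork’s; real schedulers (SM-2 variants and the like) use graded ease factors:

```python
def next_interval(days, recalled, multiplier=2.0):
    """Expanding retrieval schedule: spacing grows after each successful
    recall and resets to one day on failure. The fixed multiplier is an
    illustrative assumption."""
    return days * multiplier if recalled else 1.0

schedule, days = [], 1.0
for recalled in [True, True, False, True]:
    days = next_interval(days, recalled)
    schedule.append(days)
# schedule: [2.0, 4.0, 1.0, 2.0]
```

The relevance to AI assistance: a tool that answers instantly removes exactly the retrieval gap this schedule is designed to create, which is why frictionless AI help can be an undesirable ease.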
Research: Context as Infrastructure in GenAI Collaboration Date: 2026-03-10 Search queries used: “context as infrastructure GenAI collaboration knowledge management” “context engineering LLM persistent context sessions AI workflow” “context window management AI collaboration professional knowledge workers designers” “"context as infrastructure" OR "contextual infrastructure" AI language model” “extended mind theory Andy Clark distributed cognition context AI collaboration philosophy” “knowledge work context continuity cross-session AI collaboration design strategy professionals” “context amnesia LLM stateless sessions professional knowledge workers continuity rituals artefacts” Executive Summary Context as infrastructure is the practice of treating the information that grounds an AI collaboration — sources, histories, constraints, domain knowledge, role definitions — as a maintained, …
Created 2026-03-10 · Modified 2026-03-10
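Treating context as infrastructure means the grounding material outlives any single session. The smallest workable version is a maintained artifact that every session loads and appends to; the filename and the three categories below are hypothetical, chosen to match the kinds of grounding the note names (sources, constraints, and by extension decisions):

```python
import json
from pathlib import Path

CONTEXT_FILE = Path("project_context.json")  # hypothetical artifact name

def load_context():
    """Load the maintained context artifact, or start a fresh one."""
    if CONTEXT_FILE.exists():
        return json.loads(CONTEXT_FILE.read_text())
    return {"sources": [], "constraints": [], "decisions": []}

def record(context, kind, entry):
    """Append to the artifact and persist immediately, so the next
    session — human or AI — starts from the same ground."""
    context[kind].append(entry)
    CONTEXT_FILE.write_text(json.dumps(context, indent=2))
    return context

ctx = load_context()
ctx = record(ctx, "decisions", "2026-03-10: scope limited to onboarding flow")
```

The design choice is that persistence happens on every write, not at session end: context amnesia is avoided precisely because no session is trusted to remember to save.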
Research: The Self-Reinforcing Cycle of AI Speed and Rising Expectations in Design and Strategy Work Date: 2026-03-10 Search queries used: “AI speed rising expectations design work self-reinforcing cycle productivity paradox” “AI expectations ratchet effect knowledge workers design strategy acceleration” “ratchet effect AI performance targets design strategy workers expectations creep” “Jevons paradox AI design work efficiency induced demand creative professionals” “AI speed expectations cycle UX design product strategy systemic design practice” “breaking acceleration cycle AI creative work deliberate pace boundaries design strategy” Executive Summary AI tools in design and strategy work do not reduce workload — they restructure and intensify it through a self-reinforcing cycle. UC Berkeley field research (Ranganathan & Ye, 2026) identifies “workload creep” and “expectation creep” as the …
Created 2026-03-10 · Modified 2026-03-10