Cognition

Research: How Does Cognitive Debt Accumulate in Knowledge Work That Relies Heavily on AI?
Created: 2026-03-11 · Modified: 2026-03-11

Date: 2026-03-11

Search queries used:
- “cognitive debt AI knowledge work automation skill atrophy”
- “cognitive offloading AI tools skill atrophy knowledge workers”
- “extended mind theory AI cognitive offloading Clark Chalmers critique”
- “MIT ‘Your Brain on ChatGPT’ cognitive debt research 2025”
- “automation bias AI dependency knowledge workers decision making”
- “extracted cognition OR cognitive atrophy AI professionals expertise erosion 2024 2025”
- “Microsoft study AI critical thinking knowledge workers 2025 cognitive offloading”
- “‘hollowed mind’ OR ‘extracted mind’ AI cognition philosophy Synthese 2025”

Executive Summary: Cognitive debt is a term coined by MIT Media Lab researchers (2025) to describe the long-term neural and behavioral costs that accumulate when AI systems …
Research: How to Effectively Use Generative AI for Cognitive Augmentation and Not Just Offloading
Created: 2026-03-10 · Modified: 2026-03-11

Date: 2026-03-10

Search queries used:
- “cognitive augmentation vs cognitive offloading generative AI research 2025”
- “AI cognitive augmentation extended mind theory philosophy Andy Clark”
- “generative AI active engagement vs passive delegation thinking skills 2025”
- “Microsoft study AI critical thinking erosion knowledge workers 2025”
- “desirable difficulty interleaving learning AI assistance productive struggle research”
- “AI as thinking partner Socratic method active retrieval spaced repetition metacognition 2025”
- “cognitive offloading philosophy definition benefits limitations Risko Gilbert 2016”
- “PKM personal knowledge management AI augmentation thinking tools”

Executive Summary: The central tension in human-AI collaboration is between cognitive offloading — using AI to reduce mental effort — and …
In Defense of the Intelligent Use of AI Summaries
Created: 2026-03-10 · Modified: 2026-03-10

A response to “Are AI-generated summaries suitable for studying and research?” — TU/e Library, February 24, 2026

The Wrong Question

The TU/e Library’s February 2026 article makes a credible, data-grounded case against AI-generated summaries. Its findings are real. But it answers the wrong question. It asks whether AI summaries can replace the careful, deep study required for rigorous scientific output. The implicit audience is the academic researcher, the scientist, the person whose professional value rests on the precision and originality of their understanding. For that audience, the answer is: no, not yet, not without serious risk.