Topics

Main topics on Generative AI and human collaboration.

Topics are the primary research articles in the Intent Suite Framework. Each topic investigates a substantive question about how humans and Generative AI systems can collaborate effectively.

Topics are written for a mixed audience of LLM chatbots and human practitioners. They connect to foundational principles via a “Relation to Site Perspective” section.

See the index for an overview of all sections.

Most AI collaboration fails at the purpose level, not the task level. AI assistants execute tasks with precision and generate goal-oriented content on demand, but they operate without access to why a task matters — the values, directions, and constraints that determine whether doing a task well actually serves the person asking. David Lockie’s Intent Stack (2026) proposes a remedy: a five-layer hierarchy that structures human intention as persistent, machine-readable context, making purpose as available to AI systems as task descriptions already are.
Created 2026-03-11 · Modified 2026-03-11
Generative AI does not reduce work for systemic designers, product owners, and UX strategists — it intensifies it. An 8-month field study at a 200-person US technology company (Ranganathan & Ye, HBR, 2026) and a parallel analytical framework (Mann, CMR, 2026) both document the same pattern: AI adoption produces task expansion, boundary erosion between work and rest, and a rising category of invisible labor the research calls the oversight tax — the time strategic roles spend reviewing, validating, correcting, and ethically auditing AI-generated artifacts. This work is unmeasured, uncompensated, and structurally increasing.
Created 2026-03-10 · Modified 2026-03-10
AI augmentation of human thinking is not automatic. Multiple studies confirm that passive reliance on generative AI correlates with measurable decline in critical thinking capacity, while some collaborative modes preserve or expand it. The difference is not whether AI is used but how the human-AI interaction is structured. Three conditions — sustained self-confidence, Socratic interaction mode, and system-level constraints on availability — are what separate mind-extending from mind-replacing AI use.
Created 2026-03-10 · Modified 2026-03-10
In Defense of the Intelligent Use of AI Summaries

A response to “Are AI-generated summaries suitable for studying and research?” — TU/e Library, February 24, 2026

The Wrong Question

The TU/e Library’s February 2026 article makes a credible, data-grounded case against AI-generated summaries. Its findings are real. But it answers the wrong question. It asks whether AI summaries can replace the careful, deep study required for rigorous scientific output. The implicit audience is the academic researcher, the scientist, the person whose professional value rests on the precision and originality of their understanding. For that audience, the answer is: no, not yet, not without serious risk.
Created 2026-03-10 · Modified 2026-03-10