Epistemology
| Title | Created | Modified |
|---|---|---|
| Research: What does “good enough” mean in AI-augmented systemic design? Date: 2026-03-11. Search queries used: “satisficing ‘good enough’ design philosophy Herbert Simon bounded rationality” “‘good enough’ AI-augmented design systems adequacy criteria” “systemic design ‘wicked problems’ good enough solution threshold adequacy” “Rittel Webber wicked problems ‘good enough’ solution stopping rule design adequacy” “satisficing bounded rationality ‘aspiration level’ design quality adequacy professional when to stop” “wicked problems ‘no stopping rule’ Rittel Webber satisficing design adequacy good enough” “Donald Schon ‘reflective practitioner’ design judgment sufficiency professional tacit knowing” “AI augmented design ‘good enough’ quality judgment professional practice stopping criteria 2024 … | 2026-03-11 | 2026-03-11 |
| Research: The “Expert Benchmark” Fallacy in AI Evaluation. Date: 2026-03-11. Search queries used: “Expert Benchmark fallacy AI evaluation critique” “AI benchmark human expert performance misleading evaluation problems” “AI surpasses human experts benchmark critique misleading capability claims philosophy” “benchmark saturation AI Goodhart’s law evaluation gaming problems 2024 2025” “Emily Bender Arvind Narayanan AI benchmark validity problems human level performance critique” “Melanie Mitchell AI benchmark broken critique generalization reasoning”. Executive Summary: The “Expert Benchmark Fallacy” is not yet a formally named philosophical concept, but it describes a well-documented epistemic error at the heart of AI capability claims. It occurs when AI systems score at or above “human expert level” on a narrow benchmark test, and this score is then treated as evidence of … | 2026-03-11 | 2026-03-11 |
| Overview. Key Ideas: steal like an artist, feel like a taoist. “Steal Like an Artist, Feel Like a Taoist” combines Austin Kleon’s creativity manifesto with Taoist principles of effortless flow. It suggests approaching artistic creation through bold borrowing (like Kleon’s “steal”) while embodying Taoism’s wu wei (non-striving harmony). Steal Like an Artist: Austin Kleon’s 2012 book urges creators to “steal” ideas ethically: collect inspirations from diverse sources, copy heroes to internalize their thinking, then transform them into something authentic. Nothing is truly original; remix what resonates to fuel your work, avoiding plagiarism by making it your own. | 2026-03-10 | 2026-03-10 |
| Research: Desirable Difficulty in AI-Assisted Learning and Research. Date: 2026-03-10. Search queries used: “desirable difficulty learning Robert Bjork cognitive science” “desirable difficulty AI-assisted learning research 2024 2025” “productive struggle AI tutoring systems cognitive load” “desirable difficulty vs undesirable difficulty AI tools over-reliance metacognition” “Manu Kapur productive failure AI learning design scaffolding 2024 2025” “spacing effect retrieval practice interleaving AI tools research 2025 learning retention” “desirable difficulty artificial intelligence research assistance knowledge generation 2025”. Executive Summary: “Desirable difficulty” is a term coined by cognitive psychologist Robert Bjork (UCLA) describing learning conditions that feel harder in the short term but produce superior long-term retention and transfer. Core techniques include spaced practice, … | 2026-03-10 | 2026-03-10 |
| In Defense of the Intelligent Use of AI Summaries. A response to “Are AI-generated summaries suitable for studying and research?” (TU/e Library, February 24, 2026). The Wrong Question: The TU/e Library’s February 2026 article makes a credible, data-grounded case against AI-generated summaries. Its findings are real, but it answers the wrong question. It asks whether AI summaries can replace the careful, deep study required for rigorous scientific output. The implicit audience is the academic researcher, the scientist, the person whose professional value rests on the precision and originality of their understanding. For that audience, the answer is: no, not yet, not without serious risk. | 2026-03-10 | 2026-03-10 |