Research Notes: What does “good enough” mean in AI-augmented systemic design?
Date: 2026-03-11
Search queries used:
- “satisficing ‘good enough’ design philosophy Herbert Simon bounded rationality”
- “‘good enough’ AI-augmented design systems adequacy criteria”
- “systemic design ‘wicked problems’ good enough solution threshold adequacy”
- “Rittel Webber wicked problems ‘good enough’ solution stopping rule design adequacy”
- “satisficing bounded rationality ‘aspiration level’ design quality adequacy professional when to stop”
- “wicked problems ‘no stopping rule’ Rittel Webber satisficing design adequacy good enough”
- “Donald Schon ‘reflective practitioner’ design judgment sufficiency professional tacit knowing”
- “AI augmented design ‘good enough’ quality judgment professional practice stopping criteria 2024 2025”
Executive Summary
“Good enough” in design is not a failure of ambition—it is the rational response to bounded cognition, wicked problem structure, and finite resources. Herbert Simon’s concept of satisficing provides the philosophical foundation: professionals set an aspiration level and stop refining when output meets it. In systemic design, this is complicated by wicked problem properties—particularly the absence of any stopping rule and the inherently plural, contested nature of adequacy criteria. AI disrupts both dimensions simultaneously: it lowers the cost of iteration (raising aspiration levels) while eroding the tacit judgment capacity that enables practitioners to recognise when adequacy has been reached. The Capability–Comprehension Gap (Lin et al., 2026) formalises the deeper risk: as AI handles more of the work, practitioners lose the epistemic grip that makes quality judgments meaningful. “Good enough” in AI-augmented systemic design requires explicit attention to aspiration level negotiation, preserved practitioner comprehension, and stakeholder-grounded stopping criteria.
Key Sources
Bounded Rationality (Stanford Encyclopedia of Philosophy)
- URL: https://plato.stanford.edu/entries/bounded-rationality/
- Type: Encyclopedia
- Key points:
- Satisficing replaces the optimization objective with an aspiration level: search until an option that meets the level is found, then stop (sketched in code after this entry)
- Aspiration levels are adaptive: if no option meets the level, lower it; if options come easily, raise it
- Satisficing is ecologically rational — in many real environments it outperforms optimization
- Tenet alignment: Aligns with Symbiotic Intelligence (realistic capacity matching) and Human Intent First (aspiration levels encode human intent)
- Quote: “Given a specification of what will count as a good-enough outcome, satisficing replaces the optimization objective from expected utility theory”
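A minimal sketch of the satisficing loop described above, in Python; the numeric quality score, decay rate, and search budget are our illustrative assumptions, not the SEP's:

```python
# Illustrative satisficing search with an adaptive aspiration level.
import random

def satisfice(options, score, aspiration, decay=0.05, max_search=100):
    """Examine options in arrival order and accept the first whose score
    meets the current aspiration level. If nothing clears the bar, lower
    it gradually (downward adaptation); a caller seeing easy hits could
    symmetrically raise it."""
    for option in options[:max_search]:
        if score(option) >= aspiration:
            return option, aspiration      # "good enough": stop searching
        aspiration -= decay * aspiration   # no hit yet: relax the bar
    return None, aspiration                # search budget exhausted

# Usage: accept the first draft scoring at least 0.8, relaxing as we go.
drafts = [random.random() for _ in range(50)]
chosen, final_bar = satisfice(drafts, score=lambda d: d, aspiration=0.8)
```

The point of the sketch is the stopping structure: the search terminates on adequacy, not on exhausting or ranking the option set.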
Herbert A. Simon — Nobel Prize Lecture
- URL: https://www.nobelprize.org/uploads/2018/06/simon-lecture.pdf
- Type: Lecture
- Key points:
- Satisficing models provide “good enough” decisions with reasonable computational cost
- By giving up optimization, richer and more realistic models of decision-making become possible
- Simon coined “satisficing” from “satisfy” + “suffice”
- Tenet alignment: Aligns with Always Scalable (matching effort to resources) and Human Intent First
- Quote: “By giving up optimization, a richer and more tractable theory of human choice behaviour became possible”
Wicked Problems in Design and Ethics (ResearchGate / Rittel & Webber tradition)
- URL: https://www.researchgate.net/publication/330608073_Wicked_Problems_in_Design_and_Ethics
- URL2: https://www.sympoetic.net/Managing_Complexity/complexity_files/1973%20Rittel%20and%20Webber%20Wicked%20Problems.pdf
- Type: Academic paper / original source
- Key points:
- Wicked problems have no stopping rule — the designer must choose when to stop
- Solutions to wicked problems are evaluated as “good” or “bad,” “better” or “worse,” “satisfying” or “good enough” — not true or false
- The designer may decide a solution is “good enough” under pressure of time or budget — stopping is always a judgment call, never a logical terminus
- Design thinking aims at finding a satisficing response, not the best response
- Tenet alignment: Aligns with Pluralism (adequacy is contested, not singular) and Human Intent First (stakeholders define adequacy, not algorithms)
- Quote: “‘Wicked problems have no stopping rule’ (Rittel and Webber 1973). Solutions are assessed as ‘good enough or not good enough, viable or unviable’”
Untangling Wicked Problems (Cambridge Core / AI EDAM)
- URL: https://www.cambridge.org/core/journals/ai-edam/article/untangling-wicked-problems/8D27B8017EC7534BBB9E734524EBEF8F
- Type: Journal article
- Key points:
- Evaluations of proposed solutions to wicked problems are expressed as “good,” “bad,” “better,” “worse,” “satisfying,” or “good enough”
- Adequacy is always contextual and stakeholder-relative, not universally determinable
- Tenet alignment: Aligns with Pluralism of Perspectives; conflicts slightly with any tendency to use AI as a universal quality arbiter
The Reflective Practitioner — Donald Schön (1983)
- URL: https://raggeduniversity.co.uk/wp-content/uploads/2025/03/1_x_Donald-A.-Schon-The-Reflective-Practitioner_-How-Professionals-Think-In-Action-Basic-Books-1984_redactedaa_compressed3.pdf
- Type: Book (foundational)
- Key points:
- Practitioners have “tacit knowing-in-action”: they know more than they can say, including when something is good enough
- “Reflection-in-action” is the mechanism by which practitioners revise quality judgments in real-time
- “Good enough” is not applied from a pre-set rule — it emerges from professional artistry and contextual attunement
- Design involves a “reflective conversation with the situation” — stopping emerges from dialogue, not rule-following
- Tenet alignment: Aligns with Symbiotic Intelligence (judgment cannot be fully delegated) and Human Intent First (practitioner intent shapes stopping decisions)
- Quote: “Competent practitioners usually know more than they can say, exhibiting a kind of knowing in practice, most of which is tacit”
Position: Human-Centric AI Requires a Minimum Viable Level of Human Understanding (Lin et al., 2026)
- URL: https://arxiv.org/abs/2602.00854
- Type: Preprint (arXiv:2602.00854v1, January 2026)
- Key points:
- Capability–Comprehension Gap: as AI-assisted performance improves, users’ internal models of what they’re doing deteriorate
- Cognitive Integrity Threshold (CIT): the minimum comprehension a user must retain to meaningfully verify, contest, or revise AI output — i.e., to judge whether it is “good enough”
- Once comprehension drops below the CIT, oversight becomes “structurally hollow”; the practitioner nominally controls the process but cannot judge its adequacy (see the sketch after this entry)
- Three dimensions of CIT: (i) verification capacity, (ii) comprehension-preserving interaction, (iii) institutional scaffolds for governance
- The “Critical Decoupling Point” is where the operator can no longer reconstruct system intent in the face of anomalies
- Tenet alignment: Strongly aligns with Symbiotic Intelligence (AI must not erode the human’s evaluative capacity) and Human Intent First (intent requires comprehension to be articulated)
- Quote: “Once human comprehension drops below the CIT, oversight becomes structurally hollow — entering a regime of Empty Oversight”
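A hypothetical sketch of the CIT as a review gate; the 0–1 comprehension scale, the threshold value, and the function names are our inventions, not the paper's:

```python
# Hypothetical Cognitive Integrity Threshold (CIT) gate: a sign-off only
# counts as oversight if the reviewer retains enough comprehension to
# verify, contest, or revise the output.
def review(output, comprehension_score, is_good_enough, cit=0.6):
    if comprehension_score < cit:
        # Below the threshold, approval would be "structurally hollow";
        # escalate to someone who can actually evaluate the output.
        return "escalate: empty-oversight risk"
    return "approve" if is_good_enough(output) else "revise"

# Output that looks fine is still not approvable by a reviewer who no
# longer understands it.
print(review("draft-v3", 0.4, is_good_enough=lambda o: True))  # escalate
```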
Bounded Rationality, Satisficing, Artificial Intelligence, and Decision-Making in Public Organizations (Schwarz, 2022)
- URL: https://onlinelibrary.wiley.com/doi/10.1111/puar.13540
- Type: Journal article
- Key points:
- Simon’s satisficing concepts remain foundational to understanding AI-assisted decision-making
- AI shifts aspiration levels by changing what options are visible and how cheaply iterations can be generated
- “Good enough” thresholds are socially constructed within institutional contexts, not individually determined
- Tenet alignment: Aligns with Context as Infrastructure (institutional context shapes adequacy criteria) and Always Scalable
How Do Workers Develop Good Judgment in the AI Era? (HBR, 2026)
- URL: https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era
- Type: Professional publication
- Key points:
- Experienced practitioners gain huge AI productivity benefits because they already have the judgment needed to evaluate AI output: they know “good enough” when they see it
- Junior employees cannot tell whether AI-generated work is good enough because they lack the calibrated experience
- AI now handles the “messy, repetitive tasks” that historically built junior judgment — the training pathway for developing “good enough” recognition is being disrupted
- Tenet alignment: Aligns with Symbiotic Intelligence (symbiosis requires preserving the human judgment layer) and Always Scalable (judgment is the irreducible human contribution)
Design principles for AI-augmented decision making (Tandfonline, 2024)
- URL: https://www.tandfonline.com/doi/full/10.1080/0960085X.2024.2330402
- Type: Journal article
- Key points:
- AI does not replace professional judgment but changes the texture of decisions — when to accept AI output is itself a judgment
- Quality thresholds require explicit design attention; they do not emerge automatically from AI outputs
- Tenet alignment: Aligns with Human Intent First and Symbiotic Intelligence
Major Positions
Satisficing as Rational Design Practice
- Proponents: Herbert Simon, Gerd Gigerenzer (ecological rationality)
- Core claim: “Good enough” is not a compromise — it is the rational response to bounded information, time, and cognitive capacity. Professionals set an aspiration level and stop when it is met. This is efficient and often more accurate than exhaustive optimisation.
- Key arguments:
- Complete information is never available; optimisation is a fiction in complex domains
- Aspiration levels encode the practitioner’s intent and context knowledge
- Satisficing strategies are ecologically rational: they match decision-making to the actual structure of real environments
- Relation to site tenets: Strong alignment with Human Intent First (aspiration levels are intent-expressions) and Always Scalable (effort matches returns). Tension with any perfectionist reading of AI capability.
Wicked Problem Adequacy — No Logical Stopping Point Exists
- Proponents: Rittel & Webber (1973), Buchanan (1992), Coyne (2005)
- Core claim: In systemic design, “good enough” can never be derived from first principles. Wicked problems have no stopping rule — practitioners stop because they run out of time, money, or consensus, not because a condition is satisfied. Solutions are evaluated as “better or worse,” never “correct.”
- Key arguments:
- Every “solution” generates new problems — there is no clean endpoint
- Adequacy is plural and stakeholder-relative: what satisfies one stakeholder may be wholly inadequate for another (illustrated in the sketch after this position)
- Design responses to wicked problems are interventions, not solutions; they require ongoing evaluation
- Relation to site tenets: Strong alignment with Pluralism of Perspectives (adequacy is contested and multi-perspectival). Challenges any AI framing that implies objective quality metrics.
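A small illustration of plural adequacy; the stakeholders, criteria, and thresholds are invented, and the deliberate absence of an aggregate score is the point:

```python
# Adequacy as a per-stakeholder verdict with no global ordering.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    criterion: str    # the dimension this stakeholder cares about
    threshold: float  # their own "good enough" bar

def adequacy_report(scores, stakeholders):
    """Return each stakeholder's verdict separately. Deliberately no
    aggregate: collapsing verdicts into one number would smuggle in the
    universal quality metric the wicked-problems frame rejects."""
    return {s.name: scores.get(s.criterion, 0.0) >= s.threshold
            for s in stakeholders}

stakeholders = [
    Stakeholder("residents", "accessibility", 0.7),
    Stakeholder("budget_office", "cost_efficiency", 0.9),
]
print(adequacy_report({"accessibility": 0.8, "cost_efficiency": 0.6},
                      stakeholders))
# {'residents': True, 'budget_office': False}: adequate for one party,
# inadequate for another, with no principled tie-breaker.
```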
Professional Artistry and Tacit “Good Enough” — Schön’s Framework
- Proponents: Donald Schön (1983), Chris Argyris
- Core claim: Professional quality judgments — including “good enough” — arise from tacit knowing-in-action that cannot be fully codified. Reflection-in-action allows practitioners to revise and calibrate these judgments in real-time. AI can generate to explicit criteria but cannot replicate tacit quality recognition.
- Key arguments:
- Practitioners’ stopping decisions are embedded in professional artistry — they recognise adequacy through embodied, contextual attunement
- Tacit knowing resists articulation: you cannot write a specification for “good enough” in complex design situations
- AI disrupts the formation of tacit judgment by handling the iterative refinement that builds tacit knowledge over time
- Relation to site tenets: Aligns strongly with Symbiotic Intelligence (tacit judgment is the irreducible human contribution). Raises concerns about cognitive offloading risks.
The Capability–Comprehension Gap — AI Erodes Quality Judgment Capacity
- Proponents: Lin et al. (2026)
- Core claim: As AI takes over more design work, practitioners lose the comprehension needed to judge whether AI output is actually “good enough.” The Cognitive Integrity Threshold marks the point below which oversight becomes empty — the practitioner nominally approves output but cannot meaningfully evaluate it.
- Key arguments:
- Performance and comprehension diverge: AI output quality can improve even as practitioner understanding deteriorates
- Below the CIT, the practitioner cannot verify, contest, or revise — “good enough” becomes a rubber stamp
- Three dimensions must be maintained: verification capacity, comprehension-preserving interaction, institutional governance scaffolds
- Relation to site tenets: Aligns directly with Symbiotic Intelligence (the tenet requires expanding, not eroding, human capacity). Critical concern for the “good enough” question specifically.
Aspiration Level Inflation — AI Raises the Bar Indefinitely
- Proponents: Implicit in Simon’s framework; articulated in practitioner sources
- Core claim: Simon’s satisficing model assumes aspiration levels are stable. AI disrupts this by making iteration nearly free, which continuously raises aspiration levels. Practitioners can always prompt for “one more pass” — there is no natural stopping condition. “Good enough” becomes harder to reach, not easier.
- Key arguments:
- When the marginal cost of improvement approaches zero, aspiration levels rise without bound (simulated in the sketch after this position)
- Professionals under AI pressure may never reach “good enough” — perpetual refinement becomes the norm
- This creates a new form of the wicked problem’s “no stopping rule” — but now generated by tool capability rather than problem complexity
- Relation to site tenets: Conflicts with Always Scalable (effort-in vs results-out balance), aligns with concerns about AI intensification of work
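A toy simulation of this dynamic, under our own assumption (not Simon's) that each near-free iteration also nudges the bar upward:

```python
# Toy model: each pass improves quality with diminishing returns, but
# cheap iteration inflates the aspiration level at the same time.
def iterations_to_good_enough(quality, aspiration, uplift=0.05,
                              inflation=0.0, max_iters=200):
    """Return how many passes reach 'good enough', or None if the moving
    bar is never caught within the iteration budget."""
    for i in range(max_iters):
        if quality >= aspiration:
            return i
        quality += uplift * (1.0 - quality)           # diminishing gains
        aspiration += inflation * (1.0 - aspiration)  # the bar creeps up
    return None

print(iterations_to_good_enough(0.5, 0.8))                 # stable bar: 18
print(iterations_to_good_enough(0.5, 0.8, inflation=0.05)) # bar rises as
                                                           # fast as quality: None
```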
Key Debates
Debate 1: Is “good enough” a skill or a threshold?
- Sides:
- Schön tradition: “good enough” is tacit knowing — a practitioner quality, not a measurable threshold
- Simon tradition: “good enough” is an aspiration level — it can be stated explicitly and operationalised
- Core disagreement: Whether adequacy criteria are ever fully articulable, or whether tacit knowing-in-action is irreducible
- Current state: Unresolved. AI exacerbates the tension — explicit thresholds are machine-readable, tacit ones are not.
Debate 2: Does AI raise or lower the bar for “good enough”?
- Sides:
- Optimist: AI produces better first drafts, so “good enough” is reached faster and more reliably
- Pessimist: AI lowers the cost of iteration, raising aspiration levels indefinitely; “good enough” becomes permanently out of reach
- Core disagreement: Whether aspiration levels are stable (Simon’s original model) or dynamically inflated by tool capability
- Current state: The pessimist reading appears empirically stronger — AI intensification research (Ranganathan & Ye, 2024) documents rising workloads, not falling ones.
Debate 3: Who defines adequacy in AI-augmented systemic design?
- Sides:
- Human-centric: adequacy is always defined by human stakeholders and professional judgment; AI cannot determine it
- Efficiency-centric: adequacy can be operationalised through evaluation metrics, and AI can participate in judging it
- Core disagreement: Whether human intent and pluralistic stakeholder values can be captured in metrics that AI can assess
- Current state: The wicked problems tradition and CIT paper both argue strongly for the human-centric position in systemic design contexts.
Historical Timeline
| Year | Event/Publication | Significance |
|---|---|---|
| 1947 | Simon, Administrative Behavior | First formulation of bounded rationality and satisficing |
| 1955 | Simon, “A Behavioral Model of Rational Choice” | Formalized aspiration level and stopping rule mechanism |
| 1969 | Simon, The Sciences of the Artificial | Extended satisficing to design contexts |
| 1973 | Rittel & Webber, “Dilemmas in a General Theory of Planning” | Wicked problems: no stopping rule, good-or-bad not true-or-false |
| 1978 | Simon, Nobel Prize Lecture | Satisficing as core of economic rationality |
| 1983 | Schön, The Reflective Practitioner | Tacit knowing-in-action as the mechanism of professional quality judgment |
| 1992 | Buchanan, “Wicked Problems in Design Thinking” | Extended wicked problem framework to design practice |
| 2022 | Schwarz, “Bounded Rationality, Satisficing, AI, and Decision-Making” | Connected Simon’s framework to AI-assisted decision-making in organizations |
| 2024 | Ranganathan & Ye, HBR article on AI intensification | Empirical evidence that AI raises, not lowers, work standards |
| 2026 | Lin et al., “Human-Centric AI Requires Minimum Viable Human Understanding” | Cognitive Integrity Threshold — formalizes the comprehension required to judge “good enough” |
Potential Article Angles
Based on this research, an article could take one of the following angles:
- “The Aspiration Level Problem: Why AI Makes ‘Good Enough’ Harder to Reach” — Examines how AI’s near-zero marginal cost of iteration inflates aspiration levels without bound. Connects Simon’s satisficing model to AI work intensification. Tenet alignment: Always Scalable, Human Intent First. Would argue for explicit aspiration-level setting as a design practice skill.
- “The Stopping Problem: Professional Judgment When AI Can Always Iterate” — Explores the tacit-knowledge dimension of “good enough” (Schön) and how AI disrupts both its formation (the judgment gap for juniors) and its exercise (the Capability–Comprehension Gap). Tenet alignment: Symbiotic Intelligence, Human Intent First. Would argue for preserving reflective-practitioner capacity in AI-augmented workflows.
- “Wicked Adequacy: Why ‘Good Enough’ in Systemic Design Is Always Plural” — Takes the wicked problems frame as its spine and argues that adequacy criteria in systemic design are inherently contested, stakeholder-relative, and non-computable. AI can generate to explicit criteria but cannot resolve the fundamentally political question of what counts as adequate. Tenet alignment: Pluralism, Human Intent First.
When writing the article, follow obsidian/project/writing-style.md for:
- Named-anchor summary technique for forward references
- Background vs. novelty decisions (what to include/omit)
- Tenet alignment requirements
- LLM optimization (front-load important information)
Gaps in Research
- Empirical studies on aspiration level inflation in AI-augmented design teams: Research exists on AI intensification, but not specifically on how practitioners calibrate stopping decisions over time with AI
- Cross-professional comparison: How do UX strategists, systemic designers, and product owners differ in their “good enough” heuristics when using AI? What role does seniority play?
- Positive practices: What interventions (explicit aspiration levels, time-boxing, stakeholder stopping criteria) effectively counteract aspiration level inflation?
- The collective “good enough” problem: In team design settings, individual practitioners may satisfice at different levels — how does AI mediate these differences?
- AI’s own quality representations: When AI proposes output as “complete” or “final,” what internal criteria does it use? How do these criteria interact with human aspiration levels?
Citations
- Simon, H. A. (1947). Administrative Behavior. Free Press.
- Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118. https://iiif.library.cmu.edu/file/Simon_box00063_fld04838_bdl0001_doc0001/Simon_box00063_fld04838_bdl0001_doc0001.pdf
- Simon, H. A. (1978). Nobel Prize Lecture: Rational decision-making in business organizations. https://www.nobelprize.org/uploads/2018/06/simon-lecture.pdf
- Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. https://www.sympoetic.net/Managing_Complexity/complexity_files/1973%20Rittel%20and%20Webber%20Wicked%20Problems.pdf
- Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. Basic Books.
- Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5–21. https://systemsorienteddesign.net/wp-content/uploads/2011/01/Buchanan_Wicked_Problems_DT.pdf
- Wheeler, G. (2018). Bounded rationality. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/bounded-rationality/
- Schwarz, G. M. (2022). Bounded rationality, satisficing, artificial intelligence, and decision-making in public organizations. Public Administration Review. https://onlinelibrary.wiley.com/doi/10.1111/puar.13540
- Lin, F., Ge, Q., Xu, L., Li, P., Gao, X., Xing, S., Yamada, K., Zhang, Z., Zhang, H., & Tu, Z. (2026). Position: Human-centric AI requires a minimum viable level of human understanding. arXiv:2602.00854. https://arxiv.org/abs/2602.00854
- HBR (2026). How do workers develop good judgment in the AI era? https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era
- Klir, G. J. (2016). Untangling wicked problems. AI EDAM, 30(2). https://www.cambridge.org/core/journals/ai-edam/article/untangling-wicked-problems/8D27B8017EC7534BBB9E734524EBEF8F