
Pawel Zimoch · ~8 min read · Essay 12

Counter-Arguments and Objections — For Review

This document catalogs counter-arguments, alternative approaches, and failure cases that the essays could address. Review these to decide which deserve treatment.


1. Counter-Arguments to the Core Thesis

"Models will get smart enough to not need structure"

The objection: Current structure requirements are a temporary limitation. Future models with better reasoning, larger context windows, and improved world models will handle unstructured domains reliably.

Potential response:

Strength of objection: Medium. This is the most common objection and deserves direct treatment.


"Structure kills creativity and flexibility"

The objection: By encoding everything into explicit rules, you lose the human ability to adapt, improvise, and handle genuinely novel situations. Rigid structure produces bad outcomes at the edges.

Potential response:

Strength of objection: Medium-High. The creative domain counter-example is underexplored.


"The cost of structure exceeds benefits for most use cases"

The objection: Building explicit structure is expensive. For many processes, the ROI doesn't justify the investment. Humans filling gaps is cheaper than engineering precise rules.

Potential response:

Strength of objection: High. The essays somewhat hand-wave cost/benefit analysis.


"This is just traditional software engineering dressed up"

The objection: Everything in these essays—state machines, validation, type systems, domain modeling—has been known for decades. There's nothing new here.

Potential response:

Strength of objection: Medium. Fair point—the essays could be more explicit about building on established patterns.
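One way to make the "established patterns" point concrete: the core technique the essays rely on, an explicit state machine constraining which transitions an agent may request, is a decades-old pattern. A minimal sketch (all state names here are illustrative, not taken from the essays):

```python
# Illustrative sketch: a classic explicit state machine used as an
# external constraint on agent-requested actions. All names hypothetical.

ALLOWED = {
    "draft":     {"submitted"},
    "submitted": {"approved", "rejected"},
    "rejected":  {"draft"},
    "approved":  set(),  # terminal state: no further transitions
}

def transition(state: str, requested: str) -> str:
    """Apply a requested transition only if the state machine permits it."""
    if requested not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state!r} -> {requested!r}")
    return requested

# An agent asking for approved -> draft is refused no matter how
# persuasive its reasoning was.
```

The objection is fair in the sense that nothing in this sketch is new; the essays' contribution, if any, is in where such constraints sit relative to a nondeterministic agent.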


2. Alternative Approaches

Checking agents instead of validation layers

The approach: Use a second, more powerful agent to verify the first agent's outputs instead of building explicit validation.

What the essays say: This works for judgment calls but doesn't eliminate structure—just relocates it. Two agents can agree on wrong answers.
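The failure mode can be sketched in a few lines, with stub functions standing in for real model calls (everything below is hypothetical, not the essays' code): both "agents" agree on an output that a deterministic validator would reject.

```python
# Sketch of agent-checks-agent, with stubs standing in for LLM calls.
# The worker and checker agree on an answer that violates a hard
# invariant; only the explicit validation layer catches it.

def worker_agent(task: str) -> dict:
    # Stub: imagine an LLM producing a structured refund decision.
    return {"action": "refund", "amount": -50}

def checker_agent(task: str, output: dict) -> bool:
    # Stub: a second LLM doing a plausibility check can still pass bad output.
    return output["action"] == "refund"  # agrees with the worker

def validate(output: dict) -> None:
    # Explicit structure: a domain invariant, checked deterministically.
    if output["amount"] < 0:
        raise ValueError("refund amount must be non-negative")

out = worker_agent("process a refund request")
assert checker_agent("process a refund request", out)  # both agents agree...
try:
    validate(out)  # ...but the invariant still fails
except ValueError as e:
    print(e)
```

The point the essays make ("relocates structure") shows up here as the `validate` function: the invariant has to be written down somewhere, whichever agent topology sits in front of it.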

What's underexplored:


General-purpose tool use vs. DSLs

The approach: Give agents access to general-purpose tools (APIs, code execution) rather than domain-specific languages.

What the essays say: This recovers all the original problems—arbitrary actions, violated invariants, unpredictable failures.
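A hypothetical sketch of the difference: a DSL-like surface exposes only operations that preserve a domain invariant, while general code execution lets the agent bypass it entirely (the `Inventory` example is mine, not from the essays):

```python
# Hypothetical sketch: a narrow domain-specific surface vs. general execution.

class Inventory:
    def __init__(self, stock: int):
        self.stock = stock

    # DSL-style surface: the only exposed operation preserves stock >= 0.
    def reserve(self, n: int) -> None:
        if n < 0 or n > self.stock:
            raise ValueError("cannot reserve more than available stock")
        self.stock -= n

inv = Inventory(stock=10)
inv.reserve(3)            # permitted: invariant holds (stock is now 7)

# General-purpose tool use is the equivalent of handing the agent exec():
# an arbitrary action can violate the invariant directly and silently.
exec("inv.stock = -999")
```

"Recovers all the original problems" is visible in the last line: the invariant was never removed from the code, it just stopped being the only path to the state.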

What's underexplored:


Big-bang structure design vs. incremental

The approach: Invest heavily upfront in comprehensive domain modeling before deploying any agents.

What the essays say: This doesn't work because structure must be discovered through operation.

What's underexplored:


Constitutional AI / RLHF as alternative to external structure

The approach: Train models with better values and judgment, reducing the need for external constraints.

What's missing from essays:


3. Failure Cases and Edge Cases

Domains where structure exists but agents still fail

Examples to explore:

Questions: What's missing in these cases? Is it structure quality, interface design, or something else?


Over-structured systems that became brittle

Examples to explore:

Questions: How do you distinguish good structure from bureaucratic ossification?


When the boundary model breaks down

Scenarios:

Questions: What do you do when incremental deployment doesn't work?


Structure discovery that stalled

Scenarios:

Questions: How do you know when to give up on structuring a domain?


4. Open Objections to Address

What about emergent behavior requirements?

The objection: Some valuable agent behaviors can't be specified upfront—they emerge from the interaction of capabilities. Requiring explicit structure precludes beneficial emergence.

Possible responses:


How does this apply to creative tasks?

The objection: Creative work (writing, design, brainstorming) doesn't have "correct answers" or "valid states." The whole framework seems inapplicable.

Possible responses:


What about multi-agent systems?

The objection: These essays focus on single-agent architectures. What about systems where multiple agents collaborate, negotiate, or compete?

What's missing:


Isn't this just for "boring" enterprise work?

The objection: Structure makes sense for back-office operations, but the exciting AI applications are in open-ended domains where this doesn't apply.

Possible responses:


What about agents that learn and adapt?

The objection: Static structure doesn't capture domains that evolve. Agents should learn the structure as they operate, not have it imposed externally.

Possible responses:


5. Missing Evidence

What we'd ideally have:

Why we don't:


Recommendations

High priority (address in essays):

  1. Cost/benefit of structure investment — The essays assert that structure is valuable but never quantify the claim. At minimum, acknowledge the tradeoff explicitly.

  2. Creative and open-ended domains — Be more explicit about the scope of the framework. Not everything fits.

  3. When the boundary model doesn't work — The incremental approach is presented as universal. Acknowledge its limits.

Medium priority (consider addressing):

  1. Hybrid approaches — General tools with DSL-like constraints. Real systems will be hybrid.

  2. Agent-based checking — When and how to combine with validation layers.

  3. Multi-agent coordination — Growing in relevance as agentic systems become more complex.

Lower priority (acknowledge but don't elaborate):

  1. Constitutional AI / training-based approaches — Complementary, not competing.

  2. "Just software engineering" objection — Acknowledge and move on.

  3. Learning/adapting agents — Covered somewhat in "who specifies the specification."