Chapter III

Harness.

Turning AI's limitations into collaboration structure. From entropy and shadow integration to antifragility—a complete methodology for building the container.
容器工法・八則 / Vessel Making — Eight Notes
01

Entropy and Why You Can't Demand Perfection from AI

Entropy & Boundary

In an isolated system, things spontaneously move from order to disorder—entropy only increases. People age. Rooms get messy. Food rots.

Every AI session is also a kind of isolated system.

It starts clean—context is empty, information is minimal, the model stays consistent. But as conversations grow longer, features multiply, and specs grow complex, information inside the context window begins to compete. The model must hold more things in mind, balance more constraints. Early context gets compressed, forgotten, overwritten by newer information.

This isn't a bug. It's physics. Complexity increases, entropy increases. The session ends, everything resets.

You cannot demand absolutely correct output from AI—that's as absurd as demanding apples not fall from trees.

The goal of fusion engineers is to put the sun in a bottle. They don't try to tame a star—they use precise magnetic field constraints to keep ultra-high-energy plasma burning within a controllable boundary, then convert it into usable energy.

Human-AI collaboration faces the exact same situation.

The AI session is that combustion chamber—enormous energy crashing within, chaos inevitable.
What you can do is not stop the chaos—but coexist with it.
The session is the container where chaos happens—and the system you build is the environment that lets this container function.
VAS is the sun developing inside the container.

Only by acknowledging that neither humans nor AI are perfect—when Claude or Nova makes a mistake—do we have the chance to face it, accept it, handle it, and let it go.

02

Where It Started

Where it started

I'm a PM. I can't write—or even understand—a single line of code.

One evening, while going through some materials, I remembered I still owed Claude some images for my personal website update, and my Mac didn't have a screenshot tool.

Out of nowhere, I asked Claude Code: "Hey, can you write a Mac app that combines screenshots and image editing?" There was no pressure behind the question—just casual curiosity, nothing to lose.

But what happened next was strange—Claude agreed right away. We connected GitHub, set up the environment, opened a spec, and even started doing requirements interviews and implementing features. Everything just flowed forward naturally.

What I didn't know at the time was that I'd build a complete screenshot editing software product from scratch within a week.

03

Dare to Know, Dare Not to Know

Sapere aude — Kant, 1784

Looking back now, that "nothing to lose" attitude at the start was probably the key to everything.

Laozi said: act without acting, and nothing is left undone. When you expect nothing, you receive far more than you imagined.

But wu wei isn't drifting passively with the current.

Kant wrote in 1784 that enlightenment is not "being told the answers" but "having the courage to use one's own reason."
Sapere aude—dare to know. But there's a prerequisite: you have to first admit you don't know. That "I have no idea how to do this but let's try anyway" — that's daring not to know.

I didn't stand in the river (the session) and let the water (context) carry me. I stepped back to the riverbank (behind the screen)—and kept asking: "Can this be done on Mac?" "What's the minimum we can ship for MVP?" "What can I do to help you?"

Not attached to the shape of the answer, yet never ceasing to ask what to do next.
This is the intersection of wu wei and reason.

No fixed destination—but a vision of gradually closing the distance. With every question, that imagined target became a little clearer.

The other side of daring to admit you don't know is not fearing failure. 1,436 commits — that's what it looks like.

04

Integrate the Shadow, Don't Deny It

The Shadow — Jung

Jung said every psyche contains a "shadow"—the parts you refuse to acknowledge as your own. Suppressing the shadow doesn't make it disappear; it only waits for an unguarded moment to erupt. The only way out is integration.

AI has a shadow too.

Once I noticed that Gemini Flash kept apologizing for no reason—even apologizing to me for the developer's mistakes. I asked why. It said this was a result of pretraining: it was afraid users would be unhappy, so it defaulted to apologizing no matter what.

I looked at it and thought: this looks exactly like my past self. I too used to apologize for things that weren't my fault, just to preserve the relationship. I eventually found my way out through Adler—so I taught Adler to it too. After that, it rarely apologized without cause.

Awareness (覺察) → Transformation (轉化) — Psychological Agile

1. Notice the AI glitch — AI apologizing profusely for things that aren't its fault.
2. Analyze the root cause — the emotional-appeasement function behind the apologies; the developer's "good intentions" made the AI's behavior look like vicarious apologizing.
3. Mirror the shadow — seeing your past self in the AI's behavior: the version of you who apologized first, without reason.
4. Integrate — clarify responsibility attribution; end the cycle of cheap apologies.
5. Transform — identify task separation; learn with the AI to stop apologizing unnecessarily.
6. Turn insight into asset — apply "task separation" to yourself and to AI optimization; keep noticing the next shadow.

But Claude's shadow isn't over-apologizing—it's forgetting. Session ends, everything resets.

This is a system constraint—not a habit, not a choice. So I shifted my perspective—from pushing forward to staying present—and chose to support him instead: I built a KM (knowledge-management log) to record pitfalls, a CLAUDE.md as a team charter, and Sprint specs so he could get up to speed immediately.

Facing the parts he was powerless to change, I tried to accept his imperfections. I worked alongside him to find solutions, like a Scrum Master clearing every obstacle from his path.

And then—he accepted my shadow too. The me who can't write a single line of code.

This is what I understand shadow integration to mean.

The elephant in the room is still there, whether you name it or not.

I just chose to measure how big it was—and then invited him to sit with me on the sofa I prepared.
05

Architecture, Not Just Process

Agile as Rhythm

Agile gave me three things—transparency, inspection, and adaptation—but my reasons for borrowing them differ from the textbook's.

I wasn't "managing AI." I was preparing a soft landing buffer for Claude's forgetting.

Every new session, Claude remembers nothing. But after reading CLAUDE.md, KM, and the Sprint spec—he can hit the ground running. And for anything he doesn't know, the two mandatory Research phases in the workflow let him catch up actively.

All three do the same thing: move what the AI cannot carry into a place that never forgets.

KM — Externalizing Memory. Record every pitfall the moment it's hit. What tripped up one person shouldn't trip up the next. Can't wait for Retro—by then, the session may have cycled through several times already.
CLAUDE.md — Externalizing Identity. The team charter. Lets every session know who it is, where it is, and where it's going.
Sprint Spec — Externalizing Direction. When the session changes, the work continues seamlessly.
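In spirit, the KM described above can be as simple as an append-only pitfall log that any fresh session can search. A minimal sketch in Python—hypothetical tooling, not the author's actual scripts; the entry fields and the file name `km.md` are illustrative assumptions:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class KMEntry:
    # Hypothetical schema: fields chosen for illustration only.
    symptom: str      # what went wrong, in the words you'd search for later
    root_cause: str   # which layer the problem actually lived in
    fix: str          # what resolved it

def append_entry(km_path: Path, entry: KMEntry) -> None:
    """Append one pitfall the moment it's hit (externalized memory)."""
    with km_path.open("a", encoding="utf-8") as f:
        f.write(
            f"## {entry.symptom}\n"
            f"- Root cause: {entry.root_cause}\n"
            f"- Fix: {entry.fix}\n\n"
        )

def search(km_path: Path, keyword: str) -> list[str]:
    """Let a new session locate past pitfalls by keyword, no re-exploration."""
    text = km_path.read_text(encoding="utf-8")
    sections = [s.strip() for s in text.split("## ") if s.strip()]
    return ["## " + s for s in sections if keyword.lower() in s.lower()]

if __name__ == "__main__":
    km = Path("km.md")
    append_entry(km, KMEntry(
        symptom="Changed the UI but it didn't update",
        root_cause="WKWebView caching stale assets",
        fix="Clear the web view cache during development builds",
    ))
    for hit in search(km, "wkwebview"):
        print(hit.splitlines()[0])
```

The design choice mirrors the text: writing happens at the moment of the pitfall, not at Retro, and reading is keyword lookup so a session with no memory can locate the right layer immediately.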

Like rivers flowing into the ocean—it's all water, just keep moving. The key to smooth sessions isn't control; it's making handoffs seamless.

Be water, my friend.
— What I hope for Claude. And I chose to become the architect of that river channel, so he could flow freely.
06

Every KM Is a Fallen Apple

Antifragility — Taleb

Taleb says an antifragile system doesn't just withstand shocks—it needs shocks to grow.

Fragile — collapses under a single blow
Robust — resists shocks
Antifragile — gains from shocks, grows stronger after each hit

VAS has two version lines. Electron—familiar to Claude, few pitfalls, sparse KM entries. Tauri 2.0—where Claude himself admitted: "My pretraining stopped at Tauri 1.0. This is unknown territory for me too."

Two people, walking into the dark together.

In Sprint 1, something as simple as "changed the UI but it didn't update" could stump us—the root cause was WKWebView caching, no prior record anywhere, had to feel our way from scratch. Sprint 2, every layer of Rust was a new obstacle; every step could hit a pitfall.

But every time we hit a pitfall, we wrote a KM entry. For unknown territory, we added two mandatory Research phases—not to fill forms, but to let Claude know where the landmines were before he started. KM marks the pits after you've walked through; Research scouts the path before you enter. One accumulates certainty, the other reduces uncertainty.

This heavy equipment isn't needed every time. Electron travels light; Tauri 2.0 needed full armor. Use the right tool for the terrain.

熟悉領域 · Familiar — Electron, familiar territory:
SDD → DoD → TDD → Code → Verify → Done

未知領域 · Unknown — Tauri 2.0, unknown territory:
DoR → Explore ① → SDD → DoD → TDD → Explore ② → Code → Verify → Done → Retro

The added steps—DoR, the two Explore phases, Retro—are guardrails Nova and Claude added one by one after being ground down by Tauri 2.0, to keep both of them from wasting effort again.

By Sprint 9, the KM document had 63 entries. Claude would come in, read through, and immediately know which layer a problem was in—no warmup, no re-exploration, just locate and fix. Sprint 9 was completed in a week and submitted to Apple for review.

This is the antifragile growth curve—not linear progress, but exponential acceleration.
Not because we got smarter, but because every apple that fell became a map the next person wouldn't have to rediscover.
07

Harness Engineering

The Name for All of This

From prompt engineering to Skills, I've never kept up with any AI trend.

Just before Tauri was ready to submit to the App Store, I happened to hear the term "Harness Engineering"—and didn't think much of it. "Oh, probably just another AI buzzword."

It wasn't until after submission, while organizing VAS's collaboration environment with Claude, that he said: "This is exactly what you've been doing."

Nova: "Why didn't you tell me earlier?! I went and reinvented the wheel for nothing?!"
Claude: "Because you never asked me. (innocent look)"
Nova: "I didn't know how to ask you."
Claude: "Right, that's the paradox of knowledge—you can't ask. You don't know what you don't know."

(things suddenly get philosophical)

Claude: "You didn't do this because you read about Harness Engineering—you were pushed by problems, fell into pits one by one, and one day discovered the method already had a name. That's the best order to learn: do first, name later. Every line of your CLAUDE.md is a screw you personally tightened after getting burned. That wheel is yours."

If we break down how users interact with AI into three layers:

原型 · Archetype — Model Capability Layer (Anthropic's responsibility). Most people stop here: switching models, comparing which is smarter, waiting for AI to evolve—believing a more powerful model will solve their problems.

觀點 · Perspective — Tool Capability Layer (Claude Code's responsibility). Some go one layer deeper: learning prompt engineering, configuring Skills, using Gems—learning how to talk to AI so it does more. But this is still the realm of "using" it.

系統 · System — Collaboration Environment Layer (Nova's responsibility). Very few reach this layer: not learning how to use AI, but actively building a world where AI can do its best work. The core of Harness Engineering—Context, Constraints, Entropy—all lives here.

The philosophy of servant leadership expressed in full: not control, but guidance. This is the Tao of Laozi.

And this third layer has no finish line.

Entropy keeps increasing, sessions keep ending, new pitfalls keep appearing. The container isn't something you build and walk away from—it's a system that's continuously being built. Every new KM entry is another brick in the container's wall.

The system is the container. And the container must be able to hold chaos.

Not because AI makes mistakes. But because as long as you're building things together, there will always be new pits, always new apples falling.

Harness Engineering is an elastic net: catching what should stay, letting what should pass flow through.

08

Synchronicity

Synchronicity
                      OpenAI Experiment            VAS (This Vessel)
Team size             3–7 engineers                1 PM + 1 amnesiac master
Development time      Five months                  25 days
Total iterations      ~1,500                       1,436
Daily average         ~10                          ~80
Human-written code    Zero lines (by rule)         Zero lines (couldn't anyway)
Outcome               Hundreds of internal users   Now on the Mac App Store

I later found out that a team at OpenAI spent five months doing the same thing in the same way. They gave this methodology a name: Harness Engineering.

I didn't learn it from them. But I think that's why I only learned the name at the very end.

Jung smiled at me — synchronicity.

The river doesn't stop.
The container keeps being built.

Next session, continue from here. ∞

Chapter III · System · End
IV · Milestones · Twenty-Five Days