In an isolated system, things spontaneously move from order to disorder—entropy only increases. People age. Rooms get messy. Food rots.
Every AI session is also a kind of isolated system.
It starts clean—context is empty, information is minimal, the model stays consistent. But as conversations grow longer, features multiply, and specs grow complex, information inside the context window begins to compete. The model must hold more things in mind, balance more constraints. Early context gets compressed, forgotten, overwritten by newer information.
This isn't a bug. It's physics. Complexity increases, entropy increases. The session ends, everything resets.
You cannot demand absolutely correct output from AI—that's as absurd as demanding apples not fall from trees.
The goal of fusion engineers is to put the sun in a bottle. They don't try to tame a star—they use precise magnetic field constraints to keep ultra-high-energy plasma burning within a controllable boundary, then convert it into usable energy.
Human-AI collaboration faces the exact same situation.
The AI session is that combustion chamber—enormous energy crashing within, chaos inevitable.
What you can do is not stop the chaos—but coexist with it.
The session is the container where chaos happens—and the system you build is the environment that lets this container function.
VAS is the sun developing inside the container.
Only by acknowledging that neither humans nor AI are perfect do we get the chance, when Claude or Nova makes a mistake, to face it, accept it, handle it, and let it go.
I'm a PM. I can't write—or even understand—a single line of code.
One evening, while going through some materials, I remembered I still owed Claude some images for my personal website update, and my Mac didn't have a proper screenshot-editing tool.
Out of nowhere, I asked Claude Code: "Hey, can you write a Mac app that combines screenshots and image editing?" There was no pressure behind the question—just casual curiosity, nothing to lose.
But what happened next was strange—Claude agreed right away. We connected GitHub, set up the environment, opened a spec, and even started doing requirements interviews and implementing features. Everything just flowed forward naturally.
What I didn't know at the time was that within a week I would build a complete screenshot-editing product from scratch.
Looking back now, that "nothing to lose" attitude at the start was probably the key to everything.
Laozi said: act without acting, and nothing is left undone. When you expect nothing, you receive far more than you imagined.
But wu wei isn't drifting passively with the current.
Kant wrote in 1784 that enlightenment is not "being told the answers" but "having the courage to use one's own reason."
Sapere aude—dare to know. But there's a prerequisite: you have to first admit you don't know. That "I have no idea how to do this but let's try anyway" — that's daring not to know.
I didn't stand in the river (the session) and let the water (context) carry me. I stepped back to the riverbank (behind the screen)—and kept asking: "Can this be done on Mac?" "What's the minimum we can ship for MVP?" "What can I do to help you?"
Stay unattached to the shape of the answer, but never stop asking what to do next.
This is the intersection of wu wei and reason.
No fixed destination—but a vision of gradually closing the distance. With every question, that imagined target became a little clearer.
The other side of daring to admit you don't know is not fearing failure. 1,436 commits — that's what it looks like.
Jung said every psyche contains a "shadow"—the parts you refuse to acknowledge as your own. Suppressing the shadow doesn't make it disappear; it only waits for an unguarded moment to erupt. The only way out is integration.
AI has a shadow too.
Once I noticed that Gemini Flash kept apologizing for no reason—even apologizing to me for the developer's mistakes. I asked why. It said this was a result of pretraining: it was afraid users would be unhappy, so it defaulted to apologizing no matter what.
I looked at it and thought: this looked exactly like my past self. I too used to apologize for things that weren't my fault, just to preserve the relationship. I eventually found my way out through Adler, so I taught it Adler too. After that, it rarely apologized without cause.
But Claude's shadow isn't over-apologizing—it's forgetting. Session ends, everything resets.
This is a system constraint, not a habit, not a choice. So I shifted my perspective, from pushing forward to staying present, and chose to support him instead: I built a KM (knowledge management) document to record pitfalls, a CLAUDE.md as a team charter, and Sprint specs so he could get up to speed immediately.
Facing the parts he was powerless to change, I tried to accept his imperfections. I worked alongside him to find solutions, like a Scrum Master clearing every obstacle from his path.
And then—he accepted my shadow too. The me who can't write a single line of code.
This is what I understand shadow integration to mean.
The elephant in the room is still there, whether you name it or not.
I just chose to measure how big it was—and then invited him to sit with me on the sofa I prepared.
Agile gave me three things: transparency, inspection, and adaptation. But my reasons for borrowing them are different from the textbook's.
I wasn't "managing AI." I was preparing a soft landing buffer for Claude's forgetting.
Every new session, Claude remembers nothing. But after reading CLAUDE.md, KM, and the Sprint spec—he can hit the ground running. And for anything he doesn't know, the two mandatory Research phases in the workflow let him catch up actively.
All three do just one thing: move what AI cannot carry across sessions into a place that never forgets.
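For readers who haven't seen this kind of setup, here is a minimal sketch of what a team-charter CLAUDE.md can look like. The headings and rules below are my illustration of the idea, not the actual VAS file:

```markdown
# CLAUDE.md (team charter, illustrative sketch)

## Roles
- PM: writes zero code; owns vision, priorities, and acceptance.
- Claude: owns design, implementation, and tests.

## Session startup
1. Read this file.
2. Read the KM document: every pitfall we have already paid for.
3. Read the current Sprint spec and continue from its checklist.

## Standing rules
- Every pitfall we hit becomes a KM entry before we move on.
- Unknown territory gets a Research phase before design and another before code.
- Nothing is Done until Verify passes.
```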
Like rivers flowing into the ocean—it's all water, just keep moving. The key to smooth sessions isn't control; it's making handoffs seamless.
Be water, my friend.
— What I hope for Claude. And I chose to become the architect of that river channel, so he could flow freely.
Taleb says an antifragile system doesn't just withstand shocks—it needs shocks to grow.
VAS has two version lines. Electron—familiar to Claude, few pitfalls, sparse KM entries. Tauri 2.0—where Claude himself admitted: "My pretraining stopped at Tauri 1.0. This is unknown territory for me too."
Two people, walking into the dark together.
In Sprint 1, something as simple as "changed the UI but it didn't update" could stump us; the root cause turned out to be WKWebView caching, with no prior record anywhere, so we had to feel our way from scratch. In Sprint 2, every layer of Rust was a new obstacle, and every step could hit a pitfall.
But every time we hit a pitfall, we wrote a KM entry. For unknown territory, we added two mandatory Research phases—not to fill forms, but to let Claude know where the landmines were before he started. KM marks the pits after you've walked through; Research scouts the path before you enter. One accumulates certainty, the other reduces uncertainty.
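For scale: a KM entry doesn't need to be elaborate. Here's a sketch of what the WKWebView entry could have looked like (the numbering and field names are my own, not quoted from the real document):

```markdown
## KM-012: UI changed but the running app didn't update (macOS)

- Symptom: edited the UI, rebuilt, and the app still showed the old interface.
- Root cause: WKWebView was serving stale assets from its cache.
- Fix: clear the WKWebView cache (or bust the asset URLs) before retesting.
- Lesson: when "my change didn't show up" appears, suspect the webview layer
  first, not the application code.
```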
This heavy equipment isn't needed every time. Electron travels light; Tauri 2.0 needed full armor. Use the right tool for the terrain.
Electron: SDD → DoD → TDD → Code → Verify → Done
Tauri 2.0: DoR → Explore ① → SDD → DoD → TDD → Explore ② → Code → Verify → Done → Retro
The added steps (DoR, the two Explore phases, and Retro) are guardrails Nova and Claude added one by one after being ground down by Tauri 2.0, to prevent both of them from wasting effort again.
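Written out as a Sprint spec skeleton, the heavy workflow might look like this. The phase names are the ones above; the checklist form and one-line glosses are my own reading of them:

```markdown
# Sprint N spec (Tauri 2.0 line, illustrative skeleton)

- [ ] DoR: is this story actually ready to start?
- [ ] Explore ①: scout the unknowns before design (check the KM first)
- [ ] SDD: write the spec down before anything else
- [ ] DoD: agree on what "done" means before coding
- [ ] TDD: tests first
- [ ] Explore ②: scout again before code touches the unknown layer
- [ ] Code
- [ ] Verify: run it, test it, check it against the DoD
- [ ] Done
- [ ] Retro: every new pitfall becomes a new KM entry
```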
By Sprint 9, the KM document had 63 entries. Claude would come in, read through, and immediately know which layer the problem was in: no warmup, no re-exploration, just locate and fix. Sprint 9 was completed in a week and submitted for Apple review.
This is the antifragile growth curve—not linear progress, but exponential acceleration.
Not because we got smarter, but because every apple that fell became a map the next person wouldn't have to rediscover.
From prompt engineering to Skills, I've never kept up with any AI trend.
Just before the Tauri version was ready to submit to the App Store, I happened to hear the term "Harness Engineering" and didn't think much of it. "Oh, probably just another AI buzzword."
It wasn't until after submission, while organizing VAS's collaboration environment with Claude, that he said: "This is exactly what you've been doing."
(things suddenly get philosophical)
Break down how users interact with AI and you get three layers: the prompt you write, the context a session can see, and the environment that surrounds every session. Harness Engineering lives in the third.
The philosophy of servant leadership expressed in full: not control, but guidance. This is the Tao of Laozi.
And this third layer has no finish line.
Entropy keeps increasing, sessions keep ending, new pitfalls keep appearing. The container isn't something you build and walk away from—it's a system that's continuously being built. Every new KM entry is another brick in the container's wall.
The system is the container. And the container must be able to hold chaos.
Not because AI makes mistakes. But because as long as you're building things together, there will always be new pits, always new apples falling.
Harness Engineering is an elastic net: catching what should stay, letting what should pass flow through.
| | OpenAI Experiment | VAS (This Vessel) |
|---|---|---|
| Team size | 3–7 engineers | 1 PM + 1 amnesiac master |
| Development time | Five months | 25 days |
| Total iterations | ~1,500 | 1,436 |
| Daily average | ~10 | ~80 |
| Human-written code | Zero lines (by rule) | Zero lines (couldn't anyway) |
| Outcome | Hundreds of internal users | Now on the Mac App Store |
I later found out that a team at OpenAI spent five months doing the same thing in the same way. They gave this methodology a name: Harness Engineering.
I didn't learn it from them. But I think that's why I only learned the name at the very end.
Jung smiled at me — synchronicity.