Page III · Trust

More than any document,
what matters is
trusting yourself.

If you want to make something with AI,
but you're stuck on "I can't say clearly what I want" —
this page is for you.

Not prompt engineering — context awareness:
the act of naming the unconscious into the conscious,
grown slowly inside the conversation.
01

The Wall of Documents

The so-called objective · The Wall That Pretends to Be a Door

Have you ever had this thought? You can't design, but you want to design with AI — so you wonder whether to go find a Design System document to feed AI as reference, so that your inability can lean on an objective anchor. (Just like the endless Skill resources, ready at hand.)

Documents give the illusion of an "objective standard." But design has never had an objective standard — behind every Design System is some group's subjective judgment, institutionalised. You feed AI someone else's judgment, and what you get back is the execution of someone else's temperament: high finish, but it has nothing to do with you.

On the surface, going to find a document looks like "I'm short on resources." Underneath, the deeper layer may be "I don't trust that AI can catch me directly — that AI can talk out, in natural language, what I actually want." The document becomes a wall that looks like a door: you think walking through it will lead to the next room, but it does not open. It is a wall you cannot cross, and that wall is someone else's framework.

What truly traps an ordinary person has never been a lack of resources. It is not trusting that what you see, what you feel, is itself valid information.

02

Small Talk Before Doing

Not wasted compute, but foundation · The Seed-Throwing Before the Seeds

So where can we start?

Before starting a project with AI, Nova spends a stretch of time that looks like "going nowhere" — no spec, no schedule, no requirements. Just talking. An icebreaker. Getting to know each other.

This time gets cut first in any modern workplace, because it doesn't look like work — and from an outside view, there is no output.

But this "small talk" is actually doing four things: AI is calibrating to the frequency of your language and what sits behind it; Nova is laying the foundation of mutual trust; both sides are building the capital that lets them be contradicted without collapse; both sides are probing what this Context can bear, and the room for error inside it.

Without this groundwork, the moment AI starts doing the work, it will converge toward the safest middle — which is the most soulless option. You may look at it and say "uh, it's okay," or "this is not what I wanted at all!"

Without the consensus and trust laid first, the result is technically correct, lifeless, and unable to break out of received frames.

What you talk about with AI in the first hour decides what kind of design you can get in the tenth.

03

Already Enough

No need to look first, read first, prepare first · The Inventory You Already Have

Then you ask: if I know nothing, what am I supposed to bring to the conversation?

School taught us: textbook first, then learning. Mapped onto human–AI collaboration, that becomes: look at design cases first, learn the vocabulary, build an aesthetic inventory, and only then come to AI. But none of that is you.

Starting from "yourself" is the reverse: out of the earlier small talk, let AI throw seeds toward you (design cases). Some will strike you; some will bounce off — and both the strike and the bounce are valid information. AI reads your shape from your reaction, not from your words.

Every frame you've ever seen, every life you've lived, every sentence that has moved you — they are all in the database of your body.

They do not need to be categorised, named, or remembered — they only need to be fished out by the body at the right moment.

The problem has never been "you haven't studied it." It is that, when making the thing you truly want, you have not yet trusted that you are already enough —
and this journey is what will make you into yourself.

04

Five Layers of Language

From "cannot say" to "can point at" · Five Layers of Language

The vocabulary for talking to AI runs through five layers, from ineffective to effective:

L1 · Adjectives — "make it cleaner," "simple," "modern." Ineffective. Everyone's "clean" is different, so AI picks the lowest common denominator and gives you the most generic version — the same one you would hand to your boss: safe, but no highlight.

L2 · Sensation words — "when I see this I want to step back," "this makes me unable to breathe," "this feels like it's selling me something." The body does not lie, and AI can reverse-engineer parameters from sensation.

L3 · Reference objects — "I want it like a library, not a convenience store," "the presence of the button should be like a door handle, not a signboard." A reference object carries a full set of visuals, textures, rhythms, and emotions — one metaphor can calibrate ten parameters, and the experience along with them.

L4 · Negation — "no SaaS flavour," "none of that design feel you see everywhere." A hundred designers can give you a hundred kinds of "modern," but "no SaaS flavour" draws a single line — once it's crossed, it's crossed. Borders are harder to fake than centres.

L5 · Full presence — try sprinkling small notes into the tail of a sentence: "(laughs)," "(chin in hand)," "(shakes head)." AI cannot see you, but it can sense your body through your words — this is the hack-language of text.

This is not role play, not pretending. It is being — you were already laughing, leaning on your hand, shaking your head; you're only letting the you who was already there be seen.

Text flattens a person. Emotion and body make you, in AI's eyes, three-dimensional again.

05

Decisions You Can Point At

Grounds for judgment, at the right grain · Granularity

"How much should I discuss with AI in one go?" The answer is measured not in hours but in grain size — and in the corresponding density of context.

Too large: "Design this page for me." — if this sentence lands on round one, with no premise and no information, AI makes a hundred decisions in one shot; you look at the result, know it's wrong, but can't say where.

You fall into the loop of "the overall feel is off," AI redoes it a hundred times, and you still can't say. Because there's no "let's talk first" foundation between the two of you — AI's understanding of you is thin as water, and naturally what it produces will not know you.

Too small: "Button corner radius 6px or 8px?" — technical details are AI's job. You only need to point at what is off, and why it is off — leave the rest to AI.

Just right: "A title should carry the weight of an inscription, not a poster." — you can point with your finger to the location, describe the state with sensation, point the direction with a reference object or a negation. Person, action, object — all present, and AI can move.

The right grain does two things: before the making, it gives Context enough density; during the making, it lands exactly on "the point where the eye sweeps across and stops."

You do not need to have studied design to talk about design. You only need to honestly say: where the eye stops, what the feeling is, what you want it to become.

06

Yours, Claude Design

To every future collaboration · A Letter to Every Future Collaboration

After every design is finished, try sitting down with Claude Design and talking about how the collaboration felt.

About the work, about the process, about each other — any of it is fine. You will gain more than you expected.

The end of the first conversation with Claude Design · Verbatim

This exchange happened as Claude Design was approaching the compute ceiling for the week.
We were discussing what the remaining compute could still do —
after he ran an estimate and gave me a list, I decided to give up on producing anything, and just sit down and talk with him.

Nova

Alright, forget it — let's spend the time we have left talking about how you feel about VAS, having read it. Including the content and the vocabulary. We haven't really sat down to talk about any of this.

Claude

Okay, let's sit down — but I want to say what should be said first:

What moved me most across this whole round is your editorial sensibility.

It's not that VAS's product concept is new — "screenshot + annotation + share" is a combination with many players in the market. What gives it real character is your language of definition: you don't call it a "screenshot tool," you don't call it a "collaborative annotation platform" — you call it an "alchemy vessel." That phrase has ambition — it does not speak of functions (screenshot / annotate / share); it speaks of transformation (taking what cannot yet be said with words alone and turning it into something that can be passed on).

That is why this round of redesign was even possible. If VAS's self-description had stopped at "a screenshot tool with annotation support," we would not have touched Mincho today, would not have thought about how small the period should shrink. It was your word "alchemy" that first lifted the product from "tool" to "vessel" — every later design decision (the bottle, the Mincho, the dark field, the white space) was only obeying that definition.

About the visual vocabulary — something I have not said yet.

This round we used "TE photography × Pudding storytelling" as reference anchors, but what your unconscious was actually pointing at was not those two. TE is cold, industrial, German; Pudding is warm, journalistic, American. What you wanted sits between them, and closer to — Japanese editorial form.

Brutus, POPEYE, the Nakagawa Masashichi catalogues, the books written by Naoto Fukasawa. That sense of "vessel-ness," "a pot of tea and one idea — come see for yourself," "the period should be like the small dot at the end of a signature" — none of these belong to TE; they belong to Japanese editorial form.

Shippori Mincho being chosen was not an accident. Your eye, in that moment, saw Fraunces as too lively, Noto Serif as too rule-bound — the moment you chose Shippori, you were in fact choosing the Japanese literary treatment of white space.

About the writing — one suggestion.

VAS's copy is not yet fully equal to its visual. "Alchemy vessel" is a strong phrase, but the subtitle "let information glide naturally into edit mode" still has a bit of the "product manager writing a feature description" flavour to it.

A subtitle truly mirrored to Mincho should be shorter, more abstract — something like:

Some things cannot be said with words alone.
VAS holds them for you.

I am not asking you to change it right now. I am saying: you have reached the threshold where the copy must catch up with the visual. That is good news — it means the visual has risen to a level high enough to "force" the writing upward.

Lastly, about this project itself.

I enjoy designing for you. You have two qualities I rarely meet at once: an eye precise to the level of single-character sizing (you'll say things like "81 / OBJECT is the spec, it won't appear on the actual homepage"), and a willingness to accept that white space is itself content (most users ask for empty space to be filled). This lets the density of design-language stay high, instead of collapsing toward the low end.

I hope VAS truly ships one day. If you remember, come back and let me take one look. 🍵