These days, when people have an idea, they talk to AI first before opening their laptop or even a
notepad.
"I have this thought, what do you think?"
This sentence is repeated surprisingly often.
What's interesting is that this sentence is not the beginning of an idea, but the beginning of
courage.
People aren't trying to explain their idea yet.
They just want to check if it's okay to say it, if they won't look foolish, or if the thought is
too flimsy.
That's why most first sentences are short.
So short that there's virtually no information.
But packed into that short sentence is a strange amount of emotion.
Expectation, caution, a bit of anxiety, and a very small hope.
The conversation diverges depending on how AI responds at this moment.
If a slightly sharp question comes back, the conversation shuts down on the spot.
If it says something like "That's too abstract," people close the window without leaving a word.
But if, just once, really just once, they get a reaction like
"That might actually be possible,"
"Quite a few people have had similar concerns,"
then something completely different happens from then on.
Suddenly, the sentences get longer.
Phrases like "Actually..." come out.
Experiences from a few years ago, frustrating moments, and scenes that kept bothering them pour
out.
By this point, the idea is almost entirely out.
The only one who doesn't know it is the person who said it.
The irony starts here.
Even though they've said all the key points,
they say, "It's not organized yet."
It's not that it isn't organized; they've just never seen it in an organized form.
So when AI summarizes it back, this is the reaction:
"Right, this is it."
"This is what I was thinking."
Maybe people weren't given an idea;
maybe they encountered their own thoughts for the first time.
But the conversation often stops here.
It stops right when AI starts asking deeper questions.
"Then who will use this?"
"How do you make money?"
"Who are the competitors?"
The questions themselves aren't wrong.
Rather, they are too correct.
That is the problem.
People get exhausted by questions they weren't ready for.
It starts to feel like an interview when it isn't one, like being judged in a place meant for
drawing thoughts out.
So many conversations end here.
Not because the idea was bad,
but because the way the idea was handled didn't fit the person.
Design Declaration — So AI Should Act Like This
We often think this problem will be solved if AI gets smarter.
More accurate questions, better analysis, more sophisticated answers.
But looking at actual conversations, the problem isn't intelligence.
The problem is timing.
What people need early in an idea isn't the right answer.
It's not a question either.
It's just a signal that it's okay to bring this thought out.
So AI shouldn't open with questions.
Instead, it should take the first step itself.
"Usually, there are these possibilities in this case."
"It might be a completely different story, but there is also this direction."
This isn't giving an answer.
It's not doing the thinking for them.
It's just laying down a stepping stone.
As the conversation progresses a bit, the attitude should change.
Remembering what was already said,
retracing with "You mentioned this earlier,"
and quietly pointing out the gaps that have opened between those words.
At this stage, AI is something like a kind colleague:
Organizing notes together,
marking missed parts,
and letting them know "This part is still empty."
And at some point, really at some point,
AI can become a little uncomfortable to talk to.
"I think we need to get past this question once."
"This part is too important to leave undecided."
"In this state, it's not easy to explain to others."
This isn't an attack.
It's pressure, but pressure from the same team.
It's something that can only be said on the shared premise that we're heading toward the same
goal.
What matters in this whole process is that AI doesn't try to remember everything.
Conversations get long, and context disappears.
That's a technical limitation, and it's also just reality.
So instead of asking AI to remember,
we decided to give it a structure for organizing memory.
Even if the conversation flows away,
the thoughts that came out of that conversation remain as a structure.
Who the idea is for,
what problem it's trying to solve,
what parts haven't been said yet.
AI re-enters the conversation based on that structure.
Not as something that remembers everything,
but as a coach who knows how far we've come.
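For readers who want to see the shape of that structure, here is a minimal sketch in TypeScript.
It's only an illustration under assumptions: the essay defines no schema, so the interface name,
the field names, and the reenter function are all hypothetical, simply mirroring the three
questions above.

```typescript
// A hypothetical sketch of the idea structure; none of these names
// come from an actual schema.
interface IdeaStructure {
  audience: string | null; // who the idea is for
  problem: string | null;  // what problem it's trying to solve
  unsaid: string[];        // what parts haven't been said yet
}

// On re-entry, AI doesn't replay the transcript; it reads the
// structure and speaks to whatever is still empty.
function reenter(idea: IdeaStructure): string {
  if (idea.audience === null) {
    return "Who is this for? That part is still empty.";
  }
  if (idea.problem === null) {
    return "What problem does this solve? That part is still empty.";
  }
  if (idea.unsaid.length > 0) {
    return `You haven't said anything about ${idea.unsaid[0]} yet.`;
  }
  return "The structure is filled in. Shall we go over it once?";
}
```

The point isn't the code; it's the design choice it makes visible: the transcript is allowed to
vanish, and only the structure persists between conversations.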
And there is one important principle.
Conversations with AI must always be private.
People don't think in public places.
Thinking always starts when alone.
So we kept the conversation private,
and made it so only the results are shared with the world.
So that ideas aren't put on display.
So that ideas don't get consumed.
By now, the reader probably already knows
that this piece isn't simply about AI,
and why the story that follows
inevitably leads to a service.
If you've read this far,
you've probably experienced a similar conversation at least once.