AI coding is having its NFT moment | by Mo | Jan, 2026 | Medium
The idea that humans will make themselves subservient to computers is a pipe dream. We saw how this played out with NFTs and smart contracts.
“Wouldn’t it be great if absolutely no one was in charge and if there was a bug in a smart contract everyone loses their money forever?”
Both hinge on the same fantasy: complex systems can run themselves, and humans are a problem to be removed.
The crypto version was: “Trust the math, not people.”
Now it’s: “Trust the model, not engineers.”
But software doesn’t exist to eliminate thinking. It exists to encode thinking so it can be inspected, debated, repaired.
Because when something goes wrong, and it always does, someone has to answer for it.
The funniest part is that the people selling this vision still want accountability, safety, and correctness.
There is no future where systems get more powerful and less legible and society just vibecodes its way through the consequences.
“But as long as the tests pass,” right?
Tests don’t describe truth.
They describe what you thought to ask at a particular moment in time.
If “tests passing means the code doesn’t matter,” then by the same logic a bridge is safe because yesterday’s load test worked and a legal contract is correct because it passed spellcheck.
Tests capture examples, not understanding.
They encode what you checked, not what you meant.
Most bugs live in the negative space of tests: the things no one thought to assert.
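A tiny sketch of that negative space (the function and tests here are hypothetical, invented for illustration): the suite below passes, yet the case no one thought to assert crashes.

```python
# Hypothetical illustration: the suite asserts what someone thought
# to check; the bug lives in what no one asserted.

def average(xs):
    return sum(xs) / len(xs)  # crashes on an empty list

def test_average():
    assert average([1, 2, 3]) == 2
    assert average([10]) == 10

test_average()  # passes, yet average([]) raises ZeroDivisionError
```

The green checkmark says nothing about the empty-list case, because the empty-list case was never part of anyone’s mental model when the tests were written.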
“Applications in the future will be generated on the fly,” right?
Code isn’t just laws and execution. It’s basically compressed human judgement.
When you throw that away and say “we’ll be able to regenerate the whole thing,” you’re throwing away tradeoffs made under real constraints, scars from previous outages, and domain knowledge that never made it into tests.
Two implementations can also pass the same test suite and be wildly different in everything the suite doesn’t measure.
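For instance (a hypothetical sketch, with names invented here): both deduplication functions below satisfy the same suite, but only one preserves input order, and nothing in the tests can tell them apart.

```python
# Hypothetical: two implementations, one test suite, divergent behavior.

def dedupe_a(xs):
    # O(n), preserves first-seen order
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedupe_b(xs):
    # discards ordering entirely
    return list(set(xs))

# A suite that only checks membership passes for both:
for impl in (dedupe_a, dedupe_b):
    assert sorted(impl([3, 1, 2, 1])) == [1, 2, 3]

# ...but behavior diverges outside what the suite measures:
assert dedupe_a([3, 1, 2, 1]) == [3, 1, 2]  # order preserved
```

If a downstream consumer silently depends on ordering, swapping one implementation for the other breaks production while every test stays green.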
“Humans will not need to understand the code, given they can ask an agent in real-time and get an explanation,” right?
Asking an agent to explain code is not the same thing as understanding a system.
That distinction is the whole thing.
An explanation is a story told after the fact. Understanding is a mental model you can act from.
An agent has no skin in the future.
When a human reads code, they implicitly ask:
“If I change this, what breaks three steps downstream, six months from now?”
An agent on the other hand just says: “Here’s what this does.”
It’s already past tense.
The moment you need to decide, not describe — performance tradeoff, security boundary, product compromise — the agent has no authority. It doesn’t own the consequences.
Someone still has to choose.
In addition, most failures are not “this function is wrong.”
They emerge from interactions between parts that each look correct in isolation.
Agents are good at local reasoning, but system failures are generally global.
Global reasoning requires a coherent worldview of the system. Something humans build over time, not something queried ad hoc.
If no one understands it, no one can trust it
At some point, someone asks: “Is this safe?” “Can we ship this?” “Who’s responsible if this fails?”
“Ask the agent” is not an answer.
AI can assist in recall and analysis. But it can’t replace ownership.
Ownership requires understanding.
Responsibility cannot be delegated to what is basically a narrator.
If the system matters, someone has to be able to say:
“I know how this works well enough to change it, and I accept the consequences.”
No agent will ever be able to say that.
Written by Mo
Passionate about software. Working on Shape, a radically simple workspace for teams: shape.work. Also blogging at mo.io