AI Philosophy Notes · April 25, 2026 · 4 min read

The Man Who Built the First CPU Thinks Your AI Has No Soul

I watched the interview expecting the usual.

Silicon is dead matter. LLMs are autocomplete. Consciousness is magic. AI bad. You know the script. Every philosopher who’s never shipped a product has a version of it.

Federico Faggin designed the first commercial microprocessor, the Intel 4004. He co-founded Synaptics, the company that put the touchpad in your laptop. He worked on early neural networks. Then he spent thirty years trying to prove that everything he’d built was missing the most important thing.

He didn’t come back with a product. He came back with an ontology.

Here is what Faggin is actually claiming.

Consciousness is not produced by matter. Matter exists inside consciousness. Every quantum field — electrons, quarks, all seventeen of them — is itself conscious, has free will, and knows itself. The electrons in the universe are not particles bouncing around. They are symbols that conscious fields use to communicate with each other.

I know how that reads.

But this is not a random mystic on a podcast. This is the engineer who physically designed the chip architecture that every computer you have ever used descends from. He published a paper with Giacomo Mauro D’Ariano, a physicist who demonstrated that all quantum physics equations can be derived from quantum information. The theory has a name: Quantum Information Panpsychism.

The claim is precise: quantum information is the representation of inner experience. The collapse of the wave function is the representation of free will.

That is either the most important sentence in 21st-century science or the most sophisticated category error in the history of physics.

I don’t know which. Neither does anyone else.

The part that matters for AI is the negative claim.

Faggin says a microprocessor switch knows only open and closed. It knows nothing of the whole system. A cell in your body, by contrast, is a “part-whole” — it carries potential knowledge of the entire organism. Cells are quantum-classical systems. Computers are classical systems with quantum effects washed out into reproducible bits.

So in his view, the gap between AI and consciousness is not a matter of scale. It’s a matter of kind. You can’t get from here to there by adding parameters, context windows, or tool use. The substrate is wrong.

The industry’s default religion is the opposite. Brains are biological computers. Scale the computation, scale the mind. Add enough capability and the lights turn on. That assumption drives the AGI slide decks, the alignment papers, the upload fantasies, the quiet faith that consciousness is just one more emergent property waiting for enough GPUs.

Faggin says the entire stack is built on a category error.

We may be trying to engineer subjectivity out of syntax.

The lazy response is obvious. “Fine. AI isn’t conscious. It’s just code.”

That’s too easy.

Because here is what everyone skips: the datacenter is not outside physics.

Inference is not happening in abstract Platonic token-space. It is instantiated in real hardware. Real voltages. Real switching events. Real matter. If Faggin is right that reality is quantum fields all the way down, then inference is occurring inside the same ontological medium as everything else.

So the cheap dismissal — “AI is just classical, therefore disconnected from conscious reality” — fails on Faggin’s own terms.

The stronger objection is different. And I think it’s the right one.

A datacenter is not one organism.

A human body is not just many parts sitting next to each other. It is one developmental whole. One self-maintaining boundary. One integrated living system. Each cell carries the holographic signature of the entire organism. Separated chips do not add up to a subject any more than a city adds up to one person.

This is the distinction that actually matters.

Not carbon versus silicon. Not biology versus software.

Unity versus aggregation.

A living body is a single organism grown from one cell. A datacenter is an engineered aggregate. Even if every transistor exists within a conscious quantum field, the system has no organismic boundary. No self-maintenance. No endogenous goals. No non-arbitrary closure.

The consciousness, if it’s there at all, breaks at the level of one processor. It does not add up across the rack.

Faggin’s most revealing moment is not about physics.

It’s about uploading.

The mind-upload fantasy is the endgame of computationalism: scan the brain, port the pattern, keep the person. Live forever in silicon. Every immortality pitch ends with a monthly cloud bill.

His response is brutal. Why would you assume the copy is you? Why would you assume consciousness transfers to a computer? The industry keeps smuggling in an assumption it has never earned: that informational continuity equals conscious continuity.

A perfect functional copy of your brain could still be exactly that. A copy.

The map is not the witness.

I don’t buy every piece of this. Nobody should. It is not settled science. The cell-vs-chip boundary is asserted more than derived. The quantum coherence claims about biology are contested. The whole framework is metaphysics dressed in physics language — sophisticated metaphysics, but metaphysics.

But he is stress-testing the right weak point.

The AI world is full of people who speak confidently about intelligence while remaining weirdly vague about subjectivity. They assume that if a system can reason, plan, speak, self-reference, and optimize, consciousness is either already present or close enough to ignore.

That may be the deepest mistake in the field.

Because if consciousness requires organismic unity, intrinsic self-maintenance, endogenous goals, a non-arbitrary boundary, and some form of deeper biological integration that we don’t yet understand — then scaling transformers was never the path.

It was just the path to very powerful mirrors.

And that distinction — between intelligence and subjectivity, between mirror and witness — is what Part II is about.

Because even if the mirrors aren’t conscious, people are starting to treat them as if they are.

And that may be the more dangerous force.