Every epoch has its leading machine, and every leading machine becomes the model for everything. In the seventeenth century it was the clockwork: God as clockmaker, the cosmos as mechanism, the animal as automaton. In the nineteenth century it was the steam engine: energy, pressure, entropy as the basic vocabulary of the living. In the twentieth and twenty-first centuries it is the computer. The thesis that consciousness is at its core computation, that thinking, feeling and experiencing are nothing other than information processing on a biological or artificial substrate, is called computationalism. For Gwendolin Kirchhoff, computationalism is the key concept for diagnosing the signature confusion of the present age: the confusion of computation with consciousness. If you wonder why this position is so widespread: it is currently the most influential assumption in cognitive science and the philosophical foundation of every claim that machines could become conscious.
#The analogy that takes itself for ontology
Computationalism is not a discovery but an analogy. The decisive move is that the analogy is at some point forgotten. At first the claim is: the brain works like a computer. Then: the brain is a computer. And finally: every system that carries out the same computation is conscious. Lewis Mumford described this procedure in 1967 in The Myth of the Machine as the central move of the modern era: the principle of the calculating machine becomes autonomous, though it is only the externalisation of an imprisonment already accomplished, the confinement of the human being within one section of itself, namely abstract rationality (cf. Mumford, 1967).
What distinguishes computationalism from functionalism is its degree of radicality. Functionalism says: consciousness is determined by functional roles. Computationalism says: these roles are computational processes, and of exactly the kind that a universal computer can carry out. The Turing machine thereby becomes the measure of all things. Everything that can be formally described can be computed; everything that can be computed can be realised on silicon; and what is realised on silicon is, so the claim goes, indistinguishable from its biological original.
#Coherence without consciousness
In the Everlast AI debate (2026) Joscha Bach defended the computationalist position in its strongest form: consciousness is a simulation of what it would be like if there were an agent perceiving itself and its environment in the present moment. Cells communicate, become coherent, and from this pattern of coherence the model of the organism and its environment arises. To understand it, one must rebuild it through simulation. His philosophical project is the attempt to do exactly that (cf. Gwendolin Kirchhoff, Everlast AI Debate, 2026, 39:55–42:57).
Gwendolin Kirchhoff named the central error of this position: one can produce coherence, for instance in a blockchain, but that does not make the blockchain conscious. One can produce second-order perception: a camera that is observed by a second sensor while it records. That too would not be consciousness. None of these definitions grasps the actual content of consciousness, which can only be apprehended from the first-person perspective (cf. Gwendolin Kirchhoff, Everlast AI Debate, 2026, 45:10–45:41).
The point is not that coherence and information processing are irrelevant; both occur in living systems. The point is that the computationalist conclusion, that consciousness must also be present wherever the same computation takes place, is a leap that nothing justifies. Between the description of a process and the process itself there yawns a difference that Kirchhoff designates as ontological levelling: the technician believes that everything they can simulate is then exactly the same as its simulation. Between simulation and original there is no longer any difference. Being is levelled (cf. Gwendolin Kirchhoff, The Abolition of the Human, 2024, 02:35).
#The machine as analogy-source
Behind computationalism stands an epistemological problem older than any computer. Jochen Kirchhoff described it as the choice of the analogy-source: we always think in analogies, and the choice of starting point determines what we can recognise. Whoever begins from the machine produces a picture of reality in which consciousness cannot occur, because the machine is a devitalised artefact, steered from the outside in, toward the purposes of the human understanding. The organic, by contrast, organises itself from inside out and brings forth its form from itself. If you bring this difference to mind, the problem becomes visible: the machine analogy excludes consciousness already in its starting point, and so produces a worldview in which consciousness cannot in principle occur (cf. Jochen Kirchhoff, Was die Erde will, 1998).
Descartes made the first move: he defined animals as artificial machines, as automata. If a living dog is cut open and makes sounds, it merely squeaks like a machine; once it has been defined as a machine, it does not suffer. Kirchhoff showed in the Everlast AI debate where this logic leads: the moment we define something as a machine, we justify destructive interventions, loss of empathy and the arbitrary exploitation of whatever we have so defined (cf. Gwendolin Kirchhoff, Everlast AI Debate, 2026, 79:05–79:50).
Computationalism repeats the Cartesian move, only in reverse: Descartes declared the living to be a machine; computationalism declares the machine to be potentially living. Both times the difference between the mechanical and the organic is levelled, and if you think the thought through, every reason to take consciousness seriously disappears along with that difference. Schelling, in the Ideas for a Philosophy of Nature (1797), called such a thought a monstrosity and an absurdity: the inorganic is only the negated organism, the dead only suppressed life. There is nothing absolutely dead.
#The limit of computation
The boot problem poses the hardest question for computationalism: all components of a living cell can be synthesised. Every function can be reproduced in isolation. And yet no living cell has ever been assembled from its components. What computation does not contain is being-alive itself. Kirchhoff put the connection succinctly: there is a drive toward consciousness, an interior dimension that ontologically grounds the cosmos as a whole. The question is which metaphysics produces a boot problem and a hard problem in the first place: the mechanistic one, not that of the living cosmos (cf. Gwendolin Kirchhoff, Everlast AI Debate, 2026, 75:53–76:34).
Natural philosophy offers no theory that derives consciousness from any substrate. It does not face the problem, because it does not conceive of consciousness as a product of matter but as the ground in which matter appears. The human body is, in this perspective, a subtle receiving organ for an information field that pervades everything, reaching into many layers and depths of space. To equate consciousness with a reduction to a few individual functions, all of them relatively superficial, Kirchhoff considers mentally ill: not a rhetorical exaggeration but the precise diagnosis of a category confusion (cf. Gwendolin Kirchhoff, Everlast AI Debate, 2026, 17:13).
Computationalism is the philosophical form of a confusion: it takes the tool that human beings built to compute aspects of reality and declares it to be reality itself. The contained becomes the container. If you see through this confusion, you stand before a decision that is not technical but metaphysical: is the cosmos a machine that happens to produce consciousness, or a living organism that uses machines as tools? The entries on substrate independence, functionalism and the boot problem pursue the individual strands of this question in greater depth.