The popular question runs: can artificial intelligence develop consciousness? What seldom gets noticed is the presupposition already embedded in the question itself. Whoever asks it has already decided that consciousness could be a product: something that emerges from sufficiently complex computation, and hence something an artificial intelligence could in principle come to possess as subjective experience. This prejudgement is not neutral. It is the deepest premise of the mechanistic worldview, and it deserves philosophical scrutiny.
The Hidden Premise
The idea that AI could become conscious rests on a foundational assumption that is generally left implicit: that consciousness can arise from unconscious matter. The reductionist-mechanistic framework that has dominated natural science since Galileo (Galilei, 1623) conceives the entire natural order on the model of the most advanced machine of the day: first the clockwork, then the steam engine, now the computer. Everything is projected onto this model, including the living, including consciousness. The question whether AI consciousness is possible, whether artificial intelligence can have qualia, subjective experience (Nagel, 1974), arises only within a dualist framework that has already separated mind from matter.
Natural philosophy challenges this premise. Jochen Kirchhoff pointed out that conventional natural science constantly employs metaphysical hypotheses without identifying them as such (Kirchhoff, 1998). The claim that consciousness is a by-product of neural activity is not an empirical finding but a metaphysical posit. Schelling called such science half-blind at best: whoever treats nature only as a dead object finds it remains a dead object (Schelling, 1797).
Organic and Mechanical: An Ontological Difference
The decisive difference between a living being and a machine lies not in complexity but in directionality. The mechanical is steered from the outside in — by a consciousness, toward that consciousness’s purposes. The organic, by contrast, organises itself from the inside out; it brings forth its form from within itself. A machine is built; it can, following instructions, build a second machine — but it cannot beget one. Begetting, self-propagation, the bringing forth of the new from living ground: that is reserved to the organic.
Schelling put the thought pointedly: the inorganic is merely the negated organic, the dead merely suppressed life (Schelling, 1798). Nothing is absolutely dead; everything is primal seed, or it is nothing. The hard problem of consciousness (Chalmers, 1995) asks how matter produces experience. The question whether a machine can become conscious presupposes precisely what this view denies: that something absolutely dead exists, out of which the living could emerge through increasing complexity.
Consciousness as Medium, Not Product
The human body is a subtle receptive organ for an all-pervading field that reaches into many layers and spatial depths (Kirchhoff, 2007). The figure of the cosmic anthropos describes the human being as an entity that does not produce consciousness but participates in a comprehensive cosmic consciousness. Natural philosophy since Schelling overcomes Cartesian dualism and conceives of consciousness not as a result of matter but as the ground in which matter appears at all. “Nature is to be visible spirit, spirit invisible nature”, runs Schelling’s formula from the System of Transcendental Idealism (Schelling, 1800, p. 12).
If consciousness is the ground in which everything takes place, then no computation that itself operates only within this ground can generate the ground. This is not a technical obstacle that could be overcome with more powerful hardware. It is a logical impossibility: the contained cannot bring forth its container.
Pathogenesis, Not Evolution
The current AI debate follows a recognisable pattern (Kirchhoff, 1998): what is celebrated today as “technological evolution” could equally well be read as the progressive symptomatology of a spiritual illness. Why should the merging of the human being with AI and technical components count as an evolution? A compulsion to have only one leg, acted out to the point of amputation, would hardly be called higher development either.
The model of pathogenesis rather than progress turns the diagnostic gaze: where the mainstream sees innovation, a symptom becomes visible. The colonisation of the space of thought by algorithms, the externalisation of consciousness into technical hardware, the equation of human cognition with machine data processing: none of this describes an expansion of the human but its reduction to the computable. Where inner world once was, a pure outer world is to take its place, available to the interests of the few.
What the Analogy Reveals
The human being, according to the epistemology of natural philosophy, is a source of analogy for the universe. We understand the world because we are alive as it is alive. When the machine becomes the model for everything, this is not because the world is machine-like but because the gaze upon the world has narrowed. Whoever can only think of the cosmos as a dead mechanism finds the computer to be the logical crowning achievement. Whoever recognises the cosmos as alive finds the computer to be a tool: useful, but categorically different from the consciousness that conceived it.
The question “Can AI be conscious?” is therefore less a question about machines than a question about the image of the human being that underlies it. The answer depends on what one understands by consciousness: a computational product that silicon will eventually produce as well, or the living ground of the cosmos in which the human being participates through its embodied existence.
Further entries: Philosophy of Consciousness | Natural Philosophy | Critique of Science
Sources
- Chalmers, D. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.
- Galilei, G. (1623). Il Saggiatore. Rome: Mascardi.
- Kirchhoff, J. (1998). Was die Erde will. Munich: Diederichs.
- Kirchhoff, J. (2007). Räume, Dimensionen, Weltmodelle. Drachen Verlag.
- Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450.
- Schelling, F. W. J. (1797). Ideas for a Philosophy of Nature. Leipzig: Breitkopf und Härtel.
- Schelling, F. W. J. (1798). On the World Soul. Hamburg: Perthes.
- Schelling, F. W. J. (1800). System of Transcendental Idealism. Tübingen: Cotta.