Scholars are divided: some argue machines could develop consciousness, while others deem it impossible. The answer hinges on how we define 'consciousness,' a concept that straddles philosophy and neuroscience. If consciousness is simply an emergent product of brain processes, then machines that mimic those processes could be conscious too. But how would we confirm it?
Daniel C. Dennett, director of the Center for Cognitive Studies and philosophy professor at Tufts University, is a leading authority. In 1996, he collaborated with MIT researchers on an intelligent robot potentially capable of consciousness. He's authored hundreds of articles on the mind's intricacies.
Dennett proposes a rigorous Turing test, administered with vigor, aggressiveness, and intelligence, in which a machine must convince a human interrogator that it is conscious. Michael Graziano, professor of psychology and neuroscience at the Princeton Neuroscience Institute, advocates a more direct method: scrutinizing how the machine actually processes information.
Consciousness extends beyond self-identification. Research identifies key component processes such as perception, decision-making, learning, reasoning, and language. Five major theories attempt to explain it, each championed by its own experts.
Graziano's research probes consciousness's neural roots. Why does the brain infer a non-physical subjective experience? What's its adaptive value? His team investigates these brain mechanisms.
His attention schema theory posits consciousness as the brain's simplified self-model. Graziano believes we could engineer machines with similar models: "If we build it so we can peer inside, we'll confirm it holds a rich self-description." By examining this processing, we verify machine consciousness.
For Graziano, consciousness could arise in software or hardware, biological or synthetic. Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and co-director of the Sackler Centre for Consciousness Science, is more cautious. He questions whether consciousness is substrate-independent, arguing that the brain-like structures essential for human consciousness may also depend on the biological material they are made of.
Integrated information theory (IIT) would seem to simplify the assessment of machines: any system with integrated information Φ > 0 is conscious to some degree. But calculating Φ exactly is computationally infeasible for complex systems, leaving us unable to verify consciousness even in a machine designed to be integrated.
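A toy sketch gives a sense of why exact Φ is out of reach: the computation must search over every way of cutting a system into two parts to find the partition that destroys the least information, and the number of such cuts grows exponentially with the number of elements. The snippet below (an illustration, not Tononi's actual measure; the function name is invented for this example) counts those bipartitions in Python.

```python
# Toy illustration (not the real IIT algorithm): an exact phi computation
# must examine every bipartition of a system's elements to find the
# minimum-information partition, and that search space explodes.

def num_bipartitions(n: int) -> int:
    """Number of ways to split n labeled elements into two non-empty parts."""
    return 2 ** (n - 1) - 1

# 302 is the number of neurons in the C. elegans nervous system.
for n in (3, 10, 20, 302):
    print(f"{n:>4} elements -> {num_bipartitions(n):.3e} bipartitions")
```

Even for the 302-neuron nervous system of a nematode, the count exceeds 10^90 bipartitions, and a human brain with roughly 86 billion neurons is astronomically beyond that; this is the sense in which Φ is verifiable in principle but incomputable in practice.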
Phil Maguire of Maynooth University, Ireland, insists machines cannot be conscious. In his view, a genuinely integrated system cannot be analyzed part by part; machines, being decomposable into independent components, are by definition disintegrated and therefore non-conscious.
Selmer Bringsjord, director of the Rensselaer AI and Reasoning Laboratory and an expert in the philosophy of AI, agrees. Author of What Robots Can and Can't Be, he argues that machines lack the non-material essence underlying human subjectivity, barring them from true consciousness.
Machines will keep growing smarter, surpassing humans in calculation, reasoning, analysis, and prediction. Consciousness might improve how they interpret their environment and make decisions, but it remains distinct from intelligence. The debate endures.