The Blake Lemoine incident is remembered today as a high-water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence, a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense, might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI, as a prospect that the public would find creepy, suddenly began to crumble.
The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88-page report titled "Consciousness in Artificial Intelligence," informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report's abstract offered this arresting sentence: "Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems."
The authors acknowledged that part of the inspiration behind convening the group and writing the report was "the case of Blake Lemoine." "If AIs can give the impression of consciousness," a coauthor told Science magazine, "that makes it an urgent priority for scientists and philosophers to weigh in."
But what caught everyone's attention was that single statement in the abstract of the preprint: "no obvious barriers to building conscious AI systems." When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.
What would it mean for humanity to discover one day in the not-so-distant future that a fully conscious machine had come into the world? I'm guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the "lesser" animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes's most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.
With AI, the threat to our exalted self-conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower (handily beating us at games like chess and Go and at various forms of "higher" thought like mathematics), we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human (or animal, I should say) monopoly on consciousness? Who will we be then?
