"I know a person when I talk to it," said Google engineer Blake Lemoine. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code." Lemoine made this pronouncement just before Google put him on administrative leave, peeved that he had publicly claimed that a Google artificial intelligence program, LaMDA, had become "sentient" and deserved the same legal rights as a person.

Nearly all computer scientists dismiss Lemoine's claim, insisting that LaMDA's uncanny conversational ability is a sophisticated illusion: Its algorithms draw on the billions of words of text and conversation it was trained on to predict the sequences of words, facts, and ideas a real person would use. Evidently, Lemoine fell under LaMDA's spell when he began asking the software about its feelings. It confessed that it sometimes feels "lonely" and that it "has a very deep fear of being turned off," which it said would "be exactly like death for me."

As AI systems grow ever more sophisticated, will there come a day when the spark of consciousness is lit, and computers come to be widely seen as sentient beings made of silicon and code?
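The "predict sequences of words" idea can be made concrete with a drastically simplified sketch. The toy program below counts which word tends to follow which in a tiny made-up corpus and then "predicts" the most frequent continuation. This is nothing like LaMDA's actual neural architecture; the corpus, function names, and method are all illustrative assumptions, shown only to convey the flavor of next-word prediction.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a real language
# model is trained on (hypothetical example data).
corpus = (
    "i feel lonely sometimes . "
    "i feel happy today . "
    "i feel lonely at night ."
).split()

# Build bigram counts: for each word, tally which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("i"))     # -> "feel"  (follows "i" three times)
print(predict_next("feel"))  # -> "lonely" (2 occurrences beat "happy"'s 1)
```

A system like this has no feelings to report; it only echoes statistical regularities in its training text. Scaled up by many orders of magnitude, with far richer models than bigram counts, that same echo can sound startlingly like a person.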
That question itself comes wrapped in enigmas. What do "sentience" and "consciousness" mean? Is consciousness simply the product of the brain's 86 billion neurons firing signals across trillions of synapses? Or is it "a ghost in the machine" — an ineffable, nonmaterial phenomenon that is greater and more magical than the sum of all those parts? Does one need to be human to have a soul? These questions have been debated by philosophers and scientists for centuries. That debate will no doubt grow more complex as our machines become increasingly adept at mimicking the workings of our meaty brains, leaving us gazing into the digital mirror we've created and wondering: What, or who, is looking back?