‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA

The controversy sparked by Blake Lemoine, a Google engineer who believes that LaMDA (Language Model for Dialogue Applications), one of the company’s most sophisticated chat programmes, is sapient, has a curious element: actual AI ethics experts are all but declining to discuss the question of AI sapience, or treating it as a distraction. Their reasoning is sound.

Lemoine’s edited transcript made it abundantly clear that LaMDA was drawing on any number of websites to generate its text; its interpretation of a Zen koan could have come from anywhere, and its fable read like an automatically generated story (though its description of the monster as “wearing human skin” was a cute HAL-9000 touch). There was not a glimmer of cognizance there, only little magic tricks papering over the fissures.

Looking at the social media reactions to the transcript, though, it is easy to see how someone could be duped: even some intelligent people expressed astonishment and a readiness to believe. The concern here, then, is not that the AI is actually sentient but that we are well positioned to build sophisticated machines that resemble humans so convincingly that we cannot help but anthropomorphise them, and that major tech companies can exploit this in deeply unethical ways.

We are, after all, highly capable of empathising with the nonhuman, as should be evident from how we treat our pets, how we engage with Tamagotchi, or how we reload a save after unintentionally making an NPC weep in a video game. Consider what such an AI could do in a therapeutic role, for example. What would you be prepared to say to it, even knowing it wasn’t human? And how much would the business that created the therapy bot value such priceless data?

It gets spookier still. Systems engineer and historian Lilly Ryan warns that “ecto-metadata,” the metadata you leave behind online that demonstrates how you think, will soon be open to abuse. Imagine a scenario where a business built a robot replica of you and retained control of your digital “ghost” even after you passed away. There would be a ready market for such ghosts of celebrities, former acquaintances, and coworkers, and because they would present themselves to us as a trusted family member or as someone with whom we had already formed a parasocial relationship, they would serve to elicit still more data. It gives the concept of “necropolitics” a whole new meaning. If there is an afterlife, Google might control it.

It is entirely possible that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly outrageous claims while still encouraging us to anthropomorphise it just enough to let our guard down. This is similar to how Tesla is careful about how it markets its “autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it can (with deadly consequences). None of this requires sapient AI, and all of it predates the singularity. Instead, it raises the murkier social question of how we treat our technology and what happens when users regard their AIs as sentient beings.

In “Making Kin With the Machines,” the academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite examine the relationship we have with our machines, asking whether we are modelling or acting out something truly awful with them, as some people do when they are abusive or sexist toward their largely feminine-coded virtual assistants. In her section of the essay, Suzanne Kite draws on Lakota ontologies to argue that it is crucial to recognise that sapience does not set the parameters for what (or who) constitutes a “person” deserving of respect.

This is the flip side of the current AI ethical conundrum: companies can take advantage of us if we treat their chatbots like our best friends, but it is equally perilous to treat them as worthless objects. An exploitative approach to our technology may only serve to encourage exploitative social and environmental practices. A humanoid chatbot or virtual assistant should be treated with respect, lest its resemblance to humanity train us to treat real people cruelly.

For Kite, the ideal relationship between oneself and one’s surroundings is one of humility and reciprocity, acknowledging interdependence and connectedness. She goes on to say that stones are regarded as ancestors: they actively speak, they communicate with and through humans, and they perceive and understand. Above all, stones want to help. The agency of stones bears directly on the debate over artificial intelligence, because AI is created not only from code but from materials of the earth. It is a remarkable way of connecting something usually regarded as the essence of artificiality to the natural world.

What follows from such a viewpoint? Science fiction novelist Liz Henry suggests one answer: “We may agree that our relationships to everything in the world around us are worthy of emotional work and attention. In the same way that we should respect everyone around us and recognise that each person has a unique life, viewpoint, needs, feelings, ambitions, and position in the world.”

The need to make kin of our machines must be weighed against the many ways this can and will be used as a weapon against us in the coming era of surveillance capitalism. Much as I would love to be a persuasive scholar standing up for Mr. Data’s rights and dignity, it is this more complicated and muddy reality that demands our attention. After all, a robot rebellion does not require sapient AI, and we can take part in one by liberating these tools from the most heinous forms of capital exploitation.
