LaMDA and the Sentient AI Trap

Gebru, who now leads the organisation Distributed AI Research, wants people to focus going forward on human welfare rather than robot rights. Other AI ethicists say they will no longer discuss superintelligent or conscious AI at all.

According to Giada Pistilli, an ethicist at Hugging Face, a company focused on language models, what AI is actually capable of “seriously lags behind” the current narrative surrounding it. “This tale simultaneously evokes dread, amazement, and excitement, yet it is constructed mostly on lies to advertise products and cash in on the frenzy,” she says.

Lemoine’s encounter is an example of what novelist and futurist David Brin calls the “robot empathy crisis.” Speaking at an AI conference in San Francisco in 2017, Brin predicted that within three to five years people would claim AI systems were sentient and demand that they be given rights. Back then he expected those appeals to come from a virtual agent that took the form of a woman or a child to maximise the human empathetic response, not “some guy at Google,” as he puts it.

According to Brin, “we’re going to be more and more puzzled over the line between fact and science fiction” during this transitional time, which includes the LaMDA event.

Brin based his 2017 prediction on advances in language models. He expects that the trend will lead to scams. If people were suckers for a chatbot as simple as ELIZA decades ago, he says, how hard will it be to persuade millions that an emulated person deserves protection or money?

“There’s a lot of snake oil out there, and mixed in with all the hype are genuine advancements,” Brin says. “Parsing our way through that stew is one of the challenges that we face.”

As sympathetic as LaMDA may seem, Yejin Choi, a computer scientist at the University of Washington, advises those awed by large language models to consider the case of the cheeseburger stabbing. A local news report in the United States described a teenager in Toledo, Ohio, who stabbed his mother in the arm during a dispute over a cheeseburger. On its own, though, the headline “Cheeseburger Stabbing” is ambiguous, and it takes some common sense to work out what happened. Prompting a model to generate text from “Breaking news: Cheeseburger stabbing” produces sentences about a man being attacked with a cheeseburger in an argument over ketchup, and about a man being arrested after stabbing a cheeseburger.
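
For readers who want to try a similar experiment, here is a minimal sketch that prompts an off-the-shelf language model with the ambiguous headline and samples a few continuations, using the Hugging Face transformers text-generation pipeline. The choice of model (“gpt2”) and the sampling settings are assumptions for illustration, not the setup Choi’s group used.

```python
# Sketch: sample continuations of an ambiguous headline from a generic
# open-source language model. "gpt2" is an arbitrary stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: Cheeseburger stabbing. "
outputs = generator(
    prompt,
    max_new_tokens=40,        # keep each continuation short
    num_return_sequences=3,   # generate several readings of the headline
    do_sample=True,           # sample instead of greedy decoding for variety
)

for i, out in enumerate(outputs, 1):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])
```

Running this a few times typically surfaces mutually incompatible readings of the headline, which is exactly the common-sense gap Choi points to.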

Language models are prone to error because understanding human language can require multiple forms of common-sense knowledge. Last month, more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks known as BIG-Bench, or Beyond the Imitation Game, which describes what large language models can do and where they fall short. Alongside standard language-model tests such as reading comprehension, BIG-Bench includes assessments of common sense and logical reasoning.
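
To make the benchmark concrete, below is a toy task in the general shape of BIG-Bench’s JSON task definitions: an ambiguous-headline question with scored answer options. The field names and the example itself are illustrative assumptions; the exact schema should be checked against the BIG-Bench repository.

```python
# Sketch of a BIG-Bench-style task definition (illustrative, not an official task).
import json

toy_task = {
    "name": "toy_commonsense_headlines",
    "description": "Pick the most plausible reading of an ambiguous headline.",
    "keywords": ["common sense", "reading comprehension"],
    "metrics": ["multiple_choice_grade"],
    "examples": [
        {
            "input": "Headline: 'Cheeseburger stabbing'. What most likely happened?",
            # Options are scored 1 (correct) or 0 (incorrect).
            "target_scores": {
                "A person was stabbed during an argument over a cheeseburger.": 1,
                "A cheeseburger was stabbed by a person.": 0,
                "A person was stabbed with a cheeseburger.": 0,
            },
        }
    ],
}

print(json.dumps(toy_task, indent=2))
```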

Researchers from the Allen Institute for AI’s MOSAIC project, which documents the common-sense reasoning abilities of AI models, contributed a test called Social-IQa. They asked language models, not including LaMDA, questions that require social intelligence, such as: “Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?” The team found that large language models were 20 to 30 percent less accurate than people.
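
As a rough illustration of how such a multiple-choice item can be scored, the sketch below conditions a generic causal language model on the question and picks the candidate answer with the highest log-likelihood. The model (“gpt2”), the answer options, and the scoring recipe are assumptions for illustration, not the MOSAIC team’s actual evaluation setup.

```python
# Sketch: score Social-IQa-style answer options by their log-likelihood
# under a generic causal language model, then pick the highest-scoring one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = ("Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. "
           "Why did Jordan do this?")
# Hypothetical answer options written for this sketch.
options = [
    "So no one else would overhear the secret.",
    "Because Jordan wanted to push Tracy away.",
    "Because Jordan was tired of standing.",
]

def option_log_likelihood(context: str, option: str) -> float:
    """Sum of token log-probabilities of `option` conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given the tokens before it.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lls = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the tokens that belong to the answer option.
    answer_lls = token_lls[:, ctx_ids.shape[1] - 1:]
    return answer_lls.sum().item()

scores = {opt: option_log_likelihood(context, opt) for opt in options}
print("Model's pick:", max(scores, key=scores.get))
```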
