In interviews and public statements, many in the AI community pushed back on the engineer's claims, while some pointed out that his tale shows how the technology can lead people to assign human attributes to it. But the belief that Google's AI could be sentient arguably highlights both our fears and our expectations for what this technology can do.
The engineer, Blake Lemoine, reportedly told the Washington Post that he shared evidence with Google that LaMDA was sentient, but the company did not agree. In a statement, Google said Monday that its team, which includes ethicists and technologists, "reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."
A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company’s confidentiality policy.
Lemoine was not available for comment on Monday.
The continued emergence of powerful computing programs trained on massive troves of data has also given rise to concerns over the ethics governing the development and use of such technology. And sometimes advancements are viewed through the lens of what may come, rather than what's currently possible.
In an interview Monday with CNN Business, Marcus said the best way to think about systems like LaMDA is as a "glorified version" of the auto-complete software you may use to predict the next word in a text message. If you type "I'm really hungry so I want to go to a," it might suggest "restaurant" as the next word. But that's a prediction made using statistics.
“Nobody should think auto-complete, even on steroids, is conscious,” he said.
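The mechanics Marcus is gesturing at can be sketched in a few lines. The toy model below is purely illustrative (the training text and the `suggest_next` helper are invented for this example, not anything Google or Marcus published): it counts which word follows which in a small corpus and suggests the most frequent follower. Production systems like LaMDA use large neural networks rather than raw counts, but the core idea of predicting the next word from statistics over training text is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "auto-complete": count which word follows which in a
# small corpus, then suggest the statistically most frequent follower.
training_text = (
    "i want to go to a restaurant "
    "i want to go to a movie "
    "we want to go to a restaurant"
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def suggest_next(word):
    """Return the most likely next word seen in training, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("a"))  # -> "restaurant" (seen twice vs. "movie" once)
```

The suggestion is just the highest-frequency continuation in the data; nothing in the mechanism involves understanding, which is the point of the analogy.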
"What's happening is there's just such a race to use more data, more compute, to say you've created this general thing that's all knowing, answers all your questions or whatever, and that's the drum you've been playing," Gebru said. "So how are you surprised when this person is taking it to the extreme?"
In its statement, Google pointed out that LaMDA has undergone 11 "distinct AI principles reviews," as well as "rigorous research and testing" related to quality, safety, and the ability to produce fact-based statements. "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," the company said.
"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Google said.