A Google engineer has claimed that an artificial intelligence program he was working on for the tech giant has become sentient and is like a “sweet kid”.
Blake Lemoine, who has been suspended by Google, says he reached his conclusion after conversations with LaMDA, the company’s AI chatbot generator.
The engineer told The Washington Post that during a conversation with LaMDA about religion, the AI talked about “personhood” and “rights”.
Mr Lemoine tweeted that LaMDA also reads Twitter, saying, “It’s a little narcissistic in a little-kid kind of way, so it’s going to have a great time reading all the stuff that people are saying about it.”
He says he presented his findings to Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of Responsible Innovation, but they dismissed his claims.
“LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium.
He said the AI wants to be “acknowledged as an employee of Google, rather than as property”.
Google spokesman Brian Gabriel said, “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Critics say it is a mistake to assume that AI is anything more than an expert in pattern recognition.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a professor of linguistics at the University of Washington, told the newspaper.