Blake Lemoine, a software engineer on Google’s artificial intelligence team, has gone public with claims that he encountered “sentient” AI on the company’s systems.
Lemoine was placed on paid leave by Alphabet Inc. early last week, allegedly for violating the company’s confidentiality policy by sharing private information about the project with third parties.
In a Medium post titled “May be fired soon for doing AI ethics work,” he draws a parallel to previous members of Google’s AI ethics group, such as Margaret Mitchell, who were eventually dismissed after raising concerns.
The Washington Post published an interview with Lemoine on Saturday in which he said he had concluded that the Google AI he interacted with was a person, a judgment he made “in his capacity as a priest, not a scientist.”
The AI in question is known as LaMDA, or Language Model for Dialogue Applications, and it is used to create chatbots that interact with human users by adopting various personality tropes.
When Lemoine raised the issue internally, senior officials at the company rejected his requests to conduct studies that could verify it.
“Some in the broader AI field are pondering the long-term prospect of sentient or general AI,” Google spokesperson Brian Gabriel explained.
“According to our AI Principles, our team – which includes ethicists and engineers – has investigated Blake’s concerns and notified him that the data does not support his allegations.”
Asked about Lemoine’s suspension, the company said it does not comment on personnel matters.