Are Google chatbots impersonating humans?


Laura Edelson, a postdoctoral researcher in computer science and a member of NYU's Center for Cybersecurity, commented on recent allegations that Google’s LaMDA (Language Model for Dialogue Applications), a family of conversational neural language models, showed signs of sentience.

Edelson stressed that misjudging an AI’s sentience could lead people to think we can safely delegate “large intractable problems” to an AI, when doing so could be disastrous and unethical. “We can’t wash our problems through machine learning, get the same result, and feel better about it because an AI came up with it. It leads to an abdication of responsibility,” said Edelson.