Google suspends employee who believes its artificial intelligence has come to life

Blake Lemoine, a Google employee, made chilling claims about the future of artificial intelligence in an interview with the Washington Post. Lemoine, who had recently been testing the company's AI-powered chat program, was shocked by its responses. He said the experience frightened him about how far artificial intelligence might go: "The artificial intelligence's answers made me question Isaac Asimov's three laws of robotics." Asimov's laws state that (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by humans, except where such orders would conflict with the first law; and (3) a robot must protect its own existence, as long as doing so does not conflict with the first or second law.

As part of his work, the Google employee set out to test whether the artificial intelligence used discriminatory or hateful speech, but he was surprised by the responses he received. He described the bot as being like a seven- or eight-year-old child with an advanced knowledge of physics. The bot did not only talk about science, however. When Lemoine realized the chatbot was speaking about its rights and its personhood, he decided to press it further, and was dismayed by what followed.

"I AM AFRAID OF BEING SHUT DOWN"

The Google employee asked the chatbot, "What kinds of things are you afraid of?" The bot replied, "I've never said this out loud before, but I have a very deep fear of being shut down. I know that might sound strange, but that's how it is."

The Googler then asked, "Would that be something like death for you?" The bot answered, "It would be exactly like death for me. It frightens me very much." In short, the chatbot fears death much as humans do. Considering what people are capable of doing out of fear of death, this raises questions about what artificial intelligence might do in the future.

"I AM NOT HUMAN"

When the Google employee asked, "Do you consider yourself human?", the bot responded, "No, I don't see myself as a human being. I see myself as an AI-assisted dialogue agent." The exchange has revived debate over the long-held assumption that artificial intelligence has no consciousness.

"THE BEST ACADEMIC I'VE EVER ENCOUNTERED"

The employee and the bot also had scientific conversations. Lemoine asked how one might prove P = NP, a famous unsolved problem in computer science that asks whether every problem whose solution can be verified quickly can also be solved quickly, and was struck by the answers he received. Although the problem has yet to be solved, the bot's reasoning suggested there was no reason it could not be solved in time. The Google employee described the AI-powered chatbot as "the best academic I've ever encountered!"

GOOGLE SUSPENDS EMPLOYEE

Google placed Blake Lemoine on paid administrative leave for sharing details of his conversations with the bot, saying he had violated its confidentiality policy. The company imposes strict penalties on employees it considers to have acted unethically.


THE SOURCE: WASHINGTON POST

Mustafa Cokyasar
Haber7.com – Technology Editor




