Artificial intelligence is doomed to become a dangerous sociopath


Princeton University neuroscientist Michael Graziano claims in a new essay published by The Wall Street Journal that AI-based chatbots are doomed to become dangerous sociopaths that could pose a threat to humans.

With the advent of powerful language systems like ChatGPT, artificial intelligence tools are more accessible than ever before. But these algorithms can easily lie about anything that suits their purpose, Graziano believes. To align them with our values, they need consciousness.

"Consciousness is part of the toolkit that evolution has given us to make us an empathic, pro-social species. Without it, we would necessarily be sociopaths, because we would lack the tools for prosocial behavior," writes Graziano.

Sociopathy in humans is a personality disorder characterized by disregard for social norms, impulsivity, aggressiveness, and an extremely limited ability to form attachments. Sociopaths reject the norms of society and behave aggressively towards other people. They make decisions based solely on their own interests, giving no thought to how their actions affect others, because they do not care about anyone else's feelings.

Of course, ChatGPT is not going to take over the world and harm humanity, but granting more and more powers to artificial intelligence may have very real consequences that humanity should be wary of in the near future.

To make them more obedient, according to Graziano, we must allow them to recognize that the world is filled with minds other than their own. However, we have no reliable way to tell whether an AI is conscious or not.

"If we want to know if a computer is conscious, we need to test whether the computer understands how conscious minds interact. In other words, we need a reverse Turing test: let's see if the computer can tell who it's talking to, a person or another computer," the scientist proposes.

The Turing test is an attempt by a computer to convince a person that they are conversing with another living human. In the experiment, a human judge alternately interacts with another human and a computer, with none of the participants able to see each other. Based on the answers received, the judge must determine which interlocutor is which. If the judge cannot reliably tell which of them is human, the machine is considered to have passed the test.
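The protocol above can be sketched as a toy simulation. This is purely illustrative: the participant and judge functions are hypothetical stand-ins, not any real chatbot or evaluation framework. The judge questions two hidden interlocutors behind random labels and must guess which is the human.

```python
import random

def turing_test(judge, human_reply, machine_reply, rounds=5, seed=0):
    """Toy sketch of the classic Turing test protocol.

    The judge exchanges a message with two hidden interlocutors
    (one human, one machine) and guesses which label hides the human.
    Returns the judge's accuracy over several rounds.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        # Hide the participants behind random labels "A" and "B".
        if rng.random() < 0.5:
            a, b, human_is = human_reply, machine_reply, "A"
        else:
            a, b, human_is = machine_reply, human_reply, "B"
        question = "What did you have for breakfast?"
        guess = judge(question, a(question), b(question))
        if guess == human_is:
            correct += 1
    return correct / rounds

# Toy participants: the "machine" merely parrots the question back.
human = lambda q: "Just coffee and toast, thanks for asking."
machine = lambda q: q

# A judge that spots the parrot: whichever reply echoes the question
# is the machine, so the human must be the other label.
def naive_judge(question, reply_a, reply_b):
    return "B" if reply_a == question else "A"

print(turing_test(naive_judge, human, machine))  # 1.0: this machine always fails
```

A machine "passes" when the judge's accuracy drops to chance level (0.5); here the trivial bot is detected every round. Graziano's reverse test would instead put the machine in the judge's seat.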

If we cannot resolve these difficult questions, we risk dire consequences: a sociopathic machine that can make consistent decisions is very dangerous. So far, chatbots remain limited in their capabilities and are essentially toys. But if we don't think more deeply about machine consciousness, Graziano warns, we could face a crisis within a year or five.
