The 'switch' that could prevent AI from wiping out the world


Mustafa Suleyman, co-founder of Google DeepMind, believes that a simple way to prevent AI from wiping out the world is to prevent it from automatically updating its source code.

The explosion of generative AI has raised a big question for the technology world: Are humans programming their own doom? Suleyman offers a suggestion for avoiding the scenario of artificial intelligence destroying human civilization.

According to Technology Review, Suleyman believes that developers should lock down AI's ability to self-upgrade. "You wouldn't want to let AI go off and update its own source code automatically without supervision. As with handling dangerous diseases or nuclear materials, this needs to remain under human control."

Suleyman describes building commands into the system that block automatic self-upgrades as a "switch" to keep humanity safe in a rapidly developing AI era.

Mustafa Suleyman, co-founder of Google DeepMind. Photo: Inflection AI

Last week, technology leaders including Bill Gates, Sam Altman, Elon Musk and Mark Zuckerberg gathered in Washington for a closed forum on AI. Suleyman sees this as an important step toward the community sitting down together and setting limits on the use of personal data.

"Basically, we need to set boundaries, clearly delineate the lines AI cannot cross," he said. Regulations need to be established from coding to end-to-end human interactions with AI.

Last year, Suleyman co-founded the startup Inflection AI. The company builds the Pi chatbot, which is designed to be a neutral listener and provide emotional support. Suleyman says Pi is not as popular as other chatbots, but it is highly controllable. That is why he is optimistic about a future in which AI can be effectively regulated.

Asked about existing concerns with artificial intelligence, the DeepMind co-founder said there are 101 practical problems that need to be solved immediately, from user privacy and AI bias in facial recognition to the control of online content.

Suleyman is one of many AI experts who have called for clear regulations around the technology. Demis Hassabis, another co-founder of DeepMind, has also said that AI development should proceed carefully and scientifically, under rigorous experimentation and testing, rather than as a massive race between companies.

According to Microsoft CEO Satya Nadella, to avoid the scenario of "AI slipping out of human hands", developers need to define clear classification levels so that models are transparent, open to scrutiny, and strictly accountable.

In March, dozens of experts, including "AI godfathers" Geoffrey Hinton and Yoshua Bengio, signed an open letter asking laboratories to pause training of models more powerful than OpenAI's GPT-4 for six months.
