Artificial Intelligence Could Cause Human Extinction

Artificial intelligence (AI). Photo: Science HowStuffWorks

VIVA – Artificial intelligence (AI) could lead to the extinction of humanity, experts, including the heads of OpenAI and Google DeepMind, have warned.

A group of research executives, experts and other public figures put their names to a one-sentence statement published on Tuesday by the umbrella group the Center for AI Safety (CAIS).

"Reducing the risk of AI extinction should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said.

The preamble to the statement is nearly as substantial as the statement itself. It said that many people are "increasingly discussing the broad spectrum of important and urgent risks from AI."

"Even so, it can be difficult to voice concerns about some of the most severe AI risks. The brief statement below aims to overcome this obstacle and open up discussion," the group said.

It is also intended to create common knowledge of the growing number of experts and public figures who take some of the most severe risks of advanced AI seriously.

Artificial intelligence (AI). Photo: Analytics Insight

Two of the three so-called "Godfathers of AI" who shared the 2018 Turing Award, Geoffrey Hinton and Yoshua Bengio, appear at the top of the list of signatories.

The third, Yann LeCun, who works at Meta, Mark Zuckerberg's parent company of Facebook, has not signed.

Google DeepMind CEO Demis Hassabis and Sam Altman, CEO of OpenAI, the company behind the chatbot ChatGPT, come next, along with Dario Amodei, CEO of the AI company Anthropic.

Various academics and business people, many of whom work at companies like Google and Microsoft, make up the bulk of the list.

But it also included other well-known people such as former Estonian President Kersti Kaljulaid, neuroscientist and podcast presenter Sam Harris, and Canadian pop singer and songwriter Grimes.

The letter was published to coincide with the US-EU Trade and Technology Council meeting in Sweden, where politicians and tech figures are expected to talk about potential AI regulation.

EU officials also said on Tuesday that the bloc's industry chief, Thierry Breton, will hold a face-to-face meeting with OpenAI's Altman in San Francisco next month.

Both are expected to discuss how the company will implement the bloc's first attempt to regulate artificial intelligence, which is scheduled to take effect in 2026.

Despite his recent calls for regulation, Altman threatened to pull his company out of Europe when the EU first floated these plans, saying that the proposals went too far, before retracting the threat.

The one-sentence statement on AI-related risks doesn't say what the potential risks are, how severe the signatories consider them to be, how they might be mitigated, or who should be responsible for doing so, beyond saying that this "should be a global priority."

Before publishing the statement, the Center for AI Safety posted an exploration of recent comments by Yoshua Bengio, scientific director of the Montreal Institute for Learning Algorithms (Mila), theorizing about how AI could pose an existential threat to humanity.

Bengio argues that, for this to happen, an AI would first need to be able to pursue goals by taking actions in the real world, something that has so far been attempted only in closed environments such as popular games like chess and Go.

At that point, he says, a superintelligent AI may pursue goals that conflict with human values.

Bengio identified four ways AI might end up pursuing goals that could seriously clash with humanity's best interests.

The main one involves humans themselves: the prospect of rogue human actors instructing an AI to do something harmful. Users have, for example, asked ChatGPT to formulate plans for achieving world domination.

He also says the AI may be given goals that are not properly specified or explained, and from there draw the wrong conclusions about its instructions.

A third possible area is the AI coming up with its own subgoals, in pursuit of broader targets set by humans, which might help it achieve the targets but at too great a cost.

Finally, and perhaps looking a little further into the future, Bengio said an AI may eventually come under some kind of evolutionary pressure to behave more selfishly, as animals do in nature, in order to secure its survival and that of its kind.

Bengio's recommendations for mitigating this risk include more research on AI safety, at both the technical and policy levels.
