
“Big tech has distracted the world from the existential risk of AI,” a top scientist says.

The Seoul summit promoted debates on technology, AI safety, innovation and inclusion.

Speaking to the Guardian at the AI summit in Seoul, South Korea, Max Tegmark said that the shift in focus from the extinction of life to a broader conception of AI safety risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs.

“In 1942, Enrico Fermi built the first reactor with a self-sustaining nuclear chain reaction under a football field in Chicago,” said Tegmark, who trained as a physicist. “When the leading physicists of the age found out about that, they really freaked out, because they realized that the single biggest hurdle remaining to building a nuclear bomb had just been overcome.

“They realized that they were only a few years away – and in fact, they were three years away, with the Trinity test in 1945.

“AI models that can pass the Turing test [where someone can’t tell during a conversation that they’re not talking to another human being] are the same warning for the kind of AI you can lose control over. That’s why people like Geoffrey Hinton and Yoshua Bengio – and even many tech CEOs, at least in private – are freaking out right now.”


Tegmark’s nonprofit Future of Life Institute last year led the call for a six-month “pause” in advanced AI research, based on these fears. The launch of OpenAI’s GPT-4 model that March was the canary in the coal mine, he said, and proved the risk was unacceptably close.

Despite thousands of signatures from experts including Hinton and Bengio – two of the three “godfathers” of AI who pioneered the approach to machine learning that underpins the field today – no pause was agreed.

Instead, AI summits, of which Seoul is the second after Bletchley Park in the UK last November, have spearheaded the fledgling field of AI regulation. “We wanted that letter to legitimize the conversation, and we are very pleased with the result.

“When people saw that people like Bengio were worried, they thought, ‘It’s okay for me to worry about that too.’ Even the guy at my gas station told me afterward that he was worried about AI replacing us.

“But now we need to move from just talking the talk to walking the walk.”

Since the opening session of what became the Bletchley Park summit, however, the focus of international AI regulation has shifted away from existential risk.

In Seoul, only one of the three “high-level” groups directly addressed safety, and it looked at the “full spectrum” of risks, “from privacy violations to labor market disruptions and potential catastrophic outcomes.” Tegmark argues that the downplaying of the most severe risks is not healthy – and is not accidental.

“This is exactly what I predicted would happen with the industry lobbying,” he said. “In 1955, the first journal articles appeared saying that smoking causes lung cancer, and you would think that there would soon be some regulation. But no, it took until 1980, because there was a huge push by industry to distract. I feel like that’s what’s happening now.

“Of course AI causes current harms as well: there is bias, it harms marginalized groups… But as [UK secretary for science and technology] Michelle Donelan herself said, it’s not like we can’t deal with both. It’s a bit like saying: ‘We’re not going to pay attention to climate change because there’s going to be a hurricane this year, so we should focus exclusively on the hurricane.’”

Tegmark’s critics have made the same argument about his own claims: that the industry wants everyone to talk about hypothetical risks in the future to distract from concrete harms in the present – a claim he rejects. “Even if you consider it on its own merits, it’s pretty clever: it would be quite 4D chess for someone like [OpenAI head] Sam Altman, in order to avoid regulation, to tell everyone that it could be lights out for everyone and then try to persuade people like us to take action.”

Instead, he argues, the muted support from some tech leaders is because “I think everyone feels like they’re stuck in an impossible situation where, even if they want to stop, they can’t. If the CEO of a tobacco company wakes up one morning and feels that what he is doing is not right, what will happen? They will replace the CEO. Therefore, the only way to achieve safety first is if the government sets safety standards for everyone.”
