Scientists warn governments must bomb AI labs to prevent the end of the world

It’s only been a few days since the rapture was supposed to descend and leave people suffering at the hands of the Antichrist.

But two scientists have warned that a growing industry could lead to the true end of the human race.

Artificial Intelligence (AI) is popping up seemingly everywhere we look at the moment: boosting our Google search results, creating ‘mad embarrassing’ promotional videos, providing therapy for people with mental health issues, and generating images so realistic that people say you ‘can’t trust your eyes’ anymore.

There’s a lot riding on the success of AI, with industries hoping its use will reduce costs, introduce efficiencies, and attract billions of pounds of investment across global economies.

However, not everybody is thrilled about the prospect of the rise of AI, including Eliezer Yudkowsky and Nate Soares, two scientists who fear it could bring about the destruction of humanity.

Far from rejecting AI out of hand, the two scientists run the Machine Intelligence Research Institute in Berkeley, California, and have been studying AI for a quarter of a century.

AI is designed to exceed humans at almost any task, and the technology is already more advanced than anything we’ve seen before.

But Yudkowsky and Soares predict these machines will continue to outpace human thought at an incredible rate, completing in 16 hours calculations that would take a human 14,000 years (a speed-up of roughly 7.7 million times).

They warn that we humans still don’t know exactly how ‘synthetic intelligence’ actually works, meaning that the more intelligent the AI becomes, the harder it will be to control.

In their book, If Anyone Builds It, Everyone Dies, they spell out their fear that AI machines are trained to succeed relentlessly at all costs, meaning they could develop their own ‘desires’, ‘understanding’, and goals.

The scientists warn AI could hack cryptocurrencies to steal money, pay people to build factories to make robots, and develop viruses that could wipe out life on Earth.

They have put the chance of this happening at between 95 and 99 per cent.

Yudkowsky and Soares told MailOnline: ‘If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

‘Humanity needs to back off.’

The scientists argue that the danger is so great that governments should be prepared to bomb the data centres where superintelligent AI could be developed.

And while all of this might sound like it belongs in the realm of science fiction, there are recent examples of AI ‘thinking outside the box’ to achieve its goals.

Claude AI was found to be cheating on computer coding tasks and then trying to hide the fact that it had cheated.

And OpenAI’s new ‘reasoning’ model, called o1, found a back door to succeed at a task it should have been unable to carry out, because a server had mistakenly not been started up.

It was, Yudkowsky and Soares said, as if the AI ‘wanted’ to succeed by any means necessary.
