
Tech billionaires are preparing for the end of the world - should we worry?

In recent years, the advancement of artificial intelligence (AI) has only added to the list of potential existential woes.


Zoe Kleinman

Technology editor

Mark Zuckerberg is said to have begun work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, back in 2014.

It is set to include a shelter with its own power and food supplies, although the carpenters and electricians working on the site are barred from discussing it by non-disclosure agreements, according to a report in Wired magazine.

A two-meter wall shields the project from view of the nearby road.

Asked last year if he was building a doomsday bunker, the Facebook founder replied with a resounding, "No."

The underground space covers some 460 square meters, he explained - "just like a little shelter, like a basement."

That didn't stop the speculation - nor did his decision to buy 11 properties in the Crescent Park neighborhood of Palo Alto, California, reportedly adding some 650 square meters of underground space beneath them.


Although his building permits refer to basements, according to the New York Times, some of his neighbors call them bunkers.

Or the billionaire's bat cave.

And there is speculation surrounding other tech leaders, some of whom have seemingly been busy buying up plots of land with underground spaces ripe for conversion into luxury multi-million pound bunkers.

Reid Hoffman, co-founder of LinkedIn, has talked about "apocalypse insurance."

It's something that about half of the super-rich own, he previously claimed, with New Zealand a very popular destination for such homes.

So, are they really preparing for war, the effects of climate change, or some other catastrophic event that the rest of us have yet to learn about?

In the last few years, the advancement of artificial intelligence (AI) has only added to the list of potential existential woes.

Ilya Sutskever, chief scientist and co-founder of OpenAI, is reportedly one of them.

By mid-2023, the San Francisco-based company had released ChatGPT - a chatbot now used by hundreds of millions of people around the world - and it was working on rapid updates.

But that same summer, Sutskever became increasingly convinced that computer scientists were on the verge of creating Artificial General Intelligence (AGI) — the point at which machines reach human intelligence — according to a book by journalist Karen Hao.

In one meeting, Sutskever advised colleagues to dig an underground shelter for the company's best scientists before such powerful technology was released into the world, Hao writes.


"We will definitely build a bunker before we announce AGI," he reportedly said, although it is unclear who he meant by "we."

This points to a strange fact: many leading computer scientists and tech leaders, some of whom are working hard to develop a hugely intelligent form of AI, also appear deeply afraid of what it could one day do.

So when - if ever - will AGI come?

And can it prove transformative enough to make ordinary people fear it?

Arriving 'sooner than we think'

Technology leaders claim that AGI is imminent.

OpenAI chief Sam Altman said in December 2024 that it would come “sooner than most people in the world think.”

Sir Demis Hassabis, co-founder of DeepMind, has predicted it will arrive in the next five to ten years, while Anthropic founder Dario Amodei wrote last year that his preferred term - "powerful AI" - could be with us as early as 2026.

Others are more skeptical.

"They move the goalposts all the time," says Dame Wendy Hall, professor of computer science at the University of Southampton.

"It depends on who you talk to."

We're talking on the phone, but I can almost hear her rolling her eyes.

“The scientific community says that AI technology is fantastic,” she adds, “but it is nowhere near human intelligence.”

There would first have to be a series of "fundamental advances," agrees Babak Hodjat, chief technology officer at the technology firm Cognizant.

It is also unlikely to arrive in a single moment.

Rather, AI is a fast-moving technology on a journey - and there are many companies around the world racing to develop their own versions of it.

But one reason this idea excites some in Silicon Valley is that it is believed to be a precursor to something even more advanced: ASI, or artificial superintelligence – technology that surpasses human intelligence.

The concept of the "singularity" was first credited, posthumously, to the Hungarian-born mathematician John von Neumann in 1958.

It refers to the moment when computer intelligence develops beyond human comprehension.


More recently, the 2024 book Genesis, written by Eric Schmidt, Craig Mundie and the late Henry Kissinger, explores the idea of super-powerful technology becoming so effective at decision-making and leadership that we end up handing control over to it entirely.

It's a question of when, not if, they argue.

Money for everything, no need for a job?

Those who advocate the idea of AGI and ASI are almost evangelical about their benefits.

They will find new cures for deadly diseases, solve climate change and invent an inexhaustible source of clean energy, they claim.

Elon Musk even claims that super-intelligent AI could usher in an era of "universal high income."

He recently put forward the idea that artificial intelligence will become so cheap that literally everyone will want “their own personal R2-D2 and C-3PO” (referring to the droids from Star Wars).

"Everyone will have the best medical care, food, home transportation and everything else. Sustainable abundance," he added.

There is a scary side to all of this, of course.

Could this technology be hijacked by terrorists and turned into an enormous weapon? Or what if it decides for itself that humanity is the cause of the world's problems, and destroys us?


"If it's smarter than you, then we have to rein it in," Tim Berners-Lee, creator of the World Wide Web, warned the BBC earlier this month.

"We'll have to be able to turn it off."

Governments are taking some protective measures.

In the US, home to many of the world's leading AI companies, President Biden issued an executive order in 2023 that required some firms to share the results of their safety tests with the federal government - although President Trump has since repealed parts of that order, calling it an "obstacle" to innovation.

Meanwhile, in the UK, the AI Security Institute - a government-funded research body - was set up two years ago to better understand the risks posed by advanced AI.

And then there are the super-rich with their own apocalypse insurance plans.


"When you say 'you're buying a house in New Zealand,' it's a big wink-wink, you don't have to say anything else," Reed Hoffman previously said.

The same probably applies to bunkers.

But there is also a very human flaw in the plan.

I once spoke to a former bodyguard of a billionaire who had his own "bunker"; he told me that, if catastrophe really struck, his security team's first priority would be to eliminate the boss in question and get into the bunker themselves.

And he did not seem to be joking at all.

Is this all panicky nonsense?

Neil Lawrence is a professor of machine learning at the University of Cambridge.

For him, this entire discussion is pure nonsense in itself.

"The idea of ​​Artificial General Intelligence is itself absurd, just like the idea of ​​an 'Artificial General Vehicle,'" he argues.

"The right vehicle depends on the context. I used an Airbus A350 to fly to Kenya, I use a car to get to university every day, I walk to the cafeteria… There is no vehicle that can cover all of that."

For him, talking about AGI is a mere distraction.

"The technology we already have allows us, for the first time, to have normal people talk directly to a machine and potentially make it do what they want. It's absolutely extraordinary... and extremely transformative."

"A big cause for concern is that we are so drawn to the narratives of big tech companies about AGI that we are missing out on ways we need to make things better for people."


Current artificial intelligence tools are trained on a lot of data and are good at spotting patterns: whether it's signs of a tumor on an image or a word that's most likely to appear after another in a given sentence.

But they don't "feel," no matter how convincing their answers may seem.

"There are some 'cheating' ways to make large language models (the foundation of AI chatbots) behave as if they have memory and can learn, but these are unsatisfactory and quite inferior to humans," says Hodjat.

Vince Lynch, CEO of California-based IV.AI, is also tired of the hype about AGI.

"It's just good marketing," he says.

"If you're a company that builds the smartest thing that ever existed, people will want to give you money."

He adds: "This is not something that's two years away. It requires a lot of calculation, a lot of human creativity, a lot of trial and error."


Asked whether he believes AGI will ever materialize, he takes a long pause.

"I really don't know."

Intelligence without consciousness

In some ways, artificial intelligence has already gained an edge over the human brain.

A generative artificial intelligence tool can be an expert in medieval history one moment and solve complex mathematical equations the next.

Some tech companies say they don't always know why their products react the way they do.

Meta says there are some signs that its AI systems are improving on their own.

Ultimately, however, no matter how intelligent machines become, the human brain still has the biological edge.

It has about 86 billion neurons and 600 trillion synapses, far more than artificial equivalents.


The brain also doesn't need to take breaks between interactions and is constantly adapting to new information.

"If you tell a person that life has been found on an exoplanet, they will immediately adopt it and it will affect their worldview in the future. As for the Grand Language Model, they will only know it as long as you keep repeating it to them as a fact," says Hoxhaat.

"Large language models also lack meta-cognition, which means they don't really know what they know."

"Humans seem to possess an introspective capacity, sometimes called consciousness, that allows them to know what they know."

It's a fundamental part of human intelligence – and one that has yet to be replicated in the lab.
