Skype co-founder Jaan Tallinn reveals the 3 existential risks he’s most concerned about


LONDON – Skype co-founder Jaan Tallinn has identified what he believes are the three biggest threats to humanity’s existence this century.

While the climate emergency and the coronavirus pandemic are seen as issues that require urgent global solutions, Tallinn told CNBC that artificial intelligence, synthetic biology and so-called unknown unknowns each represent an existential risk through to 2100.

Synthetic biology is the design and construction of new biological parts, devices and systems, while unknown unknowns are “things that we can’t perhaps think about right now,” according to Tallinn.

The Estonian computer programmer, who helped set up file-sharing platform Kazaa and video-calling service Skype in the early 2000s, has become increasingly worried about AI in recent years.

“Climate change is not going to be an existential risk unless there’s a runaway scenario,” he told CNBC via Skype.

To be sure, the United Nations has recognized the climate crisis as the “defining issue of our time,” recognizing its impacts as global in scope and unprecedented in scale. The international group has also warned there is alarming evidence to suggest that “important tipping points, leading to irreversible changes in major ecosystems and the planetary climate system, may already have been reached or passed.”

Of the three threats that Tallinn’s most worried about, AI is his focus and he’s spending millions of dollars to try and ensure the technology is developed safely. That includes making early investments in AI labs like DeepMind (partly so that he can keep tabs on what they’re doing) and funding AI safety research at universities like Oxford and Cambridge.

Referencing “The Precipice,” a book by Oxford professor Toby Ord, Tallinn said there’s a one-in-six chance that humans won’t survive this century. One of the biggest potential threats in the near term is AI, according to the book, while the likelihood of climate change causing a human extinction event is less than 1%.

Predicting the future of AI

When it comes to AI, no one knows just how intelligent machines will become, and trying to guess how advanced AI will be in the next 10, 20 or 100 years is basically impossible.

Trying to predict the future of AI is further complicated by the fact that AI systems are starting to create other AI systems without human input.

“There is one very important parameter when trying to predict AI and the future,” said Tallinn. “How strongly and how exactly will AI development feed back into AI development? We know that AIs are currently being used to search for AI architectures.”

If it turns out that AI isn’t great at building other AIs, there’s less cause for concern, as there will be time for AI capability gains to be dispersed and deployed, Tallinn said. If, however, AI is proficient at building other AIs, then it’s “very justified to be concerned ... about what happens next,” he said.
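To make that concrete: “AIs searching for AI architectures” refers to neural architecture search (NAS), in which a program proposes candidate model designs and keeps the ones that score best. The Python sketch below is purely illustrative, using random search over a toy search space; the scoring function is a stand-in (in a real system the score would come from training and evaluating each candidate model), and every name in it is hypothetical.

```python
import random

# Toy search space for a small feed-forward network: each candidate
# architecture is a choice of depth, width and activation function.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "layer_width": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture():
    """Draw one candidate architecture uniformly from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for the expensive step: real NAS would train the candidate
    model and return its validation accuracy. Here we fake a score so the
    sketch stays runnable."""
    return random.random()

def random_search(num_trials=20):
    """The simplest NAS loop: sample, evaluate, keep the best."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"Best architecture found: {arch} (score {score:.3f})")
```

Production systems replace the random sampler with a learned controller, such as reinforcement learning or evolutionary search, which is exactly the feedback loop Tallinn describes: AI development feeding back into AI development.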

Tallinn explained that there are two main scenarios AI safety researchers are looking at.

The first is a lab accident where a research crew leaves an AI system to train on some computer servers in the evening and “the world is no longer there in the morning.” The second is where the research crew produces a proto technology that then gets adopted and applied to various domains “where they end up having an unfortunate effect.”

Tallinn said he is more focused on the former because fewer people are thinking about that scenario.

Asked if he’s more or less worried about the idea of superintelligence (the hypothetical point where machines achieve human-level intelligence and then quickly surpass it) than he was three years ago, Tallinn said his view has become more “muddy” or “nuanced.”

“If one is saying that it’s going to be happening tomorrow, or it’s not going to happen in the next 50 years, both I would say are overconfident,” he said.

Open and closed labs

The world’s biggest tech companies are dedicating billions of dollars to advancing the state of AI. While some of their research is published openly, much of it is not, and this has raised alarm bells in some corners.

“The transparency question is not obvious at all,” said Tallinn, arguing it’s not necessarily a good idea to publish the details of a very powerful technology.

Some companies are taking AI safety more seriously than others, according to Tallinn. DeepMind, for example, is in regular contact with AI safety researchers at places like the Future of Humanity Institute in Oxford. It also employs dozens of people who are focused on AI safety.

At the other end of the scale, corporate centers such as Google Brain and Facebook AI Research are less engaged with the AI safety community, according to Tallinn.

If AI becomes more “arms racey,” then it’s better if there are fewer participants in the game, according to Tallinn, who has recently been listening to the audiobook of “The Making of the Atomic Bomb,” which recounts the big concerns at the time about how many research groups were working on the science. “I think it’s a similar situation,” he said.

“If it turns out that AI will not be very disruptive anytime soon, then sure it would be useful to have companies actually trying to solve some of the problems in a more distributed manner,” he said.
