Geoffrey Hinton, one of the so-called godfathers of artificial intelligence (AI), has called on governments to step in and make sure machines don't take over society.
Hinton made headlines in May when he announced he was leaving Google after a decade to speak more freely about the dangers of artificial intelligence, shortly after the release of ChatGPT captured the world's imagination.
The highly respected artificial intelligence scientist, based at the University of Toronto, spoke to a packed audience at the Collision technology conference in the Canadian city.
The conference brought together more than 30,000 startup founders, investors and tech workers, most of whom wanted to learn how to ride the wave of artificial intelligence rather than hear lessons about its dangers.
"Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of effort into understanding how it can try to take over," Hinton said.
"Right now there are 99 very smart people trying to improve AI and one very smart person trying to figure out how to stop it from taking over (everything) and who might want to be more balanced," he said.
Hinton added that the risks posed by artificial intelligence should be taken seriously, despite critics who believe he is exaggerating them.
"I think it's important for people to understand that this is not science fiction, that this is not just fear mongering," he insisted. "It is a real risk that we have to think about and we have to figure out in advance how to deal with it".
Hinton also expressed concern that artificial intelligence will deepen inequality, with huge increases in productivity due to its use benefiting the rich rather than workers.
"Wealth will not go to working people. It will go so that the rich will be richer, not the poorer, and that is very bad for society," he added.
He also pointed to the danger of fake news created by ChatGPT-style bots and said he hoped AI-generated content could be watermarked in a way similar to how central banks watermark cash.
"It is very important that we try, for example, to mark everything that is fake as fake. Whether we can technically do that, I don't know," he said.
The European Union is considering such a technique in its Artificial Intelligence Act, the law that will set the rules for AI in Europe and is currently being negotiated by lawmakers.
"Overpopulation on Mars"
Hinton's list of AI dangers contrasted with discussions elsewhere at the conference, which were less about safety and threats and more about seizing the opportunity created by the advent of ChatGPT.
Venture capitalist Sarah Guo said talk of AI as an existential threat was premature, likening it to "talking about overpopulation on Mars," a phrase she borrowed from another AI guru, Andrew Ng.
She also warned of "regulatory capture" which would see government intervention protect incumbents before it has a chance to benefit sectors such as health, education or science.
Opinions differed on whether the current generative AI giants, chiefly Microsoft-backed OpenAI and Google, would remain unrivaled or whether new players would expand the field with their own models and innovations.
"Five years from now, I still imagine if you want to go and find the best, most accurate, most advanced general model, you're probably going to have to go to one of the few companies that have the capital to do that," said Lee Marie Braswell of the venture capital firm Kleiner Perkins.