
Alignment problem

Another crossroads in the race to build artificial general intelligence - an autonomous system that outperforms humans at most economically valuable work - between profit and the well-being of humanity


OpenAI's website has a page explaining the company's unusual structure. It was founded as a non-profit, backed by several tech companies and a group of entrepreneurs including Peter Thiel and Elon Musk. The goal was to secure one billion dollars in donations to develop artificial general intelligence (AGI) for the benefit of humanity.

Having secured only about $130 million - not nearly enough for such a resource-intensive venture - the team created a financial vehicle that would let the company attract investment capital while remaining more or less true to its founding mission. The for-profit subsidiary, OpenAI Global, would be legally required to pursue OpenAI's mission and, "rather than focusing on pure profit maximization, would cap the maximum financial returns to investors and employees, encouraging them to research, develop and deploy AGI in a way that balances commercial interests against safety and sustainability".

The company's legal structure is meant to ensure that, if it succeeds in developing AGI - defined as an autonomous system that outperforms humans at most economically valuable work - the intellectual property belongs to the non-profit, for the benefit of humanity. Investors, however, will get a share of any profits generated by non-AGI innovations created along the way. A prominent message on the site reads: "It would be wise to view any investment ... in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world."

It now seems clear that Microsoft, the most prominent of OpenAI's investors, does not view its $13 billion investment in that spirit, and would rather make the most of its position in a world where the role of money is well understood. When OpenAI's board fired the company's CEO, Sam Altman, last Friday, accusing him of a lack of candor in his communications with it, Microsoft pressured the board to reverse its decision. The board spent the weekend negotiating with Altman but failed to lure him back. On Monday it was announced that Altman, along with a number of colleagues from OpenAI, would move to Microsoft and launch a new AI initiative there.

Analysts read the dispute as a conflict between two camps within the company. Altman's backers are excited by the opportunities for new products and profits; those who cluster around his co-founder Ilya Sutskever and his allies on the board focus more on the original mission.

Since ChatGPT's wildly successful launch a year ago, Altman has become the company's public face and has sought to exploit its lead in generative AI in as many ways as possible. He is said to be trying to raise a multi-billion-dollar investment to develop new specialized hardware, to be in talks with Jony Ive (the former Apple designer) about an OpenAI equivalent of the iPhone, and to have been the driving force behind the recently announced GPT Store. There was also talk of allowing employees to cash out their shares.

Altman dropped out of Stanford in 2005 to start a social networking company. Sutskever graduated from the University of Toronto that year and went on to doctoral studies with Geoffrey Hinton. He helped develop the AlexNet architecture, which transformed the approach to image analysis, and the startup built on that research was later bought by Google. Sutskever, like Hinton, seems fascinated by the potential of AI while increasingly fearful of its implications. Since AI programs, however intelligent, are still just programs, we ought to be able to rely on them to do what they are told. The trouble is making sure that we have actually told them to do what we want them to do - a difficulty known as the alignment problem. In July, OpenAI announced that Sutskever's primary focus as head of research would now be "superalignment": over the next four years, 20 percent of the computing resources the company has secured to date will be dedicated to building a roughly human-level automated alignment researcher.

Sutskever believes that OpenAI is on the way to creating entities more intelligent than humans, which would make almost all forms of work redundant and money unnecessary. He worries that in the process we could accidentally create software agents beyond our control, whose goals are imperfectly aligned with ours and which could pose a significant risk of extinction. The proposed solution is to create a form of artificial superintelligence capable of seeing that this is happening and intervening to prevent it. What could go wrong?

It should be easy to pick a side here: the conflict is between a CEO who pushes innovation to make as much money as possible and a non-profit board that strives to act responsibly and pay due attention to the unintended consequences of the technology. The catch is that the board's stated concerns - the predicted consequences of the technology - are staggeringly disruptive, and its idea of responsible behavior was not to slow down or bring other stakeholders into a system of independent checks and balances, but to invest in a high-risk, speculative research project.

Perhaps OpenAI's progress toward AGI has been significantly slowed by Altman's departure. Seven hundred of the company's 770 employees, some of whom may be more excited about a stake in OpenAI's $80 billion valuation than about the prospect of a money-free world, have signed a letter threatening to join Altman at Microsoft if he is not reinstated at OpenAI. Astonishingly, the letter was signed by Sutskever, who seems to have had temporary difficulty reconciling his goals and his actions.

(London Review of Books; Peščanik.net; translation: M. Jovanović)

