ON POWER AND RIGHT

Artificial intelligence and national security

Technology evolves faster than politics or diplomacy, especially when driven by intense market competition in the private sector. When it comes to addressing the potential security risks associated with artificial intelligence, policymakers need to step up.

Photo: Reuters

Humans are a tool-making species. But can we control these tools? When Robert Oppenheimer and other physicists developed the first nuclear weapons in the 1940s, they worried that their invention could destroy humanity. This has not happened so far, but the control of atomic weapons has been a persistent challenge ever since.

Another tool that is transforming our lives today is artificial intelligence (AI): algorithms and software that allow machines to perform tasks that normally require human intelligence. Like previous general-purpose technologies, AI has enormous potential for both good and evil. In cancer research, for example, it can analyze and summarize more studies in a few minutes than a human team could in months. And it can reliably predict patterns of protein folding that would take scientists years to uncover.

At the same time, AI lowers the costs and barriers for misfits, terrorists, and other malicious actors who wish to cause harm. The authors of a new study from the RAND Corporation warn that "the marginal cost of recreating a dangerous smallpox-like virus could be as little as $100,000, while developing a comprehensive vaccine could cost more than $1 billion."

In addition, some experts fear that advanced AI will become so much smarter than humans that it will control us, rather than the other way around. Estimates of the development time for such super-intelligent machines, known as artificial general intelligence, range from a few years to a few decades. Either way, the growing risks posed by even today's "narrow" AI already demand greater attention.

For 40 years, the Aspen Strategy Group (a group of former officials, academics, businesspeople, and journalists) has met every summer to focus on a major national security problem. Past meetings have addressed topics such as nuclear weapons, cyber attacks, and the rise of China. This year, the focus was the implications of artificial intelligence for national security, weighing both the benefits and the risks.

The benefits include an improved ability to analyze vast amounts of intelligence data, strengthened early-warning systems, better management of complex logistics, and the inspection of computer code to improve cybersecurity. But there are also major risks, including advances in autonomous weapons, accidental errors in programming algorithms, and adversarial AI systems that can undermine cybersecurity.

China is investing heavily in the AI arms race, and it enjoys some structural advantages. AI requires three key resources: data to train models, talented engineers to develop algorithms, and computing power to run them. China has few legal or privacy limits on access to data (although ideology restricts some data sets), and it is well supplied with bright young engineers. Where China lags furthest behind the United States is in the advanced microchips that provide the computing power for AI.

US export controls limit China's access not only to these advanced chips but also to the expensive Dutch lithography machines that make them. Experts in Aspen estimated that China lags a year or two behind the United States, but the situation remains volatile. Although Presidents Joe Biden and Xi Jinping agreed at their meeting last fall to hold bilateral talks on AI, there was little optimism in Aspen about the prospects for AI arms control.

A particularly serious threat comes from autonomous weapons. Even after more than a decade of diplomacy at the UN, the countries of the world have failed to agree on a ban on lethal autonomous weapons. International humanitarian law requires militaries to distinguish between combatants and civilians, and the Pentagon has long required that a human be involved in decisions about the use of lethal weapons. But in some situations, such as defense against incoming missiles, there is simply no time for human intervention.

Because context matters, humans must tightly define, in the code, what a weapon can and cannot do. In other words, a human should be "on the loop" overseeing the system if one cannot be "in the loop" making each decision. This is not mere speculation. In the Ukraine war, Russia jams the signals of Ukrainian forces, compelling the Ukrainians to program their devices to make autonomous final decisions about when to fire.

One of the most frightening dangers of AI is its use in biological warfare or bioterrorism. When the countries of the world agreed to ban biological weapons in 1972, it was widely believed that such weapons were of little use because of the risk of "blowback" against whoever used them. Synthetic biology, however, now makes it possible to create weapons that destroy one population group without affecting another. Or a terrorist with access to a laboratory might simply want to kill as many people as possible, as the Aum Shinrikyo cult did in Japan in 1995. (They used non-infectious sarin, but their modern equivalents could use AI to develop an infectious virus.)

As for nuclear technology, the countries of the world signed the Nuclear Non-Proliferation Treaty in 1968, and 191 states have since acceded to it. The International Atomic Energy Agency regularly inspects national energy programs to verify that they are used exclusively for peaceful purposes. And despite intense Cold War rivalry, the leading countries in nuclear technology agreed in 1978 to restrict the export of the most dangerous facilities and technical know-how. This precedent suggests possible paths forward for AI, although there are, of course, obvious differences between the two technologies.

It is quite obvious that technology advances faster than politics or diplomacy, especially when it is driven by intense market competition in the private sector. And if there is one important takeaway from this year's Aspen meeting, it is that governments need to pick up the pace.

The author is a professor emeritus at Harvard University

Copyright: Project Syndicate, 2024. (translation: NR)

