STEGA: Clear recognition and definition of the misuse of artificial intelligence is a priority need of Montenegrin society

The organization said that if Montenegrin society does not want today's scenario of "is/isn't it original, is it/isn't it Miloš Medenica" to be repeated, the responsible authorities must first of all react proactively in several parallel directions.

Photo: STEGA

Clear recognition and definition of the misuse of artificial intelligence when it manifests through digital manipulation, especially when the goal is to deceive the public, cause panic or inflict reputational damage, is a priority need of Montenegrin society, the Strategy for a European and Civic Montenegro (STEGA) assessed.

They said that if society does not want today's scenario of "is/isn't it original, is it/isn't it Miloš Medenica" to be repeated, the responsible authorities must first of all react proactively in several parallel directions.

"The digital transformation of Montenegrin society in the 21st century and the development of artificial intelligence have raised the issue of legal and social responsibility in the digital space, especially when it comes to generating audio and video content that can imitate real people and events. In the case of the recordings that have recently been linked to the escaped convict Miloš Medenica, the reaction in Montenegrin public discourse shows how strong the effect of such content appearing in the media and on social networks can be when no authentic confirmation about it can be obtained and no objective answer about its veracity can be given to the public," the STEGA statement states.

As the first line of action in preventing damage, STEGA identifies amending the legal framework through more efficient and timely implementation of European legislative solutions, especially in light of the likely imminent accession to the European Union.

"Implementation would primarily be carried out through the adjustment of criminal legislation. Amending existing provisions, or defining a special chapter that would recognize the social harm and rank the criminal liability of individual, economic or political entities, is the first step. It must be followed by the drafting and adoption of a series of new legal acts, as well as amendments to existing ones in related areas, with the aim of comprehensive protection of the population," the organization explained.

They add that in addition to criminal legislation, in order to align with already adopted European regulations such as the Artificial Intelligence Act (EU) 2024/1689, Digital Services Act (EU) 2022/2065, Data Act (EU) 2023/2854, Cyber Resilience Act (EU) 2024/2847, GDPR (EU) 2016/679, Digital Markets Act (EU) 2022/1925, NIS2 Directive (EU) 2022/2555 and Data Governance Act (EU) 2022/868, it is necessary to adopt a number of completely new laws that have not existed so far, such as a Law on Artificial Intelligence, a Law on Digital Services, a Law on the Storage and Manipulation of Digital Data and a Law on the Security of Digital Products, as well as amendments to existing laws: the Law on the Protection of Personal Data, the Law on the Protection of Competition, the Law on Information Security and the Law on Data Management.

The second proposed course of action is an institutional response, so STEGA says that the state must develop operational capacities for digital forensics.

"This implies the formation of specialized teams within an independent government agency responsible for forensics, which must not sit within the police, judiciary or prosecutor's office, but would also work for their needs. Members of this agency would be trained in the recognition and analysis of deepfake content, and the agency would establish continuous institutional cooperation with domestic and foreign universities, other forensic bodies and institutes, as well as the technology sector. Without technical knowledge, institutions remain slow, while manipulative content spreads faster than the system that is supposed to control it," the statement said.

The third direction is the organizational and developmental aspect: STEGA recognizes the need to create a "digital compass" by 2030 through the adoption of a national strategy for artificial intelligence that would simultaneously regulate risks and encourage innovation, in line with the Sofia Declaration and the EU Digital Decade Policy Programme 2030.

"This includes support for domestic projects, the use of AI solutions in public administration to provide services to citizens more efficiently, as well as the transparent use of algorithms in the public sector," the organization says.

The fourth direction of action that STEGA considers necessary is greater information and transparency through the promotion of media and digital literacy.

"No law can replace an informed citizen, and a citizen is informed if the data is publicly available. Introducing media and digital literacy into the education system, as well as public campaigns that explain how manipulative content is created, are directions for long-term protection of society," the statement emphasizes.

STEGA says that trust is not defended through prohibitions, but through education, transparency, and understanding.

"STEGA believes that it is always better to react proactively in order to prevent or reduce effects than to act retroactively to eliminate or reduce the damage caused, and media and digital literacy of the population develops internal, individual proactive defense mechanisms. The conscious, intentional and targeted generation and distribution of digital content created for the purpose of deception, and for the purpose of achieving benefit or causing damage, whether by manipulating original material or by creating entirely new digital content depicting real people in events that never happened, should and must be recognized in detail and introduced into Montenegrin legislation through amendments or supplements," the organization states.

They emphasize that differences in the regulation of artificial intelligence clearly reflect the broader political and social models of the three largest global technological spaces: the United States, China and the European Union.

"Although all three sides face the same technological challenges, especially in the area of manipulative audio and video content, their responses differ significantly. The American approach is significantly more decentralized and market-oriented. In the United States, there is no single federal law equivalent to the AI Act; instead, regulation develops through a combination of sectoral rules, recommendations and case law. The emphasis is on innovation and rapid technological development, while the problem of abuse, including deepfake content, is mainly addressed through existing laws on fraud, defamation and consumer protection. The advantage of this model is flexibility and speed of development, but the disadvantage is uneven protection and a slower reaction to systemic risks.

The Chinese model, on the other hand, is characterized by strong central control and regulatory intervention by the state. China is among the first countries to adopt special rules for 'generated content', including mandatory labeling of synthetic media and registration of certain AI systems with regulators. However, unlike the European approach, the focus is not primarily on protecting individual rights, but on preserving social stability and state control over information. Regulation is strict and operationally efficient, but with less emphasis on transparency towards users.

The European model attempts to take a middle path between American deregulation and Chinese centralization. The goal is to enable the development of technology while preserving democratic standards and trust in the public space. This is precisely why the European approach today increasingly serves as a reference framework for countries that seek to harmonize technological development with legal certainty and social responsibility," the statement said.

The European Union has developed a proactive approach, based on risk regulation and the protection of fundamental rights, which STEGA considers inherent to Montenegro's path as a soon-to-be part of the EU space.

"The AI Act (EU) 2024/1689 starts from the assumption that technology must be aligned with democratic standards, privacy protection and transparency. The focus is not on banning development, but on the obligation of producers and users to clearly label generated content, prevent misleading the public and take responsibility for the consequences of using the system. The European model, which STEGA prefers, emphasizes trust, legal certainty and long-term stability of the digital space as a prerequisite for the harmonious and unhindered economic, social, scientific and educational development of society," the statement concludes.
