A large number of companies in the field of artificial intelligence (AI) operate outside their home countries, so regulation must exist; this entails an agreement to build a general system at the international level, to which countries would accede.
This is how Irina Stamatović, who holds a master's degree in international human rights law, researches the ethics of artificial intelligence and is a member of the Montenegrin Association for Artificial Intelligence (MAIA), comments for "Vijesti" on the approach to regulating AI at the international level.
At the end of May, the member states of the European Union (EU) approved the so-called Artificial Intelligence Act, which aims to harmonize rules on AI.
It is the first act of its kind in the world and could set global standards for legislation on artificial intelligence, according to an announcement by the Council of the EU.
The EU law is more comprehensive than the light-touch approach of voluntary compliance taken in the United States (US), while the Chinese approach aims to maintain social stability and state control, writes "Voice of America".
"Vijesti" published a questionnaire on its portal in mid-June, aiming to gauge the public's impressions of AI regulation, their opinions on how this technology could be regulated, and the role of the large companies that manage these systems.
Although the sample is not representative, the same message runs through most of the answers: artificial intelligence must be strictly regulated so that it "works" for the benefit of citizens, because the consequences of potential abuses would be great.
It is telling that the questionnaire, which was viewed almost 40,000 times, generated fewer than 50 answers, which suggests that this issue is not a priority even for citizens interested in artificial intelligence.
Be careful with personal data
AI systems very often collect and use users' personal data. Speaking about the protection of this important information, Stamatović points out that it is not only an issue of artificial intelligence, but must be viewed through the legal system and the assumptions on which that structure rests.
"With the development of AI systems, these questions become more and more important, and the answers more and more complex. Nevertheless, it is necessary to introduce strict regulations that require anonymization of data and limit access to information to authorized persons only, but also presuppose a certain code of conduct. On the other hand, the implementation of strong encryption methods and regular updating of security protocols is also needed," she points out.
Her opinion is shared by the majority of those who filled out the "Vijesti" questionnaire. A feeling of apprehension prevails, and some have expressed fear for their safety.
"Legal measures to protect user privacy. Strictly prohibit requiring the entry of personal data, general identity data, more detailed questions and information related to the user's location..."; "Collection of personal data without express consent is prohibited by our law. Data collection must be clearly indicated, as well as its purpose and scope"; "The benefits are immeasurable if the following conditions are met: media campaigns that promote and involve citizens, development of the professional community and cooperation at the international level, striving for a certain level of digital sovereignty, application of AI in education, tourism, agriculture, fishing, environmental protection, medicine and energy, the urgent, priority opening of a state-of-the-art national data center for collecting, storing and processing data, with ethics and cybersecurity in these fields as a priority. Introduce chairs for the study of AI ethics" - these were some of the answers.
Jennifer King, an associate for data privacy policy at the Institute for Human-Centered Artificial Intelligence at Stanford University in the US, stated in her white paper "Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World" that AI systems pose many of the same privacy risks as decades of unrestricted data collection, but that the difference lies in scale.
"Artificial intelligence systems are so data-hungry and non-transparent that we have even less control over what information about us is collected, what it is used for, and how we can correct or remove such personal data. Today, it's basically impossible for people using online products or services to escape systematic digital surveillance in most aspects of life - and artificial intelligence can make things worse," King explains.
Anti-bias algorithms
When asked how the objectivity and impartiality of AI systems can be ensured, and whether it is possible at all, Stamatović replied that it is a multi-layered and extremely complex problem.
"Ensuring that AI-based models are unbiased requires careful algorithm design and the use of diverse, representative data sets for training, with those data 'cleaned' of bias. But, first of all, it is necessary to examine and establish the impartiality of the people who create AI-based systems, as well as of decision-makers in companies," says Stamatović.
She adds that regular assessment and correction of possible biases in the results is necessary.
"...Which indicates the need for constant human control. Although it is extremely difficult to achieve absolute objectivity (it is questionable whether it exists at all), continuous monitoring and improvement of the process can significantly reduce bias," the interlocutor assesses.

When asked in the questionnaire whether they believe that AI systems make objective decisions, that is, whether they have heard of an AI system being biased, the answers vary.
"Artificial intelligence does not have a social component, and as long as that is missing, fair decisions are debatable"; "AI systems make both fair and unfair decisions. They deliver both biased and unbiased ones. Why should anyone care about that? It is a matter of human intelligence to use information regardless of bias or unfairness"; "For AI to make correct decisions, the following conditions must be met: quantity and quality, systematization, purpose and analysis, data security, robust AI systems, high ethics and definition"...
Liability of companies
Speaking about the transparency of AI systems in her white paper, King states that the discussion in the US has focused on transparency requirements around the purpose of companies' algorithmic systems, and that the EU's AI Act does not cover that area either.
"It was only mentioned in the context of high-risk AI systems. So this is an area where a lot of work is needed if we want to feel that our personal data is protected from being included in artificial intelligence systems, including large systems such as the so-called 'foundation models'," King says.
Stamatović told "Vijesti" that companies developing AI systems should publicly publish information about how their algorithms work and what data they use.
"However, there is still no strong enough obligation mechanism at the global level that could be used for these purposes. What happens in the event of an error is a question of responsibility, and it should be on the companies that developed and implemented those systems, because they have control over their design and operation," she points out.
The interlocutor emphasizes that clear regulatory frameworks and obligations to report errors will further help in increasing accountability and transparency.
"And only a system that not only has but also enforces regulations can lead to that. However, this responsibility should not be absolute, and aspects of personal responsibility must be taken into account," says Stamatović.
When asked who is to blame if the AI system makes a mistake - the state or the company, some of those who filled out the questionnaire believe that the AI owner is solely to blame, while others call for a chain of responsibility.
"The company should be responsible if the AI-assisted system makes a mistake with consequences"; "One must know the chain of responsibility... users must know the level of reliability of the AI and accept all the consequences if the AI makes a mistake"; "There should be a rulebook, a regulator that is general and harmonized with the type of business. The one for whom that AI does the work is responsible," read some of the answers.
Video surveillance - a double-edged sword
The use of AI systems in video surveillance is a widely accepted practice around the world; China has the most advanced facial recognition software, and Beijing is the "best" covered city in the world in that respect.
Stamatović says that it is not easy to conclude whether the potential use of AI in security surveillance improves citizens' safety or violates their privacy.
"First of all, we must understand that security surveillance exists precisely with a security goal. However, the implementation depends on the legal and political system and its stability, as well as the level of knowledge and expertise in the relevant administrative units, such as the Ministry of Internal Affairs (MIA)," she explains.
She states that ensuring the safety of citizens is the responsibility of the state.
"How much we can rely on that depends on where we had the privilege or misfortune to be born and live. China, and the way it implements this system, although different in many ways, can teach us important lessons," Stamatović assesses.
BIRN published information in mid-May that the Ministry of Internal Affairs (MIA) had acquired an Israeli facial recognition software program that can be used within the system for monitoring public areas in Podgorica, Bar and Budva.
The Israeli company "AnyVision", from which the MIA bought the software, describes the "Better Tomorrow" program as "the world's most advanced tactical surveillance system based on artificial intelligence, capable of detecting, tracking and recognizing persons of interest and objects at mass gatherings in real time".
The Council of the Agency for the Protection of Personal Data and Free Access to Information (AZLP) ordered the MIA to turn off video surveillance on the streets of Podgorica, Bar and Budva, but then, at the MIA's request, suspended that obligation.
When asked how they feel about the use of AI in video surveillance systems, the majority of those who filled out the questionnaire believe that it would contribute to security, provided it were regulated in a way that does not harm citizens' privacy.
"I absolutely agree that this possibility should be used in the context of security"; "If AI control is good, it improves security; in our country there are no institutions or staff that would be responsible for working with AI"; "AI has been used in surveillance, including facial recognition in public places, for years. It is possible to improve security in this way, but it is necessary to comply with the regulations and laws that govern privacy, for example the GDPR," were some of the answers "Vijesti" received.

Montenegro is waiting for a solution
Stamatović says that nothing concrete has been done in Montenegro regarding the issue of AI regulation.
"What complicates the problem is that we are waiting for it to be 'solved somehow' elsewhere, so that we can then apply the same medicine," the interviewee points out.
The Minister of Public Administration, Maraš Dukaj, announced at the end of May that Montenegro intends to be part of global progress in the governance of artificial intelligence.
"The global trend predicts that artificial intelligence will improve many aspects of our lives, including health, transport, education and other sectors, which will contribute to the development of more efficient public administration," said the minister at the World Summit on the Information Society WSIS+20 Forum in Geneva.
Dukaj pointed out that greater cooperation between the public and private sectors is necessary to encourage AI research and encourage industry leaders to embrace open source collaboration.
"Ultimately, it is our responsibility as political leaders to ensure, through the appropriate use of modern technologies, the development of our economies and the realization of all benefits for citizens, without jeopardizing human rights, equality and the right to decide on issues crucial for the development of societies, environmental protection and sustainable development," he said.
The EU has regulated the area
The EU's AI Act categorizes different forms of artificial intelligence according to risk. As stated in the announcement of the Council of the EU, AI systems that pose limited risk would be subject to very light transparency obligations, while high-risk systems would be approved on condition that they meet a set of requirements and obligations for access to the EU market.
Artificial intelligence systems that pose unacceptable risk, such as those used for cognitive behavioral manipulation and social scoring, will be banned in the EU.
The Act also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorize people according to characteristics such as race, religion or sexual orientation.
Companies that breach the Act will be fined either a percentage of their total turnover in the previous financial year or a predetermined amount, whichever is greater.
Small and medium-sized enterprises, as well as so-called start-up companies, are subject to proportionate administrative fines.