BLOG

Digital security is not a luxury, but a civic duty

As artificial intelligence becomes a major tool for both content creation and abuse, the intersection of cybersecurity and media literacy is becoming the foundation of a stable society. They are no longer separate disciplines, but a single essential skill.

Photo: Shutterstock
Disclaimer: The translations are mostly done through AI translator and might not be 100% accurate.

(medijskapismenost.me)

In today's digital age, almost the entire population of Montenegro has access to the internet, and an estimated 472,000 citizens actively use social networks. The digital space today stores more data about us than any institution in real life. While such connectivity brings undeniable advantages, it also brings sophisticated threats to privacy and information quality.

In an era where artificial intelligence (AI) is becoming a major tool for both content creation and abuse, the combination of cybersecurity and media literacy is becoming the foundation of a stable society. They are no longer separate fields, but a unified skill set essential in the modern world.

Privacy under attack by algorithms

Every click, search, or “like” we make leaves a digital footprint. Research shows that just a few dozen online interactions are enough for algorithms to accurately assess our habits, consumer preferences, and political beliefs. The Cambridge Analytica scandal, in which the data of millions of users was misused for political purposes without their consent, is the clearest proof that we are under constant surveillance.

This data is primarily used to personalize content and ads, but the same mechanisms can be used to manipulate public opinion. When private photos, locations, and contacts fall into the wrong hands, we risk everything from targeted marketing to identity theft.

How platforms shape our reality

Social media has a huge influence on opinion formation, with over 60% of Montenegrins using it as their main source of news. However, the algorithms that power these platforms are optimized for attention, not truth.

Content that evokes strong emotions (whether positive or negative) spreads the fastest. If the algorithm recognizes that we react to sensationalism, it serves us ever more extreme headlines. This creates so-called "echo chambers" in which our own views are constantly confirmed, while differing opinions are filtered out. An internal Facebook analysis showed that as many as 64% of joins to extremist groups were a direct result of algorithmic recommendations. In such an environment, users become more vulnerable to misinformation because it acts as confirmation of “their truth”.

Deepfake and AI: A new era of fraud

With the advent of advanced artificial intelligence, manipulation has entered a new phase. Deepfake technology allows the creation of hyperrealistic videos and audio recordings in which real people say or do things that never happened.

The dangers are multiple. First, political destabilization: a fake 2022 video of Ukrainian President Zelensky, in which he allegedly calls on the army to surrender, showed how deepfakes can influence geopolitical crises. Second, financial fraud: there have been cases where an AI-cloned voice of a company director convinced an employee to make a fraudulent payment of $243,000. Young people are not spared either, and are particularly vulnerable because their photos can be used for digital violence and blackmail.

Montenegro targeted by digital manipulation

The Montenegrin digital space has not been spared either. In addition to campaigns to spread fake news that undermine trust in institutions, we are also witnessing sophisticated fraud. A concrete and dangerous example is the recent misuse of a doctor's identity. Fraudsters used AI tools to alter a TV interview with a well-known Montenegrin doctor so that it looked and sounded like he was advertising a dubious medical product. This fake news was placed on a website that visually copied the RTCG portal, which is why many citizens believed it and became victims of fraud.
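One mechanical habit that defeats lookalike sites like the fake RTCG portal is to judge a link by its actual domain, not by how the page looks. Below is a minimal Python sketch of such a check; `rtcg.me` is used as an illustrative trusted domain, and the fake addresses are invented examples:

```python
from urllib.parse import urlparse

# Illustrative trusted domain (assumption for this example).
TRUSTED_DOMAIN = "rtcg.me"

def is_trusted_link(url: str, trusted_domain: str = TRUSTED_DOMAIN) -> bool:
    """Return True only if the link's hostname is the trusted domain
    or one of its subdomains. Visually similar addresses such as
    'rtcg.me.fake-news.info' are rejected."""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

# A visually convincing clone fails the check:
print(is_trusted_link("https://www.rtcg.me/vijesti/1"))    # True
print(is_trusted_link("https://rtcg.me.fake-news.info/"))  # False
```

The key point: a scam page can copy a portal's design pixel for pixel, but it cannot occupy the genuine domain, so the address bar remains the one reliable signal.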

Line of Defense: Critical Thinking and Technical Protection

Technical protection is important, but the first line of defense is critical thinking. In the world of artificial intelligence, the principle of "don't just believe your eyes" is becoming the rule.

How to recognize AI manipulation?

  • Video: Pay attention to unnatural blinking, lip movements that don't perfectly match the speech, or strange light reflections.
  • Audio: Listen for abrupt transitions, robotic monotony, or a complete lack of background noise.
  • Emotion: Be suspicious of content that provokes sudden anger or fear, as this is often a sign of manipulation.

Practical protection measures:

  1. Account protection: Use strong, unique passwords and be sure to enable two-factor authentication (2FA). It's the most effective defense against account theft.
  2. Discretion: Limit who can see your posts and profile. The less personal information (date of birth, address) you share publicly, the harder it is for fraudsters to exploit your profile.
  3. Source verification: Do not click on links in suspicious messages ("phishing"). If you receive an urgent request for money from a "friend" or "director", always verify by calling a number you know.
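The two-factor authentication from point 1 typically relies on time-based one-time passwords (TOTP, RFC 6238): your phone and the service independently derive the same short-lived six-digit code from a shared secret and the current time, so a stolen password alone is not enough to log in. A minimal sketch using only Python's standard library (the secret is a made-up example, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a
    base32-encoded shared secret (the value behind a 2FA QR code)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second intervals since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    # HMAC-SHA1 over the counter, then dynamic truncation (RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share the secret once, then derive matching codes:
print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code expires after roughly 30 seconds, even a phished code is useful to an attacker only for a moment, which is what makes 2FA so much stronger than a password alone.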

Conclusion

Digital security is not a luxury, but a civic duty. Artificial intelligence is bringing revolutionary change, but understanding how it works gives us an advantage. Through joint efforts, through both technical protection of our accounts and critical verification of the information we consume, we create a safer environment for ourselves, our families, and society as a whole.

The author is a cybersecurity expert.
