AI on the exam: A threat or an ally to education?

Futurologist and communicologist Ljubiša Bojić, a guest of the FUNK podium at the University of Montenegro, speaks to "Vijesti" ahead of a debate on artificial intelligence in classrooms

Photo: Private archive
Disclaimer: The translations are mostly done through AI translator and might not be 100% accurate.

Does artificial intelligence empower students and professors, or undermine the meaning of critical thinking and academic integrity? This key question will be at the center of the debate "Artificial Intelligence - Support or Threat to Education?", which will be opened on October 11 at 7 p.m. in the hall of the Faculty of Engineering of the University of Montenegro by futurologist and communicologist Ljubiša Bojić from the Belgrade Institute for Philosophy and Social Theory. As a guest of the FUNK podium, a segment of the Festival of Arts, Science and Culture, Bojić will discuss with the audience whether universities should limit, ban, or integrate AI tools into teaching in the future.

Along with Bojić, professors Vuk Vuković and Dijana Vukčević from the Faculty of Philosophy will participate in the conversation.

On this occasion, Ljubiša Bojić speaks to Vijesti.

When it comes to education: is AI a tool that empowers professors and students, or a threat that undermines the meaning of critical thinking? Should universities ban, limit, or include AI tools in teaching?

AI tools like ChatGPT are both empowering and challenging, which is a key dilemma I have explored in my work. On the one hand, they help teachers and students by providing personalized learning, instant feedback, and greater availability of resources, making teaching more interactive and effective. On the other hand, there is a danger of over-reliance, which can undermine critical thinking, as students may skip deep thinking and simply copy answers. AI's lack of emotional intelligence limits its role in developing social skills. Universities should not ban AI, as this would weaken their competitiveness. Instead, it should be integrated with strict guidelines: mandatory education on ethical use, curricula where AI serves as a tool for creativity rather than a substitute for thinking, and regular checks on the originality of work. As I emphasize in my academic writings, further research and oversight are needed to avoid invisible manipulations.


If universities don't react quickly enough, what could be lost in terms of educational standards? And does this mean we are living in a period of higher education revolution?

If universities don’t act quickly, we will lose the essence of education: the development of critical thinking, creativity, and the ability to solve problems on our own. Instead, we will see a leveling of thinking, where most students use AI simply to “get the job done” without depth, deepening social divisions. A small group of motivated individuals will use AI for creativity and excel, while the rest will be left behind. This is a revolution in higher education, similar to the digital transition. Those who adapt will dominate, while those who refuse, like those who avoid smartphones today, will not be able to compete. In the long run, AI will affect everyone, leveling creativity, but we can mitigate this by including diversity in AI content to encourage different perspectives.

Do you think that tools like ChatGPT "level" differences among students or, on the contrary, deepen existing inequalities: gender, economic, (inter)disciplinary, etc.?

AI tools like ChatGPT do not equalize differences, but rather deepen them, reinforcing existing inequalities. Students from wealthier backgrounds, with better access to high-speed internet and advanced AI tools, will have an advantage in personalized learning. Those from rural or economically disadvantaged areas, along lines of gender and disciplinary differences as well, may be excluded due to limited access or lack of digital literacy. This leads to a divide: a motivated minority will use AI for deep creativity, while the majority will use the tools superficially, deepening economic and gender gaps. My research on recommender systems shows how AI is already polarizing society; a similar thing will happen in education if we do not introduce inclusive policies.

You often write about aligning AI with human values. How can you translate that dilemma into the context of education?

Aligning AI with human values, a central theme of my work on AI alignment, in education means ensuring that AI does not simply replicate knowledge, but enriches it with diversity and ethics. For example, through algorithmic changes to content recommendation systems, we can embed values such as inclusivity, critical thinking, and cultural diversity, so that AI encourages creativity rather than uniformity. In education, this means developing guidelines where AI serves as a mentor that promotes ethical thinking rather than just efficiency, for example, by testing AI in simulations that prioritize human well-being. This is key to avoiding polarization and ensuring that technology serves all students, not just the elite.

Your research connects society, digital technologies and artificial intelligence - both at the local, regional and global levels. How different is the perception and use of AI in higher education in the region compared to European and US universities? Or some other, more developed countries?

The perception and use of AI in higher education in the region, such as Serbia and the Balkans, differs significantly from that in Asia, the US, and Western Europe. In China and South Korea, AI is being adopted faster due to greater trust, better infrastructure, and integration into curricula. In the US, universities such as MIT are already using ChatGPT for personalized learning. In Europe, the focus is on regulation, such as the EU AI Act, which slows down the process but is an attempt to integrate ethics. A study we conducted in Austria, Denmark, France, and Serbia showed that trust in AI is crucial: in Serbia it is slightly lower than in Western Europe due to cultural factors, historical suspicion of technology, and weaker digital infrastructure, which leads to slower adoption and greater fear. Globally, developed countries see AI as an opportunity for innovation, while in the region it is very difficult to support new ideas due to a lack of money and vision.

How do you imagine the university of 2050: will students study in amphitheaters, in the metaverse, or with personalized AI mentors?

The university of 2050 will be hybrid. Amphitheaters will survive and be used for social interaction and discussion, but much of the learning will take place in the metaverse with personalized AI tutors; much as print has survived, amphitheaters will be little used. Imagine students "teleporting" to a virtual ancient Rome to experience historical simulations, or AI adapting teaching to their learning styles in real time. However, the risks are significant: dependence on virtual environments, concentration of power in large tech companies, and a reduction in creativity. The key is to incorporate diversity into AI to mitigate its homogenizing effects on creativity. This can be regulated by law.
