At Sweden's leading Lund University, professors decide whether students may use artificial intelligence when completing assignments.
At the University of Western Australia in Perth, professors discussed with students the challenges and possible benefits of using artificial intelligence, while the University of Hong Kong allows the use of ChatGPT subject to strict rules.
Launched by OpenAI and backed by Microsoft (MSFT.O), ChatGPT is the fastest-growing app in the world and has spawned competition from the likes of Google's (GOOGL.O) Bard.
GenAI tools like ChatGPT use patterns in language and data to generate everything from essays to videos to mathematical calculations that appear human-made at first glance, fueling debate about unprecedented transformation in many fields, including education.
Academics are among those who could face an existential threat if AI proves able to replicate, and significantly faster, the research currently done by humans. Many, however, also see benefits in GenAI's ability to process information and data that can serve as the basis for deeper analysis by humans.
"It can help students tailor course materials to their own needs, helping them in a similar way as a personal tutor would," Lif Kari, vice president for education at KTH, the Royal Institute of Technology in Stockholm, told Reuters.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) yesterday unveiled what it claims is the first global guidance on the application of artificial intelligence in education and academic research.
The guidance sets out steps national regulatory agencies should take in areas such as data protection and revising copyright laws, and calls on states to ensure teachers acquire the necessary AI-related skills.
Some educators compare artificial intelligence to hand-held calculators, which began entering classrooms in the 1970s, sparking debate about how they would affect the learning process before quickly being embraced as a valuable aid.
Others have expressed concern that students could simply rely on AI to do the work for them, which is tantamount to cheating — especially as AI-generated content gets better over time. Presenting AI work as original work also raises copyright issues and the question of whether AI should be banned in academia.
Rachel Forsyth, a project manager at the Office for Strategic Development at Lund University in southern Sweden, told Reuters the ban "doesn't feel like something that can be enforced".
"We are trying to put the focus back on learning and put aside the issue of cheating and student control," she said.
For decades, Turnitin software has been one of the main tools used around the world to check for plagiarism.
In April, the company unveiled a tool that uses AI to detect AI-generated content. The tool is available for free to more than 10,000 educational institutions worldwide, although a charge for its use is planned from January.
So far, the AI content detection tool has found that only three percent of students used artificial intelligence for more than 80 percent of their papers, while 78 percent did not use it at all, Turnitin data showed.
Problems have arisen with so-called false positives, in which texts written by humans, in some cases by professors trying to test the software, are flagged as the work of artificial intelligence. Those unfairly accused of using AI can defend themselves if they have kept earlier versions of their work.
Students themselves are experimenting with artificial intelligence, and some have found it lacking, noting that it can produce content at a basic level but that its output must be fact-checked because AI cannot distinguish fact from fiction or right from wrong.
Its knowledge is also limited to what it can draw from the internet, which is not enough for some specific questions.
"I think AI has a long way to go before it becomes useful in a real way," Sophie Constant, a 19-year-old law student at England's Oxford University, told Reuters.
"I can't ask it questions about the cases. It just doesn't know and doesn't have access to the material I'm studying, so it's not very helpful."
UNESCO's latest guidelines also highlight the risk that GenAI will deepen social divisions, as educational and economic success increasingly depend on access to electricity, computers and the internet, which the poorest lack.
"We are struggling to link the speed of transformation of the education system with the speed of technological progress," Stefanija Giannini, Assistant Director General for Education at UNESCO, told the British agency.
So far, the European Union is among those leading the process of regulating the use of artificial intelligence, with a draft law that has yet to be adopted. The regulations do not deal specifically with education but set out broader ethical rules that can be applied in that area.
After leaving the EU, Britain is also trying to draw up guidelines for the use of artificial intelligence in education by consulting educators and says it will publish the results later this year.
Singapore, which is leading efforts to train teaching staff to use the technology, is among nearly 70 countries that have developed or are planning AI-related strategies.
"In terms of the university, as a professor, I think that instead of fighting against artificial intelligence, it is necessary to use it, get to know it, develop a good framework, guidelines and a responsible AI system, and then work with students to find mechanisms that work," said Kirsten Rulf , a partner in the Boston Consulting Group.
Rulf took part in negotiations on the European Union's Artificial Intelligence Act while she was head of digital policy in the German government.
"I think we are the last generation to live in a world without GenAI".