France plans to use artificial intelligence (AI) to scan thousands of athletes, coaches and spectators arriving in Paris for the Olympics, in what rights activists describe as a sweeping surveillance exercise.
In recent months, French authorities have tested a surveillance system based on artificial intelligence at train stations, concerts and football matches.
When the Games begin in late July, these systems will scan crowds, check for suspicious packages, detect weapons and more.
French officials say the tools will not be fully operational in time for the Games, but that French police, fire and rescue services, and some transport security agents will be permitted to use them until March 31, 2025.
Activists fear that AI surveillance could become the new normal.
"The Olympic Games are a huge opportunity to test this kind of surveillance under the guise of security, and to open the way for more intrusive systems such as facial recognition," Katia Roux of the French branch of Amnesty International told Reuters.
Train stations and Taylor Swift
The French government has involved four companies in this effort - Videtics, Orange Business, ChapsVision and Wintics.
These companies' security platforms measure eight key parameters: wrong-way traffic, presence of people in restricted areas, mass movement, abandoned packages, presence or use of weapons, overcrowding, body on the ground and fire.
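The eight event types listed above are the only detections the law permits. As a rough illustration only, they could be modeled as a closed set of categories; the labels below are hypothetical, since the vendors' actual APIs and identifiers are not public.

```python
from enum import Enum

class DetectedEvent(Enum):
    """Hypothetical labels for the eight event types named in the French law."""
    WRONG_WAY_TRAFFIC = "wrong-way traffic"
    RESTRICTED_AREA_INTRUSION = "presence of people in restricted areas"
    CROWD_SURGE = "mass movement"
    ABANDONED_PACKAGE = "abandoned package"
    WEAPON_PRESENCE = "presence or use of weapons"
    OVERCROWDING = "overcrowding"
    BODY_ON_GROUND = "body on the ground"
    FIRE = "fire"

# The closed enumeration reflects the law's claim of a "strictly limited" scope:
# anything outside these eight categories is not a permitted detection.
print(len(DetectedEvent))  # prints 8
```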
Concerts by Depeche Mode and the Black Eyed Peas, as well as a soccer match between Paris Saint-Germain and Olympique Lyonnais, served as test runs for the software.
Additional tests were conducted on crowds traveling through busy subway stations to attend a Taylor Swift concert, and on attendees at the Cannes Film Festival.
Cannes Mayor David Lisnard said the city already has "the densest video surveillance network in France", with 884 cameras - one for every 84 inhabitants.
There are about 90,000 surveillance cameras across France, monitored by the police and gendarmerie, according to one report.
"The biggest concern is that, while in most cases the use of these systems does not appear to involve revealing the identity or profiling of individuals, they still require the deployment of a surveillance infrastructure that is only a software update away from performing the most invasive kinds of mass surveillance," said Daniel Leufer, senior analyst at digital rights group Access Now.
"Members of the public will have little or no insight into what these systems are monitoring, what updates are being made, and the like, which leads to the inevitable chilling effect that comes with this type of public surveillance," he said.
The Olympics as a playground for AI
French lawmakers sought to blunt criticism by writing a ban on facial recognition into the law; authorities call this a red line that must not be crossed.
Matthias Houllier, a co-founder of Wintics, said the experiment is "strictly limited" to the eight use cases set out in the law, and that functions such as crowd-movement detection cannot be repurposed for processes such as gait detection, in which a person's distinctive way of walking can be used to identify them.
Houllier said the system's design "absolutely makes it impossible" for end users, or even skilled engineers, to use Wintics' software for facial recognition.
Representatives from Videtics, Orange Business and ChapsVision did not respond to Reuters requests for comment.
Experts are concerned that neither the criteria by which the government will judge these tests a success nor the exact workings of the technology have been made available to the public.
"There is nowhere near enough transparency about these technologies. There is a very unfortunate narrative that we can't allow transparency about these systems, especially in the context of law enforcement or public safety, but that's bullshit," Leufer said.
"The use of such surveillance technologies, especially in the context of law enforcement and public safety, carries perhaps the greatest potential for harm, and therefore requires the greatest degree of public responsibility," he said.
Privacy activists say exemptions in the legislation would allow the use of facial recognition by "competent authorities" for purposes including national security and migration.
"This is not a ban. It's actually an authorization for law enforcement agencies. People have the illusion that it's OK because it says we're banning the technology — except in this, this, and this situation — but those are the situations that are the most problematic," Roux said.
The use of surveillance in France has raised concerns before. In November of last year, a non-profit organization revealed that since 2015 law enforcement agencies had been secretly using facial recognition software obtained from the Israeli company BriefCam.
French politicians suggest there is still a gap between what AI surveillance promises and what it can actually deliver.
"AI-based video surveillance will not be adopted during the Olympic Games. However, the Olympics will be a great playground for experimenting with it," said Senator Agnès Canayer.
"More internal security forces or private security forces will be needed to compensate for the lack of technology," she said.
The interior ministry, which is in charge of French law enforcement, did not respond to Reuters' requests for comment.
In a series of proposals on the future of AI surveillance, the government's legal commission recommended that the "experimental basis" of the technology continue, and that the retention period for footage recorded by the systems be extended in order to "test the equipment in all seasons" and during smaller events.
"That's why we decided to immediately campaign and raise awareness about facial recognition, even if it won't be used during the Olympics," Roux said. "If we wait until it is used, then it will be too late".