Despite concerns from privacy advocates, several airports in Europe will be employing artificial intelligence as part of their security protocol.
The AI, the core of a program called iBorderCtrl, takes the form of a lie detector currently undergoing European Union trials at a number of border checkpoints in Greece, Hungary, and Latvia.
If approved, the system will be another safeguard in airport and airline security, and yet another task passengers must complete before being declared risk-free. The procedure will require passengers to complete an online application form and upload a scan of their passport before facing an avatar security guard. The virtual sentry will then ask a few travel-related questions and analyze passengers' responses using micro-gesture software.
Questions include such basics as name, age, birthdate, purpose of trip, and who is funding the travel. Micro-gestures will provide clues to the AI system about whether the passenger is telling the truth and is eligible to pass through the checkpoint. Any gestures flagged as deceptive will prompt the system to repeat the questions to the passenger in a more skeptical tone before alerting human authorities to a possible flight risk.
Advocates are so keen on iBorderCtrl's potential to improve flight security and reduce congestion that the European Union is putting $5.1 million into the project, which involves technology developed and implemented by bodies such as Manchester Metropolitan University and the Luxembourg-based IT procurement company European Dynamics.
Those railing against iBorderCtrl on privacy grounds think the whole AI endeavor is a terrible idea. Tests so far indicate the system has a 76 percent success rate. That might be a passing grade in a primary school, but in real-world terms it means roughly one in four passengers would likely face more intense scrutiny from airport security, causing even more delays in processing and boarding airline customers.
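The scale of that "one in four" figure is easy to check. The sketch below works through the arithmetic; the daily passenger volume is a hypothetical placeholder for illustration, not a figure from the trials:

```python
# Back-of-the-envelope estimate of what a 76 percent accuracy rate means
# in practice. The passenger volume below is an illustrative assumption,
# not iBorderCtrl trial data.

def expected_misclassified(passengers: int, accuracy: float) -> int:
    """Expected number of passengers misclassified, assuming each
    misclassification triggers additional security scrutiny."""
    return round(passengers * (1 - accuracy))

# Hypothetical daily throughput for a mid-size international airport.
daily_passengers = 40_000
print(expected_misclassified(daily_passengers, 0.76))  # 9600
```

At that rate, even a single busy checkpoint could send thousands of travelers a day into secondary screening, which is the congestion critics are pointing to.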
"This is part of a broader trend towards using opaque, and often deficient, automated systems to judge, assess and classify people," said Frederike Kaltheuner, data program lead at Privacy International, a group opposing iBorderCtrl.
Other groups opposed to the system argue that AI may not be able to distinguish between cultural gestures that differ across genders and ethnicities. How the algorithms can account for those differences to boost their accuracy remains to be seen.
"I don't believe that you can have a 100 percent accurate system," said Keeley Crockett of Manchester Metropolitan University.