Since 9/11, both governments and airlines have steadily increased the amount of data collected on travelers. Passenger Name Records (PNR) and Advance Passenger Information (API) now include addresses, phone numbers, payment details, itineraries, travel dates, and the names of fellow passengers. But this data is often imperfect. So what happens when companies use it to train AI systems designed to predict people’s behavior based on past activities?
We looked at four firms offering software that leverages algorithms to profile passengers, assess risk, and flag various categories of individuals—from terrorists and human traffickers to undocumented migrants.
Executives at Swiss start-up Travizory explained that their system identifies passengers with unusual behavior patterns or characteristics resembling those of known offenders, analyzing an extensive range of variables to decide “who looks unusual.” The company’s chief data scientist described the algorithms behind these assessments as “black boxes.” According to him, the software “will flag someone as potentially risky or different, but the reasoning behind that decision remains largely mysterious.”
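For readers unfamiliar with how such "black box" flagging works in practice, here is a minimal sketch of a generic anomaly detector of the kind these systems could plausibly use. All of the features, numbers, and the choice of scikit-learn's IsolationForest are illustrative assumptions on our part, not a description of Travizory's actual model.

```python
# Hypothetical sketch: flagging "unusual" passengers with an
# off-the-shelf anomaly detector. Features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented per-passenger features: booking lead time (days),
# number of past trips, ticket price, itinerary legs.
typical = rng.normal(loc=[30, 10, 400, 2],
                     scale=[10, 4, 120, 1],
                     size=(500, 4))
# One passenger who "looks unusual": booked hours before departure,
# no travel history, expensive ticket, long multi-leg itinerary.
unusual = np.array([[0.5, 0, 2500, 6]])
passengers = np.vstack([typical, unusual])

# The model learns which points are easy to isolate from the rest.
# It outputs a flag, but no human-readable reason for the decision.
model = IsolationForest(contamination=0.01, random_state=0).fit(passengers)
flags = model.predict(passengers)  # -1 means flagged as anomalous

print("flagged passengers:", int((flags == -1).sum()))
```

The point of the sketch is the asymmetry the data scientist described: the model readily produces a flag, but extracting *why* a given passenger scored as anomalous requires separate explanation tooling, if it is possible at all.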
SITA, another company in this space, envisions using traveler data to track “what people are doing, not just who they are” at borders. It also aims to help governments effectively “export” their borders to every global location where passengers board flights, ships, or trains destined for their territory.
Those we interviewed raised concerns about the accuracy of the data fueling these models. A Dutch activist, after years of information requests, discovered that his Passenger Name Records listed flights he had never taken and omitted flights he had. Experts warn against relying on machines to predict terrorism risk and caution that such profiling could threaten the legal right to seek asylum.