Inside Sweden’s Suspicion Machine: Fraud-Hunting AI That Hits the Wrong Targets

by Varga Balázs

Sweden is often celebrated as a model welfare state. It ranks at the top of global transparency indices and maintains high levels of public trust. Yet beneath this reputation for openness, the country’s Social Insurance Agency (Försäkringskassan) has quietly run large-scale experiments with algorithms designed to score hundreds of thousands of benefit recipients, ostensibly to predict who might commit fraud.

Sources within the agency, responsible for administering Sweden’s social security system, describe these algorithms as its “best-kept secret.” Those flagged—who sometimes face humiliating investigations or even benefit suspensions—are unaware that an algorithm singled them out.

In October 2021, we submitted a freedom-of-information request to the Social Insurance Agency to learn more. It was immediately denied. Over the next three years, we exchanged hundreds of emails and filed dozens of additional requests, almost all rejected. We went to court twice and consulted half a dozen public authorities.

Through persistent efforts, Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset of applicants to Sweden’s temporary child support scheme, which assists parents caring for sick children. Each applicant in the dataset had been flagged as suspicious by the agency’s predictive algorithm. Analysis revealed that the fraud prediction system disproportionately targeted women, migrants, low-income earners, and people without a university education.

Months of reporting—including interviews with confidential sources—show how the agency deployed these systems with little oversight, despite warnings from regulatory bodies and even its own data protection officer.

Methods

The Suspicion Machines series has examined welfare surveillance algorithms across more than eight countries. Sweden, unexpectedly, proved the most challenging.

Over three years, Lighthouse invoked Sweden’s freedom-of-information laws at scale, requesting technical documentation and evaluations similar to those obtained in prior investigations in France, Spain, and the Netherlands.

The agency’s refusals were relentless. Officials declined to release even basic information, claiming disclosure could help fraudsters evade detection. They refused to confirm whether algorithms were trained on random samples or disclose how many people were flagged overall. Basic statistics on fraud estimates were also withheld. In one email, a senior official, seemingly forgetting a reporter was CC’d, wrote: “let’s hope we are done with him!”

Even requests for data already published in annual reports were denied under the claim of confidentiality.

Still, there were strong indications of serious problems. A 2016 report by Sweden’s Integrity Committee described the practice as “citizen profiling,” warning of extreme risks to personal privacy. A redacted 2020 note from the agency’s data protection officer questioned the system’s legality.

A 2018 report from the independent supervisory authority ISF offered another avenue for investigation. Using a dataset it had received, ISF concluded that the algorithm predicting fraud in parental benefits did not treat applicants equally. The agency rejected these findings, challenging the analysis’s validity.

Obtaining the dataset behind the ISF report made in-depth analysis possible. It included over 6,000 people flagged in 2017 along with their demographic details. With eight academic experts, Lighthouse and Svenska Dagbladet ran statistical fairness tests, showing that women, migrants, low-income earners, and those without a university degree were disproportionately flagged. The tests also revealed these groups were more likely to be falsely labeled as suspicious.
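To illustrate the kind of group-level fairness test described above, a minimal sketch follows. This is not the published methodology (which is on GitHub) and uses synthetic data; the function names and record layout are assumptions for the example. It compares two common metrics across demographic groups: the share of each group that was flagged, and the false-positive rate, meaning the share of non-fraudulent people in each group who were nonetheless flagged as suspicious.

```python
# Illustrative sketch only -- synthetic data, not the actual investigation code.
from collections import defaultdict

def group_rates(records):
    """From (group, flagged, actually_fraud) tuples, compute per group:
    - flag_rate: share of the group flagged by the algorithm
    - false_positive_rate: share of non-fraudulent members who were flagged
    """
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "fp": 0, "innocent": 0})
    for group, flagged, fraud in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += flagged
        if not fraud:
            s["innocent"] += 1
            s["fp"] += flagged  # flagged despite no fraud
    return {
        g: {
            "flag_rate": s["flagged"] / s["n"],
            "false_positive_rate": (
                s["fp"] / s["innocent"] if s["innocent"] else 0.0
            ),
        }
        for g, s in stats.items()
    }

# Synthetic example: group A is flagged far more often than group B,
# even though both groups contain the same number of actual fraud cases.
records = [
    # (group, flagged_by_algorithm, actually_fraud)
    ("A", 1, 0), ("A", 1, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]
rates = group_rates(records)
# Group A: flagged 3 of 4 (0.75), and 2 of its 3 innocent members were flagged.
# Group B: flagged 1 of 4 (0.25), and none of its innocent members were flagged.
```

A disparity like the one in this toy example (identical fraud prevalence, very different flag and false-positive rates) is the pattern the reported tests were designed to detect at scale.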

The methodology and underlying code and data are now available on GitHub.

Storylines

Lighthouse and Svenska Dagbladet produced a three-part series based on joint reporting and analysis.

The first story exposes how Sweden’s social security agency has deployed machine learning at industrial scale and largely in secret. It tells the stories of parents left without funds for basic necessities. Those with the highest risk scores face investigators with broad powers, operating in isolated corners of agency offices inaccessible to other employees. Vulnerable groups historically subjected to discrimination were also disproportionately targeted for investigation.

Anders Viseth, who oversees the agency’s fraud algorithm, denied wrongdoing. He argued that investigations are not harmful because a human investigator ultimately decides the outcome—despite benefit delays and invasive scrutiny.

The second story scrutinizes claims of widespread fraud, a key justification for secrecy and AI use. Publicly cited fraud estimates, highlighted in media reports and annual publications, rest on shaky assumptions. For example, the agency’s definition of fraud did not differentiate between errors and intentional acts. Data from the agency and the Swedish courts shows few cases reach trial, and even fewer result in a finding of intentional fraud.

The final story critiques the agency’s opacity and lack of accountability. Virginia Dignum, a professor at Umeå University and member of the UN expert group on AI, sharply criticized the agency’s anti-transparency stance. David Nolan of Amnesty International’s Algorithmic Accountability Lab highlighted the lack of recourse for those profiled.

“The opacity of the system means most people don’t know that fraud control algorithms flagged them,” Nolan said. “How can someone challenge a decision about them when they are unaware it was made by an automated system?”

Viseth, asked whether the agency should be more transparent, responded: “I don’t think we need to be.”