It’s a core function of journalism to hold decision-makers accountable. But what happens when the choice is made by an algorithm? Automated systems running in the background of modern life make decisions every millisecond about what news people see, who they should date, and what jobs they qualify for. Increasingly, policymakers use them to determine everything from which benefits people receive to whom law enforcement investigates.
But getting answers about how these systems actually work can be nearly impossible. So when Lighthouse Reports approached WIRED about partnering on an investigation into risk-scoring algorithms used to predict welfare fraud in Europe, WIRED jumped at the opportunity. Lighthouse managed to gain unprecedented access to one of these algorithms, including the machine learning model’s code, the data it was trained on, and the technical documentation, which allowed the team to reconstruct the tool and test how it works.
"These systems are almost always kept secret from the public under the guise of protecting the intellectual property of the companies that create them," says Couts, a senior editor at WIRED. "To have the opportunity to shine a light on how this technology works and how it impacts the lives of vulnerable people is a rare thing indeed."
The result was "Suspicion Machine," a major joint investigation with Lighthouse Reports that shows how the welfare-fraud detection systems used by governments are both inaccurate and unfair. Here, Couts and Dhruv Mehrotra dive into how they went about their reporting, past investigations that have inspired their work, and their favorite tools for reporting on the technology that shapes our lives.
Machine Bias
DM: This is one of the earliest examples of a technical investigation into an algorithm. In Machine Bias, reporters at ProPublica took apart a tool meant to predict the likelihood of defendants committing future crimes. They analyzed the risk scores of more than 7,000 people and found that the algorithm was biased against Black defendants.
AC: It’s no accident that the work of two of the reporters behind this story, Julia Angwin and Surya Mattu, shows up repeatedly on this list. They’re in many ways pioneers of the new generation of data journalism that emerged following the Machine Bias piece. Angwin later founded The Markup, listed below, and Mattu was one of its earliest hires. Dhruv and I have had the opportunity to work with both of them, and I highly recommend following everything they do.
Andrew Couts & Dhruv Mehrotra
Andrew Couts is Senior Editor, Security at WIRED overseeing cybersecurity, privacy, policy, politics, national security, and surveillance coverage. Prior to WIRED, he served as executive editor of Gizmodo and politics editor at the Daily Dot. He is based in New York's Hudson Valley.
Dhruv Mehrotra (he/him) is an investigative data reporter for WIRED. He uses technology to find, build, and analyze datasets for storytelling. Before joining WIRED, he worked for the Center for Investigative Reporting and was a researcher at New York University's Courant Institute of Mathematical Sciences. At Gizmodo, he was on a team that won an Edward R. Murrow Award for Investigative Reporting for their story Prediction: Bias. Mehrotra is based in New York.