Earlier this year, I wrote two articles for AlgorithmWatch:
Both articles were part of AlgorithmWatch's upcoming report Automating Society 2020, which investigates the applications of automated decision-making in the public sector in various European countries.
The full report will be published this fall, but the abridged versions of both stories that will come in the report have now been published on the website Ethics of Algorithms of the Bertelsmann Stiftung:
Especially the Dutch story about SyRI, an algorithm that cross-references citizens' personal data across various databases, makes clear that algorithmic fraud detection has many pitfalls: as it was used in the Netherlands, there was no transparency (the exact algorithm or risk model was never disclosed), it exacerbated bias and discrimination, and it was deployed for purposes it wasn't designed for.
The best argument against such systems is actually quite simple: you don't need them. As Ronald Huissen of the Platform Bescherming Burgerrechten put it, the government doesn't need this kind of mass surveillance to prevent fraud:
The government already has information about who owns which house, so it could check this before granting someone a rental allowance. For all the big social-security fraud scandals of the past decades, it became clear afterwards that they could have been prevented with simple checks beforehand. That happens far too little. It is tempting to look for solutions in secret algorithms analyzing big data sets, but often the solution is far simpler.
It's an issue I see all too often: there's a problem (benefit fraud, tax fraud, crime, a virus outbreak, ...) and the government thinks it has to solve it with a big centralized database, mass surveillance, and some technological voodoo (big data, AI, an app, or whatever the next big thing is). The solution is often much simpler: everyone just has to do their job instead of outsourcing it to technology.