Crime Prediction Software Is Here and It’s a Very Bad Idea

April 18th, 2010

Via: Gizmodo:

There are no naked pre-cogs inside glowing jacuzzis yet, but the Florida State Department of Juvenile Justice will use analysis software to predict crime by young delinquents, putting potential offenders under specific prevention and education programs. Goodbye, human rights!

They will use this software on juvenile delinquents, analyzing a series of variables to determine the potential for each of them to commit another crime. Depending on this probability, they will put them under specific re-education programs. Deepak Advani—vice president of predictive analytics at IBM—says the system gives “reliable projections” so governments can take “action in real time” to “prevent criminal activities.”

Really? “Reliable projections”? “Action in real time”? “Preventing criminal activities”? I don’t know how reliable your system is, IBM, but have you ever heard of the 5th, the 6th, and the 14th Amendments to the United States Constitution? What about Article 11 of the Universal Declaration of Human Rights? No? Let’s make this easy, then: Didn’t you watch that Scientology nutcase in Minority Report?

Sure. Some will argue that these juvenile delinquents were already convicted of other crimes, so hey, there’s no harm. This software will help prevent further crimes. It will make all of us safer. But will it? Where’s the guarantee of that? Why does the state have to assume that criminal behavior is a given? And why should the government decide who goes into a specific prevention program and who doesn’t based on what a computer says? The fact is that, even if the software were 99.99% accurate, there will always be an innocent person who gets fucked. And that is exactly why we have something called due process and the presumption of innocence. That’s why those things are not only in the United States Constitution, but in the Universal Declaration of Human Rights too.
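To put a rough number on that, here is a back-of-the-envelope sketch in Python. The population figure is hypothetical, and it takes the 99.99% accuracy claim at face value:

    # Even a system that is right 99.99% of the time mislabels people
    # at scale. The population figure below is hypothetical.
    population = 100_000      # hypothetical number of juveniles screened
    error_rate = 1 - 0.9999   # the 0.01% the system gets wrong

    wrongly_flagged = population * error_rate
    print(f"{wrongly_flagged:.0f} innocents flagged out of {population:,}")
    # prints: 10 innocents flagged out of 100,000

Ten out of a hundred thousand sounds small until you are one of the ten, and no real-world system comes anywhere near 99.99% anyway.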

Other people will say that government officials already make these decisions based on reports and their own judgment. True. And it seems that a computer program may be fairer than a human, right? Maybe. But in the end, the interpretation of the data is always in the hands of humans (and the program itself is written by humans).

But what really worries me is that this is a first big step towards something larger and darker. Actually, it’s the second: IBM says that the Ministry of Justice in the United Kingdom—which has an impeccable record of not pre-judging its citizens—already uses this system to prevent criminal activities. It may even be the third big step, because there’s already software in place to blacklist people as potential terrorists, though it’s most probably not as sophisticated as this.

IBM clearly wants this to go big. The company has spent a whopping $12 billion beefing up its analytics division. Again, here’s the full quote from Deepak Advani:

Predictive analytics gives government organizations worldwide a highly-sophisticated and intelligent source to create safer communities by identifying, predicting, responding to and preventing criminal activities. It gives the criminal justice system the ability to draw upon the wealth of data available to detect patterns, make reliable projections and then take the appropriate action in real time to combat crime and protect citizens.

If that sounds scary to you, that’s because it is. First it’s the convicted-but-potentially-recidivistic criminals. Then it’s the potential terrorists. Then it’s every one of us, in a big database, getting flagged because some combination of factors—travel patterns, credit card activity, relationships, messaging, social activity and everything else—indicates that we may be thinking about doing something against the law.

4 Responses to “Crime Prediction Software Is Here and It’s a Very Bad Idea”

  1. AHuxley says:

    In the past, the next generation of internal “future threats” was dealt with in one of two ways:
    the burn-now or the work-to-death-and-burn-later camp lines.
    Why not sort internal populations with more care?
    Good kids get full scholarships.
    If you’re part of the system, you should be productive.
    Stable kids get to join City Year.
    You’re useful and might still get that scholarship.
    Big pharma has a chemical solution for the rest.
    Feel happy working to death over decades.
    Safer communities for all.

  2. bloodnok says:

    Putting aside all the blatant human rights violations, this sort of system has other dangers. Feedback, for one. If you’ve got a pattern-detection system (which this is) whose output determines an action that affects future inputs to your pattern matching, you’re going to get feedback effects.

    E.g.: Group A has been assigned a moderate probability of offending based on the input data, so it is put under higher surveillance. On the principle that you’re going to catch more crimes this way (because if you weren’t watching, crimes would go unnoticed), you record that Group A commits more crimes than other, unmonitored groups. That data is biased and will push your system into assigning Group A an even higher probability, as the sketch below shows.
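    A minimal Python sketch of that loop. Every number here is invented; the point is only that two groups with identical true offending rates end up with diverging risk scores purely because of where the system looks:

        # Two groups with the SAME true offence rate. The group with the
        # higher risk score gets watched, so more of its offences are
        # recorded, which pushes its score up further. All values invented.
        true_rate = 0.05               # actual offending rate, identical for A and B
        catch_watched = 0.90           # share of offences recorded under surveillance
        catch_unwatched = 0.30         # share recorded with ordinary policing

        risk = {"A": 0.51, "B": 0.49}  # A starts with a marginally higher score

        for step in range(1, 6):
            watched = max(risk, key=risk.get)  # watch the "riskiest" group
            for group in risk:
                catch = catch_watched if group == watched else catch_unwatched
                recorded = true_rate * catch   # the biased crime rate the system sees
                # naive update: blend the old score with the (biased) observation
                risk[group] = 0.7 * risk[group] + 0.3 * (recorded / true_rate)
            print(step, {g: round(s, 3) for g, s in risk.items()})
        # Group A's score climbs toward 0.9 while B's sinks toward 0.3,
        # even though both groups offend at exactly the same rate.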

  3. Miraculix says:

    Now THAT is genuinely dystopian. So much for “rehabilitation”. This isn’t just “guilty until proven innocent”. In this model, EVERYONE is a criminal all the time.

    I wonder, will they be able to use this system to bust people for buying raw milk directly from a farmer with cash? After all, that’s a criminal act in more than a few states in the USSA, as well as in many countries.

  4. LykeX says:

    That’s an interesting point, bloodnok.
    Maybe they’ll take that into account. Or maybe they won’t, seeing as it could easily be taken as a sign that the system is working well, i.e. “look at how many criminals we’re catching.”
