Emerging Technologies Law is a blog by William Ting which examines 21st-century legal, business, and social tech issues.

AI: Catching Criminals Before the Act (Part 2)

catching only the bad guys? (licensed by Getty Images)

Crime Predicting AI: are we in trouble?

“The moment our actions are conditioned out of fear for how a machine can interpret them through artificially intelligent algorithm processes, we lose that which should never be predictable: our human nature.”

   What kind of world will face us when behavioral predictive technologies are used to predict crimes? How much will our lives change, and how much power are we handing over to public authorities and even private companies? Part 1 of this series explored machine learning’s application to behavioral predictive technologies as well as some of the major pitfalls of such an approach. Here, Part 2 will look at the legal, privacy, and ethical issues raised by crime-predicting behavioral predictive technologies.

how will Justice herself be weighed in the future? (CC0 Creative Commons license)

20/20 Justice?

   We have all seen the traditional symbol of justice personified in a blindfolded female figure wielding a sword and scales. But what if Lady Justice could see into the future and foresee our actions? Would she still be just?

   In most jurisdictions, a person may not be arrested for a crime they have yet to commit. (Later, we will discuss behavioral predictive technologies and inchoate crimes.) This is the current state of the law, but it can be changed if a national government is determined to use AI to help combat and prevent crime. Given political will and industry influence, it would not be difficult for a major government spending billions of dollars on AI research & development to change its criminal law and procedure to accommodate advances in unsupervised predictive machine learning. When that day comes, what will be the relationship between traditional principles of criminal law and predictions made by unsupervised machine learning?

The future of criminal intent & acts

   It is axiomatic in criminal law that a crime requires both an act and a state of mind (except where strict liability is concerned). Usually, the offenses that carry serious penalties require the prosecution to show some sort of intent on the defendant’s part (such as specific intent). For example, most charges of homicide require a showing that the defendant had the specific intent to cause the victim’s death or grievous bodily harm. Simply showing that the defendant intended to pull the trigger is not enough; the prosecution must show more. For lesser offenses like battery (touching someone without consent), it is sufficient to show that the defendant intended to put his hand on the victim, without more (this is known as basic intent).

hey! please don't throw me! (CC0 Creative Commons license)

   In addition to intent, criminal law also requires the defendant to perform some sort of act like breaking something or hitting someone. 

   Applying behavioral predictive technologies would throw a Thai baby elephant into these traditional principles of criminal intent and acts, raising some time-warping conundrums. Will these principles remain relevant if a defendant is arrested before he or she has acted or formed any intent to commit a crime?

Breakdown of traditional notions of criminal law

   Let’s say your credit card company charged you for a very expensive gold watch before you even bought it. You could easily cancel the charge by telling the company that you have not yet bought the watch and do not intend to do so. Likewise, a criminal defendant can make similar arguments when flagged by predictive technologies for a crime that he or she has not yet committed.

   Very interesting situations will arise when behavioral predictive technologies are applied to predicting crimes. 

not guilty my lord (CC0 Creative Commons license)

   First, if one is arrested on a charge of committing a future crime requiring specific intent, then under traditional principles of criminal law the defendant may negate that future intent simply by denying any intention to commit the crime. Negating such intent would be easy because the defendant was arrested before committing the act or forming the requisite intent. After all, isn’t arresting somebody before they commit a crime the whole raison d’être of criminal behavioral predictive technologies? But what is the point of investing billions of research & development dollars into this field if a defendant can easily evade conviction by denying (i) the existence of the act charged and (ii) any intent to commit the crime?

   Second, the traditional burden of proof falls on prosecutors, who must prove the defendant’s guilt beyond a reasonable doubt. But in the predictive technologies context, it will be hard for a prosecutor to prove something that has not yet occurred. Will the burden of proof need to be watered down to make room for the relevance of predictive technologies?

future cops: arresting someone before the crime? (CC0 Creative Commons license)

   There are only two ways to resolve this apparent paradox: either do not apply behavioral predictive technologies to prevent crimes, or abolish (or materially change) traditional notions of criminal law to make room for the new predictive technologies. Since governments are spending a great deal of time and money to achieve the goal of using AI to prevent crime (and it seems too late to stem this tide), it is likely that principles of criminal law (requiring intent, an act, and proof of the prosecution’s case) will be watered down or, worse, eliminated. When that happens, a new paradigm of criminal justice will arise that may not be friendly to defendants. When Lady Justice wears 20/20 goggles looking into the future, it will be hard to apply principles of criminal law from the past.

When Lady Justice wears 20/20 goggles looking into the future, it will be hard to apply principles of criminal law from the past. (CC0 Creative Commons license)

Safeguards

   Absent a ban on using predictive technologies in the criminal context (as with polygraph machines and tests), there are certain procedural safeguards that the courts can consider to balance criminal law and procedure against the application of emerging technologies.

   First, evidence based on behavioral predictive technologies can be deemed merely persuasive rather than conclusive. If the evidence is merely persuasive, then there will be room for criminal defense attorneys to develop innovative arguments based on traditional principles of criminal law applied in a novel context. For example, defense attorneys can argue that some of the machine learning pitfalls discussed in Part 1 so taint the procedural and qualitative nature of such evidence as to render it unreliable.

   Second, the courts can borrow concepts from the law of inchoate, or incomplete, crimes to build safeguards in the predictive technologies context. An inchoate crime is conduct deemed criminal even though no actual harm has been done. Inchoate crimes like attempt, solicitation, and conspiracy criminalize behavior even though the defendant has not yet committed the criminal act. Both inchoate crimes and predictive criminal technologies thus seek to punish someone for a crime which they have not committed, so defenses available to negate inchoate crimes may be used in the predictive criminal technologies context.

near completion of crime = more culpability (CC0 Creative Commons license)

   The key problem in making a defendant liable for a future crime is determining how close to completing the offense the defendant must get before he can be deemed to have committed it. For example, a defendant who is arrested for a future homicide after having studied his victim’s daily routine and purchased the murder weapon is more culpable than one who has taken no actions to further his crime at all. Another test would focus on how much remains to be done, to ascertain how close the defendant is to completing the offense predicted. The closer a defendant is to completing the criminal act when arrested, the more just it is to ascribe liability for that act. Proximity to the predicted criminal act therefore serves as a safeguard against abuse. To prevent a case from being thrown out of court, prosecutors would be incentivized to use behavioral predictive technologies that can identify and stop a suspect as close to the predicted criminal act as possible, so as to minimize the chance of false positives.

blurring lines between predicting & snooping... (CC0 Creative Commons license)

Privacy Challenges: predicting vs. snooping?

   Behavioral predictive technologies present challenges to privacy.  Machine learning systems require huge amounts of data to make predictions. Feeding copious amounts of data on citizens blurs the line between making criminal predictions and snooping.

   One of the cornerstones of privacy protection is the requirement of obtaining consent before anyone can use one’s personally identifiable information (see Article 4(11) of the GDPR). Do we want the authorities prying into what they think we will do in our future without our consent? Remember, for machine learning systems to generate predictions, they need massive amounts of data points. This means that for the system to make accurate predictions, the state would need to feed its AI machines a great deal of our personal information (far more intrusive than our date of birth or social security number): where we like to go for coffee and with whom, when we use our kitchen knives, how many times a day we say or write certain trigger words like “bomb”, and the people with whom we interact in our daily lives.

   Since using behavioral predictive technologies is so intrusive, should the authorities warn citizens that they will compile dossiers on all aspects of what those citizens currently do in order to predict what they’ll do in the future? If so, do citizens have rights in the dossiers kept on them, such as the right to challenge the accuracy of the data points, to make corrections, or to have the dossier deleted after a certain time?

how looking ahead can be too far... (CC0 Creative Commons license)

Ethical Issues: how far is too far?

   Behavioral predictive technologies have applications outside of criminal law. For example, in the family context, could the state file for custody of a child before he or she is even born, based on predictive evidence that the parents will be unfit? If so, this would break a family apart before it even becomes a family. In the corporate context, could a company fire someone for a future mistake, or conduct invasive employee monitoring to prevent and detect fraud? And it wouldn’t be long before apps use behavioral predictive technologies to help users find their “soulmate” as a basis for marrying or dating someone (the ultimate “dating app”)!

the vulnerable need protecting? (CC0 Creative Commons license)

Should we protect the public from using it?

   Should such technology be subject to stringent export control laws, banned outright, or highly regulated for limited private-sector use? There are arguments both ways. Some will argue that if predictive AI technology, like any disruptive technology, can make the lives of private citizens and companies more convenient and productive, then it should be marketed to the private sector globally. Why keep a good thing for government use only? But not all disruptive technologies are the same. Using behavioral predictive technologies is not like using a ride-sharing app.

   Should we introduce special protections to keep minors from using it? Will our children one day be able to use apps to predict things in their lives that have yet to happen? Think of the damage, from a child psychology perspective, of learning at a very early age that one will grow up to be a loser. Nothing could be more damaging to a child’s self-esteem than to be told they will never amount to anything. Will this cause teen suicides (already at a 40-year peak among teenage girls) to increase?

Social engineering tool?

   Behavioral predictive technologies can become a tool for social engineering, whether intended or not. Machine learning can be manipulated intentionally by its operators because its workings depend on the type and quality of the data it is fed. An operator can therefore feed the system certain types of data in order to produce a certain type of prediction about a certain group of people (be they political dissidents, a social class of “undesirables”, or suspected terrorists).
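   To make the mechanism concrete, here is a minimal sketch in Python using scikit-learn and entirely synthetic data (a hypothetical illustration, not any real predictive policing system): a toy classifier trained on labels that were skewed against one group faithfully reproduces that skew in its “risk” predictions, even though the underlying behavior of the two groups is identical.

```python
# Hypothetical illustration only: synthetic data and a toy model, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One feature encodes group membership (0 or 1); the other is behavioral
# noise drawn from the same distribution for both groups.
group = rng.integers(0, 2, size=n)
behavior = rng.normal(size=n)
X = np.column_stack([group, behavior])

# Skewed training labels: operators flag group 1 as "risky" three times
# as often as group 0, despite identical behavior.
flag_rate = np.where(group == 1, 0.30, 0.10)
y = rng.random(n) < flag_rate

model = LogisticRegression().fit(X, y)

# The trained model predicts far more "future criminals" in group 1.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean predicted risk = "
          f"{model.predict_proba(X[mask])[:, 1].mean():.2f}")
```

   The model is doing exactly what it was trained to do; the manipulation lives entirely in the data it was fed.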

reinforcing prejudices? (CC0 Creative Commons license)

   There is one thing more dangerous than social engineering conducted intentionally: social engineering conducted unintentionally.

   Algorithms used in machine learning are written by humans. This carries the risk that the programmers’ inherent biases and outlooks (whether shaped by their sex, religious beliefs, ethnicity, or age) may carry over and affect the results produced by the algorithms they write. For example, MIT noted that “one area of potential bias comes from the fact that so many of the programmers creating these programs, especially machine-learning experts, are male.” A Harvard study shows that online advertising technology (which, like machine learning, is algorithm-driven) perpetuates racial biases.

   If these biases are left unchecked, machine learning systems may unintentionally produce predictions that disproportionately target a specific population group.
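   One practical check, sketched below in Python under purely hypothetical assumptions (toy numbers and illustrative group labels), is to audit a system’s false positive rates by group: the share of people who would never have offended but were flagged anyway. A large gap between groups is one warning sign of exactly this kind of disproportionate targeting.

```python
# Hypothetical audit sketch: toy numbers only, not data from any real system.
import numpy as np

def false_positive_rate(flagged: np.ndarray, offended: np.ndarray) -> float:
    """Share of non-offenders who were nonetheless flagged as future criminals."""
    innocent = ~offended
    return float(flagged[innocent].mean())

# Toy records: group label, whether the system flagged the person, and
# whether the person actually went on to offend.
group    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
flagged  = np.array([1, 0, 0, 0, 1, 1, 1, 0], dtype=bool)
offended = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool)

for g in (0, 1):
    mask = group == g
    fpr = false_positive_rate(flagged[mask], offended[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")
# In this toy data, no group 0 innocents are flagged, but two of the three
# group 1 innocents are.
```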

Increased Cybersecurity Risk

our eyes are on you too (CC0 Creative Commons license)

   One thing is worse than having governments monitor our activities, and that is having cybercriminals do so. What happens when the authorities lose control over their behavioral predictive technologies in a cyber-attack? Governments should not be confident that their predictive AI systems will be immune from being hacked. Which hacker in the world would not relish the chance to mess with something as powerful as an AI system that can potentially ruin someone’s life over a future crime that he may or may not commit? Ransomware will operate on an entirely different level: if one doesn’t pay up, one may go to jail instead of merely having one’s files deleted. Attackers could also disrupt the operations of global corporations by tampering with a predictive AI system to get key executives arrested for crimes they may never commit.

Conclusion

   Modern disruptive technologies are liberating because they make our lives easier and more convenient. But not all disruptive technologies are alike. Some, like AI-driven behavioral predictive technologies, given their enormous potential for causing harm of great magnitude to society, should be controlled by policies and procedures expressly tailored to meet pervasive legal, privacy, and ethical challenges. Yet there is a point at which even the best-laid protection plans and safeguards will inevitably lag behind rapid advances in technology and thus fail. Social disasters unfortunately often happen because we forget that just because we can do something doesn’t necessarily mean that we should. The moment our actions are conditioned out of fear for how a machine can interpret them through artificially intelligent algorithm processes, we lose that which should never be predictable: our human nature.

#AI #machinelearning #ML #behavioralpredictivetechnologies #ethics #privacy #discrimination
