
Under what conditions, if any, can the use of AI in predictive policing be given ethical justification?

Algorithms trained on massive datasets are used to generate predictions across a number of sectors, including policing. One of the most promising applications of artificial intelligence (AI) in policing is the identification of patterns, such as the likelihood of crime in certain areas or the risk of re-offending among convicted criminals (Dery, 2016). This technology, called predictive policing, grew out of crime analysis and big data, and is just one of the ways police departments have incorporated big data into their work over the last two decades. Algorithmic prediction is changing policing: law enforcement resource allocation is increasingly guided by crime forecasts generated by predictive algorithms, which promise cost-effective crime analysis that is more accurate, and less susceptible to bias, than human “analogue” crime analysis. These developments, however, have often not been accompanied by adequate safeguards.

In this essay I explore under what conditions, if any, the use of AI in predictive policing can be given an ethical justification. In order to unpack this complex issue, I examine some of the ethical considerations of big data and algorithms pertaining to policing, as well as their societal implications.

Introduction

Predictive policing is an umbrella term[1] for law enforcement technology that uses data analysis to aid the identification of likely criminal activity. Its intention is to proactively[2] reduce crime by providing police forces with forecasts of high-risk areas and individuals. While this is a noble pursuit, like any other tool intended to affect people’s lives it must be accompanied by ethical consideration of its potential consequences.

Among the public there is a distinct bias against using data analytics to enforce the rule of law, markedly different from the perception of data analytics used for health or economic development. Such an attitude is not surprising, as policing activities are of a fundamentally different nature: the normative paradigm here is command. The ability to enforce the rule of law is perceived as fundamental to social cohesion and rests solely with the state, which matters given that the research and development behind many of the technological solutions implemented by police is delivered by private entities.

The promises of predictive policing are alluring, given how its preventative measures could work. Forecasting future crimes can, for example, facilitate a shift towards deterrence and prevention rather than acting only “post hoc” (Zedner, 2007). Indeed, Zedner observed that “we are on the cusp of a shift from a post-crime to a pre-crime society, (where) forestalling risks competes with and even takes precedence over responding to wrongs done” (Zedner, 2007). Building on this, proponents of predictive policing see it as a tool to help police better allocate their resources, allowing national agencies to do more with less, hypothetically increasing efficiency and addressing the reactivity of the force (no longer responding only to crimes already committed). However, this focus on retributive justice does not consider the fact that crime can also be prevented by addressing so-called “third variable problems”: root causes and factors dealt with in areas of government other than policing, such as healthcare and welfare. Predictive policing may still be a useful tool, but it does not exploit the fact that many social variables are interdependent.

From an ethical point of view it is also important to consider the cost of underutilizing a tool that could save lives. On this view, declining to use predictive policing could itself be an ethical violation, insofar as the technology does reduce crime, and could potentially do so without arrests. One can, however, counter this argument by noting that more police interventions derived from algorithmic predictions (especially in cases of petty crime) increase social control rather than enhance the protective aspect of police work (Pelaez, 2008).

Furthermore, in order to compel a prospective criminal not to commit a crime, pre-crime policies might also have to involve “pre-punishment” (New, 1992), leading to the ethical dilemma of whether pre-punishment of any kind can be justified given that “a person who is going to commit a crime has not yet done anything to deserve punishment” (New, 1992). This results in a paradox: if a predicted criminal is successfully prevented from committing a crime, does that not mean they are not guilty of committing that crime? Moreover, if the predicted crime never occurs, does that not also render the initial prediction false? Many scholars argue that punishment can only be justified when deserved, and therefore do not condone pre-punishment. Such logic stems from an underlying belief that a person does not deserve to suffer for a crime they may commit in the future. On the other hand, so-called “deterrence theorists” justify punishment so long as it results in a general benefit to society (Williams, 2012), approving of pre-punishment on the grounds that the utility gained by deterring a crime counterbalances the wrong (unfairness) done to the would-be offender.

It is worth highlighting that preventative and pre-emptive punishments are already part of contemporary policing practice. Many such measures have less to do with rehabilitating criminals and more with restricting them, because these people are deemed likely to re-offend. Such measures might be seen as concerned with the welfare of society, but they come at a cost to (former) criminals. Similar approaches are common in other fields, for example counterterrorism (recall the consequentialist debate on “dirty hands”: can it ever be justified to commit immoral actions when they are necessary for realizing a moral end?). Such actions, taken in anticipation of a future event, infringe the autonomy of the individuals detained (who might, after all, be innocent). One could say that they are sacrificed for the sake of providing utility to society in the form of prevention. In this respect, algorithm-powered predictive policing is no different from the “analogue” practices widely used by law enforcement. Importantly, however, in the case of predictive policing, while decisions are taken by a human actor, the mechanism of prediction (the algorithm) is opaque (the “black box” problem, to which I will return).

Another ethical dilemma related to predictive policing concerns the degree of certainty with which we can predict the future. Through a determinist lens, predictions could be made with relative certainty: the concept of choice is just an illusion, and the predicted offender will commit the crime. By contrast, if we have free will, it cannot be determined beforehand whether or not the crime will actually occur, rendering predictive policing flawed, as the punishment cannot be justified.

Finally, even if we assume that predictive policing can indeed help foresee crimes, leading to more accurate and effective police work than traditional methods, critics have raised concerns about transparency and accountability. While AI solution providers claim that their technologies can help remove bias from police decision-making, algorithms relying on historical data risk reproducing the very same biases: even the best algorithm will produce faulty predictions if trained on unrepresentative data. Understandably, then, predictive policing models have come under much scrutiny regarding “their effectiveness, potential impact on poor and minority communities, and implications for civil liberties” (Jouvenal, 2016).

Policing and ethics

Regardless of the outcome of the debate on AI, police ethics is highly undertheorized, especially compared to military ethics (Miller, 2016). There is a need for a consensus normative framework, as police ethics vary between jurisdictions, and these differences are amplified whenever technologies are involved. The most popular analogies used in police ethics are waging war (a reference to the military) and fighting disease (a reference to public health). These can suggest possible justifications of practices, including predictive policing. In the case of the military analogy, one can refer to the Doctrine of Double Effect to argue that it is sometimes permissible to impose grave risks on innocent parties under the right conditions.

The “public health” analogy suggests that law enforcement agencies should focus more on identifying and preventing the risk factors for crime, even at the cost of burdening the liberty of communities, in ways that are both fair and effective. Furthermore, when thinking about the ethical significance of predictive policing, it helps to draw upon the theoretical models proposed within the literature and language of bioethics, such as the four basic principles (autonomy, beneficence, nonmaleficence, and justice), when assessing the dangers and flaws of predictive policing as a discretionary tool used to justify questionable processes and biases.

Big data and bias

Predictive policing can be defined as “the application of analytical techniques – particularly quantitative techniques – to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions” (Lum & Isaac, 2016).

As mentioned earlier, this technology is a relatively new approach building on existing methods of criminal analysis. In practice, predictive policing applies analytical techniques to big data to detect probable targets for police intervention via statistical prediction (Perry et al., 2013). This means using algorithms trained on massive historical datasets to perform a similar job to that of an “analogue” analyst, albeit on a much bigger scale and in a far shorter timeframe. In order to analyse the conditions for an ethical justification of predictive policing, one should therefore look into the risks and benefits of this change (from analogue to big data), as well as the ethical considerations of using big data in policing.
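
To make these mechanics concrete, below is a minimal sketch of what place-based forecasting can look like, assuming a city divided into grid cells and entirely synthetic incident counts; it is illustrative only, not any vendor’s actual model.

```python
# A minimal sketch of place-based crime forecasting on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per grid cell; all values are invented.
n_cells = 500
X = np.column_stack([
    rng.poisson(2, n_cells),      # incidents recorded in the previous week
    rng.poisson(8, n_cells),      # incidents recorded in the previous month
    rng.integers(0, 2, n_cells),  # proximity to a nightlife district (0/1)
])
# Label: did the cell record an incident in the following week?
# (Synthetic, loosely tied to the first feature.)
y = (X[:, 0] + rng.normal(0, 1, n_cells) > 2).astype(int)

model = LogisticRegression().fit(X, y)

# Rank cells by predicted risk; patrols would be directed to the top cells.
risk = model.predict_proba(X)[:, 1]
hotspots = np.argsort(risk)[::-1][:20]
print("Top 20 'high-risk' grid cells:", hotspots)
```

The essential point is that the output is simply a ranking over places derived from past records: whatever shaped those records shapes the ranking.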

Big data, from a consumer perspective, should be highly scrutinized for privacy and accuracy concerns. The quality and quantity of the data are major factors in how an AI-based model will behave, determining whether any biases have been incorporated into the model (intentionally or unintentionally). From a policing perspective it is crucial to know where the data comes from and to what extent it includes all the information from which AI is expected to extract trends. Overlooking these questions can lead to systemic bias and discrimination, as well as raise concerns regarding the validity and transparency of models used for predictive policing. According to Lum and Isaac (2016), the data currently stored in police databases can be argued to be biased: these records are neither a “complete census of all criminal offences, nor do they constitute a representative random sample” (Lum & Isaac, 2016).

Further ethical concerns raised by the use of big data in policing include, in particular, respect for autonomy (provision of adequate consent) and privacy. As mentioned earlier, many factors that aggravate or mitigate crime are not easily quantified (Angwin et al., 2016), and one can argue that there is a practically unlimited number of potential factors that reduce or increase a person’s likelihood of committing a crime. Furthermore, algorithms are designed by humans, who are often more likely to consider aggravators of crime, distorting the resulting predictions. Importantly for the prediction of places at increased risk of criminal activity, if police have a higher propensity to target certain groups, they will naturally gravitate towards the specific neighbourhoods where those groups are concentrated (Lum & Isaac, 2016), infringing the freedom of the targeted communities. Finally, some may argue that the over-policing of ethnic minority neighbourhoods simply reflects high crime rates. However, there are counter-arguments that much of the excess crime reported in these neighbourhoods consists of nuisance crimes rather than serious criminal activity (Fayyad, 2017).

To sum up, if existing police datasets and practices are systematically biased, this bias will be reflected and reproduced in the predictive models, as it is well established that data and models tend to reflect the biases of the actors who construct them (Silberg & Manyika, 2019; Manyika, Silberg & Presten, 2019). Predictive policing technologies present the world to police officers in a certain way, specifically by telling them that certain areas are likely to be more dangerous than others. When police are taught to see the world this way, it may affect how they behave towards the people they encounter in such “high-risk” zones.
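
The feedback loop this creates can be demonstrated with a toy simulation in the spirit of Lum and Isaac’s argument: two districts with identical true crime rates, where one starts out more heavily patrolled, and recorded crime (which is all the algorithm sees) tracks patrol presence. All numbers are hypothetical.

```python
# A toy simulation of the predict-and-patrol feedback loop.
import numpy as np

rng = np.random.default_rng(1)

true_rate = np.array([10.0, 10.0])   # identical underlying crime in A and B
patrol_share = np.array([0.7, 0.3])  # district A is historically over-policed

for week in range(5):
    # Crime is recorded roughly in proportion to police presence.
    recorded = rng.poisson(true_rate * patrol_share)
    # Next week's patrols follow recorded crime: the "prediction" step.
    patrol_share = recorded / max(recorded.sum(), 1)
    print(f"week {week}: recorded={recorded}, "
          f"next patrol share={np.round(patrol_share, 2)}")
```

Even though both districts are equally crime-ridden by construction, the allocation drifts towards the district where police were already looking, and the data “confirms” the prediction.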

But does predictive policing amount to technologically veiled racial discrimination? A weak argument against at least intentional discrimination is that no known predictive policing algorithm uses race as an explicit factor when generating crime forecasts; moreover, algorithms cannot intend anything, not in the way we usually think about intentions. Selbst (2017) argues in favour of predictive policing systems, saying that, when modelled correctly, they leverage technology to free public institutions from human prejudice. Yet omitting race as an input does not guarantee race-neutral outputs, since other features can act as close proxies for it.
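
A hedged illustration of that last point: in the synthetic example below, the protected attribute is never given to the model, yet a hypothetical neighbourhood indicator correlated with group membership carries the signal anyway, so mean predicted risk still diverges between groups. Nothing here reflects real police records.

```python
# Omitting a protected attribute does not remove it: a proxy feature leaks it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

group = rng.integers(0, 2, n)  # protected attribute: never used as a feature
# A strong proxy: residential segregation makes neighbourhood track group.
neighbourhood = (group + rng.normal(0, 0.3, n) > 0.5).astype(int)
# Enforcement history already skewed towards the proxy neighbourhoods.
prior_contacts = rng.poisson(1 + 2 * neighbourhood)

X = np.column_stack([neighbourhood, prior_contacts])  # race is NOT included
y = (prior_contacts + rng.normal(0, 1, n) > 2).astype(int)

scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Disparate impact check: mean predicted risk per group still differs.
print("mean predicted risk, group 0:", round(float(scores[group == 0].mean()), 3))
print("mean predicted risk, group 1:", round(float(scores[group == 1].mean()), 3))
```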

Democratic influence and transparency

It can be argued that algorithmic governance poses a significant threat to the legitimacy of public decision-making. Exploring the impact of big data and algorithms (as in the case of predictive policing) on legal rules and practice highlights potential liability gaps that might arise through algorithmic misbehaviour or biased training datasets. Predictive policing has raised concerns over the objectivity of, as well as the lack of transparency in, its models (Jouvenal, 2016).

While discussing democratic influence in the context of the police’s technologically aided ability to enforce the rule of law, it is important to look into the implications of third-party software. This is rather problematic, as many national agencies outsource research and development to private companies or purchase off-the-shelf solutions. Firstly, this raises the question of who has access to police data. Moreover, training data sourced from outside the police organization, or the blending of multiple data sources, might be a cause of errant AI applications. Secondly, few vendors are willing to disclose their proprietary algorithms, yet it is law enforcement that bears responsibility for the results. One can also ask what information the designers of algorithms owe to communities and police departments. Furthermore, adversarial attacks on privately developed systems, in which changes imperceptible to the human eye drastically alter a model’s output, might be disastrous to police work.
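
To illustrate what such an attack means in the simplest case, the sketch below crafts a minimal perturbation that flips the decision of a toy linear risk model; the data and model are invented for the example, and attacks on real deep models work analogously via gradients.

```python
# A minimal adversarial perturbation against a toy linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# A toy linear "risk model" fitted on synthetic data.
X = rng.normal(size=(300, 5))
y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
w = model.coef_[0]
margin = model.decision_function([x])[0]

# Move the input just past the decision boundary along the weight direction:
# the smallest L2 change that flips a linear classifier's output.
x_adv = x - 1.01 * (margin / (w @ w)) * w

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print("size of perturbation:", round(float(np.linalg.norm(x_adv - x)), 4))
```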

These observations lead to the black box problem. Very little information is given to the general public about how predictive policing algorithms work. This can, however, be attributed to broader operational concerns, present also in “analogue” policing. Such a veil of secrecy prevents society from holding police accountable for actions taken with the help of algorithms. One can therefore ask: how can communities ensure accountability and oversight of predictive technologies? Who is expected to balance the competing concerns of the public interest in transparency and operational secrecy, and how? Predictive policing models allow decision-makers to shift accountability to “black-box machinery that purports to be scientific, evidence-based and race-neutral” (Rieland, 2018). Consequently, the lack of transparency and accountability leaves little opportunity to correct the models applied (O’Neil, 2016). Further to this, Danaher (2016) suggests that the growth of algocratic systems raises two moral and political concerns: hiddenness and opacity. The former pertains to the manner in which our data is collected and used by the algorithms, as well as to consent (or the lack thereof). The latter pertains to the intellectual and rational basis of the algorithms, for instance how their opaqueness affects our participation in procedures.
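
One partial remedy discussed in the algorithmic accountability literature is black-box auditing: probing a model through its inputs and outputs alone. The sketch below uses permutation importance, as implemented in scikit-learn, to estimate how much each input drives a fitted model’s accuracy; the model and data are invented for the example.

```python
# Black-box auditing via permutation importance on a synthetic model.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# A stand-in for an opaque vendor model: the auditor only needs query access.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the resulting drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["feature_0", "feature_1"], result.importances_mean):
    print(f"{name}: mean accuracy drop when shuffled = {imp:.3f}")
```

Such audits do not open the box, but they give communities and oversight bodies at least some empirical handle on what a proprietary model is responding to.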

Conclusions

Police use of technology is understandably under heightened scrutiny, and predictive policing, as one of the most prominent and controversial of these practices, is especially criticised. This stems from a mix of concerns about artificial intelligence, bias, privacy, power structures (including the relationship between state and citizen), accountability, and the responsibilities of the private tech companies developing and selling the software, among others.

Despite these issues, predictive policing systems have been implemented by a number of police agencies due to their perceived efficiency.

Predictive policing is undeniably a powerful technology: useful for discovering patterns, helping to make decisions in a fraction of the time, and allocating scarce police resources more effectively. At the same time it perpetuates a number of biases, leading to a decrease in public trust. On the arguments presented in this essay, the use of AI in predictive policing can be given an ethical justification, but perhaps only under ideal conditions. Exploring the impact of big data and algorithms on legal rules and practice has highlighted too many potential liability gaps arising from algorithmic misbehaviour or biased training datasets, and these do not exhaust the list of gaps that the proliferation of predictive policing systems might create.

To sum up, while facilitating police work is a noble pursuit, any new tool used by law enforcement must be accompanied by ethical consideration of its potential ramifications. This is especially important for technologies like predictive policing, which can have detrimental consequences for communities. Articulating values and ethical principles is just the first step in addressing the AI and data ethics challenges in policing.

References

Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. [online] Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29(3), pp. 245–268.

Dery, S. (2016). On the Wrong Side of Algorithms. Medium, 28 November 2016. [online] Available at: https://medium.com/@sderymail/on-the-wrong-side-of-algorithms-part-1-f266a4c3342b

Fayyad, A. (2017). The Criminalization of Gentrifying Neighborhoods. The Atlantic. [online] Available at: https://www.theatlantic.com/politics/archive/2017/12/the-criminalization-of-gentrifying-neighborhoods/548837/

Jouvenal, J. and Morse, D. (2011). Police probe Germantown flash-mob thefts. The Washington Post, 15 August 2011. [online] Available at: https://www.washingtonpost.com/blogs/crime-scene/post/possible-flash-mob-robbery-in-germantown/2011/08/15/gIQAmZFvGJ_blog.html

Lum, K. and Isaac, W. (2016). To predict and serve? Significance, 13(5), pp. 14–19. [online] Available at: https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x

Manyika, J., Silberg, J. and Presten, B. (2019). What Do We Do About the Biases in AI? Harvard Business Review. [online] Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Miller, S. (2016). Shooting to Kill: The Ethics of Police and Military Use of Lethal Force. Oxford; New York: Oxford University Press.

New, C. (1992). Time and Punishment. Analysis, 52(1). Oxford University Press. [online] Available at: https://www.jstor.org/stable/3328880

O’Neil, C. (2018). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin Books.

Pelaez, V. (2008). The prison industry in the United States: big business or a new form of slavery? Global Research. [online] Available at: https://www.globalresearch.ca/the-prison-industry-in-the-united-states-big-business-or-a-new-form-of-slavery/8289

Perry, W.L., McInnis, B., Price, C., Smith, S. and Hollywood, J.S. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation.

Rieland, R. (2018). Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased? Smithsonian Magazine, 5 March 2018. [online] Available at: https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/

Selbst, A.D. (2017). Disparate Impact in Big Data Policing. SSRN Electronic Journal.

Silberg, J. and Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. [online] Available at: https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

Williams, J.N. (2012). Beyond Minority Report: Pre-crime, Pre-punishment and Pre-desert. TRANS: Internet Journal for Cultural Sciences, 17, pp. 10–15.

Zedner, L. (2007). Pre-crime and post-criminology? Theoretical Criminology, 11(2), pp. 261–281.


[1] There are three different kinds of predictive analysis:

  1. prediction of offenders (risk of recidivism), for example Hyderabad’s Integrated People Information Hub;
  2. prediction of victims, for example the Saskatchewan Police Predictive Analytics Lab (SPPAL), a joint project of the Saskatoon Police Service (SPS), the University of Saskatchewan and the Government of Saskatchewan;
  3. prediction of the time and place at increased risk of a criminal act, for example the Vancouver Police Department’s GEODASH.

[2] Given that law enforcement is often perceived as reactive, the proactivity of this technological solution is an important factor for its proponents.
