The Ethics of Algorithmic Governance

David Tamez

Editor-in-Chief, Philosopher, University of Kansas

Editor’s note: For a hands-on look at how algorithmic governance works, scholar Denisa Kera will hold a workshop at the Lawrence Public Library that is open to everyone; no prior experience is needed.

Occasionally, I assume, many of us have the following thought: “I wish I acted according to my stated values and rational desires rather than to my unhealthy, future-inhibiting ones.” In other words, we wish we would act in our own best interests. Given the choice between a diet consisting of everything good for a healthy, long-lasting body and a diet of meals from a certain fast-food chain, what is good for us, we might say, is to choose the former. If only there were some app or non-human mechanism to keep us in check. What we are asking for, in a small way, is a form of algorithmic governance.

Photo by solarseven/iStock / Getty Images

First, what is algorithmic governance? At first glance, it sounds like something out of a science fiction novel or The Matrix. To understand algorithmic governance, it helps to understand the aims of government and of law. For many, government and law are intended to influence human behavior in such a way that individuals act and decide appropriately. Yet all human decision-makers are subject to biases, prejudices, and other cognitive quirks, and policy decisions are often developed and enacted with some of these biases, prejudices, and quirks built in. This makes for bad law. By bad law, we can mean a number of things. We might mean that the processes that went into devising a policy were significantly flawed by the mere fact that decision-makers were subject to troubling tendencies, such as prejudices based on race, gender, sexual orientation, or religion. So, how do we deal with these shortcomings of human cognition? We appeal to machines and algorithms. Put most simply, algorithmic governance refers to the growing practice by decision-makers across socio-economic and political sectors of using non-human, mostly autonomous computer systems to make policy decisions based on the data these systems collect. Blockchain algorithms are among the technologies most commonly used toward this end.

Algorithms are often used to “mine, parse, sort and configure the data into useful packages.” With this in mind, policy-makers - whether at the government level or at the private institutional level - outsource their decision-making to these algorithms. The assumption here is that algorithms will be more consistent in their application and less biased than their human counterparts. This is a troubling assumption, but for now we will put it to one side. Now, what data are these systems collecting? Algorithmic systems typically collect the everyday behaviors of as many individuals as possible in a given area in order to make informed decisions about such things as where public transportation ought to be directed, insurance rates for a particular pool of people, or even whether one ought to buy tickets to an expensive concert given one’s financial situation. In this last case, algorithms can be used to nudge agents toward a particular decision without necessarily prohibiting any. Credit-check apps, for instance, can nudge us into making smarter purchasing decisions by reminding or warning us when our credit scores decrease and reach dangerous levels.
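The credit-score nudge just described can be sketched in a few lines. This is a hypothetical illustration only; the threshold and the messages are invented for the example and do not reflect any real app’s logic:

```python
# Hypothetical credit-score nudge: warn the user when their score
# drops below a chosen threshold, but never block any purchase.
WARNING_THRESHOLD = 650  # assumed cutoff, not from any real product

def nudge(previous_score: int, current_score: int) -> str:
    """Return an advisory message; the decision stays with the user."""
    if current_score < WARNING_THRESHOLD:
        return "Warning: your credit score is in a risky range."
    if current_score < previous_score:
        return "Heads up: your credit score decreased this month."
    return "No action needed."

print(nudge(700, 640))  # score fell below the assumed threshold
```

Note that the function only ever returns advice: the agent is steered, not prohibited, which is what makes this a nudge rather than a rule.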

We have mentioned a few examples in our previous postings. One instance of algorithmic governance, at least at the micro level, is the effort made by Facebook and YouTube to address the growing problems of misinformation and hate speech posted on their platforms by private content creators. To be clear, much of this supposed moral benevolence was brought on by an event known as the “Adpocalypse.” The two platforms needed a way to catch and demonetize problematic accounts without requiring human beings to wade through millions of accounts to determine whether action ought to be taken. They were also concerned with human error on the part of evaluators, who might conduct their evaluations with political biases and hidden agendas. So they turned to complex algorithms to make these decisions.

The judicial system, in turn, has been experimenting with algorithms to determine appropriate sentence lengths for particular convictions. These algorithms assist judges in assessing the risk a given defendant would pose to public safety were they released too soon. Again, the concern that prompted the judiciary to explore this option was to eliminate human biases and to assist human legal cognition. Once the judge enters the relevant inputs, the algorithm takes the information and returns a calculated suggestion.
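The input-to-suggestion pipeline can be made concrete with a toy sketch. The features, weights, and cutoffs below are entirely invented for illustration; real risk-assessment tools are proprietary and far more complex:

```python
# Toy risk-assessment sketch: a weighted sum over a defendant's
# inputs, mapped to a sentencing suggestion. All numbers are
# invented for illustration, not drawn from any real tool.
WEIGHTS = {
    "prior_convictions": 0.5,
    "age_at_first_offense": -0.02,
    "failed_appearances": 0.3,
}

def risk_score(inputs: dict) -> float:
    """Combine the judge's inputs into a single numeric score."""
    return sum(WEIGHTS[name] * value for name, value in inputs.items())

def suggestion(inputs: dict) -> str:
    """Map the score onto a coarse recommendation for the judge."""
    score = risk_score(inputs)
    if score > 2.0:
        return "high risk: longer sentence suggested"
    if score > 1.0:
        return "medium risk"
    return "low risk: early release plausible"

case = {"prior_convictions": 4, "age_at_first_offense": 20, "failed_appearances": 1}
print(risk_score(case), suggestion(case))
```

Even this toy version shows where the ethical questions enter: every weight and every cutoff is a choice someone made, and those choices are invisible to anyone who only sees the final suggestion.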

Now, why should we worry about the rising use of algorithms to govern human behavior? After all, do we not consider fairness and unbiased decision-making part of justice? As alluded to earlier, accepting the rise of algorithmic governance involves some dangerous assumptions. First, there are privacy concerns related to the sort of data these systems collect. How pressing these concerns are depends on the type of governance involved, but one distinction deserves some fleshing out. Data about our shopping or traveling habits, for example, might be said to be our property. As such, we have a right to restrict access to that property, or even to receive the benefits that may come from the sale of our data. Many of those who sign terms-and-conditions agreements on Facebook and similar social media platforms are unaware that they are also signing away their rights to their data.

Further, as Ramon Alvarado noted in another article for Lawrence Talks, there are concerns over opacity. Despite the problems of human error in human decision-making, we at least have access to the reasons and information that went into a human’s pronouncements. With algorithmic systems, we do not always have access to the sources used or to how they were weighted. Indeed, at times even the developers of these systems lack insight into which data sets the algorithms drew on, or at what point, in forming their conclusions or recommendations. There is thus very little transparency available with which to evaluate the decisions these systems make. This is concerning for philosophers, specifically ethicists, who believe that the merits of a decision ought to be evaluated by looking at the available reasons. While we might be able to make a consequentialist determination of whether a system acted rightly or wrongly, this forced acceptance of consequentialism as the only available moral framework will not sit well with many, absent arguments and deliberation.

Finally, if we do not know how these systems weigh the significance of the various data points they utilize, it is difficult to determine whether the decisions (outputs) or the reasons (inputs) meet our demands for fairness. Additionally, for systems whose developers fix the weights of specific data points in advance, we might question whether those weights were set accurately or fairly. Sentencing algorithms may unjustly place great significance on where a defendant grew up, without taking into account the social and historical causes of the conditions in which they lived.
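The worry about fixed weights can be made vivid with a small sketch. The feature and the weight below are hypothetical; the point is only that a single developer-chosen number can flip the outcome:

```python
# Illustration of the fixed-weight worry: giving any weight to where
# a defendant grew up lets that one feature change the score for two
# otherwise identical defendants. All numbers are invented.
def score(prior_convictions: int, grew_up_in_flagged_zip: bool) -> float:
    base = 0.5 * prior_convictions
    neighborhood_weight = 1.0  # a choice made by a developer, not by law
    return base + (neighborhood_weight if grew_up_in_flagged_zip else 0.0)

# Two defendants with identical records receive different scores
# solely because of where they grew up.
print(score(2, False))  # 1.0
print(score(2, True))   # 2.0
```

Whether that weight should be 1.0, 0.1, or zero is exactly the fairness question the paragraph above raises, and it is settled inside the code rather than in public deliberation.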

Questions to consider:

What forms of algorithmic governance are you aware of?

Can you think of other problems that arise as more and more decision-making is outsourced to algorithmic systems?

Should we be concerned with algorithms governing human behavior?


For further reading:

Danaher, J., Hogan, Noone, Kennedy, Behan, De Paor, … Shankar. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4(2).

Symons, J., & Alvarado, R. (2016). Can we trust Big Data? Applying philosophy of science to software. Big Data & Society. https://doi.org/10.1177/2053951716664747

Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.