On Artificial Intelligence and the Ethics of AI

David Menager | Computer Science | University of Kansas


The nature of intelligence is complex and not fully understood. In the broadest sense, I believe the word intelligence describes an agent’s capacity to process and act on information in order to achieve goals. By intelligent agent, I mean an entity with the ability to decide for itself what to do. The freer a thing is to do this, the more it is an intelligent agent. In the following paragraphs I will briefly expound on the nature of intelligence, then connect the quest to understand intelligence to artificial intelligence.

In my view, part of understanding intelligence is understanding the ways in which information is processed inside an agent’s mind. This cognitive activity is complex, so defining the units of thought and their functions is essential to understanding intelligence. Such a description of the mind is called a cognitive architecture. It specifies the memory structures for storing information as well as the cognitive processes that operate over the memory contents. Therefore, to understand intelligence is to understand the structure and function of the mind. Intelligence is not defined by the knowledge an agent possesses, but by how an agent uses its knowledge to accomplish goals. Minds with different architectures afford different levels of intelligence. Scientists study many kinds of minds, but the human mind is generally regarded as the primary example of intelligence because of its unique ability to engage in abstract thought.
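To make the idea of a cognitive architecture concrete, here is a toy sketch in Python. It is purely illustrative, with invented names, and does not depict any particular published architecture; it shows only the two ingredients named above: memory structures, and processes that operate over their contents.

    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        """Hypothetical memory structures an architecture might specify."""
        beliefs: dict = field(default_factory=dict)   # long-term knowledge
        percepts: list = field(default_factory=list)  # short-term sensory buffer
        goals: list = field(default_factory=list)     # active goals

    class Agent:
        """A toy cognitive cycle: processes operating over memory contents."""
        def __init__(self):
            self.memory = Memory()

        def perceive(self, observation):
            self.memory.percepts.append(observation)

        def deliberate(self):
            # Nominate a goal from memory (stubbed for illustration).
            return self.memory.goals[0] if self.memory.goals else None

        def act(self, goal):
            # Select an action that advances the chosen goal (stubbed).
            return f"pursue: {goal}" if goal else "idle"

    agent = Agent()
    agent.memory.goals.append("find food")
    agent.perceive("smell of bread")
    print(agent.act(agent.deliberate()))  # -> pursue: find food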

Next, I believe that the primary role of cognition is to allow an agent to achieve goals. In other words, cognition is grounded in goal-directed action. For humans, such goals can pertain to basic needs like survival, as well as more abstract concepts such as being successful and finding love. Because of the human ability to engage in both basic and abstract thought, the space of possible goals to pursue is large. Additionally, making progress towards any goal at all requires that an agent possess the cognitive mechanisms and control strategies for achieving that goal. So, I believe that mental processes like perception, conceptualization, goal nomination, problem solving, and heuristic search are part of the nature of intelligence.

Now that I have provided my framework for understanding intelligence, I would like to consider what kinds of entities can possess it. Humans have the capacity to be intelligent, and it seems that other animals are capable as well, though to a lesser degree. I believe artifacts are also capable of being intelligent, so it is natural to ask what artificial intelligence means. Strictly speaking, it is the intelligence of artifacts, yet artificial intelligence is often treated as a concept distinct from human intelligence. I am not sure that distinction holds; I tend to think the fundamental difference between human intelligence and artificial intelligence is ultimately one of biology. Although no one has created an artificial intelligence system with human-level capabilities, I think it is theoretically possible to do so.

Although no one has created an artificial intelligence system with human-level capabilities, I think it is theoretically possible to do so.
— David Menager

The question of human and artificial intelligence is profound because it goes to the heart of how humans think about themselves and their role in the universe. The advent of computers helped people take this question seriously. In the mid-1950s, scientists began to use computers to model intelligent behavior. This came from the realization that computers are not merely number crunchers but symbol manipulators. Computers could be given goals, methods for achieving them, and general strategies for solving problems. Given a problem description and an initial state, a computer could generate a plan to solve the problem. Such thinking machines were the first artifacts from the field of artificial intelligence. The field began with a common vision: to reproduce the full range of human intelligent behavior on a computer. Hence, many researchers focused on defining cognitive architectures meant to support human-level behavior. Up until the 1990s, much progress was made toward this goal.
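As an illustration of what “given goals, operators, and an initial state, generate a plan” can mean, here is a minimal sketch of state-space planning via breadth-first search. The domain and operator names are invented for this example; the early planning systems were far richer, but the principle of searching over symbol structures is the same.

    from collections import deque

    def plan(initial, goal, operators):
        """Breadth-first search for a sequence of operator names that
        transforms the initial state into the goal state."""
        frontier = deque([(initial, [])])
        seen = {initial}
        while frontier:
            state, steps = frontier.popleft()
            if state == goal:
                return steps
            for name, apply_op in operators:
                nxt = apply_op(state)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
        return None  # no plan exists

    # A toy domain: move a package from A to C via B.
    ops = [
        ("move A->B", lambda s: "B" if s == "A" else None),
        ("move B->C", lambda s: "C" if s == "B" else None),
    ]
    print(plan("A", "C", ops))  # -> ['move A->B', 'move B->C']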

As time passed, however, the field began to fragment, and people abandoned its original aims in favor of specializing in subfields of artificial intelligence, such as machine learning. As a result, each subfield progressed within its narrowly defined scope; however, progress toward the broader goal of making machines with the same mental abilities as humans slowed. Today, research in artificial intelligence is dominated by work in machine learning. Machine learning systems can learn and recognize subtle statistical patterns present in data and use these patterns to make predictions. Most researchers no longer attempt to build general-purpose machines, choosing instead to build specialized systems for specific domains. These systems have come into favor because they can often learn from more data and demonstrate pattern-matching capabilities that outperform human ability. Despite such impressive capabilities, these systems say little about the nature of human-level intelligence. Pattern recognition is not the whole of intelligence, and it certainly is not distinctive of human intelligence, since dogs, cats, and other animals do well at this task. Furthermore, state-of-the-art machine learning systems tend to learn opaque patterns that are analytically incomprehensible to humans.

In the early days, the dominant ideology in artificial intelligence was not machine learning. Learning was seen as a necessary part of a larger cognitive system whose main purpose is to achieve goals. I bring this up because the field is called artificial intelligence precisely because it shares common goals with the study of intelligence, which is itself broader than learning. To be clear, not everyone in artificial intelligence abandoned the field’s original aim, but I think that those with a singular focus on machine learning run the risk of forgetting the many successes the field had in building general-purpose intelligent systems.

So, I advocate a return to the cognitive systems paradigm. The field should take up, once again, the challenging topic of high-level cognition. It should attempt to build systems that engage in multi-step reasoning rather than systems that simply recognize and then act. The field should recommit itself to understanding how mental representations of information afford cognition. It should also be concerned with how faculties of the mind process representations and work together to produce intelligent behavior. And it should return to focusing on how intelligent systems manage complexity by using heuristics. In all these areas, learning can play a role, but learning should not be the central focus.

I think my point about favoring high-level intelligence over learning becomes clearer when we consider intelligent systems performing complex tasks. For example, an autonomous vehicle should receive sensory information from multiple modalities and integrate it into a common representation. Whenever necessary, the system should deliberate about which goals to pursue given its circumstances. Normally, the system will want to obey all traffic laws, but if the passenger is late for a flight, the system might deliberate about whether arriving at the airport on time is worth the risk of driving over the speed limit. These behaviors go beyond simple pattern recognition. They require computational agents that can coordinate their thoughts and actions toward achieving goals.
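A sketch of the kind of deliberation meant here, with invented numbers: the agent scores each candidate course of action by expected value minus expected risk cost, rather than merely reacting to patterns.

    def deliberate(options):
        """Choose the option with the highest expected utility."""
        return max(options, key=lambda o: o["p_on_time"] * o["value"] - o["risk_cost"])

    # All numbers are invented purely for illustration.
    options = [
        {"name": "obey speed limit",   "p_on_time": 0.6, "value": 100, "risk_cost": 0},
        {"name": "exceed speed limit", "p_on_time": 0.9, "value": 100, "risk_cost": 20},
    ]
    # value: worth of catching the flight; risk_cost: expected cost of a
    # ticket or an accident. With these numbers, speeding wins (70 vs. 60).
    print(deliberate(options)["name"])  # -> exceed speed limit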

In closing, I argued that intelligence is a concept describing an agent’s capacity for processing and acting on information. This consideration led me to posit that artifacts can have human-level intelligence. From there, I discussed the field of research attempting to create thinking machines capable of reproducing the full range of human behavior.

Ramon Alvarado | Philosophy | University of Kansas

On AI

Recently, artificial intelligence seems to be everywhere. The methods and technology associated with the term are used in everything from science to finance and, of course, entertainment. While recent discussions of artificial intelligence and its applications involve both hype and skepticism, important details about its fundamental nature as a set of machine-implemented statistical methods remain largely unexplored. Yet it is precisely these details that are at the core of the challenges of this technology and our relationship to it. Artificial intelligence is first and foremost a data science, and it must be understood as such in order to be clear about its true potential for positive and negative effects.

Before artificial intelligence was called artificial intelligence, the set of techniques and tasks covered by the term was referred to as complex information processing systems. The driving force behind these technologies is their ability to analyze—i.e., label, categorize, sort—items and their relationships within large amounts of data. These systems can assess, with a degree of probability, whether an item fits a category or not, whether a data set is of a given type or not, and/or whether future data sets will have certain properties or not. An important aspect of the methods particular to artificial intelligence is that they also use the data of the analysis itself to improve the processes and tasks they are designed to carry out.
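Both ideas—probabilistic categorization, and feeding the analysis back in to improve the system—can be sketched in a few lines. This is a generic online logistic model with toy data, not any specific deployed system.

    import math

    def predict(weights, features):
        """Estimated probability that an item fits the category (logistic model)."""
        score = sum(w * f for w, f in zip(weights, features))
        return 1 / (1 + math.exp(-score))

    def update(weights, features, outcome, lr=0.5):
        """Use the outcome of the analysis itself to improve the model:
        an online gradient step toward the observed result."""
        p = predict(weights, features)
        return [w + lr * (outcome - p) * f for w, f in zip(weights, features)]

    data = [([1, 0], 1), ([0, 1], 0)] * 20   # toy labeled items
    weights = [0.0, 0.0]
    print(round(predict(weights, [1, 0]), 2))  # before learning: 0.5
    for features, outcome in data:
        weights = update(weights, features, outcome)
    print(round(predict(weights, [1, 0]), 2))  # after learning: about 0.9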

In short, artificially intelligent technologies are software systems designed and deployed to use statistics, look through data, categorize it and sometimes predict some of the properties within it. These methods are deployed in various other contexts to explore, suggest, or even decide courses of action, optimal outcomes, etc. They can, for example, explore hypotheses in science, suggest music or videos to watch, recognize faces, or even decide whether to shut down an electrical grid or not. It is usually these applications that become the center of discussions related to artificial intelligence because it is easy to see their consequences. However, understanding the underlying statistical processes of artificial intelligence can help us better clarify what is of particular philosophical and ethical interest in the development and use of this technology.

With that in mind, here are three immediate philosophical and ethical challenges concerning artificial intelligence.

Data dredging

An immediate challenge from artificial intelligence comes from the basic recipe of its statistical components. It is a well-known fact that the more data you have, the more items there are in that data, and the more analyses you conduct on that data, the more likely you are to find statistically significant relationships within it. This is particularly the case when data sets are explored without explicit hypotheses in mind—that is, when we let the data “speak for itself.” This is called data dredging, or the ‘multiple comparisons problem.’ When we take into consideration the large data sets required to train artificially intelligent systems, the ability of the technology to conduct a vast number of exploratory analyses, and the use of unsupervised machine-learning algorithms—that is, exploratory algorithms that look for patterns without explicit parameters for guidance—it becomes increasingly clear that, as some have put it, there has never been a better time to do science wrong.
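A small, self-contained simulation makes the arithmetic vivid: correlate a thousand pure-noise “features” against a pure-noise target, and roughly five percent clear a conventional significance threshold by chance alone. The cutoff used below is the standard two-tailed threshold for p < 0.05 at this sample size; everything else is invented for illustration.

    import random, statistics

    def pearson(xs, ys):
        """Sample Pearson correlation coefficient."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(0)
    n, tests = 100, 1000
    target = [random.gauss(0, 1) for _ in range(n)]
    # |r| > ~0.197 corresponds to p < 0.05 (two-tailed) for n = 100.
    hits = sum(
        abs(pearson([random.gauss(0, 1) for _ in range(n)], target)) > 0.197
        for _ in range(tests)
    )
    print(hits)  # roughly 50: 5% of pure-noise features look "significant"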

Although this issue is problematic enough in scientific inquiry and other practices (marketing studies, etc.), it is particularly alarming when the applications of artificial intelligence are of social consequence. Algorithms used to assess creditworthiness, set bail, and/or calculate jail sentences are particularly troubling when these spurious correlations are not carefully filtered out, as many seemingly unrelated items might prove to be statistically correlated. This can result in unjust sentencing, opaque discrimination, and other injustices.

Opacity

The complex—and multilayered—methods of analysis behind many artificial intelligence applications make it very difficult for anyone to know explicitly how they arrive at their results. In other words, the processes in artificial intelligence are opaque.

While in the problem discussed above the issue was artificial intelligence yielding bad results, here the problems persist even when the results are correct. If the methods—in this case, artificial intelligence—are inaccessible, there is no way to assess whether they are reliable. There is also no way to see which features of the phenomenon under investigation the methods treat as salient to start with. This matters when we care about explanation and understanding, not just about results.

When artificial intelligence is used in socially consequential contexts such as policy-making, autonomous technology, etc., the stakes go from being an epistemic problem—a problem about access to knowledge—to being an ethical issue.
— Ramon Alvarado

When artificial intelligence is used in socially consequential contexts such as policy-making, autonomous technology, etc., the stakes go from being an epistemic problem—a problem about access to knowledge—to being an ethical issue. It is hard, for example, to ascribe moral responsibility to anyone for the consequences of a technology if no one understands it.

An even deeper ethical issue arises when we consider that human beings, as an inquisitive species, will gradually be left behind as opaque artificial intelligence methods become the norm in inquiry.


Autonomy and agency

It is important to note that most of the data being generated today is data related to human behavior. From consumer habits to traffic patterns to facial gestures, data about us is being created, gathered, analyzed, and used in unprecedented ways. Some of the ways in which it will be used may even be “good.” Apps will help you live a healthier lifestyle. Data results may nudge you into making less risky choices. Yet it is precisely when we can visualize artificial intelligence actually doing all the things we think it can do that we ought to question whether we want it to do so, even when it is doing “good” things. And it is here that the ethical dilemmas really arise: when we question what we really ought to do with it.

Dr. Sarah Robins | Philosophy | University of Kansas

Dr. Robins could not join us in person for our discussion, but she was able to provide her responses to some of the questions we hope to raise and address.