Executive Summary: A brief overview of the entire thesis, summarizing research, findings and results in a short, comprehensible way.

History of AI

“History of AI” will provide a brief overview of historical events concerning artificial intelligence (AI) from the 1950s until today. Technological breakthroughs in the field of AI will be explained in their historical context, as well as their contribution to the overall progress in AI. The developments will be placed into seven successive phases in order to make the pace and progress of developments easier to assess:

1. “The Beginnings” (1950-1955): the idea of AI was established

2. “The Golden Years” (1956-1973): the term “artificial intelligence” was coined, leading to widespread hype

3. “The First AI Winter” (1974-1979): the disappointment that followed once it became clear that AI would be more difficult than initially thought

4. “Boom” (1980-1986): a second phase of hype, sparked by numerous inventions during this time

5. “The Second AI Winter” (1987-1992): the flow of technological inventions could not be sustained, leading to another phase of disappointment

6. “Recovery” (1993-2010): a slow recovery from the second AI winter, achieved through numerous inventions as well as progress in computing power

7. “Modern Day AI” (2011-today): the technological progress that led us to the current state of AI

Current State of AI

This chapter looks at methods and tools that are currently used to develop AI. Techniques such as supervised learning, reinforcement learning, and the network architectures used by these systems will be briefly examined. Additionally, tools and services like Google’s “TensorFlow” or cloud-based platforms such as “IBM Watson” and “Microsoft Azure” will be analyzed. The chapter concludes with an overview of current challenges AI is facing. These challenges include the differences between biological and artificial intelligence, the current inability of AI to achieve “general” intelligence, as well as the difficulties caused by opaque, black-box AI.
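As a minimal, hedged illustration of the supervised learning technique and the TensorFlow toolkit mentioned above, the following sketch trains a small classifier on synthetic data; the architecture and all hyperparameters are illustrative choices, not recommendations from this thesis.

```python
# Minimal supervised learning sketch using TensorFlow's Keras API.
# The synthetic dataset and all hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf

# Synthetic binary classification data: 4 input features, labels 0/1.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 4)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")  # a simple learnable rule

# A small feed-forward network: one hidden layer, sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Supervised learning: fit the model to labeled examples by
# minimizing the loss between its predictions and the true labels.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```

The same fit/evaluate pattern underlies most supervised systems discussed in this chapter; only the data, architecture and loss change with the task.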

The Future of AI

In order to roughly estimate the level of intelligence an AI may achieve, three potential stages for AI will be presented: “Narrow Artificial Intelligence” (NAI), which is how current systems can be thought of; “Artificial General Intelligence” (AGI), an AI capable of performing every task at least as well as humans; and “Artificial Superintelligence” (ASI), a system capable of performing tasks in ways humans can no longer understand. As the past has shown, the pace of AI development can speed up or slow down. Therefore, possible future speed bumps as well as accelerators will be presented. Depending on whether these speed bumps and accelerators become reality, the intelligence level of AI will advance faster or slower. The concept of an “intelligence takeoff” describes this development, showing that once a state of AGI is reached, it could lead to recursive self-improvement of the system and therefore to an exponentially fast increase of intelligence. Possible outcomes of such developments will be outlined. In addition to AI, there are multiple other conceivable paths by which higher levels of intelligence could be achieved. The biggest question regarding future, possibly highly intelligent systems is how it can be ensured that they act in a way that is desirable for humans. First of all, such values must be defined in a meaningful way, which presents a large challenge in itself. Then, these values must be communicated in a way that an AI can understand and use as a basis for its actions.
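To make the takeoff dynamic tangible, the following toy simulation (our own illustration, not a model from the thesis) contrasts linear, human-driven progress with post-AGI recursive self-improvement, in which each step’s gain is proportional to current capability; all rates and thresholds are invented.

```python
# Toy "intelligence takeoff" model: before an assumed AGI threshold,
# capability grows linearly through human-driven research; afterwards,
# each step's gain is proportional to current capability (recursive
# self-improvement), producing exponential growth. Numbers are invented.

AGI_THRESHOLD = 100.0    # hypothetical capability level of AGI
LINEAR_RATE = 1.0        # pre-takeoff progress per time step
SELF_IMPROVEMENT = 0.5   # post-takeoff proportional gain per step

capability = 1.0
for step in range(1, 121):
    if capability < AGI_THRESHOLD:
        capability += LINEAR_RATE                # human-driven progress
    else:
        capability *= (1.0 + SELF_IMPROVEMENT)   # self-improvement
    if step % 20 == 0:
        print(f"step {step:3d}: capability = {capability:,.1f}")
```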

Our stance: Ethical questions must be addressed long before superintelligent artificial agents become a reality. To conclude, we will present the aspects on which the remainder of this work will focus.

Ethics and Morals

The goal of implementing values in AI is to ensure that machines make moral and ethical decisions. Therefore, different perceptions of what morals and ethics are will be described. The focus will lie on morals in machines and how machines can be classified as moral actors. As machines develop more and more autonomy, their actions do have moral implications, but machines cannot yet take moral responsibility, as they lack consciousness, free will and the capability of self-reflection. The understanding of ethics and morals established in this chapter will be essential for the later debate about beneficial AI in different use cases.

Conceptualizing Beneficial AI

Pillars of Beneficiality

In order to successfully transition from the debate about superintelligent agents to more concrete ethical questions, we suggest a model for beneficial AI. This model consists of a foundation and three pillars, all of which must be satisfied in order to achieve “beneficial AI”. The foundation describes essential prerequisites that have to be addressed beforehand, whereas the pillars describe aspects of a system that must be approached in order for the system to be beneficial. The three pillars concern defining, implementing and ensuring beneficiality in artificial agents.

Defining Problem Spaces

Concrete ethical issues in use cases often have underlying ethical problems. To frame these underlying problems, we suggest the concept of a “problem space”. We show that problem spaces can be described by analyzing use cases in detail and abstracting from there to a higher-level issue. These higher-level issues are useful because they allow transferring approaches between different situations in which the underlying ethical issues are similar. We present a model in which the problem space is first framed by analyzing the effects on the stakeholders involved in the use case over time. Then an idealized goal is developed, which may never be fully reached but serves as a guideline for what should be achieved under ideal circumstances. Next, concrete actions are formulated to approach the idealized goal. These concrete actions can be seen as countermeasures: since problem spaces typically cannot be fully solved, the actions developed during this process reduce the impact of a problem space rather than completely eliminate it. Because problem spaces cannot be entirely resolved, they will affect a system using AI over a longer period of time, which means the problem space must be regularly reevaluated. Lastly, we present several methods that can be used to frame and approach a problem space. These methods mostly originate from design practice; we elaborate how they can be used in the context of working with AI.
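As an illustration only, the model above could be captured in a data structure like the following sketch; the field names (stakeholder effects, idealized goal, countermeasures, review interval) are our own hypothetical mapping of the model’s elements, not an implementation prescribed by this thesis.

```python
# Hypothetical data structure mirroring the problem-space model:
# stakeholder effects frame the problem, an idealized goal guides it,
# countermeasures reduce (but never eliminate) its impact, and a
# review interval enforces regular reevaluation.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ProblemSpace:
    name: str
    stakeholder_effects: dict[str, str]       # stakeholder -> effect
    idealized_goal: str                       # never fully reachable
    countermeasures: list[str] = field(default_factory=list)
    review_interval: timedelta = timedelta(days=180)
    last_reviewed: date = field(default_factory=date.today)

    def needs_review(self, today: date) -> bool:
        # Problem spaces persist, so they must be revisited regularly.
        return today - self.last_reviewed >= self.review_interval

bias = ProblemSpace(
    name="bias",
    stakeholder_effects={"residents": "skewed development decisions"},
    idealized_goal="decisions free of unjustified bias",
    countermeasures=["surface potential bias in the user interface"],
)
print(bias.needs_review(date.today() + timedelta(days=365)))  # True
```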

Use Cases

Current and future applications of AI will be found in various areas, ranging from healthcare and crime fighting to the financial sector. An overview of possible future impact areas of AI will be presented. Three of these areas, healthcare, job finding and urban planning, will be analyzed in more detail in use cases. The methodology of defining problem spaces and establishing countermeasures will be demonstrated in these use cases. Each use case is set in a different timeframe, with each consecutive use case dealing with a higher level of AI. Every use case involves a user of some sort who is dealing with an artificial agent that possesses a certain level of AI.

Transparency

As every use case relies on the problem space of transparency being approached to some degree, it is essential that questions such as how the current black box of AI can be broken up are addressed beforehand. Possible approaches to achieving AI that is more transparent, and therefore more explainable, can be found in incentives for developers as well as in regulatory measures. Possible concepts include certificates that guarantee a certain degree of transparency, or voluntary and mandatory regulations that require a certain level of transparency before an AI product may be released.
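Beyond incentives and regulation, one established technical route to more explainable AI, mentioned here as a hedged illustration rather than a method from this thesis, is post-hoc explanation such as permutation feature importance; the sketch below uses scikit-learn to rank which inputs actually drive an otherwise opaque model.

```python
# Post-hoc explainability sketch: permutation feature importance ranks
# inputs by how much shuffling each one degrades model accuracy.
# The dataset and model stand in for any otherwise opaque AI system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] - 2 * X[:, 3] > 0).astype(int)  # only features 0 and 3 matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f}")
```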

Timeframe: near-future
Role of the Agent: assistant
Problem space: bias

Bias in Urban Planning

Set in a near-future scenario, this use case analyzes how biased decisions in urban planning can be reduced by using artificial agents. For this purpose, a future software solution is sketched out that uses AI to analyze locations for urban development projects. The artificial agent analyzes data sets to make predictions about the usage of potential projects, as well as to create suggestions for optimizing the project. As these predictions and suggestions result from fairly large amounts of data, bias can arise, for example if the agent mistakes correlation for causation. Recent progress in machine learning shows how such bias can be uncovered. The use case therefore suggests how such an uncovering of potential bias can be included in the user interface of the software. The user would then be made aware of potential bias and could act upon it, with the goal of reducing the impact of the problem space in data-driven decision making.
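As a hedged sketch of what such bias uncovering could look like technically, the check below computes a simple group disparity metric, the demographic parity gap, over an agent’s predictions; the districts, rates and warning threshold are invented for this example.

```python
# Illustrative bias check: compare an agent's positive-prediction rates
# across groups (hypothetical city districts). A large gap is a signal
# worth surfacing in the user interface, not proof of causation.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray,
                           groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

rng = np.random.default_rng(seed=1)
districts = rng.choice(["north", "south"], size=200)
# Invented skew: the agent favors projects in the northern district.
preds = np.where(districts == "north",
                 rng.random(200) < 0.7,
                 rng.random(200) < 0.4).astype(int)

gap = demographic_parity_gap(preds, districts)
if gap > 0.2:  # illustrative warning threshold
    print(f"Potential bias detected: parity gap = {gap:.2f}")
```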

Timeframe: mid-future
Role of the Agent: colleague
Problem space: accountability

Accountability in Medical Diagnosis

The second use case is set in a mid-future scenario where artificial agents have reached a higher level of intelligence, possibly comparable to the cognitive abilities of humans. The use case is concerned with the issue of accountability for moral decisions that artificial agents have made. The situation takes place in the context of medical diagnosis, where a doctor is supported by an artificial agent. The agent can create a complete diagnosis of a patient and suggest treatment. It will be analyzed how the involved parties, most notably the agent’s manufacturer and the doctor, can or cannot be held accountable for the diagnosis and the suggested treatment. The investigation of this use case shows that the manufacturer has a moral obligation to enable the doctor to be held accountable, by making the agent’s process transparent and by demanding the doctor’s active approval when critical steps in the treatment process arise.
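The following minimal sketch illustrates what such an active-approval mechanism could look like: critical treatment steps are gated behind the doctor’s explicit confirmation, and every decision is written to an audit log so accountability can later be traced. The function names and log format are our own assumptions.

```python
# Human-in-the-loop sketch: the agent proposes treatment steps, but
# critical steps require the doctor's explicit approval, and every
# decision is logged so accountability can be traced afterwards.
# All names and the log format are illustrative assumptions.
from datetime import datetime, timezone

audit_log: list[dict] = []

def propose_step(step: str, critical: bool, approve) -> bool:
    """Carry out a step; critical ones need the doctor's approval.

    `approve` is a callable standing in for the doctor's decision,
    e.g. a confirmation dialog in a real system.
    """
    approved = approve(step) if critical else True
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "critical": critical,
        "approved_by_doctor": approved,
    })
    return approved

# Hypothetical usage: the doctor reviews and confirms a critical step.
doctor = lambda step: input(f"Approve '{step}'? [y/n] ").strip() == "y"
if propose_step("administer high-dose medication", critical=True,
                approve=doctor):
    print("Step carried out with documented approval.")
```

The design choice worth noting is that the log records every proposal, approved or not, so that responsibility can be reconstructed after the fact.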

Timeframe: distant-future
Role of the Agent: supervisor
Problem space: self-determination

Self-determination in Job Finding

The last use case is set in a distant future, where the process of finding a job has been largely taken over by artificial agents rather than performed by humans themselves. The artificial agent analyzes large amounts of personal information, as well as previous employment, education and other data. The agent uses this data to predict which jobs are best suited for the individual. The problem space analyzed in this use case is self-determination, as there is a certain danger in letting data-driven systems make such major decisions for an individual. The analysis of this problem space shows the relevance of self-determination in such situations, as well as how self-determination can be endangered by artificial agents in the future. The use case then exemplifies how transparency and the ability to actively control the conclusions of the artificial agent can help an individual remain self-determined.
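As a final hedged illustration, the sketch below shows how transparency and active control might be combined: job scores are a weighted sum of criteria, the per-criterion breakdown is shown to the individual (transparency), and the individual may override the agent’s weights (control). Jobs, criteria and weights are invented for this example.

```python
# Sketch of a transparent, user-controllable job recommendation:
# scores are weighted sums of criteria, the per-criterion breakdown is
# shown (transparency), and the individual may replace the agent's
# weights with their own (active control). All data is invented.

CRITERIA = ["salary_fit", "skill_match", "commute"]
JOBS = {
    "data analyst": {"salary_fit": 0.9, "skill_match": 0.7, "commute": 0.3},
    "teacher":      {"salary_fit": 0.5, "skill_match": 0.8, "commute": 0.9},
}

def rank(weights: dict[str, float]) -> None:
    for job, scores in JOBS.items():
        parts = {c: round(weights[c] * scores[c], 2) for c in CRITERIA}
        print(f"{job}: total={sum(parts.values()):.2f} breakdown={parts}")

agent_weights = {"salary_fit": 0.5, "skill_match": 0.4, "commute": 0.1}
rank(agent_weights)  # the agent's own priorities, made visible

# The individual disagrees, e.g. valuing a short commute more, and
# overrides the weights -- the final conclusion stays under their control.
user_weights = {"salary_fit": 0.2, "skill_match": 0.4, "commute": 0.4}
rank(user_weights)
```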