Technology and ethics have to be established as a singular vision to enable beneficial outcomes of artificial intelligence.

Having spent some time researching the field of ethics and AI, we came across many opinions, thoughts and concepts that shaped and refined our viewpoints on the development of beneficial artificial intelligence. This short essay reflects on the main takeaways regarding future challenges and shows how they influenced the creation of this thesis. To that end, we will explore five main aspects in more detail.

Our stance on the future of AI and its challenges

Opinions on the future of artificial intelligence research are diverse. As has been shown, no clear consensus exists on terminology, timelines or milestones: when such systems of higher intelligence will be attained and how they should be classified. All opinions and disagreements aside, it is apparent that the impact artificial agents have on our lives grows by the minute and will only increase further in the future. The exact prediction or precise definition of abstract milestones like “artificial general intelligence” or “superintelligence” is therefore not necessarily relevant when considering the creation of beneficial agents and their short-term impact. Long before any kind of superintelligence is achieved, humans will feel the tremendous influence of such systems in their work and social lives. This creates new, unprecedented challenges that need to find their way into broader debates, so that the focus is not lost solely in discussions of potential future scenarios. It will be of no use to us if we are one day able to clearly define general intelligence but have done nothing for its beneficial use. Moreover, the behaviour of such systems is difficult to predict and cannot be deduced from human analogies. Active work to ensure the beneficial behaviour of artificial entities is therefore required in order to (i) minimize potential risks in the short term (e.g. bias in decision making) and the long term (e.g. misinterpretation of final goals) and (ii) maximize opportunities, for example better prediction of catastrophes, curing devastating diseases and improving the overall quality of life on a broader level.

Taking into account the challenges and societal shifts artificial agents might bring in the future, it can be debated whether the creation of such intelligent agents is desirable at all. We believe that the development of AI must serve an overall purpose: it must preserve and enhance human life, as well as preserve and enhance the planet and its ecosystem. There are great opportunities for AI to do so, but there are also numerous risks where AI could lead to an extremely negative outcome. It is of utmost importance to focus our efforts on the positive opportunities AI opens up for us and to specifically target the risks that might occur so that they can be avoided. We do not believe it is a viable option to consider reversing or winding down the development of AI, as the pace of development is already high. This inevitable development strengthens our focus on, and the overall relevance of, working towards the beneficial use of AI.

Opportunities should be embraced and utilized for a maximally beneficial outcome.

The impact of superintelligence research on this thesis

Even though the resulting work does not focus exclusively on the development of superintelligent agents but rather looks at various problem spaces that occur across all levels of capability, the time spent researching the field fundamentally shaped our general view of the issue. Without the work of Tegmark, Bostrom and others we would probably never have been tempted to take a deeper dive into the topic. The scenarios of potentials and threats that arise with the development of superintelligent agents sparked our interest and made us ask what contribution we can make to the conversation. Like many, we had been confronted either with the dystopian (yet often highly entertaining) AI scenarios various authors and directors have dreamed up, or with more imminent problems like the potential implications of AI for the job market. The field of superintelligence in combination with machine ethics offered a more argumentative view of potential future scenarios that goes beyond the entertaining works of science fiction. Exploring the different opinions of experts and seeing the value they assign to the issue helped us evaluate the debate in a more informed way. That said, we are aware that we have only scratched the surface.

The broad overview also proved helpful in identifying potential areas for approaching ethical machines, leading to the realisation that concepts for beneficial behaviour are required not only for entities of higher intelligence but much earlier. One thing that stood out while studying the suggested concepts was their high-level nature: very few actually go into the details of how ethical concepts can be applied to intelligent machines to make them act as intended. Current documents often focus on defining philosophical terminology and on showing that AI needs to be ethical, without going into detail on concrete strategies.

Why AI ethics matter beyond superintelligence scenarios

Superintelligence is an abstract objective that can serve as a benchmark for the development of intelligent systems. Ensuring the beneficial behaviour of such a high-level entity, one that surpasses our own cognitive performance by multiple orders of magnitude, requires robust concepts for defining the content and safe execution of objectives. As many experts point out, this makes deliberate work, planning ahead and multi-level preparation essential. The constant increase in intelligence makes work on ethical systems a relevant issue long before ASI has been attained. A system does not need to be superintelligent to cause serious issues. As the operational autonomy of such agents increases, entirely new questions will arise, for example in the area of accountability for actions.

As soon as AI systems influence our daily lives (and in many ways they already do), it becomes essential to design them to act beneficially now and going forward. Establishing ethics as an integral part of design, economics, engineering and development will take time; as Oliver Reichenstein, founder of iA, points out, it is not done by installing an overseeing “ethics board” that has no real power to influence decisions (Reichenstein, 2019). The beneficial behaviour of early seed intelligence seems inevitably linked to the values promoted inside the developing institution. Such paradigm shifts need to happen not through poorly conceived top-down enforcement; rather, they have to grow within an institution's internal structure and be reflected in the behaviour of its employees.

Since this probably won't happen for altruistic reasons alone, political and legal institutions have to become more proactive in their role as mediating regulators protecting their citizens' interests. Again, this is a tremendous task and will take a lot of time. There are signs that the relevance of the topic is slowly finding its way into the respective bodies of governance, as the European Commission's “Ethics Guidelines for Trustworthy AI”, drafted by the High-Level Expert Group on Artificial Intelligence, exemplify (High-Level Expert Group on Artificial Intelligence, 2019). Though such developments are welcome, it will take more work to find appropriate answers that ensure the development of artificial systems happens in a way that does not conflict with the individual's fundamental rights.

Missing out on these developments will make them hard to control in retrospect; the effect of inertia in decision making can currently be observed in how politics tackles another pressing issue of our time: climate change. Many social impact areas of AI are in danger of being neglected at first and only becoming visible over time if they are not actively uncovered. Problem spaces like accountability and bias in the predictions algorithms produce have been known issues for years, yet they remain unsolved as there are no easy answers. Approaching these problems requires interdisciplinary and cross-institutional efforts, debate and extensive testing. Short-term and long-term threats are often related and require adaptive answers over time. Only if both timeframes are covered can the beneficial behaviour of agents be approached holistically.

The scope of the field and potential starting points

Working towards the beneficial behaviour of artificial agents is an issue with many different aspects. It ranges from abstract philosophical debates to more concrete questions like the distribution of responsibility for actions. The number of disciplines involved in this process is high, ranging from philosophers and developers to politicians and economic experts, all with varying goals, intentions and ideas of what is beneficial. This broad spectrum creates many different starting points for working towards beneficial systems, as both (i) the relevance of the issue and (ii) the definition of what is beneficial have to be sufficiently addressed. Engaging these diverse actors will go hand in hand with establishing beneficial behaviour on a broader scale, resulting in action on multiple layers, as different stakeholders need to be addressed on different levels. Our thesis tries to convey that the future of AI is open to be shaped and not destined to become either utopian or dystopian. As designers we see ourselves wandering the path between high-level thoughts and hands-on techniques. We want to offer guidelines and concepts for how the abstract high-level discussion found during our research might transfer into more concrete use cases, breaking them down into graspable problem spaces to be worked on.

Scope of the thesis and desired impact

The project’s initial idea was to create methods for the development of beneficial artificial intelligence that ensure the ethical behaviour of such systems as they become more and more intelligent. Since then, the focus has shifted towards a more connecting role that we as designers can take in the process: (i) to create overall awareness of the potentials and dangers artificial agents bring as their autonomy of action and impact on our lives increase over time; (ii) to give the abstract high-level discussion graspable focus points through different use cases and to exemplify concrete problem spaces within those use cases; (iii) to derive a framework from the use cases that helps identify and approach similar problem spaces in the future; and (iv) in the best case, to inspire readers to further explore and work on the issue in the future.

Bibliography

  • Reichenstein, O., 2019. Ethics in Contemporary Technology, Design and Business. URL https://ia.net/topics/ethics-and-ethics (accessed 6.28.19).
  • High-Level Expert Group on Artificial Intelligence, 2019. Ethics Guidelines for Trustworthy AI. European Commission.