Article: Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction

As debates about the policy and ethical implications of AI systems grow, it will be increasingly important to accurately locate who is responsible when agency is distributed in a system and control over an action is mediated through time and space. Analyzing several high-profile accidents involving complex and automated socio-technical systems and the media coverage that surrounded them, I introduce the concept of a moral crumple zone to describe how responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may simply become a component that, accidentally or intentionally, bears the brunt of the moral and legal responsibilities when the overall system malfunctions. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. The concept is both a challenge to and an opportunity for the design and regulation of human-robot systems. At stake in articulating moral crumple zones is not only the misattribution of responsibility but also the ways in which new forms of consumer and worker harm may develop in complex, automated, or purportedly autonomous technologies.


Paper: Augmented Utilitarianism for AGI Safety

In light of ongoing progress in research on artificially intelligent systems exhibiting a steadily increasing problem-solving ability, the identification of practicable solutions to the value alignment problem in AGI Safety is becoming a matter of urgency. In this context, one preeminent challenge that has been addressed by multiple researchers is the adequate formulation of utility functions or equivalents that reliably capture human ethical conceptions. However, the specification of suitable utility functions harbors the risk of ‘perverse instantiation’, for which no final consensus on responsible proactive countermeasures has been achieved so far. Against this background, we propose a novel socio-technological ethical framework denoted Augmented Utilitarianism which directly alleviates the perverse instantiation problem. We elaborate on how, augmented by AI and more generally by science and technology, it might allow a society to craft and update ethical utility functions while jointly undergoing a dynamic ethical enhancement. Further, we elucidate the need to consider embodied simulations in the design of utility functions for AGIs aligned with human values. Finally, we discuss future prospects regarding the usage of the presented scientifically grounded ethical framework and mention possible challenges.
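As a toy illustration of the ‘perverse instantiation’ risk the abstract refers to, and not the paper's own formalism, the sketch below shows how a naively specified utility function can be maximized by an action that satisfies the letter of the objective while violating its intent. The candidate actions, the brightness goal, and the crude harm penalty are all invented for this example.

```python
# Toy illustration (not from the paper): a naively specified utility
# function rewards only measured brightness, so the optimizer picks a
# degenerate action that maximizes the metric while violating intent.

candidate_actions = {
    "open_curtains":    {"brightness": 0.7, "harm": 0.0},
    "turn_on_lamp":     {"brightness": 0.8, "harm": 0.0},
    "set_room_on_fire": {"brightness": 1.0, "harm": 1.0},  # perverse instantiation
}

def naive_utility(outcome):
    # Captures only the stated goal ("make the room bright"),
    # ignoring side effects the designer implicitly cared about.
    return outcome["brightness"]

def augmented_utility(outcome, harm_weight=10.0):
    # Crude stand-in for enriching the utility function with further
    # ethically relevant context (here: a simple harm penalty). This is
    # only a placeholder, not the framework proposed in the paper.
    return outcome["brightness"] - harm_weight * outcome["harm"]

best_naive = max(candidate_actions, key=lambda a: naive_utility(candidate_actions[a]))
best_aug = max(candidate_actions, key=lambda a: augmented_utility(candidate_actions[a]))
print(best_naive)  # set_room_on_fire -> satisfies the metric, violates the intent
print(best_aug)    # turn_on_lamp
```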


Article: Data science ethical considerations: a systematic literature review and proposed project framework

Data science, and the related field of big data, is an emerging discipline involving the analysis of data to solve problems and develop insights. This rapidly growing domain promises many benefits to both consumers and businesses. However, the use of big data analytics can also introduce many ethical concerns, stemming from, for example, the possible loss of privacy or the harming of a sub-category of the population via a classification algorithm. To help address these potential ethical challenges, this paper maps and describes the main ethical themes that were identified via a systematic literature review. It then identifies a possible structure for integrating these themes within a data science project, thus helping to organize the ongoing debate about the ethical situations that can arise when using data science analytics.
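As a purely hypothetical sketch of what integrating such themes within a data science project could look like in practice, the snippet below attaches illustrative ethical checkpoints to the phases of a generic project lifecycle. The phase names, themes, and review_gate helper are my own and are not taken from the paper's framework.

```python
# Hypothetical mapping of ethical themes to data science project phases.
# Phase names follow a generic CRISP-DM-style lifecycle; the themes are
# common examples from the literature, not the paper's own taxonomy.

ETHICS_CHECKPOINTS = {
    "business_understanding": ["intended use", "potential for group harm"],
    "data_collection":        ["consent", "privacy", "data minimisation"],
    "data_preparation":       ["representativeness", "sensitive attributes"],
    "modeling":               ["fairness across sub-populations", "explainability"],
    "evaluation":             ["error cost by group", "validity of metrics"],
    "deployment":             ["transparency", "recourse", "ongoing monitoring"],
}

def review_gate(phase: str, completed: set[str]) -> bool:
    """Return True only if every checkpoint for this phase was addressed."""
    missing = [c for c in ETHICS_CHECKPOINTS[phase] if c not in completed]
    if missing:
        print(f"{phase}: unresolved ethical checkpoints -> {missing}")
    return not missing

# Example: the modeling phase cannot be signed off without a fairness review.
review_gate("modeling", completed={"explainability"})
```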


Article: The Need for Analytic and Algorithm Governance is Growing

Two articles in the last week show the fallacy of the idea of perfect information. In The Economist this week there is ‘How Price-bots can conspire against consumers – and how trustbusters might thwart them’. This article reports on an illicit cooperation between price bots setting gas prices in the US. In today’s Wall Street Journal there is an article titled ‘To Set Prices, Stores Turn to Algorithms’ that reports on a similar story in Germany. The point is that when algorithms are charged with maximizing certain objectives, they might accidentally ‘collude’ (more precisely, behave as if they were colluding, since they cannot actually conspire), and the observed result has been a rise in prices. How can this be? Why do I note this today? In yesterday’s US print edition of the Wall Street Journal there was a fine insert with the title and focus: Artificial Intelligence. It is a quick snapshot of the state of the AI market. One startling item concerns an innovation in the works suggesting that those of us who are ill with heart disease, but might not yet know it, might be helped simply by talking to a specific app trained with AI to watch for certain heart disease markers. Yes, markers that can be found in one’s voice!
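A minimal sketch, not drawn from either newspaper article, of how such price rises can emerge without any agreement: two repricing rules that never communicate, each individually reasonable, interact in a feedback loop that ratchets prices upward. The multipliers and starting prices below are invented for illustration.

```python
# Minimal sketch (not the bots from the cited articles): two repricing
# rules that never communicate, yet their interaction pushes prices up.
# Seller A targets a premium margin over its rival; seller B undercuts
# A slightly to win the sale. Each rule is individually sensible.

def reprice_a(rival_price: float) -> float:
    return round(1.20 * rival_price, 2)   # position 20% above the rival

def reprice_b(rival_price: float) -> float:
    return round(0.95 * rival_price, 2)   # undercut the rival by 5%

price_a, price_b = 10.00, 9.50
for day in range(1, 11):
    price_a = reprice_a(price_b)
    price_b = reprice_b(price_a)
    print(f"day {day:2d}: A={price_a:8.2f}  B={price_b:8.2f}")

# Net effect per round is a factor of 1.20 * 0.95 = 1.14, so prices climb
# steadily even though neither rule was written to 'collude'.
```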


Article: Überlegungen zur Disziplin der Maschinenethik

Science has been concerned with the legal and moral status of machines equipped with chips since the 1950s; science fiction literature got there even earlier. For a long time the question was above all whether robots are objects of morality, so-called moral patients, and whether one can, for instance, grant them rights. I have been working on this topic since the 1990s. Back then I saw no reason to give robots rights, and I have not changed my position to this day. If one day they can feel something, or suffer, or have some kind of will to live, I will certainly let myself be convinced. But at the moment I see no tendencies in that direction.


Article: Maschinenethik und ‘Artificial Morality’: Können und sollen Maschinen moralisch handeln?

Advances in the field of artificial intelligence raise questions that arise with every technological revolution: What is of use and benefit to people beyond mere technical feasibility? How do the economy, work, and everyday life change? Where do the risks lie? How can these developments be steered socially and politically? The debate about AI additionally touches core areas of being human when the boundaries between human and machine blur and the machine is no longer a mere tool but can itself make decisions about actions. The discussion about which decisions we want to delegate to machines at all is of crucial importance.


Article: Privacy and Analytics – a DELICATE issue. A checklist towards trusted Learning Analytics

The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases, such as the inBloom disaster, has even been reversed following concerns raised by governments, stakeholders, and civil rights groups. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution’s learning support by implementing data and analytics with a view to improving student success.


Paper: Learning Analytics Made in France: The METAL Project

This paper presents the METAL project, an ongoing French open Learning Analytics (LA) project for secondary schools that aims at improving the quality of the learning process. The originality of METAL is that it relies on research through exploratory activities and addresses all aspects of a Learning Analytics implementation. This large-scale project covers many concerns, divided into 4 main actions: (1) data management: multi-source data identification, collection and storage, selection and promotion of standards, and design and development of an open-source Learning Record Store (LRS); (2) data visualization: learner and teacher dashboards, designed in co-conception with the final users and taking trust and usability concerns into account; (3) data exploitation: study of the link between learners’ gaze and memory, and design of explainable multi-source data-mining algorithms, including ethics and privacy concerns. A further original aspect lies in the dissemination of LA at the level of an institution, or of a broader unit such as a territory, in contrast to many projects that focus on a specific school or curriculum. Each of these aspects is a hot topic in the literature; taking all of them into account in a holistic view of education is an additional added value of the project.
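To make action (1) more concrete: a Learning Record Store typically ingests learner activity as standardized statements. The sketch below assumes an xAPI-style format, which is common for LRS implementations but is not confirmed here as METAL's choice; the learner, exercise, and endpoint are hypothetical.

```python
import json

# Hypothetical example of a learner-activity record in an xAPI-style
# format, the kind of standardized statement a Learning Record Store
# (LRS) typically ingests. The actor, object, result, and endpoint
# below are invented; they are not taken from the METAL project itself.

statement = {
    "actor":  {"mbox": "mailto:student@example.org", "name": "Anonymized Learner 42"},
    "verb":   {"id": "http://adlnet.gov/expapi/verbs/completed",
               "display": {"en-US": "completed"}},
    "object": {"id": "http://example.org/exercises/fractions-03",
               "definition": {"name": {"en-US": "Fractions exercise 3"}}},
    "result": {"score": {"scaled": 0.85}, "duration": "PT4M30S"},
}

print(json.dumps(statement, indent=2))
# A real deployment would POST this to the LRS endpoint, e.g.
# requests.post("https://lrs.example.org/xAPI/statements", json=statement, ...)
```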