Paper: Empathic Autonomous Agents

Identifying and resolving conflicts of interest is a key challenge when designing autonomous agents. For example, such conflicts often arise when complex information systems interact persuasively with humans, and they are likely to emerge in non-human agent-to-agent interaction as well. We introduce a theoretical framework for an empathic autonomous agent that proactively identifies potential conflicts of interest in interactions with other agents (and humans). It does so by considering their utility functions, comparing them with its own preferences, and applying a system of shared values to find a solution all agents consider acceptable. To illustrate how empathic autonomous agents work, we provide running examples and a simple prototype implementation in a general-purpose programming language. As a high-level overview of our work, we also propose a reasoning-loop architecture for the empathic agent.
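
The following is a minimal, self-contained sketch (Python, not the paper's actual prototype) of the core idea: an agent evaluates candidate actions against both its own utility function and that of another agent, and only acts when a shared acceptability threshold is met. All names and numbers below are illustrative assumptions.

    # Minimal sketch, not the paper's prototype: an "empathic" agent that
    # filters out actions which would create a conflict of interest before
    # maximizing its own utility. All values below are illustrative.

    ACTIONS = ["proceed", "wait", "defer"]

    # Hypothetical utility functions; in the framework these would come from
    # the agents' preference/utility models.
    utility_self = {"proceed": 1.0, "wait": 0.4, "defer": 0.1}
    utility_other = {"proceed": -0.5, "wait": 0.6, "defer": 0.3}

    # Shared value: no involved agent should be pushed below this utility.
    ACCEPTABILITY_THRESHOLD = 0.0

    def acceptable(action):
        """An action is acceptable if every involved agent stays at or above
        the shared acceptability threshold."""
        return min(utility_self[action], utility_other[action]) >= ACCEPTABILITY_THRESHOLD

    def choose_action():
        """One step of the reasoning loop: drop conflicting actions, then pick
        the remaining action that maximizes the agent's own utility."""
        candidates = [a for a in ACTIONS if acceptable(a)]
        if not candidates:
            return "defer"  # fallback when no mutually acceptable action exists
        return max(candidates, key=lambda a: utility_self[a])

    print(choose_action())  # -> "wait": best own utility among acceptable actions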


Paper: A Decentralised Digital Identity Architecture

Current architectures to validate, certify, and manage identity are based on centralised, top-down approaches that rely on trusted authorities and third-party operators. We approach the problem of digital identity from a human rights perspective, asserting that individual persons must be allowed to manage their personal information in many different ways in different contexts and that, to do so, each individual must be able to create multiple unrelated identities. Therefore, we first define a set of fundamental constraints that digital identity systems must satisfy to preserve and promote human rights. With these constraints in mind, we then propose a decentralised, standards-based approach that uses a combination of distributed ledger technology and thoughtful regulation to facilitate many-to-many relationships among providers of key services. Our proposal for digital identity differs from others in its approach to trust: by avoiding centralisation and the imposition of trust from the top down, we can encourage individuals and organisations to embrace the system and share in its benefits.
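
As a rough illustration of the "multiple unrelated identities" constraint (not the architecture proposed in the paper), the sketch below models each identity as an independently generated secret with a derived public identifier, so no identifier is linkable to another; a real system would use proper asymmetric key pairs and verifiable credentials.

    # Illustrative sketch only: unrelated identities as independently
    # generated secrets. Standard library only; not the paper's design.
    import hashlib
    import secrets

    def new_identity():
        """Create a fresh, unlinkable identity: a random secret plus a public
        identifier derived from it. Independent randomness per identity means
        two identities of the same person share no derivation path."""
        secret = secrets.token_bytes(32)
        return {"secret": secret, "id": hashlib.sha256(secret).hexdigest()}

    # One person, three contexts, three unrelated identities.
    work, health, social = new_identity(), new_identity(), new_identity()
    assert len({work["id"], health["id"], social["id"]}) == 3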


Paper: Liability, Ethics, and Culture-Aware Behavior Specification using Rulebooks

The behavior of self-driving cars must be compatible with an enormous set of conflicting and ambiguous objectives, from law, from ethics, from the local culture, and so on. This paper describes a new way to conveniently specify the desired behavior of autonomous agents, which we use on the self-driving cars developed at nuTonomy. We define a ‘rulebook’ as a pre-ordered set of ‘rules’, each akin to a violation metric on the possible outcomes (‘realizations’). The rules are partially ordered by priority, and the semantics of a rulebook imposes a pre-order on the set of realizations. We study the compositional properties of rulebooks and derive which operations on them preserve previously introduced constraints. While we demonstrate the application of these techniques in the self-driving domain, the methods are domain-independent.
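
A minimal sketch of the rulebook idea follows (Python, illustrative only, not nuTonomy's implementation). For simplicity the priorities form a total order, so the induced pre-order on realizations reduces to a lexicographic comparison of violation vectors; the rules and numbers are invented for illustration.

    # Sketch: a rulebook as prioritized rules, each mapping a realization to a
    # violation score (lower priority number = higher priority).
    from typing import Callable, Dict, List, Tuple

    Realization = Dict[str, float]
    Rule = Tuple[int, Callable[[Realization], float]]  # (priority, violation metric)

    rulebook: List[Rule] = [
        (0, lambda r: max(0.0, 0.5 - r["clearance_m"])),  # keep 0.5 m clearance
        (1, lambda r: max(0.0, r["speed_over_limit"])),   # obey the speed limit
    ]

    def violation_vector(r: Realization) -> List[float]:
        """Evaluate all rules, highest priority first."""
        return [metric(r) for _, metric in sorted(rulebook, key=lambda rule: rule[0])]

    def prefer(a: Realization, b: Realization) -> Realization:
        """Return the realization the rulebook ranks as no worse."""
        return a if violation_vector(a) <= violation_vector(b) else b

    swerve = {"clearance_m": 0.6, "speed_over_limit": 3.0}  # speeds, keeps clearance
    hold = {"clearance_m": 0.2, "speed_over_limit": 0.0}    # slow, violates clearance
    print(prefer(swerve, hold))  # swerve wins: the clearance rule outranks speed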


Article: Out now: 3TH1CS – A reinvention of ethics in the digital age?

The digital transformation is affecting more and more areas of our lives, and as it does, it constantly raises new ethical questions. With the book ‘3TH1CS’ we want to explore where exactly these questions arise and how we can – or even have to – address them as a society. The 20 contributions that make up the book have been written by a group of experts selected from among the leading thinkers of Europe, Asia and America. They shine a light on those areas where the digital transformation challenges existing moral conventions or requires them to be rethought. ‘3TH1CS’ gives an overview of the most important ethical issues of our time from the perspectives of renowned scientists, thinkers and philosophers. They share their knowledge and thoughts on ethics in the digital age in an understandable style and present ideas, analyses and proposals that invite us to join the discussion. The topics dealt with in the book include the relationship between humans and machines, moral conduct towards artificial intelligence, Big Data in medicine, autonomous weapons systems, the influence of algorithms on our lives, and the question of how our digital world can be shaped in an ethical way.


Article: Algo.Rules

Rules for the Design of Algorithmic Systems


Article: A Conversation about Tech Ethics with the New York Times Chief Data Scientist

Although I’m excited about the positive potential of tech, I’m also scared about the ways that tech is having a negative impact on society, and I’m interested in how we can push tech companies to do better. I was recently in a discussion during which the New York Times chief data scientist, Chris Wiggins, shared a helpful framework for thinking about the different forces we can use to influence tech companies towards responsibility and ethics. I interviewed Chris on the topic and have summarized that interview here. In addition to serving as Chief Data Scientist at the New York Times since January 2014, Chris Wiggins is a professor of applied mathematics at Columbia University, a founding member of Columbia’s Data Science Institute, and co-founder of HackNY. He co-teaches a course at Columbia on the history and ethics of data.


Paper: A Serious Game for Introducing Software Engineering Ethics to University Students

This paper presents a game based on storytelling, in which the players are faced with ethical dilemmas related to issues specific to software engineering. The players’ choices affect how the story unfolds and can lead to various alternative endings. This Ethics Game was used as a tool to mediate the learning activity and was evaluated by 144 students during a Software Engineering course in the 2017-2018 academic year. The evaluation was based on a within-subject pre-post design and provided insights into the students’ learning gain (academic performance) as well as their perceived educational experience. In addition, it yielded the students’ usability evaluation of the Ethics Game. The results indicated that the students improved their knowledge of software engineering ethics by playing the game. They also considered the game a useful educational tool with high usability. Female students had statistically significantly higher knowledge gain and higher evaluation scores than male students, while no statistically significant differences were measured between groups based on year of study.
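
For readers unfamiliar with the evaluation design, the snippet below sketches a within-subject pre-/post-test analysis with made-up scores (not the study's data): a paired test for learning gain and an independent test for a between-group comparison of gains.

    # Hedged sketch of a pre-/post-test analysis; all scores are invented.
    from scipy import stats

    pre = [52, 61, 47, 70, 58, 63, 55, 66]   # hypothetical pre-test scores
    post = [68, 72, 60, 81, 70, 74, 69, 77]  # same students after the game

    # Within-subject (paired) test of learning gain.
    t_within, p_within = stats.ttest_rel(post, pre)
    print(f"learning gain: t={t_within:.2f}, p={p_within:.4f}")

    # Between-group comparison of gain scores (e.g. two student groups).
    gains = [b - a for a, b in zip(pre, post)]
    t_between, p_between = stats.ttest_ind(gains[:4], gains[4:])
    print(f"group difference: t={t_between:.2f}, p={p_between:.4f}")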


Article: What Are Machine Learning Models Hiding?

Machine learning is eating the world. The abundance of training data has helped ML achieve amazing results for object recognition, natural language processing, predictive analytics, and all manner of other tasks. Much of this training data is very sensitive, including personal photos, search queries, location traces, and health-care records.