Paper: Achieving Ethical Algorithmic Behaviour in the Internet-of-Things: a Review

The Internet-of-Things is emerging as a vast inter-connected space of devices and things surrounding people, many of which are increasingly capable of autonomous action, from automatically sending data to cloud servers for analysis and changing the behaviour of smart objects, to altering the physical environment. A wide range of ethical concerns has arisen around their usage and development in recent years, concerns exacerbated by the increasing autonomy given to connected things. This paper reviews, via examples, the landscape of ethical issues raised by connected things behaving autonomously as part of the Internet-of-Things, together with some recent approaches to addressing these issues. We consider ethical issues in relation to device operations and their accompanying algorithms. Examples of concerns include unsecured consumer devices, data collection by health-related Internet-of-Things devices, hackable vehicles and the behaviour of autonomous vehicles in dilemma situations, accountability of Internet-of-Things systems, algorithmic bias, uncontrolled cooperation among things, and automation affecting user choice and control. Current ideas for addressing this range of ethical concerns are reviewed and compared, including programming ethical behaviour, whitebox algorithms, blackbox validation, algorithmic social contracts, enveloping IoT systems, and guidelines and codes of ethics for IoT developers. The analysis suggests that a multi-pronged approach could be useful, based on the context of operation and deployment.
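
To make the reviewed approaches concrete, here is a minimal sketch of what "programming ethical behaviour" with a whitebox (inspectable) rule check might look like for a connected device; the device, rules, and field names are our illustration, not taken from the paper:

```python
# Hypothetical sketch: a connected device checks candidate autonomous
# actions against explicit, inspectable ethical/safety rules before acting.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    shares_data: bool      # does the action send user data off-device?
    user_consented: bool   # has the user opted in to this behaviour?
    reversible: bool       # can the user undo the effect?

def is_permitted(action: Action) -> bool:
    """Whitebox rule check: every rule is explicit and auditable."""
    if action.shares_data and not action.user_consented:
        return False  # no data collection without consent
    if not action.reversible and not action.user_consented:
        return False  # irreversible changes need explicit consent
    return True

if __name__ == "__main__":
    upload = Action("upload_usage_log", shares_data=True,
                    user_consented=False, reversible=True)
    print(is_permitted(upload))  # False: blocked pending user consent
```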


Paper: Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

In recent years, Artificial Intelligence (AI) has gained notable momentum and promises substantial benefits across many application sectors. For this to occur, however, the entire community must overcome the barrier of explainability, a problem inherent to the latest sub-symbolic AI techniques (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI. The paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial requirement for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, together with a prospect of what is yet to be achieved. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy for work aimed at Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material that stimulates future research advances, and also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without prior bias stemming from its lack of interpretability.
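
As one concrete instance of the post-hoc, model-agnostic explanation techniques that such taxonomies cover, the following is a minimal sketch using scikit-learn's permutation importance; the example is ours, not drawn from the paper:

```python
# Post-hoc, model-agnostic explanation: permutation importance ranks
# features by how much shuffling each one degrades held-out performance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the score drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

The technique treats the classifier as a black box, asking only how much each input matters to its predictions, which is why it applies equally to ensembles and Deep Neural Networks.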


Paper: AI Ethics in Industry: A Research Framework

Artificial intelligence (AI) is becoming increasingly widespread in system development endeavors. As AI systems affect various stakeholders due to their unique nature, the growing influence of these systems calls for ethical considerations. Academic discussion and practical examples of autonomous system failures have highlighted the need for implementing ethics in software development. Little currently exists in the way of frameworks for understanding the practical implementation of AI ethics. In this paper, we discuss a research framework for implementing AI ethics in industrial settings. The framework presents a starting point for empirical studies into AI ethics but is still being developed further based on its practical utilization.


Paper: Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities

Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously and efficiently, and more safely than human drivers, offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce issues that create new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithms’ decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs’ perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues, highlight the existing research gaps, and argue for mitigating these issues through the design of AVs’ algorithms and of policies and regulations, to fully realise AVs’ benefits for smart and sustainable cities.


Article: Why We Need To Rethink Central Authority In The Age of AI

We live in an age of increasing centralization that pervades all aspects of our culture. In today’s world, centralization equates to control; centralization equates to power. Centralization gave rise to bureaucratic institutions where decisions, made by a few, ran through a hierarchical structure. This ensured a system in which one authority determined how systems were run and how objectives were met. This is symbolic of how authoritarian governments operate. Such governments have unlimited power, yet their effective size is much smaller, run by one or a few persons who impose order. If a constitution does exist within this type of system, it is essentially ignored whenever it limits the powers of the state or gives more voice to the people. Although leaders in many of these states are elected, the elections are wrapped in a shroud of whitewash: leaders do not govern based on the consent of the people. …


Article: Why we should stop developing imitation machines right now

It’s well-chronicled in sci-fi and popular science: someday soon we will create an artificial intelligence that is better at inventing than we are, and human ingenuity will become obsolete. AI will transform the way we live and make human labour obsolete. It’s the zeitgeist of this cultural moment: we’re afraid the robots will rise up in a flurry of CGI metal. In 2018, thousands of AI researchers signed a pledge to halt the development of Lethal Autonomous Weapons. The Open Philanthropy Project states that strong AI poses risks of potentially ‘globally catastrophic’ proportions. But I think the most immediate risk of artificial intelligence is not some robot war, or labour hyperinflation, or a hyperintelligent singularity. The challenge of self-directing ‘Strong’ AI lies well beyond the immediate threats of AI development. The focus on an Asimov-style apocalypse overlooks the fact that even the weakest possible AI will pose legal and prudential challenges. Here is my thesis: when we develop AI, even the weakest possible AI, it will become a rights-bearer under the same logic that we use to grant rights to humans. Let me explain.


Paper: Ethical Dilemmas of Strategic Coalitions

A coalition of agents, or a single agent, has an ethical dilemma between several statements if each joint action of the coalition forces at least one of those statements to be true. For example, any action in the trolley dilemma forces one specific group of people to die. In many cases, agents face ethical dilemmas because they are restricted in the amount of resources they are willing to sacrifice to overcome the dilemma. The paper presents a sound and complete modal logical system that describes the properties of dilemmas under a given limit on sacrifice.
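
The dilemma notion defined above admits a simple semantic reading; the following formalization is our sketch, with notation ($D_C$, $\mathit{Act}_C$, $\mathrm{cost}_C$) that is illustrative and not necessarily the paper's:

```latex
% Our sketch (notation not necessarily the paper's): coalition C has a
% dilemma between statements \varphi_1,\dots,\varphi_n if every joint
% action available to it forces at least one of them to be true.
\[
  D_C(\varphi_1,\dots,\varphi_n) \;\equiv\;
  \forall a \in \mathit{Act}_C \;\exists i \le n \;\bigl( a \Vdash \varphi_i \bigr)
\]
% With a sacrifice bound s, quantification ranges only over joint actions
% whose cost to C stays within the limit the coalition will accept:
\[
  D^{\le s}_C(\varphi_1,\dots,\varphi_n) \;\equiv\;
  \forall a \in \mathit{Act}_C
  \;\bigl( \mathrm{cost}_C(a) \le s \Rightarrow
  \exists i \le n \,( a \Vdash \varphi_i ) \bigr)
\]
```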


Paper: Scenarios and Recommendations for Ethical Interpretive AI

Artificially intelligent systems, given a set of non-trivial ethical rules to follow, will inevitably be faced with scenarios which call into question the scope of those rules. In such cases, human reasoners typically will engage in interpretive reasoning, where interpretive arguments are used to support or attack claims that some rule should be understood a certain way. Artificially intelligent reasoners, however, currently lack the ability to carry out human-like interpretive reasoning, and we argue that bridging this gulf is of tremendous importance to human-centered AI. In order to better understand how future artificial reasoners capable of human-like interpretive reasoning must be developed, we have collected a dataset of ethical rules, scenarios designed to invoke interpretive reasoning, and interpretations of those scenarios. We perform a qualitative analysis of our dataset, and summarize our findings in the form of practical recommendations.
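
To make the dataset's shape concrete, here is a hypothetical schema for its entries; the field names and the classic "no vehicles in the park" example are ours, and the paper may structure its data differently:

```python
# Hypothetical schema for entries in a rule/scenario/interpretation
# dataset of the kind the paper describes; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    reading: str           # how the rule is understood in this scenario
    argument: str          # the interpretive argument offered
    supports_reading: bool # does the argument support or attack the reading?

@dataclass
class ScenarioRecord:
    rule: str              # an ethical rule with non-trivial scope
    scenario: str          # a case that calls the rule's scope into question
    interpretations: list[Interpretation] = field(default_factory=list)

record = ScenarioRecord(
    rule="No vehicles in the park.",
    scenario="An ambulance must cross the park to reach an injured person.",
)
record.interpretations.append(Interpretation(
    reading="'Vehicle' excludes emergency vehicles.",
    argument="The rule's purpose is safety; blocking an ambulance defeats it.",
    supports_reading=True,
))
print(len(record.interpretations))  # 1
```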