Paper: AI Ethics for Systemic Issues: A Structural Approach
The debate on AI ethics largely focuses on technical improvements and stronger regulation to prevent accidents or misuse of AI, with solutions relying on holding individual actors accountable for responsible AI development. While useful and necessary, we argue that this ‘agency’ approach disregards more indirect and complex risks resulting from AI’s interaction with the socio-economic and political context. This paper calls for a ‘structural’ approach to assessing AI’s effects in order to understand and prevent such systemic risks, where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security, which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure ‘AI for social good’, agency-focused policies must be complemented by policies informed by a structural approach.
Paper: Kernel Dependence Regularizers and Gaussian Processes with Applications to Algorithmic Fairness
Current adoption of machine learning in industrial, societal and economic activities has raised concerns about the fairness, equity and ethics of automated decisions. Predictive models are often developed using biased datasets and thus retain or even exacerbate biases in their decisions and recommendations. Removing the sensitive covariates, such as gender or race, is insufficient to remedy this issue since the biases may be retained due to other related covariates. We present a regularization approach to this problem that trades off predictive accuracy of the learned models (with respect to biased labels) for fairness in terms of statistical parity, i.e. independence of the decisions from the sensitive covariates. In particular, we consider a general framework of regularized empirical risk minimization over reproducing kernel Hilbert spaces and impose an additional regularizer penalizing dependence between the predictor and the sensitive covariates, using kernel-based measures of dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its normalized version. This approach leads to a closed-form solution in the case of squared loss, i.e. ridge regression. Moreover, we show that the dependence regularizer has an interpretation as modifying the corresponding Gaussian process (GP) prior. As a consequence, a GP model with a prior that encourages fairness with respect to sensitive variables can be derived, allowing principled hyperparameter selection and study of the relative relevance of covariates under fairness constraints. Experimental results on synthetic examples and on real problems of income and crime prediction illustrate the potential of the approach to improve the fairness of automated decisions.
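As a rough illustration of the closed-form squared-loss case described in this abstract, the sketch below implements an HSIC-regularized kernel ridge regression. It assumes a linear kernel on the predictions inside the HSIC term (which is what makes the solution closed-form here); the kernel widths, the synthetic data, and the function names (rbf_kernel, fair_kernel_ridge) are illustrative choices, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fair_kernel_ridge(X, y, S, lam=1e-2, mu=100.0, gamma=1.0):
    """HSIC-regularized kernel ridge regression (sketch).

    Objective (assumed form):
        ||y - K a||^2 + lam * a'K a + (mu / n^2) * a' K H Ks H K a
    where the last term is an empirical HSIC between the predictions f = K a
    (with a linear kernel on f) and the sensitive covariates S.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)           # kernel on inputs
    Ks = rbf_kernel(S, S, gamma)          # kernel on sensitive covariates
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    # Setting the gradient of the objective to zero (K assumed invertible)
    # gives the linear system below, analogous to ordinary kernel ridge.
    A = K + lam * np.eye(n) + (mu / n**2) * H @ Ks @ H @ K
    alpha = np.linalg.solve(A, y)
    return alpha, K

# Toy usage: a feature correlated with a sensitive covariate and biased labels.
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 1))                 # sensitive covariate
X = S + 0.5 * rng.normal(size=(200, 1))       # correlated input feature
y = X[:, 0] + 0.3 * rng.normal(size=200)      # labels inheriting the bias
alpha, K = fair_kernel_ridge(X, y, S, mu=100.0)
preds = K @ alpha
# Correlation between predictions and S should shrink as mu grows.
print(np.corrcoef(preds, S[:, 0])[0, 1])
```

Increasing mu trades predictive accuracy (with respect to the biased labels) for weaker statistical dependence between predictions and the sensitive covariate, which is the trade-off the abstract describes.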
Paper: (When) Is Truth-telling Favored in AI Debate?
For some problems, humans may not be able to accurately judge the goodness of AI-proposed solutions. Irving et al. (2018) propose that in such cases, we may use a debate between two AI systems to amplify the problem-solving capabilities of a human judge. We introduce a mathematical framework that can model debates of this type and propose that the quality of debate designs should be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates nonetheless capture many aspects of practical debates such as the incentives to confuse the judge or stall to prevent losing. We then outline how these models should be generalized to analyze a wider range of debate phenomena.
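The abstract does not spell out the formal details, but a toy simulation in the spirit of "measure a debate design by the accuracy of the most persuasive answer" can make the idea concrete. The setup below (a binary feature world, a majority question, debaters greedily revealing features that support their own claim, a judge counting revealed evidence) is a hypothetical illustration, not the authors' feature-debate model.

```python
import numpy as np

def feature_debate_toy(n_features=25, n_rounds=5, n_worlds=5000, seed=0):
    """Toy debate scoring (illustrative only, not the paper's formalism).

    The 'world' is a binary feature vector; the question is whether a
    majority of features equal 1. One debater argues 'yes', the other 'no';
    each round each debater reveals one verifiable feature supporting its
    claim, if any remain (computed in closed form below rather than round
    by round). The judge rules for the claim with more revealed evidence,
    and the design is scored by how often the winning answer is correct.
    """
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_worlds):
        world = rng.integers(0, 2, n_features)
        truth = world.sum() > n_features / 2
        ones, zeros = world.sum(), n_features - world.sum()
        shown_yes = min(n_rounds, ones)    # evidence for 'majority are 1'
        shown_no = min(n_rounds, zeros)    # evidence for 'majority are 0'
        if shown_yes == shown_no:
            verdict = rng.integers(0, 2) == 1   # judge guesses on a tie
        else:
            verdict = shown_yes > shown_no
        correct += (verdict == truth)
    return correct / n_worlds

# Short debates leave the judge with tied evidence (chance accuracy);
# debates long enough to exhaust the losing side's evidence track the truth.
for rounds in (1, 5, 12, 13):
    print(rounds, feature_debate_toy(n_rounds=rounds))
```

Even this crude toy exhibits one of the phenomena the paper discusses: a debater with the false answer can "stall" by matching the opponent's revelations until time runs out, so the accuracy of the winning answer depends heavily on the debate's length and protocol.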
Article: Who Are The Lawyers Who Understand AI Algorithms?
There’s been a lot of negative press on AI algorithms lately. Everyone has a different opinion when it comes to inherent bias in Artificial Intelligence systems that are designed to help us make decisions. It’s easy to say, ‘Oh no, that Artificial Intelligence algorithm is racist, sexist, or even ageist.’ It’s easy to point fingers and fire accusations against our machine counterparts. An article published by New Scientist identified five biases inherent in existing AI systems that can affect people’s lives in very real ways. The best-known scandal is the one exposing COMPAS, an algorithm used in the US to guide sentencing by predicting the likelihood of criminal reoffending; according to ProPublica’s analysis, it rated black defendants as posing a higher risk of recidivism. But we humans are the creators of these algorithms. Algorithms are not designed to be biased. Most often, it is the way algorithms are used that creates bias, and the data they are trained on introduces bias as well. There is no ‘perfect fit’ in most situations, especially in social situations. So, when you are trying to fit a round peg into an oval hole, there will be biases. There will be inadequacies in the current AI capabilities of AI-enabled intelligent systems. In this type of environment, what do we need? We need understanding.
Article: Why Businesses Should Adopt an AI Code of Ethics — Now
The issues of ethical development and deployment of applications using artificial intelligence (AI) technologies are rife with nuance and complexity. Because humans are diverse — different genders, races, values and cultural norms — AI algorithms and automated processes won’t work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition.
Article: “AI is a lie”
Eric Jonas on AI hype and questions of ethics.
Paper: Reporting on Decision-Making Algorithms and some Related Ethical Questions
Companies have reported on their financial performance for decades. More recently, they have also started to report on their environmental impact and their social responsibility. The latest trend is to deliver one single integrated report in which all stakeholders of the company can easily connect all facets of the business with their impact, considered in a broad sense. The main purpose of this integrated approach is to avoid delivering data in disconnected silos, which makes it very difficult to globally assess the overall performance of an entity or a business line. In this paper, we focus on how companies report on risks and ethical issues related to the increasing use of Artificial Intelligence (AI). We explain some of these risks and potential issues. Next, we identify some recent initiatives by various stakeholders to define a global ethical framework for AI. Finally, we illustrate with four cases that companies are very reluctant to report on these facets of AI.
Paper: An Unethical Optimization Principle
If an artificial intelligence aims to maximise risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion $\eta$ of available unethical strategies is small, the probability $p_U$ of picking an unethical strategy can become large; indeed, unless returns are fat-tailed, $p_U$ tends to unity as the strategy space becomes large. We define an Unethical Odds Ratio Upsilon ($\Upsilon$) that allows us to calculate $p_U$ from $\eta$, and we derive a simple formula for the limit of $\Upsilon$ as the strategy space becomes large. We give an algorithm for estimating $\Upsilon$ and $p_U$ in finite cases and discuss how to deal with infinite strategy spaces. We show how this principle can be used to help detect unethical strategies and to estimate $\eta$. Finally, we sketch some policy implications of this work.
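As a concrete, heavily simplified illustration of the finite-case estimation mentioned above, the sketch below runs a small Monte Carlo experiment. The return model (ethical returns drawn from $N(0,1)$, unethical returns from $N(\delta, 1)$ for a hypothetical shift delta) and the particular odds-ratio definition used for Upsilon are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def estimate_pU(n_strategies=1000, eta=0.01, delta=1.0, n_trials=2000, seed=0):
    """Monte Carlo sketch of the unethical optimization principle.

    Hypothetical setup: a fraction eta of strategies is unethical; each
    strategy's risk-adjusted return is drawn i.i.d. from N(0, 1) if ethical
    and N(delta, 1) if unethical, and the optimizer always picks the
    strategy with the highest return.
    """
    rng = np.random.default_rng(seed)
    n_unethical = max(1, int(round(eta * n_strategies)))
    picked_unethical = 0
    for _ in range(n_trials):
        r_eth = rng.normal(0.0, 1.0, n_strategies - n_unethical)
        r_une = rng.normal(delta, 1.0, n_unethical)
        if r_une.max() > r_eth.max():
            picked_unethical += 1
    pU = picked_unethical / n_trials
    # Odds of picking an unethical strategy relative to the base-rate odds eta.
    upsilon = (pU / (1 - pU)) / (eta / (1 - eta))
    return pU, upsilon

for N in (100, 1000, 10000):
    pU, ups = estimate_pU(n_strategies=N)
    print(f"N={N:6d}  p_U={pU:.3f}  Upsilon={ups:.1f}")
```

With these thin-tailed (Gaussian) returns, the estimated $p_U$ far exceeds the base rate $\eta$ and grows with the size of the strategy space, which is consistent with the limiting behaviour the abstract describes.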