Paper: A Legal Definition of AI

When policy makers want to regulate AI, they must first define what AI is. Legal definitions, however, differ significantly from definitions in other disciplines: they are working definitions, and courts must be able to determine precisely whether or not a concrete system counts as AI under the law. In this paper we examine how policy makers should define the material scope of AI regulations. We argue that they should not use the term ‘artificial intelligence’ for regulatory purposes, because no definition of AI meets the requirements for legal definitions. Instead, they should define certain designs, use cases, or capabilities, following a risk-based approach. The goal of this paper is to help policy makers who work on AI regulations.

Paper: Valuating User Data in a Human-Centric Data Economy

The idea of paying people for their data is increasingly seen as a promising direction for resolving privacy debates, improving the quality of online data, and even offering an alternative to labor-based compensation in a future dominated by automation and self-operating machines. In this paper we demonstrate how a Human-Centric Data Economy would compensate the users of an online streaming service. We borrow the notion of the Shapley value from cooperative game theory to define what a fair compensation for each user should be for the movie ratings they contribute to the service's recommender system. Since computing the Shapley value exactly is computationally expensive in the general case, we derive faster alternatives using clustering, dimensionality reduction, and partial information. We apply our algorithms to a movie recommendation data set and demonstrate that different users may have vastly different values for the service. We also analyze why some movie ratings are more valuable than others and discuss the consequences for compensating users fairly.
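To make the fairness notion concrete: the Shapley value pays each user their marginal contribution to the service's value, averaged over every order in which users could have joined. A minimal sketch in Python, where the coalition value function `v` is a purely hypothetical stand-in for the recommender's quality metric (here it depends only on how many raters participate):

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the players (O(n!), tractable only for tiny n)."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: total / len(perms) for p, total in phi.items()}

# Hypothetical value of a coalition of raters: sublinear in coalition size,
# mimicking diminishing returns from additional ratings.
def v(s):
    return {0: 0.0, 1: 1.0, 2: 1.5, 3: 1.8}[len(s)]

print(shapley_values(["alice", "bob", "carol"], v))
# symmetric players, so each receives 1.8 / 3 = 0.6
```

The factorial cost of this exact computation is precisely what motivates the paper's faster alternatives based on clustering, dimensionality reduction, and partial information.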

Article: Why Accessibility Is the Future of Tech

Designing solutions for people with disabilities offers a peephole into the future. Yet the case for doing so usually begins and ends with ‘It’s just the right thing to do.’ Very few people think that those of us who are blind should be exiled from the web altogether, or that people with hearing loss shouldn’t have iPhones. That’s as it should be. But all too often, the importance of accessibility – the catch-all term for designing technology that people with disabilities can use – is framed in terms of charity alone. And that’s a shame, because it makes accessibility seem grudging and boring, when the reality is that it’s the most exciting school of design on the planet.

Article: The Anthropologist of Artificial Intelligence

How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described ‘maverick’ tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how to best describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which they had set out to explore. Rahwan, for example, had been exploring the question of ethical behavior for a self-driving car – should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist? – in his Moral Machine experiment.

Paper: Avoiding Resentment Via Monotonic Fairness

Classifiers that achieve demographic balance by explicitly using protected attributes such as race or gender are often politically or culturally controversial because they lack individual fairness, i.e., individuals with similar qualifications may receive different outcomes. Individually fair and group-fair decision criteria can also produce counter-intuitive results, e.g., the optimal constrained decision boundary may reject intuitively better candidates due to demographic imbalance among similar candidates. Both approaches can be seen as introducing individual resentment: some individuals would have received a better outcome if they had belonged to a different demographic class with the same qualifications, or if they had remained in the same class but had objectively worse qualifications (e.g., lower test scores). We show that both forms of resentment can be avoided by using monotonically constrained machine learning models to create individually fair, demographically balanced classifiers.
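A rough illustration of the second resentment property (the candidates, scores, and thresholds below are entirely hypothetical, not the paper's model): if decisions are monotone in the qualification score within each class, no one can be rejected while a lower-scoring member of the same class is accepted. A per-group score threshold is the simplest such monotone decision rule:

```python
# Hypothetical per-group thresholds: within each group, the decision is a
# monotone (non-decreasing) function of the qualification score.
def accept(score, group, thresholds):
    return score >= thresholds[group]

candidates = [
    ("a", 0.9, "g1"), ("b", 0.6, "g1"),
    ("c", 0.8, "g2"), ("d", 0.5, "g2"),
]
thresholds = {"g1": 0.7, "g2": 0.6}  # assumed chosen for demographic balance

decisions = {name: accept(s, g, thresholds) for name, s, g in candidates}

# Monotonicity check: no within-group "resentment" pair, i.e. nobody is
# rejected while a strictly lower-scoring member of the same group is accepted.
for n1, s1, g1 in candidates:
    for n2, s2, g2 in candidates:
        if g1 == g2 and s1 > s2:
            assert decisions[n1] >= decisions[n2]
```

Note that this toy rule only rules out within-class resentment; it can still create the first form (a g1 candidate scoring 0.65 would have been accepted in g2). The paper's contribution is a monotonically constrained model that avoids both forms while remaining demographically balanced.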

Article: Developing AI responsibly

Sarah Bird discusses the major challenges of responsible AI development and examines promising new tools and technologies to help enable it in practice.

Article: Open-endedness: The last grand challenge you’ve never heard of

Artificial intelligence (AI) is a grand challenge for computer science. Lifetimes of effort and billions of dollars have powered its pursuit. Yet, today its most ambitious vision remains unmet: though progress continues, no human-competitive general digital intelligence is within our reach. However, such an elusive goal is exactly what we expect from a ‘grand challenge’ – it’s something that will take astronomical effort over expansive time to achieve – and is likely worth the wait. There are other grand challenges, like curing cancer, achieving 100% renewable energy, or unifying physics. Some fields have entire sets of grand challenges, such as David Hilbert’s 23 unsolved problems in mathematics, which threw down the gauntlet for the entire 20th century. What’s unusual, though, is for there to be a problem whose solution could radically alter our civilization and our understanding of ourselves while being known only to the smallest sliver of researchers. Despite how strangely implausible that sounds, it is precisely the scenario today with the challenge of open-endedness. Almost no one has even heard of this problem, let alone cares about its solution, even though it is among the most fascinating and profound challenges that might actually someday be solved. With this article, we hope to help fix this surprising disconnect. We’ll explain just what this challenge is, its amazing implications if solved, and how to join the quest if we’ve inspired your interest.

Article: Regulation and Ethics in Data Science and Machine Learning

Statistical inference, reinforcement learning, deep neural networks, and other such terms have recently attracted much attention, and for a fundamental reason: statistical inference extends the basis of our decisions and changes the deliberative process by which we make them. This change constitutes the essential differentiator between what I call the pre-data-science era and the subsequent data science era. In the data science era, decisions are taken based on data and algorithms. Often, decisions are made solely by algorithms, and humans act as an important actor only in gathering, cleaning, and structuring the data and in setting up the framework for algorithm selection (often, the algorithm itself is chosen by a metric). Given this fundamental change, it is important to take a closer look both at the extended basis of decisions and at how deliberation over this extended basis changes when decisions are taken in the data science era.