Paper: Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification

Recent research has helped to cultivate growing awareness that machine learning systems fueled by big data can create or exacerbate troubling disparities in society. Much of this research comes from outside of the practicing data science community, leaving its members with little concrete guidance to proactively address these concerns. This article introduces issues of discrimination to the data science community on its own terms. In it, we tour the familiar data mining process while providing a taxonomy of common practices that have the potential to produce unintended discrimination. We also survey how discrimination is commonly measured, and suggest how familiar development processes can be augmented to mitigate systems' discriminatory potential. We advocate that data scientists should be intentional about modeling and reducing discriminatory outcomes. Otherwise, their efforts risk perpetuating any systemic discrimination that may exist, under a misleading veil of data-driven objectivity.
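One of the metrics commonly surveyed in this literature is demographic parity, often checked via the disparate impact ratio. As a minimal sketch (the toy data, function names, and threshold framing are illustrative, not taken from the paper):

```python
# Illustrative sketch of one common discrimination metric:
# the disparate impact ratio between two groups' selection rates.
def selection_rate(outcomes):
    """Fraction of favorable (positive) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, unprotected):
    """Ratio of selection rates; values below ~0.8 are often
    flagged under the informal 'four-fifths rule'."""
    return selection_rate(protected) / selection_rate(unprotected)

# Toy decisions: 1 = favorable, 0 = unfavorable.
protected_group   = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 20% selected
unprotected_group = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]  # 50% selected

ratio = disparate_impact_ratio(protected_group, unprotected_group)
print(round(ratio, 2))  # 0.2 / 0.5 = 0.4, well below 0.8
```

Auditing a classifier with a check like this is one way the familiar development process can be "augmented" as the abstract suggests.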

Paper: Green AI

The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers from emerging economies to engage in deep learning research. This position paper advocates a practical solution by making efficiency an evaluation criterion for research alongside accuracy and related measures. In addition, we propose reporting the financial cost or ‘price tag’ of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods. Our goal is to make AI both greener and more inclusive—enabling any inspired undergraduate with a laptop to write high-quality research papers. Green AI is an emerging focus at the Allen Institute for AI.
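Reporting a model's computational "price tag" can be as simple as counting floating-point operations for a forward pass. A minimal sketch for fully connected layers (the network sizes are invented for illustration; this is one possible efficiency measure, not the paper's prescribed method):

```python
# Illustrative FLOP accounting for a stack of dense layers,
# as one candidate efficiency metric to report alongside accuracy.
def dense_layer_flops(n_in, n_out):
    """Each output neuron does n_in multiplications and n_in
    additions (bias ignored), hence 2 * n_in * n_out FLOPs."""
    return 2 * n_in * n_out

def network_flops(layer_sizes):
    """Total FLOPs for one forward pass through stacked dense layers."""
    return sum(dense_layer_flops(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical MLP: 784 -> 256 -> 64 -> 10
print(network_flops([784, 256, 64, 10]))  # 435456
```

Unlike wall-clock time or dollar cost, a FLOP count is hardware-independent, which makes it a convenient baseline when comparing increasingly efficient methods.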

Article: Underwriting by Prediction Machines

Credit decisioning has always been at the forefront of adopting innovative tools and technology. As a result, the error rates and overall cost of prediction have fallen significantly since the industry adopted scorecards and machine-based prediction. From credit scoring to analytics and now to machine learning models, the fundamental problem statement in a credit decisioning model is one of prediction. Prediction is defined as the process of filling in missing information: it takes the information (data) one has and uses it to generate information one doesn't have.
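The "filling in missing information" framing can be made concrete with a scorecard-style model. A minimal sketch, where the weights, bias, and applicant attributes are all invented for illustration (real scorecards are fit to historical data):

```python
import math

# Hypothetical logistic scorecard: estimate the unknown outcome
# (will the applicant repay?) from attributes we do observe.
# All weights below are made up for illustration.
WEIGHTS = {"income_k": 0.03, "years_employed": 0.2, "prior_defaults": -1.5}
BIAS = -1.0

def repayment_probability(applicant):
    """Map known attributes to a probability of the unknown outcome."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic function

applicant = {"income_k": 60, "years_employed": 5, "prior_defaults": 0}
print(round(repayment_probability(applicant), 3))
```

The model never observes repayment directly; it uses the data it has to generate an estimate of the information it doesn't have, which is exactly the prediction problem the article describes.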

Article: AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR

In the increasingly popular quest of trying to make the tech world ethical, a new idea has emerged: just replace ‘ethics’ with ‘human rights’. Since no one seems to know what ‘ethics’ means, it is only natural that everyone is searching for a framework that is clearly defined, and all the better if it fits onto a single, one-page document: the Universal Declaration of Human Rights (UDHR). Unfortunately, like many shortcuts, this one also simply does not solve the problem. Let me start by summarizing the argument for using the UDHR to solve questions surrounding AI ethics. To spell out this argument, I will make use of this blog post and the report that it is based on from Harvard Law School: ‘Artificial Intelligence & Human Rights: Opportunities & Risks.’ Here is the basic argument: The UDHR provides us with a (1) guiding framework that is (2) universally agreed upon and that results in (3) legally binding rules – in contradistinction to ‘ethics’, which is (1) a matter of subjective preference (‘moral compass,’ if you will), (2) widely disputed, and (3) only as strong as the goodwill that supports it. Therefore, while appealing to ethics to solve normative questions in AI gets us into a tunnel of unending discussion, human rights is the light that we need to follow to get out on the ‘right’ side. Or so the argument goes.

Article: The Economic and Business Impacts of Artificial Intelligence: Reality, not Hype

The debate on Artificial Intelligence (AI) is characterized by hyperbole and hysteria. The hyperbole is due to two effects. First, the promotion of AI by self-interested investors; this can be termed the 'Google-effect', after Google's CEO Sundar Pichai, who declared AI to be 'probably the most important thing humanity has ever worked on'. He would say that. Second, the promotion of AI by tech-evangelists as a solution to humanity's fundamental problems, even death; this can be termed the 'Singularity-effect', after Ray Kurzweil, who believes AI will cause a 'Singularity' by 2045.

Article: Why Machine Learning won’t cut it

Current machine learning approaches will not get us to real AI: the kind that can truly understand you and learn new knowledge and skills by itself, as humans do.

Article: The Limitations of Machine Learning

Machine learning is often treated as a silver bullet for solving all problems, but it is not always the answer.
Limitation 1 – Ethics
Limitation 2 – Deterministic Problems
Limitation 3 – Data
Limitation 4 – Misapplication
Limitation 5 – Interpretability

Article: Avoiding Side Effects and Reward Hacking in Artificial Intelligence

I decided to take a step back, again. This time to the paper on AI safety published on the OpenAI blog in June 2016, Concrete Problems in AI Safety. At the time of writing it is July 26th, and I have great doubts as to whether I understand any more now than that collection of thinkers did then. Still, I will try my best to examine the paper.