Article: Bad News, Europe: Consumers Do Not Want to Buy an ‘Ethical’ Smart Toaster

The conventional wisdom among European policymakers is that the best way to compete against China and the United States in artificial intelligence (AI) is to differentiate European products by emphasizing ethical design. However, a new survey from the Center for Data Innovation finds that consumers are not willing to pay a premium for products labeled ‘ethical by design.’ If the European Union naively continues down this path, it may soon find itself falling further behind in AI competitiveness. Many European policymakers have long been enamored of the notion that increasing consumer trust in the digital economy leads to greater adoption because it allows them to argue that additional regulation is unambiguously good. Rather than coming at the expense of innovation, European policymakers claim that strong regulations will spur greater consumer trust and therefore lead to greater adoption of digital innovations. For example, former EU Commissioner Viviane Reding argued that ‘high standards of data protection will also give Europe’s cloud providers a competitive advantage.’


Article: 50 Years of Test (Un)fairness: Lessons for Machine Learning

Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). This work points the way towards future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.
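A classic testing-era definition of the kind the survey traces is Cleary's (1968) regression criterion: a test is fair for a group if the line predicting the criterion (say, job performance) from the test score is the same for every group. The sketch below is our illustration, not code from the paper; it assumes a scalar score, a scalar criterion, and a binary group label, and simply compares per-group least-squares fits.

```python
# A minimal sketch, not from the paper: Cleary's regression criterion says a
# test is fair if the line predicting the criterion from the test score is the
# same in every group. We compare per-group least-squares fits; all names and
# the synthetic data are illustrative.
import numpy as np

def cleary_gaps(score: np.ndarray, criterion: np.ndarray, group: np.ndarray):
    """Return |slope gap| and |intercept gap| between the two groups' fits."""
    fits = []
    for g in np.unique(group):
        mask = group == g
        slope, intercept = np.polyfit(score[mask], criterion[mask], deg=1)
        fits.append((slope, intercept))
    (s0, i0), (s1, i1) = fits
    return abs(s0 - s1), abs(i0 - i1)

# Synthetic example: identical slopes but shifted intercepts, i.e. the test
# systematically under-predicts the criterion for one group (Cleary-unfair).
rng = np.random.default_rng(0)
score = rng.normal(size=200)
group = np.repeat([0, 1], 100)
criterion = 0.8 * score + 0.5 * group + rng.normal(scale=0.3, size=200)
print(cleary_gaps(score, criterion, group))  # slope gap ~0, intercept gap ~0.5
```

Notably, this regression view of fairness maps directly onto the survey's distinction between classification- and regression-based methods for measuring (un)fairness.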


Paper: Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE.
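As a rough illustration of the two definitions (not the authors' implementation), the sketch below estimates both with plain inverse propensity weighting, assuming a binary protected attribute and a pandas data frame whose column names are placeholders. FACE contrasts average potential outcomes across the whole population, while FACT restricts the contrast to the group that actually holds the protected value, which is why the two can disagree.

```python
# A minimal sketch, not the paper's estimator: FACE and FACT via plain
# inverse propensity weighting (IPW). Assumes a binary protected attribute
# coded 0/1; "gender", "income", and the covariate names are placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def face_and_fact(df: pd.DataFrame, protected: str, outcome: str, covariates: list):
    a = df[protected].to_numpy()              # protected attribute A (0/1)
    y = df[outcome].to_numpy(dtype=float)     # observed outcome Y
    X = df[covariates].to_numpy(dtype=float)  # confounders to adjust for

    # Propensity e(x) = P(A = 1 | X = x), clipped to avoid extreme weights.
    e = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
    e = np.clip(e, 1e-3, 1 - 1e-3)

    # FACE: E[Y(1)] - E[Y(0)] over the whole population (Horvitz-Thompson form).
    face = np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))

    # FACT: E[Y(1) - Y(0) | A = 1]; reweight A = 0 units toward the A = 1 group.
    counterfactual_control = np.sum((1 - a) * y * e / (1 - e)) / a.sum()
    fact = y[a == 1].mean() - counterfactual_control
    return face, fact

# Estimates near zero are consistent with the corresponding fairness definition;
# e.g. face_and_fact(adult_df, "gender", "income", ["age", "education_num"]).
```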


Article: Can AI Be a Fair Judge in Court? Estonia Thinks So

Government usually isn’t the place to look for innovation in IT or new technologies like artificial intelligence. But Ott Velsberg might change your mind. As Estonia’s chief data officer, the 28-year-old graduate student is overseeing the tiny Baltic nation’s push to insert artificial intelligence and machine learning into services provided to its 1.3 million citizens. ‘We want the government to be as lean as possible,’ says the wiry, bespectacled Velsberg, an Estonian who is writing his PhD thesis at Sweden’s Umeå University on using the Internet of Things and sensor data in government services. Estonia’s government hired Velsberg last August to run a new project to introduce AI into various ministries to streamline services offered to residents.


Article: Facebook’s Most Recent Ethical tHUD

Today, the U.S. Department of Housing and Urban Development announced it is suing Facebook for violating the Fair Housing Act. It accuses the tech giant of ‘encouraging, enabling, and causing housing discrimination’ by allowing advertisers to block real estate ads from being shown to people based on race, religion, country of birth, and other protected characteristics. Advertisers could effectively redline by excluding inhabitants of certain zip codes from seeing their ads. They could also filter out people who are non-American-born, non-Christian, interested in Hispanic culture, or even interested in ‘deaf culture.’ With all of the ethical scandals plaguing Facebook in the last 12 months, ‘deaf culture’ seems like an apt description of the company itself.


Article: The Ethics Of AI: How To Avoid Harmful Bias And Discrimination

Build Machine Learning Models That Are Fundamentally Sound, Assessable, Inclusive, And Reversible


Article: Machine Learning and Discrimination

Most of the time, machine learning does not touch on particularly sensitive social, moral, or ethical issues. Someone gives us a data set and asks us to predict house prices based on given attributes, classify pictures into different categories, or teach a computer the best way to play PAC-MAN. But what do we do when our predictions would rest on attributes that anti-discrimination laws protect? How do we ensure that we do not embed racist, sexist, or other potential biases into our algorithms, be it explicitly or implicitly?
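One simple, standard audit in this spirit (our illustrative sketch, not code from the article) is to compare a trained model's positive-prediction rates across groups: the demographic-parity view of fairness that underlies the ‘80% rule’ for disparate impact in U.S. hiring guidance.

```python
# A minimal sketch, not from the article: audit a binary classifier by
# comparing positive-prediction (selection) rates across two groups.
import numpy as np

def selection_rate_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the two groups' positive-prediction rates; values far below
    1.0 mean the model selects group 1 much less often than group 0."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return rate_1 / rate_0

# Illustrative data: the model selects 75% of group 0 but only 25% of group 1,
# failing the 0.8 ('80% rule') threshold used in disparate-impact analysis.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(selection_rate_ratio(y_pred, group))  # 0.33 < 0.8
```

A check like this catches only one narrow notion of bias; a model can pass it while still being unfair under, say, equalized error rates.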


Article: The Impact and Ethics of Conversational Artificial Intelligence

Key Takeaways:
• As we use more natural interfaces with technology, such as language, our relationship with it is shifting: we increasingly humanise the systems we talk to.
• Improvements in natural language understanding, together with this changing relationship, mean we can use chatbots in ways we couldn’t before, both to augment human conversation and support and, indeed, to replace it.
• Advances in AI mean our experience can increasingly be personalised, as analysis of our physical, mental, and emotional state through our conversation and voice becomes possible.
• As technology provides more ambient and customised experiences, we risk exposing, perhaps without intending to, large amounts of data that could be used to target us or sold to other companies for their use.
• Those working in the software industry must understand and take responsibility for how we use conversational AI and our users’ data.