Paper: State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers

Machine learning is becoming an ever-present part of our lives, as many decisions, e.g. whether to grant a loan, are no longer made by humans but by machine learning algorithms. However, those decisions are often unfair, discriminating against individuals who belong to protected groups on the basis of race or gender. With the recent General Data Protection Regulation (GDPR) coming into effect, new awareness has been raised for such issues, and with computer scientists having such a large impact on people's lives, it is necessary to take action to discover and prevent discrimination. This work aims to give an introduction to discrimination, the legislative foundations to counter it, and strategies to detect and prevent such behavior in machine learning algorithms.
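The survey discusses detection strategies in general terms; purely as one concrete, hedged illustration (not a method taken from the paper itself), the snippet below computes the demographic-parity difference, a common statistic for flagging potentially discriminatory decision rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-decision rates between two groups.

    y_pred: array of 0/1 decisions (e.g. loan granted or not).
    group:  array of 0/1 protected-group membership.
    A value near 0 suggests decisions are made at similar rates for
    both groups; a large value flags potential discrimination.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return rate_a - rate_b

# Hypothetical decisions, for illustration only.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several competing fairness criteria; which one applies depends on the legal and moral framing the paper surveys.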


Book: Robotics, AI and the Future of Law

Artificial intelligence and related technologies are changing both the law and the legal profession. In particular, technological advances in fields ranging from machine learning to more advanced robots, including sensors, virtual realities, algorithms, bots, drones, self-driving cars, and more sophisticated ‘human-like’ robots are creating new and previously unimagined challenges for regulators. These advances also give rise to new opportunities for legal professionals to make efficiency gains in the delivery of legal services. With the exponential growth of such technologies, radical disruption seems likely to accelerate in the near future. This collection brings together a series of contributions by leading scholars in the newly emerging field of artificial intelligence, robotics, and the law. The aim of the book is to enrich legal debates on the social meaning and impact of this type of technology.


Article: Autonomy – Do we have the choice?

Why is it hard for humans to make some decisions? Whenever we have to make a complex decision, we have to deal with rationality, emotions, and our beliefs. Making decisions on certain issues can impose a heavy cognitive load. We find the situation complex and baffling. Sometimes we don’t make difficult decisions at all and leave the situation as it is for years.


Article: Cathy O’Neil discusses the current lack of fairness in artificial intelligence and much more.

Cathy O’Neil, the author of Weapons of Math Destruction, will discuss how societal biases are perpetuated by algorithms and why both transparency and auditability of algorithms will be necessary for a fairer future.


Article: AI Privacy and Ethical Compliance Toolkit

New applications of machine learning are raising ethical concerns about a host of issues, including bias, transparency, and privacy. In this tutorial, we will demonstrate tools and capabilities that can help data scientists address these concerns. The tools help bridge the gap between ethicists and regulators on one side and machine learning practitioners on the other. Specifically, we will present three tools:
(1) Privacy-Preserving Face Landmarks Detection: We will show how to design for privacy preservation in a face detection framework. This design approach enables the extraction of facial features without compromising the user's identity.
(2) Vehicle Data Assurance (VEDA): Autonomous vehicles are characterized by the collection of huge amounts of sensor data used to train ML models. We provide a solution, VEDA, to ensure compliance with strict privacy regulations regarding the use and handling of this data, and to increase trust in the collected data and its management lifecycle.
(3) Bias Detection and Remediation: It has been shown that computer vision algorithms can be biased with respect to certain ages, races, or genders, depending on the training datasets. We will show by example how to detect these biases and how tools can be used to rebalance a biased dataset; a minimal sketch of such rebalancing follows this list.
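The tutorial demonstrates dedicated tools; purely as a hedged sketch of what rebalancing can mean in practice, the snippet below oversamples rows of an underrepresented group until all groups appear equally often. The column names and data are hypothetical, not from the tutorial:

```python
import numpy as np
import pandas as pd

def rebalance_by_group(df, group_col, seed=0):
    """Oversample rows so every value of `group_col` is equally frequent.

    A naive illustration of dataset rebalancing; real remediation tools
    also consider label balance, duplicates, and augmentation quality.
    """
    rng = np.random.default_rng(seed)
    target = df[group_col].value_counts().max()
    parts = []
    for _, part in df.groupby(group_col):
        extra = target - len(part)
        if extra > 0:
            idx = rng.choice(part.index, size=extra, replace=True)
            part = pd.concat([part, df.loc[idx]])
        parts.append(part)
    return pd.concat(parts).reset_index(drop=True)

# Hypothetical face-image metadata: 'gender' is the imbalanced attribute.
data = pd.DataFrame({"image_id": range(6),
                     "gender": ["f", "f", "m", "m", "m", "m"]})
balanced = rebalance_by_group(data, "gender")
print(balanced["gender"].value_counts())  # f: 4, m: 4
```

Simple duplication like this can encourage overfitting to the repeated examples, which is why practical tools often prefer reweighting or data augmentation instead.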


Article: MIT researchers show how to detect and address AI bias without loss in accuracy

Bias in AI leads to poor search results or a degraded user experience for a predictive model deployed in social media, but it can seriously and negatively affect human lives when AI is used for things like health care, autonomous vehicles, criminal justice, or the predictive policing tactics used by law enforcement. In an age when AI is deployed virtually everywhere, this could lead to ongoing systematic discrimination. That is why researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a method to reduce bias in AI without reducing the accuracy of predictive results.
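The article does not spell out the CSAIL method itself. Purely as a hedged illustration of the general goal (mitigating group bias while monitoring accuracy), the sketch below applies classic sample reweighing, a different, well-known technique, to synthetic data and compares a baseline classifier against a reweighted one. All data, column choices, and numbers are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: features x, protected group g, binary label y.
n = 2000
g = rng.integers(0, 2, n)
x = rng.normal(size=(n, 2)) + g[:, None]  # groups have shifted features
y = (x[:, 0] + 0.5 * rng.normal(size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, g, random_state=0)

# Inverse-frequency weight per (group, label) cell: rare cells count more.
cells = g_tr * 2 + y_tr
counts = np.bincount(cells, minlength=4)
weights = (len(cells) / (4 * counts))[cells]

baseline = LogisticRegression().fit(X_tr, y_tr)
reweighted = LogisticRegression().fit(X_tr, y_tr, sample_weight=weights)

for name, model in [("baseline", baseline), ("reweighted", reweighted)]:
    acc = model.score(X_te, y_te)
    rates = [model.predict(X_te[g_te == v]).mean() for v in (0, 1)]
    print(f"{name}: accuracy={acc:.2f}, "
          f"positive-rate gap={abs(rates[0] - rates[1]):.2f}")
```

Comparing the accuracy and the positive-rate gap of the two models makes the trade-off the article highlights concrete: the interesting case is when the gap shrinks while accuracy stays essentially unchanged.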