Article: AI Safety: problematic cases for current algorithms

Artificial Intelligence is currently one of the hottest topics out there, more for bad reasons than good. On one hand, we have achieved major technological breakthroughs, putting us one step closer to creating thinking machines with human-like perception. On the other, we have given rise to a whole new danger for our society, one that is not external like a meteorite or a deadly bacterium, but that comes from within humanity itself. It would be foolish to think that something so powerful and revolutionary can only have a positive impact on society. Even though most of the aims within the community are geared towards noble causes, we cannot predict the medium- to long-term effects of inserting AI algorithms into every single part of our lives. Take social media, which is now widely considered capable of harming the human psyche, all in the service of generating more clicks. The truth is that no matter how aware we are of the environment around us, there will always be unwanted side effects from trying to improve people's lives with technology.


Paper: Designing Normative Theories of Ethical Reasoning: Formal Framework, Methodology, and Tool Support

The area of formal ethics is experiencing a shift from a single, standard approach to normative reasoning, as exemplified by so-called standard deontic logic, to a variety of application-specific theories. However, the adequate handling of normative concepts such as obligation, permission, prohibition, and moral commitment is challenging, as illustrated by the notorious paradoxes of deontic logic. In this article we introduce an approach to designing and evaluating theories of normative reasoning. In particular, we present a formal framework based on higher-order logic and a design methodology, and we discuss tool support. Moreover, we illustrate the approach with an example implementation, demonstrate different ways of using it, and discuss how the design of normative theories is thereby made accessible to non-specialist users and developers.
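To make the standard deontic logic mentioned above concrete, here is a minimal sketch in Python (our illustration, not the paper's higher-order-logic framework or its tooling) of its usual Kripke-style semantics: an obligation O(p) holds at a world exactly when p holds at every deontically ideal alternative, and permission is the dual of obligation. The worlds, the ideality relation, and the proposition name are all invented for this example.

```python
# Sketch of standard-deontic-logic semantics over a tiny Kripke model.
# ideal[w] = the set of worlds considered deontically ideal from w.
ideal = {
    "w0": {"w1", "w2"},
    "w1": {"w1"},
    "w2": {"w2"},
}

# valuation: which atomic propositions hold at which worlds (made up).
valuation = {
    "w0": set(),
    "w1": {"pay_taxes"},
    "w2": {"pay_taxes"},
}

def holds(prop: str, world: str) -> bool:
    """An atomic proposition holds at a world if the valuation says so."""
    return prop in valuation[world]

def obligatory(prop: str, world: str) -> bool:
    """O(prop) holds at `world` iff prop holds at all ideal alternatives."""
    return all(holds(prop, v) for v in ideal[world])

def permitted(prop: str, world: str) -> bool:
    """P(prop) = not O(not prop): prop holds at some ideal alternative."""
    return any(holds(prop, v) for v in ideal[world])

print(obligatory("pay_taxes", "w0"))  # True: every ideal world satisfies it
print(permitted("pay_taxes", "w0"))   # True: some ideal world satisfies it
```

Application-specific theories of the kind the paper targets replace or refine this one ideality relation; the sketch only shows the baseline they depart from.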


Paper: Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

Explaining complex, or even seemingly simple, machine learning models is a practical and ethical question, as well as a legal issue. Can I trust the model? Is it biased? Can I explain it to others? We want to explain individual predictions from a complex machine learning model by learning simple, interpretable explanations. Among existing approaches to interpreting complex models, Shapley values are the only method with a solid theoretical foundation. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like most other existing methods, it assumes independent features, which can yield very misleading explanations, even when a simple linear model is used for the predictions. We extend the Kernel SHAP method to handle dependent features, and we provide several examples of linear and non-linear models, with linear and non-linear feature dependence, where our method gives more accurate approximations to the true Shapley values. We also propose a method for aggregating individual Shapley values, so that a prediction can be explained by groups of dependent variables.
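To illustrate the independence assumption criticized above, the following sketch (ours, not the paper's code) computes brute-force Shapley values for a linear model when absent features are replaced by their marginal means; under that assumption the values collapse to the well-known closed form b[i] * (x[i] - mean[i]). All coefficients and data below are invented.

```python
# Exact Shapley values for a linear model f(x) = b0 + sum_i b[i]*x[i],
# using the feature-independence assumption: the value function
# v(S) = E[f(X) | X_S = x_S] fills in absent features with their means.
from itertools import combinations
from math import factorial

b0, b = 1.0, [2.0, -1.0, 0.5]   # model coefficients (made up)
x = [3.0, 4.0, 1.0]             # instance to explain (made up)
mean = [1.0, 2.0, 0.0]          # feature means of the data (made up)
n = len(b)

def v(S):
    """Expected prediction with features in S fixed, others at their mean."""
    return b0 + sum(b[i] * (x[i] if i in S else mean[i]) for i in range(n))

def shapley(i):
    """Brute-force Shapley value: weighted marginal contributions of i."""
    players = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(n):
        for S in combinations(players, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += w * (v(set(S) | {i}) - v(set(S)))
    return total

for i in range(n):
    closed_form = b[i] * (x[i] - mean[i])
    print(i, round(shapley(i), 6), round(closed_form, 6))  # the two agree
```

When features are dependent, replacing absent features by their marginal means no longer approximates the conditional expectation, which is exactly the gap the paper's extension addresses.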


Article: ISO 26000 – Social responsibility

Businesses and organizations do not operate in a vacuum. Their relationship to the society and environment in which they operate is a critical factor in their ability to continue operating effectively. It is also increasingly used as a measure of their overall performance. ISO 26000 provides guidance on how businesses and organizations can operate in a socially responsible way. This means acting in an ethical and transparent way that contributes to the health and welfare of society.


Paper: Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making

In recent years, automated data-driven decision-making systems have enjoyed tremendous success in a variety of fields (e.g., making product recommendations or guiding the production of entertainment). More recently, these algorithms are increasingly being used to assist socially sensitive decision-making (e.g., deciding whom to admit into a degree program or prioritizing individuals for public housing). Yet these automated tools may result in discriminative decision-making in the sense that they may treat individuals unfairly or unequally based on membership in a category or a minority, resulting in disparate treatment or disparate impact and violating both moral and ethical standards. This may happen when the training dataset is itself biased (e.g., if individuals belonging to a particular group have historically been discriminated against). However, it may also happen when the training dataset is unbiased, if the errors made by the system affect individuals belonging to a category or minority differently (e.g., if misclassification rates for Blacks are higher than for Whites). In this paper, we unify the definitions of unfairness across classification and regression. We propose a versatile mixed-integer optimization framework for learning optimal and fair decision trees, and variants thereof, to prevent disparate treatment and/or disparate impact as appropriate. This translates into a flexible schema for designing fair and interpretable policies suitable for socially sensitive decision-making. We conduct extensive computational studies showing that our framework improves on the state of the art in the field (which typically relies on heuristics), yielding non-discriminative decisions at lower cost to overall accuracy.
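For concreteness, here is a small sketch (ours, not the paper's mixed-integer formulation) of the two fairness notions named above, computed from a binary classifier's outputs: the disparate impact ratio compares positive-decision rates across groups, and the error-rate gap captures the unbiased-data failure mode described in the abstract, where misclassification rates differ between groups. The labels, predictions, and group memberships below are invented.

```python
# Disparate impact and group error-rate gap for a binary classifier.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth labels (made up)
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]   # classifier decisions (made up)
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

def positive_rate(g):
    """Share of individuals in group g receiving the positive decision."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def error_rate(g):
    """Share of individuals in group g who are misclassified."""
    errs = [t != p for t, p, gr in zip(y_true, y_pred, group) if gr == g]
    return sum(errs) / len(errs)

# Ratios below roughly 0.8 are often flagged under the "80% rule".
di = positive_rate("b") / positive_rate("a")
print("disparate impact ratio:", di)
print("error-rate gap:", abs(error_rate("a") - error_rate("b")))
```

A fairness-constrained learner of the kind the paper proposes would keep such quantities within chosen bounds while optimizing accuracy, rather than merely measuring them after the fact as this sketch does.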


Article: With facial recognition, shoplifting may get you banned in places you’ve never been

At my bodega down the block, photos of shoplifters sometimes litter the windows, a warning to would-be thieves that they’re being watched. Those unofficial wanted posters come and go, as incidents fade from the owner’s memory. But with facial recognition, getting caught in one store could mean a digital record of your face is shared across the country. Stores are already using the technology for security purposes and can share that data — meaning that if one store considers you a threat, every business in that network could come to the same conclusion. One mistake could mean never being able to shop again.


Article: KI Bundesverband e.V. – KI Gütesiegel

The KI Gütesiegel (AI quality seal) of the KI Bundesverband e.V. aims to ensure a human-centered use of artificial intelligence in the service of people. By defining and adhering to an overarching understanding of values and processes, the seal ensures ethically sound service and product development. At its core are the quality criteria of ethics, impartiality, transparency, and security and data protection. For each quality criterion, the necessary measures are laid down. At the time of its introduction, the seal comprises a declaration of self-commitment.


Article: The purpose of visualization is insight, not pictures: An interview with visualization pioneer Ben Shneiderman

'Visualization is such a powerful amplifier of human abilities that it should be illegal, unprofessional, and unethical to do data analysis using only statistical and algorithmic processes.' – Ben Shneiderman