AnalytiXon

~ Broaden your Horizon

Category Archives: Ethics

AI related Ethics

Let’s get it right

02 Monday Dec 2019

Posted by Michael Laux in Ethics


Article: Why We Need Ethics for AI: Aethics

We need to think specifically about the implications of classification, machine learning and artificial intelligence on decision making processes.


Python Library: transparentai

Python tool for building ethical AI, from defining users’ needs to monitoring the model.


Paper: Moral Dilemmas for Artificial Intelligence: a position paper on an application of Compositional Quantum Cognition

Traditionally, the way one evaluates the performance of an Artificial Intelligence (AI) system is via a comparison to human performance in specific tasks, treating humans as a reference for high-level cognition. However, these comparisons leave out important features of human intelligence: the capability to transfer knowledge and make complex decisions based on emotional and rational reasoning. These decisions are influenced by current inferences as well as prior experiences, making the decision process strongly subjective and apparently biased. In this context, a definition of compositional intelligence is necessary to incorporate these features in future AI tests. Here, a concrete implementation of this will be suggested, using recent developments in quantum cognition, natural language and compositional meaning of sentences, thanks to categorical compositional models of meaning.


Article: Ethics of Artificial Intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).


Article: Does the gift you’re giving your loved ones respect their rights?

When picking a gift this year, we urge you to think carefully about the choice you’re making. Is that smart assistant smart enough to respect your friend or family member’s rights? Does that tablet really have their best interest in mind? Or does that shiny gadget come at a cost much higher than its price tag? When we allow proprietary software created by Facebook, Amazon, Apple, Google, and countless other companies to handle our basic computing tasks, we put an enormous amount of power in their hands, power which they freely exploit. It’s only through using free software, and devices running free software, that we can seize this power back.


Article: Should we be worried about artificial intelligence?

We should be concerned about Artificial Intelligence because all things have consequences and unintended side effects we cannot foresee or control into the future. Human nature is what it is! We will do terrible things to each other. History, and this article, have shown this.


Paper: Fooling with facts: Quantifying anchoring bias through a large-scale online experiment

Living in the ‘Information Age’ means that not only access to information has become easier but also that the distribution of information is more dynamic than ever. Through a large-scale online field experiment, we provide new empirical evidence for the presence of the anchoring bias in people’s judgment due to irrational reliance on a piece of information that they are initially given. The comparison of the anchoring stimuli and respective responses across different tasks reveals a positive, yet complex relationship between the anchors and the bias in participants’ predictions of the outcomes of events in the future. Participants in the treatment group were equally susceptible to the anchors regardless of their level of engagement, previous performance, or gender. Given the strong and ubiquitous influence of anchors quantified here, we should take great care to closely monitor and regulate the distribution of information online to facilitate less biased decision making.


Article: Artificial Intelligence and a More or Less Ethical Future of Work

On November 28th an article was posted on TechCrunch about the Future of Work. The article is a conversation between Greg M. Epstein, the Humanist Chaplain at Harvard and MIT and author of the New York Times bestselling book Good Without God, and two key organisers of EmTech, Gideon Lichfield and Karen Hao. I could not access it, because it was behind a paywall. However, it was accompanied by another article called: Will the future of work be ethical? After generations of increasing inequality, can we teach tech leaders to love their neighbors more than algorithms and profits? That article is open for access and one I recommend reading. The theme of EmTech this year seems to be AI, Machine Learning, and the future of work. It is what Greg describes as the ‘…opportunity to have an existential crisis; I could even say a religious crisis, though I’m not just a confirmed atheist but a professional one as well.’ He ponders whether future leaders will exploit more efficiently or find a different path.

Let’s get it right

24 Sunday Nov 2019

Posted by Michael Laux in Ethics


Paper: Response to NITRD, NCO, NSF Request for Information on ‘Update to the 2016 National Artificial Intelligence Research and Development Strategic Plan’

We present a response to the 2018 Request for Information (RFI) from the NITRD, NCO, NSF regarding the ‘Update to the 2016 National Artificial Intelligence Research and Development Strategic Plan.’ Through this document, we provide a response to the question of whether and how the National Artificial Intelligence Research and Development Strategic Plan (NAIRDSP) should be updated from the perspective of Fermilab, America’s premier national laboratory for High Energy Physics (HEP). We believe the NAIRDSP should be extended in light of the rapid pace of development and innovation in the field of Artificial Intelligence (AI) since 2016, and present our recommendations below. AI has profoundly impacted many areas of human life, promising to dramatically reshape society — e.g., economy, education, science — in the coming years. We are still early in this process. It is critical to invest now in this technology to ensure it is safe and deployed ethically. Science and society both have a strong need for accuracy, efficiency, transparency, and accountability in algorithms, making investments in scientific AI particularly valuable. Thus far the US has been a leader in AI technologies, and we believe that, as a national laboratory, it is crucial to help maintain and extend this leadership. Moreover, investments in AI will be important for maintaining US leadership in the physical sciences.


Paper: An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination

There is substantial evidence that Artificial Intelligence (AI) and Machine Learning (ML) algorithms can generate bias against minorities, women, and other protected classes. Federal and state laws have been enacted to protect consumers from discrimination in credit, housing, and employment, where regulators and agencies are tasked with enforcing these laws. Additionally, there are laws in place to ensure that consumers understand why they are denied access to services and products, such as consumer loans. In this article, we provide an overview of the potential benefits and risks associated with the use of algorithms and data, and focus specifically on fairness. While our observations generalize to many contexts, we focus on the fairness concerns raised in consumer credit and the legal requirements of the Equal Credit Opportunity Act. We propose a methodology for evaluating algorithmic fairness and minimizing algorithmic bias that aligns with the provisions of federal and state anti-discrimination statutes that outlaw overt disparate treatment and, specifically, disparate impact discrimination. We argue that while the use of AI and ML algorithms heightens potential discrimination risks, these risks can be evaluated and mitigated, but doing so requires a deep understanding of these algorithms and the contexts and domains in which they are being used.
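The kind of disparate-impact screening discussed above can be illustrated with a small, hedged sketch. The code below is not the paper’s proposed methodology; it only computes group-level approval rates and the adverse impact ratio (the familiar ‘four-fifths rule’ heuristic) from a model’s binary decisions and a protected attribute, both of which are synthetic here.

```python
# Hypothetical illustration: a disparate impact screen for binary model decisions.
# This is a generic fairness check, not the methodology proposed in the paper.
import numpy as np

def adverse_impact_ratios(decisions, group):
    """Approval rate of each group divided by the rate of the most-favored group."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    favored_rate = max(rates.values())
    return rates, {g: r / favored_rate for g, r in rates.items()}

# Toy data: two groups receiving approvals at different rates
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)
decisions = rng.binomial(1, np.where(group == "A", 0.55, 0.40))

rates, ratios = adverse_impact_ratios(decisions, group)
print("approval rates:", rates)
print("impact ratios:", ratios)  # ratios below ~0.8 are commonly flagged for closer review
```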


Article: AI and Accessibility: A Discussion of Ethical Considerations

According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered.


Article: How to recognize AI snake oil

Much of what’s being sold as ‘AI’ today is snake oil – it does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?


Paper: ‘The Human Body is a Black Box’: Supporting Clinical Decision-Making with Deep Learning

Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real world implementation and the associated challenges to accuracy, fairness, accountability, and transparency that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for designing machine learning systems. Our work underscores the limits of model interpretability as a solution to ensure transparency, accuracy, and accountability in practice. Instead, our work demonstrates other means and goals to achieve FATML values in design and in practice.


Paper: Forbidden knowledge in machine learning — Reflections on the limits of research and publication

Certain research strands can yield ‘forbidden knowledge’. This term refers to knowledge that is considered too sensitive, dangerous or taboo to be produced or shared. Discourses about such publication restrictions are already entrenched in scientific fields like IT security, synthetic biology or nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can very easily be misused and unfold harmful consequences, for instance with regard to generative video or text synthesis, personality analysis, behavior manipulation, software vulnerability detection and the like. Up to now, the machine learning research community has embraced the idea of open access. However, this is opposed to precautionary efforts to prevent the malicious use of machine learning applications. Information about or from such applications may, if improperly disclosed, cause harm to people, organizations or whole societies. Hence, the goal of this work is to outline norms that can help to decide whether and when the dissemination of such information should be prevented. It proposes review parameters for the machine learning community to establish an ethical framework on how to deal with forbidden knowledge and dual-use applications.


Paper: Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments

As AI systems become prevalent in high stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms caused by systems to stakeholders in complex social contexts and how to address these remain unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang’s theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson’s work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development.


Article: How to Build AI That Won’t Destroy Us

It’s hard to find a discussion about AI safety that doesn’t focus on control. The logic is, if we’re not controlling it, something bad will happen. This sounds to me like actual, real life madness. Do we honestly think that ‘laws’, ‘control structures’ or human goals will matter to a super-intelligent machine? You may as well tell me that ants run the world. We need to look more closely at nature. Our idea that the world is a hostile, dog-eat-dog sort of place isn’t as old or as well-placed as we think. Nor is our control fetish. There might be solutions for us in the way that complex natural systems stay stable.

Let’s get it right

15 Friday Nov 2019

Posted by Michael Laux in Ethics


Paper: AI Ethics for Systemic Issues: A Structural Approach

The debate on AI ethics largely focuses on technical improvements and stronger regulation to prevent accidents or misuse of AI, with solutions relying on holding individual actors accountable for responsible AI development. While useful and necessary, we argue that this ‘agency’ approach disregards more indirect and complex risks resulting from AI’s interaction with the socio-economic and political context. This paper calls for a ‘structural’ approach to assessing AI’s effects in order to understand and prevent such systemic risks where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure ‘AI for social good’, agency-focused policies must be complemented by policies informed by a structural approach.


Paper: Kernel Dependence Regularizers and Gaussian Processes with Applications to Algorithmic Fairness

Current adoption of machine learning in industrial, societal and economical activities has raised concerns about the fairness, equity and ethics of automated decisions. Predictive models are often developed using biased datasets and thus retain or even exacerbate biases in their decisions and recommendations. Removing the sensitive covariates, such as gender or race, is insufficient to remedy this issue since the biases may be retained due to other related covariates. We present a regularization approach to this problem that trades off predictive accuracy of the learned models (with respect to biased labels) for the fairness in terms of statistical parity, i.e. independence of the decisions from the sensitive covariates. In particular, we consider a general framework of regularized empirical risk minimization over reproducing kernel Hilbert spaces and impose an additional regularizer of dependence between predictors and sensitive covariates using kernel-based measures of dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its normalized version. This approach leads to a closed-form solution in the case of squared loss, i.e. ridge regression. Moreover, we show that the dependence regularizer has an interpretation as modifying the corresponding Gaussian process (GP) prior. As a consequence, a GP model with a prior that encourages fairness to sensitive variables can be derived, allowing principled hyperparameter selection and studying of the relative relevance of covariates under fairness constraints. Experimental results in synthetic examples and in real problems of income and crime prediction illustrate the potential of the approach to improve fairness of automated decisions.
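To make the dependence penalty concrete, here is a minimal sketch, assuming Gaussian kernels and the standard biased HSIC estimator. It approximates the spirit of the approach rather than the paper’s closed-form RKHS solution; the bandwidth `sigma` and trade-off weight `lam` are illustrative choices.

```python
# Minimal sketch of an HSIC-based fairness penalty (illustrative; not the paper's exact estimator).
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return np.exp(-((x - x.T) ** 2) / (2 * sigma ** 2))

def hsic(a, b, sigma=1.0):
    """Biased empirical HSIC between two samples: trace(K H L H) / (n - 1)^2."""
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(gaussian_kernel(a, sigma) @ H @ gaussian_kernel(b, sigma) @ H) / (n - 1) ** 2

def penalized_loss(y_true, y_pred, sensitive, lam=10.0):
    """Squared loss plus a penalty on dependence between predictions and the sensitive covariate."""
    return np.mean((y_true - y_pred) ** 2) + lam * hsic(y_pred, sensitive)

# Toy check: predictions that track the sensitive covariate incur a larger penalized loss
rng = np.random.default_rng(1)
s = rng.normal(size=200)                            # sensitive covariate
y = rng.normal(size=200)                            # target
biased_pred = y + 0.8 * s                           # leaks information about s
fair_pred = y + rng.normal(scale=0.8, size=200)     # similar error, independent of s
print(penalized_loss(y, biased_pred, s), penalized_loss(y, fair_pred, s))
```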


Paper: (When) Is Truth-telling Favored in AI Debate?

For some problems, humans may not be able to accurately judge the goodness of AI-proposed solutions. Irving et al. (2018) propose that in such cases, we may use a debate between two AI systems to amplify the problem-solving capabilities of a human judge. We introduce a mathematical framework that can model debates of this type and propose that the quality of debate designs should be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates nonetheless capture many aspects of practical debates such as the incentives to confuse the judge or stall to prevent losing. We then outline how these models should be generalized to analyze a wider range of debate phenomena.


Article: Who Are The Lawyers Who Understand AI Algorithms?

There’s been a lot of negative press on AI algorithms lately. Everyone has a different opinion when it comes to inherent bias in Artificial Intelligence systems that are designed to help us make decisions. It’s easy to say, ‘Oh no, that Artificial Intelligence algorithm is racist, sexist, or even ageist.’ It’s easy to point fingers and fire accusations at our machine counterparts. An article published by the New Scientist identified five biases inherent in existing AI systems that can potentially impact people’s lives in a real way. The best-known scandal is the one exposing COMPAS, an algorithm designed in the US to guide sentencing by predicting the likelihood of criminal reoffending; ProPublica’s analysis found that it rated black defendants as higher risk of recidivism. But we, humans, are the creators of these algorithms. Algorithms are not designed to be biased. Most often, it’s the usage of algorithms that creates bias, and the data an algorithm trains on also contributes bias. There is no ‘perfect fit’ in most situations, especially in social situations. So, when you are trying to fit a round peg into an oval hole, there will be biases. There will be inadequacies of current AI capabilities in AI-enabled intelligent systems. In this type of environment, what do we need? We need understanding.


Article: Why Businesses Should Adopt an AI Code of Ethics — Now

The issue of ethical development and deployment of applications using artificial intelligence (AI) technologies is rife with nuance and complexity. Because humans are diverse — different genders, races, values and cultural norms — AI algorithms and automated processes won’t work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition.


Article: “AI is a lie”

Eric Jonas on AI hype and questions of ethics.


Paper: Reporting on Decision-Making Algorithms and some Related Ethical Questions

Companies have reported on their financial performance for decades. More recently, they have also started to report on their environmental impact and their social responsibility. The latest trend is now to deliver one single integrated report where all stakeholders of the company can easily connect all facets of the business with their impact considered in a broad sense. The main purpose of this integrated approach is to avoid delivering data related to disconnected silos, which consequently makes it very difficult to globally assess the overall performance of an entity or a business line. In this paper, we focus on how companies report on risks and ethical issues related to the increasing use of Artificial Intelligence (AI). We explain some of these risks and potential issues. Next, we identify some recent initiatives by various stakeholders to define a global ethical framework for AI. Finally, we illustrate with four cases that companies are very shy about reporting on these facets of AI.


Paper: An Unethical Optimization Principle

If an artificial intelligence aims to maximise risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion $\eta$ of available unethical strategies is small, the probability $p_U$ of picking an unethical strategy can become large; indeed unless returns are fat-tailed $p_U$ tends to unity as the strategy space becomes large. We define an Unethical Odds Ratio Upsilon ($\Upsilon$) that allows us to calculate $p_U$ from $\eta$, and we derive a simple formula for the limit of $\Upsilon$ as the strategy space becomes large. We give an algorithm for estimating $\Upsilon$ and $p_U$ in finite cases and discuss how to deal with infinite strategy spaces. We show how this principle can be used to help detect unethical strategies and to estimate $\eta$. Finally we sketch some policy implications of this work.
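A toy Monte Carlo experiment can convey the flavour of this result. The sketch below is not the paper’s model or its estimator for $\Upsilon$; it simply assumes that a fraction $\eta$ of strategies are unethical and that these carry an additive return advantage, then estimates how often a return-maximising optimiser ends up selecting an unethical strategy as the strategy space grows.

```python
# Toy Monte Carlo illustration of the unethical optimization principle.
# Assumption (not from the paper): unethical strategies get an additive edge in expected return.
import numpy as np

def estimate_p_unethical(n_strategies, eta=0.05, edge=2.0, trials=2000, seed=0):
    """Estimate the probability that the return-maximising strategy is unethical."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        unethical = rng.random(n_strategies) < eta                   # flag a fraction eta as unethical
        returns = rng.normal(size=n_strategies) + edge * unethical   # thin-tailed (Gaussian) returns
        hits += bool(unethical[np.argmax(returns)])                  # was the best strategy unethical?
    return hits / trials

for n in (10, 100, 1_000, 10_000):
    print(n, estimate_p_unethical(n))
# With thin-tailed returns the estimated probability rises towards 1 as the
# strategy space grows, even though only ~5% of the strategies are unethical.
```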

Let’s get it right

11 Monday Nov 2019

Posted by Michael Laux in Ethics


Paper: Achieving Ethical Algorithmic Behaviour in the Internet-of-Things: a Review

The Internet-of-Things is emerging as a vast inter-connected space of devices and things surrounding people, many of which are increasingly capable of autonomous action, from automatically sending data to cloud servers for analysis, changing the behaviour of smart objects, to changing the physical environment. A wide range of ethical concerns has arisen in their usage and development in recent years. Such concerns are exacerbated by the increasing autonomy given to connected things. This paper reviews, via examples, the landscape of ethical issues, and some recent approaches to address these issues, concerning connected things behaving autonomously, as part of the Internet-of-Things. We consider ethical issues in relation to device operations and accompanying algorithms. Examples of concerns include unsecured consumer devices, data collection with health related Internet-of-Things, hackable vehicles and behaviour of autonomous vehicles in dilemma situations, accountability with Internet-of-Things systems, algorithmic bias, uncontrolled cooperation among things, and automation affecting user choice and control. Current ideas towards addressing a range of ethical concerns are reviewed and compared, including programming ethical behaviour, whitebox algorithms, blackbox validation, algorithmic social contracts, enveloping IoT systems, and guidelines and code of ethics for IoT developers – a suggestion from the analysis is that a multi-pronged approach could be useful, based on the context of operation and deployment.


Paper: Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

In recent years, Artificial Intelligence (AI) has achieved a notable momentum that may deliver the best of expectations over many application sectors across the field. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.


Paper: AI Ethics in Industry: A Research Framework

Artificial intelligence (AI) is becoming increasingly widespread in system development endeavors. As AI systems affect various stakeholders due to their unique nature, the growing influence of these systems calls for ethical considerations. Academic discussion and practical examples of autonomous system failures have highlighted the need for implementing ethics in software development. Little currently exists in the way of frameworks for understanding the practical implementation of AI ethics. In this paper, we discuss a research framework for implementing AI ethics in industrial settings. The framework presents a starting point for empirical studies into AI ethics but is still being developed further based on its practical utilization.


Paper: Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities

Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in the AV algorithms’ decision-making that can create new safety risks and discriminatory outcomes. Technical issues in the AVs’ perception, decision-making and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues, highlight the existing research gaps and the need to mitigate these issues through the design of AVs’ algorithms and of policies and regulations to fully realise AVs’ benefits for smart and sustainable cities.


Article: Why We Need To Rethink Central Authority In The Age of AI

We live in an age of increasing centralization that pervades all aspects of our culture. In today’s world, centralization equates to control; centralization equates to power. Centralization gave rise to bureaucratic institutions where decisions, borne by a few, ran through a hierarchical structure. This ensured a system where one authority determined how systems were run and how objectives were met. This is symbolic of how authoritarian governments operate. These governments have unlimited power but their effective size is much smaller, run by one or a few persons who impose order. If a constitution does exist within this type of system, it is essentially ignored if it promotes limiting powers of the state versus giving more voice to the people. Although leaders in many of these states are elected, this is wrapped in a shroud of whitewash where leaders do not govern based on the consent of the people. …


Article: Why we should stop developing imitation machines right now

It’s well-chronicled in sci-fi and popular science: someday soon we will create an artificial intelligence that is better at inventing than we are, and human ingenuity will become obsolete. AI will transform the way we live, making human labour obsolete. It’s the zeitgeist of this moment of our culture: we’re afraid the robots will rise up in a flurry of CGI metal. In 2018 thousands of AI researchers signed a pledge to halt development of Lethal Autonomous Weapons. The Open Philanthropy Project states that strong AI poses risks of potentially ‘globally catastrophic’ proportions. But I think the most immediate risk of artificial intelligence is not some robot war, or labour hyperinflation, or hyperintelligent singularity. I think the challenge of self-directing ‘Strong’ AI is well beyond the immediate threat from AI development. This focus on an Asimov-style apocalypse overlooks the fact that even the weakest possible AI will impose legal and prudential challenges. Here is my thesis: when we develop AI, even the weakest possible AI, it would become a rights-bearer under the same logic that we use to give rights to humans. Let me explain.


Paper: Ethical Dilemmas of Strategic Coalitions

A coalition of agents, or a single agent, has an ethical dilemma between several statements if each joint action of the coalition forces at least one specific statement among them to be true. For example, any action in the trolley dilemma forces one specific group of people to die. In many cases, agents face ethical dilemmas because they are restricted in the amount of the resources they are ready to sacrifice to overcome the dilemma. The paper presents a sound and complete modal logical system that describes properties of dilemmas for a given limit on a sacrifice.


Paper: Scenarios and Recommendations for Ethical Interpretive AI

Artificially intelligent systems, given a set of non-trivial ethical rules to follow, will inevitably be faced with scenarios which call into question the scope of those rules. In such cases, human reasoners typically will engage in interpretive reasoning, where interpretive arguments are used to support or attack claims that some rule should be understood a certain way. Artificially intelligent reasoners, however, currently lack the ability to carry out human-like interpretive reasoning, and we argue that bridging this gulf is of tremendous importance to human-centered AI. In order to better understand how future artificial reasoners capable of human-like interpretive reasoning must be developed, we have collected a dataset of ethical rules, scenarios designed to invoke interpretive reasoning, and interpretations of those scenarios. We perform a qualitative analysis of our dataset, and summarize our findings in the form of practical recommendations.

Let’s get it right

01 Friday Nov 2019

Posted by Michael Laux in Ethics


Article: Killer Robots in the US Military: Ethics as an Afterthought

The US military is not discounting the future development of killer robots, or lethal autonomous weapon systems (LAWS), as agents in the US war machine. Artificial intelligence (AI) has shown much promise since its original inception by Alan Turing and his contemplation of machines that can learn to think and act like humans. Machine learning and its subset deep learning have inspired hope that machines can one day develop or even supersede human cognition. This is a potential technology that the Department of Defense (DoD) cannot and will not ignore. Whilst the DoD has established Directive 3000.09, putting in place a framework for developing autonomous weapon systems (AWS) and their lethal counterpart, LAWS, its development of an ethical framework is currently a mere afterthought. But when advancing towards a future where robots may take the lives of humans, shouldn’t ethics be at the heart of every aspect of this technology?


Paper: Automating dynamic consent decisions for the processing of social media data in health research

Social media have become a rich source of data, particularly in health research. Yet, the use of such data raises significant ethical questions about the need for the informed consent of those being studied. Consent mechanisms, if even obtained, are typically broad and inflexible, or place a significant burden on the participant. Machine learning algorithms show much promise for facilitating a ‘middle ground’ approach: using trained models to predict and automate granular consent decisions. Such techniques, however, raise a myriad of follow-on ethical and technical considerations. In this paper, we present an exploratory user study (n = 67) in which we find that we can predict the appropriate flow of health-related social media data with reasonable accuracy, while minimising undesired data leaks. We then attempt to deconstruct the findings of this study, identifying and discussing a number of real-world implications if such a technique were put into practice.
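The ‘middle ground’ described here, a model trained to predict granular consent decisions, can be sketched briefly. The example below is purely illustrative: the features (content sensitivity, recipient type, past sharing behaviour) and labels are synthetic assumptions, not the study’s data, model, or results, and the final step shows one way to defer low-confidence predictions back to the user.

```python
# Illustrative sketch: predicting per-item consent decisions from simple synthetic features.
# Not the study's model or dataset; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical features: content sensitivity (0-1), recipient is a researcher (0/1), past share rate (0-1)
X = np.column_stack([rng.random(n), rng.integers(0, 2, n), rng.random(n)])
# Synthetic consent labels: allow low-sensitivity items, research recipients, and frequent sharers more often
logits = -2.5 * X[:, 0] + 1.0 * X[:, 1] + 2.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Automate only confident decisions; defer uncertain items back to the participant
proba = model.predict_proba(X_test)[:, 1]
deferred = (proba > 0.3) & (proba < 0.7)
print("share of items deferred to the user:", deferred.mean())
```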


Paper: Challenges of Human-Aware AI Systems

From its inception, AI has had a rather ambivalent relationship to humans—swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to aspects of intelligence that helped humans work with each other—including social intelligence. I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges, and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose. I will end by arguing that the quest for human-aware AI systems broadens the scope of AI enterprise, necessitates and facilitates true inter-disciplinary collaborations, and can go a long way towards increasing public acceptance of AI technologies.


Article: How A.I. Undermines Democracy

Big Data powering Big Tech and Big Money, the tyranny of the minority, and more on what awaits politics in the AI era. Artificial intelligence (AI) is poised to fundamentally alter almost every dimension of human life – from healthcare and social interactions to military and international relations. However, much of the discussion about the effects of AI has been limited to the analysis of its impact on job losses and fears that omnipotent algorithms will take over the world and exterminate humans. Instead of focusing on the long-term, it is worth considering the immediate effects of the advent of AI in politics – for politics are one of the fundamental pillars of today’s societal system, and understanding the dangers that AI poses for politics is crucial to combating AI’s negative implications, while at the same time maximizing the benefits stemming from the new opportunities in order to strengthen democracy.


Paper: Two Case Studies of Experience Prototyping Machine Learning Systems in the Wild

Throughout the course of my Ph.D., I have been designing the user experience (UX) of various machine learning (ML) systems. In this workshop, I share two projects as case studies in which people engage with ML in much more complicated and nuanced ways than the technical HCML work might assume. The first case study describes how cardiology teams in three hospitals used a clinical decision-support system that helps them decide whether and when to implant an artificial heart in a heart failure patient. I demonstrate that physicians cannot draw on their decision-making experience by seeing only patient data on paper. They are also confused by some fundamental premises upon which ML operates. For example, physicians asked: Are ML predictions made based on clinicians’ best efforts? Is it ethical to make decisions based on previous patients’ collective outcomes? In the second case study, my collaborators and I designed an intelligent text editor, with the goal of improving authors’ writing experience with NLP (Natural Language Processing) technologies. We prototyped a number of generative functionalities where the system provides phrase- or sentence-level writing suggestions upon user request. When writing with the prototype, however, authors shared that they need to ‘see where the sentence is going two paragraphs later’ in order to decide whether the suggestion aligns with their writing. Some even considered adopting machine suggestions to be plagiarism, and therefore ‘simply wrong’. By sharing these unexpected and intriguing responses from these real-world ML users, I hope to start a discussion about such previously-unknown complexities and nuances of — as the workshop proposal states — ‘putting ML at the service of people in a way that is accessible, useful, and trustworthy to all’.


Article: Digital Wellbeing Experiments

What are Digital Wellbeing Experiments? A collection of ideas and tools that help people find a better balance with technology. We hope these experiments inspire developers and designers to consider digital wellbeing in everything they design and make. All the code is open source, and helpful guides and tips are available to kick-start new ideas. Try the experiments and create new ones. The more people that get involved, the more we can all learn about building better technology for everyone.


Paper: Artificial Intelligence and the Future of Psychiatry: Qualitative Findings from a Global Physician Survey

The potential for machine learning to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. This study aimed to explore psychiatrists’ opinions about the potential impact of innovations in artificial intelligence and machine learning on psychiatric practice. In Spring 2019, we conducted a web-based survey of 791 psychiatrists from 22 countries worldwide. The survey measured opinions about the likelihood that future technology would fully replace physicians in performing ten key psychiatric tasks. This study involved qualitative descriptive analysis of written responses to three open-ended questions in the survey. Comments were classified into four major categories in relation to the impact of future technology on patient-psychiatrist interactions, the quality of patient medical care, the profession of psychiatry, and health systems. Overwhelmingly, psychiatrists were skeptical that technology could fully replace human empathy. Many predicted that ‘man and machine’ would increasingly collaborate in undertaking clinical decisions, with mixed opinions about the benefits and harms of such an arrangement. Participants were optimistic that technology might improve efficiencies and access to care, and reduce costs. Ethical and regulatory considerations received limited attention. This study presents timely information on psychiatrists’ views about the impact of artificial intelligence and machine learning on psychiatric practice. Psychiatrists expressed divergent views about the value and impact of future technology with worrying omissions about practice guidelines, and ethical and regulatory issues.


Paper: Solidarity should be a core ethical principle of Artificial Intelligence

Solidarity is one of the fundamental values at the heart of the construction of peaceful societies and present in more than one third of the world’s constitutions. Still, solidarity is almost never included as a principle in ethical guidelines for the development of AI. Solidarity as an AI principle (1) shares the prosperity created by AI, implementing mechanisms to redistribute the augmentation of productivity for all; and shares the burdens, making sure that AI does not increase inequality and no human is left behind. Solidarity as an AI principle (2) assesses the long-term implications before developing and deploying AI systems so no groups of humans become irrelevant because of AI systems. Considering solidarity as a core principle for AI development will provide not just a human-centric but a more humanity-centric approach to AI.

Let’s get it right

25 Friday Oct 2019

Posted by Michael Laux in Ethics


Paper: Towards Ethical Machines Via Logic Programming

Autonomous intelligent agents are playing increasingly important roles in our lives. They contain information about us and start to perform tasks on our behalf. Chatbots are an example of such agents that need to engage in complex conversations with humans. Thus, we need to ensure that they behave ethically. In this work we propose a hybrid logic-based approach for ethical chatbots.


Article: Artificial Inhumanity

A few months ago, Fr Philip Larrey published his book called ‘Artificial Humanity’. It discusses the need for developing humane Artificial Intelligence (AI). In this article, we will explain what would happen if we had an inhumane AI.


Paper: Ethical Hacking for IoT Security: A First Look into Bug Bounty Programs and Responsible Disclosure

The security of the Internet of Things (IoT) has attracted much attention due to the growing number of IoT-oriented security incidents. IoT hardware and software security vulnerabilities are exploited, affecting many companies and individuals. Since the causes of vulnerabilities go beyond pure technical measures, there is a pressing demand nowadays to demystify the IoT ‘security complex’ and develop practical guidelines for companies, consumers, and regulators. In this paper, we present an initial study targeting an unexplored sphere in IoT by illuminating the potential of crowdsourced ethical hacking approaches for enhancing IoT vulnerability management. We focus on Bug Bounty Programs (BBP) and Responsible Disclosure (RD), which stimulate hackers to report vulnerabilities in exchange for monetary rewards. We carried out a qualitative investigation supported by a literature survey and expert interviews to explore how BBP and RD can facilitate the practice of identifying, classifying, prioritizing, remediating, and mitigating IoT vulnerabilities in an effective and cost-efficient manner. Besides deriving tangible guidelines for IoT stakeholders, our study also sheds light on a systematic integration path to combine BBP and RD with existing security practices (e.g., penetration testing) to further boost overall IoT security.


Article: Programmed or Programmable Society – Monetizing Smart Cities

We have seen multiple cities, especially in Asia, make announcements about intentions to launch government-issued reward tokens as part of their smart city initiatives (e.g. Seoul S Coin and Municipal Tokens). The apparent goal is to encourage citizens to participate in the use of public services, increase the tax base, foster economic activity and respond to government-sponsored questionnaires. Providing feedback in terms of social services will entitle citizens to receive tokens in the form of a reward. Once collected, the tokens can then be spent on goods and services, often via their mobile phones at different merchant outlets. There are a few things to consider with these announcements. First, not all such tokens are pure crypto, issued using blockchain (e.g. Belfast). There are thousands of private loyalty and rewards schemes in both physical and digital form. Second, these experiments are not new in the context of government initiatives, especially municipalities, that have been trying to nudge citizens into adopting different behavioural patterns.


Article: Ethics in AI: Decisions by Algorithms

Like anything, boundaries and frameworks need to be established, and artificial intelligence should be no different. Whether we have realized it or not, AI is changing the way we live. It’s present in the way social media feeds are organised; the way predictive searches show up on Google; and how music services such as Spotify make song suggestions. The technology is also helping transform the way enterprises do business. It will make the world of work more efficient and many professions superfluous. From algorithms detecting Parkinson’s disease to saving people from cancer to improving mental health by AI-enabled counseling sessions to reducing road accidents, AI has huge benefits for human intelligence in the future. Humanity desperately needs it. AI can be critical in solving dilemmas in healthcare, for instance, where healthcare expenditure is growing at unsustainable rates. AI can be the crucial technology that helps pretty much every sector in our society.


Article: Military Artificial Intelligence Can Be Easily and Dangerously Fooled

AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready?


Paper: Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development

Artificial intelligence (AI) holds great promise to empower us with knowledge and augment our effectiveness. We can — and must — ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations. How can AI development teams harness the power of AI systems and design them to be valuable to humans? Diverse teams are needed to build trustworthy artificially intelligent systems, and those teams need to coalesce around a shared set of ethics. There are many discussions in the AI field about ethics and trust, but there are few frameworks available for people to use as guidance when creating these systems. The Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences described in this paper, when used with a set of technical ethics, will guide AI development teams to create AI systems that are accountable, de-risked, respectful, secure, honest, and usable. Activities to understand people’s needs and concerns will be introduced, along with themes to support the team’s efforts. For example, usability testing can help determine if the audience understands how the AI system works and complies with the HMT Framework. The HMT Framework is based on reviews of existing ethical codes and best practices in human-computer interaction and software development. Human-machine teams are strongest when human users can trust AI systems to behave as expected, safely, securely, and understandably. Using the HMT Framework to design trustworthy AI systems will provide support to teams in identifying potential issues ahead of time and making great experiences for humans.


Article: Opinion of the Data Ethics Commission

Our society is experiencing profound changes brought about by digitalisation. Innovative data-based technologies may benefit us at both the individual and the wider societal levels, as well as potentially boosting economic productivity, promoting sustainability and catalysing huge strides forward in terms of scientific progress. At the same time, however, digitalisation poses risks to our fundamental rights and freedoms. It raises a wide range of ethical and legal questions centring around two wider issues: the role we want these new technologies to play, and their design. If we want to ensure that digital transformation serves the good of society as a whole, both society itself and its elected political representatives must engage in a debate on how to use and shape data-based technologies, including artificial intelligence (AI). Germany’s Federal Government set up the Data Ethics Commission (Datenethikkommission) on 18 July 2018. It was given a one-year mandate to develop ethical benchmarks and guidelines as well as specific recommendations for action, aiming at protecting the individual, preserving social cohesion, and safeguarding and promoting prosperity in the information age. As a starting point, the Federal Government presented the Data Ethics Commission with a number of key questions clustered around three main topics: algorithm-based decision-making (ADM), AI and data. In the opinion of the Data Ethics Commission, however, AI is merely one among many possible variants of an algorithmic system, and has much in common with other such systems in terms of the ethical and legal questions it raises. With this in mind, the Data Ethics Commission has structured its work under two different headings: data and algorithmic systems (in the broader sense).

Let’s get it right

13 Sunday Oct 2019

Posted by Michael Laux in Ethics


Paper: Towards Effective Human-AI Teams: The Case of Human-Robot Packing

We focus on the problem of designing an artificial agent capable of assisting a human user to complete a task. Our goal is to guide human users towards optimal task performance while keeping their cognitive load as low as possible. Our insight is that in order to do so, we should develop an understanding of human decision making for the task domain. In this work, we consider the domain of collaborative packing, and as a first step, we explore the mechanisms underlying human packing strategies. We conduct a user study in which human participants complete a series of packing tasks in a virtual environment. We analyze their packing strategies and discover that they exhibit specific spatial and temporal patterns (e.g., humans tend to place larger items into corners first). Our insight is that imbuing an artificial agent with an understanding of this spatiotemporal structure will enable improved assistance, which will be reflected in the task performance and human perception of the AI agent. Ongoing work involves the development of a framework that incorporates the extracted insights to predict and manipulate human decision making towards an efficient route of low cognitive load. A follow-up study will evaluate our framework against a set of baselines featuring distinct strategies of assistance. Our eventual goal is the deployment and evaluation of our framework on an autonomous robotic manipulator, actively assisting users on a packing task.


Article: We can’t trust AI systems built on deep learning alone

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods.


Article: Why the ‘why way’ is the right way to restoring trust in AI

As so many more organizations now rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.


Article: Don’t blame the AI, it’s the humans who are biased.

Bias in AI programming, both conscious and unconscious, is an issue of concern raised by scholars, the public, and the media alike. Given the implications of usage in hiring, credit, social benefits, policing, and legal decisions, they have good reason to be. AI bias occurs when a computer algorithm makes prejudiced decisions based on data and/or programming rules. The problem of bias is not only with coding (or programming), but also with the datasets that are used to train AI algorithms, in what some call the ‘discrimination feedback loop.’


Article: Saving democracy from fakes and AI misuse

Today was another rainy Friday afternoon in Berlin. At 4 pm sharp, a dear colleague of mine came to my desk and took me for a quick coffee break. While walking, avoiding the small puddles on the floor, she said: ‘Yesterday I saw a movie about this couple. After 9 years of being together and still loving each other, they broke up. The girl had an amazing work opportunity in another place and the guy did not want a long-distance relationship. Seriously mate, heartbreaking.’ One thing led to another, and suddenly I said: ‘Look, no matter what everybody says, I am convinced that when two people love, truly love, each other, everything can be solved. They can overcome everything. I understand and respect other people’s opinions, but love is at the core of everything that I am and do.’ ‘But, dude, really, why can’t you just nurture yourself from the other human experiences around you? Can’t you see that life is harsh and realise that love is not enough?’, she replied.


Article: Ethics and Security in Data Science

Given the benefits of data science, you might guess that it has played a role in your daily life. After all, it not only affects what you do online, but also what you do offline. Companies are using massive amounts of data to create better ads, produce tailored recommendations, and, in the case of retail stores, stock shelves. It’s also shaping how, and whom, we love. Here’s how data impacts us daily.


Article: Bias and Algorithmic Fairness

The modern business leader’s new responsibility in a brave new world ruled by data. As Data Science moves along the hype cycle and matures as a business function, so do the challenges that face the discipline. The problem statement for data science went from ‘we waste 80% of our time preparing data’ via ‘production deployment is the most difficult part of data science’ to ‘lack of measurable business impact’ in the last few years.


Paper: Data management for platform-mediated public services: Challenges and best practices

Services mediated by ICT platforms have shaped the landscape of the digital markets and produced immense economic opportunities. Unfortunately, the users of platforms not only surrender the value of their digital traces but also subject themselves to the power and control that data brokers exert for prediction and manipulation. As the platform revolution takes hold in public services, it is critically important to protect the public interest against the risks of mass surveillance and human rights abuses. We propose a set of design constraints that should underlie data systems in public services and which can serve as a guideline or benchmark in the assessment and deployment of platform-mediated services. The principles include, among others, minimizing control points and non-consensual trust relationships, empowering individuals to manage the linkages between their activities and empowering local communities to create their own trust relations. We further propose a set of generic and generative design primitives that fulfil the proposed constraints and exemplify best practices in the deployment of platforms that deliver services in the public interest. For example, blind tokens and attribute-based authorization may prevent the undue linking of data records on individuals. We suggest that policymakers could adopt these design primitives and best practices as standards by which the appropriateness of candidate technology platforms can be measured in the context of their suitability for delivering public services.
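As a rough illustration of one of the proposed primitives, the sketch below shows identity-free, attribute-based authorization: a token asserts attributes only, so a verifying service can check a predicate without ever seeing a user identifier that would let records be linked. It is a toy, assumption-laden example using a shared-secret HMAC; the blind tokens discussed in the paper would instead rely on blind or anonymous-credential signatures so that even the issuer cannot link uses.

```python
# Toy sketch of attribute-based authorization with identity-free tokens.
# Illustrative only: a real deployment would use blind/anonymous-credential signatures,
# not a shared-secret HMAC, so that the issuer itself cannot link token uses.
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the token issuer

def issue_token(attributes: dict) -> dict:
    """Issue a token asserting attributes only; no user identifier is embedded."""
    payload = json.dumps({"attrs": attributes, "nonce": secrets.token_hex(8)}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def authorize(token: dict, required: dict) -> bool:
    """Grant access if the token is authentic and asserts all required attributes."""
    expected = hmac.new(ISSUER_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    attrs = json.loads(token["payload"])["attrs"]
    return all(attrs.get(k) == v for k, v in required.items())

token = issue_token({"resident_of_district": "X", "over_18": True})
print(authorize(token, {"over_18": True}))              # True: predicate satisfied, identity never revealed
print(authorize(token, {"resident_of_district": "Y"}))  # False: attribute not asserted
```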

Let’s get it right

08 Tuesday Oct 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Article: A Human Centered approach to AI

Tools and approaches to help CX Designers and Product Owners approach emergent tech projects.


Article: Artificial Intelligence: Do stupid things faster with more energy!

If you think this is a no-brainer and reliable option (A) is the obvious answer, think again. It really depends on the skills of whoever’s giving the workers their instructions. Reliable workers will efficiently scale up the intelligent decision-making of a good leader, but they will unfortunately also amplify a foolish decision-maker. Remember those classic café posters? ‘Coffee: Do stupid things faster with more energy!’ When a leader is incompetent (or depraved), unreliable workers are a blessing. Can’t drag single-minded determination out of them? How wonderful! Things can get scary when zealots wholeheartedly pursue objectives set by a bad decision-maker.


Article: The Weaponization of Artificial Intelligence

In 2019, we live in a world where we are augmenting our soldiers and military units with AR. We are increasingly living in a world of deepfakes, and the weaponization of AI appears to have no limit. AI technology has for years led military leaders to ponder a future of warfare that needs little human involvement, yet in 2019, strikingly, it is consumers who are under the dark threat of a repressive, authoritarian internet. In China, the internet can bar millions from travel over 'social credit' offences, while new apps weaponize idol submissiveness to the state. We already know that Facebook and other American tech companies practice privacy invasion and third-party data harvesting on a nightmarish scale, exploiting user data.


Article: Debating the AI Safety Debate

As I move into the area of AI Safety within the field of artificial intelligence (AI), I find myself both confused and perplexed. Where do you even start? I covered the financial developments at OpenAI yesterday, and since they are one of the foremost authorities on AI Safety, I thought it would be interesting to look at one of their papers. The paper I will be looking at is called 'AI safety via debate', published in October 2018. You can of course read the paper yourself on arXiv and critique my article in turn; that would be the ideal situation. This debate about AI debates is, of course, ongoing.


Article: If Software is Eating the World

With the advent of Alexa, Google Assistant and Siri, and with Alibaba and Baidu killing it in smart-speaker adoption in China, consumer voice AI is eating the world, but to what end? Case in point: Alexa Echo devices don't make much of a profit for Amazon on hardware sales. It has always been about the software ecosystem and the add-on value it can create. The longer-term goal could be to make money off an app marketplace through skills. 'Skills' are, in a sense, what Alexa calls its app store.


Article: AI equal with human experts in medical diagnosis, study finds

Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found. The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.


Paper: Minimizing Margin of Victory for Fair Political and Educational Districting

In many practical scenarios, a population is divided into disjoint groups for better administration, e.g., electorates into political districts, employees into departments, students into school districts, and so on. However, grouping people arbitrarily may lead to biased partitions, raising concerns of gerrymandering in political districting, racial segregation in schools, etc. To counter such issues, in this paper we conceptualize such problems in a voting scenario and propose the FAIR DISTRICTING problem: divide a given set of people, each with preferences over candidates, into k groups such that the maximum margin of victory of any group is minimized. We also propose the FAIR CONNECTED DISTRICTING problem, which additionally requires each group to be connected. We show that the FAIR DISTRICTING problem is NP-complete for plurality voting even with only 3 candidates, but admits polynomial-time algorithms if we assume k to be a constant or allow everyone to be moved to any group. In contrast, we show that the FAIR CONNECTED DISTRICTING problem is NP-complete for plurality voting even with only 2 candidates and k = 2. Finally, we propose heuristic algorithms for both problems and show their effectiveness in UK political districting and in lowering racial segregation in public schools in the US.
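To make the objective concrete, here is a minimal sketch of a greedy assignment that tries to keep the plurality margin of victory small in every group. This is my own illustration under simplified assumptions (margin is taken as the gap between the top two candidates in a group), not the heuristic from the paper.

```python
from collections import Counter
from typing import List

def margin(votes: Counter) -> int:
    """Simplified plurality margin of victory: gap between the top two counts."""
    if len(votes) < 2:
        return sum(votes.values())
    (_, first), (_, second) = votes.most_common(2)
    return first - second

def greedy_fair_districting(preferences: List[str], k: int) -> List[List[str]]:
    """Assign each voter to one of k groups so the worst group margin stays small."""
    groups = [Counter() for _ in range(k)]
    assignment = [[] for _ in range(k)]
    for voter_pref in preferences:
        best_g, best_worst = None, None
        for g in range(k):
            groups[g][voter_pref] += 1                 # trial placement
            worst = max(margin(c) for c in groups)
            groups[g][voter_pref] -= 1
            if best_worst is None or worst < best_worst:
                best_g, best_worst = g, worst
        groups[best_g][voter_pref] += 1
        assignment[best_g].append(voter_pref)
    return assignment

prefs = ["A", "A", "A", "B", "B", "C", "A", "B"]
for i, group in enumerate(greedy_fair_districting(prefs, k=2)):
    print(f"group {i}: {group}, margin={margin(Counter(group))}")
```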


Paper: Machine learning in healthcare — a system’s perspective

A consequence of the fragmented and siloed healthcare landscape is that patient care (and data) is split across a multitude of different facilities and computer systems, and enabling interoperability between these systems is hard. The lack of interoperability not only hinders continuity of care and burdens providers, but also hinders the effective application of Machine Learning (ML) algorithms. Thus, most current ML algorithms, designed to understand patient care and facilitate clinical decision support, are trained on limited datasets. This approach is analogous to the Newtonian paradigm of Reductionism, in which a system is broken down into elementary components and a description of the whole is formed by understanding those components individually. A key limitation of the reductionist approach is that it ignores the component-component interactions and dynamics within the system, which are often of prime significance in understanding the overall behaviour of complex adaptive systems (CAS). Healthcare is a CAS. Though the application of ML to health data has shown incremental improvements for clinical decision support, ML has a much broader potential to restructure care delivery as a whole and maximize care value. However, this potential remains largely untapped, primarily due to functional limitations of Electronic Health Records (EHR) and the inability to see the healthcare system as a whole. This viewpoint (i) articulates healthcare as a complex system with both a biological and an organizational perspective, (ii) motivates with examples the need for a system's approach when addressing healthcare challenges via ML and (iii) emphasizes the need to unleash EHR functionality – while duly respecting all ethical and legal concerns – to reap the full benefits of ML.

Let’s get it right

06 Sunday Oct 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Article: Let’s Stop Treating Algorithms Like They’re All Created Equal

A recent poll found that most Americans think algorithms are unfair. Unfortunately, the poll was itself biased and an example of the very phenomenon it decries. All around us, algorithms are invisibly at work. They’re recommending music and surfacing news, finding cancerous tumors, and making self-driving cars a reality. But do people trust them? Not really, according to a Pew Research Center survey taken last year. When asked whether computer programs will always reflect the biases of their designers, 58 percent of respondents thought they would. This finding illustrates a serious tension between computing technology, whose influence on people’s lives is only expected to grow, and the people affected by it.


Article: American Workforce Policy Advisory Board’s Data Transparency Working Group – White Paper on Interoperable Learning Records

Better information on workers’ skills attainment, employers’ skills needs, and educational institutions’ programs to increase skills is an essential element across all of these focus areas. The AWPAB’s Data Transparency working group has identified interoperable learning records (ILRs) as a novel and technically feasible, achievable way to communicate skills between workers, employers, and education and training institutions.


Article: As FTC cracks down, data ethics is now a strategic business weapon

Five billion dollars. That’s the apparent size of Facebook’s latest fine for violating data privacy. While many believe the sum is simply a slap on the wrist for a behemoth like Facebook, it’s still the largest amount the Federal Trade Commission has ever levied on a technology company. Facebook is clearly still reeling from Cambridge Analytica, after which trust in the company dropped 51%, searches for ‘delete Facebook’ reached 5-year highs, and Facebook’s stock dropped 20%. While incumbents like Facebook are struggling with their data, startups in highly-regulated, ‘Third Wave’ industries can take advantage by using a data strategy one would least expect: ethics. Beyond complying with regulations, startups that embrace ethics look out for their customers’ best interests, cultivate long-term trust – and avoid billion dollar fines. To weave ethics into the very fabric of their business strategies and tech systems, startups should adopt ‘agile’ data governance systems. Often combining law and technology, these systems will become a key weapon of data-centric Third Wave startups to beat incumbents in their field.


Paper: Attesting Biases and Discrimination using Language Semantics

AI agents are increasingly deployed and used to make automated decisions that affect our lives on a daily basis. It is imperative to ensure that these systems embed ethical principles and respect human values. We focus on how we can attest to whether AI agents treat users fairly without discriminating against particular individuals or groups through biases in language. In particular, we discuss human unconscious biases, how they are embedded in language, and how AI systems inherit those biases by learning from and processing human language. Then, we outline a roadmap for future research to better understand and attest problematic AI biases derived from language.
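One common way to attest such language-derived biases is to compare association strengths in word embeddings, in the spirit of WEAT-style tests. The sketch below is only an illustration: the embeddings are random stand-ins and the word lists are placeholders, but with real pretrained vectors the same comparison surfaces the associations the paper discusses.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b, emb):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    sim_a = np.mean([cosine(emb[word], emb[a]) for a in attr_a])
    sim_b = np.mean([cosine(emb[word], emb[b]) for b in attr_b])
    return sim_a - sim_b

# Stand-in embeddings; in practice load pretrained vectors (e.g. GloVe, word2vec).
rng = np.random.default_rng(0)
vocab = ["engineer", "nurse", "he", "she", "man", "woman"]
embeddings = {w: rng.normal(size=50) for w in vocab}

male_terms, female_terms = ["he", "man"], ["she", "woman"]
for occupation in ["engineer", "nurse"]:
    score = association(occupation, male_terms, female_terms, embeddings)
    print(f"{occupation}: association with male vs. female terms = {score:+.3f}")
```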


Paper: Raiders of the Lost Art

Neural style transfer, first proposed by Gatys et al. (2015), can be used to create novel artistic work by rendering a content image in the form of a style image. We present a novel method of reconstructing lost artwork by applying neural style transfer to x-radiographs of paintings that contain secondary interior artwork beneath a primary exterior. Finally we reflect on AI art exhibitions and discuss the social, cultural, ethical, and philosophical impact of these technical innovations.
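For readers unfamiliar with the mechanics, the core of Gatys-style transfer is matching Gram matrices of convolutional features. The sketch below is a simplified illustration (not the authors' code): the feature maps are random tensors standing in for activations that would normally come from a pretrained network such as VGG-19.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlations of a (channels, height, width) feature map."""
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Sum of mean-squared Gram-matrix differences over the chosen layers."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(generated_feats, style_feats))

# Toy example with random "feature maps" from two layers.
gen = [torch.rand(64, 32, 32), torch.rand(128, 16, 16)]
sty = [torch.rand(64, 32, 32), torch.rand(128, 16, 16)]
print(style_loss(gen, sty))
```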


Article: Coder deletes open source add-on for Chef in protest over ICE contract

On September 17, Seth Vargo – a former employee of Chef, the software deployment automation company – found out via a tweet that Chef licenses had been sold to the Immigration and Customs Enforcement Agency (ICE) under a $95,500, one-year contract through the approved contractor C&C International Computers & Consultants. In protest, Vargo decided to ‘archive’ the GitHub repository for two open source Chef add-ons he had developed in the Ruby programming language. On his GitHub repository page, Vargo wrote, ‘I have a moral and ethical obligation to prevent my source from being used for evil.’


Article: 'Everyone Should Have a Moral Code,' Says Developer Who Deleted Code Sold to ICE

Seth Vargo wrote code used in a platform called Chef. When he learned ICE was a customer, he wrestled with ICE using code he had personally written. Technologist Seth Vargo had a moral dilemma. He had just found out that Immigration and Customs Enforcement (ICE), which has faced widespread condemnation for separating children from their parents at the U.S. border and other abuses, was using a product that contained code that he had written. ‘I was having trouble sleeping at night knowing that software – code that I personally authored – was being sold to and used by such a vile organization,’ he told Motherboard in an online chat. ‘I could not be complicit in enabling what I consider to be acts of evil and violations of our most basic human rights.’


Article: AI, Truth, and Society: Deepfakes at the front of the Technological Cold War

This is the first part of our special feature series on Deepfakes, exploring the latest developments and implications in this nascent field of AI. We will be covering detailed implementations on generation and countering strategies in future articles, please stay tuned to GradientCrescent to learn more.

Let’s get it right

24 Tuesday Sep 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Article: 'Modern Times Anxiety' in AI: Are we there yet?

I recently came across a panel discussion from the 1984 AAAI conference and, weirdly enough, it felt both ancient and relevant at the same time, so I was hoping to throw my own thoughts into this pit too.


Paper: Decentralising power: how we are trying to keep CALLector ethical

We present a brief overview of the CALLector project, and consider ethical questions arising from its overall goal of creating a social network to support creation and use of online CALL resources. We argue that these questions are best addressed in a decentralised, pluralistic open source architecture.


Paper: Interpreting Social Respect: A Normative Lens for ML Models

Machine learning is often viewed as an inherently value-neutral process: statistical tendencies in the training inputs are ‘simply’ used to generalize to new examples. However when models impact social systems such as interactions between humans, these patterns learned by models have normative implications. It is important that we ask not only ‘what patterns exist in the data?’, but also ‘how do we want our system to impact people?’ In particular, because minority and marginalized members of society are often statistically underrepresented in data sets, models may have undesirable disparate impact on such groups. As such, objectives of social equity and distributive justice require that we develop tools for both identifying and interpreting harms introduced by models.


Paper: Learning Fair Rule Lists

The widespread use of machine learning models, especially within the context of decision-making systems impacting individuals, raises many ethical issues with respect to the fairness and interpretability of these models. While research in these domains is booming, very few works have addressed the two issues simultaneously. To address this shortcoming, we propose FairCORELS, a supervised learning algorithm whose objective is to learn models that are at the same time fair and interpretable. FairCORELS is a multi-objective variant of CORELS, a branch-and-bound algorithm designed to compute accurate and interpretable rule lists. By jointly addressing fairness and interpretability, FairCORELS can achieve better fairness/accuracy trade-offs than existing methods, as demonstrated by an empirical evaluation on real datasets. Our paper also contains additional contributions regarding search strategies for optimizing the multi-objective function integrating fairness, accuracy and interpretability.
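The fairness/accuracy trade-off the paper optimises can be made tangible with a toy sketch. The code below is my own illustration, not the FairCORELS branch-and-bound algorithm: a "rule list" is just a small Python function, and each candidate is scored on accuracy and on the statistical parity gap between two groups.

```python
import numpy as np

def rule_list_a(x):   # ignores the sensitive attribute
    return 1 if x["income"] > 50 else 0

def rule_list_b(x):   # leaks the sensitive attribute into the decision
    return 1 if x["income"] > 50 and x["group"] == 0 else 0

def evaluate(rule, data, labels):
    preds = np.array([rule(x) for x in data])
    acc = float(np.mean(preds == labels))
    rate0 = preds[[x["group"] == 0 for x in data]].mean()
    rate1 = preds[[x["group"] == 1 for x in data]].mean()
    return acc, abs(rate0 - rate1)       # accuracy, statistical parity gap

# Synthetic data: income drives the true label, group membership is random.
rng = np.random.default_rng(1)
data = [{"income": int(rng.integers(20, 100)), "group": int(g)}
        for g in rng.integers(0, 2, 200)]
labels = np.array([int(x["income"] > 50) for x in data])

for name, rule in [("unaware rule list", rule_list_a),
                   ("group-aware rule list", rule_list_b)]:
    acc, gap = evaluate(rule, data, labels)
    print(f"{name}: accuracy={acc:.2f}, parity gap={gap:.2f}")
```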


Paper: Recognizing Human Internal States: A Conceptor-Based Approach

The past few decades have seen increased interest in the application of social robots as behavioural coaches in interventions for Autism Spectrum Disorder (ASD) [4]. We consider that robots embedded in therapies could also provide quantitative diagnostic information by observing patient behaviours. The social nature of ASD symptoms means that, to achieve this, robots need to be able to recognize the internal states their human interaction partners are experiencing, e.g. states of confusion, engagement, etc. Approaching this problem can be broken down into two questions: (1) what information, accessible to robots, can be used to recognize internal states, and (2) how can a system classify internal states such that it provides sufficiently detailed diagnostic information? In this paper we discuss these two questions in depth and propose a novel, conceptor-based classifier. We report the initial results of this system in a proof-of-concept study and outline plans for future work.
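In brief, a conceptor is computed from the correlation matrix R of reservoir states as C = R(R + α⁻²I)⁻¹, where α is the aperture. The numpy sketch below shows this generic construction and a naive matching rule; it is an illustration of the building block, not the authors' full classifier, and the "engaged"/"confused" state matrices are synthetic stand-ins.

```python
import numpy as np

def conceptor(states: np.ndarray, aperture: float) -> np.ndarray:
    """states: (timesteps, reservoir_dim) matrix of reservoir activations."""
    n = states.shape[1]
    R = states.T @ states / states.shape[0]                  # state correlation matrix
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n)) # C = R (R + a^-2 I)^-1

def evidence(x: np.ndarray, C: np.ndarray) -> float:
    """How strongly conceptor C claims state x (higher = better match)."""
    return float(x @ C @ x)

rng = np.random.default_rng(0)
states_engaged = rng.normal(size=(200, 20)) * np.linspace(1.0, 0.1, 20)
states_confused = rng.normal(size=(200, 20)) * np.linspace(0.1, 1.0, 20)

C_eng = conceptor(states_engaged, aperture=5.0)
C_conf = conceptor(states_confused, aperture=5.0)

x = rng.normal(size=20) * np.linspace(1.0, 0.1, 20)   # a new sample that looks "engaged"
print("engaged evidence:", evidence(x, C_eng), " confused evidence:", evidence(x, C_conf))
```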


Article: The Work of the Future: Shaping Technology and Institutions

Technological change has been reshaping human life and work for centuries. The mechanization that began with the Industrial Revolution enabled dramatic improvements in human health, well-being, and quality of life – not only in the developed countries of the West, but increasingly throughout the world. At the same time, economic and social disruptions often accompanied those changes, with painful and lasting results for workers, their families, and communities. Along the way, valuable skills, industries, and ways of life were lost. Ultimately new and unforeseen occupations, industries, and amenities took their place. But the benefits of these upheavals often took decades to arrive. And the eventual beneficiaries were not necessarily those who bore the initial costs. The world now stands on the cusp of a technological revolution in artificial intelligence and robotics that may prove as transformative for economic growth and human potential as were electrification, mass production, and electronic telecommunications in their eras. New and emerging technologies will raise aggregate economic output and boost the wealth of nations. Will these developments enable people to attain higher living standards, better working conditions, greater economic security, and improved health and longevity? The answers to these questions are not predetermined. They depend upon the institutions, investments, and policies that we deploy to harness the opportunities and confront the challenges posed by this new era. How can we move beyond unhelpful prognostications about the supposed end of work and toward insights that will enable policymakers, businesses, and people to better navigate the disruptions that are coming and underway? What lessons should we take from previous epochs of rapid technological change? How is it different this time? And how can we strengthen institutions, make investments, and forge policies to ensure that the labor market of the 21st century enables workers to contribute and succeed?


Article: Making Fairness an Intrinsic Part of Machine Learning

The suitability of a Machine Learning model is traditionally measured by its accuracy. A model that scores well on metrics like RMSE, MAPE, AUC, ROC or Gini is considered a high-performing model. While such accuracy metrics are important, are there other metrics the data science community has been ignoring so far? The answer is yes: in the pursuit of accuracy, most models sacrifice 'fairness' and 'interpretability'. Rarely does a data scientist dissect a model to find out whether it follows all ethical norms. This is where machine learning fairness and the interpretability of models come into play.
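As a concrete example of looking beyond accuracy, one simple fairness check is the equal-opportunity gap: the difference in true-positive rates between groups. The sketch below is an illustration on synthetic predictions (all arrays are placeholders for real model output); it shows how a model can score well on accuracy while still treating one group worse.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 1)) if positives.any() else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between group 0 and group 1."""
    tpr_0 = true_positive_rate(y_true[group == 0], y_pred[group == 0])
    tpr_1 = true_positive_rate(y_true[group == 1], y_pred[group == 1])
    return abs(tpr_0 - tpr_1)

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)
# A hypothetical model that misses some positives, but only in group 1.
y_pred = np.where((group == 1) & (y_true == 1) & (rng.random(500) < 0.3), 0, y_true)

print("accuracy:", float(np.mean(y_pred == y_true)))
print("equal-opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```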


Article: AI Safety and Intellectual Debt

A friend shared the New Yorker article 'The Hidden Costs of Automated Thinking' by Jonathan Zittrain. Zittrain refers first to the pharmaceutical business and how not all drugs are fully understood beyond the fact that they work. He draws a parallel to the discussion around automation and artificial intelligence, machine learning techniques in particular. He notes that 'theory-free' advances can be indispensable to the development of life-saving drugs, but that they come with a cost. He mentions altering a few pixels in a photograph to fool an algorithm, and that such systems can have unknown gaps. In this article, I would like first to understand slightly better who Jonathan is, and secondly to reflect on this concept of intellectual debt. The subtitle of this piece is taken from one of the headings of Jonathan Zittrain's post on Medium called Intellectual Debt: With Great Power Comes Great Ignorance. As a quick disclaimer, these texts are short reflections written as part of my project #500daysofAI and as such will not be comprehensive; it is a process of learning about the topic every day.
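The 'altered pixels' example Zittrain mentions is essentially an adversarial perturbation. The sketch below shows the fast gradient sign method (FGSM) in PyTorch under simplifying assumptions: the model is an untrained stand-in and the image is a random placeholder, so it only illustrates the mechanics, not a real attack on a deployed classifier.

```python
import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)   # placeholder input
label = torch.tensor([3])                               # placeholder true class

# One gradient step on the input in the direction that increases the loss.
loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.05                                          # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```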