Article: Killer Robots in the US Military: Ethics as an Afterthought

The US military is not discounting the future development of killer robots, or lethal autonomous weapon systems (LAWS), as agents in the US war machine. Artificial intelligence (AI) has shown much promise since its inception, when Alan Turing first contemplated machines that could learn to think and act like humans. Machine learning and its subset, deep learning, have inspired hope that machines may one day match or even supersede human cognition. This is a technology whose potential the Department of Defense (DoD) cannot and will not ignore. Whilst the DoD has established Directive 3000.09, putting in place a framework for developing autonomous weapon systems (AWS) and their lethal counterpart, LAWS, its development of an ethical framework is currently a mere afterthought. But when advancing towards a future where robots may take the lives of humans, shouldn’t ethics be at the heart of every aspect of this technology?


Paper: Automating dynamic consent decisions for the processing of social media data in health research

Social media have become a rich source of data, particularly in health research. Yet, the use of such data raises significant ethical questions about the need for the informed consent of those being studied. Where consent is obtained at all, the mechanisms are typically broad and inflexible, or place a significant burden on the participant. Machine learning algorithms show much promise for facilitating a ‘middle ground’ approach: using trained models to predict and automate granular consent decisions. Such techniques, however, raise a myriad of follow-on ethical and technical considerations. In this paper, we present an exploratory user study (n = 67) in which we find that we can predict the appropriate flow of health-related social media data with reasonable accuracy, while minimising undesired data leaks. We then attempt to deconstruct the findings of this study, identifying and discussing a number of real-world implications if such a technique were put into practice.
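To make the ‘middle ground’ concrete, the following is a minimal sketch of how such automation might work, assuming a per-participant classifier trained on a handful of labelled sharing decisions. The feature names, toy data, classifier choice, and confidence threshold are all illustrative assumptions, not the study’s actual implementation.

```python
# Minimal sketch: predict a participant's granular consent decisions
# from past labelled examples. All names and values are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for each candidate data flow:
# [topic_sensitivity, recipient_is_researcher, post_is_public]
X_train = [
    [0.9, 1, 0],  # sensitive, non-public post  -> participant declined
    [0.1, 1, 1],  # innocuous public post       -> participant consented
    [0.7, 0, 0],  # sensitive, non-research use -> declined
    [0.2, 1, 1],  # innocuous public post       -> consented
]
y_train = [0, 1, 0, 1]  # 1 = consent to share this data flow

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Auto-share only when the model is highly confident the participant
# would consent; otherwise, fall back to asking the participant.
candidate_flow = [[0.3, 1, 1]]
p_consent = clf.predict_proba(candidate_flow)[0][1]
decision = "share" if p_consent > 0.9 else "defer to participant"
print(decision, round(p_consent, 2))
```

Thresholding the predicted probability, rather than acting on every prediction, is one plausible way to bias such a system against undesired data leaks: borderline cases are deferred back to the participant instead of being shared automatically.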


Paper: Challenges of Human-Aware AI Systems

From its inception, AI has had a rather ambivalent relationship with humans—swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to the aspects of intelligence that help humans work with each other—including social intelligence. I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose. I will end by arguing that the quest for human-aware AI systems broadens the scope of the AI enterprise, necessitates and facilitates truly interdisciplinary collaborations, and can go a long way towards increasing public acceptance of AI technologies.


Article: How A.I. Undermines Democracy

Big Data powering Big Tech and Big Money, the tyranny of the minority, and more on what awaits politics in the AI era. Artificial intelligence (AI) is poised to fundamentally alter almost every dimension of human life – from healthcare and social interactions to military and international relations. However, much of the discussion about the effects of AI has been limited to analysis of its impact on job losses and fears that omnipotent algorithms will take over the world and exterminate humans. Instead of focusing on the long term, it is worth considering the immediate effects of the advent of AI in politics, for politics is one of the fundamental pillars of today’s societal system. Understanding the dangers that AI poses for politics is crucial to combating its negative implications while maximizing the benefits of the new opportunities it creates, in order to strengthen democracy.


Paper: Two Case Studies of Experience Prototyping Machine Learning Systems in the Wild

Throughout the course of my Ph.D., I have been designing the user experience (UX) of various machine learning (ML) systems. In this workshop, I share two projects as case studies in which people engage with ML in much more complicated and nuanced ways than technical HCML work might assume. The first case study describes how cardiology teams in three hospitals used a clinical decision-support system that helps them decide whether and when to implant an artificial heart in a heart failure patient. I demonstrate that physicians cannot draw on their decision-making experience when seeing only patient data on paper. They are also confused by some fundamental premises upon which ML operates. For example, physicians asked: Are ML predictions made based on clinicians’ best efforts? Is it ethical to make decisions based on previous patients’ collective outcomes? In the second case study, my collaborators and I designed an intelligent text editor, with the goal of improving authors’ writing experience with NLP (Natural Language Processing) technologies. We prototyped a number of generative functionalities in which the system provides phrase- or sentence-level writing suggestions upon user request. When writing with the prototype, however, authors shared that they needed to ‘see where the sentence is going two paragraphs later’ in order to decide whether a suggestion aligned with their writing; some even considered adopting machine suggestions to be plagiarism and therefore ‘simply wrong’. By sharing these unexpected and intriguing responses from real-world ML users, I hope to start a discussion about such previously unknown complexities and nuances of — as the workshop proposal states — ‘putting ML at the service of people in a way that is accessible, useful, and trustworthy to all’.
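For illustration only, here is a minimal sketch of the kind of on-request, phrase- or sentence-level suggestion functionality described above, built with an off-the-shelf GPT-2 model via Hugging Face transformers. The model choice, parameters, and helper function are assumptions made for the sketch, not the prototype’s actual design.

```python
# Minimal sketch of on-request writing suggestions; not the paper's
# actual prototype. Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def suggest(draft: str, n: int = 3) -> list[str]:
    """Return n candidate continuations of the author's draft;
    the author can accept, edit, or reject each one."""
    outputs = generator(
        draft,
        max_new_tokens=30,
        num_return_sequences=n,
        do_sample=True,
    )
    # Strip the prompt so only the suggested continuation remains.
    return [o["generated_text"][len(draft):] for o in outputs]

for s in suggest("The committee reviewed the proposal and"):
    print("-", s.strip())
```

One design implication of the study’s findings is that suggestions like these should be easy to inspect in context and to reject, since authors judged them against where their text was heading several paragraphs later.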


Article: Digital Wellbeing Experiments

What are Digital Wellbeing Experiments? They are a collection of ideas and tools that help people find a better balance with technology. We hope these experiments inspire developers and designers to consider digital wellbeing in everything they design and make. All the code is open source, and helpful guides and tips are available to kick-start new ideas. Try the experiments and create new ones; the more people get involved, the more we can all learn about building better technology for everyone.


Paper: Artificial Intelligence and the Future of Psychiatry: Qualitative Findings from a Global Physician Survey

The potential for machine learning to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. This study aimed to explore psychiatrists’ opinions about the potential impact of innovations in artificial intelligence and machine learning on psychiatric practice. In Spring 2019, we conducted a web-based survey of 791 psychiatrists from 22 countries worldwide. The survey measured opinions about the likelihood that future technology would fully replace physicians in performing ten key psychiatric tasks. This study involved a qualitative descriptive analysis of written responses to three open-ended questions in the survey. Comments were classified into four major categories relating to the impact of future technology on patient-psychiatrist interactions, the quality of patient medical care, the profession of psychiatry, and health systems. Overwhelmingly, psychiatrists were skeptical that technology could fully replace human empathy. Many predicted that ‘man and machine’ would increasingly collaborate in making clinical decisions, with mixed opinions about the benefits and harms of such an arrangement. Participants were optimistic that technology might improve efficiencies and access to care, and reduce costs. Ethical and regulatory considerations received limited attention. This study presents timely information on psychiatrists’ views about the potential impact of artificial intelligence and machine learning on psychiatric practice. Psychiatrists expressed divergent views about the value and impact of future technology, with worrying omissions regarding practice guidelines and ethical and regulatory issues.


Paper: Solidarity should be a core ethical principle of Artificial Intelligence

Solidarity is one of the fundamental values at the heart of the construction of peaceful societies, and it is present in more than one third of the world’s constitutions. Still, solidarity is almost never included as a principle in ethical guidelines for the development of AI. Solidarity as an AI principle means (1) sharing the prosperity created by AI, implementing mechanisms to redistribute gains in productivity for all, and sharing the burdens, making sure that AI does not increase inequality and that no human is left behind; and (2) assessing the long-term implications of AI systems before developing and deploying them, so that no group of humans becomes irrelevant because of AI systems. Considering solidarity as a core principle for AI development will provide not just a human-centric but a more humanity-centric approach to AI.