Article: Are We Being Programmed?
A short read on the psychological impacts of app design, data science, and the human predisposition to being conditioned. Before I jump in, let me ask you some questions. How often are you on your smartphone? Do you sometimes find yourself opening Facebook, YouTube, or Instagram for a spare second between tasks and end up spending more time than you meant? Have you ever found yourself reloading the feed to see if something better just got posted? Have you ever closed a Facebook tab only to open it a few minutes later? Be honest about those questions, since only you need to know the answers, and I didn’t even ask about Tinder or porn. According to Facebook’s own announcements, users spend an average of 50 minutes of their day on the app (and that’s across 2 billion people around the globe). And it’s not just Facebook. YouTube, Snapchat, Instagram, and Twitter all take giant slices out of their users’ day, and many of those users overlap.
Paper: Theories of Parenting and their Application to Artificial Intelligence
As machine learning (ML) systems have advanced, they have acquired more power over humans’ lives, and questions about what values are embedded in them have become more complex and fraught. It is conceivable that in the coming decades, humans may succeed in creating artificial general intelligence (AGI) that thinks and acts with an open-endedness and autonomy comparable to that of humans. The implications would be profound for our species; they are now widely debated not just in science fiction and speculative research agendas but increasingly in serious technical and policy conversations. Much work is underway to try to weave ethics into advancing ML research. We think it useful to add the lens of parenting to these efforts, and specifically radical, queer theories of parenting that consciously set out to nurture agents whose experiences, objectives and understanding of the world will necessarily be very different from their parents’. We propose a spectrum of principles which might underpin such an effort; some are relevant to current ML research, while others will become more important if AGI becomes more likely. These principles may encourage new thinking about the development, design, training, and release into the world of increasingly autonomous agents.
Paper: Online Explanation Generation for Human-Robot Teaming
As Artificial Intelligence (AI) becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations of its behavior is one of the key requirements of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot’s behavior. These approaches, however, fail to consider the cognitive effort needed to understand the received explanation. In particular, the human teammate is expected to understand any explanation provided before task execution, no matter how much information it contains. In this work, we argue that explanations, especially complex ones, should be made in an online fashion during execution, which helps to spread out the information to be explained and thus reduces the cognitive load on humans. A challenge here is that the different parts of an explanation may depend on one another, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented. We base our explanation generation method on the model reconciliation setting introduced in our prior work. Our approach is evaluated both with human subjects in a standard International Planning Competition (IPC) domain, using the NASA Task Load Index (TLX), and in simulation with four different problems.
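The core scheduling idea can be sketched in a few lines: if explanation parts form a dependency graph, an online generator can interleave them with plan steps in a dependency-respecting order rather than delivering everything up front. The part names and dependencies below are hypothetical illustrations, not taken from the paper.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical explanation parts and their dependencies: a part should only
# be communicated after the parts it depends on have been communicated.
dependencies = {
    "goal":   set(),         # why the robot is acting at all
    "route":  {"goal"},      # the route choice presumes the goal is known
    "detour": {"route"},     # the detour only makes sense given the route
    "timing": {"goal"},
}

def online_schedule(deps, plan_steps):
    """Interleave explanation parts with plan steps, respecting dependencies."""
    order = list(TopologicalSorter(deps).static_order())
    schedule = [(step, part) for step, part in zip(plan_steps, order)]
    # any leftover parts are attached to the final plan step
    for part in order[len(plan_steps):]:
        schedule.append((plan_steps[-1], part))
    return schedule

print(online_schedule(dependencies, ["move", "turn", "inspect"]))
```

The point of the sketch is the ordering constraint, not the pairing policy: a real system would decide *when* to emit each part based on the human’s current cognitive load, which this toy version simply approximates by one part per step.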
Paper: Applying Probabilistic Programming to Affective Computing
Affective Computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically grounded theories of emotion as generative models and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently developed probabilistic programming languages offer several key desiderata over previous approaches: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, a probabilistic programming framework provides a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can be easily compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
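To make the "generative model of emotion" idea concrete, here is a deliberately toy appraisal-style model in plain Python (no real probabilistic programming language, to keep it self-contained): a latent goal importance and an outcome jointly generate an emotional display, and rejection sampling inverts the model to infer the latent appraisal from an observed display. The variables and numbers are illustrative assumptions, not drawn from the paper.

```python
import random

random.seed(0)

def generative_model():
    """Toy appraisal model: latent importance + outcome -> emotional display."""
    goal_importance = random.uniform(0, 1)   # latent appraisal variable
    outcome = random.choice([0, 1])          # 0 = goal failed, 1 = goal achieved
    # joy if achieved, distress if failed; intensity scales with importance
    emotion = "joy" if outcome else "distress"
    intensity = goal_importance * (1.0 if outcome else 0.8)
    return goal_importance, emotion, intensity

def infer_importance(observed_emotion, observed_intensity, n=20000, tol=0.05):
    """Posterior mean of goal importance given an observed emotional display,
    via simple rejection sampling against the generative model."""
    accepted = []
    for _ in range(n):
        g, e, i = generative_model()
        if e == observed_emotion and abs(i - observed_intensity) < tol:
            accepted.append(g)
    return sum(accepted) / len(accepted)

# A strong display of joy implies the goal mattered a great deal:
print(round(infer_importance("joy", 0.9), 2))
```

A real probabilistic programming language would replace the hand-rolled rejection loop with an efficient, general-purpose inference engine; the appeal of the paradigm is exactly that the theory (the generative model) and the inference machinery are decoupled, so swapping appraisal theories means swapping a few lines of model code.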
Article: Artificial Intelligence and Society
Press the pause button! Artificial Intelligence (AI) continues to be a growing focus in the media. It is an agenda gathering momentum much as the cloud did, particularly in the business world. On a global path of technology innovation, AI may seem the next logical step towards progress. Computing power, storage, and processor speed have rapidly improved, and it’s now the turn of the algorithms. But what is progress? What is the cost? And is this what humanity really needs or wants? Who decides? A good place to begin is to define what AI actually is. For the purposes of this post, AI is software that, when executed, can demonstrate an element of decision-making where a programmed result may be unknown, and that would typically require human intelligence to perform the decision-making task. AI usually includes an aspect of automated processing that engages one or more of the human senses, i.e. sight, hearing, touch, taste or smell. Recent discussions in the media, online articles, and radio broadcasts sometimes blur the lines between two identifiable AI spaces:
• Near term: machines that perform faster, identify patterns, make unaided decisions, and undertake relatively complex tasks, with the goal of reducing any human requirement to perform the same tasks.
• Long term: machines to potentially possess the characteristic of ‘consciousness’ – this is a different space.
Conversations can wander between the impacts of these two very different visions. I recently listened to a radio discussion where a caller spoke about a cull of jobs and the knock-on effects within society, but then the caller leapt to the possibility that machines could wipe out humankind. The distinction between the two is important.
Paper: Responses to a Critique of Artificial Moral Agents
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins’ (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; that such machines are better moral reasoners than humans; and that building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins’ critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
Paper: Responsible and Representative Multimodal Data Acquisition and Analysis: On Auditability, Benchmarking, Confidence, Data-Reliance & Explainability
The ethical decisions behind the acquisition and analysis of audio, video, or physiological human data, harnessed for (deep) machine learning algorithms, are an increasing concern for the Artificial Intelligence (AI) community. In this regard, herein we highlight the growing need for responsible and representative data collection and analysis, through a discussion of modality diversification. Factors such as Auditability, Benchmarking, Confidence, Data-reliance, and Explainability (ABCDE) have been touched upon within the machine learning community, and here we lay out these ABCDE sub-categories in relation to the acquisition and analysis of multimodal data, weaving through the high-priority ethical concerns currently under discussion for AI. To this end, we propose how these five sub-categories can be included in the early planning of such acquisition paradigms.
Paper: Machine Learning: A Dark Side of Cancer Computing
Cancer analysis and prediction is a research field of the utmost importance for the well-being of humankind. Cancer data are analyzed, and outcomes predicted, using machine learning algorithms, and most researchers claim accuracies for their predicted results approaching 99%. However, we show that machine learning algorithms can easily be made to predict with an accuracy of 100% on the Wisconsin Diagnostic Breast Cancer dataset. We show that this method of gaining accuracy is an unethical approach, since the algorithms can easily be misled. In this paper, we exploit this weakness of machine learning algorithms, performing extensive experiments that are rigorously evaluated to validate our claim. In addition, this paper focuses on the correctness of accuracy, and reports three key outcomes of the experiments: the correctness of accuracies, the significance of minimum accuracy, and the correctness of machine learning algorithms.
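A simple way to see how a misleading "100% accuracy" claim can arise is to evaluate a model on the very data it was trained on. The sketch below is an illustration of that general pitfall, not a reproduction of the paper’s experiments; it assumes scikit-learn is installed (the WDBC data ships with it as `load_breast_cancer`).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# The Wisconsin Diagnostic Breast Cancer dataset (569 samples, 30 features).
X, y = load_breast_cancer(return_X_y=True)

# Flawed protocol: test on the very data the model was trained on.
# An unpruned decision tree simply memorizes the training set.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
train_acc = tree.score(X, y)

# Honest protocol: held-out evaluation via 5-fold cross-validation.
cv_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()

print(f"accuracy on training data: {train_acc:.2f}")
print(f"5-fold cross-validation:   {cv_acc:.2f}")
```

The first number is a perfect 1.00 while the cross-validated score is noticeably lower, which is the gap between memorization and generalization that makes headline accuracy figures easy to inflate.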