Paper: Interaction Design for Explainable AI: Workshop Proceedings

As artificial intelligence (AI) systems become increasingly complex and ubiquitous, they will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified, both for ethical reasons and to build trust, but this is difficult given the ‘black-box’ nature of many AI models. Explainable AI (XAI) can potentially address this problem by explaining the actions, decisions and behaviours of the system to its users. However, much research in XAI is done in a vacuum, relying only on the researchers’ intuition of what constitutes a ‘good’ explanation while ignoring interaction and the human aspect. This workshop invites researchers in the HCI community and related fields to a discourse about human-centred approaches to XAI rooted in interaction, and aims to shed light on and spark discussion of interaction design challenges in XAI.


Paper: Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness

Machine learning (ML), artificial intelligence (AI) and other modern statistical methods are providing new opportunities to operationalize previously untapped and rapidly growing sources of data for patient benefit. Whilst there is a lot of promising research currently being undertaken, the literature as a whole lacks: transparency; clear reporting to facilitate replicability; exploration of potential ethical concerns; and clear demonstrations of effectiveness. There are many reasons why these issues exist, but one of the most important, which we offer a preliminary solution for here, is the current lack of ML/AI-specific best practice guidance. Although there is no consensus on what best practice looks like in this field, we believe that interdisciplinary groups pursuing research and impact projects in the ML/AI for health domain would benefit from answering a series of questions based on the important issues that arise when undertaking work of this nature. Here we present 20 questions that span the entire project life cycle, from inception, data analysis, and model evaluation to implementation, as a means to facilitate project planning and post-hoc (structured) independent evaluation. By beginning to answer these questions in different settings, we can start to understand what constitutes a good answer, and we expect that the resulting discussion will be central to developing an international consensus framework for transparent, replicable, ethical and effective research in artificial intelligence (AI-TREE) for health.


Paper: Bayesian Propagation of Record Linkage Uncertainty into Population Size Estimation of Human Rights Violations

Multiple-systems estimation, or capture-recapture, is a common technique for population size estimation, particularly in the quantitative study of human rights violations. These methods rely on multiple samples from the population, along with information on which individuals appear in which samples. The goal of record linkage techniques is to identify unique individuals across samples based on the information collected on them. Linkage decisions are subject to uncertainty when such information contains errors and missingness, and when different individuals have very similar characteristics. Uncertainty in the linkage should be propagated into the stage of population size estimation. We propose an approach called linkage-averaging to propagate linkage uncertainty, as quantified by some Bayesian record linkage methodologies, into a subsequent stage of population size estimation. Linkage-averaging is a two-stage approach in which the results from the record linkage stage are fed into the population size estimation stage. We show that under some conditions the results of this approach correspond to those of a proper Bayesian joint model for both record linkage and population size estimation. The two-stage nature of linkage-averaging allows us to combine different record linkage models with different capture-recapture models, which facilitates model exploration. We present a case study from the Salvadoran civil war, where we are interested in estimating the total number of civilian killings using lists of witnesses’ reports collected by different organizations. These lists contain duplicates, typographical and spelling errors, missingness, and other inaccuracies that lead to uncertainty in the linkage. We show how linkage-averaging can be used to transfer the uncertainty in the linkage of these lists into different models for population size estimation.
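
To make the two-stage idea concrete, here is a toy Python sketch. It is not the paper's model: the posterior draws of matched records are faked, and the simple Lincoln-Petersen estimator stands in for the paper's Bayesian capture-recapture models. But the flow, feeding each draw from the linkage posterior into the population size stage and then mixing the results, is the linkage-averaging pattern.

```python
# Toy sketch of two-stage linkage-averaging (illustrative assumptions, not
# the paper's model): posterior draws of the linkage between two lists are
# each fed into a capture-recapture estimator, and the resulting estimates
# are averaged over the linkage posterior.
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 300, 250  # sizes of the two lists (assumed known)

# Stage 1 (stand-in): posterior draws of the number of matched records
# between the lists, as a Bayesian record linkage model would produce.
# Here we fake 1000 draws centred near 60 matches to mimic linkage
# uncertainty from errors, missingness, and near-duplicate records.
posterior_matches = rng.binomial(n=120, p=0.5, size=1000)
posterior_matches = np.clip(posterior_matches, 1, min(n1, n2))

# Stage 2: for each linkage draw, compute a population size estimate.
# Lincoln-Petersen: N_hat = n1 * n2 / m, where m = records on both lists.
estimates = n1 * n2 / posterior_matches

# Linkage-averaging: the final answer mixes the per-linkage estimates
# over the linkage posterior, so linkage uncertainty widens the interval.
print(f"posterior mean of N: {estimates.mean():.0f}")
print(f"95% interval: ({np.quantile(estimates, 0.025):.0f}, "
      f"{np.quantile(estimates, 0.975):.0f})")
```

Because the two stages only communicate through the linkage draws, swapping in a different capture-recapture model is a one-line change, which is the model-exploration benefit the abstract describes.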


Paper: Can rationality be measured?

This paper studies whether rationality can be computed. Rationality is defined as the use of complete information, processed with a perfect biological or physical brain, in an optimized fashion. To compute rationality one needs to quantify how complete the information is, how perfect the physical or biological brain is, and how optimized the entire decision-making system is. The rationality of a model (i.e. a physical or biological brain) is measured by the expected accuracy of the model. The rationality of the optimization procedure is measured as the ratio of the achieved objective (i.e. utility) to the global objective. The overall rationality of a decision is measured as the product of the rationality of the model and the rationality of the optimization procedure. The conclusion reached is that rationality can be computed for convex optimization problems.
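
The abstract's measure is simple enough to state directly in code. The sketch below just transcribes the stated definitions; the variable names are illustrative, not from the paper.

```python
# Minimal sketch of the rationality measure as defined in the abstract:
# overall rationality = (rationality of the model) x (rationality of the
# optimization procedure).

def model_rationality(expected_accuracy: float) -> float:
    """Rationality of the model (the physical or biological brain),
    measured by its expected accuracy in [0, 1]."""
    return expected_accuracy

def optimization_rationality(achieved_utility: float,
                             global_optimum_utility: float) -> float:
    """Rationality of the optimization procedure: the fraction of the
    globally optimal objective (utility) actually achieved."""
    return achieved_utility / global_optimum_utility

def overall_rationality(expected_accuracy: float,
                        achieved_utility: float,
                        global_optimum_utility: float) -> float:
    """Overall rationality of a decision: the product of the two factors."""
    return (model_rationality(expected_accuracy)
            * optimization_rationality(achieved_utility,
                                       global_optimum_utility))

# Example: a model with 90% expected accuracy, optimized to 80% of the
# global optimum, has overall rationality 0.9 * 0.8 = 0.72.
print(overall_rationality(0.9, achieved_utility=8.0,
                          global_optimum_utility=10.0))
```

The restriction to convex problems in the paper's conclusion matters here: the global objective in the denominator is only reliably computable when the optimization problem is convex.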


Article: Noodle Notes: Ethical Machine Learning

Here at Noodle we spend a great deal of time thinking of ways to center ethics in our machine learning and data science work. This means that we’re constantly reading about (and thinking about) ethics in AI from every possible angle: from healthcare to industry and beyond. First up is a paper in the AMA Journal of Ethics about AI and healthcare titled ‘Is it ethical to use prognostic estimates from machine learning to treat psychosis?’ You might think that’s looking pretty far into the future, but just this week an insurance company stated that they are enacting a plan to use wearable device information to determine pricing in life insurance, so this is not the future we’re reading about; it’s now.


Article: Infographic: AI – The Dark Side vs. The Force for Good

There’s a lot of talk about the potential evils of artificial intelligence, but in the right hands AI can do extraordinary good. The infographic below, developed by our friends over at Noodle.ai, explores both sides of the story.


Article: Let’s Find Donors For Charity With Machine Learning Models

Welcome to my second Medium post about Data Science. I will write here about a project I’ve done using Machine Learning algorithms. I will explain what I did without relying heavily on technical language, but I will show snippets of my code. Code matters 🙂 The project is a hypothetical case study where I had to identify potential donors to a charity that offers funding to people willing to study machine learning in Silicon Valley. This charity, named CharityML, found that every donor was making more than $50,000 annually. My task was to use machine learning algorithms to help this charity identify potential donors in the entire region of California.
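
For readers who want the gist of the task in code, here is a hedged sketch of the core step, assuming a census-style CSV like the one this kind of project typically uses. The file path, column names and choice of classifier are illustrative assumptions, not the post's actual code.

```python
# Sketch: train a classifier that flags people earning more than
# $50,000/year as potential donors. "census.csv" and the "income" column
# are hypothetical placeholders for the project's dataset.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import fbeta_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("census.csv")                    # hypothetical path
X = pd.get_dummies(data.drop(columns=["income"]))   # one-hot encode features
y = (data["income"] == ">50K").astype(int)          # 1 = potential donor

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = GradientBoostingClassifier().fit(X_train, y_train)

# F0.5 weights precision over recall: contacting someone who won't donate
# costs more than overlooking a possible donor.
print(fbeta_score(y_test, clf.predict(X_test), beta=0.5))
```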


Article: Are Self-Learning Game Players Truly Intelligent?

Are self-learning programs, such as Alpha Zero or its open-source brethren Leela Zero, intelligent? Is this intelligence distinguishable in any meaningful way from that of traditional chess engines such as the world’s strongest chess player, Stockfish? The field of AI is burdened by a subjective definition of intelligence: if you can explain it, then it’s not intelligent. To avoid this mysticism we attempt to come up with a more objective definition. Using the formalism of a Markov Decision Process, ‘an intelligence is an agent that can sense and manipulate its environment to achieve its goals’. The problematic part of this definition is what counts as the ‘it’; but for the purposes of game playing we can treat the player (the agent, the ‘self’) as distinct from the game (the environment).
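
The MDP framing above has a standard concrete shape. The sketch below is a minimal illustration, not from the article: the agent senses the environment (observes a state), manipulates it (chooses an action), and pursues its goals (collects reward), while agent and environment remain distinct objects.

```python
# Minimal agent-environment loop in the MDP sense (illustrative toy).
import random

class Environment:
    """The game: holds the state and responds to the agent's actions."""
    def __init__(self) -> None:
        self.state = 0

    def step(self, action: int) -> tuple[int, float]:
        self.state += action                        # action changes the state
        reward = 1.0 if self.state == 3 else 0.0    # the agent's goal
        return self.state, reward

class Agent:
    """The player (the 'self'): distinct from the game it acts in."""
    def act(self, state: int) -> int:
        return random.choice([-1, 1])               # trivial, non-learning policy

env, agent = Environment(), Agent()
state, total_reward = env.state, 0.0
for _ in range(10):                                 # the sense-act-reward loop
    action = agent.act(state)                       # sense, then decide
    state, reward = env.step(action)                # manipulate the environment
    total_reward += reward                          # progress toward goals
print(total_reward)
```

On this framing, the difference between Stockfish and a self-learning player lies in how the policy inside `act` is obtained, not in the loop itself.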


Article: Explained: A Style-Based Generator Architecture for GANs – Generating and Tuning Realistic Artificial Faces

Generative Adversarial Networks (GANs) are a relatively new concept in Machine Learning, introduced for the first time in 2014. Their goal is to synthesize artificial samples, such as images, that are indistinguishable from authentic ones. A common example of a GAN application is generating artificial face images by learning from a dataset of celebrity faces. While GAN images have become more realistic over time, one of the main challenges is controlling their output, i.e. changing specific features such as pose, face shape and hair style in an image of a face.
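
For intuition, here is a minimal sketch of the adversarial setup, assuming PyTorch. It trains a toy vanilla GAN to mimic a 1-D Gaussian rather than faces, and it is the original 2014 recipe, not the style-based architecture the article explains; the generator/discriminator interplay is the same idea.

```python
# Toy GAN: a generator learns to turn noise into samples the discriminator
# cannot tell apart from 'authentic' data drawn from N(4.0, 1.5).
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # authentic samples
    fake = G(torch.randn(64, 8))            # synthesized samples from noise

    # Discriminator: label real as 1, fake as 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # approaches 4.0 and 1.5
```

Note that nothing in this vanilla loop lets you dial a specific feature of the output up or down; that lack of control is exactly the problem the style-based generator is designed to address.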