Article: ‘Modern Times Anxiety’ in AI: Are we there yet?
I recently came across this panel discussion from the 1984 AAAI conference and, weirdly enough, it felt both ancient and relevant at the same time. I was hoping to throw my thoughts into this pit too.
Paper: Decentralising power: how we are trying to keep CALLector ethical
We present a brief overview of the CALLector project, and consider ethical questions arising from its overall goal of creating a social network to support the creation and use of online CALL resources. We argue that these questions are best addressed by a decentralised, pluralistic open source architecture.
Paper: Interpreting Social Respect: A Normative Lens for ML Models
Machine learning is often viewed as an inherently value-neutral process: statistical tendencies in the training inputs are ‘simply’ used to generalize to new examples. However, when models impact social systems such as interactions between humans, the patterns learned by models have normative implications. It is important that we ask not only ‘what patterns exist in the data?’, but also ‘how do we want our system to impact people?’ In particular, because minority and marginalized members of society are often statistically underrepresented in data sets, models may have an undesirable disparate impact on such groups. As such, objectives of social equity and distributive justice require that we develop tools for both identifying and interpreting harms introduced by models.
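To make the notion of disparate impact mentioned in the abstract concrete, here is a minimal sketch (not from the paper; all data and the "80% rule" threshold are illustrative) of measuring it as the ratio of favourable-outcome rates between two groups:

```python
# Toy sketch of a disparate-impact check: the ratio of positive-outcome
# rates between two groups, per the common "80% rule" heuristic.
# All decisions below are invented for illustration.

def positive_rate(outcomes):
    """Fraction of 1s (favourable decisions) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower positive rate to the higher one; 1.0 is parity."""
    ra, rb = positive_rate(group_a), positive_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model decisions for a majority and a minority group.
majority = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 = 0.75 approved
minority = [1, 0, 0, 1, 0, 0, 0, 0]   # 2/8 = 0.25 approved

ratio = disparate_impact(majority, minority)
print(round(ratio, 3))   # 0.333 -- well below the 0.8 rule of thumb
```

A ratio this far below 1.0 is exactly the kind of harm the paper argues we need tools to identify and interpret, even when overall accuracy looks fine.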
Paper: Learning Fair Rule Lists
The widespread use of machine learning models, especially within the context of decision-making systems impacting individuals, raises many ethical issues with respect to the fairness and interpretability of these models. While research in these domains is booming, very few works have addressed the two issues simultaneously. To address this shortcoming, we propose FairCORELS, a supervised learning algorithm whose objective is to learn models that are at once fair and interpretable. FairCORELS is a multi-objective variant of CORELS, a branch-and-bound algorithm designed to compute accurate and interpretable rule lists. By jointly addressing fairness and interpretability, FairCORELS can achieve better fairness/accuracy tradeoffs than existing methods, as demonstrated by an empirical evaluation on real datasets. Our paper also contains additional contributions regarding search strategies for optimizing the multi-objective function integrating fairness, accuracy, and interpretability.
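To illustrate the kind of multi-objective criterion the abstract describes, here is a toy sketch (not the FairCORELS algorithm itself; all rule, data, and group names are hypothetical) that scores a candidate rule list by a weighted sum of misclassification error and a statistical-parity gap:

```python
# Toy sketch: a "rule list" is an ordered list of (condition, prediction)
# pairs plus a default prediction. Candidates are scored by
#   error + lam * unfairness,
# a simplified stand-in for the fairness/accuracy tradeoff in the paper.

def apply_rule_list(rules, default, x):
    """Return the prediction of the first rule whose condition fires."""
    for condition, prediction in rules:
        if condition(x):
            return prediction
    return default

def score(rules, default, data, lam):
    """data: list of (features, label, group); lower score is better."""
    preds = [apply_rule_list(rules, default, x) for x, _, _ in data]
    error = sum(p != y for p, (_, y, _) in zip(preds, data)) / len(data)
    rate = lambda g: (sum(p for p, (_, _, grp) in zip(preds, data) if grp == g)
                      / sum(grp == g for _, _, grp in data))
    unfairness = abs(rate("A") - rate("B"))   # statistical parity gap
    return error + lam * unfairness

# Hypothetical loan data: ({"income": ...}, label, group).
data = [({"income": 60}, 1, "A"), ({"income": 20}, 0, "A"),
        ({"income": 55}, 1, "B"), ({"income": 25}, 0, "B")]

rules = [(lambda x: x["income"] > 40, 1)]
print(score(rules, 0, data, lam=0.5))   # 0.0: accurate and parity-equal here
```

A branch-and-bound search like CORELS would enumerate many such rule lists and prune by bounds on this objective; the sketch only shows the scoring step.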
Paper: Recognizing Human Internal States: A Conceptor-Based Approach
The past few decades have seen increased interest in the application of social robots as behavioural coaches in interventions for Autism Spectrum Disorder. We consider that robots embedded in therapies could also provide quantitative diagnostic information by observing patient behaviours. The social nature of ASD symptoms means that, to achieve this, robots need to be able to recognize the internal states their human interaction partners are experiencing, e.g. states of confusion, engagement, etc. This problem can be broken down into two questions: (1) what information, accessible to robots, can be used to recognize internal states, and (2) how can a system classify internal states such that it allows for sufficiently detailed diagnostic information? In this paper we discuss these two questions in depth and propose a novel, conceptor-based classifier. We report the initial results of this system in a proof-of-concept study and outline plans for future work.
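For readers unfamiliar with conceptors, here is a toy sketch of the underlying idea from the reservoir-computing literature (the standard formula C = R(R + a⁻²I)⁻¹, not the paper's actual implementation; states and the aperture value are invented), worked in two dimensions so plain Python suffices:

```python
# Toy sketch of a conceptor: given state vectors collected while a
# reservoir processes one class of input, the conceptor matrix is
#   C = R (R + a^-2 I)^-1,   R = state correlation matrix,
# where a is the "aperture". Classification then compares new states
# against each class's conceptor. Worked here for 2-dimensional states.

def correlation(states):
    """R[i][j] = mean of s[i]*s[j] over the recorded states (2-D case)."""
    n = len(states)
    return [[sum(s[i] * s[j] for s in states) / n for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conceptor(states, aperture):
    R = correlation(states)
    reg = [[R[i][j] + (1 if i == j else 0) / aperture ** 2 for j in range(2)]
           for i in range(2)]
    return matmul(R, inv2(reg))

# Hypothetical reservoir states recorded during one internal state,
# e.g. "engaged": strong activity in dimension 0, little in dimension 1.
states = [[1.0, 0.1], [0.9, 0.0], [1.1, -0.1]]
C = conceptor(states, aperture=10.0)
```

By construction the conceptor's eigenvalues lie in [0, 1), so it acts as a soft projection onto the directions a given internal state actually excites.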
Article: The Work of the Future: Shaping Technology and Institutions
Technological change has been reshaping human life and work for centuries. The mechanization that began with the Industrial Revolution enabled dramatic improvements in human health, well-being, and quality of life – not only in the developed countries of the West, but increasingly throughout the world. At the same time, economic and social disruptions often accompanied those changes, with painful and lasting results for workers, their families, and communities. Along the way, valuable skills, industries, and ways of life were lost. Ultimately new and unforeseen occupations, industries, and amenities took their place. But the benefits of these upheavals often took decades to arrive. And the eventual beneficiaries were not necessarily those who bore the initial costs.
The world now stands on the cusp of a technological revolution in artificial intelligence and robotics that may prove as transformative for economic growth and human potential as were electrification, mass production, and electronic telecommunications in their eras. New and emerging technologies will raise aggregate economic output and boost the wealth of nations. Will these developments enable people to attain higher living standards, better working conditions, greater economic security, and improved health and longevity? The answers to these questions are not predetermined. They depend upon the institutions, investments, and policies that we deploy to harness the opportunities and confront the challenges posed by this new era.
How can we move beyond unhelpful prognostications about the supposed end of work and toward insights that will enable policymakers, businesses, and people to better navigate the disruptions that are coming and underway? What lessons should we take from previous epochs of rapid technological change? How is it different this time? And how can we strengthen institutions, make investments, and forge policies to ensure that the labor market of the 21st century enables workers to contribute and succeed?
Article: Making Fairness an Intrinsic Part of Machine Learning
The suitability of machine learning models is traditionally measured by accuracy. A model that scores well on metrics like RMSE, MAPE, AUC, ROC, or Gini is considered a high-performing model. While such accuracy metrics are important, are there other metrics that the data science community has been ignoring so far? The answer is yes – in the pursuit of accuracy, most models sacrifice ‘fairness’ and ‘interpretability.’ Rarely does a data scientist dissect a model to find out whether it follows ethical norms. This is where machine learning fairness and interpretability of models come into play.
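One way to make fairness intrinsic rather than an afterthought is simply to report it next to the usual accuracy metrics. The sketch below (all numbers invented; the group-gap measure is one crude fairness signal among many) computes RMSE and MAPE alongside a mean-prediction gap between two groups:

```python
import math

# Illustrative sketch: report a fairness signal alongside standard
# accuracy metrics, so a model isn't judged on RMSE/MAPE alone.

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes nonzero targets)."""
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def group_gap(y_pred, groups):
    """Mean-prediction gap between groups A and B: a crude fairness check."""
    mean = lambda g: (sum(p for p, grp in zip(y_pred, groups) if grp == g)
                      / sum(grp == g for grp in groups))
    return abs(mean("A") - mean("B"))

# Made-up regression targets, predictions, and group memberships.
y_true = [100, 120, 80, 90]
y_pred = [110, 115, 70, 95]
groups = ["A", "A", "B", "B"]

print(round(rmse(y_true, y_pred), 2))      # 7.91
print(round(group_gap(y_pred, groups), 1))  # 30.0
```

Here the error looks modest, but the 30-point gap between groups is the kind of signal an accuracy-only dashboard would never surface.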
Article: AI Safety and Intellectual Debt
A friend shared Jonathan Zittrain’s New Yorker article The Hidden Costs of Automated Thinking. Zittrain refers first to the pharmaceutical industry and how not all drugs are fully understood beyond the fact that they work. He draws a parallel from this to the discussion around automation and artificial intelligence, machine learning techniques in particular. He notes that ‘theory-free’ advances can be indispensable to the development of life-saving drugs, but that they come at a cost. He also mentions that altering a few pixels in a photograph can fool an algorithm, and that such systems can have unknown gaps. In this article, I would like first to understand slightly better who Jonathan is, and second to reflect on this concept of intellectual debt. The subtitle of this piece is taken from one of the headings of Jonathan Zittrain’s post on Medium called Intellectual Debt: With Great Power Comes Great Ignorance. As a quick disclaimer, these texts are short reflections written as part of my project #500daysofAI and as such will not be comprehensive; it is a process of learning every day about the topic.
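The "few altered pixels" phenomenon Zittrain points to can be shown on a toy scale. The sketch below (all weights and inputs invented; this is an FGSM-style perturbation on a linear classifier, not a real image model) flips a prediction by nudging each "pixel" slightly against the model's decision direction:

```python
# Toy sketch of an adversarial perturbation: nudge each input feature
# by eps in the direction that most decreases the model's score,
# i.e. x' = x - eps * sign(w) for a linear classifier.

def predict(w, b, x):
    """1 if the linear score w.x + b is positive, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def sign(v):
    return (v > 0) - (v < 0)

def adversarial(w, x, eps):
    """Shift each 'pixel' of x by eps against the decision direction."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

# A hypothetical trained linear model and an input it classifies as 1.
w, b = [0.5, -0.3, 0.2], 0.0
x = [0.4, 0.1, 0.3]              # score = 0.2 - 0.03 + 0.06 = 0.23 > 0
x_adv = adversarial(w, x, eps=0.3)

print(predict(w, b, x), predict(w, b, x_adv))   # 1 0 -- small shift flips it
```

The model has no "theory" of why the original input was class 1, only a learned score surface, which is precisely the kind of unexamined gap Zittrain files under intellectual debt.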