Article: Racial Bias in Conversational Artificial Intelligence

Conversational AI is an emerging field of human-computer interaction in which we use natural language to exchange information and issue commands to computers. Almost any interface with a digital device can be replaced or augmented with an AI-enabled conversational interface. Examples include chatbots and speech-based assistants such as Siri. The modern smart-city concept also involves a ‘citizen-centric’ services model that uses conversational AI interfaces to personalize and contextualize city services. One example is Citibot, a citizen engagement platform. Similarly, Vienna’s WienBot allows residents and tourists to find common civic services such as parking, restrooms, restaurants, and other facilities, so they no longer need to rely on the kindness of strangers or scroll through long lists on websites.


Article: Building inclusion, fairness, and ethics into machine learning

Andrew Zaldivar is a Developer Advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that can help communities build responsible AI systems, writing posts for the Google Developers blog and speaking at a variety of conferences. Before joining Google AI, Andrew was a Senior Strategist in Google’s Trust & Safety group, where he worked on protecting the integrity of some of Google’s key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Prior to joining Google, Andrew completed his Ph.D. in Cognitive Neuroscience at the University of California, Irvine and was an Insight Data Science fellow. Here, Andrew shares details on his role at Google, his personal and professional passions, and how he applies his academic background to his work creating and sharing tools that help teams build more inclusive products and user experiences.


Paper: Software Engineering for Fairness: A Case Study with Hyperparameter Optimization

We assert that it is the ethical duty of software engineers to strive to reduce software discrimination. This paper discusses how that might be done. This is an important topic since machine learning software is increasingly being used to make decisions that affect people’s lives. Potentially, the application of that software will result in fairer decisions because (unlike humans) machine learning software is not biased. However, recent results show that the software within many data mining packages exhibits ‘group discrimination’; i.e., their decisions are inappropriately affected by ‘protected attributes’ (e.g., race, gender, age, etc.). There has been much prior work on validating the fairness of machine-learning models (by recognizing when such software discrimination exists). But after detection comes mitigation. What steps can ethical software engineers take to reduce discrimination in the software they produce? This paper shows that making fairness a goal during hyperparameter optimization can (a) preserve the predictive power of a model learned from a data miner while (b) generating fairer results. To the best of our knowledge, this is the first application of hyperparameter optimization as a tool for software engineers to generate fairer software.
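The core idea, tuning hyperparameters against a combined accuracy-and-fairness objective rather than accuracy alone, can be illustrated with a minimal sketch. The paper’s own optimizer and fairness measures are not reproduced here; the grid search, decision-tree learner, and demographic-parity gap below are assumptions chosen only for illustration.

```python
# Minimal sketch: fairness-aware hyperparameter search (illustrative, not the paper's method).
# Assumes numpy arrays X (features), y (binary labels), and a binary protected attribute.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def demographic_parity_gap(y_pred, protected):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[protected == 0].mean() - y_pred[protected == 1].mean())

def fairness_aware_search(X, y, protected, param_grid, fairness_weight=1.0, seed=0):
    """Grid-search hyperparameters, scoring accuracy minus a weighted fairness penalty."""
    X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
        X, y, protected, test_size=0.3, random_state=seed)
    best_score, best_params = -np.inf, None
    for max_depth in param_grid["max_depth"]:
        for min_leaf in param_grid["min_samples_leaf"]:
            clf = DecisionTreeClassifier(
                max_depth=max_depth, min_samples_leaf=min_leaf, random_state=seed)
            clf.fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            # Combined objective: keep predictive power, penalize group discrimination.
            score = accuracy_score(y_te, pred) - fairness_weight * demographic_parity_gap(pred, p_te)
            if score > best_score:
                best_score = score
                best_params = {"max_depth": max_depth, "min_samples_leaf": min_leaf}
    return best_params, best_score
```

With fairness_weight set to zero this reduces to ordinary accuracy-only tuning; increasing the weight trades some predictive power for a smaller group-discrimination gap, which is the trade-off the paper argues can be managed at the hyperparameter level.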


Paper: From What to How. An Overview of AI Ethics Tools, Methods and Research to Translate Principles into Practices

The debate about the ethical implications of Artificial Intelligence dates from the 1960s. However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such debate has primarily focused on principles – the what of AI ethics – rather than on practices, the how. Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers apply ethics at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.


Paper: Contrastive Fairness in Machine Learning

We present contrastive fairness, a new direction in causal inference applied to algorithmic fairness. Earlier methods dealt with the ‘what if?’ question (counterfactual fairness, NeurIPS’17). We establish the theoretical and mathematical implications of the contrastive question ‘why this and not that?’ in the context of algorithmic fairness in machine learning. This is essential to defend the fairness of algorithmic decisions in tasks where a person or sub-group of people is chosen over another (job recruitment, university admission, company layoffs, etc.). This development also helps institutions ensure or defend the fairness of their automated decision-making processes. A test case of employee job location allocation is provided as an illustrative example.


Article: Ian McEwan on His New Novel and Ethics in the Age of A.I.

When we program morality into robots, are we doomed to disappoint them with our very human ethical inconsistency?


Article: The human problem of AI

When it comes to most things business, AI is making its mark as the must-have technology. Whether we are talking about customer-facing chatbots to help with engagement and conversion or AI working in the background to help make critical business decisions, AI is everywhere. And the expectations of what it can and should be able to do are often sky-high. When those expectations aren’t met, however, it’s not always the tech that’s to blame. More likely, it’s the humans who brought it on board. Here are some of the most common human errors when it comes to implementing AI.
Mistake #1: Confusing automation with AI
Mistake #2: Not determining success factors
Mistake #3: Not getting organizational buy-in
Mistake #4: Not considering the impact on the entire customer journey
Mistake #5: Not understanding the cause of the problems you’re trying to solve


Article: The Future of Life Institute (FLI)

Mission: ‘To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.’ We have technology to thank for all the ways in which today is better than the stone age, and technology is likely to keep improving at an accelerating pace. We are a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity. With less powerful technologies such as fire, we learned to minimize risks largely by learning from mistakes. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, planning ahead is a better strategy than learning from mistakes, so we support research and other efforts aimed at avoiding problems in the first place. We are currently focusing on keeping artificial intelligence beneficial and we are also exploring ways of reducing risks from nuclear weapons and biotechnology. FLI is based in the Boston area, and welcomes the participation of scientists, students, philanthropists, and others nearby and around the world.