Paper: Explaining Explanations to Society ()

There is a disconnect between explanatory artificial intelligence (XAI) methods and the kinds of explanations that are useful for and demanded by society (policy makers, government officials, etc.). The questions that artificial intelligence (AI) experts ask of opaque systems elicit inside explanations, focused on debugging, reliability, and validation. These differ from the questions society will ask of such systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts want answered, they often do not explain why they made their decisions in a way that is both precise (true to the model) and understandable to humans. Such outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs’ widespread use in decision-making and their inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss the challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.

Article: Our Software Dependency Problem ()

For decades, discussion of software reuse was far more common than actual software reuse. Today, the situation is reversed: developers reuse software written by others every day, in the form of software dependencies, and the situation goes mostly unexamined. My own background includes a decade of working with Google’s internal source code system, which treats software dependencies as a first-class concept [1], and also developing support for dependencies in the Go programming language [2]. Software dependencies carry with them serious risks that are too often overlooked. The shift to easy, fine-grained software reuse has happened so quickly that we do not yet understand the best practices for choosing and using dependencies effectively, or even for deciding when they are appropriate and when they are not. My purpose in writing this article is to raise awareness of the risks and encourage more investigation of solutions.

Book: Who’s Afraid of AI? – What to Fear and How to Love the Dawning Robot Age ()

A penetrating guide to artificial intelligence: what it is, how it works, and the ways it will define our lives – for better and worse. Computer programs can recognize human faces more reliably than humans. They beat us at board games, they bluff better than the best poker players in the world, and some of them can almost pass as human. At a breathtaking pace, machines are becoming better and faster at making complex decisions – even compared to us. In Who’s Afraid of AI?, a guide to the most awe-inspiring AI achievements – as well as the most frightening – award-winning author Thomas Ramge expertly explains how machines are learning to learn. Plus, he turns our gaze toward the future as he ponders the greatest AI conundrum: What will become of humans when smart machines become more intelligent than us? What happens when, in many ways, we’re obsolete?

Book: Artificial Intelligence Brings Positive or Negative Impaction to – Influence Human Job Nature ()

Advances in artificial intelligence (AI) technology are driving progress in critical areas such as health, education, energy, economic inclusion, social welfare, and the environment. But will AI’s influence on the nature of human work be positive or negative? This raises a question: can AI robotic workers replace traditional human workers in these different new markets, and would such a change affect the nature of human jobs for better or for worse? In recent years, machines have taken over certain tasks that once required human intelligence, such as aspects of image recognition. Experts also forecast that rapid progress in the field of specialized artificial intelligence will continue. This raises a further question: as AI exceeds human performance on more and more tasks, will it replace human jobs? If so, will some human jobs disappear altogether? If AI takes over some simple jobs, unemployment among low-skilled and less-educated workers will rise. Will AI raise production and performance, or raise unemployment, bringing the human job market more advantages or more disadvantages? In this book, I explain whether AI will bring benefits or disadvantages to the human job market, and I give examples to help my readers weigh how I reach my final viewpoint.

Book: All Data Are Local – Thinking Critically in a Data-Driven Society ()

How to analyze data settings rather than data sets, acknowledging the meaning-making power of the local. In our data-driven society, it is too easy to assume the transparency of data. Instead, Yanni Loukissas argues in All Data Are Local, we should approach data sets with an awareness that data are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them. All data are local. The term data set implies something discrete, complete, and portable, but it is none of those things. Examining a series of data sources important for understanding the state of public life in the United States (Harvard’s Arnold Arboretum, the Digital Public Library of America, UCLA’s Television News Archive, and the real estate marketplace Zillow), Loukissas shows us how to analyze data settings rather than data sets. Loukissas sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. He then provides a set of practical guidelines to follow. To make his argument, Loukissas employs a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the ‘myth of digital universalism,’ Loukissas reminds us of the meaning-making power of the local.

Paper: Forecasting Transformative AI: An Expert Survey ()

Transformative AI technologies have the potential to reshape critical aspects of society in the near future. However, in order to properly prepare policy initiatives for the arrival of such technologies, accurate forecasts and timelines are necessary. A survey was administered to attendees of three AI conferences during the summer of 2018 (ICML, IJCAI, and the HLAI conference). The survey included questions for estimating AI capabilities over the next decade, questions for forecasting five scenarios of transformative AI, and questions concerning the impact of computational resources in AI research. Respondents indicated that a median of 21.5% of human tasks (i.e., all tasks that humans are currently paid to do) can be feasibly automated now, and that this figure would rise to 40% in 5 years and 60% in 10 years. Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years. The conference attended was found to have a statistically significant effect on all forecasts, with attendees of HLAI providing more optimistic timelines with less uncertainty. These findings suggest that AI experts expect major advances in AI technology to continue over the next decade, to a degree that will likely have profound transformative impacts on society.

Article: Training AI to Save Lives ()

Life is full of risks, some of them technological in nature. One can easily imagine how almost any technology, no matter how benign its intended function, could put people’s lives in danger. Nevertheless, these imaginative leaps shouldn’t keep society from continuing to roll out technological innovations. As artificial intelligence becomes the backbone of self-driving vehicles and other 21st century innovations, many people are holding their breath, just waiting for something to go dangerously wrong. It’s not just the nervous nellies of this world who feel trepidations about AI. Even the technology’s experts seem to be engaging in a sort of AI death watch, as evidenced by sentiments such as “Who Do We Blame When an AI Finally Kills Somebody?”, expressed in a recent blog post by Bill Vorhies, editorial director of Data Science Central.

Article: A Proposed Model AI Governance Framework ()

The PDPC presents the first edition of A Proposed Model AI Governance Framework (Model Framework) – an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way. The Model Framework translates ethical principles into practical measures that can be implemented by organisations deploying AI solutions at scale. Through the Model Framework, we aim to promote AI adoption while building consumer confidence and trust in providing their personal data for AI.

Article: How to develop data products and not die trying ()

In these days of data accumulation, there is a global craving for the innovative and business use of AI at all levels. Maybe it’s time to stop and reflect on that burning desire to use AI everywhere, and consider ‘the Law of the Instrument’ for a moment: ‘if all you have is a hammer, everything looks like a nail’.

Article: Using AI For Good ()

Recently, I have come across quite a few articles stating how artificial intelligence may threaten the developing world by eliminating the need for repetitive, labor-intensive manufacturing roles. Automation of factories can potentially lead to higher unemployment rates in poorer nations, thereby disrupting local economies and causing other social issues. Is AI nothing but a huge threat to the developing world?