Why we need to address bias in human decision-making to improve technology and its future governance, and how to do so.
Computers are increasingly using our data to make decisions about us, but can we trust them?
A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur.

This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, ‘Using AI in the Public Sector,’ identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.
‘The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.’
• Facebook VP Andrew Bosworth, 18 June 2016, as leaked to Buzzfeed

‘Watch time was the priority… Everything else was considered a distraction.’
• (ex-)Google engineer Guillaume Chaslot, as quoted in the Guardian, 2 Feb 2018, describing YouTube’s recommendation engine’s sole KPI

‘Software is eating the world’, the venture capitalist Marc Andreessen warned us in 2011, and, more and more, the software eating our world is also shaping our professional, political, and personal realities via machine learning. These include, for example, the recommendation algorithms selecting what items appear in our social feeds, selecting the next autoplay video on YouTube, or recommending ‘related’ products for purchase on Amazon.
In response to the release of a report from the High-Level Expert Group on Artificial Intelligence (AI) offering policy and investment recommendations, the Center for Data Innovation released the following statement from Senior Policy Analyst Eline Chivot: The report includes a range of appropriate solutions to support the development and uptake of AI, including talent retention and mobility strategies, the identification of key sectors for applied AI research, regulatory sandboxes, a better transfer of research results to the market to facilitate the commercialization of AI systems, the integration of existing research networks, and the increased availability of large data sets. The report also constructively recommends that policymakers avoid ‘unnecessarily prescriptive regulation’ and ‘cumulative regulatory interventions at the sectoral level’, which could have a chilling effect on innovation, and instead suggests using broad principles as guidance.
Article: The rise of data and AI ethics
As technology enables the tracking of huge amounts of personal data, data ethics can be tricky, with very little covered by existing law. Governments are at the center of the data ethics debate in two important ways.
We critique recent work on ethics in natural language processing. Those discussions have focused on data collection, experimental design, and interventions in modeling, but we argue that we ought first to understand the frameworks of ethics that are being used to evaluate the fairness and justice of algorithmic systems. Here, we begin that discussion by outlining deontological ethics and envisioning a research agenda prioritized by it.
AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.