We present a response to the 2018 Request for Information (RFI) from the NITRD National Coordination Office (NCO) and the NSF regarding the ‘Update to the 2016 National Artificial Intelligence Research and Development Strategic Plan.’ Through this document, we address the question of whether and how the National Artificial Intelligence Research and Development Strategic Plan (NAIRDSP) should be updated from the perspective of Fermilab, America’s premier national laboratory for High Energy Physics (HEP). We believe the NAIRDSP should be extended in light of the rapid pace of development and innovation in the field of Artificial Intelligence (AI) since 2016, and we present our recommendations below. AI has profoundly impacted many areas of human life and promises to dramatically reshape society (e.g., the economy, education, science) in the coming years. We are still early in this process. It is critical to invest in this technology now to ensure it is safe and deployed ethically. Science and society both have a strong need for accuracy, efficiency, transparency, and accountability in algorithms, making investments in scientific AI particularly valuable. Thus far the US has been a leader in AI technologies, and we believe that, as a national laboratory, it is crucial for Fermilab to help maintain and extend this leadership. Moreover, investments in AI will be important for maintaining US leadership in the physical sciences.
There is substantial evidence that Artificial Intelligence (AI) and Machine Learning (ML) algorithms can generate bias against minorities, women, and other protected classes. Federal and state laws have been enacted to protect consumers from discrimination in credit, housing, and employment, and regulators and agencies are tasked with enforcing these laws. Additionally, there are laws in place to ensure that consumers understand why they are denied access to services and products, such as consumer loans. In this article, we provide an overview of the potential benefits and risks associated with the use of algorithms and data, focusing specifically on fairness. While our observations generalize to many contexts, we concentrate on the fairness concerns raised in consumer credit and the legal requirements of the Equal Credit Opportunity Act. We propose a methodology for evaluating algorithmic fairness and minimizing algorithmic bias that aligns with the provisions of federal and state anti-discrimination statutes outlawing overt discrimination, disparate treatment, and, in particular, disparate impact discrimination. We argue that while the use of AI and ML algorithms heightens potential discrimination risks, these risks can be evaluated and mitigated; doing so, however, requires a deep understanding of these algorithms and of the contexts and domains in which they are being used.
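To make the notion of disparate impact concrete, the sketch below implements the ‘four-fifths’ adverse impact ratio, a standard first-pass screen in fair lending and employment contexts. It is a hypothetical illustration with invented data and function names, not the fairness-evaluation methodology the article proposes.

```python
# Illustrative sketch only -- not the evaluation methodology proposed in the article.
# It shows one common first-pass screen for disparate impact: the "four-fifths"
# (adverse impact ratio) rule of thumb used by US regulators. All data, group
# labels, and function names below are hypothetical.

def approval_rate(outcomes):
    """Fraction of applicants approved, where 1 = approved and 0 = denied."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, reference_outcomes):
    """Protected group's approval rate divided by the reference group's.
    Ratios below ~0.8 are commonly treated as a flag for further review."""
    return approval_rate(protected_outcomes) / approval_rate(reference_outcomes)

if __name__ == "__main__":
    # Hypothetical model decisions for two applicant groups.
    reference_group = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
    protected_group = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # 50% approved

    air = adverse_impact_ratio(protected_group, reference_group)
    print(f"Adverse impact ratio: {air:.2f}")
    if air < 0.8:
        print("Below the four-fifths threshold: examine the model for disparate impact.")
```

A ratio this far below 0.8 would not by itself establish illegal discrimination, but it is the kind of signal that would trigger the deeper, context-specific analysis the article argues for.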
Article: AI and Accessibility: A Discussion of Ethical Considerations
According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must also be addressed.
Article: How to recognize AI snake oil
Much of what’s being sold as ‘AI’ today is snake oil – it does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?
Paper: ‘The Human Body is a Black Box’: Supporting Clinical Decision-Making with Deep Learning
Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real-world implementation and the associated challenges to accuracy, fairness, accountability, and transparency that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine-learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research on mechanisms of institutional accountability and on considerations for designing machine learning systems. It underscores the limits of model interpretability as a solution for ensuring transparency, accuracy, and accountability in practice, and instead demonstrates other means and goals for achieving FATML values in design and in practice.
Paper: Forbidden knowledge in machine learning — Reflections on the limits of research and publication
Certain research strands can yield ‘forbidden knowledge’. This term refers to knowledge that is considered too sensitive, dangerous, or taboo to be produced or shared. Discourses about such publication restrictions are already entrenched in scientific fields like IT security, synthetic biology, and nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can easily be misused and lead to harmful consequences, for instance with regard to generative video or text synthesis, personality analysis, behavior manipulation, software vulnerability detection, and the like. Up to now, the machine learning research community has embraced the idea of open access. However, this openness stands in opposition to precautionary efforts to prevent the malicious use of machine learning applications. Information about or from such applications may, if improperly disclosed, cause harm to people, organizations, or whole societies. Hence, the goal of this work is to outline norms that can help decide whether and when the dissemination of such information should be prevented. It proposes review parameters for the machine learning community to establish an ethical framework on how to deal with forbidden knowledge and dual-use applications.
As AI systems become prevalent in high-stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them safely. However, it remains unclear what harms these systems may cause to stakeholders in complex social contexts, and how those harms should be addressed. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang’s theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson’s work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive, and discursive challenges that address normative uncertainty across stakeholders, and we propose the cultivation of related virtues by those responsible for development.
Article: How to Build AI That Won’t Destroy Us
It’s hard to find a discussion about AI safety that doesn’t focus on control. The logic is, if we’re not controlling it, something bad will happen. This sounds to me like actual, real-life madness. Do we honestly think that ‘laws’, ‘control structures’ or human goals will matter to a super-intelligent machine? You may as well tell me that ants run the world. We need to look more closely at nature. Our idea that the world is a hostile, dog-eat-dog sort of place isn’t as old or as well-placed as we think. Nor is our control fetish. There might be solutions for us in the way that complex natural systems stay stable.