Article: AI Ethics Guidelines Every CIO Should Read

You don’t need to come up with an AI ethics framework out of thin air. Here are five of the best resources to get technology and ethics leaders started.
• Future of Life Institute
• IAPP
• IEEE
• The Public Voice
• Council of Europe


Article: Artificial Intelligence – Ethics vs. World Domination?

I was contacted by a friend who is helping to host an event at a large business conference in Norway where industry and politicians meet. The event is called ‘Artificial Intelligence – Ethics vs. World Domination?’. In that context I was sent a series of questions relating to the topic, concerning competitiveness, human-centric AI, Norwegian interests and socially responsible AI, and I will do my best to answer them. First, let us begin with the description of the event.


Article: You Can’t Fix Unethical Design by Yourself

Nearly every tech conference right now has at least one session about ethics, if not many: ethics in artificial intelligence, introductions to data ethics, why letting the internet go to sleep is the ethical thing to do, or simply integrating the basics of ethics into your design. We as a community are doing a great job of raising questions about the implications of technology and spreading awareness of its potential for harm.


Paper: A 20-Year Community Roadmap for Artificial Intelligence Research in the US

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Article: AI Justice: When AI Principles Are Not Enough

Fluxus Landscape is an art and research project mapping about 500 stakeholders and actors in AI ethics and governance. It casts a broad net, and each included stakeholder defines artificial intelligence and ethics in their own terms. Together, they create a snapshot of the organic structure of social change, showing us that development at speed can create vortices of thought and intellectual dead zones open to exploitation.


Article: Fluxus Landscape

Fluxus Landscape is an art and research project created in partnership with the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, with support from the Stanford Institute for Human-Centered Artificial Intelligence.


Paper: ‘Conservatives Overfit, Liberals Underfit’: The Social-Psychological Control of Affect and Uncertainty

The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision-theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual-process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological and sociological theory. We introduce a revised BayesAct model that more deeply integrates social-psychological theorising, and we demonstrate that a key component of the model is sufficient to account for cognitive biases about fairness, dissonance and conformity. We close with ethical and philosophical discussion.
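The trade-off the abstract describes, between affective alignment and decision-theoretic reasoning as a function of uncertainty, can be illustrated with a small sketch. The Python below is not the paper's implementation: the weighting function, the deflection numbers and the action names are hypothetical stand-ins, assuming only the high-level idea that the more unpredictable the situation, the more weight an agent puts on affectively coherent behaviour.

# Illustrative sketch only: a hypothetical uncertainty-weighted blend of
# task value and affective coherence, in the spirit of BayesAct's idea.
# All names and numbers are invented for illustration, not taken from
# the BayesAct papers.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_value: float   # expected decision-theoretic payoff (hypothetical)
    deflection: float   # distance from culturally expected affect (hypothetical)

def choose_action(actions, uncertainty):
    """Pick the action maximising a blend of task payoff and affective
    coherence. 'uncertainty' lies in [0, 1]; here it directly sets the
    weight on low deflection, whereas BayesAct derives the balance
    within its probabilistic model of the interaction."""
    w = uncertainty
    def score(a):
        return (1 - w) * a.task_value - w * a.deflection
    return max(actions, key=score)

actions = [
    Action("blunt_correction", task_value=1.0, deflection=2.5),
    Action("polite_suggestion", task_value=0.6, deflection=0.3),
]

print(choose_action(actions, uncertainty=0.1).name)  # predictable setting -> blunt_correction
print(choose_action(actions, uncertainty=0.9).name)  # unpredictable setting -> polite_suggestion

In a predictable setting the higher-payoff action wins; as uncertainty rises, the affectively coherent action takes over, which is the qualitative behaviour the abstract attributes to BayesAct agents.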


Article: Safe Artificial General Intelligence

The Future of Life Institute (FLI) keeps appearing in the articles and areas of artificial intelligence I have been reading, at least where I have looked. The organisation seems concerned with the unknown future and how it affects us, so it is a natural stop now that I have been exploring the topic of AI Safety. Over the last five years FLI has funded a series of projects in two grant rounds, both backed by Elon Musk together with various research institutes: the first round, in 2015, focused on AI Safety researchers, and the second, in 2018, focused on artificial general intelligence (AGI) Safety researchers. Since the project summaries are all available online, I decided to have a think about each in turn.