Paper: Ethics of Artificial Intelligence Demarcations

In this paper we present a set of key demarcations that are particularly important when discussing ethical and societal issues of current AI research and applications. Properly distinguishing between issues and concerns related to Artificial General Intelligence and weak AI, between symbolic and connectionist AI, and between AI methods, data, and applications is a prerequisite for an informed debate. Such demarcations would not only facilitate much-needed discussions on the ethics of current AI technologies and research; sufficiently establishing them would also enhance knowledge-sharing and support rigor in interdisciplinary research between the technical and social sciences.


Article: Can Blockchain Tame AI’s Dark Creative Impulses?

How can we trust what we see and hear on our digital devices? Beyond the tiresome meme of fake news, something more fundamental has been sneaking up on us for a few years without generating the sort of alarm it deserves. These days it’s hard to hear any alarms over the general din of the contemporary news cycle, but media people and marketers must take heed of technology that comes under the academic heading of ‘generative AI’ or, more specifically, generative content or image synthesis. This is the practice of training and using computers to generate realistic video, audio and images – usually of people doing and saying things they never did or said – often for ‘entertainment,’ but surely for darker purposes as well. You may have seen it flare up in the notorious trend of so-called #Deepfakes, which erupted on Reddit last year before being banned there and on a number of other sites. But popular apps like FakeApp (based on Google’s popular TensorFlow platform) have made widespread public access to this technology inevitable. Further research in generative adversarial networks (GANs) likewise assures us that the capability of AI to produce audio-visual evidence that’s all but indistinguishable from reality is similarly unavoidable. The anxiety we feel today over polarized discord regarding what’s real and what’s fake pales in comparison to the society we will face when we literally can’t distinguish between real and synthetic evidence. Science, law, and government all depend on this ability, as does our personal security, but commerce and marketing have a special role to play as they provide the economic force that powers today’s global media distribution platforms. More on that in a minute.
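To make the technique behind such synthesis concrete, the following is a minimal, hypothetical sketch of the adversarial training loop that GANs use, written in Python with PyTorch. The toy one-dimensional ‘real’ data, network sizes, and hyperparameters are illustrative assumptions only, not any actual deepfake pipeline.

    # Minimal GAN sketch (illustrative only): a generator learns to mimic a
    # simple 1-D Gaussian distribution while a discriminator tries to tell
    # real samples from generated ones. All hyperparameters are arbitrary.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n=64):
        # "Real" data: samples from N(4, 1.5), standing in for real media.
        return 4.0 + 1.5 * torch.randn(n, 1)

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Train the discriminator: label real data 1, generated data 0.
        real, fake = real_batch(), G(torch.randn(64, 8)).detach()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator: try to make the discriminator call fakes real.
        fake = G(torch.randn(64, 8))
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())

The same adversarial pressure that pulls this toy generator’s output toward the real distribution is what pushes image and audio synthesis toward outputs that humans cannot distinguish from recordings of reality.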


Article: The Ethical and Privacy Issues of Recommendation Engines on Media Platforms

Recommendation engines on media platforms are dominating our media decisions. Instead of allowing the randomness of couch surfing to decide our viewing fate, the choice is being made for us across all forms of digital media, including YouTube, Facebook, Spotify, etc. According to a McKinsey report, 75% of Netflix viewing decisions come from product recommendations. While at face value this equates to user convenience, as the system recommends things that align with the data it has gathered to create a profile of user interests, in reality this dominance of recommendation systems conceals ethical and privacy concerns.
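As a rough illustration of how a profile-driven recommender can work (a generic sketch, not Netflix’s or YouTube’s actual system), the Python snippet below builds a tiny item-item collaborative filter with NumPy; the ratings matrix, titles, and similarity measure are invented for the example.

    # Toy item-item collaborative filtering sketch (illustrative only).
    # Rows are users, columns are titles; 0 means "not watched/rated".
    import numpy as np

    titles = ["Drama A", "Comedy B", "Thriller C", "Documentary D"]
    ratings = np.array([
        [5, 0, 4, 1],   # user 0
        [4, 1, 5, 0],   # user 1
        [0, 5, 1, 4],   # user 2
        [1, 4, 0, 5],   # user 3
    ], dtype=float)

    # Cosine similarity between item columns.
    norms = np.linalg.norm(ratings, axis=0)
    sim = (ratings.T @ ratings) / (np.outer(norms, norms) + 1e-9)

    def recommend(user, k=2):
        """Score unseen items by similarity-weighted ratings of items the user has seen."""
        seen = ratings[user] > 0
        scores = sim[:, seen] @ ratings[user, seen]
        scores[seen] = -np.inf                      # never re-recommend what was watched
        ranked = [i for i in np.argsort(scores)[::-1] if np.isfinite(scores[i])]
        return [titles[i] for i in ranked[:k]]

    print(recommend(0))

Even this toy version makes the tension visible: everything it suggests is a function of the behavioural data already collected about the user, which is exactly where the convenience and the privacy concerns meet.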


Article: Ethics, the new frontier of technology

As artificial intelligence (AI) and machine learning (ML) applications weave into more and more aspects of our lives, voices are rising to express concerns about the ethical implications, the potential discrimination fueled by algorithmic bias (‘Algorithms, the Illusion of Neutrality’), and the lack of transparency and explainability of black box models (‘X-AI, Black Boxes and Crystal Balls’). We are building systems that are beyond our intellectual ability to comprehend. Who can seriously pretend that they understand the hundreds of millions of lines of code used in a self-driving car? AI is rapidly evolving towards more autonomy and human-like cognitive activities such as natural language processing and computer vision. Algorithms need less and less supervision to function. In some cases, they are even starting to rewrite bits of their own code. Such ‘genetic algorithms’ evolve, just as organisms do naturally. No wonder that some academic research labs are now looking for ways to understand algorithms by treating them like animals in the wild, observing their behaviors in the world. Does this mean we are creating monsters?
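For readers unfamiliar with the term, the sketch below is a minimal, hypothetical genetic algorithm in Python: a population of bit strings is repeatedly selected, recombined, and mutated toward a simple fitness target. The fitness function, population size, and mutation rate are arbitrary illustrative choices, not the self-modifying systems described above.

    # Minimal genetic algorithm sketch (illustrative only): evolve bit strings
    # toward the all-ones string via selection, crossover, and mutation.
    import random

    random.seed(0)
    GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 30, 0.02

    def fitness(genome):
        return sum(genome)                          # more 1-bits = fitter

    def crossover(a, b):
        cut = random.randint(1, GENOME_LEN - 1)     # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break                                   # perfect genome found
        parents = population[: POP_SIZE // 2]       # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness", fitness(population[0]), "after", generation, "generations")

Observing how such a population drifts under selection pressure, rather than reading its ‘source’ line by line, is precisely the animal-in-the-wild style of study the passage above describes.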


Article: Toward ethical, transparent and fair AI/ML: a critical reading list

In the past five years there’s been a lot of enthusiasm about AI, and specifically about machine learning and deep learning. As we continuously deploy AI models in the wild, we are forced to re-examine the effects of knowledge symbolisation, generalisation and classification on the historical, political and social conditions of human life. We also need to remind ourselves that algorithms don’t exercise power over us. People do. This reading list is made for engineers, scientists, designers, policy makers and anyone interested in machine learning and AI. It’s an open-ended document that examines machine learning as a sociotechnical system and contextualises its critical discourse. For suggestions and comments, please tweet @irinimalliaraki or drop me an email at e.malliaraki16@imperial.ac.uk. The sections aren’t in any particular order; there’s enough overlap and interaction between these topics that you can jump around as much as you want, and reading ‘out of order’ could lead to interesting connections.


Article: Intro to AI Ethics

To even begin to think about AI ethics, we must first have a primer on more general ethics. As an engineer or some other non-philosopher, it can be very easy to forget about ethics and simply build systems for the sake of building cool things. We must, however, be aware of the potential outcomes of our build decisions when it comes to highly complex, sophisticated, and potentially impactful systems, especially systems whose outcomes are going to be highly controversial.
Ethics is the branch of philosophy concerned with grounding decisions, beliefs, policies, etc. in some sort of framework for deciding right and wrong. Ethics looks to resolve questions of human morality: by deriving some moral system, we can ascribe value to an action or belief. There are three main areas of study in ethics, each of which can be further broken into subcategories:
1. Applied Ethics – concerned with studying what is right or just and what is valuable
2. Normative Ethics – the study of how people should or ought to act
3. Meta-ethics – the pursuit of understanding what the concepts of good and bad themselves really mean


Article: A guide to anticipating the future impact of today’s technology or How not to regret the things you will build

As technologists, it’s only natural that we spend most of our time focusing on how our tech will change the world for the better. Which is great. Everyone loves a sunny disposition. But perhaps it’s more useful, in some ways, to consider the glass half empty. What if, in addition to fantasizing about how our tech will save the world, we spent some time dreading all the ways it might, possibly, perhaps, just maybe, screw everything up? No one can predict exactly what tomorrow will bring (though somewhere in the tech world, someone is no doubt working on it). So until we get that crystal ball app, the best we can hope to do is anticipate the long-term social impact and unexpected uses of the tech we create today. If the technology you’re building right now will some day be used in unexpected ways, how can you hope to be prepared? What new categories of risk should you pay special attention to now? And which design, team or business model choices can actively safeguard users, communities, society, and your company from future risk? The last thing you want is to get blindsided by a future YOU helped create. The Ethical OS is here to help you see more clearly.


Paper: Teaching AI, Ethics, Law and Policy

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility and ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.