Article: Changing contexts and intents
Every day, someone comes up with a new use for old data. Recently, IBM scraped a million photos from Flickr and turned them into a training data set for an AI project intended to reduce bias in facial recognition. That's a noble goal, promoted to researchers as an opportunity to make AI more ethical.
Article: Round Up: Ethics and Skepticism
There are a whole lot of different ways to misunderstand or be duped by data. This is my round-up of good links that illustrate some of the most common problems with relying on data. Additions are welcome, but I'm looking for news stories rather than theoretical examples. Your job, as a reporter, is to put the data in context. Sometimes that means making an honest decision about whether maps are even the right way to tell the story, because how you tell a story matters.
Kate Klonick, an assistant professor at St. John's University School of Law, teaches an Information Privacy course for second- and third-year law students; over the spring break she devised a wonderful and simple exercise to teach her students about 'anonymous speech, reasonable expectation of privacy, third party doctrine, and privacy by obscurity'. Klonick's students were assigned to sit in a public place and eavesdrop on nearby conversations, then, using only Google searches, 'see if you can de-anonymize someone based on things they say loudly enough for lots of others to hear and/or things that are displayed on their clothing or bags.'
Article: Designing Ethical Algorithms
Ethical algorithm design is becoming a hot topic as machine learning becomes more widespread. But how do you make an algorithm ethical? Here are five suggestions to consider.
Current advances in the research, development, and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the 'disruptive' potential of new AI technologies. Designed as a comprehensive evaluation, this paper analyzes and compares these guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development, and application of AI systems, and how the effectiveness of these ethical demands can be improved.
The data revolution continues to transform every sector of science, industry, and government. Due to the incredible impact of data-driven technology on society, we are becoming increasingly aware of the imperative to use data and algorithms responsibly, in accordance with laws and ethical norms. In this article we discuss three recent regulatory frameworks that aim to protect the rights of individuals impacted by data collection and analysis: the European Union's General Data Protection Regulation (GDPR), the New York City Automated Decision Systems (ADS) Law, and the Net Neutrality principle. These frameworks are prominent examples of a global trend: governments are starting to recognize the need to regulate data-driven algorithmic technology. Our goal in this paper is to bring these regulatory frameworks to the attention of the data management community, and to underscore the technical challenges they raise, which we, as a community, are well-equipped to address. The main take-away of this article is that legal and ethical norms cannot be incorporated into data-driven systems as an afterthought. Rather, we must think in terms of responsibility by design, viewing it as a systems requirement.
We provide a formal definition of blameworthiness in settings where multiple agents can collaborate to avoid a negative outcome. We first provide a method for ascribing blameworthiness to groups relative to an epistemic state (a distribution over causal models that describe how the outcome might arise). We then show how we can go from an ascription of blameworthiness for groups to an ascription of blameworthiness for individuals using a standard notion from cooperative game theory, the Shapley value. We believe that getting a good notion of blameworthiness in a group setting will be critical for designing autonomous agents that behave in a moral manner.
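The step from group to individual blameworthiness uses the Shapley value: each agent's share is their marginal contribution to the group's blame, averaged over every order in which agents could join the coalition. A minimal sketch, assuming a hypothetical `blame` function that maps a coalition of agents to a group blameworthiness score (the paper derives such a score from an epistemic state over causal models; here it is just a stand-in):

```python
from itertools import permutations
from math import factorial

def shapley_values(agents, blame):
    """Split a group blameworthiness score among individual agents.

    `agents` is a list of agent identifiers; `blame` maps a frozenset
    of agents to that coalition's blameworthiness score.
    """
    values = {a: 0.0 for a in agents}
    # For every join order, credit each agent with the marginal blame
    # it adds when it joins the agents that came before it.
    for order in permutations(agents):
        coalition = frozenset()
        for a in order:
            joined = coalition | {a}
            values[a] += blame(joined) - blame(coalition)
            coalition = joined
    # Average the marginal contributions over all n! join orders.
    n_orders = factorial(len(agents))
    return {a: v / n_orders for a, v in values.items()}

# Toy epistemic setting: two agents; either alone halves the risk of
# the bad outcome, together they avoid it entirely.
scores = {0: 1.0, 1: 0.5, 2: 0.0}
group_blame = lambda coalition: 1.0 - scores[len(coalition)]
print(shapley_values(["alice", "bob"], group_blame))
```

By the Shapley efficiency property, the individual shares always sum to the blame of the full group, so no blame is created or lost in the ascription. This brute-force enumeration is exponential in the number of agents; it is meant only to illustrate the definition.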
Back in the day, machine experiences were a drag: hit a button, pull a lever, and get the task done. Decades later, after successive waves of computing innovation, machines have transformed into ultra-smart, self-learning, automated versions of themselves that are sweeping the human landscape.

The underlying technology reinventing machines to personalize human experiences is Machine Learning (ML), a branch of Artificial Intelligence and a strong buzzword in today's digital-first world. In essence, it is about programming machines with the ability to learn on their own by leveraging Big Data: information extracted from various touchpoints is analyzed and used to predict intentions, yielding actionable intelligence. And the good news is, the technology keeps advancing, revolutionizing every facet of our routines.

Many of us had our first brush with Machine Learning when voice-controlled personal assistants, Amazon's Echo and Alexa, were launched. These devices are becoming the new normal as the smart-home trend picks up. Driverless cars, once a quintessential sci-fi fantasy, are no longer something of the far-off future: these new-age vehicles, aimed at cutting down human labor, are being tested across the world for their utility benefits.

Initially, the idea of intelligent machines seemed preposterous; machines that act on behalf of humans were not the norm. With the enablement and evolution of Machine Learning in our daily lives, however, the human landscape is changing radically. Below, we have listed 10 ways in which Machine Learning is revolutionizing our lives. Let's dive right in.