AnalytiXon

~ Broaden your Horizon

Category Archives: Ethics

AI related Ethics

Let’s get it right

Monday, 22 July 2019

Posted by Michael Laux in Ethics

Article: Facebook vs. EU Artificial Intelligence and Data Politics

This article is a summary of the paper by the European Union Agency for Fundamental Rights (FRA) called Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights. I then proceed to look at recent moves by Facebook in its data politics: statements made by Zuckerberg, and the recent hiring of former Deputy Prime Minister Nick Clegg as head of global policy and communications. It is my wish that this makes EU policy more comprehensible and gives you an overview of a few actions taken by Facebook in this regard.


Article: Towards Social Data Science

Combining social science and data science is not a new approach, yet after several revelations (and sizeable fines) large technology companies are waking up to discover where they are situated. It seems research institutes, particularly in Europe, are happy to facilitate this shift. This article offers (1) a broad definition of data science; (2) a rapid look at social data science; and (3) a surface look at how new, in relative terms, the discipline of social data science is at this moment.


Article: AI: Almost Immortal

Healthcare’s AI revolution is changing the way we think about age-related diseases, even aging itself. We are in the midst of an epidemic. Regardless of your family history, race, or geography, there is a disease that will befall each and every one of us. You can hide in the mountains of Siberia, but the disease will still reach you because it’s not contagious. It’s followed humanity throughout time, and will continue to do so into the foreseeable future despite our recent attempts to forestall it. That disease is called aging.


Article: Collective Transparency

As our privacy continues to be challenged by the endless pursuit of data, does collective transparency offer a solution?


Article: Why AI Must Be Ethical – And How We Make It So

It must be said that AI is wonderful when properly implemented. It must also be said that AI is frightening when unregulated. AI trailblazers are working towards establishing AI ethics – with varying success. Some early attempts have failed, such as Google’s attempt at establishing an AI ethics board earlier this year, which was dissolved after just a week. Instead, I argue that the future of establishing ethical AI lies in collaboration. For instance, the European Commission is inviting Europeans to discuss ethical AI. In my opinion, inviting a broad range of individuals and entities to establish ethical guidelines is the best approach to handling AI ethics. Hopefully, such initiatives will start to appear at greater scale in the near future. In order to ensure AI becomes – and stays – ethical, we must achieve diversified ethics boards through broad, inclusive discussions. Because someday soon, AI will decide whether you’re a criminal. And all you can do is hope the AI makes the right call.


Article: A Unified Framework of Five Principles for AI in Society

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.


Article: What Kinds of Intelligent Machines Really Make Life Better?

Michael Jordan’s article on artificial intelligence (AI) eloquently articulates how far we are from understanding human-level intelligence, much less recreating it through AI, machine learning, and robotics. The very premise that intelligent machines doing our work will make our lives better may be flawed. Evidence from neuroscience, cognitive science, health sciences, and gerontology shows that human wellbeing and longevity, our health and wellness, fundamentally hinge on physical activity, social connectedness, and a sense of purpose. Therefore, we may need very different types of AI from those currently in development to truly improve human quality of life at the individual and societal levels.


Article: Microsoft invests in and partners with OpenAI to support us building beneficial AGI

Microsoft is investing $1 billion in OpenAI to support us building artificial general intelligence (AGI) with widely distributed economic benefits. We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider – so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems.

Let’s get it right

Monday, 22 July 2019

Posted by Michael Laux in Ethics

Article: The Scariest Thing About DeepNude Wasn’t the Software

At the end of June, Motherboard reported on a new app called DeepNude, which promised – ‘with a single click’ – to transform a clothed photo of any woman into a convincing nude image using machine learning. In the weeks since this report, the app has been pulled by its creator and removed from GitHub, though open source copies have surfaced there in recent days. Most of the coverage of DeepNude has focused on the specific dangers posed by its technical advances. ‘DeepNude is an evolution of that technology that is easier to use and faster to create than deepfakes,’ wrote Samantha Cole in Motherboard’s initial report on the app. ‘DeepNude also dispenses with the idea that this technology can be used for anything other than claiming ownership over women’s bodies.’ With its promise of single-click undressing of any woman, it made it easier than ever to manufacture naked photos – and, by extension, to use those fake nudes to harass, extort, and publicly shame women everywhere. But even following the app’s removal, there’s a lingering problem with DeepNude that goes beyond its technical advances and ease of use. It’s something older and deeper, something far more intractable – and far harder to erase from the internet – than a piece of open source code.


Paper: The Elusive Model of Technology, Media, Social Development, and Financial Sustainability

We recount in this essay the decade-long story of Gram Vaani, a social enterprise with a vision to build appropriate ICTs (Information and Communication Technologies) for participatory media in rural and low-income settings, to bring about social development and community empowerment. Other social enterprises will relate to the learning gained and the strategic pivots that Gram Vaani had to undertake to survive and deliver on its mission, while searching for a robust financial sustainability model. While we believe the ideal model still remains elusive, we conclude this essay with an open question about the reason to differentiate between different kinds of enterprises – commercial or social, for-profit or not-for-profit – and argue that all enterprises should have an ethical underpinning to their work.


Paper: Ethical Underpinnings in the Design and Management of ICT Projects

With a view towards understanding why undesirable outcomes often arise in ICT projects, we draw attention to three aspects in this essay. First, we present several examples to show that incorporating an ethical framework in the design of an ICT system is not sufficient in itself, and that ethics need to guide the deployment and ongoing management of the projects as well. We present a framework that brings together the objectives, design, and deployment management of ICT projects as being shaped by a common underlying ethical system. Second, we argue that power-based equality should be incorporated as a key underlying ethical value in ICT projects, to ensure that the project does not reinforce inequalities in power relationships between the actors directly or indirectly associated with the project. We present a method to model ICT projects to make legible their influence on the power relationships between various actors in the ecosystem. Third, we discuss that the ethical values underlying any ICT project ultimately need to be upheld by the project teams, where certain factors like political ideologies or dispersed teams may affect the rigour with which these ethical values are followed. These three aspects – the ethical underpinning of the design and management of ICT projects, the need for a power-based equality principle in ICT projects, and the importance of socialization of the project teams – need increasing attention in today’s age of ICT platforms, where millions or billions of users interact on the same platform but which are managed by only a few people.


Paper: Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications

The presumed data owners’ right to explanations brought about by the General Data Protection Regulation in Europe has shed light on the social challenges of explainable artificial intelligence (XAI). In this paper, we present a case study with Deep Learning (DL) experts from a research and development laboratory focused on the delivery of industrial-strength AI technologies. Our aim was to investigate the social meaning (i.e. meaning to others) that DL experts assign to what they do, given a richly contextualized and familiar domain of application. Using qualitative research techniques to collect and analyze empirical data, our study has shown that participating DL experts did not spontaneously engage in considerations about the social meaning of the machine learning models that they build. Moreover, when explicitly stimulated to do so, these experts expressed expectations that, with real-world DL applications, there will be available mediators to bridge the gap between the technical meanings that drive DL work and the social meanings that AI technology users assign to it. We concluded that the current research incentives and values guiding the participants’ scientific interests and conduct are at odds with those required to face some of the scientific challenges involved in advancing XAI, and thus in responding to the alleged data owners’ right to explanations or similar societal demands emerging from current debates. As a concrete contribution to mitigate what seems to be a more general problem, we propose three preliminary XAI Mediation Challenges with the potential to bring together technical and social meanings of DL applications, as well as to foster much-needed interdisciplinary collaboration between AI and social science researchers.


Paper: Canada Protocol: an ethical checklist for the use of Artificial Intelligence in Suicide Prevention and Mental Health

Introduction: To improve current public health strategies in suicide prevention and mental health, governments, researchers, and private companies increasingly use information and communication technologies, and more specifically Artificial Intelligence and Big Data. These technologies are promising but raise ethical challenges rarely covered by current legal systems. It is essential to better identify and prevent potential ethical risks. Objectives: The Canada Protocol – MHSP is a tool to guide and support professionals, users, and researchers using AI in mental health and suicide prevention. Methods: A checklist was constructed based upon ten international reports on AI and ethics and two guides on mental health and new technologies. 329 recommendations were identified, of which 43 were considered applicable to mental health and AI. The checklist was validated using a two-round Delphi consultation. Results: 16 experts participated in the first round of the Delphi consultation and 8 participated in the second round. Of the original 43 items, 38 were retained. They concern five categories: ‘Description of the Autonomous Intelligent System’ (n=8), ‘Privacy and Transparency’ (n=8), ‘Security’ (n=6), ‘Health-Related Risks’ (n=8), ‘Biases’ (n=8). The checklist was considered relevant by most users, though it may need versions tailored to each category of target users.


Paper: Fairness and Diversity in the Recommendation and Ranking of Participatory Media Content

Online participatory media platforms that enable one-to-many communication among users see a significant amount of user-generated content, and consequently face the problem of recommending a subset of this content to their users. We address the problem of recommending and ranking this content such that different viewpoints about a topic get exposure in a fair and diverse manner. We build our model in the context of a voice-based participatory media platform running in rural central India, for low-income and less-literate communities, that plays audio messages in a ranked list to users over a phone call and allows them to contribute their own messages. In this paper, we describe our model and evaluate it using call-logs from the platform, to compare the fairness and diversity performance of our model with the manual editorial processes currently being followed. Our models are generic and can be adapted and applied to other participatory media platforms as well.


Paper: Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence

The ethical implications and social impacts of artificial intelligence have become topics of compelling interest to industry, researchers in academia, and the public. However, current analyses of AI in a global context are biased toward perspectives held in the U.S., and limited by a lack of research, especially outside the U.S. and Western Europe. This article summarizes the key findings of a literature review of recent social science scholarship on the social impacts of AI and related technologies in five global regions. Our team of social science researchers reviewed more than 800 academic journal articles and monographs in over a dozen languages. Our review of the literature suggests that AI is likely to have markedly different social impacts depending on geographical setting. Likewise, perceptions and understandings of AI are likely to be profoundly shaped by local cultural and social context. Recent research in U.S. settings demonstrates that AI-driven technologies have a pattern of entrenching social divides and exacerbating social inequality, particularly among historically-marginalized groups. Our literature review indicates that this pattern exists on a global scale, and suggests that low- and middle-income countries may be more vulnerable to the negative social impacts of AI and less likely to benefit from the attendant gains. We call for rigorous ethnographic research to better understand the social impacts of AI around the world. Global, on-the-ground research is particularly critical to identify AI systems that may amplify social inequality in order to mitigate potential harms. Deeper understanding of the social impacts of AI in diverse social settings is a necessary precursor to the development, implementation, and monitoring of responsible and beneficial AI technologies, and forms the basis for meaningful regulation of these technologies.


Paper: A Study on the Prevalence of Human Values in Software Engineering Publications, 2015-2018

Failure to account for human values in software (e.g., equality and fairness) can result in user dissatisfaction and negative socio-economic impact. Engineering these values in software, however, requires technical and methodological support throughout the development life cycle. This paper investigates to what extent software engineering (SE) research has considered human values. We investigate the prevalence of human values in recent (2015 – 2018) publications at some of the top-tier SE conferences and journals. We classify SE publications, based on their relevance to different values, against a widely used value structure adopted from social sciences. Our results show that: (a) only a small proportion of the publications directly consider values, classified as relevant publications; (b) for the majority of the values, very few or no relevant publications were found; and (c) the prevalence of the relevant publications was higher in SE conferences compared to SE journals. This paper shares these and other insights that motivate research on human values in software engineering.

Let’s get it right

Monday, 15 July 2019

Posted by Michael Laux in Ethics

Article: Regulation of Artificial Intelligence in Selected Jurisdictions

This report examines the emerging regulatory and policy landscape surrounding artificial intelligence (AI) in jurisdictions around the world and in the European Union (EU). In addition, a survey of international organizations describes the approach that United Nations (UN) agencies and regional organizations have taken towards AI. As the regulation of AI is still in its infancy, guidelines, ethics codes, and actions by and statements from governments and their agencies on AI are also addressed. While the country surveys look at various legal issues, including data protection and privacy, transparency, human oversight, surveillance, public administration and services, autonomous vehicles, and lethal autonomous weapons systems, the most advanced regulations were found in the area of autonomous vehicles, in particular for the testing of such vehicles.


Paper: Making AI Forget You: Data Deletion in Machine Learning

Intense recent discussions have focused on how to provide individuals with control over when their data can and cannot be used — the EU’s Right To Be Forgotten regulation is an example of this effort. In this paper we initiate a framework studying what to do when it is no longer permissible to deploy models derivative from specific user data. In particular, we formulate the problem of how to efficiently delete individual data points from trained machine learning models. For many standard ML models, the only way to completely remove an individual’s data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. We investigate algorithmic principles that enable efficient data deletion in ML. For the specific setting of k-means clustering, we propose two provably deletion efficient algorithms which achieve an average of over 100X improvement in deletion efficiency across 6 datasets, while producing clusters of comparable statistical quality to a canonical k-means++ baseline.
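
The paper’s actual algorithms (quantized and divide-and-conquer variants of k-means) are more involved, but the efficiency argument can be sketched in a few lines of Python: if per-cluster sums and counts are cached, removing one point is a cheap statistics update rather than a retrain from scratch. The class below is an illustrative assumption, not the authors’ implementation.

```python
# Illustrative sketch only – not the paper's algorithms. Caching per-cluster
# sums and counts makes deleting one point an O(d) centroid update.
import numpy as np

class DeletableKMeans:
    def __init__(self, X, labels, k):
        self.sums = np.zeros((k, X.shape[1]))
        self.counts = np.zeros(k, dtype=int)
        self.labels = labels.copy()
        for x, c in zip(X, labels):
            self.sums[c] += x
            self.counts[c] += 1

    def centroid(self, c):
        return self.sums[c] / max(self.counts[c], 1)

    def delete(self, i, X):
        # Remove point X[i] from its cluster's cached statistics.
        c = self.labels[i]
        self.sums[c] -= X[i]
        self.counts[c] -= 1
        # A full method would also re-check cluster assignments; the paper
        # bounds how often that is necessary to retain statistical quality.
```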


Article: Making Egalitarian AI Algorithms

Here is a small riddle – A father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital for an emergency surgery; just as he’s about to go under the knife, the surgeon says, ‘I can’t operate – that boy is my son!’. What do you think is going on? If you guessed that the surgeon is the boy’s gay, second father, you get a point for enlightenment, at least outside the Bible Belt. But did you also guess the surgeon could be the boy’s mother? If not, you’re part of a surprising majority. Gender biases are deeply embedded in our psyche and are reflected in our thoughts and conversations. Language is one of the most powerful means through which sexism and gender discrimination are perpetrated. Lexical choices and everyday communication constantly reflect these long-standing biases. Our writings, our movies, our tweets, and all the content we generate reflect these biases. Incidentally, with the recent advancements in NLP, machine learning, and AI, these disturbing biases in our content are being unearthed by our learning algorithms.


Article: Facing the Future

Can a New Zealand-based lab take virtual assistants from the realm of marketing gimmicks and endow them with real intelligence?


Article: A New Study Suggests Employers Track Your Every Move to ‘Improve Productivity’

How would you feel if your boss told you that, if you wanted that raise, you’d need to wear a tracking device 24/7? It’s not an implausible future. Workplace wellness programs, which sometimes use fitness trackers and other devices to assess employee health – data that in many cases impact insurance rates – blossomed under the Obama administration, and now cover upwards of 50 million American workers. A new study, funded in part by the Office of the Director of National Intelligence with lead researchers from Dartmouth College, suggests a potential next step into this brave new world: day and night data surveillance that connects seemingly irrelevant data points – like how often you check your phone or leave your home on the weekend – to your work performance.


Article: Applying artificial intelligence for social good

AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.


Article: The Moral Compass in the age of AI

‘We can probably do that’ was the immediate response from one of our data scientists when I asked him a few years ago if we could segment our data (which by design is just anonymous IDs) by purchase power and some form of credit worthiness. The prospective customer wanted to find financially distressed people who ‘overspend based on their means’ to create new financial products (likely high-interest credit cards). They wanted to understand how to build media plans to reach those already strapped customers. It was an awakening (not the first one but a strong one) to the power of, and responsibility that comes with, large data and AI. Mind you, we have no PII, no direct data on people’s HHI or purchase history, nor do we have credit scores as part of the data set. There was no privileged data involved. He was pretty certain it could be done. All it would take is combining public data sources we already have gobs of (online, social and public government sources), adding some AI, and letting the system do its work. The results could not have been applied to individuals. Still, it would ultimately affect cohorts of people that were vulnerable. We never pursued this opportunity. Had the prospect been a company promoting financial education, should we have acted differently?


Paper: Grounding Value Alignment with Ethical Principles

An important step in the development of value alignment (VA) systems in AI is understanding how values can interrelate with facts. Designers of future VA systems will need to utilize a hybrid approach in which ethical reasoning and empirical observation interrelate successfully in machine behavior. In this article we identify two problems about this interrelation that have been overlooked by AI discussants and designers. The first problem is that many AI designers commit inadvertently a version of what has been called by moral philosophers the ‘naturalistic fallacy,’ that is, they attempt to derive an ‘ought’ from an ‘is.’ We illustrate when and why this occurs. The second problem is that AI designers adopt training routines that fail fully to simulate human ethical reasoning in the integration of ethical principles and facts. Using concepts of quantified modal logic, we proceed to offer an approach that promises to simulate ethical reasoning in humans by connecting ethical principles on the one hand and propositions about states of affairs on the other.

Let’s get it right

Thursday, 11 July 2019

Posted by Michael Laux in Ethics

Paper: Toward Fairness in AI for People with Disabilities: A Research Roadmap

AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms.


Article: A.I. and Humanity’s Self-Alienation

‘Who are we?’ is a timeless question that cannot be answered with singular specificity, for we are not any one thing. As we know it, we are many things, many cultures, many societies, many systems, many norms, many relations. We are good and evil, nurturing and threatening, smart and stupid, wise and foolish. We are, simply, human, and we come with intelligence. So, what is intelligence? Intelligence is many things. What is considered intelligent changes over time and differs across context and culture. Contrary to the way it is treated in American popular culture, intelligence is fluid, not fixed. Its evaluation is context dependent.


Article: Algorithmic Governance and Political Legitimacy

In ever more areas of life, algorithms are coming to substitute for judgment exercised by identifiable human beings who can be held to account. The rationale offered is that automated decision-making will be more reliable. But a further attraction is that it serves to insulate various forms of power from popular pressures. Our readiness to acquiesce in the conceit of authorless control is surely due in part to our ideal of procedural fairness, which demands that individual discretion exercised by those in power should be replaced with rules whenever possible, because authority will inevitably be abused. This is the original core of liberalism, dating from the English Revolution. Mechanized judgment resembles liberal proceduralism. It relies on our habit of deference to rules, and our suspicion of visible, personified authority. But its effect is to erode precisely those procedural liberties that are the great accomplishment of the liberal tradition, and to place authority beyond scrutiny. I mean ‘authority’ in the broadest sense, including our interactions with outsized commercial entities that play a quasi-governmental role in our lives. That is the first problem. A second problem is that decisions made by algorithm are often not explainable, even by those who wrote the algorithm, and for that reason cannot win rational assent. This is the more fundamental problem posed by mechanized decision-making, as it touches on the basis of political legitimacy in any liberal regime.


Article: How AI Can Be One of the Most Disruptive Technologies in History

Artificial Intelligence is making a quick transition from a future technology to one that surrounds us in our daily lives. From taking perfect pictures to predicting what we might say next in an email, artificial intelligence is being incorporated into the products and services we use every day to transform our lives for the better. But how will this emerging technology affect our future work? Of all the technologies driving digital transformation in the enterprise, people often single out AI as the most disruptive of them all. There is no question that AI is in the process of disrupting people’s day-to-day jobs through sophisticated automation.


Article: The Robotic Influencers of our Future: A Minecraft-playing, Twitch-streaming Robot

Ever heard of anything like it before? Me neither. The robot was created as part of Futurice’s project with Yle, the national broadcast company of Finland. Yle produces content for TV, radio, and the web. It has a broad reach of older audiences, but has had trouble reaching younger ones. The goal of this project was to use new technology to reach young audiences – specifically teenagers.


Article: Privacy and AI – How Much Should We Really Care?

More data means better models, but we may be crossing a line into what the public can tolerate, both in the types of data collected and in our use of it. The public seems divided. Targeted advertising is good, but the increased invasion of privacy is bad.


Article: The circle of fairness

We shouldn’t ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement. Fairness isn’t so much about ‘being fair’ as it is about ‘becoming less unfair.’ Fairness isn’t an absolute; we all have our own (and highly biased) notions of fairness. On some level, our inner child is always saying: ‘But that’s not fair.’ We know humans are biased, and it’s only in our wildest fantasies that we believe judges and other officials who administer justice somehow manage to escape the human condition. Given that, what role does software have to play in improving our lot? Can a bad algorithm be better than a flawed human? And if so, where does that lead us in our quest for justice and fairness?


Article: Artificial intelligence will bring more human touch to each interaction

AI and machine learning have become unavoidable trends in customer relations. AI is unlocking and redefining the possibilities to appeal to today’s most demanding consumers; meeting their ever-growing expectations and developing emotional connections to deliver a fulfilling customer experience. A report published by Juniper Research predicts that retail industry spending on AI will reach $7.3 billion per year by 2022. Notable applications like Uber and Lyft have changed the expectations of consumers with regard to taxis. The experience of traditional taxis now seems outdated and ineffective. Yet the arrival of artificial intelligence is raising alarm over the loss of human contact.

Let’s get it right

Saturday, 6 July 2019

Posted by Michael Laux in Ethics

Article: AI and the Imposter Syndrome

When I helped set up a coworking space almost five years ago, I came across a term in conversations with a programmer. I had been talking to the technical lead in a company, a person whom many consider greatly skilled at programming, and he admitted he often felt like a fraud. Later, talking to other developers, I came to realise that this was a somewhat common occurrence amongst techies.


Paper: Requisite Variety in Ethical Utility Functions for AI Value Alignment

Being a complex subject of major importance in AI Safety research, value alignment has been studied from various perspectives in recent years. However, no final consensus on the design of ethical utility functions facilitating AI value alignment has been achieved yet. Given the urgency to identify systematic solutions, we postulate that it might be useful to start with the simple fact that for the utility function of an AI not to violate human ethical intuitions, it trivially has to be a model of these intuitions and reflect their variety – whereby the most accurate models pertaining to human entities (biological organisms equipped with a brain constructing concepts like moral judgements) are scientific models. Thus, in order to better assess the variety of human morality, we perform a transdisciplinary analysis applying a security mindset to the issue and summarizing variety-relevant background knowledge from neuroscience and psychology. We complement this information by linking it to augmented utilitarianism as a suitable ethical framework. Based on that, we propose first practical guidelines for the design of approximate ethical goal functions that might better capture the variety of human moral judgements. Finally, we conclude and address possible future challenges.


Paper: Proof of Witness Presence: Blockchain Consensus for Augmented Democracy in Smart Cities

Data-intensive Smart City urban environments are becoming highly complex and are evolving through digital transformation. Repositioning the democratic values of citizens’ choices in these complex ecosystems has turned out to be imperative in an era of social media filter bubbles, fake news, and opportunities for manipulating electoral results with such means. This paper introduces a new paradigm of augmented democracy that promises to engage citizens in more informed decision-making integrated in public urban space. The proposed concept is inspired by a digital revival of the Ancient Agora of Greece, an arena of public discourse, a Polis where citizens assemble to actively deliberate and collectively decide about public matters. At the core of the proposed paradigm lies the concept of proving witness presence, which makes decision-making subject to providing evidence and testifying for choices made in the physical space. This paper shows how proofs of witness presence can be made using blockchain consensus. It also shows how complex crowd-sensing decision-making processes can be designed with the Smart Agora platform and how real-time collective measurements can be performed in a fully decentralized and privacy-preserving way. An experimental testnet scenario on the sustainable use of transport means is illustrated. The paramount role of dynamic consensus, self-governance, and ethically aligned artificial intelligence in the augmented democracy paradigm is outlined.


Paper: Following wrong suggestions: self-blame in human and computer scenarios

This paper investigates the specific experience of following a suggestion by an intelligent machine that has a wrong outcome, and the emotions people feel as a result. By adopting a typical task employed in studies on decision-making, we presented participants with two scenarios in which they follow a suggestion with a wrong outcome from either an expert human being or an intelligent machine. We found a significant decrease in the perceived responsibility for the wrong choice when the machine offers the suggestion. At present, few studies have investigated the negative emotions that could arise from a bad outcome after following the suggestion given by an intelligent system, and how to cope with the potential distrust that could affect long-term use of the system and cooperation. This preliminary research has implications for the study of cooperation and decision making with intelligent machines. Further research may address how to offer the suggestion in order to better cope with users’ self-blame.


Paper: The Ethical Dilemma when (not) Setting up Cost-based Decision Rules in Semantic Segmentation

Neural networks for semantic segmentation can be seen as statistical models that provide, for each pixel of one image, a probability distribution on predefined classes. The predicted class is then usually obtained by the maximum a-posteriori probability (MAP), which is known as the Bayes rule in decision theory. From decision theory we also know that the Bayes rule is optimal regarding the simple symmetric cost function. Therefore, it weights each type of confusion between two different classes equally; e.g., given images of urban street scenes, there is no distinction in the cost function if the network confuses a person with a street or a building with a tree. Intuitively, there might be confusions of classes that are more important to avoid than others. In this work, we want to raise awareness of the possibility of explicitly defining confusion costs and the associated ethical difficulties when it comes down to providing numbers. We define two cost functions from different extreme perspectives, an egoistic and an altruistic one, and show how safety-relevant quantities like precision / recall and (segment-wise) false positive / negative rate change when interpolating between MAP, egoistic, and altruistic decision rules.
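
The gap between the MAP rule and an explicit cost-based rule is easy to see in code. Below is a minimal sketch, assuming probs is an (H, W, C) array of per-pixel class probabilities from a segmentation network; the class labels and cost numbers are hypothetical, which is exactly the ethical difficulty the paper raises.

```python
# Minimal sketch: MAP vs. expected-cost decision rules for segmentation.
import numpy as np

def map_rule(probs):
    return probs.argmax(axis=-1)            # Bayes rule under symmetric costs

def cost_rule(probs, cost):
    # cost[i, j] = cost of predicting class j when the true class is i.
    expected = np.einsum('hwc,cj->hwj', probs, cost)
    return expected.argmin(axis=-1)         # minimize expected cost per pixel

C = 3                                       # hypothetical: 0=road, 1=person, 2=building
cost = np.ones((C, C)) - np.eye(C)          # symmetric costs reproduce MAP exactly
cost[1, 0] = 10.0                           # missing a person is made 10x costlier
```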


Paper: Quantifying Algorithmic Biases over Time

Algorithms now permeate multiple aspects of human lives and multiple recent results have reported that these algorithms may have biases pertaining to gender, race, and other demographic characteristics. The metrics used to quantify such biases have still focused on a static notion of algorithms. However, algorithms evolve over time. For instance, Tay (a conversational bot launched by Microsoft) was arguably not biased at its launch but quickly became biased, sexist, and racist over time. We suggest a set of intuitive metrics to study the variations in biases over time and present the results for a case study for genders represented in images resulting from a Twitter image search for #Nurse and #Doctor over a period of 21 days. Results indicate that biases vary significantly over time and the direction of bias could appear to be different on different days. Hence, one-shot measurements may not suffice for understanding algorithmic bias, thus motivating further work on studying biases in algorithms over time.
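
A minimal sketch of such a time-indexed measure follows; the daily label lists are hypothetical annotations of search results, not the authors’ data, and the drift summary is one simple choice among many.

```python
# Hedged sketch: track a group's share of search results day by day.
from collections import Counter

daily_results = [                            # hypothetical annotations
    ['female', 'female', 'male', 'female'],  # day 1
    ['female', 'male', 'male', 'female'],    # day 2
    ['male', 'male', 'male', 'female'],      # day 3
]

def group_share(results, group='female'):
    counts = Counter(results)
    return counts[group] / sum(counts.values())

series = [group_share(day) for day in daily_results]
drift = max(series) - min(series)            # crude summary of variation over time
print(series, drift)
```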


Article: Is Artificial Intelligence the frontier solution to Global South’s wicked development challenges?

To date, almost all research work has been focused on the implications of frontier technologies like Artificial Intelligence (AI), Machine Learning, automation, and the Internet of Things (IoT) for people living in higher-income countries such as the EU, UK, and US. The focus of this article is to identify key examples of the impact of artificial intelligence as a frontier technology on citizen engagement in low- and middle-income countries, for whom many of the same opportunities and risks apply as in the higher-income countries (often to a greater extent), along with additional opportunities and risks unique to these countries.


Paper: Fair Kernel Regression via Fair Feature Embedding in Kernel Space

In recent years, there have been significant efforts on mitigating unethical demographic biases in machine learning methods. However, very little is done for kernel methods. In this paper, we propose a new fair kernel regression method via fair feature embedding (FKR-F²E) in kernel space. Motivated by prior works on feature selection in kernel space and feature processing for fair machine learning, we propose to learn fair feature embedding functions that minimize the demographic discrepancy of feature distributions in kernel space. Compared to the state-of-the-art fair kernel regression method and several baseline methods, we show FKR-F²E achieves significantly lower prediction disparity across three real-world data sets.
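
The paper’s exact embedding objective is not reproduced here, but ‘demographic discrepancy of feature distributions in kernel space’ can be sketched, as an assumption, with a maximum mean discrepancy (MMD) between two groups under an RBF kernel:

```python
# Sketch: one possible discrepancy measure between groups in kernel space.
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xa, Xb, gamma=1.0):
    # (Biased) squared MMD between the two groups' feature distributions;
    # a fair embedding would be trained to drive this toward zero.
    return (rbf(Xa, Xa, gamma).mean()
            + rbf(Xb, Xb, gamma).mean()
            - 2 * rbf(Xa, Xb, gamma).mean())
```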

Let’s get it right

Friday, 5 July 2019

Posted by Michael Laux in Ethics

Article: Infographic: Can AI Think Ethically?

Cities across the world are using – or banning because of concerns about bias – facial recognition technology, but is it possible for artificial intelligence to think ethically? We’ve written about the ethics of AI extensively here on insideBIGDATA. To further the conversation, the infographic below developed by our friends over at NowSourcing, Inc. outlines the roles AI can serve and how to build technology we can trust.

Let’s get it right

Thursday, 4 July 2019

Posted by Michael Laux in Ethics

Paper: Age and gender bias in pedestrian detection algorithms

Pedestrian detection algorithms are important components of mobile robots, such as autonomous vehicles, which directly relate to human safety. Performance disparities in these algorithms could translate into disparate impact in the form of biased accident outcomes. To evaluate the need for such concerns, we characterize the age and gender bias in the performance of state-of-the-art pedestrian detection algorithms. Our analysis is based on the INRIA Person Dataset extended with child, adult, male and female labels. We show that all of the 24 top-performing methods of the Caltech Pedestrian Detection Benchmark have higher miss rates on children. The difference is significant and we analyse how it varies with the classifier, features and training data used by the methods. Algorithms were also gender-biased on average but the performance differences were not significant. We discuss the source of the bias, the ethical implications, possible technical solutions and barriers.
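
The headline measurement is a per-subgroup miss rate. A minimal sketch, assuming per-detection outcomes labelled with demographic attributes (the column names and values are hypothetical):

```python
# Hedged sketch: miss rate by demographic subgroup.
import pandas as pd

df = pd.DataFrame({
    'group':    ['child', 'child', 'adult', 'adult', 'adult'],  # hypothetical
    'detected': [False,   True,    True,    True,    False],
})

miss_rate = 1 - df.groupby('group')['detected'].mean()
print(miss_rate)  # a higher miss rate for one group signals disparate impact
```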


Paper: Evolutionary Computation and AI Safety: Research Problems Impeding Routine and Safe Real-world Application of Evolution

Recent developments in artificial intelligence and machine learning have spurred interest in the growing field of AI safety, which studies how to prevent human-harming accidents when deploying AI systems. This paper thus explores the intersection of AI safety with evolutionary computation, to show how safety issues arise in evolutionary computation and how understanding from evolutionary computational and biological evolution can inform the broader study of AI safety.


Paper: Artificial Intelligence: the global landscape of ethics guidelines

In the last five years, private companies, research institutions as well as public sector organisations have issued principles and guidelines for ethical AI, yet there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analyzed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.


Paper: Towards Empathic Deep Q-Learning

As reinforcement learning (RL) scales to solve increasingly complex tasks, interest continues to grow in the fields of AI safety and machine ethics. As a contribution to these fields, this paper introduces an extension to Deep Q-Networks (DQNs), called Empathic DQN, that is loosely inspired both by empathy and the golden rule (‘Do unto others as you would have them do unto you’). Empathic DQN aims to help mitigate negative side effects to other agents resulting from myopic goal-directed behavior. We assume a setting where a learning agent coexists with other independent agents (who receive unknown rewards), where some types of reward (e.g. negative rewards from physical harm) may generalize across agents. Empathic DQN combines the typical (self-centered) value with the estimated value of other agents, by imagining (by its own standards) the value of it being in the other’s situation (by considering constructed states where both agents are swapped). Proof-of-concept results in two gridworld environments highlight the approach’s potential to decrease collateral harms. While extending Empathic DQN to complex environments is non-trivial, we believe that this first step highlights the potential of bridge-work between machine ethics and RL to contribute useful priors for norm-abiding RL agents.
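
A loose sketch of the value combination described above, under stated assumptions: q_net maps a state to a vector of action values, swapped_state is the environment-specific construction in which the agent and the other agent exchange places, and beta weights the empathic term.

```python
# Hedged sketch of the Empathic DQN value blend (names are assumptions).
import numpy as np

def empathic_q_values(q_net, state, swapped_state, beta=0.5):
    # Self-centered values blended with the values the agent would assign
    # if it imagined itself in the other agent's situation.
    return (1 - beta) * q_net(state) + beta * q_net(swapped_state)

# action = int(np.argmax(empathic_q_values(q_net, s, swap(s))))
```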


Article: Bias in the AI court decision making – spot it before you fight it

The use of machine learning in different decision-making processes, including in judicial practice, is becoming more and more frequent. As court decisions have a great impact on an individual’s personal and professional life, as well as on society as a whole, it is important to be able to identify, and ideally rectify, bias in the artificial intelligence (AI) system, to avoid the model rendering an unfair or inaccurate decision and potentially amplifying existing inequalities in our society.


Article: It’s Time for a Code of Ethics for Designers

Lawyers, doctors, and even journalists have something in common: They all have studied ethics as part of their higher education, taking the time to construct, interpret, and follow written codes of conduct that guide them in making sound, ethical decisions. Is it a coincidence that these are also some of the world’s oldest professions? The design profession has a long history too, but we’ve only recently started to discuss ethics in a design context. Design is all about the viewer: drawing the eye and changing the heart. It’s truly one of the most powerful tools (superpowers?) that companies have today, and that leaves us, the viewers, wanting – no, needing awareness and guidelines that ensure design is being practiced responsibly. With all the services and apps influencing our behavior and daily lives, is this too much to ask?


Article: When Your Boss Is an Algorithm

For Uber drivers, the workplace can feel like a world of constant surveillance, automated manipulation and threats of ‘deactivation’


Paper: Implementing Ethics in AI: An industrial multiple case study

Solutions in artificial intelligence (AI) are becoming increasingly widespread in system development endeavors. As AI systems affect various stakeholders due to their unique nature, the growing influence of these systems calls for ethical considerations. Academic discussion and practical examples of autonomous system failures have highlighted the need for implementing ethics in software development. However, research on methods and tools for implementing ethics into AI system design and development in practice is still lacking. This paper begins to address this focal problem by providing a baseline for ethics in AI-based software development. This is achieved by reporting results from an industrial multiple case study on AI systems development in the health care sector. In the context of this study, ethics were perceived as an interplay of transparency, responsibility, and accountability, upon which a research model is outlined. Through these cases, we explore the current state of practice out in the field in the absence of formal methods and tools for ethically aligned design. Based on our data, we discuss the current state of practice and outline existing good practices, as well as suggest future research directions in the area.

Let’s get it right

Tuesday, 2 July 2019

Posted by Michael Laux in Ethics

Paper: Ethically Aligned Design of Autonomous Systems: Industry viewpoint and an empirical study

Progress in the field of artificial intelligence has been accelerating rapidly in the past two decades. Various autonomous systems, from purely digital ones to autonomous vehicles, are being developed and deployed out in the field. As these systems exert a growing impact on society, ethics in relation to artificial intelligence and autonomous systems have recently seen growing attention in academia. However, the current literature on the topic has focused almost exclusively on theory, and more specifically on conceptualization in the area. To widen the body of knowledge in the area, we conduct an empirical study on the current state of practice in artificial intelligence ethics. We do so by means of a multiple case study of five case companies, the results of which indicate a gap between research and practice in the area. Based on our findings, we propose ways to tackle the gap.


Paper: Considerations for the Interpretation of Bias Measures of Word Embeddings

Word embedding spaces are powerful tools for capturing latent semantic relationships between terms in corpora, and have become widely popular for building state-of-the-art natural language processing algorithms. However, studies have shown that societal biases present in text corpora may be incorporated into the word embedding spaces learned from them. Thus, there is an ethical concern that human-like biases contained in the corpora and their derived embedding spaces might be propagated, or even amplified, with the usage of the biased embedding spaces in downstream applications. In an attempt to quantify these biases so that they may be better understood and studied, several bias metrics have been proposed. We explore the statistical properties of these proposed measures in the context of their cited applications as well as their supposed utilities. We find that there are caveats to the simple interpretation of these metrics as proposed. We find that the bias metric proposed by Bolukbasi et al. (2016) is highly sensitive to embedding hyper-parameter selection, and that in many cases, the variance due to the selection of some hyper-parameters is greater than the variance in the metric due to corpus selection, while in fewer cases the bias rankings of corpora vary with hyper-parameter selection. In light of these observations, it may be the case that bias estimates should not be thought to directly measure the properties of the underlying corpus, but rather the properties of the specific embedding spaces in question, particularly in the context of the hyper-parameter selections used to generate them. Hence, bias metrics of spaces generated with differing hyper-parameters should be compared only with explicit consideration of the embedding-learning algorithms’ particular configurations.
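
For readers unfamiliar with the metric under discussion, a projection-style bias score in the spirit of Bolukbasi et al. (2016) can be sketched as follows; the vectors are placeholders for embeddings learned from a corpus, and the paper’s warning is that such scores shift with the hyper-parameters used to train the space.

```python
# Sketch: signed projection of a word onto a simple gender direction.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def gender_bias(word_vec, he_vec, she_vec):
    g = unit(he_vec - she_vec)          # one crude gender direction
    return float(unit(word_vec) @ g)    # 0 would mean "neutral" here
```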


Paper: Ethical Interviews in Software Engineering

Background: Despite a long history and numerous laws and regulations, ethics remains an unnatural topic for many software engineering researchers. Poor research ethics may lead to mistrust of research results, lost funding, and retraction of publications. A core principle of research ethics is confidentiality, and anonymization is a standard approach to guarantee it. Many guidelines for qualitative software engineering research, and for qualitative research in general, exist, but these do not penetrate how and why to anonymize interview data. Aims: In this paper we aim to identify ethical guidelines for software engineering interview studies involving industrial practitioners. Method: By learning from previous experiences and listening to the authority of existing guidelines in the more mature field of medicine as well as in software engineering, a comprehensive set of checklists for interview studies was distilled. Results: The elements of an interview study were identified and ethical considerations and recommendations for each step were produced, in particular with respect to anonymization. Important ethical principles are: consent, beneficence, confidentiality, scientific value, researcher skill, justice, respect for law, and ethical reviews. Conclusions: The most important contribution of this study is the set of checklists for ethical interview studies. Future work is needed to refine these guidelines with respect to legal aspects and ethical boards.


Article: Abigail Echo-Hawk on the art and science of ‘decolonizing data’

The chief research officer of the Seattle Indian Health Board is creating programs and databases that are not based on Western concepts to better serve indigenous communities.


Article: Facebook’s Recent Moves Highlight The Grand Challenge Of Digital Ethics

This blog is a summary of an interesting internal discussion we had among analysts. I’d like to extend my thanks to Brigitte Majewski, Martha Bennett, Sucharita Kodali, Benjamin Ensor, and Fatemeh Khatibloo, who all helped with the thinking. I’m merely putting the pieces together for you here, because connecting the dots is where I come in at Forrester…


Paper: Internet of Autonomous Vehicles: Architecture, Features, and Socio-Technological Challenges

Mobility is the backbone of urban life and a vital economic factor in the development of the world. Rapid urbanization and the growth of mega-cities are bringing dramatic changes in the capabilities of vehicles. Innovative solutions like autonomy, electrification, and connectivity are on the horizon. How, then, can we provide ubiquitous connectivity to legacy and autonomous vehicles? This paper seeks to answer this question by combining recent leaps of innovation in network virtualization with remarkable feats of wireless communications. To do so, this paper proposes a novel paradigm called the Internet of Autonomous Vehicles (IoAV). We begin painting the picture of IoAV by discussing its salient features and applications, which is followed by a detailed discussion of the key enabling technologies. Next, we describe the proposed layered architecture of IoAV and uncover some critical functions of each layer. This is followed by a performance evaluation of IoAV, which shows the significant advantage of the proposed architecture in terms of transmission time and energy consumption. Finally, to best capture the benefits of IoAV, we enumerate some social and technological challenges and explain how some unresolved issues can disrupt the widespread use of autonomous vehicles in the future.


Paper: The dual-process approach to human sociality: A review

Which social decisions are intuitive? Which are deliberative? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Here, I review the existing literature on the cognitive basis of cooperation, altruism, honesty, equity-efficiency, positive and negative reciprocity, and moral judgments. For each of these domains, I list a number of open problems that I believe to be crucial to further advance our understanding of human social cognition. I conclude by making an attempt to introduce a game-theoretical framework to organize the existing empirical evidence. This framework seems promising, as it turns out to make predictions that are generally in line with the experimental data in all but one domain: positive reciprocity. I tried to keep the review self-contained, exhaustive, and research-oriented. My hope is that it can contribute to bring further attention to this fascinating area of research.


Paper: Blocking Mechanism of Porn Website in India: Claim and Truth

In the last few years, internet addiction has come to be recognized as a serious threat to the health of society. This internet addiction gives an impetus to pornography addiction, because most pornographic content is accessible through the internet. There have been ethical concerns about blocking content over the internet. In India, the Uttarakhand High Court has taken the initiative in the blocking of pornographic content over the internet, and technologists are coming up with various innovative mechanisms to block such content. Although as long ago as 2015 the Supreme Court of India asked for some of these websites to be blocked, the order could not be materialized. The focus of this research paper is to review the effectiveness of the existing web content blocking mechanisms for pornographic websites in the Indian context.

Let’s get it right

Sunday, 30 June 2019

Posted by Michael Laux in Ethics

Paper: Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good

Developing intelligent persuasive conversational agents to change people’s opinions and actions for social good is the frontier in advancing the ethical development of automated dialogue systems. To do so, the first step is to understand the intricate organization of strategic disclosures and appeals employed in human persuasion conversations. We designed an online persuasion task where one participant was asked to persuade the other to donate to a specific charity. We collected a large dataset with 1,017 dialogues and annotated emerging persuasion strategies from a subset. Based on the annotation, we built a baseline classifier with context information and sentence-level features to predict the 10 persuasion strategies used in the corpus. Furthermore, to develop an understanding of personalized persuasion processes, we analyzed the relationships between individuals’ demographic and psychological backgrounds including personality, morality, value systems, and their willingness for donation. Then, we analyzed which types of persuasion strategies led to a greater amount of donation depending on the individuals’ personal backgrounds. This work lays the ground for developing a personalized persuasive dialogue system.


Article: AI Will Replace Jobs. Or Will It? Thoughts On The Coming AI Revolution

According to an article that appeared in Fortune earlier this year: Automation could replace 40% of jobs in 15 years. This article joins countless others in sounding the warning bells of the forthcoming AI-style industrial revolution. As we’ve heard so often, AI will replace jobs by the thousands. Almost overnight, half the country will be out of work. There’s a lot of uncertainty, fear, and doubt being spread on this subject. Admittedly, it would be impossible to tackle this issue from every angle. We won’t even try. However, we can offer our sense of where this industry is, what the effects might be, and where we might be headed within 15 years.


Article: Exclusive: A Group of Microsoft Employees Is Fighting the Company’s Political Action Committee

Employees say there’s no way to dictate how the PAC spends their money, even when it conflicts with the company’s progressive values.


Article: Augmented Reality: Greater Power, Greater Responsibility

We are becoming increasingly intertwined with our technology. The growing depth of this connection adds new weight to each decision we make in the way we design and build the next generation of applications. AR is a new paradigm that gives us a chance to step back and consider the learnings we’ve gained over the last few decades. We can choose to maintain the status quo, or we can decide that now is the time to correct our course.


Article: Unfair Advantage: Don’t Expect AI to Play Like a Human

The debate surrounding AI and StarCraft shows that we need to change the way we discuss and evaluate artificial intelligence – and stop comparing its gameplay with our own.


Article: Is It Time for a Data Scientist Code of Ethics?

As news broke of a new app called DeepNude, which allowed anyone to alter a photo of a woman to make her appear nude, I found myself deeply disturbed by the speed at which deepfakes are evolving. Such a tangible and accessible tool highlights the darker side of AI, computer vision, and other machine learning techniques in the wrong hands. And while there are some incredibly overt examples of how deepfakes can be used to doctor videos of individuals, their appearance, their activities, and what they are saying, this technology has mostly remained in the hands of the technical few who understood it. Even with that knowledge, the time it took to generate convincing deepfakes was also a barrier. But DeepNude showed that altering images can now be done in seconds, versus the days it previously took on incredibly powerful machines out of reach of the general public.


Article: How AI will force us to redefine capitalism

Capitalism is, in spite of its shortcomings, the most resilient and effective economic ideology the world has ever devised, because there is no viable alternative. Not only did capitalism withstand the Communist challenge and successfully weather the negative repercussions of the Industrial Revolutions, it has also become an almost universal economic policy for countries that want to evolve in a sustainable and effective way. But today, many policymakers speak about the need to reform capitalism in order to ensure its future vitality and flexibility, and to address imminent challenges: relentlessly growing inequality, an inability to tackle environmental degradation and climate change, inherent weaknesses in the financial system that culminated in the financial crisis of 2008, and an increasingly appealing Chinese mix of free-market capitalism and a centralised, government-controlled economy.


Paper: Agnostic data debiasing through a local sanitizer learnt from an adversarial network approach

The widespread use of automated decision processes in many areas of our society raises serious ethical issues concerning the fairness of the process and the possible resulting discrimination. In this work, we propose a novel approach called GANSan whose objective is to prevent the possibility of any discrimination (i.e., direct and indirect) based on a sensitive attribute by removing the attribute itself as well as its existing correlations with the remaining attributes. Our sanitization algorithm GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data, modifying the other attributes as little as possible and thus preserving the interpretability of the sanitized data. As a consequence, once the sanitizer is trained, it can be applied to new data, for instance locally by individuals on their own profiles before releasing them. Finally, experiments on a real dataset demonstrate the effectiveness of the proposed approach as well as the achievable trade-off between fairness and utility.
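To sketch the core idea, here is a toy, self-contained PyTorch example (my illustration, not the authors' code; the dimensions, trade-off weight alpha, and data are all assumptions) in which a sanitizer edits the non-sensitive attributes as little as possible while an adversary tries to recover the sensitive attribute from the sanitized record:

# Toy adversarial sanitization sketch. All sizes and data are synthetic.
import torch
import torch.nn as nn

d = 9  # number of non-sensitive attributes (hypothetical)
sanitizer = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
adversary = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt_s = torch.optim.Adam(sanitizer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
alpha = 0.5  # fidelity vs. fairness trade-off knob (assumed)

x = torch.randn(256, d)                    # synthetic non-sensitive records
s = torch.randint(0, 2, (256, 1)).float()  # synthetic binary sensitive attribute

for step in range(200):
    # 1) Train the adversary to predict s from sanitized records.
    opt_a.zero_grad()
    loss_a = bce(adversary(sanitizer(x).detach()), s)
    loss_a.backward()
    opt_a.step()
    # 2) Train the sanitizer: stay close to the original data (so the
    #    sanitized data remains interpretable) while fooling the adversary.
    opt_s.zero_grad()
    x_san = sanitizer(x)
    loss_s = alpha * mse(x_san, x) - (1 - alpha) * bce(adversary(x_san), s)
    loss_s.backward()
    opt_s.step()

Raising alpha keeps the sanitized records closer to the originals at the cost of residual correlation with the sensitive attribute; lowering it does the reverse, which is the fairness-utility trade-off the paper measures.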

Let’s get it right

29 Saturday Jun 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Article: Fixing the machine behind the machines

Why we need to address bias in human decision-making to improve technology and its future governance, and how to do so.


Article: Time to program guardians to protect ourselves: AI experts

Computers are increasingly using our data to make decisions about us, but can we trust them?


Paper: Understanding artificial intelligence ethics and safety

A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, ‘Using AI in the Public Sector,’ identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.


Article: Ethical Principles, OKRs, and KPIs: what YouTube and Facebook could learn from Tukey

'The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.' – Facebook VP Andrew Bosworth, 18 June 2016, as leaked to Buzzfeed. 'Watch time was the priority… Everything else was considered a distraction.' – (ex-)Google engineer Guillaume Chaslot, as quoted in the Guardian, 2 Feb 2018, describing the sole KPI of YouTube's recommendation engine. 'Software is eating the world,' the venture capitalist Marc Andreessen warned us in 2011, and, more and more, the software eating our world is also shaping our professional, political, and personal realities via machine learning. Examples include the recommendation algorithms that select what items appear in our social feeds, choose the next autoplay video on YouTube, or recommend 'related' products for purchase on Amazon.
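To make the metric-design point concrete, here is a tiny illustrative Python sketch (mine, with made-up metric names and numbers, and no relation to YouTube's or Facebook's actual systems) contrasting ranking by a single engagement KPI with ranking by a blended scorecard:

# Purely illustrative: rank candidates by one KPI vs. a weighted blend.
def rank(items, weights):
    return sorted(items,
                  key=lambda it: sum(w * it[k] for k, w in weights.items()),
                  reverse=True)

candidates = [
    {"id": "sensational", "watch_time": 0.9, "quality": 0.2, "diversity": 0.1},
    {"id": "informative", "watch_time": 0.6, "quality": 0.8, "diversity": 0.7},
]

print([c["id"] for c in rank(candidates, {"watch_time": 1.0})])
print([c["id"] for c in rank(candidates,
                             {"watch_time": 0.4, "quality": 0.4, "diversity": 0.2})])

With watch time as the sole KPI the sensational item wins; with the blended weights the informative item wins, which is the whole argument for OKRs that balance more than one metric.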


Article: EU AI Recommendations Err By Doubling Down on Ethics and Ignoring Opportunity for GDPR Reforms

In response to the release of a report from the Artificial Intelligence (AI) High Level Expert Group offering policy and investment recommendations, the Center for Data Innovation released the following statement from Senior Policy Analyst Eline Chivot: The report includes a range of appropriate solutions to support the development and uptake of AI, including talent retention and mobility strategies, the identification of key sectors for applied AI research, regulatory sandboxes, a better transfer of research results to the market to facilitate the commercialization of AI systems, the integration of existing research networks, and the increased availability of large data sets. The report also constructively recommends policymakers avoid ‘unnecessarily prescriptive regulation’ and ‘cumulative regulatory interventions at the sectoral level’ which could have a chilling effect on innovation, and instead suggests using broad principles as guidance.


Article: The rise of data and AI ethics

As technology tracks huge amounts of personal data, data ethics can be tricky, with very little covered by existing law. Governments are at the center of the data ethics debate in two important ways.


Paper: Principled Frameworks for Evaluating Ethics in NLP Systems

We critique recent work on ethics in natural language processing. Those discussions have focused on data collection, experimental design, and interventions in modeling. But we argue that we ought to first understand the frameworks of ethics that are being used to evaluate the fairness and justice of algorithmic systems. Here, we begin that discussion by outlining deontological ethics, and envision a research agenda prioritized by it.


Paper: AI Ethics — Too Principled to Fail?

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.