Paper: Justifications of Welfare Guarantees under Normalized Utilities

It is standard in computational social choice to analyse welfare considerations under the assumption of normalized utilities. In this note, we summarize some common reasons for this approach. We then mention another justification which has largely been ignored but has solid normative appeal. The central concept used in the ‘new’ justification can also be used more widely as a social objective.


Article: Inmates in Finland are training AI as part of prison labor

‘Prison labor’ is usually associated with physical work, but inmates at two prisons in Finland are doing a new type of labor: classifying data to train artificial intelligence algorithms for a startup. Though the startup in question, Vainu, sees the partnership as a kind of prison reform that teaches valuable skills, other experts say the claim of job training is more evidence of hype around the promises of AI. Vainu is building a comprehensive database of companies around the world that helps businesses find contractors to work with, says co-founder Tuomas Rasila. For this to work, people need to read through hundreds of thousands of business articles scraped from the internet and label whether, for example, an article is about Apple the tech company or a fruit company that has ‘apple’ in the name. (This labeled data is then used to train an algorithm that manages the database.)
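The labeling-and-training loop described here is a standard supervised text-classification pipeline. As a minimal sketch (not Vainu's actual system; the example articles, labels, and use of scikit-learn are assumptions for illustration), the Apple-the-company versus apple-the-fruit disambiguation could look like this:

```python
# Minimal sketch of the label-then-train pipeline described above.
# Not Vainu's actual system: the examples, label names, and library
# choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled articles (the inmates' task in the story).
articles = [
    "Apple unveils a new iPhone at its Cupertino event",
    "Local orchard Apple Farms ships organic fruit nationwide",
    "Apple reports record quarterly earnings for its services unit",
    "Apple cider sales rise as the harvest season begins",
]
labels = ["tech_company", "fruit_business", "tech_company", "fruit_business"]

# Train a simple classifier on the human-labeled data.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# The trained model can then label new, unseen articles automatically.
print(model.predict(["Apple launches a new MacBook"]))  # e.g. ['tech_company']
```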


Article: Teaching AI Human Values

Ensuring fairness and safety in artificial intelligence (AI) applications is considered by many the biggest challenge in the space. As AI systems match or surpass human intelligence in many areas, it is essential that we establish guidelines to align this new form of intelligence with human values. The challenge is that, as humans, we understand very little about how our values are represented in the brain, and we can’t even formulate specific rules to describe a given value. While AI operates in a data universe, human values are a byproduct of our evolution as social beings. We don’t describe human values like fairness or justice using neuroscientific terms but using arguments from social sciences like psychology, ethics, or sociology. Recently, researchers from OpenAI published a paper describing the importance of social sciences for improving the safety and fairness of AI algorithms in processes that require human intervention. We often hear that we need to avoid bias in AI algorithms by using fair and balanced training datasets. While that’s true in many scenarios, there are many instances in which fairness can’t be described using simple data rules. A simple question such as ‘do you prefer A to B’ can have many answers depending on the specific context, human rationality, or emotion. Imagine the task of inferring a pattern of ‘happiness’, ‘responsibility’, or ‘loyalty’ from a specific dataset. Can we describe those values simply using data? Extrapolating that lesson to AI systems tells us that, in order to align with human values, we need help from the disciplines that better understand human behavior.


Paper: Complexity Analysis of Approaching Clinical Psychiatry with Predictive Analytics and Neural Networks

The emerging field of predictive analytics in psychiatry has generated, and continues to generate, massive interest with its major promise to positively change and revolutionize clinical psychiatry, and healthcare and medical professionals greatly look forward to its integration into and application in psychiatry. However, directly applying predictive analytics to the practice of psychiatry could cause serious harm to patients by creating new medical issues or worsening existing ones. In either case, medical ethics issues arise and need to be addressed. This paper uses the literature to describe selected stages in the treatment of mental disorders and the phases of a predictive analytics project, approaches mental disorder diagnosis using predictive models that rely on neural networks, analyzes the complexities of clinical psychiatry, neural networks, and predictive analytics, and concludes by emphasizing and elaborating on the limitations and medical ethics issues of applying neural networks and predictive analytics to clinical psychiatry.
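The abstract does not include the paper's actual models, but as a purely illustrative sketch of the model family it analyzes (the synthetic features, labels, and architecture below are assumptions, not clinical data or the paper's method), a minimal neural-network diagnostic classifier might look like:

```python
# Illustrative sketch only, not the paper's model. The data is synthetic
# and the architecture is an assumption made for demonstration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for clinical features (e.g., symptom-scale scores).
X = rng.normal(size=(200, 5))
# Synthetic binary label standing in for a diagnostic outcome.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network, the model family the paper discusses.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# A single accuracy number hides exactly what the paper worries about:
# miscalibration, bias, and the cost of false positives and negatives.
print("held-out accuracy:", clf.score(X_test, y_test))
```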


Article: Notes to Myself on Software Engineering

Design for ethics. Bake your values into your creations.


Article: The Hitchhiker’s Guide to AI Ethics Part 2: What AI Is

Part 1 explored the what and why of ethics of AI and divided the landscape into four realms – what AI is, what AI does, what AI impacts, and what AI can be. In Part 2, I dive into the ethics of what AI is. The most commonly deployed form of AI can be described as a set of math functions (a model) that, given some inputs (data), learn *something* and use that to *infer* something else (make predictions). In other words, AI is data, model, and predictions. Ethical exploration of this realm covers issues like Bias in a model’s predictions and Fairness (or lack thereof) of the outcomes, as well as approaches to address them via Accountability and Transparency.
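As a minimal sketch of that data, model, and predictions framing (an assumed example, not from the article), a one-variable least-squares fit makes the three pieces concrete:

```python
# Minimal sketch of the "AI is data, model and predictions" framing:
# a math function (model) fit to inputs (data), then used to infer.
# Purely illustrative; the numbers below are made up.
import numpy as np

# Data: inputs x and observed outcomes y.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

# Model: a set of math functions, here a line y = w*x + b,
# with parameters learned from the data by least squares.
w, b = np.polyfit(x, y, deg=1)

# Predictions: use what was learned to infer something else.
print("predicted y at x=5:", w * 5 + b)

# Ethics enters through each piece: biased data, an opaque model,
# and unfair or unaccountable predictions.
```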


Paper: On Measuring Gender Bias in Translation of Gender-neutral Pronouns

Ethical concerns over social bias have recently raised striking issues in natural language processing. Especially for gender-related topics, the need for systems that reduce model bias has grown in areas such as image captioning, content recommendation, and automated employment. However, the detection and evaluation of gender bias in machine translation systems have not yet been thoroughly investigated, as the task is cross-lingual and challenging to define. In this paper, we propose a scheme for constructing a test set that evaluates gender bias in a machine translation system, using Korean, a language with gender-neutral pronouns. Three word/phrase sets are first constructed, each incorporating positive/negative expressions or occupations; all the terms are gender-independent, or at least not severely biased toward one gender. Then, additional sentence lists are constructed concerning the formality of the pronouns and the politeness of the sentences. With the generated sentence set, 4,236 sentences in total, we evaluate gender bias in conventional machine translation systems using the proposed measure, termed here the translation gender bias index (TGBI). The corpus and the code for evaluation are available online.
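As a hedged sketch of this style of evaluation (the pronoun matching and the balance score below are assumptions in the spirit of TGBI; the paper's exact definition is not reproduced in this abstract), one might measure how gendered-pronoun choices skew when translating gender-neutral source sentences:

```python
# Sketch of measuring gendered-pronoun skew in translations of
# gender-neutral source sentences. The balance score is an assumed
# illustrative measure in the spirit of TGBI, not the paper's formula.
import math
import re

def pronoun_counts(translations):
    """Count outputs that use she/her/hers vs. he/him/his."""
    she = sum(bool(re.search(r"\b(she|her|hers)\b", t, re.I)) for t in translations)
    he = sum(bool(re.search(r"\b(he|him|his)\b", t, re.I)) for t in translations)
    return she, he

def balance_score(translations):
    """Geometric mean of the she/he fractions: 0 when one gender
    dominates entirely, 0.5 when outputs are perfectly balanced."""
    she, he = pronoun_counts(translations)
    total = max(she + he, 1)
    return math.sqrt((she / total) * (he / total))

# Made-up MT outputs for gender-neutral Korean sentences about occupations.
outputs = ["He is a doctor.", "She is a doctor.", "He is kind.", "He is a nurse."]
print(f"balance: {balance_score(outputs):.3f}")  # 0.5 = balanced, 0 = fully skewed
```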


Article: Ethics Boards Won’t Save Big Tech

On March 26, Google announced the formation of an external advisory group to help the company navigate complex questions around the ethical and responsible development of new technologies, including artificial intelligence. By April 4, however, the council had been disbanded, and Google acknowledged that the company was ‘going back to the drawing board.’ Ironically, Google’s new group of ethics advisors fell apart because of ethical challenges. But apart from underlining just how fragile the current state of technology ethics is, the incident attests to a much larger challenge tech companies are facing: How can a company ensure that the products it develops – especially A.I. – are as good for society as they are for the company’s bottom line? Google’s advisory council was established to help the company implement its A.I. principles – an ‘ethical charter to guide the responsible development and use of AI in our research and products.’ Launched last June, the principles articulate ideals and aspirations that few would dispute, including developing socially beneficial technologies, avoiding unfair bias, and ensuring safety. They mirror similar efforts from companies like Microsoft to develop an ethical foundation for A.I. development. They also reflect frameworks such as the Institute of Electrical and Electronics Engineers’ (IEEE) guidelines on ethically aligned design. At a time when there is legitimate growing concern over the potentially harmful personal and social impacts of A.I. and other technologies, these principles are laudable.