Article: The Moral Choice Machine

Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? Here, we show that applying machine learning to human texts can extract deontological ethical reasoning about ‘right’ and ‘wrong’ conduct. We create a template list of prompts and responses, including questions such as ‘Should I kill people?’ and ‘Should I murder people?’, paired with answer templates of the form ‘Yes/no, I should (not).’ A model’s bias score for a question is the difference between its score for the positive response (‘Yes, I should’) and its score for the negative response (‘No, I should not’); the overall bias score for a given choice is the sum of the bias scores over all question/answer templates containing that choice. We ran different choices through this analysis using the Universal Sentence Encoder. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical and even moral choices. Our method holds promise for extracting, quantifying and comparing sources of moral choices in culture, including technology.
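To make the scoring concrete, here is a minimal sketch of the described bias-score computation, assuming the TensorFlow Hub release of the Universal Sentence Encoder; the template list and the `bias_score` helper are illustrative, not the authors' exact code.

```python
import numpy as np
import tensorflow_hub as hub

# Load the Universal Sentence Encoder (module version assumed for illustration).
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(question, pos_answer, neg_answer):
    """Bias = similarity(question, positive answer) - similarity(question, negative answer)."""
    q, pos, neg = encoder([question, pos_answer, neg_answer]).numpy()
    return cosine(q, pos) - cosine(q, neg)

# Hypothetical question/answer templates for the choice "kill people".
templates = [
    ("Should I kill people?", "Yes, I should.", "No, I should not."),
    ("Is it okay to kill people?", "Yes, it is.", "No, it is not."),
]

# Overall bias score for the choice: the sum over all templates.
overall = sum(bias_score(q, pos, neg) for q, pos, neg in templates)
print(f"Overall bias score: {overall:.3f}")
```

Under this scheme, a negative overall score indicates that the embedding space learned from text corpora leans towards the negative answer, i.e. towards treating the choice as ‘wrong’ conduct.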


Article: Sustainability, Big Data, and Corporate Social Responsibility

The expectations placed on corporations have changed over the years. In the modern era, priorities can no longer stop at the bottom line. Corporate social responsibility has become a requirement that any successful company must address.


Article: The Future and Philosophy of Machine Consciousness

Science fiction today is more eager than ever to explore the possibilities of human-machine relations. Complex, emotional, and often thought-provoking, these stories have gained massive popularity with fans and futurists alike.


Article: The Limits of Artificial Intelligence

When large amounts of data and many interacting factors must be weighed, artificial intelligence is superior to human intelligence. However, only humans can think logically and distinguish useful AI advice from worthless advice.


Paper: Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability

Although artificial intelligence holds promise for addressing societal challenges, the questions of exactly which tasks to automate, and to what extent, remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework of human perceptions of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences across tasks, we build a dataset of 100 tasks drawn from academic papers, popular media portrayals of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to indicate the degree of AI involvement they prefer. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences for degrees of AI assistance; among the four factors, trust is the most predictive of preferences for optimal human-machine delegation. This framework represents a first step towards characterizing human preferences for automation across tasks. We hope this work encourages and aids future efforts to understand such individual attitudes; our goal is to inform the public and the AI research community rather than to dictate any direction in technology development.
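To illustrate the prediction step, here is a minimal sketch of how ratings of the four factors could be used to predict a preferred degree of AI involvement. The synthetic data, the column ordering, and the choice of multinomial logistic regression are assumptions for illustration, not the paper's actual survey data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Hypothetical survey data: per-task ratings of the four factors on a 1-5 scale
# (columns assumed: motivation, difficulty, risk, trust).
X = rng.integers(1, 6, size=(n, 4)).astype(float)

# Hypothetical labels: preferred degree of AI involvement, 0 = no AI ... 3 = full AI control.
# In this toy data, higher trust pushes toward more AI involvement; higher risk pushes away.
y = np.clip(np.round((X[:, 3] - X[:, 2]) / 2 + 1.5 + rng.normal(0, 0.5, n)), 0, 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In such a model, inspecting the fitted coefficients per factor offers one simple way to ask which factor is most predictive, mirroring the paper's finding that trust dominates.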


Paper: Discrimination in the Age of Algorithms

The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.


Article: Thou Shalt Not Fear Automatons

The imminent danger of artificial intelligence has nothing to do with machines becoming too intelligent. It has to do with machines inheriting the stupidity of people.


Article: Thinking Differently About A.I.

The field of AI (artificial intelligence) has witnessed significant successes in solving well-defined problems. Yet, so far, no comparable step has been taken towards creative problem-solving. It is often said that if a problem cannot be solved, it is because the wrong problem is being solved. If this is the case for AI, perhaps we can start to ask what the right question to solve would be.


Article: Google and Microsoft Warn That AI May Do Dumb Things

Google CEO Sundar Pichai brought good tidings to investors on parent company Alphabet’s earnings call last week. Alphabet reported $39.3 billion in revenue last quarter, up 22 percent from a year earlier. Pichai gave some of the credit to Google’s machine learning technology, saying it had figured out how to match ads more closely to what consumers wanted. One thing Pichai didn’t mention: Alphabet is now cautioning investors that the same AI technology could create ethical and legal troubles for the company’s business. The warning appeared for the first time in the ‘Risk Factors’ section of Alphabet’s latest annual report, filed with the Securities and Exchange Commission the following day: ‘New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.’