Article: Why We Need Ethics for AI: Aethics

We need to think specifically about the implications of classification, machine learning and artificial intelligence on decision making processes.


Python Library: transparentai

A Python tool for building ethical AI, from defining users' needs through to monitoring the model.


Paper: Moral Dilemmas for Artificial Intelligence: a position paper on an application of Compositional Quantum Cognition

Traditionally, the way one evaluates the performance of an Artificial Intelligence (AI) system is via a comparison to human performance in specific tasks, treating humans as a reference for high-level cognition. However, these comparisons leave out important features of human intelligence: the capability to transfer knowledge and make complex decisions based on emotional and rational reasoning. These decisions are influenced by current inferences as well as prior experiences, making the decision process strongly subjective and apparently biased. In this context, a definition of compositional intelligence is necessary to incorporate these features in future AI tests. Here, a concrete implementation of this will be suggested, using recent developments in quantum cognition, natural language and compositional meaning of sentences, thanks to categorical compositional models of meaning.


Article: Ethics of Artificial Intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).


Article: Does the gift you’re giving your loved ones respect their rights?

When picking a gift this year, we urge you to think carefully about the choice you’re making. Is that smart assistant smart enough to respect your friend or family member’s rights? Does that tablet really have their best interest in mind? Or does that shiny gadget come at a cost much higher than its price tag? When we allow proprietary software created by Facebook, Amazon, Apple, Google, and countless other companies to handle our basic computing tasks, we put an enormous amount of power in their hands, power which they freely exploit. It’s only through using free software, and devices running free software, that we can seize this power back.


Article: Should we be worried about artificial intelligence?

We should be concerned about artificial intelligence because all things have consequences and unintended side effects that we cannot foresee or control. Human nature is what it is: we will do terrible things to each other. History, and this article, have shown this.


Paper: Fooling with facts: Quantifying anchoring bias through a large-scale online experiment

Living in the ‘Information Age’ means that not only access to information has become easier but also that the distribution of information is more dynamic than ever. Through a large-scale online field experiment, we provide new empirical evidence for the presence of the anchoring bias in people’s judgment due to irrational reliance on a piece of information that they are initially given. The comparison of the anchoring stimuli and respective responses across different tasks reveals a positive, yet complex relationship between the anchors and the bias in participants’ predictions of the outcomes of events in the future. Participants in the treatment group were equally susceptible to the anchors regardless of their level of engagement, previous performance, or gender. Given the strong and ubiquitous influence of anchors quantified here, we should take great care to closely monitor and regulate the distribution of information online to facilitate less biased decision making.


Article: Artificial Intelligence and a More or Less Ethical Future of Work

On November 28th, an article was posted on TechCrunch about the Future of Work. The article is a conversation between Greg M. Epstein, the Humanist Chaplain at Harvard and MIT and author of the New York Times bestselling book Good Without God, and two key organisers of EmTech: Gideon Lichfield and Karen Hao. I could not access it because it was behind a paywall. However, it was accompanied by another article: "Will the future of work be ethical? After generations of increasing inequality, can we teach tech leaders to love their neighbors more than algorithms and profits?" That article is open access, and one I recommend reading. The theme of EmTech this year seems to be AI, machine learning, and the future of work. It is what Greg describes as the '…opportunity to have an existential crisis; I could even say a religious crisis, though I'm not just a confirmed atheist but a professional one as well.' He ponders whether future leaders will exploit more efficiently or find a different path.