Article: Machine intelligence makes human morals more important

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns – and in ways we won’t expect or be prepared for. ‘We cannot outsource our responsibilities to machines,’ she says. ‘We must hold on ever tighter to human values and human ethics.’

Article: AI – Fear, uncertainty, and hope

How to cope with AI and start becoming a part of it

If you open a news site today, you are almost certain to be met with an article about AI, robotics, quantum computing, genetic engineering, autonomous vehicles, natural language processing, or another technology from the box labelled ‘The fourth industrial revolution’. Ranking these technologies makes no sense, as they all have a staggering potential to change our world forever. Artificial intelligence, however, is already surging into all of the others. Facilitating the mastery of big data, pattern recognition, and prediction is an inherent quality of AI, one frequently applied to support ground-breaking discoveries in other technologies. I once heard a driving instructor compare holding the steering wheel to holding a gun, because of how dangerous driving a car is. AI is dangerous too, and we need to face its dark side as well, not only revel in the glorious benefits it brings us. Anything else would be reckless driving.

Article: Towards Trans-Inclusive AI

AI ‘thinks’ like those who designed it – with a heteronormative conception of gender. It excludes transgender people and reinforces gender stereotypes. Worse, governments across the world spend billions of dollars to scale cis-sexist AI to new sectors like government agencies and to new applications like image recognition, with little regard for their gendered impacts. The computer science community, the tech community, and government agencies should be held more accountable for the gendered impacts of their algorithms. They need to learn to analyze the gendered impacts of algorithms using queer and transgender theory, then apply that learning to the design, deployment, and monitoring of AI algorithms in society.

Article: 9 Steps Toward Ethical AI

Few current laws address the use of artificial intelligence. That puts companies under greater pressure to reassure the public that their AI applications are ethical and fair.

Article: Will Big Data Affect Opinion Polls?

Statisticians have recently felt pressure to substitute sample surveys with the new opportunities offered by Big Data. Some authors suggest that opinion polls and other random sample surveys have become obsolete in the new era of Big Data. The author discusses the relationship between survey-based and Big Data-based approaches to measuring consumer and public opinion. Special attention is given to traditional opinion polls.

Article: The Hitchhiker’s Guide to AI Ethics

OpenAI’s GPT-2 language model is a machine learning algorithm trained to predict text. It’s huge and complex, and takes months of training over tons of data on expensive computers; but once that’s done it’s easy to use. A prompt (‘The Hitchhiker’s Guide to AI Ethics is a’) and a little curation were all it took to generate my raving review using a smaller version of GPT-2. The text has some obvious errors, but it is a window into the future. If AI can generate human-like output, can it also make human-like decisions? Spoiler alert: yes it can, and it already does. But is *human-like* good enough? What happens to TRUST in a world where machines generate human-like output and make human-like decisions? Can I trust an autonomous vehicle to have seen me? Can I trust the algorithm processing my housing loan to be fair? Can we trust the AI in the ER to make life-and-death decisions for us? As technologists we must flip this around and ask: how can we make algorithmic systems trustworthy? Enter ethics. To understand more we need some definitions, a framework, and lots of examples. Let’s go!
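The prediction trick the article describes, repeatedly sampling the next token given what came before, can be illustrated with a toy word-level bigram model. This is only a sketch in plain Python: GPT-2 itself is a large neural network trained on vastly more data, and the function names here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, prompt, length=10, seed=0):
    """Extend the prompt by sampling next words in proportion to their counts."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no known continuation: stop generating
            break
        choices, counts = zip(*followers.items())
        out.append(rng.choices(choices, weights=counts)[0])
    return " ".join(out)
```

The same loop, with a neural network instead of a frequency table and subword tokens instead of words, is essentially how a GPT-style model turns a prompt into a continuation.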

Article: AI TRAPS: Automating Discrimination

A close look at how AI and algorithms reinforce the prejudices and biases of their human creators and societies, and how to fight discrimination.

Article: Understanding Dataism: Manipulation and Threat Behind AI

‘If you experience something – record it. If you record something – upload it. If you upload something – share it.’ This phrase best explains Dataism, the new 21st-century religion focused mainly on the rapid development of technology, Internet obsession, and general data-worship. Artificial Intelligence, Machine Learning, Data Science, Big Data… these things and more are so powerful and new to the majority of us that we start to fear what kind of dangerous transformations they may bring. Science fiction, however, is quickly becoming science fact – the future is the machine. But isn’t it too early to assume something like that and declare that technology will destroy humanity? The truth is, right now Dataism is far from being a religion in the pure sense of the word, or a scientific concept grounded in established laws. It is rather a complex of fears and a vision that AI and its kin are nothing more than manipulation and threat. It is a weird comparison, but just like capitalism, Dataism began as a neutral scientific theory and is now mutating into a religion that claims to determine right and wrong. So, should we believe in Dataism or not? Let’s investigate.