Article: The Machine That Programmed Humans

Many high-water marks have been breached in the last century, but perhaps the most important is that, for the first time in the history of our species, we are encountering tools whose interests diverge from those of their users. I speak of artificial intelligence, and in particular of advertising-enabled software such as Facebook and Gmail. In such examples, the software’s intended purpose has departed subtly from that of its users. While this may seem innocuous at present, carried to its furthest conclusions it portends a raft of disturbing consequences. Namely, we may find ourselves at a crossroads at which humans are programmed by machines rather than vice versa.


Article: Artificial Intelligence can never be truly intelligent

Let’s say I’m locked in a room and given a large batch of Chinese writing. I don’t know any Chinese, written or spoken. I can’t even differentiate the writing from other similar scripts, such as Japanese. Now, I receive a second batch of writing, this time with a set of instructions in English (which I do know) on how to correlate the first batch with the second. I use these English instructions to find patterns and common symbols in the writing. I then receive a third batch of writing, once again with English instructions, which help me correlate it with the first two batches. These instructions also help me frame a response using the same set of symbols and characters. This goes on. After a while I get so good at this game that nobody, just by looking at my responses, can tell whether I’m a native Chinese speaker or not. But does this mean I understand Chinese? Of course not. This brilliant example, called the Chinese room argument, from John R. Searle’s ‘Minds, Brains, and Programs’ (1980)[1] highlights the fundamental flaws in our understanding of Artificial Intelligence. This article is an ode to the idea that the ‘intelligence’ we try to manufacture artificially is not really the ‘intelligence’ that you and I identify intrinsically as a trait of our species. It is but an imitation, an illusion, of human intelligence.
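
To make the thought experiment concrete, here is a minimal sketch of the room as a program. The rule book and its phrases are invented for illustration; the point is only that the program, like the person in the room, maps input symbols to output symbols by rule-following alone:

```python
# A toy "Chinese room": responses come from pure symbol lookup.
# The rule book below stands in for Searle's English instructions;
# it pairs input strings with canned replies, nothing more.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会。",        # "Do you speak Chinese?" -> "Yes."
}

def respond(symbols: str) -> str:
    """Return whatever reply the rules prescribe; no meaning is involved."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(respond("你好吗?"))  # prints: 我很好, 谢谢。
```

A rich enough rule book could pass a narrow behavioral test while the program understands nothing, which is exactly Searle’s point.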


Article: AI Safety Needs Social Scientists

We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rationality, emotion, and biases. The aim of this paper is to spark further collaboration between machine learning and social science researchers, and we plan to hire social scientists to work on this full time at OpenAI.


Article: Strata Data Ethics Summit

Technology helps people augment their abilities. And, from the Gutenberg Bible to robotics, tech has always had ethical implications. But while many technologies have narrow implications, data touches everything. It is us. Data is where the rubber of humanity meets the road of technology – and we’re ill-prepared for the impact. From data breaches, to campaign influence, to fraud, to equality, data ethics are at the forefront of today’s headlines. In this day-long Strata Data event, Altimeter analyst Susan Etlinger and Strata chair Alistair Croll bring together a packed lineup of academics, practitioners, and innovators for a deep dive into the thorny issues of data and algorithms, and how establishing and reinforcing ethical technology norms can not only mitigate risk but drive innovation.


Article: A Conversation about Tech Ethics with the New York Times Chief Data Scientist

Note from Rachel: Although I’m excited about the positive potential of tech, I’m also scared about the ways that tech is having a negative impact on society, and I’m interested in how we can push tech companies to do better. I was recently in a discussion during which New York Times chief data scientist Chris Wiggins shared a helpful framework for thinking about the different forces we can use to influence tech companies towards responsibility and ethics. I interviewed Chris on the topic and have summarized that interview here. In addition to serving as Chief Data Scientist at the New York Times since January 2014, Chris Wiggins is professor of applied mathematics at Columbia University, a founding member of Columbia’s Data Science Institute, and co-founder of HackNY. He co-teaches a course at Columbia on the history and ethics of data.


Article: On Ethics and Artificial Intelligence: an Economic Perspective

What is the takeaway of Artificial Intelligence (AI)? This is a recurrent conversation topic among AI practitioners, specialized journalists, and brave politicians. Although some basic concepts are clearly conveyed to the general audience, others are not so widely known. In this post I’ll be focusing on an important topic that is often overlooked: the economics behind AI.


Article: Those Racist Robots…

Artificial Intelligence (AI) is one of the hottest topics out there, especially with the ongoing debate over whether or not robots are likely to take over the world. Whether we view Artificial Intelligence as a genuine advancement in our history or as just another reckless, clumsy integration of accumulated knowledge, examining the topic is of great interest to most.


Article: Algorithms are shaping our lives—here’s how we wrest back control

In this episode of the Data Show, I spoke with Kartik Hosanagar, professor of technology and digital business, and professor of marketing at The Wharton School of the University of Pennsylvania. Hosanagar is also the author of a newly released book, A Human’s Guide to Machine Intelligence, an interesting tour through the recent evolution of AI applications that draws from his extensive experience at the intersection of business and technology.