Article: Artificial Intelligence, Consciousness and the Self

Many scientists and engineers believe that computers will eventually become conscious. Cognitive scientist Bernard Baars and computer scientist Stan Franklin wrote in 2009: ‘consciousness may be produced by … algorithms running on the machine.’ MIT Technology Review opined in October 2017 that this may happen in the not-too-distant future. Because biological evolution is far slower than technological evolution, there is fear that humans will be unable to compete with sentient machines. No wonder many prominent voices in the scientific and tech worlds claim that artificial intelligence is leading humanity to a catastrophe. Stephen Hawking told the BBC in 2014, ‘I think the development of full artificial intelligence could spell the end of the human race.’ Speaking to the National Governors Association in 2017, Tesla CEO Elon Musk said that ‘AI technology is a fundamental risk to the existence of human civilization.’


Article: Data as Labor

The story of how most tech companies generate profit has been a straightforward one: ‘users are unwaged labourers who produce goods (data and content) that are then taken and sold by the companies to advertisers and other interested parties’. In turn, this data is mainly fed to Artificial Intelligence (AI) systems that deliver services, improve production, drive innovation, etc. In fact, this economic model (the digital economy) is probably the main source of innovation today, delivering massive ‘surplus to users and [being] ‘free’ (at point of use) to users’. Paradoxically, while these systems rely on the quantity and quality of data generated by humans, they are also displacing workers at an unsettling rate; a recent study shows that AI could automate around 50% of jobs in 10 to 20 years. Another striking figure is that even though the combined revenues of Detroit’s ‘Big 3’ (GM, Ford and Chrysler) ‘were almost identical to those of Silicon Valley’s ‘Big 3’ (Google, Apple, Facebook) in 2014, the latter had nine times fewer employees and were worth thirty times more on the stock market’. This has prompted many economists to sound the alarm about the current distortions in market power and to call for new approaches. This article will attempt to summarise an alternative economic/social paradigm.


Article: Much of what we hear about technology these days is a grim, dystopian recitation of what tech is doing to us: We have become addicted to our screens; our every move is being watched, overheard, recorded, predicted; and malign forces are manipulating us to believe that down is up. And we should be deeply concerned about all that. But it’s also important to take stock of what tech is doing for us – how we, as human beings, have seen our agency expanded and deepened by digital tools. Technology is a medium; sometimes it’s a humanizing, enchanting one. ‘Something about the interior life of a computer remains infinitely interesting to me; it’s not romantic, but it is a romance,’ writes Paul Ford in his WIRED essay ‘Why I (Still) Love Tech.’ ‘You flip a bunch of microscopic switches really fast and culture pours out.’ To accompany Ford’s essay, we reached out to a bunch of people to ask them about the technology they love – the tools that make them better at being human. Here’s what we heard back.


Article: Ethics in Generative AI: Detecting Fake Faces in Videos

Technology is inherently about humans, and it is perilous to ignore social and psychological impact while creating tech. As engineers, we must be aware of the unintended consequences of the technology we create.
With the advent of automotive AI and the recent impact of social media platforms on elections, ethics in AI has become one of the major areas of research. A few important areas (among others) in ethical AI are:
• Algorithmic Bias
• Autonomy & System Design
• Governance in AI
• Generative AI


Article: Your Friendly, Neighborhood Superintelligence

One of our older stories is the demon summoning that goes bad. They always do. Our ancestors wanted to warn us: beware of beings who offer unlimited power. These days we depend on computational entities that presage the arrival of future demons – strong, human-level AIs. Luckily our sages, such as Nick Bostrom, are now thinking about better methods of control than the demon-story tropes of appeasement, monkey cleverness, and stronger magical binding. After reviewing current thinking, I’ll propose a protocol for safer access to AI superpowers.


Paper: Can Women Break the Glass Ceiling?: An Analysis of #MeToo Hashtagged Posts on Twitter

In October 2017, an unprecedented online movement arose on social media as women across the world began publicly sharing their untold stories of sexual harassment along with the hashtag #MeToo (or some variant of it). Those stories not only broke the silence that had long hidden the perpetrators, but also allowed women to voice some of their bottled-up grievances, and revealed much important information surrounding sexual harassment. In this paper, we present our analysis of about one million such tweets collected between October 15 and October 31, 2017, which reveals some interesting patterns and attributes of the people, places, emotions, actions, and reactions related to the tweeted stories. Based on our analysis, we also advance the discussion on the potential role of online social media in breaking the silence of women by factoring in the strengths and limitations of these platforms.
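The abstract mentions matching #MeToo ‘or some variant of it’. As a rough illustration of how such variant hashtags might be filtered during collection, here is a minimal Python sketch; the pattern and the function name are our assumptions, not the paper’s published methodology.

```python
import re

# Hypothetical pattern for #MeToo and common variants
# (#metoo, #MeTooIndia, ...). The paper does not publish its
# exact matching rule; this pattern is an assumption.
METOO_PATTERN = re.compile(r"#metoo\w*", re.IGNORECASE)

def is_metoo_tweet(text: str) -> bool:
    """Return True if the tweet text contains a #MeToo-style hashtag."""
    return bool(METOO_PATTERN.search(text))

assert is_metoo_tweet("Sharing my story. #MeToo")
assert is_metoo_tweet("solidarity #metooindia")
assert not is_metoo_tweet("just another tweet")
```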


Article: Perspectives and Approaches in AI Ethics: East Asia

This chapter introduces readers to distinct Chinese, Japanese, and South Korean perspectives on and approaches to AI and robots as tools and partners in the AI ethics debate. Little discussed and often ignored, this sensitive topic commands our attention as it continues to grow in local importance. Given East Asia’s influential position as a source of global inspiration, development, and supply of AI and robotics, we would do well to inform ourselves of what’s to come. Each country’s perspectives on and approaches to AI and robots on the tool-partner spectrum are evaluated by examining its policy, academic thought, local practices, and popular culture. This analysis places South Korea in the tool range, China in the middle of the spectrum, and Japan in the partner range. All three countries hold a salient tension between top-down tool approaches and bottom-up partner perspectives. This tension is likely to increase both in magnitude and importance and shape local and global development and regulation trajectories in the years to come.


Paper: Privacy-preserving Crowd-guided AI Decision-making in Ethical Dilemmas

With the rapid development of artificial intelligence (AI), ethical issues surrounding AI have attracted increasing attention. In particular, autonomous vehicles may face moral dilemmas in accident scenarios, such as staying the course and hurting pedestrians, or swerving and hurting passengers. To investigate such ethical dilemmas, recent studies have adopted preference aggregation, in which each voter expresses her/his preferences over decisions for the possible ethical dilemma scenarios, and a centralized system aggregates these preferences to obtain the winning decision. Although a useful methodology for building ethical AI systems, such an approach can potentially violate the privacy of voters, since moral preferences are sensitive information and their disclosure can be exploited by malicious parties. In this paper, we report a first-of-its-kind privacy-preserving crowd-guided AI decision-making approach for ethical dilemmas. We adopt the notion of differential privacy to quantify privacy, and consider four granularities of privacy protection by taking voter-/record-level privacy protection and centralized/distributed perturbation into account, resulting in four approaches: VLCP, RLCP, VLDP, and RLDP. Moreover, we propose different algorithms to achieve these privacy protection granularities while retaining the accuracy of the learned moral preference model. Specifically, VLCP and RLCP are implemented with the data aggregator setting a universal privacy parameter and perturbing the averaged moral preference to protect the privacy of voters’ data. VLDP and RLDP are implemented in such a way that each voter perturbs her/his local moral preference with a personalized privacy parameter. Extensive experiments on both synthetic and real data demonstrate that the proposed approach can achieve high accuracy of preference aggregation while protecting individual voters’ privacy.
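To make the centralized-versus-distributed perturbation distinction concrete, the following is a minimal Python sketch of Laplace-noised preference aggregation. It is an illustration under our own assumptions (preferences as bounded vectors, a simplified per-coordinate noise calibration, and invented function names such as aggregate_centralized), not the paper’s actual VLCP/VLDP algorithms.

```python
import numpy as np

def laplace_noise(scale: float, size: int) -> np.ndarray:
    """Sample zero-centered Laplace noise with the given scale."""
    return np.random.laplace(loc=0.0, scale=scale, size=size)

def aggregate_centralized(preferences: np.ndarray, epsilon: float,
                          sensitivity: float) -> np.ndarray:
    """Centralized perturbation (in the spirit of VLCP): a trusted
    aggregator averages the raw preference vectors, then adds Laplace
    noise calibrated to a universal privacy parameter epsilon.
    Averaging over n voters scales one voter's influence by 1/n,
    so the noise scale shrinks as n grows."""
    n, d = preferences.shape
    mean_pref = preferences.mean(axis=0)
    return mean_pref + laplace_noise(sensitivity / (n * epsilon), d)

def aggregate_local(preferences: np.ndarray, epsilons: np.ndarray,
                    sensitivity: float) -> np.ndarray:
    """Distributed perturbation (in the spirit of VLDP): each voter
    perturbs her/his own preference vector with a personalized epsilon
    before sharing, so the aggregator only ever sees noisy reports."""
    noisy = np.array([
        pref + laplace_noise(sensitivity / eps, pref.shape[0])
        for pref, eps in zip(preferences, epsilons)
    ])
    return noisy.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 1000 voters, 3-dimensional moral preference vectors in [0, 1].
    prefs = rng.random((1000, 3))
    print("true mean:     ", prefs.mean(axis=0))
    print("centralized DP:", aggregate_centralized(prefs, epsilon=1.0, sensitivity=1.0))
    print("local DP:      ", aggregate_local(prefs, np.full(1000, 1.0), sensitivity=1.0))
```

A design point the sketch makes visible: in the centralized setting, the noise scale shrinks with the number of voters because only the average is perturbed, whereas in the local setting each voter adds noise before aggregation, so achieving comparable accuracy typically requires many more voters.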