Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of ‘social machine’ and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already well in place, we discuss possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.
‘I shall not today attempt further to define the kinds of material … within that shorthand description … but I know it when I see it.’ This rationale, famously stated by U.S. Supreme Court Justice Potter Stewart, characterizes the struggle to define something that resists definition, such as taste, art, beauty or, in Stewart’s case, obscenity. That perspective could easily be applied to digital ethics today. Gartner defines digital ethics as the systems of values and moral principles for the conduct of electronic interactions. But what does this really mean? Everyone agrees we need to decide what is ethical and what is not, yet most executives and organizations seem to operate on an ‘I know it when I see it’ basis.
The Artificial Intelligence paradigm (hereinafter ‘AI’) builds on the analysis of data that can, among other things, capture snapshots of individuals’ behaviors and preferences. Such data represent the most valuable currency in the digital ecosystem, where their value derives from their role as a fundamental asset for training the machines that underpin AI applications. In this environment, online providers attract users by offering services for free and receiving in exchange the data generated through the use of those services. This implicit swap is the focus of the present paper, in light of the disequilibria and market failures it may bring about. We use mobile apps and the related permission system as an ideal environment in which to explore these issues with econometric tools. The results, drawn from a dataset of over one million observations, show that both buyers and sellers are aware that access to digital services implicitly entails an exchange of data, although this has no considerable impact on either the level of downloads (demand) or the level of prices (supply). In other words, the implicit nature of this exchange prevents market indicators from working efficiently. We conclude that current policies (e.g. transparency rules) may be inherently biased, and we put forward suggestions for a new approach.
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors across society requires an assessment of its effect on sustainable development. Here we analyze published evidence of positive or negative impacts of AI on the achievement of each of the 17 goals and 169 targets of the 2030 Agenda for Sustainable Development. We find that AI can support the achievement of 128 targets across all SDGs, but it may also inhibit 58 targets. Notably, AI enables new technologies that improve efficiency and productivity, but it may also lead to increased inequalities among and within countries, thus hindering the achievement of the 2030 Agenda. The fast development of AI needs to be supported by appropriate policy and regulation; otherwise, it may lead to gaps in the transparency, accountability, safety, and ethical standards of AI-based technology, which could be detrimental to the development and sustainable use of AI. Finally, there is a lack of research assessing the medium- and long-term impacts of AI. It is therefore essential to reinforce the global debate regarding the use of AI and to develop the necessary regulatory insight and oversight for AI-based technologies.
The ‘High-Level Expert Group on AI’ (AI HLG) presented its guidelines for trustworthy AI at the beginning of April. The EU Commission had tasked the body in the summer of the previous year. Building on four ethical principles for the development and application of AI systems, the group’s 52 members formulated in the paper seven requirements for AI companies and their developers. The principles are respect for individual autonomy; the principle of ‘do no harm’, that is, causing no damage and injuring no one; fairness; and explicability. The seven requirements against which AI developments are to be tested include technical robustness and safety, privacy and clear data governance rules, transparency, non-discrimination, consideration of societal effects as a whole, and accountability measures. Above all stands autonomy and oversight for the individual. From the summer until the end of 2019, companies are to test how well these rules can be built into development and applications. The debate over ethical AI accompanies a large investment program coordinated with the member states: 20 billion euros are earmarked through the end of 2020, of which 1.5 billion are EU funds. The aim is to position trustworthy ‘AI made in Europe’ in a market dominated by China and the USA.
With the growing popularity of computer vision and facial recognition, businesses are striving to adopt these innovations to keep their heads above water. Accenture’s data scientists predict a future saturated with such technology in areas like security, customer interactions, retail, and marketing. In that future, different scenarios are possible, and the discussions they spur are inevitable. That was the case with Amazon Rekognition, when a group of stakeholders expressed worries about potential abuse of the facial recognition technology for surveillance purposes; and not without reason. Commonly, the main argument for implementing AI-based innovations such as facial recognition is the intention to increase ROI. This situation cannot help but raise ethical questions: how should organizations tackle the downsides of AI? It’s time to figure things out.
Article: Empathy in Artificial Intelligence
A few years ago, I had a conversation with a person who had strong opinions about what it means to be an American. We were discussing the Japanese internment during World War II. As an Asian American, I feared that rising racial tensions and conflict with Asian countries might lead to another round of internment of Asian Americans. That person stated that the sheer number of Asian Americans makes it impossible to intern them now. Then this person stated that ‘internment’ is a primitive method of controlling a group of people. In the age of Artificial Intelligence, augmented reality will be able to control people’s lives by altering every aspect of those lives. In the name of national security, in times of world conflict, a group of people can be persecuted by technology without even knowing that they are being persecuted.
Last year, my colleague J. P. Gownder and I got to talking about the impact on employees both from companies’ automation efforts and their eagerness to integrate artificial intelligence and robots. We noticed that most of the predictions of how this would go were either utopias viewed through rose-colored glasses or dystopian nightmares darker than most science-fiction novels. Perfect or perfectly terrible future scenarios both sounded unlikely to us. More importantly, we believed that companies needed a plan for creating future employee experiences that didn’t leave humans either out of work or with jobs that left little for them to do. And so we recently published a new report to help companies, ‘Start Designing The Future Human-Machine Workplace Now.’ The goal for every company should be to ensure that their humans can thrive when they work alongside robots and AI. To do that, we lay out three principles for creating future employee experiences.