Article: AI Problems are Human Problems

I recently attended a meeting of great minds at Harvard’s Kennedy School of Government titled ‘Governing AI – How Do We Do It?’. The meeting was full of high-profile individuals, ranging from distinguished Harvard faculty to cyber-security gurus to members of a delegation working with the Danish Minister of Higher Education and Science. As an extremely low-profile software engineer and prospective graduate student, I felt quite intimidated, to say the least.


Article: Top 10 Principles for Ethical Artificial Intelligence

1. Demand That AI Systems Are Transparent
2. Equip AI Systems With an ‘Ethical Black Box’
3. Make AI Serve People and Planet
4. Adopt a Human-In-Command Approach
5. Ensure a Genderless, Unbiased AI
6. Share the Benefits of AI Systems
7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
8. Establish Global Governance Mechanisms
9. Ban the Attribution of Responsibility to Robots
10. Ban AI Arms Races


Article: Women Leading in AI – 10 Principles for Responsible AI

1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure that algorithms are fully explained to users and open to public scrutiny.
3. Introduce a new ‘Certificate of Fairness for AI systems’ alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
7. Compel companies and other organisations to bring their workforce with them, by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
8. Where no redeployment is possible, compel companies to contribute towards a digital skills fund for those employees.
9. Carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
10. Establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend setting up a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.


Article: Data science for environmental health

Ecosystems fulfill a whole host of functions that are essential for life on our planet. However, an unprecedented level of anthropogenic influence is reducing the resilience and stability of our ecosystems as well as the functions they provide. The relationships between drivers, stress and ecosystem functions are complex, multi-faceted and often non-linear, and yet environmental managers, decision makers and politicians need to be able to make rapid, data-driven decisions based on short- and long-term monitoring information and on complex modeling and analysis approaches. A large number of long-standing and standardized ecosystem health approaches, such as the essential variables, already exist and are increasingly integrating remote-sensing based monitoring. Unfortunately, these approaches to monitoring, data storage, analysis, prognosis and assessment still do not satisfy the requirements of 21st-century information and digital knowledge processing. This presentation therefore discusses the requirements for using Data Science as a bridge between complex, multidimensional Big Data and environmental health. It became apparent that no existing monitoring approach, technique, model or platform is sufficient on its own to monitor, model, forecast or assess vegetation health and its resilience. To advance the development of a multi-source ecosystem health monitoring network and to gain a better understanding of ecosystem health in our complex world, we argue for implementing the concepts of Data Science with the following components: (i) digitalization, (ii) standardization with metadata management adhering to the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles, (iii) the Semantic Web, (iv) proof, trust and uncertainties, (v) complex tools for Data Science analysis and (vi) easy-to-use tools for scientists, data managers and stakeholders to support decision-making.
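To make component (ii) a little more concrete, here is a minimal Python sketch of what a FAIR-oriented metadata record and a trivial completeness check for an ecosystem monitoring dataset could look like. The field names, the example values (DOI, URL) and the missing_fair_fields helper are illustrative assumptions, not part of any standard or tool referenced in the abstract.

```python
# Minimal sketch of a FAIR-oriented metadata record for an ecosystem
# monitoring dataset. Field names, example values and the check below
# are illustrative assumptions, not a standard cited in the abstract.

REQUIRED_FIELDS = {
    "identifier",   # Findability: a globally unique, persistent ID (e.g. a DOI)
    "access_url",   # Accessibility: where and how the data can be retrieved
    "format",       # Interoperability: an open, documented file format
    "license",      # Reusability: clear terms of reuse
    "provenance",   # Reusability: who collected the data, when, and how
}

record = {
    "identifier": "doi:10.xxxx/example-vegetation-health",        # placeholder DOI
    "title": "Remote-sensing based vegetation health indicators",
    "access_url": "https://data.example.org/vegetation-health",   # hypothetical URL
    "format": "NetCDF",
    "license": "CC-BY-4.0",
    "provenance": "Derived from Sentinel-2 surface reflectance, 2017-2020",
    "keywords": ["essential variables", "ecosystem health", "resilience"],
}

def missing_fair_fields(metadata: dict) -> set:
    """Return the required FAIR-style fields that are absent from a record."""
    return REQUIRED_FIELDS - metadata.keys()

if __name__ == "__main__":
    gaps = missing_fair_fields(record)
    print("FAIR check passed" if not gaps else f"Missing fields: {sorted(gaps)}")
```

A real metadata catalogue would of course validate against a community schema rather than a hand-written set of fields, but the basic idea of machine-checkable, standardized descriptors is the same.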


Article: AI: The Future of Technology and the World

Artificial intelligence (AI) has become a bigger topic of controversy than ever before. Many people are worried about robots taking over the world. The concept of AI scares people because we are creating bots whose inner workings we do not understand. But what if I were to tell you that the majority of statements you have heard about AI are inaccurate? And when I say inaccurate, I don’t just mean a little bit off; the media is VERY wrong when it comes to AI. How can you speak about AI when you don’t know how it works and have no experience with it? Even worse, most media content on AI hardly includes evidence from experts in the field. At this point, it’s still understandable if you don’t trust AI; I don’t expect you to agree with me right off the bat. But hopefully, by the end of this article, you will see AI in a new light.


Book: AIQ – How artificial intelligence works and how we can harness its power for a better world

Two leading data scientists offer an up-close and user-friendly look at artificial intelligence: what it is, how it works, where it came from and how to harness its power for a better world.

‘There comes a time in the life of a subject when someone steps up and writes the book about it. AIQ explores the fascinating history of the ideas that drive this technology of the future and demystifies the core concepts behind it; the result is a positive and entertaining look at the great potential unlocked by marrying human creativity with powerful machines.’ – Steven D. Levitt, co-author of Freakonomics

Dozens of times per day, we all interact with intelligent machines that are constantly learning from the wealth of data now available to them. These machines, from smartphones to talking robots to self-driving cars, are remaking the world in the twenty-first century in the same way that the Industrial Revolution remade the world in the nineteenth. AIQ is based on a simple premise: if you want to understand the modern world, then you have to know a little bit of the mathematical language spoken by intelligent machines. AIQ will teach you that language, but in an unconventional way, anchored in stories rather than equations. You will meet a fascinating cast of historical characters who have a lot to teach you about data, probability and better thinking. Along the way, you’ll see how these same ideas are playing out in the modern age of big data and intelligent machines, and how these technologies will soon help you to overcome some of your built-in cognitive weaknesses, giving you a chance to lead a happier, healthier, more fulfilled life.


Book: Ethik der Informationsgesellschaft – Privatheit und Datenschutz, Nachhaltigkeit, Human-, Sozial- und Naturverträglichkeit, Interessen- und Wertekonflikte, Urheber- und Menschenrechte

The Internet, an inherently value-neutral medium and the most exposed instrument of the human mind, advances human civilization faster, more directly and more effectively than any invention before it. Digitalization has torn down borders, transformed societies and created a virtual parallel world in which everything seems possible. Filling this new form with content is the task of human creativity. Recognizing its potential, including its destructive side (surveillance, manipulation, digital-cultural control from outside), is the task of a vigilant consciousness. Controlling this new dimension of power is the task of ethics. Finding the right balance between security and data protection within it is the task of the public and of the individual.


Paper: A Network-centric Framework for Auditing Recommendation Systems

To improve the experience of consumers, social media, commerce and entertainment sites deploy Recommendation Systems (RSs) that aim to help users locate interesting content. These RSs are black boxes: the way a chunk of information is filtered out of a large information base and served to a user is mostly opaque. No one except the parent company generally has access to the information required for auditing these systems; neither the details of the algorithm nor the user-item interactions are ever made publicly available to third-party auditors. Hence, auditing RSs remains an important challenge, especially given recent concerns, captured by terms such as ‘echo chambers’, ‘confirmation bias’ and ‘filter bubbles’, about how RSs are affecting the views of society at large. Many prior works have evaluated different properties of RSs, such as diversity and novelty. However, most of these have focused on evaluating static snapshots of RSs. Today, auditors are interested not only in these static evaluations of a snapshot of the system, but also in how these systems affect society over time. In this work, we propose a novel network-centric framework which is able to quantify not only various static properties of RSs, but also dynamic properties such as how likely RSs are to lead to polarization or segregation of information among their users. We apply the framework to several popular movie RSs to demonstrate its utility.
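The abstract does not spell out the framework’s metrics, so the sketch below only illustrates the general network-centric idea in Python: link users who are exposed to overlapping recommendations and use a community-structure measure (modularity, via networkx) as a rough proxy for segregation of information. The toy data, the exposure_graph construction and the choice of modularity are assumptions made for illustration, not the paper’s actual method.

```python
# Toy illustration of a network-centric audit of a recommender:
# connect users who receive overlapping recommendations and measure
# how strongly the resulting graph splits into separate communities.
# The data and the modularity-based proxy are illustrative assumptions.
from itertools import combinations

import networkx as nx
from networkx.algorithms import community

# Hypothetical output of a recommender: user -> set of recommended item IDs.
recommendations = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"x", "y", "z"},
    "u4": {"x", "y", "w"},
}

def exposure_graph(recs: dict, min_overlap: int = 2) -> nx.Graph:
    """Build a user-user graph with an edge whenever two users share
    at least `min_overlap` recommended items, i.e. they are being
    steered into similar information spaces."""
    g = nx.Graph()
    g.add_nodes_from(recs)
    for u, v in combinations(recs, 2):
        if len(recs[u] & recs[v]) >= min_overlap:
            g.add_edge(u, v)
    return g

g = exposure_graph(recommendations)

# Detect communities and compute modularity: values close to 1 indicate
# well-separated groups of users seeing largely disjoint content, one
# crude proxy for segregation of information.
communities = community.greedy_modularity_communities(g)
score = community.modularity(g, communities)
print(f"communities: {[sorted(c) for c in communities]}, modularity: {score:.2f}")
```

In an actual audit one would build this graph from the RS’s recommendation logs and track how such a score evolves over time, which is the kind of dynamic property the paper’s framework is designed to capture.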