Article: Hambacher Erklärung zur Künstlichen Intelligenz – Seven Data Protection Requirements

Artificial intelligence (AI) systems pose a substantial challenge to freedom and democracy within our legal order. The development and use of AI must comply with fundamental rights in a manner consistent with democracy and the rule of law. Not everything that is technically possible and economically desirable may be put into practice. This applies in particular to the use of self-learning systems that process data on a massive scale and interfere with the rights and freedoms of data subjects through automated individual decisions. Safeguarding fundamental rights is the task of all state institutions. The essential framework conditions for the use of AI must be set by the legislature and enforced by the supervisory authorities. Only if the protection of fundamental rights and data protection keep pace with the process of digitization will a future be possible in which, in the end, humans and not machines decide about humans.
1. AI must not turn human beings into objects
2. AI may be used only for constitutionally legitimate purposes and must not override the principle of purpose limitation
3. AI must be transparent, comprehensible, and explainable
4. AI must avoid discrimination
5. The principle of data minimization applies to AI
6. AI requires accountability
7. AI requires technical and organizational standards


Article: Dark Data as the New Challenge for Big Data Science and the Introduction of the Scientific Data Officer

Many studies in big data focus on the uses of data available to researchers, leaving untreated the data that sits on servers but of which researchers are unaware. We call this dark data, and in this article we present and discuss it in the context of high-performance computing (HPC) facilities. To this end, we provide statistics from a major HPC facility in Europe, the High-Performance Computing Center Stuttgart (HLRS). We also propose a new position tailor-made for coping with dark data and data management in general. We call it the scientific data officer (SDO), and we distinguish it from other standard positions in HPC facilities such as chief data officer, system administrator, and security officer. In order to understand the role of the SDO in HPC facilities, we discuss two kinds of responsibilities, namely, technical responsibilities and ethical responsibilities. While the former are intended to characterize the position, the latter raise concerns about, and propose solutions for, the control and authority that the SDO would acquire.


Article: FDA developing new rules for artificial intelligence in medicine

The Food and Drug Administration announced Tuesday that it is developing a framework for regulating artificial intelligence products used in medicine that continually adapt based on new data. The agency’s outgoing commissioner, Scott Gottlieb, released a white paper ( https://www.regulations.gov/document?D=FDA-2019-N-1185-0001 ) that sets forth the broad outlines of the FDA’s proposed approach to establishing greater oversight of this rapidly evolving segment of AI products. It is the most forceful step the FDA has taken to assert the need to regulate a category of artificial intelligence systems whose performance constantly changes based on exposure to new patients and data in clinical settings. These machine-learning systems present a particularly thorny problem for the FDA, because the agency is essentially trying to hit a moving target in regulating them. The white paper describes the criteria the agency proposes to use to determine when medical products that rely on artificial intelligence will require FDA review before being commercialized.


Article: Taming AI: What’s Next in Setting Standards for Safe, Effective, and Ethical Algorithms?

Please join the Center for Data Innovation for a conversation about the state of play in developing standards and oversight of AI systems, the need for proper governance of AI in key industries like health care and transportation, and the role that policymakers can play in advancing these efforts.
Date and Time: Thursday, May 30, 2019, from 10:00 to 11:30 AM
Location: 1101 K Street NW, Suite 610, Washington, DC 20005
Speakers to be announced. The event will be live-streamed on this page. Please return to this page on the day of the event.


Article: Solving the AI Accountability Gap – Hold developers responsible for their creations

Yesterday, a leaked white paper from the United Kingdom government suggested that social media executives could be held legally responsible for harmful content spread by their platforms’ algorithms. This proposal aims to address one of the biggest problems brought about by autonomous decision-making: who should be blamed when an AI causes harm?


Article: Ethics guidelines for trustworthy AI

Following the publication of the draft ethics guidelines in December 2018, on which more than 500 comments were received, the independent expert group today presents its ethics guidelines for trustworthy artificial intelligence.


Article: Relying on Competitive Advantage of AI Ethics is a Losing Strategy for Europe

The new ethics guidelines are a welcome alternative to the EU’s typical ‘regulate first, ask questions later’ approach to new technology. They also reflect a number of improvements over the draft released in December. For example, the new document acknowledges the trade-off between enhancing a system’s explainability and increasing its accuracy. It rightly concedes that its principles remain abstract, does away with the poorly defined ‘principle of beneficence,’ and no longer associates ‘nudging’ with ‘risks to mental integrity.’ In addition, it is particularly important that this document does not include recommendations urging the Commission to regulate AI.


Article: Exclusive: Google cancels AI ethics board in response to outcry

Now it’s official: the board has been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board, which survived for barely more than one week. Founded to guide the ‘responsible development of AI’ at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But the board ran into problems from the start.