Article: IoT Data: Get the Right Rights
Before diving into AI, advanced analytics, and other data-driven projects, be sure to understand which data you hold rights to and what restrictions apply.
Article: Managing Corporate AI
Morrison & Foerster’s Stephanie Sharron and Andy Serwin discuss the legal and ethical concerns of using AI. The impacts could be far-reaching, potentially including unlawful bias and discrimination, violations of privacy laws, and uncertainty about legal liability and accountability for harms caused.
Recent advancements in machine learning research, i.e., deep learning, have introduced methods that surpass both conventional algorithms and humans in several complex tasks, ranging from object detection in images and speech recognition to playing difficult strategy games. However, the current methodology of machine learning research, and consequently the real-world implementations of such algorithms, appears to suffer from a recurring HARKing (Hypothesizing After the Results are Known) problem. In this work, we elaborate on the algorithmic, economic, and social reasons for and consequences of this phenomenon. We present examples from current common practices in machine learning research (e.g., the avoidance of reporting negative results) and from failures of proposed algorithms and datasets to generalize in actual real-life usage. Furthermore, we discuss a potential future trajectory of machine learning research and development from the perspective of accountable, unbiased, ethical, and privacy-aware algorithmic decision making. We would like to emphasize that with this discussion we neither claim to provide an exhaustive argument nor blame any specific institution or individual for the issues raised. This is simply a discussion put forth by us, insiders of the machine learning field, reflecting on ourselves.
OpenAI’s decision not to release their model – nicknamed GPT-2 – to the public sets a crucial and highly controversial precedent for the development of increasingly advanced AIs. And because AI will inevitably have a defining influence on the course of human affairs – and even on the very development of our species – the debate surrounding OpenAI’s latest move merits the sustained attention of anyone who’s remotely invested in the future of human civilization.
This white paper offers a framework for understanding the potential risks for machine learning applications to have discriminatory outcomes, in order to arrive at a roadmap for preventing them. While different applications of ML will require different actions to combat discrimination and encourage dignity assurance, in this white paper we offer a set of transferable, guiding principles that are particularly relevant for the field of machine learning. We base our approach on the rights enshrined in the Universal Declaration of Human Rights and further elaborated in a dozen binding international treaties that provide substantive legal standards for the protection and respect of human rights and safeguarding against discrimination.
Kurt Muehmel explores AI within a broader discussion of the ethics of technology, arguing that inclusivity and collaboration are necessary.
Artificial intelligence may not be the newest entrant in smart technology, but its redefined usage is. AI has recently gained mass popularity and is now being adopted across many industries. With the general public embracing AI, government agencies are also considering the technology. Because AI can imitate human decision-making skills, it has the potential to improve the decision-making ability of many applications.
Handing off decision-making to predictive AI would be catastrophic.