Article: Recommendations to the EU High Level Expert Group on Artificial Intelligence on its Draft AI Ethics Guidelines for Trustworthy AI

The Center for Data Innovation is pleased to submit feedback to the High-Level Expert Group (HLEG) on AI on its draft AI Ethics Guidelines for Trustworthy AI. The Center is a nonprofit research institute focused on the intersection of data, technology, and public policy. With staff in Washington, DC and Brussels, the Center formulates and promotes pragmatic public policies designed to maximize the benefits of data-driven innovation in the public and private sectors. It educates policymakers and the public about the opportunities and challenges associated with data, as well as technology trends such as artificial intelligence, open data, and the Internet of Things. The Center is affiliated with the Information Technology and Innovation Foundation (ITIF), the top-ranked science and technology policy think tank in the world.

Article: Five Ways Your Safety Depends on Machine Learning

In this episode of The Dr. Data Show, Eric Siegel tells you about five ways your safety depends on machine learning, which actively protects you from all sorts of dangers, including fires, explosions, collapses, crashes, workplace accidents, restaurant E. coli, and crime.

Article: The Algorithms Aren’t Biased, We Are

Excited about using AI to improve your organization’s operations? Curious about the promise of insights and predictions from computer models? I want to warn you about bias and how it can appear in those types of projects, share some illustrative examples, and translate the latest academic research on ‘algorithmic bias.’

Article: Notes on Artificial Intelligence, Machine Learning and Deep Learning for curious people

AI has been the most intriguing topic of 2018, according to McKinsey. Many people refer to AI without actually knowing what it means, and there is public debate over whether it is evil or a savior for humanity. This is yet another attempt to compile and explain introductory AI/ML concepts, to go beyond the buzz for non-practitioners and curious people. Artificial intelligence as an academic discipline was founded in the 1950s. The term ‘AI’ itself was coined by John McCarthy, an American computer scientist, back in 1956 at the Dartmouth Conference. According to McCarthy, AI is ‘the science and engineering of making intelligent machines, especially intelligent computer programs’.

Article: Nobody UNDERSTANDS Me … But Soon, Artificial Intelligence Just Might

The evil AIs from 2001: A Space Odyssey, Terminator, and beloved children’s movie Wall-E are, without doubt, some of the most terrifying depictions of artificial intelligence in our era. In reality, artificial intelligence is driving some massively positive change in the world; diagnosing illnesses based on conversation, forecasting disease outbreaks, even writing books. However, the thought of an all-powerful sentient AI wiping out all humans is actually a concern for a lot of smart people in the world: billionaire innovators like Elon Musk have devoted money and resources to make sure we have friendly artificial intelligence in the future through projects like OpenAI.

Article: Event Recap: The Impact of AI on Diplomacy and International Relations

On January 28, the Center for Data Innovation and DiploFoundation hosted an event which brought together over 150 people – diplomats, policy makers from EU institutions and member states, researchers, journalists and others – with an interest in the relationship between artificial intelligence (AI), diplomacy, and foreign policy. The event served as a platform to launch and discuss DiploFoundation’s report, Mapping the challenges and opportunities of artificial intelligence for the conduct of diplomacy, which was commissioned by the Finnish Ministry for Foreign Affairs. The launch was followed by three panels of experts, which deepened the discussion on some of the key themes in the report and allowed for further exchange of ideas.

Article: 2018 Lindberg-King Lecture: The Best Way to Predict the Future is to Create It. But Is It Already Too Late?

(CIT): Computer science pioneer Alan Curtis Kay, Ph.D., will deliver this year’s Lindberg-King Lecture in the Lister Hill Auditorium. His talk is titled, ‘The Best Way to Predict the Future is to Create It. But Is It Already Too Late?’ A child prodigy, Dr. Kay was an original member of the seminal Xerox PARC group, and for his myriad innovations in computer science was awarded the field’s highest honor: the Turing Award. He has been elected a Fellow of the American Academy of Arts and Sciences, the National Academy of Engineering, and the Royal Society of Arts. He is the president of the Viewpoints Research Institute and an adjunct professor of computer science at the University of California, Los Angeles. The Lindberg-King Lecture honors former NLM Director Donald A.B. Lindberg, M.D., and former NLM Deputy Director of Research and Education Donald West King, M.D. The event is co-sponsored by the NLM, Friends of the National Library of Medicine, and the American Medical Informatics Association.

Article: Guidelines for Human-AI Interaction

Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of guidelines for human-AI interaction design.
1. Make clear what the system can do. Help the user understand what the AI system is capable of doing.
2. Make clear how well the system can do what it can do. Help the user understand how often the AI system may make mistakes.
3. Time services based on context. Time when to act or interrupt based on the user’s current task and environment.
4. Show contextually relevant information. Display information relevant to the user’s current task and environment.
5. Match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.
6. Mitigate social biases. Ensure the AI system’s language and behaviors do not reinforce undesirable and unfair stereotypes and biases.
7. Support efficient invocation. Make it easy to invoke or request the AI system’s services when needed.
8. Support efficient dismissal. Make it easy to dismiss or ignore undesired AI system services.
9. Support efficient correction. Make it easy to edit, refine, or recover when the AI system is wrong.
10. Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.
11. Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.
12. Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.
13. Learn from user behavior. Personalize the user’s experience by learning from their actions over time.
14. Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI system’s behaviors.
15. Encourage granular feedback. Enable the user to provide feedback indicating their preferences during regular interaction with the AI system.
16. Convey the consequences of user actions. Immediately update or convey how user actions will impact future behaviors of the AI system.
17. Provide global controls. Allow the user to globally customize what the AI system monitors and how it behaves.
18. Notify users about changes. Inform the user when the AI system adds or updates its capabilities.
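Several of these guidelines translate directly into UI code. As a rough illustration (not from the paper), the sketch below shows one way an application might surface a prediction while following guidelines 2 (convey how well the system performs), 10 (scope services when in doubt), and 11 (explain why). The `Prediction` type, `render_prediction` function, and the 0.7 threshold are all hypothetical choices for this example.

```python
# Hypothetical sketch of guidelines 2, 10, and 11 in practice:
# show the answer together with its reliability, hedge the wording
# when confidence is low, and always expose a short rationale.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # what the AI system inferred
    confidence: float   # model's estimated probability, 0.0-1.0
    rationale: str      # short explanation of why (guideline 11)

def render_prediction(p: Prediction, threshold: float = 0.7) -> str:
    """Format a prediction so the user sees both the answer and how
    reliable it is (guideline 2)."""
    if p.confidence >= threshold:
        lead = f"Suggested: {p.label}"
    else:
        # Gracefully degrade rather than asserting an uncertain answer
        # (guideline 10: scope services when in doubt).
        lead = f"Possibly: {p.label} (low confidence)"
    return f"{lead} ({p.confidence:.0%} confident). Why: {p.rationale}"

# Example use:
msg = render_prediction(
    Prediction("Meeting at 3pm", 0.92, "found in email subject"))
# e.g. "Suggested: Meeting at 3pm (92% confident). Why: found in email subject"
```

The key design choice is that confidence changes the phrasing, not just a number in the corner: a hedged verb ("Possibly") is harder for users to misread than a small percentage.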