Guidelines for Human-AI Interaction

Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of guidelines for human-AI interaction design.
1. Make clear what the system can do. Help the user understand what the AI system is capable of doing.
2. Make clear how well the system can do what it can do. Help the user understand how often the AI system may make mistakes.
3. Time services based on context. Time when to act or interrupt based on the user’s current task and environment.
4. Show contextually relevant information. Display information relevant to the user’s current task and environment.
5. Match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.
6. Mitigate social biases. Ensure the AI system’s language and behaviors do not reinforce undesirable and unfair stereotypes and biases.
7. Support efficient invocation. Make it easy to invoke or request the AI system’s services when needed.
8. Support efficient dismissal. Make it easy to dismiss or ignore undesired AI system services.
9. Support efficient correction. Make it easy to edit, refine, or recover when the AI system is wrong.
10. Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.
11. Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.
12. Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.
13. Learn from user behavior. Personalize the user’s experience by learning from their actions over time.
14. Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI system’s behaviors.
15. Encourage granular feedback. Enable the user to provide feedback indicating their preferences during regular interaction with the AI system.
16. Convey the consequences of user actions. Immediately update or convey how user actions will impact future behaviors of the AI system.
17. Provide global controls. Allow the user to globally customize what the AI system monitors and how it behaves.
18. Notify users about changes. Inform the user when the AI system adds or updates its capabilities.

Learning Data Science: Modelling Basics

Data Science is all about building good models, so let us start by building a very simple model: we want to predict monthly income from age (in a later post we will see that age is indeed a good predictor for income).
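A model this simple can be fit in a few lines. The sketch below uses made-up ages and incomes purely for illustration (the post will use its own dataset) and fits a straight line with NumPy:

```python
import numpy as np

# Synthetic example data (ages in years, monthly income) -- invented
# purely for illustration; the post works with its own dataset.
age = np.array([21, 46, 55, 35, 28], dtype=float)
income = np.array([1850, 2500, 2560, 2230, 1800], dtype=float)

# Fit a simple linear model: income ~ slope * age + intercept
slope, intercept = np.polyfit(age, income, deg=1)

def predict_income(a):
    """Predict monthly income from age using the fitted line."""
    return slope * a + intercept

print(round(predict_income(40)))
```

With data like this, the fitted slope is positive, so predicted income rises with age, which is the relationship the post sets out to examine.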

AI-Powered Project Management: The Pros and Cons

Artificial intelligence holds promise as a way to improve IT project management, but there are some hurdles to overcome. In the tech world, it’s common for the DevOps team to be responsible for the project management aspect of software development. DevOps workers rely heavily on analytics, so it’s not surprising that there is a push toward using artificial intelligence – which performs analyses of its own – in project management. This trend has both pros and cons – here are two of each.

Visualizing New York City WiFi Access with K-Means Clustering

Visualization has become a key application of data science in the telecommunications industry. Specifically, telecommunication analysis is highly dependent on the use of geospatial data.
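As a minimal sketch of the technique in the title, the snippet below clusters synthetic latitude/longitude points with scikit-learn's K-Means. The two "neighborhood" centers are invented coordinates, not the post's actual WiFi data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical hotspot coordinates: two synthetic blobs roughly around
# Manhattan and Brooklyn (illustrative values, not real WiFi data).
manhattan = rng.normal(loc=[40.78, -73.97], scale=0.01, size=(50, 2))
brooklyn = rng.normal(loc=[40.68, -73.95], scale=0.01, size=(50, 2))
points = np.vstack([manhattan, brooklyn])

# Cluster the geospatial points into 2 groups.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)
```

Plotting the points colored by `kmeans.labels_` on a map is then a one-liner with any geospatial plotting library.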

References on Econometrics and Machine Learning

In our series of posts on the history and foundations of econometric and machine learning models, many references were given. Here they are.

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

Probabilistic Foundations of Econometrics, part 4

This post is the fourth one of our series on the history and foundations of econometric and machine learning models. Part 3 is online here.

Intuitive Visualization of Outlier Detection Methods

Check out this visualization for outlier detection methods, and the Python project from which it comes, a toolkit for easily implementing outlier detection methods on your own.
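The toolkit itself is linked from the post; as a minimal stand-in, here is the classic z-score detector, one of the simplest outlier detection methods such toolkits implement:

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.abs(z) > threshold

# Tiny illustrative sample; with so few points the z-scores are compressed,
# so a looser threshold than the usual 3.0 is used here.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])
print(zscore_outliers(data, threshold=2.0))
```

Only the final point is flagged; the visualizations in the post compare how different detectors draw this kind of inlier/outlier boundary.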

The Most Intuitive and Easiest Guide for Recurrent Neural Network

Everything has its past, and sometimes the past defines us: the route we have walked and the choices we made along the way are carved into our history and tell us who we were and where we are heading. The same is true of data. A series can have a past, and that history can be used to predict what comes next. Models that read the past to predict the future in this way are called sequence models.
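The "memory of the past" idea can be sketched as a vanilla RNN forward pass. The dimensions and random weights below are arbitrary; the point is that the hidden state `h` carries earlier time steps forward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 3 input features, 4 hidden units (arbitrary choices).
W_xh = rng.normal(scale=0.1, size=(4, 3))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden (the "memory")
b_h = np.zeros(4)

def rnn_forward(sequence):
    """Run a vanilla RNN over a sequence; h carries the past forward."""
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h  # final state summarizes the whole history

sequence = rng.normal(size=(5, 3))  # 5 time steps
print(rnn_forward(sequence))
```

Because `h` is updated step by step, feeding the same values in a different order produces a different final state, which is exactly why these models suit ordered data.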

Generative Adversarial Networks (GANs) for Beginners

Generative adversarial networks (GANs) are a family of neural network models typically used to generate stimuli, such as pictures. The use of GANs challenges the doctrine that computers are not capable of being creative. It is still early days in the utility of GANs, but it is a very exciting area of research. Here, I review the essential components of a GAN and demonstrate an example GAN I used to generate images of distracted drivers. Specifically, I will be reviewing a Wasserstein GAN. For readers more interested in the theory behind a Wasserstein GAN, I refer you to the linked paper. All of the code corresponding to this post can be found on my GitHub.
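The Wasserstein GAN's defining change is its loss functions: the critic maximizes the gap between its scores on real and fake samples, and the generator pushes fake scores up. A minimal sketch with invented critic scores:

```python
import numpy as np

def critic_loss(critic_real, critic_fake):
    """WGAN critic loss: maximize D(real) - D(fake), i.e. minimize its negation."""
    return np.mean(critic_fake) - np.mean(critic_real)

def generator_loss(critic_fake):
    """WGAN generator loss: make the critic score fakes highly."""
    return -np.mean(critic_fake)

# Hypothetical critic scores on a batch of real and generated images.
real_scores = np.array([0.9, 1.1, 1.0])
fake_scores = np.array([-0.5, -0.3, -0.4])
print(critic_loss(real_scores, fake_scores))   # -1.4
print(generator_loss(fake_scores))             # 0.4
```

In a full training loop these scores come from the critic network, and the critic's weights are additionally constrained (clipped, in the original WGAN) to keep the loss a valid Wasserstein estimate.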

Almost Everything You Need to Know About Time Series

Whether we wish to predict the trend in financial markets or electricity consumption, time is an important factor that must now be considered in our models. For example, it would be interesting to know not only when a stock's price will move up, but also how much it will move up.
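One basic way to bring time into a model, sketched here on an invented price series, is to turn the series into supervised pairs of lagged values and targets:

```python
import numpy as np

def make_lag_features(series, n_lags):
    """Turn a 1-D series into (X, y) pairs: each row of X holds the
    n_lags previous values, and y holds the value to predict."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack(
        [series[i:len(series) - n_lags + i] for i in range(n_lags)]
    )
    y = series[n_lags:]
    return X, y

prices = [10, 11, 13, 12, 14, 15]  # hypothetical daily closing prices
X, y = make_lag_features(prices, n_lags=2)
print(X)
print(y)
```

Any regressor can then be trained on `(X, y)`; dedicated time-series models such as ARIMA build richer structure on the same idea.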

Review: LapSRN & MS-LapSRN – Laplacian Pyramid Super-Resolution Network (Super Resolution)

In this story, LapSRN (Laplacian Pyramid Super-Resolution Network) and MS-LapSRN (Multi-Scale Laplacian Pyramid Super-Resolution Network) are reviewed. By progressively reconstructing the sub-band residuals with Charbonnier loss functions, LapSRN outperforms SRCNN, FSRCNN, VDSR, and DRCN. With parameter sharing, local residual learning (LRL), and multi-scale training, MS-LapSRN even outperforms DRRN. LapSRN and MS-LapSRN were published in 2017 CVPR with more than 200 citations and in 2018 TPAMI with tens of citations, respectively. (SH Tsang @ Medium)
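The Charbonnier loss mentioned above is a smooth variant of L1 that LapSRN applies to the residuals at each pyramid level; the example values below are invented:

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier penalty: mean of sqrt(diff^2 + eps^2), a differentiable
    approximation of the L1 loss used by LapSRN on sub-band residuals."""
    diff = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    return np.mean(np.sqrt(diff ** 2 + eps ** 2))

# Hypothetical predicted vs. ground-truth pixel values.
pred = np.array([0.2, 0.5, 0.9])
target = np.array([0.25, 0.5, 0.8])
print(charbonnier_loss(pred, target))
```

Unlike plain L2, this penalty does not over-weight large residuals, which the papers credit for sharper reconstructions.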

TensorFlow – The Scope of Software Engineering

How to structure your TensorFlow graph like a software engineer

The complete guide to start your Data Science/AI journey.

AI & Data Science are already skills that everyone should develop. This guide is a fast tutorial on how to start this technical journey: platforms to learn and apply skills, installation on your computer, and useful software. At the end of this article you will be able to run your first machine learning program! This guide is made for non-experts, so no worries! 🙂 You are all set now to start your Data Science & Artificial Intelligence journey. Welcome aboard!

AI Accelerator Products

In this blog we present in detail various Intel software and hardware AI products. These products are available on the market and accelerate deep learning inference in production environments on a variety of hardware.

Unsupervised learning for anomaly detection in stock options pricing

This post is part of a broader work on predicting stock prices. The outcome (the identified anomalies) is a feature (input) in an LSTM model (within a GAN architecture) – link to the post.

Time Series Analysis Tutorial Using Financial Data

For my 2nd project at Metis I created a model that predicted the price of the CBOE volatility index (VIX) using a time series analysis. The VIX is a composite of option prices of popular stocks that indicate how much volatility is in the overall market. I also added in the Federal Prime Rate as an exogenous data source to see if that improved predictions. This was a great opportunity to dive in and learn how to work with time series data. See my project presentation here.
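As a rough sketch of the idea of an autoregressive model with an exogenous input, the snippet below fits an AR(1) term plus an external regressor by least squares on synthetic stand-in series (not the actual VIX or Prime Rate data; the post itself would use a full time-series library):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the target series and an exogenous rate series.
n = 200
rate = rng.normal(loc=5.0, scale=0.2, size=n)
vix = np.empty(n)
vix[0] = 15.0
for t in range(1, n):
    vix[t] = 2.0 + 0.8 * vix[t - 1] + 0.5 * rate[t] + rng.normal(scale=0.3)

# AR(1) with exogenous regressor: vix[t] ~ c + a*vix[t-1] + b*rate[t]
X = np.column_stack([np.ones(n - 1), vix[:-1], rate[1:]])
coef, *_ = np.linalg.lstsq(X, vix[1:], rcond=None)
c, a, b = coef
print(a, b)  # recovers roughly the 0.8 and 0.5 used to generate the data
```

The fitted `b` measures how much the exogenous series improves the one-step prediction, which is exactly the question the project asks of the Federal Prime Rate.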

A Beginner’s Tutorial on Building an AI Image Classifier using PyTorch

This is a step-by-step guide to building an image classifier. The AI model will learn to label images. I use Python and PyTorch.
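The core PyTorch pieces such a tutorial assembles can be sketched as follows; the tiny architecture, dummy batch, and hyperparameters here are placeholders, not the tutorial's actual model or dataset:

```python
import torch
from torch import nn

# A deliberately small classifier: the guide's real architecture will differ.
class SmallClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = SmallClassifier()
images = torch.randn(4, 3, 32, 32)   # dummy batch of 4 RGB images
labels = torch.tensor([0, 1, 2, 3])  # dummy class labels

# One training step: forward pass, cross-entropy loss, backward, update.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```

A full tutorial wraps this step in a loop over a `DataLoader` of real labeled images and tracks validation accuracy.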

Comparing Different Classification Machine Learning Models for an imbalanced dataset

A data set is called imbalanced if it contains many more samples from one class than from the rest: at least one class (the minority class) is represented by only a small number of training examples, while other classes make up the majority. In this scenario, classifiers can have good accuracy on the majority class but very poor accuracy on the minority class(es) due to the influence of the larger majority class. A common example of such a dataset is credit card fraud detection, where data points with fraud = 1 are usually far fewer than those with fraud = 0.
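One standard remedy the comparison is likely to touch on is class weighting; the sketch below contrasts a plain and a class-weighted logistic regression on synthetic imbalanced data (invented for illustration, not real fraud data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic imbalanced data: 950 "legitimate" points vs 50 "fraud" points,
# with overlapping distributions.
majority = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
minority = rng.normal(loc=1.5, scale=1.0, size=(50, 2))
X = np.vstack([majority, minority])
y = np.array([0] * 950 + [1] * 50)

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

# Recall on the minority class is what fraud detection cares about.
print(recall_score(y, plain.predict(X)))
print(recall_score(y, balanced.predict(X)))
```

The weighted model trades some majority-class accuracy for much better minority-class recall, which is why plain accuracy is a misleading metric on data like this.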