Python Libraries for Interpretable Machine Learning

4 libraries for better visualisation, explanation and interpretation of models:
• yellowbrick
• ELI5
• LIME
• MLxtend
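
As a quick taste, here is a minimal sketch (my own illustration, not taken from the article) showing one of the listed libraries, LIME, explaining a single prediction; the dataset and classifier are placeholder choices.

    # Minimal sketch: explain one random-forest prediction with LIME.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    X, y = data.data, data.target
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Build a tabular explainer from the training data statistics.
    explainer = LimeTabularExplainer(
        X,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # Explain why the model predicted a class for one sample.
    exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
    print(exp.as_list())  # (feature condition, weight) pairs for this prediction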


Sentiment Analysis is difficult, but AI may have an answer.

Recently, while shopping on Amazon for a laptop bag, I stumbled upon a pretty amusing customer review: ‘This is the best laptop bag ever. It is so good that within two months of use, it is worthy of being used as a grocery bag.’ The sarcasm in the review is evident, as the user isn’t happy with the quality of the bag. However, because the sentence contains words like ‘best’, ‘good’ and ‘worthy’, the review can easily be mistaken for a positive one. Such humorous albeit cryptic reviews commonly go viral on social media. If such responses are not detected and acted upon, they can damage a company’s reputation, especially ahead of a new launch. Detecting sarcasm in reviews is an important use case of Natural Language Processing, and we shall see how Machine Learning can be of help in this regard.
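
As a minimal baseline sketch (my illustration, not the article's model), sarcasm detection can be framed as binary text classification; the tiny example texts below are made up:

    # Baseline: TF-IDF features + logistic regression for sarcasm detection.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "This is the best laptop bag ever, now it doubles as a grocery bag.",
        "Great bag, sturdy zippers, fits my 15-inch laptop perfectly.",
        "Oh wonderful, the strap snapped on day three. Money well spent.",
        "Comfortable to carry and the padding protects the laptop well.",
    ]
    labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = sincere (toy labels)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["Amazing quality, it fell apart within a week."]))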


Feature selection using Python for classification problem

Including more features makes the model more complex, and it may end up overfitting the data. Some features may simply be noise and can harm the model. By removing those unimportant features, the model may generalize better. The scikit-learn website lists different feature selection methods, and this article is mainly based on the topics covered there. However, I have collected additional resources on the theory behind those methods and added them to this article. In addition, I applied the different feature selection methods to the same data set to compare their performance. After reading this article and the resources it links to, you should understand the theory behind feature selection as well as how to do it using Python.
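
As a flavour of what the article covers, here is a brief sketch (my own, with an arbitrary dataset) of two scikit-learn feature selection methods, univariate SelectKBest and recursive feature elimination:

    # Two feature selection methods from scikit-learn on the same data set.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif, RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)

    # Univariate selection: keep the 10 features with the highest ANOVA F-score.
    kbest = SelectKBest(score_func=f_classif, k=10).fit(X, y)
    print("SelectKBest keeps features:", kbest.get_support(indices=True))

    # Recursive feature elimination: repeatedly drop the weakest feature
    # according to a fitted estimator until 10 remain.
    rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)
    print("RFE keeps features:", rfe.get_support(indices=True))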


Balancing Act in Datasets of a Machine Learning Algorithm

When dealing with imbalanced classes, we may need to do some extra work and planning to make sure that our algorithms give us useful results. In this blog, I examine just two classification techniques to illustrate the issue, but you should know that the problem generalizes. For good reason, supervised classification algorithms – which use labeled data – take class distributions into account. However, when we’re trying to detect classes that are important, but rare compared to the alternatives, it can be difficult to develop a model that catches them. Here, after diving into the problem with some examples, I outline a few of the tried and true techniques for solving it.
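
For orientation, here is a minimal sketch (illustrative only, not the blog's code) of two common remedies, class weighting and minority oversampling, on a synthetic imbalanced dataset:

    # Handling class imbalance: class weights and (optionally) oversampling.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Synthetic data where only ~5% of samples belong to the positive class.
    X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Remedy 1: penalise mistakes on the rare class more heavily.
    clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))

    # Remedy 2 (requires the imbalanced-learn package): oversample the minority class.
    # from imblearn.over_sampling import SMOTE
    # X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)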


BERT-based Cross-Lingual Question Answering with DeepPavlov

DeepPavlov is a conversational artificial intelligence framework that contains all the components required for building chatbots. DeepPavlov is developed on top of the open-source machine learning frameworks TensorFlow and Keras. It is free and easy to use. This article describes how to use the BERT-based question answering models of DeepPavlov.
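
The pattern below follows the DeepPavlov documentation as I recall it; the exact config name (configs.squad.squad_bert) is an assumption and may differ between library versions, so treat this as a sketch rather than the article's exact code.

    # Sketch: load a pretrained BERT-based QA model and ask it a question.
    from deeppavlov import build_model, configs

    # download=True fetches the pretrained model on first use.
    model = build_model(configs.squad.squad_bert, download=True)

    context = ["DeepPavlov is a conversational AI framework built on TensorFlow and Keras."]
    question = ["What is DeepPavlov built on?"]

    # Returns the extracted answer span, its position, and a confidence score.
    print(model(context, question))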


Experimentation in Data Science

Today I am going to talk about experimentation in data science, why it is so important, and some of the techniques we might consider using when A/B testing is not appropriate. Experiments are designed to identify causal relationships between variables, and this is a really important concept in many fields and particularly relevant for data scientists today. Let’s say we are a data scientist working in a product team. In all likelihood, a large part of our role will be to identify whether new features will have a positive impact on the metrics we care about. For example, if we introduce a new feature that makes it easier for users to recommend our app to their friends, will this improve user growth? These are the types of questions that product teams are interested in, and experiments can help provide an answer. However, causality is rarely easy to identify, and there are many situations where we will need to think a bit more deeply about the design of our experiments so that we do not make incorrect inferences. When this is the case, we can often use techniques taken from econometrics, and I will discuss some of these below. Hopefully, by the end, you will have a better understanding of when these techniques apply and how to use them effectively.
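
For concreteness, a toy sketch (numbers and code are my own illustration, not the post's) of the baseline randomized A/B test and of one econometric fallback, difference-in-differences, for when randomization is not possible:

    # Baseline A/B analysis and a simple difference-in-differences estimate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Randomized experiment: compare a per-user metric between control and treatment.
    control = rng.normal(loc=1.00, scale=0.5, size=2000)
    treatment = rng.normal(loc=1.05, scale=0.5, size=2000)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"A/B lift = {treatment.mean() - control.mean():.3f}, p = {p_value:.4f}")

    # No randomization: compare the change in a treated group against the change
    # in a comparable untreated group over the same period.
    treated_pre, treated_post = 1.00, 1.20
    control_pre, control_post = 0.90, 0.95
    did = (treated_post - treated_pre) - (control_post - control_pre)
    print(f"difference-in-differences estimate: {did:.2f}")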


The Complete Guide to Time Series Analysis and Forecasting

Whether we wish to predict the trend in financial markets or electricity consumption, time is an important factor that must now be considered in our models. For example, it would be interesting to forecast at what hour of the day electricity consumption will peak, so that we can adjust the price or the production of electricity accordingly.
Enter time series. A time series is simply a series of data points ordered in time. In a time series, time is often the independent variable and the goal is usually to make a forecast for the future.
However, there are other aspects that come into play when dealing with time series.
• Is it stationary?
• Is there a seasonality?
• Is the target variable autocorrelated?
In this post, I will introduce different characteristics of time series and how we can model them to obtain forecasts that are as accurate as possible.
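
As a small illustration (my own sketch, not the guide's code), the first and third questions above can be checked with statsmodels; the synthetic hourly series below is a stand-in:

    # Check stationarity (ADF test) and autocorrelation (ACF) on a toy series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller, acf

    # Hypothetical hourly electricity consumption with a daily (24-hour) cycle.
    rng = np.random.default_rng(0)
    hours = np.arange(24 * 60)
    series = pd.Series(10 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size))

    adf_stat, p_value = adfuller(series)[:2]
    print(f"ADF p-value: {p_value:.4f}  (a small p-value suggests stationarity)")

    # A spike in the ACF at lag 24 would point to daily seasonality.
    print("ACF at lags 1 and 24:", acf(series, nlags=24)[[1, 24]])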


A Brief Intro to Studying Algorithms

In this post, I will (1) cover the basic reasoning behind studying algorithms, (2) introduce Big O notation, and (3) do a brief analysis of Karatsuba’s algorithm.
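
For reference, here is a compact Python version of Karatsuba's algorithm, the kind of implementation the analysis in the post refers to (this sketch is mine and assumes non-negative integers):

    # Karatsuba's divide-and-conquer multiplication: three recursive products of
    # roughly half-length numbers instead of four, giving O(n^1.585) vs O(n^2).
    def karatsuba(x: int, y: int) -> int:
        # Base case: small numbers are multiplied directly.
        if x < 10 or y < 10:
            return x * y

        # Split both numbers around half of the longer one's digit count.
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** m)
        high_y, low_y = divmod(y, 10 ** m)

        # Three recursive multiplications instead of four.
        z0 = karatsuba(low_x, low_y)
        z2 = karatsuba(high_x, high_y)
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2

        return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

    assert karatsuba(1234, 5678) == 1234 * 5678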


Self Organizing Maps

Self Organizing Maps, or Kohonen maps, are a type of artificial neural network introduced by Teuvo Kohonen in the 1980s. (Paper link) A SOM is trained using unsupervised learning and is a little different from other artificial neural networks: instead of learning by backpropagation with SGD, it uses competitive learning to adjust the weights of its neurons. We use this type of network for dimensionality reduction, reducing our data by creating a spatially organized representation; it also helps us discover correlations in the data.
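
The following from-scratch sketch (a deliberate simplification of Kohonen's algorithm, written for this digest) shows the competitive-learning update described above: find the best matching unit for each sample and pull it, and its grid neighbours, towards the sample.

    # Minimal SOM training loop on toy 3-dimensional data (e.g. RGB colours).
    import numpy as np

    rng = np.random.default_rng(0)
    grid_w, grid_h, dim = 10, 10, 3           # 10x10 map of 3-dimensional weight vectors
    weights = rng.random((grid_w, grid_h, dim))
    data = rng.random((500, dim))

    coords = np.dstack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"))

    n_steps, lr0, sigma0 = 2000, 0.5, 3.0
    for t in range(n_steps):
        x = data[rng.integers(len(data))]
        # Competition: the best matching unit (BMU) is the closest weight vector.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (grid_w, grid_h))
        # Cooperation: neighbours of the BMU on the grid are updated too, with
        # influence decaying with grid distance and with time.
        lr = lr0 * np.exp(-t / n_steps)
        sigma = sigma0 * np.exp(-t / n_steps)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
        weights += lr * influence * (x - weights)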


A Comprehensive Introduction to Working with Databases using R

In a previous post, we had briefly looked at connecting to databases from R and using dplyr for querying data. In this new expanded post, we will focus on the following:
• connect to & explore database
• read & write data
• use RStudio SQL script & knitr SQL engine
• query data using dplyr
• visualize data with dbplot
• model data with modeldb & tidypredict
• explore RStudio connections pane
• handle credentials


Shapiro-Wilk Test for Normality in R

I think the Shapiro-Wilk test is a great way to see if a variable is normally distributed. Normality is an important assumption when creating, and also when evaluating, many kinds of models. Let’s look at how to do this in R!


What are Progressive Neural Networks?

Life is a journey through learning experiences. We are continuously learning new tasks and acquiring new knowledge, and we have a magical, only partly understood ability to leverage previous experiences to optimize how we build new knowledge. Learning never stops, and it shapes us as intellectual and social beings. Could we recreate this continuity of learning in artificial intelligence (AI) models?


Knowing Your Neighbours: Machine Learning on Graphs

Connected data can be represented using a graph, a data structure regularly used in Computer Science. In this article, we provide an introduction to the various types of connected data, what they represent, and the challenges we can solve with them. We also introduce graph convolutional networks (GCNs) and, using GCNs as an example, explain how modern machine learning methods can build predictive models from connected data.
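
As a concrete reference point (my own sketch, not the article's code), a single GCN layer can be written in a few lines of numpy, following the propagation rule popularised by Kipf & Welling: each node's new features are a learned transform of a normalised average over itself and its neighbours.

    # One GCN layer: relu(D^-1/2 (A + I) D^-1/2  H  W)
    import numpy as np

    def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
        A_hat = A + np.eye(A.shape[0])                  # add self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

    # Tiny toy graph: 4 nodes in a path, 2 input features, 3 hidden units.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = np.random.default_rng(0).random((4, 2))
    W = np.random.default_rng(1).random((2, 3))

    print(gcn_layer(A, H, W).shape)  # (4, 3): new 3-dimensional features per node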


Detecting Bias with SHAP

StackOverflow’s annual developer survey concluded earlier this year, and they have graciously published the (anonymized) 2019 results for analysis. They’re a rich view into the experience of software developers around the world – what’s their favorite editor? how many years of experience? tabs or spaces? and crucially, salary. Software engineers’ salaries are good, and sometimes both eye-watering and news-worthy. The tech industry is also painfully aware that it does not always live up to its purported meritocratic ideals. Pay isn’t a pure function of merit, and story after story tells us that factors like name-brand school, age, race, and gender have an effect on outcomes like salary. Can machine learning do more than predict things? Can it explain salaries and so highlight cases where these factors might be undesirably causing pay differences? This example will sketch how standard models can be augmented with SHAP (SHapley Additive exPlanations) to detect individual instances whose predictions may be concerning, and then dig deeper into the specific reasons the data leads to those predictions.
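
As a rough sketch of the workflow (my own illustration on a stand-in dataset; the post itself analyses the survey data), a tree model is fit and SHAP's TreeExplainer assigns each feature a contribution to each individual prediction:

    # Augment a tree model with SHAP to inspect per-instance contributions.
    import shap
    import xgboost
    from sklearn.datasets import fetch_california_housing

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

    # TreeExplainer computes exact Shapley values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])   # explain a sample of rows

    # For one individual row: which features pushed its prediction up or down?
    print(dict(zip(X.columns, shap_values[0].round(3))))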


Video Understanding Using Temporal Cycle-Consistency Learning

In the last few years there has been great progress in the field of video understanding. For example, supervised learning and powerful deep learning models can be used to classify a number of possible actions in videos, summarizing the entire clip with a single label. However, there exist many scenarios in which we need more than just one label for the entire clip. For example, if a robot is pouring water into a cup, simply recognizing the action of ‘pouring a liquid’ is insufficient to predict when the water will overflow. For that, it is necessary to track frame-by-frame the amount of water in the cup as it is being filled. Similarly, a baseball coach who is comparing stances of pitchers may want to retrieve video frames from the precise moment that the ball leaves the pitchers’ hands. Such applications require models to understand each frame of a video. However, applying supervised learning to understand each individual frame in a video is expensive, since per-frame labels in videos of the action of interest are needed. This requires that annotators apply fine-grained labels to videos by manually adding unambiguous labels to every frame in each video. Only then can the model be trained, and only on a single action. Training on new actions requires the process to be repeated. With the increasing demand for fine-grained labeling, necessary for applications ranging from robotics to sports analytics, the need for scalable learning algorithms that can understand videos without the tedious labeling process becomes increasingly pertinent. We propose a potential solution using a self-supervised learning method called Temporal Cycle-Consistency Learning (TCC). This novel approach uses correspondences between examples of similar sequential processes to learn representations particularly well-suited for fine-grained temporal understanding of videos. We are also releasing our TCC codebase to enable end-users to apply our self-supervised learning algorithm to new applications.
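
To make the core idea concrete, here is a heavily simplified sketch of cycle-consistency (my own toy version; the actual TCC method uses a differentiable soft nearest neighbour and trains the embeddings end-to-end): a frame embedding from video A is matched to its nearest neighbour in video B, and that match is sent back to A; if it lands on the original frame, the pair is cycle-consistent.

    # Hard cycle-consistency check between two sequences of frame embeddings.
    import numpy as np

    def cycle_consistent(emb_a: np.ndarray, emb_b: np.ndarray, i: int) -> bool:
        """emb_a, emb_b: (num_frames, dim) per-frame embeddings of two videos."""
        j = np.argmin(((emb_b - emb_a[i]) ** 2).sum(axis=1))   # A_i -> nearest B_j
        k = np.argmin(((emb_a - emb_b[j]) ** 2).sum(axis=1))   # B_j -> nearest A_k
        return k == i

    # Toy embeddings of two 20-frame videos of the same process.
    rng = np.random.default_rng(0)
    emb_a = np.cumsum(rng.normal(size=(20, 8)), axis=0)
    emb_b = emb_a + rng.normal(scale=0.1, size=(20, 8))

    print(sum(cycle_consistent(emb_a, emb_b, i) for i in range(20)), "of 20 frames cycle back")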


Stochastic Processes Analysis

An introduction to Stochastic processes and how they are applied every day in Data Science and Machine Learning.