The following article consists of three parts:
1. The concept of classification in machine learning
2. The concept & explanation of Logistic Regression
3. A practical example of Logistic Regression on the Titanic dataset
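As a preview of part 3, here is a minimal sketch of that workflow with scikit-learn. The tiny in-memory table below is an invented stand-in for the real Kaggle CSV (the column names match the Kaggle dataset, but the rows are made up):

```python
# Minimal logistic-regression sketch on Titanic-style data.
# The rows below are invented; column names follow the Kaggle dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "Pclass":   [1, 3, 3, 1, 2, 3, 1, 2],
    "Sex":      ["female", "male", "male", "female", "male", "female", "male", "female"],
    "Age":      [29, 22, 35, 40, 27, 19, 54, 33],
    "Survived": [1, 0, 0, 1, 0, 1, 0, 1],
})

# Encode the categorical Sex column as 0/1 and pick a small feature set
df["Sex"] = (df["Sex"] == "female").astype(int)
X, y = df[["Pclass", "Sex", "Age"]], df["Survived"]

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:1]))  # class probabilities for the first passenger
```

On the real data you would load the CSV, handle missing ages, and evaluate on a held-out split; the shape of the code stays the same.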
Imagine buying those smart glasses everyone has been talking about. At first you were hesitant, but today you decided it was time to finally try them out yourself. You go to the supermarket and start filling your shopping cart with all sorts of things to get through the remainder of the day. While you do so, tiny cameras inside the glasses register your surroundings and see exactly which items you are buying. At the exit, you take the ‘special glasses’ lane, just like you did when you walked in. There is no scanning process, no queues, and you basically just walk out feeling like a cat burglar. By the time you leave, the glasses have already sent the contents of the shopping cart to the supermarket’s server and settled the bill with the credit card you linked to them. After putting everything in your car, you hop in and head home. While you drive, the glasses ping you to say you are running low on fuel. You stop by a gas station, and once you have refueled, your glasses again take care of the payment.
A look at why graphs improve predictions and how to create a workflow to use them with existing machine learning tasks. We’re passionate about the importance of understanding the connections between things. That’s why we work on graph technologies, which help people make use of those connections. It’s also why we wrote the O’Reilly book Graph Algorithms: Practical Examples in Apache Spark and Neo4j. Simply put, a graph is a mathematical representation of any type of network. The objects that make up a graph are called nodes (or vertices) and the links between them are called relationships (or edges). Graph algorithms are specifically built to operate on relationships, and they are uniquely capable of finding structures and revealing patterns in connected data. Graph analytics differ from conventional statistical analysis by focusing on, and calculating metrics from, the relationships between things.
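To make the nodes-and-relationships vocabulary concrete, here is a tiny sketch using nothing beyond the Python standard library (the book’s own examples use Apache Spark and Neo4j; the people and connections below are invented):

```python
# A graph as nodes and relationships, plus degree centrality:
# one of the simplest graph metrics, computed from relationships alone.
from collections import defaultdict

edges = [("Alice", "Bob"), ("Bob", "Carol"), ("Alice", "Carol"), ("Carol", "Dave")]

# Adjacency list: each node maps to the set of nodes it is connected to
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: importance measured purely by number of relationships,
# not by any attribute of the node itself
degree = {node: len(neighbours) for node, neighbours in graph.items()}
print(degree)  # Carol is the most connected node
```

Note that nothing about the nodes themselves enters the calculation, which is exactly the point the paragraph above makes about graph analytics.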
Mathematics is ubiquitous nowadays. But why is financing pure mathematical research necessary for any economy that wants to be innovative? Here is my down-to-earth explanation.
What do we Deep Learning practitioners do once we are done with training our models?
In this article I will show you how to conduct a textual analysis to improve the accuracy of your model and discover several facts about your data. For that I will use a dataset available on Kaggle. This dataset is composed of comments from Wikipedia’s talk-page edits. It comes with the identifiers of the users who published each comment, but we have no information about the commented page or the users’ personal data. Nevertheless, we have six different labels (toxic, severe_toxic, obscene, threat, insult, identity_hate) that were assigned by hand.
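Since each comment can carry several of those six labels at once, this is a multi-label problem. A minimal sketch of such a setup with scikit-learn, using invented comments and only three of the labels for brevity:

```python
# Multi-label text classification sketch: one binary classifier per label
# via OneVsRestClassifier. Comments and labels below are invented stand-ins
# for the Kaggle data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

comments = [
    "you are an idiot",
    "thanks for the helpful edit",
    "i will find you",
    "great article, well sourced",
]
# One binary column per label: toxic, threat, insult
y = np.array([
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
])

X = TfidfVectorizer().fit_transform(comments)
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)
print(clf.predict(X).shape)  # one 0/1 prediction per comment per label
```

The same shape of code scales to the full dataset with all six label columns.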
Missing Data Imputation Techniques. Missing data is an everyday problem that data professionals need to deal with. Though many articles, blog posts and videos on the topic are already available, I found it difficult to find concise, consolidated information in a single place. That’s why I am putting my effort in here, hoping it will be useful to any data practitioner or enthusiast. What is missing data? Missing data are values that are not available but that would be meaningful if observed. Missing data can be anything from a missing sequence, an incomplete feature, missing files, incomplete information or a data-entry error. Most datasets in the real world contain missing data. Before you can use data with missing fields, you need to transform those fields so they can be used for analysis and modeling. Like many other aspects of data science, this too may actually be more art than science. Understanding the data and the domain it comes from is very important.
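Two of the simplest transformations mentioned above are mean and median imputation. A minimal sketch on a toy table (the column names and values are invented for illustration):

```python
# Mean and median imputation with scikit-learn's SimpleImputer.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "age":    [25, np.nan, 31, 45, np.nan],
    "income": [40_000, 52_000, np.nan, 61_000, 48_000],
})

# Mean imputation: replace each missing value with its column mean
mean_imputed = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(df), columns=df.columns
)

# Median imputation is more robust when a column contains outliers
median_imputed = pd.DataFrame(
    SimpleImputer(strategy="median").fit_transform(df), columns=df.columns
)

print(mean_imputed.isna().sum().sum())  # 0: no missing values remain
```

Which strategy is appropriate depends on the distribution and on why the data is missing, which is where the "art" part comes in.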
A new version of the snahelper package is now available on CRAN. If you do not know the package: so far, it included one RStudio addin that provided a GUI to analyze and visualize networks. Check out the introductory post for more details. This major update includes two more addins that further facilitate working with network data in R.
Generative text models using N-grams and deep learning to create a quote-generator app. Language models are widely used these days: next-word prediction in emails, WhatsApp texts and automated chatbots are all based on them. A generative model takes some text as input, learns the vocabulary and sentence structure, and creates new text. The book Deep Learning with Python (Francois Chollet) is a great resource for understanding how LSTMs and neural networks learn text. Here’s another great guide to understanding N-gram language models. I used generative models to create a QuoteBot that produces character-specific quotes from the Harry Potter universe!
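The N-gram half of that idea fits in a few lines of plain Python. A toy bigram (2-gram) generator, trained here on two short Dumbledore lines rather than the full books:

```python
# Toy bigram text generator: learn which word follows which, then walk
# the chain. The "corpus" is just two quotes; QuoteBot uses far more text.
import random
from collections import defaultdict

corpus = (
    "it does not do to dwell on dreams and forget to live "
    "it is our choices that show what we truly are"
).split()

# For each word, record every word observed to follow it
follows = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur].append(nxt)

def generate(seed, length=8, rng=None):
    rng = rng or random.Random(0)
    words = [seed]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:          # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("it"))
```

An LSTM model replaces the lookup table with a learned probability distribution over the vocabulary, but the generation loop is conceptually the same.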
How to predict the intent behind a customer query. Seq2Seq models explained. Slot filling demonstrated on the ATIS dataset with Keras. Natural Language Understanding (NLU), the technology behind conversational AI (chatbots, virtual assistants, augmented analytics), typically includes the intent-classification and slot-filling tasks, which aim to produce a semantic representation of user utterances. Intent classification predicts the intent of the query, while slot filling extracts the semantic concepts it contains. For example, the user query could be ‘Find me an action movie by Steven Spielberg’. The intent here is ‘find_movie’, while the slots are ‘genre’ with value ‘action’ and ‘directed_by’ with value ‘Steven Spielberg’.
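Slot filling is usually framed as sequence labeling: one tag per token, commonly in the BIO scheme (B-egin, I-nside, O-utside a slot). The sketch below just shows that target representation for the example query above, with a small helper to collect tagged spans back into slots:

```python
# BIO-tagged version of 'Find me an action movie by Steven Spielberg'
# and a helper that turns the tag sequence back into {slot: value} pairs.
tokens = ["Find", "me", "an", "action",  "movie", "by", "Steven",        "Spielberg"]
tags   = ["O",    "O",  "O",  "B-genre", "O",     "O",  "B-directed_by", "I-directed_by"]

def extract_slots(tokens, tags):
    """Collect B-/I- tagged spans into {slot_name: value} pairs."""
    slots, name, value = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if name:
                slots[name] = " ".join(value)
            name, value = tag[2:], [tok]
        elif tag.startswith("I-") and name:
            value.append(tok)
        else:
            if name:
                slots[name] = " ".join(value)
            name, value = None, []
    if name:
        slots[name] = " ".join(value)
    return slots

print(extract_slots(tokens, tags))
# {'genre': 'action', 'directed_by': 'Steven Spielberg'}
```

A trained Keras model's job is to predict the `tags` row from the `tokens` row; the decoding step above stays the same.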
This is Part 2 of a series on fine-grained sentiment analysis in Python. Part 1 covered how to train and evaluate various fine-grained sentiment classifiers. In this post, we’ll discuss why a classifier made a specific class prediction, that is, how to explain a sentiment classifier’s results using a popular method called LIME.
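LIME's core idea fits in a short numpy sketch: perturb the input by masking words, query the model on each perturbation, and fit a local linear surrogate whose coefficients rank word importance. The "classifier" below is a toy word-count scorer rather than a trained sentiment model, and LIME's proximity-weighting kernel is omitted for brevity:

```python
# LIME in miniature: a local linear surrogate fitted on word-masking
# perturbations of one sentence. Toy scorer, not a real classifier.
import numpy as np

words = "the plot was dull but the acting was brilliant".split()
positive = {"brilliant", "great"}
negative = {"dull", "boring"}

def score(active_words):
    """Toy sentiment score: +1 per positive word, -1 per negative word."""
    return sum((w in positive) - (w in negative) for w in active_words)

rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(200, len(words)))   # 1 = keep the word
ys = np.array([score([w for w, m in zip(words, row) if m]) for row in masks])

# Least-squares surrogate: each coefficient approximates one word's
# local effect on the prediction
coef, *_ = np.linalg.lstsq(masks, ys, rcond=None)
top = max(range(len(words)), key=lambda i: coef[i])
print(words[top])  # the word pushing the score up the most
```

The real LIME package wraps exactly this loop around an arbitrary black-box `predict_proba` function.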
In the last few years, I’ve noticed a lot of C-level executives use AI in their keynote speeches and television appearances. They boast about their companies breaking new ground, advancing technologically. They talk about simplifying processes for themselves and their customers. This is not limited to C-level executives either. A lot of managers are doing it, too. Perhaps it’s all driven by a need to abstract concepts in order to facilitate communication with upper management. It almost seems as if AI is the new buzzword, even though it has been around for quite some time now. Computer scientists have a better idea: don’t call automation AI! The two are very different and don’t fall under the same category. Automation and AI belong to separate niches altogether.
In the previous post, I wrote about the fundamentals of two commonly used dimensionality-reduction approaches, singular value decomposition (SVD) and principal component analysis (PCA), and explained their relationship using numpy. To quickly recap: the singular values (S) of a 0-centered matrix X (n samples × m features) equal the square roots of the eigenvalues (λ) of XᵀX, making it possible to compute PCA using SVD, which is often more efficient. In this post, I will explore and benchmark a few SVD algorithms and implementations in Python for square dense matrices (X, n×n). I also had a very exciting finding: the recently developed JAX library, powered by the XLA compiler, is able to significantly accelerate SVD computation. Let’s get to it!
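That recap can be checked numerically in a few lines (random data, standard numpy calls):

```python
# For a column-centered X, the singular values of X equal the square
# roots of the eigenvalues of X^T X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)                       # 0-center each feature

s = np.linalg.svd(X, compute_uv=False)    # singular values, descending
lam = np.linalg.eigvalsh(X.T @ X)[::-1]   # eigenvalues, descending

print(np.allclose(s, np.sqrt(lam)))       # True
```

This identity is why a single SVD of the centered data matrix gives you the PCA components without ever forming the covariance matrix.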
The intuition behind Singular Value Decomposition: a tool closely related to Principal Component Analysis, in plain English and without maths. In a previous article introducing Recommendation Systems, we saw that the tool has evolved enormously in recent years, emerging as a way of keeping a website’s or application’s audience engaged and using its services. Usually, Recommendation Systems use our previous activity to make specific recommendations for us (this is known as Content-based Filtering). Now, if we’re visiting an e-commerce site for the first time, it won’t know anything about us, so how can it make us a recommendation? The most basic solution would be recommending the best-selling products, the latest releases, the classics (if we are talking about movies or books), or even the products that would give the maximum profit to the business. However, this approach has gone out of fashion since many e-commerce sites started to use Recommendation Systems based on Collaborative Filtering. Let’s remember how it works: suppose I like the books ‘The Blind Assassin’ and ‘A Gentleman in Moscow’. My friend Matias likes ‘The Blind Assassin’ and ‘A Gentleman in Moscow’ as well, but also ‘Where the Crawdads Sing’. It seems we both have the same interests, so you could probably affirm I’d like ‘Where the Crawdads Sing’ too, even though I haven’t read it.
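That reasoning is easy to mechanize. A minimal user-based collaborative-filtering sketch using the same three books; the ratings and the third user are invented for illustration:

```python
# User-based collaborative filtering: find the most similar other user
# (cosine similarity on the books I have rated) and recommend their
# favourite among the books I have not read.
import numpy as np

books = ["The Blind Assassin", "A Gentleman in Moscow", "Where the Crawdads Sing"]
ratings = {
    "me":     np.array([5.0, 5.0, 0.0]),   # 0 = not read
    "Matias": np.array([5.0, 4.0, 5.0]),
    "Ana":    np.array([1.0, 2.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

me = ratings["me"]
rated = me > 0                            # compare only books I have rated
best = max((u for u in ratings if u != "me"),
           key=lambda u: cosine(me[rated], ratings[u][rated]))

unread = [i for i, r in enumerate(me) if r == 0]
pick = max(unread, key=lambda i: ratings[best][i])
print(best, "->", books[pick])  # Matias -> Where the Crawdads Sing
```

SVD enters the picture when the rating matrix has thousands of users and items: the decomposition compresses it into a few latent "taste" factors before similarities are computed.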
Statistical network models have become a popular exploratory data-analysis tool in psychology and related disciplines, allowing researchers to study relations between variables. The most popular models in this emerging literature are the binary-valued Ising model and the multivariate Gaussian distribution for continuous variables, both of which model interactions between pairs of variables. In these pairwise models, the interaction between any pair of variables A and B is a constant and therefore does not depend on the values of any of the other variables in the model. Put differently, none of the pairwise interactions is moderated. However, in highly complex and contextualized fields like psychology, such moderation effects are often plausible. In this blog post, I show how to fit, analyze, visualize and assess the stability of Moderated Network Models for continuous data with the R package mgm. Moderated Network Models (MNMs) for continuous data extend the pairwise multivariate Gaussian distribution with moderation effects (3-way interactions). The implementation in the mgm package estimates these MNMs with a nodewise regression approach, and allows one to condition on moderators, visualize the models and assess the stability of the parameter estimates. For a detailed description of how to construct such an MNM, and how to estimate its parameters, have a look at our paper on MNMs. For a short recap on moderation and its relation to interactions in the regression framework, have a look at this blog post.
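The core idea, that the effect of one variable on another depends on a third, reduces to an interaction term in a regression. A self-contained numpy sketch of that (the actual mgm estimation is done in R; the data and coefficients here are simulated):

```python
# Moderation in miniature: the effect of x1 on y depends on a moderator m,
# captured by an x1*m interaction term in a nodewise regression.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
m = rng.normal(size=n)
# True model: main effect 0.5, moderation (3-way interaction) effect 0.8
y = 0.5 * x1 + 0.8 * x1 * m + rng.normal(scale=0.1, size=n)

# Regress y on an intercept, x1, m and their product
design = np.column_stack([np.ones(n), x1, m, x1 * m])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(beta, 2))  # approximately [0, 0.5, 0, 0.8]
```

The mgm package runs this kind of regression for every node in the network, which is what "nodewise estimation" refers to above.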
Since you’re reading this blog, you probably already know who Andrew Ng is, one of the pioneers in the field, and you may be interested in his advice on how to build a career in Machine Learning. This blog post summarizes the career advice / reading research papers lecture from Stanford’s CS230 Deep Learning course on YouTube.