People who have seen The Terminator would likely agree that it was one of the greatest sci-fi movies of its era. In the movie, James Cameron introduced an interesting visual-effect concept that let viewers get behind the eyes of the cyborg known as the Terminator. This effect came to be known as Terminator Vision, and in a way it segmented humans from the background. It might have seemed totally out of place then, but image segmentation forms a vital part of many image processing techniques today.
One of the main limitations of regression analysis arises when one needs to examine changes in data across several categories. This problem can be resolved with a multilevel model, i.e. one that varies at more than one level and allows for variation between different groups or categories.
NumPy’s broadcasting feature can be somewhat confusing for new users of the library, but it allows for very clean, elegant, and fun coding, so it is definitely worth the effort of getting used to. In this short article, I want to show a nice application of broadcasting that saves some for loops and even computation time. Let’s start with a simple case.
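As a minimal sketch of the kind of loop-saving the article is about (the arrays here are hypothetical, not the ones the article uses): pairwise squared distances between two point sets, computed first with explicit loops and then with a single broadcast expression.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 3))   # 4 points in 3-D
b = rng.random((5, 3))   # 5 points in 3-D

# Loop version: one distance per (i, j) pair.
loop_dists = np.empty((4, 5))
for i in range(4):
    for j in range(5):
        loop_dists[i, j] = np.sum((a[i] - b[j]) ** 2)

# Broadcast version: (4, 1, 3) minus (1, 5, 3) broadcasts to (4, 5, 3),
# then summing over the last axis gives the (4, 5) distance matrix.
bcast_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)

assert np.allclose(loop_dists, bcast_dists)
```

The trick is inserting length-1 axes with `None` so the two arrays' shapes line up under NumPy's broadcasting rules; the double loop then disappears into one vectorized expression.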
Natural language understanding (NLU) is one of the richest areas in deep learning, encompassing highly diverse tasks such as reading comprehension, question answering, and machine translation. Traditionally, NLU models focus on solving only one of those tasks and are useless when applied to other NLU domains. Also, NLU models have mostly evolved as supervised learning architectures that require expensive training. Recently, researchers from OpenAI challenged both assumptions in a paper that introduces a single unsupervised NLU model able to achieve state-of-the-art performance on many NLU tasks.
My latest data science project involved predicting the sales of each product in a particular store. There were several ways I could approach the problem, but no matter which model I used, my accuracy score would not improve. I figured out the problem after spending some time inspecting the data: outliers!
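One common way to spot outliers like the ones described above is the interquartile-range (IQR) rule. A minimal sketch with made-up sales figures (not the article's actual data):

```python
import numpy as np

# Hypothetical sales column with two obvious outliers mixed in.
sales = np.array([12.0, 15.0, 14.0, 13.0, 250.0, 16.0, 11.0, 14.5, 300.0, 13.5])

# IQR rule: anything beyond 1.5 * IQR outside the quartiles is flagged.
q1, q3 = np.percentile(sales, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = sales[(sales < lower) | (sales > upper)]
# outliers -> array([250., 300.])
```

Flagged points can then be dropped, capped, or modeled separately depending on whether they are data errors or genuine extreme sales.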
My last post about DCGANs was primarily focused on the idea of replacing fully connected layers with convolutions and implementing upsampling convolutions with Keras. This article will further explain the architectural guidelines mentioned by Radford et al., as well as additional topics mentioned in the paper such as unsupervised feature learning with GANs, GAN overfitting, and latent space interpolation.
Monte Carlo simulations are extremely common methods in the world of data science and analytics. They can be used for everything from business process optimization to physics simulation. Unfortunately, the math of Monte Carlo simulations is often unwieldy and can be intimidating for people without strong math backgrounds. More importantly, the actual implementation of Monte Carlo methods is very difficult to explain succinctly, especially in a meeting with senior leaders. The goal of this article is to explain Monte Carlo simulations using an analogy that is approachable for non-technical readers without resorting to dense math or coding that is difficult to explain to non-mathematicians.
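Even without the dense math, the core mechanic is simple enough to show in a few lines. A classic toy example (my illustration, not from the article): estimating pi by sampling random points in the unit square and counting how many land inside the quarter circle.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Sample n random points in the unit square; the fraction that falls
# inside the quarter circle (x^2 + y^2 <= 1) approximates pi / 4.
n = 100_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / n
```

The same pattern (simulate many random trials, then average) underlies Monte Carlo methods in business process optimization and physics alike; only the random model being sampled changes.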
McCarthy and Minsky described Artificial Intelligence as a task performed by a machine, which if, performed by a human instead will require a great deal of intelligence. A collective data of all the behavioral qualities are required to make the precise decision. These behavioral qualities are planning, problem-solving, reasoning and manipulation.
Many of the following statistical tests are rarely discussed in textbooks or college classes, much less in data camps, yet they help answer a lot of different and interesting questions. I used most of them without even computing the underlying distribution under the null hypothesis, instead using simulations to check whether my assumptions were plausible. In short, my approach to statistical testing is model-free and data-driven. Some of the tests are easy to implement even in Excel, and some are illustrated here with examples that require no statistical knowledge to understand or implement.
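A minimal sketch of the simulation-based approach described above (the two samples are hypothetical): a permutation test for a difference in means, where the null distribution is simulated by shuffling rather than derived analytically.

```python
import random

random.seed(42)

# Two hypothetical samples; is the difference in means plausible under chance?
a = [2.1, 2.5, 2.2, 2.8, 2.6, 2.4]
b = [2.0, 1.9, 2.1, 1.8, 2.2, 2.0]
observed = abs(sum(a) / len(a) - sum(b) / len(b))

# Simulate the null: repeatedly shuffle the pooled data and re-split it,
# counting how often a difference at least as large appears by chance.
pooled = a + b
n_iter = 10_000
count = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    pa, pb = pooled[:len(a)], pooled[len(a):]
    if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
        count += 1
p_value = count / n_iter
```

No distributional assumptions, no closed-form null distribution: the same shuffle-and-recompute pattern adapts to almost any test statistic, which is what makes the approach model-free and data-driven.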
Machine Learning Yearning is about structuring the development of machine learning projects. The book contains practical insights that are difficult to find somewhere else, in a format that is easy to share with teammates and collaborators. Most technical AI courses will explain to you how the different ML algorithms work under the hood, but this book teaches you how to actually use them. If you aspire to be a technical leader in AI, this book will help you on your way. Historically, the only way to learn how to make strategic decisions about AI projects was to participate in a graduate program or to gain experience working at a company. Machine Learning Yearning is there to help you quickly acquire this skill, which enables you to become better at building sophisticated AI systems.
Machine learning techniques such as neural networks and linear models often utilize L2 regularization as a way to avoid overfitting. You may have heard of Tikhonov regularization as a more general version of L2 regularization, but in the end, most examples only use plain L2 regularization.
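For a linear model, L2 regularization has a closed form that makes the shrinkage effect easy to see. A minimal sketch on made-up data (my illustration, not the article's example): ridge regression, i.e. Tikhonov regularization with the matrix chosen as a scaled identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3 * x + small noise.
X = rng.normal(size=(50, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

# Design matrix with a bias column.
A = np.hstack([X, np.ones((50, 1))])

def ridge(A, y, lam):
    # Closed-form solution: w = (A^T A + lam * I)^-1 A^T y.
    # General Tikhonov replaces lam * I with Gamma^T Gamma for a matrix Gamma;
    # L2 regularization is the special case Gamma = sqrt(lam) * I.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

w_ols = ridge(A, y, 0.0)    # plain least squares (no penalty)
w_l2 = ridge(A, y, 10.0)    # L2 penalty shrinks the weights toward zero
```

Comparing `w_ols` and `w_l2` shows the characteristic effect: the penalized slope is pulled toward zero, trading a little bias for lower variance.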
Looking to get your feet wet with machine learning and build something useful at the same time? Here’s a fun weekend project to use TensorFlow to automatically read your weight from pictures of your scale and chart it over time. You’ll learn the basics of the TensorFlow Object Detection API and be able to apply it to this and other image analysis projects.
Listed companies produce quarterly earnings reports, which can cause significant price movements when the results deviate from what analysts had estimated. This is because, according to the efficient-market hypothesis, asset prices fully reflect all available information and will as a result factor in consensus estimates. In this article, we are going to see how we can use machine learning to predict whether a company will beat or miss its estimates.
The last couple of years have been incredible for Natural Language Processing (NLP) as a domain! We have seen multiple breakthroughs – ULMFiT, ELMo, Facebook’s PyText, Google’s BERT, among many others. These have rapidly accelerated the state-of-the-art research in NLP (and language modeling, in particular). We can now predict the next sentence, given a sequence of preceding words. What’s even more important is that machines are now beginning to understand the key element that had eluded them for so long: context.
Know your data: where it comes from, what’s in it, what it means. It all starts from there. If there is one piece of advice that I consistently give to every data person who’s starting out, whether they are going to be an analyst, scientist, or visualizer, this is it. This is the hill I spend the majority of my time on even now, to the point of obsession. It is a deeeeeeep but eminently important rabbit hole.