A Leap into the Future: Generative Adversarial Networks

Is December 12, 2018, a day the world changed forever? That Wednesday, a team of researchers at NVIDIA released a dazzling new artificial intelligence design called StyleGAN. StyleGAN is a deep learning system built on the idea of a generative adversarial network (GAN), and it generates ultra-realistic images of people, cars, and bedrooms. Inventions like this could change the way humankind interacts with nearly all media: when people see images in the news, their first reaction may soon be to ask whether what they are looking at is real or fake. GANs are among the most futuristic AI designs, and people have many questions about them: What are GANs? Will I be able to understand them? Can I use GANs for my business analytics, or are they only good at creating images? This article tries to answer some of these questions. We'll start with an overview of GANs, then discuss some of the challenges in training them. After that, we'll examine two promising GANs: the RadialGAN, which is designed for numerical data, and the StyleGAN, which is focused on images.


automl-gs

Give automl-gs an input CSV file and a target field you want to predict, and get a trained, high-performing machine learning or deep learning model plus native Python pipeline code that lets you integrate the model into any prediction workflow. No black box: you can see exactly how the data is processed and how the model is constructed, and you can make tweaks as necessary.


Does Random Forest overfit?

When I first saw this question I was a little surprised. My first thought was: of course it does! Any complex machine learning algorithm can overfit. I've trained hundreds of Random Forest (RF) models and have observed them overfit many times. My second thought: wait, why are people asking such a question? Let's dig in and do some research. After some quick googling, I found the following statement on the website of Leo Breiman, the creator of the Random Forest algorithm: "Random forests does not overfit. You can run as many trees as you want." (Incidentally, on that same site, the performance of RF was benchmarked on an 800 MHz processor!) What? Impossible! So RF doesn't overfit? The creator of the algorithm knows best, I thought. He is much smarter than me; surely he is right. But wait, I've seen RF overfit many times. I was very confused! Let's check it, from both a theoretical and a practical point of view.
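As a quick practical check, here is a minimal sketch in R using the randomForest package and the built-in iris data (my own illustration, not the post's experiment). It contrasts the two senses of "overfitting": Breiman's claim is about the out-of-bag (OOB) error stabilising as trees are added, while the everyday symptom is a gap between training and test accuracy.

    library(randomForest)
    set.seed(1)

    # hold out a test set from the built-in iris data
    idx <- sample(nrow(iris), 100)
    rf  <- randomForest(Species ~ ., data = iris[idx, ], ntree = 1000)

    # Breiman's sense: OOB error flattens out as trees are added,
    # so running more trees does not cause overfitting
    plot(rf$err.rate[, "OOB"], type = "l", xlab = "trees", ylab = "OOB error")

    # the everyday symptom: training accuracy well above test accuracy
    mean(predict(rf, iris[idx, ])  == iris$Species[idx])   # train
    mean(predict(rf, iris[-idx, ]) == iris$Species[-idx])  # test

The two views can disagree: adding trees never hurts, yet a train/test gap can still appear because the individual trees are grown deep.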


All you need to know about text preprocessing for NLP and Machine Learning

Based on some recent conversations, I realized that text preprocessing is a severely overlooked topic. A few people I spoke to mentioned inconsistent results from their NLP applications, only to realize that they were not preprocessing their text or were using the wrong kind of preprocessing for their project. With that in mind, I thought I'd shed some light on what text preprocessing really is, the different methods available, and a way to estimate how much preprocessing you may need. For those interested, I've also put together some text preprocessing code snippets for you to try. Now, let's get started!
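For a flavour of what the basic steps look like, here is a toy sketch in base R (my own hypothetical example; the post's snippets cover much more, such as stemming and domain-specific stop-word lists):

    # a toy preprocessing pipeline in base R (hypothetical example text)
    text   <- "The 3 QUICK brown foxes <b>jumped</b> over the lazy dog!!"
    clean  <- tolower(text)                              # lowercase
    clean  <- gsub("<[^>]+>", " ", clean)                # strip HTML tags
    clean  <- gsub("[[:punct:][:digit:]]+", " ", clean)  # drop punctuation/digits
    clean  <- gsub("\\s+", " ", trimws(clean))           # normalise whitespace
    tokens <- strsplit(clean, " ")[[1]]                  # tokenise on spaces
    tokens[!tokens %in% c("the", "over")]                # toy stop-word removal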


KDnuggets Poll – Top Data Science and Machine Learning Methods Used in 2018/2019

Which Data Science / Machine Learning methods and algorithms did you use in 2018/2019 for a real-world application? Take part in the latest KDnuggets survey and have your say.


Separating the Signal from the Noise: Robust Statistics for Pedestrians

One of the problems of navigating an autonomous car through a city is extracting robust signals in the face of all the noise present in the different sensors. Just taking something like an arithmetic mean of all the data points could end in catastrophe: if part of a wall looks similar to the street and the algorithm calculates an average trajectory of the two, the car would leave the road and possibly crash into pedestrians. So we need a robust algorithm to get rid of the noise. The area of statistics that deals with such problems is called robust statistics, and the methods used therein, robust estimation. Now, one of the problems is that we don't know in advance what is signal and what is noise. The big idea behind RANSAC (short for RAndom SAmple Consensus) is to get rid of outliers by taking as many points as possible that form a well-defined consensus region and leaving out the others. It does this iteratively, similar to the famous k-means algorithm (the topic of an upcoming post, so stay tuned…). To really understand how RANSAC works, we will now build it in R, taking a simple linear regression as an example and making it robust against outliers.
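To make the idea concrete before diving into the post, here is a rough sketch of the consensus loop (my own minimal R version, not the post's implementation): repeatedly fit a line to a tiny random sample, count how many points fall within a tolerance band, and keep the fit with the largest consensus set.

    # minimal RANSAC for simple linear regression (illustrative sketch)
    ransac_lm <- function(x, y, n_iter = 200, tol = 2) {
      best_fit <- lm(y ~ x)   # fall back to plain OLS
      best_n   <- 0
      for (i in seq_len(n_iter)) {
        s       <- sample(length(x), 2)                  # minimal sample: 2 points
        cand    <- lm(y ~ x, data = data.frame(x = x[s], y = y[s]))
        res     <- abs(y - predict(cand, data.frame(x = x)))
        inliers <- which(res < tol)                      # consensus set
        if (length(inliers) > best_n) {
          best_n   <- length(inliers)
          best_fit <- lm(y ~ x,
                         data = data.frame(x = x[inliers], y = y[inliers]))
        }
      }
      best_fit
    }

    set.seed(123)
    x <- 1:50
    y <- 2 * x + rnorm(50)
    y[45:50] <- y[45:50] + 40   # inject outliers
    coef(lm(y ~ x))             # slope pulled away from the true value of 2
    coef(ransac_lm(x, y))       # close to the true slope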


Visualising Model Response with easyalluvial

In this tutorial I want to show how you can use alluvial plots to visualise model response in up to four dimensions. easyalluvial generates an artificial data space, using fixed values for the unplotted variables or the partial dependence plotting method. It is model-agnostic but offers some convenient wrappers for caret models.
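As a rough idea of the workflow (a sketch from memory of the package README; argument names may differ between easyalluvial versions, so check the docs):

    library(easyalluvial)
    library(caret)

    # train any caret model on a familiar data set
    fit <- train(mpg ~ ., data = mtcars, method = "rf")

    # plot the model response over the most important features,
    # holding the unplotted variables at fixed values
    # (assuming the alluvial_model_response_caret() wrapper)
    alluvial_model_response_caret(fit, degree = 4)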


Testing numeric variables for NA/NaN/Inf

In R, a numeric variable is either a number (like 0, 42, or -3.14) or one of four special values: NA, NaN, Inf, or -Inf. It can be hard to remember how the is.x functions treat each of the special values, especially NA and NaN! The post summarizes, in a table, how each of these values is treated by different base R functions, listed in alphabetical order.
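A few of the gotchas are easy to verify at the console (base R behaviour):

    is.na(c(NA, NaN, Inf))         # TRUE  TRUE  FALSE  -- NaN counts as missing!
    is.nan(c(NA, NaN, Inf))        # FALSE TRUE  FALSE  -- but NA is not NaN
    is.finite(c(NA, NaN, Inf, 1))  # FALSE FALSE FALSE TRUE
    is.infinite(c(NA, NaN, -Inf))  # FALSE FALSE TRUE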


The Ultimate Opinionated Guide to Base R Date Format Functions

When I was first learning R, working with dates was one of the hardest and most time-consuming tasks I dealt with. There are so many things to learn! What do I do with as.POSIXct(), as.POSIXlt(), strftime(), strptime(), format(), and as.Date()? R date formats were confusing, and no matter what I did I seemed to keep running into issues. And I'll be honest: working with dates in R still trips me up from time to time. But I've learned to follow a procedure that guides me through any date manipulation task with ease. Today I'm going to teach you that same procedure so you'll never have to worry about R date format issues again!
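One way to keep the two directions straight, illustrated in base R (my own minimal example, not the post's full procedure):

    # parsing: tell R what format the incoming string is in
    d <- as.Date("03/15/2019", format = "%m/%d/%Y")

    # formatting: tell R what format you want out
    format(d, "%B %d, %Y")    # "March 15, 2019"

    # strptime() parses strings to date-times; strftime() formats them back
    dt <- strptime("2019-03-15 14:30", format = "%Y-%m-%d %H:%M")
    strftime(dt, "%H:%M on %d %b %Y")   # "14:30 on 15 Mar 2019"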


Beyond the bar plot: visualizing gender inequality in science

Sarah Leo from ‘The Economist’ recently wrote an interesting article, where she gave examples of graphs that were ‘(1) misleading, (2) confusing and (3) failing to make a point’, along with her proposed revisions. I was intrigued by her last example, illustrating the story about gender imbalance in scientific publishing, where she refrains from proposing a better graph and asks for ideas instead.


What’s Next For AI? Enter: Deep Reasoning

Deep learning models are pretty good at understanding relationships between inputs and outputs, but that’s about as far as it goes. Whether it’s supervised learning or reinforcement learning, the input and desired output are clearly defined and easy for the model to understand. This is fine for tasks like classification, and even generation, but if we want AI models to be able to make decisions using what we call ‘common sense’, which is actually abstract reasoning, we need to enable them to reason.


Introduction to Neural Style Transfer with TensorFlow

Neural style transfer is one of the most creative applications of convolutional neural networks. Taking a content image and a style image, the neural network recombines them to create a new, artistic image! These algorithms are extremely flexible, and the virtually infinite possible combinations of content and style lead to very creative and unique results.


Multi-Class Text Classification with LSTM

Automatic text classification, or document classification, can be done in many different ways in machine learning, as we have seen before. This article provides an example of how a Recurrent Neural Network (RNN) using the Long Short-Term Memory (LSTM) architecture can be implemented in Keras. We will use the same data source as in Multi-Class Text Classification with Scikit-Learn: the Consumer Complaints data set from data.gov.
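The post works in Python, but for orientation the same architecture can be sketched through the keras interface in R. The vocabulary size, sequence length, and number of complaint categories below are hypothetical placeholders, not the post's values:

    library(keras)

    vocab_size  <- 5000   # hypothetical vocabulary size
    max_len     <- 200    # hypothetical padded sequence length
    num_classes <- 11     # hypothetical number of complaint categories

    # embedding -> LSTM -> softmax over the complaint categories
    model <- keras_model_sequential() %>%
      layer_embedding(input_dim = vocab_size, output_dim = 64,
                      input_length = max_len) %>%
      layer_lstm(units = 64, dropout = 0.2, recurrent_dropout = 0.2) %>%
      layer_dense(units = num_classes, activation = "softmax")

    model %>% compile(
      loss      = "categorical_crossentropy",  # for one-hot encoded labels
      optimizer = "adam",
      metrics   = "accuracy"
    )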


Generalizable Deep Reinforcement Learning

Transfer learning is all the rage in the machine learning community these days. It serves as the basis for many of the managed AutoML services that Google, Salesforce, IBM, and Microsoft provide, and it now figures prominently in the latest NLP research, appearing in Google's Bidirectional Encoder Representations from Transformers (BERT) model and in Sebastian Ruder and Jeremy Howard's Universal Language Model Fine-tuning for Text Classification (ULMFiT).