Classification and Regression Analysis with Decision Trees

A decision tree is a supervised machine learning model used to predict a target by learning decision rules from features. As the name suggests, we can think of this model as breaking our data down by asking a series of questions and making a decision at each step.
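To make the "series of questions" idea concrete, here is a minimal scikit-learn sketch; the Iris dataset and the depth limit are illustrative choices of mine, not the article's:

```python
# A minimal decision-tree sketch with scikit-learn. Dataset (Iris) and
# max_depth are illustrative assumptions, not from the article.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42)

# Each internal node asks one question about a feature (e.g. "petal width
# <= 0.8?") and routes the sample left or right until a leaf predicts a class.
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=list(data.feature_names)))
print("test accuracy:", tree.score(X_test, y_test))
```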


How to Decide Between Amazon SageMaker and Microsoft Azure Machine Learning Studio

I recently published a walk-through of Microsoft Azure Machine Learning Studio (Studio) https://…tudio-clarifies-data-science-8e8d3e6ed64e and was favorably impressed with its simplicity and power. But there are other tools that also claim to make machine learning easier and to speed up model development, and I wondered how they compare. So, this week, I am taking a look at Amazon SageMaker (SageMaker) and how it compares to Studio.


Uber datasets in BigQuery: Driving times around SF (and your city too)

Uber keeps adding new cities to their public data program – let’s load them into BigQuery. We’ll take advantage of the latest features: native GIS functions, partitioning, clustering, and fast dashboards with BI Engine.
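As a rough sketch of what querying such a table from Python can look like, here is a hedged example with the google-cloud-bigquery client; the project and table names are placeholders, not the article's actual dataset paths:

```python
# Querying a BigQuery table with native GIS functions from Python.
# `my-project.uber.sf_travel_times` is a placeholder, not the real table.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

query = """
SELECT
  sourceid,
  dstid,
  mean_travel_time,
  -- Native GIS: geodesic distance between two points in San Francisco
  ST_DISTANCE(ST_GEOGPOINT(-122.4194, 37.7749),
              ST_GEOGPOINT(-122.3893, 37.7331)) AS meters_example
FROM `my-project.uber.sf_travel_times`
LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))
```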


Introducing Translatotron: An End-to-End Speech-to-Speech Translation Model

Speech-to-speech translation systems have been developed over the past several decades with the goal of helping people who speak different languages to communicate with each other. Such systems have usually been broken into three separate components: automatic speech recognition to transcribe the source speech as text, machine translation to translate the transcribed text into the target language, and text-to-speech synthesis (TTS) to generate speech in the target language from the translated text. Dividing the task into such a cascade of systems has been very successful, powering many commercial speech-to-speech translation products, including Google Translate.
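Schematically, the cascade the article describes is just a composition of three stages; the stubs below only illustrate the data flow and are not a real API:

```python
# The traditional speech-to-speech cascade: ASR -> MT -> TTS. All three
# stages are stubbed out purely to show the pipeline shape.
def asr(source_audio: bytes) -> str:
    """Stub: transcribe source-language speech to text."""
    return "hola mundo"

def mt(text: str) -> str:
    """Stub: translate text into the target language."""
    return "hello world"

def tts(text: str) -> bytes:
    """Stub: synthesize target-language speech from text."""
    return text.encode("utf-8")

def cascade_translate(source_audio: bytes) -> bytes:
    # Each stage sees only the previous stage's output, so errors compound;
    # Translatotron's pitch is to learn this mapping end to end instead.
    return tts(mt(asr(source_audio)))

print(cascade_translate(b"fake-audio"))
```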


Building Recommender systems with Azure Machine Learning service

Recommendation systems are used in a variety of industries, from retail to news and media. If you’ve ever used a streaming service or ecommerce site that has surfaced recommendations for you based on what you’ve previously watched or purchased, you’ve interacted with a recommendation system. With the availability of large amounts of data, many businesses are turning to recommendation systems as a critical revenue driver. However, finding the right recommender algorithms can be very time consuming for data scientists. This is why Microsoft has provided a GitHub repository with Python best practice examples to facilitate the building and evaluation of recommendation systems using Azure Machine Learning services.
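For a flavor of the kind of algorithm such a repository covers, here is a generic item-based collaborative-filtering sketch in plain NumPy; this is my own illustration, not code or API from Microsoft's repository:

```python
# Item-based collaborative filtering on a toy user-item rating matrix.
# Purely illustrative; not from Microsoft's best-practices repository.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
item_sim = (ratings.T @ ratings) / (norms.T @ norms + 1e-9)

# Score user 0's unrated items by similarity-weighted ratings.
user = ratings[0]
scores = item_sim @ user
scores[user > 0] = -np.inf  # never re-recommend already-rated items
print("recommend item:", int(np.argmax(scores)))
```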


The Inherent Insecurity in Neural Networks and Machine Learning Based Applications

Deep neural networks are inherently fuzzy. Each type of neural network (traditional, convolutional, recurrent, etc.) has a set of weight connections (W41, W42, … W87 parameters) that are randomly initialized and updated as data is pumped through the system and errors are backpropagated to correct the weight values. After training, these weights approximate a function that fits the inputs and outputs of the training data. However, the distribution of weight values is not perfect and can only generalize from inputs and outputs the network has seen. The problem with neural networks is that they will never be perfect, and they fail silently: they do not let you know that they have failed, oftentimes classifying incorrectly with high confidence. Ideally, you want a system to notify you when there is a failure. Feed a neural network a random set of static images, and it may produce incorrect object classifications with high confidence.
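This failure mode is easy to reproduce; here is a small demonstration using scikit-learn (the model and dataset are my illustrative choices, not the article's):

```python
# Train a small neural net, then feed it pure noise: the softmax still
# produces confident predictions. Model and data are illustrative choices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)

# "Random static": pixel values drawn uniformly over the input range (0-16).
rng = np.random.default_rng(0)
noise = rng.uniform(0, 16, size=(5, X.shape[1]))

# There is no "I don't know" class; the max probability can be high even
# though the inputs are meaningless.
print("max confidence on pure noise:", clf.predict_proba(noise).max(axis=1))
```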


IBM’s Quest to Solve the Continual Learning Problem and Build Neural Networks Without Amnesia

I often joke that neural networks suffer from a continuous amnesia problem: every time they are retrained, they lose the knowledge accumulated in previous iterations. Building neural networks that can learn incrementally without forgetting is one of the existential challenges facing the current generation of deep learning solutions. Recently, researchers from IBM published a paper proposing a method for continual learning that allows the implementation of neural networks that can build incremental knowledge. Neural networks have achieved impressive milestones in the last few years, from beating Go champions to mastering multi-player games. However, neural network architectures remain constrained to very specific domains and unable to transfer their knowledge into new areas. Furthermore, current neural network models are only effective if trained over large stationary distributions of data and struggle when training over changing, non-stationary distributions. In other words, neural networks can effectively solve many tasks when trained from scratch, continually sampling from all tasks until training has converged; they struggle when training incrementally, when there is a time dependence to the data received. Paradoxically, most real-world AI scenarios are based on incremental, not stationary, knowledge. Throughout the history of artificial intelligence (AI), there have been several theories and proposed models to deal with the continual learning challenge.
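The forgetting itself is easy to observe; here is a minimal sketch using scikit-learn's incremental training API (the tasks and model are my choices, not IBM's experimental setup):

```python
# Catastrophic forgetting in miniature: train on digits 0-4, then only on
# 5-9, and watch accuracy on the first task collapse. Illustrative setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a, task_b = y < 5, y >= 5
classes = np.unique(y)

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
clf.partial_fit(X[task_a], y[task_a], classes=classes)  # declare all classes
for _ in range(50):                                     # train on task A only
    clf.partial_fit(X[task_a], y[task_a])
acc_before = clf.score(X[task_a], y[task_a])

for _ in range(50):                                     # then task B only
    clf.partial_fit(X[task_b], y[task_b])
acc_after = clf.score(X[task_a], y[task_a])

print(f"task A accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```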


We Need to Talk, AI

A Comic Essay on Artificial Intelligence


SingularityNET’s AI Team Experiments With Generative Capsule Networks

The SingularityNET AI team has been experimenting with Generative Capsule Networks (CapsNets), which have the potential to structure SingularityNET’s disparate data and services. Generative CapsNets have demonstrated the capability to generate shifted images for shifts never encountered in training, far outside the range of the training set. CapsNets have also shown better performance at one-shot transfer: carrying the capability to reconstruct rotated images over from some classes to others, and extrapolating outside the range of the training set. Models with better generalization and transfer-learning performance would greatly benefit SingularityNET, whose nodes will be heavily reused for different tasks and trained on user data that might not be well prepared.


Time Series Forecasting – A Getting Started Guide

When I started writing this post, I thought of just explaining how to do predictions with a ‘simple’ time series (aka a univariate time series). The challenging part of the project I was working on, however, was that the prediction needed to be made in conjunction with multiple variables. For this reason, I decided to bring this guide a little bit closer to reality and use a multivariate time series.
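As a taste of the multivariate case, here is a minimal sketch using a vector autoregression (VAR) from statsmodels; the model choice and the synthetic data are my assumptions, not necessarily what the guide uses:

```python
# Multivariate forecasting with a VAR model on two synthetic, correlated
# series. VAR is one common starting point; the guide's choice may differ.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 200
x = np.cumsum(rng.normal(size=n))            # a random-walk "driver" series
y = 0.6 * x + rng.normal(scale=0.5, size=n)  # a series that depends on x
df = pd.DataFrame({"x": x, "y": y})

fitted = VAR(df).fit(maxlags=5, ic="aic")    # lag order selected by AIC
forecast = fitted.forecast(df.values[-fitted.k_ar:], steps=10)
print(forecast[:3])  # next 3 joint (x, y) predictions
```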


Tutorial for Using Confidence Intervals & Bootstrapping

In this tutorial I will attempt to show how bootstrapping and confidence intervals can help highlight statistically significant differences between sample distributions.
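The core loop is small enough to sketch in a few lines of NumPy; the data below is synthetic, and the percentile method is one of several bootstrap CI variants:

```python
# Percentile bootstrap confidence intervals for two sample means.
# Synthetic data; the percentile method is one common CI variant.
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=2.0, size=100)  # sample A
b = rng.normal(loc=10.8, scale=2.0, size=100)  # sample B

def bootstrap_ci(sample, n_boot=10_000, alpha=0.05):
    """95% percentile CI for the mean via resampling with replacement."""
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    boot_means = sample[idx].mean(axis=1)
    return np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

print("A mean CI:", bootstrap_ci(a))
print("B mean CI:", bootstrap_ci(b))
# Non-overlapping intervals suggest a statistically significant difference
# between the two sample means.
```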


The Power of Visualization

This article focuses on the importance of visualization with data. The amount and complexity of information produced in science, engineering, business, and everyday human activity is increasing at staggering rates. Good visualizations not only present a visual interpretation of data, but do so while improving comprehension, communication, and decision making. The importance of visualization is taught to almost every data scientist in an entry-level university course, but is mastered by very few; it is often regarded as obvious or unimportant due to its inherently subjective nature. In this article, I hope to dispel some of those notions and show you that visualization is incredibly important, not just in the field of data science, but for communicating any form of information. Through multiple examples, I aim to show the reader the impact a well-designed visualization can have in communicating an idea or piece of information. In addition, I will discuss best practices for making effective visualizations, how one can go about developing their own, and the resources available for doing so. I hope you enjoy this visual journey and learn something in the process.


Multivariate Outlier Detection

I was given 3 GB of machine-generated data fed by 120 sensors (5 records every second) in Excel format. The task at hand was to mine out interesting patterns, if any, from the data. I loaded the data into R on my local machine and performed various descriptive and exploratory analyses to gain some insights. The customer was also looking for some low-cost maintenance mechanisms for their machines, so I thought I could study the outliers and provide some information about system health. This could also be monitored in real time using dashboards and, if possible, forecast to a near-future time point for early alarms and predictive maintenance. So this became a case of outlier detection in 120-dimensional space. On inspection, the values in around 90 of the columns were constant over the entire time period and contributed nothing toward system noise, so I dropped them.
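In Python terms (the author worked in R, and the detector below is my illustrative choice, not the author's method), the column-pruning step and a multivariate outlier pass might look like this:

```python
# Drop zero-variance sensor columns, then flag multivariate outliers.
# Synthetic stand-in data; IsolationForest is an illustrative detector.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for the sensor table: 1000 readings x 120 sensors,
# ~90 of which never change.
data = pd.DataFrame(rng.normal(size=(1000, 30)),
                    columns=[f"sensor_{i}" for i in range(30)])
for i in range(30, 120):
    data[f"sensor_{i}"] = 1.0  # constant columns carry no signal

data = data.loc[:, data.nunique() > 1]  # drop constant columns
print("informative sensors:", data.shape[1])

labels = IsolationForest(random_state=0).fit_predict(data)  # -1 = outlier
print("flagged outliers:", int((labels == -1).sum()))
```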


From Word Embeddings to Pretrained Language Models – A New Age in NLP – Part 2

For words to be processed by machine learning models, they need some form of numeric representation that models can use in their calculations. This is part 2 of a two-part series where I look at how word-to-vector representation methodologies have evolved over time. If you haven’t read Part 1 of this series, I recommend checking that out first!
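To ground the "numeric representation" idea, here is a toy contrast between one-hot vectors and dense learned embeddings; the corpus and gensim settings are illustrative, not taken from the series:

```python
# One-hot vectors vs. dense Word2Vec embeddings on a toy corpus.
# Purely illustrative; corpus and hyperparameters are made up.
import numpy as np
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["dogs", "chase", "cats"]]
vocab = sorted({w for sent in corpus for w in sent})

# One-hot: every word is equally distant from every other word.
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}
print("one-hot 'cat':", one_hot["cat"])

# Word2Vec: dense vectors where words used in similar contexts end up nearby.
model = Word2Vec(corpus, vector_size=8, window=2, min_count=1, seed=0)
print("dense 'cat': ", model.wv["cat"])
print("similarity(cat, dog):", model.wv.similarity("cat", "dog"))
```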


From Word Embeddings to Pretrained Language Models – A New Age in NLP – Part 1

For words to be processed by machine learning models, they need some form of numeric representation that models can use in their calculations. This is part 1 of a two-part series where I look at how word-to-vector representation methodologies have evolved over time. Part 2 can be found here.