Embodied Learning is Essential to Artificial Intelligence

Jeff Hawkins has a principle that intuitively makes a lot of sense, yet one that Deep Learning research has not emphasized enough: the notion of embodied learning, that is, that biological systems learn by interacting with their environment.


Paper review & code: Deep Ensembles (NIPS 2017)

What we love about probabilistic programming is, of course, the ability to model the uncertainty of model predictions. Unfortunately, this convenient property comes at a cost, as the calculations needed to perform e.g. Markov Chain Monte Carlo are very expensive. In this short but insightful paper, Lakshminarayanan et al. report a simple method for assessing model uncertainty using neural networks, one that compares well even to Bayesian neural networks (as they demonstrate in the paper).
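As a quick illustration of the core idea, training several identically structured networks from different random initializations and averaging their predictive distributions, here is a minimal Keras sketch. The toy data, architecture, and ensemble size are illustrative assumptions; the paper additionally uses proper scoring rules and adversarial training, which are omitted here.

```python
# Minimal deep-ensemble sketch: M independently initialized networks, averaged.
# Toy data and architecture are assumptions, not the paper's exact setup.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 20)).astype("float32")
y_train = (x_train.sum(axis=1) > 0).astype("int32")  # synthetic binary labels

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

M = 5  # small ensembles already work well in the paper's experiments
models = []
for _ in range(M):
    model = make_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x_train, y_train, epochs=5, verbose=0)
    models.append(model)

# Average the predictive distributions; disagreement across the members
# (e.g. the variance of the probabilities) serves as the uncertainty signal.
x_test = rng.normal(size=(10, 20)).astype("float32")
probs = np.stack([m.predict(x_test, verbose=0) for m in models])  # (M, N, 2)
mean_probs = probs.mean(axis=0)
uncertainty = probs.var(axis=0)
```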


10 Pragmatic Expectations for Machine Learning Technologies in 2019

Every new year brings new expectations and hopes for technology markets. In the case of machine learning, the space is moving so fast that it is hard to differentiate hype from reality. Many of the ground-breaking advancements in machine learning and artificial intelligence (AI) research are simply impractical to apply in real-world solutions, and many of the basic artifacts needed to streamline the adoption of machine learning technologies in real-world scenarios are still missing.
1) Reinforcement Learning Toolkits
2) Improved Model Serving and Deployment Stacks
3) Support for Spark Runtimes
4) Semi-Supervised Learning and Better Automated Data Labeling and Transformation Frameworks
5) Experimentation & Logging Frameworks
6) Better Eager Execution Frameworks
7) Interpretability Tools
8) AutoML for Machine Learning Model Selection and Versioning
9) Model Discovery and Reusability
10) More Decentralized AI Experimentation


Easily explore your data using the summarytools package

Whenever we start working with unfamiliar data, our first step is usually some kind of exploratory data analysis. We may look at the structure of the data using the str function, or use a tool like the RStudio Viewer to examine it. We might also use the base R summary function or the describe function from the Hmisc package to obtain summary statistics. In this article, we will look at the summarytools package, which provides a few functions to neatly summarize your data. This is especially useful in a Shiny application or an R Markdown report, as you can directly render the summary outputs to HTML and display them in the application or report. In fact, I discovered this package while playing around with the Shiny application radiant. The four main functions in this package are dfSummary, freq, descr and ctable. The print.summarytools function is used to print summarytools objects with the appropriate rendering method.


Non-Gaussian Mixture Models – The Covariance Matrix Transform

The covariance matrix has many interesting properties, and it can be found in mixture models, component analysis, Kalman filters, and more. Developing an intuition for how the covariance matrix operates is useful in understanding its practical implications. This article will focus on a few important properties, associated proofs, and then some interesting practical applications, i.e., non-Gaussian mixture models.
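To make a couple of those properties concrete, here is a small NumPy sketch (the toy data is made up) showing that the sample covariance matrix is symmetric and positive semi-definite, and how its eigendecomposition can be used to whiten, i.e. decorrelate, the data:

```python
import numpy as np

# Correlated toy data: standard normals mixed by an arbitrary matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.3, 0.0],
                                          [0.0, 1.0, 0.5],
                                          [0.0, 0.0, 0.7]])

# Sample covariance matrix: symmetric and positive semi-definite.
C = np.cov(X, rowvar=False)

# Eigendecomposition: eigenvectors give the principal axes,
# eigenvalues the variances along them (all >= 0).
eigvals, eigvecs = np.linalg.eigh(C)
print(eigvals)

# Whitening transform: project onto the eigenvectors and rescale,
# so the transformed data has (approximately) identity covariance.
X_white = (X - X.mean(axis=0)) @ eigvecs / np.sqrt(eigvals)
print(np.cov(X_white, rowvar=False).round(2))
```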


Discovering the essential tools for Named Entity Recognition

I remember the first time I read about Natural Language Processing (NLP). It was hard for me to picture how an algorithm could recognize words, identify their meaning, or determine which type of words they were. I was even confused about how to start; it was so much information. After going around in circles, I started by asking myself what exactly I had to do with NLP. I had several corpora of text from different websites I had scraped. My main goal was to extract and classify the names of persons, organizations, and locations, among others. What I had in my hands was a Named Entity Recognition (NER) task.
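For a concrete starting point, here is a minimal NER example using spaCy, one common tool for this task (the article may cover others; the model name and sentence below are illustrative):

```python
# Minimal NER sketch with spaCy. Assumes:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tim Cook announced that Apple will open a new office in Austin.")

# Each recognized entity carries a label such as PERSON, ORG, or GPE.
for ent in doc.ents:
    print(ent.text, ent.label_)
```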


Building a Multi-label Text Classifier using BERT and TensorFlow

In multi-class classification, each sample is assigned to one and only one label: a fruit can be either an apple or a pear, but not both at the same time. Let us consider an example with three classes, C = ['Sun', 'Moon', 'Cloud']. In the multi-class case, each sample can belong to only one of the C classes. In the multi-label case, each sample can belong to one or more of the classes.
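In practice the difference shows up in the classifier head: softmax with a categorical loss for multi-class, independent sigmoids with a binary cross-entropy loss for multi-label. A minimal Keras sketch follows (the 768-dimensional input is a stand-in for BERT's pooled output, an assumption rather than the article's exact code):

```python
import tensorflow as tf

num_classes = 3  # e.g. ['Sun', 'Moon', 'Cloud']
inputs = tf.keras.Input(shape=(768,))  # stand-in for BERT's pooled output

# Multi-class head: softmax forces exactly one of the classes per sample.
multiclass = tf.keras.Model(
    inputs, tf.keras.layers.Dense(num_classes, activation="softmax")(inputs))
multiclass.compile(optimizer="adam", loss="categorical_crossentropy")

# Multi-label head: independent sigmoids allow any subset of classes per
# sample, trained with a per-class binary cross-entropy.
multilabel = tf.keras.Model(
    inputs, tf.keras.layers.Dense(num_classes, activation="sigmoid")(inputs))
multilabel.compile(optimizer="adam", loss="binary_crossentropy")
```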


Learn AI Collaboratively by Saving the Planet

Omdena and Ciencia y Datos have partnered up for a new AI Challenge. If you want to be one of the 50 AI enthusiasts who acquire hands-on skills by solving a meaningful, global problem, you need to read this article.


Inference on the edge

While machine learning inference models are already transforming computing as we know it, the hard truth is that using multiple, gigantic datasets to train them still takes a ton of processing power. Using a trained model to predict an event requires much less, but still more than our pocket devices can currently handle. Processing an image with a deep neural network involves billions of operations: huge matrices of many dimensions being multiplied, inverted, reshaped, and Fourier transformed.
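To give a feel for where those billions of operations come from, here is a back-of-the-envelope count for a single convolutional layer (the layer dimensions are illustrative, not taken from the article):

```python
# Rough multiply-accumulate (MAC) count for one conv layer:
# output pixels * output channels * kernel volume.
def conv_macs(h_out, w_out, c_in, c_out, k):
    return h_out * w_out * c_out * (k * k * c_in)

# e.g. a 3x3 convolution producing a 112x112x64 map from 64 input channels:
print(conv_macs(112, 112, 64, 64, 3))  # ~462 million MACs for one layer
```

Stack a few dozen such layers and the per-image cost quickly reaches the billions of operations mentioned above.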


Pre-training BERT from scratch with cloud TPU

In this experiment, we will be pre-training a state-of-the-art Natural Language Understanding model, BERT, on arbitrary text data using Google Cloud infrastructure.
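For context, attaching a TensorFlow job to a Cloud TPU looks roughly like the sketch below. This uses the TF 2.x TPUStrategy API with a placeholder TPU name; the article's setup may instead use the older TPUEstimator flow.

```python
# Sketch of connecting TensorFlow 2.x to a Cloud TPU; "my-tpu" is a placeholder.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # A real run would build the BERT masked-LM pre-training model here;
    # this stand-in layer only shows where model construction belongs.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```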


ML Models – Prototype to Production

So you have a model; now what? Through the power of machine learning and the promise of deep learning, today’s conferences, thought leaders and experts in ML and AI have been painting a vision of businesses powered by data. However, despite the groundbreaking research and the constant flood of new papers in the fields of ML and deep learning, much of this research remains just that – research (outside of the few tech giants). Deploying ML models remains a significant challenge. As a data scientist and an ML practitioner, I have experienced myself that it is often more difficult to make the journey from a reliable and accurate prototype model to a well-performing and scalable production inference service than it is to actually build the model. Models need to be retrained and deployed when code and/or data are updated. Therefore, automating the build and deployment of machine learning models is a crucial part of creating production machine learning services.
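As a minimal illustration of one piece of that journey, serving a trained model behind an HTTP endpoint, here is a Flask sketch (Flask, the file name, and the pickled scikit-learn model are assumptions for illustration, not necessarily the author's stack):

```python
# Minimal model-serving sketch: a trained model behind an HTTP endpoint.
# Assumes a scikit-learn model has been saved to "model.pkl".
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A production service would add the pieces the paragraph alludes to: automated retraining, versioned model artifacts, and a repeatable build-and-deploy pipeline around this endpoint.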


Artificial Intelligence and Robotic Process Automation

In its simplest form, machine learning is the ability of machines to learn from past experience, using historical data (in the supervised and semi-supervised cases) to solve a given problem. It uses different algorithms to fit one or more (mathematical) models that act as the basis for producing the desired result set while consuming input data parameters (dimensions). This is the fundamental difference from any conventional computer program: instead of a programmatic implementation of a logic or algorithm, a trained model is used to produce the output.
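A toy scikit-learn example of that difference: the decision logic below is not programmed by hand but fit from historical data (all of the data is made up for illustration):

```python
# Instead of coding if/else rules, fit a model to historical examples.
from sklearn.tree import DecisionTreeClassifier

# Made-up history: (amount, customer_age) -> approved (1) or rejected (0).
X_history = [[100, 25], [5000, 40], [200, 30], [7000, 22]]
y_history = [1, 1, 1, 0]

model = DecisionTreeClassifier().fit(X_history, y_history)

# The trained model, not a programmed rule, produces the output for new input.
print(model.predict([[300, 35]]))
```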


Classifying Facial Emotions via Machine Learning

Index
• What is an Emotion
• Work-plan to solve the problem
• Step 1: Pre-processing Images
• Step 2: Extract Features from Faces
• Brief Introduction to SIFT
• Step 3: Clustering the features
• Encoding
• Pooling
• BoW Featurization
• VLAD Featurization
• Step 4: Training the dataset and Conclusion
• Dataset Used
• Dataset Processing
• Training
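As a rough sketch of Steps 2 and 3 in the index above, here is how SIFT descriptors can be clustered into a bag-of-visual-words histogram (assumes opencv-contrib-python and scikit-learn; the file paths and vocabulary size are placeholders, and the article's exact pipeline may differ):

```python
# Step 2: extract local SIFT descriptors; Step 3: cluster them into a
# visual vocabulary and encode each image as a BoW histogram.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

descriptors = []
for path in ["face1.png", "face2.png"]:  # placeholder image paths
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    descriptors.append(desc)

k = 50  # vocabulary size (placeholder)
kmeans = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptors))

def bow_histogram(desc):
    # Normalized histogram of visual-word assignments for one image.
    words = kmeans.predict(desc)
    return np.bincount(words, minlength=k) / len(words)

features = np.array([bow_histogram(d) for d in descriptors])
```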


Properly Setting the Random Seed in Machine Learning Experiments

Machine learning models make use of randomness in obvious and unexpected ways. At a conceptual level, this non-determinism may impact your model’s convergence rate, the stability of your results, and the final quality of a network. At a practical level, it means that you probably have difficulty reproducing the same results across runs for your model – even when you run the same script on the same training data. It could also lead to challenges in figuring out whether a change in performance is due to an actual model or data modification, or merely the result of a new random sample.
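A common first step is to pin the obvious sources of randomness explicitly, as in the sketch below (full determinism may additionally require framework- and GPU-specific settings, e.g. cuDNN flags):

```python
# Seed the usual suspects up front, before any data loading or model building.
import os
import random

import numpy as np

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)      # Python's built-in RNG (e.g. shuffling with random.shuffle)
np.random.seed(SEED)   # NumPy (data splits, some libraries' weight init)

# If you use TensorFlow or PyTorch, seed them as well:
# tf.random.set_seed(SEED)
# torch.manual_seed(SEED)
```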