Distilled News

Most AI Explainability Is Snake Oil. Ours Isn’t And Here’s Why.

Advanced machine learning (ML) is a subset of AI that uses more data and sophisticated math to make better predictions and decisions. Banks and lenders could make a lot more money using ML-powered credit scoring instead of legacy methods in use today. But adoption of ML has been held back by the technology’s ‘black-box’ nature: you can see the model’s results but not how it came to those results. You can’t run a credit model safely or accurately if you can’t explain its decisions, especially for a regulated use case such as credit underwriting.


How to deploy a predictive service to Kubernetes with R and the AzureContainers package

It’s easy to create a function in R, but what if you want to call that function from a different application, at a scale that supports a large number of simultaneous requests? This article shows how you can deploy a fitted R model as a Plumber web service on Kubernetes, using Azure Container Registry (ACR) and Azure Kubernetes Service (AKS). We use the AzureContainers package to create the necessary resources and deploy the service.


Visualizing Hurricane Data with Shiny

Around the time that I was selecting a topic for this project, my parents and my hometown found themselves in the path of a Category 1 hurricane. Thankfully, everyone was OK, and there was only minor damage to their property. But this event made me think about how long it had been since my hometown was last in the path of a Category 1 hurricane. I also wanted to study trends in hurricane intensity over time, to see whether they support the popular impression that storms have grown stronger over the past few years.


Network Centrality in R: New ways of measuring Centrality

This is the third post of a series on the concept of ‘network centrality’ with applications in R and the package netrankr. The last part introduced the concept of neighborhood-inclusion and its implications for centrality. In this post, we extend the concept to a broader class of dominance relations by deconstructing indices into a series of building blocks and introduce new ways of evaluating centrality.


Named Entity Recognition (NER) With Keras and Tensorflow – Meeting Industry’s Requirement by Applying State-of-the-art Deep Learning Methods

A few years ago, when I was working as a software engineering intern at a startup, I saw a new feature in a job-posting web app. The app was able to recognize and parse important information from résumés, such as email addresses, phone numbers, and degree titles. I started discussing possible approaches with our team, and we decided to build a rule-based parser in Python to parse the different sections of a resume. After spending some time developing the parser, we realized that the answer might not be a rule-based tool. We started googling how it’s done and came across the term Natural Language Processing (NLP) and, more specifically, Named Entity Recognition (NER), associated with Machine Learning.


The Importance of Being Recurrent for Modeling Hierarchical Structure

Recurrent Neural Networks (RNNs), such as Long Short-Term Memory networks (LSTMs), currently have performance limitations, while newer methods such as Fully Attentional Networks (FANs) show potential for replacing LSTMs without those same limitations. So the authors set out to compare the two approaches using standardized methods, and found that LSTMs universally surpass FANs in prediction accuracy when applied to the hierarchical structure of language.


Supervised Machine Learning: Classification

In supervised learning, algorithms learn from labeled data. After understanding the data, the algorithm determines which label should be given to new data by recognizing patterns and associating those patterns with the new, unlabeled data.
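A minimal sketch of that idea, assuming scikit-learn and its bundled iris data purely for illustration:

# Supervised classification in miniature: learn from labeled data,
# then assign labels to new, unlabeled observations.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # features plus known labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)  # learn the patterns
print(clf.predict(X_test[:5]))   # labels assigned to 'new' data
print(clf.score(X_test, y_test)) # accuracy on held-out data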


Customer Analysis with Network Science

Over the past decade or two, Americans have continued to prefer payment methods that are traceable, providing retailers and vendors with a rich source of data on their customers. This data is used by data scientists to help businesses make more informed decisions with respect to inventory, marketing, and supply chain, to name a few. There are several tools and techniques for performing customer segmentation, and network analysis can be a powerful one.
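As a toy sketch of the network angle (assuming networkx; the co-purchase edges below are invented purely for illustration):

# Customer segmentation as community detection on a co-purchase graph.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_edges_from([
    ('alice', 'bob'), ('bob', 'carol'),   # one cluster of customers
    ('dave', 'erin'), ('erin', 'frank'),  # another cluster
    ('carol', 'dave'),                    # a weak bridge between the two
])

segments = community.greedy_modularity_communities(G)
for i, seg in enumerate(segments):
    print('segment', i, sorted(seg))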


AWS Architecture For Your Machine Learning Solutions

One of the regular challenges I face while designing enterprise-grade solutions for our client companies is the lack of online references showing real-world architectural use cases. You will find tons of tutorials on how to get started with individual technologies, and these are great when your focus is limited to that particular framework or service. But in order to evaluate the broad spectrum of all that is available out there, and to predetermine the implications of bundling a bunch of these together, you either have to hunt down someone who’s been down the road before, or venture into independent experimentation yourself. That’s why I decided to start a series sharing some of my own insights gathered while designing and developing technical solutions for multiple Fortune 200 companies and emerging startups. And hopefully, today’s use case will help you plan the AWS architecture for your Machine Learning solutions.


AI: the silver bullet to stop Technical Debt from sucking you dry

It’s Friday evening in the Bahamas. You’re relaxing under a striped red umbrella with a succulent glass of wine and your favorite book: it’s a great read, and you love the way the ocean breeze moves the pages like leaves on a tree. As the sun descends your eyes follow, your consciousness drifting with the waves, closer to the horizon, closer to a soft, lulling sleep, closer to a perfect evening in a perfect world.


5 Machine Learning Resolutions for 2019

More organizations are using machine learning for competitive reasons, but their results are mixed. It turns out there are better — and worse — ways of approaching it. If you want to improve the outcome of your efforts in 2019, consider these points.
• Start with an appropriate scope
• Approach machine learning holistically
• Make the connection between data and machine learning
• Don’t expect too much ‘out of the box’
• Don’t forget infrastructural requirements


XGBoost is not black magic

Nowadays it is quite easy to get decent results in data science tasks: a general understanding of the process, a basic knowledge of Python, and ten minutes of your time are sufficient to instantiate XGBoost and fit a model. OK, if it’s your first time you would probably spend a couple of minutes collecting the required packages via pip, but that’s it. The only problem with this approach is that it works pretty well: a couple of years ago I placed in the top 5 of a university competition by just feeding the dataset to XGBoost with some basic feature engineering, outperforming groups presenting very complex architectures and data pipelines. One of the coolest characteristics of XGBoost is how it deals with missing values: deciding, for each sample, the best way to impute them. This feature has been super useful in a lot of projects and datasets I have run into over the last few months; to be more deserving of the Data Scientist title written under my name, I decided to dig a little deeper, taking a couple of hours to read the original paper and trying to understand what XGBoost is actually about and how it deals with missing values in the seemingly magical way it does.
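A small sketch of that missing-value behavior, assuming the xgboost Python package and synthetic data:

# XGBoost accepts NaNs natively: at each split it learns a default
# branch for missing values, so no explicit imputation step is needed.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan  # knock out 20% of the values

model = xgb.XGBClassifier(n_estimators=50)
model.fit(X, y)                        # trains despite the NaNs
print(model.predict(X[:5]))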


Which Model and How Much Data?

Building deep learning applications in the real world is a never-ending process of selecting and refining the right elements of a specific solution. Among those elements, the selection of the correct model and the right structure of the training dataset are, arguably, the two most important decisions that data scientists need to make when architecting deep learning solutions. How do you decide which deep learning model to use for a specific problem? How do you know whether you are using the correct training dataset, or whether you should gather more data? Those questions are the common denominator across all stages of the lifecycle of a deep learning application. Even though there is no magic answer to those questions, there are several ideas that can guide your decision-making process. Let’s start with the selection of the correct deep learning model.
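On the data-volume side of that question, one partial but practical answer is the learning curve; a sketch, assuming scikit-learn and its bundled digits data:

# Learning-curve heuristic: if the validation score is still climbing as
# the training set grows, more data is likely to help; if it has
# plateaued, effort is better spent on the model itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(n, 'training examples -> CV score', round(s, 3))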

Document worth reading: “AI Reasoning Systems: PAC and Applied Methods”

Learning and logic are distinct and remarkable approaches to prediction. Machine learning has experienced a surge in popularity because it is robust to noise and achieves high performance; however, ML experiences many issues with knowledge transfer and extrapolation. In contrast, logic is easily interpreted, and logical rules are easy to chain and transfer between systems; however, inductive logic is brittle to noise. We then explore the premise of combining learning with inductive logic into AI Reasoning Systems. Specifically, we summarize findings from PAC learning (conceptual graphs, robust logics, knowledge infusion) and deep learning (DSRL, $\partial$ILP, DeepLogic) by reproducing proofs of tractability, presenting algorithms in pseudocode, highlighting results, and synthesizing between fields. We conclude with suggestions for integrated models by combining the modules listed above and with a list of unsolved (likely intractable) problems. AI Reasoning Systems: PAC and Applied Methods

R Packages worth a look

Parallel GLM (parglm)
Provides a parallel estimation method for generalized linear models without compiling with a multithreaded LAPACK or BLAS.

Utilizes the Black-Scholes Option Pricing Model to Perform Strategic Option Analysis and Plot Option Strategies (optionstrat)
Utilizes the Black-Scholes-Merton option pricing model to calculate key option analytics and graphical analysis of various option strategies. Provides …

Individual Tree Growth Modeling (ITGM)
Individual tree model is an instrument to support the decision with regard to forest management. This package provides functions that let you work with …

Native R Kernel for the ‘Jupyter Notebook’ (IRkernel)
The R kernel for the ‘Jupyter’ environment executes R code which the front-end (‘Jupyter Notebook’ or other front-ends) submits to the kernel via the n …

If you did not already know

Agnostic Disambiguation of Named Entities Using Linked Open Data (AGDISTIS) google
AGDISTIS is an Open Source Named Entity Disambiguation Framework able to link entities against every Linked Data Knowledge Base. The ongoing transition from the current Web of unstructured data to the Data Web yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework). One of the key steps towards extracting RDF from natural-language corpora is the disambiguation of named entities. AGDISTIS combines the HITS algorithm with label expansion strategies and string similarity measures. Based on this combination, it can efficiently detect the correct URIs for a given set of named entities within an input text. Furthermore, AGDISTIS is agnostic of the underlying knowledge base. AGDISTIS has been evaluated on different datasets against state-of-the-art named entity disambiguation frameworks.
http://…/public.pdf
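The HITS component at AGDISTIS’s core is easy to illustrate in isolation; a toy sketch using networkx on an invented candidate graph (the real system builds this graph from the knowledge base):

# HITS gives every node a hub score and an authority score; a candidate
# URI that many related entities point to accumulates authority.
import networkx as nx

G = nx.DiGraph([
    ('mention:Paris', 'dbpedia:Paris'),
    ('mention:Paris', 'dbpedia:Paris_Hilton'),
    ('dbpedia:France', 'dbpedia:Paris'),
    ('dbpedia:Europe', 'dbpedia:France'),
])

hubs, authorities = nx.hits(G)
best = max(authorities, key=authorities.get)
print(best, round(authorities[best], 3))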


Deep Reinforcement One-shot Learning (DeROL) google
In recent years there has been a sharp rise in networking applications, in which significant events need to be classified but only a few training instances are available. These are known as cases of one-shot learning. Examples include analyzing network traffic under zero-day attacks, and computer vision tasks by sensor networks deployed in the field. To handle this challenging task, organizations often use human analysts to classify events under high uncertainty. Existing algorithms use a threshold-based mechanism to decide whether to classify an object automatically or send it to an analyst for deeper inspection. However, this approach leads to a significant waste of resources since it does not take the practical temporal constraints of system resources into account. Our contribution is threefold. First, we develop a novel Deep Reinforcement One-shot Learning (DeROL) framework to address this challenge. The basic idea of the DeROL algorithm is to train a deep-Q network to obtain a policy which is oblivious to the unseen classes in the testing data. Then, in real-time, DeROL maps the current state of the one-shot learning process to operational actions based on the trained deep-Q network, to maximize the objective function. Second, we develop the first open-source software for practical artificially intelligent one-shot classification systems with limited resources for the benefit of researchers in related fields. Third, we present an extensive experimental study using the OMNIGLOT dataset for computer vision tasks and the UNSW-NB15 dataset for intrusion detection tasks that demonstrates the versatility and efficiency of the DeROL framework. …

Poisson Regression google
In statistics, Poisson regression is a form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables. …
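In symbols, with $x$ the covariate vector and $\beta$ the unknown parameters: $$Y \mid x \sim \mathrm{Poisson}\big(\lambda(x)\big), \qquad \log \mathbb{E}[Y \mid x] = \log \lambda(x) = \beta_0 + \beta^\top x.$$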

What’s new on arXiv

Towards Machine Learning Mathematical Induction

Mathematical induction lies at the heart of mathematics and computer science. However, automated theorem proving of inductive problems is still limited in its power. In this abstract, we first summarize our progress in automating inductive theorem proving for Isabelle/HOL. Then, we present MiLkMaId, our approach to suggesting promising applications of mathematical induction without completing a proof search.


seq2graph: Discovering Dynamic Dependencies from Multivariate Time Series with Multi-level Attention

Discovering temporal lagged dependencies and inter-dependencies in multivariate time series data is an important task. However, in many real-world applications, such as commercial cloud management, manufacturing predictive maintenance, and portfolio performance analysis, such dependencies can be non-linear and time-variant, which makes it more challenging to extract them through traditional methods such as Granger causality or clustering. In this work, we present a novel deep learning model that uses multiple layers of customized gated recurrent units (GRUs) for discovering both time-lagged behaviors and inter-time-series dependencies in the form of directed weighted graphs. We introduce a key component, a dual-purpose recurrent neural network, that decodes information in the temporal domain to discover lagged dependencies within each time series, and encodes it into a set of vectors which, collected from all component time series, form the informative inputs for discovering inter-dependencies. Though the discovery of the two types of dependencies is separated at different hierarchical levels, they are tightly connected and jointly trained in an end-to-end manner. With this joint training, learning of one type of dependency immediately impacts the learning of the other, leading to overall accurate dependency discovery. We empirically test our model on synthetic time series data in which the exact form of the (non-linear) dependencies is known. We also evaluate its performance on two real-world applications: (i) performance monitoring data from a commercial cloud provider, which exhibit highly dynamic, non-linear, and volatile behavior, and (ii) sensor data from a manufacturing plant. We further show how our approach is able to capture these dependency behaviors via intuitive and interpretable dependency graphs, and use them to generate highly accurate forecasts.


Regression Based Bayesian Approach for Nonparanormal Graphical Models

A nonparanormal graphical model is a semiparametric generalization of a Gaussian graphical model for continuous variables in which it is assumed that the variables follow a Gaussian graphical model only after some unknown smooth monotone transformations. We consider a Bayesian approach in the nonparanormal graphical model in which we put priors on the unknown transformations through random series based on B-splines. We use a regression formulation to construct the likelihood through the Cholesky decomposition on the underlying precision matrix of the transformed variables and put shrinkage priors on the regression coefficients. We apply a plug-in variational Bayesian algorithm for learning the sparse precision matrix and compare the performance to a posterior Gibbs sampling scheme in a simulation study. We finally apply the proposed methods to a real data set.
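Concretely, in the standard formulation: $X = (X_1, \dots, X_d)$ is nonparanormal if there exist unknown smooth monotone transformations $f_1, \dots, f_d$ such that $$\big(f_1(X_1), \dots, f_d(X_d)\big) \sim N_d(\mu, \Sigma),$$ with the conditional-independence graph read off the zero pattern of the precision matrix $\Omega = \Sigma^{-1}$.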


Deep Program Reidentification: A Graph Neural Network Solution

Program or process is an integral part of almost every IT/OT system. Can we trust the identity/ID (e.g., executable name) of the program? To avoid detection, malware may disguise itself using the ID of a legitimate program, and a system tool (e.g., PowerShell) used by the attackers may have the fake ID of another common software, which is less sensitive. However, existing intrusion detection techniques often overlook this critical program reidentification problem (i.e., checking the program’s identity). In this paper, we propose an attentional multi-channel graph neural network model (DeepRe-ID) to verify the program’s identity based on its system behaviors. The key idea is to leverage the representation learning of the program behavior graph to guide the reidentification process. We formulate the program reidentification as a graph classification problem and develop an effective multi-channel attentional graph embedding algorithm to solve it. Extensive experiments — using real-world enterprise monitoring data and real attacks — demonstrate the effectiveness of DeepRe-ID across multiple popular metrics and the robustness to the normal dynamic changes like program version upgrades.


SIMD-X: Programming and Processing of Graph Algorithms on GPUs

With high computation power and memory bandwidth, graphics processing units (GPUs) lend themselves to accelerate data-intensive analytics, especially when such applications fit the single instruction multiple data (SIMD) model. However, graph algorithms such as breadth-first search and k-core, often fail to take full advantage of GPUs, due to irregularity in memory access and control flow. To address this challenge, we have developed SIMD-X, for programming and processing of single instruction multiple, complex, data on GPUs. Specifically, the new Active-Compute-Combine (ACC) model not only provides ease of programming to programmers, but more importantly creates opportunities for system-level optimizations. To this end, SIMD-X utilizes just-in-time task management which filters out inactive vertices at runtime and intelligently maps various tasks to different amount of GPU cores in pursuit of workload balancing. In addition, SIMD-X leverages push-pull based kernel fusion that, with the help of a new deadlock-free global barrier, reduces a large number of computation kernels to very few. Using SIMD-X, a user can program a graph algorithm in tens of lines of code, while achieving 3x, 6x, 24x, and 3x speedups over Gunrock, Galois, CuSha, and Ligra, respectively.


Top-N-Rank: A Scalable List-wise Ranking Method for Recommender Systems

We propose Top-N-Rank, a novel family of list-wise Learning-to-Rank models for reliably recommending the N top-ranked items. The proposed models optimize a variant of the widely used discounted cumulative gain (DCG) objective function which differs from DCG in two important aspects: (i) it limits the evaluation of DCG to the top N items in the ranked lists, thereby eliminating the impact of low-ranked items on the learned ranking function; and (ii) it incorporates weights that allow the model to leverage multiple types of implicit feedback with differing levels of reliability or trustworthiness. Because the resulting objective function is non-smooth and hence challenging to optimize, we consider two smooth approximations of the objective function, using the traditional sigmoid function and the rectified linear unit (ReLU). We propose a family of learning-to-rank algorithms (Top-N-Rank) that work with any smooth objective function. Then, a more efficient variant, Top-N-Rank.ReLU, is introduced, which effectively exploits the properties of the ReLU function to reduce the computational complexity of Top-N-Rank from quadratic to linear in the average number of items rated by users. The results of our experiments using two widely used benchmarks, namely the MovieLens data set and the Amazon Video Games data set, demonstrate that: (i) the ‘top-N truncation’ of the objective function substantially improves the ranking quality of the top N recommendations; (ii) using the ReLU for smoothing the objective function yields significant improvements in both ranking quality and runtime as compared to using the sigmoid; and (iii) Top-N-Rank.ReLU substantially outperforms the well-performing list-wise ranking methods in terms of ranking quality.
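For reference, the objective being truncated is the familiar discounted cumulative gain restricted to the top $N$ positions (shown here in its standard form; the paper’s variant additionally carries reliability weights on the feedback): $$\mathrm{DCG@}N = \sum_{i=1}^{N} \frac{2^{rel_i} - 1}{\log_2(i+1)}.$$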


Delta Embedding Learning

Learning from a corpus and learning from supervised NLP tasks both give useful semantics that can be incorporated into a good word representation. We propose an embedding learning method called Delta Embedding Learning, to learn semantic information from high-level supervised tasks like reading comprehension and combine it with an unsupervised word embedding. The simple technique not only improves the performance of various supervised NLP tasks, but also simultaneously yields improved universal word embeddings from these tasks.


Deep Learning on Graphs: A Survey

Deep learning has been shown to be successful in a number of domains, ranging from acoustics and images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research effort has been devoted to this area, greatly advancing graph analysis techniques. In this survey, we comprehensively review the different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods, including Graph Neural Networks and Graph Convolutional Networks; unsupervised methods, including Graph Autoencoders; and recent advancements, including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner, following their history of development. We also analyze the differences between these methods and how different architectures can be composed. Finally, we briefly outline their applications and discuss potential future directions.


On the Dimensionality of Word Embedding

In this paper, we provide a theoretical understanding of word embedding and its dimensionality. Motivated by the unitary-invariance of word embedding, we propose the Pairwise Inner Product (PIP) loss, a novel metric on the dissimilarity between word embeddings. Using techniques from matrix perturbation theory, we reveal a fundamental bias-variance trade-off in dimensionality selection for word embeddings. This bias-variance trade-off sheds light on many empirical observations which were previously unexplained, for example the existence of an optimal dimensionality. Moreover, new insights and discoveries, like when and how word embeddings are robust to over-fitting, are revealed. By optimizing over the bias-variance trade-off of the PIP loss, we can explicitly answer the open question of dimensionality selection for word embedding.
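Roughly, for an embedding matrix $E$ the PIP matrix is $E E^\top$, and the PIP loss between two embeddings $E_1$ and $E_2$ is the norm of the difference of their PIP matrices: $$\mathrm{PIPloss}(E_1, E_2) = \big\| E_1 E_1^\top - E_2 E_2^\top \big\|.$$ Since $E$ and $EU$ share the same PIP matrix for any unitary $U$, the metric respects the unitary-invariance that motivates it.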


Text data mining and data quality management for research information systems in the context of open data and open science

In the implementation and use of research information systems (RIS) in scientific institutions, text data mining and semantic technologies are key technologies for the meaningful use of large amounts of data. It is not the collection of data that is difficult, but its further processing and integration into the RIS. Data is usually not uniformly formatted and structured; texts and tables, for example, cannot be linked. Source systems come with different data formats, such as project and publication databases, the CERIF and RCD data models, etc. Internal and external data sources continue to evolve: on the one hand, they must be constantly synchronized and the results of the data links checked; on the other hand, the texts must be processed with natural language techniques and specific information extracted. Using text data mining, the quality of the metadata is analyzed and entities and general keywords are identified, so that the user is supported in the search for interesting research information. The information age makes it ever easier to store huge amounts of data, and the number of documents on the internet, in institutions’ intranets, in newswires, and in blogs is overwhelming. Search engines should help to open up these sources of information and make them usable for administrative and research purposes. Against this backdrop, the aim of this paper is to provide an overview of text data mining techniques and of successful data quality management for RIS in the context of open data and open science in scientific institutions and libraries, as well as to provide ideas for their application. In particular, solutions for RIS will be presented.


Efficient Model-Free Reinforcement Learning Using Gaussian Process

Efficient reinforcement learning usually takes advantage of demonstrations or a good exploration strategy. By applying posterior sampling in model-free RL under a Gaussian process (GP) hypothesis, we propose the Gaussian Process Posterior Sampling Reinforcement Learning (GPPSTD) algorithm for continuous state spaces, giving theoretical justifications and empirical results. We also provide theoretical and empirical results showing that demonstrations can lower expected uncertainty and benefit posterior-sampling exploration. In this way, we combine the demonstration and exploration processes to achieve more efficient reinforcement learning.


DRONE: a Distributed gRaph cOmputiNg Engine

Nowadays, in the big data era, social networks, graph databases, knowledge graphs, electronic commerce, and the like demand efficient and scalable capabilities to process ever-increasing volumes of graph-structured data. To meet the challenge, two mainstream distributed programming models, vertex-centric (VC) and subgraph-centric (SC), were proposed. Compared to the VC model, the SC model converges faster with less communication overhead on well-partitioned graphs, and is easy to program with thanks to the ‘think like a graph’ philosophy. However, although the edge-cut method is considered a natural choice for graph partitioning in the subgraph-centric model, and is adopted by Giraph++, Blogel, and GRAPE, it causes a significant performance bottleneck when preprocessing large graphs, especially power-law graphs. Thus, the SC model is less competitive in practice. In this paper, we present an innovative distributed graph computing framework, DRONE (Distributed gRaph cOmputiNg Engine). It combines the subgraph-centric model with the vertex-cut graph partitioning strategy. Experiments show that DRONE outperforms state-of-the-art distributed graph computing engines on real-world graphs and synthetic power-law graphs. DRONE is capable of scaling up to process one-trillion-edge synthetic power-law graphs, which is orders of magnitude larger than previously reported by existing SC-based frameworks.


The Empusa code generator: bridging the gap between the intended and the actual content of RDF resources

The RDF data model facilitates the integration of diverse data available in structured and semi-structured formats. To obtain an RDF graph with a low amount of errors and internal redundancy, the chosen ontology must be consistently applied. However, with each addition of new, diverse data, the ontology must evolve, thereby increasing its complexity, which could lead to the accumulation of unintended erroneous composites. Thus, there is a need for a gatekeeping system that compares the intended content described in the ontology with the actual content of the resource. Here we present Empusa, a tool that has been developed to facilitate the creation of composite RDF resources from disparate sources. Empusa can be used to convert a schema into an associated application programming interface (API) that can be used to perform data consistency checks, and it generates Markdown documentation to make persistent URLs resolvable. In this way, the use of Empusa ensures consistency within and between the ontology (OWL), the Shape Expressions (ShEx) describing the graph structure, and the content of the resource.


Deep Anomaly Detection with Outlier Exposure

It is important to detect and handle anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data commonly used by deep learning systems are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This approach enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments in vision and natural language processing settings, we find that Outlier Exposure significantly improves the detection performance. Our approach is even applicable to density estimation models and anomaly detectors for large-scale images. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
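The objective, roughly as formulated for classification, is the usual loss on in-distribution data plus a term pushing predictions on the auxiliary outliers toward maximal uncertainty: $$\mathbb{E}_{(x,y) \sim \mathcal{D}_{\mathrm{in}}}\big[\mathcal{L}(f(x), y)\big] + \lambda\, \mathbb{E}_{x' \sim \mathcal{D}_{\mathrm{out}}}\big[\mathcal{L}_{\mathrm{OE}}(f(x'))\big],$$ where for classifiers $\mathcal{L}_{\mathrm{OE}}$ is the cross-entropy from $f(x')$ to the uniform distribution.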


Metrics for Explainable AI: Challenges and Prospects

The question addressed in this paper is: if we present to a user an AI system that explains how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI system (XAI) is any good? Our focus is on the key concepts of measurement. We discuss specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user’s trust and reliance on the AI are appropriate, and finally, (6) how the human-XAI work system performs. The recommendations we present derive from our integration of extensive research literatures and our own psychometric evaluations.

R Packages worth a look

Curve Linear Regression via Dimension Reduction (clr)
A new methodology for linear regression with both curve response and curve regressors, which is described in Cho, Goude, Brossat and Yao (2013) <doi …

Stochastic Approximation Monte Carlo (SAMC) Sampler and Methods (SAMCpack)
Stochastic Approximation Monte Carlo (SAMC) is one of the celebrated Markov chain Monte Carlo (MCMC) algorithms. It is known to be capable of sampling …

Data Management for the SPARCS (rSPARCS)
To clean and analyze the data from the Statewide Planning and Research Cooperative System (SPARCS), and generate sets for statistical modeling. Additio …

A Package to Visualize Linear Models Features and Play with Them (lmviz)
Contains three shiny applications. Two are meant to explore linear model inference feature through simulation. The third is a game to learn interpretin …

Book Memo: “Handbook of Model Predictive Control”

Recent developments in model-predictive control promise remarkable opportunities for designing multi-input, multi-output control systems and improving the control of single-input, single-output systems. This volume provides a definitive survey of the latest model-predictive control methods available to engineers and scientists today. The initial set of chapters present various methods for managing uncertainty in systems, including stochastic model-predictive control. With the advent of affordable and fast computation, control engineers now need to think about using ‘computationally intensive controls,’ so the second part of this book addresses the solution of optimization problems in ‘real’ time for model-predictive control. The theory and applications of control theory often influence each other, so the last section of Handbook of Model Predictive Control rounds out the book with representative applications to automobiles, healthcare, robotics, and finance. The chapters in this volume will be useful to working engineers, scientists, and mathematicians, as well as students and faculty interested in the progression of control theory. Future developments in MPC will no doubt build from concepts demonstrated in this book and anyone with an interest in MPC will find fruitful information and suggestions for additional reading.

Distilled News

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Artificial intelligence evokes a mythical, objective omnipotence, but it is backed by real-world forces of money, power, and data. In service of these forces, we are being spun potent stories that drive toward widespread reliance on regressive, surveillance-based classification systems that enlist us all in an unprecedented societal experiment from which it is difficult to return. Now, more than ever, we need a robust, bold, imaginative response.


Grasp2Vec: Learning Object Representations from Self-Supervised Grasping

From a remarkably young age, people are capable of recognizing their favorite objects and picking them up, despite never being explicitly taught how to do so. According to cognitive developmental research, the ability to interact with objects in the world plays a crucial role in the emergence of object perception and manipulation capabilities, such as targeted grasping. By interacting with the world around them, people are able to learn with self-supervision: we know what actions we took, and we learn from the outcome. In robotics, this type of self-supervised learning is actively researched because it enables robotic systems to learn without the need for large amounts of training data or manual supervision. Inspired by the concept of object permanence, we propose Grasp2Vec, a simple yet highly effective algorithm for acquiring object representations. Grasp2Vec is based on the intuition that an attempt to pick up anything provides several pieces of information – if a robot grasps an object and holds it up, the object had to be in the scene before the grasp. Furthermore, the robot knows that the object it grasped is currently in its gripper, and therefore has been removed from the scene. By using this form of self supervision, the robot can learn to recognize the object by the visual change in the scene after the grasp.


Sharing Modeling Pipelines in R

Reusable modeling pipelines are a practical idea that gets re-developed many times in many contexts. wrapr supplies a particularly powerful pipeline notation, and a pipe-stage re-use system (notes here). We will demonstrate this with the vtreat data preparation system. Our example task is to fit a model on some arbitrary data. Our model will try to predict y as a function of the other columns. Our example data is 10,000 rows of 210 variables. Ten of the variables are related to the outcome to predict (y), and 200 of them are irrelevant pure noise. Since this is a synthetic example we know which is which (and deliberately encode this information in the column names).


Automated Web Scraping in R

There are many blogs and tutorials that teach you how to scrape data from a bunch of web pages once, and then you’re done. But one-off web scraping is not useful for applications that require sentiment analysis on recent or timely content, that capture changing events and commentary, or that analyze trends in real time. As fun an academic exercise as one-off scraping of historical data may be, it is of little use when you want timely or frequently updated data.


Introduction to Named Entity Recognition

In this article we will learn what Named Entity Recognition (NER) is. We will discuss some of its use cases and then evaluate a few standard Python libraries with which we can quickly get started and solve problems at hand. In the next articles in this series we will get under the hood of this class of algorithms, get more sophisticated, and create our own NER from scratch. So, let’s begin this journey.
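As a taste of how quickly one of those libraries gets you going, a sketch assuming spaCy with its small English model (en_core_web_sm) installed:

# Minimal NER with spaCy: tag the entity spans found in raw text.
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('Sundar Pichai announced that Google will invest $1 billion in India by 2025.')

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'Sundar Pichai PERSON', 'Google ORG'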


Introducing TF-Ranking

Ranking is one of the most common problems in machine learning scenarios. From search to recommendation systems, ranking models are an important component of many mainstream machine learning architectures. In machine learning theory, ranking methods are often referred to using terms like learning-to-rank (LTR) or machine learning ranking (MLR). Despite its relevance, developing LTR models at scale remains a challenge in most machine learning frameworks. Recently, artificial intelligence (AI) engineers from Google introduced TF-Ranking, a TensorFlow-based framework for building highly scalable LTR models. The principles behind TF-Ranking are detailed in a research paper published a few weeks ago. Conceptually, a ranking problem is defined as the derivation of an ordering over a list of examples that maximizes the utility of the entire list. That definition sounds similar to classification and regression problems, but ranking problems are fundamentally different. While the goal of classification or regression is to predict a label or a value for each individual example as accurately as possible, the goal of ranking is to optimally sort the entire example list, such that the examples of highest relevance are presented first. To infer relevance, LTR methods attempt to learn a scoring function that maps example feature vectors to real-valued scores from labeled data. During inference, this scoring function is used to sort and rank examples.
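The scoring-function view is easy to make concrete outside any particular framework; a generic sketch (not the TF-Ranking API; the features and weights below are invented):

# Learning-to-rank reduced to its core: a scoring function maps each
# candidate's feature vector to a real value, and inference sorts by it.
import numpy as np

def score(x, w):
    # a linear scoring function; LTR methods learn such a function from labeled data
    return float(np.dot(w, x))

w = np.array([0.7, 0.3])  # pretend these weights were learned
candidates = {'doc_a': [0.9, 0.1], 'doc_b': [0.2, 0.8], 'doc_c': [0.5, 0.5]}

ranked = sorted(candidates, key=lambda d: score(np.array(candidates[d]), w), reverse=True)
print(ranked)  # most relevant first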


Towards Ethical Machine Learning

I quit my job to enter an intensive data science bootcamp. I understand the value behind the vast amount of data available that enables us to create predictive machine learning algorithms. In addition to recognizing its value on a professional level, I benefit from these technologies as a consumer. Whenever I find myself in a musical rut, I rely on Spotify’s Discover Weekly. I’m often amazed by how Spotify’s algorithms and other machine learning models so accurately predict my behavior. In fact, when I first sat down to write this post, I took a break to watch one Youtube video. Twenty minutes later, I realized just how well Youtube’s recommendation algorithm works. Although I so clearly see the benefits of machine learning, it is also essential to recognize and mitigate its potential dangers.


Step-by-Step Tutorial on Linear Regression with Stochastic Gradient Descent

This article should give you a good start for diving deep into deep learning. Let me walk you through the calculations, step by step, of stochastic gradient descent for a linear regression task.
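A compact version of the computation such a walkthrough covers, assuming a single weight and bias, plain numpy, and synthetic data:

# Stochastic gradient descent for y ≈ w*x + b: for each sample, compute
# the squared-error gradient and take a small step against it.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)  # ground truth: w = 3, b = 0.5

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(20):
    for xi, yi in zip(x, y):
        err = (w * xi + b) - yi  # prediction error for this sample
        w -= lr * 2 * err * xi   # d(err^2)/dw = 2 * err * x
        b -= lr * 2 * err        # d(err^2)/db = 2 * err
print(round(w, 2), round(b, 2))  # should approach 3.0 and 0.5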


Quick ML Concepts: Tensors

Tensorflow, Tensorlab, Deep Tensorized Networks, Tensorized LSTMs… it’s no surprise that the word ‘tensor’ is embedded in the names of many machine learning technologies. But what are tensors? And how do they relate to machine learning? In part one of Quick ML Concepts, I aim to provide a short yet concise summary of what tensors are.
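In code, a tensor is simply an n-dimensional array; a quick numpy sketch:

# Tensors by rank: scalar (0-D), vector (1-D), matrix (2-D), and beyond.
import numpy as np

scalar = np.array(3.0)                   # rank 0
vector = np.array([1.0, 2.0, 3.0])       # rank 1
matrix = np.ones((2, 3))                 # rank 2
image_batch = np.zeros((32, 28, 28, 3))  # rank 4: batch, height, width, channels

for t in (scalar, vector, matrix, image_batch):
    print(t.ndim, t.shape)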


Machine Learning Going Meta

Machine learning has its own rule of three: in order to use it, you need three essential ingredients, namely 1) labeled data; 2) a model architecture you can optimize; and 3) a well-defined objective function. Many a discussion about applying ML to a problem is simply cut short because one of those three is not available. It is getting easier to arrive at that magical combination: ‘big data’ trends have made access to data more ubiquitous in industry, and deep learning has made it much simpler to find good model architectures to apply to a broad class of problems. Interestingly, much of the difficulty often remains in defining the right objective: one that makes business sense, provides a sufficient amount of supervision, and encompasses all the goals inherent to the problem, for instance fairness, safety or interpretability.


Understanding the concept of Hierarchical clustering Technique

The hierarchical clustering technique is one of the most popular clustering techniques in Machine Learning. Before we try to understand hierarchical clustering, let us first understand clustering itself…
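A minimal agglomerative example, assuming scipy and toy 2-D points:

# Agglomerative hierarchical clustering: every point starts as its own
# cluster, the closest pair is merged repeatedly, and the tree is cut.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0, 0], [0, 1], [1, 0],  # one tight group
                   [10, 10], [10, 11]])     # another group far away

Z = linkage(points, method='ward')               # the merge tree (dendrogram)
labels = fcluster(Z, t=2, criterion='maxclust')  # cut it into 2 clusters
print(labels)                                    # e.g. [1 1 1 2 2]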


Intro to Recommender System: Collaborative Filtering

Like many machine learning techniques, a recommender system makes predictions based on users’ historical behavior. Specifically, it predicts user preference for a set of items based on past experience. To build a recommender system, the two most popular approaches are content-based and collaborative filtering. The content-based approach requires a good amount of information about the items’ own features, rather than using users’ interactions and feedback. For example, these can be movie attributes such as genre, year, director, and actors, or the textual content of articles, which can be extracted by applying Natural Language Processing. Collaborative filtering, on the other hand, doesn’t need anything except users’ historical preferences on a set of items. Preference is usually expressed in one of two ways. An explicit rating is a rate given by a user to an item on a sliding scale, like 5 stars for Titanic: this is the most direct feedback from users showing how much they like an item. An implicit rating suggests user preference indirectly, through page views, clicks, purchase records, whether or not a music track was listened to, and so on. In this article, I will take a close look at collaborative filtering, a traditional and powerful tool for recommender systems.
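A toy user-based collaborative-filtering sketch on an invented explicit-rating matrix (zeros mean ‘unrated’):

# User-based CF: find users with similar rating vectors, then predict a
# missing rating as a similarity-weighted average of their ratings.
import numpy as np

R = np.array([[5, 4, 0, 1],  # rows are users, columns are items
              [4, 5, 0, 1],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target, item = 0, 2  # predict user 0's rating of item 2
sims = np.array([cosine(R[target], R[u]) for u in range(len(R))])
sims[target] = 0.0      # exclude the user themselves
rated = R[:, item] > 0  # only neighbors who rated the item
pred = (sims[rated] @ R[rated, item]) / (sims[rated].sum() + 1e-9)
print(round(pred, 2))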


On integrating symbolic inference into deep neural networks

Deep neural networks have been a tremendous success story over the last couple of years. Many advances in the field of AI, such as recognizing real-world objects, fluently translating natural language, or playing Go at a world-class level, are based on deep neural networks. However, there have been only a few reports concerning the limitations of this approach. One such limitation is the inability to learn from a small number of examples. Deep neural networks usually require a huge number of training examples, whereas humans are able to learn from a single example. If you show a cat to a child who has never seen one before, the child can recognize another cat based on this single instance. Deep neural networks, on the other hand, require hundreds of thousands of images to learn what a cat looks like. Another limitation is the inability to make inferences based on previously learned common knowledge. When reading a text, humans tend to derive wide-ranging inferences about its possible interpretations. Humans can do this because they can recall knowledge from very different domains and apply it to the text.


Understanding the basic Conversational AI concepts with Dialogflow

This is a beginner’s guide intended for understanding the different concepts around designing conversations and implementing them using Google Dialogflow. Other Conversational AI tools use the same concepts, so these should be transferable to any platform. I have used a variety of bot builders, and in my opinion Dialogflow is the easiest for rapidly creating simple bots, or for a non-programmer. This overview covers how to create intents and the different parts they are made up of, training your bot, and other useful tips to help with using Dialogflow. Before implementing your first bot in Dialogflow, it’s always good to map out the conversation flow, in a mind-map style. Having this visualisation will come in handy later, when some conversations can get quite long and hard to keep track of. Note: the screenshots below are from Dialogflow, and the use case was an internal business process assistant.


Building custom models using Keras (BiSeNet) Part II

This year is coming to an end. 2018 was the year that I had the most amazing Artificial Intelligence (AI) learning journey, and I came to realise that Keras is a formidable high-level API for fast Deep Learning (DL) development. It reminds me of LEGOs: you just stack layer on top of layer, and if you are a creative person with a wild imagination you can adapt or create custom LEGOs so you can build complex shapes and representations. As an engineer, this means a lot and saves a lot of time: just plug and play.


Data Science for Product Managers Certificate

Data is at the core of product management. Today’s product managers are increasingly responsible for shipping machine learning-driven product features, making critical product decisions based on machine learning techniques, and developing strong partnerships with data science counterparts. Our Data Science for Product Managers certificate provides the foundational understanding of data science and machine learning necessary for extracting business value from the data produced by your product and your organization.


A High-level Introduction to Differential Privacy

In the eyes of a data scientist, every moment of your life is a data point. From the brand of your toothpaste to the number of times you wave your hand, details that we often take for granted are crucial factors that can be used to infer our behavior and intentions. These insights, mined by multinational organizations, can be used to make aspects of our collective lives more convenient and interesting at the price of our private information being exposed or even exploited.
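The formal guarantee underneath all of this is the standard definition: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all neighboring datasets $D$ and $D'$ differing in one individual’s record, and all sets of outputs $S$, $$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S].$$ Smaller $\varepsilon$ means the output reveals less about any single individual.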