‘GANs’ vs ‘ODEs’: the end of mathematical modeling?

Hi everyone! In this article, I would like to draw a connection between classical mathematical modeling, which we study in school and college, and machine learning, which also models the objects and processes around us, but in a totally different manner. While mathematicians create models based on their expertise and understanding of the world, machine learning algorithms describe the world in some hidden way that is not fully understandable, yet in most cases even more accurate than mathematical models developed by human experts. However, in many applications (such as healthcare, finance, and the military), we need clear, interpretable decisions, which machine learning algorithms, and deep learning models in particular, are not designed to provide. This post will review the main characteristics we expect from any model, the pros and cons of ‘classical’ mathematical modeling and of machine learning modeling, and will present a candidate that combines the best of both worlds: disentangled representation learning.

Statistical Learning and Knowledge Engineering All the Way Down

A path to combining machine learning and knowledge bases.

Feature Selection and Dimensionality Reduction

According to Wikipedia, ‘feature selection is the process of selecting a subset of relevant features for use in model construction’, or in other words, the selection of the most important features. In normal circumstances, domain knowledge plays an important role and we could select the features we feel would be the most important. For example, in predicting home prices, the number of bedrooms and the square footage are often considered important. Unfortunately, in the Don’t Overfit II competition the use of domain knowledge is impossible: we have a binary target and 300 continuous variables ‘of mysterious origin’, which forces us to try automatic feature selection techniques.
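That setup (many anonymous continuous features, a binary target, very few rows) is exactly where automatic feature selection helps. As a hedged sketch only, not the competition’s actual pipeline, a scikit-learn approach might look like this; the selector, estimator, and parameter values are illustrative placeholders:

```python
# Minimal sketch: automatic feature selection when domain knowledge is unavailable.
# The stand-in data mimics the competition's shape (few rows, 300 continuous features);
# the specific selector, k, and classifier are illustrative choices, not a solution.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=250, n_features=300, n_informative=20,
                           random_state=0)

# Univariate filter: keep the k features most associated with the target,
# then fit a sparse (L1-penalized) classifier on what survives.
pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=30)),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print("CV AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Keeping the selection step inside the pipeline means it is refit within each cross-validation fold, which matters when the whole point of the exercise is not to overfit.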

The Four Levels of Analytics Maturity

We outline a four-level model for categorizing how successfully a company uses analytics, based on its ability to visualize analytics, uncover the underlying trends, and take action on them.
Level 1: Analytics visualization with little or no action
Level 2: Analytics for personal performance and metrics
Level 3: Analytics for organizational performance
Level 4: Analytics to improve business performance

AI Safety Gridworlds

We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
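The key device in the abstract is a performance function the agent never observes, separate from the reward it optimizes. As a loose, hedged sketch of that idea only (not the API of the released suite), an environment wrapper could track a hidden performance score alongside the visible reward; the `side_effect` flag below is a hypothetical example of unsafe behavior:

```python
# Hedged sketch of the hidden-performance-function idea, not the released suite's code.
# The agent optimizes the ordinary reward; the experimenter separately inspects a
# hidden performance score that also penalizes unsafe behavior.

class SafetyEnvSketch:
    def __init__(self, base_env):
        self.base_env = base_env
        self._hidden_performance = 0.0  # never exposed to the agent

    def step(self, action):
        obs, reward, done, info = self.base_env.step(action)
        # Specification problem: observed reward and intended behavior can diverge.
        side_effect_penalty = 1.0 if info.get("side_effect", False) else 0.0
        self._hidden_performance += reward - side_effect_penalty
        return obs, reward, done, info  # the agent sees only the usual reward

    def hidden_performance(self):
        # Read out-of-band by the experimenter to score safe behavior.
        return self._hidden_performance
```

When the hidden performance function equals the observed reward, the remaining difficulty is robustness; when the two differ, it is a specification problem, which is how the paper splits its environments.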

Building a chatbot for Slack from scratch: Part 1 Understanding Language

Lessons learned applying deep learning to natural language understanding and question answering. This past summer I decided to test my NLP skills and undertake building a chatbot. As a big Game of Thrones fan, I settled on creating a chatbot for Game of Thrones. Initially, my goal was just to provide a way to get various types of GOT news easily from places like Reddit, Watchers on the Wall, Los Siete Reinos, and Twitter. However, I quickly decided to expand into other tasks, and I became particularly intrigued by how to integrate modern NLP approaches to enrich my chatbot. This first article covers the natural language understanding and question answering components; the second article will talk more about the platform and architecture.
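To make the question-answering component concrete, here is a minimal extractive-QA sketch; this is not the author’s actual stack, and the model, question, and context passage are all illustrative placeholders:

```python
# Hedged sketch of an extractive question-answering component for a chatbot.
# Uses the Hugging Face transformers pipeline; the default model and the
# hand-written context are placeholders, not the author's setup.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

context = ("Jon Snow served as Lord Commander of the Night's Watch "
           "before leaving the Wall.")
answer = qa(question="Who was Lord Commander of the Night's Watch?",
            context=context)
print(answer["answer"], answer["score"])
```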

Natural Language Processing with Spacy in Node.js

There are many use cases for Natural Language Processing in the browser with JavaScript, but unfortunately not many tools exist in this space. To go beyond regular expressions and string manipulation, you’d want to leverage a good NLP framework. spaCy, which bills itself as ‘Industrial-Strength Natural Language Processing’, is a popular toolkit, but it is a Python library. This creates a barrier to entry for those who only work with JavaScript. spaCy features neural network models, integrated word vectors, multi-language support, tokenization, POS tagging, sentence segmentation, dependency parsing, and entity recognition. spaCy boasts a 92.6% accuracy rate and claims to be the fastest syntactic parser in the world.
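The article is about bridging these features into Node.js; for reference, here is what the listed features look like on the Python side. A minimal sketch, assuming the standard small English model `en_core_web_sm` is installed, with a made-up sample sentence:

```python
# Minimal spaCy sketch: tokenization, POS tagging, sentence segmentation,
# dependency parsing, and named-entity recognition from Python.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("SpaCy was created by Explosion AI in Berlin. It parses text quickly.")

for sent in doc.sents:                       # sentence segmentation
    print("SENT:", sent.text)

for token in doc:                            # tokens, POS tags, dependency arcs
    print(token.text, token.pos_, token.dep_, token.head.text)

for ent in doc.ents:                         # named entities
    print("ENT:", ent.text, ent.label_)
```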

Why Measuring Accuracy is Hard (and important!) – Part 4 – How to Measure Accuracy Better

Now in the fourth and final installment of this series, I’m going to get to the real meat – the information we all crave. How do we do it better? With all these problems that can come up in measuring accuracy, how do we avoid pitfalls and measure accuracy better? The unfortunate answer is – it depends. There is no easy answer and no one-size-fits-all solution to the problem. So instead of just giving you a few tips and suggestions (there will be some of those), I instead want to give you a framework for thinking about how to measure a model. Choosing the metrics to measure your model with is an intellectual exercise. The best route forward may involve several different metrics – different numbers used for different reasons.
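One concrete way to act on ‘several different metrics for different reasons’ is to report them side by side rather than pick a single number. A minimal sketch with made-up, deliberately imbalanced labels (the metric choice is illustrative, not a prescription):

```python
# Sketch: report several complementary metrics instead of one "accuracy" number.
# The toy labels are imbalanced on purpose so the metrics tell different stories.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]                       # ground truth
y_pred  = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]                       # hard predictions
y_score = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.45, 0.9]  # predicted scores

print("accuracy :", accuracy_score(y_true, y_pred))    # 0.9, looks great
print("precision:", precision_score(y_true, y_pred))   # 1.0, but...
print("recall   :", recall_score(y_true, y_pred))      # only half the positives found
print("f1       :", f1_score(y_true, y_pred))
print("roc auc  :", roc_auc_score(y_true, y_score))    # uses scores, not hard labels
```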

Why Measuring Accuracy is Hard (and important!), Part 3.

This is the third article in a series of articles discussing the topic of measuring accuracy. In this article, Part 3 of the series, I will be discussing the more unusual, interesting, and difficult scenarios that come up in trying to measure accuracy for a model.

Why Measuring Accuracy Is Hard (and important!) Part 2

This article is part 2 of a 4-part series that I am creating on the challenges of measuring accuracy. If you haven’t read the first article, check it out here: https://…ccuracy-is-hard-and-very-important-part-1 In this article, we are going to discuss the major challenges that occur in measuring accuracy. We’ll try to illustrate with examples and graphs where possible.

Why measuring accuracy is hard (and very important)! Part 1.

We are kicking off a new series of blog articles on measuring accuracy. There is a lot of content to cover – when I first started writing this article, it quickly turned into 12 hours of keyboard bashing and 20 pages of thoughts and notes. So we will be breaking it up into 4 separate articles:
1. An overview of the problem of measuring accuracy, why it’s important, and why it’s hard
2. Detailed description of most of the major problems that come up in measuring accuracy
3. Detailed description of some of the more unusual and interesting situations that we have in measuring accuracy
4. What we can do about it – how to measure accuracy better

Deep-dive into Convolutional Networks

From the building blocks to the most advanced architectures, touching on interpretability and bias.

Hierarchical Clustering on Categorical Data in R

This was my first attempt to perform customer clustering on real-life data, and it’s been a valuable experience. While articles and blog posts about clustering on numerical variables are abundant on the net, it took me some time to find solutions for categorical data, which is indeed less straightforward if you think about it. Methods for clustering categorical data are still being developed, and I will try one or another of them in a different post. On the other hand, I have come across the opinion that clustering categorical data might not produce sensible results, and this is partially true (there’s an amazing discussion at CrossValidated). At a certain point I thought, ‘What am I doing? Why not just break it all down into cohorts?’ But cohort analysis is not always sensible either, especially when you have more categorical variables with a higher number of levels. Skimming through 5-7 cohorts might be easy, but what if you have 22 variables with 5 levels each (say, a customer survey with discrete scores of 1, 2, 3, 4, 5) and you need to see what distinctive groups of customers you have? Crossing them would give you 5^22 possible cohorts. Nobody wants to do this. This is where clustering can be useful. So this post is all about sharing what I wish I had come across when I started with clustering.
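The post itself works in R (commonly Gower dissimilarity plus hierarchical clustering). As a hedged, language-agnostic illustration of the same idea, here is a minimal Python sketch that one-hot encodes made-up categorical survey answers and runs agglomerative clustering on a simple matching-style distance; it is not the article’s R workflow:

```python
# Hedged sketch: hierarchical clustering on purely categorical data (the article
# uses R). Categorical columns are one-hot encoded and clustered with average-linkage
# agglomerative clustering on a matching-style (Hamming) distance.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Made-up survey-style data: every column is categorical.
df = pd.DataFrame({
    "plan":    ["basic", "basic", "pro", "pro", "pro", "basic"],
    "channel": ["web", "web", "app", "app", "web", "app"],
    "region":  ["eu", "us", "us", "us", "eu", "eu"],
})

onehot = pd.get_dummies(df).to_numpy(dtype=float)
dist = pdist(onehot, metric="hamming")   # proportion of mismatching dummy columns
tree = linkage(dist, method="average")   # hierarchical (agglomerative) clustering
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)
```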

Mumble

Mumble is an open source, low-latency, high quality voice chat software primarily intended for use while gaming.

A landscape diagram for Python data

This article introduces a landscape diagram showing roughly 50 of the most popular Python libraries and frameworks used in data science. Landscape diagrams illustrate the components within a technology stack alongside their complementary technologies; in other words, ‘How do the parts fit together?’ For example, where do some of the IBM-sponsored projects fit into the broader context of the open source Python ecosystem for data science work? Jupyter Enterprise Gateway is a good example: it bridges a gap between Project Jupyter and Apache Spark, allowing Jupyter notebooks to run atop enterprise-grade cluster computing.