Convolutional neural networks are widely adopted for image classification. In this work, we aim to better understand deep learning by exploring misclassified cases in facial and emotion recognition. In particular, we propose a backtracking algorithm to track down the activated pixels in the last layer of feature maps. By applying this feature-tracking algorithm, we can then visualize the facial features that lead to misclassification. A comparative analysis of the activated pixels reveals that for facial recognition, the activations of the common pixels are decisive for the classification result, whereas for emotion recognition, the activations of the unique pixels determine the result.
This article explains how Bayes Nets gain remarkable predictive power by their use of conditional probability. This adds to several other salient strengths, making them a preeminent method for prediction and understanding variables’ effects.
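The conditional-probability update the article refers to can be sketched in a few lines. This is a minimal illustration of Bayes' rule at a single node, not code from the article; the function name and the numbers are made up for the example.

```python
# Minimal sketch of the conditional-probability update at the heart of a
# Bayes Net node. All values here are illustrative.

def bayes_posterior(prior, likelihood, false_alarm):
    """P(H | E) via Bayes' rule, expanding P(E) over H and not-H."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# A 1% base-rate event, a 90%-sensitive signal, a 5% false-positive rate:
posterior = bayes_posterior(prior=0.01, likelihood=0.9, false_alarm=0.05)
print(round(posterior, 3))  # the posterior stays modest despite the strong signal
```

Even a highly accurate signal leaves a modest posterior when the prior is low, which is exactly the kind of effect a Bayes Net propagates through a whole network of variables.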
This is part two in a series I’m writing on network analysis. The first part is here. In this section I’m going to cover allocating resources, again using the St James’ development in Edinburgh as an example. Most excitingly (for me), the end of this post covers the impact of changes in resource allocation. Edinburgh (and surrounds) has more than one shopping centre. Many more. I’ve had a stab at narrowing these down to those that are similar to the St James centre, i.e. they’re big, (generally) covered and may have a cinema. You can see a plot of these below. As you can see the majority are concentrated around the population centre of Edinburgh.
Lately I’ve been thinking a lot about the connection between prediction models and the decisions that they influence. There is a lot of theory around this, but communicating how the various pieces all fit together with the folks who will use and be impacted by these decisions can be challenging. One of the important conceptual pieces is the link between the decision threshold (how high does the score need to be to predict positive) and the resulting distribution of outcomes (true positives, false positives, true negatives and false negatives). As a starting point, I’ve built this interactive tool for exploring this.
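The threshold-to-outcomes link the post describes can be made concrete with a small sketch: sweep a decision threshold over a set of scores and count the four outcome types. The scores and labels below are made-up illustrative data, not anything from the interactive tool.

```python
# Sketch: how a decision threshold maps model scores to outcome counts.
# Scores and labels are illustrative only.

def confusion_counts(scores, labels, threshold):
    """Count TP/FP/TN/FN when predicting positive for score >= threshold."""
    tp = fp = tn = fn = 0
    for score, label in zip(scores, labels):
        predicted_positive = score >= threshold
        if predicted_positive and label == 1:
            tp += 1
        elif predicted_positive and label == 0:
            fp += 1
        elif not predicted_positive and label == 0:
            tn += 1
        else:
            fn += 1
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.5]
labels = [0,   0,   1,    1,   1,    0,   1,   0]

# Raising the threshold trades false positives for false negatives.
for t in (0.3, 0.5, 0.7):
    print(t, confusion_counts(scores, labels, t))
```

Sweeping the threshold like this is the whole idea behind the interactive tool: each threshold choice picks one point on the trade-off between the two error types.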
We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection.
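The pattern the abstract describes, a selection parameterizing a visual encoding through conditional logic, looks roughly like the following Vega-Lite specification. This is a hedged sketch in the Vega-Lite v5 syntax (the paper predates v5 and used a slightly different `selection` block); the inline data values are invented for illustration.

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"values": [
    {"x": 1, "y": 2, "category": "a"},
    {"x": 3, "y": 5, "category": "b"},
    {"x": 4, "y": 1, "category": "a"}
  ]},
  "params": [{"name": "brush", "select": "interval"}],
  "mark": "point",
  "encoding": {
    "x": {"field": "x", "type": "quantitative"},
    "y": {"field": "y", "type": "quantitative"},
    "color": {
      "condition": {"param": "brush", "field": "category", "type": "nominal"},
      "value": "lightgray"
    }
  }
}
```

Here the `brush` interval selection drives the color channel: points inside the brush are colored by category, points outside fall back to gray, and the compiler synthesizes all the event handling.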
This is a story about companies that like aggregations a bit too much. Data-driven decision making seems to be the new holy grail in management, but can the numbers always be trusted? What matters most in data-savvy businesses: the people, the right technology, or – spoiler alert – something more fundamental? These questions become particularly urgent in the new economy, where failing to embrace data can be a major growth impediment or, worse, a death sentence for the business.