Recurrent neural networks, and in particular long short-term memory networks (LSTMs), are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVis, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with domain-specific structural annotations. We further show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis.
Kaggle’s annual March Machine Learning Mania competition drew 442 teams to predict the outcomes of the 2017 NCAA Men’s Basketball tournament. In this winner’s interview, Kaggler David Scott describes how he came in 5th place by stepping back from solution mode and taking the time to plan out his approach to the project methodically.
NLG tools automate analysis and enhance traditional BI platforms by explaining, in plain English, the significance of visualizations and findings – here is an overview of the market.
We’re excited to announce RStudio Connect version 1.5.0, which introduces a powerful new way to organize content: tags. Tags allow publishers to arrange what they’ve published and enable users to find and discover the content most relevant to them. The release also includes a newly designed (and customizable!) landing page and multiple important security enhancements.
Ridge regression and the lasso are closely related, but only the lasso has the ability to select predictors. Like OLS, ridge regression attempts to minimize the residual sum of squares in a given model. However, ridge regression includes an additional ‘shrinkage’ penalty – the sum of the squared coefficient estimates – which shrinks the estimates of the coefficients towards zero. The impact of this penalty is controlled by a tuning parameter, lambda (determined separately). Two interesting implications of this design are the facts that when λ = 0 the OLS coefficients are returned, and as λ → ∞ the coefficients approach zero.
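The two limiting cases described above can be seen directly from the closed-form ridge solution. Here is a minimal sketch using NumPy; the data, true coefficients, and lambda values are illustrative, not from the original article:

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge solution: beta = (X'X + lam*I)^(-1) X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Illustrative synthetic data (not from the article)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

ols = ridge_coefficients(X, y, lam=0.0)        # lambda = 0 recovers OLS
shrunk = ridge_coefficients(X, y, lam=1000.0)  # large lambda shrinks coefficients toward zero
```

With `lam=0.0` the solution coincides with the least-squares fit, and as `lam` grows the coefficient vector's norm keeps shrinking toward zero, matching the two limiting cases above.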
AI refers to ‘Artificial Intelligence’, which means making machines capable of performing tasks that would normally require human intelligence.
This is our second post of a new series featuring articles published long ago. We manually selected articles that were most popular or overlooked, time-insensitive (for instance we eliminated articles about data science products because software packages and platforms have evolved so much over the last few years) and we only kept articles that still make sense and are useful today.
Every product team wants to know what makes their product thrive. They want to know how to optimize metrics and leave users the happiest and most engaged. Without a way to definitively understand user behavior, they must turn to anything they can. Enter the A/B test. Ultimately, every A/B test starts from a hypothesis. The hypothesis could be, “If we did [blank], then we would improve conversion.” Or “Feature X should drive increased retention, let’s test out that assumption through an A/B test.” The goal is almost always to drive a KPI such as conversion, retention, engagement, etc. These are lag measures, meaning that they result from a change in a lead measure that the product team can control.
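To make the hypothesis-testing step above concrete, here is a minimal sketch of a pooled two-proportion z-test on conversion counts from two variants. The function name and the sample figures are illustrative assumptions, not taken from the original article:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test for a conversion-rate A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: variant A converts 120/1000, variant B converts 150/1000
z, p = two_proportion_z(120, 1000, 150, 1000)
```

A small p-value would suggest the lead-measure change in variant B really moved the conversion KPI, rather than the difference being noise.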