29 Statistical Concepts Explained in Simple English – Part 13

This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, decision trees, ensembles, correlation, Python, R, Tensorflow, SVM, data reduction, feature selection, experimental design, cross-validation, model fitting, and many more. To keep receiving these articles, sign up on DSC.


Principal Component Analysis (PCA) in Python

Principal Component Analysis (PCA) is a linear dimensionality reduction technique that extracts information from a high-dimensional space by projecting it onto a lower-dimensional subspace. It tries to preserve the essential parts of the data that carry the most variation, while the non-essential parts with less variation are dropped. Dimensions are nothing but the features that represent the data. For example, a 28 x 28 image has 784 picture elements (pixels), which are the dimensions or features that together represent that image. One important thing to note about PCA is that it is an unsupervised dimensionality reduction technique: you can cluster similar data points based on the feature correlation between them without any supervision (or labels), and you will learn how to achieve this practically using Python in a later section of this tutorial! According to Wikipedia, PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components.
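As a minimal sketch of what this looks like in Python (assuming scikit-learn and treating each row as a flattened 28 x 28 image; the linked tutorial gives the full walk-through):

```python
# Minimal PCA sketch with scikit-learn (illustrative data, not the tutorial's).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Pretend each row is a flattened 28 x 28 image, i.e. 784 features.
X = np.random.rand(1000, 784)

# PCA is sensitive to scale, so standardise the features first.
X_scaled = StandardScaler().fit_transform(X)

# Project onto the 2 principal components that capture the most variance.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                  # (1000, 2)
print(pca.explained_variance_ratio_)    # share of variance each component keeps
```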


How do you teach physics to machine learning models?

In my other posts, I have covered topics such as machine learning for anomaly detection and condition monitoring, how machine learning can be used for production optimization, and how to avoid the common pitfalls of machine learning for time series forecasting. But did you know that you can also combine machine learning and physics-based modeling? Here, I will describe how it can be done and how we can ‘teach physics’ to machine learning models.


Extreme Event Forecasting with LSTM Autoencoders

Dealing with extreme event prediction is a frequent nightmare for every Data Scientist. Looking around, I found very interesting resources that deal with this problem. Personally, I fell in love with the approach released by Uber researchers. In their papers (two versions are available here and here) they developed an ML solution for daily forecasting of traveler demand. Their methodology caught my attention for its ingenuity, clear explanation and easy implementation. So my purpose is to reproduce their findings in Python. I’m very satisfied with this challenge, and by the end I had improved my knowledge of regression forecasting.
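Uber’s exact architecture is described in their papers; purely as a hedged illustration of the general LSTM-autoencoder pattern the approach builds on (Keras, with made-up window sizes and dummy data), a sequence can be compressed to a fixed-length encoding and reconstructed like this:

```python
# Hedged sketch of a generic LSTM autoencoder -- not Uber's exact model, just
# the encode/decode pattern used to extract features for a downstream forecaster.
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

timesteps, n_features = 30, 1              # assumed: 30-step windows of one series
inputs = Input(shape=(timesteps, n_features))

encoded = LSTM(16)(inputs)                 # fixed-length encoding of the window
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(16, return_sequences=True)(decoded)
decoded = TimeDistributed(Dense(n_features))(decoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(100, timesteps, n_features)   # dummy data for illustration
autoencoder.fit(X, X, epochs=2, verbose=0)

# The trained encoder's output can then be fed to a separate forecasting model.
encoder = Model(inputs, encoded)
window_features = encoder.predict(X)
```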


AI/Machine Learning in the Enterprise Just Became Much More Challenging

A few recent headlines say it all: the European Union has imposed a number of data protection and privacy constraints under the General Data Protection Regulation; France recently fined Google 50 million euros for misleading people on how it uses their data; in the United States, the New York Times published a major series of articles highlighting the privacy issues; in Canada, the government recently updated its Elections Modernization Act to regulate foreign election interference using social media; and even Facebook is making a compelling case for much more expansive and stronger data privacy and protection regulation, based on a recent article by Facebook’s chief executive officer, Mark Zuckerberg. While the social media giants have been at the forefront of data privacy concerns, highly publicized data breaches at large corporations have also captured the headlines: Equifax has spent over $240 million on IT and security improvements as a result of a data breach that impacted almost 148 million consumers; JP Morgan issued an SEC filing highlighting a 2014 data breach that compromised data for 76 million customers; Yahoo exposed data from 3 billion user accounts in 2013, resulting in a market capitalization loss in the hundreds of millions while it was in negotiations to be purchased by Verizon; and there are many more examples. The bottom line is this: lax data management capabilities have a tangible and material cost to companies and, more importantly, to people, causing a rapid change in the regulatory environment governing user data and privacy that is, in time, sure to propagate to many more jurisdictions. These changes will have a material impact on how enterprises manage their data. Since AI/ML is a prime consumer of data, this is sure to affect how it operates in the enterprise.


Function Approximation in Reinforcement Learning

In tabular methods like DP and Monte Carlo, we have seen that the representation of the states is actually a memorisation of each state. Now let’s recall what exactly a state is. A state is the value taken by a set of observable features or variables, which means that every time a feature or a variable takes a new value, the result is a new state. Let’s take a concrete example: suppose an agent is in a 4×4 grid, so the location of the agent on the grid is a feature. This gives 16 different locations, meaning 16 different states.
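To make the 4×4 example concrete (a hedged sketch, not code from the original post): a tabular method stores one value per state, while function approximation estimates values from a feature vector, so the number of parameters no longer grows with the number of states.

```python
import numpy as np

# Tabular representation: one stored value per state (16 cells in a 4x4 grid).
n_rows, n_cols = 4, 4
V_table = np.zeros(n_rows * n_cols)        # 16 entries, one per state

def state_index(row, col):
    return row * n_cols + col

# Linear function approximation: v(s) ~= w . x(s), where x(s) is a small
# feature vector; 3 weights instead of one entry per state.
def features(row, col):
    return np.array([row / (n_rows - 1), col / (n_cols - 1), 1.0])

w = np.zeros(3)

def v_hat(row, col):
    return w @ features(row, col)

# One semi-gradient TD(0)-style update for an observed transition (illustrative).
alpha, gamma = 0.1, 0.9
s, s_next, reward = (0, 0), (0, 1), -1.0
td_error = reward + gamma * v_hat(*s_next) - v_hat(*s)
w += alpha * td_error * features(*s)
```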


From Zero to Hero in XGBoost Tuning

A walk through some of the most common (and some not-so-common!) hyperparameters of XGBoost.
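For orientation (a hedged sketch, not the article’s own walk-through), the most commonly tuned hyperparameters appear directly in XGBoost’s scikit-learn-style estimator:

```python
# A few of the commonly tuned XGBoost hyperparameters (illustrative values).
from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=300,        # number of boosted trees
    learning_rate=0.05,      # shrinkage applied to each tree's contribution
    max_depth=4,             # depth of each tree (controls model complexity)
    subsample=0.8,           # fraction of rows sampled per tree
    colsample_bytree=0.8,    # fraction of columns sampled per tree
    reg_lambda=1.0,          # L2 regularisation on leaf weights
)
# model.fit(X_train, y_train) on your own data.
```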


4 easy steps to improve your machine learning code performance

I want to show you 4 easy steps that will help you, without any knowledge of optimization, speed up your code at least twofold and make it more efficient and readable.


Layman’s Introduction to KNN

The k-nearest neighbours algorithm is where most people begin when starting with machine learning.
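As a quick taste (a minimal sketch using scikit-learn’s built-in iris data, not code from the article itself):

```python
# Minimal k-nearest neighbours example with scikit-learn (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A test point takes the majority label of its 5 nearest training points.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```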


Text preprocessing steps and universal reusable pipeline

Before feeding any ML model any kind of data, it has to be properly preprocessed. You must have heard the byword: garbage in, garbage out (GIGO). Text is a specific kind of data and can’t be fed directly to most ML models, so before feeding it to a model you have to somehow extract numerical features from it; in other words, vectorize it. Vectorization is not the topic of this tutorial, but the main thing you have to understand is that GIGO applies to vectorization too: you can extract quality features only from properly preprocessed text.
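As a hedged illustration of the kind of pipeline the article builds (the exact steps and stop-word list will differ), preprocessing might chain lowercasing, tag stripping, punctuation removal and stop-word filtering before any vectorizer sees the text:

```python
# Hedged sketch of a reusable text-preprocessing step; vectorization happens
# only after cleaning.
import re
import string

STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of"}   # tiny illustrative list

def preprocess(text):
    text = text.lower()
    text = re.sub(r"<[^>]+>", " ", text)                    # strip HTML tags
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(tokens)

docs = ["<p>The model IS learning, slowly...</p>"]
cleaned = [preprocess(d) for d in docs]
print(cleaned)   # ['model learning slowly']

# Only now hand the cleaned text to a vectorizer, e.g. sklearn's TfidfVectorizer.
```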


Hyperparameter Tuning On Google Cloud Platform With Scikit-Learn

Google Cloud Platform’s AI Platform (formerly ML Engine) offers a hyperparameter tuning service for your models. Why should you take the extra time and effort to learn how to use it instead of just running the code you already have on a virtual machine? Are the benefits worth it? One reason is that AI Platform offers Bayesian optimization out of the box. It is more efficient than a grid search and will be more accurate than a random search in the long run. AI Platform also offers early stopping and the ability to resume a completed training job, which lets you run a few more well-informed trials using past data if you think you haven’t run enough trials the first time. All of that can speed up the hyperparameter tuning process while doing a better job than either grid search or random search at finding the best fit. When training on lots of data frequently, or running lots of experiments, it can definitely save money and time.
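Roughly, the training script follows a simple contract (a hedged sketch, assuming the cloudml-hypertune helper package and a scikit-learn model; the tuning goal and parameter ranges themselves live in a separate job configuration): hyperparameters arrive as command-line flags, and the metric to optimise is reported back to the service.

```python
# Hedged sketch of the training-script pattern AI Platform's tuner expects.
import argparse
import hypertune                              # cloudml-hypertune (assumed dependency)
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

parser = argparse.ArgumentParser()
parser.add_argument("--n_estimators", type=int, default=100)
parser.add_argument("--max_depth", type=int, default=5)
args = parser.parse_args()

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=args.n_estimators,
                               max_depth=args.max_depth)
score = cross_val_score(model, X, y, cv=3).mean()

# Report the metric the tuning job is configured to maximise.
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="accuracy",     # must match the tag in the job config
    metric_value=score,
)
```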


Deploy TensorFlow models with Istio on Kubernetes

Typical cloud application requirements such as deployment, versioning, scaling and monitoring are also among the common operational challenges of Machine Learning (ML) services. This post will focus on building an ML serving infrastructure to continuously update, version and deploy models.
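The infrastructure itself is defined in Istio and Kubernetes manifests covered in the post; purely as a hedged client-side illustration (host, model name and version number below are placeholders), hitting a versioned TensorFlow Serving REST endpoint behind the gateway looks like this:

```python
# Hedged client-side sketch: querying a versioned TensorFlow Serving REST endpoint.
import requests

HOST = "http://serving.example.com"          # assumed Istio gateway address
url = f"{HOST}/v1/models/my_model/versions/2:predict"

payload = {"instances": [[1.0, 2.0, 5.0]]}   # shape must match the model's signature
response = requests.post(url, json=payload, timeout=10)
print(response.json())                       # {"predictions": [...]}
```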


Data Happy – Measuring the Abstract

The United Nations Sustainable Development Solutions Network has accumulated vast amounts of data on the emotional state of the world for their annual World Happiness Report. But, who cares about our feelings? Some people only care about facts; not feelings. Those capitalist pigs may be interested to hear these facts touted by Silvia Garcia, CEO of Happiest Places to Work, a consulting agency that provides the service of a culture audit to businesses seeking higher profits and growth via happier employees.


ZetaSQL – Analyzer Framework for SQL

ZetaSQL defines a language (grammar, types, data model, and semantics) as well as a parser and analyzer. It is not itself a database or query engine. Instead it is intended to be used by multiple engines wanting to provide consistent behavior for all semantic analysis, name resolution, type checking, implicit casting, etc. Specific query engines may not implement all features in the ZetaSQL language and may give errors if specific features are not supported. For example, engine A may not support any updates and engine B may not support analytic functions.


Free Wolfram Engine for Developers

A locally downloadable Wolfram Engine to put computational intelligence into your applications. The Wolfram Engine is the software system that powers Wolfram products and services, and implements the Wolfram Language and its interfaces and connections, across an unprecedentedly broad range of computational environments. Under continuous development since 1986, the Wolfram Engine represents a remarkable software engineering achievement, using a host of different methods and technologies to adapt its delivery of sophisticated computation and knowledge to the full spectrum of desktop, cloud, mobile and embedded environments, and to support both human and machine usage.


Love Your Job? Someone May be Taking Advantage of You

If someone is passionate about what they do, we see it as more legitimate to exploit them, according to new research from Duke University’s Fuqua School of Business.