Feature Selection and Dimensionality Reduction Using Covariance Matrix Plot

This article will discuss how the covariance matrix plot can be used for feature selection and dimensionality reduction.
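To make the idea concrete, here is a minimal sketch (not the article's code) of inspecting a correlation matrix – the normalized covariance matrix – to flag redundant features; the toy data and the 0.9 threshold are assumptions for illustration:

```python
# Minimal sketch: plot a correlation (normalized covariance) matrix and flag
# redundant features. Toy data and the 0.9 threshold are illustrative only.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
df = pd.DataFrame({
    "x1": x1,
    "x2": 0.9 * x1 + rng.normal(scale=0.1, size=200),  # nearly redundant with x1
    "x3": rng.normal(size=200),                        # independent feature
})

corr = df.corr()  # correlation: covariance rescaled to [-1, 1]

# Plot the matrix as a heatmap
fig, ax = plt.subplots()
im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im)
plt.show()

# Flag one feature from each highly correlated pair (threshold is a choice)
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c].abs() > 0.9).any()]
print("candidate features to drop:", to_drop)
```

Highly correlated pairs carry largely duplicated information, so dropping one member of each pair reduces dimensionality with little loss.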


Complete Hands-Off Automated Machine Learning

Here’s a proposal for real ‘zero touch’, ‘set-em-and-forget-em’ machine learning from researchers at Amazon. If you have an environment as fast-changing as e-retail and a huge number of models matching buyers and products, you could achieve real cost savings and revenue increases by making the refresh cycle faster and more accurate with automation. This capability will likely be coming soon to your favorite AML platform.


Deployed your Machine Learning Model? Here’s What you Need to Know About Post-Production Monitoring

So you’ve built your machine learning model. You’ve even taken the next step – often one of the least spoken about – of putting your model into production (model deployment). Great – you should be all set to impress your end users and your clients. But wait – as a data science leader, your role in the project isn’t over yet. The machine learning model you and your team created and deployed now needs to be monitored carefully. There are different ways to perform post-deployment monitoring, and we’ll discuss them in this article.
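The article’s specific monitoring techniques aren’t reproduced here, but as a hedged illustration of the general idea, one common post-deployment check is to compare live inputs (or model scores) against a training-time baseline. A minimal sketch using scipy’s two-sample Kolmogorov–Smirnov test on invented data:

```python
# Minimal sketch of one common post-deployment check: data drift detection.
# Compares live feature values against a training-time baseline; the data
# and the alert threshold are invented for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training baseline
live_feature = rng.normal(loc=0.3, scale=1.0, size=1000)   # drifted live input

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # the threshold is a policy choice, not a universal rule
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.2e}); "
          "investigate upstream data or consider retraining.")
else:
    print("No significant drift detected.")
```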


What is the ethical supply chain?

Today, supply chains are considered strategic to the business, and meeting customer expectations for ethical and sustainable supply chain operations is increasingly becoming a top priority for supply chain managers. Last year, supply chain research specialists APICS found that 83% of supply chain professionals thought ethics were extremely or very important for their organization. When you consider the brand and reputational damage – not to mention the legal implications – of unethical labor, it’s not difficult to see why. Yet, even today, unethical practices are still being discovered in the supply chains of global brands. Modern supply chains are global, complex and multi-tiered, which makes it easy for unethical practices to happen without the brand’s knowledge. For a major organization, it’s no longer enough to know what your supplier is doing. You need to know what their suppliers are doing, and their suppliers’ suppliers, and so on. In the fashion industry, for example, 93% of organizations admitted that they still don’t know where their cotton was sourced. This can be a very costly error. In 1996, after it came to light that a sportswear manufacturer used child labor, the company had to pay millions of dollars in fines and, far more importantly, had 15% wiped from its corporate value. Yet this company is now an exemplar of what can happen when you build ethics into your supply chain.


The Perceptual and Cognitive Limits of Multivariate Data Visualization

Almost all data visualizations are multivariate (i.e., they display more than one variable), but there are practical limits to the number of variables that a single graph can display. These limits vary depending on the approach that’s used. Three graphical approaches are currently available for displaying multiple variables:
1. Encode each variable using a different visual attribute
2. Encode every variable using the same visual attribute
3. Increase the number of variables using small multiples
In this article, we’ll consider each.
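As a taste of the third approach, here is a minimal small-multiples sketch in matplotlib; the data and layout are invented for illustration, and the article itself treats all three approaches in depth:

```python
# Minimal sketch of approach 3, small multiples: one small chart per category,
# sharing axes so values stay comparable across panels. Data is invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
regions = ["North", "South", "East", "West"]
months = np.arange(1, 13)

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 5))
for ax, region in zip(axes.flat, regions):
    sales = 100 + 10 * rng.standard_normal(12).cumsum()  # fake monthly sales
    ax.plot(months, sales)
    ax.set_title(region)
fig.suptitle("Monthly sales by region (small multiples)")
plt.tight_layout()
plt.show()
```

Because every panel shares the same scales, a fourth variable (here, region) is added without overloading any single chart.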


Zero to ML hero with TensorFlow 2.0

Get a programmer’s perspective with Laurence Moroney, from the basics of machine learning all the way up to building complex computer vision scenarios using convolutional neural networks and natural language processing with recurrent neural networks. Aimed at programmers, the talk is light on math and theory and heavy on code. You’ll start by understanding the concept of training versus programming, creating a very simple example where a neural network is trained to recognize patterns. This extends into computer vision with a scenario where you train a neural network to recognize items of clothing, then branch into more complex images to learn how convolutions can extract features from images – so you can identify a cat by its ears or a horse by its snout. You’ll then switch gears and look into some natural language processing, learning how to tokenize words, train neural networks to classify sentences in context, and maybe even do some basic text generation of your own. This will equip you with an understanding of how neural networks, machine learning, and deep learning form a new paradigm that opens up new scenarios for you to build against.
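For a flavor of the clothing-recognition example described, here is a minimal sketch assuming TensorFlow 2.0 and its built-in Fashion-MNIST dataset (the hyperparameters are illustrative, not taken from the talk):

```python
# Minimal sketch: train a small classifier on Fashion-MNIST (10 clothing
# classes) with TensorFlow 2.0's Keras API. Hyperparameters are illustrative.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```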


You’ll never believe what this AI does! (Spoiler: it detects clickbait)

Researchers have developed an AI to detect clickbait, including headlines written by machines. There are many problems facing reporting today, including fake news, alt-truths, a general lack of factual accuracy, state manipulation, deepfake content, imprisonment of journalists, and those pesky misleading headlines we call clickbait. Many of us claim we don’t fall for clickbait, but most of us will have clicked on one at some point. The headline is normally something along the lines of ‘You wouldn’t believe what the Teletubbies baby looks like now!’ (Yes, ok, I fell for that one embarrassingly recently.) Researchers at Penn State and Arizona State University tasked a group of humans with writing their own clickbait, and machines were then programmed to write artificial clickbait headlines. This data was then used to train an algorithm to detect clickbait written by either people or machines. The researchers claim their clickbait-detecting algorithm is around 14.5 percent more accurate than other systems, and that beyond detecting clickbait, their data can help to improve the performance of other AIs.


We can’t trust AI systems built on deep learning alone

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods.


TensorFlow 2.0 is now available!

Earlier this year, we announced TensorFlow 2.0 in alpha at the TensorFlow Dev Summit. Today, we’re delighted to announce that the final release of TensorFlow 2.0 is now available! Learn how to install it here. TensorFlow 2.0 is driven by the community telling us they want an easy-to-use platform that is both flexible and powerful, and which supports deployment to any platform. TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state of the art in machine learning and build scalable ML-powered applications.
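The release post links the full installation instructions; as a quick sanity check after installing (typically `pip install tensorflow`), a minimal sketch:

```python
# Minimal sketch: verify a TensorFlow 2.0 install and its default behavior.
import tensorflow as tf

print(tf.__version__)          # expect 2.0.x
print(tf.executing_eagerly())  # True: eager execution is on by default in 2.0
print(tf.add(tf.constant(2), tf.constant(3)).numpy())  # 5, no Session needed
```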


Integrated Approach of RFM, Clustering, CLTV & Machine Learning Algorithms for Forecasting

CLTV (customer lifetime value) is a customer relationship management (CRM) issue with an enterprise approach to understanding and influencing customer behavior through meaningful communication, in order to improve customer acquisition, retention, loyalty, and profitability. The whole idea is that a business wants to predict the average amount of money customers will spend with it over the entire life of the relationship. Although statistical methods can be very powerful, they make several stringent assumptions about the types of data and their distributions, and typically can handle only a limited number of variables. Regression-based methods are usually based on a fixed-form equation and assume a single best solution, which means we can compare only a few alternative solutions manually. Further, when these models are applied to real data, the key assumptions of the methods are often violated. Here, I will show how machine learning (ML) methods can forecast consumer purchases by integrating the CLTV and customer transaction variables with the RFM variables.
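As a small illustration of the RFM building block – recency, frequency, and monetary value per customer – here is a minimal pandas sketch; the column names and data are assumptions, and the article’s CLTV and ML steps are not reproduced:

```python
# Minimal sketch: derive RFM variables from a transaction log with pandas.
# Column names (customer_id, date, amount) are assumed for illustration.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "date": pd.to_datetime(["2019-01-05", "2019-03-01", "2019-02-10",
                            "2019-02-20", "2019-03-15", "2019-01-30"]),
    "amount": [50.0, 20.0, 35.0, 60.0, 45.0, 15.0],
})

snapshot = tx["date"].max() + pd.Timedelta(days=1)  # "today" for recency
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),  # days since last buy
    frequency=("date", "count"),                            # number of purchases
    monetary=("amount", "sum"),                             # total spend
)
print(rfm)
```

These per-customer features can then serve as inputs to clustering and to the forecasting models the article goes on to build.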