What Does It Really Mean to Operationalize a Predictive Model?

In a 2017 SAS survey, 83% of respondents said they had made moderate-to-significant investments in big data, but only 33% said they had derived value from those investments. Other, more recent surveys have shown similar results. We have found that the main reason for this gap is a failure to understand the full scope of what is required to operationalize predictions in a way that truly benefits your business. In this post, I’d like to walk you through a sample scenario and show how success requires a process that ensures alignment with the affected people, keeps the focus on the business value to be derived, and iteratively builds out the technology platform.

Matrices as Tensor Network Diagrams

In the previous post, I described a simple way to think about matrices, namely as bipartite graphs. Today I’d like to share a different way to picture matrices – one which is used not only in mathematics, but also in physics, chemistry, and machine learning. Here’s the basic idea. An m×n matrix M with real entries represents a linear map from ℝⁿ to ℝᵐ. Such a mapping can be pictured as a node with two edges. One edge represents the input space, the other edge represents the output space.
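The linear-map picture is easy to check in code. Here is a minimal plain-Python sketch (the matrix and vector values are made up for illustration): an m×n matrix takes a length-n input and produces a length-m output, just like a node with an input edge and an output edge.

```python
# A 2x3 matrix M maps vectors in R^3 to vectors in R^2.
# Plain-Python sketch of the linear map v -> Mv.

def apply_matrix(M, v):
    """Apply matrix M (a list of rows) to vector v.
    The 'node' has an input edge of dimension len(v)
    and an output edge of dimension len(M)."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

M = [[1, 0, 2],
     [0, 3, 1]]   # 2x3: input edge has dimension 3, output edge dimension 2

print(apply_matrix(M, [1, 1, 1]))  # -> [3, 4]
```

The input lives on one edge (dimension 3 here), the output on the other (dimension 2), which is exactly the tensor-network picture of a matrix as a node of degree two.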

Why GraphQL is the key to staying out of technical debt

GraphQL (not to be confused with GraphDB or Open Graph or even an actual graph) is a remarkably creative solution to a relatively common problem: How do you enable front end developers to access backend data in exactly the way they need it? Quick example: We want to display a list of products on a web page. So we write a service which returns a list of products. We make it super RESTful because that’s what someone on a podcast said we should do.
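To make the contrast concrete, here is a toy Python sketch of GraphQL’s core idea: the client names exactly the fields it needs, and the server returns only those. The product data and field names here are invented for the example, and a real GraphQL server would parse a query document rather than accept a plain field list.

```python
# Hypothetical product catalog (made-up data for illustration).
PRODUCTS = [
    {"id": 1, "name": "Widget", "price": 9.99, "sku": "W-1", "stock": 14},
    {"id": 2, "name": "Gadget", "price": 19.99, "sku": "G-2", "stock": 3},
]

def query_products(fields):
    """Return each product trimmed to only the requested fields --
    the opposite of a fixed REST payload that always sends everything."""
    return [{f: p[f] for f in fields} for p in PRODUCTS]

# The product-list page only needs names and prices:
print(query_products(["name", "price"]))
```

The point is that the shape of the response is decided by the caller, so the front end never over-fetches or under-fetches against a one-size-fits-all endpoint.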

What’s New in Deep Learning Research: Introducing Population Based Training

Training and optimization of deep learning models are some of the most challenging aspects of any modern machine intelligence (MI) solution. In many scenarios, data scientists are able to rapidly arrive at the correct set of algorithms for a specific problem, only to spend countless months trying to find the optimal version of the model. Recently, DeepMind published a research paper that proposes a new approach for training and optimizing deep learning models, known as population based training. The optimization of a traditional deep learning model focuses on minimizing its test error without drastically changing the core components of the model. One of the most important approaches in deep learning optimization centers on tuning elements that are orthogonal to the model itself. Deep learning theory typically refers to these elements as hyperparameters. In the past, I’ve written about hyperparameter optimization and its implications in deep learning programs, so I don’t plan to bore you with the details :). Typically, hyperparameters in deep learning programs include elements such as the number of hidden units or the learning rate, which can be tuned to improve the performance of a specific model.
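To give a flavor of the idea, here is a toy sketch of population based training in Python. Everything here is invented for illustration: each “model” is just a learning rate, and its “score” is a stand-in for validation performance; a real PBT setup would train actual networks in parallel, with each worker periodically copying weights and hyperparameters from better performers (exploit) and then perturbing them (explore).

```python
import random

OPTIMUM = 0.01  # pretend this is the unknown best learning rate

def score(lr):
    """Stand-in for validation performance: higher is better."""
    return -abs(lr - OPTIMUM)

def pbt(population_size=10, steps=30, seed=0):
    rng = random.Random(seed)
    # Start with a population of random hyperparameter settings.
    population = [rng.uniform(0.0001, 0.1) for _ in range(population_size)]
    for _ in range(steps):
        ranked = sorted(population, key=score, reverse=True)
        top = ranked[: population_size // 2]
        # Exploit: the bottom half is replaced by copies of the top half...
        # Explore: ...which are then perturbed to keep searching.
        population = top + [lr * rng.choice([0.8, 1.2]) for lr in top]
    return max(population, key=score)

print(pbt())  # converges toward the (here, known) optimum
```

Selection keeps the best settings in the population, so the best-so-far never gets worse, while the multiplicative perturbations let the population keep exploring around it.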

Security with AI and Machine Learning

For security professionals seeking reliable ways to combat persistent threats to their networks, there’s encouraging news. Tools that employ AI and machine learning have begun to replace the older rules- and signature-based tools that can no longer combat today’s sophisticated attacks. In this free ebook, Oracle’s Laurent Gil and Recorded Future’s Allan Liska look at the strengths (and limitations) of AI- and ML-based security tools for dealing with today’s threat landscape – including quickly identifying threats, connecting attack patterns, and allowing operators and analysts to focus on their core mission. You’ll also learn how managed security service providers (MSSPs) use AI and ML to identify patterns from across their customer base. It’s not Robocop – but it’s getting closer.

The house always wins: Monte Carlo Simulation

How do casinos earn money? The trick is simple: the longer you play, the higher the probability that you lose money. Let us take a look at how this works with a simple Monte Carlo simulation. Monte Carlo simulation is a technique used to understand the impact of risk and uncertainty in financial, project management, cost, and other forecasting models. A Monte Carlo simulator helps one visualize most or all of the potential outcomes, giving a better idea of the risk of a decision. Consider an imaginary game in which our player ‘Jack’ rolls an imaginary die to get an outcome of 1 to 100. If Jack rolls anything from 1 to 51, the house wins, but if the number rolled is from 52 to 100, Jack wins. Simple enough?
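The game above is easy to simulate. Here is a minimal Python sketch (function names are mine): Jack rolls a 100-sided die, the house wins on 1–51, and over many simulated rolls the 51%-vs-49% edge shows up in Jack’s win rate.

```python
import random

def play_round(rng):
    """Return True if Jack wins this roll (a roll of 52-100)."""
    return rng.randint(1, 100) >= 52

def simulate(rolls, seed=42):
    """Play many rounds and return Jack's observed win rate."""
    rng = random.Random(seed)
    wins = sum(play_round(rng) for _ in range(rolls))
    return wins / rolls

print(simulate(100_000))  # Jack's win rate hovers near 0.49, below 0.5
```

A 2% edge per roll looks tiny, but compounded over thousands of bets it is exactly why the house always wins.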

Intuitive explanation of precision and recall

One of the first things you will learn (or have learned) when getting into machine learning is the model evaluation concept of precision and recall. If your experience was like mine, you were shown the formula for computing both, and you understood that if your model has high precision and high recall, that’s a good thing. Conversely, if only one of them is high and the other is low, then your model may not be that good. However, an intuitive understanding of these metrics may not have been a takeaway. In this post, I will attempt to illustrate what these metrics actually mean in terms of the performance of your model.
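As a concrete reference, here is a from-scratch Python sketch of the two formulas, using an invented set of labels: precision asks “of everything the model flagged positive, how much was actually positive?”, while recall asks “of all the actual positives, how many did the model find?”

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp)  # fraction of positive predictions that were right
    recall = tp / (tp + fn)     # fraction of actual positives that were found
    return precision, recall

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # made-up ground truth
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # made-up model predictions
print(precision_recall(y_true, y_pred))  # -> (0.75, 0.75)
```

Note that the two can trade off: predicting positive for everything drives recall to 1.0 while precision collapses, which is why you want both high.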

Decentralized AI for the Rest of Us

The emerging field of decentralized artificial intelligence (AI) is becoming one of the most exciting technology trends of the last few months. A lot has been written about the potential value at the intersection of AI and blockchain technologies, and this year we even have entire conferences dedicated to the subject of decentralized AI. However, I feel that a lot of the hype behind decentralized AI fails to highlight some of the key value propositions of this new technology movement that can make it one of the most foundational technology trends of this decade. If you believe that AI is going to become an increasingly influential factor in our daily lives, I believe decentralized AI will be an essential element in guiding the impact that machine intelligence will have on future generations. Sounds dramatic? Let’s look at some of the economic dynamics behind decentralized AI to try to clarify the point.

What’s New in Deep Learning Research: Learning by Debating

Debating plays a key role in how we learn new skills and domains. Think about how much more rapidly you learn something when you are in an environment in which you can express your viewpoints and get immediate feedback. In artificial intelligence (AI) scenarios, most agents are designed to learn in isolation or from environmental feedback, as in reinforcement learning. However, the idea of multiple agents debating a task in order to improve their knowledge is largely unheard of. Why is this discussion even relevant? For AI agents to become mainstream in real-world scenarios, they need to master human-like tasks. A natural way to do this is for AI programs to receive human feedback. However, this seemingly trivial step is incredibly difficult to achieve, as most AI environments are too complex for humans to provide continuous feedback. The interesting dilemma is that, while some AI tasks are too difficult for humans to perform, humans can still provide better feedback about the learning process than most AI agents can. In order to do that, however, the tasks have to be interpretable from the standpoint of human cognition.

Ten Important Updates from TensorFlow 2.0

Go through the ten most important updates introduced in the newly released TensorFlow 2.0, and learn how to implement some of them.

Attempting to Visualize a Convolutional Neural Network in Realtime

While replicating the End-to-End Deep Learning approach for Self-Driving Cars, I was frustrated by the lack of visibility into what the network is seeing. I built a tool to fix this.

Traditional IT Governance Must Be Reengineered For Enterprise AI/ML

What needs to be in place to drive AI/ML agility while ensuring necessary reproducibility and traceability?