Age of Information: A New Concept, Metric, and Tool

The concept of Age of Information (AoI) was introduced in 2011 to quantify the freshness of the knowledge we have about the status of a remote system. More specifically, AoI is the time elapsed since the generation of the last successfully received message containing update information about its source system. Using a simple communication system model, a series of papers produced the first characterizations of the AoI metric by 2012. Since then, AoI has attracted vivid interest, with over 50 publications in the last six years. The attention AoI has been receiving is due to two factors. The first is the sheer novelty of AoI in characterizing the freshness of information, compared, for example, with the metrics of delay or latency. The second is that characterizing the freshness of such information is paramount in a wide range of information, communication, and control systems. By now, age has been studied across a considerable diversity of systems, serving as a concept, a performance metric, and a tool.
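The definition lends itself to a short sketch (my own illustration, not from the article, assuming the process starts at time 0 with a fresh update): the age at time t is t minus the generation time of the freshest received update, so it grows linearly between receptions, and the time-average age is the area under this sawtooth divided by the observation horizon.

```python
def average_aoi(events, horizon):
    """Time-average Age of Information over [0, horizon].

    events: list of (generation_time, reception_time) pairs for
            successfully received updates, sorted by reception time.
    Age at time t is t minus the generation time of the freshest
    update received by t; it grows linearly between receptions.
    """
    area = 0.0
    t_prev = 0.0    # time of the last reception processed so far
    gen_prev = 0.0  # generation time of the freshest update so far
    for gen, recv in events:
        # Age grows linearly from (t_prev - gen_prev) to (recv - gen_prev);
        # the area under this trapezoid is mean height times width.
        a0 = t_prev - gen_prev
        a1 = recv - gen_prev
        area += 0.5 * (a0 + a1) * (recv - t_prev)
        t_prev, gen_prev = recv, gen
    # Tail segment from the last reception to the end of the horizon
    a0 = t_prev - gen_prev
    a1 = horizon - gen_prev
    area += 0.5 * (a0 + a1) * (horizon - t_prev)
    return area / horizon
```

For example, two updates generated at times 1 and 3 and received at times 2 and 4 give a time-average age of 1.5 over a horizon of 5.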

Deep Learning Tutorial to Calculate the Screen Time of Actors in any Video (with Python codes)

When I started my deep learning journey, one of the first things I learned was image classification. It's such a fascinating part of the computer vision fraternity and I was completely immersed in it! But I have a curious mind, and once I had a handle on image classification, I wondered if I could transfer that learning to videos. Was there a way to build a model that automatically identified specific people in a given video at a particular time interval? Turns out, there was, and I'm excited to share my approach with you!
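The core bookkeeping behind this idea can be sketched in a few lines (my own sketch, not the article's code): assuming you have already extracted frames from the video at a known rate and run an image classifier on each one, screen time per person is just the frame count per predicted label divided by the sampling rate. The label names below are hypothetical.

```python
from collections import Counter

def screen_time_seconds(frame_labels, fps):
    """Convert per-frame classifier output into screen time per class.

    frame_labels: one predicted label per sampled frame, e.g. the
                  output of an image classifier run on frames
                  extracted from the video.
    fps: the rate at which frames were sampled (frames per second).
    """
    counts = Counter(frame_labels)
    return {label: n / fps for label, n in counts.items()}

# Four frames sampled at 2 frames per second:
times = screen_time_seconds(["tom", "tom", "jerry", "none"], fps=2)
```

Here `times` maps each label to seconds on screen: 1.0 for "tom" and 0.5 each for "jerry" and "none".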

New Perspective on the Central Limit Theorem and Statistical Testing

You won’t learn this in textbooks, college classes, or data camps. Some of the material in this article is very advanced yet presented in simple English, with an Excel implementation for various statistical tests, and no arcane theory, jargon, or obscure theorems. It has a number of applications, in finance in particular. This article covers several topics under a unified approach, so it was not easy to find a title.
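As a quick stdlib-only illustration of the classical Central Limit Theorem the article builds on (my own demo, not the article's Excel implementation): means of samples drawn from a skewed, non-normal distribution cluster around the true mean with spread shrinking like 1/sqrt(n).

```python
import random
import statistics

random.seed(42)  # fixed seed so the demo is reproducible

# Draw many samples from a skewed, non-normal distribution
# (exponential with mean 1) and record each sample's mean.
n, trials = 50, 2000
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(trials)
]

# The CLT predicts these means cluster around 1 with standard
# deviation roughly 1/sqrt(50), about 0.141, and an approximately
# normal shape despite the skewed parent distribution.
center = statistics.fmean(sample_means)
spread = statistics.stdev(sample_means)
```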

Using machine learning in workload automation

Akhilesh Tripathi shows you how to use machine learning to identify root causes of problems in minutes instead of hours or days.

A Quick Appreciation of the R transform Function

R users who also use the dplyr package will be able to quickly understand the following code that adds an estimated area column to a data.frame.
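In base R the idiom is a single call, e.g. `transform(d, area = width * height)`, which returns a copy of the data frame with the computed column added. For Python readers, a minimal pandas analogue of the same pattern (the data and column names here are hypothetical, not from the article):

```python
import pandas as pd

# A small hypothetical data frame of rectangles
d = pd.DataFrame({"width": [2.0, 3.0, 4.0], "height": [1.0, 2.0, 3.0]})

# assign() plays the role of R's transform(): it returns a new frame
# with the computed column, leaving the original frame untouched.
d2 = d.assign(area=lambda df: df["width"] * df["height"])
```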

Naïve Numerical Sums in R

The sum has no known simpler form, so we have to work with it as it is. It is an infinite sum. How can we compute its value numerically?
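The naive approach is truncation: add terms until they fall below a tolerance. As a sketch (my own example, using a series whose closed form is known so the answer can be checked; note the caveat that the tolerance on the last term does not by itself bound the truncation error):

```python
import math

def truncated_sum(term, tol=1e-12, max_terms=10**7):
    """Sum term(n) for n = 1, 2, ... until a term drops below tol.

    A reasonable approximation when the terms shrink quickly; for
    slowly converging series the omitted tail can still be much
    larger than tol, so the tolerance is not an error bound.
    """
    total = 0.0
    for n in range(1, max_terms + 1):
        t = term(n)
        total += t
        if abs(t) < tol:
            break
    return total

# Check against a sum with a known value: sum of 1/n^2 = pi^2 / 6
approx = truncated_sum(lambda n: 1.0 / n**2, tol=1e-10)
```

With tol = 1e-10 the loop stops near n = 100,000, and the omitted tail of the series is about 1/100,000, which dominates the error, illustrating the caveat above.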

Understanding Neural Networks. From neuron to RNN, CNN, and Deep Learning

Neural networks are among the most popular machine learning algorithms at present. It has been decisively proven over time that neural networks outperform other algorithms in accuracy and speed. With variants like CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), autoencoders, and deep learning, neural networks are slowly becoming for data scientists and machine learning practitioners what linear regression once was for statisticians. It is thus imperative to have a fundamental understanding of what a neural network is, how it is made up, and what its reach and limitations are. This post is an attempt to explain a neural network starting from its most basic building block, the neuron, and later delving into its most popular variations such as CNNs and RNNs.
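That basic building block is small enough to write out directly (a minimal sketch of the standard definition, assuming a sigmoid activation): a neuron computes a weighted sum of its inputs, adds a bias, and passes the result through a nonlinearity.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a
    bias, passed through a sigmoid activation squashing to (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and bias the weighted sum is 0, so the sigmoid
# returns exactly 0.5; a large positive sum saturates toward 1.
mid = neuron([1.0, 2.0], [0.0, 0.0], 0.0)
high = neuron([1.0], [10.0], 0.0)
```

Layers of a network are just many such neurons evaluated in parallel, with each layer's outputs feeding the next layer's inputs.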

Exploratory Statistical Data Analysis with a Real Dataset using Pandas

Sometimes, when facing a data problem, we must first dive into the dataset and learn about it: its properties, its distributions. We need to immerse ourselves in the domain. Today we'll leverage Python's Pandas library for data analysis, and Seaborn for data visualization. As a geeky programmer with a poor sense of aesthetics, I've found Seaborn to be an awesome visualization tool whenever I need to get a point across. It uses Matplotlib under the hood, but sets graphics up with default style values that make them look a lot prettier than I could ever manage to make them. We'll take a look at a dataset, and I'll try to give you an intuition of how to look at different features. Who knows, maybe we'll actually gain some insights from this!
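The first few questions you ask of any dataset follow a common pattern; as a sketch (using a tiny synthetic frame with hypothetical column names in place of the article's dataset, and leaving the Seaborn plots aside to keep it dependency-light):

```python
import pandas as pd

# A tiny synthetic stand-in for a real dataset; in practice you
# would load one with pd.read_csv(...).
df = pd.DataFrame({
    "price": [100, 150, 130, 400, 120],
    "rooms": [2, 3, 3, 5, 2],
    "city":  ["A", "A", "B", "B", "A"],
})

# First looks at the data:
summary = df.describe()              # count, mean, std, quartiles per numeric column
counts = df["city"].value_counts()   # distribution of a categorical feature
by_city = df.groupby("city")["price"].mean()  # one feature conditioned on another
```

From here, the same objects feed directly into plots, e.g. histograms of `price` or per-city box plots.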

Decision Tree: an algorithm that works like the human brain

Decision trees are one of the most popular algorithms used in machine learning, mostly for classification but also for regression problems. Our brain works like a decision tree every time we ask ourselves a question before making a decision. For example: is it cloudy outside? If yes, I will bring an umbrella. When training a model to classify a variable, the idea of the decision tree is to divide the data into smaller datasets based on a certain feature value, until the target variables all fall under one category. While the human brain picks the 'splitting feature' based on experience (i.e. the cloudy sky), a computer splits the dataset based on the maximum information gain. Let's define a simple problem and jump into some calculations to see exactly what this means!
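Information gain has a compact definition worth seeing in code (a minimal sketch of the standard entropy-based formulation, with a made-up umbrella example): it is the entropy of the parent node minus the size-weighted entropy of the child nodes produced by the split.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy of the parent node minus the size-weighted entropy
    of the child nodes produced by a candidate split."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

# The umbrella example: if 'cloudy' perfectly predicts the decision,
# splitting on it recovers all 1 bit of the parent node's entropy.
parent = ["umbrella", "umbrella", "no", "no"]
gain = information_gain(parent, [["umbrella", "umbrella"], ["no", "no"]])
```

A tree learner evaluates this quantity for every candidate feature and splits on the one with the maximum gain.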

Kernel PCA vs PCA vs ICA in Tensorflow/sklearn

Principal Component Analysis performs a linear transformation on given data, but many real-world datasets are not linearly separable. So can we take advantage of higher dimensions without increasing the required computation power too much?
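The answer is the kernel trick: work with the n-by-n kernel matrix of pairwise similarities instead of ever constructing the high-dimensional feature space. A NumPy sketch of kernel PCA with an RBF kernel (my own illustration, not the article's TensorFlow/sklearn code):

```python
import numpy as np

def rbf_kernel_pca(X, gamma, n_components):
    """Kernel PCA with an RBF kernel: PCA in an implicit feature space,
    computed from the n-by-n kernel matrix alone (the kernel trick),
    so the high-dimensional space is never built explicitly."""
    # Pairwise squared Euclidean distances, then the RBF kernel matrix
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    # Center the kernel matrix (equivalent to centering in feature space)
    n = len(K)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J
    # Projections come from the top eigenvectors of the centered kernel
    vals, vecs = np.linalg.eigh(Kc)           # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Z = rbf_kernel_pca(X, gamma=0.5, n_components=2)  # 2-D embedding of 30 points
```

The cost is governed by the n-by-n kernel matrix rather than the (possibly infinite) feature dimension, which is exactly the trade-off the question above is about.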