Comprehensive Introduction to Neural Network Architecture

This article is the second in a series aimed at demystifying the theory behind neural networks and showing how to design and implement them for solving practical problems. In this article, I will cover the design and optimization aspects of neural networks in detail.
The topics in this article are:
• Anatomy of a neural network
• Activation functions
• Loss functions
• Output units
• Architecture
These tutorials are largely based on the notes and examples from multiple classes taught at Harvard and Stanford in the computer science and data science departments.
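As a taste of those topics, here is a minimal sketch (my own, not taken from the article) of a one-hidden-layer network in NumPy, showing a hidden activation function (ReLU), an output unit (sigmoid), and a loss function (binary cross-entropy):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):                     # hidden-layer activation
    return np.maximum(0, z)

def sigmoid(z):                  # output unit for binary classification
    return 1 / (1 + np.exp(-z))

def bce(y, p):                   # binary cross-entropy loss
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# forward pass of a one-hidden-layer network on toy data
X = rng.normal(size=(8, 4))              # 8 samples, 4 features
y = rng.integers(0, 2, 8)                # binary targets
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

h = relu(X @ W1 + b1)                    # hidden layer
p = sigmoid(h @ W2 + b2).ravel()         # predicted probabilities
print("loss:", bce(y, p))
```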


The Kalman Filter and External Control Inputs

In this article, you will
• Use the statsmodels Python module to implement a Kalman Filter model with external control inputs,
• Use Maximum Likelihood to estimate unknown parameters in the Kalman Filter model matrices,
• See how cumulative impact can be modeled via the Kalman Filter. (This article uses the fitness-fatigue model of athletic performance as an example and doubles as Modeling Cumulative Impact Part IV.)
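As a flavor of the statsmodels approach, here is a minimal sketch (my own, not the article's actual model): a scalar state space model where an external control u_t, such as training load, enters the state equation through a time-varying state intercept, with the unknown parameters estimated by maximum likelihood. The class name and parameterization are mine; indexing follows statsmodels' state space conventions.

```python
import numpy as np
from statsmodels.tsa.statespace.mlemodel import MLEModel

class ControlledLevel(MLEModel):
    """Scalar state space model with an external control input u_t entering
    the state equation via the (time-varying) state intercept."""
    def __init__(self, endog, u):
        super().__init__(endog, k_states=1, k_posdef=1)
        self.u = np.asarray(u, dtype=float)
        self['design', 0, 0] = 1.0       # y_t observes the state directly
        self['selection', 0, 0] = 1.0

    @property
    def param_names(self):
        return ['phi', 'beta', 'sigma2.state', 'sigma2.obs']

    @property
    def start_params(self):
        return np.array([0.9, 0.1, 1.0, 1.0])

    def transform_params(self, unconstrained):
        out = np.array(unconstrained)
        out[2:] = out[2:] ** 2           # keep variances positive
        return out

    def untransform_params(self, constrained):
        out = np.array(constrained)
        out[2:] = out[2:] ** 0.5
        return out

    def update(self, params, **kwargs):
        params = super().update(params, **kwargs)
        phi, beta, s2_state, s2_obs = params
        self['transition', 0, 0] = phi
        # the control input enters here, shaped (k_states, nobs)
        self['state_intercept'] = (beta * self.u).reshape(1, -1)
        self['state_cov', 0, 0] = s2_state
        self['obs_cov', 0, 0] = s2_obs
        return params

# y: observed performance, u: training load; MLE recovers phi and beta:
# res = ControlledLevel(y, u).fit(disp=False); print(res.summary())
```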


Mastering the Features of Google Colaboratory

Google Colaboratory is a research tool for data science and machine learning. It's a Jupyter notebook environment that requires no setup to use. It is one of the best tools for data scientists because most common packages and libraries come pre-installed: you simply import them, whereas in a typical local IDE you would have to install them yourself. Moreover, notebooks are meant for code and explanation together, so a well-written notebook often reads like a blog post. I have been using Google Colab for the past two months, and it has been the best tool for me. In this blog post, I will share tips and tricks for mastering Google Colab. Read through all the points; these are the features I struggled with at first and have since mastered. Let's look at the best features of the Google Colab notebook.
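For instance, a standard Colab workflow (not specific to this post) looks like this: most common libraries import directly, anything missing can be installed from a cell with a shell escape, and Google Drive can be mounted for file access.

```python
import pandas as pd                # pre-installed in Colab; just import it

# anything missing installs from a notebook cell with a shell escape:
# !pip install lightgbm

# mount Google Drive to read and write your own files
from google.colab import drive
drive.mount('/content/drive')
```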


Optimizing Source-Based-Language-Learning using Genetic Algorithm

What is Source-Based-Language-Learning in this context? Very simple: it is my way of describing the process of learning a language in order to literally understand a source (i.e., a book, a speech, etc.). In the specific case I will be sharing, it translates to learning Classical Arabic to be able to read and comprehend the Quran in its native language, without translation. So why all the drama of using a Genetic Algorithm (GA) to learn a language? Answering that requires a better sense of the problem statement.
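To make the GA idea concrete, here is a toy sketch (my own illustration, not the author's actual setup): evolve a fixed-size study list of words that maximizes coverage of a target text, using selection, crossover, and mutation.

```python
import random

random.seed(0)

source_words = ("a text is a sequence of words and frequent words cover "
                "more of the text than rare words").split()
vocab = sorted(set(source_words))
N = 5                                    # size of the study list

def fitness(individual):
    # fraction of source tokens understood with this study list
    known = set(individual)
    return sum(w in known for w in source_words) / len(source_words)

def crossover(a, b):
    cut = random.randrange(1, N)
    child = list(dict.fromkeys(a[:cut] + b[cut:]))   # merge, dedupe
    while len(child) < N:                            # top up with random words
        child.append(random.choice(vocab))
    return child[:N]

def mutate(ind, rate=0.2):
    return [random.choice(vocab) if random.random() < rate else w for w in ind]

pop = [random.sample(vocab, N) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # elitist selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))
```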


Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors

The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
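A flavor of these operations, using binary spatter-code conventions (XOR for binding, elementwise majority for bundling); this is a minimal sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # hyperdimensional: very long random vectors

def hv():                       # a fresh random binary hypervector
    return rng.integers(0, 2, D, dtype=np.int8)

def bind(a, b):                 # binding: XOR (invertible; resembles neither input)
    return a ^ b

def bundle(*vs):                # bundling: bitwise majority (resembles all inputs)
    return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.int8)

def sim(a, b):                  # similarity: fraction of matching bits
    return float(np.mean(a == b))

# encode the record {name: Alice, age: 42, city: Paris} as one hypervector
NAME, AGE, CITY = hv(), hv(), hv()
alice, forty_two, paris = hv(), hv(), hv()
record = bundle(bind(NAME, alice), bind(AGE, forty_two), bind(CITY, paris))

# query: unbind the NAME role and compare candidate fillers
print(sim(bind(record, NAME), alice))      # ~0.75: the stored filler
print(sim(bind(record, NAME), forty_two))  # ~0.50: chance, an unrelated vector
```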


TensorFlow World – October 28–31, 2019 – Santa Clara, CA

Since being open-sourced in 2015, TensorFlow has had a significant impact on many industries. With TensorFlow 2.0’s eager execution, intuitive high-level APIs, and flexible model building on any platform, it’s cementing its place as the production-ready, end-to-end platform driving the machine learning revolution. At TensorFlow World you’ll see TensorFlow 2.0 in action, discover new ways to use it, and learn how to successfully implement it in your enterprise.


Towards explainable AI for healthcare: Predicting and visualizing age in Chest Radiographs

I recently published a paper in SPIE 2019 describing a system that estimates a person’s age from chest X-rays (CXR) using deep learning. Such a system can be utilized in scenarios where the patient’s age information is missing; forensics is an example of an area that could benefit. More interestingly, by using deep network activation maps we can visualize which anatomical areas of the CXR age affects most, offering insight into what the network ‘sees’ when estimating age. It might be too early to tell how age estimation and visualization on CXRs can have clinical implications. Nevertheless, the discrepancy between the network’s predicted age and the patient’s real age can be useful for preventative counseling on patient health status. Excerpts from the paper, as well as new experiments, are provided in this post.
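The paper's exact method isn't reproduced here, but a common way to obtain such activation maps is Grad-CAM. Below is a minimal PyTorch sketch under that assumption; the ResNet-18 backbone, random weights, and regression head are my stand-ins, not the paper's architecture.

```python
import torch
import torchvision.models as models

# Untrained stand-in; a real system would load a CNN trained for age regression.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)   # scalar age output
model.eval()

feats, grads = {}, {}
layer = model.layer4                                   # last conv block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed CXR
age = model(x)
age.sum().backward()                   # gradients of the age output

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
cam = torch.relu((w * feats["a"]).sum(dim=1))   # coarse activation map
cam = (cam / (cam.max() + 1e-8)).detach()       # normalize for overlay on the CXR
```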


Recall, Precision, F1, ROC, AUC, and everything

The output of your fraud detection model is a probability in [0.0, 1.0] that a transaction is fraudulent. If this probability is below 0.5, you classify the transaction as non-fraudulent; otherwise, you classify it as fraudulent.
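Concretely, with scikit-learn and toy numbers (the data here is illustrative only):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# y_true: ground truth (1 = fraud); y_prob: the model's predicted probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.7])

y_pred = (y_prob >= 0.5).astype(int)   # the 0.5 decision threshold from above

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_prob))  # threshold-free
```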


How To Use Active Learning To Iteratively Improve Your Machine Learning Models

In this article, I will explain how to use active learning to iteratively improve the performance of a machine learning model. The technique is applicable to any model, but for the purposes of this article I will illustrate it by improving a binary text classifier. All the material covered here is based on the 2018 Strata Data Conference tutorial titled ‘Using R and Python for scalable data science, machine learning and AI’ from Microsoft. I assume the reader is familiar with the concept of active learning in the context of machine learning; if not, the lead section of this Wikipedia article serves as a good introduction.
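The tutorial's own pipeline isn't reproduced here, but a generic uncertainty-sampling loop, the most common active learning strategy, looks roughly like this for a binary text classifier (all names and parameters are mine):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts, labels, seed_size=20, batch=10, rounds=5):
    """texts: list of str; labels: np.ndarray of 0/1 (standing in for an
    oracle; in practice each picked batch goes to a human annotator)."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    labeled = list(range(seed_size))          # pretend these are hand-labeled
    pool = list(range(seed_size, len(texts)))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X[labeled], labels[labeled])
        proba = clf.predict_proba(X[pool])[:, 1]
        # pick pool items whose predictions are closest to 0.5 (most uncertain)
        uncertain = np.argsort(np.abs(proba - 0.5))[:batch]
        picked = [pool[i] for i in uncertain]
        labeled += picked                     # "annotate" and add to training set
        pool = [i for i in pool if i not in picked]
    return clf, vec
```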


Unwrapping the Secrets of SEO: How Does Google’s Knowledge Graph Work?

The Knowledge Graph is Google’s semantic database. This is where entities are placed in relation to one another, assigned attributes, and set in a thematic context or an ontology. But what is an entity? And how does the Knowledge Graph actually work? Find the answers to these questions in our latest Unwrapping the Secrets of SEO, part three and the final installment of Olaf Kopp’s series looking at Google’s semantics and machine learning.
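As a toy illustration of the idea (entities, typed relations, attributes; not Google's internals):

```python
# a tiny knowledge graph as subject-predicate-object triples plus attributes
triples = [
    ("Leonardo da Vinci", "painted", "Mona Lisa"),
    ("Mona Lisa", "located_in", "Louvre"),
    ("Louvre", "city", "Paris"),
]
attributes = {"Mona Lisa": {"type": "Painting", "year": 1503}}

def neighbors(entity):
    # entities reachable from `entity` via one typed relation
    return [(p, o) for s, p, o in triples if s == entity]

print(neighbors("Mona Lisa"))   # [('located_in', 'Louvre')]
```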


PyRobot

PyRobot is a framework and ecosystem that enables AI researchers and students to get up and running with a robot in just a few hours, without specialized knowledge of the hardware or of details such as device drivers, control, and planning.
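Getting started looks roughly like this (adapted from my recollection of the project's published examples; check the repo for the current API):

```python
from pyrobot import Robot

# instantiate a robot by name; 'locobot' is the platform used in the examples
robot = Robot('locobot')

robot.arm.go_home()                       # move the arm to its home pose
joints = [0.4, 0.7, -0.5, -1.4, 0.9]      # target joint angles in radians
robot.arm.set_joint_positions(joints, plan=True)  # plan and execute the motion
```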


Self-Supervised Learning

122 slides, very readable, about learning from images, from video, and from video with sound.


End-User Probabilistic Programming (DRAFT)

Probabilistic programming aims to help users make decisions under uncertainty. The user writes code representing a probabilistic model, and receives outcomes as distributions or summary statistics. We consider probabilistic programming for end-users, in particular spreadsheet users, estimated to number in tens to hundreds of millions. We examine the sources of uncertainty actually encountered by spreadsheet users, and their coping mechanisms, via an interview study. We examine spreadsheet-based interfaces and technology to help reason under uncertainty, via probabilistic and other means. We show how uncertain values can propagate uncertainty through spreadsheets, and how sheet-defined functions can be applied to handle uncertainty. Hence, we draw conclusions about the promise and limitations of probabilistic programming for end-users.
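In the spirit of the paper's uncertain values (this sketch is my own illustration, not the authors' design): represent a cell's value by Monte Carlo samples and let ordinary formulas propagate the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

class Uncertain:
    """A value represented by samples; arithmetic propagates uncertainty."""
    def __init__(self, samples):
        self.samples = np.asarray(samples)

    @classmethod
    def normal(cls, mean, sd, n=10_000):
        return cls(rng.normal(mean, sd, n))

    def __add__(self, other):
        return Uncertain(self.samples + other.samples)

    def __mul__(self, other):
        return Uncertain(self.samples * other.samples)

    def summary(self):
        return self.samples.mean(), self.samples.std()

price = Uncertain.normal(100, 5)   # e.g. a spreadsheet cell with noise
qty = Uncertain.normal(20, 2)
revenue = price * qty              # an ordinary formula over uncertain cells
print(revenue.summary())           # mean near 2000, with the propagated spread
```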