In the beginning we saw how the human eye works with the brain to provide us with a stream of visual information. From simple cells detecting dots and lines in the early layers to higher-level features in the later layers, this mechanism was replicated in neural networks. The approach was later refined by using layers of sliding, generalizable filters in place of fully connected layers. While this gives the reader a general overview of the workings of convolutional neural networks, there are many finer details that cannot be covered without going deeper into the topic. Also, the architecture discussed here is a very general one used for classification tasks; there are more creative approaches for other tasks such as object detection and neural style transfer (transferring an art style onto another image). I hope to cover the implementation of a convolutional neural network in the coming weeks, where some of the aforementioned finer details will be discussed.
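To make the "sliding filter" idea concrete, here is a minimal NumPy sketch (the 4×4 image and the vertical-edge filter are made-up illustrations, not from the article) of a single filter sliding over an image, the same dot-and-line detection described above:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image and record its response
    at every position (no padding, stride 1)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)
    return out

# A tiny image with a dark-to-bright vertical edge in the middle.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge filter: responds only where intensity changes
# between neighboring columns.
edge_filter = np.array([[1.0, -1.0]])

response = conv2d(image, edge_filter)  # nonzero only at the edge
```

The same small filter is reused at every position, which is exactly why convolutional layers need far fewer parameters than fully connected ones.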
In this article, we will make use of two of the first algorithmically described machine learning algorithms for classification: the perceptron and the adaptive linear neuron. We will start by implementing a perceptron step by step in Python and training it to classify different flower species in the Iris dataset. This will help us understand how machine learning algorithms for classification work and how they can be efficiently implemented in Python.
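As a preview of the step-by-step implementation, here is a minimal perceptron sketch in NumPy (the toy measurements below are hypothetical stand-ins for two Iris species, not the real dataset): the weights are nudged only when a sample is misclassified.

```python
import numpy as np

class Perceptron:
    """A minimal Rosenblatt perceptron: predict with a thresholded
    weighted sum, and nudge the weights after each misclassification."""

    def __init__(self, eta=0.1, n_iter=10):
        self.eta = eta          # learning rate
        self.n_iter = n_iter    # passes over the training set

    def fit(self, X, y):
        self.w_ = np.zeros(X.shape[1])
        self.b_ = 0.0
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                # Zero when the prediction is correct, so correct
                # samples leave the weights untouched.
                update = self.eta * (target - self.predict(xi))
                self.w_ += update * xi
                self.b_ += update
        return self

    def predict(self, X):
        return np.where(np.dot(X, self.w_) + self.b_ >= 0.0, 1, -1)

# Hypothetical (sepal length, petal length) pairs for two classes.
X = np.array([[5.0, 1.4], [4.9, 1.5], [5.1, 1.3],
              [7.0, 4.7], [6.4, 4.5], [6.9, 4.9]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = Perceptron(eta=0.1, n_iter=10).fit(X, y)
```

Because the two toy classes are linearly separable, the perceptron learning rule is guaranteed to converge on them.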
Last month, R users from across the world gathered in Toulouse, France to discuss new developments at the useR! conference, the language’s premier international gathering. At nearly every talk I attended, the name Hadley Wickham was mentioned. Wickham is the language’s most important developer. Over the past decade, along with his collaborators, Wickham built a set of popular data analysis and visualization libraries (also known as packages) called the ‘tidyverse,’ which has almost become its own language. Wickham’s libraries are among the most popular in R, and have become the standard for new learners. (R is free to use.) People who stopped using R years ago would barely recognize how people typically use the language today. Some R users are displeased by the dominance of the tidyverse, in part because it is now backed by the company RStudio, which employs Wickham and most of his collaborators. RStudio offers a free user interface for the language, but charges companies for enterprise support. In Toulouse, I spoke with Wickham about the current state of R and what he sees for the future of the language. The conversation has been edited and condensed.
Jupyter Notebooks are currently the hottest programming environment for Pythonistas the world over, especially those who are into machine learning and data science. I discovered Jupyter Notebooks when I first started to get serious about machine learning a few months ago. Initially, I was simply amazed and loved how everything ran inside my browser. However, I soon got disillusioned and found the stock Jupyter Notebook interface very basic, lacking several useful features. That’s when I decided to go hunting for some Jupyter Notebook hacks. In this article, I present several Jupyter Notebook add-ons/extensions and a few Jupyter commands that will enhance your Jupyter Notebooks and increase your productivity. In short: supercharge your Jupyter Notebooks.
The world of convolutional neural network architectures is quickly becoming more crowded. Most students focus on using either the VGG or ResNet networks and rarely explore other architectures, and they often believe that going above 50 layers is both unnecessary and computationally expensive. In this short article, I attempt to show the merits of abandoning the VGG or ResNet architecture and exploring the Densely Connected Convolutional Networks (DenseNet) architecture.
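To see what "densely connected" means, here is a tiny NumPy sketch (a toy stand-in: plain linear maps with ReLU take the place of the real BatchNorm-ReLU-Conv blocks) of a dense block's defining trait: each layer receives the concatenation of the input and all earlier layers' outputs, so the channel count grows by a fixed "growth rate" per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    """One 'layer' of the block: a linear map plus ReLU, standing in
    for the BatchNorm-ReLU-Conv unit of the real architecture."""
    return np.maximum(0.0, x @ weights)

def dense_block(x, n_layers, growth_rate):
    """Each layer sees the concatenation of the input and ALL
    previous layers' outputs: the defining DenseNet connection."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        features.append(layer(inp, w))
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((1, 16))   # a 16-channel input
out = dense_block(x, n_layers=4, growth_rate=12)
# Channels grow linearly: 16 + 4 * 12 = 64.
```

This direct reuse of earlier feature maps is why DenseNets can match deeper plain networks with far fewer parameters per layer.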
Monte Carlo Tree Search is a fancy name for an artificial intelligence algorithm used especially in games. AlphaGo reportedly used this algorithm in combination with neural networks, and MCTS had been used in many other applications before that. Here I explain what the algorithm is and how it works.
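As a taste of how it works, here is a minimal, self-contained UCT sketch on a made-up toy game (the game and all names are illustrative assumptions, not from the article). It repeats the four classic steps: selection, expansion, simulation, backpropagation.

```python
import math
import random

# Toy two-player game: players alternately remove 1 or 2 stones
# from a pile; whoever takes the last stone wins.
def moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children = []
        self.untried = moves(pile)
        self.visits, self.wins = 0, 0.0

    def ucb1(self, c=1.4):
        # Balance exploitation (first term) and exploration (second).
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(pile, n_iter=3000):
    root = Node(pile)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: try one unexplored move, if any remain.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: finish the game with random moves.
        remaining, k = node.pile, 0
        while remaining > 0:
            remaining -= random.choice(moves(remaining))
            k += 1
        # The player who moved INTO `node` won iff an even number of
        # further moves was needed to empty the pile.
        reward = 1.0 if k % 2 == 0 else 0.0
        # 4. Backpropagation: each node keeps statistics from the
        # viewpoint of the player who moved into it, so the reward
        # flips at every level on the way up.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Play the most-visited move from the root.
    return max(root.children, key=lambda c: c.visits).move
```

With enough iterations the search concentrates its visits on the optimal move (from a pile of 4, taking 1 stone leaves the opponent in a losing position).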
Transfer learning is a popular method in computer vision that allows us to build accurate models faster. With transfer learning, instead of starting the learning process from scratch, you start from patterns learned while solving a different problem, leveraging previous learning rather than beginning anew. In computer vision, transfer learning is usually expressed through the use of pre-trained models: a pre-trained model is one that was trained on a large benchmark dataset to solve a problem similar to the one we want to solve. Accordingly, due to the computational cost of training such models, it is common practice to import and use models from the published literature (e.g. VGG, Inception, MobileNet).
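The pattern can be sketched in plain NumPy (in practice you would load a real pre-trained backbone such as VGG from a deep learning library; here a fixed random projection is a hypothetical stand-in): the backbone stays frozen, and only a small new head is trained on its features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pre-trained backbone: a FIXED mapping from raw
# inputs to features. The key point is that it is frozen.
W_backbone = rng.standard_normal((20, 32)) / np.sqrt(20)

def extract_features(x):
    return np.maximum(0.0, x @ W_backbone)  # frozen, never updated

# Toy new task whose labels happen to be predictable from the
# backbone's features: precisely the premise of transfer learning.
X = rng.standard_normal((200, 20))
F = extract_features(X)
y = (F[:, 0] > F[:, 1]).astype(float)

# Train ONLY a small new head (logistic regression) on the features.
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
    grad = p - y
    w -= 0.1 * F.T @ grad / len(y)           # gradient descent step
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == y).mean()
```

Training only the head is what makes transfer learning fast: the expensive feature extractor is reused as-is.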
The leading open source software for time series analytics. Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.
A guide to making a state-of-the-art banknote detector using deep learning. This service recognizes a banknote’s currency (euro or US dollar) and denomination (5, 10, 20, …). Its purpose is social impact, to help blind people, so I took care to make ‘real-life’ training images: holding the banknotes in my hand, sometimes folded, sometimes partly covered. This post hopefully helps encourage others to learn deep learning. I’m using the amazing fast.ai free online course, which I very much recommend. As a testament to their pragmatic, top-down approach, this side project is based on lesson 3. On their online forum you can find many more amazing applications by fellow students.
The introduction of NLP (Natural Language Processing) has revolutionized industry after industry. NLP is a branch of AI (Artificial Intelligence) that helps computers understand, interpret, and manipulate human language. With heaps of data now available to us (thanks to big data), the major challenge industries faced was communicating with computers. Our language system is astoundingly complex and diverse: we can express ourselves in infinite ways, be it verbally, physically, or in writing. The first challenge was written text. We have hundreds of languages, each with its own unique set of grammar and syntax rules.
We are excited to announce polished, a new R package that adds modern user authentication and user administration to your Shiny apps. Polished comes with many of the authentication features required by today’s web apps (e.g. user registration, password reset, email verification, role-based authorization, etc.). Polished is available under the permissive MIT license!
When training a neural network to accomplish a given task, be it image classification or reinforcement learning, one typically refines a set of weights associated with each connection within the network. Another approach to creating successful neural networks that has shown substantial progress is neural architecture search, which constructs neural network architectures out of hand-engineered components such as convolutional network components or transformer blocks. It has been shown that neural network architectures built with these components, such as deep convolutional networks, have strong inductive biases for image processing tasks, and can even perform them when their weights are randomly initialized. While neural architecture search produces new ways of arranging hand-engineered components with known inductive biases for the task domain at hand, there has been little progress in the automated discovery of new neural network architectures with such inductive biases for various task domains.
Have you heard about the latest Natural Language Processing framework that was released recently? I don’t blame you if you’re still catching up with the superb StanfordNLP library or the PyTorch-Transformers framework! There has been a remarkable rise in the amount of research and breakthroughs happening in NLP in the last couple of years.
Any big traditional company dreams of being like Google, Facebook, or Amazon, but one thing separates the former, which existed before the birth of the Internet, from the latter, which started with it: the traditional companies were not born with data mining and machine learning in their DNA.
A novel exploration method based on representation learning. Reinforcement learning can be hard when the reward signal is sparse. In these scenarios, the exploration strategy becomes essential: a good exploration strategy not only helps the agent gain a faster and better understanding of the world, but also makes it robust to changes in the environment. In this article, we discuss a novel exploration method, Exploration with Mutual Information (EMI), proposed by Kim et al. at ICML 2019. In a nutshell, EMI learns representations for both observations (states) and actions in the expectation that we can have a linear dynamics model on these representations. EMI then computes an intrinsic reward as the prediction error under the linear dynamics model. The intrinsic reward combined with the environment reward forms the final reward function, which can then be used by any RL method. To avoid redundancy, we assume you are familiar with the concepts of mutual information and Markov decision processes.
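The intrinsic-reward computation can be sketched in NumPy. This is a simplified illustration of the idea, not the paper's actual training objective: the embeddings here are fixed random projections standing in for the learned representations φ(s) and ψ(a), and the linear model is fit by ordinary least squares on a toy batch of made-up transitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the learned embeddings phi (states) and psi (actions).
P_state = rng.standard_normal((6, 4))    # phi: R^6 -> R^4
P_action = rng.standard_normal((2, 4))   # psi: R^2 -> R^4

def phi(s): return s @ P_state
def psi(a): return a @ P_action

# A toy batch of transitions (s, a, s').
S = rng.standard_normal((256, 6))
A = rng.standard_normal((256, 2))
S_next = S + 0.1 * rng.standard_normal((256, 6))

# Fit the linear dynamics phi(s') ~ phi(s) + M psi(a) by least squares.
residual = phi(S_next) - phi(S)
M, *_ = np.linalg.lstsq(psi(A), residual, rcond=None)

def intrinsic_reward(s, a, s_next):
    """Prediction error of the linear model in embedding space:
    surprising transitions earn a larger exploration bonus."""
    err = phi(s_next) - (phi(s) + psi(a) @ M)
    return float(np.sum(err ** 2))

# Final reward = environment reward + beta * intrinsic bonus.
beta = 0.1
r_total = 1.0 + beta * intrinsic_reward(S[0], A[0], S_next[0])
```

Transitions the linear model predicts well yield little bonus, while poorly modeled (novel) transitions push the agent to explore.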