How to Build Deep Learning Models for Font Classification with TensorFlow: CNN, Deeper CNN, and Hidden Layer Models

You will learn how to build and train decent deep learning models for a classification problem with a few lines of code in TensorFlow. The five models covered in this article are listed below, followed by a short sketch of one of them:
• Logistic regression model
• Single hidden layer model
• Multiple hidden layers model
• Deep CNN with convolutional layer and pooling layer
• Deeper CNN with 2 convolutional layers and 2 pooling layers
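
The sketch below builds the fourth model (one convolutional layer plus one pooling layer) with the tf.keras API. The input size (36x36 grayscale glyph images) and the number of font classes (5) are assumptions for illustration, not values taken from the article.

# Minimal sketch of a CNN font classifier (tf.keras).
# Assumptions: 36x36 grayscale glyph images, 5 font classes.
import tensorflow as tf

NUM_CLASSES = 5   # assumed number of fonts
IMG_SIZE = 36     # assumed image side length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),  # convolutional layer
    tf.keras.layers.MaxPooling2D(2),                                   # pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),                     # hidden layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),          # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)

The deeper variant simply stacks a second Conv2D/MaxPooling2D pair before the Flatten layer.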


AmpliGraph

AmpliGraph is an open-source Python library that predicts links between concepts in a knowledge graph. It is a suite of neural machine learning models for relational learning, a branch of machine learning that deals with supervised learning on knowledge graphs. Use AmpliGraph if you need to:
• Discover new knowledge from an existing knowledge graph.
• Complete large knowledge graphs with missing statements.
• Generate stand-alone knowledge graph embeddings.
• Develop and evaluate a new relational model.
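
As a rough orientation, the snippet below shows what a link-prediction workflow looks like in the style of the AmpliGraph 1.x API. The toy triples are made up, and the exact module paths, class names, and keyword arguments should be treated as assumptions to check against the library's documentation.

# Sketch in the style of the AmpliGraph 1.x API; names and arguments are assumptions.
import numpy as np
from ampligraph.latent_features import ComplEx
from ampligraph.evaluation import evaluate_performance, mrr_score, hits_at_n_score

# A knowledge graph as (subject, predicate, object) triples.
X_train = np.array([
    ["alice", "works_for", "acme"],
    ["bob",   "works_for", "acme"],
    ["alice", "knows",     "bob"],
])
X_test = np.array([["bob", "knows", "alice"]])

model = ComplEx(k=50, eta=5, epochs=100, batches_count=1, seed=0)
model.fit(X_train)

# Rank test triples against corrupted candidates and report standard metrics.
ranks = evaluate_performance(X_test, model=model,
                             filter_triples=np.vstack([X_train, X_test]))
print("MRR:", mrr_score(ranks), "Hits@10:", hits_at_n_score(ranks, n=10))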


Report on Text Classification using CNN, RNN & HAN

In this article I will share my experiences and lessons learned from experimenting with various neural network architectures.
I will cover three main algorithms:
1. Convolutional Neural Network (CNN)
2. Recurrent Neural Network (RNN)
3. Hierarchical Attention Network (HAN)
Text classification was performed on datasets in Danish, Italian, German, English, and Turkish.
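
For reference, a bare-bones text CNN of the kind compared in the report can be written in a few lines of tf.keras; the vocabulary size, sequence length, and number of classes below are placeholder assumptions, not values from the report.

# Minimal text-CNN sketch (tf.keras); VOCAB, MAXLEN, NUM_CLASSES are placeholder assumptions.
import tensorflow as tf

VOCAB, MAXLEN, NUM_CLASSES = 20000, 200, 4

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAXLEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB, 128),              # token embeddings
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram feature detectors
    tf.keras.layers.GlobalMaxPooling1D(),               # strongest feature per filter
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

An RNN variant swaps the Conv1D and pooling layers for an LSTM (or GRU) layer; a HAN adds attention at the word and sentence levels.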


How to Build a Recommendation System for Purchase Data (Step-by-Step)

An application of item-based collaborative filtering with Turicreate and Python. Whether you are responsible for user experience and product strategy in a customer-centric company, or sitting on your couch watching movies with loved ones, chances are you are already aware of some of the ways recommendation technology is used to personalize your content and offers. Recommendation systems are among the most common and easily comprehensible applications of big data and machine learning. The best-known examples include Amazon’s recommendation engine, which provides us with a personalized page when we visit the site, and Spotify’s recommended lists of songs when we listen using their app.
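
As a rough sketch of the approach, item-based collaborative filtering on purchase data looks like this in the style of the Turicreate API; the column names and toy data are illustrative assumptions, not the article's dataset.

# Sketch of item-based collaborative filtering in the style of the Turicreate API.
import turicreate as tc

# Toy purchase log: which customer bought which product (illustrative data).
purchases = tc.SFrame({
    "customer_id": ["c1", "c1", "c2", "c2", "c3"],
    "product_id":  ["p1", "p2", "p1", "p3", "p2"],
})

# Item-based collaborative filtering: item similarity is learned from co-purchase patterns.
model = tc.item_similarity_recommender.create(
    purchases, user_id="customer_id", item_id="product_id")

# Top-3 recommendations per customer.
recs = model.recommend(users=["c1", "c2", "c3"], k=3)
print(recs)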


Improvements in Deep Q-Learning: Dueling Double DQN, Prioritized Experience Replay, and Fixed Q-Targets

In our last article about Deep Q-Learning with TensorFlow, we implemented an agent that learns to play a simple version of Doom. In the video version, we trained a DQN agent that plays Space Invaders. However, during training we saw that there was a lot of variability. Deep Q-Learning was introduced in 2014, and a lot of improvements have been made since then. So today we’ll look at four strategies that dramatically improve the training and the results of our DQN agents:
• Fixed Q-targets
• Double DQNs
• Dueling DQN (aka DDQN)
• Prioritized Experience Replay (aka PER)
We’ll implement an agent that learns to play Doom’s Deadly Corridor scenario. Our agent must navigate toward the fundamental goal (the vest) while staying alive by killing enemies.
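
To illustrate one of the four ideas, the snippet below (not the article's code) computes double-DQN targets: the online network picks the next action and the fixed target network evaluates it, which reduces the overestimation bias of vanilla DQN.

# Illustrative sketch of the double-DQN target with fixed Q-targets (not the article's code).
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """q_online_next, q_target_next: arrays of shape (batch, n_actions)."""
    best_actions = np.argmax(q_online_next, axis=1)                        # action chosen by the online net
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]  # value from the frozen target net
    return rewards + gamma * evaluated * (1.0 - dones)                     # no bootstrap on terminal states

# Example with a batch of two transitions.
q_on  = np.array([[1.0, 2.0], [0.5, 0.1]])
q_tgt = np.array([[1.5, 1.0], [0.2, 0.3]])
print(double_dqn_targets(q_on, q_tgt,
                         rewards=np.array([1.0, 0.0]),
                         dones=np.array([0.0, 1.0])))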


TOP 10 Machine Learning Algorithms

1. Linear Regression
2. Logistic Regression
3. Linear Discriminant Analysis
4. Classification and Regression Trees
5. Naive Bayes
6. K-Nearest Neighbors
7. Learning Vector Quantization
8. Support Vector Machines
9. Bagging and Random Forest
10. Boosting and AdaBoost


Robust Regressions: Dealing with Outliers

It is often the case that a dataset contains significant outliers: observations that fall far outside the range of the majority of other observations in the dataset. Let us see how we can use robust regression to deal with this issue. I described in another tutorial how we can run a linear regression in R. However, that approach does not account for the outliers in our data. So, how can we solve this?
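
The tutorial itself works in R, but the idea carries over directly; as an illustrative sketch, a Huber M-estimator in Python's statsmodels downweights outlying observations instead of letting them dominate the fit.

# Illustrative Python equivalent (the article works in R): OLS vs. robust regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 100)
y[:5] += 30                                   # inject a few gross outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                   # ordinary least squares, pulled by outliers
robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()    # Huber M-estimation downweights outliers

print("OLS slope:   ", ols.params[1])
print("Robust slope:", robust.params[1])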


A Visual Exploration of Gaussian Processes

Even if you have spent some time reading about machine learning, chances are that you have never heard of Gaussian processes. And if you have, revisiting the basics is always a good way to refresh your memory. With this blog post we want to give an introduction to Gaussian processes and make the mathematical intuition behind them more approachable.
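
As a small taste of the intuition, a Gaussian process prior is just a multivariate normal over function values, so sample functions can be drawn with a few lines of NumPy; the kernel and grid below are illustrative choices.

# Illustrative sketch: sample functions from a GP prior with a squared-exponential (RBF) kernel.
import numpy as np

def rbf_kernel(xa, xb, lengthscale=1.0, variance=1.0):
    sq_dists = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

x = np.linspace(-5, 5, 100)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))   # small jitter for numerical stability
samples = np.random.default_rng(0).multivariate_normal(np.zeros(len(x)), K, size=3)
print(samples.shape)  # (3, 100): three sampled functions evaluated at 100 inputs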


Dark Data as the New Challenge for Big Data Science and the Introduction of the Scientific Data Officer

Many studies in big data focus on the uses of data available to researchers, leaving untreated the data that sits on the servers but of which researchers are unaware. We call this dark data, and in this article we present and discuss it in the context of high-performance computing (HPC) facilities. To this end, we provide statistics from a major HPC facility in Europe, the High-Performance Computing Center Stuttgart (HLRS). We also propose a new position tailor-made for coping with dark data and general data management. We call it the scientific data officer (SDO), and we distinguish it from other standard positions in HPC facilities such as chief data officer, system administrator, and security officer. In order to understand the role of the SDO in HPC facilities, we discuss two kinds of responsibilities, namely technical responsibilities and ethical responsibilities. While the former are intended to characterize the position, the latter raise concerns about, and propose solutions to, the control and authority that the SDO would acquire.


Spatio-Temporal Statistics: A Primer

Marketing scientist Kevin Gray asks University of Missouri Professor Chris Wikle about Spatio-Temporal Statistics and how it can be used in science and business.