CycleGANs to Create Computer-Generated Art

CycleGANs are a subset of GANs that take an image and generate a new image reflecting some type of transformation. The cool part of CycleGANs is that you do not need paired images, which is exactly what matters in scenarios where pairs are hard or impossible to collect. For example, if you want to convert pictures of zebras into pictures of horses, paired data would be practically impossible to gather, short of painting the zebras and horses yourself…
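To make the "no paired images" point concrete, here is a minimal sketch of the cycle-consistency idea in PyTorch. The generator names `g_xy` and `g_yx` and the weight `lam` are illustrative assumptions, not names from the article.

```python
import torch
import torch.nn.functional as F

# g_xy and g_yx are hypothetical generator networks mapping domain X -> Y and Y -> X.
def cycle_consistency_loss(g_xy, g_yx, real_x, real_y, lam=10.0):
    """Translate each image to the other domain and back; the round trip should
    reconstruct the original, even though no paired training images exist."""
    rec_x = g_yx(g_xy(real_x))   # X -> Y -> X
    rec_y = g_xy(g_yx(real_y))   # Y -> X -> Y
    return lam * (F.l1_loss(rec_x, real_x) + F.l1_loss(rec_y, real_y))
```

This loss is added to the usual adversarial losses for the two generators, which is what lets the model learn the mapping without any zebra/horse pairs.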


Monte Carlo Tree Search in Reinforcement Learning

A walkthrough of the search algorithm at the heart of DeepMind's AlphaZero AI.
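As a quick illustration of the core idea, here is a minimal sketch of the MCTS selection step using the classic UCT rule. The `node` object with `children`, `visits`, and `total_value` attributes is a hypothetical data structure; AlphaZero itself uses a PUCT variant guided by a policy network rather than plain UCT.

```python
import math

def uct_select(node, c=1.4):
    """Selection step of MCTS: descend to the child with the best trade-off
    between average value (exploitation) and visit count (exploration)."""
    return max(
        node.children,
        key=lambda ch: ch.total_value / (ch.visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
    )
```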


What to do when your data fails OLS Regression assumptions


Progressively-Growing GANs

The Progressively-Growing GAN architecture, released by NVIDIA and published at ICLR 2018, has become the primary showcase of impressive GAN image synthesis. Classically, GANs struggled to go beyond low- and mid-resolution images such as 32² (CIFAR-10) and 128² (ImageNet), but this model is able to generate high-resolution facial images at 1024².


Word2vec from Scratch with NumPy

How to implement a Word2vec model with Python and NumPy
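For flavour, here is a minimal sketch of one skip-gram training step with a full softmax in NumPy. The array names, toy vocabulary size, and learning rate are illustrative assumptions, not taken from the article (which, like most practical implementations, may use tricks such as negative sampling instead of the full softmax).

```python
import numpy as np

vocab_size, embed_dim = 10, 4
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(vocab_size, embed_dim))   # word (input) vectors
W_out = rng.normal(scale=0.1, size=(embed_dim, vocab_size))  # context (output) vectors

def train_step(center, context, lr=0.05):
    """One gradient step for a (center word, context word) index pair."""
    h = W_in[center].copy()                 # the center word's vector
    scores = h @ W_out                      # unnormalized scores over the vocabulary
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # softmax
    grad_scores = probs.copy()
    grad_scores[context] -= 1.0             # gradient of cross-entropy w.r.t. scores
    grad_h = W_out @ grad_scores            # gradient w.r.t. the center vector
    W_out[:] -= lr * np.outer(h, grad_scores)
    W_in[center] -= lr * grad_h
    return -np.log(probs[context])          # cross-entropy loss for this pair

# e.g. train_step(center=2, context=5)
```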


Understanding Semantic Segmentation with UNET

A Salt Identification Case Study


Brief on Recommender Systems

Nowadays, people buy products online more than they do in stores. Purchases used to be guided by reviews from relatives and friends, but now that the options have multiplied and almost anything can be bought digitally, shoppers need reassurance that a product is good and that they will like it. Recommender systems were built to provide that confidence.


Hand Keypoints Detection (https://towardsdatascience.com/hand-keypoints-detection-ec2dca27973e)

How many labelled images are needed to train a network to accurately predict finger and palm-line locations? I was inspired by this blog post, in which the author reported 97.5% accuracy in classifying whether a person was wearing glasses using only 135 training images per class. What accuracy can I get on my task with 60 labelled images from 15 different people?


Attention Seq2Seq with PyTorch: learning to invert a sequence

In this article you’ll learn how to implement sequence-to-sequence models with and without attention on a simple case: inverting a randomly generated sequence.
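To show the shape of the problem, here is a small sketch in PyTorch of the toy data (target = reversed input) plus a dot-product attention step. The function names, vocabulary size, and the particular attention form are illustrative assumptions; the article may use a different scoring function.

```python
import torch

def make_batch(batch_size=32, seq_len=10, vocab=8):
    """Toy task: the target sequence is simply the input sequence reversed."""
    x = torch.randint(1, vocab, (batch_size, seq_len))
    return x, torch.flip(x, dims=[1])

def dot_product_attention(decoder_state, encoder_states):
    """decoder_state: (batch, hidden); encoder_states: (batch, seq_len, hidden)."""
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    weights = torch.softmax(scores, dim=1)      # attention over source positions
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
    return context, weights
```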


Installing R using PowerShell

Installing R from scratch and creating your favorite IDE setup is especially useful when doing a fresh installation or when you are developing and testing different versions. This blog post will guide you through a few essential steps (hopefully there will not be many): downloading the desired R engine and the desired R GUI (in this case RStudio), and preparing additional packages with some custom helper functions to be used in the client set-up / environment. And mostly, it does so using a PowerShell script.


How to Setup a Python Environment for Machine Learning

Setting up your Python environment for machine learning can be a tricky task. If you've never done it before, you might spend hours fiddling with different commands trying to get things to work, when all you really want is to get straight to the ML. In this tutorial, you will learn how to set up a stable Python machine learning development environment, so you can get right down to the ML and never have to worry about installing packages again.
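Once everything is installed, a quick sanity check like the sketch below confirms the environment works. The package list here is just the usual scientific-Python stack, not a list prescribed by the tutorial.

```python
import importlib

# Try to import each core package and report its version.
for pkg in ("numpy", "pandas", "matplotlib", "sklearn"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(mod, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{pkg}: not installed")
```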


An Idiot’s Guide to Word2vec Natural Language Processing

Word2vec is arguably the most famous face of the neural network natural language processing revolution. Word2vec provides direct access to vector representations of words, which can help achieve decent performance across a variety of tasks machines are historically bad at. For a quick examination of how word vectors work, check out my previous article about them. Now we’re going to focus on how word2vec works in the first place. We’ll provide historical context explaining why this approach is so different, then delve into the network and how it works.


Ray

Ray is a distributed execution engine. The same code can be run on a single machine to achieve efficient multiprocessing, and it can be used on a cluster for large computations.
When using Ray, several processes are involved (a minimal usage example follows the list).
• Multiple worker processes execute tasks and store results in object stores. Each worker is a separate process.
• One object store per node stores immutable objects in shared memory and allows workers to efficiently share objects on the same node with minimal copying and deserialization.
• One local scheduler per node assigns tasks to workers on the same node.
• A driver is the Python process that the user controls. For example, if the user is running a script or using a Python shell, then the driver is the Python process that runs the script or the shell. A driver is similar to a worker in that it can submit tasks to its local scheduler and get objects from the object store, but it is different in that the local scheduler will not assign tasks to the driver to be executed.
• A Redis server maintains much of the system’s state. For example, it keeps track of which objects live on which machines and of the task specifications (but not data). It can also be queried directly for debugging purposes.
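A minimal sketch of how these pieces fit together from the driver's point of view, using Ray's core task API; the `square` function and the toy inputs are just for illustration:

```python
import ray

ray.init()  # start the local scheduler, object store, and Redis server on this machine

@ray.remote
def square(x):
    # Runs inside a worker process; the result is placed in the node's object store.
    return x * x

# The driver submits tasks to its local scheduler and fetches results from the object store.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```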


What Are Major Reinforcement Learning Achievements and Papers From 2018?

1. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
2. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
3. Temporal Difference Models: Model-Free Deep RL for Model-Based Control
4. Addressing Function Approximation Error in Actor-Critic Methods
5. Learning by Playing – Solving Sparse Reward Tasks from Scratch
6. Hierarchical Imitation and Reinforcement Learning
7. Unsupervised Predictive Memory in a Goal-Directed Agent
8. Data-Efficient Hierarchical Reinforcement Learning
9. Visual Reinforcement Learning with Imagined Goals
10. Horizon: Facebook’s Open Source Applied Reinforcement Learning Platform


How to identify an AI opportunity: 5 questions to ask

Could AI solve that problem? Speed up that process? Here are five important questions to ask to unearth AI opportunities in your organization.
1. Where can we make better decisions?
2. Where are we most inefficient?
3. Where do we have a lot of relevant data?
4. What business outcome do we want to achieve?
5. Will this actually solve our problem?


Introducing Ludwig, a Code-Free Deep Learning Toolbox

Over the last decade, deep learning models have proven highly effective at performing a wide variety of machine learning tasks in vision, speech, and language. At Uber we are using these models for a variety of tasks, including customer support, object detection, improving maps, streamlining chat communications, forecasting, and preventing fraud. Many open source libraries, including TensorFlow, PyTorch, CNTK, MXNET, and Chainer, among others, have implemented the building blocks needed to build such models, allowing for faster and less error-prone development. This, in turn, has propelled the adoption of such models both by the machine learning research community and by industry practitioners, resulting in fast progress in both architecture design and industrial solutions.


Top 8 Sources For Machine Learning and Analytics Datasets

Your Ultimate Guide For Finding Machine Learning and Analytics Datasets
1. Kaggle Datasets
2. Amazon Datasets
3. UCI Machine Learning Repository
4. Google’s Datasets Search Engine
5. Microsoft Datasets
6. Awesome Public Datasets Collection
7. Government Datasets
8. Computer Vision Datasets