**A Primer On Generative Adversarial Networks**

In this article, I’ll talk about Generative Adversarial Networks, or GANs for short. GANs are one of the very few machine learning techniques that have performed well on generative tasks, or more broadly on unsupervised learning. In particular, they have produced splendid results on a variety of image-generation tasks. Yann LeCun, one of the forefathers of deep learning, has called them “the best idea in machine learning in the last 10 years”. Most importantly, the core conceptual ideas behind a GAN are quite simple to understand (and in fact, you should have a good grasp of them by the time you finish reading this article).
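The core idea can be summarized in one formula. As a reference point (this is the minimax objective from the original GAN paper by Goodfellow et al., 2014, not spelled out in the teaser above), the generator G and discriminator D play the game:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here D tries to maximize the probability of correctly telling real samples from generated ones, while G tries to fool it.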

**A Simple Introduction to Complex Stochastic Processes**

Stochastic processes have many applications, including in finance and physics. They are an appealing way to model many real-world phenomena. Unfortunately, the theory behind them is difficult, making the subject accessible only to a few ‘elite’ data scientists and unpopular in business contexts. One of the simplest examples is the random walk, which is indeed easy to understand with no mathematical background. However, continuous-time stochastic processes are almost always defined and studied using advanced, abstract mathematical tools such as measure theory, martingales, and filtrations. If you wanted to learn about this topic and gain a deep understanding of how these processes work, but were deterred by the jargon and arcane theory in the first few pages of any textbook on the subject, here is your chance to really understand how they work.
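To make the random-walk example concrete, here is a minimal simulation in plain Python (the function name and parameters are my own, not from the article): at each step the walker moves up or down by one with equal probability.

```python
import random

def random_walk(n_steps, seed=None):
    """Simulate a simple symmetric random walk on the integers, starting at 0."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])  # step up or down with equal probability
        path.append(position)
    return path

path = random_walk(10, seed=42)
```

Despite its simplicity, this process already exhibits the key traits of the continuous-time processes the article builds toward (independent increments, zero drift), and Brownian motion can be seen as its scaling limit.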

**Mercedes-Benz Greener Manufacturing Challenge–1st Place Winner’s Interview**

To ensure the safety and reliability of each unique car configuration before it hits the road, Daimler’s engineers have developed a robust testing system. But optimizing the speed of that system across so many possible feature combinations is complex and time-consuming without a powerful algorithmic approach. In this competition, launched earlier this year, Daimler challenged Kagglers to tackle the curse of dimensionality and reduce the time that cars spend on the test bench. Competitors worked with a dataset representing different permutations of Mercedes-Benz car features to predict the time it takes to pass testing. Winning algorithms would contribute to speedier testing, resulting in lower carbon dioxide emissions without lowering Daimler’s standards.

**How To Debug Your Approach To Data Analysis**

In 2005, UCLA economics graduate Michael Burry saw the writing on the wall – the ticking numbers underlying the American mortgage market. Burry’s analysis of US lending practices in 2003 and 2004 led him to believe that housing prices would fall drastically as early as 2007. He turned his ideas to good use, pocketing net profits close to a whopping 489% between 2001 and 2008. Those who overlooked his insights earned a little over 2% over the same period. In the modern world, we can’t overstate the impact of accurate data analysis. The price of small mistakes can be significant – running to millions of dollars, or the failure to predict election results by a laughably wide margin. So why do we make these errors? Why do even the best of us, with years of experience in making data-led decisions and equipped with the latest tools, often struggle to read between the numbers?

**Reinforcement learning with TensorFlow**

The world of deep reinforcement learning can be a difficult one to grasp. Between the sheer number of acronyms and learning models, it can be hard to figure out the best approach to take when trying to solve a reinforcement learning problem. Reinforcement learning theory is not new; in fact, some aspects of it date back to the mid-1950s. If you are completely new to reinforcement learning, I suggest you check out my previous article, ‘Introduction to reinforcement learning and OpenAI Gym,’ to learn the basics. Deep reinforcement learning requires computing a large number of gradient updates, and deep learning tools such as TensorFlow are extremely useful for calculating these gradients. Deep reinforcement learning also requires visual states to be represented abstractly, and for this, convolutional neural networks work best. In this article, we will use Python, TensorFlow, and the reinforcement learning library Gym to solve the 3D Doom health-gathering environment. For the full code and required dependencies, please see the GitHub repository and Jupyter notebook for this article.
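The article’s full Doom setup needs TensorFlow, Gym, and a GPU; as a self-contained illustration of the value-update idea that deep RL scales up, here is a tabular Q-learning sketch on a toy chain environment (this simplified environment and all names are my own, not from the article – deep RL replaces the table below with a neural network):

```python
import random

def q_learning_chain(n_states=4, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a chain: actions 0=left, 1=right, reward 1 at the last state."""
    rng = random.Random(seed)
    goal = n_states - 1
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = rng.randrange(goal)  # exploring starts: begin at any non-terminal state
        for _ in range(50):      # step cap so early, untrained episodes cannot run forever
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda i: q[s][i])
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == goal else 0.0
            # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
            if s == goal:
                break
    return q

q = q_learning_chain()
# after training, the greedy policy moves right in every non-terminal state
```

The single update line is the heart of the method; deep Q-learning keeps exactly this target but fits it with gradient descent, which is where TensorFlow comes in.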

Here are some of the books that I found interesting and useful in 2017.

• Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland

• R for Data Science: Import, Tidy, Transform, Visualize, and Model Data by Hadley Wickham, Garrett Grolemund

• The Third Wave: An Entrepreneur’s Vision of the Future by Steve Case

• Data Smart: Using Data Science to Transform Information into Insight by John W. Foreman

• Data Science for Business: What you need to know about data mining and data-analytic thinking by Foster Provost, Tom Fawcett

• An Introduction to Statistical Learning: With Applications in R by Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani


**Correlation for Maximal Coupling**

An interesting (if vaguely formulated) question on X validated: given two Gaussian variates that are maximally coupled, what is the correlation between these variates? The answer depends on the parameters of both Gaussians, with a correlation of one when the two Gaussians are identical. Answering the question by simulation (as I could not figure out the analytical formula on Boxing Day…) led me back to Pierre Jacob’s entry on the topic on Statisfaction, where simulating the maximal coupling stems from the decompositions p(x) = p(x)∧q(x) + {p(x) − p(x)∧q(x)} and q(x) = p(x)∧q(x) + {q(x) − p(x)∧q(x)}, and incidentally to the R function image.plot (from the R package fields) for including the side legend.
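The decomposition translates directly into a rejection sampler (a sketch in plain Python rather than the post’s R, with names of my own): draw X from p and accept it as the common value with probability min(1, q(X)/p(X)); otherwise draw Y from the residual {q − p∧q} by rejection.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def maximal_coupling(mu1, sigma1, mu2, sigma2, rng):
    """Draw (X, Y) with X ~ N(mu1, sigma1^2) and Y ~ N(mu2, sigma2^2),
    coupled so that P(X = Y) equals the overlap p∧q of the two densities."""
    x = rng.gauss(mu1, sigma1)
    # accept X as the common value with probability min(1, q(X) / p(X))
    if rng.random() * normal_pdf(x, mu1, sigma1) <= normal_pdf(x, mu2, sigma2):
        return x, x
    # otherwise sample Y from the residual part of q, i.e. {q - p∧q}, by rejection
    while True:
        y = rng.gauss(mu2, sigma2)
        if rng.random() * normal_pdf(y, mu2, sigma2) > normal_pdf(y, mu1, sigma1):
            return x, y

rng = random.Random(2024)
pairs = [maximal_coupling(0.0, 1.0, 1.0, 1.0, rng) for _ in range(10_000)]
```

Estimating the correlation in the question then amounts to computing the empirical correlation of `pairs`; when the two Gaussians are identical, every pair meets, recovering the correlation of one noted above.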

**Six Reasons To Learn R For Business**

Reason 1: R Has The Best Overall Qualities

Reason 2: R Is Data Science For Non-Computer Scientists

Reason 3: Learning R Is Easy With The Tidyverse

Reason 4: R Has Brains, Muscle, And Heart

Reason 5: R Is Built For Business

Reason 6: R Community Support

