Finding Optimal Weights of Ensemble Learner using Neural Network

Encountering ensemble learning algorithms in the winning solutions of data science competitions has become the norm. Training multiple learners on a set of hypotheses not only adds robustness to the model, but also enables it to deliver highly accurate predictions. In case you missed it, I would recommend reading Basics of Ensemble Learning Explained in Simple English before you go forward. While building ensemble models, one of the most common challenges people face is finding optimal weights. A few fight hard to solve this challenge, while the not-so-brave convince themselves to apply simple bagging, which assumes equal weight for all models and takes the average of all predicted values.
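To make the weighting problem concrete, here is a minimal Python sketch (not from the post; the predictions and the grid-search weights are made-up for illustration) contrasting equal-weight bagging with a crude search for better weights:

```python
import numpy as np

# Hypothetical predictions from three base models on one validation set;
# in practice these would come from trained learners.
preds = np.array([
    [0.9, 0.2, 0.8, 0.4],  # model A
    [0.7, 0.3, 0.9, 0.5],  # model B
    [0.6, 0.1, 0.7, 0.3],  # model C
])
y_true = np.array([1.0, 0.0, 1.0, 0.0])

def blend(weights, preds):
    """Weighted average of model predictions; weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ preds

# Simple bagging: equal weight for every model.
equal = blend([1, 1, 1], preds)
equal_err = np.mean((equal - y_true) ** 2)

# Crude grid search for better weights (minimise mean squared error).
best_w, best_err = None, np.inf
grid = np.linspace(0, 1, 11)
for w1 in grid:
    for w2 in grid:
        w3 = 1.0 - w1 - w2
        if w3 < 0:
            continue
        err = np.mean((blend([w1, w2, w3], preds) - y_true) ** 2)
        if err < best_err:
            best_w, best_err = (w1, w2, w3), err

print("equal-weight MSE:", equal_err)
print("best weights:", best_w, "MSE:", best_err)
```

The original post goes further and uses a neural network to learn these weights; the grid search above is only the simplest stand-in for that idea.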

What Can I Do With “Deep Learning”?

YCML Machine Learning library on Github

YCML is a new Machine Learning library available on Github as an Open Source (GPLv3) project. It can be used in iOS and OS X applications, and includes Machine Learning and optimization algorithms.

Review: R for Cloud Computing

This is a lively book on a timely topic, or rather a pair of topics, as the book is as much about R as it is about cloud computing. It should prove useful for those interested in the confluence of the two subject areas.

Changing the font of R base graphic plots.

Want to change the font used in your R plots? I have a quite simple solution that works on Mac OS.

Predicting Titanic deaths on Kaggle IV: random forest revisited

On July 19th I used randomForest to predict the deaths on the Titanic in the Kaggle competition. Subsequently I found that both bagging and boosting gave better predictions than randomForest. This I found somewhat unsatisfactory, hence I am now revisiting randomForest. To my disappointment, this does not result in predictions as good as those from bagging and boosting.
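The post runs this comparison in R; a rough Python analogue with scikit-learn, using a synthetic dataset as a stand-in for the Titanic data, might look like this (which models win will depend on the data and tuning):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

# Synthetic binary classification data standing in for the Titanic set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "bagging": BaggingClassifier(n_estimators=200, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each ensemble method.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```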

Generalized Linear Models in R, Part 5: Graphs for Logistic Regression

In my last post I used the glm() command in R to fit a logistic model with binomial errors to investigate the relationships between the numeracy and anxiety scores and their eventual success. Now we will create a plot for each predictor. This can be very helpful for understanding the effect of each predictor on the probability of a 1 response on our dependent variable.
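The post does this with glm() in R; the same idea, sketched in Python with simulated stand-ins for the numeracy and anxiety scores (all names and effect sizes here are assumptions, not the post's data), is to vary one predictor over a grid while holding the other at its mean and trace the predicted probability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated stand-ins for the post's numeracy and anxiety scores.
numeracy = rng.normal(10, 2, 200)
anxiety = rng.normal(13, 2, 200)
# Simulated outcome: success is more likely with high numeracy, low anxiety.
logits = 0.6 * numeracy - 0.4 * anxiety
success = (rng.random(200) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([numeracy, anxiety])
model = LogisticRegression().fit(X, success)

# Probability curve for numeracy, holding anxiety at its mean --
# the values one would plot for this predictor.
grid = np.linspace(numeracy.min(), numeracy.max(), 50)
X_grid = np.column_stack([grid, np.full(50, anxiety.mean())])
probs = model.predict_proba(X_grid)[:, 1]
print(probs.round(3))
```

Plotting `probs` against `grid` (one such curve per predictor) gives the kind of per-predictor effect plot the post builds in R.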