A Subtle Way to Over-Fit
If you train a model on a set of data, it should fit that data well. The hope, however, is that it will also fit new data well. So in machine learning and statistics, people split their data into two parts: they train the model on one part and then see how well it fits the other. This is called cross validation, and it helps prevent over-fitting, that is, fitting a model too closely to the peculiarities of a particular data set.
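As a rough illustration of the idea, here is a minimal sketch of a hold-out split in Python, assuming NumPy and scikit-learn are available; the synthetic data, the linear model, and the 50/50 split are arbitrary stand-ins, not a prescription.

```python
# Minimal sketch: hold out part of the data, fit on the rest, and compare
# training error to held-out error. Data and model are arbitrary examples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

# Train on one half, evaluate on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
train_err = mean_squared_error(y_train, model.predict(X_train))
test_err = mean_squared_error(y_test, model.predict(X_test))

# A large gap between the two errors is a sign of over-fitting.
print(f"train MSE: {train_err:.3f}, held-out MSE: {test_err:.3f}")
```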
Python Data. Leaflet.js Maps.
Folium builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. Manipulate your data in Python, then visualize it on a Leaflet map via Folium.
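To make that concrete, here is a minimal sketch of a typical Folium workflow; the coordinates, zoom level, popup text, and output file name are arbitrary examples.

```python
import folium

# Center a Leaflet map on a latitude/longitude pair.
m = folium.Map(location=[45.5236, -122.6750], zoom_start=13)

# Add a marker with a popup label.
folium.Marker(location=[45.5236, -122.6750], popup="Example point").add_to(m)

# Write the interactive map to a standalone HTML file.
m.save("map.html")
```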
Visualizing Matrix Multiplication As a Linear Combination
When multiplying two matrices, there’s a manual procedure we all know how to go through. Each result cell is computed separately as the dot product of a row in the first matrix with a column in the second matrix. While it’s the easiest way to compute the result manually, it may obscure a very interesting property of the operation: each column of AB is a linear combination of A’s columns, with coefficients taken from the corresponding column of B. Another way to look at it is that each row of AB is a linear combination of B’s rows, with coefficients taken from the corresponding row of A.
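A quick numerical check of both views, assuming NumPy; the matrices here are arbitrary examples.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3x2
B = np.array([[7.0, 8.0],
              [9.0, 10.0]])  # 2x2

# Standard matrix product.
AB = A @ B

# Column view: column j of AB is a linear combination of A's columns,
# weighted by the entries of B's column j.
AB_cols = np.column_stack([
    sum(B[k, j] * A[:, k] for k in range(A.shape[1]))
    for j in range(B.shape[1])
])

# Row view: row i of AB is a linear combination of B's rows,
# weighted by the entries of A's row i.
AB_rows = np.vstack([
    sum(A[i, k] * B[k, :] for k in range(A.shape[1]))
    for i in range(A.shape[0])
])

assert np.allclose(AB, AB_cols)
assert np.allclose(AB, AB_rows)
```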