Demystifying Mathematical Concepts for Deep Learning

Data science is an interdisciplinary field that uses mathematics and advanced statistics to make predictions. All data science algorithms directly or indirectly use mathematical concepts, and a solid understanding of math will help you develop innovative data science solutions such as a recommender system. Being good at mathematics will make your transition into data science easier. As a data scientist, you have to apply the fundamental concepts of mathematics to solve problems; apart from mathematics, you also need domain knowledge, programming skills, business skills, analytical skills, and a curious mindset. There is no escaping mathematics for a data scientist: you have to teach yourself the basics of mathematics and statistics to become one. In this tutorial, you are going to explore basic math concepts for data science.


Why are p-values like needles? It’s dangerous to share them!

For a concept taught in almost every STAT101 class, the amount of debate around p-values is staggering. As a statistician with both Frequentist and Bayesian sympathies, let me try to cut through the noise for you. I’m going to be cheerfully irreverent to both sides.


In 10 minutes: Web Scraping with Beautiful Soup and Selenium for Data Professionals

Web scraping is the process of extracting valuable information from websites and online content. It is a free way to gather datasets for further analysis. In an era where information is highly interconnected, I believe the need for web scraping to extract alternative data is enormous, especially for data professionals like me. The objective of this article is to show you several ways to scrape publicly available information using quick-and-dirty Python code. Spend just 10 minutes reading this article (or, even better, code along) and you will get a quick glimpse of how to build your first web scraping tool. We are going to learn how to scrape data from Wikipedia and an e-commerce site (Lazada), then clean up, process, and save the data to a .csv file, using Beautiful Soup and Selenium as our main web scraping libraries.
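To give a flavor of the approach, here is a minimal sketch using requests and Beautiful Soup that pulls the title and section headings from a Wikipedia page and saves them to a .csv file; the URL, output filename, and column names are illustrative, and the Selenium workflow for dynamic pages is covered in the article itself.

```python
import csv
import requests
from bs4 import BeautifulSoup

# Fetch a public Wikipedia page (URL chosen purely for illustration)
url = "https://en.wikipedia.org/wiki/Web_scraping"
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Grab the page title and every second-level section heading
title = soup.find("h1").get_text(strip=True)
headings = [h.get_text(strip=True) for h in soup.find_all("h2")]

# Save the scraped headings to a .csv file for further analysis
with open("headings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["page", "heading"])
    for heading in headings:
        writer.writerow([title, heading])
```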


Mastering Actor Critic, Part 1

Before delving into the details of the actor critic, let’s remind ourselves of the Policy Gradient. What does it mean to do policy-based reinforcement learning? To put it simply, imagine that a robot finds itself in some situation that appears similar to something it has experienced before. The policy-based method says: since I took action (a) in this particular situation in the past, let’s try the same action this time too. P.S. Don’t confuse similar situations with identical states: in a similar situation the robot or agent is in some new state that resembles what it has experienced before, not necessarily the exact same state it was in.
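To make “repeat the actions that worked” concrete, here is a minimal REINFORCE-style sketch on a made-up one-step problem; the states, reward table, and step size are all illustrative rather than taken from the article. A softmax policy is nudged toward each action in proportion to the reward it earned.

```python
import numpy as np

# Toy illustration of a policy-based update (REINFORCE-style); the
# three-state, two-action reward table and step size are made up.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
theta = np.zeros((n_states, n_actions))   # softmax policy parameters

def policy(state):
    prefs = theta[state]
    probs = np.exp(prefs - prefs.max())   # numerically stable softmax
    return probs / probs.sum()

# Hypothetical one-step environment: action 1 pays more in every state
reward = np.array([[0.0, 1.0],
                   [0.2, 0.8],
                   [0.1, 0.9]])

alpha = 0.1
for _ in range(2000):
    s = rng.integers(n_states)
    probs = policy(s)
    a = rng.choice(n_actions, p=probs)
    # Gradient of log softmax: one-hot(action) minus the action probabilities
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta[s] += alpha * reward[s, a] * grad_log_pi

print(policy(0))   # most of the probability mass should sit on action 1
```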


Open Data for Machine Learning

Open Data is widely accepted as a practice of transparency and accountability by governments and institutions. In this article, we outline why Open Data can unlock the potential of real-world Machine Learning applications.


5 Probability Distributions Every Data Scientist Should Know

Probability distributions are like 3D glasses: they allow a skilled Data Scientist to recognize patterns in otherwise completely random variables. In a way, most other Data Science and Machine Learning skills rest on certain assumptions about the probability distributions of your data. That makes probability knowledge part of the foundation of your toolkit as a statistician, and one of the first steps if you are figuring out how to become a Data Scientist. Without further ado, let us cut to the chase.
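As a warm-up, the following sketch draws samples from five common distributions with numpy and prints their sample means and variances; all parameters are illustrative, and the article itself explains when each distribution applies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw samples from five common distributions (parameters are illustrative)
samples = {
    "bernoulli":   rng.binomial(n=1, p=0.3, size=1000),
    "uniform":     rng.uniform(0.0, 1.0, size=1000),
    "normal":      rng.normal(loc=0.0, scale=1.0, size=1000),
    "poisson":     rng.poisson(lam=4.0, size=1000),
    "exponential": rng.exponential(scale=2.0, size=1000),
}

# Sample moments should sit close to the theoretical mean and variance
for name, xs in samples.items():
    print(f"{name:>12}: mean={xs.mean():.2f}, var={xs.var():.2f}")
```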


Spectral encoding of categorical features

About a year ago I was working on a regression model that had over a million features. Needless to say, training was extremely slow, and the model was overfitting badly. After investigating, I realized that most of the features came from one-hot encoding of categorical features, some of which had tens of thousands of unique values. The problem of mapping categorical features to a lower-dimensional space is not new. Recently, one popular way to deal with it has been the entity embedding layers of a neural network; however, that method assumes a neural network is used. What if we decide to use tree-based algorithms instead? In this case we can use Spectral Graph Theory methods to create a low-dimensional embedding of the categorical features. The idea comes from spectral word embeddings, spectral clustering, and spectral dimensionality reduction algorithms: if you can define a similarity measure between the different values of a categorical feature, you can use spectral analysis methods to find a low-dimensional representation of that feature.
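Here is a minimal sketch of the idea, assuming a hand-made similarity matrix W between four category values; in practice W would come from domain knowledge or co-occurrence statistics. We build the graph Laplacian and take its low-frequency eigenvectors as the embedding.

```python
import numpy as np

# Hand-made similarity matrix W between four category values; in practice
# W would come from domain knowledge or co-occurrence statistics.
W = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.8],
    [0.0, 0.1, 0.8, 0.0],
])

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # unnormalized graph Laplacian

# Eigenvectors of L with the smallest nonzero eigenvalues give a smooth
# low-dimensional embedding of the category values.
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]  # skip the trivial constant eigenvector

print(embedding)  # one row per category value, usable as tree-model features
```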


Fuzzy trees and forests—Review

Data classification and regression are commonly encountered data analysis problems, and researchers have created many tools to deal with them. Fuzzy clustering, fuzzy decision trees, and ensemble classifiers such as fuzzy forests are popular tools for these kinds of problems. We describe some interesting, more or less popular, solutions from these areas to show how they handle classification and regression. This paper is divided into four parts. The first part presents fuzzy clustering, one of the most important building blocks of fuzzy trees, which are based on clusters; several methods of splitting objects into clusters using fuzzy logic are described there. The second part describes different fuzzy decision trees and shows how they handle classification and regression problems. The third part covers forests: ensemble classifiers composed of fuzzy trees. The last part deals with weighted decision making in fuzzy forests.
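For readers unfamiliar with the clustering part, below is a compact fuzzy c-means sketch, one representative method of the family the review surveys; the synthetic data, fuzziness exponent, and iteration count are all illustrative. Unlike hard clustering, each point receives a graded membership in every cluster.

```python
import numpy as np

# Synthetic two-blob data (purely illustrative)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

n_clusters, m = 2, 2.0               # m > 1 is the fuzziness exponent
U = rng.random((len(X), n_clusters))
U /= U.sum(axis=1, keepdims=True)    # memberships sum to 1 per point

for _ in range(100):
    Um = U ** m
    # Cluster centers are membership-weighted means of the data
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # Membership update: inverse distances raised to 2/(m-1), renormalized
    U = dist ** (-2.0 / (m - 1))
    U /= U.sum(axis=1, keepdims=True)

print(centers)  # should approximate the two generating means (0,0) and (5,5)
```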


Automated Machine Learning (AML) Comes of Age – Almost

You know you’ve come of age when major review publications like Gartner and Forrester publish a study on your segment. That’s what has finally happened. Just released is ‘The Forrester New Wave: Automation-Focused Machine Learning Solutions, Q2 2019’. This is the first reasonably deep review of these platforms and covers nine of what Forrester describes as ‘the most significant providers in the segment’: Aible, Bell Integrator, Big Squid, DataRobot, DMway Analytics, dotData, EdgeVerve, H2O.ai, and Squark. I’ve been following these automated machine learning (AML) platforms since they emerged; I first wrote about them in the spring of 2016 under the somewhat scary title ‘Data Scientists Automated and Unemployed by 2025!’. Well, we’ve still got six years to run and it hasn’t happened yet. On the other hand, no-code data science is on the rise, and AML platforms, along with their partially automated brethren, are what’s behind it.


Can design sprints work for Artificial Intelligence applications?

Last week, for the first time, I attended a meetup on Design Sprints (The Design Sprint Underground). I had heard of Design Sprints from Google, but I am not an expert. The organiser, Eran, created a good atmosphere (what he called Israeli-style interaction), oriented to spontaneous discussion, which benefitted the meeting.


Moogsoft and Cisco’s AppDynamics on How AIOps is Growing

Yet another aspect of systems management is getting the artificial intelligence makeover. According to industry experts from AppDynamics and Moogsoft, more vendors are picking up on AIOps (artificial intelligence for IT operations) as IT systems become more intricate. The idea is to use automation to enhance human cognitive capabilities and cut down the time it takes to find solutions to performance problems. The need for such assistance stems from the rise in complexity that came as a tradeoff as IT systems became more agile, says Will Cappelli, global vice president of product strategy at Moogsoft, a developer of an AI platform for IT operations. Cappelli is also working with the AIOps Exchange to help define what AIOps is and how it can help in the increasingly noisy IT world. ‘There is too much going on for the naked human intellect to deal with,’ Cappelli says. This is especially visible, he says, in the amount of data generated by complex systems: data that IT teams must analyze to assess performance issues, anticipate incidents, and handle other tasks. ‘A lot of data that comes in is either redundant or noisy data,’ Cappelli says. Before algorithms can be applied to the data, it is necessary to determine which data is important.
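As a toy illustration of that filtering step (entirely my own sketch, not Moogsoft’s algorithm), the snippet below collapses alerts that repeat the same fingerprint within a short time window before any further analysis.

```python
# Toy alert stream: (timestamp_seconds, host, check, message) tuples
def dedupe(alerts, window=60):
    """Drop alerts that repeat an identical fingerprint within `window`
    seconds of the last alert kept for that fingerprint."""
    seen, kept = {}, []
    for ts, host, check, message in sorted(alerts):
        fingerprint = (host, check, message)
        if fingerprint not in seen or ts - seen[fingerprint] > window:
            kept.append((ts, host, check, message))
            seen[fingerprint] = ts
    return kept

alerts = [
    (0,  "db01",  "cpu",  "load high"),
    (5,  "db01",  "cpu",  "load high"),   # duplicate within window: dropped
    (90, "db01",  "cpu",  "load high"),   # outside window: kept
    (10, "web02", "disk", "90% full"),
]
print(dedupe(alerts))
```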


jExcel v3: The javascript spreadsheet

jExcel is a lightweight vanilla JavaScript plugin for creating web-based interactive tables and spreadsheets compatible with Excel or any other spreadsheet software. You can create an online spreadsheet table from a JS array, JSON, CSV, or XLSX file, and you can copy from Excel and paste straight into your jExcel spreadsheet and vice versa. It is very easy to integrate third-party JavaScript plugins to create your own custom columns and custom editors, and to customize any feature of your application. jExcel offers plenty of input options through its native column types to cover the most common web application requirements, making it a complete solution for web data management. Create amazing applications with the jExcel JavaScript spreadsheet.


The quest for high-quality data

‘AI starts with good data’ is a statement that receives wide agreement from data scientists, analysts, and business owners. There has been a significant increase in our ability to build complex AI models for predictions, classifications, and various analytics tasks, and there’s an abundance of (fairly easy-to-use) tools that allow data scientists and analysts to provision complex models within days. As model building becomes easier, the problem of high-quality data becomes more evident than ever. A recent O’Reilly survey found that those with mature AI practices (as measured by how long they’ve had models in production) cited ‘lack of data or data quality issues’ as the main bottleneck holding back further adoption of AI technologies.


Getting a world of relationships from flat data to visualise with Gephi

Network analysis offers a perspective on data that broadens and enriches any investigation. We often deal with data whose elements are related, but it comes in a tabulated format that is difficult to import into network analysis tools. Relationship data requires a definition of nodes and connections; the two parts have different structures and cannot be placed in a single table, so at least two tables are needed. Network analysis tools define various input formats, one of which is GDF, characterized by its simplicity and versatility. In this session, we will see how to extract the relationships between elements of a CSV file and generate a GDF file to work with in Gephi.
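A minimal sketch of such a conversion, assuming the CSV has hypothetical ‘source’ and ‘target’ columns (the filenames and column names are placeholders): collect the unique nodes, then write the GDF node and edge sections that Gephi reads directly.

```python
import csv

# Read a flat CSV of pairwise relations and emit a GDF file for Gephi.
nodes, edges = set(), []
with open("relations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        src, dst = row["source"], row["target"]
        nodes.update([src, dst])
        edges.append((src, dst))

with open("relations.gdf", "w", encoding="utf-8") as out:
    # GDF has a node section followed by an edge section
    out.write("nodedef>name VARCHAR,label VARCHAR\n")
    for n in sorted(nodes):
        out.write(f"{n},{n}\n")
    out.write("edgedef>node1 VARCHAR,node2 VARCHAR\n")
    for src, dst in edges:
        out.write(f"{src},{dst}\n")
```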


Forecasting tools in development

As I’ve been writing up a progress report for my NIGMS R35 MIRA award, I’ve been reminded of how much of our work is focused on forecasting infrastructure. A common theme in the Reich Lab is making operational forecasts of infectious disease outbreaks. The operational aspect means that we focus on everything from developing and adapting statistical methods for forecasting applications to thinking about the data science toolkit you need to store, evaluate, and visualize forecasts. To that end, in addition to working closely with the CDC in their FluSight initiative, we’ve been doing a lot of collaborative work on new R packages and data repositories that I hope will be useful beyond the confines of our lab. Some of these projects are fully operational and used in our production flu forecasts for the CDC, and some have even gone through a level of code peer review; others are in earlier stages of development. My hope is that by putting this list out there (see below the fold) we will generate some interest (and possibly find some new open-source collaborators) for these projects.


Too Close For Comfort

If you’ve ever been to a mall, you’ve probably noticed a surprising situation: stores like Target, Walmart, JCPenney, and Kohl’s located right next to each other, often within walking distance. It’s a strange phenomenon. Wouldn’t competitors choose to locate themselves farther from similar stores, to reduce competition?
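One classic way to explore the question is a Hotelling-style toy model; the sketch below is my own illustration of the puzzle, not necessarily the article’s explanation. Two stores on a street repeatedly relocate to whichever spot wins the most shoppers, and both end up in the middle.

```python
import numpy as np

# Two stores on a street [0, 1]; shoppers patronize the nearer store and
# ties split evenly. Each store repeatedly moves to its best response.
shoppers = np.linspace(0, 1, 1001)

def share(a, b):
    """Fraction of shoppers a store at `a` wins against a rival at `b`."""
    closer = np.abs(shoppers - a) < np.abs(shoppers - b)
    ties = np.abs(shoppers - a) == np.abs(shoppers - b)
    return closer.mean() + 0.5 * ties.mean()

positions = [0.1, 0.9]
grid = np.linspace(0, 1, 101)
for _ in range(30):
    for i in (0, 1):
        rival = positions[1 - i]
        positions[i] = max(grid, key=lambda p: share(p, rival))

print(positions)  # both stores drift toward the middle of the street
```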