Research quality data and research quality databases

When you are doing data science, you are doing research. You want to use data to answer a question, identify a new pattern, improve a current product, or come up with a new product. The common factor underlying each of these tasks is that you want to use the data to answer a question that you haven’t answered before. The most effective process we have come up with for getting those answers is the scientific research process. That is why the key word in data science is not data, it is science.


DataOps Principles: How Startups Do Data The Right Way

If you have been trying to harness the power of data science and machine learning – but, like many teams, struggling to produce results – there’s a secret you are missing out on. All of those models and sophisticated insights require lots of good data, and the best way to get good data quickly is by using DataOps. What is DataOps? It’s a way of thinking about how an organization deals with data. It’s a set of tools to automate processes and empower individuals. And it’s a new DataOps Engineer role designed to make that thinking real by managing and building those tools.


Getting DataOps Right

Many large organizations have accumulated dozens of disconnected data sources to serve different lines of business over the years. These applications might be useful to one area of the enterprise, but they’re usually inaccessible to other data consumers in the organization. In this short report, five data industry thought leaders explore DataOps – the automated, process-oriented methodology for making clean, reliable data available to teams throughout your company.


Machine Learning Boosting Algorithms – AdaBoost Explained

The general idea behind boosting methods is to train predictors sequentially, each trying to correct its predecessor. The two most commonly used boosting algorithms are AdaBoost and Gradient Boosting. In this article, we’ll cover AdaBoost. At a high level, AdaBoost is similar to Random Forest in that they both tally up the predictions made by each decision tree within the forest to decide on the final classification. There are, however, some subtle differences. For instance, in AdaBoost the decision trees have a depth of 1 (i.e. 2 leaves). In addition, the predictions made by each decision tree have varying impact on the final prediction made by the model.
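As a rough illustration of those two differences, here is a minimal scikit-learn sketch (an assumption; the article’s own code may differ). The default base estimator of AdaBoostClassifier is a depth-1 decision tree, and estimator_weights_ exposes the varying influence each stump has on the final vote:

```python
# Minimal AdaBoost sketch with scikit-learn (assumed library, not the article's code).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Toy binary classification data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# By default, each base estimator is a decision stump (a tree of depth 1).
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
# Unlike a random forest's equal votes, each stump carries its own weight.
print("first 5 stump weights:", clf.estimator_weights_[:5])
```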


Evolution of Machine Translation

In 1949, Warren Weaver, a researcher at the Rockefeller Foundation, presented a set of proposals for machine-based translation, which were based on information theory and on the successes in code breaking during the Second World War. A few years later, machine translation research began in earnest at many US universities. As described in the Hutchins report, the Georgetown-IBM experiment took place on January 7th, 1954: the IBM 701 computer automatically translated 60 Russian sentences into English for the first time in history. This was the first public demonstration of a machine translation system, and it garnered much media and public interest.


Data Science Tactics – A new way to approach data science

A tactic is a sequence of actions aimed at achieving a certain goal. The word originates in military warfare and is thought to have entered the language around 1626. One classic tactic, called the Oblique Order, was used in Greek warfare. The first recorded use of the Oblique Order was in 371 BCE at the Battle of Leuctra, in Greece, when the Thebans defeated the Spartans. The sequence in this tactic is as follows:
• The attacking army concentrates its forces on a single flank, so that its left flank is much stronger than its right
• The left flank advances rapidly while the right flank avoids conflict as much as possible
• The enemy flank facing the attack is outnumbered and defeated
• The victorious left flank then turns to defeat the remaining enemy flanks


How to deal with outliers in a noisy population?

Defining outliers can be a straightforward task. On the other hand, deciding what to do with them always requires some deeper study.
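For instance, a common rule of thumb (a minimal sketch, not necessarily the article’s approach) flags points that fall more than 1.5 interquartile ranges outside the first and third quartiles:

```python
import numpy as np

# A common rule of thumb: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
data = np.array([9.2, 9.8, 10.1, 10.4, 10.9, 11.3, 25.0])  # made-up sample
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(outliers)  # [25.] -- defining them is easy; deciding what to do is not
```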


A Simple Way to Detect Anomaly

When the number of observations in one class is much larger than in the other, it is difficult to train a vanilla CNN classifier: the classifier may simply assign every observation to the majority class in order to achieve high accuracy. One way to handle this problem is to use oversampling or downsampling to balance the data. Adjusting class weights to force the classifier to pay attention to the rare class is also a good idea. However, these methods can cause the model to overfit when the data is extremely imbalanced. Therefore, we’ll look at another method, called anomaly detection, to deal with this case.
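As a rough sketch of the two mitigations mentioned above (scikit-learn is an assumption here; the article works with a CNN, but the idea is the same):

```python
import numpy as np
from sklearn.utils import resample
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced toy data: 95 negatives, 5 positives.
y = np.array([0] * 95 + [1] * 5)
X = np.random.randn(100, 3)

# Option 1: random oversampling of the minority class.
minority = X[y == 1]
upsampled = resample(minority, n_samples=95, replace=True, random_state=0)
X_bal = np.vstack([X[y == 0], upsampled])
y_bal = np.array([0] * 95 + [1] * 95)

# Option 2: class weights inversely proportional to class frequency,
# e.g. passed to a Keras model via model.fit(..., class_weight=weights).
w = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y)
weights = dict(enumerate(w))
print(weights)  # the rare class gets a much larger weight
```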


Deep Neural Networks from scratch in Python

In this guide we will build a deep neural network, with as many layers as you want! The network can be applied to supervised learning problems with binary classification.
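To give a flavor of what “from scratch” means, here is a minimal sketch of one forward pass (the guide’s actual variable names and architecture may differ):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(layer_sizes, seed=0):
    # One weight matrix and bias vector per layer.
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((n_out, n_in)) * 0.01, np.zeros((n_out, 1)))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(X, params):
    # ReLU on hidden layers, sigmoid on the output layer (binary classification).
    A = X
    for i, (W, b) in enumerate(params):
        Z = W @ A + b
        A = sigmoid(Z) if i == len(params) - 1 else relu(Z)
    return A  # probability of the positive class

# Example: 4 input features, two hidden layers, one output unit.
params = init_params([4, 5, 3, 1])
X = np.random.randn(4, 10)  # 10 examples as columns
print(forward(X, params).shape)  # (1, 10)
```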


Neural ODEs: breakdown of another deep learning breakthrough

Hi everyone! If you’re reading this article, you’re most probably catching up with the recent advances happening in the AI world. The topic we will review today comes from NIPS 2018, and it concerns the best paper award from there: Neural Ordinary Differential Equations (Neural ODEs). In this article, I will give a brief introduction to the paper and its importance, but I will emphasize the practical side: how and for what we can apply this new breed of neural networks in applications, if we can at all. As always, if you want to dive straight into the code, you can check this GitHub repository; I recommend launching it in Google Colab.
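As a taste of the core idea (a minimal sketch assuming the torchdiffeq library, which is not necessarily what the repository above uses): instead of a fixed stack of layers, we learn the derivative of the hidden state and let an ODE solver compute the trajectory.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

class ODEFunc(nn.Module):
    """Learns dy/dt; an ODE solver replaces a discrete stack of layers."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.Tanh(), nn.Linear(16, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(1, 2)            # initial hidden state
t = torch.linspace(0.0, 1.0, 10)  # integration times
ys = odeint(func, y0, t)          # solve the ODE; shape (10, 1, 2)
print(ys.shape)
```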


A Beginner’s Guide to Data Visualization Using Matplotlib

The purpose of this article is to provide a very brief introduction to using Matplotlib, one of the most commonly used plotting libraries in Python. By the end of this walkthrough, you’ll know how to make several different kinds of visualizations and how to manipulate some of the aesthetics of a plot. The data used in this tutorial can be found here. This particular dataset comes from data gathered by the World Health Organization, and it contains information that is used to calculate the Happiness Score of a particular country, such as a country’s GDP, life expectancy, and the people’s perception of how corrupt the country’s government is.
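As a taste of what the walkthrough covers, here is a minimal sketch (the values below are hypothetical stand-ins for the dataset’s actual columns):

```python
import matplotlib.pyplot as plt

# Hypothetical values standing in for the happiness dataset's columns.
gdp_per_capita = [0.8, 1.2, 1.5, 0.4, 1.1]
happiness_score = [5.1, 6.4, 7.2, 3.9, 6.0]

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(gdp_per_capita, happiness_score)
ax.set_xlabel("GDP per capita")
ax.set_ylabel("Happiness score")
ax.set_title("Happiness vs. GDP")
plt.show()
```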


Do GANs Dream of Fake Images?

It is common knowledge nowadays that it’s hard to tell real media from fake, whether it be text, audio, video, or images. Each type of media has its own forgery methods. And while faking text is (still) mostly done the old-fashioned way, faking images and videos has taken a great leap forward. Some of us even feel that we can’t tell what’s real and what’s fake anymore. If two years ago the Photoshop-battle subreddit was the state of the art for faking images, and Photoshop experts were the wizards of this field, new techniques have changed things quite a bit. You’ve probably heard of some cutting-edge forgery methods that seriously threaten our perception of what is real and what is fake: the deep-fake technique allows planting any face into any video, and various re-enactment techniques allow moving any face as you want: making expressions, talking, and so on. And this is only the beginning.


Crowdsourcing vs. Managed Teams: A Study in Data Labeling Quality

If you’re like most dev teams, you’re doing data labeling work in-house, and it’s the bulk of the work. Cognilytica found that 80% of AI project time is spent on aggregating, cleaning, labeling, and augmenting data to be used in ML models. That leaves just 20% for the activities that drive strategic value: algorithm development, model training and tuning, and ML operationalization. It’s hard to innovate and accelerate deployments when you’re spending so much time on tasks that can be effectively offloaded. You can flip that dynamic by deploying people strategically in a virtual data production line, but like any well-executed strategy, there will be important tradeoffs to consider. Depending on the question you want your data to answer, you could use crowdsourcing or a managed service. Each workforce option comes with advantages and disadvantages. Data science platform developer Hivemind designed a study to understand these dynamics in more detail.


AI is getting smarter and creepier, and it can even predict when a person will die

There’s been an explosion of breakthroughs in the field of artificial intelligence (AI) over the past few years. The evidence for this can be found in almost every industry, and there’s no doubt that the rise of AI will continue to disrupt existing sectors in the future. AI has already proved its usefulness in automating dull and mundane tasks in industries such as retail, finance, and even construction. However, AI algorithms and software could do so much more. In fact, such a system could be capable of predicting premature death. This may seem frightening, but for the healthcare sector, it could do wonders.


Part I – A new Tool to your Toolkit, KL Divergence

As a data science practitioner, how often have you used the concept of KL Divergence at work? How much clarity and confidence do you have with the concepts of Entropy, Cross-Entropy, or KL Divergence? A little, just theoretical, or the read-once-but-forgotten type? Whatever it is, that’s fine. You must have read some articles about the topic on the Internet; I did too, but they are so theoretical and dry that we forget them with time. But hold on: what if I present a completely different view of it, the view that enabled me to grasp the concept well and make it a strong weapon in my armory? And believe me, once you are through this, you will have an idea of how to utilize these concepts in every small classification, clustering, or other day-to-day machine learning problem. This will be a two-part tutorial. In the first part, we will start by understanding Entropy, then Cross-Entropy, and finally KL Divergence. In part II, we will apply the learned concepts to a dataset, which will make things crystal clear. I will try my level best to keep things simple and intuitive, but please feel free to jump to the references section and research the topics further. Let’s get started on the journey.
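For reference, the three quantities relate very simply (a minimal sketch with made-up distributions, ahead of the tutorial’s own treatment): KL divergence is cross-entropy minus entropy.

```python
import numpy as np

# Two made-up discrete distributions over the same 4 outcomes.
p = np.array([0.1, 0.4, 0.3, 0.2])      # "true" distribution
q = np.array([0.25, 0.25, 0.25, 0.25])  # approximating distribution

entropy = -np.sum(p * np.log2(p))           # H(p)
cross_entropy = -np.sum(p * np.log2(q))     # H(p, q)
kl_divergence = np.sum(p * np.log2(p / q))  # D_KL(p || q)

# KL divergence is the "extra bits" paid for using q instead of p.
print(np.isclose(kl_divergence, cross_entropy - entropy))  # True
```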


Understanding Random Forest

A big part of machine learning is classification – we want to know what class (a.k.a. group) an observation belongs to. The ability to precisely classify observations is extremely valuable for various business applications like predicting whether a particular user will buy a product or forecasting whether a given loan will default or not. Data science provides a plethora of classification algorithms such as logistic regression, support vector machine, naive Bayes classifier, and decision trees. But near the top of the classifier hierarchy is the random forest classifier (there is also the random forest regressor but that is a topic for another day). In this post, we will examine how basic decision trees work, how individual decision trees are combined to make a random forest, and ultimately discover why random forests are so good at what they do.
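As a quick preview, here is a minimal scikit-learn sketch on assumed toy data (not the post’s own example):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data: will a user "buy" (1) or not (0)?
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A forest of 100 decision trees, each trained on a bootstrap sample
# and random feature subsets; the final prediction is a majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))
```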


Machine Learning Classification with Python for Direct Marketing

How can a business become more time-efficient, slash costs, and drive up sales? The question is timeless but not rhetorical. In the next few minutes of your reading time, I will apply a few classification algorithms to demonstrate how a data-analytic approach can contribute to that end. Together we’ll create a predictive model that will help us customise the client databases we hand over to the telemarketing team, so that they can concentrate resources on the more promising clients first.
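The gist of prioritizing “more promising clients” is to rank them by predicted probability rather than by a hard 0/1 label. A minimal sketch (the classifier and data below are placeholders, not necessarily what the article uses):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder client data: 1 = responded to a past campaign, 0 = did not.
X, y = make_classification(n_samples=300, n_features=6, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X, y)

# Rank clients by the predicted probability of a positive response,
# so the telemarketing team calls the most promising ones first.
proba = model.predict_proba(X)[:, 1]
call_order = np.argsort(proba)[::-1]
print("top 5 clients to call:", call_order[:5])
```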


Essentially, all models are wrong, but some models are useful.

For the context of this article, a model can be thought of as a simplified representation of a system or object. Statistical models approximate patterns in a data set by making assumptions about the data, as well as the environment it was gathered in and applies to.


Simplified Logistic Regression

Logistic regression is typically used when the response Y is a probability or a binary value (0 or 1): for instance, the probability that an email message is spam, based on a number of features such as suspicious keywords or IP address.
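At its core, the model passes a linear score through the logistic (sigmoid) function to produce a probability. A minimal sketch with made-up coefficients (the feature names here are hypothetical):

```python
import numpy as np

def sigmoid(z):
    # Maps any real-valued score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted coefficients for two spam features.
intercept = -3.0
w_keywords, w_bad_ip = 1.2, 2.5

# One message: 2 suspicious keywords, sent from a flagged IP (1 = yes).
score = intercept + w_keywords * 2 + w_bad_ip * 1
print("P(spam) =", sigmoid(score))  # ~0.87 for this made-up message
```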


Getting Started with GraphQL: It’s pretty easy!

Many companies have switched over to GraphQL to build their APIs. There’s good reason – it’s a revolutionary way of thinking about how we fetch data.
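To see why the fetching model feels different, note that a client asks for exactly the fields it needs in a single request. A minimal sketch (the endpoint and schema below are hypothetical, for illustration only):

```python
import requests

# Hypothetical GraphQL endpoint and schema.
query = """
{
  user(id: "42") {
    name
    posts { title }
  }
}
"""
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json())  # only the fields we asked for come back
```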