5 Types of Regression and Their Properties

Linear and Logistic regression are usually the first modelling algorithms that people learn for Machine Learning and Data Science. Both are great since they’re easy to use and interpret. However, their inherent simplicity also comes with a few drawbacks, and in many cases they’re not really the best choice of regression model. There are in fact several different types of regression, each with its own strengths and weaknesses.
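To make the contrast concrete, here is a minimal scikit-learn sketch of a few of these families; the toy data, penalty strengths, and targets are illustrative assumptions, not examples from the article.

```python
# A minimal sketch contrasting a few regression families in scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y_cont = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)  # continuous target
y_bin = (y_cont > 0).astype(int)                                           # binary target

# Ordinary least squares: unregularized, easy to interpret, sensitive to collinearity.
print(LinearRegression().fit(X, y_cont).coef_)
# Ridge and Lasso: L2/L1 penalties that trade a little bias for lower variance
# (Lasso can also zero out coefficients entirely, performing feature selection).
print(Ridge(alpha=1.0).fit(X, y_cont).coef_)
print(Lasso(alpha=0.1).fit(X, y_cont).coef_)
# Logistic regression: a linear model for class probabilities, not a curve fit.
print(LogisticRegression().fit(X, y_bin).coef_)
```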


Tensorflow-BEGAN: Boundary Equilibrium Generative Adversarial Networks

I’ve covered GAN and DCGAN in past posts. In 2017, Google published a great paper titled ‘BEGAN: Boundary Equilibrium Generative Adversarial Networks’. ‘BEGAN’, what a nice name! The results are great too: the generated face images look like images from the training dataset.
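For readers curious about the mechanism, here is a minimal sketch of BEGAN’s boundary-equilibrium update in plain Python; the names (recon_loss, gamma, lambda_k) are my own, and the autoencoder discriminator and optimizer code are elided.

```python
# Sketch of BEGAN's training signal: the discriminator is an autoencoder,
# and an equilibrium variable k balances real vs. fake reconstruction losses.

def recon_loss(v, autoencoder):
    """Pixel-wise L1 reconstruction loss of the autoencoder discriminator."""
    return abs(v - autoencoder(v)).mean()

k = 0.0          # equilibrium variable, starts at 0
gamma = 0.5      # diversity ratio: desired E[L(G(z))] / E[L(x)]
lambda_k = 0.001 # learning rate for k

def began_step(x_real, x_fake, autoencoder):
    global k
    loss_real = recon_loss(x_real, autoencoder)
    loss_fake = recon_loss(x_fake, autoencoder)
    loss_D = loss_real - k * loss_fake   # discriminator objective
    loss_G = loss_fake                   # generator objective
    # k tracks how hard the discriminator pushes against fake samples,
    # holding the two losses in balance at the ratio gamma.
    k = min(max(k + lambda_k * (gamma * loss_real - loss_fake), 0.0), 1.0)
    return loss_D, loss_G
```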


Where is the Artificial Ingenuity in Deep Learning?

In a recent lecture at the Rothschild Foundation, Demis Hassabis explains intuition and creativity. Intuition is ‘implicit knowledge acquired through experience but not consciously expressible’. Creativity is ‘the ability to synthesize knowledge to produce a novel or original idea’.


A Beginner’s Guide to Tidyverse – The Most Powerful Collection of R Packages for Data Science

Data scientists spend close to 70% (if not more) of their time cleaning, massaging and preparing data. That’s no secret – multiple surveys have confirmed that number. I can attest to it as well – it is simply the most time-consuming aspect of a data science project. Unfortunately, it is also among the least interesting things we do as data scientists. There is no getting around it, though. It is an inevitable part of our role. We simply cannot build powerful and accurate models without ensuring our data is well prepared. So how can we make this phase of our job interesting? Welcome to the wonderful world of the Tidyverse! It is the most powerful collection of R packages for preparing, wrangling and visualizing data. The Tidyverse has completely changed the way I work with messy data – it has actually made data cleaning and massaging fun!


The EU Needs to Reform the GDPR To Remain Competitive in the Algorithmic Economy

Since the mid-1990s, the digital economy has been evolving in three phases: the ‘Internet economy’ transformed into the ‘data-driven economy,’ which in turn is transforming into the ‘algorithmic economy,’ in which the ability to use artificial intelligence (AI) is proving critical to firms’ success. AI promises significant social and economic benefits. However, those benefits can be diminished by poor regulations, especially those related to data processing, as data lies at the heart of AI. In particular, the General Data Protection Regulation (GDPR), while establishing a needed EU-wide privacy framework, will unfortunately inhibit the development and use of AI in Europe, putting firms in the EU at a competitive disadvantage to their North American and Asian competitors. The GDPR’s requirement that organizations obtain user consent to process data, while perhaps viable (if expensive) for the Internet economy, and a growing drag on the data-driven economy, will prove exceptionally detrimental to the emerging algorithmic economy. To address these limitations in the GDPR, several European countries have pursued strategies to facilitate access to personal data by companies in specific industries. These isolated efforts are important but will not be sufficient to fully leverage the value of data and capture growth in the long term. The GDPR, in its current form, puts Europe’s future competitiveness at risk. Europe’s success in the global algorithmic economy requires a regulatory environment that is fit for AI but does not reduce consumer protections. If the EU wants to thrive in the algorithmic economy, it needs to reform the GDPR, such as by expanding authorized uses of AI in the public interest, allowing repurposing of data that poses only minimal risk, not penalizing automated decision-making, permitting basic explanations of automated decisions, and making fines proportional to harm.


A Comprehensive Guide to Data Science With Python

It requires knowledge of Statistics, some Mathematics (Linear Algebra, Multivariable Calculus, Vector Algebra, and of course Discrete Mathematics), Operations Research (Linear and Non-Linear Optimization and some more topics, including Markov Processes), Python, R, Tableau, and basic analytical and logical programming skills. Now, if you are new to data science, that last sentence might seem more like pure Greek than plain English. Don’t worry about it. If you are studying the Data Science course at Dimensionless Technologies, you are in the right place. This course covers practical working knowledge of all the topics given above, distilled and extracted into a beginner-friendly form by the talented course material preparation team. This course has turned ordinary people into skilled data scientists and landed them excellent placements, so my basic message is: don’t worry. You are in the right place, with the right people, at the right time.


A Beginner’s Guide to Capsule Networks

Over the years, the amount of data being generated has increased tremendously due to the advancement of technology and the variety of sources available in the current market. Most of this data is unclean and messy, and advanced tools and techniques are needed to extract meaningful insights from it. This unstructured data, if mined properly, could yield ground-breaking insights and help a business achieve outstanding results. As most companies want to stay ahead of their competitors in the market, they have introduced Machine Learning into their workflows to streamline the process of predictive analytics. State-of-the-art Machine Learning algorithms can produce interesting results if tuned properly with the relevant features and the correct parameters. However, traditional Machine Learning algorithms lack the performance and ability of Deep Learning, a sub-field that works on the principle of Neural Networks. Deep Learning also takes away the hassle of feature engineering: the more data it gets, the better it learns. To deploy Deep Learning algorithms in a workflow, one needs powerful computers with sufficient computational capacity. In this article, you will learn about one class of Deep Learning models – Capsule Networks.
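As a taste of what makes capsules different from ordinary neurons, here is a minimal numpy sketch of the ‘squash’ non-linearity from the capsule networks paper (Sabour et al., 2017); the capsule count and dimensionality below are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Shrink a capsule's output vector to length < 1 while keeping its direction.
    Short vectors go toward 0 and long vectors toward length 1, so the vector's
    norm can be read as the probability that the entity the capsule detects is
    present, while its direction encodes the entity's properties (pose, etc.).
    """
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

capsules = np.random.randn(32, 8)                 # 32 capsules, each an 8-D vector
print(np.linalg.norm(squash(capsules), axis=-1))  # all norms now lie in (0, 1)
```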


Deep Learning for Data Integration

Here we have learnt that multiple sources of molecular and clinical information are becoming common in Biology and Biomedicine thanks to recent technological progress. Data integration is therefore a logical next step, providing a more comprehensive understanding of biological processes by utilizing the whole complexity of the data. The Deep Learning framework is ideally suited for data integration due to its truly ‘integrative’ updating of parameters through backpropagation, whereby multiple data types learn information from each other. I showed that data integration can result in the discovery of novel patterns in the data that were not previously seen in the individual data types.
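As a rough illustration of this idea, here is a minimal Keras sketch of a two-branch, multi-input network in which each data type is encoded separately and then merged, so that backpropagation updates both branches jointly; the layer sizes, input dimensions, and target are illustrative assumptions, not the model from the post.

```python
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Dense, Concatenate

omics_in = Input(shape=(1000,), name="gene_expression")  # e.g. expression profile
clinical_in = Input(shape=(20,), name="clinical")        # e.g. clinical variables

h1 = Dense(64, activation="relu")(omics_in)    # branch-specific encoder 1
h2 = Dense(8, activation="relu")(clinical_in)  # branch-specific encoder 2

merged = Concatenate()([h1, h2])               # integration point: gradients from
out = Dense(1, activation="sigmoid")(merged)   # the shared output flow into both branches

model = Model(inputs=[omics_in, clinical_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```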


Financial storytelling using time-series classification

When it comes to narratives, financial markets are particularly appealing because they only ever move on one axis over time. A person can be hungry yet happy, but in finance things go up or down; get stronger or weaker; expand or contract.


How Can Apache Spark Boost Your Value?

Technological progress and the development of infrastructure have increased the popularity of Big Data immensely. Businesses have started to realize that data can be used to accurately predict the needs of customers, which can increase profits significantly. The growing use of Big Data can be assessed from Forrester’s prediction that the global Big Data market will grow 14% this year. Yet even though more and more individuals are entering this field, 41% of organisations face challenges due to a lack of talent to implement Big Data, according to an Accenture survey. Users begin Big Data projects thinking they will be easy, only to discover that there is a lot to learn about data. This shows the need for good talent in the Big Data market. According to a 2018 Qubole report, 83% of data professionals said that it is very difficult to find big data professionals with the necessary skills and experience, and 75% said that they face a headcount shortfall of professionals who can deliver big data value. The same report found Spark to be the most popular big data framework used in enterprises. So if you want to enter the Big Data market, learning Spark has become more like a necessity. If Big Data is a movie, Spark is its protagonist.


An Intuitive Understanding of Neural Style Transfer

Neural style transfer is a machine learning technique that merges the ‘content’ of one image with the ‘style’ of another.
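Concretely, the classic formulation (after Gatys et al.) optimizes a generated image against two losses: a content loss on raw feature maps and a style loss on their Gram matrices. Here is a minimal numpy sketch; feature extraction from a pretrained network (e.g. VGG) is elided, and the loss weights are illustrative assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel feature correlations. Spatial position drops out of
    this statistic, which is why it captures 'style' (textures, color patterns)
    rather than layout."""
    c = features.shape[-1]
    f = features.reshape(-1, c)   # (positions, channels)
    return f.T @ f / f.shape[0]

def style_transfer_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # Content: match the generated image's feature maps to the content image's.
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    # Style: match feature correlations to the style image's.
    style_loss = np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss
```

In practice these losses are computed at several network layers and minimized by gradient descent on the pixels of the generated image itself.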