**A Gentle Introduction to Deep Learning – [Part 1 ~ Introduction]**

I am starting this blog to share my understanding of the amazing book *Deep Learning* by Ian Goodfellow, Yoshua Bengio and Aaron Courville. I have just started reading it and thought it would be more fun to share what I learn along the way. I will try to write a brief, compact summary of the book chapter by chapter, so this will be a series of posts. Before jumping into the first chapter, let me tell you a little about the book. For those who don't know, it is something of a bible for deep learning enthusiasts. Anyone who wants a detailed mathematical introduction to the world of deep learning should read it. It is written by pioneers of the field, and it is freely available online at deeplearningbook.org.

**A Gentle Introduction to Deep Learning : Part 2**

First of all, I am sorry for the late upload; now we can continue what we started in the previous article. This is the second part of my series on the Deep Learning book, so I suggest reading the previous article first for better context. In this article I am not really talking about deep learning itself, but about a very important topic that will help us build our foundation for it. This article, and some upcoming ones, cover the applied mathematics you should understand if you really want to learn deep learning.

**A Gentle Introduction to Deep Learning : Part 3**

You cannot learn the true form of machine learning or deep learning without knowledge of some important mathematical concepts, such as linear algebra and statistics, and it is with that goal in mind that I started this series. In Part 2 we went over some basics of linear algebra; in this part I will cover some more advanced linear algebra topics, and we will also derive one of the important techniques of machine learning, Principal Component Analysis (PCA), from scratch. If you are unfamiliar with the previous parts (Part 1 & Part 2) of this series, I suggest reading them first. Now let's get started.
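To give a flavour of what "PCA from scratch" looks like, here is a minimal, library-free sketch for 2-D data (not code from the original post): center the data, build the sample covariance matrix, and take its leading eigenvector in closed form. The function name and dataset are my own illustrative choices.

```python
import math

def pca_2d(points):
    """PCA for 2-D data: center, build the covariance matrix,
    then eigendecompose it (closed form for a symmetric 2x2 matrix)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Sample covariance entries (1/(n-1) normalisation)
    sxx = sum(x * x for x, _ in centered) / (n - 1)
    syy = sum(y * y for _, y in centered) / (n - 1)
    sxy = sum(x * y for x, y in centered) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the characteristic equation
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc  # l1 >= l2
    # Unit eigenvector for the leading eigenvalue l1
    if abs(sxy) > 1e-12:
        v = (l1 - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (l1, l2), (v[0] / norm, v[1] / norm)

# These points lie roughly along the line y = x, so the first
# principal component should point near (1/sqrt(2), 1/sqrt(2)).
pts = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8), (5, 5.1)]
(l1, l2), pc1 = pca_2d(pts)
```

The leading eigenvector `pc1` is the direction of maximum variance, which is exactly what the first principal component captures.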

**A Gentle Introduction to Deep Learning : Part 4**

This is Part 4 of the series I started to share what I am learning and to show you the real essence of machine learning. In Part 3 you got a complete mathematical understanding of PCA. If you haven't seen that part, I suggest checking out all the previous parts (Part 1, Part 2, Part 3) to get a complete picture of what we are doing here. So, without any delay, let's begin.

**Top 10 Data Science & ML Tools for Non-Programmers**

1. DataRobot

2. RapidMiner

3. MLBase

4. Auto-WEKA

5. BigML

6. Tableau

7. Datawrapper

8. Visualr

9. Paxata

10. Trifacta

**Getting started with Geographic Data Science in Python**

This is the first article of a three-part series on Getting Started with Geographic Data Science in Python. You will learn about reading, manipulating and analysing geographic data in Python. The articles are designed to be sequential: the first lays the foundation, the second gets into intermediate and advanced geographic data science topics, and the third covers a relevant, real-world project that wraps up and cements your learning. Each tutorial also has some simple exercises to help you learn and practice. All code, datasets and Google Colab Jupyter notebooks are available from the link at the end of this article.
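As a taste of analysing geographic data in Python, here is a small, dependency-free sketch (my own illustration, not from the series, which likely uses dedicated geospatial libraries): the haversine formula for great-circle distance between two latitude/longitude points.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres,
    using the haversine formula on a spherical Earth model."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# London (51.5074, -0.1278) to Paris (48.8566, 2.3522)
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
```

Real projects would typically reach for a geospatial library rather than hand-rolling this, but the formula shows the kind of computation that underlies geographic analysis.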

**Getting started with Geographic Data Science in Python – Part 2**

This is the second article of the three-part series on Getting Started with Geographic Data Science in Python. The first article laid the foundation for reading, manipulating and analysing geographic data in Python; this one gets into intermediate and advanced geographic data science topics, and the third part will cover a relevant, real-world project to cement your learning.

**Word Representation in Natural Language Processing Part III**

In Part II of my blog series on word representations, I talked about distributed word representations such as Word2Vec and GloVe. These representations encode semantic (meaning) and similarity information about words into the embedding. However, they fail to generalize to out-of-vocabulary (OOV) words that were not part of the training set. In this part, I will describe models that mitigate this issue. Specifically, I will talk about two more recently proposed models: ELMo and FastText. The idea behind ELMo and FastText is to exploit the character-level and morphological structure of a word. Unlike earlier models, they do not treat a word as one atomic unit, but as a combination of its characters and subword pieces, e.g. 'remaking' → 're' + 'make' + 'ing'.
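To make the subword idea concrete, here is a minimal sketch of FastText-style character n-gram extraction (a simplified illustration of the published scheme, not code from the post): each word is wrapped in boundary markers `<` and `>`, and its character n-grams plus the whole word become its subword units.

```python
def char_ngrams(word, n_min=3, n_max=6):
    """FastText-style subword units: character n-grams of the word
    wrapped in boundary markers '<' and '>', plus the full word itself."""
    wrapped = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.add(wrapped[i:i + n])
    grams.add(wrapped)  # the whole word is also kept as one unit
    return grams

# An OOV word like 'remaking' still shares subwords such as 'mak'
# and 'aki' with words seen during training ('making', 'baking'),
# so the model can compose an embedding for it from those pieces.
subwords = char_ngrams("remaking", 3, 4)
```

A word's vector is then the sum of the vectors of its subword units, which is why unseen words still get a sensible embedding.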

**A Gentle Explanation of Logarithmic Time Complexity**

If you’re new to computer science, you’ve probably seen notation that looks something like O(n) or O(log n). That’s time complexity analysis, or Big O notation! It’s a super important concept to understand, at least on an intuitive level, in order to write fast code. (There’s also space complexity, which describes how much memory a program might use, but we’ll leave that for the next article.) This concept is one of the reasons higher education thinks it needs to make computer science undergrads take years of math classes. Now, I have nothing against math classes, but you definitely don’t need them to write good code. The reality is that the computer science concepts the average programmer needs to know are not hard to understand on their own. Much like code itself, the parts are simple, but they add up to something unfathomably complex.
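The classic example of O(log n) is binary search: every comparison halves the remaining search range, so even a huge input needs only a handful of steps. A minimal sketch (my own example) that also counts the comparisons:

```python
def binary_search(sorted_items, target):
    """Classic O(log n) search: each comparison halves the remaining range.
    Returns (index, number_of_comparisons), or (-1, steps) if not found."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# Searching a sorted list of 1,000,000 numbers takes at most ~20
# comparisons, because log2(1_000_000) is about 19.9.
items = list(range(1_000_000))
idx, steps = binary_search(items, 765_432)
```

Compare that with a linear scan, which is O(n): in the worst case it would look at all one million elements instead of twenty.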

**AI Has Not One, Not Two, but Many Centralization Problems**

A couple of months ago, I wrote a three-part series on the decentralization of artificial intelligence (AI). In that essay, I tried to cover the main elements that justify the decentralized AI movement, from economic factors to technology enablers, as well as the first generation of platforms building decentralized AI. The arguments I made were fundamentally theoretical because, as we all know, AI today is almost completely centralized. However, as I work more on real-world AI problems, I am starting to realize that centralization constantly hinders the progress of AI solutions. Furthermore, we should stop seeing centralization in AI as a single problem and instead see it as many different challenges that surface at different stages of the lifecycle of an AI solution. Today, I would like to explore that idea in more detail. What do I mean by claiming that AI has many centralization problems? If we visualize the traditional lifecycle of an AI solution, we see a cyclical graph connecting stages such as model creation, training, regularization, and so on. My thesis is that all of those stages are conceptually decentralized activities that get boxed into centralized processes because of the limitations of today’s technologies. But we should ask ourselves: on-demand software development has traditionally been a centralized activity too, so what makes AI so different? The answer might be found by analyzing two main areas in which AI differs from traditional software applications.

**Resisting Adversarial Attacks Using Gaussian Mixture Variational Autoencoders**

Deep neural networks are amazing! They are able to learn to classify images into different categories by looking at more than a million examples, translate between numerous language pairs, convert our speech to text, produce artwork (that even sells at auction!), and excel at a plethora of other exciting and useful applications. It’s easy to be enchanted by deep learning’s success story, but are these networks infallible?

**6 Tools that Make Microsoft the Go-to for Machine Learning Now**

Some of Microsoft’s offerings completely democratise data science and some make a data scientist’s job much easier. What’s great is that there is something for everyone regardless of level of expertise.

1. AutoML – automated machine learning

2. Azure Machine Learning Service – cloud service

3. Azure Machine Learning Studio – visual interface

4. Cognitive Services – machine learning web API

5. Bot Framework – chatbot framework

6. ML.NET – machine learning framework

**Towards Declarative Visual Reasoning. . . Or Not?**

Exploring how SingularityNET will advance the research and applicability of tasks that require reasoning and pattern recognition.

**Approaching the Problem of Equivariance with Hinton’s Capsule Networks**

Even if you’ve never been to the moon, you can probably recognize the subject of the images above as NASA’s Lunar Roving Vehicle, or at least as being two instances of an identical vehicle at slightly different orientations. You probably have an intuitive idea of how you could manipulate the viewpoint of one image to approximate the view of the other. This sort of cognitive transformation is effortlessly intuitive for a human, but turns out to be very difficult for a convolutional neural network without explicit training examples.
