The 3 Best Optimization Methods in Neural Networks

Deep learning is an iterative process. With so many parameters to tune and methods to try, it is important to be able to train models fast, in order to quickly complete the iterative cycle. This is key to increasing the speed and efficiency of a machine learning team. Hence the importance of optimization algorithms such as stochastic gradient descent, mini-batch gradient descent, gradient descent with momentum and the Adam optimizer. These methods make it possible for our neural network to learn, but some converge faster than others. Here, you will learn about the best alternatives to stochastic gradient descent, and we will implement each method to see how fast a neural network can learn with each one.
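As a taste of what is ahead, here is a minimal NumPy sketch of the three update rules; the hyperparameter values are common defaults, not settings from the article.

    import numpy as np

    def sgd_update(w, grad, lr=0.01):
        # plain stochastic / mini-batch gradient descent step
        return w - lr * grad

    def momentum_update(w, grad, v, lr=0.01, beta=0.9):
        # keep an exponentially weighted average of past gradients
        v = beta * v + (1 - beta) * grad
        return w - lr * v, v

    def adam_update(w, grad, m, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam: momentum plus RMSprop-style scaling, with bias correction (t starts at 1)
        m = beta1 * m + (1 - beta1) * grad
        s = beta2 * s + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s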


Machine learning interpretability techniques

Most machine learning systems require the ability to explain to stakeholders why certain predictions are made. When choosing a suitable machine learning model, we often think in terms of the accuracy vs. interpretability trade-off:
• accurate and ‘black-box’:
Black-box models such as neural networks, gradient boosting models or complicated ensembles often provide great accuracy. The inner workings of these models are harder to understand: they don’t provide an estimate of the importance of each feature to the model’s predictions, nor is it easy to understand how the different features interact.
• weaker and ‘white-box’:
Simpler models such as linear regression and decision trees, on the other hand, provide less predictive capacity and are not always capable of modelling the inherent complexity of the dataset (i.e. feature interactions). They are, however, significantly easier to explain and interpret (a short sketch contrasting the two model families follows this list).
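A minimal scikit-learn sketch of this trade-off (the synthetic dataset is an illustrative assumption): the linear model’s coefficients state each feature’s direction and magnitude of effect directly, while the boosted model only offers a rough importance ranking.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

    # White-box: each coefficient is the feature's linear effect on the target.
    linear = LinearRegression().fit(X, y)
    print("linear coefficients:", linear.coef_)

    # Black-box: importances only rank features; they say nothing about
    # direction, per-unit effect, or how the features interact.
    gbm = GradientBoostingRegressor(random_state=0).fit(X, y)
    print("GBM importances:", gbm.feature_importances_)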


Statistical Tests for Comparing Machine Learning and Baseline Performance

When comparing a machine learning approach with the current solution, I wish to understand whether any observed difference is statistically significant, that is, unlikely to be due simply to chance or noise in the data. The appropriate significance test depends on what your machine learning model is predicting, the distribution of your data, and whether or not you are comparing predictions on the same subjects. This post highlights common tests and where each is suitable.
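For example, when both models are scored on the same test subjects, a paired test is appropriate; here is a sketch with illustrative per-subject errors (not data from the post).

    import numpy as np
    from scipy import stats

    errors_baseline = np.array([2.1, 1.8, 2.5, 3.0, 2.2, 1.9, 2.7, 2.4])
    errors_ml_model = np.array([1.7, 1.6, 2.4, 2.6, 2.0, 1.8, 2.3, 2.1])

    # The paired t-test assumes the per-subject differences are roughly normal.
    t_stat, p_value = stats.ttest_rel(errors_baseline, errors_ml_model)
    print(f"paired t-test: t={t_stat:.3f}, p={p_value:.3f}")

    # Wilcoxon signed-rank is the non-parametric alternative when that
    # normality assumption is doubtful.
    w_stat, p_value = stats.wilcoxon(errors_baseline, errors_ml_model)
    print(f"wilcoxon signed-rank: W={w_stat:.3f}, p={p_value:.3f}")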


Loss functions based on feature activation and style loss

Loss functions using these techniques can be used during the training of U-Net based model architectures, and could be applied to the training of other convolutional neural networks that generate an image as their prediction/output. I’ve separated this out from my article on Super Resolution (https://…solution-without-using-a-gan-11c9bb5b6cd5) to be more generic, as I am using similar loss functions on other U-Net based models making predictions on image data. Having this separated makes it easier to reference and keeps my other articles easier to understand. This is based on the techniques demonstrated and taught in the Fastai deep learning course. The loss function is partly based on the research in the paper Losses for Real-Time Style Transfer and Super-Resolution and on the improvements shown in the Fastai course (v3). That paper focuses on feature losses (called perceptual loss in the paper); its research did not use a U-Net architecture, as U-Nets were not yet widely known in the machine learning community at the time.
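As a rough illustration of the idea (not the exact implementation from the course), here is a PyTorch sketch of a feature loss and a Gram-matrix style loss built on a frozen, pretrained VGG16; the layer cutoff, the L1/MSE choices and the assumption that inputs are already ImageNet-normalized are all mine.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Frozen loss network: VGG16 features up to relu2_2 (index 8).
    vgg = models.vgg16(pretrained=True).features[:9].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def feature_loss(prediction, target):
        # Compare intermediate activations rather than raw pixels.
        return F.l1_loss(vgg(prediction), vgg(target))

    def gram(x):
        # Gram matrix of channel activations captures texture/style statistics.
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_loss(prediction, target):
        return F.mse_loss(gram(vgg(prediction)), gram(vgg(target)))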


Advanced Keras – Accurately Resuming a Training Process

In this post I will present a use case of the Keras API in which resuming a training process from a loaded checkpoint needs to be handled differently than usual.
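For context, the usual resume pattern looks roughly like the sketch below (filenames and epoch counts are illustrative); the post covers the cases where this baseline is not enough.

    from tensorflow import keras

    # During training: save the full model (architecture + weights + optimizer state).
    checkpoint = keras.callbacks.ModelCheckpoint("model_epoch_{epoch:02d}.h5")
    # model.fit(x_train, y_train, epochs=10, callbacks=[checkpoint])

    # Later: load_model also restores the optimizer state, and initial_epoch
    # makes epoch numbering (and epoch-based schedules) continue where they stopped.
    model = keras.models.load_model("model_epoch_07.h5")
    # model.fit(x_train, y_train, epochs=10, initial_epoch=7, callbacks=[checkpoint])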


Automated Machine Learning: Myth Versus Reality

Witnessing the data science field’s meteoric rise in demand across pretty much all industries and areas of scientific research, it’s easy to anticipate efforts to create shortcuts to satisfy the need for more data science practitioners. The current trend of automated machine learning is a great case in point. This article will touch on a number of efforts to circumvent the need for data scientists to select and train machine learning models and determine metrics for measuring their performance.


Regularization techniques for Neural Networks

In our last post, we learned about feedforward neural networks and how to design them. In this post, we will learn how to tackle one of the most central problems in machine learning: how to make our algorithm fit well not only on the training set but also on the testing set. When an algorithm performs well on the training set but poorly on the testing set, it is said to have overfitted the training data. After all, our main goal is to perform well on never-before-seen data, i.e., to reduce overfitting. To tackle this problem we have to make our model generalize beyond the training data, which is done using the various regularization techniques we will learn about in this post.
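As a preview of two of the most common techniques, here is a minimal Keras sketch using L2 weight decay, dropout and early stopping; the layer sizes and penalty strengths are illustrative, not prescriptions.

    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    model = keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(20,),
                     kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
        layers.Dropout(0.5),  # randomly drop units during training
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Early stopping is itself a regularizer: halt once validation loss stops improving.
    early_stop = keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)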


Harnessing Organizational Knowledge for Machine Learning

One of the biggest bottlenecks in developing machine learning (ML) applications is the need for the large, labeled datasets used to train modern ML models. Creating these datasets requires significant time and expense, as well as annotators with the right expertise. Moreover, as real-world applications evolve, labeled datasets often need to be thrown out or re-labeled.


Infographic: What’s the Future of the Data Catalog?

The concept of data catalogs is one that’s becoming increasingly relevant to businesses. According to the McKinsey Global Institute, data-driven organizations are 19 times more likely to be profitable than businesses that aren’t focused on data. They’re also 23 times more likely to acquire customers and six times more likely to retain them.


The Evolved Transformer – Enhancing Transformer with Neural Architecture Search

Neural architecture search (NAS) is the process of algorithmically searching for new designs of neural networks. Though researchers have developed sophisticated architectures over the years, our human ability to find the most efficient ones is limited, and NAS has recently reached the point where it can outperform human-designed models.


A Beginner’s Guide to Big Data and Blockchain

Over the last few years, blockchain has been one of the hottest areas of technology development across industries. It’s easy to see why: there seems to be no end to the ways forward-thinking businesses are adapting the technology to suit a variety of use cases and applications. Much of the development, however, has come from two places: deep-pocketed corporations and crypto-startups. That means the latest in blockchain technology is often out of reach for businesses in the small and midsize enterprise (SME) sector, creating a digital divide that seems to widen every day. But there are a few blockchain projects that promise to democratise the technology for SMEs, and they could do the same for Big Data and analytics to boot. In this blog, we will explore the basics of both big data and blockchain, analyse the advantages of combining the two, look at real-world applications, and wrap up with predictions about the future of blockchain!


Text Preprocessing Techniques

These techniques were compared in our paper ‘A Comparison of Pre-processing Techniques for Twitter Sentiment Analysis’. If you use this material, please cite the paper. An extended version of this work can be found here, with the title ‘A comparative evaluation of pre-processing techniques and their interactions for twitter sentiment analysis’. A short code sketch of a few of these steps follows the list.
0. Remove Unicode Strings and Noise
1. Replace URLs, User Mentions and Hashtags
2. Replace Slang and Abbreviations
3. Replace Contractions
4. Remove Numbers
5. Replace Repetitions of Punctuation
6. Replace Negations with Antonyms
7. Remove Punctuation
8. Handle Capitalized Words
9. Lowercase
10. Remove Stopwords
11. Replace Elongated Words
12. Spelling Correction
13. Part of Speech Tagging
14. Lemmatizing
15. Stemming
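A hedged sketch of a few of these steps (1, 9, 10 and 11) using plain regular expressions; the paper’s own implementation may differ, and the stopword list here is a tiny illustrative subset.

    import re

    STOPWORDS = {"a", "an", "and", "are", "in", "is", "of", "the", "to"}

    def preprocess(tweet):
        tweet = re.sub(r"https?://\S+", " url ", tweet)   # 1. replace URLs
        tweet = re.sub(r"@\w+", " user ", tweet)          # 1. replace user mentions
        tweet = re.sub(r"#(\w+)", r" \1 ", tweet)         # 1. replace hashtags
        tweet = tweet.lower()                             # 9. lowercase
        tweet = re.sub(r"(.)\1{2,}", r"\1\1", tweet)      # 11. trim elongated words
        tokens = [t for t in tweet.split() if t not in STOPWORDS]  # 10. stopwords
        return " ".join(tokens)

    print(preprocess("@alice LOOOOVE this!!! http://t.co/xyz #happy"))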


Chatbot – A real game changer in the industry of technologically advanced practices

As of now, chatbots are among the most trending technologies, and industries are excited to integrate them. They are touted as the next evolution of applications, promising an immense change in the communications business. Since Facebook extended access to its Messenger service, it has enabled firms to reach clients better through various APIs. Chatbot has become the buzzword of the day, and multiple questions are emerging: What are chatbots? How do they work? How are they developed? Are chatbots a real opportunity for organizations? These questions will be discussed here.


Factors Behind Data Storage Security: Is Your Business Vulnerable?

Is your business vulnerable to cybersecurity issues or attacks? Here’s what to know about the driving factors behind data storage security.


Building NLP Classifiers Cheaply With Transfer Learning and Weak Supervision

There is a catch to training state-of-the-art NLP models: their reliance on massive hand-labeled training sets. That’s why data labeling is usually the bottleneck in developing NLP applications and keeping them up-to-date. For example, imagine how much it would cost to pay medical specialists to label thousands of electronic health records. In general, having domain experts label thousands of examples is too expensive.
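The weak-supervision alternative replaces hand labels with noisy, expert-written heuristics. Here is a toy sketch of the idea for a hypothetical complaint classifier; frameworks such as Snorkel combine these votes with a learned model rather than the naive majority vote shown here.

    POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

    # Labeling functions: cheap, imperfect rules written by domain experts.
    def lf_mentions_refund(text):
        return POSITIVE if "refund" in text.lower() else ABSTAIN

    def lf_says_thanks(text):
        return NEGATIVE if "thanks" in text.lower() else ABSTAIN

    def weak_label(text, lfs=(lf_mentions_refund, lf_says_thanks)):
        votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
        if not votes:
            return ABSTAIN
        return max(set(votes), key=votes.count)  # simple majority vote

    print(weak_label("I want a refund for this order"))  # -> 1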


How to set up the PySpark environment for development, with good software engineering practices

In this article we will discuss how to set up our development environment in order to create good-quality Python code, and how to automate some of the tedious tasks to speed up deployments.
We will go over the following steps:
• set up our dependencies in an isolated virtual environment with pipenv
• how to set up a project structure for multiple jobs
• how to run a PySpark job
• how to use a Makefile to automate development steps
• how to test the quality of our code using flake8
• how to run unit tests for PySpark apps using pytest-spark (see the sketch after this list)
• how to measure test coverage using pytest-cov, to see whether we have created enough unit tests
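As a preview, here is a hedged sketch of what a small job and its test might look like; the module and column names are illustrative, and pytest-spark supplies the spark_session fixture used below.

    # jobs/word_count.py
    from pyspark.sql import DataFrame, functions as F

    def count_words(df: DataFrame) -> DataFrame:
        # split each line into words and count occurrences
        return (df.select(F.explode(F.split(F.col("line"), r"\s+")).alias("word"))
                  .groupBy("word")
                  .count())

    # tests/test_word_count.py (run with: pytest --cov)
    # from jobs.word_count import count_words
    def test_count_words(spark_session):
        df = spark_session.createDataFrame([("hello world hello",)], ["line"])
        result = {r["word"]: r["count"] for r in count_words(df).collect()}
        assert result == {"hello": 2, "world": 1}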