A Wake-Up Call for Software Development Practices?

Agile, DevOps, Continuous Delivery and Continuous Development all help improve software delivery speed. However, as more applications and software development tools include AI, might software developers be trading trust and safety for speed?


Management accounting and controlling in R

In this article, you will learn how to create visualizations and tables for management accounting and controlling in R.


Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees

Ensemble learning, the art of combining the predictions of different machine learning (ML) models, is widely used with neural networks to achieve state-of-the-art performance, benefiting from a rich history and theoretical guarantees that have enabled success in challenges such as the Netflix Prize and various Kaggle competitions. In practice, however, neural network ensembles see limited use because of long training times, and because selecting candidate ML models requires its own domain expertise. But as computational power and specialized deep learning hardware such as TPUs become more readily available, machine learning models will grow larger and ensembles will become more prominent. Now, imagine a tool that automatically searches over neural architectures and learns to combine the best ones into a high-quality model. Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement-learning and evolutionary AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework not only for learning a neural network architecture, but also for learning to ensemble to obtain even better models. AdaNet is easy to use and creates high-quality models, saving ML practitioners the time normally spent selecting optimal neural network architectures, by implementing an adaptive algorithm that learns a neural architecture as an ensemble of subnetworks. AdaNet can add subnetworks of different depths and widths to create a diverse ensemble, and it trades off performance improvement against the number of parameters.
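
The adaptive search described above can be sketched in a few lines of dependency-free Python. This is an illustration of the core idea only, not the AdaNet API: candidates stand in for trained subnetworks, and a candidate joins the ensemble only if it improves an objective that trades validation loss against a parameter-count penalty. All function names here are hypothetical.

```python
def ensemble_predict(models, weights, x):
    """Weighted combination of subnetwork outputs."""
    return sum(w * m(x) for m, w in zip(models, weights))

def ensemble_mse(models, weights, data):
    """Mean squared error of the ensemble on (x, y) pairs."""
    return sum((ensemble_predict(models, weights, x) - y) ** 2
               for x, y in data) / len(data)

def adanet_style_search(candidates, data, lam=0.01):
    """Greedily grow an ensemble under a loss + complexity objective.

    `candidates` is a list of (fn, num_params) pairs standing in for
    subnetworks of different depths and widths; `lam` weights the
    complexity penalty against validation loss.
    """
    models, weights, n_params = [], [], 0
    best = float("inf")
    for fn, params in candidates:
        trial_models = models + [fn]
        trial_weights = [1.0 / len(trial_models)] * len(trial_models)
        objective = (ensemble_mse(trial_models, trial_weights, data)
                     + lam * (n_params + params))
        if objective < best:  # keep the candidate only if it pays its way
            models, weights, n_params = trial_models, trial_weights, n_params + params
            best = objective
    return models, weights
```

Real subnetworks would be trained networks evaluated on held-out data; the greedy accept/reject loop is a simplification of AdaNet's candidate selection.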


AdaNet: Adaptive Structural Learning of Artificial Neural Networks

We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive accuracies compared with those achieved by neural networks found through standard approaches.


Machine Learning Basics – Random Forest

A few colleagues and I at codecentric.ai are currently developing a free online course about machine learning and deep learning. As part of this course, I am producing a series of videos on machine learning basics; the first video in the series covers Random Forests. You can find the video on YouTube, but as of now it is only available in German. The same goes for the slides, which are also currently German-only.


Generating text using a Recurrent Neural Network

Deep learning can be used for lots of interesting things, but it often feels as though only the most brilliant engineers can create such applications. That simply isn’t true. Through Keras and other high-level deep learning libraries, anyone can create and use deep learning models, regardless of their understanding of the theory and inner workings of the algorithms. In this article, we will look at how to use a recurrent neural network to generate new text in the style of Sir Arthur Conan Doyle, using his book ‘The Adventures of Sherlock Holmes’ as our dataset.
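
One step common to character-level generation pipelines like this one is sampling the next character from the network's softmax output with a temperature knob. The article works in Keras; as a dependency-free sketch (the function name and interface are illustrative), the sampling step looks like:

```python
import math
import random

def sample_next_char(probs, temperature=1.0, rng=random):
    """Sample an index from a next-character distribution.

    `probs` is a model's output distribution (e.g. from an RNN's softmax
    layer). Lower temperature makes the text more conservative; higher
    temperature makes it more surprising.
    """
    # Rescale log-probabilities by temperature, then renormalize.
    logits = [math.log(max(p, 1e-12)) / temperature for p in probs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    rescaled = [e / total for e in exps]
    # Draw from the rescaled distribution.
    r, acc = rng.random(), 0.0
    for i, p in enumerate(rescaled):
        acc += p
        if r <= acc:
            return i
    return len(rescaled) - 1
```

At temperature 1.0 this reproduces the model's distribution; as temperature approaches 0 it approaches argmax decoding.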


Simple Reinforcement Learning: Temporal Difference Learning

Recently I’ve been doing a lot of reading on reinforcement learning and watching David Silver’s Introduction to Reinforcement Learning video series, which, by the way, is phenomenal and highly recommended! Coming from a traditional statistics and machine learning background, in terms of both grad school and work projects, these topics were somewhat new to me. So, for my own learning and to share it with anyone interested, I thought I’d write it up as a Medium post while trying to make these concepts as simple as possible to understand.
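
The heart of temporal difference learning, tabular TD(0), fits in a few lines: after each observed step, nudge the value of the current state toward the bootstrapped target of reward plus discounted next-state value. A minimal sketch (names are illustrative):

```python
def td0_value_estimation(episodes, alpha=0.1, gamma=1.0):
    """Tabular TD(0) state-value estimation.

    `episodes` is a list of episodes, each a list of
    (state, reward, next_state) steps; next_state is None when the
    episode terminates. `alpha` is the learning rate, `gamma` the
    discount factor.
    """
    V = {}
    for episode in episodes:
        for state, reward, next_state in episode:
            v_s = V.get(state, 0.0)
            v_next = V.get(next_state, 0.0) if next_state is not None else 0.0
            # TD error: gap between the current estimate and the target.
            td_error = reward + gamma * v_next - v_s
            V[state] = v_s + alpha * td_error
    return V
```

Unlike Monte Carlo estimation, each update uses the current estimate of the next state rather than waiting for the full return, which is exactly the bootstrapping idea the Silver lectures emphasize.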


Access Data from Twitter API using R and/or Python

Using the Twitter API should be easy, but sometimes pictures and simple code can save you an extra five minutes. I previously covered how to access data from the Twitter API using R, but the process has changed as of July 2018.


Empowering Spark with MLflow

This post covers our initial experience with MLflow. We start exploring MLflow and its tracking server by logging all of our exploratory iterations. Then, we share our experience connecting Spark with MLflow through UDFs.


A Primer on Natural Language Processing – using data science to understand the meaning of sentences.

My new project at work is examining pricing across our company’s suppliers. One of the challenges in this analysis is that we often use different suppliers to purchase identical or very similar items and commodities. This makes it hard to compare pricing between suppliers, as it is not clear which items we should match. One of the ways we are addressing this issue is by trying to match product descriptions using various Natural Language Processing (NLP) techniques. With this approach, we can then use machine learning to improve our matching models so that we get strong price comparisons. In this tutorial, we use a small set of sentences to illustrate important features of NLP. We then use this knowledge to answer some basic questions about the meaning of those sentences.
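
A simple baseline for the description-matching problem described above is token-set similarity. This sketch (function names and the threshold are illustrative, not the tutorial's code) scores candidate descriptions with Jaccard similarity and keeps only matches above a cutoff:

```python
def tokenize(description):
    """Lowercase and split a product description into word tokens."""
    return set(description.lower().split())

def jaccard_similarity(a, b):
    """Token-set overlap: |A ∩ B| / |A ∪ B|."""
    ta, tb = tokenize(a), tokenize(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def best_match(description, candidates, threshold=0.3):
    """Return the candidate most similar to `description`,
    or None if nothing clears the threshold."""
    scored = [(jaccard_similarity(description, c), c) for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None
```

Because it ignores word order, the same item listed as "stainless steel bolt m8" by one supplier and "steel bolt m8 stainless" by another scores a perfect match; a learned model can then refine these baseline matches.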


Machine Learning for Cybercriminals

Machine learning (ML) and artificial intelligence (AI) are taking cybersecurity and other tech fields by storm, and you can easily find a great deal of information on the use of ML by both camps: defense and attack. Still, the use of machine learning for cyberattacks remains ambiguous. However, in 2016 the U.S. intelligence community raised concerns about the deployment of artificial intelligence and the potential threats it poses to cybersecurity. Recent findings demonstrate how machine learning can be used by cybercriminals for more advanced, much faster, and cheaper attacks. While my previous article, ‘Machine Learning for Cybersecurity 101’, details AI for defense, it’s time to turn to machine learning for cybercriminals. Here, I am systematising the information on possible or existing methods of deploying machine learning in malicious cyberspace. This text is intended to help information security teams prepare for imminent threats.


On Artificial Neurological Illnesses

Much like neurological illnesses emerged as a consequence of the increasing complexity of our nervous systems, artificial neurological illnesses will emerge in the increasingly complex AI systems we build, and for the same reason: unexpected connections with unexpected consequences in systems of growing complexity. Artificial neurological illnesses are not classical bugs per se. They may be triggered by a classical bug, or by unexpected sensory inputs. Some will be triggered by the history of live data and context the system must adapt to, others by unexpected interactions among the different systems that make up the AI in question. These are what fascinate me. We will witness unexpected, emergent behaviours that we didn’t code and didn’t train those systems for; behaviours that seem unreasonable for the tasks those AIs were created for. Odd behaviours.


Combinatorial Optimization: from theory to code

I decided to start this blog by discussing a nice problem I was recently asked to solve. The problem belongs to the class of combinatorial optimization problems, which can be understood as constrained optimization on a general graph. This gives me the opportunity to introduce the concepts and language of complex networks more generally than in the context of neural networks alone. Rather than discussing network theory from the most general point of view, I will focus here on solving a particular problem, from setting up the model to its numerical implementation. Finally, I will discuss a few applications of combinatorial optimization and its connections to statistical mechanics and computer science in general.
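
To make "constrained optimization on a general graph" concrete, here is a toy Python sketch of one classic instance, the maximum independent set problem: choose as many nodes as possible subject to the constraint that no chosen pair is connected by an edge. (This is an illustrative example, not the specific problem the post solves.)

```python
from itertools import combinations

def is_independent(nodes, edges):
    """Constraint check: no edge may connect two chosen nodes."""
    chosen = set(nodes)
    return not any(u in chosen and v in chosen for u, v in edges)

def max_independent_set(nodes, edges):
    """Exhaustively search all subsets, largest first.

    Exponential in |nodes|, so only suitable for tiny graphs; it shows
    the objective-plus-constraint structure, not a scalable solver.
    """
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if is_independent(subset, edges):
                return set(subset)
    return set()
```

Practical solvers replace the brute-force loop with heuristics, integer programming, or the statistical-mechanics methods the post goes on to connect with, but the objective and constraint stay the same.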


Towards Building Large Scale Multimodal Domain-Aware Conversation Systems

While multimodal conversation agents are gaining importance in several domains such as retail and travel, deep learning research in this area has been limited primarily by the lack of large-scale, open chatlogs. To overcome this bottleneck, in this paper we introduce the task of multimodal, domain-aware conversations, and propose the MMD benchmark dataset. This dataset was gathered in close coordination with a large number of domain experts in the retail domain. These experts suggested various conversation flows and dialog states that are typically seen in multimodal conversations in the fashion domain. Keeping these flows and states in mind, we created a dataset consisting of over 150K conversation sessions between shoppers and sales agents, with the help of in-house annotators using a semi-automated, manually intensive iterative process. With this dataset, we propose 5 new sub-tasks for multimodal conversations along with their evaluation methodology. We also propose two multimodal neural models in the encode-attend-decode paradigm and demonstrate their performance on two of the sub-tasks, namely text response generation and best image response selection. These experiments serve to establish baseline performance and open new research directions for each of these sub-tasks. Further, for each of the sub-tasks, we present a ‘per-state evaluation’ of the 9 most significant dialog states, which should enable more focused research into understanding the challenges and complexities involved in each of these states. ( https://…/MMD )