Observability for data science (DS) is a new and emerging field, sometimes mentioned in tandem with MLOps or AIOps. New offerings are being developed by young startups to address the lack of monitoring and alerting for everything data science. However, they mostly address data scientists or data engineers, who are, of course, the first personas to feel the pain of managing multiple models. I will try to argue that the impact of data-science observability should be aimed toward decision-makers such as high- and mid-level managers: the people who are responsible for spending, funding, and managing, and who, most importantly, are accountable for the impact of the data-science operation on the company’s clients, business, product, sales, and, let’s not forget, the company’s bottom line.

Discretization is the process through which we can transform continuous variables, models or functions into a discrete form. We do this by creating a set of contiguous intervals (or bins) that go across the range of our desired variable/model/function.
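As an illustration, here is a minimal sketch of discretization with pandas; the variable name and bin edges are invented for the example, not taken from the article.

```python
import numpy as np
import pandas as pd

# Hypothetical continuous variable: 100 ages between 18 and 79
ages = pd.Series(np.random.default_rng(0).integers(18, 80, size=100), name="age")

# Equal-width bins spanning the variable's range
equal_width = pd.cut(ages, bins=4)

# Hand-chosen contiguous intervals
custom = pd.cut(ages, bins=[18, 30, 45, 60, 80], include_lowest=True)

# Equal-frequency (quantile) bins
quantiles = pd.qcut(ages, q=4)

print(custom.value_counts().sort_index())
```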

I’m very resistant to point-and-click solutions, and I think my resistance is in good faith and for good reasons. We’ve waited a long time for drag-and-drop solutions for web apps, mobile apps, and a whole host of other things. But fundamentally, I think a solution like KNIME is perfect for letting the user introduce exactly as much flexibility and simplification as necessary. For me, boxing up all my steps in a data science workflow has brought a whole new level of manageability to my projects.

Outlier detection is an interesting data mining task used quite extensively to detect anomalies in data. Outliers are points that exhibit significantly different properties from the majority of points. To this end, outlier detection has very interesting applications such as credit card fraud detection (suspicious transactions), traffic management (drunk/rash driving), or network intrusions (hacks). Due to the time-critical nature of these applications, there is a need for scalable outlier detection techniques. In this project, we will aim to detect outliers in a taxi dataset (Beijing), using a technique that relies only on spatio-temporal characteristics to detect outliers in very large datasets. We will be using the geo-coordinates and the timestamps collected by the GPS on these taxis.

This story shows how to visualize pre-trained BERT embeddings in Tensorflow’s Tensorboard Embedding Projector. The story uses around 50 unique sentences and their BERT embeddings generated with TensorFlow Hub BERT models.
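A minimal sketch of the final step, assuming the embeddings have already been computed (for instance with a TF Hub BERT model): the Embedding Projector just needs a vectors TSV and a matching metadata TSV, which can then be loaded at https://projector.tensorflow.org. The function name and file paths are placeholders.

```python
import numpy as np

def export_for_projector(embeddings, sentences,
                         vec_path="vectors.tsv", meta_path="metadata.tsv"):
    """Write one embedding per row and one sentence per row, in the same order."""
    np.savetxt(vec_path, np.asarray(embeddings), delimiter="\t")
    with open(meta_path, "w", encoding="utf-8") as f:
        for s in sentences:
            f.write(s.replace("\t", " ").replace("\n", " ") + "\n")
```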

In the case of binary classification, we just ask a yes/no type of question. If there are multiple possible answers and only one is to be chosen, then it’s multiclass classification. In our example we can’t really select only one label; I would say that all of them match the photo. The goal of multi-label classification is to assign a set of relevant labels to a single instance. However, most widely known algorithms are designed for single-label classification problems. In this article, four approaches to multi-label classification available in the scikit-multilearn library are described, and a sample analysis is presented.
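As a taste of the library, here is a hedged sketch of one problem-transformation approach (Binary Relevance) from scikit-multilearn; the random data stands in for a real multi-label dataset.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from skmultilearn.problem_transform import BinaryRelevance

X = np.random.rand(100, 10)              # 100 instances, 10 features
y = np.random.randint(0, 2, (100, 4))    # 4 labels; an instance may carry several

# One binary classifier per label
clf = BinaryRelevance(classifier=GaussianNB())
clf.fit(X, y)

predictions = clf.predict(X)             # sparse (100, 4) label matrix
print(predictions.toarray()[:5])
```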

As the plot the boss is showing seems to suggest, the more employees have shaved heads, the more the company’s sales increase. If you were that boss, would you consider the same action on your employees? Probably not. In fact, you recognize that there is no causality between the two sets of events, and their behaviour is similar just by chance. More clearly: the shaved heads do not cause the sales. So, we just spotted the existence of at least two possible categories of correlations: without and with causality. We also agreed that only the second one is interesting, while the other is useless, when not misleading. But let’s dive deeper.

The growth of knowledge and research around language models has been amazing in the past few years. For BERT especially, we have seen some incredible uses for this massive pre-trained language model on tasks like text classification, prediction, and question answering. I’ve recently written about how researchers have probed some of the limitations of BERT when performing certain language tasks. Further, I did some testing on my own, creating a question-answering system to get a feel for how it could be used. It has been great to see and try in practice some of the many capabilities of language models.

In this article I will explore the differences between SQL and NoSQL ETL pipelines. The article will focus on the transform and load techniques, that is, what happens once the data has been read into the application. The extraction part is simple: it involves reading files and some basic data wrangling. By comparing how the datasets are divided post extraction, you can see how the choice of database impacts the application architecture. You can also identify the strengths and weaknesses of choosing particular databases.

Machine learning has more than a few beautiful algorithms that are helping data scientists and researchers transform business models and societies altogether. It offers both supervised and unsupervised algorithms that can be trained as the requirements of the problem dictate. Even though a majority of new-age machine learning applications are moving towards exploring deep learning theories and neural networks, a lot can be done with existing algorithms. One class of such beautiful machine learning algorithms is the support vector machine. Even though people don’t use these much since the advent of neural networks, they still have a lot of scope in research and in getting answers to complex problems. The beauty of support vector machines lies in the fact that they can be reproduced using maximum likelihood estimates and understood in terms of a Bayesian classifier. Tweaks to the optimal Bayesian classifier with the appropriate assumption of a prior can give you a classic support vector machine. Support vector machines are a supervised learning algorithm when it comes to classification, but with progress in research, their scope for unsupervised clustering methods is being explored.

Feature selection is one of the most important things when it comes to feature engineering. We need to reduce the number of features so that we can interpret the model better, make it less computationally stressful to train, remove redundant effects, and make the model generalise better. In some cases feature selection becomes extremely important, because otherwise the input dimensional space is too big, making it difficult for the model to train.
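For a concrete, if simplified, illustration, a univariate filter method in scikit-learn can already shrink the feature space; the dataset and k=10 below are arbitrary choices for the sketch.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)

# Keep the 10 features with the highest ANOVA F-scores
selector = SelectKBest(score_func=f_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)    # (569, 30) -> (569, 10)
```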

Exploratory Data Analysis (EDA) is the first step in your data analysis process, an approach developed by John Tukey in the 1970s. In statistics, exploratory data analysis is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. As the name itself suggests, it is the step in which we explore the data set.
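A minimal first pass with pandas might look like the sketch below; the file name and the choice of summaries are placeholders, not the article's own example.

```python
import pandas as pd

df = pd.read_csv("data.csv")          # hypothetical dataset

print(df.shape)                       # number of rows and columns
print(df.dtypes)                      # column types
print(df.describe(include="all"))     # summary statistics
print(df.isna().sum())                # missing values per column

df.hist(figsize=(10, 8))              # quick look at the distributions
```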

In the real world, missing data is a nearly inevitable problem. Only a special few can swerve it, usually through large investments in data collection. This issue is crucial because the way we handle missing data has a direct impact on our findings, and it also eats into our time. Therefore, it should always be a priority to handle missing data properly, which can be much harder than it seems. The difficulty arises as we realize that not all missing data is created equal just because it all looks the same – a blank space – and that different types of missing data must be handled differently. In this article, we review the types of missing data, as well as basic and advanced methods to tackle them.
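By way of a basic illustration (the advanced methods in the article go further), dropping mostly-empty rows and imputing the rest might look like this; the file name and threshold are assumptions.

```python
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.read_csv("data.csv")                              # hypothetical dataset

# Keep only rows where at least half the columns are present
df = df.dropna(thresh=int(0.5 * df.shape[1]))

# Median-impute the remaining gaps in numeric columns
num_cols = df.select_dtypes("number").columns
df[num_cols] = SimpleImputer(strategy="median").fit_transform(df[num_cols])
```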

Originally created at Airbnb in 2014, Airflow is an open-source data orchestration framework that allows developers to programmatically author, schedule, and monitor data pipelines. Airflow experience is one of the most in-demand technical skills for Data Engineering (another one is Oozie) as it is listed as a skill requirement in many Data Engineer job postings. In this blog post, I will explain core concepts and workflow creation in Airflow, with source code examples to help you create your first data pipeline using Airflow.
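The post has its own examples; as a rough idea of what a first DAG looks like, here is a hedged sketch (operator import paths differ slightly between Airflow 1.x and 2.x, and the task commands are placeholders).

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # airflow.operators.bash in 2.x

default_args = {"owner": "airflow", "retries": 1, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="my_first_pipeline",
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")

    extract >> load   # run extract before load
```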

Scanning all newly published packages on PyPI, I know that the quality is often quite bad. I try to filter out the worst ones and list here the ones that might be worth a look, worth following, or might inspire you in some way.

• nervaluate NER evaluation done right. nervaluate is a python module for evaluating Named Entity Recognition (NER) models as defined in the SemEval 2013 – 9.1 task. The evaluation metrics output by nervaluate go beyond a simple token/tag based schema, and consider different scenarios based on whether all the tokens that belong to a named entity were classified or not, and also whether the correct entity type was assigned.

• kdap KDAP is a package to analyze knowledge data. This library contains a collection of utilities for efficiently processing and analyzing the data of Wiki-based and QnA-based portals (e.g., Wikipedia, Wikia, Stack Exchange, etc.). The functions take Knol-ML files as input. Most of the functions of this library are implemented in such a way that parallel processing can be achieved. The library is currently in the development phase, so feel free to raise an issue. We welcome the community to contribute towards the KDAP code base and the ARK dataset. Please check our webpage for more details (https://…/)

• mgc A comprehensive independence testing package

• nn-numpy A package which contains a simple implementation of a neural network with numpy

• pandas-htools Tools for Exploratory data analysis with Pandas

Lecture Hall Tableaux We introduce lecture hall tableaux, which are fillings of a skew Young diagram satisfying certain conditions. Lecture hall tableaux generalize both lecture hall partitions and anti-lecture hall compositions, and also contain reverse semistandard Young tableaux as a limit case. We show that the coefficients in the Schur expansion of multivariate little $q$-Jacobi polynomials are generating functions for lecture hall tableaux. Using a Selberg-type integral we show that moments of multivariate little $q$-Jacobi polynomials, which are equal to generating functions for lecture hall tableaux of a Young diagram, have a product formula. We also explore various combinatorial properties of lecture hall tableaux. …

Guided Dynamic Routing Previous studies have shown that neural machine translation (NMT) models can benefit from modeling translated (Past) and un-translated (Future) source contents as recurrent states (Zheng et al., 2018). However, the recurrent process is less interpretable. In this paper, we propose to model Past and Future by Capsule Network (Hinton et al.,2011), which provides an explicit separation of source words into groups of Past and Future by the process of parts-to-wholes assignment. The assignment is learned with a novel variant of routing-by-agreement mechanism (Sabour et al., 2017), namely Guided Dynamic Routing, in which what to translate at current decoding step guides the routing process to assign each source word to its associated group represented by a capsule, and to refine the representation of the capsule dynamically and iteratively. Experiments on translation tasks of three language pairs show that our model achieves substantial improvements over both RNMT and Transformer. Extensive analysis further verifies that our method does recognize translated and untranslated content as expected, and produces better and more adequate translations. …

Q-Probabilistic Multiagent Learning (QPML) Ever-increasing urban populations and their material demands have brought unprecedented burdens to cities. Smart cities leverage emerging technologies like the Internet of Things (IoT), Cognitive Radio Wireless Sensor Network (CR-WSN) to provide better QoE and QoS for all citizens. However, resource scarcity is an important challenge in CR-WSN. Generally, this problem is handled by auction theory or game theory. To make CR-WSN nodes intelligent and more autonomous in resource allocation, we propose a multi-agent reinforcement learning (MARL) algorithm to learn the optimal resource allocation strategy in the oligopoly market model. Firstly, we model a multi-agent scenario, in which the primary users (PUs) are the sellers and the secondary users (SUs) are the buyers. Then, we propose the Q-probabilistic multiagent learning (QPML) and apply it to allocate resources in the market. In the multi-agent interactive learning process, the PUs and SUs learn strategies to maximize their benefits and improve spectrum utilization. Experimental results show the efficiency of our QPML approach, which can also converge quickly. …

Convolutional Relational Machine (CRM) We present an end-to-end deep Convolutional Neural Network called Convolutional Relational Machine (CRM) for recognizing group activities that utilizes the information in spatial relations between individual persons in image or video. It learns to produce an intermediate spatial representation (activity map) based on individual and group activities. A multi-stage refinement component is responsible for decreasing the incorrect predictions in the activity map. Finally, an aggregation component uses the refined information to recognize group activities. Experimental results demonstrate the constructive contribution of the information extracted and represented in the form of the activity map. CRM shows advantages over state-of-the-art models on Volleyball and Collective Activity datasets. …

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to ‘buy’ arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. The Measure of Intelligence

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps you find the most expensive lines of code that hurt application performance and keep you up all night troubleshooting, then gives you specific recommendations to fix or improve your code. CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. With CodeGuru, you can find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles. And with low, on-demand pricing, it is inexpensive enough to use for every code review and application you run. CodeGuru supports Java applications today, with support for more languages coming soon. CodeGuru helps you catch problems faster and earlier, so you can build and run better software.

AI is emerging as the most important technology in a new wave of digital innovation that is transforming industries around the world. Businesses in Europe are at the forefront of some of the latest advancements in the field, and European universities are home to the greatest concentration of AI researchers in the world. Every week, new case studies emerge showing the potential opportunities that can arise from greater use of the technology. To fully realize its vision for AI, Europe needs an influx of resources and talent, plus some important policy changes. Join the Center for Data Innovation to discuss why European success in AI is important, how the EU compares to other world leaders today, and what steps European policymakers should take to be more competitive in AI.

There’s no doubt about it, probability and statistics is an enormous field, encompassing topics from the familiar (like the average) to the complex (regression analysis, correlation coefficients and hypothesis testing to name but a few). If you want to be a great data scientist, you have to know some basic statistics. The following picture shows which statistics topics you must know if you’re going to excel in data science.

As deep neural networks are applied to an increasingly diverse set of domains, transfer learning has emerged as a highly popular technique in developing deep learning models. In transfer learning, the neural network is trained in two stages: 1) pretraining, where the network is generally trained on a large-scale benchmark dataset representing a wide diversity of labels/categories (e.g., ImageNet); and 2) fine-tuning, where the pretrained network is further trained on the specific target task of interest, which may have fewer labeled examples than the pretraining dataset. The pretraining step helps the network learn general features that can be reused on the target task.
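A hedged Keras sketch of the two stages described above; the backbone, input size, and number of target classes are illustrative assumptions, not a specific recipe from the article.

```python
import tensorflow as tf

# Stage 1: reuse a network pretrained on ImageNet as a feature extractor
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

# Stage 2: fine-tune a new head on the (smaller) target task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),   # assumed 10 target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(target_train_ds, epochs=5)                 # target data not shown here

# Optionally unfreeze the backbone with a small learning rate and train further
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```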

Companies all over the world across a wide variety of industries have been going through what people are calling a digital transformation. That is, businesses are taking traditional business processes such as hiring, marketing, pricing, and strategy, and using digital technologies to make them 10 times better. Data Science has become an integral part of those transformations. With Data Science, organizations no longer have to make their important decisions based on hunches, best-guesses, or small surveys. Instead, they’re analyzing large amounts of real data to base their decisions on real, data-driven facts. That’s really what Data Science is all about – creating value through data. This trend of integrating data into the core business processes has grown significantly, with an increase in interest by over four times in the past 5 years according to Google Search Trends. Data is giving companies a sharp advantage over their competitors. With more data and better Data Scientists to use it, companies can acquire information about the market that their competitors might not even know existed. It’s become a game of Data or perish.

While AI may be the new electricity, significant challenges remain to realize AI’s potential. Here we examine why data scientists and teams can’t rely on software engineering tools and processes for machine learning.

Deep Learning (DL) models are revolutionizing the business and technology world with jaw-dropping performances in one application area after another – image classification, object detection, object tracking, pose recognition, video analytics, synthetic picture generation – just to name a few. However, they are anything but classical Machine Learning (ML) algorithms/techniques. DL models use millions of parameters and create extremely complex and highly nonlinear internal representations of the images or datasets that are fed to them. Whereas for classical ML, domain experts and data scientists often have to write hand-crafted algorithms to extract and represent high-dimensional features from the raw data, deep learning models automatically extract and work on these complex features.

We venture to suggest a curriculum roadmap after receiving multiple requests for one from academic partners. As a group, we have spent the vast majority of our time in industry, although many of us have spent time in one academic capacity or another. What follows is a set of broad recommendations, and it will inevitably require a lot of adjustments in each implementation. Given that caveat, here are our curriculum recommendations.

We all understand that more data means better AI. That sounds great! But with the recent blast of information, we often end up with a problem of too much data! We need all that data, but it turns out to be too much for our processing. Hence we need to look into ways of streamlining the available data so that it can be compressed without losing value. Dimensionality reduction is an important technique that achieves this end.
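As one familiar example of the technique, PCA in scikit-learn can compress the data while keeping a chosen share of its variance; the dataset and the 95% target are arbitrary for this sketch.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)       # 64-dimensional inputs

pca = PCA(n_components=0.95)              # keep enough components for 95% of the variance
X_compressed = pca.fit_transform(X)

print(X.shape, "->", X_compressed.shape)
print("explained variance kept:", pca.explained_variance_ratio_.sum())
```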

Three weeks into my journey to become a data scientist and I’ve officially been baptized… by fire, that is! I chose to attend Flatiron’s Data Science 15-week bootcamp to transition out of finance. So far, the program has exceeded expectations (and my expectations were high). While the curriculum is rigorous and fast-paced, it’s well constructed, the instructors are dedicated to helping students learn, and my cohort is amazing – everyone is friendly, helpful, smart, and undoubtedly will go on to accomplish great things. This series of blog posts is dedicated to them… Here’s to the future data scientists!

Process capability analysis represents a significant component of the Measure phase of the DMAIC (Define, Measure, Analyze, Improve, Control) cycle during a Six Sigma project. This analysis measures how well a process’s performance fits the customer’s requirements, which are translated into specification limits for the characteristics of interest of the product to be manufactured or produced. The results from this analysis may help industrial engineers identify variation within a process and develop further action plans that lead to better yield, lower variation and fewer defects.
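For reference, the two standard capability indices can be computed in a few lines; the sample data and specification limits below are invented for illustration.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp compares the spec width to process spread; Cpk also penalizes off-centering."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

measurements = np.random.default_rng(1).normal(loc=10.02, scale=0.05, size=200)
print(process_capability(measurements, lsl=9.85, usl=10.15))
```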

Understanding Various Terminologies & Roles in Artificial Intelligence Projects. Artificial Intelligence (AI) is a complex and evolving field. The first challenge an AI aspirant faces is understanding the landscape and how to navigate through it. Consider this: if you are travelling to a new city and you don’t have a map, you will have trouble navigating the city and will need to ask a lot of random people during your travel without knowing how much they know about the place. Similarly, all newcomers to AI have this trouble, and there are two ways to deal with it: arrange a map (or a guide), or travel yourself and learn from experience.

The performance of a deep neural network is highly dependent on its training, and finding better local optimal solutions is the goal of many optimization algorithms. However, existing optimization algorithms show a preference for descent paths that converge slowly and do not seek to avoid bad local optima. In this work, we propose Learning Rate Dropout (LRD), a simple gradient descent technique for training related to coordinate descent. LRD empirically aids the optimizer to actively explore in the parameter space by randomly setting some learning rates to zero; at each iteration, only parameters whose learning rate is not 0 are updated. As the learning rate of different parameters is dropped, the optimizer will sample a new loss descent path for the current update. The uncertainty of the descent path helps the model avoid saddle points and bad local minima. Experiments show that LRD is surprisingly effective in accelerating training while preventing overfitting.

Error-correcting output codes (ECOC) is an ensemble method combining a set of binary classifiers for multi-class learning problems. However, in the traditional ECOC framework, the binary classifiers are trained independently. To explore the interaction between the binary classifiers, we construct an error correction network (ECN) that jointly trains all binary classifiers while maximizing the ensemble diversity to improve its robustness against adversarial attacks. An ECN is built based on a code matrix which is generated by maximizing the error tolerance, i.e., the minimum Hamming distance between any two rows, as well as the ensemble diversity, i.e., the variation of information between any two columns. Though ECN inherently promotes the diversity between the binary classifiers as each ensemble member solves a different classification problem (specified by the corresponding column of the code matrix), we empirically show that the ensemble diversity can be further improved by forcing the weight matrices learned by ensemble members to be orthogonal. The ECN is trained in an end-to-end fashion and can be complementary to other defense approaches including adversarial training. We show empirically that ECN is effective against the state-of-the-art white-box attacks while maintaining good accuracy on normal examples.

Architecture design has become a crucial component of successful deep learning. Recent progress in automatic neural architecture search (NAS) shows a lot of promise. However, discovered architectures often fail to generalize in the final evaluation. Architectures with a higher validation accuracy during the search phase may perform worse in the evaluation. Aiming to alleviate this common issue, we introduce sequential greedy architecture search (SGAS), an efficient method for neural architecture search. By dividing the search procedure into sub-problems, SGAS chooses and prunes candidate operations in a greedy fashion. We apply SGAS to search architectures for Convolutional Neural Networks (CNN) and Graph Convolutional Networks (GCN). Extensive experiments show that SGAS is able to find state-of-the-art architectures for tasks such as image classification, point cloud classification and node classification in protein-protein interaction graphs with minimal computational cost. Please visit https://…/sgas for more information about SGAS.

Processing large complex networks has recently attracted considerable interest. Complex graphs are useful in a wide range of applications from technological networks to biological systems like the human brain. Sometimes these networks are composed of billions of entities that give rise to emerging properties and structures. Analyzing these structures aids us in gaining new insights about our surroundings. As huge networks become abundant, there is a need for scalable algorithms to perform analysis. A prominent example is the PageRank algorithm, which is one of the measures used by web search engines such as Google to rank web pages displayed to the user. In order to find these patterns, massive amounts of data have to be acquired and processed. Designing and evaluating scalable graph algorithms to handle these data sets is a crucial task on the road to understanding the underlying systems. This habilitation thesis is a summary of a broad spectrum of scalable graph algorithms that I developed over the last six years with many coauthors. In general, this research is based on four pillars: multilevel algorithms, practical kernelization, parallelization and memetic algorithms that are highly interconnected. Experiments conducted indicate that our algorithms find better solutions and/or are much more scalable than the previous state-of-the-art.

Deep learning-based models have been very successful in achieving state-of-the-art results in many of the computer vision, speech recognition, and natural language processing tasks in the last few years. These models seem a natural fit for handling the ever-increasing scale of biometric recognition problems, from cellphone authentication to airport security systems. Deep learning-based models have increasingly been leveraged to improve the accuracy of different biometric recognition systems in recent years. In this work, we provide a comprehensive survey of more than 120 promising works on biometric recognition (including face, fingerprint, iris, palmprint, ear, voice, signature, and gait recognition), which deploy deep learning models, and show their strengths and potentials in different applications. For each biometric, we first introduce the available datasets that are widely used in the literature and their characteristics. We will then talk about several promising deep learning works developed for that biometric, and show their performance on popular public benchmarks. We will also discuss some of the main challenges while using these models for biometric recognition, and possible future directions to which research in this area is headed.

This paper studies the multi-cascade influence maximization problem, which explores strategies for launching one information cascade in a social network with multiple existing cascades. With natural extensions to the classic models, we first propose the independent multi-cascade model where the diffusion process is governed by the so-called activation function. We show that the proposed model is sufficiently flexible as it generalizes most of the existing cascade-based models. We then study the multi-cascade influence maximization problem under the designed model and provide approximation hardness under common complexity assumptions, namely Exponential Time Hypothesis and . Given the hardness results, we build a framework for designing heuristic seed selection algorithms with a testable data-dependent approximation ratio. The designed algorithm leverages upper and lower bounds, which reveal the key combinatorial structure behind the multi-cascade influence maximization problem. The performance of the framework is theoretically analyzed and practically evaluated through extensive simulations. The superiority of the proposed solution is supported by encouraging experimental results, in terms of effectiveness and efficiency.

In this paper, we evaluate training of deep recurrent neural networks with half-precision floats. We implement a distributed, data-parallel, synchronous training algorithm by integrating TensorFlow and CUDA-aware MPI to enable execution across multiple GPU nodes and making use of high-speed interconnects. We introduce a learning rate schedule facilitating neural network convergence at up to workers. Strong scaling tests performed on clusters of NVIDIA Pascal P100 GPUs show linear runtime and logarithmic communication time scaling for both single and mixed precision training modes. Performance is evaluated on a scientific dataset taken from the Joint European Torus (JET) tokamak, containing multi-modal time series of sensory measurements leading up to deleterious events called plasma disruptions, and the benchmark Large Movie Review Dataset (IMDB). Half-precision significantly reduces memory and network bandwidth, allowing training of state-of-the-art models with over 70 million trainable parameters while achieving a comparable test set performance as single precision.

Even as deep neural networks have become very effective for tasks in vision and perception, it remains difficult to explain and debug their behavior. In this paper, we present a programmatic and semantic approach to explaining, understanding, and debugging the correct and incorrect behaviors of a neural network based perception system. Our approach is semantic in that it employs a high-level representation of the distribution of environment scenarios that the detector is intended to work on. It is programmatic in that the representation is a program in a domain-specific probabilistic programming language using which synthetic data can be generated to train and test the neural network. We present a framework that assesses the performance of the neural network to identify correct and incorrect detections, extracts rules from those results that semantically characterizes the correct and incorrect scenarios, and then specializes the probabilistic program with those rules in order to more precisely characterize the scenarios in which the neural network operates correctly or not, without human intervention to identify important features. We demonstrate our results using the SCENIC probabilistic programming language and a neural network-based object detector. Our experiments show that it is possible to automatically generate compact rules that significantly increase the correct detection rate (or conversely the incorrect detection rate) of the network and can thus help with debugging and understanding its behavior.

Machine learning has a long history of being applied to networks on multifarious tasks and has played an essential role in data mining. However, the discrete and sparse nature of networks often renders it difficult to apply machine learning directly to them. To circumvent this difficulty, one major school of thought approaches networks with machine learning via network embeddings. On the one hand, network embeddings have achieved huge success on aggregated network data in recent years. On the other hand, learning network embeddings on distributively stored networks remains understudied: to the best of our knowledge, all existing algorithms for learning network embeddings have hitherto been exclusively centralized and thus cannot be applied to such networks. To accommodate distributively stored networks, in this paper we propose a multi-agent model. Under this model, we develop the multi-agent network embedding learning algorithm (MANELA) for learning network embeddings. We demonstrate MANELA’s advantages over other existing centralized network embedding learning algorithms both theoretically and experimentally. Finally, we further our understanding of MANELA via visualization and exploration of its relationship to DeepWalk.

In this paper, we introduce Anomaly Contribution Explainer or ACE, a tool to explain security anomaly detection models in terms of the model features through a regression framework, and its variant, ACE-KL, which highlights the important anomaly contributors. ACE and ACE-KL provide insights in diagnosing which attributes significantly contribute to an anomaly by building a specialized linear model to locally approximate the anomaly score that a black-box model generates. We conducted experiments with these anomaly detection models to detect security anomalies on both synthetic data and real data. In particular, we evaluate performance on three public data sets: CERT insider threat, netflow logs, and Android malware. The experimental results are encouraging: our methods consistently identify the correct contributing feature in the synthetic data where ground truth is available; similarly, for real data sets, our methods point a security analyst in the direction of the underlying causes of an anomaly, including in one case leading to the discovery of previously overlooked network scanning activity. We have made our source code publicly available.

We propose a novel model for a topic-aware chatbot by combining the traditional Recurrent Neural Network (RNN) encoder-decoder model with a topic attention layer based on Nonnegative Matrix Factorization (NMF). After learning topic vectors from an auxiliary text corpus via NMF, the decoder is trained so that it is more likely to sample response words from the most correlated topic vectors. One of the main advantages in our architecture is that the user can easily switch the NMF-learned topic vectors so that the chatbot obtains desired topic-awareness. We demonstrate our model by training on a single conversational data set which is then augmented with topic matrices learned from different auxiliary data sets. We show that our topic-aware chatbot not only outperforms the non-topic counterpart, but also that each topic-aware model qualitatively and contextually gives the most relevant answer depending on the topic of question.

Notwithstanding the tremendous progress that is taking place in spoken language technology, effective speech-based human-robot interaction still raises a number of important challenges. Not only do the fields of robotics and spoken language technology present their own special problems, but their combination raises an additional set of issues. In particular, there is a large gap between the formulaic speech that typifies contemporary spoken dialogue systems and the flexible nature of human-human conversation. It is pointed out that grounded and situated speech-based human-robot interaction may lead to deeper insights into the pragmatics of language usage, thereby overcoming the current `habitability gap’.

Deep metric learning has yielded impressive results in tasks such as clustering and image retrieval by leveraging neural networks to obtain highly discriminative feature embeddings, which can be used to group samples into different classes. Much research has been devoted to the design of smart loss functions or data mining strategies for training such networks. Most methods consider only pairs or triplets of samples within a mini-batch to compute the loss function, which is commonly based on the distance between embeddings. We propose Group Loss, a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group while promoting, at the same time, low-density regions amongst data points belonging to different groups. Guided by the smoothness assumption that ‘similar objects should belong to the same group’, the proposed loss trains the neural network for a classification task, enforcing a consistent labelling amongst samples within a class. We show state-of-the-art results on clustering and image retrieval on several datasets, and show the potential of our method when combined with other techniques such as ensembles

In pattern recognition or machine learning, it is a very fundamental task to find nearest neighbors of a given point. All the methods for the task work basically by comparing the given point to all the points in the data set. That is why the computational cost increases with the number of data points. However, the human visual system seems to work in a different way. When the human visual system tries to find the neighbors of one point on a map, it directly focuses on the area around the point and actively searches the neighbors by looking or zooming in and out around the point. In this paper, we propose an innovative search method for nearest neighbors, which seems very similar to how human visual system works on the task.

Pointer analysis is a foundational analysis leveraged by various static analyses. Therefore, it gathered wide attention in research for decades. Some pointer analysis frameworks are based on succinct declarative specifications. However, these tools are heterogeneous in terms of the underlying intermediate representation (IR), heap abstraction, and programming methodology. This situation complicates a fair comparison of these frameworks and thus hinders further research. Consequently, the literature lacks an evaluation of the strengths and weaknesses of these tools. In this work, we evaluate two major frameworks for pointer analysis, WALA and Doop, on the DaCapo set of benchmarks. We compare the pointer analyses available in Wala and Doop, and conclude that—even though based on a declarative specification—Doop provides a better pointer analysis than Wala in terms of precision and scalability. We also compare the two IRs used in Doop, i.e., Jimple from the Soot framework and IR from the Wala framework. Our evaluation shows that in the majority of the benchmarks Soot’s IR gives a more precise and scalable pointer analysis. Finally, we propose a micro-benchmark PointerBench, for which we manually validate the points-to statistics to evaluate the results of these tools.

Estimating the travel time of any route is of great importance for trip planners, traffic operators, online taxi dispatching and ride-sharing platforms, and navigation provider systems. With the advance of technology, many traveling cars, including online taxi dispatch systems’ vehicles, are equipped with Global Positioning System (GPS) devices that can report the location of the vehicle every few seconds. This paper uses GPS data and Matrix Factorization techniques to estimate the travel times on all road segments and time intervals simultaneously. We aggregate GPS data into a matrix, where each cell of the original matrix contains the average vehicle speed for a segment and a specific time interval. One of the problems with this matrix is its high sparsity. We use the Alternating Least Squares (ALS) method along with a regularization term to factorize the matrix. Since this approach can solve the sparsity problem that arises from the absence of cars in many road segments in a specific time interval, matrix factorization is suitable for estimating the travel time. Our comprehensive evaluation results, using real data provided by one of the largest online taxi dispatching systems in Iran, show the strength of our proposed method.

This paper studies the optimality of kernel methods in high-dimensional data clustering. Recent works have studied the large sample performance of kernel clustering in the high-dimensional regime, where Euclidean distance becomes less informative. However, it is unknown whether popular methods, such as kernel k-means, are optimal in this regime. We consider the problem of high-dimensional Gaussian clustering and show that, with the exponential kernel function, the sufficient conditions for partial recovery of clusters using the NP-hard kernel k-means objective matches the known information-theoretic limit up to a factor of for large . It also exactly matches the known upper bounds for the non-kernel setting. We also show that a semi-definite relaxation of the kernel k-means procedure matches up to constant factors, the spectral threshold, below which no polynomial-time algorithm is known to succeed. This is the first work that provides such optimality guarantees for the kernel k-means as well as its convex relaxation. Our proofs demonstrate the utility of the less known polynomial concentration results for random variables with exponentially decaying tails in a higher-order analysis of kernel methods.

Learning the underlying patterns in the data goes beyond instance-based generalization to some external knowledge represented in structured graphs or networks. Deep Learning (DL) has shown significant advances in probabilistically learning latent patterns in the data using a multi-layered network of computational nodes (i.e. neurons/hidden units). However, with the tremendous amount of training data, uncertainty in generalization on domain-specific tasks, and delta improvement with an increase in complexity of models seem to raise a concern on the features learned by the model. As incorporation of domain specific knowledge will aid in supervising the learning of features for the model, infusion of knowledge from knowledge graphs within hidden layers will further enhance the learning process. Although much work remains, we believe that KGs will play an increasing role in developing hybrid neuro-symbolic intelligent systems (that is bottom up deep learning with top down symbolic computing) as well as in building explainable AI systems for which KGs will provide a scaffolding for punctuating neural computing. In this position paper, we describe our motivation for such hybrid approach and a framework that combines knowledge graph and neural networks.

Visual target tracking is one of the most sought-after yet challenging research topics in computer vision. Given the ill-posed nature of the problem and its popularity in a broad range of real-world scenarios, a number of large-scale benchmark datasets have been established, on which considerable methods have been developed and demonstrated with significant progress in recent years — predominantly by recent deep learning (DL)-based methods. This survey aims to systematically investigate the current DL-based visual tracking methods, benchmark datasets, and evaluation metrics. It also extensively evaluates and analyzes the leading visual tracking methods. First, the fundamental characteristics, primary motivations, and contributions of DL-based methods are summarized from six key aspects of: network architecture, network exploitation, network training for visual tracking, network objective, network output, and the exploitation of correlation filter advantages. Second, popular visual tracking benchmarks and their respective properties are compared, and their evaluation metrics are summarized. Third, the state-of-the-art DL-based methods are comprehensively examined on a set of well-established benchmarks of OTB2013, OTB2015, VOT2018, and LaSOT. Finally, by conducting critical analyses of these state-of-the-art methods both quantitatively and qualitatively, their pros and cons under various common scenarios are investigated. It may serve as a gentle use guide for practitioners to weigh on when and under what conditions to choose which method(s). It also facilitates a discussion on ongoing issues and sheds light on promising research directions.

In this paper, we introduce the prior knowledge, multi-scale structure, into self-attention modules. We propose a Multi-Scale Transformer which uses multi-scale multi-head self-attention to capture features from different scales. Based on the linguistic perspective and the analysis of pre-trained Transformer (BERT) on a huge corpus, we further design a strategy to control the scale distribution for each layer. Results of three different kinds of tasks (21 datasets) show our Multi-Scale Transformer outperforms the standard Transformer consistently and significantly on small and moderate size datasets.

In this paper, we evaluate Apache Spark for a data-intensive machine learning problem. Our use case focuses on policy diffusion detection across the state legislatures in the United States over time. Previous work on policy diffusion has been unable to make an all-pairs comparison between bills due to computational intensity. As a substitute, scholars have studied single topic areas. We provide an implementation of this analysis workflow as a distributed text processing pipeline with Spark dataframes and Scala application programming interface. We discuss the challenges and strategies of unstructured data processing, data formats for storage and efficient access, and graph processing at scale.

The rapid growth of deep learning applications in real life is accompanied by severe safety concerns. To mitigate this uneasy phenomenon, much research has been done providing reliable evaluations of the fragility level in different deep neural networks. Apart from devising adversarial attacks, quantifiers that certify safeguarded regions have also been designed in the past five years. The summarizing work of Salman et al. unifies a family of existing verifiers under a convex relaxation framework. We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints. Given this theoretical result, the computationally expensive linear programming based method is shown to be unnecessary. We then propose an optimization-based approach FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. Extensive experiments on various networks trained individually verify the effectiveness of FROWN in safeguarding larger robust regions.

With rapid technological advancements within the domain of Internet of Things (IoT), strong trends have emerged which indicate a rapid growth in the number of smart devices connected to IoT networks and this growth cannot be supported by traditional cloud computing platforms. In response to the increased capacity of data being transferred over networks, the edge and fog computing paradigms have emerged as extremely viable frameworks that shift computational and storage resources towards the edge of the network, thereby migrating processing power from centralized cloud servers to distributed LAN resources and powerful embedded devices within the network. These computing paradigms, therefore, have the potential to support massive IoT networks of the future and have also fueled the advancement of IoT systems within industrial settings, leading to the creation of the Industrial Internet of Things (IIoT) technology that is revolutionizing industrial processes in a variety of domains. In this paper, we elaborate on the impact of edge and fog computing paradigms on IIoT. We also highlight how edge and fog computing are poised to bring about a turnaround in several industrial applications through a use-case approach. Finally, we conclude with the current issues and challenges faced by these paradigms in IIoT and suggest some research directions that should be followed to solve these problems and accelerate the adaptation of edge and fog computing in IIoT.

Matrix completion is widely used in machine learning, engineering control, image processing, and recommendation systems. Currently, a popular algorithm for matrix completion is Singular Value Threshold (SVT). In this algorithm, the singular value threshold should be set first. However, in a recommendation system, the dimension of the preference matrix keeps changing. Therefore, it is difficult to directly apply SVT. In addition, what the users of a recommendation system need is a sequence of personalized recommended results rather than the estimation of their scores. According to the above ideas, this paper proposes a novel approach named probability completion model (PCM). By reducing the data dimension and exploiting the transitivity of the similarity matrix and singular value decomposition, this approach quickly obtains a completion matrix with the same probability distribution as the original matrix. The approach greatly reduces the computation time at the cost of some accuracy, and can quickly obtain a low-rank similarity matrix with data trend approximation properties. The experimental results show that PCM can quickly generate a completion matrix with similar data trends to the original matrix. The LCS score and efficiency of PCM are both higher than those of SVT.

The problem of hyperparameter optimization exists widely in the real life and many common tasks can be transformed into it, such as neural architecture search and feature subset selection. Without considering various constraints, the existing hyperparameter tuning techniques can solve these problems effectively by traversing as many hyperparameter configurations as possible. However, because of the limited resources and budget, it is not feasible to evaluate so many kinds of configurations, which requires us to design effective algorithms to find a best possible hyperparameter configuration with a finite number of configuration evaluations. In this paper, we simulate human thinking processes and combine the merit of the existing techniques, and thus propose a new algorithm called ExperienceThinking, trying to solve this constrained hyperparameter optimization problem. In addition, we analyze the performances of 3 classical hyperparameter optimization algorithms with a finite number of configuration evaluations, and compare with that of ExperienceThinking. The experimental results show that our proposed algorithm provides superior results and has better performance.

Application of K-Means algorithm is restricted by the fact that the number of clusters should be known beforehand. Previously suggested methods to solve this problem are either ad hoc or require parametric assumptions and complicated calculations. The proposed method aims to solve this conundrum by considering cluster hypersphere density as the factor to determine the number of clusters in the given dataset. The density is calculated by assuming a hypersphere around the cluster centroid for n-different number of clusters. The calculated values are plotted against their corresponding number of clusters and then the optimum number of clusters is obtained after assaying the elbow region of the graph. The method is simple and easy to comprehend and provides robust and reliable results.
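A hedged reading of the procedure in code: for each candidate number of clusters, fit K-Means, treat every cluster as a hypersphere around its centroid, average a simple density measure, and inspect the elbow of the resulting curve. The radius and volume definitions here are my assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)   # toy data
d = X.shape[1]

ks = range(1, 11)
densities = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    cluster_densities = []
    for j in range(k):
        pts = X[km.labels_ == j]
        radius = np.max(np.linalg.norm(pts - km.cluster_centers_[j], axis=1))
        cluster_densities.append(len(pts) / max(radius, 1e-9) ** d)  # points per r^d
    densities.append(np.mean(cluster_densities))

for k, dens in zip(ks, densities):
    print(k, round(dens, 3))     # look for the elbow when plotted against k
```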

“Data Science Is Trial And Error, It’s Research And Recursive, It’s Practical And Theoretical, It Requires Domain Knowledge, It Boosts Your Strategic Skills, You Learn About Statistics And Master Programming Skills. But Most Importantly, It Teaches You To Remain Patient As You Are Always Close To Finding A Better Answer.” Farhad Malik (December 28, 2018)

CuCoTrack This paper introduces CuCoTrack, a cuckoo hash based data structure designed to efficiently implement connection tracking. The proposed scheme exploits the fact that queries always match one existing connection to compress the 5-tuple that identifies the connection. This reduces significantly the amount of memory needed to store the connections and also the memory bandwidth needed for lookups. CuCoTrack uses a dynamic fingerprint to avoid collisions thus ensuring that queries are completed in at most two memory accesses and facilitating a hardware implementation. The proposed scheme has been analyzed theoretically and validated by simulation. The results show that using 16 bits for the fingerprint is enough to avoid collisions in practical configurations. …

Fast Supervised Discrete Hashing (FSDH) Learning-based hashing algorithms are “hot topics” because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called “fast supervised discrete hashing” (FSDH) based on “supervised discrete hashing” (SDH). Regressing the training examples (or hash code) to the corresponding class labels is widely used in ordinary least squares regression. Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem – discrete optimization for binary hash codes – solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single rather than iterative hash code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH for solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12-times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 database, and FSDH is about 151-times faster than FastHash when the number of hashing bits is 64 on the MNIST database. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods. …

Predictive Quality and Maintenance (PQM) PQM solutions, which harness data gathered by both the Internet of Things (IoT) and data from traditional legacy systems, focus on detecting and addressing quality and maintenance issues before they turn into serious problems, for example problems that can cause unplanned downtime. …

Symbolic Regression Symbolic regression is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity. No particular model is provided as a starting point to the algorithm. Instead, initial expressions are formed by randomly combining mathematical building blocks such as mathematical operators, analytic functions, constants, and state variables. (Usually, a subset of these primitives will be specified by the person operating it, but that’s not a requirement of the technique.) New equations are then formed by recombining previous equations, using genetic programming. By not requiring a specific model to be specified, symbolic regression isn’t affected by human bias, or unknown gaps in domain knowledge. It attempts to uncover the intrinsic relationships of the dataset, by letting the patterns in the data itself reveal the appropriate models, rather than imposing a model structure that is deemed mathematically tractable from a human perspective. The fitness function that drives the evolution of the models takes into account not only error metrics (to ensure the models accurately predict the data), but also special complexity measures, thus ensuring that the resulting models reveal the data’s underlying structure in a way that’s understandable from a human perspective. This facilitates reasoning and favors the odds of getting insights about the data-generating system. Symbolic Regression, Genetic Programming… or if Kepler had R …
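As a hedged illustration of the idea, the third-party gplearn package (my choice of tool, not one named in the text) evolves expressions with genetic programming; the toy target function is invented.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 - X[:, 1] + 0.5                 # hidden "true" relationship

est = SymbolicRegressor(population_size=1000,
                        generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01,   # complexity penalty in the fitness
                        random_state=0)
est.fit(X, y)
print(est._program)                              # best evolved expression
```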