R Packages worth a look

Linear and Smooth Predictor Modelling with Penalisation and Variable Selection (plsmselect)
Fit a model with potentially many linear and smooth predictors. Interaction effects can also be quantified. Variable selection is done using penalisati …

Render a Twitter Status in R Markdown Pages (twitterwidget)
Include Twitter status widgets in HTML pages created using R Markdown. The package uses the Twitter JavaScript APIs to embed in your document Twitt …

Simulating the Development of h-Index Values (hindex)
The h-index and h-alpha are bibliometric indicators. This package provides functions to simulate how these indicators may develop over time for a given s …
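The idea behind such simulations is easy to sketch. The following is a minimal, hypothetical Python illustration (not the hindex package's actual API): each year a scientist publishes new papers, every existing paper accrues a random number of citations, and the h-index is recomputed.

```python
import random

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def simulate_h_trajectory(years=10, papers_per_year=3, cite_rate=2, seed=42):
    """Each year: publish new papers, then every existing paper accrues a
    random number of citations; record the resulting h-index."""
    rng = random.Random(seed)
    citations = []
    trajectory = []
    for _ in range(years):
        citations.extend([0] * papers_per_year)                  # new papers
        citations = [c + rng.randint(0, 2 * cite_rate)           # new citations
                     for c in citations]
        trajectory.append(h_index(citations))
    return trajectory

traj = simulate_h_trajectory()   # h-index can only grow as citations accrue
```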

‘HJ Biplot’ using Different Ways of Penalization (SparseBiplots)
Contains a set of functions that allow one to represent multivariate data on a subspace of low dimension, in such a way that most of the variability of the info …

If you did not already know

K* Nearest Neighbors Algorithm google
Prediction with the k* nearest neighbor algorithm, based on a publication by Anava and Levy (2016) <arXiv:1701.07266>. …
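As a rough illustration only, the sketch below implements plain distance-weighted kNN regression in Python. It is not the adaptive k* scheme of Anava and Levy, which additionally chooses both the number of neighbors and their weights per query point.

```python
import math

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Distance-weighted kNN regression: average the targets of the k nearest
    training points, each weighted by inverse distance to the query."""
    neighbors = sorted(((math.dist(xi, x), yi)
                        for xi, yi in zip(X_train, y_train)),
                       key=lambda dy: dy[0])[:k]
    weights = [1.0 / (d + 1e-12) for d, _ in neighbors]   # avoid div-by-zero
    return sum(w * y for w, (_, y) in zip(weights, neighbors)) / sum(weights)
```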

Posterior-Based Proposal (PBP) google
Markov chain Monte Carlo (MCMC) is widely used for Bayesian inference in models of complex systems. Performance, however, is often unsatisfactory in models with many latent variables due to so-called poor mixing, necessitating the development of application-specific implementations. This limits rigorous use of real-world data to inform development and testing of models in applications ranging from statistical genetics to finance. This paper introduces ‘posterior-based proposals’ (PBPs), a new type of MCMC update applicable to a huge class of statistical models (whose conditional dependence structures are represented by directed acyclic graphs). PBPs generate large joint updates in parameter and latent variable space, whilst retaining good acceptance rates (typically 33 percent). Evaluation against standard approaches (Gibbs or Metropolis-Hastings updates) shows performance improvements by a factor of 2 to over 100 for widely varying model types: an individual-based model for disease diagnostic test data, a financial stochastic volatility model, and mixed and generalised linear mixed models used in statistical genetics. PBPs are competitive with similarly targeted state-of-the-art approaches such as Hamiltonian MCMC and particle MCMC, and importantly work under scenarios where these approaches do not. PBPs therefore represent an additional general purpose technique that can be usefully applied in a wide variety of contexts. …
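For reference, the standard Metropolis-Hastings update that PBPs are benchmarked against can be sketched in a few lines of Python (a toy random-walk sampler on a one-dimensional posterior; PBPs themselves additionally exploit the model's DAG structure):

```python
import math, random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        if math.log(rng.random() + 1e-300) < lp_new - lp:  # accept/reject
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# toy target: a standard normal posterior, log p(x) = -x^2 / 2 + const
draws = metropolis_hastings(lambda t: -0.5 * t * t, x0=0.0)
mean = sum(draws) / len(draws)
```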

Deep Genetic Network google
Optimizing a neural network’s performance is a tedious and time-consuming process, and this iterative process has no single defined solution that works for all problems. Optimization can be roughly categorized into architecture and hyperparameter optimization. Many algorithms have been devised to address this problem. In this paper we introduce a neural network architecture (Deep Genetic Network) which optimizes its parameters during training based on its fitness. Deep Genetic Net uses genetic algorithms along with deep neural networks to address the hyperparameter optimization problem. This approach uses ideas such as mating and mutation, which are key to genetic algorithms, and which help the neural net architecture learn to optimize its hyperparameters by itself rather than depending on a person to explicitly set the values. Using genetic algorithms for this problem proved to work exceptionally well when given enough time to train the network. The proposed architecture is found to work well in optimizing hyperparameters in affine, convolutional and recurrent layers, proving to be a good choice for conventional supervised learning tasks. …
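A minimal sketch of the underlying idea, assuming a toy fitness surface in place of an actual network-training loop (the paper's Deep Genetic Net evolves real hyperparameters during training; everything below is illustrative only):

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30,
                   mutation_rate=0.2, seed=1):
    """Toy genetic algorithm over real-valued hyperparameters: keep the
    fittest half (selection), recombine pairs of survivors (mating), and
    randomly reset some genes (mutation)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # fittest first
        parents = pop[: pop_size // 2]             # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)          # mating: uniform crossover
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            for i in range(dim):                   # mutation: random reset
                if rng.random() < mutation_rate:
                    child[i] = rng.uniform(*bounds[i])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# toy "validation accuracy" surface peaking at lr = 0.1, momentum = 0.9
fit = lambda h: -((h[0] - 0.1) ** 2 + (h[1] - 0.9) ** 2)
best = genetic_search(fit, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Because the fittest parents are carried over unchanged each generation, the best individual never gets worse, and crossover lets a good value for one hyperparameter combine with a good value for another.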

Orthogonal Generative Adversarial Network (O-GAN) google
In this paper, we propose Orthogonal Generative Adversarial Networks (O-GANs). We decompose the network of the discriminator orthogonally and add an extra loss to the objective of common GANs, which forces the discriminator to become an effective encoder. The same extra loss can be embedded into any kind of GAN with almost no increase in computation. Furthermore, we discuss the principle of our method, which relates to fully exploiting the remaining degrees of freedom of the discriminator. To our knowledge, our solution is the simplest approach to train a generative adversarial network with auto-encoding ability. …

Document worth reading: “Deep Learning-based Sequential Recommender Systems: Concepts, Algorithms, and Evaluations”

In the field of sequential recommendation, deep learning methods have received a lot of attention in the past few years and surpassed traditional models such as Markov chain-based and factorization-based ones. However, DL-based methods also have some critical drawbacks, such as insufficient modeling of user representation and failing to distinguish the different types of interactions (i.e., user behavior) among users and items. With this in view, this survey focuses on DL-based sequential recommender systems by taking the aforementioned issues into consideration. Specifically, we illustrate the concept of sequential recommendation, propose a categorization of existing algorithms in terms of three types of behavioral sequence, summarize the key factors affecting the performance of DL-based models, and conduct corresponding evaluations to demonstrate the effects of these factors. We conclude this survey by systematically outlining future directions and challenges in this field. Deep Learning-based Sequential Recommender Systems: Concepts, Algorithms, and Evaluations

Let’s get it right

Article: Facebook vs. EU Artificial Intelligence and Data Politics

This article is a summary of the paper by the European Union Agency for Fundamental Rights (FRA) called Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights. I then proceed to look at the recent move by Facebook in its data politics; statements made by Zuckerberg; and their recent hiring of former Deputy Prime Minister Nick Clegg as head of global policy and communications. It is my wish that this makes EU policy more comprehensible and gives you an overview of a few actions taken by Facebook in this regard.


Article: Towards Social Data Science

Combining social science and data science is not a new approach, yet after several revelations (and sizeable fines), large technology companies are waking up to discover where they are situated. It seems research institutes, particularly in Europe, are happy to facilitate this shift. This article offers (1) a broad definition of data science; (2) a rapid look at social data science; and (3) a surface-level look at how new, in relative terms, the discipline of social data science is at this moment.


Article: AI: Almost Immortal

Healthcare’s AI revolution is changing the way we think about age-related diseases, even aging itself. We are in the midst of an epidemic. Regardless of your family history, race, or geography, there is a disease that will befall each and every one of us. You can hide in the mountains of Siberia, but the disease will still reach you because it’s not contagious. It’s followed humanity throughout time, and will continue to do so into the foreseeable future despite our recent attempts to forestall it. That disease is called aging.


Article: Collective Transparency

As our privacy continues to be challenged by the endless pursuit of data, does collective transparency offer a solution?


Article: Why AI Must Be Ethical – And How We Make It So

It must be said that AI is wonderful when properly implemented. It must also be said that AI is frightening when unregulated. AI trailblazers are working towards establishing AI ethics – with varying success. Some early attempts have failed, such as Google’s attempt at establishing an AI ethics board earlier this year, which was dissolved after just a week. Instead, I argue that the future of establishing ethical AI lies in collaboration. For instance, the European Commission is inviting Europeans to discuss ethical AI. In my opinion, inviting a broad range of individuals and entities to establish ethical guidelines is the best approach to handling AI ethics. Hopefully, such initiatives will start to appear at greater scale in the near future. In order to ensure AI becomes – and stays – ethical, we must achieve diversified ethical boards through broad, inclusive discussions. Because someday soon, AI will decide whether you’re a criminal. And all you can do is hope the AI makes the right call.


Article: A Unified Framework of Five Principles for AI in Society

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question: ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.


Article: What Kinds of Intelligent Machines Really Make Life Better?

Michael Jordan’s article on artificial intelligence (AI) eloquently articulates how far we are from understanding human-level intelligence, much less recreating it through AI, machine learning, and robotics. The very premise that intelligent machines doing our work will make our lives better may be flawed. Evidence from neuroscience, cognitive science, health sciences, and gerontology shows that human wellbeing and longevity, our health and wellness, fundamentally hinge on physical activity, social connectedness, and a sense of purpose. Therefore, we may need very different types of AI from those currently in development to truly improve human quality of life at the individual and societal levels.


Article: Microsoft invests in and partners with OpenAI to support us building beneficial AGI

Microsoft is investing $1 billion in OpenAI to support us building artificial general intelligence (AGI) with widely distributed economic benefits. We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider – so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems.

Finding out why

Paper: Consistency and matching without replacement

The paper demonstrates that the matching estimator is not generally consistent for the average treatment effect of the treated when the matching is done without replacement using propensity scores. To achieve consistency, practitioners must either assume that no unit exists with a propensity score greater than one-half or assume that there is no confounding among such units. Illustrations suggest that the result applies also to matching using other metrics as long as it is done without replacement.
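The estimator under study can be illustrated with a simple greedy matcher (a simplified stand-in, not the paper's exact procedure): each treated unit takes the closest unused control by propensity score. Without replacement, controls become scarce in regions where propensity scores exceed one-half, which is why the abstract's condition matters.

```python
def greedy_match_without_replacement(treated_ps, control_ps):
    """Match each treated unit to the nearest *unused* control by propensity
    score; once matched, a control cannot be reused (no replacement)."""
    used = set()
    pairs = []
    for i, p in enumerate(treated_ps):
        candidates = [(abs(q - p), j) for j, q in enumerate(control_ps)
                      if j not in used]
        if not candidates:
            break
        _, best = min(candidates)
        used.add(best)
        pairs.append((i, best))
    return pairs

pairs = greedy_match_without_replacement([0.8, 0.6], [0.55, 0.75, 0.1])
# treated 0 (ps 0.8) takes control 1 (ps 0.75); treated 1 then takes control 0
```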


Paper: Assessing Treatment Effect Variation in Observational Studies: Results from a Data Challenge

A growing number of methods aim to assess the challenging question of treatment effect variation in observational studies. This special section of ‘Observational Studies’ reports the results of a workshop conducted at the 2018 Atlantic Causal Inference Conference designed to understand the similarities and differences across these methods. We invited eight groups of researchers to analyze a synthetic observational data set that was generated using a recent large-scale randomized trial in education. Overall, participants employed a diverse set of methods, ranging from matching and flexible outcome modeling to semiparametric estimation and ensemble approaches. While there was broad consensus on the topline estimate, there were also large differences in estimated treatment effect moderation. This highlights the fact that estimating varying treatment effects in observational studies is often more challenging than estimating the average treatment effect alone. We suggest several directions for future work arising from this workshop.


Paper: Genome-wide Causation Studies of Complex Diseases

Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the signals identified by association analysis may not have specific pathological relevance to diseases, so that a large fraction of disease-causing genetic variants is still hidden. Association is used to measure dependence between two variables or two sets of variables. Genome-wide association studies test association between a disease and SNPs (or other genetic variants) across the genome. Association analysis may detect superficial patterns between disease and genetic variants. Association signals provide limited information on the causal mechanism of diseases. The use of association analysis as a major analytical platform for genetic studies of complex diseases is a key issue that hampers discovery of the mechanism of diseases, calling into question the ability of GWAS to identify loci underlying diseases. It is time to move beyond association analysis toward techniques enabling the discovery of the underlying causal genetic structures of complex diseases. To achieve this, we propose the concept of genome-wide causation studies (GWCS) as an alternative to GWAS and develop additive noise models (ANMs) for genetic causation analysis. Type I error rates and power of the ANMs to test for causation are presented. We conduct GWCS of schizophrenia. Both simulation and real data analysis show that the proportion of the overlapped association and causation signals is small. Thus, we hope that our analysis will stimulate discussion of GWAS and GWCS.
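The additive noise model (ANM) idea can be illustrated with a toy, assumption-laden sketch: regress each variable on the other non-parametrically and check in which direction the residual noise looks independent of the input. The crude heteroscedasticity score below is only a stand-in; real ANM tests use proper independence measures such as HSIC.

```python
import random

def binned_residuals(x, y, bins=10):
    """Crude nonparametric regression of y on x: subtract the per-bin mean
    of y within equal-width bins of x; return residuals grouped by bin."""
    lo, hi = min(x), max(x)
    groups = [[] for _ in range(bins)]
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / (hi - lo + 1e-12) * bins), bins - 1)
        groups[b].append(yi)
    out = []
    for g in groups:
        if len(g) > 1:
            m = sum(g) / len(g)
            out.append([v - m for v in g])
    return out

def hetero_score(resid_groups):
    """Spread of the residual variance across bins: a rough proxy for
    dependence between noise and input (low in the causal direction)."""
    variances = [sum(r * r for r in g) / len(g) for g in resid_groups]
    m = sum(variances) / len(variances)
    return sum((v - m) ** 2 for v in variances) / len(variances)

rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
y = [xi ** 3 + rng.uniform(-0.1, 0.1) for xi in x]   # ground truth: x -> y

forward = hetero_score(binned_residuals(x, y))    # regress effect on cause
backward = hetero_score(binned_residuals(y, x))   # regress cause on effect
# the true direction should leave more homogeneous residual noise
```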


Article: Introducing the do-sampler for causal inference

How long should an online article title be? There’s a blog here citing an old post from 2013 which shows a nice plot for average click-through rate (CTR) and title length.


Paper: Feature Selection via Mutual Information: New Theoretical Insights

Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression/classification errors achieved by different subsets of features. Leveraging these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods.
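A simplified sketch of MI-based forward selection with a stopping threshold, using marginal (not conditional) mutual information on discrete features; the paper's bound is on conditional MI, so this is only a stand-in for the flavor of the method:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def forward_select(features, target, threshold=0.01):
    """Greedy forward selection: repeatedly add the feature with the highest
    MI with the target; stop when the best remaining score falls below the
    threshold."""
    remaining = dict(features)
    selected = []
    while remaining:
        name, score = max(((k, mutual_information(v, target))
                           for k, v in remaining.items()),
                          key=lambda kv: kv[1])
        if score < threshold:
            break
        selected.append(name)
        del remaining[name]
    return selected
```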


Paper: A discriminative approach for finding and characterizing positivity violations using decision trees

The assumption of positivity in causal inference (also known as common support and covariate overlap) is necessary to obtain valid causal estimates. Therefore, confirming it holds in a given dataset is an important first step of any causal analysis. Most common methods to date are insufficient for discovering non-positivity, as they do not scale for modern high-dimensional covariate spaces, or they cannot pinpoint the subpopulation violating positivity. To overcome these issues, we suggest harnessing decision trees for detecting violations. By dividing the covariate space into mutually exclusive regions, each with maximized homogeneity of treatment groups, decision trees can be used to automatically detect subspaces violating positivity. By augmenting the method with an additional random forest model, we can quantify the robustness of the violation within each subspace. This solution is scalable and provides an interpretable characterization of the subspaces in which violations occur. We provide a visualization of the stratification rules that define each subpopulation, combined with the severity of positivity violation within it. We also provide an interactive version of the visualization that allows a deeper dive into the properties of each subspace.
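A one-level stratification sketch of the idea (a real implementation would grow a full decision tree and add the paper's random-forest robustness check): split on each covariate and flag strata where one treatment arm is nearly absent.

```python
def flag_positivity_violations(rows, treatment, eps=0.05):
    """One-level stratification: split each covariate at its median and flag
    strata whose treated share is below eps or above 1 - eps (i.e. one of
    the treatment arms is essentially absent there)."""
    flags = []
    for j in range(len(rows[0])):
        vals = sorted(r[j] for r in rows)
        median = vals[len(vals) // 2]
        for side in ("low", "high"):
            group = [t for r, t in zip(rows, treatment)
                     if (r[j] < median) == (side == "low")]
            if not group:
                continue
            share = sum(group) / len(group)
            if share < eps or share > 1 - eps:
                flags.append((j, side, share))
    return flags
```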

If you did not already know

Contextual Bilateral Loss (CoBi) google
This paper shows that when applying machine learning to digital zoom for photography, it is beneficial to use real, RAW sensor data for training. Existing learning-based super-resolution methods do not use real sensor data, instead operating on RGB images. In practice, these approaches result in loss of detail and accuracy in their digitally zoomed output when zooming in on distant image regions. We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality. The key barrier to using real sensor data for training is that ground truth high-resolution imagery is missing. We show how to obtain the ground-truth data with optically zoomed images and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss (CoBi) that delivers critical robustness to mild misalignment in input-output image pairs. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom. …

Uncertainty-Aware Feature Selection (UAFS) google
Missing data are a concern in many real world data sets and imputation methods are often needed to estimate the values of missing data, but data sets with excessive missingness and high dimensionality challenge most approaches to imputation. Here we show that appropriate feature selection can be an effective preprocessing step for imputation, allowing for more accurate imputation and subsequent model predictions. The key feature of this preprocessing is that it incorporates uncertainty: by accounting for uncertainty due to missingness when selecting features we can reduce the degree of missingness while also limiting the number of uninformative features being used to make predictive models. We introduce a method to perform uncertainty-aware feature selection (UAFS), provide a theoretical motivation, and test UAFS on both real and synthetic problems, demonstrating that across a variety of data sets and levels of missingness we can improve the accuracy of imputations. Improved imputation due to UAFS also results in improved prediction accuracy when performing supervised learning using these imputed data sets. Our UAFS method is general and can be fruitfully coupled with a variety of imputation methods. …

Deep Feature Fusion-Audio and Text Modal Fusion (DFF-ATMF) google
Sentiment analysis research has been rapidly developing in the last decade and has attracted widespread attention from academia and industry, most of which is based on text. However, information in the real world usually comes in different modalities. In this paper, we consider the task of Multimodal Sentiment Analysis, using Audio and Text Modalities, and propose a novel fusion strategy including Multi-Feature Fusion and Multi-Modality Fusion to improve the accuracy of Audio-Text Sentiment Analysis. We call this the Deep Feature Fusion-Audio and Text Modal Fusion (DFF-ATMF) model, and the features learned from it are complementary to each other and robust. Experiments with the CMU-MOSI corpus and the recently released CMU-MOSEI corpus for Youtube video sentiment analysis show the very competitive results of our proposed model. Surprisingly, our method also achieved state-of-the-art results on the IEMOCAP dataset, indicating that our proposed fusion strategy also generalizes extremely well to Multimodal Emotion Recognition. …

Distance Metric Learned Collaborative Representation Classifier (DML-CRC) google
Any generic deep machine learning algorithm is essentially a function fitting exercise, where the network tunes its weights and parameters to learn discriminatory features by minimizing some cost function. Though the network tries to learn the optimal feature space, it seldom tries to learn an optimal distance metric in the cost function, and hence misses out on an additional layer of abstraction. We present a simple, effective way of achieving this by learning a generic Mahalanobis distance in a collaborative loss function in an end-to-end fashion with any standard convolutional network as the feature learner. The proposed method DML-CRC gives state-of-the-art performance on benchmark fine-grained classification datasets CUB Birds, Oxford Flowers and Oxford-IIIT Pets using the VGG-19 deep network. The method is network agnostic and can be used for any similar classification tasks. …

Magister Dixit

“And that’s where the statistician needs to take it easy:
• Start with the results, so the audience has a clear view on the outcome
• Proceed to explain the analysis simply and with a minimum of statistical jargon
• Describe what an algorithm does, not the specifics of your killer algo
• Visualize the inputs (e.g.: a correlation matrix showing an ‘influence heat map’)
• Visualize the process (e.g.: a regression line on a chief predictor variable)
• Visualize the results (e.g.: a lift chart to show how much the analysis is improving results)
• Always, always tie each step back to the business challenge
• Always be open to questions and feedback.”
Andrew Pease (November 3, 2014)
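For the ‘influence heat map’ bullet above: the correlation matrix behind such a visualization is cheap to compute. A dependency-free Python sketch (assumes equal-length, non-constant columns; in practice you would hand the result to a plotting library):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length, non-constant sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def correlation_matrix(columns):
    """Pairwise correlations, keyed by column-name pair, ready for a heat map."""
    names = list(columns)
    return {(r, c): round(pearson(columns[r], columns[c]), 3)
            for r in names for c in names}
```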

What’s new on arXiv – Complete List

Object-Capability as a Means of Permission and Authority in Software Systems
A Scalable Framework for Multilevel Streaming Data Analytics using Deep Learning
Mutual Reinforcement Learning
Logic Conditionals, Supervenience, and Selection Tasks
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
A Self-Attentive model for Knowledge Tracing
Deep Social Collaborative Filtering
Meta-Learning for Black-box Optimization
Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches
Quantum Data Fitting Algorithm for Non-sparse Matrices
Perception of visual numerosity in humans and machines
Selection Heuristics on Semantic Genetic Programming for Classification Problems
Natural Adversarial Examples
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications
Construction and enumeration for self-dual cyclic codes of even length over $\mathbb{F}_{2^m} + u\mathbb{F}_{2^m}$
On the Relationships Between Average Channel Capacity, Average Bit Error Rate, Outage probability and Outage Capacity over Additive White Gaussian Noise Channels
The Bach Doodle: Approachable music composition with machine learning at scale
Topology Based Scalable Graph Kernels
Boosting Resolution and Recovering Texture of micro-CT Images with Deep Learning
Adaptive Flux-Only Least-Squares Finite Element Methods for Linear Transport Equations
DeepRace: Finding Data Race Bugs via Deep Learning
Integrating the Data Augmentation Scheme with Various Classifiers for Acoustic Scene Modeling
On Convergence and Optimality of Best-Response Learning with Policy Types in Multiagent Systems
Geometric Convergence of Distributed Gradient Play in Games with Unconstrained Action Sets
A Short Note on the Kinetics-700 Human Action Dataset
A portable potentiometric electronic tongue leveraging smartphone and cloud platforms
Concentration of the matrix-valued minimum mean-square error in optimal Bayesian inference
Cataloging Accreted Stars within Gaia DR2 using Deep Learning
Linear Receivers for Massive MIMO Systems with One-Bit ADCs
Slow Feature Analysis for Human Action Recognition
Robust Variational Autoencoders for Outlier Detection in Mixed-Type Data
Quant GANs: Deep Generation of Financial Time Series
Towards Near-imperceptible Steganographic Text
Period mimicry: A note on the $(-1)$-evaluation of the peak polynomials
A row-invariant parameterized algorithm for integer programming
Sampled-Data Observers for Delay Systems and Hyperbolic PDE-ODE Loops
Comparison Between Algebraic and Matrix-free Geometric Multigrid for a Stokes Problem on Adaptive Meshes with Variable Viscosity
CupQ: A New Clinical Literature Search Engine
A Stratification Approach to Partial Dependence for Codependent Variables
Output maximization container loading problem with time availability constraints
Imaginary replica analysis of loopy regular random graphs
PPO Dash: Improving Generalization in Deep Reinforcement Learning
Towards network admissible optimal dispatch of flexible loads in distribution networks
MaskPlus: Improving Mask Generation for Instance Segmentation
Ramanujan Congruences for Fractional Partition Functions
Asymptotic stabilization of a system of coupled $n$th–order differential equations with potentially unbounded high-frequency oscillating perturbations
DOD-ETL: Distributed On-Demand ETL for Near Real-Time Business Intelligence
Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs
Deep learning-based color holographic microscopy
Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations
Lower Bounding the AND-OR Tree via Symmetrization
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
Condensed Ricci Curvature of Complete and Strongly Regular Graphs
Defining mediation effects for multiple mediators using the concept of the target randomized trial
Real-time Hair Segmentation and Recoloring on Mobile GPUs
Binary Decision Diagrams: from Tree Compaction to Sampling
Almost all Steiner triple systems are almost resolvable
Low-supervision urgency detection and transfer in short crisis messages
A Data-Driven Game-Theoretic Approach for Behind-the-Meter PV Generation Disaggregation
Designing Perfect Simulation Algorithms using Local Correctness
Development of a General Momentum Exchange Devices Fault Model for Spacecraft Fault-Tolerant Control System Design
Independence numbers of Johnson-type graphs
AugLabel: Exploiting Word Representations to Augment Labels for Face Attribute Classification
Elastic depths for detecting shape anomalies in functional data
Some error estimates for the DEC method in the plane
High-order couplings in geometric complex networks of neurons
Partitioning Graphs for the Cloud using Reinforcement Learning
Increasing Power for Observational Studies of Aberrant Response: An Adaptive Approach
Subspace Determination through Local Intrinsic Dimensional Decomposition: Theory and Experimentation
Efficient Pipeline for Camera Trap Image Review
Hands Off my Database: Ransomware Detection in Databases through Dynamic Analysis of Query Sequences
Improving 3D Object Detection for Pedestrians with Virtual Multi-View Synthesis Orientation Estimation
Nonlinear filtering of stochastic differential equations driven by correlated Lévy noises
Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks
AR(1) processes driven by second-chaos white noise: Berry-Esséen bounds for quadratic variation and parameter estimation
A simplified proof of CLT for convex bodies
Some Black-box Reductions for Objective-robust Discrete Optimization Problems Based on their LP-Relaxations
Planar graphs without 7-cycles and butterflies are DP-4-colorable
Study of Max-Link Relay Selection with Buffers for Multi-Way Cooperative Multi-Antenna Systems
2nd Place Solution to the GQA Challenge 2019
Efficient Autonomy Validation in Simulation with Adaptive Stress Testing
Instant Motion Tracking and Its Applications to Augmented Reality
Asynchronous Coded Caching
A Bird’s Eye View of Nonlinear System Identification
Ethical Underpinnings in the Design and Management of ICT Projects
Alternating Dynamic Programming for Multiple Epidemic Change-Point Estimation
Stochastic viscosity solutions for stochastic integral-partial differential equations and singular stochastic control
Hydrodynamic synchronization and collective dynamics of colloidal particles driven along a circular path
A Quantum-inspired Algorithm for General Minimum Conical Hull Problems
Energy-efficient Alternating Iterative Secure Structure of Maximizing Secrecy Rate for Directional Modulation Networks
Stereo-based terrain traversability analysis using normal-based segmentation and superpixel surface analysis
EL-Shelling on Comodernistic Lattices
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
Automated Deobfuscation of Android Native Binary Code
CL-Shellable Posets with No EL-Shellings
Noise Removal of FTIR Hyperspectral Images via MMSE
An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis
Quality-aware skill translation models for expert finding on StackOverflow
Improved Reinforcement Learning through Imitation Learning Pretraining Towards Image-based Autonomous Driving
The Quantum Version Of Classification Decision Tree Constructing Algorithm C5.0
Deep inspection: an electrical distribution pole parts study via deep neural networks
The continuous Bernoulli: fixing a pervasive error in variational autoencoders
Coherency and Online Signal Selection Based Wide Area Control of Wind Integrated Power Grid
Discontinuous Galerkin Finite Element Methods for the Landau-de Gennes Minimization Problem of Liquid Crystals
Modeling competitive evolution of multiple languages
Vibrational spectrum derived from the local mechanical response in disordered solids
AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks
Broadcast Distributed Voting Algorithm in Population Protocols
Quantifying replicability and consistency in systematic reviews
Labelings vs. Embeddings: On Distributed Representations of Distances
A generic rule-based system for clinical trial patient selection
The Impact of Tribalism on Social Welfare
Distributed data storage for modern astroparticle physics experiments
Light Multi-segment Activation for Model Compression
Global and local pointwise error estimates for finite element approximations to the Stokes problem on convex polyhedra
Separable Convolutional LSTMs for Faster Video Segmentation
Cascade RetinaNet: Maintaining Consistency for Single-Stage Object Detection
Learning Depth from Monocular Videos Using Synthetic Data: A Temporally-Consistent Domain Adaptation Approach
Minimal-norm static feedbacks using dissipative Hamiltonian matrices
Deep Reinforcement Learning Based Robot Arm Manipulation with Efficient Training Data through Simulation
A Unified Framework for Problems on Guessing, Source Coding and Task Partitioning
A General Framework for Uncertainty Estimation in Deep Learning
Modeling User Selection in Quality Diversity
Partial Solvers for Generalized Parity Games
Mango Tree Net — A fully convolutional network for semantic segmentation and individual crown detection of mango trees
Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems
Human Pose Estimation for Real-World Crowded Scenarios
The Bregman-Tweedie Classification Model
Assessing Refugees’ Integration via Spatio-temporal Similarities of Mobility and Calling Behaviors
Performance Assessment of Kron Reduction in the Numerical Analysis of Polyphase Power Systems
A theorem about partitioning consecutive numbers
Improving Bayesian Local Spatial Models in Large Data Sets
On the $L_p$-error of the Grenander-type estimator in the Cox model
Graphs with large girth and free groups
Semi-supervised Breast Lesion Detection in Ultrasound Video Based on Temporal Coherence
Machine learning without a feature set for detecting bursts in the EEG of preterm infants
Language comparison via network topology
A Subjective Interestingness measure for Business Intelligence explorations
Abstract categorial grammars with island constraints and effective decidability
On removal of perfect matching from folded hypercubes
Fused Detection of Retinal Biomarkers in OCT Volumes
On the Variational Iteration Method for the Nonlinear Volterra Integral Equation
Stochastic Evolution of spatial populations: From configurations to genealogies and back
A Unified Deep Framework for Joint 3D Pose Estimation and Action Recognition from a Single RGB Camera
Random projections and sampling algorithms for clustering of high-dimensional polygonal curves
Representative Days for Expansion Decisions in Power Systems
Effect of disorder correlation on Anderson localization of two-dimensional massless pseudospin-1 Dirac particles in a random one-dimensional scalar potential
Lossless Prioritized Embeddings
Positive specializations of symmetric Grothendieck polynomials
Stochastic gradient Markov chain Monte Carlo
Detecting anomalies in fibre systems using 3-dimensional image data
Speed estimation evaluation on the KITTI benchmark based on motion and monocular depth information
X-Net: Brain Stroke Lesion Segmentation Based on Depthwise Separable Convolution and Long-range Dependencies
Latent Adversarial Defence with Boundary-guided Generation
Uniqueness and characterization of local minimizers for the interaction energy with mildly repulsive potentials
CLCI-Net: Cross-Level fusion and Context Inference Networks for Lesion Segmentation of Chronic Stroke
Gender Balance in Computer Science and Engineering in Italian Universities
Threshold Logical Clocks for Asynchronous Distributed Coordination and Consensus
Improving Semantic Segmentation via Dilated Affinity
Outliers in meta-analysis: an asymmetric trimmed-mean approach
Applying twice a minimax theorem
Transmission Power Control for Remote State Estimation in Industrial Wireless Sensor Networks
Unforeseen Evidence
Computing Nested Fixpoints in Quasipolynomial Time
Data Selection for training Semantic Segmentation CNNs with cross-dataset weak supervision
Morphisms of Skew Hadamard Matrices
Cayley Structures and Coset Acyclicity
Adaptive Prior Selection for Repertoire-based Online Learning in Robotics
Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites
Neural Language Model Based Training Data Augmentation for Weakly Supervised Early Rumor Detection
Uncertainty-aware Self-ensembling Model for Semi-supervised 3D Left Atrium Segmentation
Structured Variational Inference in Unstable Gaussian Process State Space Models
Information processing constraints in travel behaviour modelling: A generative learning approach
Embedded Ridge Approximations: Constructing Ridge Approximations Over Localized Scalar Fields For Improved Simulation-Centric Dimension Reduction
Pedestrian Tracking by Probabilistic Data Association and Correspondence Embeddings
Homophily as a process generating social networks: insights from Social Distance Attachment model
A note on duality theorems in mass transportation
How much real data do we actually need: Analyzing object detection performance using synthetic and real data
SGD momentum optimizer with step estimation by online parabola model
Shrinkage in the Time-Varying Parameter Model Framework Using the R Package shrinkTVP
RadioTalk: a large-scale corpus of talk radio transcripts
Prediction of neural network performance by phenotypic modeling
Anatomically-Informed Multiple Linear Assignment Problems for White Matter Bundle Segmentation
On The Termination of a Flooding Process
Data-driven strategies for optimal bicycle network growth
A Reduced Order technique to study bifurcating phenomena: application to the Gross-Pitaevskii equation
Variable selection in sparse high-dimensional GLARMA models
Massive MU-MIMO-OFDM Uplink with Direct RF-Sampling and 1-Bit ADCs
On the smallest singular value of multivariate Vandermonde matrices with clustered nodes
Measuring I2P Censorship at a Global Scale
Two-stage sample robust optimization
A Two-Stage Approach to Multivariate Linear Regression with Sparsely Mismatched Data
Step-by-Step Community Detection for Volume-Regular Graphs
Efficient Segmentation: Learning Downsampling Near Semantic Boundaries
The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost
On the Performance of Renewable Energy-Powered UAV-Assisted Wireless Communications
Security Smells in Infrastructure as Code Scripts
EnforceNet: Monocular Camera Localization in Large Scale Indoor Sparse LiDAR Point Cloud
Predicting Next-Season Designs on High Fashion Runway
From Harnack inequality to heat kernel estimates on metric measure spaces and applications
Explaining Classifiers with Causal Concept Effect (CaCE)
Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression
On the ‘steerability’ of generative adversarial networks
Ordinal pattern probabilities for symmetric random walks
Tightness and tails of the maximum in 3D Ising interfaces
Hubs and authorities of the scientific migration network

What’s new on arXiv

A Scalable Framework for Multilevel Streaming Data Analytics using Deep Learning

The rapid growth of data in velocity, volume, value, variety, and veracity has enabled exciting new opportunities and presented big challenges for businesses of all types. Recently, there has been considerable interest in developing systems for processing continuous data streams, driven by the increasing need for real-time analytics for decision support in business, healthcare, manufacturing, and security. The analytics of streaming data usually relies on the output of offline analytics on static or archived data. However, businesses and organizations like our industry partner Gnowit strive to provide their customers with real-time market information, and continuously look for a unified analytics framework that can seamlessly integrate streaming and offline analytics to extract knowledge from large volumes of hybrid streaming data. We present our study on designing a multilevel streaming text data analytics framework by comparing leading-edge scalable open-source, distributed, and in-memory technologies. We demonstrate the functionality of the framework for a use case of multilevel text analytics using deep learning for language understanding and sentiment analysis, including data indexing and query processing. Our framework combines Spark Streaming for real-time text processing, the Long Short-Term Memory (LSTM) deep learning model for higher-level sentiment analysis, and other tools for SQL-based analytical processing to provide a scalable solution for multilevel streaming text analytics.
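The multilevel pipeline the abstract describes (stream ingestion, per-record sentiment scoring, windowed aggregation) can be illustrated with a minimal pure-Python stand-in. This is a sketch only: the Spark Streaming micro-batching and the LSTM model are replaced by a generator and a hypothetical keyword stub, and all names here are invented for illustration.

```python
from collections import deque

def sentiment_stub(text):
    """Stand-in for the LSTM sentiment model: +1 / -1 / 0 by keyword."""
    return 1 if "good" in text else -1 if "bad" in text else 0

def streaming_sentiment(stream, window=3):
    """Consume a text stream and emit a rolling mean sentiment over the
    last `window` records, mimicking micro-batch windowed aggregation."""
    buf = deque(maxlen=window)
    for record in stream:
        buf.append(sentiment_stub(record))
        yield sum(buf) / len(buf)
```

In the real framework the scoring step would be an LSTM inference call inside a Spark Streaming transformation; the sliding-window aggregation shown here corresponds to the higher-level analytics layer.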


Mutual Reinforcement Learning

Recently, collaborative robots have begun to train humans to achieve complex tasks, and the mutual information exchange between them can lead to successful robot-human collaborations. In this paper, we demonstrate the application and effectiveness of a new approach called \textit{mutual reinforcement learning} (MRL), where both humans and autonomous agents act as reinforcement learners in a skill transfer scenario over continuous communication and feedback. An autonomous agent initially acts as an instructor who can teach a novice human participant complex skills using the MRL strategy. While teaching skills in a physical (block-building) (n=34) or simulated (Tetris) environment (n=31), the expert tries to identify the reward channels preferred by each individual and adapts itself accordingly using an exploration-exploitation strategy. These reward channel preferences can identify important behaviors of the human participants, because they may well exercise the same behaviors in similar situations later. In this way, skill transfer takes place between an expert system and a novice human operator. We divided the subject population into three groups and observed the skill transfer phenomenon, analyzing it with Simpson’s psychometric model. Five-point Likert scales were also used to identify the cognitive models of the human participants. We obtained a shared cognitive model which not only improves human cognition but also enhances the robot’s cognitive strategy to understand the mental model of its human partners while building a successful robot-human collaborative framework.


Logic Conditionals, Supervenience, and Selection Tasks

Principles of cognitive economy would require that concepts about objects, properties, and relations be introduced only if they simplify the conceptualisation of a domain. Unexpectedly, classic logic conditionals, specifying structures holding within elements of a formal conceptualisation, do not always satisfy this crucial principle. The paper argues that this requirement is captured by \emph{supervenience}, here further identified as a property necessary for compression. The resulting theory suggests an alternative explanation of the empirical results observed in Wason’s selection tasks, attributing human performance with conditionals to the ability to deal with compression, rather than to logical necessity.


Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning

Improving the accuracy and robustness of deep neural nets (DNNs) and adapting them to small training data are primary tasks in deep learning research. In this paper, we replace the output activation function of DNNs, typically the data-agnostic softmax function, with a graph Laplacian-based high-dimensional interpolating function which, in the continuum limit, converges to the solution of a Laplace-Beltrami equation on a high-dimensional manifold. Furthermore, we propose end-to-end training and testing algorithms for this new architecture. The proposed DNN with graph interpolating activation integrates the advantages of both deep learning and manifold learning. Compared to conventional DNNs with the softmax function as output activation, the new framework demonstrates the following major advantages: First, it is better suited to data-efficient learning, in which we train high-capacity DNNs without a large number of training data. Second, it remarkably improves both natural accuracy on clean images and robust accuracy on adversarial images crafted by both white-box and black-box adversarial attacks. Third, it is a natural choice for semi-supervised learning. For reproducibility, the code is available at \url{https://…/DNN-DataDependentActivation}.
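The core idea behind a graph Laplacian-based interpolating output layer can be sketched with classical harmonic label interpolation: labels on known points are extended to unknown points by solving the discrete Laplace equation on a similarity graph. This is a generic NumPy illustration of that mechanism, not the paper's implementation; the Gaussian-kernel graph and function name are assumptions.

```python
import numpy as np

def harmonic_label_interpolation(X_l, y_l, X_u, n_classes, sigma=1.0):
    """Propagate labels from labeled points X_l to unlabeled points X_u
    by solving the discrete Laplace equation on a Gaussian-kernel graph."""
    X = np.vstack([X_l, X_u])
    n_l = len(X_l)
    # Gaussian affinity matrix (zero self-affinity)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W          # graph Laplacian
    F_l = np.eye(n_classes)[y_l]       # one-hot labels on labeled points
    # Harmonic extension: solve L_uu f_u = -L_ul f_l
    F_u = np.linalg.solve(L[n_l:, n_l:], -L[n_l:, :n_l] @ F_l)
    return F_u.argmax(1)
```

In the paper's architecture, the inputs to such an interpolation would be the DNN's learned features rather than raw data, which is what makes the activation data-dependent.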


Evaluating Explanation Without Ground Truth in Interpretable Machine Learning

Interpretable Machine Learning (IML) has become increasingly important in many applications, such as autonomous cars and medical diagnosis, where explanations are preferred to help people better understand how machine learning systems work and to enhance their trust in those systems. Particularly in robotics, explanations from IML are significantly helpful in providing reasons for adverse and inscrutable actions, which could impair the safety and interests of the public. However, due to the diversity of scenarios and the subjective nature of explanations, we rarely have ground truth for benchmarking the quality of generated explanations in IML. Having a sense of explanation quality not only matters for quantifying system boundaries, but also helps to realize the true benefits to human users in real-world applications. To benchmark evaluation in IML, in this paper, we rigorously define the problem of evaluating explanations and systematically review the existing efforts. Specifically, we summarize three general aspects of explanation (i.e., predictability, fidelity, and persuasibility) with formal definitions, and review the representative methodologies for each of them under different tasks. Further, a unified evaluation framework is designed according to the hierarchical needs of developers and end-users, which can be easily adopted for different scenarios in practice. In the end, open problems are discussed, and several limitations of current evaluation techniques are raised for future exploration.


A Self-Attentive model for Knowledge Tracing

Knowledge tracing is the task of modeling each student’s mastery of knowledge concepts (KCs) as (s)he engages with a sequence of learning activities. Each student’s knowledge is modeled by estimating the student’s performance on the learning activities. It is an important research area for providing a personalized learning platform to students. In recent years, methods based on Recurrent Neural Networks (RNN), such as Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN), outperformed all the traditional methods because of their ability to capture complex representations of human learning. However, these methods do not generalize well when dealing with sparse data, which is the case with real-world data, as students interact with few KCs. To address this issue, we develop an approach that identifies the KCs from the student’s past activities that are \textit{relevant} to the given KC and predicts his/her mastery based on the relatively few KCs it picked. Since predictions are made based on relatively few past activities, it handles the data sparsity problem better than RNN-based methods. To identify the relevance between KCs, we propose a self-attention based approach, Self-Attentive Knowledge Tracing (SAKT). Extensive experimentation on a variety of real-world datasets shows that our model outperforms the state-of-the-art models for knowledge tracing, improving AUC by 4.43% on average.
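The attention mechanism at the heart of SAKT is standard causally-masked scaled dot-product attention: the prediction at step t may weigh only interactions before t. A minimal NumPy sketch of that primitive, assuming nothing about SAKT's embeddings or layer layout:

```python
import numpy as np

def causal_attention(Q, K, V):
    """Scaled dot-product attention with a causal mask, so that the
    output at step t attends only to positions <= t."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Mask out future positions (strictly upper triangle)
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9
    # Row-wise softmax
    weights = np.exp(scores - scores.max(1, keepdims=True))
    weights /= weights.sum(1, keepdims=True)
    return weights @ V
```

In SAKT, Q would come from the embedded target KC and K, V from the embedded past interactions, so the attention weights directly express which past KCs are relevant to the one being predicted.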


Deep Social Collaborative Filtering

Recommender systems are crucial to alleviating the information overload problem in online worlds. Most modern recommender systems capture users’ preferences towards items via their interactions, based on collaborative filtering techniques. In addition to user-item interactions, social networks can also provide useful information for understanding users’ preferences, as suggested by social theories such as homophily and influence. Recently, deep neural networks have been utilized for social recommendation, exploiting both the user-item interactions and the social network information. However, most of these models cannot take full advantage of the social network information. They only use information from direct neighbors, although distant neighbors can also provide helpful information. Meanwhile, most of these models treat neighbors’ information equally, without considering the specific recommendation at hand; for a specific recommendation case, the information relevant to the specific item would be most helpful. Besides, most of these models do not explicitly capture neighbors’ opinions of items, while different opinions could affect the user differently. In this paper, to address the aforementioned challenges, we propose DSCF, a Deep Social Collaborative Filtering framework, which can exploit social relations along various aspects for recommender systems. Comprehensive experiments on two real-world datasets show the effectiveness of the proposed framework.


Meta-Learning for Black-box Optimization

Recently, neural networks trained as optimizers under the ‘learning to learn’ or meta-learning framework have been shown to be effective for a broad range of optimization tasks, including derivative-free black-box function optimization. Recurrent neural networks (RNNs) trained via gradient descent to optimize a diverse set of synthetic non-convex differentiable functions have proved effective at optimizing derivative-free black-box functions. In this work, we propose RNN-Opt: an approach for learning RNN-based optimizers for real-parameter single-objective continuous functions under limited budget constraints. Existing approaches train such models with a meta-learning loss based on observed improvement. We instead train RNN-Opt on synthetic non-convex functions with known (approximate) optimal values, directly using discounted regret as the meta-learning loss function. We hypothesize that a regret-based loss function mimics typical testing scenarios, and would therefore lead to better optimizers than optimizers trained only to propose queries that improve over previous queries. Further, RNN-Opt incorporates simple yet effective enhancements during training and inference to deal with two practical challenges: i) the unknown range of possible values for the black-box function to be optimized, and ii) practical and domain-knowledge based constraints on the input parameters. We demonstrate the efficacy of RNN-Opt in comparison to existing methods on several synthetic as well as standard benchmark black-box functions, along with an anonymized industrial constrained optimization problem.
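One plausible form of the discounted-regret loss the abstract describes (for minimization, with a known optimum f*) penalizes, at each step, the gap between the best value found so far and the optimum, discounted over time. A sketch under that assumption; the exact discounting in the paper may differ:

```python
def discounted_regret(f_values, f_opt, gamma=0.98):
    """Discounted regret of a sequence of black-box queries for a
    minimization problem: sum_t gamma^t * (best f seen up to t - f*)."""
    best = float("inf")
    total = 0.0
    for t, f in enumerate(f_values):
        best = min(best, f)               # running best (incumbent)
        total += gamma ** t * (best - f_opt)
    return total
```

Because the running best enters every term, the loss rewards optimizers that find good points early, which is exactly the behavior wanted under a limited query budget.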


Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches

Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. At the same time, several recent publications point out problems in today’s research practice in applied machine learning, e.g., in terms of the reproducibility of results or the choice of baselines when proposing new models. In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms presented at top-level research conferences in recent years. Only 7 of them could be reproduced with reasonable effort. For these methods, it however turned out that 6 of them can often be outperformed by comparably simple heuristic methods, e.g., based on nearest-neighbor or graph-based techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method. Overall, our work sheds light on a number of potential problems in today’s machine learning scholarship and calls for improved scientific practices in this area. Source code of our experiments and full results are available at: https://…/RecSys2019_DeepLearning_Evaluation.
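To make concrete how simple the competitive nearest-neighbor baselines are, here is a generic item-based kNN top-n recommender over a binary interaction matrix. This is an illustrative sketch of the baseline family, not the paper's exact implementation; the function name and matrix layout are assumptions.

```python
import numpy as np

def topn_itemknn(R, user, n=2):
    """Item-based nearest-neighbor top-n recommendation from a binary
    user-item matrix R (rows = users, columns = items): score unseen
    items by cosine similarity to the items the user interacted with."""
    norms = np.linalg.norm(R, axis=0) + 1e-12
    S = (R.T @ R) / np.outer(norms, norms)   # item-item cosine similarity
    np.fill_diagonal(S, 0.0)                 # ignore self-similarity
    scores = S @ R[user]
    scores[R[user] > 0] = -np.inf            # never re-recommend seen items
    return np.argsort(-scores)[:n]
```

Despite having no learned parameters at all, variants of this scheme (with shrinkage and normalization tuning) are among the heuristics the study found could outperform 6 of the 7 reproducible neural methods.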


Quantum Data Fitting Algorithm for Non-sparse Matrices

We propose a quantum data fitting algorithm for non-sparse matrices, based on the Quantum Singular Value Estimation (QSVE) subroutine and a novel efficient method for recovering the signs of eigenvalues. Our algorithm generalizes the quantum data fitting algorithm of Wiebe, Braun, and Lloyd for sparse and well-conditioned matrices by adding a regularization term to avoid over-fitting, an important problem in machine learning. As a result, the algorithm achieves a sparsity-independent runtime of O(\kappa^2\sqrt{N}\mathrm{polylog}(N)/(\epsilon\log\kappa)) for an N\times N Hermitian matrix \bm{F}, where \kappa denotes the condition number of \bm{F} and \epsilon is the precision parameter. This amounts to a polynomial speedup in the dimension of the matrix compared with classical data fitting algorithms, and a strictly less-than-quadratic dependence on \kappa.
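Reading the regularized data-fitting problem classically, the task the quantum algorithm accelerates corresponds to ridge (Tikhonov-regularized) least squares. A minimal NumPy sketch of that classical problem, included only to fix the objective being solved (the quantum routine itself is not reproducible here):

```python
import numpy as np

def ridge_fit(F, y, lam=0.1):
    """Regularized least-squares fit: argmin_x ||F x - y||^2 + lam ||x||^2,
    solved via the normal equations (F^T F + lam I) x = F^T y."""
    n = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(n), F.T @ y)
```

The regularization term lam ||x||^2 is what the quantum algorithm adds relative to Wiebe, Braun, and Lloyd's formulation; it shrinks the solution and suppresses over-fitting when F is ill-conditioned.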


Perception of visual numerosity in humans and machines

Numerosity perception is foundational to mathematical learning, but its computational bases are strongly debated. Some investigators argue that humans are endowed with a specialized system supporting numerical representation; others argue that visual numerosity is estimated using continuous magnitudes, such as density or area, which usually co-vary with number. Here we reconcile these contrasting perspectives by testing deep networks on the same numerosity comparison task that was administered to humans, using a stimulus space that allows us to measure the contribution of non-numerical features. Our model accurately simulated the psychophysics of numerosity perception and the associated developmental changes: discrimination was driven by numerosity information, but non-numerical features had a significant impact, especially early in development. Representational similarity analysis further highlighted that both numerosity and continuous magnitudes were spontaneously encoded even when no task had to be carried out, demonstrating that numerosity is a major, salient property of our visual environment.


Selection Heuristics on Semantic Genetic Programming for Classification Problems

In steady-state evolution, tournament selection traditionally uses the fitness function to select the parents, while negative selection chooses an individual to be replaced by an offspring. This contribution analyzes the behavior, in terms of performance, of different heuristics used in place of the fitness function in tournament selection. The heuristics analyzed measure the similarity of individuals in the semantic space. In addition, the analysis includes random selection and traditional tournament selection. These selection functions were implemented in our Semantic Genetic Programming system, EvoDAG, which is inspired by geometric genetic operators, and were tested on 30 classification problems with varying numbers of samples, variables, and classes. The results indicated that combining accuracy-based tournament selection with random negative selection performs best, and the difference in performance between this combination and traditional tournament selection is statistically significant. Furthermore, we compare EvoDAG’s performance using the selection heuristics against 18 classifiers, including traditional approaches as well as auto-machine-learning techniques. The results indicate that our proposal is competitive with state-of-the-art classifiers. Finally, it is worth mentioning that EvoDAG is available as open-source software.
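The best-performing combination reported (fitness-based parent tournaments plus random negative selection) can be sketched as one steady-state step. This is a generic GP-style illustration, not EvoDAG's code; the function names and the `crossover` callback are hypothetical.

```python
import random

def tournament_select(population, fitness, k=2):
    """Sample k candidate indices and return the fittest one as a parent."""
    candidates = random.sample(range(len(population)), k)
    return max(candidates, key=lambda i: fitness[i])

def random_negative_select(population):
    """Random negative selection: pick any individual to be replaced
    by the offspring, regardless of its fitness."""
    return random.randrange(len(population))

def steady_state_step(population, fitness, crossover):
    """One steady-state step: two tournament parents, one random victim."""
    a = population[tournament_select(population, fitness)]
    b = population[tournament_select(population, fitness)]
    population[random_negative_select(population)] = crossover(a, b)
    return population
```

Replacing the fitness comparison inside `tournament_select` with a semantic-similarity heuristic is exactly the kind of substitution the study evaluates.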


Natural Adversarial Examples

We introduce natural adversarial examples — real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.


Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications

The presumed data owners’ right to explanations brought about by the General Data Protection Regulation in Europe has shed light on the social challenges of explainable artificial intelligence (XAI). In this paper, we present a case study with Deep Learning (DL) experts from a research and development laboratory focused on the delivery of industrial-strength AI technologies. Our aim was to investigate the social meaning (i.e., meaning to others) that DL experts assign to what they do, given a richly contextualized and familiar domain of application. Using qualitative research techniques to collect and analyze empirical data, our study has shown that participating DL experts did not spontaneously engage in considerations about the social meaning of the machine learning models they build. Moreover, when explicitly stimulated to do so, these experts expressed expectations that, with real-world DL applications, mediators will be available to bridge the gap between the technical meanings that drive DL work and the social meanings that AI technology users assign to it. We concluded that current research incentives and the values guiding the participants’ scientific interests and conduct are at odds with those required to face some of the scientific challenges involved in advancing XAI, and thus in responding to the alleged data owners’ right to explanations or similar societal demands emerging from current debates. As a concrete contribution to mitigating what seems to be a more general problem, we propose three preliminary XAI Mediation Challenges with the potential to bring together technical and social meanings of DL applications, as well as to foster much-needed interdisciplinary collaboration among AI and Social Sciences researchers.