Snake
A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical examples include the total variation and the Laplacian regularizations over the graph. When applying the proximal gradient algorithm to solve this problem, computationally affordable methods exist to implement the proximity operator (backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm, referred to as ‘Snake’, is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists of properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm, whose convergence is proven. Applications to trend filtering and graph inpainting, among others, are provided. Numerical experiments are conducted over large graphs. …
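The abstract's core idea — a cheap proximity operator on a simple path, applied to the coordinates of a randomly sampled path — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes Laplacian regularization (where the prox on a path reduces to solving a tridiagonal linear system), and all function names are hypothetical.

```python
import numpy as np

def path_laplacian(n):
    # Graph Laplacian of a simple path with n nodes (tridiagonal).
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return L

def prox_laplacian_path(v, lam):
    # Prox of (lam/2) * x^T L x along a path: solve (I + lam*L) x = v.
    # Cheap because I + lam*L is tridiagonal; dense solve used for brevity.
    n = len(v)
    return np.linalg.solve(np.eye(n) + lam * path_laplacian(n), v)

def snake_step(x, grad_f, path, step, lam):
    # One stochastic proximal gradient step: forward gradient step on the
    # full variable, backward (prox) step restricted to the sampled path.
    y = x - step * grad_f(x)
    y[path] = prox_laplacian_path(y[path], step * lam)
    return y
```

In the full algorithm the path would be drawn at random (e.g., by a random walk over the graph) at each iteration; here `path` is simply an index array of the nodes along one sampled simple path. Note that the prox preserves the mean of its input, since the Laplacian annihilates constant vectors.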

Pre-Synaptic Pool Modification (PSPM)
A central question in neuroscience is how to develop realistic models that predict output firing behavior based on provided external stimulus. Given a set of external inputs and a set of output spike trains, the objective is to discover a network structure which can accomplish the transformation as accurately as possible. Due to the difficulty of this problem in its most general form, approximations have been made in previous work. Past approximations have sacrificed network size, recurrence, or allowed spike count, or have imposed a layered network structure. Here we present a learning rule without these sacrifices, which produces a weight matrix of a leaky integrate-and-fire (LIF) network to match the output activity of both deterministic LIF networks and probabilistic integrate-and-fire (PIF) networks. Inspired by synaptic scaling, our pre-synaptic pool modification (PSPM) algorithm outputs deterministic, fully recurrent spiking neural networks that can provide a novel generative model for given spike trains. Similarity in output spike trains is evaluated with a variety of metrics, including a van Rossum-like measure and a numerical comparison of inter-spike interval distributions. Application of our algorithm to randomly generated networks improves similarity to the reference spike trains on both measures. In addition, we generated LIF networks that operate near criticality when trained on critical PIF outputs. Our results establish that learning rules based on synaptic homeostasis can be used to represent input-output relationships in fully recurrent spiking neural networks. …
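To make the setting concrete, here is a toy sketch of an LIF network step together with a homeostatic, synaptic-scaling-flavored weight update. This is not the paper's actual PSPM rule — the update shown (scaling each neuron's pre-synaptic pool toward a target firing rate) is a simplified stand-in, and all parameter values and function names are assumptions for illustration.

```python
import numpy as np

def lif_step(v, W, spikes, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0, I_ext=None):
    # Leaky integrate-and-fire update: leak toward rest plus recurrent
    # synaptic input W @ spikes and optional external drive.
    I = W @ spikes + (I_ext if I_ext is not None else 0.0)
    v = v + (dt / tau) * (v_rest - v) + I
    new_spikes = (v >= v_thresh).astype(float)
    v = np.where(new_spikes > 0, v_rest, v)  # reset neurons that fired
    return v, new_spikes

def homeostatic_update(W, rates, target_rates, eta=0.01):
    # Hypothetical synaptic-scaling-style rule: multiplicatively scale each
    # neuron's incoming (pre-synaptic pool) weights up if it fires below its
    # target rate, down if above. Illustrative only, not the PSPM rule itself.
    scale = 1.0 + eta * np.sign(target_rates - rates)
    return W * scale[:, None]
```

In a training loop of this flavor, one would simulate the network on the given external inputs, measure each neuron's firing rate against the reference spike trains, and repeatedly apply the homeostatic update until the output activity matches.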

Sure Independence Screening
Big data is ubiquitous in various fields of sciences, engineering, medicine, social sciences, and humanities. It is often accompanied by a large number of variables and features. While adding much greater flexibility to modeling with an enriched feature space, ultra-high dimensional data analysis poses fundamental challenges to scalable learning and inference with good statistical efficiency. Sure independence screening is a simple and effective method for this endeavor. This framework of two-scale statistical learning, consisting of large-scale screening followed by moderate-scale variable selection, introduced in Fan and Lv (2008), has been extensively investigated and extended to various model settings ranging from parametric to semiparametric and nonparametric for regression, classification, and survival analysis. This article provides an overview of the developments of sure independence screening over the past decade. These developments demonstrate the wide applicability of sure independence screening-based learning and inference for big data analysis with desired scalability and theoretical guarantees. …
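The large-scale screening stage of this two-scale framework is simple to sketch: rank features by the magnitude of their marginal correlation with the response and keep the top d. This is a minimal illustration of the screening step only (the subsequent moderate-scale selection, e.g., a lasso fit on the surviving columns, is omitted), and the function name is hypothetical.

```python
import numpy as np

def sis_screen(X, y, d):
    # Sure independence screening: keep the d features with the largest
    # absolute marginal correlation with the response y.
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc)                    # |marginal correlations| (up to scale)
    return np.argsort(corr)[::-1][:d]           # indices of the top-d features
```

In the two-scale workflow, `d` is typically chosen well below the sample size (Fan and Lv suggest on the order of n / log n), so that a conventional variable-selection method can then be run on the much smaller retained feature set.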
