Style-Based Recalibration Module (SRM)
Following the advance of style transfer with Convolutional Neural Networks (CNNs), the role of styles in CNNs has drawn growing attention from a broader perspective. In this paper, we aim to fully leverage the potential of styles to improve the performance of CNNs in general vision tasks. We propose a Style-based Recalibration Module (SRM), a simple yet effective architectural unit, which adaptively recalibrates intermediate feature maps by exploiting their styles. SRM first extracts style information from each channel of the feature maps by style pooling, then estimates per-channel recalibration weights via channel-independent style integration. By incorporating the relative importance of individual styles into feature maps, SRM effectively enhances the representational ability of a CNN. The proposed module can be directly plugged into existing CNN architectures with negligible overhead. We conduct comprehensive experiments on general image recognition as well as style-related tasks, which verify the benefit of SRM over recent approaches such as Squeeze-and-Excitation (SE). To explain the inherent difference between SRM and SE, we provide an in-depth comparison of their representational properties. …
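As a concrete illustration of the two steps described above, the following NumPy sketch implements style pooling (per-channel mean and standard deviation) followed by a channel-independent gating step. The array shapes, the `cfc_weights` parameter, and the omission of the learned normalization are assumptions made for illustration, not the authors' exact module.

```python
import numpy as np

def srm_recalibrate(x, cfc_weights, eps=1e-5):
    """Illustrative sketch of style-based recalibration.

    x           : feature maps of shape (N, C, H, W)
    cfc_weights : array of shape (C, 2); hypothetical per-channel weights
                  for the channel mean and standard deviation (a stand-in
                  for the learned channel-wise integration layer)
    """
    # Style pooling: summarize each channel by its mean and standard deviation.
    mu = x.mean(axis=(2, 3))                       # (N, C)
    sigma = x.std(axis=(2, 3)) + eps               # (N, C)
    style = np.stack([mu, sigma], axis=-1)         # (N, C, 2)

    # Channel-independent style integration: per-channel linear combination
    # of the style statistics, squashed to (0, 1) by a sigmoid gate.
    z = (style * cfc_weights[None, :, :]).sum(axis=-1)   # (N, C)
    g = 1.0 / (1.0 + np.exp(-z))

    # Recalibration: rescale each channel by its estimated importance.
    return x * g[:, :, None, None]

# Toy usage on random feature maps.
feats = np.random.randn(2, 8, 16, 16)
weights = np.random.randn(8, 2) * 0.1
recalibrated = srm_recalibrate(feats, weights)
```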
Selective Clustering Annotated Using Modes of Projections (SCAMP)
Selective clustering annotated using modes of projections (SCAMP) is a new clustering algorithm for data in $\mathbb{R}^p$. SCAMP is motivated from the point of view of non-parametric mixture modeling. Rather than maximizing a classification likelihood to determine cluster assignments, SCAMP casts clustering as a search and selection problem. One consequence of this problem formulation is that the number of clusters is $\textbf{not}$ a SCAMP tuning parameter. The search phase of SCAMP consists of finding sub-collections of the data matrix, called candidate clusters, that obey shape constraints along each coordinate projection. An extension of the dip test of Hartigan and Hartigan (1985) is developed to assist the search. Selection occurs by scoring each candidate cluster with a preference function that quantifies prior belief about the mixture composition. Clustering proceeds by selecting candidates to maximize their total preference score. SCAMP concludes by annotating each selected cluster with labels that describe how cluster-level statistics compare to certain dataset-level quantities. SCAMP can be run multiple times on a single data matrix. Comparison of annotations obtained across iterations provides a measure of clustering uncertainty. Simulation studies and applications to real data are considered. A C++ implementation with an R interface is $\href{https://…/scamp}{available\ online}$. …
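The selection phase summarized above scores candidate clusters with a preference function and then picks candidates to maximize the total score. The toy Python sketch below shows one simple way such a selection could look (greedy choice of mutually disjoint candidates); the data structures and the greedy rule are illustrative assumptions, not SCAMP's actual search or selection procedure.

```python
def select_clusters(candidates):
    """Greedily pick mutually disjoint candidate clusters by decreasing
    preference score (illustrative stand-in for a selection phase).

    candidates: list of (member_indices, preference_score) pairs, where
                member_indices is a set of row indices of the data matrix.
    """
    chosen, covered = [], set()
    for members, score in sorted(candidates, key=lambda c: c[1], reverse=True):
        if covered.isdisjoint(members):
            chosen.append((members, score))
            covered |= members
    return chosen

# Example: four overlapping candidate clusters over rows 0..9.
cands = [({0, 1, 2, 3}, 2.5), ({3, 4, 5}, 1.8), ({4, 5, 6, 7}, 2.1), ({8, 9}, 0.9)]
print(select_clusters(cands))   # keeps the three mutually disjoint candidates
```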
QuickIM
The Influence Maximization (IM) problem aims at finding k seed vertices in a network, starting from which influence can be spread in the network to the maximum extent. In this paper, we propose QuickIM, the first versatile IM algorithm that simultaneously attains all the desirable properties of a practically applicable IM algorithm, namely high time efficiency, good result quality, low memory footprint, and high robustness. On real-world social networks, QuickIM attains the $\Omega(n + m)$ lower bound on time complexity and the $\Omega(n)$ lower bound on space complexity, where $n$ and $m$ are the number of vertices and edges in the network, respectively. Our experimental evaluation verifies the superiority of QuickIM. Firstly, QuickIM runs 1-3 orders of magnitude faster than the state-of-the-art IM algorithms. Secondly, QuickIM requires 1-2 orders of magnitude less memory than all the state-of-the-art algorithms except EasyIM. Thirdly, QuickIM always produces results whose quality is as good as that of the state-of-the-art algorithms. Lastly, both the time and the memory performance of QuickIM are independent of the influence probabilities. On the largest network used in the experiments, which contains more than 3.6 billion edges, QuickIM is able to find hundreds of influential seeds in less than 4 minutes, while all the state-of-the-art algorithms fail to terminate within an hour. …
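The summary above states QuickIM's guarantees but not its internals. For orientation, the sketch below implements the classic Monte Carlo greedy baseline for influence maximization under the independent cascade model, which is exactly the kind of expensive approach that fast IM algorithms aim to replace; the graph encoding, propagation probability `p`, and number of simulation runs are illustrative choices, and this is not the QuickIM algorithm.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the independent cascade model.
    graph maps each vertex to a list of out-neighbours."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200):
    """Baseline greedy seed selection: repeatedly add the vertex with the
    largest estimated marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        base = (sum(simulate_ic(graph, seeds, p) for _ in range(runs)) / runs
                if seeds else 0.0)
        best_vertex, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds + [v], p) for _ in range(runs)) / runs
            if spread - base > best_gain:
                best_vertex, best_gain = v, spread - base
        seeds.append(best_vertex)
    return seeds

# Toy usage on a small directed graph.
g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(greedy_im(g, k=2))
```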
Visual Discourse Parsing
Text-level discourse parsing aims to uncover how two segments (or sentences) in a text are related to each other. We propose the task of Visual Discourse Parsing, which requires understanding discourse relations among scenes in a video. Here we use the term scene to refer to a subset of video frames that can better summarize the video. In order to collect a dataset for learning discourse cues from videos, one needs to manually identify the scenes from a large pool of video frames and then annotate the discourse relations between them. This is clearly a time-consuming, expensive, and tedious task. In this work, we propose an approach to identify discourse cues from videos without the need to explicitly identify and annotate the scenes. We also present a novel dataset containing 310 videos and the corresponding discourse cues to evaluate our approach. We believe that many multi-disciplinary Artificial Intelligence problems such as Visual Dialog and Visual Storytelling would greatly benefit from the use of visual discourse cues. …