# If you did not already know

**Cross-Lingual Unsupervised Sense Embedding (CLUSE)**

This paper proposes a modularized sense induction and representation learning model that jointly learns bilingual sense embeddings aligned in a shared vector space, exploiting the cross-lingual signal in an English-Chinese parallel corpus to capture the collocational and distributional characteristics of the language pair. The model is evaluated on the Stanford Contextual Word Similarity (SCWS) dataset to verify the quality of the monolingual sense embeddings. In addition, we introduce Bilingual Contextual Word Similarity (BCWS), a large, high-quality dataset for evaluating cross-lingual sense embeddings, which is the first attempt at measuring whether the learned embeddings are indeed well aligned across languages. The proposed approach yields sense embeddings of superior quality in both the monolingual and bilingual spaces. …
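Benchmarks like SCWS and BCWS are typically scored by picking, for each target word, the sense embedding most compatible with its context, and then correlating the resulting cosine similarities with human ratings. Below is a minimal sketch of that scoring loop; the function names and the max-cosine sense-selection rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def pick_sense(sense_vecs, context_vec):
    """Select the sense embedding most compatible with the context
    (here, assumed: highest cosine similarity to a context vector)."""
    sims = sense_vecs @ context_vec / (
        np.linalg.norm(sense_vecs, axis=1) * np.linalg.norm(context_vec) + 1e-9)
    return sense_vecs[np.argmax(sims)]

def score_benchmark(pairs, human_ratings):
    """pairs: list of (senses_a, ctx_a, senses_b, ctx_b) numpy arrays.
    Returns the Spearman correlation between model and human scores."""
    model_scores = []
    for senses_a, ctx_a, senses_b, ctx_b in pairs:
        va = pick_sense(senses_a, ctx_a)
        vb = pick_sense(senses_b, ctx_b)
        model_scores.append(
            va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))
    return spearmanr(model_scores, human_ratings).correlation
```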

**Veto Interval Graphs (VI Graphs)**

We introduce a variation of interval graphs called veto interval (VI) graphs. A VI graph is represented by a set of closed intervals, each containing a point called a veto mark. The edge $ab$ is in the graph if the intervals corresponding to the vertices $a$ and $b$ intersect and neither contains the veto mark of the other. We find families of graphs that are VI graphs, and prove results towards characterizing the maximum chromatic number of a VI graph. We define and prove similar results for several related graph families, including unit VI graphs, midpoint unit VI (MUVI) graphs, and single and double approval graphs. We also highlight a relationship between approval graphs and a family of tolerance graphs. …
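The edge rule above translates directly into code. Below is a small sketch that builds the edge set of a VI graph from closed intervals with veto marks; the boundary convention (a veto mark on an interval's endpoint counts as contained) is an assumption, since the abstract does not pin it down.

```python
from itertools import combinations

def contains(interval, point):
    lo, hi, _ = interval
    return lo <= point <= hi  # closed-interval containment (assumed convention)

def intersects(a, b):
    return a[0] <= b[1] and b[0] <= a[1]  # closed intervals overlap

def veto_interval_graph(intervals):
    """intervals: dict mapping vertex -> (lo, hi, veto) with lo <= veto <= hi.
    Returns the edge set of the corresponding VI graph."""
    edges = set()
    for (u, iu), (v, iv) in combinations(intervals.items(), 2):
        # Edge uv: intervals intersect AND neither contains the other's veto mark.
        if intersects(iu, iv) and not contains(iu, iv[2]) and not contains(iv, iu[2]):
            edges.add((u, v))
    return edges
```

For example, the intervals (0, 4, 2) and (3, 7, 5) overlap on [3, 4] and neither contains the other's veto mark, so their vertices are adjacent; stretching the second interval to (1, 7, 5) would cover the first's veto mark at 2 and veto the edge.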

**SparseNet**

Deep neural networks have made remarkable progress on various computer vision tasks. Recent work has shown that the depth, width, and shortcut connections of networks are all vital to their performance. In this paper, we introduce a method to sparsify DenseNet that reduces the connections of an L-layer DenseNet from O(L^2) to O(L), allowing us to simultaneously increase the depth, width, and connections of neural networks in a more parameter-efficient and computation-efficient way. Moreover, an attention module is introduced to further boost the network's performance. We denote our network as SparseNet. We evaluate SparseNet on CIFAR (including CIFAR-10 and CIFAR-100) and SVHN. Experiments show that SparseNet obtains improvements over the state of the art on CIFAR-10 and SVHN. Furthermore, while achieving performance comparable to DenseNet on these datasets, SparseNet is 2.6x smaller and 3.7x faster than the original DenseNet. …
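The O(L^2) to O(L) claim is about the aggregation pattern: in a DenseNet, layer i concatenates all i previous outputs, while a sparsified variant keeps only a constant number of incoming links per layer. The sketch below contrasts the two connection counts; the "k most recent predecessors" pattern is one illustrative way to reach O(L), not necessarily the paper's exact scheme.

```python
def dense_predecessors(L):
    """DenseNet: layer i concatenates all previous outputs -> O(L^2) links."""
    return {i: list(range(i)) for i in range(1, L + 1)}

def sparse_predecessors(L, k=2):
    """Illustrative sparsification (assumed, not the paper's scheme): layer i
    only sees its k most recent predecessors, so total links <= k*L = O(L)."""
    return {i: list(range(max(0, i - k), i)) for i in range(1, L + 1)}

def count_links(preds):
    return sum(len(p) for p in preds.values())

if __name__ == "__main__":
    L = 100
    print(count_links(dense_predecessors(L)))   # 5050 -> quadratic in L
    print(count_links(sparse_predecessors(L)))  # 199  -> linear in L
```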
