Learning to Coordinate and Teach Reinforcement (LeCTR)
We present a framework and algorithm for peer-to-peer teaching in cooperative multiagent reinforcement learning. Our algorithm, Learning to Coordinate and Teach Reinforcement (LeCTR), trains advising policies by using students' learning progress as a teaching reward. Agents using LeCTR learn to assume the role of teacher or student at the appropriate moments, exchanging action advice to accelerate the entire learning process. Our algorithm supports teaching heterogeneous teammates and advising under communication constraints, and it learns both what and when to advise. LeCTR is demonstrated to outperform prior teaching methods in both final performance and rate of learning on multiple benchmark domains. To our knowledge, this is the first approach for learning to teach in a multiagent setting. …
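As a loose illustration of the advising setup described above, the sketch below shows one way an advising step and a learning-progress teaching reward could be wired together. The class structure, the confidence-based advising rule, and the toy Q-table are assumptions made for illustration, not the LeCTR algorithm itself.

```python
# Minimal sketch of peer-to-peer action advising with a learning-progress
# teaching reward (illustrative only, not the authors' implementation).

class Agent:
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.q = {}                          # toy task-level value table

    def act(self, obs):
        # Task-level policy: greedy over the toy Q-table.
        values = self.q.get(obs, [0.0] * self.n_actions)
        return max(range(self.n_actions), key=lambda a: values[a])

    def advise(self, own_obs, student_obs):
        # Advising policy: decide *when* to advise and *what* action to
        # suggest. The confidence threshold below is an assumption.
        values = self.q.get(student_obs)
        if values and max(values) - min(values) > 0.5:
            return max(range(self.n_actions), key=lambda a: values[a])
        return None                          # "no advice"

def teaching_reward(eval_before, eval_after):
    # Teaching reward = student's learning progress across the advising phase,
    # approximated by the change in the student's task-level evaluation.
    return eval_after - eval_before

# Toy usage: a freshly initialized teacher has no confident advice to give.
teacher, student = Agent(4), Agent(4)
print(teacher.advise(own_obs="s1", student_obs="s1"))   # -> None
```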
Perturbative Neural Network (PNN)
Convolutional neural networks are seeing wide adoption in computer vision systems, with numerous applications across a range of visual recognition tasks. Much of this progress is fueled by advances in convolutional neural network architectures and learning algorithms, even as the basic premise of a convolutional layer has remained unchanged. In this paper, we revisit the convolutional layer that has been the workhorse of state-of-the-art visual recognition models. We introduce a very simple, yet effective, module called a perturbation layer as an alternative to a convolutional layer. The perturbation layer does away with convolution in the traditional sense and instead computes its response as a weighted linear combination of non-linearly activated, additive-noise-perturbed inputs. We demonstrate both analytically and empirically that this perturbation layer can be an effective replacement for a standard convolutional layer. Empirically, deep neural networks with perturbation layers in lieu of convolutional layers, called Perturbative Neural Networks (PNNs), perform comparably to standard CNNs on a range of visual datasets (MNIST, CIFAR-10, PASCAL VOC, and ImageNet) with fewer parameters. …
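The description of the perturbation layer is concrete enough to sketch: the PyTorch module below computes a learned 1x1-convolution combination of ReLU-activated, additive-noise-perturbed input channels. The noise shape, the noise scale, and the one-mask-per-channel simplification are illustrative assumptions, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of a perturbation layer: a weighted linear combination
# (here a learned 1x1 convolution) of non-linearly activated,
# additive-noise-perturbed inputs.

class PerturbationLayer(nn.Module):
    def __init__(self, in_channels, out_channels, feature_hw, noise_level=0.1):
        super().__init__()
        h, w = feature_hw
        # Fixed (non-learned) additive noise masks, sampled once at init.
        self.register_buffer(
            "noise", noise_level * torch.randn(1, in_channels, h, w)
        )
        # Learned weights of the linear combination across perturbed channels.
        self.combine = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # Perturb with additive noise, activate non-linearly, then combine.
        return self.combine(F.relu(x + self.noise))

# Toy usage on a CIFAR-10-sized input.
layer = PerturbationLayer(in_channels=3, out_channels=16, feature_hw=(32, 32))
out = layer(torch.randn(8, 3, 32, 32))          # -> shape (8, 16, 32, 32)
```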
Hellinger Correlation
In this paper, the defining properties of a valid measure of dependence between two random variables are reviewed and complemented with two original ones, shown to be more fundamental than other usual postulates. While other popular choices are proved to violate some of these requirements, a class of dependence measures satisfying all of them is identified. One particular measure, which we call the Hellinger correlation, appears as a natural choice within that class due to both its theoretical and intuitive appeal. A simple and efficient nonparametric estimator of this quantity is proposed. Finally, synthetic and real-data examples illustrate the descriptive ability of the measure, which can also be used as a test statistic for exact independence testing. …
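The abstract does not reproduce the definition, but the quantity underlying the measure is the Hellinger distance between the joint distribution and the product of its marginals; as a reminder, its squared form can be written as below. To our understanding, the Hellinger correlation is obtained by suitably rescaling this distance; the exact normalization used in the paper is omitted here.

```latex
% Squared Hellinger distance between the joint density f_{XY} and the
% product of its marginals f_X f_Y (the rescaling that yields the
% Hellinger correlation is not reproduced here).
\[
  \mathcal{H}^2(X,Y)
  = \tfrac{1}{2}\iint
    \Bigl(\sqrt{f_{XY}(x,y)} - \sqrt{f_X(x)\,f_Y(y)}\Bigr)^{2}\,dx\,dy
  = 1 - \iint \sqrt{f_{XY}(x,y)\,f_X(x)\,f_Y(y)}\,dx\,dy ,
\]
% which equals 0 if and only if X and Y are independent.
```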
Locality Preserving Projection
With the advantages of low storage cost and high efficiency, hashing-based learning has received much attention in the retrieval field. As multiple modalities representing a common object are semantically complementary, many works focus on learning unified binary codes. However, these works ignore the importance of the manifold structure among the data. In fact, directly preserving the local manifold structure among samples in Hamming space remains an interesting problem. Since the different modalities are heterogeneous, we adopt the concatenation of the multiple modalities' features to represent the original object. In our framework, Locally Linear Embedding and Locality Preserving Projection are introduced to reconstruct the manifold structure of the original space in the Hamming space. In addition, L21-norm regularization is imposed on the projection matrices to further exploit discriminative features for the different modalities simultaneously. Extensive experiments evaluating the proposed method, dubbed Unsupervised Concatenation Hashing (UCH), on three publicly available datasets show that UCH outperforms most state-of-the-art unsupervised hashing models. …
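As a rough sketch of one ingredient of UCH, the snippet below implements standard Locality Preserving Projection on concatenated features and binarizes the projections into hash codes. The neighborhood size, heat-kernel width, code length, and the simple sign-based binarization are illustrative assumptions rather than the UCH formulation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

# Sketch of Locality Preserving Projection (LPP) on concatenated
# multi-modal features, followed by a naive binarization step.

def lpp(X, n_components=16, k=10, t=1.0):
    """X: (n_samples, n_features) concatenated features; returns projection A."""
    # k-NN affinity graph with heat-kernel weights.
    W = kneighbors_graph(X, k, mode="distance", include_self=False).toarray()
    W[W > 0] = np.exp(-W[W > 0] ** 2 / t)
    W = np.maximum(W, W.T)                      # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian

    # Generalized eigenproblem  X^T L X a = lambda X^T D X a ;
    # keep the eigenvectors with the smallest eigenvalues.
    eps = 1e-6 * np.eye(X.shape[1])             # regularize for stability
    vals, vecs = eigh(X.T @ L @ X, X.T @ D @ X + eps)
    return vecs[:, :n_components]               # columns = projection directions

# Toy usage: project concatenated features and binarize to get hash codes.
X = np.random.randn(200, 64)
A = lpp(X, n_components=16)
codes = np.sign(X @ A)                          # {-1, +1} codes for retrieval
```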