Capsule Projection Network (CapProNet)
In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. The lengths of the resultant capsules are then used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains the input feature vectors corresponding to the associated class. Only a negligible computing overhead is incurred to train the network, either in the low-dimensional capsule subspaces or through an alternative hyper-power iteration that estimates the normalization matrix. Experimental results on image datasets show that the presented model can greatly improve the performance of state-of-the-art ResNet backbones by $10-20\%$ at the same level of computing and memory costs. …
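To make the projection idea concrete, the sketch below implements a capsule projection head in PyTorch under assumed shapes and names (`CapsuleProjection`, `capsule_dim`, the initialization, and the backbone dimension are all illustrative, not the authors' reference code): one low-dimensional subspace per class, with the length of the orthogonally projected feature vector used as the class score.

```python
import torch
import torch.nn as nn

class CapsuleProjection(nn.Module):
    """Minimal sketch of a capsule projection head: one subspace per class,
    projected capsule length used as class evidence."""

    def __init__(self, feat_dim: int, num_classes: int, capsule_dim: int = 8):
        super().__init__()
        # One feat_dim x capsule_dim basis W_c per class; its column space is the capsule subspace.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim, capsule_dim) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) backbone features.
        scores = []
        for W in self.weight:                      # W: (feat_dim, capsule_dim)
            # Orthogonal projection P = W (W^T W)^{-1} W^T; the inverse is cheap
            # because it is only capsule_dim x capsule_dim.
            gram_inv = torch.linalg.inv(W.T @ W)   # (capsule_dim, capsule_dim)
            proj = x @ W @ gram_inv @ W.T          # projected capsule, (batch, feat_dim)
            scores.append(proj.norm(dim=1))        # capsule length as class score
        return torch.stack(scores, dim=1)          # (batch, num_classes) logits

# Usage: attach to any backbone's pooled features.
head = CapsuleProjection(feat_dim=512, num_classes=10)
logits = head(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```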

IntelligentCrowd
The prosperity of smart mobile devices has made mobile crowdsensing (MCS) a promising paradigm for completing complex sensing and computation tasks. In the past, great efforts have been made on the design of incentive mechanisms and task-allocation strategies from the MCS platform's perspective to motivate mobile users' participation. However, in practice, MCS participants face many uncertainties arising from their sensing environment as well as from other participants' strategies, and how they interact with each other and make sensing decisions is not well understood. In this paper, we take the MCS participants' perspective to derive an online sensing policy that maximizes their payoffs from MCS participation. Specifically, we model the interactions of mobile users and sensing environments as a multi-agent Markov decision process. Each participant cannot observe the others' decisions, but must decide her effort level in sensing tasks based only on local information, e.g., her own record of sensed signals' quality. To cope with the stochastic sensing environment, we develop an intelligent crowdsensing algorithm, IntelligentCrowd, by leveraging the power of multi-agent reinforcement learning (MARL). Our algorithm leads to the optimal sensing policy for each user to maximize the expected payoff against stochastic sensing environments, and can be implemented at the individual participant's level in a distributed fashion. Numerical simulations demonstrate that IntelligentCrowd significantly improves users' payoffs in sequential MCS tasks under various sensing dynamics. …
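As a rough illustration of the distributed, local-information setting described above (not the paper's MARL algorithm or payoff model), the sketch below uses independent Q-learning as a stand-in: each participant keeps its own table over its local quality record and effort levels, never observes the others' decisions, and the environment and payoff function are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, num_quality_levels, num_effort_levels = 3, 5, 4
alpha, gamma, eps = 0.1, 0.9, 0.1

# Each agent keeps its own Q-table over (local quality record, effort level);
# no agent observes the others' decisions.
Q = np.zeros((num_agents, num_quality_levels, num_effort_levels))
quality = rng.integers(num_quality_levels, size=num_agents)   # local sensed-quality state

def payoff(effort, q, total_effort):
    # Toy payoff: reward shared over total contributed effort minus a private sensing cost.
    return (effort * (q + 1)) / (1.0 + total_effort) - 0.2 * effort

for step in range(5000):
    # Each agent acts on local information only (epsilon-greedy on its own Q-table).
    efforts = np.array([
        rng.integers(num_effort_levels) if rng.random() < eps
        else int(np.argmax(Q[i, quality[i]]))
        for i in range(num_agents)
    ])
    total = efforts.sum()
    next_quality = rng.integers(num_quality_levels, size=num_agents)  # stochastic environment
    for i in range(num_agents):
        r = payoff(efforts[i], quality[i], total)
        best_next = Q[i, next_quality[i]].max()
        Q[i, quality[i], efforts[i]] += alpha * (
            r + gamma * best_next - Q[i, quality[i], efforts[i]]
        )
    quality = next_quality

print("Greedy effort per quality level, agent 0:", Q[0].argmax(axis=1))
```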

Mix and Match (M&M)
We introduce Mix&Match (M&M) – a training framework designed to facilitate rapid and effective learning in RL agents, especially those that would be too slow or too challenging to train otherwise. The key innovation is a procedure that allows us to automatically form a curriculum over agents. Through such a curriculum we can progressively train more complex agents by, effectively, bootstrapping from solutions found by simpler agents. In contradistinction to typical curriculum learning approaches, we do not gradually modify the tasks or environments presented, but instead use a process to gradually alter how the policy is represented internally. We show the broad applicability of our method by demonstrating significant performance gains in three different experimental setups: (1) We train an agent able to control more than 700 actions in a challenging 3D first-person task; using our method to progress through an action-space curriculum we achieve both faster training and better final performance than one obtains using traditional methods. (2) We further show that M&M can be used successfully to progress through a curriculum of architectural variants defining an agents internal state. (3) Finally, we illustrate how a variant of our method can be used to improve agent performance in a multitask setting. …
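The core mechanism can be sketched as a mixture over a simple and a complex policy whose mixing weight is annealed during training, with a distillation term keeping the complex policy close to the current mixture. The network sizes, the linear annealing schedule, and the use of a plain KL distillation loss below are illustrative assumptions, not the paper's configuration (the RL objective itself is omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, num_actions = 16, 8
pi_simple = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, num_actions))
pi_complex = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, num_actions))

def mixed_policy(obs: torch.Tensor, alpha: float) -> torch.Tensor:
    """Action distribution from the alpha-weighted mixture of both policies."""
    p_simple = F.softmax(pi_simple(obs), dim=-1)
    p_complex = F.softmax(pi_complex(obs), dim=-1)
    return (1.0 - alpha) * p_simple + alpha * p_complex

opt = torch.optim.Adam(pi_complex.parameters(), lr=1e-3)
total_steps = 1000
for step in range(total_steps):
    alpha = min(1.0, step / (0.8 * total_steps))   # anneal control toward the complex agent
    obs = torch.randn(32, obs_dim)                 # placeholder observations
    with torch.no_grad():
        target = mixed_policy(obs, alpha)          # behaviour policy used to act
    # Distillation: pull the complex policy toward the mixture so it can eventually
    # take over without a drop in performance.
    log_p_complex = F.log_softmax(pi_complex(obs), dim=-1)
    loss = F.kl_div(log_p_complex, target, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```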

Reduced Dynamic Chain Event Graph (RDCEG)
In this paper we introduce a new class of probabilistic graphical models called the Reduced Dynamic Chain Event Graph (RDCEG), a novel mixture of a Chain Event Graph (CEG) and a semi-Markov process (SMP). It has been demonstrated that many real-world scenarios, particularly in the domains of public health and security, can be modelled as an unfolding of events in the life histories of individuals. Our interest lies not only in the future trajectories of an individual with a specified history and set of characteristics, but also in the timescale associated with these developments. Such information is critical in developing suitable interventions and informs the prioritisation of policy decisions. The RDCEG was born out of the need for such a model. It is a coloured graph which inherits useful properties from the family of probabilistic graphical models, such as fast conjugate model selection, conditional-independence interrogations and support for causal interventions. Its novelty lies in its underlying semi-Markov structure, which offers the flexibility of allowing the holding time at each state to follow an arbitrary distribution. We demonstrate this new decision support system with a simulated intervention to reduce falls in the elderly. …
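The semi-Markov backbone the RDCEG builds on can be illustrated with a toy simulation: an embedded Markov chain over states plus an arbitrary holding-time distribution at each state. The states, transition probabilities, and holding-time choices below are invented for illustration (loosely inspired by the falls example) and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

states = ["community", "assessed", "communal care", "absorbed"]
# Embedded transition matrix: P[i, j] = probability of moving from state i to state j.
P = np.array([
    [0.0, 0.7, 0.1, 0.2],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.2, 0.0, 0.8],
    [0.0, 0.0, 0.0, 1.0],   # absorbing state
])

# Arbitrary holding-time distribution per state: the key flexibility of the semi-Markov view.
holding_time = [
    lambda: rng.weibull(1.5) * 12.0,     # months in the community
    lambda: rng.exponential(1.0),        # short assessment period
    lambda: rng.gamma(2.0, 6.0),         # time in communal care
    lambda: 0.0,                         # no holding time once absorbed
]

def simulate(start: int = 0, max_steps: int = 50):
    """Sample one individual's trajectory of (state, time spent in that state)."""
    s, path = start, []
    for _ in range(max_steps):
        path.append((states[s], holding_time[s]()))
        if s == len(states) - 1:
            break
        s = rng.choice(len(states), p=P[s])
    return path

for state, t in simulate():
    print(f"{state:15s} held for {t:5.1f} months")
```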