Bottleneck Simulator
Deep reinforcement learning has recently shown many impressive successes. However, one major obstacle towards applying such methods to real-world problems is their lack of data efficiency. To this end, we propose the Bottleneck Simulator: a model-based reinforcement learning method which combines a learned, factorized transition model of the environment with rollout simulations to learn an effective policy from few examples. The learned transition model employs an abstract, discrete (bottleneck) state, which increases sample efficiency by reducing the number of model parameters and by exploiting structural properties of the environment. We provide a mathematical analysis of the Bottleneck Simulator in terms of fixed points of the learned policy, which reveals how performance is affected by four distinct sources of error: an error related to the abstract space structure, an error related to the transition model estimation variance, an error related to the transition model estimation bias, and an error related to the transition model class bias. Finally, we evaluate the Bottleneck Simulator on two natural language processing tasks: a text adventure game and a real-world, complex dialogue response selection task. On both tasks, the Bottleneck Simulator yields excellent performance, beating competing approaches. …
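The following is a minimal sketch of the idea described above, not the authors' implementation: a factorized transition/reward model is estimated over a small, discrete (bottleneck) abstract state space, and a policy is then trained with Q-learning on rollouts simulated from that model. The abstract-state and action sizes, and the tabular model, are illustrative assumptions.

```python
# Minimal sketch of the Bottleneck Simulator idea (illustrative, not the authors' code):
# learn a transition model over a small, discrete "bottleneck" abstraction of the state,
# then train a policy from rollouts simulated by that model instead of the real environment.
import numpy as np

rng = np.random.default_rng(0)
n_abstract, n_actions = 8, 4                  # hypothetical abstract-state / action space sizes

# Counts gathered from a (small) batch of real transitions: (z, a) -> z', r
trans_counts = np.ones((n_abstract, n_actions, n_abstract))   # Laplace-smoothed counts
reward_sum = np.zeros((n_abstract, n_actions))
reward_cnt = np.ones((n_abstract, n_actions))

def update_model(z, a, r, z_next):
    """Fit the factorized transition/reward model from observed data."""
    trans_counts[z, a, z_next] += 1.0
    reward_sum[z, a] += r
    reward_cnt[z, a] += 1.0

def simulate_step(z, a):
    """Roll the learned model forward one step (used in place of the real environment)."""
    probs = trans_counts[z, a] / trans_counts[z, a].sum()
    z_next = rng.choice(n_abstract, p=probs)
    r = reward_sum[z, a] / reward_cnt[z, a]
    return z_next, r

# Q-learning on simulated rollouts only.
Q = np.zeros((n_abstract, n_actions))
gamma, alpha, eps = 0.95, 0.1, 0.1
for _ in range(5000):
    z = rng.integers(n_abstract)
    for _ in range(20):                       # one simulated rollout
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[z].argmax())
        z_next, r = simulate_step(z, a)
        Q[z, a] += alpha * (r + gamma * Q[z_next].max() - Q[z, a])
        z = z_next
```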

Condensed Representation
Correlated pattern mining has increasingly become an important task in data mining, since these patterns convey knowledge about meaningful and surprising relations among data. Frequent correlated patterns have been thoroughly studied in the literature. In this thesis, we propose to benefit from both frequent correlated and rare correlated patterns according to the bond correlation measure. We propose to extract, without information loss, a subset of the sets of frequent correlated and rare correlated patterns; this subset is called a “Condensed Representation”. In this regard, we rely on notions derived from Formal Concept Analysis (FCA), specifically the equivalence classes associated with a closure operator f_bond dedicated to the bond measure, to introduce new concise representations of both frequent correlated and rare correlated patterns. …
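As a quick illustration of the correlation measure mentioned above (my own sketch, not the thesis code): the bond of an itemset is its conjunctive support divided by its disjunctive support, i.e. the fraction of transactions containing all of its items among those containing at least one of them. The toy transaction database and thresholds below are assumptions for illustration.

```python
# Sketch of the bond measure: bond(X) = conjunctive support / disjunctive support.
from itertools import combinations

transactions = [            # toy transaction database (hypothetical)
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c", "d"},
    {"a", "b", "c", "d"},
]

def bond(itemset):
    conj = sum(1 for t in transactions if itemset <= t)   # transactions with all items
    disj = sum(1 for t in transactions if itemset & t)    # transactions with at least one item
    return conj / disj if disj else 0.0

def support(itemset):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

min_supp, min_bond = 0.4, 0.5
items = sorted(set().union(*transactions))
for size in (2, 3):
    for combo in combinations(items, size):
        x = set(combo)
        if bond(x) >= min_bond:
            kind = "frequent correlated" if support(x) >= min_supp else "rare correlated"
            print(sorted(x), round(bond(x), 2), kind)
```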

Graph Feature Network (GFN)
Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we first propose the Graph Feature Network (GFN), a simple lightweight neural net defined on a set of graph-augmented features. We then propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To test the importance of these two parts separately, we prove and leverage the connection that GFN can be derived by linearizing the graph filtering part of GNNs. Empirically, we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN can match or exceed the best accuracies produced by recently proposed GNNs, at a fraction of the computation cost. Our results provide new perspectives on both the functions that GNNs learn and the current benchmarks for evaluating them. …
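A rough sketch of how such a model could look (assumed from the abstract, not the authors' code): the graph-augmented features are precomputed as node degrees plus a few propagated copies of the raw node features, and a permutation-invariant set function (per-node layer, sum pooling, output layer) produces the graph-level prediction. Sizes, the normalization of the adjacency matrix, and the two-layer set function are illustrative choices.

```python
# Sketch of a GFN-style forward pass: graph-augmented features + set function.
import numpy as np

rng = np.random.default_rng(0)

def augmented_features(A, X, K=2):
    """Stack node degrees and K propagated copies of the raw features."""
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.maximum(deg, 1.0)             # simple row-normalized adjacency
    feats, H = [deg, X], X
    for _ in range(K):
        H = A_hat @ H
        feats.append(H)
    return np.concatenate(feats, axis=1)         # shape: (n_nodes, 1 + (K+1)*d)

def gfn_forward(A, X, W1, W2):
    """Set function on the augmented features: node layer, sum pooling, output layer."""
    H = augmented_features(A, X)
    H = np.maximum(H @ W1, 0.0)                  # per-node hidden layer (ReLU)
    g = H.sum(axis=0)                            # permutation-invariant pooling
    return g @ W2                                # graph-level logits

# Toy 4-node graph with 3-dimensional node features and 2 classes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W1 = rng.normal(size=(1 + 3 * 3, 16))            # 1 degree column + (K+1)=3 copies of d=3
W2 = rng.normal(size=(16, 2))
print(gfn_forward(A, X, W1, W2))                 # unnormalized class scores
```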

Quantum Annealing (QA)
Quantum annealing (QA) is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states) by a process using quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima, such as finding the ground state of a spin glass. It was formulated in its present form by T. Kadowaki and H. Nishimori in ‘Quantum annealing in the transverse Ising model’, though a proposal in a different form had been made by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in ‘Quantum annealing: A new method for minimizing multidimensional functions’. Quantum annealing starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. The system then evolves following the time-dependent Schrödinger equation, the natural quantum-mechanical evolution of physical systems. The amplitudes of all candidate states keep changing, realizing a quantum parallelism, according to the time-dependent strength of the transverse field, which causes quantum tunneling between states. If the rate of change of the transverse field is slow enough, the system stays close to the ground state of the instantaneous Hamiltonian, i.e., adiabatic quantum computation. If the rate of change of the transverse field is accelerated, the system may leave the ground state temporarily but still produce a high likelihood of ending in the ground state of the final problem Hamiltonian, i.e., diabatic quantum computation. The transverse field is finally switched off, and the system is expected to have reached the ground state of the classical Ising model that corresponds to the solution of the original optimization problem. An experimental demonstration of the success of quantum annealing for random magnets was reported immediately after the initial theoretical proposal. An introduction to combinatorial optimization (NP-hard) problems, the general structure of quantum annealing-based algorithms, and two examples of such algorithms for solving instances of the max-SAT and Minimum Multicut problems, together with an overview of the quantum annealing systems manufactured by D-Wave Systems, are presented in. …
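The toy numerical illustration below shows the mechanism described above on a classical computer (a sketch, not a real annealer): a 3-spin state vector is evolved under H(s) = (1 - s) * H_driver + s * H_problem with a linear schedule s: 0 → 1, and the final probability of landing in the problem Hamiltonian's ground-state manifold is reported. The couplings, schedule length, and step count are arbitrary illustrative choices; slower schedules stay closer to the instantaneous ground state, as the adiabatic argument suggests.

```python
# Toy simulation of quantum annealing for a 3-spin transverse-field Ising problem.
import numpy as np

n = 3                                            # number of spins (2^n amplitudes)
dim = 2 ** n
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops):
    """Kronecker product of single-site operators, e.g. {0: sx} acts on spin 0."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I2))
    return out

# Driver: transverse field -sum_i sigma_x^i (its ground state is the uniform superposition).
H_driver = -sum(op_on({i: sx}) for i in range(n))
# Problem: classical Ising couplings J_ij sigma_z^i sigma_z^j (arbitrary toy values).
J = {(0, 1): 1.0, (1, 2): -1.0, (0, 2): 0.5}
H_problem = sum(Jij * op_on({i: sz, j: sz}) for (i, j), Jij in J.items())

# Start in the driver ground state: equal superposition of all basis states.
psi = np.ones(dim, dtype=complex) / np.sqrt(dim)

T, steps = 50.0, 2000                            # total annealing time and number of steps
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps                        # linear annealing schedule s: 0 -> 1
    H = (1 - s) * H_driver + s * H_problem
    vals, vecs = np.linalg.eigh(H)               # exact per-step propagator exp(-i H dt)
    psi = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))

# Probability of ending in the (possibly degenerate) classical ground-state manifold.
evals, evecs = np.linalg.eigh(H_problem)
ground = np.flatnonzero(np.isclose(evals, evals.min()))
p_ground = sum(abs(evecs[:, i].conj() @ psi) ** 2 for i in ground)
print("probability of problem ground state:", p_ground)
```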