Adaptive Graph Convolutional Neural Network
Graph Convolutional Neural Networks (Graph CNNs) generalize classical CNNs to graph data such as molecules, point clouds, and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. The paper proposes a generalized and flexible graph CNN that takes data of arbitrary graph structure as input. In this way, a task-driven adaptive graph is learned for each input sample during training. To learn the graph efficiently, a distance metric learning method is proposed. Extensive experiments on nine graph-structured datasets demonstrate clear improvements in both convergence speed and predictive accuracy.
➘ “Graph Convolutional Neural Network” …
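As a rough illustration of learning a task-driven graph from the data itself, the sketch below builds a dense adjacency from node features using a learnable Mahalanobis-style metric followed by a Gaussian kernel. The function name `adaptive_adjacency`, the parameter `Wd`, and the fixed `sigma` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def adaptive_adjacency(X, Wd, sigma=1.0):
    """Build a task-driven adjacency from node features X (n x d).

    Uses a generalized Mahalanobis distance parameterized by M = Wd @ Wd.T
    (Wd would be learned jointly with the task loss), followed by a
    Gaussian kernel that turns distances into edge weights.
    """
    M = Wd @ Wd.T                                        # learned metric, d x d, PSD
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences, n x n x d
    dist2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)   # squared Mahalanobis distances
    A = np.exp(-dist2 / (2 * sigma ** 2))                # Gaussian kernel -> dense adjacency
    np.fill_diagonal(A, 0.0)                             # no self-loops
    return A

# toy usage: 5 nodes with 4-dimensional features and a random (untrained) metric
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Wd = rng.normal(size=(4, 4))
print(adaptive_adjacency(X, Wd).round(3))
```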
Jumping Knowledge Network (JKN)
Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models and propose a strategy to overcome the resulting limitations. In particular, the range of ‘neighboring’ nodes that a node’s representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture — jumping knowledge (JK) networks — that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representations. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models such as Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models’ performance. …
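The step that gives JK networks their name is the final combination of per-layer node representations, for example by concatenation or element-wise max. Below is a minimal numpy sketch of that combination step, assuming the per-layer representations have already been produced by some GNN; the LSTM-attention aggregator that the paper also considers is omitted.

```python
import numpy as np

def jumping_knowledge(layer_reprs, mode="concat"):
    """Combine per-layer node representations [h^(1), ..., h^(L)],
    each of shape (n_nodes, d), into a final node representation."""
    if mode == "concat":
        # each node keeps all of its layer-wise views, side by side
        return np.concatenate(layer_reprs, axis=-1)      # (n_nodes, L*d)
    if mode == "max":
        # element-wise max over layers: each feature picks its most informative range
        return np.stack(layer_reprs, axis=0).max(axis=0)  # (n_nodes, d)
    raise ValueError(f"unknown mode: {mode}")

# toy usage: three layers of 4-node, 8-dimensional representations
rng = np.random.default_rng(1)
layers = [rng.normal(size=(4, 8)) for _ in range(3)]
print(jumping_knowledge(layers, "concat").shape)  # (4, 24)
print(jumping_knowledge(layers, "max").shape)     # (4, 8)
```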
Self-Reducibility
This chapter provides a hands-on tutorial on the important technique known as self-reducibility. Through a series of ‘Challenge Problems’ that are theorems that the reader will—after being given definitions and tools—try to prove, the tutorial will ask the reader not to read proofs that use self-reducibility, but rather to discover proofs that use self-reducibility. In particular, the chapter will seek to guide the reader to the discovery of proofs of four interesting theorems—whose focus areas range from selectivity to information to approximation—from the literature, whose proofs draw on self-reducibility. The chapter’s goal is to allow interested readers to add self-reducibility to their collection of proof tools. The chapter simultaneously has a related but different goal, namely, to provide a ‘lesson plan’ (and a coordinated set of slides is available online to support this use [Hem19]) for a lecture or a two-lecture series that can be given to undergraduate students—even those with no background other than basic discrete mathematics and an understanding of what polynomial-time computation is—to immerse them in hands-on proving, and by doing that, to serve as an invitation to them to take courses on Models of Computation or Complexity Theory. …
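A classic warm-up for this technique (not necessarily one of the chapter's four challenge problems) is the self-reducibility of SAT: a decision procedure for satisfiability can be used to construct a satisfying assignment by fixing one variable at a time. A minimal Python sketch, with a brute-force stand-in for the hypothetical decision oracle:

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Stand-in decision 'oracle' for SAT (brute force here; in the
    self-reducibility argument this would be a hypothetical efficient decider).
    Clauses are lists of nonzero ints: literal k means x_k, -k means NOT x_k."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, n_vars):
    """Search-to-decision reduction: because SAT is self-reducible, a decision
    oracle suffices to build a satisfying assignment one variable at a time."""
    if not is_satisfiable(clauses, n_vars):
        return None
    fixed, assignment = [], []
    for v in range(1, n_vars + 1):
        # try fixing variable v to True by adding the unit clause [v]
        if is_satisfiable(fixed + [[v]] + clauses, n_vars):
            fixed.append([v]); assignment.append(True)
        else:
            fixed.append([-v]); assignment.append(False)
    return assignment

# toy usage: (x1 OR NOT x2) AND (x2 OR x3)
print(find_assignment([[1, -2], [2, 3]], 3))
```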
AQuA
Commonsense reasoning is a critical AI capability, but it is difficult to construct challenging datasets that test common sense. Recent neural question-answering systems, based on large pre-trained models of language, have already achieved near-human-level performance on commonsense knowledge benchmarks. These systems do not possess human-level common sense, but are able to exploit limitations of the datasets to achieve human-level scores. We introduce the AQuA dataset, an adversarially-constructed evaluation dataset for testing common sense. AQuA forms a challenging extension to the recently-proposed SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video. To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems. Workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning (in cross-validation). We create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset. We observe a significant gap between human performance, which is 95.3%, and the best baseline, the OpenAI GPT model, which achieves 65.3% accuracy. …
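The acceptance rule behind this adversarial collection procedure can be sketched as a simple predicate: a submitted question is kept, and the worker rewarded, only if the baseline model answers it incorrectly both before and after fine-tuning. The callables `model_zero_shot` and `model_finetuned` below are hypothetical stand-ins for the actual evaluation pipeline, not part of any released code.

```python
def accept_question(question, answer, model_zero_shot, model_finetuned):
    """Illustrative acceptance rule for adversarial question collection:
    reward a submission only if the baseline model gets it wrong both
    before fine-tuning and after (cross-validated) fine-tuning."""
    wrong_before = model_zero_shot(question) != answer
    wrong_after = model_finetuned(question) != answer
    return wrong_before and wrong_after

# toy usage with stand-in 'models' that always predict answer choice 0
keep = accept_question("q", answer=2,
                       model_zero_shot=lambda q: 0,
                       model_finetuned=lambda q: 0)
print(keep)  # True: both models are wrong, so the question is kept
```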