Harmonic Coding
We consider the problem of computing a general class of functions, referred to as gradient-type computation, in a distributed manner while maintaining the privacy of the input dataset. Gradient-type computation evaluates the sum of some ‘partial gradients’, defined as polynomials of subsets of the input. It underlies many algorithms in machine learning and data analytics. We propose Harmonic Coding, which universally computes any gradient-type function while requiring the minimum possible number of workers. Harmonic Coding strictly improves upon computing schemes based on prior works, such as Shamir’s secret sharing and Lagrange Coded Computing, by injecting coded redundancy using a harmonic progression. This enables the computing results of the workers to be interpreted as the sum of partial gradients plus some redundant terms, which in turn allows the non-gradient terms to be cancelled in the decoding process. By proving a matching converse, we demonstrate the optimality of Harmonic Coding, even compared to schemes that are non-universal (i.e., that can be designed for a specific gradient-type function). …
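To make the computation model concrete, here is a toy Python sketch of a gradient-type computation (the computation model Harmonic Coding targets, not the coding scheme itself), assuming a quadratic loss: each worker evaluates a polynomial partial gradient on its data shard, and the master only needs the sum of those partial gradients. All names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))      # input dataset
y = rng.normal(size=12)
w = rng.normal(size=3)            # current model

def partial_gradient(X_i, y_i, w):
    # a degree-2 polynomial of the shard entries: X_i^T (X_i w - y_i)
    return X_i.T @ (X_i @ w - y_i)

# split the data into shards, one per worker
shards = np.array_split(np.arange(12), 4)
partials = [partial_gradient(X[s], y[s], w) for s in shards]

# the master only needs the SUM of the partial gradients
grad = np.sum(partials, axis=0)
assert np.allclose(grad, X.T @ (X @ w - y))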
Geometric Illustration of Neural Networks (GINN)
This informal technical report details the geometric illustration of decision boundaries for ReLU units in a three-layer fully connected neural network. The network is designed and trained to predict pixel intensity from an (x, y) input location. The Geometric Illustration of Neural Networks (GINN) tool was built to visualise and track the points at which ReLU units switch from being active to off (or vice versa) as the network undergoes training. Several phenomena were observed and are discussed herein. This technical report is a supporting document to the blog post, which includes online demos and is available at http://…/. …
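As a rough sketch of the quantity GINN tracks, the code below evaluates the first-layer pre-activations of a small, randomly initialised network on a grid of (x, y) locations and counts where each ReLU unit switches between active and off; for a first-layer unit this boundary is the line w·(x, y) + b = 0. The network sizes and names are illustrative, not those of the GINN tool itself.

import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # first hidden layer

# grid of (x, y) input locations
xs, ys = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)       # (N, 2)

pre = grid @ W1.T + b1                                  # pre-activations, (N, 8)
active = pre > 0                                        # ReLU on/off per unit

for unit in range(W1.shape[0]):
    mask = active[:, unit].reshape(xs.shape)
    # grid points where the unit switches between active and off along x
    switches = np.logical_xor(mask[:, 1:], mask[:, :-1])
    print(f"unit {unit}: {switches.sum()} switch points on the grid")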
Complex-Valued Neural Network (CVNN)
A complex-valued neural network is an extension of the usual real-valued neural network, whose input and output signals and parameters, such as weights and thresholds, are all complex numbers (the activation function is necessarily a complex-valued function). Neural networks have been applied to various fields such as communication systems, image processing, and speech recognition, in which complex numbers often arise through the Fourier transform; this suggests that complex-valued neural networks are a natural fit for such domains. In addition, in the human brain an action potential may exhibit different pulse patterns, and the distance between pulses may vary. This suggests that introducing complex numbers, which can represent phase and amplitude, into neural networks is appropriate. In recent years, complex-valued neural networks have found expanding applications in image processing, computer vision, optoelectronic imaging, communications, and so on. This potentially wide applicability calls for new theory to support novel or more effective functions and mechanisms. …
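A minimal sketch of a complex-valued dense layer is given below, assuming a "split" activation that applies ReLU separately to the real and imaginary parts (one common choice; as noted above, the activation must itself be a complex-valued function). All names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def complex_dense(z, W, b):
    return z @ W + b            # complex matmul: amplitude and phase interact

def split_relu(z):
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

# complex weights, thresholds (biases), and input signal
W = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
b = rng.normal(size=3) + 1j * rng.normal(size=3)
z = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(5, 4)))  # unit-amplitude inputs

out = split_relu(complex_dense(z, W, b))
print(out.shape, out.dtype)     # (5, 3) complex128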
graph2vec
Recent works on representation learning for graph-structured data predominantly focus on learning distributed representations of graph substructures such as nodes and subgraphs. However, many graph analytics tasks, such as graph classification and clustering, require representing entire graphs as fixed-length feature vectors. While the aforementioned approaches are naturally unequipped to learn such representations, graph kernels remain the most effective way of obtaining them. However, these graph kernels use handcrafted features (e.g., shortest paths, graphlets, etc.) and hence are hampered by problems such as poor generalization. To address this limitation, in this work, we propose a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrarily sized graphs. graph2vec’s embeddings are learnt in an unsupervised manner and are task-agnostic. Hence, they can be used for any downstream task such as graph classification, clustering, and even seeding supervised representation learning approaches. Our experiments on several benchmark and large real-world datasets show that graph2vec achieves significant improvements in classification and clustering accuracy over substructure representation learning approaches and is competitive with state-of-the-art graph kernels. …
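A minimal sketch of the graph2vec idea follows, using Weisfeiler-Lehman relabelling to extract rooted-subgraph tokens and gensim's Doc2Vec in PV-DBOW mode to embed each graph as a "document" of those tokens. The helper wl_tokens and all hyperparameters here are illustrative, not the reference implementation.

import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def wl_tokens(graph, iterations=2):
    """Rooted-subgraph tokens from `iterations` rounds of WL relabelling."""
    labels = {v: str(graph.degree(v)) for v in graph.nodes()}  # initial labels
    tokens = list(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for v in graph.nodes():
            neigh = sorted(labels[u] for u in graph.neighbors(v))
            new_labels[v] = labels[v] + "_" + "_".join(neigh)  # compressed subtree label
        labels = new_labels
        tokens.extend(labels.values())
    return tokens

# toy corpus of graphs; each graph becomes one tagged "document"
graphs = [nx.cycle_graph(6), nx.path_graph(6), nx.complete_graph(5)]
corpus = [TaggedDocument(words=wl_tokens(g), tags=[str(i)])
          for i, g in enumerate(graphs)]

# dm=0 selects PV-DBOW, the doc2vec variant graph2vec builds on
model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50, dm=0)
embeddings = [model.dv[str(i)] for i in range(len(graphs))]
print(embeddings[0][:5])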