Developed Lagrange Interpolation (DLI)
In this work, we introduce a new class of functions that can be used to solve linear and nonlinear multi-dimensional differential equations. Based on these functions, we propose a numerical method called the Developed Lagrange Interpolation (DLI). First, we define the new class of functions, called the Developed Lagrange Functions (DLFs), which satisfy the Kronecker delta property at the collocation points. Then, for the DLFs, we obtain the first-order derivative operational matrix $\textbf{D}^{(1)}$ and provide a recurrence relation to compute the higher-order derivative operational matrices $\textbf{D}^{(m)}$, $m\in \mathbb{N}$; that is, we extend the theorem on the derivative operational matrices of the classical Lagrange polynomials to the DLFs and show that the relation $\textbf{D}^{(m)}=(\textbf{D}^{(1)})^{m}$ does not hold for the DLFs and must be generalized. We also extend the error analysis of classical Lagrange interpolation to the developed Lagrange interpolation. Finally, to demonstrate the convergence and efficiency of the DLI, some well-known differential equations arising in the applied sciences are investigated for various choices of the interpolation/collocation points.
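For the classical Lagrange polynomials — the setting the DLFs generalize — the first-order derivative operational matrix can be built in barycentric form, and there the relation $\textbf{D}^{(m)}=(\textbf{D}^{(1)})^{m}$ does hold exactly. A minimal NumPy sketch of that classical case (the node choice and sizes are illustrative; this is not the DLF construction itself):

```python
import numpy as np

def lagrange_diff_matrix(x):
    """First-order derivative operational matrix D^(1) for classical
    Lagrange interpolation on distinct nodes x (barycentric form):
    D[i, j] = (w_j / w_i) / (x_i - x_j) for i != j, rows summing to zero."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)          # barycentric weights w_j
    D = np.zeros((n, n))
    off = ~np.eye(n, dtype=bool)
    D[off] = (w[None, :] / w[:, None])[off] / (x[:, None] - x[None, :])[off]
    np.fill_diagonal(D, -D.sum(axis=1))  # diagonal: derivative of constants is 0
    return D

x = np.cos(np.pi * np.arange(9) / 8)     # Chebyshev-Lobatto nodes on [-1, 1]
D = lagrange_diff_matrix(x)
f = x ** 3                               # interpolation is exact for degree <= 8
```

Applying `D` once to the samples of $x^3$ recovers $3x^2$ at the nodes, and applying it twice recovers $6x$, i.e. $\textbf{D}^{(2)}=(\textbf{D}^{(1)})^{2}$ in the classical setting — the identity the abstract shows fails for the DLFs.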

ThunderNet
Real-time generic object detection on mobile platforms is a crucial but challenging computer vision task. Previous CNN-based detectors suffer from enormous computational cost, which prevents real-time inference in computation-constrained scenarios. In this paper, we investigate the effectiveness of two-stage detectors for real-time generic detection and propose a lightweight two-stage detector named ThunderNet. In the backbone part, we analyze the drawbacks of previous lightweight backbones and present a lightweight backbone designed for object detection. In the detection part, we exploit an extremely efficient RPN and detection head design. To generate more discriminative feature representations, we design two efficient architecture blocks: the Context Enhancement Module and the Spatial Attention Module. Finally, we investigate the balance between the input resolution, the backbone, and the detection head. Compared with lightweight one-stage detectors, ThunderNet achieves superior performance with only 40% of the computational cost on the PASCAL VOC and COCO benchmarks. Without bells and whistles, our model runs at 24.1 fps on an ARM-based device. To the best of our knowledge, this is the first real-time detector reported on ARM platforms. Code will be released for paper reproduction.
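The abstract does not spell out the layers inside the Spatial Attention Module; as a heavily hedged sketch of the general idea only — using RPN features to spatially reweight the detection features — one might write the following, where the shapes, the channel-mean reduction, and the sigmoid gate are all illustrative assumptions, not ThunderNet's actual design:

```python
import numpy as np

def spatial_attention(rpn_feat, det_feat):
    """Illustrative spatial-attention gating: collapse the RPN feature map
    to a single (H, W) attention map, squash it to (0, 1) with a sigmoid,
    and reweight the detection features position-wise."""
    attn = 1.0 / (1.0 + np.exp(-rpn_feat.mean(axis=0)))  # (H, W), values in (0, 1)
    return det_feat * attn                               # broadcast over channels

rng = np.random.default_rng(0)
rpn_feat = rng.standard_normal((256, 20, 20))  # hypothetical RPN features (C, H, W)
det_feat = rng.standard_normal((128, 20, 20))  # hypothetical detection features
out = spatial_attention(rpn_feat, det_feat)
```

Because the gate lies in (0, 1), the module can only attenuate responses, emphasizing positions the RPN considers likely to contain objects.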

Physics-Guided Neural Network (PGNN)
This paper introduces a novel framework for combining scientific knowledge of physics-based models with neural networks to advance scientific discovery. This framework, termed the physics-guided neural network (PGNN), leverages the output of physics-based model simulations along with observational features to generate predictions using a neural network architecture. Further, this paper presents a novel framework for using physics-based loss functions in the learning objective of neural networks, to ensure that the model predictions not only show lower errors on the training set but are also scientifically consistent with the known physics on the unlabeled set. We illustrate the effectiveness of PGNN for the problem of lake temperature modeling, where physical relationships between the temperature, density, and depth of water are used to design a physics-based loss function. By using scientific knowledge to guide the construction and learning of neural networks, we are able to show that the proposed framework ensures better generalizability as well as scientific consistency of results.
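A physics-based loss of the kind described — penalizing temperature predictions whose implied water density decreases with depth, since a stable lake is denser toward the bottom — can be sketched as follows. The temperature-density formula is a standard empirical relation from the limnology literature, and the profile layout (predictions ordered surface to bottom) is an assumption of this sketch:

```python
import numpy as np

def density(temp):
    """Empirical water density (kg/m^3) as a function of temperature (deg C);
    maximal near 4 deg C."""
    return 1000.0 * (1.0 - (temp + 288.9414) * (temp - 3.9863) ** 2
                     / (508929.2 * (temp + 68.12963)))

def physics_loss(pred_temp_by_depth):
    """Average violation of the density-depth constraint: density implied by
    the predicted temperatures must be non-decreasing from surface to bottom."""
    rho = density(pred_temp_by_depth)
    return np.maximum(0.0, rho[:-1] - rho[1:]).mean()  # ReLU of inversions

def pgnn_loss(y_true, y_pred, pred_temp_by_depth, lam=1.0):
    """Hybrid objective: empirical error plus the physics-based penalty,
    which needs no labels and can be evaluated on unlabeled depth profiles."""
    return np.mean((y_true - y_pred) ** 2) + lam * physics_loss(pred_temp_by_depth)

cold_down = np.array([25.0, 15.0, 8.0, 5.0])  # surface -> bottom: density rises
inverted = cold_down[::-1]                    # physically implausible profile
```

On the plausible profile the penalty vanishes, while the inverted profile is charged a positive cost even without any labels — the mechanism by which the physics term enforces consistency on the unlabeled set.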

Doubly Semi-Implicit Variational Inference (DSIVI)
We extend the existing framework of semi-implicit variational inference (SIVI) and introduce doubly semi-implicit variational inference (DSIVI), a way to perform variational inference and learning when both the approximate posterior and the prior distribution are semi-implicit. In other words, DSIVI performs inference in models where the prior and the posterior can be expressed as an intractable infinite mixture of some analytic density with a highly flexible implicit mixing distribution. We provide a sandwich bound on the evidence lower bound (ELBO) objective that can be made arbitrarily tight. Unlike discriminator-based and kernel-based approaches to implicit variational inference, DSIVI optimizes a proper lower bound on the ELBO that is asymptotically exact. We evaluate DSIVI on a set of problems that benefit from implicit priors. In particular, we show that DSIVI gives rise to a simple modification of VampPrior, the current state-of-the-art prior for variational autoencoders, which improves its performance.
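The mechanism behind such tightening bounds can be illustrated on a toy semi-implicit distribution with an analytic marginal: averaging the conditional density over K samples of the mixing variable gives, inside the log, an estimator that in expectation lower-bounds log q(z) (by Jensen's inequality) and tightens as K grows. This is a sketch of the bounding idea under a toy Gaussian mixing choice, not the paper's construction or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(z, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (z - mu) ** 2 / var)

def logq_surrogate(z, K, sigma2=0.25):
    """Toy semi-implicit density q(z) = integral of N(z; psi, sigma^2) N(psi; 0, 1) dpsi.
    Returns log( (1/K) sum_k q(z | psi_k) ) with psi_k ~ q(psi): by Jensen,
    its expectation lower-bounds log q(z), with the gap shrinking as K grows."""
    psi = rng.standard_normal(K)
    return np.logaddexp.reduce(log_normal(z, psi, sigma2)) - np.log(K)

z = 0.7
true_logq = log_normal(z, 0.0, 1.0 + 0.25)  # analytic marginal: N(0, 1 + sigma^2)
est = {K: np.mean([logq_surrogate(z, K) for _ in range(2000)])
       for K in (1, 100)}                   # averaged surrogate at small/large K
```

With K = 1 the surrogate sits well below the true log-density, while at K = 100 it is nearly tight — the same monotone-in-K behavior that lets the sandwich bounds be made arbitrarily tight.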