**Probabilistic Event Calculus (PEC)**

We present PEC, an Event Calculus (EC) style action language for reasoning about probabilistic causal and narrative information. It has an action language style syntax similar to that of the EC variant Modular-E. Its semantics is given in terms of possible worlds, which constitute possible evolutions of the domain, and builds on that of EFEC, an epistemic extension of EC. We also describe an ASP implementation of PEC and show the sense in which it is sound and complete. …

**TzK**
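The possible-worlds reading sketched in the PEC entry above can be illustrated with a toy probabilistic narrative: each world is one evolution of the domain, weighted by the probability of the outcomes that produced it, and a query is answered by summing the weights of the worlds in which a fluent holds. Everything here (the coin-toss domain, the fluent, the probabilities) is an assumed example, not PEC's actual syntax or semantics:

```python
# Toy possible-worlds sketch (assumed example, not PEC itself).
from itertools import product

# A coin is tossed at t=1 and again at t=2; the fluent "Heads" holds
# afterwards iff the most recent toss landed heads (P = 0.6 per toss).
P_HEADS = 0.6
worlds = []
for o1, o2 in product(["heads", "tails"], repeat=2):
    # Weight of this world = product of the probabilities of its outcomes.
    p = (P_HEADS if o1 == "heads" else 1 - P_HEADS) * \
        (P_HEADS if o2 == "heads" else 1 - P_HEADS)
    holds_at_3 = (o2 == "heads")  # fluent value after the second toss
    worlds.append((p, holds_at_3))

# Query: probability that the fluent holds at t=3, summed over worlds.
query = sum(p for p, holds in worlds if holds)
print(abs(query - 0.6) < 1e-12)  # True
```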

We introduce TzK (pronounced ‘task’), a conditional flow-based encoder/decoder generative model, formulated in terms of maximum likelihood (ML). TzK offers efficient approximation of arbitrary data-sample distributions (as in GAN and flow-based ML) and stable training (as in VAE and ML), while avoiding variational approximations. TzK exploits meta-data to impose a bottleneck, as autoencoders do, thereby producing a low-dimensional representation; unlike an autoencoder's, this bottleneck does not limit model expressiveness, as in flow-based ML. Supervised, unsupervised, and semi-supervised learning are supported by replacing missing observations with samples from learned priors. We demonstrate TzK by jointly training on MNIST and Omniglot with minimal preprocessing and weak supervision, with results comparable to the state of the art. …

**Perceptor Gradients Algorithm**
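For a concrete sense of the exact-likelihood ("flow-based ML") training that the TzK entry contrasts with variational approximation, here is a minimal one-layer affine flow, where the change-of-variables formula gives the exact log-density. The toy data and closed-form fit are assumptions for illustration, not the paper's model:

```python
# Hedged illustration of exact-likelihood flow training: a single affine
# flow z = (x - mu) / s, with log p(x) = log N(z; 0, 1) - log s.
import math, random

random.seed(0)
data = [random.gauss(2.0, 0.5) for _ in range(10000)]

def log_likelihood(xs, mu, s):
    # Change of variables: base density of z minus the log|det| of the map.
    return sum(-0.5 * ((x - mu) / s) ** 2 - 0.5 * math.log(2 * math.pi)
               - math.log(s) for x in xs)

# For this one-layer affine flow the ML parameters are available in closed
# form: the sample mean and standard deviation.
mu = sum(data) / len(data)
s = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

# Any perturbation of the ML parameters can only lower the exact likelihood.
print(log_likelihood(data, mu, s) > log_likelihood(data, mu + 0.1, s))  # True
```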

We present the perceptor gradients algorithm, a novel approach to learning symbolic representations based on the idea of decomposing an agent’s policy into (i) a perceptor network that extracts symbols from raw observation data and (ii) a task-encoding program that maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be fed directly into a Linear-Quadratic Regulator (LQR) or a general-purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm can efficiently learn transferable symbolic representations, as well as generate new observations according to a semantically meaningful specification. …

**Varchenko Determinant**
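The perceptor/program decomposition described in the perceptor gradients entry can be sketched with REINFORCE through a fixed program: only the perceptor's weights are updated, while the symbol-to-action program stays hand-written. The toy environment, proportional-controller "program", and all constants below are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch (not the paper's implementation): policy-gradient training
# of a linear perceptor, with a fixed symbolic program mapping symbol -> action.
import numpy as np

rng = np.random.default_rng(0)

c = np.array([0.5, 2.0])   # hidden observation encoding: o = s * c
k = 1.0                    # fixed program: proportional controller a = -k * symbol
sigma = 0.2                # exploration noise of the Gaussian policy
lr = 0.01
w = np.zeros(2)            # perceptor parameters: symbol estimate s_hat = w @ o
baseline = 0.0             # running reward baseline for variance reduction

for step in range(5000):
    s = rng.uniform(-1.0, 1.0)          # true latent symbol
    o = s * c                           # raw observation
    s_hat = w @ o                       # perceptor: observation -> symbol
    mu = -k * s_hat                     # program: symbol -> action (not learned)
    a = mu + sigma * rng.standard_normal()
    r = -(a - (-k * s)) ** 2            # reward: match the oracle's action
    # REINFORCE: grad_w log pi(a|o) = ((a - mu) / sigma^2) * d mu / d w
    grad_log_pi = ((a - mu) / sigma**2) * (-k) * o
    w += lr * (r - baseline) * grad_log_pi
    baseline = 0.9 * baseline + 0.1 * r

# The perceptor should now roughly decode the symbol: w @ c ~ 1, so w @ o ~ s.
print(w @ c)
```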

Varchenko introduced a distance function on the chambers of a hyperplane arrangement which gives rise to a determinant, indexed by chambers, whose entry in position $(C,D)$ is the distance between $C$ and $D$, and proved that this determinant has a nice factorization: this is the Varchenko determinant. Recently, Aguiar and Mahajan defined a generalization of that distance function and proved that, for a central hyperplane arrangement, the determinant arising from their distance function also has a nice factorization. We prove that, for any hyperplane arrangement, the determinant arising from the distance function of Aguiar and Mahajan has a nice factorization. We also prove that the same holds for the determinant indexed by the chambers of an apartment. …
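As a small sanity check of the kind of factorization the Varchenko entry describes, consider the classical Varchenko matrix for an arrangement of two points on the real line (three chambers): the $(C,D)$ entry is the product of the weights of the hyperplanes separating $C$ from $D$, and the determinant factors as $(1-x^2)(1-y^2)$. The weights below are arbitrary, and this is only an illustrative special case:

```python
# Exact check of the Varchenko factorization for two points on a line.
from fractions import Fraction

x, y = Fraction(1, 2), Fraction(1, 3)  # arbitrary hyperplane weights

# Chambers are the three intervals; entries are products of weights of
# separating hyperplanes (the middle chamber is separated from each end
# chamber by one point, the two end chambers by both).
V = [[1,     x, x * y],
     [x,     1, y    ],
     [x * y, y, 1    ]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(V) == (1 - x**2) * (1 - y**2))  # True
```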

# If you did not already know

Saturday, 09 Apr 2022
