*Paper*:

**Bayesian analysis of longitudinal studies with treatment by indication**

It is often of interest in observational studies to measure the causal effect of a treatment on time-to-event outcomes. In a medical setting, observational studies commonly involve patients who initiate medication therapy and others who do not, and the goal is to infer the effect of medication therapy on time until recovery, a pre-defined level of improvement, or some other time-to-event outcome. A difficulty with such studies is that the notion of a medication initiation time does not exist in the control group. We propose an approach for inferring causal effects of an intervention in longitudinal observational studies when the time of treatment assignment is observed only for treated units and treatment is given by indication. We present a framework for conceptualizing an underlying randomized experiment in this setting, based on separating the process that governs the time of study arm assignment from the mechanism that determines the assignment. Our approach infers the missing times of assignment and then estimates treatment effects. It thereby incorporates uncertainty about the missing times of study arm assignment, which induces uncertainty in both the selection of the control group and the measurement of time-to-event outcomes for these controls. We demonstrate our approach by studying the effects on mortality of inappropriately prescribing phosphodiesterase type 5 inhibitors (PDE5Is), a medication contraindicated for groups 2 and 3 pulmonary hypertension, using administrative data from the Veterans Affairs (VA) health care system.
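
The "infer the missing assignment times, then estimate effects" step can be caricatured with a simple multiple-imputation sketch. The toy data, the resampling imputation model, and the crude effect summary below are all invented for illustration and stand in for the paper's actual Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: times in months from study entry.
treat_start = np.array([2.0, 3.5, 5.0, 1.0])    # observed initiation times (treated)
treat_event = np.array([10.0, 12.0, 9.0, 8.0])  # event times (treated)
ctrl_event  = np.array([11.0, 7.0, 14.0])       # event times (controls, no start time)

def impute_effect(n_draws=2000):
    """Multiply impute the missing 'assignment' times for controls by
    resampling from the empirical treated initiation-time distribution,
    then average (over draws) a crude difference in mean time from
    assignment to event.  Averaging over draws propagates the uncertainty
    induced by the unknown assignment times."""
    effects = []
    for _ in range(n_draws):
        imputed = rng.choice(treat_start, size=ctrl_event.size, replace=True)
        keep = imputed < ctrl_event          # controls must still be at risk
        if not keep.any():
            continue
        t_treated = (treat_event - treat_start).mean()
        t_control = (ctrl_event[keep] - imputed[keep]).mean()
        effects.append(t_treated - t_control)
    return float(np.mean(effects)), float(np.std(effects))

mean_eff, sd_eff = impute_effect()
```

The spread `sd_eff` across imputations is the extra uncertainty contributed by the missing assignment times, on top of ordinary sampling variability.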

*Paper*:

We develop a causal inference approach to estimate the number of adverse health events prevented by large-scale air quality regulations via changes in exposure to multiple pollutants. This approach is motivated by regulations that impact pollution levels in all areas within their purview. We introduce a causal estimand called the Total Events Avoided (TEA) by the regulation, defined as the difference in the expected number of health events under the no-regulation pollution exposures and the observed number of health events under the with-regulation pollution exposures. We propose a matching method and a machine learning method that leverage high-resolution, population-level pollution and health data to estimate the TEA. Our approach improves upon traditional methods for regulation health impact analyses by clarifying the causal identifying assumptions, utilizing population-level data, minimizing parametric assumptions, and considering the impacts of multiple pollutants simultaneously. To reduce model-dependence, the TEA estimate captures health impacts only for units in the data whose anticipated no-regulation features are within the support of the observed with-regulation data, thereby providing a conservative but data-driven assessment to complement traditional parametric approaches. We apply these methods to investigate the health impacts of the 1990 Clean Air Act Amendments in the US Medicare population.
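
A minimal sketch of the TEA estimand with a crude matching step follows. The data, the single-pollutant exposure, and the 1-nearest-neighbour matching are invented simplifications of the paper's multi-pollutant methods; only the structure of the estimand (expected no-regulation events minus observed events, restricted to the observed support) is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unit-level data: pollution exposure and observed event counts.
with_reg_exposure = rng.normal(8.0, 1.0, size=500)  # observed (with regulation)
no_reg_exposure   = with_reg_exposure + 2.0         # anticipated counterfactual
observed_events   = rng.poisson(0.5 + 0.1 * with_reg_exposure)

def estimate_tea(no_reg_x, with_reg_x, events):
    """Estimate Total Events Avoided: predict each unit's event count at its
    no-regulation exposure from the most similar observed unit (1-NN on
    exposure), restricted to the support of the observed data, then subtract
    the observed counts."""
    lo, hi = with_reg_x.min(), with_reg_x.max()
    in_support = (no_reg_x >= lo) & (no_reg_x <= hi)  # trim off-support units
    expected = np.empty(in_support.sum())
    for i, x in enumerate(no_reg_x[in_support]):
        nearest = np.abs(with_reg_x - x).argmin()     # crude matching step
        expected[i] = events[nearest]
    return float(expected.sum() - events[in_support].sum())

tea = estimate_tea(no_reg_exposure, with_reg_exposure, observed_events)
```

Trimming to the observed support is what makes the estimate conservative: units whose counterfactual exposures have no observed analogue contribute nothing rather than an extrapolated guess.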

*Paper*:

**Non-Causal Tracking by Deblatting**

Tracking by Deblatting solves an inverse problem of deblurring and image matting to track motion-blurred objects. We propose non-causal Tracking by Deblatting, which estimates continuous, complete, and accurate object trajectories. Energy minimization by dynamic programming detects abrupt changes of motion, called bounces. High-order polynomials are then fitted to segments, the parts of the trajectory separated by bounces. The output is a continuous trajectory function that assigns a location to every real-valued time stamp from zero to the number of frames. We further show that precise physical quantities can be computed from the trajectory function, such as object radius, gravitational acceleration, or sub-frame object velocity. Velocity estimates are compared against high-speed camera and radar measurements. Results show high performance of the proposed method in terms of Trajectory-IoU, recall, and velocity estimation.
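
The segment-wise polynomial fit and the derived velocity can be sketched as follows, assuming the bounce times have already been detected (the dynamic-programming bounce detection itself is omitted, and the example trajectory is synthetic):

```python
import numpy as np

# Hypothetical 1-D trajectory: two linear segments separated by a bounce at t=3.
t = np.linspace(0.0, 6.0, 61)
x = np.where(t < 3.0, 2.0 * t, 6.0 - 1.5 * (t - 3.0))

def fit_piecewise(t, x, bounces, deg=3):
    """Fit one polynomial per segment between detected bounces and return a
    continuous trajectory function (and its derivative, the velocity) defined
    for every real-valued time stamp in [t[0], t[-1]]."""
    edges = [t[0], *bounces, t[-1]]
    polys = []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (t >= a) & (t <= b)
        polys.append((a, b, np.polynomial.Polynomial.fit(t[m], x[m], deg)))

    def evaluate(ts, deriv=False):
        ts = np.atleast_1d(np.asarray(ts, dtype=float))
        out = np.empty_like(ts)
        for a, b, p in polys:
            m = (ts >= a) & (ts <= b)
            out[m] = (p.deriv() if deriv else p)(ts[m])
        return out

    return (lambda ts: evaluate(ts)), (lambda ts: evaluate(ts, deriv=True))

traj, vel = fit_piecewise(t, x, bounces=[3.0])
```

Because the fitted function is polynomial on each segment, sub-frame velocity is just the analytic derivative, with no finite-difference noise.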

*Paper*:

**On the Separability of Classes with the Cross-Entropy Loss Function**

In this paper, we study the separability of classes under the cross-entropy loss for classification problems by theoretically analyzing the intra-class and inter-class distances (the distances between two points of the same class and of different classes, respectively) in the feature space, i.e. the space of representations learnt by neural networks. Specifically, we consider an arbitrary network architecture having a fully connected final layer with Softmax activation, trained using the cross-entropy loss. We derive expressions for the value and the distribution of the squared L2 norm of the product of a network-dependent matrix and a random intra-class or inter-class distance vector in the learnt feature space just before the Softmax activation, as a function of the cross-entropy loss value. The main result of our analysis is a lower bound on the probability that the inter-class distance exceeds the intra-class distance in this feature space, again as a function of the loss value. We obtain it by combining mild empirical statistical observations with sound theoretical analysis. As intuition suggests, this probability decreases as the loss value increases, i.e. the classes are better separated when the loss is low. To the best of our knowledge, this is the first theoretical work explaining the separability of classes in the feature space learnt by neural networks trained with the cross-entropy loss function.
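
The quantity being bounded can be illustrated with a Monte-Carlo estimate of P(inter-class distance > intra-class distance) on synthetic two-class features. The Gaussian features and pair-sampling scheme here are invented for illustration; the paper derives an analytic lower bound on this probability as a function of the loss value:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-class, 8-dimensional pre-Softmax feature vectors.
feats = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
                   rng.normal(4.0, 1.0, (100, 8))])
labels = np.repeat([0, 1], 100)

def separability_probability(feats, labels, n_pairs=2000):
    """Monte-Carlo estimate of P(inter-class distance > intra-class distance):
    sample random same-class and different-class pairs and compare their
    squared L2 distances in feature space."""
    idx = np.arange(len(feats))
    wins = 0
    for _ in range(n_pairs):
        i, j = rng.choice(idx[labels == 0], 2, replace=False)  # intra pair
        k = rng.choice(idx[labels == 0])
        l = rng.choice(idx[labels == 1])                       # inter pair
        intra = np.sum((feats[i] - feats[j]) ** 2)
        inter = np.sum((feats[k] - feats[l]) ** 2)
        wins += inter > intra
    return wins / n_pairs

p = separability_probability(feats, labels)
```

For these well-separated synthetic classes the estimate sits near 1; the paper's result is a loss-dependent lower bound on this probability for features learnt by an actual network.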

*Article*:

**The Chicken or the Egg? Experiments Can Help Determine Two-Way Causality**

How can we answer the centuries-old question: which came first, the chicken or the egg? Did the chicken come first and then hatch the egg, or was it the other way around? The question is particularly challenging because it concerns the origin and the direction of causality, a central problem of causal inference in social science.

*Paper*:

**Dynamic causal modelling of phase-amplitude interactions**

Models of coupled oscillators are used to describe a wide variety of phenomena in neuroimaging. These models typically rest on the premise that oscillator dynamics do not evolve beyond their respective limit cycles, and hence that interactions can be described purely in terms of phase differences. Whilst mathematically convenient, the restrictive nature of phase-only models can limit their explanatory power. We therefore propose a generalisation of dynamic causal modelling that incorporates both phase and amplitude, allowing the separate quantification of phase and amplitude contributions to the connectivity between neural regions. We establish, using model-generated data and simulations of coupled pendula, that phase-only models perform well only under weak coupling conditions. We also show that, despite their higher complexity, phase-amplitude models describe strongly coupled systems more effectively than their phase-only counterparts. We relate our findings to four metrics commonly used in neuroimaging: the Kuramoto order parameter, cross-correlation, the phase-lag index, and spectral entropy. We find that, with the exception of spectral entropy, the phase-amplitude model captures all of these metrics more effectively than the phase-only model. Finally, using local field potential recordings in rodents and functional magnetic resonance imaging in macaque monkeys, we demonstrate that amplitudes in oscillator models play an important role in describing neural dynamics in anaesthetised brain states.
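
Of the four metrics, the Kuramoto order parameter has a particularly compact definition, sketched here (the example phase vectors are illustrative):

```python
import numpy as np

def kuramoto_order(phases):
    """Kuramoto order parameter R in [0, 1]: the magnitude of the mean unit
    phasor across oscillators.  R = 1 means full phase synchrony; R near 0
    means the phases are spread around the circle."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Perfectly synchronised phases vs phases spread uniformly on the circle.
sync = np.full(8, 0.7)
spread = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
r_sync, r_spread = kuramoto_order(sync), kuramoto_order(spread)
```

Note R depends only on phases, which is exactly why a phase-amplitude model is needed to capture the amplitude effects the paper describes.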

*Paper*:

**Quantifying Causation**

I introduce an information-theoretic measure of causation that captures how much a quantum system influences the evolution of another system. The measure discriminates among different causal relations that generate the same observed data, with no information about the quantum channel. In particular, it determines when correlation implies causation, and when causation manifests without correlation. In the classical scenario, the quantity evaluates the strength of causal links between random variables. The measure also generalises to identify and rank concurrent sources of causal influence in many-body dynamics, enabling the reconstruction of causal patterns in complex networks.
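
In the classical scenario, one simple proxy for the strength of a causal link X → Y is the mutual information between a uniformly intervened-on input and the channel output. The sketch below is only this classical illustration, not the paper's quantum measure:

```python
import numpy as np

def intervention_mutual_info(channel):
    """Mutual information I(X;Y) in bits when X is set by a uniform
    intervention and Y is produced through the conditional distribution
    p(y|x) (rows index x, columns index y).  Zero iff the channel output
    ignores the intervened input."""
    px = np.full(channel.shape[0], 1.0 / channel.shape[0])
    joint = px[:, None] * channel              # p(x, y) under do(X ~ uniform)
    py = joint.sum(axis=0)                     # induced output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px[:, None] * py[None, :]))
    return float(np.nansum(terms))             # 0 * log 0 terms contribute 0

identity = np.eye(2)             # Y copies X: maximal causal influence (1 bit)
noise = np.full((2, 2), 0.5)     # Y independent of X: no causal influence
```

Intervening (rather than conditioning) is what distinguishes causal influence from mere correlation, which is the distinction the paper's measure formalises in the quantum setting.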

*Paper*:

**A Rule-Based System for Explainable Donor-Patient Matching in Liver Transplantation**

In this paper we present web-liver, a rule-based system for decision support in the medical domain, focusing on its application in a liver transplantation unit for implementing donor-patient matching policies. The system is built on top of lppf, an interpreter for logic programs with partial functions that extends the paradigm of Answer Set Programming (ASP) with two main features: (1) the inclusion of partial functions and (2) the computation of causal explanations for the obtained solutions. The final goal of web-liver is to assist medical experts in designing new donor-patient matching policies that take into account not only patient severity but also transplantation utility. As an example, we illustrate the tool's behaviour with a set of rules implementing the utility index called SOFT.
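
A plain-Python caricature of explainable rule-based matching follows. The rules, fields, and scoring here are invented for illustration; the actual system encodes such policies in lppf/ASP, where the causal explanations are computed by the solver itself:

```python
# Each rule is (name, predicate, reason).  A pair matches only if every rule
# fires, and the collected reasons form the explanation for the match.
RULES = [
    ("blood_compatible", lambda d, p: d["blood"] == p["blood"],
     "donor and patient blood groups match"),
    ("size_ok", lambda d, p: abs(d["weight"] - p["weight"]) <= 20,
     "donor/patient weight difference within 20 kg"),
]

def match(donor, patients):
    """Return (patient_id, score, explanation) triples for every compatible
    patient, ranked by a toy severity-plus-utility score."""
    results = []
    for p in patients:
        reasons = []
        for name, rule, why in RULES:
            if not rule(donor, p):
                break                         # rule failed: pair rejected
            reasons.append(f"{name}: {why}")
        else:                                 # all rules fired
            score = p["severity"] + p["utility"]   # toy stand-in for SOFT
            results.append((p["id"], score, reasons))
    return sorted(results, key=lambda r: -r[1])

donor = {"blood": "A", "weight": 70}
patients = [
    {"id": "p1", "blood": "A", "weight": 75, "severity": 30, "utility": 10},
    {"id": "p2", "blood": "B", "weight": 70, "severity": 40, "utility": 5},
]
ranked = match(donor, patients)
```

Carrying the fired rules alongside each match is the essence of the explainability the system provides: the ranking can always answer "why this patient?".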