Dynamic Chain Event Graph (DCEG) google
Chain Event Graphs (CEGs) are a useful class of tree-based graphical models designed especially to capture context-specific conditional independences. However, it has become increasingly apparent that in many contexts explicitly modelling how variables change over time provides better results. Only very recently has such a dynamic CEG been defined in the literature. In this paper we introduce a dynamic version of CEGs to model longitudinal discrete processes, observed over discrete time intervals, that have highly asymmetric developments and context-specific structures. We discuss some limitations of the approach that directly extends the semantics of CEGs to dynamic settings. Finally, an example of a multivariate process describing the dynamic radicalisation of inmates in a prison is used to show how to reason with a dynamic CEG.
An N Time-Slice Dynamic Chain Event Graph
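To give a feel for the core CEG idea the abstract builds on, here is a minimal sketch of how vertices of an event tree that share the same conditional transition distribution can be grouped into stages, which a CEG then merges into positions. The toy tree, its labels and probabilities are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the staging step behind a Chain Event Graph.
# The event tree, labels and probabilities below are invented for illustration.
from collections import defaultdict

# Event tree edges: parent -> list of (edge_label, child, probability)
event_tree = {
    "root": [("risk_low", "v1", 0.7), ("risk_high", "v2", 0.3)],
    "v1":   [("no_event", "leaf1", 0.9), ("event", "leaf2", 0.1)],
    "v2":   [("no_event", "leaf3", 0.9), ("event", "leaf4", 0.1)],
}

def stage_key(vertex):
    """Two vertices belong to the same stage if their outgoing edges carry
    the same labels with the same probabilities."""
    return tuple(sorted((label, p) for label, _, p in event_tree.get(vertex, [])))

# Group internal vertices into stages; in a CEG, stages whose entire
# downstream behaviour also coincides are merged into a single position.
stages = defaultdict(list)
for v in event_tree:
    stages[stage_key(v)].append(v)

for key, members in stages.items():
    print(f"stage {key}: vertices {members}")
# Here v1 and v2 fall into one stage, encoding the context-specific
# independence "event risk does not depend on the initial risk class".
```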


Learning through Probing (LTP) google
Multi-agent reinforcement learning has received significant interest in recent years, notably due to advancements in deep reinforcement learning that have allowed for the development of new architectures and learning algorithms. Using social dilemmas as the training ground, we present a novel learning architecture, Learning through Probing (LTP), in which agents use a probing mechanism to incorporate how their opponent’s behavior changes when an agent takes an action. We use distinct training phases and adjust rewards according to the overall outcome of the experiences, accounting for changes to the opponent’s behavior. We introduce a parameter, eta, to determine the significance of these future changes to opponent behavior. When applied to the Iterated Prisoner’s Dilemma (IPD), LTP agents demonstrate that they can learn to cooperate with each other, achieving higher average cumulative rewards than other reinforcement learning methods while also maintaining good performance against the static agents present in Axelrod tournaments. We compare this method with traditional reinforcement learning algorithms and agent-tracking techniques to highlight key differences and potential applications. We also draw attention to the differences between solving games and societal-like interactions, and analyze the training of Q-learning agents in makeshift societies. This is to emphasize how cooperation may emerge in societies, which we demonstrate using environments where interactions with opponents are determined through a random-encounter format of the IPD. …
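The following sketch illustrates the general idea of blending an immediate IPD payoff with an eta-weighted estimate of how the opponent's behavior shifts after an action. It is not the authors' LTP code; the probing routine, the reward-blending formula and the toy opponent are all assumptions made for illustration.

```python
# Illustrative sketch (not the LTP authors' implementation) of an
# eta-weighted reward adjustment in the Iterated Prisoner's Dilemma.
import random

PAYOFF = {  # (my_move, opp_move) -> my reward, standard IPD payoffs
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def probe_opponent(opponent_policy, my_action, n_probes=20):
    """Estimate the opponent's cooperation rate after it observes my_action."""
    coop = sum(opponent_policy(my_action) == "C" for _ in range(n_probes))
    return coop / n_probes

def adjusted_reward(my_action, opp_action, opponent_policy, eta=0.5):
    """Immediate payoff blended with the eta-weighted value of the opponent's
    induced future cooperativeness (a stand-in for LTP's outcome-based
    adjustment; the blending rule here is an assumption)."""
    immediate = PAYOFF[(my_action, opp_action)]
    future_coop = probe_opponent(opponent_policy, my_action)
    return (1 - eta) * immediate + eta * 3 * future_coop  # 3 = mutual-cooperation payoff

# Toy tit-for-tat-like opponent: more likely to cooperate after seeing "C".
def tit_for_tat_ish(last_seen):
    return "C" if random.random() < (0.9 if last_seen == "C" else 0.1) else "D"

print(adjusted_reward("C", "C", tit_for_tat_ish, eta=0.5))
print(adjusted_reward("D", "C", tit_for_tat_ish, eta=0.5))
```

With a large eta, defection against a retaliatory opponent scores poorly because the probed future cooperation rate collapses, which is the intuition behind LTP agents learning to cooperate.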

Stochastic Approximation of Expectation Maximization (SAEM) google
The SAEM algorithm:
– computes the maximum likelihood estimator of the population parameters, without any approximation of the model (linearisation, quadrature approximation, …), using the Stochastic Approximation Expectation Maximization (SAEM) algorithm;
– provides standard errors for the maximum likelihood estimator;
– estimates the conditional modes, the conditional means and the conditional standard deviations of the individual parameters, using the Hastings-Metropolis algorithm.
Several applications of SAEM in agronomy, animal breeding and PKPD analysis have been published by members of the Monolix group (http://group.monolix.org). …
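As a rough illustration of the simulation, stochastic-approximation and maximization steps described above, here is a minimal SAEM sketch for a toy one-way random-effects model, estimating only the population mean with the variance components held fixed. The model, step-size schedule and all numbers are assumptions for the example; this is not the Monolix implementation.

```python
# Minimal SAEM sketch for the toy model
#   y_ij = theta_i + eps_ij,  theta_i ~ N(mu, omega^2),  eps_ij ~ N(0, sigma^2),
# estimating only mu (omega and sigma fixed). Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
N, J = 50, 5
omega, sigma, true_mu = 1.0, 0.5, 2.0
theta_true = true_mu + omega * rng.standard_normal(N)
y = theta_true[:, None] + sigma * rng.standard_normal((N, J))

mu = 0.0            # initial guess for the population mean
theta = np.zeros(N) # current Metropolis-Hastings state per individual
s = 0.0             # stochastic approximation of the sufficient statistic sum_i theta_i

for k in range(1, 301):
    # Simulation step: one Metropolis-Hastings sweep over the individual parameters
    def logpost(t):
        return (-0.5 * ((y - t[:, None]) ** 2).sum(axis=1) / sigma**2
                - 0.5 * (t - mu) ** 2 / omega**2)
    prop = theta + 0.3 * rng.standard_normal(N)
    accept = np.log(rng.random(N)) < logpost(prop) - logpost(theta)
    theta = np.where(accept, prop, theta)

    # Stochastic approximation step: s_k = s_{k-1} + gamma_k * (S_k - s_{k-1})
    gamma = 1.0 if k <= 100 else 1.0 / (k - 100)  # a typical SAEM step-size schedule
    s = s + gamma * (theta.sum() - s)

    # Maximization step: update mu from the smoothed sufficient statistic
    mu = s / N

print(f"estimated mu = {mu:.3f} (truth {true_mu})")
```

The key difference from plain Monte Carlo EM is the decreasing step size gamma_k, which averages the simulated sufficient statistics across iterations instead of requiring many simulations per iteration.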