Log-Likelihood
For many applications, the natural logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques. Finding the maximum of a function often involves taking its derivative and solving for the parameter being maximized, and this is often easier when the function being maximized is a log-likelihood rather than the original likelihood function. For example, some likelihood functions are for the parameters that explain a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions. The logarithm of this product is a sum of individual logarithms, and the derivative of a sum of terms is often easier to compute than the derivative of a product. In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, which is again easier to differentiate than the original function. In phylogenetics the log-likelihood ratio is sometimes termed support and the log-likelihood function the support function. However, given the potential for confusion with the mathematical meaning of ‘support’, this terminology is rarely used outside the field.
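A brief worked example (illustrative only; the exponential model below is not part of the entry above): for n independent observations from an exponential distribution with rate λ, the likelihood is a product, while its logarithm is a sum whose derivative is simple to set to zero.

```latex
% Illustrative sketch: MLE for n i.i.d. exponential observations x_1, ..., x_n
L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i}
\qquad\Longrightarrow\qquad
\ell(\lambda) = \log L(\lambda) = n \log \lambda - \lambda \sum_{i=1}^{n} x_i

\frac{d\ell}{d\lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0
\qquad\Longrightarrow\qquad
\hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i}
```

Differentiating the product form directly would require the product rule across all n factors; taking the logarithm turns it into a single sum.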
➚ “Likelihood Function” …
Event Sourcing (ES)
An architectural pattern which ensures that your entities (as per Eric Evans’ definition) do not track their internal state by means of direct serialization or O/R mapping, but by means of reading and committing events to an event store. Where ES is combined with CQRS and DDD, aggregate roots are responsible for thoroughly validating and applying commands (often by having their instance methods invoked from a Command Handler), and then publishing a single event or a set of events, which are also the foundation upon which the aggregate roots base their logic for dealing with method invocations. Hence, the input is a command and the output is one or many events, which are transactionally (in a single commit) saved to an event store and then often published on a message broker for the benefit of those interested (often the views are interested; they are then queried using Query-messages). When modeling your aggregate roots to output events, you can isolate their internal state even further than would be possible when projecting read-data from your entities, as is done in standard n-tier data-passing architectures. One significant benefit of this is that tooling such as axiomatic theorem provers (e.g. Microsoft Contracts or CHESS) is easier to apply, because the aggregate root comprehensively hides its internal state. Events are often persisted based on the version of the aggregate root instance, which yields a domain model that synchronizes in distributed systems around the concept of optimistic concurrency. …
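A minimal sketch of the pattern (hypothetical names and a toy in-memory event store, not any particular framework): the aggregate root validates commands and emits events, its state changes only by applying events, and a fresh instance can be rebuilt by replaying the store.

```python
# Minimal illustrative sketch of event sourcing (hypothetical names, not a
# specific framework): commands are validated and turned into events; state
# transitions happen only by applying events; replay rebuilds the aggregate.
from dataclasses import dataclass
from typing import List


@dataclass
class FundsDeposited:          # event
    amount: int


@dataclass
class FundsWithdrawn:          # event
    amount: int


class BankAccount:             # aggregate root
    def __init__(self) -> None:
        self.balance = 0
        self.version = 0       # used for optimistic concurrency in a real store

    # Command handlers: validate, then return the resulting events.
    def deposit(self, amount: int) -> List[object]:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        return [FundsDeposited(amount)]

    def withdraw(self, amount: int) -> List[object]:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        return [FundsWithdrawn(amount)]

    # State is driven only by events, never by commands directly.
    def apply(self, event: object) -> None:
        if isinstance(event, FundsDeposited):
            self.balance += event.amount
        elif isinstance(event, FundsWithdrawn):
            self.balance -= event.amount
        self.version += 1


class EventStore:              # append-only log (in memory for this sketch)
    def __init__(self) -> None:
        self.events: List[object] = []

    def commit(self, new_events: List[object]) -> None:
        self.events.extend(new_events)      # stands in for a single transactional append

    def replay(self, aggregate: BankAccount) -> None:
        for event in self.events:
            aggregate.apply(event)


if __name__ == "__main__":
    store = EventStore()
    account = BankAccount()
    for event in account.deposit(100):
        account.apply(event)
        store.commit([event])
    # A fresh instance recovers the same state purely from the event log.
    rebuilt = BankAccount()
    store.replay(rebuilt)
    print(rebuilt.balance, rebuilt.version)   # 100 1
```

In a production setting the commit would also carry the expected aggregate version, and the committed events would typically be published to a message broker for views to consume, as described above.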
Cartesian Neural Network Constitutive Model (CaNNCM)
Elasticity images map biomechanical properties of soft tissues to aid in the detection and diagnosis of pathological states. In particular, quasi-static ultrasonic (US) elastography techniques use force-displacement measurements acquired during a US scan to parameterize the spatio-temporal stress-strain behavior. Current methods use a model-based inverse approach to estimate the parameters associated with a chosen constitutive model. However, model-based methods rely on simplifying assumptions of tissue biomechanical properties, often limiting elastography to imaging one or two linear-elastic parameters. We previously described a data-driven method for building neural network constitutive models (NNCMs) that learn stress-strain relationships from force-displacement data. Using measurements acquired on gelatin phantoms, we demonstrated the ability of NNCMs to characterize linear-elastic mechanical properties without an initial model assumption and thus circumvent the mathematical constraints typically encountered in classic model-based approaches to the inverse problem. While successful, we were required to use a priori knowledge of the internal object shape to define the spatial distribution of regions exhibiting different material properties. Here, we introduce Cartesian neural network constitutive models (CaNNCMs) that are capable of using data to model both linear-elastic mechanical properties and their distribution in space. We demonstrate the ability of CaNNCMs to capture arbitrary material property distributions using stress-strain data from simulated phantoms. Furthermore, we show that a trained CaNNCM can be used to reconstruct a Young’s modulus image. CaNNCMs are an important step toward data-driven modeling and imaging the complex mechanical properties of soft tissues. …
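As a rough illustration of the idea (a hypothetical, much-simplified sketch, not the authors' implementation): a small fully connected network can take Cartesian coordinates together with strain as input and predict stress, so that both the constitutive behavior and its spatial distribution are learned from data; the synthetic stiffness field and all sizes below are made up.

```python
# Hypothetical, simplified sketch of a CaNNCM-style model (not the authors'
# code): a small neural network maps (x, y, strain) -> stress, so the spatial
# variation of stiffness is learned from data rather than assumed up front.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "phantom": stress = E(x, y) * strain with a smoothly varying
# Young's-modulus-like field, standing in for stress-strain pairs derived
# from force-displacement measurements.
n = 4000
xy = rng.uniform(0.0, 1.0, (n, 2))
strain = rng.uniform(0.0, 0.05, (n, 1))
E_true = 10.0 + 5.0 * np.sin(np.pi * xy[:, :1]) * np.sin(np.pi * xy[:, 1:2])
stress = E_true * strain

X = np.hstack([xy, strain])                  # network input: (x, y, strain)
t = stress                                   # network target: stress

# One-hidden-layer MLP trained by plain gradient descent on mean squared error.
h = 32
W1 = rng.normal(0.0, 0.3, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.3, (h, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    a = np.tanh(X @ W1 + b1)                 # hidden activations
    y = a @ W2 + b2                          # predicted stress
    dy = 2.0 * (y - t) / n                   # dLoss/dy for MSE
    dW2 = a.T @ dy;  db2 = dy.sum(0)
    da = dy @ W2.T
    dz = da * (1.0 - a ** 2)                 # tanh derivative
    dW1 = X.T @ dz;  db1 = dz.sum(0)
    W1 -= lr * dW1;  b1 -= lr * db1
    W2 -= lr * dW2;  b2 -= lr * db2

# Crude "Young's modulus image": evaluate predicted stress / strain at a fixed
# small strain over a grid of Cartesian coordinates.
eps = 0.01
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.stack([gx.ravel(), gy.ravel(), np.full(gx.size, eps)], axis=1)
E_image = (np.tanh(grid @ W1 + b1) @ W2 + b2).ravel() / eps
print(E_image.reshape(20, 20).round(1))
```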
PanopticFusion
We propose PanopticFusion, a novel online volumetric semantic mapping system at the level of stuff and things. In contrast to previous semantic mapping systems, PanopticFusion is able to densely predict class labels of a background region (stuff) and individually segment arbitrary foreground objects (things). In addition, our system can reconstruct a large-scale scene and extract a labeled mesh thanks to its use of a spatially hashed volumetric map representation. Our system first predicts pixel-wise panoptic labels (class labels for stuff regions and instance IDs for thing regions) for incoming RGB frames by fusing 2D semantic and instance segmentation outputs. The predicted panoptic labels are integrated into the volumetric map together with depth measurements, while instance IDs, which can vary from frame to frame, are kept consistent by referring to the 3D map at that moment. In addition, we construct a fully connected conditional random field (CRF) model with respect to panoptic labels for map regularization. For online CRF inference, we propose a novel unary potential approximation and a map division strategy. We evaluated the performance of our system on the ScanNet (v2) dataset. PanopticFusion outperformed or was comparable to state-of-the-art offline 3D DNN methods in both semantic and instance segmentation benchmarks. We also demonstrate a promising augmented reality application using a 3D panoptic map generated by the proposed system. …
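As a rough, hypothetical illustration of the 2D fusion step described above (the class IDs, the stuff set, and the instance-ID offset are all made-up values, not the paper's): stuff pixels keep their semantic class label, while thing pixels are overwritten with instance IDs.

```python
# Illustrative sketch (hypothetical, simplified): fusing per-pixel semantic
# labels with instance masks into panoptic labels, roughly the 2D front end
# described for PanopticFusion (stuff pixels keep their class label, thing
# pixels get an instance ID).
import numpy as np

STUFF_CLASSES = {0, 1, 2}        # e.g. wall, floor, ceiling (assumed IDs)

def fuse_panoptic(semantic, instance_masks, instance_ids):
    """semantic: (H, W) class labels; instance_masks: list of (H, W) bool arrays."""
    # Thing pixels with no instance mask stay -1 (unresolved in this sketch).
    panoptic = np.where(np.isin(semantic, list(STUFF_CLASSES)), semantic, -1)
    for mask, inst_id in zip(instance_masks, instance_ids):
        panoptic[mask] = inst_id          # thing regions overwritten with instance IDs
    return panoptic

# Toy frame: 4x4 image, semantic says class 0 (stuff) everywhere except a
# 2x2 "chair" region of class 5; one instance mask covers that region.
semantic = np.zeros((4, 4), dtype=int)
semantic[1:3, 1:3] = 5
mask = semantic == 5
print(fuse_panoptic(semantic, [mask], [1000]))   # instance IDs offset from class IDs
```

In the full system, the per-frame instance IDs would additionally be matched against the current 3D map so that the same object keeps the same ID across frames.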