Causal Mediation Analysis (mediation)
We implement parametric and nonparametric mediation analysis. This package implements the methods and suggestions in Imai, Keele and Yamamoto (2010), Imai, Keele and Tingley (2010), Imai, Tingley and Yamamoto (2013), Imai and Yamamoto (2013) and Yamamoto (2013). In addition to the estimation of causal mediation effects, the software also allows researchers to conduct sensitivity analysis for certain parametric models.
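A minimal sketch of the package's workflow, following its vignette: a mediator model and an outcome model are fit separately and then combined with mediate(); the framing dataset ships with the package.

```r
# Hedged sketch based on the mediation package vignette; 'framing'
# and the variable names below come from the package's example data.
library(mediation)
data(framing)

# Mediator model: effect of the treatment on the mediator (emotion)
med.fit <- lm(emo ~ treat + age + educ, data = framing)
# Outcome model: effect of mediator and treatment on the outcome
out.fit <- glm(cong_mesg ~ emo + treat + age + educ,
               data = framing, family = binomial("probit"))

# Combine the two models; sims kept small for illustration
med.out <- mediate(med.fit, out.fit,
                   treat = "treat", mediator = "emo", sims = 100)
summary(med.out)  # reports ACME, ADE, and total effect estimates
```

The sensitivity analysis mentioned above is then available via medsens() on the fitted mediate object.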

Computation of Risk-Based Portfolios (RiskPortfolios)
Collection of functions designed to compute risk-based portfolios as described in Ardia et al. (2016) <doi:10.2139/ssrn.2650644> and Ardia et al. (2017) <doi:10.21105/joss.00171>.
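A short sketch of the main entry point described in Ardia et al. (2017): optimalPortfolio() takes a covariance matrix and a control list selecting the risk-based strategy; the returns below are simulated for illustration.

```r
# Hedged sketch using RiskPortfolios::optimalPortfolio(); the data
# are simulated, not from the package.
library(RiskPortfolios)
set.seed(42)

rets  <- matrix(rnorm(500 * 5, sd = 0.01), ncol = 5)  # 5 simulated assets
Sigma <- cov(rets)

# Long-only minimum-variance portfolio weights
w <- optimalPortfolio(Sigma = Sigma,
                      control = list(type = "minvol", constraint = "lo"))
round(w, 3)  # weights sum to one
```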

Structural Estimators and Algorithms for the Analysis of Stable Matchings (matchingMarkets)
Implements structural estimators to correct for the sample selection bias from observed outcomes in matching markets. Also contains R code for matching algorithms such as the deferred-acceptance algorithm for college admissions, the top-trading-cycles algorithm for house allocation and a partitioning linear program for the roommates problem.

Clustering of Multivariate Binary Data with Dimension Reduction via L1-Regularized Likelihood Maximization (cbird)
Implements CLUSBIRD, a method for clustering multivariate binary data while simultaneously reducing its dimensionality, proposed by Yamamoto and Hayashi (2015) <doi:10.1016/j.patcog.2015.05.026>.

Model Confidence Set Procedure (MCS)
Performs the model confidence set procedure of Hansen et al. (2011).
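A hedged sketch of the procedure's interface: MCSprocedure() expects a matrix of losses with one column per competing model and iteratively eliminates inferior models; the losses below are simulated for illustration.

```r
# Hedged sketch using MCS::MCSprocedure() on simulated loss series.
library(MCS)
set.seed(1)

Loss <- matrix(abs(rnorm(250 * 3)), ncol = 3,
               dimnames = list(NULL, c("m1", "m2", "m3")))
Loss[, 3] <- Loss[, 3] + 0.5  # make model 3 clearly worse

# Bootstrap-based model confidence set at the 15% level
mcs <- MCSprocedure(Loss = Loss, alpha = 0.15, B = 500,
                    statistic = "Tmax")
```

Models surviving the elimination steps form the model confidence set reported by the returned object.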

Natural Language Processing Infrastructure (NLP)
Basic classes and methods for Natural Language Processing.

Probabilistic Feature Analysis of Two-Way Two-Mode Frequencies (plfm)
The package can be used to estimate probabilistic latent feature models with a disjunctive or a conjunctive mapping rule for two-way two-mode frequency data.

Optimistic Optimization in R (OOR)
Implementation of optimistic optimization methods for the global optimization of deterministic or stochastic functions. The algorithms feature guarantees of convergence to a global optimum. They require minimal assumptions on the (only local) smoothness of the function, and the smoothness parameter does not need to be known. They are expected to be useful for the most difficult functions, where no information on smoothness is available and gradients are unknown or do not exist. Owing to these weak assumptions, however, they are mostly effective only in small dimensions, for example for hyperparameter tuning.
