Ridge Regularized Linear Models (RRLM)
Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. …
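For reference, a minimal sketch of the ridge estimator itself (not the coefficient testing procedure described above): the closed form is beta = (X'X + lambda*I)^{-1} X'y, shown here in plain NumPy with an illustrative penalty value.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge estimator: beta = (X'X + lam*I)^{-1} X'y."""
    n_features = X.shape[1]
    # Solve the regularized normal equations rather than inverting explicitly.
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Illustrative usage on synthetic data (lam chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(ridge_fit(X, y, lam=0.5))  # coefficients shrunk toward zero
```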

Data Shared Adaptive Bootstrap Aggregated (AdaBag)
In this article we propose a new supervised ensemble learning method, the Data Shared Adaptive Bootstrap Aggregated (AdaBag) Lasso, for capturing low-dimensional useful features for word-based sentiment analysis and mining problems. The literature on ensemble methods is very rich in both statistics and machine learning. The algorithm is a substantial upgrade of the Data Shared Lasso uplift algorithm. The most significant conceptual addition to the existing literature lies in the final selection of the bag of predictors through a special bootstrap aggregation scheme. We apply the algorithm to simulated data and perform dimension reduction on grouped IMDb data (drama, comedy and horror) to extract a reduced set of word features for predicting sentiment ratings of movie reviews, demonstrating different aspects of the method. We also compare the performance of the present method with classical Principal Component Analysis with associated Linear Discrimination (PCA-LD) as a baseline. The algorithm has a few limitations. Firstly, its workflow does not incorporate online sequential data acquisition, and it does not use sentence-based models, which are common in ANN algorithms. As a consequence, our results show a slightly higher error rate than the reported state of the art. …
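As a rough illustration of the general bagging-over-lasso idea (not the authors' AdaBag Lasso itself), one can fit a lasso on bootstrap resamples and keep the features selected in a large fraction of bags; the function name, keep_frac threshold, and alpha below are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bagged_lasso_features(X, y, n_bags=50, alpha=0.1, keep_frac=0.6, seed=0):
    """Select features whose lasso coefficient is nonzero in at least
    keep_frac of the bootstrap bags (illustrative scheme, not the paper's)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        model = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx])
        counts += (model.coef_ != 0)
    return np.flatnonzero(counts >= keep_frac * n_bags)

# Illustrative usage on synthetic data with three truly active features.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)
print(bagged_lasso_features(X, y))  # indices of stably selected features
```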

Target Set Selection in a Social Network (TSS Problem)
Given a social network with diffusion probabilities as edge weights and an integer k, which k nodes should be chosen for the initial injection of information so as to maximize influence in the network? This problem is known as Target Set Selection in a social network (TSS Problem) and, more popularly, as the Social Influence Maximization Problem (SIM Problem). …
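A common baseline for this problem (an illustration, not necessarily the method any particular paper proposes) is greedy hill-climbing over Monte-Carlo estimates of expected spread under the independent cascade model; the dict-based graph representation and simulation counts below are assumptions for the sketch.

```python
import random

def simulate_ic(graph, seeds, rng):
    """One independent-cascade run; graph maps node -> [(neighbor, prob), ...].
    Returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph.get(u, []):
            if v not in active and rng.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def greedy_tss(graph, k, n_sims=200, seed=0):
    """Greedy target set selection: repeatedly add the node with the
    largest estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    chosen = []
    for _ in range(k):
        def gain(u):
            return sum(simulate_ic(graph, chosen + [u], rng)
                       for _ in range(n_sims)) / n_sims
        chosen.append(max(nodes - set(chosen), key=gain))
    return chosen

# Tiny illustrative graph: edge weights are diffusion probabilities.
g = {0: [(1, 0.9), (2, 0.9)], 1: [(3, 0.5)], 2: [(3, 0.5)], 3: []}
print(greedy_tss(g, k=1))  # node 0 reaches the most nodes in expectation
```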

Kernel Wasserstein Distance
The Wasserstein distance is a powerful metric based on the theory of optimal transport. It gives a natural measure of the distance between two distributions, with a wide range of applications. In contrast to a number of common divergences on distributions, such as Kullback-Leibler or Jensen-Shannon, it is (weakly) continuous and thus well suited to analyzing corrupted data. To date, however, no kernel methods for dealing with nonlinear data have been proposed via the Wasserstein distance. In this work, we develop a novel method to compute the L2-Wasserstein distance in a kernel space, implemented using the kernel trick, a general method in machine learning for handling data in a nonlinear manner. We evaluate the proposed approach by identifying computerized tomography (CT) slices with dental artifacts in head and neck cancer, performing unsupervised hierarchical clustering on the resulting Wasserstein distance matrix computed on imaging texture features extracted from each CT slice. Our experiments show that the kernel approach outperforms classical non-kernel approaches in identifying CT slices with artifacts. …
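A minimal sketch of the kernelized idea (an illustration, not the authors' implementation): squared feature-space distances follow from the kernel trick, ||phi(x) - phi(y)||^2 = k(x,x) - 2k(x,y) + k(y,y), and for two equally sized uniform empirical samples the exact L2-Wasserstein distance reduces to a minimum-cost assignment over that cost matrix; the RBF kernel and gamma value are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_w2(X, Y, gamma=0.5):
    """L2-Wasserstein distance between two equal-size uniform empirical
    samples, with ground costs measured in the kernel feature space."""
    # Squared feature-space distances via the kernel trick.
    cost = (np.diag(rbf(X, X, gamma))[:, None]
            - 2 * rbf(X, Y, gamma)
            + np.diag(rbf(Y, Y, gamma))[None, :])
    # For uniform weights and equal sample sizes, optimal transport
    # reduces to a minimum-cost assignment problem.
    r, c = linear_sum_assignment(cost)
    return np.sqrt(cost[r, c].mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
Y = rng.standard_normal((50, 4)) + 1.0  # shifted sample
print(kernel_w2(X, Y))
```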