M-PACT
Action classification is a widely studied task that offers a route toward video understanding. The absence of an easy-to-use platform containing state-of-the-art (SOTA) models presents an issue for the community. Given that individual research code is rarely written with an end user in mind, and in some cases code is not released even for published articles, the importance of a common unified platform that can deliver results while removing the burden of building an entire system cannot be overstated. To overcome these issues, we develop a TensorFlow-based unified platform that abstracts away the overhead of end-to-end pipeline setup, allowing users to quickly and easily prototype action classification models. With a consistent coding style across models and seamless data flow between submodules, the platform lends itself to quickly generating results for a wide range of SOTA methods across a variety of datasets. These features are made possible by fully pre-defined training and testing blocks built on top of a small but powerful set of modular functions that handle asynchronous data loading, model initialization, metric calculation, checkpoint saving and loading, and result logging. The platform is geared towards easily creating models: the minimum requirement is the definition of a network architecture and preprocessing steps, drawn from a large custom selection of layers and preprocessing functions. M-PACT currently houses four SOTA activity classification models: I3D, C3D, ResNet50+LSTM, and TSN. The classification performance achieved by these models includes 43.86% for ResNet50+LSTM on HMDB51, while C3D and TSN achieve 93.66% and 85.25% on UCF101, respectively.

Full Normalization (FN)
Batch Normalization (BN) has been used extensively in deep learning to speed up training and obtain better models. However, whether BN works depends strongly on how batches are constructed during training, and it may not converge to a desired solution if the statistics of a batch are not close to the statistics over the whole dataset. In this paper, we seek to understand BN from an optimization perspective by formulating the optimization problem that motivates BN. By analyzing this problem, we show when BN works and when it does not. We then propose a refinement of BN based on compositional optimization techniques, called Full Normalization (FN), to alleviate the issues of BN when batches are not constructed ideally. We provide a convergence analysis for FN and empirically study its effectiveness in refining BN.
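The batch-versus-dataset statistics mismatch described in the abstract can be illustrated in a few lines of NumPy; the mixture data and batch construction below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset mixing two sub-populations with different means.
data = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])

def normalize(x, mean, var, eps=1e-5):
    """The BN transform (learned scale/shift omitted)."""
    return (x - mean) / np.sqrt(var + eps)

full_mean, full_var = data.mean(), data.var()

# Well-constructed batch: drawn i.i.d. from the whole dataset.
iid_batch = rng.choice(data, size=64, replace=False)
# Badly constructed batch: drawn from only one sub-population.
skewed_batch = rng.normal(-2.0, 1.0, 64)

# BN normalizes with per-batch statistics; an FN-style scheme targets the
# full-dataset statistics instead. The gap below is what hurts BN when
# batches are skewed.
for name, batch in [("iid", iid_batch), ("skewed", skewed_batch)]:
    print(f"{name}: |batch mean - full mean| = {abs(batch.mean() - full_mean):.2f}")
```

On the i.i.d. batch the per-batch statistics track the dataset statistics; on the skewed batch they do not, so BN's normalization and FN's diverge.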

Quasi-Support Vector Data Description (QSVDD)
In the area of data classification, various classifiers have been developed, each with its own strengths and weaknesses. Among these, we propose a method based on maximizing the margin between two classes. One of the main challenges in this area is dealing with noisy data. In this paper, our aim is to optimize the large margin classifier based on hyperdisk (LMC-HD) and incorporate it into the quasi-support vector data description (QSVDD) method. In the proposed method, the bounding hypersphere is calculated based on QSVDD, so our convex class model is more robust than the support vector machine (SVM) and less tight than LMC-HD. Applying this idea reduces the impact of noisy data on classification. Large margin classifiers aim to maximize the margin while minimizing the risk. Since our proposed method suppresses the effect of outliers and noise, it attains the widest margin among large margin classifiers. Finally, we compare our proposed method with other popular large margin classifiers through experiments on a set of standard datasets; the results indicate that our method is more effective than the others.
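For context, the bounding-hypersphere computation that QSVDD builds on traces back to the classical SVDD problem of Tax and Duin; a standard statement of that problem (not the quasi-variant's exact formulation, which the abstract does not detail) is:

```latex
\min_{R,\,a,\,\xi}\; R^2 + C \sum_{i} \xi_i
\qquad \text{s.t.}\quad \lVert x_i - a \rVert^2 \le R^2 + \xi_i,\quad \xi_i \ge 0
```

Here a and R are the center and radius of the hypersphere, the slacks ξ_i let noisy points fall outside the boundary, and the trade-off parameter C controls how strongly those outliers influence the fitted sphere.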

Adapted Geographically Weighted Lasso (Ada-GWL)
Ridership estimation at the station level plays a critical role in metro transportation planning. Among existing ridership estimation methods, the direct demand model has been recognized as an effective approach. However, existing direct demand models, including Geographically Weighted Regression (GWR), have rarely included local model selection in ridership estimation. In practice, acquiring insights into metro ridership under multiple influencing factors from a local perspective is important for passenger volume management and for transportation planning operations that adapt to local conditions. In this study, we propose an Adapted Geographically Weighted Lasso (Ada-GWL) framework for modelling metro ridership, which performs regression-coefficient shrinkage and local model selection. It takes metro network connections into account and adopts a network-based distance metric instead of a Euclidean-based one, making it "adapted" to the context of metro networks. The real-world case of the Shenzhen Metro is used to validate the superiority of the proposed model. The results show that the Ada-GWL model performs best compared with the global model (Ordinary Least Squares, OLS), GWR, GWR calibrated with a network-based distance metric, and GWL, in terms of estimation error of the dependent variable and goodness-of-fit. By examining the variation of each coefficient across space (elasticities) and the variables selected at each station, the model yields more realistic conclusions based on local analysis. Besides, clustering the stations according to their regression coefficients reveals functional characteristics that comply with Shenzhen's functional land-use policy. These results demonstrate that the proposed Ada-GWL model has great spatial explanatory power in transportation planning.
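As a sketch of the local-regression core that Ada-GWL builds on, the following computes GWR-style local coefficients with a Gaussian kernel over a hypothetical network-distance matrix; the lasso shrinkage and local model selection that distinguish Ada-GWL are omitted, and all station data here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_features = 30, 3

# Hypothetical network-based distances between stations (L1 distance on
# synthetic coordinates stands in for shortest-path distance on the metro graph).
coords = rng.uniform(0.0, 10.0, (n_stations, 2))
dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1)

# Synthetic station-level predictors and ridership response.
X = rng.normal(size=(n_stations, n_features))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n_stations)

def gwr_coefficients(X, y, dist, bandwidth=5.0):
    """Local weighted least squares at every station.

    Observation weights decay with *network* distance via a Gaussian kernel;
    Ada-GWL additionally applies lasso shrinkage to each local fit (omitted).
    """
    betas = np.empty((len(y), X.shape[1]))
    for i in range(len(y)):
        w = np.exp(-(dist[i] / bandwidth) ** 2)  # Gaussian kernel weights
        Xw = X * w[:, None]                      # weighted design matrix, W X
        # Solve the weighted normal equations (X^T W X) b = X^T W y.
        betas[i] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return betas

betas = gwr_coefficients(X, y, dist)  # one coefficient vector per station
```

Each station gets its own coefficient vector, which is what makes the spatial variation ("elasticities") and per-station variable selection in the abstract possible once shrinkage is layered on top.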