Featuretools
Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning. …
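As an illustration (not from the original post), here is a minimal sketch of Deep Feature Synthesis on the library's bundled mock-customer demo data; the `target_dataframe_name` argument follows the 1.x API (older releases call it `target_entity`):

```python
# Minimal sketch of Deep Feature Synthesis with Featuretools,
# using the library's bundled mock-customer demo EntitySet.
import featuretools as ft

# Load a small relational dataset (customers, sessions, transactions).
es = ft.demo.load_mock_customer(return_entityset=True)

# Run Deep Feature Synthesis: stack aggregation and transform
# primitives across the related tables to build per-customer features.
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="customers",  # `target_entity` in pre-1.0 releases
    agg_primitives=["mean", "sum", "count"],
    trans_primitives=["month"],
    max_depth=2,
)
print(feature_matrix.head())
```

Deep Feature Synthesis stacks primitives across the relational schema (e.g. the mean transaction amount per session, then aggregated per customer), which is how the library turns related tables into a single feature matrix.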
Max-Value Entropy Search (MES)
Bayesian optimization (BO) is an effective tool for black-box optimization in which evaluating the objective function is usually quite expensive. In practice, lower-fidelity approximations of the objective function are often available. Recently, multi-fidelity Bayesian optimization (MFBO) has attracted considerable attention because it can dramatically accelerate the optimization process by using those cheaper observations. We propose a novel information-theoretic approach to MFBO. Information-based approaches are popular and empirically successful in BO, but existing studies of information-based MFBO are plagued by the difficulty of accurately estimating the information gain. Our approach is based on a variant of information-based BO called max-value entropy search (MES), which greatly facilitates evaluation of the information gain in MFBO. In fact, our acquisition function can be computed analytically except for a one-dimensional integral and sampling, both of which can be carried out efficiently and accurately. We demonstrate the effectiveness of our approach on synthetic and benchmark datasets, and further show a real-world application to materials-science data. …
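To make the single-fidelity MES idea concrete, here is an illustrative sketch (not the authors' code): with GP posterior mean mu(x) and standard deviation sigma(x), and Monte Carlo samples y* of the global maximum, the MES acquisition averages gamma*phi(gamma)/(2*Phi(gamma)) - log Phi(gamma) over the samples, where gamma = (y* - mu(x))/sigma(x). The multi-fidelity machinery the paper adds on top of this is omitted here.

```python
# Illustrative sketch of the (single-fidelity) MES acquisition function.
# mu, sigma: GP posterior mean/std at candidate points (shape [n]);
# y_star: Monte Carlo samples of the global maximum (shape [k]),
# e.g. drawn via a Gumbel approximation or Thompson sampling.
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star):
    gamma = (y_star[None, :] - mu[:, None]) / sigma[:, None]  # shape [n, k]
    cdf = np.clip(norm.cdf(gamma), 1e-12, 1.0)                # avoid log(0)
    # Expected entropy reduction, averaged over max-value samples.
    return np.mean(gamma * norm.pdf(gamma) / (2.0 * cdf) - np.log(cdf), axis=1)

# Example: score 5 candidate points against 3 sampled max values.
mu = np.array([0.1, 0.5, 0.9, 0.3, 0.7])
sigma = np.array([0.2, 0.3, 0.1, 0.4, 0.25])
y_star = np.array([1.2, 1.1, 1.3])
print(mes_acquisition(mu, sigma, y_star))
```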
Out-In-Channel Sparsity Regularization (OICSR)
Channel pruning can significantly accelerate and compress deep neural networks. Many channel pruning works use structured sparsity regularization to zero out all the weights in some channels and automatically obtain a structure-sparse network during training. However, these methods apply structured sparsity regularization to each layer separately, omitting the correlations between consecutive layers. In this paper, we first combine an out-channel in the current layer and the corresponding in-channel in the next layer into a single regularization group, termed an out-in-channel. Our proposed Out-In-Channel Sparsity Regularization (OICSR) considers correlations between successive layers to better retain the predictive power of the compact network. Training with OICSR concentrates discriminative features into a fraction of the out-in-channels. Correspondingly, OICSR measures channel importance based on statistics computed from two consecutive layers rather than a single layer. Finally, a global greedy pruning algorithm is designed to remove redundant out-in-channels iteratively. Our method is comprehensively evaluated with various CNN architectures, including CifarNet, AlexNet, ResNet, DenseNet and PreActSeNet, on the CIFAR-10, CIFAR-100 and ImageNet-1K datasets. Notably, on ImageNet-1K we reduce the FLOPs of ResNet-50 by 37.2% while outperforming the original model by 0.22% in top-1 accuracy. …
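On our reading of the abstract, the out-in-channel group for channel c couples out-channel c of layer l with in-channel c of layer l+1, and a group-Lasso norm over each such group is the natural penalty. Below is a hypothetical PyTorch sketch of that reading; the function name, the epsilon term, and the `lam` hyperparameter are our assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of an out-in-channel group-sparsity penalty
# for two consecutive Conv2d layers, in the spirit of OICSR.
import torch
import torch.nn as nn

def oicsr_penalty(conv_l: nn.Conv2d, conv_next: nn.Conv2d) -> torch.Tensor:
    # conv_l.weight:    [C_out, C_in,  kH, kW] -> out-channel c is weight[c]
    # conv_next.weight: [C', C_out, kH, kW]    -> in-channel c is weight[:, c]
    out_sq = conv_l.weight.pow(2).sum(dim=(1, 2, 3))    # per out-channel
    in_sq = conv_next.weight.pow(2).sum(dim=(0, 2, 3))  # per in-channel
    # Group-Lasso norm over each joint out-in-channel group.
    return torch.sqrt(out_sq + in_sq + 1e-12).sum()

# Usage: add the penalty to the task loss during training.
conv1, conv2 = nn.Conv2d(16, 32, 3), nn.Conv2d(32, 64, 3)
lam = 1e-4  # regularization strength (assumed hyperparameter)
loss = lam * oicsr_penalty(conv1, conv2)
loss.backward()
```

Channel importance can then be scored with the same per-group norm, with the smallest groups removed greedily across layers.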
Oddball SGD
Stochastic Gradient Descent (SGD) is arguably the most popular machine learning method used to train deep neural networks (DNNs) today. It has recently been demonstrated that SGD can be statistically biased so that certain elements of the training set are learned more rapidly than others. In this article, we place SGD into a feedback loop whereby the probability of selecting a training example is proportional to its error magnitude. This yields a novelty-driven oddball SGD process that learns more rapidly than traditional SGD by prioritising those elements of the training set with the largest novelty (error). In our DNN example, oddball SGD trains some 50x faster than regular SGD. …
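A minimal sketch of the error-proportional selection loop described above (the function name, the smoothing constant, and the periodic error refresh are our assumptions, not the article's code):

```python
# Minimal sketch of oddball SGD: sample training indices with
# probability proportional to each example's current error.
import numpy as np

def oddball_indices(errors, batch_size, rng):
    # Selection probability proportional to error magnitude (novelty).
    p = np.abs(errors) + 1e-8  # keep every example reachable
    p /= p.sum()
    return rng.choice(len(errors), size=batch_size, p=p)

rng = np.random.default_rng(0)
errors = np.array([0.05, 0.9, 0.1, 0.7])  # per-example losses
for step in range(3):
    batch = oddball_indices(errors, batch_size=2, rng=rng)
    # ... forward/backward pass on `batch`, then refresh `errors[batch]` ...
    print(step, batch)
```

High-error (novel) examples are revisited more often, while well-learned examples are sampled rarely, which is the feedback loop the article describes.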