Multi-Task Triple-Stream Network (MTTSNet)
Our goal in this work is to train an image captioning model that generates denser and more informative captions. We introduce ‘relational captioning,’ a novel image captioning task which aims to generate multiple captions with respect to relational information between objects in an image. Relational captioning is a framework that is advantageous in both diversity and amount of information, leading to image understanding based on relationships. Part-of-speech (POS, i.e., subject-object-predicate categories) tags can be assigned to every English word. We leverage POS as a prior to guide the correct sequence of words in a caption. To this end, we propose a multi-task triple-stream network (MTTSNet), which consists of three recurrent units for the respective POS roles and jointly performs POS prediction and captioning. We demonstrate more diverse and richer representations generated by the proposed model against several baselines and competing methods. …
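As a rough illustration of the triple-stream idea, the sketch below (PyTorch) runs one recurrent unit per POS role and feeds the concatenated hidden states to joint word and POS prediction heads. The class name, dimensions, and wiring are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TripleStreamDecoder(nn.Module):
    """Illustrative triple-stream decoder: one LSTM cell per POS role
    (subject / predicate / object) plus a joint POS classifier.
    Names and dimensions are assumptions, not the paper's code."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, num_pos=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One recurrent unit per POS role.
        self.streams = nn.ModuleList(
            [nn.LSTMCell(embed_dim, hidden_dim) for _ in range(num_pos)]
        )
        # Multi-task heads: next-word prediction and POS-tag prediction.
        self.word_head = nn.Linear(num_pos * hidden_dim, vocab_size)
        self.pos_head = nn.Linear(num_pos * hidden_dim, num_pos)

    def forward(self, tokens, states):
        # tokens: (batch,) word indices; states: list of (h, c) per stream
        x = self.embed(tokens)
        new_states = [cell(x, s) for cell, s in zip(self.streams, states)]
        h_cat = torch.cat([h for h, _ in new_states], dim=-1)
        return self.word_head(h_cat), self.pos_head(h_cat), new_states
```

The word and POS heads share the same concatenated hidden state, which is one simple way to realize the joint (multi-task) training of captioning and POS prediction described in the abstract.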
Adaptive Blending Unit (ABU)
The most widely used activation functions in current deep feed-forward neural networks are rectified linear units (ReLU), and many alternatives have been successfully applied, as well. However, none of the alternatives have managed to consistently outperform the rest and there is no unified theory connecting properties of the task and network with properties of activation functions for most efficient training. A possible solution is to have the network learn its preferred activation functions. In this work, we introduce Adaptive Blending Units (ABUs), a trainable linear combination of a set of activation functions. Since ABUs learn the shape, as well as the overall scaling of the activation function, we also analyze the effects of adaptive scaling in common activation functions. We experimentally demonstrate advantages of both adaptive scaling and ABUs over common activation functions across a set of systematically varied network specifications. We further show that adaptive scaling works by mitigating covariate shifts during training, and that the observed advantages in performance of ABUs likewise rely largely on the activation function’s ability to adapt over the course of training. …
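A minimal sketch of an ABU, assuming a small hand-picked candidate set of activation functions and unconstrained blending weights (so the unit can also learn the overall scale of the activation); the paper's candidate set and initialization may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveBlendingUnit(nn.Module):
    """Sketch of an Adaptive Blending Unit: a learnable linear combination
    of several candidate activation functions. Candidate set and
    initialization are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        self.fns = [torch.tanh, torch.relu, F.elu, torch.sigmoid, lambda x: x]
        # One blending weight per candidate; left unconstrained so the unit
        # can learn both the shape and the overall scaling.
        self.alpha = nn.Parameter(torch.full((len(self.fns),), 1.0 / len(self.fns)))

    def forward(self, x):
        return sum(a * f(x) for a, f in zip(self.alpha, self.fns))
```

Used in place of a fixed nonlinearity (e.g., swapping a ReLU for an `AdaptiveBlendingUnit()` instance per layer), the blending weights are trained jointly with the rest of the network.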
Entrofy
Selecting a cohort from a set of candidates is a common task within and beyond academia. Admitting students, awarding grants, and choosing speakers for a conference are all situations where human biases may affect the make-up of the final cohort. We propose a new algorithm, Entrofy, designed to be part of a larger decision-making strategy aimed at making cohort selection as just, quantitative, transparent, and accountable as possible. We suggest this algorithm be embedded in a two-step selection procedure. First, all application materials are stripped of markers of identity that could induce conscious or sub-conscious bias. During blind review, the committee selects all applicants, submissions, or other entities that meet their merit-based criteria. This often yields a cohort larger than the admissible number. In the second stage, the target cohort can be chosen from this meritorious pool via a new algorithm and software tool. Entrofy optimizes differences across an assignable set of categories selected by the human committee. Criteria could include gender, academic discipline, experience with certain technologies, or other quantifiable characteristics. The Entrofy algorithm maximizes diversity computationally and solves the tie-breaking problem with provable performance guarantees. We show how Entrofy selects cohorts according to pre-determined characteristics in simulated sets of applications and demonstrate its use in a case study. This cohort selection process allows human judgment to prevail when assessing merit, but assigns the assessment of diversity to a computational process less likely to be beset by human bias. Importantly, the stage at which diversity assessments occur is fully transparent and auditable with Entrofy. Splitting merit and diversity considerations into their own assessment stages makes it easier to explain why a given candidate was selected or rejected. …
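The toy greedy selector below captures the flavor of the second stage: given target counts per category, it repeatedly adds the candidate that closes the largest remaining gap to the targets, breaking ties at random. It is a simplification for illustration, not the released Entrofy tool or its actual objective.

```python
import random

def greedy_cohort(candidates, targets, k):
    """Toy greedy cohort selection in the spirit of Entrofy (a simplification).
    candidates: dict mapping name -> set of category labels the candidate has
    targets:    dict mapping category -> desired count of members with it
    k:          cohort size"""
    cohort = []
    counts = {c: 0 for c in targets}
    pool = dict(candidates)
    for _ in range(min(k, len(pool))):
        def gain(labels):
            # Number of target categories this candidate would still help fill.
            return sum(1 for c in labels if c in targets and counts[c] < targets[c])
        best = max(gain(v) for v in pool.values())
        ties = [name for name, v in pool.items() if gain(v) == best]
        pick = random.choice(ties)  # random tie-breaking to avoid ordering bias
        for c in pool[pick]:
            if c in counts:
                counts[c] += 1
        cohort.append(pick)
        del pool[pick]
    return cohort
```

In this sketch, merit screening is assumed to have happened beforehand: every entry in `candidates` has already passed blind review, and the algorithm only arbitrates diversity across the committee-chosen categories.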
Quantization Loss Re-Learning
In order to quantize the gate parameters of an LSTM (Long Short-Term Memory) neural network model with almost no degradation in recognition performance, a new quantization method named the Quantization Loss Re-Learn Method is proposed in this paper. The method applies lossy quantization to the gate parameters during training iterations, and the weight parameters learn to offset the quantization loss of the gate parameters by adjusting the gradients in back-propagation during weight-parameter optimization. We prove the effectiveness of this method through theoretical derivation and experiments. The gate parameters are quantized to the three values 0, 0.5, and 1, and on the Named Entity Recognition dataset the F1 score of the model with the new quantization method applied to the gate parameters decreased by only 0.7% compared to the baseline model. …
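One possible reading of this scheme, sketched below, is that the gate values are rounded to the nearest of {0, 0.5, 1} in the forward pass while a straight-through estimator lets gradients flow back to the weights, which then learn to offset the quantization loss. The helper name and the interpretation of "gate parameters" as gate activations are assumptions on my part, not taken from the paper.

```python
import torch

def quantize_gate(g):
    """Round gate values in [0, 1] to the three levels {0, 0.5, 1}.
    A straight-through estimator keeps the backward pass differentiable,
    so the weight parameters can adapt to the quantization loss.
    Sketch only; the paper's exact procedure may differ."""
    q = torch.round(g * 2.0) / 2.0      # nearest of {0, 0.5, 1}
    return g + (q - g).detach()         # forward: q, backward: identity

# Hypothetical use inside a hand-rolled LSTM step (variable names assumed):
# i = quantize_gate(torch.sigmoid(x @ W_i + h @ U_i + b_i))
# f = quantize_gate(torch.sigmoid(x @ W_f + h @ U_f + b_f))
# o = quantize_gate(torch.sigmoid(x @ W_o + h @ U_o + b_o))
```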