Robust Optimization
Robust optimization is a field of optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. …
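A minimal sketch of the idea, assuming a toy linear program with box uncertainty in the constraint coefficients (the numbers and the worst-case reformulation below are illustrative, not from any particular source):

```python
# Hedged sketch: robust counterpart of a linear program whose constraint
# coefficients are uncertain within a box. With x >= 0, the worst case is
# simply the largest coefficients, so the robust LP tightens the constraint.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])       # objective: maximize 3*x1 + 2*x2
a_nom = np.array([1.0, 1.0])   # nominal resource-usage coefficients
delta = np.array([0.2, 0.1])   # box uncertainty: a in [a_nom - delta, a_nom + delta]
b = 10.0                       # resource budget

# Nominal LP: ignores the uncertainty entirely.
nominal = linprog(-c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)

# Robust counterpart: the constraint must hold for every a in the box;
# for x >= 0 the binding case is a = a_nom + delta.
robust = linprog(-c, A_ub=[a_nom + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal x:", nominal.x, "objective:", -nominal.fun)
print("robust  x:", robust.x, "objective:", -robust.fun)
```

The robust solution sacrifices some nominal objective value in exchange for feasibility under every realization of the uncertain parameters.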
Deadline-Aware Task rEplication for Vehicular Cloud (DATE-V)
Vehicular Cloud Computing (VCC) is a new technological shift that exploits the computation and storage resources on vehicles for computational service provisioning. Spare on-board resources are pooled by a VCC operator, e.g., a roadside unit, to complete task requests using the vehicle-as-a-resource framework. In this paper, we investigate timely service provisioning for deadline-constrained tasks in VCC systems by leveraging the task replication technique (i.e., allowing one task to be executed by several server vehicles). A learning-based algorithm, called DATE-V (Deadline-Aware Task rEplication for Vehicular Cloud), is proposed to address the special issues in VCC systems, including the uncertainty of vehicle movements, volatile vehicle membership, and large vehicle populations. The proposed algorithm is developed based on a novel Contextual-Combinatorial Multi-Armed Bandit (CC-MAB) learning framework. DATE-V is 'contextual' because it utilizes side information (context) about vehicles and tasks to infer the completion probability of a task replication under random vehicle movements. DATE-V is 'combinatorial' because it replicates the received task and sends the replications to multiple server vehicles to guarantee service timeliness. We rigorously prove that our learning algorithm achieves a sublinear regret bound compared to an oracle algorithm that knows the exact completion probability of any task replication. Simulations are carried out based on real-world vehicle movement traces, and the results show that DATE-V significantly outperforms benchmark solutions. …
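A hedged sketch of the contextual-combinatorial bandit idea (not the paper's exact algorithm): contexts are discretized into hypercubes, per-cube completion-probability estimates are maintained, and each task is replicated to the top-m vehicles ranked by a UCB-style score. The class name, bin count, and exploration weight below are illustrative assumptions.

```python
# Hedged CC-MAB-style sketch for deadline-aware task replication.
import numpy as np
from collections import defaultdict

class CCMABReplicator:
    def __init__(self, n_bins=4, m_replicas=3, alpha=1.0):
        self.n_bins = n_bins             # bins per context dimension
        self.m = m_replicas              # replicas sent per task
        self.alpha = alpha               # exploration weight
        self.counts = defaultdict(int)   # observations per context hypercube
        self.means = defaultdict(float)  # estimated completion probability per cube
        self.t = 1                       # round counter

    def _cube(self, context):
        # Map a context vector in [0, 1]^d to a discrete hypercube key.
        idx = (np.asarray(context) * self.n_bins).astype(int)
        return tuple(np.minimum(idx, self.n_bins - 1))

    def select(self, vehicle_contexts):
        # Rank candidate vehicles by UCB of completion probability, pick top-m.
        scores = []
        for ctx in vehicle_contexts:
            key = self._cube(ctx)
            n = self.counts[key]
            bonus = self.alpha * np.sqrt(np.log(self.t) / n) if n > 0 else np.inf
            scores.append(self.means[key] + bonus)
        return list(np.argsort(scores)[::-1][:self.m])

    def update(self, vehicle_contexts, chosen, completed):
        # completed[i] is 1 if the replica on vehicle chosen[i] met the deadline.
        for i, idx in enumerate(chosen):
            key = self._cube(vehicle_contexts[idx])
            self.counts[key] += 1
            self.means[key] += (completed[i] - self.means[key]) / self.counts[key]
        self.t += 1
```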
Language-Conditioned Graph Network (LCGN)
Solving grounded language tasks often requires reasoning about relationships between objects in the context of a given task. For example, to answer the question “What color is the mug on the plate?” we must check the color of the specific mug that satisfies the “on” relationship with respect to the plate. Recent work has proposed various methods capable of complex relational reasoning. However, most of their power is in the inference structure, while the scene is represented with simple local appearance features. In this paper, we take an alternate approach and build contextualized representations for objects in a visual scene to support relational reasoning. We propose a general framework of Language-Conditioned Graph Networks (LCGN), where each node represents an object and is described by a context-aware representation built from related objects through iterative message passing conditioned on the textual input. For example, conditioning on the “on” relationship to the plate, the object “mug” gathers messages from the object “plate” to update its representation to “mug on the plate”, which can then be easily consumed by a simple classifier for answer prediction. We experimentally show that our LCGN approach effectively supports relational reasoning and improves performance across several tasks and datasets. …
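A minimal, hedged sketch of language-conditioned message passing in the spirit of LCGN (not the authors' implementation): object nodes exchange messages with attention weights that depend on a text embedding, and their representations are refined over several rounds. All shapes, weight matrices, and the conditioning scheme below are illustrative assumptions.

```python
# Hedged sketch: text-conditioned graph message passing over object features.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lcgn_step(node_feats, text_emb, W_q, W_k, W_m):
    # node_feats: (N, d) local object features; text_emb: (d,) question encoding.
    # Queries are modulated by the text so the same scene is wired differently
    # for different questions (e.g. "mug on the plate").
    queries = (node_feats * text_emb) @ W_q                     # text-conditioned queries
    keys = node_feats @ W_k
    attn = softmax(queries @ keys.T / np.sqrt(keys.shape[1]))   # (N, N) edge weights
    messages = attn @ (node_feats @ W_m)                        # aggregate neighbor messages
    return node_feats + messages                                # residual context update

rng = np.random.default_rng(0)
d, N, rounds = 16, 5, 3
nodes = rng.normal(size=(N, d))          # e.g. detected objects: mug, plate, ...
text = rng.normal(size=d)                # e.g. encoding of the question text
W_q, W_k, W_m = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
for _ in range(rounds):                  # iterative message passing
    nodes = lcgn_step(nodes, text, W_q, W_k, W_m)
print(nodes.shape)                       # contextualized object representations, (5, 16)
```

The contextualized node features can then be fed to any downstream task head, e.g. a simple answer classifier.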
DeepFirearm
There is great demand for automatically regulating the inappropriate appearance of shocking firearm images in social media and for identifying firearm types in forensics. Image retrieval techniques have great potential to solve these problems. To facilitate research in this area, we introduce Firearm 14k, a large dataset consisting of over 14,000 images in 167 categories. It can be used for both fine-grained recognition and retrieval of firearm images. Recent advances in image retrieval are mainly driven by fine-tuning state-of-the-art convolutional neural networks for the retrieval task. The conventional single-margin contrastive loss, known for its simplicity and good performance, has been widely used. We find that it performs poorly on the Firearm 14k dataset because: (1) the loss contributed by positive and negative image pairs is unbalanced during training, and (2) a large domain gap exists between this dataset and ImageNet. We propose to deal with the unbalanced loss by employing a double-margin contrastive loss. We tackle the domain gap with a two-stage training strategy, where we first fine-tune the network for classification and then fine-tune it for retrieval. Experimental results show that our approach outperforms the conventional single-margin approach by a large margin (up to 88.5% relative improvement) and even surpasses a strong triplet-loss-based approach. …
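A hedged sketch of a double-margin contrastive loss in the spirit described above (the paper's exact formulation and margin values may differ): positive pairs are only penalized beyond a positive margin and negative pairs only within a negative margin, which rebalances the two loss terms during training.

```python
# Hedged sketch: double-margin contrastive loss on embedding pairs.
import numpy as np

def double_margin_contrastive(emb_a, emb_b, labels, m_pos=0.5, m_neg=1.2):
    # emb_a, emb_b: (B, d) L2-normalized embeddings; labels: (B,) 1 = same class.
    d = np.linalg.norm(emb_a - emb_b, axis=1)                   # pairwise Euclidean distance
    pos_loss = labels * np.maximum(d - m_pos, 0.0) ** 2         # pull positives inside m_pos
    neg_loss = (1 - labels) * np.maximum(m_neg - d, 0.0) ** 2   # push negatives beyond m_neg
    return np.mean(pos_loss + neg_loss)

# Toy usage with random embeddings (illustrative only).
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 64)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(8, 64)); b /= np.linalg.norm(b, axis=1, keepdims=True)
y = rng.integers(0, 2, size=8)
print(double_margin_contrastive(a, b, y))
```

Setting m_pos to zero recovers the conventional single-margin contrastive loss, in which every positive pair contributes to the loss regardless of how close it already is.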