EmbraceNet
Classification using multimodal data arises in many machine learning applications. It is crucial not only to model cross-modal relationships effectively but also to ensure robustness against the loss of part of the data or of entire modalities. In this paper, we propose a novel deep learning-based multimodal fusion architecture for classification tasks, which guarantees compatibility with any kind of learning model, deals with cross-modal information carefully, and prevents performance degradation due to the partial absence of data. We employ two datasets for multimodal classification tasks, build models based on our architecture and other state-of-the-art models, and analyze their performance in various situations. The results show that our architecture outperforms the other multimodal fusion architectures when some parts of the data are not available. …
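The abstract stays high-level, so a small sketch may help make the robustness claim concrete. The PyTorch module below illustrates one fusion layer of this flavor: each modality is docked to a shared embedding size, and for every embedding index one available modality is stochastically chosen to contribute, so absent modalities simply get zero selection probability. Class and parameter names (`EmbraceNetSketch`, `embed_size`) are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class EmbraceNetSketch(nn.Module):
    """A minimal sketch of an EmbraceNet-style fusion layer (assumed names).

    Each modality is first 'docked' to a common embedding size; the fusion
    step then samples, for every embedding index, which modality contributes
    that component. Missing modalities get zero selection probability, so
    the fused vector never depends on absent data.
    """

    def __init__(self, input_sizes, embed_size=256):
        super().__init__()
        self.docking = nn.ModuleList(
            nn.Linear(d, embed_size) for d in input_sizes
        )
        self.embed_size = embed_size

    def forward(self, inputs, available):
        # inputs: list of [batch, d_i] tensors; available: [batch, m] 0/1
        # mask of present modalities (at least one per sample).
        docked = torch.stack(
            [torch.relu(f(x)) for f, x in zip(self.docking, inputs)], dim=1
        )  # [batch, m, embed_size]
        probs = available / available.sum(dim=1, keepdim=True)
        # For each embedding index, sample one contributing modality.
        idx = torch.multinomial(probs, self.embed_size, replacement=True)
        mask = torch.zeros_like(docked).scatter_(1, idx.unsqueeze(1), 1.0)
        return (docked * mask).sum(dim=1)  # [batch, embed_size]

# Example: two modalities, the second missing for the first sample.
net = EmbraceNetSketch([32, 48], embed_size=8)
x = [torch.randn(2, 32), torch.randn(2, 48)]
avail = torch.tensor([[1.0, 0.0], [1.0, 1.0]])
print(net(x, avail).shape)  # torch.Size([2, 8])
```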
Flint
Serverless architectures organized around loosely-coupled function invocations represent an emerging design for many applications. Recent work mostly focuses on user-facing products and event-driven processing pipelines. In this paper, we explore a completely different part of the application space and examine the feasibility of analytical processing on big data using a serverless architecture. We present Flint, a prototype Spark execution engine that takes advantage of AWS Lambda to provide a pure pay-as-you-go cost model. With Flint, a developer uses PySpark exactly as before, but without needing an actual Spark cluster. We describe the design, implementation, and performance of Flint, along with the challenges associated with serverless analytics. …
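Because Flint's selling point is an unchanged driver-side programming model, an ordinary PySpark job already doubles as a usage example; only the execution backend differs. The snippet below is plain PySpark with a hypothetical S3 path, and the master setting is a placeholder, since the abstract does not say how Flint's scheduler is wired in.

```python
# A standard PySpark word count; under Flint, the same driver code runs
# with each stage's tasks dispatched to AWS Lambda functions instead of
# a long-lived Spark cluster. The master URL below is a placeholder, not
# Flint's documented configuration.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("wordcount")
        .setMaster("local[1]"))  # stand-in for Flint's execution backend
sc = SparkContext(conf=conf)

counts = (sc.textFile("s3://my-bucket/corpus/*.txt")  # hypothetical path
            .flatMap(lambda line: line.split())
            .map(lambda w: (w, 1))
            .reduceByKey(lambda a, b: a + b))
print(counts.take(10))
```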
OTNSGA-II
Two important characteristics of multi-objective evolutionary algorithms are distribution and convergence. As a classic multi-objective genetic algorithm, NSGA-II is widely used in multi-objective optimization. However, in NSGA-II, the random population initialization and the distance-based population maintenance strategy cannot maintain the distribution or convergence of the population well. To address these two deficiencies, this paper proposes an improved algorithm, OTNSGA-II, which performs better in terms of distribution and convergence. The new algorithm produces the initial population by orthogonal experiment, selecting individuals through a new discontinuing non-dominated sorting and crowding distance. In addition, a new clustering-based pruning strategy is proposed, which self-adaptively prunes individuals that have similar features and perform poorly in non-dominated sorting and crowding distance, or that are far away from the Pareto front, according to the degree of intra-class aggregation of the clustering results. The new pruning strategy helps the population converge to the Pareto front more easily while maintaining its distribution. OTNSGA-II and NSGA-II are compared on various types of test functions to verify the improvement of OTNSGA-II in terms of distribution and convergence. …
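As a rough illustration of the clustering-based pruning idea, the sketch below clusters a non-dominated set in objective space and keeps, per cluster, the member with the largest crowding distance, so near-duplicates are discarded while spread is preserved. It is a simplified stand-in for the paper's self-adaptive strategy, which also weighs intra-class aggregation; the function and argument names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prune(objectives, crowding, target_size, seed=0):
    """Reduce a candidate set to `target_size` survivors (illustrative).

    Individuals are clustered by their objective vectors; within each
    cluster (a group with similar features), only the member with the
    largest crowding distance survives. Requires at least `target_size`
    candidates.
    """
    labels = KMeans(n_clusters=target_size, n_init=10,
                    random_state=seed).fit_predict(objectives)
    keep = []
    for c in range(target_size):
        members = np.flatnonzero(labels == c)
        keep.append(members[np.argmax(crowding[members])])
    return np.array(sorted(keep))

# Example: prune 8 two-objective candidates down to 4 survivors.
objs = np.random.rand(8, 2)
crowd = np.random.rand(8)
print(cluster_prune(objs, crowd, target_size=4))
```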
Recursively Decomposing the function into locally Independent Subspaces (RDIS)
Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be brought to bear. Based on this, we propose a problem-decomposition approach to nonconvex optimization. Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent subfunctions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent. The variables to set are chosen by graph partitioning, ensuring decomposition whenever possible. We show analytically that RDIS can solve a broad class of nonconvex optimization problems exponentially faster than gradient descent with random restarts. Experimentally, RDIS outperforms standard techniques on problems like structure from motion and protein folding.
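The recursion is concrete enough to sketch. The toy Python below assumes the objective is a sum of terms over small variable subsets: fixing a separator variable can disconnect the variable-interaction graph, after which each connected component is optimized independently, with small leaves handed to `scipy.optimize.minimize`. The highest-degree separator heuristic and the candidate-value grid are crude stand-ins for the paper's graph partitioning and value selection; all names are illustrative, not the authors' implementation.

```python
import networkx as nx
import numpy as np
from scipy.optimize import minimize

def interaction_graph(terms, free):
    # Nodes are free variables; an edge joins two variables sharing a term.
    g = nx.Graph()
    g.add_nodes_from(free)
    for vs, _ in terms:
        fv = [v for v in vs if v in free]
        g.add_edges_from((a, b) for i, a in enumerate(fv) for b in fv[i + 1:])
    return g

def solve(terms, fixed, free, leaf_size):
    """Minimize the sum of term functions over `free`, holding `fixed` constant."""
    if len(free) <= leaf_size:  # simple enough: hand off to a local optimizer
        def total(x):
            env = {**fixed, **dict(zip(free, x))}
            return sum(f(env) for _, f in terms)
        res = minimize(total, np.zeros(len(free)))
        return res.fun, dict(zip(free, res.x))
    g = interaction_graph(terms, set(free))
    sep = max(free, key=g.degree)  # crude separator; RDIS uses graph partitioning
    best_val, best_asgn = np.inf, {}
    for v in np.linspace(-1.0, 1.0, 5):  # candidate settings for the separator
        env, rest = {**fixed, sep: v}, [u for u in free if u != sep]
        # Terms whose variables are now all fixed contribute a constant.
        val = sum(f(env) for vs, f in terms if not set(vs) & set(rest))
        asgn = {sep: v}
        # Fixing `sep` may split the graph; each component is independent.
        for comp in nx.connected_components(interaction_graph(terms, set(rest))):
            sub = [(vs, f) for vs, f in terms if set(vs) & comp]
            cv, ca = solve(sub, env, sorted(comp), leaf_size)
            val, asgn = val + cv, {**asgn, **ca}
        if val < best_val:
            best_val, best_asgn = val, asgn
    return best_val, best_asgn

# Example: (x1 - x2)^2 + (x2 - x3)^2; fixing x2 decouples x1 from x3.
terms = [(("x1", "x2"), lambda e: (e["x1"] - e["x2"]) ** 2),
         (("x2", "x3"), lambda e: (e["x2"] - e["x3"]) ** 2)]
print(solve(terms, {}, ["x1", "x2", "x3"], leaf_size=1))
```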
GitXiv …