Conceptual Interoperability Constraint (COIN)
Building meaningful interoperation with external software units requires a conceptual interoperability analysis that starts by identifying the conceptual interoperability constraints of each software unit and then compares the units' constraints to detect conceptual mismatches. We call these constraints conceptual interoperability constraints (COINs); they come in different types, including structural, dynamic, and quality constraints. Missing such constraints may lead to unexpected mismatches, expensive resolution, and delayed projects. However, it is challenging for software architects and analysts to manually analyze the unstructured text in API documents to identify the COINs: not only is it a tedious and time-consuming task, it also requires knowledge of the constraint types. In this article, we present and evaluate our idea of using machine learning techniques to automate COIN identification, the first step of conceptual interoperability analysis, from natural-language text in API documents. Our empirical research started with a multiple-case study to build a ground-truth dataset, from which we developed our machine learning COIN-Classification Model. We show the model's robustness through experiments with different machine learning text-classification algorithms. The experiments revealed that our model can achieve up to 87% accuracy in automatically identifying COINs in text. We therefore implemented a tool that embeds the model to demonstrate its practical value in an industrial context. We then evaluated practitioners' acceptance of the tool and found that they significantly agreed on its usefulness and ease of use. …
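The article's dataset and trained model are not included here, so the following is only a minimal sketch of the kind of sentence-level text-classification pipeline described, using scikit-learn; the example API-document sentences and the label set ("structure", "dynamic", "quality", "none") are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch (not the authors' released model): classify API-document
# sentences into COIN types with a TF-IDF + linear SVM pipeline.
# The sentences and labels below are hypothetical placeholders.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

sentences = [
    "The request body must be a JSON object with an 'id' field.",   # structure
    "Call init() before invoking any other endpoint.",              # dynamic
    "Responses are returned within 200 ms under normal load.",      # quality
    "See the tutorial for a step-by-step walkthrough.",             # none
]
labels = ["structure", "dynamic", "quality", "none"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
clf.fit(sentences, labels)

print(clf.predict(["The client must authenticate before sending requests."]))
```

In practice such a pipeline would be trained on a labelled corpus of API-document sentences and compared across several text-classification algorithms, as the article describes.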
Singular Vector Canonical Correlation Analysis (SVCCA)
We introduce a technique based on the singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously-parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. We find that the first hidden layer is general, and that deeper layers are successively more specific. Next, we validate our method against an existing technique that measures layer generality using transfer learning experiments. We find excellent agreement between the two methods, and note that our method is much faster, particularly for continuously-parametrized problems. Finally, we visualize the general representations of the first layers, and interpret them as generalized coordinates over the input domain. …
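The post does not include code, but SVCCA as commonly described reduces each layer's activation matrix with an SVD and then computes canonical correlations between the reduced subspaces. Below is a minimal NumPy sketch of that computation; the 0.99 variance threshold and the synthetic activation matrices are illustrative assumptions.

```python
# Minimal SVCCA sketch: compare two layers' activations over the same inputs.
# acts_a, acts_b are (n_samples, n_neurons) activation matrices (random here).
import numpy as np

def svcca_similarity(acts_a, acts_b, var_kept=0.99):
    def svd_reduce(acts):
        acts = acts - acts.mean(axis=0)                 # center each neuron
        u, s, _ = np.linalg.svd(acts, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_kept) + 1
        return u[:, :k]                                 # top singular directions

    qa, qb = svd_reduce(acts_a), svd_reduce(acts_b)
    # Canonical correlations between the two subspaces are the singular
    # values of Qa^T Qb (the columns of Qa and Qb are orthonormal).
    rho = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return rho.mean()

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(1000, 64))            # e.g. first hidden layer
acts_b = acts_a @ rng.normal(size=(64, 64))     # a linearly related layer
print(svcca_similarity(acts_a, acts_b))         # close to 1 for related layers
```

Comparing a layer's activations across networks trained on different task parameters in this way gives the layer-generality measure the summary refers to, without any retraining.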
Collaborative GAN
Unlike conventional background inpainting, which infers a missing area from image patches similar to the background, face completion requires semantic knowledge about the target object to produce realistic outputs. Current image inpainting approaches use generative adversarial networks (GANs) to achieve such semantic understanding. However, in adversarial learning the semantic knowledge is learned implicitly, so good semantic understanding is not always guaranteed. In this work, we propose a collaborative adversarial learning approach to face completion that explicitly guides the training process. Our method is formulated within a novel generative framework called the collaborative GAN (collaGAN), which allows better semantic understanding of a target object through collaborative learning of multiple tasks, including face completion, landmark detection, and semantic segmentation. Together with the collaGAN, we introduce an inpainting-concentrated scheme so that the model emphasizes inpainting rather than autoencoding. Extensive experiments show that the proposed designs are effective and that collaborative adversarial learning provides better feature representations of faces. Compared with other generative image inpainting models and single-task learning methods, our solution achieves superior performance on all tasks. …
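The summary gives no architectural details, so the following PyTorch sketch only illustrates the multi-task idea: a shared generator trunk with separate heads for face completion, landmark heatmaps, and segmentation. The layer sizes, the 5 landmarks, and the 3 segmentation classes are arbitrary choices, and the adversarial discriminator and losses are omitted.

```python
# Sketch of a multi-task generator in the spirit of collaborative learning:
# one shared encoder, three task heads (completion, landmarks, segmentation).
# Channel sizes, 5 landmarks, and 3 segmentation classes are arbitrary.
import torch
import torch.nn as nn

class MultiTaskGenerator(nn.Module):
    def __init__(self, landmarks=5, seg_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # RGB + mask channel
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.inpaint_head = nn.Conv2d(64, 3, 3, padding=1)           # completed face
        self.landmark_head = nn.Conv2d(64, landmarks, 3, padding=1)  # heatmaps
        self.seg_head = nn.Conv2d(64, seg_classes, 3, padding=1)     # parsing map

    def forward(self, masked_img, mask):
        h = self.encoder(torch.cat([masked_img, mask], dim=1))
        return self.inpaint_head(h), self.landmark_head(h), self.seg_head(h)

x = torch.rand(2, 3, 64, 64)                                 # masked face images
m = torch.zeros(2, 1, 64, 64); m[:, :, 16:48, 16:48] = 1.0   # hole mask
out_img, out_lmk, out_seg = MultiTaskGenerator()(x, m)
print(out_img.shape, out_lmk.shape, out_seg.shape)
```

One plausible reading of the inpainting-concentrated scheme (an assumption, not stated in the summary) is to weight the reconstruction loss toward the masked region so the generator is rewarded for filling the hole rather than for copying the visible pixels.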
Collective And Point Anomalies (CAPA)
The challenge of efficiently identifying anomalies in data sequences is an important statistical problem that now arises in many applications. Whilst there has been substantial work aimed at making statistical analyses robust to outliers, or point anomalies, there has been much less work on detecting anomalous segments, or collective anomalies. By bringing together ideas from changepoint detection and robust statistics, we introduce Collective And Point Anomalies (CAPA), a computationally efficient approach that is suitable when collective anomalies are characterised by either a change in mean, variance, or both, and distinguishes them from point anomalies. Theoretical results establish the consistency of CAPA at detecting collective anomalies and empirical results show that CAPA has close to linear computational cost as well as being more accurate at detecting and locating collective anomalies than other approaches. We demonstrate the utility of CAPA through its ability to detect exoplanets from light curve data from the Kepler telescope. …
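CAPA itself is a penalized-cost dynamic program. The sketch below is a simplified NumPy variant that handles only mean-change collective anomalies plus point anomalies on data standardized to typical mean 0 and variance 1; the penalty values and minimum segment length are illustrative, and the pruning that gives the real method its near-linear cost (as well as its variance-change model) is omitted.

```python
# Simplified CAPA-style dynamic program: mean-change collective anomalies
# plus point anomalies on data standardized to typical mean 0, variance 1.
# beta_c / beta_p penalties and min_len are illustrative; the real CAPA also
# models variance changes and prunes candidates for near-linear cost.
import numpy as np

def capa_like(x, beta_c=12.0, beta_p=8.0, min_len=3):
    n = len(x)
    cum = np.concatenate(([0.0], np.cumsum(x)))
    cum2 = np.concatenate(([0.0], np.cumsum(x**2)))
    C = np.zeros(n + 1)          # optimal penalized cost of x[:t]
    choice = [None] * (n + 1)
    for t in range(1, n + 1):
        best, arg = C[t-1] + x[t-1]**2, ("typical", t-1)     # typical point
        if C[t-1] + beta_p < best:                           # point anomaly
            best, arg = C[t-1] + beta_p, ("point", t-1)
        for s in range(0, t - min_len + 1):                  # collective anomaly x[s:t]
            seg_n = t - s
            seg_mean = (cum[t] - cum[s]) / seg_n
            cost = (cum2[t] - cum2[s]) - seg_n * seg_mean**2 + beta_c
            if C[s] + cost < best:
                best, arg = C[s] + cost, ("collective", s)
        C[t], choice[t] = best, arg
    anomalies, t = [], n                                     # backtrack
    while t > 0:
        kind, s = choice[t]
        if kind != "typical":
            anomalies.append((kind, s, t - 1))
        t = s if kind == "collective" else t - 1
    return anomalies[::-1]

x = np.random.default_rng(1).normal(size=200)
x[80:100] += 4.0        # collective anomaly (mean shift)
x[150] = 10.0           # point anomaly
print(capa_like(x))
```

The key distinction the recursion encodes is that a collective anomaly must pay one penalty for a whole anomalous segment, whereas each point anomaly pays its own penalty, which is what lets the method separate the two.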