WhatIs-X

X-12-ARIMA Seasonal Adjustment Program X-12-ARIMA is seasonal adjustment software produced by the US Census Bureau.
Features include:
· Extensive time series modeling and model selection capabilities for linear regression models with ARIMA errors (regARIMA models);
· Many seasonal and trend filter options;
· Diagnostics of the quality and stability of the adjustments achieved under the options selected;
· The ability to efficiently process many series at once.
The X-12-ARIMA seasonal adjustment program of the US Census Bureau extracts the different components (mainly: seasonal component, trend component, outlier component and irregular component) of a monthly or quarterly time series. It is the state-of-the-art technology for seasonal adjustment used in many statistical offices. Moving holiday effects, trading day effects and user-defined regressors can be included, and the program additionally incorporates automatic outlier detection. The procedure makes additive or multiplicative adjustments and creates an output data set containing the adjusted time series and intermediate calculations.
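A minimal usage sketch in Python, assuming the X-13ARIMA-SEATS binary (the successor to X-12-ARIMA) is installed and on the PATH; statsmodels wraps it via x13_arima_analysis, and the attribute names below follow that wrapper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.x13 import x13_arima_analysis

# synthetic monthly series with trend, seasonality and noise
idx = pd.date_range("2000-01", periods=120, freq="MS")
y = pd.Series(np.linspace(100, 160, 120)
              + 10 * np.sin(2 * np.pi * np.arange(120) / 12)
              + np.random.default_rng(0).normal(0, 2, 120), index=idx)

# requires the Census Bureau X-13ARIMA-SEATS executable to be installed
res = x13_arima_analysis(y)
print(res.seasadj.head())    # seasonally adjusted series
print(res.trend.head())      # trend component
print(res.irregular.head())  # irregular component
```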
X-Armed Bandits We propose and analyze StoROO, an algorithm derived from StoOO for risk optimization of stochastic black-box functions. Motivated by risk-averse decision making in fields like agriculture, medicine, biology or finance, we do not focus on the mean payoff but on generic functionals of the return distribution, such as quantiles. We provide a generic regret analysis of StoROO. Inspired by the bandit literature and black-box mean optimizers, StoROO relies on the possibility of constructing confidence intervals for the targeted functional based on random-size samples. We explain in detail how to construct them for quantiles, providing tight bounds based on the Kullback-Leibler divergence. The interest of these tight bounds is highlighted by numerical experiments that show a dramatic improvement over standard approaches.
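A minimal sketch (not the authors' code) of the kind of KL-based quantile confidence bound this relies on: the empirical CDF at a point is a Bernoulli mean, a Chernoff/KL deviation bound gives a confidence interval for it, and inverting that interval yields a bound on the quantile. The threshold log(1/delta) and all names are illustrative assumptions; the paper's exact bounds differ.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def quantile_ucb(samples, tau, delta):
    """Upper confidence bound on the tau-quantile: return the smallest sample
    whose CDF value is provably >= tau according to the KL deviation bound."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    for i, xi in enumerate(x):
        p_hat = (i + 1) / n                      # empirical CDF at xi
        # lower confidence bound on F(xi): smallest q with n*kl(p_hat, q) <= log(1/delta)
        lo, hi = 0.0, p_hat
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if n * kl_bernoulli(p_hat, mid) <= np.log(1.0 / delta):
                hi = mid
            else:
                lo = mid
        if hi >= tau:                            # F(xi) >= tau with high probability
            return xi
    return x[-1]

rng = np.random.default_rng(0)
print(quantile_ucb(rng.normal(size=200), tau=0.9, delta=0.05))
```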
xAUC Metric Where machine-learned predictive risk scores inform high-stakes decisions, such as bail and sentencing in criminal justice, fairness has been a serious concern. Recent work has characterized the disparate impact that such risk scores can have when used for a binary classification task and provided tools to audit and adjust resulting classifiers. This may not account, however, for the more diverse downstream uses of risk scores and their non-binary nature. To better account for this, in this paper, we investigate the fairness of predictive risk scores from the point of view of a bipartite ranking task, where one seeks to rank positive examples higher than negative ones. We introduce the xAUC disparity as a metric to assess the disparate impact of risk scores and define it as the difference in the probabilities of ranking a random positive example from one protected group above a negative one from another group and vice versa. We provide a decomposition of bipartite ranking loss into components that involve the discrepancy and components that involve pure predictive ability within each group. We further provide an interpretation of the xAUC discrepancy in terms of resource allocation fairness and make connections to existing fairness metrics and adjustments. We assess xAUC empirically on datasets in recidivism prediction, income prediction, and cardiac arrest prediction, where it describes disparities that are not evident from simply comparing within-group predictive performance.
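A minimal sketch of the xAUC disparity as defined above, i.e. the probability of ranking a random positive example from group a above a random negative example from group b, minus the same probability with the groups swapped; the variable names and toy data are illustrative:

```python
import numpy as np

def xauc(scores, labels, groups, a, b):
    """P(score of a positive from group a > score of a negative from group b)."""
    pos = scores[(labels == 1) & (groups == a)]
    neg = scores[(labels == 0) & (groups == b)]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()  # ties count as half

def xauc_disparity(scores, labels, groups, a, b):
    return xauc(scores, labels, groups, a, b) - xauc(scores, labels, groups, b, a)

# toy example with two protected groups, 0 and 1
rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = rng.integers(0, 2, 1000)
groups = rng.integers(0, 2, 1000)
print(xauc_disparity(scores, labels, groups, a=0, b=1))
```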
XDATA XDATA is developing an open source software library for big data to help overcome the challenges of effectively scaling to modern data volume and characteristics. The program is developing the tools and techniques to process and analyze large sets of imperfect, incomplete data. Its programs and publications focus on the areas of analytics, visualization, and infrastructure to efficiently fuse, analyze and disseminate these large volumes of data.
X-GAN Image reconstruction, including image restoration and denoising, is a challenging problem in the field of image computing. We present a new method, called X-GANs, for reconstruction of arbitrarily corrupted images based on a variant of conditional generative adversarial networks (conditional GANs). In our method, a novel generator and multi-scale discriminators are proposed, as well as combined adversarial losses, which integrate a VGG perceptual loss, an adversarial perceptual loss, and an elaborate corresponding point loss based on an analysis of image features. Our conditional GANs have enabled a variety of applications in image reconstruction, including image denoising, image restoration from quite sparse sampling, image inpainting, and image recovery from severely polluted blocks or even color-noise-dominated images, extreme cases that have not been addressed by prior work. We have significantly improved the accuracy and quality of image reconstruction. Extensive perceptual experiments on datasets ranging from human faces to natural scenes demonstrate that images reconstructed by the presented approach are considerably more realistic than alternative work. Our method can also be extended to handle high-ratio image compression.
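A schematic sketch (not the authors' implementation) of the kind of combined generator loss described above, with a VGG-based perceptual term and an adversarial term; the layer choice, loss weights, and the use of a plain pixel-wise L1 term in place of the paper's corresponding point loss are all assumptions:

```python
import torch
import torchvision

# frozen VGG19 feature extractor for the perceptual loss (downloads pretrained weights)
vgg = torchvision.models.vgg19(weights="DEFAULT").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    return torch.nn.functional.l1_loss(vgg(fake), vgg(real))

def generator_loss(fake, real, d_fake_logits, w_perc=1.0, w_adv=0.01, w_pix=10.0):
    # adversarial term: push the discriminator's output on fakes toward "real"
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # pixel-wise L1 stands in for the paper's corresponding point loss
    return (w_perc * perceptual_loss(fake, real)
            + w_adv * adv
            + w_pix * torch.nn.functional.l1_loss(fake, real))
```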
xgboost A gradient boosting (GBDT, GBRT or GBM) library for large-scale and distributed machine learning, running on a single node, Hadoop YARN and more.
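A minimal usage sketch with xgboost's scikit-learn interface; the parameters are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```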
xGEM This work proposes xGEMs, or manifold guided exemplars, a framework to understand black-box classifier behavior by exploring the landscape of the underlying data manifold as data points cross decision boundaries. To do so, we train an unsupervised implicit generative model, treated as a proxy to the data manifold. We summarize black-box model behavior quantitatively by perturbing data samples along the manifold. We demonstrate xGEMs' ability to detect and quantify bias in model learning and to characterize changes in model behavior as training progresses.
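A schematic sketch of the manifold-guided perturbation idea: starting from a latent code, move along the data manifold (through a generator G) until a classifier f changes its decision. G, f, the initialization of z and all hyperparameters below are untrained placeholders, not the authors' setup:

```python
import torch

G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
f = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))

x = torch.randn(2)                      # sample to explain
z = torch.zeros(8, requires_grad=True)  # latent code (would be inferred for x)
target = torch.tensor([1])              # class we want to cross into
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    x_hat = G(z)
    # stay close to the original sample while pushing f toward the target class
    loss = (torch.nn.functional.mse_loss(x_hat, x)
            + 0.5 * torch.nn.functional.cross_entropy(f(x_hat).unsqueeze(0), target))
    opt.zero_grad()
    loss.backward()
    opt.step()

print("exemplar:", G(z).detach(), "predicted class:", f(G(z)).argmax().item())
```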
X-Means An extension of K-Means with efficient estimation of the number of clusters.
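A simplified illustration of the idea: instead of fixing k, score several k-means fits with a BIC-style criterion and keep the best. (The actual X-Means algorithm is more efficient, recursively splitting centroids and BIC-testing each split; the spherical-Gaussian BIC below is a rough approximation.)

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
n, d = X.shape

def kmeans_bic(km):
    # BIC under a spherical-Gaussian approximation of the clusters
    k = km.n_clusters
    var = km.inertia_ / ((n - k) * d)
    loglik = -0.5 * n * d * np.log(2 * np.pi * var) - 0.5 * km.inertia_ / var
    n_params = k * (d + 1)
    return n_params * np.log(n) - 2 * loglik

fits = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X) for k in range(2, 10)]
best = min(fits, key=kmeans_bic)
print("estimated number of clusters:", best.n_clusters)
```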
XNLI State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in cross-lingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
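A minimal loading sketch, assuming the copy of XNLI hosted on the Hugging Face datasets hub; the dataset id and the language configuration ('fr') are assumptions about that hub copy, not part of the original release:

```python
from datasets import load_dataset

# fields are premise, hypothesis and label (entailment / neutral / contradiction)
xnli_fr = load_dataset("xnli", "fr")
print(xnli_fr["test"][0])
```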
XNOR Neural Engine Binary Neural Networks (BNNs) are promising to deliver accuracy comparable to conventional deep neural networks at a fraction of the cost in terms of memory and energy. In this paper, we introduce the XNOR Neural Engine (XNE), a fully digital configurable hardware accelerator IP for BNNs, integrated within a microcontroller unit (MCU) equipped with an autonomous I/O subsystem and hybrid SRAM / standard cell memory. The XNE is able to fully compute convolutional and dense layers autonomously or in cooperation with the core in the MCU to realize more complex behaviors. We show post-synthesis results in 65nm and 22nm technology for the XNE IP and post-layout results in 22nm for the full MCU, indicating that this system can drop the energy cost to 21.6 fJ per binary operation at 0.4V, and at the same time is flexible and performant enough to execute state-of-the-art BNN topologies such as ResNet-34 in less than 2.2 mJ per frame at 8.9 fps.
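A toy software illustration (not the XNE hardware) of the operation such accelerators implement: with weights and activations constrained to {-1, +1}, a dot product reduces to an XNOR followed by a popcount.

```python
import numpy as np

rng = np.random.default_rng(0)
a_bits = rng.integers(0, 2, 64, dtype=np.uint8)   # bit 1 encodes +1, bit 0 encodes -1
w_bits = rng.integers(0, 2, 64, dtype=np.uint8)

xnor = 1 - (a_bits ^ w_bits)          # 1 wherever the signs agree
popcount = int(xnor.sum())
dot_via_xnor = 2 * popcount - len(a_bits)

# reference computation with explicit +/-1 values
a = 2 * a_bits.astype(int) - 1
w = 2 * w_bits.astype(int) - 1
assert dot_via_xnor == int(a @ w)
print(dot_via_xnor)
```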
xtensor Here we're laying out a vision for the xtensor project: an n-dimensional array library for the C++ language that makes it easy to write high-performance code and bind it to the languages of data science (Python, Julia and R).
X-TrainCaps Convolutional Neural Networks (CNNs) are extensively used due to their excellent results in various machine learning (ML) tasks like image classification and object detection. Recently, Capsule Networks (CapsNets) have shown improved performance compared to traditional CNNs by encoding and preserving spatial relationships between detected features in a better way. This is achieved through the so-called Capsules (i.e., groups of neurons) that encode both the instantiation probability and the spatial information. However, one of the major hurdles in the wide adoption of CapsNets is their long training time, which is primarily due to the relatively higher complexity of their constituting elements. In this paper, we illustrate how we can devise new optimizations in the training process to achieve fast training of CapsNets, and whether such optimizations affect network accuracy. Towards this, we propose a novel framework, ‘X-TrainCaps’, that employs lightweight software-level optimizations, including a novel learning rate policy called WarmAdaBatch that jointly performs warm restarts and adaptive batch sizing, as well as weight sharing for capsule layers to reduce the hardware requirements of CapsNets by removing unused/redundant connections and capsules, while keeping high accuracy through tests of different learning rate policies and batch sizes. We demonstrate that one of the solutions generated by the X-TrainCaps framework achieves a 58.6% training time reduction while preserving accuracy (even a 0.9% accuracy improvement) compared to the CapsNet in the original paper by Sabour et al. (2017), while other Pareto-optimal solutions can be leveraged to realize trade-offs between training time and achieved accuracy.
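A schematic sketch of a policy combining warm restarts of the learning rate with an adaptive batch size, in the spirit of WarmAdaBatch; the cycle length, learning rate range and batch growth factor below are illustrative assumptions, not the schedule from the paper:

```python
import math

def warm_ada_batch(epoch, cycle_len=10, lr_max=1e-3, lr_min=1e-5,
                   batch_start=64, batch_factor=2):
    """Cosine learning rate cycle that restarts every cycle_len epochs,
    with the batch size growing at each restart."""
    cycle, t = divmod(epoch, cycle_len)
    lr = lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_len))
    batch_size = batch_start * (batch_factor ** cycle)
    return lr, batch_size

for epoch in range(0, 30, 5):
    print(epoch, warm_ada_batch(epoch))
```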
Xu The exponential growth of information on the Internet has created a big challenge for retrieval systems in terms of yielding relevant results. This challenge requires automatic approaches for reformulating or expanding users’ queries to increase recall. Query expansion (QE), a technique for broadening users’ queries by appending additional tokens or phrases based on semantic similarity metrics, plays a crucial role in overcoming this challenge. However, such a procedure increases computational complexity and may introduce unwanted noise into information retrieval. This paper attempts to push the state of the art of QE by developing an automated technique using high-dimensional clustering of word vectors to create effective expansions with reduced noise. We implemented a command line tool, named Xu, and evaluated its performance against a dataset of news articles, concluding that on average, expansions generated using this technique outperform both those generated by previous approaches and the base user query.
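A schematic sketch (not the Xu tool itself) of query expansion via clustering of word vectors: candidate terms near the query in embedding space are clustered, and only terms in the cluster closest to the query are kept, which is meant to reduce expansion noise. The vocabulary and embeddings below are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["economy", "market", "inflation", "stocks", "weather", "rain", "storm"]
vectors = {w: rng.normal(size=50) for w in vocab}      # placeholder embeddings

query_vec = np.mean([vectors["economy"], vectors["market"]], axis=0)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# rank candidate terms by cosine similarity to the query
candidates = sorted(vocab, key=lambda w: -cos(vectors[w], query_vec))[:5]

# cluster the candidates and keep the cluster whose centroid is closest to the query
X = np.stack([vectors[w] for w in candidates])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
best_cluster = max(range(2), key=lambda c: cos(km.cluster_centers_[c], query_vec))
expansion = [w for w, c in zip(candidates, km.labels_) if c == best_cluster]
print(expansion)
```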
Xy Simulating Supervised Learning Data. With Xy() you can conveniently simulate regression data. The simulation can be very specific, since the user has many degrees of freedom. For instance, the functional shape, and hence the polynomial degree of nonlinearity, can be manipulated. Interactions can be formed and (co)variances altered. For a more specific motivation you can visit our blog.