dhSegment
In recent years there have been multiple successful attempts to tackle document processing problems separately by designing task-specific, hand-tuned strategies. We argue that the diversity of historical document processing tasks makes it impractical to solve them one at a time and shows the need for generic approaches that can handle the variability of historical series. In this paper, we address multiple tasks simultaneously, such as page extraction, baseline extraction, layout analysis, and the extraction of several typologies of illustrations and photographs. We propose an open-source implementation of a CNN-based pixel-wise predictor coupled with task-dependent post-processing blocks. We show that a single CNN architecture can be used across tasks with competitive results. Moreover, most of the task-specific post-processing steps can be decomposed into a small number of simple, standard, reusable operations, adding to the flexibility of our approach. …
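As a rough illustration of what such reusable post-processing operations can look like (this is a minimal sketch, not dhSegment's actual API; the function names, threshold, and area filter are assumptions), a pixel-wise probability map from the CNN can be turned into region boxes with a thresholding, connected-components, and filtering pipeline:

```python
# Hypothetical sketch: binarize a CNN probability map, extract connected
# components, and keep bounding boxes of sufficiently large regions.
import numpy as np
from scipy import ndimage

def postprocess(prob_map: np.ndarray, threshold: float = 0.5, min_area: int = 50):
    """Turn a probability map (H x W, values in [0, 1]) into bounding boxes."""
    mask = prob_map > threshold                  # 1. binarization
    labels, _ = ndimage.label(mask)              # 2. connected components
    boxes = []
    for region in ndimage.find_objects(labels):  # 3. per-component bounding slices
        if region is None:
            continue
        h = region[0].stop - region[0].start
        w = region[1].stop - region[1].start
        if h * w >= min_area:                    # 4. drop tiny detections
            boxes.append((region[1].start, region[0].start, w, h))
    return boxes
```

Each step is generic, which is the point the abstract makes: the same small vocabulary of operations can serve page extraction, layout analysis, or illustration detection by changing only the thresholds and filters.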
Extended Isolation Forest
We present an extension to the model-free anomaly detection algorithm Isolation Forest. This extension, named Extended Isolation Forest (EIF), improves the consistency and reliability of the anomaly score produced for a given data point. Using score maps, we show that the standard Isolation Forest produces inconsistent scores. The score maps suffer from an artifact generated by the way the criterion for the branching operation of the binary tree is selected. We propose two different approaches for improving the situation. First, we propose transforming the data randomly before the creation of each tree, which averages out the bias introduced in the algorithm. The second, and preferred, approach is to allow the slicing of the data to use hyperplanes with random slopes. This approach results in improved score maps. We show that the consistency and reliability of the algorithm are much improved using this method by looking at the variance of scores of data points distributed along constant-score lines. We find no appreciable difference in the rate of convergence or in computation time between the standard Isolation Forest and EIF, which highlights its potential as an anomaly detection algorithm. …
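The key change is in the branching step. A minimal sketch of a random-slope hyperplane split, under the assumption that the normal vector is drawn from a standard normal distribution and the intercept uniformly within the data range (the function name is illustrative, not the authors' code):

```python
# Sketch of an EIF-style node split: instead of picking a single feature and a
# split value, draw a random normal vector n and intercept point p, and route
# each point x by the sign of (x - p) . n.
import numpy as np

def random_hyperplane_split(X: np.ndarray, rng: np.random.Generator):
    """Split points X (n_samples x n_features) with a random-slope hyperplane."""
    normal = rng.normal(size=X.shape[1])                   # random slope
    intercept = rng.uniform(X.min(axis=0), X.max(axis=0))  # random point in the data range
    go_left = (X - intercept) @ normal <= 0                # side of the hyperplane
    return X[go_left], X[~go_left]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
left, right = random_hyperplane_split(X, rng)
```

Because the cut is no longer forced to be axis-parallel, the rectangular artifacts that produce the inconsistent score maps of the standard Isolation Forest are averaged away.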
ShiftCNN
In this paper we introduce ShiftCNN, a generalized low-precision architecture for inference of multiplierless convolutional neural networks (CNNs). ShiftCNN is based on a power-of-two weight representation and, as a result, performs only shift and addition operations. Furthermore, ShiftCNN substantially reduces the computational cost of convolutional layers by precomputing convolution terms. Such an optimization can be applied to any CNN architecture with a relatively small codebook of weights and decreases the number of product operations by at least two orders of magnitude. The proposed architecture targets custom inference accelerators and can be realized on FPGAs or ASICs. Extensive evaluation on ImageNet shows that state-of-the-art CNNs can be converted without retraining into ShiftCNN with less than a 1% drop in accuracy when the proposed quantization algorithm is employed. RTL simulations, targeting modern FPGAs, show that the power consumption of convolutional layers is reduced by a factor of 4 compared to conventional 8-bit fixed-point architectures. …
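To make the power-of-two idea concrete, here is a simplified sketch that maps each weight to a single signed power of two (nearest in the log domain), so that a multiplication becomes a bit shift. The paper's actual codebook and quantization algorithm are richer than this; the exponent range and function name are assumptions for illustration only:

```python
# Illustrative single-term power-of-two quantization (not the paper's exact algorithm).
import numpy as np

def quantize_power_of_two(w: np.ndarray, min_exp: int = -7, max_exp: int = 0):
    """Map each weight to sign * 2**e, e clipped to [min_exp, max_exp]; zeros stay zero."""
    sign = np.sign(w)
    mag = np.abs(w)
    exp = np.clip(np.round(np.log2(np.where(mag > 0, mag, 1.0))), min_exp, max_exp)
    return np.where(mag > 0, sign * np.exp2(exp), 0.0)

w = np.array([0.31, -0.07, 0.0, 0.9])
print(quantize_power_of_two(w))   # e.g. [ 0.25  -0.0625  0.  1. ]
```

With all weights restricted to such a small codebook, the products between inputs and codebook entries can be precomputed once per input, which is where the reported reduction in product operations comes from.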
FRESH
Massive datasets of curves, such as time series and trajectories, are continuously generated by mobile and sensing devices. A relevant operation on curves is similarity search: given a dataset $S$ of curves, construct a data structure that, for any query curve $q$, finds the curves in $S$ similar to $q$. Similarity search is a computationally demanding task, in particular when a robust distance function is used, such as the continuous Fréchet distance. In this paper, we propose FRESH, a novel approximate solution for finding similar curves under the continuous Fréchet distance. FRESH leverages a locality-sensitive hashing scheme to detect candidate near neighbors of the query curve, followed by a pruning step based on a pipeline of curve simplifications. By relaxing the requirement of exact and deterministic solutions, FRESH reaches high performance and outperforms state-of-the-art approaches. The experiments indeed show that, with recall larger than 80% and precision of 100%, we obtain at least a factor 10 improvement in performance over a baseline given by the best solutions developed for the ACM SIGSPATIAL 2017 challenge on the Fréchet distance. Furthermore, the improvement reaches up to two orders of magnitude, and even more, when the precision requirement is relaxed. …
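The abstract does not spell out the hashing scheme, but one common family of locality-sensitive hashes for the Fréchet distance snaps curve vertices to a randomly shifted grid and deduplicates consecutive cells; curves that produce the same cell sequence become hash collisions and thus candidate near neighbors. The sketch below illustrates that general idea only; the cell size, shift, and function name are assumptions, not FRESH's implementation:

```python
# Sketch of a grid-snapping curve hash: curves with the same snapped,
# deduplicated vertex sequence collide and are reported as candidates.
import numpy as np

def curve_hash(curve: np.ndarray, cell: float, shift: np.ndarray) -> tuple:
    """Hash a curve (n_points x d) by snapping vertices to a shifted grid of side `cell`."""
    snapped = np.floor((curve + shift) / cell).astype(int)
    keys = []
    for p in map(tuple, snapped):
        if not keys or keys[-1] != p:   # drop consecutive duplicates
            keys.append(p)
    return tuple(keys)

rng = np.random.default_rng(0)
cell = 1.0
shift = rng.uniform(0, cell, size=2)    # one random shift per hash function
q = np.array([[0.1, 0.2], [0.9, 1.1], [2.0, 2.1]])
print(curve_hash(q, cell, shift))
```

Candidates retrieved this way would then go through the exact or simplified distance checks in the pruning pipeline that the abstract describes.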