Intra- and Inter-epoch Temporal Context Network (IITNet)
This study proposes a novel deep learning model, called IITNet, to learn intra- and inter-epoch temporal contexts from a raw single-channel electroencephalogram (EEG) for automatic sleep stage scoring. When sleep experts identify the sleep stage of a 30-second polysomnography (PSG) segment called an epoch, they investigate sleep-related events such as sleep spindles, K-complexes, and frequency components in local segments of an epoch (sub-epochs), and consider the relations between sleep-related events of successive epochs to follow the transition rules. Inspired by this, IITNet learns to encode each sub-epoch into a representative feature via a deep residual network, and then captures contextual information in the sequence of representative features via a bidirectional LSTM (BiLSTM). Thus, IITNet can extract features at the sub-epoch level and consider temporal context not only between epochs but also within an epoch. IITNet is an end-to-end architecture and does not require any preprocessing, handcrafted feature design, balanced sampling, pre-training, or fine-tuning. Our model was trained and evaluated on the Sleep-EDF and MASS datasets and outperformed other state-of-the-art results on both datasets, with an overall accuracy (ACC) of 84.0% and 86.6%, macro F1-score (MF1) of 77.7 and 80.8, and Cohen's kappa of 0.78 and 0.80 on Sleep-EDF and MASS, respectively. …
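To make the two-stage design concrete, here is a minimal PyTorch sketch of the IITNet idea: a CNN encodes each sub-epoch into a feature vector, and a BiLSTM models temporal context over the resulting sequence. The layer sizes, sub-epoch length, and the plain convolutional encoder (standing in for the paper's deep residual network) are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of the IITNet idea: encode each sub-epoch of raw
# single-channel EEG with a CNN, then model intra-/inter-epoch temporal
# context over the feature sequence with a BiLSTM. Layer sizes, the
# sub-epoch length, and the plain conv encoder (vs. the paper's deep
# residual network) are illustrative assumptions.
import torch
import torch.nn as nn

class IITNetSketch(nn.Module):
    def __init__(self, n_stages=5, feat_dim=128, hidden=64):
        super().__init__()
        # Sub-epoch encoder: stands in for the paper's deep ResNet.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=7, stride=2, padding=3),
            nn.AdaptiveAvgPool1d(1),  # one feature vector per sub-epoch
        )
        # BiLSTM captures temporal context across the sub-epoch sequence,
        # both within an epoch and between successive epochs.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_stages)

    def forward(self, x):
        # x: (batch, n_subepochs, subepoch_samples), raw EEG, no preprocessing
        b, s, t = x.shape
        z = self.encoder(x.reshape(b * s, 1, t)).squeeze(-1)  # (b*s, feat_dim)
        z = z.reshape(b, s, -1)                               # feature sequence
        out, _ = self.bilstm(z)
        # Score the last element of the sequence (the target epoch).
        return self.classifier(out[:, -1])

# Example: 4 sequences of 10 sub-epochs, 600 samples each.
logits = IITNetSketch()(torch.randn(4, 10, 600))
print(logits.shape)  # torch.Size([4, 5])
```

Classifying from the BiLSTM output at the final sub-epoch means the earlier epochs in the sequence serve purely as context for the target epoch's stage.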

K-separable GGM
In high-dimensional graph learning problems, some topological properties of the graph, such as bounded node degree or tree structure, are typically assumed to hold so that the sample complexity of recovering the graph structure can be reduced. With bounded-degree or separability assumptions, quantified by a measure $k$, a $p$-dimensional Gaussian graphical model (GGM) can be learnt with sample complexity $\Omega(k \log p)$. Our work in this paper aims to do away with these assumptions by introducing an algorithm that can identify whether a GGM indeed has these topological properties, without any initial topological assumptions. We show that we can check whether a GGM has node degree bounded by $k$ with sample complexity $\Omega(k \log p)$. More generally, we introduce the notion of a strongly $k$-separable GGM, and show that our algorithm can decide whether a GGM is strongly $k$-separable or not with sample complexity $\Omega(k \log p)$. We introduce the notion of a generalized feedback vertex set (FVS), an extension of the typical FVS, and show that we can use this identification technique to learn GGMs with generalized FVSs. …
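As a rough illustration of what a degree-bound check looks like in practice (not the paper's identification algorithm), the sketch below estimates a GGM's edge set from samples via sklearn's graphical lasso and tests whether every node's degree is at most $k$; the estimator, regularization strength, edge threshold, and sample size are all assumptions for demonstration.

```python
# Rough illustration of a degree-bound check: estimate the GGM edge set
# from samples (here via graphical lasso; the paper's identification
# algorithm differs) and test whether max node degree <= k. The
# regularization strength, edge threshold, and sample size are
# illustrative assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

def has_degree_at_most_k(samples, k, alpha=0.1, tol=1e-4):
    """Return (within_bound, degrees): True if estimated max degree <= k."""
    model = GraphicalLasso(alpha=alpha).fit(samples)
    precision = model.precision_
    # Nonzero off-diagonal precision entries correspond to GGM edges.
    edges = (np.abs(precision) > tol) & ~np.eye(precision.shape[0], dtype=bool)
    degrees = edges.sum(axis=1)
    return bool(degrees.max() <= k), degrees

# Example: a chain-structured GGM, so every node has degree <= 2.
rng = np.random.default_rng(0)
p = 10
prec = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=2000)
within_bound, degrees = has_degree_at_most_k(X, k=2)
print(within_bound, degrees)
```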

DiamondGAN
Recent studies on medical image synthesis have reported promising results using generative adversarial networks, mostly focusing on one-to-one cross-modality synthesis. Naturally, the idea arises that a target modality would benefit from multi-modal input. Synthesizing MR imaging sequences is highly attractive for clinical practice, as single sequences are often missing or of poor quality (e.g., due to motion). However, existing methods fail to scale up to image volumes with high numbers of modalities and extensive non-aligned volumes, facing common drawbacks of complex multi-modal imaging sequences. To address these limitations, we propose a novel, scalable, and multi-modal approach called DiamondGAN. Our model is capable of performing flexible, non-aligned cross-modality synthesis and data infill when given multiple modalities or any of their arbitrary subsets. It learns structured information from non-aligned input modalities in an end-to-end fashion. We synthesize two MRI sequences with clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), which are reconstructed from three common MRI sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish our synthetic DIR images from real ones. …
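The sketch below shows one plausible way to realize the "arbitrary subset" conditioning in PyTorch: source modalities are stacked with a binary availability mask so a single generator can accept any subset of inputs. The shallow convolutional generator, 2D slices, and masking scheme are simplifying assumptions, not DiamondGAN's published architecture or training procedure.

```python
# Sketch of flexible multi-modal conditioning: stack the source modalities
# with a binary availability mask so one generator can accept any subset of
# inputs and synthesize a target sequence (e.g., DIR or T1-c). The shallow
# conv generator and 2D slices are simplifying assumptions, not
# DiamondGAN's actual architecture or training procedure.
import torch
import torch.nn as nn

class MultiModalGenerator(nn.Module):
    def __init__(self, n_modalities=3):
        super().__init__()
        # Input: n_modalities image channels + n_modalities mask channels.
        self.net = nn.Sequential(
            nn.Conv2d(2 * n_modalities, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, images, available):
        # images: (batch, n_modalities, H, W); available: (batch, n_modalities)
        mask = available[:, :, None, None].expand_as(images)
        x = torch.cat([images * mask, mask], dim=1)  # zero out missing inputs
        return self.net(x)

# Example: batch of 2 slices where the second input modality is missing.
gen = MultiModalGenerator()
imgs = torch.randn(2, 3, 64, 64)
avail = torch.tensor([[1., 1., 1.], [1., 0., 1.]])
fake = gen(imgs, avail)  # synthetic target-modality slices, (2, 1, 64, 64)
print(fake.shape)
```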

Regularization by Denoising (RED)
Regularization by denoising (RED), proposed by Romano, Elad, and Milanfar, is a powerful new image-recovery framework that aims to construct an explicit regularization objective from a plug-in image-denoising function. Evidence suggests that the RED algorithms are, indeed, state-of-the-art. However, a closer inspection suggests that explicit regularization may not explain the workings of these algorithms. …
Regularization by Denoising: Clarifications and New Interpretations
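For reference, here is a toy NumPy sketch of the gradient-descent form of RED: under the conditions assumed by Romano et al. (e.g., a denoiser with symmetric Jacobian and local homogeneity), the regularizer $\rho(x) = \tfrac{1}{2} x^\top (x - f(x))$ has gradient $x - f(x)$, which yields the update below. The Gaussian-blur denoiser, step size, and regularization weight are toy assumptions standing in for a real plug-in denoiser.

```python
# Toy NumPy sketch of the gradient-descent form of RED. Under the conditions
# assumed by Romano et al. (Jacobian symmetry and local homogeneity of the
# denoiser f), rho(x) = 0.5 * x^T (x - f(x)) has gradient x - f(x), giving
# the update below. The Gaussian-blur denoiser, step size, and weight are
# toy assumptions standing in for a real plug-in denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(x):
    return gaussian_filter(x, sigma=1.0)  # plug-in denoiser f(x)

def red_gradient_descent(y, lam=0.5, mu=0.2, n_iters=100):
    """Recover x from noisy y by minimizing 0.5*||x - y||^2 + lam*rho(x)."""
    x = y.copy()
    for _ in range(n_iters):
        grad = (x - y) + lam * (x - denoise(x))  # data fit + RED "gradient"
        x = x - mu * grad
    return x

# Example: restore a noisy ramp image and compare mean-squared errors.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 64), np.ones(64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = red_gradient_descent(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))
```

The clarification at issue is precisely that, for practical denoisers, $x - f(x)$ need not be the gradient of any explicit regularizer, even though iterations of this form work well in practice.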