ViPIOS
For an increasing number of data-intensive scientific applications, parallel I/O concepts are a major performance issue. Tackling this issue, we develop an input/output system designed for highly efficient, scalable and conveniently usable parallel I/O on distributed memory systems. The main focus of this research is the parallel I/O runtime system support provided for software-generated programs produced by parallelizing compilers in the context of High Performance FORTRAN efforts. Specifically, our design targets the Vienna Fortran Compilation System. In our research project we investigate the I/O problem from a runtime system support perspective. We focus on the design of an advanced parallel I/O support, called ViPIOS (VIenna Parallel I/O System), to be targeted by language compilers supporting the same programming model as High Performance Fortran (HPF). The ViPIOS design is partly influenced by the concepts of parallel database technology. At the beginning of the project we developed a formal model, which forms the theoretical framework on which the ViPIOS design is based. This model describes the mapping of the problem-specific data space, starting from the application program data structures, down to the physical layout on disk across several intermediate representation levels. Based on this formal model we designed and developed the ViPIOS I/O runtime system, which provides support for parallel access to files for read/write operations, optimization of the data layout on disks, redistribution of data stored on disks, communication of out-of-core (OOC) data, and many optimizations, including data prefetching from disks based on access-pattern knowledge extracted from the program by the compiler or provided by a user specification. …
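A minimal sketch of the kind of coordinated parallel file access such a runtime provides, assuming mpi4py's MPI-IO bindings as a stand-in (ViPIOS exposes its own interface, which the abstract does not show): each process writes its block of a distributed array to a disjoint region of one shared file.

```python
# Illustrative only: mpi4py's MPI-IO stands in for the general idea of
# parallel read/write access on a distributed-memory system.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process owns a contiguous block of a distributed (out-of-core friendly) array.
block = np.full(1024, rank, dtype=np.float64)

# All processes write their blocks to disjoint regions of a single shared file in parallel.
fh = MPI.File.Open(comm, "distributed_array.bin",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * block.nbytes
fh.Write_at_all(offset, block)   # collective write: every rank participates
fh.Close()
```

Run with, e.g., `mpiexec -n 4 python write_blocks.py`; a runtime such as ViPIOS would additionally decide the physical data layout and prefetch blocks based on compiler-supplied access patterns.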
Playgol
Children learn through play. We introduce the analogous idea of learning programs through play. In this approach, a program induction system (the learner) is given a set of tasks and initial background knowledge (BK). Before solving the tasks, the learner enters an unsupervised playing stage where it creates its own tasks to solve, tries to solve them, and saves any solutions (programs) to the background knowledge. After the playing stage is finished, the learner enters the supervised building stage, where it tries to solve the user-supplied tasks and can reuse solutions learnt whilst playing. The idea is that playing allows the learner to discover reusable general programs on its own, which can then help solve the user-supplied tasks. We claim that playing can improve learning performance. We show that playing can reduce the textual complexity of target concepts, which in turn reduces the sample complexity of a learner. We implement our idea in Playgol, a new inductive logic programming system. We experimentally test our claim on two domains: robot planning and real-world string transformations. Our experimental results suggest that playing can substantially improve learning performance. We think that the idea of playing (or, more verbosely, unsupervised bootstrapping for supervised program induction) is an important contribution to the problem of developing program induction approaches that self-discover BK. …
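A hypothetical sketch of the two-stage play/build loop, under stated assumptions: `induce` and `sample_play_tasks` are stubs standing in for the actual Prolog-based learner and task generator, which the abstract does not specify.

```python
# Sketch of Playgol's playing and building stages (stubs, not the real system).

def induce(task, background):
    """Stub: try to synthesise a program solving `task` from `background`."""
    ...  # the real learner searches for a hypothesis program here
    return None

def sample_play_tasks(n):
    """Stub: invent n self-posed tasks (e.g. random input/output pairs)."""
    return [f"play_task_{i}" for i in range(n)]

def playgol(user_tasks, background, num_play_tasks=100):
    # Playing stage (unsupervised): invent tasks, solve them, keep the solutions as BK.
    for task in sample_play_tasks(num_play_tasks):
        program = induce(task, background)
        if program is not None:
            background = background | {program}   # solutions become reusable BK

    # Building stage (supervised): solve the user-supplied tasks, reusing the play BK.
    return {task: induce(task, background) for task in user_tasks}
```

The design point is simply that anything learnt during play is added to the background knowledge before the supervised tasks are attempted, which is what reduces their textual (and hence sample) complexity.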
Lightweight Feature Fusion Network (LFFN)
Single image super-resolution (SISR) has witnessed great progress as convolutional neural networks (CNNs) get deeper and wider. However, their enormous parameter counts hinder application to real-world problems. In this letter, we propose a lightweight feature fusion network (LFFN) that can fully explore multi-scale contextual information and greatly reduce network parameters while maximizing SISR results. LFFN is built on spindle blocks and a softmax feature fusion module (SFFM). Specifically, a spindle block is composed of a dimension extension unit, a feature exploration unit and a feature refinement unit. The dimension extension unit expands low-dimensional features to a higher dimension and implicitly learns feature maps suitable for the next unit. The feature exploration unit performs linear and nonlinear feature exploration tailored to different feature maps. The feature refinement unit is used to fuse and refine features. The SFFM fuses the features from different modules in a self-adaptive learning manner using the softmax function, making full use of hierarchical information at a small parameter cost. Both qualitative and quantitative experiments on benchmark datasets show that LFFN achieves favorable performance against state-of-the-art methods with a similar number of parameters. …
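A hedged PyTorch sketch of the softmax feature fusion idea: feature maps from different blocks are combined with softmax-normalised learned weights, so fusion costs only a handful of extra parameters. The tensor shapes and the placement inside LFFN below are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class SoftmaxFeatureFusion(nn.Module):
    """Fuse several same-shaped feature maps with softmax-normalised scalar weights."""

    def __init__(self, num_inputs: int):
        super().__init__()
        # One learnable scalar per incoming feature map set.
        self.weights = nn.Parameter(torch.zeros(num_inputs))

    def forward(self, features):              # features: list of [N, C, H, W] tensors
        w = torch.softmax(self.weights, dim=0)
        return sum(w[i] * f for i, f in enumerate(features))

# Usage: fuse the outputs of three (hypothetical) spindle blocks.
fusion = SoftmaxFeatureFusion(num_inputs=3)
feats = [torch.randn(1, 64, 32, 32) for _ in range(3)]
out = fusion(feats)                            # shape [1, 64, 32, 32]
```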
AI Alignment
We face the challenge of controlling a machine which thinks in an entirely different way to us and may well be far more intelligent than we are. Even if we have a perfect solution to the control problem, we are left with a second problem: what should we ask the AI to do, think and value? This problem is AI alignment. …