Research Support

Support for Supertech is provided in part by the National Science Foundation (NSF), the National Security Agency (NSA), and Foxconn under the following grants:

  • NSF CCF-1314547 (SHF: AF: Large: Collaborative Research: Parallelism without Concurrency).

The widespread deployment of parallel machines --- from multicores to supercomputers --- has made it critical to develop simple approaches to programming them. Significant progress has been made in simplifying parallel programming by developing programming models that support parallelism without concurrency, that is, without the nondeterminacies in program logic caused by the relative and nondeterministic timing of communicating processes. Yet most parallel programs in practice are concurrent, and hence nondeterministic, leading to code that can be programmed and understood only by experts. This research project aims to make parallel computers easier to use for the vast majority of programmers by developing software technology that enables deterministic parallel computing.

The project takes a holistic view of the problem from the key perspectives of programming linguistics, software systems, algorithmic analysis, and absolute performance. It acknowledges the reality that parallel programming cannot be fully deterministic at every level of abstraction. It is pursuing three key strategies for dealing with concurrency: encapsulating concurrency so that it is hidden by layered abstractions at appropriate abstraction levels, avoiding concurrency by restructuring programs to employ deterministic approaches, and managing concurrency when it is impractical to either encapsulate or avoid concurrency completely. Among the specific techniques being studied are commutative building blocks, deterministic nonassociative reducers, deterministic pipelined parallelism, deterministic interfaces, and generalized race detection for detecting invariant races. The project is developing open-source libraries, tools, and runtime extensions integrated into a multicore-software platform, as well as a problem-based benchmark suite to compare approaches.
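
As a minimal illustration of the encapsulation strategy, consider a parallel reduction whose nondeterministic scheduling is hidden behind a deterministic interface. The sketch below uses standard C++ parallel algorithms rather than the project's own libraries and runtime extensions, so it should be read as an analogy rather than project code:

    // Illustrative sketch only: standard C++ parallel algorithms, not the
    // project's libraries. The nondeterministic thread scheduling is
    // encapsulated behind a reduction interface, so the result is the same
    // on every run.
    #include <execution>
    #include <iostream>
    #include <numeric>
    #include <vector>

    long long deterministic_sum(const std::vector<long long>& values) {
        // std::reduce may split the range and combine partial sums in any
        // order across threads; integer addition is associative and
        // commutative, so every schedule yields the same answer.
        return std::reduce(std::execution::par, values.begin(), values.end(), 0LL);
    }

    int main() {
        std::vector<long long> v(1'000'000);
        std::iota(v.begin(), v.end(), 1);           // 1, 2, ..., 1,000,000
        std::cout << deterministic_sum(v) << '\n';  // always 500000500000
    }

For a nonassociative operation such as floating-point addition, the combining order would leak through this interface, which is one reason deterministic nonassociative reducers appear among the techniques listed above.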

  • NSF CNS-1409238 (CSR: Medium: Collaborative Research: FTFS: A Read/Write-Optimized Fractal Tree File System).

Modern, general-purpose file systems offer poor performance on microdata operations, such as file creation and destruction, small writes to large files, and metadata updates, yet these operations are pervasive on today's computer systems. Underlying this problem are fundamental limitations of the data structures used to organize data on disk. This project will explore the practical efficacy of a recently discovered category of data structures, called write-read-optimized (WRO) data structures, which have the potential to improve microdata performance dramatically without sacrificing good performance on other types of operations. The project will bring together a team of experts from theory and systems who can carry cutting-edge algorithmic advances into operating system (OS) design. To this end, the team will build a general-purpose file system for Linux, called FTFS, that uses WRO data structures.

Work of this nature has the potential to eliminate the current trade-off between data locality on disk and small-write performance. This project observes that WRO data structures, such as B^epsilon trees and fractal tree indexes, can give asymptotic behavior comparable to a B-tree for queries and bulk updates, as well as support small updates with performance close to logging. Preliminary work demonstrates that these asymptotic benefits translate to real performance improvements: up to two orders of magnitude faster than a traditional B-tree for some operations. Modern operating systems make assumptions about how file systems are designed, and these assumptions induce extra lookups during update operations (called cryptoreads). Cryptoreads cause update operations to block on lookups, thus throttling the faster updates that WRO data structures provide. The project will investigate OS support for WRO data structures, as well as redesign WRO data structures to support the operations of a fully featured file system.
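
To make the write-optimization idea concrete, the following deliberately simplified sketch (a hypothetical, in-memory toy, not FTFS or a real B^epsilon tree) shows updates being buffered and applied in batches, so that many small writes share one pass over the underlying structure while queries must also consult the buffer:

    // Hypothetical toy code (in-memory, one level deep), not FTFS itself:
    // a sketch of the write-optimization idea behind B^epsilon trees and
    // fractal tree indexes.
    #include <cstddef>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    class ToyBufferedTree {
        static constexpr std::size_t kBufferLimit = 64;    // flush threshold
        std::vector<std::pair<int, std::string>> buffer_;  // pending upserts
        std::map<int, std::string> leaves_;                // stand-in for child nodes

        void flush() {
            // One batched pass applies many buffered messages at once; a real
            // B^epsilon tree instead pushes messages one level down the tree.
            for (auto& [key, value] : buffer_) leaves_[key] = std::move(value);
            buffer_.clear();
        }

    public:
        void upsert(int key, std::string value) {
            buffer_.emplace_back(key, std::move(value));  // cheap, log-like append
            if (buffer_.size() >= kBufferLimit) flush();  // amortized batch write
        }

        // Queries must consult the buffer as well as the leaves; this is the
        // read cost that WRO designs keep close to a B-tree's.
        const std::string* find(int key) const {
            for (auto it = buffer_.rbegin(); it != buffer_.rend(); ++it)
                if (it->first == key) return &it->second;
            auto it = leaves_.find(key);
            return it == leaves_.end() ? nullptr : &it->second;
        }
    };

A real B^epsilon tree or fractal tree index maintains such buffers at every level of an on-disk tree and sizes them so that queries remain asymptotically comparable to a B-tree's, which is the trade-off described above.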

The ultimate goal is technology transfer and practical adoption. The effort will advance the current state of the art in file system and operating system design. Computers are a fundamental part of our society, with desktops and laptops permeating schools and workplaces, individuals carrying at least one mobile device, and scientists driving new discovery with supercomputers. File systems are the backbone of these computing platforms, and improvements to the efficiency of a general-purpose file system can improve the efficiency of our national cyber-infrastructure, as well as reintroduce flexibility into the storage stack needed to adapt to rapidly evolving devices.

  • NSF IIS-1447786 (BIGDATA: IA: DKA: Collaborative Research: High-Throughput Connectomics).

Connectomics is the science of mapping the connectivity between neuronal structures to help us understand how brains work. Using the analogy of astronomy, connectomics researchers wish to build 'telescopes' that will allow scientists to accurately view the brain. However, as in astronomy, the raw data collected by microtomes and electron microscopes, the instruments of connectomics, is too large to store effectively, and must be analyzed at very high computation rates. Our goal is to research, develop, and deploy a software architecture that enables high-throughput analysis of connectomics data at the speed at which it is being acquired. We will develop the first computational infrastructure to support high-throughput connectomics without human intervention. If successful, this system will allow for the first time the mapping of a cortical column of a small mammalian brain (1 cubic millimeter), and hopefully within a few years the mapping of significant sections of a mammalian cortex.

The solution to the big data problem of connectomics is a new high-throughput connectomics software architecture that we call MapRecurse. MapRecurse, so named because it bears some resemblance to the widely used MapReduce framework, will provide a unified way of specifying sequences of computational steps and validation tests to be applied to the collected data. Key to MapRecurse will be the ability to lay out data and computation in a structured way that preserves locality. Using it, programmers will be able to apply fast, less accurate segmentation algorithms to low resolutions of the data in order to quickly compute a first version of the output neural network graph. Domain-specific graph-theoretic methods will then check the correctness of the graph and identify areas of inconsistency that need further refinement. MapRecurse will then apply bottom-up, local processing with slower, more accurate segmentation and reconstruction algorithms to higher resolutions of the data, verifying and correcting any errors. The iterations progress recursively and in parallel across multiple cores, giving the approach its name. We believe that MapRecurse and the data structures and algorithms developed here will find uses in other high-throughput domains, such as astronomy, biology, social media, or economics.
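
The coarse-to-fine control flow described above can be sketched as a recursive refinement loop. The code below is purely illustrative: MapRecurse is the project's proposed framework, and the names Region, segment_fast, segment_accurate, is_consistent, subdivide, and merge are placeholders standing in for real segmentation kernels and graph-consistency checks:

    // Purely illustrative placeholder code, not MapRecurse itself.
    #include <vector>

    struct Region { int level = 0; };            // a tile of the volume at some resolution
    struct Graph  { bool consistent = false; };  // partial connectivity graph for a region

    Graph segment_fast(const Region&)     { return {}; }      // cheap, low-resolution pass
    Graph segment_accurate(const Region&) { return {true}; }  // slow, high-resolution pass
    bool  is_consistent(const Graph& g)   { return g.consistent; }
    std::vector<Region> subdivide(const Region& r) {          // refine to the next level
        return std::vector<Region>(4, Region{r.level + 1});
    }
    Graph merge(const std::vector<Graph>&) { return {true}; }

    // Coarse-to-fine refinement: compute a quick first answer, validate it, and
    // recurse only on regions that fail validation, at the next-higher resolution.
    Graph refine(const Region& region, int max_level) {
        Graph g = segment_fast(region);
        if (is_consistent(g) || region.level >= max_level) return g;
        std::vector<Graph> parts;
        for (const Region& child : subdivide(region)) {
            // Subregions are independent, so these calls could run as
            // parallel tasks across multiple cores.
            parts.push_back(child.level >= max_level ? segment_accurate(child)
                                                     : refine(child, max_level));
        }
        return merge(parts);
    }

    int main() { refine(Region{}, 3); }

Because the subregions are independent, the recursive calls are natural units of parallelism across cores, matching the parallel, recursive iteration described above.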

  • NSA H98230-14-C-1424 (Supercloud: A Unified Approach to Compute, Big Data, Databases and Virtualized Computing).

Previous NSF grants have included 0937680, 1162148, 1017058, 0324974, 0305606, 0540248, 0541209, 0621511, and 0615215.