Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
- Keywords: Shared-Memory Parallelism, Parallel Graph Frameworks, Deterministic Parallel Algorithms
- Categories: Computers & Internet
- Language: English (Translation Services Available)
- Publication Place: United States
- Publication Date: June 2017
- Pages: 445
- Retail Price: 89.95 USD
- Size: (Unknown)
- Text Color: (Unknown)
- Words: (Unknown)
Request for Review Sample
By submitting this form through our website, you are requesting a review copy of this book. If the request is approved, you may read the electronic edition of the book online.
Special Note:
By submitting this request you agree to inquire about this book only through RIGHTOL, and you undertake, for a period of 18 months, not to inquire about it through any third party, including but not limited to the author, publishers, and other rights agencies. Otherwise we reserve the right to terminate your use of Rights Online and our cooperation, and to require a penalty of no less than 1,000 US dollars.
Feature
★ Proposes a three-pronged approach to shared-memory parallelism, covering programming techniques, frameworks, and algorithm design, lowering the difficulty of parallel program development and easing the technological transition to the multicore era.
★ Pioneers the parallel graph traversal framework Ligra and its extension Ligra+, featuring concise code and outstanding performance: speedups of up to orders of magnitude over existing distributed-memory systems, with advantages in both space and performance.
Description
The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel; together these lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is the first graph processing system to support in-memory graph compression.
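To give a sense of the concise, frontier-based style of code that Ligra enables, the following is a minimal sequential Python sketch of its two core abstractions, a vertex subset and an edge map, used to express breadth-first search. This is an illustration only: Ligra itself is a parallel C++ framework, and the names `edge_map` and `bfs` here are hypothetical stand-ins for its `edgeMap`/`vertexSubset` primitives.

```python
# Sequential Python sketch of Ligra's frontier-based abstraction (illustrative;
# the real framework is parallel C++ with atomic compare-and-swap updates).

def edge_map(graph, frontier, update, condition):
    """Apply `update` along edges out of the frontier; return the new frontier
    of vertices for which `condition` held and `update` succeeded."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if condition(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    """Breadth-first search expressed as repeated edge maps over a frontier."""
    parents = {v: None for v in graph}
    parents[source] = source
    frontier = {source}
    while frontier:  # iterate until no new vertices are reached
        def update(u, v):
            parents[v] = u               # claim v (a CAS in the parallel version)
            return True
        def condition(v):
            return parents[v] is None    # only visit unvisited vertices
        frontier = edge_map(graph, frontier, update, condition)
    return parents

g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(g, 0))
```

The traversal logic fits in a few lines because the framework, not the application, owns the loop over edges; different algorithms plug in different `update` and `condition` functions.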
The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.
This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.