
General-Purpose Graphics Processor Architectures

  • Computer Architecture
  • Categories: Computers & Internet
  • Language: English (Translation Services Available)
  • Publication Date: May 2018
  • Pages: 140
  • Retail Price: (Unknown)
  • Size: 190 mm × 234 mm
  • Words: (Unknown)
  • Text Color: Black and white

Request for Review Sample

Through our website, you may submit an application to evaluate this book. If the application is approved, you may read the electronic edition of this book online.

Copyright Usage Application

Special Note:
By submitting this request, you agree to inquire about this title through RIGHTOL and undertake, for a period of 18 months, not to inquire about it through any other third party, including but not limited to authors, publishers, and other rights agencies. Otherwise, we reserve the right to terminate your use of Rights Online and our cooperation, and to require a penalty of no less than 1,000 US dollars.


Description

Originally developed to support video games, graphics processor units (GPUs) are now increasingly used for general-purpose (non-graphics) applications ranging from machine learning to mining of cryptographic currencies. GPUs can achieve improved performance and efficiency versus central processing units (CPUs) by dedicating a larger fraction of hardware resources to computation. In addition, their general-purpose programmability makes contemporary GPUs appealing to software developers in comparison to domain-specific accelerators. This book provides an introduction for those interested in studying the architecture of GPUs that support general-purpose computing. It collects together information currently found only among a wide range of disparate sources. The authors led the development of the GPGPU-Sim simulator, which is widely used in academic research on GPU architectures.

The first chapter of this book describes the basic hardware structure of GPUs and provides a brief overview of their history. Chapter 2 provides a summary of GPU programming models relevant to the rest of the book. Chapter 3 explores the architecture of GPU compute cores. Chapter 4 explores the architecture of the GPU memory system. After describing the architecture of existing systems, Chapters 3 and 4 provide an overview of related research. Chapter 5 summarizes cross-cutting research impacting both the compute core and memory system.

This book should provide a valuable resource for those wishing to understand the architecture of graphics processor units (GPUs) used for the acceleration of general-purpose applications, and for those who want an introduction to the rapidly growing body of research exploring how to improve the architecture of these GPUs.

Authors

Tor M. Aamodt, University of British Columbia
Tor M. Aamodt is a Professor in the Department of Electrical and Computer Engineering at the University of British Columbia, where he has been a faculty member since 2006. His current research focuses on the architecture of general-purpose GPUs and energy-efficient computing, most recently including accelerators for machine learning. Along with students in his research group, he developed the widely used GPGPU-Sim simulator. Three of his papers have been selected as "Top Picks" by IEEE Micro Magazine, and a fourth was selected as a "Top Picks" honorable mention. One of his papers was also selected as a "Research Highlight" in Communications of the ACM. He is in the MICRO Hall of Fame. He served as an Associate Editor for IEEE Computer Architecture Letters from 2012 to 2015 and for the International Journal of High Performance Computing Applications from 2012 to 2016, was Program Chair for ISPASS 2013 and General Chair for ISPASS 2014, and has served on numerous program committees. He was a Visiting Associate Professor in the Computer Science Department at Stanford University from 2012 to 2013. He was awarded an NVIDIA Academic Partnership Award in 2010, an NSERC Discovery Accelerator for 2016-2019, and a 2016 Google Faculty Research Award. Tor received his BASc (in Engineering Science), MASc, and Ph.D. at the University of Toronto. Much of his Ph.D. work was done while he was an intern at Intel's Microarchitecture Research Lab. Subsequently, he worked at NVIDIA on the memory system architecture ("framebuffer") of the GeForce 8 Series GPU, the first NVIDIA GPU to support CUDA. Tor is registered as a Professional Engineer in the province of British Columbia.

Wilson Wai Lun Fung, Samsung Electronics
Wilson Wai Lun Fung is an architect in the Advanced Computing Lab (ACL), part of the Samsung Austin R&D Center (SARC) at Samsung Electronics, where he contributes to the development of a next-generation GPU IP. He is interested in both theoretical and practical aspects of computer architecture. Wilson is a winner of the NVIDIA Graduate Fellowship, the NSERC Postgraduate Scholarship, and the NSERC Canada Graduate Scholarship. Wilson was one of the main contributors to the widely used GPGPU-Sim simulator. Two of his papers were selected as "Top Picks" in computer architecture by IEEE Micro Magazine. Wilson received his BASc (in Computer Engineering), MASc, and Ph.D. at the University of British Columbia. During his Ph.D., Wilson interned at NVIDIA.

Contents

Preface
Acknowledgments
Introduction
Programming Model
The SIMT Core: Instruction and Register Data Flow
Memory System
Crosscutting Research on GPU Computing Architectures
Bibliography
Authors' Biographies
