Topics in Parallel and Distributed Computing

Enhancing the Undergraduate Curriculum: Performance, Concurrency, and Programming on Modern Platforms
Book | Hardcover
VI, 337 pages
2018 | 1st ed. 2018
Springer International Publishing (publisher)
978-3-319-93108-1 (ISBN)
58,84 incl. VAT

This book introduces beginning undergraduate students of computing and computational disciplines to modern parallel and distributed programming languages and environments, including map-reduce, general-purpose graphics processing units (GPUs), and graphical user interfaces (GUIs) for mobile applications. The book also guides instructors, via selected essays, on which parallel and distributed computing topics to introduce into the undergraduate curricula and how, including quality criteria for parallel algorithms and programs, scalability, parallel performance, fault tolerance, and energy efficiency analysis. The chapters designed for students serve as supplemental textual material for early computing core courses, which students can use for learning and exercises. The illustrations, examples, and sequences of smaller steps that build up larger concepts can also be inserted into existing instructor material. The chapters intended for instructors are written at a teaching level and serve as a rigorous reference; they include learning goals and advice on presenting and using the material within early and advanced undergraduate courses.
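For orientation, the following is a minimal sketch, not taken from the book, of the map-reduce pattern mentioned above: a word count over a few in-memory strings in Python. The function names (map_phase, reduce_phase) and the sample data are illustrative assumptions.

from collections import defaultdict
from typing import Dict, Iterable, Tuple

def map_phase(doc: str) -> Iterable[Tuple[str, int]]:
    # Map step: emit a (word, 1) pair for every word in one document.
    for word in doc.lower().split():
        yield (word, 1)

def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> Dict[str, int]:
    # Reduce step: sum the counts emitted for each distinct word.
    counts: Dict[str, int] = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    documents = ["parallel and distributed computing",
                 "parallel algorithms and parallel programs"]
    # On a real cluster the map and reduce phases run on many machines;
    # here both phases run sequentially in a single process.
    all_pairs = (pair for doc in documents for pair in map_phase(doc))
    print(reduce_phase(all_pairs))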

Since parallel and distributed computing (PDC) now permeates most computing activities, it has become essential to impart a broad-based skill set in PDC technology at various levels of the undergraduate curricula in Computer Science (CS) and Computer Engineering (CE) programs as well as related computational disciplines. This book and others in this series aim to address the lack of suitable textbook support for integrating PDC-related topics into undergraduate courses, especially in the early curriculum. The chapters are aligned with the curricular guidelines promulgated by the NSF/IEEE-TCPP Curriculum Initiative on Parallel and Distributed Computing for CS and CE students and with the CS2013 ACM/IEEE Computer Science Curricula.


Anshul Gupta is a Principal Research Staff Member in IBM Research AI at the IBM T.J. Watson Research Center. His research interests include sparse matrix computations and their applications in optimization and computational sciences, parallel algorithms, and graph/combinatorial algorithms for scientific computing. He has coauthored several journal articles and conference papers on these topics and a textbook titled "Introduction to Parallel Computing." He is the primary author of the Watson Sparse Matrix Package (WSMP), one of the most robust and scalable parallel direct solvers for large sparse systems of linear equations.

Sushil K. Prasad (BTech'85 IIT Kharagpur, MS'86 Washington State, Pullman; PhD'90 Central Florida, Orlando - all in Computer Science/Engineering) is a Professor of Computer Science at Georgia State University and Director of the Distributed and Mobile Systems (DiMoS) Lab. Sushil was honored as an ACM Distinguished Scientist in Fall 2013 for his research on parallel data structures and applications. He was the elected chair of the IEEE Technical Committee on Parallel Processing for two terms (2007-11) and received its highest honor, the IEEE TCPP Outstanding Service Award, in 2012. Currently, he is leading the NSF-supported IEEE-TCPP curriculum initiative on parallel and distributed computing with a vision to ensure that all computer science and engineering graduates are well prepared in parallelism through their core courses in this era of multi- and many-core desktops and handhelds. His current research interests are in parallel data structures and algorithms, and computation over geo-spatiotemporal datasets on cloud, GPU, and multicore platforms. Sushil is currently a Program Director leading the Office of Advanced Cyberinfrastructure (OAC) Learning and Workforce Development crosscutting programs at the U.S. National Science Foundation.

1 Introduction
2 What do we need to know about parallel algorithms and their efficient implementation?
3 Modules for Teaching Parallel Performance Concepts
4 Scalability in Parallel Processing
5 Energy Efficiency Issues in Computing Systems
6 Scheduling for fault-tolerance
7 MapReduce for Beginners - The Clustered Data Processing Solution
8 The Realm of Graphical Processing Unit (GPU) Computing
9 Managing Concurrency in Mobile User Interfaces with Examples in Android
10 Parallel Programming for Interactive GUI Applications
11 Scheduling in Parallel and Distributed Computing Systems

Publication date
Additional information VI, 337 p. 116 illus.
Place of publication Cham
Language English
Dimensions 155 x 235 mm
Weight 673 g
Subject areas Mathematics / Computer Science > Computer Science > Networks
Computer Science > Theory / Studies > Compiler Construction
Engineering > Communications Engineering
Keywords Computer Engineering • Computer sciences • CS2013 ACM/IEEE Computer Science Curricula • Distributed Computing • energy efficiency • GPU CUDA programming • GUI programming for mobile applications • Instructor resources for core courses • Map-reduce programming • Parallel Computing • Scalability
ISBN-10 3-319-93108-3 / 3319931083
ISBN-13 978-3-319-93108-1 / 9783319931081
Condition New