With the widespread introduction of small, inexpensive shared-memory multiprocessor systems and clusters of PCs, more and more scientists, engineers, and educational institutions can realistically consider buying and using parallel systems to solve problems. They all need to learn how to program these systems efficiently.
With over 60 contributors, The Sourcebook of Parallel Computing provides a broad overview of parallel computers and parallel computing, first examining the architecture of modern parallel computer systems and identifying key considerations for programming them. Through specific application studies, the authors demonstrate how to identify appropriate software and algorithms and how to address the significant implementation issues.
I. Parallelism
1. Introduction
2. Parallel Computer Architectures
3. Parallel Programming Considerations
II. Applications
4. General Application Issues
5. Parallel Computing in CFD
6. Parallel Computing in Environment and Energy
7. Parallel Computational Chemistry
8. Application Overviews
III. Software Technologies
9. Software Technologies
10. Message Passing and Threads
11. Parallel I/O
12. Languages and Compilers
13. Parallel Object-Oriented Libraries
14. Problem-Solving Environments
15. Tools for Performance Tuning and Debugging
16. The 2-D Poisson Problem
IV. Enabling Technologies and Algorithms
17. Reusable Software and Algorithms
18. Graph Partitioning for Scientific Simulations
19. Mesh Generation
20. Templates and Numerical Linear Algebra
21. Software for the Scalable Solutions of PDEs
22. Parallel Continuous Optimization
23. Path Following in Scientific Computing
24. Automatic Differentiation
V. Conclusion
25. Wrap-up and Features
* Provides a solid background in parallel computing technologies
* Examines the technologies available and teaches students and practitioners how to select and apply them
* Presents case studies in a range of application areas, including chemistry, image processing, data mining, ocean modeling, and earthquake simulation
* Considers the future development of parallel computing technologies and the kinds of applications they will support
Jack Dongarra is a University Distinguished Professor of Electrical Engineering and Computer Science at the University of Tennessee, a Distinguished Research Staff Member at Oak Ridge National Laboratory, and a Turing Fellow at the University of Manchester. An ACM/IEEE/SIAM/AAAS Fellow, Dongarra pioneered the areas of supercomputer benchmarks, numerical analysis, linear algebra solvers, and high-performance computing, and has published extensively in these areas. He has long led the Linpack benchmark evaluation used to rank the Top 500 fastest computers. In recognition of his contributions to supercomputing and high-performance computing, he was elected a member of the National Academy of Engineering in the USA.

Ian Foster is Senior Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, where he also leads the Distributed Systems Laboratory, and Associate Professor of Computer Science at the University of Chicago. His research concerns techniques, tools, and algorithms for high-performance distributed computing, parallel computing, and computational science. Foster led the research and development of software for the I-WAY wide-area distributed computing experiment, which connected supercomputers, databases, and other high-end resources at 17 sites across North America (a live experiment at the Supercomputing conference of 1995).

Geoffrey Fox is a Distinguished Professor of Informatics, Computing, and Physics and Associate Dean for Graduate Studies and Research in the School of Informatics and Computing, Indiana University. He previously taught and led many research groups at Caltech and Syracuse University. He received his Ph.D. from Cambridge University, U.K. Fox is well known for his comprehensive work and extensive publications in parallel architecture, distributed programming, grid computing, web services, and Internet applications. His book on Grid Computing (coauthored with F. Berman and Tony Hey) is widely used by the research community.
He has produced over 60 Ph.D. students in physics, computer science, and engineering over the years.

William Gropp is a senior computer scientist and associate director of the Mathematics and Computer Science Division at Argonne National Laboratory. He is also a senior scientist in the Computer Science Department at the University of Chicago and a senior fellow in the Argonne-University of Chicago Computation Institute. His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He has played a major role in the development of the MPI message-passing standard.

Ken Kennedy is the Ann and John Doerr Professor of Computational Engineering and Director of the Center for High Performance Software Research (HiPerSoft) at Rice University. He is a fellow of the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery, and the American Association for the Advancement of Science, and has been a member of the National Academy of Engineering since 1990. From 1997 to 1999, he served as cochair of the President's Information Technology Advisory Committee (PITAC). For his leadership in producing the PITAC report on funding of information technology research, he received the Computing Research Association Distinguished Service Award (1999) and the RCI Seymour Cray HPCC Industry Recognition Award (1999). Professor Kennedy has published over 150 technical articles and supervised 34 Ph.D. dissertations on programming support software for high-performance computer systems. In recognition of his contributions to software for high-performance computation, he received the 1995 W. Wallace McDowell Award, the highest research award of the IEEE Computer Society. In 1999, he was named the third recipient of the ACM SIGPLAN Programming Languages Achievement Award.
Linda Torczon is a principal investigator on the Massively Scalar Compiler Project at Rice University and on the Grid Application Development Software Project sponsored by the Next Generation Software program of the National Science Foundation. She also serves as the executive director of HiPerSoft and of the Los Alamos Computer Science Institute. Her research interests include code generation, interprocedural dataflow analysis and optimization, and programming environments.

Andy White is the Special Projects Director for the Weapons Physics Directorate at Los Alamos National Laboratory. This new Laboratory enterprise focuses on research issues in computer and computational sciences associated with employing the largest, most complex computational resources to address important national issues such as stockpile stewardship, energy and environment, systems biology, nanotechnology, and crisis management.
"Sourcebook of Parallel Computing is an indispensable reference for
parallel-computing consultants, scientists, and researchers, and a
valuable addition to any computer science library." --Distributed
Systems Online
"The Sourcebook of Parallel Computing gives a thorough
introduction to parallel applications, software technologies,
enabling technologies, and algorithms. This is a great book that I
highly recommend to anyone interested in a comprehensive and
thoughtful treatment of the most important issues in parallel
computing." --Horst Simon, Director, NERSC, Berkeley
"The Sourcebook builds on the important work done at the Center for
Research on Parallel Computation and within the academic community
for over a decade. It is a definitive text on Parallel Computing
and should be a key reference for students, researchers and
practitioners in the field." --Francine Berman, Director, San Diego
Supercomputer Center and the National Partnership for Advanced
Computational Infrastructure