An Introduction to Parallel Programming (并行程序設(shè)計導論)

Publication date: 2011-9  Publisher: 機械工業(yè)出版社 (China Machine Press)  Author: Peter S. Pacheco (USA)  Pages: 370

Overview

Written in tutorial style, the book starts from short programming examples and works step by step toward more challenging programs. It concentrates on the design, debugging, and performance evaluation of distributed-memory and shared-memory programs, using the MPI, Pthreads, and OpenMP programming models and emphasizing hands-on development of parallel programs.
Parallel programming is no longer a discipline reserved for specialists. To fully exploit the computing power of clusters and multicore processors, learning both distributed-memory and shared-memory parallel programming techniques is essential. This book by Peter S. Pacheco (English edition) shows step by step how to develop efficient parallel programs with MPI, Pthreads, and OpenMP, and teaches readers how to develop and debug distributed-memory and shared-memory programs and evaluate their performance.

About the Author

Peter S. Pacheco holds a PhD in mathematics from Florida State University. He previously served as chair of the computer science department at the University of San Francisco and is currently chair of its mathematics department. For nearly 20 years he has taught parallel computing courses to both undergraduate and graduate students.

Table of Contents

CHAPTER 1 Why Parallel Computing?
1.1 Why We Need Ever-Increasing Performance
1.2 Why We're Building Parallel Systems
1.3 Why We Need to Write Parallel Programs
1.4 How Do We Write Parallel Programs?
1.5 What We'll Be Doing
1.6 Concurrent, Parallel, Distributed
1.7 The Rest of the Book
1.8 A Word of Warning
1.9 Typographical Conventions
1.10 Summary
1.11 Exercises
CHAPTER 2 Parallel Hardware and Parallel Software
2.1 Some Background
2.1.1 The von Neumann architecture
2.1.2 Processes, multitasking, and threads
2.2 Modifications to the von Neumann Model
2.2.1 The basics of caching
2.2.2 Cache mappings
2.2.3 Caches and programs: an example
2.2.4 Virtual memory
2.2.5 Instruction-level parallelism
2.2.6 Hardware multithreading
2.3 Parallel Hardware
2.3.1 SIMD systems
2.3.2 MIMD systems
2.3.3 Interconnection networks
2.3.4 Cache coherence
2.3.5 Shared-memory versus distributed-memory
2.4 Parallel Software
2.4.1 Caveats
2.4.2 Coordinating the processes/threads
2.4.3 Shared-memory
2.4.4 Distributed-memory
2.4.5 Programming hybrid systems
2.5 Input and Output
2.6 Performance
2.6.1 Speedup and efficiency
2.6.2 Amdahl's law
2.6.3 Scalability
2.6.4 Taking timings
2.7 Parallel Program Design
2.7.1 An example
2.8 Writing and Running Parallel Programs
2.9 Assumptions
2.10 Summary
2.10.1 Serial systems
2.10.2 Parallel hardware
2.10.3 Parallel software
2.10.4 Input and output
2.10.5 Performance
2.10.6 Parallel program design
2.10.7 Assumptions
2.11 Exercises
CHAPTER 3 Distributed-Memory Programming with MPI
3.1 Getting Started
3.1.1 Compilation and execution
3.1.2 MPI programs
3.1.3 MPI_Init and MPI_Finalize
3.1.4 Communicators, MPI_Comm_size and MPI_Comm_rank
3.1.5 SPMD programs
3.1.6 Communication
3.1.7 MPI_Send
3.1.8 MPI_Recv
3.1.9 Message matching
3.1.10 The status_p argument
3.1.11 Semantics of MPI_Send and MPI_Recv
3.1.12 Some potential pitfalls
3.2 The Trapezoidal Rule in MPI
3.2.1 The trapezoidal rule
3.2.2 Parallelizing the trapezoidal rule
3.3 Dealing with I/O
3.3.1 Output
3.3.2 Input
3.4 Collective Communication
3.4.1 Tree-structured communication
3.4.2 MPI_Reduce
3.4.3 Collective vs. point-to-point communications
3.4.4 MPI_Allreduce
3.4.5 Broadcast
3.4.6 Data distributions
3.4.7 Scatter
3.4.8 Gather
3.4.9 Allgather
3.5 MPI Derived Datatypes
3.6 Performance Evaluation of MPI Programs
3.6.1 Taking timings
3.6.2 Results
3.6.3 Speedup and efficiency
3.6.4 Scalability
3.7 A Parallel Sorting Algorithm
3.7.1 Some simple serial sorting algorithms
3.7.2 Parallel odd-even transposition sort
3.7.3 Safety in MPI programs
3.7.4 Final details of parallel odd-even sort
3.8 Summary
3.9 Exercises
3.10 Programming Assignments
CHAPTER 4 Shared-Memory Programming with Pthreads
4.1 Processes, Threads, and Pthreads
4.2 Hello, World
4.2.1 Execution
4.2.2 Preliminaries
4.2.3 Starting the threads
4.2.4 Running the threads
4.2.5 Stopping the threads
4.2.6 Error checking
4.2.7 Other approaches to thread startup
4.3 Matrix-Vector Multiplication
4.4 Critical Sections
4.5 Busy-Waiting
4.6 Mutexes
4.7 Producer-Consumer Synchronization and Semaphores
4.8 Barriers and Condition Variables
4.8.1 Busy-waiting and a mutex
4.8.2 Semaphores
4.8.3 Condition variables
4.8.4 Pthreads barriers
4.9 Read-Write Locks
4.9.1 Linked list functions
4.9.2 A multi-threaded linked list
4.9.3 Pthreads read-write locks
4.9.4 Performance of the various implementations
4.9.5 Implementing read-write locks
4.10 Caches, Cache Coherence, and False Sharing
4.11 Thread-Safety
4.11.1 Incorrect programs can produce correct output
4.12 Summary
4.13 Exercises
4.14 Programming Assignments
CHAPTER 5 Shared-Memory Programming with OpenMP
5.1 Getting Started
5.1.1 Compiling and running OpenMP programs
5.1.2 The program
5.1.3 Error checking
5.2 The Trapezoidal Rule
5.2.1 A first OpenMP version
5.3 Scope of Variables
5.4 The Reduction Clause
5.5 The parallel for Directive
5.5.1 Caveats
5.5.2 Data dependences
5.5.3 Finding loop-carried dependences
5.5.4 Estimating π
5.5.5 More on scope
5.6 More About Loops in OpenMP: Sorting
5.6.1 Bubble sort
5.6.2 Odd-even transposition sort
5.7 Scheduling Loops
5.7.1 The schedule clause
5.7.2 The static schedule type
5.7.3 The dynamic and guided schedule types
5.7.4 The runtime schedule type
5.7.5 Which schedule?
5.8 Producers and Consumers
5.8.1 Queues
5.8.2 Message-passing
5.8.3 Sending messages
5.8.4 Receiving messages
5.8.5 Termination detection
5.8.6 Startup
5.8.7 The atomic directive
5.8.8 Critical sections and locks
5.8.9 Using locks in the message-passing program
5.8.10 critical directives, atomic directives, or locks?
5.8.11 Some caveats
5.9 Caches, Cache Coherence, and False Sharing
5.10 Thread-Safety
5.10.1 Incorrect programs can produce correct output
5.11 Summary
5.12 Exercises
5.13 Programming Assignments
CHAPTER 6 Parallel Program Development
6.1 Two n-Body Solvers
6.1.1 The problem
6.1.2 Two serial programs
6.1.3 Parallelizing the n-body solvers
6.1.4 A word about I/O
6.1.5 Parallelizing the basic solver using OpenMP
6.1.6 Parallelizing the reduced solver using OpenMP
6.1.7 Evaluating the OpenMP codes
6.1.8 Parallelizing the solvers using pthreads
6.1.9 Parallelizing the basic solver using MPI
6.1.10 Parallelizing the reduced solver using MPI
6.1.11 Performance of the MPI solvers
6.2 Tree Search
6.2.1 Recursive depth-first search
6.2.2 Nonrecursive depth-first search
6.2.3 Data structures for the serial implementations
6.2.6 A static parallelization of tree search using pthreads
6.2.7 A dynamic parallelization of tree search using pthreads
6.2.8 Evaluating the pthreads tree-search programs
6.2.9 Parallelizing the tree-search programs using OpenMP
6.2.10 Performance of the OpenMP implementations
6.2.11 Implementation of tree search using MPI and static partitioning
6.2.12 Implementation of tree search using MPI and dynamic partitioning
6.3 A Word of Caution
6.4 Which API?
6.5 Summary
6.5.1 Pthreads and OpenMP
6.5.2 MPI
6.6 Exercises
6.7 Programming Assignments
CHAPTER 7 Where to Go from Here
References
Index

Excerpt

There are many possible algorithms for identifying which subtrees we assign to the processes or threads. For example, one thread or process could run the last version of serial depth-first search until the stack stores one partial tour for each thread or process. Then it could assign one tour to each thread or process. The problem with depth-first search is that we expect a subtree whose root is deeper in the tree to require less work than a subtree whose root is higher up in the tree, so we would probably get better load balance if we used something like breadth-first search to identify the subtrees. As the name suggests, breadth-first search searches as widely as possible in the tree before going deeper. So if, for example, we carry out a breadth-first search until we reach a level of the tree that has at least thread_count or comm_sz nodes, we can then divide the nodes at this level among the threads or processes. See Exercise 6.18 for implementation details.

The best tour data structure

On a shared-memory system, the best tour data structure can be shared. In this setting, the Feasible function can simply examine the data structure. However, updates to the best tour will cause a race condition, and we'll need some sort of locking to prevent errors. We'll discuss this in more detail when we implement the parallel version. In the case of a distributed-memory system, there are a couple of choices that we need to make about the best tour. The simplest option would be to have the processes operate independently of each other until they have completed searching their subtrees. In this setting, each process would store its own local best tour. This local best tour would be used by the process in Feasible and updated by the process each time it calls Update_best_tour.
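The excerpt notes that on a shared-memory system the shared best tour needs locking to avoid a race on updates. Below is a minimal C sketch of that idea using a Pthreads mutex. Only the names Feasible and Update_best_tour come from the excerpt; the globals best_cost and best_mutex, the integer cost representation, and the re-check inside the critical section are illustrative assumptions, not the book's code.

/* best_tour_sketch.c -- hypothetical sketch of a shared best-tour structure. */
#include <pthread.h>
#include <limits.h>

static int best_cost = INT_MAX;   /* cost of the best tour found so far */
static pthread_mutex_t best_mutex = PTHREAD_MUTEX_INITIALIZER;

/* A partial tour is worth extending only if its cost can still beat the
   best tour.  An unsynchronized read is acceptable here: a stale value
   can only cause extra search work, never an incorrect result. */
int Feasible(int partial_cost) {
    return partial_cost < best_cost;
}

/* Update the best tour under the mutex so concurrent updates cannot race.
   The cost is re-checked inside the critical section in case another
   thread improved the best tour after our unsynchronized test. */
void Update_best_tour(int tour_cost) {
    pthread_mutex_lock(&best_mutex);
    if (tour_cost < best_cost)
        best_cost = tour_cost;
    pthread_mutex_unlock(&best_mutex);
}

On a distributed-memory system, as the excerpt goes on to say, each process would instead keep its own local best tour, so no locking is needed until the processes' results are combined.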

Media Reviews

Without a doubt, with the widespread adoption of multicore processors and cloud computing systems, parallel computing is no longer a niche area shelved in a corner of the computing world. Parallelism has become the primary factor in using resources effectively, and this new textbook by Peter Pacheco is a great help to beginners learning the art and practice of parallel computing.
— Duncan Buell, Department of Computer Science and Engineering, University of South Carolina

This book covers two increasingly important areas: shared-memory programming with Pthreads and OpenMP, and distributed-memory programming with MPI. More importantly, it stresses the importance of good programming practice by pointing out the performance pitfalls that can arise. It presents these topics in the context of several disciplines, including computer science, physics, and mathematics, and each chapter contains programming exercises of varying difficulty. For students or professionals who want to learn parallel programming techniques and broaden their knowledge, this is an ideal reference.
— Leigh Little, Department of Computer Science, The College at Brockport, State University of New York

This is a carefully written, comprehensive introduction to parallel computing; students as well as practitioners in related fields will benefit greatly from its up-to-date information. With an accessible writing style and a variety of interesting examples, the author makes the book engaging. In the fast-changing, constantly evolving field of parallel computing, it covers the many facets of parallel software and hardware thoroughly yet accessibly.
— Kathy J. Liszka, Department of Computer Science, The University of Akron



User Reviews (13 in total)

 
 

  •   Much better than many domestic books on parallel computing! The book is clearly organized; if you are learning parallel computing, it will save you a lot of detours.
  •   Haven't read it yet; hoping it will help me improve at concurrent algorithm design.
  •   The original English edition; I'll work through it slowly.
  •   Just arrived, haven't read it closely yet. Seems fine. Thanks to the delivery woman who carried the package under the midday sun.
  •   Haven't had time to read it yet; the packaging and printing are satisfactory.
  •   The English edition is mentally taxing, but it's good.
  •   First, this is a book, a book about parallelism. Second, thanks to the delivery man for delivering at midday. Much appreciated!!!
  •   Very good, a genuine copy; I like it a lot.
  •   Clear and easy to understand; well suited as an introductory book.
  •   Although the author says the book can be used in the first or second year of college, in practice it will be more useful to readers who already have a solid grasp of C programming and of computer software and (multicore) hardware.
  •   The explanations are careful. True to its name, it really is an introduction, and it works well as an introductory textbook.
  •   I previously bought 華章's edition of 深入理解計算機系統(tǒng) (Computer Systems: A Programmer's Perspective), which was printed in two colors, so I naturally assumed this book, being in the same series, would be too. It isn't, which was a little disappointing.
  •   It covers MPI, Pthreads, and OpenMP; a very good book. It's only the first edition, though, so a second edition will probably come out soon, and the book isn't cheap.
 
