
Lecture 7: POSIX Threads - Pthreads




  1. Lecture 7: POSIX Threads - Pthreads

  2. Parallel Programming Models
  Parallel programming models:
  • Data parallelism / Task parallelism
  • Explicit parallelism / Implicit parallelism
  • Shared memory / Distributed memory
  • Other programming paradigms
    • Object-oriented
    • Functional and logic

  3. Parallel Programming Models
  • Shared memory: the programmer's task is to specify the activities of a set of processes that communicate by reading and writing shared memory.
    • Advantage: the programmer need not be concerned with data-distribution issues.
    • Disadvantage: performant implementations may be difficult on computers that lack hardware support for shared memory, and race conditions tend to arise more easily.
  • Distributed memory: processes have only local memory and must use some other mechanism (e.g., message passing or remote procedure call) to exchange information.
    • Advantage: programmers have explicit control over data distribution and communication.

  4. Shared vs Distributed Memory
  [Diagram: shared memory, four processors (P) connected by a bus to a single shared memory; distributed memory, four processor/memory pairs (P/M) connected by a network.]

  5. Parallel Programming Models
  Parallel programming tools:
  • Parallel Virtual Machine (PVM): distributed memory, explicit parallelism
  • Message-Passing Interface (MPI): distributed memory, explicit parallelism
  • Pthreads: shared memory, explicit parallelism
  • OpenMP: shared memory, explicit parallelism
  • High-Performance Fortran (HPF): implicit parallelism
  • Parallelizing compilers: implicit parallelism

  6. Parallel Programming Models
  Shared memory model:
  • Used on shared-memory MIMD architectures
  • Program consists of many independent threads
  • Concurrently executing threads all share a single, common address space
  • Threads can exchange information by reading and writing memory using normal variable assignment operations

  7. Parallel Programming Models
  Memory coherence problem:
  • Ensure that the latest value of a variable updated in one thread is used when that same variable is accessed in another thread.
  • Hardware support and compiler support are required (cache-coherency protocol).
  [Diagram: two threads, Thread 1 and Thread 2, both accessing a shared variable X.]

  8. Parallel Programming Models
  Distributed Shared Memory (DSM) systems:
  • Implement the shared memory model on distributed-memory MIMD architectures.
  • Concurrently executing threads all share a single, common address space.
  • Threads can exchange information by reading and writing memory using normal variable assignment operations.
  • A message-passing layer is used as the means for communicating updated values throughout the system.

  9. Parallel Programming Models
  Synchronization operations in the shared memory model:
  • Monitors
  • Locks
  • Critical sections
  • Condition variables
  • Semaphores
  • Barriers

  10. PThreads
  POSIX Threads (Pthreads): www.pthreads.org/

  11. PThreads
  In the UNIX environment a thread:
  • Exists within a process and uses the process resources
  • Has its own independent flow of control
  • Duplicates only the essential resources it needs to be independently schedulable
  • May share the process resources with other threads
  • Dies if the parent process dies
  • Is "lightweight" because most of the overhead has already been accomplished through the creation of its process

  12. PThreads
  Because threads within the same process share resources:
  • Changes made by one thread to shared system resources will be seen by all other threads.
  • Two pointers having the same value point to the same data.
  • Reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.

  13. PThreads
  • pthread_create(thread, attr, start_routine, arg): creates a new thread of control
    • thread: unique identifier of the new thread
    • attr: used to set thread attributes (NULL for defaults)
    • start_routine: the C routine that the thread will execute once it is created
    • arg: a single argument that may be passed (by reference) to start_routine (NULL if no argument)
  • pthread_exit(): a thread terminates when the function it is executing completes or when an explicit thread exit function is called.

  14. PThread Code

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>   /* for exit() */
  #define NUM_THREADS 5

  void *PrintHello(void *threadid)
  {
      long tid = (long)threadid;
      printf("Hello World! It's me, thread #%ld!\n", tid);
      pthread_exit(NULL);
  }

  int main(int argc, char *argv[])
  {
      pthread_t threads[NUM_THREADS];
      int rc;
      long t;
      for (t = 0; t < NUM_THREADS; t++) {
          printf("In main: creating thread %ld\n", t);
          rc = pthread_create(&threads[t], NULL, PrintHello, (void *)t);
          if (rc) {
              printf("ERROR; return code from pthread_create() is %d\n", rc);
              exit(-1);
          }
      }
      pthread_exit(NULL);
  }

  15. PThreads
  • The data-oriented synchronization routines are based on the use of a mutex (mutual exclusion).
  • A mutex is a dynamically allocated data structure that can be passed as an argument to the synchronization routines.
  • pthread_mutex_lock() and pthread_mutex_unlock(): once a pthread_mutex_lock call is made on a specific mutex, subsequent pthread_mutex_lock calls will block until a call is made to pthread_mutex_unlock with that mutex.

  16. PThreads
  • Condition variables allow a thread to wait until a Boolean predicate that depends on the contents of one or more shared-memory locations becomes true.
  • A condition variable associates a mutex with the desired predicate. Before the program makes its test, it obtains a lock on the associated mutex. Then it evaluates the predicate.
  • If the predicate evaluates to false, the thread can execute a pthread_cond_wait() operation, which atomically suspends the calling thread, puts the thread record on a waiting list that is part of the condition variable, and releases the mutex. The thread scheduler is now free to use the processor to execute another thread.

  17. PThreads
  • If the predicate evaluates to true, the thread simply releases its lock and continues on its way.
  • If a thread changes the value of any shared variables associated with a condition variable predicate, it needs to cause any threads that may be waiting on this condition variable to be rescheduled. pthread_cond_signal() causes one of the threads waiting on the condition variable to become unblocked, returning from the pthread_cond_wait that caused it to block in the first place. The mutex is automatically reobtained as part of the return from the wait, so the thread is in a position to reevaluate the predicate immediately.

  18. Parallel Programming Models
  Example: Pi calculation
  π = ∫₀¹ f(x) dx = ∫₀¹ 4/(1+x²) dx ≈ w ∑ f(xᵢ)
  f(x) = 4/(1+x²), n = 10, w = 1/n, xᵢ = w(i-0.5)
  [Figure: plot of f(x) on [0,1], approximated by n rectangles of width w at midpoints xᵢ.]

  19. Parallel Programming Models
  Sequential code:

  #include <stdio.h>
  #define f(x) (4.0/(1.0+(x)*(x)))

  int main(void)
  {
      int n, i;
      float w, x, sum, pi;
      printf("n?\n");
      scanf("%d", &n);
      w = 1.0/n;
      sum = 0.0;
      for (i = 1; i <= n; i++) {
          x = w*(i-0.5);
          sum += f(x);
      }
      pi = w*sum;
      printf("%f\n", pi);
      return 0;
  }

  π ≈ w ∑ f(xᵢ), with f(x) = 4/(1+x²), n = 10, w = 1/n, xᵢ = w(i-0.5)

  20. Parallel Virtual Machine (PVM)
  Data distribution
  [Figure: plot of f(x) on [0,1] showing how the intervals of the sum are distributed among tasks.]

  21. PThread Code

  #include <pthread.h>
  #include <stdio.h>
  #define f(x) (4.0/(1.0+(x)*(x)))
  #define NUM_THREADS 4

  float pi = 0.0;
  pthread_mutex_t m1;

  void *worker(void *arg)
  {
      int *args = (int *)arg;
      int i, p = args[0], n = args[1], id = args[2];
      float sum = 0.0, w = 1.0/n, x;
      for (i = id; i < n; i += p) {    /* cyclic distribution of intervals */
          x = (i+0.5)*w;
          sum += f(x);
      }
      sum = sum*w;
      pthread_mutex_lock(&m1);         /* protect the shared accumulator */
      pi += sum;
      pthread_mutex_unlock(&m1);
      return NULL;
  }

  int main(int argc, char *argv[])
  {
      pthread_t threads[NUM_THREADS];
      int i, n;
      int args[NUM_THREADS][3];        /* one argument block per thread */
      scanf("%d", &n);
      pthread_mutex_init(&m1, NULL);
      for (i = 0; i < NUM_THREADS; i++) {
          args[i][0] = NUM_THREADS;    /* p: stride between a thread's points */
          args[i][1] = n;              /* n: total number of intervals */
          args[i][2] = i;              /* id: this thread's index */
          pthread_create(&threads[i], NULL, worker, (void *)args[i]);
      }
      for (i = 0; i < NUM_THREADS; i++)
          pthread_join(threads[i], NULL);
      printf("Pi=%f\n", pi);
      return 0;
  }
