NCKU HPC Education & Training 2008.03.27 Jih-Ching Chang bluesun@mail.ncku.edu.tw
Outline • HPC Introduction • Easy User Guide of NCKU HPC • MPI
Hardware
• Computing Nodes (Sun X2200 M2 ×128)
• Front-end Servers (Sun X4200 ×2)
• Management Servers (Sun X4200 ×2)
• File Server 1 (Sun X4500 ×2: 48TB)
• File Server 2, User Data (Sun ST6140: 8TB)
• File Server 3, MDS Server for the Lustre Parallel File System (Sun ST6140: 8TB)
• Voltaire 9288 InfiniBand switch
• NCKU Campus Ethernet Core Switch
• Interconnects: IB (InfiniBand), 1GE (Gigabit Ethernet), FC (Fibre Channel), uplink to TANET
Software Stack
• Sun Studio
• PGI Compilers
• PGI Debugger
• PGPROF profiler
Job Scheduling Flow (Sun Grid Engine)
1) Submit: qsub sends the job to the Qmaster
2) Notify: the Qmaster notifies the Schedd
3) Job Placement: the Schedd decides where the job will run
4) Dispatch: the job is dispatched to an Execd
5) Load Report: each Execd reports its load to the Qmaster
6) Control: the Qmaster controls the running job through the Execd
7) Inform when done: the Execd reports job completion
8) Record: the run is recorded in the accounting file
Easy User Guide of NCKU HPC • Login • Basic instructions • How to submit job
Test Account
• Accounts: test01~test70
• Password: the same as the account name
Login
• Linux: SSH
• MS Windows: PuTTY, PieTTY http://www.csie.ntu.edu.tw/~piaip/pietty/stable/pietty0327.exe
File Management • WinSCP http://winscp.net/eng/docs/lang:cht
Environment Variable Setup (re-login required after setup)
• Example file: /ap/example_bashrc
First, SSH to a computing node (n121~n128)
$ssh n121
Copy the file into your home directory
$cp /ap/example_bashrc ~user/.bashrc
Download the example programs for this course
$wget http://140.116.206.34/qsub.tar
Extract the archive
$tar -xvf qsub.tar
Compiler
• GNU: gcc, g++, g77, f77
• Intel: icc, ifc
• MPI: mpicc, mpicxx, mpif77, mpif90
Instruction
• Compile
$g++ Hello.c -o Hello.x
• Execute
$./Hello.x >> output
• Submit job
$qsub serialjob.sh
$qstat
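The slides submit serialjob.sh without showing its contents. A minimal sketch of what such a Sun Grid Engine serial job script could look like; the job name and output file names here are illustrative assumptions, not the course's actual script:

#!/bin/bash
# serialjob.sh -- hypothetical serial SGE job script (names are assumptions)
#$ -N hello          # job name shown by qstat (assumed)
#$ -cwd              # run the job from the submission directory
#$ -o hello.out      # file for standard output (assumed name)
#$ -e hello.err      # file for standard error (assumed name)
./Hello.x

The #$ lines are SGE directives read by qsub; everything else is an ordinary shell script.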
MPI Introduction
• The Message Passing Interface Forum defines the standard
• ~60 members; work began in April 1992
• Available standards:
1994/05/05 MPI 1.0
1995/06/12 MPI 1.1
1997/07/18 MPI 1.2
1997/07/18 MPI 2.0
MPI Introduction(2)
• Free software implementations
• MPICH http://www-unix.mcs.anl.gov/mpi/mpich
• LAM/MPI http://www.lam-mpi.org
MPI Introduction(3)
• Parallel computing: split the computation into n equal parts so that n CPUs cooperate to finish the whole job, with each CPU responsible for one part
• Decomposition methods:
* Functional decomposition: used in special cases
* Data decomposition: used in the general case
• Download the example programs
$wget http://140.116.206.34/mpi.zip
$unzip mpi.zip
MPI Basic
• Required header file
#include <mpi.h> or #include "mpi.h"
• Initializing MPI
• Must be the first MPI routine called, and called only once
int MPI_Init(int *argc, char ***argv)
• Communicator Size
• How many processes are contained within a communicator?
MPI_Comm_size(MPI_Comm comm, int *size)
MPI Basic(2)
• Process Rank
• Process ID number within the communicator
• Starts with zero and goes to (n-1), where n is the number of processes requested
• Used to identify the source and destination of messages
MPI_Comm_rank(MPI_Comm comm, int *rank)
• Exiting MPI
• Must be called last by "all" processes
MPI_Finalize()
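Putting these four calls together, a minimal sketch of a complete MPI program (illustrative, not one of the course's example programs) that every process compiles and runs identically:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* must be the first MPI call */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID: 0..size-1 */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* must be the last MPI call */
    return 0;
}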
Instruction
• Compile
$mpicxx program.cpp -o program.x
• Execute
$mpirun -np 4 -machinefile HOST program.x
• Queue
$qsub parallel.sh
$qstat
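As with the serial case, parallel.sh is not shown in the slides. A minimal sketch of what a parallel SGE job script might look like; the parallel environment name "mpich" and the machine file path follow common SGE/MPICH conventions and are assumptions here:

#!/bin/bash
# parallel.sh -- hypothetical parallel SGE job script (PE name is an assumption)
#$ -N mpijob                 # job name (assumed)
#$ -cwd                      # run from the submission directory
#$ -pe mpich 4               # request 4 slots from the "mpich" PE (assumed name)
mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./program.x

SGE sets $NSLOTS to the number of granted slots and, with an MPICH-style PE, typically writes the host list to $TMPDIR/machines.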
Point to Point Communication
• MPI_Send
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
• buf: initial address of send buffer (choice)
• count: number of elements in send buffer (nonnegative integer)
• datatype: datatype of each send buffer element (handle)
• dest: rank of destination (integer)
• tag: message tag (integer)
• comm: communicator (handle)
Point to Point Communication(2)
• MPI_Recv
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
• buf: initial address of receive buffer (choice); output parameter
• count: number of elements in receive buffer (nonnegative integer)
• datatype: datatype of each receive buffer element (handle)
• source: rank of source (integer)
• tag: message tag (integer)
• comm: communicator (handle)
• status: status object (Status)
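A minimal sketch of how these two calls pair up (illustrative, not one of the course's example programs): rank 0 sends one integer to rank 1, which blocks in MPI_Recv until it arrives. Run with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        /* send 1 int to rank 1 with message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive 1 int from rank 0 with matching tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", value);
    }
    MPI_Finalize();
    return 0;
}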
Collective Communication
• MPI_Scatter
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
• sendbuf: address of send buffer (choice, significant only at root)
• sendcount: number of elements sent to each process (integer, significant only at root)
• sendtype: data type of send buffer elements (handle, significant only at root)
• recvbuf: address of receive buffer (choice)
• recvcount: number of elements in receive buffer (integer)
• recvtype: data type of receive buffer elements (handle)
• root: rank of sending process (integer)
• comm: communicator (handle)
Collective Communication (2)
• MPI_Gather
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
• sendbuf: address of send buffer (choice)
• sendcount: number of elements in send buffer (integer)
• sendtype: data type of send buffer elements (handle)
• recvbuf: address of receive buffer (significant only at root)
• recvcount: number of elements received from each process (integer, significant only at root)
• recvtype: data type of receive buffer elements (handle, significant only at root)
• root: rank of receiving process (integer)
• comm: communicator (handle)
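A minimal sketch combining the two calls (illustrative, not one of the course's example programs): the root scatters one integer to every process, each process modifies its piece, and the root gathers the results back.

#include <mpi.h>
#include <stdio.h>

#define MAXP 64   /* assumed upper bound on the number of processes */

int main(int argc, char *argv[])
{
    int rank, size, i, item;
    int sendbuf[MAXP], gathered[MAXP];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0)
        for (i = 0; i < size; i++)
            sendbuf[i] = i * 10;          /* root prepares one item per process */
    /* every process receives exactly one int from the root */
    MPI_Scatter(sendbuf, 1, MPI_INT, &item, 1, MPI_INT, 0, MPI_COMM_WORLD);
    item += 1;                            /* each process works on its own piece */
    /* the root collects one int back from every process */
    MPI_Gather(&item, 1, MPI_INT, gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0)
        for (i = 0; i < size; i++)
            printf("gathered[%d] = %d\n", i, gathered[i]);
    MPI_Finalize();
    return 0;
}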
Collective Communication(3)
• MPI_Bcast
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
• buffer: starting address of buffer (choice); send buffer at root, receive buffer elsewhere
• count: number of elements in buffer (nonnegative integer)
• datatype: datatype of each buffer element (handle)
• root: rank of broadcast root (integer)
• comm: communicator (handle)
Collective Communication(4)
• MPI_Reduce
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
• sendbuf: address of send buffer (choice)
• recvbuf: address of receive buffer (choice, significant only at root)
• count: number of elements in send buffer (integer)
• datatype: datatype of each send buffer element (handle)
• op: reduce operation (handle)
• root: rank of root process (integer)
• comm: communicator (handle)
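A minimal sketch using broadcast and reduction together (illustrative, not one of the course's example programs): the root broadcasts the problem size n, every rank sums its share of 1..n, and MPI_Reduce adds the partial sums at the root.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, n = 0, i;
    long local_sum = 0, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0)
        n = 100;                          /* only the root knows n at first */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* now every rank has n */
    for (i = rank + 1; i <= n; i += size) /* each rank sums every size-th term */
        local_sum += i;
    /* combine the partial sums with MPI_SUM; the result lands at rank 0 */
    MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of 1..%d = %ld\n", n, total);
    MPI_Finalize();
    return 0;
}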
Advanced Exercise: Matrix Operation
• T2SEQ: sequential
• T2CP: SPMD (Single Program Multiple Data)
• T2DCP: both the computation and the data are partitioned
• T3SEQ: with boundary data exchange
• T3DP
• T3DCP