Scaling Up Parallel I/O on the SP
David Skinner, NERSC Division, Berkeley Lab
Motivation
• NERSC uses GPFS for $HOME and $SCRATCH
• Local disk filesystems on seaborg (/tmp) are tiny
• Growing data sizes and concurrencies often outpace I/O methodologies
Case Study: Data Intensive Computing at NERSC
• Binary black hole collisions
• Finite differencing on a 1024x768x768x200 grid
• Run on 64 NH2 nodes with 32 GB RAM each (2 TB total)
• Need to save regular snapshots of the full grid
The first full 3D calculation of inward-spiraling black holes, done at NERSC by Ed Seidel, Gabrielle Allen, Denis Pollney, and Peter Diener (Scientific American, April 2002)
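As a rough back-of-the-envelope estimate (my arithmetic, assuming 8-byte doubles and a single grid function per snapshot): one 1024 x 768 x 768 snapshot is 1024 * 768 * 768 * 8 B, about 4.8 GB, so dumping several variables at regular intervals adds up to terabytes of output over a run.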
Problems
• The binary black hole collision uses a modified version of the Cactus code to solve Einstein's equations. Its choices for I/O are serial and MPI-I/O
• CPU utilization suffers as time is lost to I/O
• Variation in write times can be severe
Finding solutions
• The data pattern is a common one
• Survey strategies to determine the rate and the variation in rate (a timing sketch follows below)
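One simple way to carry out such a survey (a sketch of my own, not code from the slides) is to time each rank's write phase and reduce the minimum, mean, and maximum across ranks; write_my_data() below is a hypothetical placeholder for any of the strategies that follow:

#include <mpi.h>
#include <stdio.h>

/* Hypothetical placeholder for one of the I/O strategies shown below. */
extern void write_my_data(int rank);

void survey_io(int rank, int nranks, long nbyte)
{
    MPI_Barrier(MPI_COMM_WORLD);        /* start all ranks together */
    double t0 = MPI_Wtime();
    write_my_data(rank);                /* the write phase being measured */
    double dt = MPI_Wtime() - t0;

    double tmin, tmax, tsum;
    MPI_Reduce(&dt, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&dt, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&dt, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* aggregate rate is limited by the slowest rank */
        double agg_rate = (double)nbyte * nranks / tmax / 1.0e6;
        printf("write time min/mean/max = %.3f/%.3f/%.3f s, aggregate %.1f MB/s\n",
               tmin, tsum / nranks, tmax, agg_rate);
    }
}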
Multiple File I/O
if(private_dir) rank_dir(1,rank);   /* optionally move into a per-rank directory   */
fp=fopen(fname_r,"w");              /* one file per rank (fname_r encodes the rank) */
fwrite(data,nbyte,1,fp);            /* write this rank's block                      */
fclose(fp);
if(private_dir) rank_dir(0,rank);   /* move back out of the per-rank directory      */
MPI_Barrier(MPI_COMM_WORLD);        /* wait until every rank has finished its file  */
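rank_dir() is not defined on the slide; a plausible sketch of its intent (my assumption) is that it creates and enters a per-rank subdirectory when its first argument is 1 and steps back out when it is 0, spreading directory metadata traffic across many directories:

#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

/* Hypothetical helper: enter (enter=1) or leave (enter=0) a per-rank directory. */
void rank_dir(int enter, int rank)
{
    char dname[64];
    if (enter) {
        snprintf(dname, sizeof(dname), "rank_%06d", rank);
        mkdir(dname, S_IRWXU);   /* already exists on restart; error ignored here */
        chdir(dname);
    } else {
        chdir("..");
    }
}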
Single File I/O
fd=open(fname,O_CREAT|O_RDWR,S_IRUSR);          /* every rank opens the same file */
lseek(fd,(off_t)rank*(off_t)nbyte,SEEK_SET);    /* seek to this rank's offset     */
write(fd,data,nbyte);                           /* write this rank's block        */
close(fd);
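For reference, a self-contained version of the same shared-file pattern (a sketch under my own assumptions; the file name and block size are illustrative, and pwrite() is used to combine the seek and the write):

#include <mpi.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    long nbyte = 1 << 20;                 /* 1 MB per rank (illustrative)  */
    const char *fname = "shared.dat";     /* one file shared by all ranks  */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *data = malloc(nbyte);           /* this rank's block             */
    memset(data, 0, nbyte);

    int fd = open(fname, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
    pwrite(fd, data, nbyte, (off_t)rank * (off_t)nbyte);  /* block lands at this rank's offset */
    close(fd);

    free(data);
    MPI_Finalize();
    return 0;
}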
MPI-I/O
MPI_Info_set(mpiio_file_hints, MPIIO_FILE_HINT0);           /* attach a hint key/value pair, e.g. IBM_largeblock_io (next slide) */
MPI_File_open(MPI_COMM_WORLD, fname,
              MPI_MODE_CREATE | MPI_MODE_RDWR,
              mpiio_file_hints, &fh);
MPI_File_set_view(fh, (off_t)rank*(off_t)nbyte,             /* each rank views the file starting at its own offset */
                  MPI_DOUBLE, MPI_DOUBLE, "native",
                  mpiio_file_hints);
MPI_File_write_all(fh, data, ndata, MPI_DOUBLE, &status);   /* collective write */
MPI_File_close(&fh);
Large block I/O
• MPI-I/O on the SP includes the file hint IBM_largeblock_io
• IBM_largeblock_io=true was used throughout; the default setting shows large variation
• IBM_largeblock_io=true also turns off data shipping
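Attaching the hint to the file handle looks roughly like this (a sketch; the file name is illustrative, and the key/value strings follow the IBM_largeblock_io hint named above):

MPI_File fh;
MPI_Info hints;
MPI_Info_create(&hints);
MPI_Info_set(hints, "IBM_largeblock_io", "true");   /* per the slide: large-block GPFS I/O, data shipping off */
MPI_File_open(MPI_COMM_WORLD, "snapshot.dat",
              MPI_MODE_CREATE | MPI_MODE_RDWR, hints, &fh);
/* ... set the view and write as in the MPI-I/O example above ... */
MPI_File_close(&fh);
MPI_Info_free(&hints);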
Large block I/O = false
• MPI-I/O on the SP includes the file hint IBM_largeblock_io
• Except for the results shown on this slide, IBM_largeblock_io=true was used throughout
• IBM_largeblock_io=true also turns off data shipping
Bottlenecks to scaling
• Single file I/O has a tendency to serialize
• Scaling up with multiple files creates filesystem problems
• Akin to data shipping, consider the intermediate case (sketched below)
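One way to realize that intermediate case (a sketch of my own; the aggregation factor and the use of MPI_Gather plus pwrite() are illustrative choices, not taken from the slides) is to funnel blocks to a small set of writer ranks before touching the filesystem:

#include <mpi.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>

#define NAGG 16   /* ranks per writer (illustrative value) */

/* Every NAGG-th rank gathers the blocks of its NAGG neighbors and
   writes them as one contiguous chunk of the shared file. */
void aggregated_write(const char *fname, char *data, long nbyte,
                      int rank, int nranks)
{
    MPI_Comm agg;
    MPI_Comm_split(MPI_COMM_WORLD, rank / NAGG, rank, &agg);

    int arank, asize;
    MPI_Comm_rank(agg, &arank);
    MPI_Comm_size(agg, &asize);

    char *buf = NULL;
    if (arank == 0) buf = malloc((size_t)nbyte * asize);

    /* collect the group's blocks on its writer rank, ordered by global rank */
    MPI_Gather(data, (int)nbyte, MPI_BYTE, buf, (int)nbyte, MPI_BYTE, 0, agg);

    if (arank == 0) {
        int fd = open(fname, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
        pwrite(fd, buf, (size_t)nbyte * asize,
               (off_t)(rank / NAGG) * (off_t)NAGG * (off_t)nbyte);
        close(fd);
        free(buf);
    }
    MPI_Comm_free(&agg);
}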
Future Work
• Testing the NERSC port of NetCDF to MPI-I/O
• Comparison with Linux/Intel GPFS on the NERSC/LBL Alvarez cluster: 84 2-way SMP Pentium nodes, Myrinet 2000 fiber optic interconnect
• Testing GUPFS technologies as they become available