
Tuning DiFX2 for performance

Explore solutions for I/O bottlenecks, interprocess communication, CPU load minimization, and memory management in DiFX2. Learn how to optimize performance and troubleshoot issues effectively. For more information, visit: http://cira.ivec.org/dokuwiki/doku.php/difx/mpifxcorr/


Presentation Transcript


  1. Tuning DiFX2 for performance. Adam Deller, ASTRON. 6th DiFX Workshop, CSIRO ATNF, Sydney, Australia

  2. Outline • I/O bottlenecks and solutions • Communication with the real world (reading raw data, writing visibilities) • Interprocess communication • Keeping out of memory trouble • Minimizing CPU load in various corners of parameter space • For more information and pictures: http://cira.ivec.org/dokuwiki/doku.php/difx/mpifxcorr/

  3. Getting data into DiFX [Diagram: source (baseband) data flows into DataStream nodes 1…N, each holding a large, segmented ring buffer; the Master Node assigns each subint a timerange and destination; Cores 1…M fill their processing buffers and return visibilities to the Master Node's visibility buffers.]

  4. Getting data into DiFX • How to test? neutered_difx, with a small number of channels • Fundamental limit: the native transfer speed (disk read, network pipe) • If this is the problem, buy a RAID or get Infiniband or … • Potential troublemaker: CPU utilisation on the datastream node (competition) • This can come from Tsys estimation • Tweaking: the Datastream databuffer (next slide)

  5. Datastream databuffer [Diagram: the Datastream databuffer, with one “subint” of data marked within it.] • Key parameters: dataBufferFactor, nDataSegments, subintNS • The only real potential problem I/O-wise: a buffer that is too short (dataBufferFactor too small)
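
  A minimal sketch of how these knobs appear in a .v2d file for vex2difx (the values are illustrative, not recommendations, and the exact placement of each parameter may vary between DiFX versions):

        # global section of the .v2d file
        dataBufferFactor = 32   # subints of baseband the ring buffer can hold
        nDataSegments    = 8    # segments the ring buffer is divided into

  subintNS is typically set alongside the correlation setup; it appears in the sketches later in this transcript.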

  6. Getting visibilities out of DiFX [Diagram: the same pipeline as slide 3, now highlighting the output path: visibilities from the Cores are gathered into the Master Node's visibility buffers and written to disk.]

  7. Getting visibilities out of DiFX • FxManager writes the visibilities to disk • This is very rarely a problem unless you have a dying disk, or very large and/or frequent visibility dumps • Testing: neutered_difx + a fake data source (ensures good input speeds) • Tweaking: none • If you want to write out visibilities faster, put a fast disk (probably a RAID) on the manager node!

  8. Interprocess @ the Datastream [Diagram: the same pipeline as slide 3, highlighting the baseband links from the Datastream ring buffers to the Core processing buffers.]

  9. Interprocess @ the Datastream • Generally not a problem • Tweaking: dataBufferFactor; ensure a reasonable size (avoids latency issues) • The default (32) is generally OK, but it could usually be bigger without problems (increase nDataSegments as well)

  10. Interprocess @ the Core [Diagram: the same pipeline as slide 3, highlighting the links into and out of the Core processing buffers.] • Tweaking: • subintNS • The output visibility size (nChan / nBaselines)

  11-12. Interprocess @ the Core • In terms of reducing data transmission, increasing subintNS is the only real knob to turn • This is unimportant for continuum, single-phase-centre work; it only becomes relevant at very high spectral resolution and/or with multiple phase centres • In those cases, bigger is better, but be careful about memory (see later)

  13. Interprocess @ the FxManager [Diagram: the same pipeline as slide 3, highlighting the visibility links from every Core to the Master Node.] • The most common trouble point! The FxManager must aggregate data from all Core nodes, which can lead to high data rates

  14-15. Interprocess @ the FxManager • To calculate the rate into FxManager, work out the rate for one Core node and scale • Tweaking: maximise subintNS! Or (although this is usually not possible) reduce the visibility size (via nChan or the number of phase centres) • A worked example follows below.
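
  A back-of-the-envelope sketch (all numbers here are illustrative, not from the slides): the aggregate rate into FxManager is roughly the size of one subint's visibility set divided by the subint duration.

        10 stations -> 45 cross-correlation baselines
        45 baselines x 4 pol products x 8 subbands x 256 channels
           x 8 bytes (complex float) =~ 3 MB per visibility set
        subintNS = 25600000  (25.6 ms)  -> ~3 MB / 0.0256 s =~ 115 MB/s
        subintNS = 102400000 (102.4 ms) -> ~29 MB/s

  At ~115 MB/s a single gigabit link is already saturated, while ~29 MB/s is manageable, which is exactly why maximising subintNS is the first thing to try.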

  16. Memory @ the Datastream • Just don't make the combination of dataBufferFactor and subintNS too big (it can also be controlled via “sendSize”) • A sizing sketch follows below.
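
  A hedged sizing sketch, assuming the databuffer holds roughly dataBufferFactor subints of baseband per datastream (the data rate and subint length are illustrative):

        2 Gbps datastream, subintNS = 102400000 (102.4 ms)
           -> 250 MB/s x 0.1024 s =~ 25.6 MB per subint
        dataBufferFactor = 32 -> ~820 MB of buffer on that Datastream node

  Halving either dataBufferFactor or subintNS halves the footprint.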

  17-18. Memory @ the Core • Usually the biggest problem, memory-wise • It never used to be a problem, but multi-field-centre jobs hit hard • A bigger subint means more memory (to store the datastream baseband) • More threads mean more memory, at the pre-average spectral resolution • Buffering more FFTs costs more (multiplied by the number of threads, too!)

  19. Memory @ the Core • Tweaking: • subintNS • nThreads (threads file) • numBufferedFFTs • And be aware of: • nFFTChans (for multiple phase centres / high spectral resolution) • The number of phase centres • A .v2d sketch follows below.
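
  A sketch of the memory-relevant .v2d knobs (the setup name is made up, the values are illustrative, and parameter names can vary between DiFX versions; nThreads lives in the .threads file, sketched after slide 30):

        SETUP wideField
        {
          subintNS        = 25600000  # shorter subints -> less baseband held per Core
          numBufferedFFTs = 10        # fewer buffered FFTs -> less memory per thread
          nFFTChan        = 4096      # pre-average spectral resolution (multi-phase-centre work)
        }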

  20-21. Memory @ the FxManager • Tweaking: visBufferLength • This multiplies the size of a single visibility set (nChan, nBaselines, nPhaseCentres) • Generally not a problem • Note: visBufferLength should not be too short, especially if you have many (and especially heterogeneous) Core nodes, since subints can arrive out of order • A sizing sketch follows below.
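
  Roughly, the FxManager buffer memory is visBufferLength times the size of a single visibility set; reusing the illustrative numbers from slides 14-15:

        ~3 MB per visibility set x visBufferLength = 32 -> ~94 MB

  so even generous values are usually cheap, which is why this is rarely a problem.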

  22. CPU @ the Datastream • The load on the Datastream is usually pretty light • But the Datastream often runs on old hardware (e.g. Mk5 units) with limited CPU capacity • A couple of options can cause problematically high loads: • Tsys extraction (.v2d: tcalFreq = xx) • Interlaced VDIF formats (used with multi-thread VDIF data, e.g. the phased EVLA) • More efficient implementations are coming; for now, buy a faster CPU if needed! • A .v2d sketch follows below.
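
  For reference, Tsys extraction is enabled per antenna in the .v2d file; a minimal sketch (the antenna name is illustrative, and 80 Hz is the usual VLBA switched-power rate):

        ANTENNA HN
        {
          tcalFreq = 80   # switched-power (Tcal) extraction rate in Hz
        }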

  23. CPU @ the Core • Many considerations here, including parameters usually fixed by the science • Number of phase centres • Spectral resolution (nChan/nFFTChan) • Plus several on array management • strideLength • numBufferedFFTs • xmacLength • And then a few others as well: • nThreads • fringe rotation order

  24-25. CPU @ the Core • Number of phase centres • For each phase centre, a phase rotation and a separate accumulation from the thread buffer to the main buffer • That costs CPU (proportional to the number of baselines and the number of phase centres), but it also ensures that the results don't fit in cache (more later)

  26. CPU @ the Core • Spectral resolution • More channels mean a bigger FFT, and that costs CPU • It doesn't typically follow the theoretical N log N scaling like it should: bigger gets worse fast beyond ~1024 channels due to cache performance • Really big (>= 8192 channels/subband) gets very expensive • Worst of all, this typically comes in combination with multiple phase centres (required to avoid bandwidth smearing)! • A rough cache sketch follows below.
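
  A rough cache argument, with illustrative numbers (not from the slides):

        8192 channels/subband -> 8192-point complex FFTs
        8192 complex floats x 8 bytes = 64 KB per FFT buffer
        64 KB already exceeds a typical 32 KB L1 data cache (before twiddle
        factors and output arrays), so cache misses grow and the observed
        cost rises faster than the N log N ideal.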

  27. CPU @ the Core • Array management • #1: strideLength (the auto setting is usually best) [Diagram: fringe rotation phase ramping from -180° to +180° across one FFT of data; sin/cos is evaluated for the first “strideLength” samples and for every “strideLength”-th sample after that.]

  28. CPU @ the Core • Array management • #2: numBufferedFFTs (auto = 10 usually OK) • Mitigates the cache-miss problem by roughly the buffering factor (x10) [Diagram: FFT results are precomputed for numBufferedFFTs FFTs, one station (“Mode 1” … “Mode N”) at a time; the visibility buffer is too big for cache, but one slot of it fits in cache.]

  29. CPU @ the Core • Array management • #3: xmacLength (the auto setting of 128 is usually fine; it further subdivides the XMAC (cross-multiply and accumulate) step) [Diagram: as on slide 28.] • A combined .v2d sketch of these knobs follows below.
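
  Pulling the three array-management knobs together, a hedged .v2d sketch (numBufferedFFTs = 10 and xmacLength = 128 echo the auto settings quoted above; the strideLength value is simply illustrative, since the slides only say the auto setting is usually best):

        SETUP default
        {
          strideLength    = 16    # samples between exact sin/cos evaluations
          numBufferedFFTs = 10    # FFT results precomputed per station before XMAC
          xmacLength      = 128   # channels per cross-multiply/accumulate chunk
        }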

  30. CPU @ the Core • nThreads • Usually, set nThreads = n(CPU cores) - 1 • Occasionally it can be advantageous to use even fewer threads (to avoid swapping or cache contention) • A threads-file sketch follows below.
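
  For example, with three Core nodes of 8 CPU cores each, the .threads file might look like this (a hedged sketch of the mpifxcorr threads-file format as I understand it: a count line followed by one thread count per Core node):

        NUMBER OF CORES:    3
        7
        7
        7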

  31-33. CPU @ the Core • Fringe rotation order • The default is 1, and this is almost always fine • 2nd order is only ever needed for very high fringe rates combined with very long FFTs (space VLBI of masers?) • BUT: 0th order could often be used, and almost never is: it can be about 25% faster • .v2d: fringeRotOrder = [0, 1, 2] [Diagram: fringe rotation phase vs time across two FFT intervals; when the fringe rate is too high, the 0th-order (piecewise-constant) approximation fails, but at low fringe rates it can be acceptable.]
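
  In .v2d terms, a minimal sketch (the setup name is made up):

        SETUP lowFringeRate
        {
          fringeRotOrder = 0   # piecewise-constant fringe rotation: ~25% faster,
                               # acceptable only when fringe rates are low
        }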

  34. CPU @ the FxManager • CPU load at the FxManager is typically light - it only does low-cadence accumulation and scaling of visibilities • Very short subintNS can potentially lead to problems (although network issues are more likely)

  35. Questions?
