
Data Parallel Quadtree Indexing and Spatial Query Processing of Complex Polygon Data on GPUs


Presentation Transcript


  1. Data Parallel Quadtree Indexing and Spatial Query Processing of Complex Polygon Data on GPUs. Jianting Zhang 1,2, Simin You 2, Le Gruenwald 3. 1 Department of Computer Science, CUNY City College (CCNY); 2 Department of Computer Science, CUNY Graduate Center; 3 School of Computer Science, the University of Oklahoma. CISE/IIS Medium Collaborative Research Grants 1302423/1302439: “Spatial Data and Trajectory Data Management on GPUs”

  2. Outline
• Introduction & Background
• Application: Large-Scale Biodiversity Data Management
• Data Parallel Designs and Implementations
  • Polygon Decomposition
  • Quadtree Construction
  • Spatial Query Processing
• Experiments
• Summary and Future Work

  3. Parallel Computing – Hardware. (Figure: architecture diagram contrasting a CPU host (chip multiprocessor with large shared cache, DRAM, disk/SSD), a GPU (many SIMD cores with local caches and GDRAM, attached over PCI-E, threads organized in thread blocks), and an Intel MIC (4-thread in-order cores on a ring bus).) Example configuration: 16 Intel Sandy Bridge CPU cores + 128GB RAM + 8TB disk + GTX TITAN + Xeon Phi 3120A ~ $9,994

  4. Nvidia GTX Titan (Feb. 2013)
• 7.1 billion transistors (551 mm²)
• 2,688 processors
• 4.5 TFLOPS SP and 1.3 TFLOPS DP
• Max bandwidth 288.4 GB/s
• PCI-E peripheral device
• 250 W (17.98 GFLOPS/W SP)
• Suggested retail price: $999
ASCI Red (1997): the first system to sustain 1 teraflops, with 9,298 Intel Pentium II Xeon processors in 72 cabinets. What can we do today using a device that is more powerful than ASCI Red 17 years ago?

  5. GeoTECI@CCNY: ...building a highly-configurable experimental computing environment for innovative BigData technologies... (Figure: lab network spanning the Computer Science LAN, CCNY and the CUNY HPCC, with web/Linux/Windows app servers; a “brawny” GPU cluster including a Microway server (dual 8-core CPUs, 128GB memory, Nvidia GTX Titan, Intel Xeon Phi 3120A, 8TB storage) and an SGI Octane III (dual quad-core CPUs, 48GB memory, 2× Nvidia C2050); and a “wimpy” GPU cluster of Dell, HP, Lenovo and DIY workstations with assorted GPUs, including Nvidia Quadro 6000, GTX 480, FX3700, Quadro 5000m and AMD/ATI 7970.)

  6. Spatial Data Management vs. Computer Architecture: how to fill the big gap effectively? (Figure adapted from David Wentzlaff, “Computer Architecture”, Princeton University course on Coursera.)

  7. Parallel Computing – Languages & Libraries: GNU Parallel Mode, Thrust, Boost, CUDPP, Bolt. http://www.macs.hw.ac.uk/cs/techreps/docs/files/HW-MACS-TR-0103.pdf

  8. Data Parallelisms → Parallel Primitives → Parallel Libraries → Parallel Hardware. Source: http://parallelbook.com/sites/parallelbook.com/files/SC11_20111113_Intel_McCool_Robison_Reinders.pptx

  9. Outline
• Introduction & Background
• Application: Large-Scale Biodiversity Data Management
• Data Parallel Designs and Implementations
  • Polygon Decomposition
  • Quadtree Construction
  • Spatial Query Processing
• Experiments
• Conclusions and Future Work

  10. Managing Large-Scale Biodiversity Data
SELECT aoi_id, sp_id, sum(ST_Area(inter_geom))
FROM (
  SELECT aoi_id, sp_id, ST_Intersection(sp_geom, qw_geom) AS inter_geom
  FROM SP_TB, QW_TB
  WHERE ST_Intersects(sp_geom, qw_geom)
)
GROUP BY aoi_id, sp_id
HAVING sum(ST_Area(inter_geom)) > T;
http://geoteci.engr.ccny.cuny.edu/birds30s/BirdsQuest.html

  11. Indexing “Complex” Polygons. http://en.wikipedia.org/wiki/Simple_polygon http://en.wikipedia.org/wiki/Simple_Features
• “Complex” polygons: polygons with multiple rings (with holes), often highly overlapped
• Problems with indexing MBRs: an MBR is an inexpensive yet inaccurate approximation of a complex polygon, and has low pruning power when polygons are highly overlapped

  12. Indexing “Complex” Polygons. Fang et al. 2008: Spatial indexing in Microsoft SQL Server 2008 (SIGMOD’08) uses a B-Tree to index quadrants, but it is unclear how the quadrants are derived from polygons. http://xlinux.nist.gov/dads/HTML/linearquadtr.html (Zhang et al. 2009) (Zhang 2012): hours of runtime on bird range maps when extending GDAL/OGR (serial).

  13. Outline
• Introduction & Background
• Application: Large-Scale Biodiversity Data Management
• Data Parallel Designs and Implementations
  • Polygon Decomposition
  • Quadtree Construction
  • Spatial Query Processing
• Experiments
• Conclusions and Future Work

  14. Parallel Quadtree Construction and Parallel Query Processing: BFS vs. DFS traversal
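The BFS-style, level-by-level construction can be sketched as follows. This is a hedged CPU stand-in for the data parallel design: `std::sort` plays the role of `thrust::sort`, and the grouping loop plays the role of `thrust::reduce_by_key`; the struct and function names are illustrative, not the paper's actual code.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// One quadtree level: unique quadrant Morton codes plus, for each code,
// the number of child quadrants aggregated into it.
struct Level {
    std::vector<uint32_t> keys;
    std::vector<size_t>   counts;
};

// Build one parent level from a list of child-quadrant Morton codes.
// A parent code is the child code with the last 2 bits dropped.
Level buildParentLevel(std::vector<uint32_t> childKeys) {
    // Sorting groups quadrants that share a parent (thrust::sort on GPU).
    std::sort(childKeys.begin(), childKeys.end());
    Level parent;
    for (size_t i = 0; i < childKeys.size();) {
        uint32_t pkey = childKeys[i] >> 2;      // parent Morton code
        size_t j = i;
        while (j < childKeys.size() && (childKeys[j] >> 2) == pkey) ++j;
        // One parent record per distinct code (thrust::reduce_by_key on GPU).
        parent.keys.push_back(pkey);
        parent.counts.push_back(j - i);
        i = j;
    }
    return parent;
}
```

Applying the same step repeatedly, level by level, yields the full tree bottom-up, with every level processed in a fixed number of primitive calls.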

  15. Parallel Polygon Decomposition

  16. (0,2) (1,3)

  17. Observations:
• All operations are data parallel at the quadrant level;
• Quadrants may be at different levels and come from the same or different polygons;
• Each GPU thread processes one quadrant;
• Accesses to GPU memory can be coalesced for neighboring quadrants from the same polygons
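The per-quadrant data parallelism above can be sketched as one BFS iteration of polygon decomposition. This is an illustrative CPU sketch, not the paper's implementation: `classify` is an assumed callback (0 = outside the polygon, 1 = fully inside, 2 = crosses the boundary), and on the GPU the loop body maps to parallel primitives — a transform for classification, `copy_if` (stream compaction) for the finished quadrants, and a scatter/expand that emits four children per boundary quadrant.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// A quadrant: Morton code plus its level in the tree.
struct Quad { uint32_t key; int level; };

// One decomposition step: classify every active quadrant in parallel,
// keep interior quadrants as leaves, subdivide boundary quadrants.
void decomposeStep(const std::vector<Quad>& active,
                   const std::function<int(const Quad&)>& classify,
                   std::vector<Quad>& leaves,      // appended: interior quadrants
                   std::vector<Quad>& nextLevel) { // boundary quadrants, subdivided
    for (const Quad& q : active) {                 // one GPU thread per quadrant
        int c = classify(q);
        if (c == 1) {
            leaves.push_back(q);                   // copy_if / stream compaction
        } else if (c == 2) {
            for (uint32_t i = 0; i < 4; ++i)       // expand: 4 children per quadrant
                nextLevel.push_back({(q.key << 2) | i, q.level + 1});
        }                                          // c == 0: outside, dropped
    }
}
```

Because all quadrants of a level are processed in one pass, neighboring quadrants from the same polygon sit next to each other in the arrays, which is what makes the coalesced memory accesses possible.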

  18. Quadtree Construction • Spatial Query Processing
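For the query-processing side, one common pattern with linear quadtrees is that all leaf quadrants covered by a coarser query quadrant form a contiguous range of Morton codes, found with two binary searches (`thrust::lower_bound`/`upper_bound` on the GPU). The sketch below assumes that layout for illustration; it is not necessarily the paper's exact scheme.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Return the [first, last) index range of leaf Morton codes (at leafLevel)
// covered by a query quadrant qkey given at a coarser qLevel.
std::pair<size_t, size_t> rangeQuery(const std::vector<uint32_t>& sortedLeafKeys,
                                     uint32_t qkey, int qLevel, int leafLevel) {
    int shift = 2 * (leafLevel - qLevel);   // 2 bits per quadtree level
    uint32_t lo = qkey << shift;            // smallest leaf code under qkey
    uint32_t hi = (qkey + 1) << shift;      // one past the largest
    auto first = std::lower_bound(sortedLeafKeys.begin(), sortedLeafKeys.end(), lo);
    auto last  = std::lower_bound(sortedLeafKeys.begin(), sortedLeafKeys.end(), hi);
    return {size_t(first - sortedLeafKeys.begin()),
            size_t(last - sortedLeafKeys.begin())};
}
```

On the GPU, many such range queries run in a single batched `lower_bound` call over all query keys at once, which is where the primitive-based design gets its parallelism.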

  19. Experiment Setup
Species distribution data:
• 4,062 bird species in the Western Hemisphere
• 708,509 polygons
• 77,699,991 vertices
Hardware:
• Dual 8-core Sandy Bridge CPUs (2.60 GHz)
• 128GB memory
• Nvidia GTX Titan (6GB, 2,688 cores)
• Intel Xeon Phi 3120A (6GB, 57 cores)
• 8 TB storage
Software:
• CentOS 6.4 with GCC 4.7.2, TBB 4.2, ISPC 1.6, CUDA 5.5
• All vector initialization times in Thrust on GPUs are counted (newer versions of Thrust allow uninitialized device vectors)
• Performance can vary among CUDA SDK versions

  20. Runtimes (Polygon Decomposition). (Figure: runtimes for polygons grouped by size — G1 (10–100), G2 (100–1,000), G3 (1,000–10,000), G4 (10,000–100,000).)

  21. Comparisons with PixelBox* (Polygon Decomposition)
PixelBox*: the PixelBox algorithm [5] modified and extended to decompose single polygons (rather than computing the sizes of intersection areas of pairs of polygons) and to handle “complex” multi-ring polygons.
PixelBox*-shared: CUDA implementation using GPU shared memory for the stack
• DFS traversal with a batch size of N
• N cannot be too big (shared memory capacity) or too small (the GPU is underutilized if N is less than the warp size)
PixelBox*-global: CUDA implementation using GPU global memory for the stack
• DFS traversal with different batch sizes
• Coalesced global GPU memory accesses are efficient
Proposed technique: Thrust data parallel implementation on top of parallel primitives
• BFS traversal with a higher degree of parallelism
• Data parallel designs (using primitives) simplify implementations
• GPU shared memory is not explicitly used, which is more flexible
• Coalesced global GPU memory accesses are efficient
• But: large memory footprint (in the current implementation)
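The DFS-with-batches traversal of the PixelBox* variants can be sketched as follows. This is a hedged CPU sketch under stated assumptions: `classify` is the same illustrative 0/1/2 callback as before, and the explicit stack stands in for the shared-memory (PixelBox*-shared) or global-memory (PixelBox*-global) stack that the CUDA versions pop N quadrants from per round.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

struct QNode { uint32_t key; int level; };

// DFS decomposition with an explicit stack and batch size batchN:
// pop up to batchN quadrants per round, classify them, push boundary
// children back; stop subdividing at maxLevel.
std::vector<QNode> decomposeDFS(QNode root, int maxLevel, size_t batchN,
                                const std::function<int(const QNode&)>& classify) {
    std::vector<QNode> stack{root}, leaves;
    while (!stack.empty()) {
        size_t take = std::min(batchN, stack.size());  // pop a batch of quadrants
        std::vector<QNode> batch(stack.end() - take, stack.end());
        stack.resize(stack.size() - take);
        for (const QNode& q : batch) {                 // one thread per quadrant on GPU
            int c = classify(q);
            if (c == 1 || (c == 2 && q.level == maxLevel)) {
                leaves.push_back(q);                   // interior, or boundary at max depth
            } else if (c == 2) {
                for (uint32_t i = 0; i < 4; ++i)       // push the 4 children
                    stack.push_back({(q.key << 2) | i, q.level + 1});
            }                                          // c == 0: outside, dropped
        }
    }
    return leaves;
}
```

The batch-size tension described above is visible here: the stack's high-water mark bounds the shared-memory footprint, while `take` bounds how many threads have work in a round.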

  22. Summary and Future Work
• Diversified hardware makes it challenging to develop efficient parallel implementations of complex, domain-specific applications across platforms.
• The framework of data parallel designs on top of parallel primitives appears to be a viable solution in the context of managing and querying large-scale geo-referenced species distribution data.
• Experiments on 4,000+ bird species distribution maps have shown up to 190X speedups for polygon decomposition and 27X speedups for quadtree construction over serial implementations on a high-end GPU.
• Comparisons with the PixelBox* variations, which are native CUDA implementations, show that efficiency and productivity can be achieved simultaneously with the data parallel framework using parallel primitives.
Future work:
• Further understand the advantages and disadvantages of data parallel designs/implementations on parallel hardware (GPUs, MICs and CMPs) through domain-specific applications.
• More efficient polygon decomposition algorithms (e.g., scanline based) using parallel primitives.
• System integration and more applications.

  23. Q&A http://www-cs.ccny.cuny.edu/~jzhang/ jzhang@cs.ccny.cuny.edu
