
Overview of the CrossGrid Project



  1. Overview of the CrossGrid Project Marian Bubak Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland and Michał Turała Institute of Nuclear Physics, Cracow, Poland

  2. Towards the CrossGrid
  • 1st meeting January 24, 2001, to join DataGrid
  • CPA9 Call
  • Extended collaboration meeting at GGF1 (March 7)
  • 23 partners
  • New type of applications
  • Proposal submitted April 22, 2001; 22 partners
  • Comments of reviewers and PO
  • Negotiations October 24, 2001; 21 partners
  • ...

  3. CrossGrid Collaboration
  • Ireland: TCD Dublin
  • Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
  • Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
  • Netherlands: UvA Amsterdam
  • Slovakia: II SAS Bratislava
  • Austria: U. Linz
  • Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
  • Greece: Algosystems, Demo Athens, AuTh Thessaloniki
  • Portugal: LIP Lisbon
  • Italy: DATAMAT
  • Cyprus: UCY Nicosia

  4. Main Objectives
  • A new category of Grid-enabled applications:
  • computing- and data-intensive
  • distributed
  • near-real-time response (a person in the loop)
  • layered
  • New programming tools
  • A Grid that is more user-friendly, secure, and efficient
  • Interoperability with other Grids
  • Implementation of standards

  5. CrossGrid Architecture (layered, top to bottom)
  • Interactive, Compute and Data Intensive Applications (WP1):
  • interactive simulation and visualisation of a biomedical system
  • flooding crisis team support
  • distributed data analysis in HEP
  • weather forecast and air pollution modelling
  • Grid Application Programming Environment (WP2):
  • MPI code debugging and verification
  • metrics and benchmarks
  • interactive and semiautomatic performance evaluation tools
  • New Grid Services and Tools (WP3):
  • portals and roaming access
  • Grid resource management
  • Grid monitoring
  • optimisation of data access
  • Supporting components: Grid Visualisation Kernel, HLA, data mining
  • Underlying layers: services from DataGrid, GriPhyN, ...; Globus middleware; fabric infrastructure

  6. Key functionalities of applications
  • Data gathering:
  • data generators and databases geographically distributed
  • selected on demand
  • Processing:
  • needs large processing capacity on demand
  • interactive
  • Presentation:
  • complex data require versatile 3D visualisation
  • support for interaction and feedback to other components

  7. Why Interactive Computing?
  • Goal: from data, via information, to knowledge => planning and management
  • Complexity: huge data sets, complex processes
  • Approach: parametric exploration and sensitivity analyses:
  • combine raw (sensory) data with simulation
  • Person in the loop:
  • sensory interaction
  • intelligent short-cuts

  8. Common issues of applications
  • Inherently distributed applications profit from the Grid approach
  • All tasks require high performance & MPI:
  • 1.1 and 1.2: interactive, near-real-time
  • 1.3 and 1.4: high throughput
  • Data mining: 1.3 and 1.4
  • Data discovery: 1.2 and 1.4

  9. Example – medical application

  10. Architecture

  11. Distributed Data Analysis in HEP
  • Complementarity with the DataGrid HEP application package:
  • CrossGrid will develop an interactive end-user application for physics analysis, making use of the products of the non-interactive simulation and data-processing stages of DataGrid that precede it
  • Apart from the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases:
  • two possible implementations (to be tested in running experiments):
  • a three-tier model accessing an OODBMS or an O/R DBMS
  • a more HEP-specific solution such as ROOT
  • User-friendly thanks to dedicated portal tools
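The distinction between file-level and object-level access can be illustrated with a minimal sketch. This is a hypothetical three-tier example (the class and schema names are invented for illustration, not CrossGrid code): the middle tier translates a physics selection into a database query, so only matching event objects are returned to the client, rather than the whole files that contain them.

```python
import sqlite3

class EventService:
    """Hypothetical middle tier of a three-tier model: serves event
    objects from a relational store (here sqlite3 stands in for an
    O/R DBMS)."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(run INTEGER, event INTEGER, energy REAL)"
        )

    def insert(self, run, event, energy):
        self.conn.execute(
            "INSERT INTO events VALUES (?, ?, ?)", (run, event, energy)
        )

    def select(self, min_energy):
        # Object-level access: only the events matching the cut cross
        # the tier boundary, not entire data files.
        cur = self.conn.execute(
            "SELECT run, event, energy FROM events WHERE energy >= ?",
            (min_energy,),
        )
        return [{"run": r, "event": e, "energy": en} for r, e, en in cur]

service = EventService()
service.insert(1, 100, 42.0)
service.insert(1, 101, 7.5)
selected = service.select(min_energy=10.0)
```

In a real deployment the store would be a distributed OODBMS/O-R DBMS or a ROOT-based solution, and the service tier would sit between the portal and the Grid storage.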

  12. Distributed Data Analysis in HEP
  • Several challenging points:
  • access to large distributed databases in the Grid
  • development of distributed data-mining techniques
  • definition of a layered application structure
  • integration of user-friendly interactive access
  • Focus on the LHC experiments (ALICE, ATLAS, CMS, and LHCb)

  13. WP2 - Grid Application Programming Environments
  • Objective: to specify, develop, integrate, and test tools that facilitate the development and tuning of parallel, distributed, high-performance and high-throughput computing applications on Grid infrastructures

  14. WP2 - Grid Application Programming Environments
  Six tasks in WP2:
  • 2.0 Co-ordination and management
  • 2.1 Tools requirement definition
  • 2.2 MPI code debugging and verification
  • 2.3 Metrics and benchmarks
  • 2.4 Interactive and semiautomatic performance evaluation tools
  • 2.5 Integration, testing and refinement
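One concrete kind of check a task-2.2 MPI verification tool might perform is pairing the point-to-point sends and receives recorded in an execution trace and flagging messages that were sent but never received. The sketch below is hypothetical (the trace format and function name are invented for illustration, not the WP2 design):

```python
from collections import Counter

def unmatched_sends(trace):
    """trace: list of (op, src, dst, tag) events, where op is
    "send" or "recv". Returns the (src, dst, tag) keys of messages
    sent more often than they were received."""
    pending = Counter()
    for op, src, dst, tag in trace:
        key = (src, dst, tag)
        if op == "send":
            pending[key] += 1
        elif op == "recv":
            pending[key] -= 1
    return {key: n for key, n in pending.items() if n > 0}

trace = [
    ("send", 0, 1, 7),   # rank 0 sends to rank 1 with tag 7
    ("recv", 0, 1, 7),   # rank 1 receives it
    ("send", 1, 2, 3),   # this message is never received
]
orphans = unmatched_sends(trace)
```

A real verifier would also have to account for wildcard receives, collective operations, and buffering semantics, which is what makes the task non-trivial.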

  15. WP2 - Components and relations to other WPs
  • Components: MPI verification (2.2), benchmarks (2.3), performance analysis (2.4) with performance measurement, automatic analysis, an analytical model, and visualisation
  • Inputs: application source code; WP1 applications running on the WP4 testbed
  • Uses Grid monitoring (3.3)

  16. WP3 Objectives
  • Tools for the development of interactive compute- and data-intensive applications
  • To provide user-friendly Grid environments
  • To simplify the applications and Grid access by supporting the end user
  • To achieve a reasonable trade-off between resource usage efficiency and application speedup
  • To support management issues while accessing resources
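The trade-off between resource usage efficiency and application speedup is the kind of decision a Grid resource manager (task 3.2) must make for every job. A minimal, hypothetical sketch (the site names and best-fit policy are invented for illustration, not the WP3 algorithm): satisfy the job's CPU request while leaving the smallest idle surplus, so large free blocks stay available for jobs that need them.

```python
def pick_site(sites, cpus_needed):
    """sites: dict mapping site name -> free CPUs.
    Returns the best-fit site, or None if no site has capacity."""
    candidates = {name: free for name, free in sites.items()
                  if free >= cpus_needed}
    if not candidates:
        return None
    # Best fit: the smallest surplus preserves large free blocks
    # for jobs that actually need them.
    return min(candidates, key=lambda name: candidates[name] - cpus_needed)

sites = {"cyfronet": 64, "fzk": 128, "lip": 16}
chosen = pick_site(sites, cpus_needed=32)   # 64 - 32 beats 128 - 32
```

A production scheduler would additionally weigh data locality, network cost, and job priorities, which is where the efficiency-versus-speedup tension becomes real.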

  17. WP3 - Components and relations to other WPs
  • Components: portals and roaming access (3.1), Grid resource management (3.2), Grid monitoring (3.3), optimisation of data access (3.4), tests and integration (3.5)
  • Relations: end users, applications (WP1), performance evaluation tools (2.4), testbed (WP4), and WP1, WP2, WP5

  18. Testbed Organisation (WP4)
  • Testbed setup and incremental evolution:
  • from several local testbeds to a fully integrated one
  • Integration with DataGrid:
  • common design; environment for HEP applications
  • Infrastructure support:
  • flexible fabric management tools and network support
  • Verification and quality control:
  • reliability of the middleware and network infrastructure

  19. CrossGrid WP4 - International Testbed Organisation
  • Partners in WP4: TCD Dublin, PSNC Poznan, UvA Amsterdam, ICM & IPJ Warsaw, FZK Karlsruhe, CYFRONET Cracow, II SAS Bratislava, USC Santiago, CSIC Santander, LIP Lisbon, AuTh Thessaloniki, UAB Barcelona, CSIC Madrid, CSIC Valencia, DEMO Athens, UCY Nicosia
  • WP4 led by CSIC (Spain)

  20. WP4 - International Testbed Organisation
  Tasks in WP4:
  • 4.0 Coordination and management (task leader: J. Marco, CSIC, Santander)
  • coordination with WP1, WP2, WP3
  • collaborative tools (web + videoconferencing + repository)
  • Integration Team
  • 4.1 Testbed setup & incremental evolution (task leader: R. Marco, CSIC, Santander)
  • define installation
  • deploy testbed releases
  • trace security issues
  • Testbed site responsibles:
  • CYFRONET (Krakow): A. Ozieblo
  • ICM (Warsaw): W. Wislicki
  • IPJ (Warsaw): K. Nawrocki
  • UvA (Amsterdam): D. van Albada
  • FZK (Karlsruhe): M. Kunze
  • II SAS (Bratislava): J. Astalos
  • PSNC (Poznan): P. Wolniewicz
  • UCY (Cyprus): M. Dikaiakos
  • TCD (Dublin): B. Coghlan
  • CSIC (Santander/Valencia): S. Gonzalez
  • UAB (Barcelona): G. Merino
  • USC (Santiago): A. Gomez
  • UAM (Madrid): J. del Peso
  • Demo (Athens): C. Markou
  • AuTh (Thessaloniki): D. Sampsonidis
  • LIP (Lisbon): J. Martins

  21. WP4 - International Testbed Organisation
  Tasks in WP4 (continued):
  • 4.2 Integration with DataGrid (task leader: M. Kunze, FZK)
  • coordination of testbed setup
  • knowledge exchange
  • participation in WP meetings
  • 4.3 Infrastructure support (task leader: J. Salt, CSIC, Valencia)
  • fabric management
  • help desk
  • provision of an installation kit
  • network support
  • 4.4 Verification & quality control (task leader: J. Gomes, LIP)
  • feedback
  • improving the stability of the testbed

  22. Technical Coordination
  • Merging of requirements
  • Specification and refinement of the CrossGrid architecture (protocols, APIs; HLA, CCA, ...)
  • Establishing standard operational procedures:
  • repository access procedures
  • problem-reporting mechanism
  • change-request handling mechanism
  • release preparation procedure
  • Specification of the structure of deliverables
  • Approach: rapid prototyping and iterative engineering

  23. Project Phases
  • M 1-3: requirements definition and merging
  • M 4-12: first development phase: design, first prototypes, refinement of requirements
  • M 13-24: second development phase: integration of components, second prototypes
  • M 25-32: third development phase: complete integration, final code versions
  • M 33-36: final phase: demonstration and documentation

  24. Clustering with Other Projects
  • Objective: exchange of
  • information
  • software components
  • Our partners:
  • DATAGRID
  • DATATAG
  • GRIDLAB
  • EUROGRID and GRIP
  • GRIDSTART
  • Participation in GGF

  25. Expected Results of the CrossGrid
  • Grid-enabled interactive applications
  • Elaborated methodology
  • Generic application architecture
  • New programming tools
  • New Grid services
  • Extension of the Grid in Europe and to new virtual organisations

  26. Dissemination & Exploitation
  • The methods and software developed will be made available to the scientific community
  • By each collaboration partner:
  • topical conferences, GGF, national Grid initiatives
  • MSc and PhD theses and lectures on Grid technology
  • Centralised:
  • the CrossGrid vortal
  • workshops, seminars, user/focus groups
  • newsletter, brochures
  • industrial deployment

  27. Overall Links between WPs and Tasks
  • Coordination: 1.0, 2.0, 3.0, 4.0, and 5.1 Coordination & Management
  • Applications: 1.1-1.4
  • WP2: 2.1 Requirements, 2.2-2.4 Tools, 2.5 Tests
  • WP3: 3.1-3.4 Services, 3.5 Tests
  • WP4: 4.1, 4.3, 4.4 Testbeds; 4.2 Integration with DataGrid
  • Project level: 5.2 Architecture Team, 5.3 Dissemination & Exploitation
  • External: GGF, DataGrid, other Grid projects

  28. Ready to start January 1, 2002
