
ApGrid: Current Status and Future Direction



1. ApGrid: Current Status and Future Direction
   Yoshio Tanaka (AIST)

2. ApGrid: Asia Pacific Partnership for Grid Computing
   [Slide figure: ApGrid in Asia, linked to North America and Europe through international collaboration and standardization. The ApGrid Testbed is an international Grid testbed spanning the Asia Pacific countries.]
   • ApGrid focuses on
     • Sharing resources, knowledge, and technologies
     • Developing Grid technologies
     • Helping others use our technologies to create new applications
     • Collaborating on each other's work
   • Possible applications on the Grid
     • Bioinformatics (rice genome, etc.)
     • Earth science (weather forecasting, flood prediction, earthquake prediction, etc.)

3. PRAGMA: Pacific Rim Application and Grid Middleware Assembly
   http://www.pragma-grid.net

4. History and Future Plan
   [Timeline figure, reconstructed chronologically; ApGrid and PRAGMA appear as parallel tracks.]
   2000
     • Kick-off meeting, Yokohama, Japan
     • Presentation @ GF5, Boston, USA
   2001
     • 1st ApGrid Workshop, Tokyo, Japan
     • Demo @ HPC Asia, Gold Coast, Australia
     • Presentation @ SC2001 SC Global event
   2002
     • 1st Core Meeting, Phuket, Thailand
     • 1st PRAGMA Workshop, San Diego, USA
     • 2nd PRAGMA Workshop, Seoul, Korea
     • Presentation @ APAN, Shanghai, China
     • Demo @ iGrid2002, Amsterdam, Netherlands
     • Demo @ SC2002, Baltimore, USA (50 CPUs)
     • 2nd ApGrid Workshop / Core Meeting, Taipei, Taiwan

5. History and Future Plan (cont'd)
   2003
     • 3rd PRAGMA Workshop, Fukuoka, Japan
     • Demo @ CCGrid, Tokyo, Japan (100 CPUs)
     • 4th PRAGMA Workshop, Melbourne, Australia (200 CPUs)
     • Demo & ApGrid informal meeting @ APAC'03, Gold Coast, Australia (250 CPUs)
     • 5th PRAGMA Workshop, Hsinchu, Taiwan (300 CPUs)
     • Demo @ SC2003, joint demo with TeraGrid, Phoenix, USA (853 CPUs)
   2004
     • Presentation @ APAN, Hawaii, USA
     • 6th PRAGMA Workshop, Beijing, China
     • Asia Grid Workshop (HPC Asia), Omiya, Japan
     • 7th PRAGMA Workshop, San Diego, USA
     • Demo @ SC2004, Pittsburgh, USA

6. ApGrid/PRAGMA Testbed
   • Architecture, technology
     • Based on GT2
     • Allows multiple CAs
     • Builds an MDS tree
   • Grid middleware/tools from the Asia Pacific region
     • Ninf-G (GridRPC programming; a client sketch follows this slide)
     • Nimrod-G (parametric modeling system)
     • SCMSWeb (resource monitoring)
     • Grid Data Farm (Grid file system), etc.
   • Status
     • 26 organizations (10 countries)
     • 27 clusters (889 CPUs)
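To make the GridRPC programming model above concrete, here is a minimal client sketch against the standard GridRPC C API, which Ninf-G implements. The configuration file name "client.conf" and the remote function name "sim/calc" are hypothetical placeholders, not part of the original slides.

    /* Minimal GridRPC client sketch (standard GridRPC C API, as implemented
     * by Ninf-G). "client.conf" and "sim/calc" are hypothetical placeholders. */
    #include <stdio.h>
    #include "grpc.h"

    int main(int argc, char *argv[])
    {
        grpc_function_handle_t handle;
        double input = 1.0, result = 0.0;

        /* Read the client configuration (server addresses, protocol settings). */
        if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
            fprintf(stderr, "grpc_initialize failed\n");
            return 1;
        }

        /* Bind a handle to a remote executable on the default server. */
        grpc_function_handle_default(&handle, "sim/calc");

        /* Synchronous remote call: blocks until the result comes back. */
        grpc_call(&handle, input, &result);
        printf("result = %f\n", result);

        grpc_function_handle_destruct(&handle);
        grpc_finalize();
        return 0;
    }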

7. Users, Applications and Experiences
   • Users
     • Participants of ApGrid and/or PRAGMA
   • Applications: scientific computing
     • Quantum chemistry, molecular energy calculations, astronomy, climate simulation, molecular biology, structural biology, ecology and environment, SARS Grid, neuroscience, telescience, ...
   • Experiences
     • Successful resource sharing among more than 10 sites at the application level
   • Lessons learned
     • Initiation takes a great deal of effort: installation of GT2/JobManager, CA setup, firewall configuration, etc.
     • Difficulties caused by the bottom-up approach
       • Resources are not dedicated
       • Incompatibilities between different versions of software
     • Performance problems (MDS, etc.)
     • Instability of resources
     • The key issue is sociological rather than technical

8. Behavior of the System
   [Slide figure: a Ninf-G client at AIST dispatches work to servers on the NCSA cluster (225 CPUs), the AIST cluster (50 CPUs), the Titech cluster (200 CPUs), and the KISTI cluster (25 CPUs).]
   A task-farming sketch of this setup follows the slide.
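Below is a rough sketch, in the same GridRPC C API, of how a client like this can farm independent tasks out to several clusters at once using asynchronous calls. The server host names, the function name "sim/calc", and the task count are hypothetical placeholders, and error handling is abbreviated.

    /* Task-farming sketch over multiple clusters via asynchronous GridRPC
     * calls. Server names and "sim/calc" are hypothetical placeholders. */
    #include <stdio.h>
    #include "grpc.h"

    #define NSERVERS 4
    #define NTASKS  16

    int main(int argc, char *argv[])
    {
        /* One handle per cluster front end (host names are placeholders). */
        const char *servers[NSERVERS] = {
            "ncsa.example.org", "aist.example.org",
            "titech.example.org", "kisti.example.org"
        };
        grpc_function_handle_t handles[NSERVERS];
        grpc_sessionid_t sessions[NTASKS];
        double results[NTASKS];
        int i;

        grpc_initialize("client.conf");
        for (i = 0; i < NSERVERS; i++)
            grpc_function_handle_init(&handles[i], (char *)servers[i], "sim/calc");

        /* Launch all tasks without blocking, round-robin across clusters. */
        for (i = 0; i < NTASKS; i++)
            grpc_call_async(&handles[i % NSERVERS], &sessions[i],
                            (double)i, &results[i]);

        /* Block until every outstanding session has completed. */
        grpc_wait_all();

        for (i = 0; i < NTASKS; i++)
            printf("task %d -> %f\n", i, results[i]);

        for (i = 0; i < NSERVERS; i++)
            grpc_function_handle_destruct(&handles[i]);
        grpc_finalize();
        return 0;
    }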

9. Preliminary Evaluation
   • Testbed: 500 CPUs
     • TeraGrid: 225 CPUs (NCSA)
     • ApGrid: 275 CPUs (AIST, Titech, KISTI)
   • Ran 1,000 simulations
     • 1 simulation = 20 seconds
     • 1,000 simulations = 20,000 seconds ≈ 5.5 hours if run on a single PC
   • Results
     • 150 seconds = 2.5 minutes
   • Insights
     • Ninf-G2 works efficiently on a large-scale cluster of clusters
     • Ninf-G2 provides good performance for fine-grained task-parallel applications on a large-scale Grid
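A quick sanity check of these numbers (derived from the figures above, not stated on the slide): the sequential work is 1,000 × 20 s = 20,000 s, so finishing in 150 s is a speedup of about 20,000 / 150 ≈ 133. The ideal time on 500 CPUs would be 20,000 / 500 = 40 s, so the parallel efficiency is roughly 40 / 150 ≈ 27%, a plausible figure for 20-second tasks dispatched over wide-area links.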

10. Observations
   • Still a "grass-roots" organization
     • Little administrative formality (cf. PRAGMA, APAN, APEC/TEL, etc.)
     • Difficulty in establishing collaborations with other bodies
   • Unclear membership rules
     • Joining/leaving, membership levels
     • Rights/obligations
   • Vague mission, yet (potentially) large computing resources have already been gathered

11. Observations (cont'd)
   • Duplication of effort across "similar" activities
     • Organization-wise
       • APAN: participation by country
       • PRAGMA: most member organizations overlap with ApGrid's
     • Operation-wise
       • ApGrid testbed vs. PRAGMA resources may cause confusion
       • Technically the same approach: multi-Grid federation
     • Network-wise
       • Primarily APAN / TransPAC links
       • Skillful engineering team

12. Summary of current status
   • Difficulties are caused not by technical problems but by sociological/political ones
     • Each site has its own policies
       • Account management
       • Firewalls
       • Trusted CAs
       • ...
     • Differences in interests
       • Applications, middleware, networking, etc.
     • Differences in culture, language, etc.
   • Human interaction is very important

13. Summary of current status (cont'd)
   • Activities at the GGF
     • Production Grid Management RG
       • Drafting a case study document on the ApGrid Testbed
     • Groups in the Security Area
       • Policy Management Authority RG (not yet approved)
         • In discussion with representatives from the DOE Science Grid, NASA IPG, EUDG, etc.
       • Federation/publishing of CAs (will kick off; I will serve as one of the co-chairs)

14. Summary of current status (cont'd)
   • What has been done?
     • Resource sharing among more than 10 sites (853 CPUs used by a Ninf-G application)
     • GT2 adopted as the common software
   • What hasn't?
     • Formalizing "how to use the Grid testbed"
       • I could use it, but it is difficult for others: my accounts at each site were arranged through personal communication
     • Providing documentation
     • Keeping the testbed stable
     • Developing management tools
       • Information browsing
       • CA/certificate management

15. Future Direction (proposal)
   • Draft an "Asia Pacific Grid Middleware Deployment Guide", a recommendation document for deploying Grid middleware
     • Minimum requirements
     • Configuration
   • Draft an "Instruction of Grid Operation in the Asia Pacific Region", describing how to run a Grid Operation Center that keeps the Grid testbed stable
   • Need support from APAN
     • Ask APAN to approve the documents as recommendations and to encourage member countries to follow them when deploying Grid middleware

16. Other issues (technical)
   • Should consider a GT3/GT4-based Grid testbed
   • Each CA must provide a CP/CPS
   • International collaboration
     • TeraGrid, UK e-Science, EUDG, etc.
   • Run more applications to evaluate the feasibility of the Grid
     • Large-scale clusters + fat links
     • Many small clusters + thin links
