Outline of NAREGI Project: Current Status and Future Direction
NEGST Workshop, June 23-24, 2006
Kenichi Miura, Ph.D., Information Systems Architecture Research Division, National Institute of Informatics, and Program Director, Science Grid NAREGI Program



  1. Outline of NAREGI Project: Current Status and Future Direction. NEGST Workshop, June 23-24, 2006. Kenichi Miura, Ph.D., Information Systems Architecture Research Division, National Institute of Informatics, and Program Director, Science Grid NAREGI Program, Center for Grid Research and Development

  2. Synopsis
  • National Research Grid Initiative (NAREGI)
  • Cyber Science Infrastructure Framework at NII
  • Project for Development and Applications of Advanced High-performance Supercomputer (FY2006-FY2012)

  3. Overview of NAREGI Project

  4. National Research Grid Initiative (NAREGI) Project: Overview
  • Started as an R&D project funded by MEXT (FY2003-FY2007); 2 B yen (~$17M) budget in FY2003
  • One of the Japanese Government's Grid computing projects (ITBL, Visualization Grid, GTRC, BioGrid, etc.)
  • Collaboration of national laboratories, universities, and industry in the R&D activities (IT and nano-science applications)
  • NAREGI testbed computer resources (FY2003)
  MEXT: Ministry of Education, Culture, Sports, Science and Technology

  5. National Research Grid Initiative (NAREGI) Project: Goals
  (1) To develop a Grid software system (R&D in Grid middleware and upper layer) as the prototype of the future Grid infrastructure for scientific research in Japan
  (2) To provide a testbed to prove that a high-end Grid computing environment (100+ Tflop/s expected by 2007) can be practically utilized in nano-science applications over SuperSINET
  (3) To participate in international collaboration (U.S., Europe, Asia-Pacific)
  (4) To contribute to standardization activities, e.g., GGF

  6. Organization of NAREGI
  • Ministry of Education, Culture, Sports, Science and Technology (MEXT)
  • Center for Grid Research and Development (National Institute of Informatics); Project Leader: Dr. K. Miura
    - Grid Middleware Integration and Operation Group
    - Grid Middleware and Upper Layer R&D
    - Deployment to the Cyber Science Infrastructure (CSI); operation and collaboration with the Computing and Communication Centers (7 national universities, etc.) via the Coordination and Operation Committee
  • Computational Nano-science Center (Institute for Molecular Science); Dir.: Dr. F. Hirata
    - R&D on grand challenge problems for Grid applications (ISSP, Tohoku-U, AIST, Inst. Chem. Research, KEK, etc.)
  • Joint R&D: Grid Technology Research Center (AIST), JAEA, TiTech, Kyushu-U, Osaka-U, Kyushu-Tech., Fujitsu, Hitachi, NEC
  • Collaboration: ITBL (joint research); Industrial Association for Promotion of Supercomputing Technology; SuperSINET utilization

  7. Participating Organizations
  • National Institute of Informatics (NII) (Center for Grid Research & Development)
  • Institute for Molecular Science (IMS) (Computational Nano-science Center)
  • Universities and national laboratories (joint R&D): AIST, JAEA, TiTech, Osaka-U, Kyushu-U, Kyushu Inst. Tech., Utsunomiya-U, etc.
  • Research collaboration: ITBL Project, national supercomputing centers, KEK, NAO, etc.
  • Participation from industry (IT and chemical/materials, etc.): Consortium for Promotion of Grid Applications in Industry

  8. NAREGI Software Stack
  • Grid-enabled nano-applications
  • Grid visualization, Grid PSE, Grid workflow
  • Grid programming: Grid RPC, Grid MPI
  • Data Grid; distributed information service
  • Super scheduler (Globus, Condor, UNICORE/OGSA); packaging
  • Grid VM
  • High-performance & secure Grid networking: SuperSINET
  • Computing resources: NII, IMS, and other research organizations

  9. R&D in Grid Software and Networking Area (Work Packages)
  • WP-1: Lower and middle-tier middleware for resource management: Matsuoka (TiTech), Kohno (ECU), Aida (TiTech)
  • WP-2: Grid programming middleware: Sekiguchi (AIST), Ishikawa (AIST)
  • WP-3: User-level Grid tools & PSE: Usami (NII), Kawata (Utsunomiya-U)
  • WP-4: Data Grid environment: Matsuda (Osaka-U)
  • WP-5: Networking, security & user management: Shimojo (Osaka-U), Oie (Kyushu Tech.), Imase (Osaka-U)
  • WP-6: Grid-enabling tools for nanoscience applications: Aoyagi (Kyushu-U)

  10. WP-2: Grid Programming – GridRPC/Ninf-G2 (AIST/GTRC)
  • GridRPC: a programming model using RPC on the Grid; high-level, tailored for scientific computing (c.f. SOAP-RPC); GridRPC API standardization by the GGF GridRPC WG
  • Ninf-G Version 2: a reference implementation of the GridRPC API; implemented on top of Globus Toolkit 2.0 (3.0 experimental); provides C and Java APIs
  [Diagram: on the server side, an IDL compiler generates the remote executable from a numerical library's IDL file and publishes interface information (LDIF file, MDS). The client (1) requests the interface, (2) receives the interface reply, (3) invokes the executable via GRAM fork, and (4) the remote executable connects back to the client.]
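
To make the RPC model concrete, here is a minimal GridRPC client sketch in C, following the function names standardized by the GGF GridRPC WG (grpc_initialize, grpc_function_handle_init, grpc_call) as implemented in the Ninf-G 2 style. The configuration file, server host, and remote function "mylib/mmul" are hypothetical placeholders, and exact signatures vary between Ninf-G versions, so treat this as a sketch rather than the project's actual code.

```c
/* Minimal GridRPC client sketch (GGF GridRPC API / Ninf-G 2 style).
 * "client.conf", "rpc.example.org" and "mylib/mmul" are hypothetical. */
#include <stdio.h>
#include <grpc.h>

#define N 100

int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    static double a[N * N], b[N * N], c[N * N];
    int n = N;

    /* Read the client configuration (server list etc.) and start up. */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind a handle to a remote executable described by its IDL entry. */
    grpc_function_handle_init(&handle, "rpc.example.org", "mylib/mmul");

    /* Synchronous remote call; arguments are marshalled according to
     * the interface information published by the server. */
    if (grpc_call(&handle, n, a, b, c) != GRPC_NO_ERROR)
        fprintf(stderr, "remote call failed\n");

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```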

  11. WP-2: Grid Programming – GridMPI (AIST and U-Tokyo)
  • GridMPI is a library which enables MPI communication between parallel systems in the Grid environment (e.g., cluster A and cluster B, each running YAMPII, bridged by an IMPI server). This realizes:
    - Huge jobs which cannot be executed on a single cluster system
    - Multi-physics jobs in a heterogeneous CPU architecture environment
  • Interoperability: IMPI (Interoperable MPI) compliant communication protocol; strict adherence to the MPI standard in the implementation
  • High performance: simple implementation; built-in wrapper to vendor-provided MPI libraries
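
Because GridMPI adheres strictly to the MPI standard, an ordinary MPI program needs no source changes to span two clusters; the IMPI layer only affects how the job is launched. A minimal sketch (plain MPI-1 C code; nothing GridMPI-specific is assumed):

```c
/* Ordinary MPI code: with GridMPI the same source runs unchanged
 * whether the ranks live in one cluster or are split across clusters
 * A and B, with inter-cluster traffic going through the IMPI layer. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double local_sum, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank works on its slice of a problem too large for either
     * cluster alone; here a trivial stand-in for real work. */
    local_sum = (double)rank;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, global_sum);

    MPI_Finalize();
    return 0;
}
```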

  12. WP-3: User-Level Grid Tools & PSE
  • Grid PSE: deployment of applications on the Grid; support for execution of deployed applications
  • Grid Workflow: workflow language independent of specific Grid middleware; GUI in task-flow representation
  • Grid Visualization: remote visualization of massive data distributed over the Grid; general Grid services for visualization

  13. RISM-FMO Loosely Coupled Simulation (Total Workflow). Workflow stages include: parallel computation of RISM; distributed computation of the initial charge distribution; distributed computation of single fragments; distributed computation of fragment pairs.

  14. RISM-FMO Loosely Coupled Simulation (Completion)

  15. Grid Visualization: Visualization of Distributed Massive Data
  • Image-based remote visualization: server-side parallel visualization; image transfer & composition
  [Diagram: at sites A and B, computation servers write results to file servers (1); parallel visualization modules render them (2); compressed image data are transferred and composed (3) into an animation file (4), which a service interface delivers to the visualization client (5).]
  • Advantages: no need for rich CPUs, memory, or storage on clients; no need to transfer large amounts of simulation data; no need to merge distributed massive simulation data; light network load; visualization views on PCs anywhere; visualization in high resolution without constraint
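
The "image transfer & composition" step can be pictured as a per-pixel depth merge of partial renderings produced at each site. The sketch below is an illustrative C fragment under the assumption of a simple RGBA-plus-depth pixel format; the actual NAREGI parallel visualization module's data layout is not described in this talk.

```c
/* Minimal sketch of depth-based image composition, the kind of merge
 * used in server-side parallel visualization: each node renders its
 * own data, then partial images are combined per pixel by depth test
 * before the compressed result is sent to the client.  The pixel
 * layout here is an illustrative assumption. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t r, g, b, a;   /* color */
    float   z;            /* depth: smaller = closer to the viewer */
} PixelZ;

/* Merge image `src` into `dst` (both npixels long): for each pixel,
 * keep the fragment nearer the camera. */
void composite_depth(PixelZ *dst, const PixelZ *src, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        if (src[i].z < dst[i].z)
            dst[i] = src[i];
    }
}
```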

  16. WP-4: NAREGI Data Grid β Architecture
  • Data Access Management: import data (Data 1 ... Data n) into Grid workflow jobs (Job 1 ... Job n)
  • Metadata Management: assign metadata to data for the grid-wide data sharing service
  • Data Resource Management: place and register data on the Grid; store data into the distributed file nodes of the grid-wide file system
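
The intended usage pattern, storing a file into the distributed file nodes and then assigning metadata so a later workflow job can import it, can be sketched as below. All types and functions here (gridfs_put, metadata_register) are hypothetical stand-ins, not the actual NAREGI or Gfarm API; the stubs simply echo what a real implementation would do.

```c
/* Illustrative sketch of the data-grid flow: place data on the
 * grid-wide file system, then register metadata for grid-wide sharing.
 * gridfs_put() and metadata_register() are HYPOTHETICAL stand-ins,
 * not the actual NAREGI/Gfarm API. */
#include <stdio.h>

typedef struct {
    const char *logical_name;  /* grid-wide name seen by workflow jobs */
    const char *owner_vo;      /* virtual organization owning the data */
    const char *description;   /* free-form metadata for discovery */
} GridMetadata;

static int gridfs_put(const char *local_path, const char *logical_name)
{
    /* Real code would store local_path into distributed file nodes. */
    printf("put %s -> %s\n", local_path, logical_name);
    return 0;
}

static int metadata_register(const GridMetadata *md)
{
    /* Real code would record md in the metadata management service. */
    printf("register metadata for %s (%s)\n",
           md->logical_name, md->owner_vo);
    return 0;
}

int main(void)
{
    GridMetadata md = {
        .logical_name = "/vo-nano/rism/charge-dist.dat",
        .owner_vo     = "vo-nano",
        .description  = "initial charge distribution for the FMO step",
    };

    if (gridfs_put("charge-dist.dat", md.logical_name) != 0)
        return 1;
    return metadata_register(&md) != 0;
}
```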

  17. Collaboration in Data Grid Area
  • High energy physics: KEK, EGEE
  • Astronomy: National Astronomical Observatory (Virtual Observatory)
  • Bio-informatics: BioGrid Project

  18. WP-6: Adaptation of Nano-science Applications to Grid Environment
  • Example: electronic structure in solutions, coupling RISM (Reference Interaction Site Model; solvent distribution analysis) at site A with FMO (Fragment Molecular Orbital method; electronic structure analysis) at site B
  • Each code runs on the Grid middleware at its site; the two communicate via GridMPI over SuperSINET (MPICH-G2, Globus), with data transformation between the different meshes
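
For intuition, two independently launched parallel solvers can be joined through an intercommunicator using MPI-2 dynamic process calls, and slide 21 notes that GridMPI targets MPI-2 compliance. The sketch below only illustrates that generic connect/accept mechanism under the assumption of one rank per side; the actual RISM-FMO coupling in NAREGI goes through its own data transformation component for the mesh conversion, so this is a simplification, not the project's code. Compile the RISM side with -DSERVER.

```c
/* Sketch: coupling two separately started MPI applications (a RISM
 * side and an FMO side) via MPI-2 connect/accept, one rank per side
 * for brevity.  NOT NAREGI's actual coupling code. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define NPTS 1024

int main(int argc, char *argv[])
{
    MPI_Comm inter;                    /* intercommunicator to the peer */
    char port[MPI_MAX_PORT_NAME];
    static double charge[NPTS] = {0};  /* stand-in for solvent charges */

    MPI_Init(&argc, &argv);
#ifdef SERVER
    /* RISM side: publish a port, wait for FMO, send the charges.
     * The port name is handed to the FMO side out of band. */
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("pass this port to the FMO side: %s\n", port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    MPI_Send(charge, NPTS, MPI_DOUBLE, 0, 0, inter);
    MPI_Close_port(port);
#else
    /* FMO side: connect with the port name given on the command line. */
    if (argc < 2) { MPI_Finalize(); return 1; }
    strncpy(port, argv[1], MPI_MAX_PORT_NAME - 1);
    port[MPI_MAX_PORT_NAME - 1] = '\0';
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    MPI_Recv(charge, NPTS, MPI_DOUBLE, 0, 0, inter, MPI_STATUS_IGNORE);
#endif
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}
```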

  19. Scenario for Multi-site MPI Job Execution
  • Example: an FMO job on a PC cluster with 128 CPUs (site C) and a RISM job on an SMP machine with 64 CPUs (site B), coupled with GridMPI through an IMPI server (site A); each site runs a CA, GridVM, and a local scheduler
  • Preparation via the PSE and workflow tool (WFT): (a) registration of the RISM and FMO sources, (b) deployment, (c) editing of the workflow and input files
  • Execution, as labeled in the diagram: (1) submission; resource query to the distributed information service; (3) negotiation by the super scheduler; (4) reservation; (5) sub-job agreement; (6) co-allocation and submission to the local schedulers; (10) monitoring and accounting; output files are examined with Grid visualization

  20. Research Topics: Computational Nanoscience Center (nano structures, their functions, and contributions to industry)
  • Functional nano-scale molecules (molecular-scale electronics, molecular magnetization, molecular capsules, supramolecules, nano-catalysts, fullerenes, nanotubes) → molecular-scale electronics, high-functional molecular catalysts (Institute for Molecular Science)
  • Nano-scale molecular assembly (self-organized molecular assembly, proteins, biomembranes, molecular-scale electronics, nanoporous liquids, quantum nano liquid-drops) → biomolecular elements, drug discovery (Institute for Molecular Science)
  • Nano-scale electronic systems (control of internal degrees of freedom, optical switches, spintronics, magnetic tunnel elements) → molecular superconductors, nonequilibrium materials (Institute for Materials Research)
  • Nano-scale magnetization (correlated electron systems, magnetic nanodots, light-induced nano-structures, quantum nano-structures, nano-spin glass, ferromagnetic nano-particles) → magnetic elements, optical electronic devices, memory elements (Institute for Solid State Physics)
  • Design of nano-scale complex systems (nano-electronics materials, quantum dots, quantum wires, composite nano-particles, composite nano-organized functional materials) → new telecommunication principles (National Institute of Advanced Industrial Science and Technology)

  21. Highlights of NAREGI β Release (2005-6)
  • Resource and execution management: GT4/WSRF-based OGSA-EMS incarnation (the first in the world, as of the α release); job management, brokering, reservation-based co-allocation, monitoring, accounting; network traffic measurement and control
  • Security: production-quality CA (NAREGI operates a production-level CA in the APGrid PMA); VOMS/MyProxy-based identity, security, monitoring, and accounting
  • Data Grid: WSRF-based grid-wide data sharing with Gfarm; grid-wide seamless data access
  • Grid-ready programming libraries: standards-compliant GridMPI (MPI-2) and GridRPC; high-performance communication; bridge tools for different types of applications in a concurrent job; support for data-format exchange
  • User tools: web-based portal; workflow tool with NAREGI-WFML; WS-based application contents and deployment service (a reference implementation of OGSA-ACS); large-scale interactive Grid visualization

  22. Network Topology Map of SINET/SuperSINET (Feb. 2006), with line speeds and the number of SINET participating organizations (as of Feb. 1, 2006)

  23. NAREGI Phase 1 Testbed (~3,000 CPUs, ~17 Tflops), connected by SuperSINET (10 Gbps MPLS)
  • Center for Grid R&D (NII): ~5 Tflops
  • Computational Nano-science Center (IMS): ~10 Tflops
  • Small test application clusters: ISSP, Kyoto Univ., Tohoku Univ., KEK, AIST, Kyushu Univ.
  • Other resources: TiTech Campus Grid, Osaka Univ. BioGrid, AIST SuperCluster

  24. Computer System for Grid Software Infrastructure R&D: Center for Grid Research and Development (5 Tflop/s, 700 GB memory)
  • File server: PRIMEPOWER 900 + ETERNUS3000 + ETERNUS LT160 (1 node/8 CPUs, SPARC64 V 1.3 GHz; memory 16 GB; storage 10 TB; back-up max. 36.4 TB)
  • High-performance distributed-memory compute servers: 2 x PRIMERGY RX200 (each 128 CPUs (Xeon, 3.06 GHz) + control node; memory 130 GB / 65 GB; storage 9.4 TB each; InfiniBand 4X, 8 Gbps)
  • Distributed-memory compute servers: 2 x Express 5800 (each 128 CPUs (Xeon, 2.8 GHz) + control node; memory 65 GB; storage 4.7 TB each; GbE, 1 Gbps)
  • Distributed-memory compute servers: 2 x HPC LinuxNetworx (each 128 CPUs (Xeon, 2.8 GHz) + control node; memory 65 GB; storage 4.7 TB each; GbE, 1 Gbps)
  • SMP compute servers: PRIMEPOWER HPC2500 (1 node, UNIX, SPARC64 V 1.3 GHz/64 CPUs; memory 128 GB; storage 441 GB); SGI Altix3700 (1 node, Itanium2 1.3 GHz/32 CPUs; memory 32 GB; storage 180 GB); IBM pSeries690 (1 node, POWER4 1.3 GHz/32 CPUs; memory 64 GB; storage 480 GB)
  • Networking: internal L3 switches at 1 Gbps (upgradable to 10 Gbps); external connection to SuperSINET via GbE (1 Gbps)

  25. Computer System for Nano-Application R&D: Computational Nano-science Center (10 Tflop/s, 5 TB memory)
  • SMP compute server: 16-way x 50 nodes (POWER4+ 1.7 GHz), multi-stage crossbar network; 5.0 TFLOPS; memory 3,072 GB; storage 2.2 TB
  • Distributed-memory compute servers (4 units): 818 CPUs (Xeon, 3.06 GHz) + control nodes, Myrinet2000 (2 Gbps); 5.4 TFLOPS; memory 1.6 TB; storage 1.1 TB/unit
  • File server: 16 CPUs (SPARC64 GP, 675 MHz); memory 8 GB; storage 30 TB; back-up 25 TB
  • Front-end servers and CA/RA server; L3 switch at 1 Gbps (upgradable to 10 Gbps); firewall/VPN link to the Center for Grid R&D over SuperSINET

  26. Roadmap of NAREGI Grid Middleware (original 5-year plan, FY2003-FY2007)
  • Prototyping of NAREGI middleware components under a UNICORE-based R&D framework, applying component technologies to nano applications for evaluation
  • α version (internal) developed and integrated; evaluated on the NII-IMS testbed
  • Shift to an OGSA/WSRF-based R&D framework; development and integration of β version middleware; midpoint evaluation
  • β version release and deployment; evaluation of the β release by IMS and other collaborating institutes on the NAREGI wide-area testbed
  • Version 1.0 release, with verification and evaluation of Version 1

  27. Cyber Science Infrastructure: Background
  • A new information infrastructure is needed to boost today's advanced scientific research:
    - Integrated information resources and systems: supercomputers and high-performance computing, software, databases and digital contents such as e-journals
    - "Human" resources and the research processes themselves
  • Breakthroughs in research methodology are required in various fields such as nanoscience and bioinformatics; the key to industry/academia cooperation: from 'science' to 'intellectual production'
  • Comparable efforts elsewhere: U.S.A.: Cyber-Infrastructure (NSF TeraGrid); Europe: e-Infrastructure (EU EGEE, DEISA, ...)
  • → Cyber Science Infrastructure (CSI): a new comprehensive framework of information infrastructure in Japan

  28. Cyber-Science Infrastructure for R&D
  • Components of CSI: NII-REO (Repository of Electronic Journals and Online Publications); GeNii (Global Environment for Networked Intellectual Information); virtual labs and live collaborations; NAREGI outputs and deployment of NAREGI middleware; UPKI (national research PKI infrastructure); SuperSINET and beyond (lambda-based academic networking backbone)
  • Connected sites: Hokkaido-U, Tohoku-U, Tokyo-U, NII, Nagoya-U, Kyoto-U, Osaka-U, Kyushu-U (also TiTech, Waseda-U, KEK, etc.)
  • Surrounding elements: infrastructure management body, education & training, international infrastructural collaboration, industry/societal feedback; restructuring of university IT research resources; extensive on-line publication of results

  29. Deployment of Cyber Science Infrastructure (planned 2006-2011)
  • Core site: National Institute of Informatics, operating the Cyber-Science Infrastructure (CSI; IT infrastructure for academic research and education): operation/maintenance of the middleware, UPKI/CA, contents, and the networking infrastructure (SuperSINET)
  • NAREGI middleware releases (β version → V1.0 → V2.0) are delivered to the core site, with feedback from operation
  • R&D collaboration: nano proof-of-concept and evaluation at IMS (nano-field demonstration and evaluation, Institute for Molecular Science); joint projects with Osaka-U (bio) and AIST; international collaboration (EGEE, UNIGRIDS, TeraGrid, GGF, etc.); operational collaboration
  • Virtual organizations served: university/national supercomputing VOs; domain-specific VOs (e.g., ITBL; IMS, AIST, KEK, NAO, etc.); project-oriented VOs; industrial projects (customization, operation/maintenance). Note: names of VOs are tentative.

  30. Future Direction of NAREGI Grid Middleware: Toward a Petascale Computing Environment for Scientific Research
  • Center for Grid Research and Development (NII); research areas: resource management in the Grid environment, Grid programming environment, Grid application environment, Data Grid environment, high-performance & secure Grid networking; productization of general-purpose Grid middleware for scientific computing; Grid middleware for large computer centers; personnel training (IT and application engineers); contribution to the international scientific community and standardization
  • Computational Nano-science Center (Institute for Molecular Science): Grid-enabled nano applications; computational methods for nanoscience using the latest Grid technology; evaluation of the Grid system with nano applications; progress in the latest research and development (nano, biotechnology); new methodology for computational science
  • Industrial Committee for Super Computing Promotion: requirements from industry regarding the Science Grid for industrial applications (large-scale computation, high-throughput computation); solicited research proposals from industry to evaluate applications; use in industry (new intellectual product development); vitalization of industry
  • Output: a Science Grid environment feeding into the Cyber Science Infrastructure (CSI)

  31. Science and Technology Basic Plan: projected total R&D expenditure of 25 trillion yen (source: CSTP)

  32. Seven Identified Key Areas in Information and Communication Technologies (Working Groups)
  • Networking Technology WG
  • Ubiquitous Technology WG
  • Device & Display Technology WG
  • Security & Software WG
  • Human Interface & Contents WG
  • Robotics WG
  • R&D Infrastructure WG: supercomputing, network utilization (including Grid technology), ultra-low-power CPU design, etc.

  33. Development & Applications of Advanced High-performance Supercomputer Project: FY2006 budget 3,547 million yen (new); FY2006-FY2012 total national budget ~110 billion yen
  1. Purpose: development, installation, and applications of an advanced high-performance supercomputer system.
  2. Effect: together with theory and experiment, simulation is now recognized as the third methodology of science and technology. In order to lead the world in the future progress of science, technology and industry: (1) development and deployment of the system and application software for utilizing the supercomputer system; (2) development and installation of the advanced high-performance supercomputer system; (3) establishment and operation of the "Advanced Computational Science and Technology Center (tentative)", with the developed supercomputer system as its core computational resource. Through (1)-(3), creative, talented people who can raise the level of research in Japan and lead the world will be fostered.
  3. Framework for use and administration: MEXT is comprehensively responsible for installation, use, and administration of the facilities; a new law will be enacted for the framework of usage and administration; the facilities for fundamental research and industrial applications will be open to universities, research institutes and industry.

  34. Project Organization
  • MEXT (supervisor): Office for Supercomputer Development Planning (Akihiko Fujita / Tadashi Watanabe), with an external evaluation committee and an advisory board for evaluation
  • RIKEN (main contractor): the Next-Generation Supercomputer R&D Center (NSC) (Ryoji Noyori; Dr. R. Himeno), covering hardware/system, system software, and application software, guided by a project committee
  • Users: universities, laboratories, and industry; Industrial Committee for Promotion of Supercomputing

  35. Vision for Continuous Development of Supercomputers
  [Chart: sustained performance (FLOPS, from 100G through 10T and 1P up to 100P) versus time (1990-2025), showing government investment sustaining a succession of national leadership systems: CP-PACS/NWT, the Earth Simulator (Earth Simulator Project), the Next Generation Supercomputer (the current project), and next-next- and next-next-next-generation projects. Each leadership system is followed down the pyramid by national infrastructure systems at institutes and universities, then enterprise systems at companies and laboratories, and finally personal/entertainment platforms (PCs, home servers, workstations, game machines, digital TVs).]

  36. Schedule (projected)
  [Gantt chart, FY2006-FY2012:]
  • System software (GRID middleware etc.): design & production, then evaluation
  • Grand challenge application software: next-generation nanotechnology simulation and next-generation life science simulation, each design & production, then evaluation
  • Hardware: system architecture definition; implementation technology design; design & evaluation; production; enhancement (file system etc. following the same investigation/design/construction flow)
  • Facility: geographical investigation, design, construction
  • Milestones: CSTP follow-ups (performance/function; operation/outcome, etc.), R&D project follow-ups, operation start around FY2010-FY2011
  CSTP: the Council for Science & Technology Policy, one of the four councils for important policies of the Cabinet Office.

  37. Application Analysis for Architecture Evaluation: so far, 21 application programs have been identified as candidates for architecture evaluation
  • Nanoscience (6): e.g., MD, fragment MO, first-principles MD (physical space formulation), RISM, OCTA
  • Life science (6): e.g., MD (protein), DF (protein), folding, blood flow, gene networks
  • Climate/geoscience (3): earthquake wave propagation, high-resolution atmospheric GCM, high-resolution ocean GCM
  • Physical science (2): galaxy formation (hydro + gravitational N-body), QCD
  • Engineering (4): e.g., structural analysis by FEM, non-steady flow by FDM, compressible flow, large eddy simulation

  38. The Next Generation Supercomputer System Project
  • Total national budget requested: 110 billion yen over 7 years (including system software, applications, hardware, facility, etc.)
  • Performance target: over 1 petaflop/s (application-dependent)
  • High-priority applications: nanoscience/nanotechnology, life science/bio
  • System architecture candidates to be selected: summer 2006
  • System completion: end of FY2010 (initial) to end of FY2011 (final)

  39. Next Generation Supercomputer Project: Schedule (projected). The same schedule chart as slide 36: system software, grand challenge application software, hardware, and facility tracks across FY2006-FY2012, with CSTP and R&D project follow-ups and operation start around FY2010-FY2011.

  40. Cyber Science Infrastructure toward Petascale Computing (planned 2006-2011). The same deployment picture as slide 29, now with the NII Collaborative Operation Center as the core site and a Peta-scale System VO added alongside the university/national supercomputing VOs, domain-specific VOs, project-oriented VOs, and industrial projects. (Note: names of VOs are tentative.)

  41. Cyber Science Infrastructure Plan
  • National Institute of Informatics (NII): the NAREGI (National Research Grid Initiative) Project; middleware as infrastructure (Grid, certificate authority, etc.); services for operation and maintenance of the information infrastructure
  • VOs (virtual organizations): university/inter-university research institute VOs; the Next Generation Supercomputer project VO; industrial project VOs; virtual research environments for various fields

  42. Summary
  • The 3rd Science and Technology Basic Plan started in April 2006. The Next Generation Supercomputer Project (FY2006-2012, ~$1B) is one of its high-priority projects, aimed at peta-scale computation in key application areas.
  • NAREGI Grid middleware will enable seamless federation of heterogeneous computational resources.
  • Computations in nano-science/technology applications over the Grid are to be promoted, including participation from industry.
  • NAREGI Grid middleware is to be adopted as one of the important components of the new Japanese Cyber Science Infrastructure framework.
  • NAREGI will provide the access and computational infrastructure for the Next Generation Supercomputer System.

  43. Thank you! http://www.naregi.org
