
Stork An Introduction Condor Week 2006 Milan






Presentation Transcript


  1. Stork: An Introduction. Condor Week 2006, Milan

  2. Two Main Ideas • Make data transfers a “first-class citizen” in Condor • Reuse items in the Condor toolbox

  3. The tools • ClassAds • Matchmaking • DAGMan

  4. The data transfer problem Process large data sets at sites on the grid. For each data set:
     • stage in data from a remote server
     • run the CPU data processing job
     • stage out data to a remote server

  5. Simple Data Transfer Job

     #!/bin/sh
     globus-url-copy source dest

     Often works fine for short, simple data transfers, but…

  6. What can go wrong?
     • Too many transfers at one time
     • Service down; need to try later
     • Service down; need to try an alternate data source
     • Partial transfers
     • Time out; not worth waiting anymore
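  Handling these cases by hand quickly turns the one-line script into something like the following sketch (host names, retry count, and sleep interval are illustrative, not from the slides):

     #!/bin/sh
     # Hand-rolled retry with an alternate data source.
     PRIMARY=gsiftp://server1/path
     BACKUP=gsiftp://server2/path
     DEST=file:///dir/file
     for i in 1 2 3; do
         globus-url-copy "$PRIMARY" "$DEST" && exit 0   # done on success
         sleep 60                                       # service down; try again later
     done
     globus-url-copy "$BACKUP" "$DEST"                  # fall back to alternate source

  Even this does not handle partial transfers or give up on hung transfers, which is the gap Stork fills.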

  7. Stork What Schedd is to CPU jobs, Stork is to data placement jobs:
     • Job queue
     • Flow control
     • Failure-handling policies
     • Event log

  8. Supported Data Transfers
     • local file system
     • GridFTP
     • FTP
     • HTTP
     • SRB
     • NeST
     • SRM
     • other protocols via simple plugin
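  The protocol on each side of a transfer is selected by the URL scheme in the submit file. A minimal sketch, reusing the schemes that appear elsewhere in these slides (host names and paths are placeholders):

     [
       dap_type = "transfer";
       src_url = "http://server/input.dat";   // HTTP source
       dest_url = "file:///dir/input.dat";    // local file system destination
     ]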

  9. Stork Commands
     • stork_submit - submit a job
     • stork_q - list the job queue
     • stork_status - show completion status
     • stork_rm - cancel a job
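  A typical session might look like this (the job id and file name are illustrative, and exact arguments may vary by version):

     $ stork_submit stage-in.stork    # queue the transfer; prints the assigned job id
     $ stork_q                        # list jobs still in the queue
     $ stork_status 1                 # completion status of job 1
     $ stork_rm 1                     # cancel job 1 if it is no longer needed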

  10. Creating a Submit Description File A plain ASCII text file. Tells Stork about your job:
     • source/destination
     • alternate protocols
     • proxy location
     • debugging logs
     • command-line arguments

  11. Simple Submit File

     // C++-style comment lines
     [
       dap_type = "transfer";
       src_url = "gsiftp://server/path";
       dest_url = "file:///dir/file";
       x509proxy = "default";
       log = "stage-in.out.log";
       output = "stage-in.out.out";
       err = "stage-in.out.err";
     ]

     Note: different format from Condor submit files
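  Slide 10 mentioned alternate protocols. As a hedged sketch of that feature, the Stork sections of the Condor Manual describe an alt_protocols attribute; the exact value syntax may differ across versions, so treat this as illustrative:

     [
       dap_type = "transfer";
       src_url = "gsiftp://server/path";
       dest_url = "file:///dir/file";
       // on failure, retry the transfer as ftp-to-file, then http-to-file
       alt_protocols = "ftp-file, http-file";
       x509proxy = "default";
       log = "stage-in.out.log";
     ]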

  12. Sample stork_submit

     # stork_submit stage-in.stork
     using default proxy: /tmp/x509up_u19100
     ================
     Sending request:
     [
       dest_url = "file:///dir/file";
       src_url = "gsiftp://server/path";
       err = "path/stage-in.out.err";
       output = "path/stage-in.out.out";
       dap_type = "transfer";
       log = "path/stage-in.out.log";
       x509proxy = "default"
     ]
     ================
     Request assigned id: 1   # returned job id

  13. Sample Stork User Log

     000 (001.-01.-01) 04/17 19:30:00 Job submitted from host: <128.105.121.53:54027>
     ...
     001 (001.-01.-01) 04/17 19:30:01 Job executing on host: <128.105.121.53:9621>
     ...
     008 (001.-01.-01) 04/17 19:30:01 job type: transfer
     ...
     008 (001.-01.-01) 04/17 19:30:01 src_url: gsiftp://server/path
     ...
     008 (001.-01.-01) 04/17 19:30:01 dest_url: file:///dir/file
     ...
     005 (001.-01.-01) 04/17 19:30:02 Job terminated.
         (1) Normal termination (return value 0)
         Usr 0 00:00:00, Sys 0 00:00:00 - Run Remote Usage
         Usr 0 00:00:00, Sys 0 00:00:00 - Run Local Usage
         Usr 0 00:00:00, Sys 0 00:00:00 - Total Remote Usage
         Usr 0 00:00:00, Sys 0 00:00:00 - Total Local Usage
         0 - Run Bytes Sent By Job
         0 - Run Bytes Received By Job
         0 - Total Bytes Sent By Job
         0 - Total Bytes Received By Job
     ...

  14. Who needs Stork? SRM exists. It provides a job queue, logging, etc. Why not use that?

  15. Use whatever makes sense! Another way to view Stork:
     • Glue between DAGMan and data transport or a transport scheduler
     • So one DAG can describe a workflow, including both data movement and computation steps

  16. Stork jobs in a DAG A DAG is defined by a text file, listing each job and its dependents:

     # data-process.dag
     Data IN in.stork
     Job CRUNCH crunch.condor
     Data OUT out.stork
     Parent IN Child CRUNCH
     Parent CRUNCH Child OUT

     Each node will run the Condor or Stork job specified by its accompanying submit file. (Diagram: IN, then CRUNCH, then OUT.)
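  Each node's submit file is ordinary Condor or Stork syntax. As a sketch, out.stork could simply mirror the stage-in from slide 11 (URLs are placeholders):

     // out.stork: stage results back to the remote server
     [
       dap_type = "transfer";
       src_url = "file:///dir/output";
       dest_url = "gsiftp://server/output";
       x509proxy = "default";
     ]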

  17. Important Stork Parameters • STORK_MAX_NUM_JOBS limits the number of active jobs • STORK_MAX_RETRY limits attempts before a job is marked as failed • STORK_MAXDELAY_INMINUTES specifies the “hung job” threshold
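  These are set in the Stork server's configuration; a sketch with illustrative values (defaults vary by release):

     STORK_MAX_NUM_JOBS = 10          # at most 10 transfers active at once
     STORK_MAX_RETRY = 5              # mark a job failed after 5 attempts
     STORK_MAXDELAY_INMINUTES = 60    # treat a transfer as hung after 60 minutes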

  18. Features in Development Matchmaking:
     • Job ClassAd matched with site ClassAd
     • Global and per-site limits on the maximum number of transfers
     • Load balancing across sites
     • Dynamic reconfiguration of sites
     • Coordination of multiple instances of Stork
     A working prototype was developed with the Globus GridFTP team.
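  As a rough illustration of the matchmaking idea only (this was a prototype, so every attribute name below is invented): each site would advertise a ClassAd, and the matchmaker would stop assigning transfers once the site's limit is reached:

     // Hypothetical site ClassAd (attribute names invented for illustration)
     [
       Name = "gridftp.example.edu";
       MaxTransfers = 20;                                 // per-site limit
       Requirements = (RunningTransfers < MaxTransfers);
     ]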

  19. Further Ahead • Automatic startup of a personal Stork server on demand • Fair sharing between users • Fit into the new pluggable scheduling framework, a la schedd-on-the-side

  20. Summary Stork manages a job queue for data transfers. A DAG may describe a workflow containing both data movement and processing steps.

  21. Additional Resources
     • http://www.cs.wisc.edu/condor/stork/
     • Condor Manual, Stork sections
     • stork-announce@cs.wisc.edu list
     • stork-discuss@cs.wisc.edu list
