
Emulab Node Lifecycle


Presentation Transcript


  1. Emulab Node Lifecycle

  2. Overview: Node Lifecycle State Machines

  3. State Machines and stated • Emulab uses a centralized service to track what each node does: stated • Booting, self-configuration, reloading images, shutdown, etc. • One server for all nodes; each node tracked individually • Tracking based on state machines that describe what nodes should be doing
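
As a rough illustration of what such a state machine might look like in the database, here is a hypothetical Python sketch. The state names SHUTDOWN, BOOTING, TBSETUP, ISUP, and RELOADDONE appear elsewhere in these slides; RELOADING and the OS image names are invented for the example.

```python
# Hypothetical sketch: per-OS state machines as stated might store them in the DB.
# Each state maps to the set of states it may legally transition to.
NORMAL_BOOT = {
    "SHUTDOWN":   {"BOOTING"},              # node was (re)booted
    "BOOTING":    {"TBSETUP", "SHUTDOWN"},  # self-configuration starts, or reboot again
    "TBSETUP":    {"ISUP", "SHUTDOWN"},     # configuration finished, or failed and rebooted
    "ISUP":       {"SHUTDOWN"},             # the only way out of "up" is a shutdown/reboot
}

RELOAD = {
    "SHUTDOWN":   {"BOOTING"},
    "BOOTING":    {"RELOADING"},            # node boots into the reload MFS
    "RELOADING":  {"RELOADDONE"},           # frisbee finished writing the disk
    "RELOADDONE": {"SHUTDOWN", "BOOTING"},  # reboot into the freshly loaded OS
}

# Many OSes can follow the same machine (image names here are made up):
OS_TO_MACHINE = {"FBSD-STD": NORMAL_BOOT, "RHL-STD": NORMAL_BOOT, "FRISBEE-MFS": RELOAD}
```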

  4. Example: Normal Node Boot (state-machine diagram for a local PC and a local vnode)

  5. Example: PC with partial OS support (state-machine diagram for a PC running an OS with minimal Emulab support)

  6. State Overview • Each OS has a state machine • describes what is valid • Many OSes can follow the same state machine • Node (or boss) sends an event on state changes • stated records the event and checks the state machine (in the DB) to see if the transition from state A to B is valid • Takes action if not (mail, reboot, retry, etc.) • Also takes action when a node “times out” in a state
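
A sketch of the checks stated might perform, assuming the table representation above; the function names, node records, and corrective actions shown are assumptions, not the real stated code.

```python
# Hypothetical sketch of stated's per-event logic (names and structure invented).
import time

def handle_event(node, machine, new_state, notify, reboot):
    """Record a reported state change and check it against the node's state machine."""
    old_state = node["state"]
    node["state"], node["since"] = new_state, time.time()   # record it either way
    if new_state not in machine.get(old_state, set()):
        # Invalid transition: take corrective action (mail, reboot, retry, ...).
        notify(f"{node['name']}: invalid transition {old_state} -> {new_state}")
        reboot(node["name"])

def check_timeouts(nodes, timeouts, reboot):
    """Also act when a node sits in a state too long (e.g. hung while BOOTING)."""
    now = time.time()
    for node in nodes:
        limit = timeouts.get(node["state"])
        if limit and now - node["since"] > limit:
            reboot(node["name"])
```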

  7. States for Communication • Programs on boss depend on state transitions to find out about important events • Reboot nodes, wait for ISUP state • Reloading: wait for RELOADDONE, then ISUP • States can also have actions associated with them (“state triggers”) • E.g. when reloading finishes, we check whether the node is being cleaned up before going free, and release it if so
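
A sketch of both ideas: a boss-side program blocking until a node reaches a state, and a state trigger run by stated. The event queue and helper functions are invented for illustration.

```python
# Hypothetical sketch: waiting on node states and a RELOADDONE state trigger.
import queue

def wait_for_state(events, node, wanted, timeout=600):
    """Block until `node` reports the `wanted` state (e.g. ISUP or RELOADDONE)."""
    while True:
        who, state = events.get(timeout=timeout)   # raises queue.Empty on timeout
        if who == node and state == wanted:
            return

def reloaddone_trigger(node, db, free_node):
    """Trigger run when a node enters RELOADDONE: if the node was being
    cleaned up on its way to the free pool, release it now."""
    if db.node_is_being_cleaned(node):             # DB helper name is invented
        free_node(node)

TRIGGERS = {"RELOADDONE": reloaddone_trigger}
```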

  8. Example: Node Reload

  9. All together now…

  10. Now, more depth on some of the node lifecycle pieces • Node bootstrapping • Node self-configuration • TMCC/TMCD: Testbed Master Control Client & Daemon

  11. Emulab Node Bootstrapping

  12. Requirements • Ability to take control of a node regardless of current state • Ability to restore node to a known state • All with no manual intervention • Provide this capability to users

  13. Taking Control: Rebooting a Node • Multi-step approach • “ssh reboot” • ICMP “Ping of Death” (IPoD) • Power cycle • Encapsulated in node_reboot • Available to users

  14. node_reboot • Authenticates the caller • Sends an event to stated • stated knows what the node is doing • Can prevent reboots at a bad time • stated reruns node_reboot in “really do it” mode
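
A sketch of the multi-step escalation from the last two slides. Only ssh is a real command here; the IPoD and power-cycle helpers, timeouts, and success tests are assumptions, not the real node_reboot.

```python
# Hypothetical sketch of node_reboot's escalation (ssh reboot -> IPoD -> power cycle).
import subprocess

def run(argv, timeout=30):
    """Run a command, treating failure or a hang as 'this step did not work'."""
    try:
        return subprocess.run(argv, timeout=timeout).returncode == 0
    except (subprocess.TimeoutExpired, OSError):
        return False

def node_reboot(node):
    # In the real system the caller is authenticated first and an event is sent
    # to stated, which can veto the reboot or rerun node_reboot in "really do it" mode.
    if run(["ssh", node, "reboot"]):        # 1. polite reboot over ssh
        return True
    if run(["ipod", node]):                 # 2. ICMP "Ping of Death" (helper name assumed)
        return True
    return run(["power", "cycle", node])    # 3. power cycle (helper name assumed)
```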

  15. Taking Control: Catching the Boot • PXE-enabled NICs are the first boot choice • PXE downloads a boot loader via TFTP • Emulab boot loaders may then • Boot from a particular disk partition • Download a standalone kernel • Download an MFS-based FreeBSD

  16. The PXE boot process

  17. Taking Control: Catching the Boot (wide-area nodes) • Use a bootable CDROM • CDROM contains an MFS-based FreeBSD system • Contact Emulab (using secure tmcc) for instructions: • Apply patches • Reload disk • Just boot from disk

  18. Restoring a Node: Disk reloading • Frisbee: the multi-threaded, multi-filesystem, multicast marvel! • Images are intelligently compressed using filesystem-specific knowledge • Image distribution is client driven: • Each client independently requests data • Clients “snoop” each other’s requests • Client has network, unzip, disk threads • Server takes requests from multiple clients and multicasts data
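
A sketch of the client's three-thread pipeline, with queues standing in for the real buffering; this illustrates the structure only and is not Frisbee's actual code (chunk handling and the request/snoop logic are omitted).

```python
# Illustrative sketch of the frisbee client's network/unzip/disk thread pipeline.
import queue, zlib

net_q, disk_q = queue.Queue(maxsize=64), queue.Queue(maxsize=64)

def net_thread(recv_chunk):
    """Receive multicast chunks; requesting and snooping would happen here."""
    for offset, compressed in recv_chunk():
        net_q.put((offset, compressed))
    net_q.put(None)

def unzip_thread():
    """Decompress while the network and disk threads keep working."""
    while (item := net_q.get()) is not None:
        offset, compressed = item
        disk_q.put((offset, zlib.decompress(compressed)))
    disk_q.put(None)

def disk_thread(disk):
    """Write only the allocated blocks; free space was never put in the image."""
    while (item := disk_q.get()) is not None:
        offset, data = item
        disk.seek(offset)
        disk.write(data)

# Each function runs in its own thread (threading.Thread), so receiving,
# decompressing, and writing overlap instead of running one after another.
```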

  19. Disk reloading II • Frisbee server (frisbeed) is started up to feed the appropriate image • Client node boots into a FreeBSD MFS • Obtains info about what image to load • Runs the frisbee client (frisbee) • Performs post-frisbee customizations • Users can reload their own disks at any time (os_load)

  20. Disk reloading (wide-area) • Initiated by the CDROM system • Copies or streams the image from Emulab via ssh • Feeds it into imageunzip • Distribution in this manner means: • TCP for flow control and reliability • ssh for privacy
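
A sketch of the streaming path in Python; the image path, target disk, and the exact imageunzip arguments are assumptions.

```python
# Hypothetical sketch: stream a disk image from boss over ssh into imageunzip.
import subprocess

def wide_area_reload(boss, image_path, disk):
    ssh = subprocess.Popen(["ssh", boss, "cat", image_path],   # ssh gives privacy
                           stdout=subprocess.PIPE)
    unzip = subprocess.run(["imageunzip", "-", disk],          # TCP underneath gives
                           stdin=ssh.stdout)                   # flow control/reliability
    ssh.stdout.close()
    return ssh.wait() == 0 and unzip.returncode == 0
```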

  21. Summary: A typical bootstrap scenario • Experiment creation requests nodes with FreeBSD or Linux • DB state is set up, nodes are rebooted • On each, PXE loads pxeboot, which boots from the appropriate disk partition • Nodes come up and self-configure • When freed, nodes are reloaded with the default image

  22. Bootstrap Issues • PXE-based boot does not scale well • PXE (DHCP) requires MAC broadcast • Alternative: CD/floppy/flash-based • Speed issues: • Biggest time sink: the BIOS (2 min) • DHCP can be slow (10-15 sec) • Disk reload not a problem (30-90 sec)

  23. Emulab Node Self-configuration

  24. What is “Self-configuration”? • Nodes run a “stock” OS install and customize themselves at boot time • Alternative: pre-customized disk images • Issues: speed and space • Alternative: post-disk-load customization • Issue: compatibility • Disadvantage of self-configuration: • Portability: must be adapted to every OS

  25. Emulab self-configuration • Table of configuration features, comparing coverage in a traditional setup, on Emulab local nodes, and on Emulab remote nodes • Features: network identity, shared filesystems, user accounts and keys, hosts file, network interfaces, IP tunnels, link shaping, routing, tar and RPM installation, daemon and agent startup, custom user script execution

  26. Features of the Implementation • Non-intrusive • Single hook on the host (rc scripts) • Adds two directories of scripts (mostly Perl) • Some changes/replacements of standard files • Mostly “just works” on Unix-like systems • Linux, FreeBSD, OpenBSD to date • Should be easy to port to others • Windows XP support partially done

  27. Where is my Control Net? • Must locate the control net interface • Bus search order differs between BSD and Linux • Cannot rely on the DB since we can’t reach it! • Current: • Hack scripts to identify it based on kernel boot output • Lame: must be customized, tied to node type • Future: • DHCP on all interfaces, use the interface that replies?
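
A sketch of that last idea; driving dhclient per interface like this is an assumption about how it could be done, not current Emulab behavior.

```python
# Hypothetical sketch of the proposed approach: try DHCP on every interface and
# treat whichever one gets an answer as the control net.
import subprocess

def find_control_net(interfaces):
    for iface in interfaces:
        try:
            # `dhclient -1` tries once and exits non-zero if no lease was obtained.
            result = subprocess.run(["dhclient", "-1", iface], timeout=60)
        except subprocess.TimeoutExpired:
            continue
        if result.returncode == 0:
            return iface            # this interface reached a DHCP server
    return None                     # fall back to the per-node-type hack
```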

  28. The process • rc.testbed run as last step of node boot • Tell boss we are configuring (TBSETUP) • If node status is “free,” we are done • Get all TMCD-provided info in one transfer • Set up FS mounts, accounts • Construct rc.foo scripts for initializing the rest • interfaces, routes, tarballs... • Scripts generated depend on target environment

  29. The process (we're not done yet!) • Network setup: run scripts for setup of interfaces, tunnels, link shaping, routes • User files: run scripts for RPMs and tarballs • Run daemons: healthd, idled, watchdog • Run agents: program, link, trafgen • Tell boss we are up (ISUP) • Configure virtual nodes
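
The two slides above amount to a boot-time driver script; a rough Python sketch of that flow follows (the “FREE” status string and the comments summarizing the rc.foo steps are assumptions).

```python
# Hypothetical sketch of rc.testbed's boot-time flow (simplified).
import subprocess

def tmcc(*args):
    """Thin wrapper around the tmcc client; output parsing is omitted here."""
    return subprocess.run(["tmcc", *args], capture_output=True, text=True).stdout

def rc_testbed():
    tmcc("state", "TBSETUP")                 # tell boss we are configuring
    if "FREE" in tmcc("status"):             # free node: nothing to set up
        return                               # (exact status string is assumed)
    config = tmcc("fullconfig")              # all TMCD-provided info in one transfer
    # ... parse `config`, set up FS mounts and accounts, then generate and run the
    # rc.foo scripts: interfaces, tunnels, link shaping, routes, tarballs/RPMs,
    # daemons (healthd, idled, watchdog) and agents (program, link, trafgen) ...
    tmcc("state", "ISUP")                    # tell boss we are up
    # ... finally, configure any virtual nodes hosted on this physical node ...
```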

  30. Configuring non-PC nodes • IXP network processor • Parasitic relationship with the host PC • Much of the configuration done from the host PC • Multiplexed (“virtual”) nodes • Like IXPs, these have a “sub-node” relationship with the host • The many-to-one relationship makes it advantageous to perform setup from the physical node • Still many aspects performed by the node itself

  31. Configuring non-PC nodes (cont.) • Future: Cisco routers • First cut: build a config file (from a template) and have the router reconfigure • More advanced: allow a custom router OS and config file

  32. The Emulab Master Control Protocol (TMCC/TMCD)

  33. Executive Summary • Simple protocol for transferring state between nodes and “boss” (essentially a database proxy) • Primarily used for node self-configuration • Flexible authentication and transport protocol • The “Swiss-army knife” (or “kitchen sink”) of Emulab protocols

  34. TMCC/TMCD • Testbed Master Control Client and Daemon • Used to request and return: • Configuration info for a node • State transitions • Uses a simple ASCII message format that is easy to parse from Perl • Can use UDP, TCP, or SSL over TCP • Client has “caching” and proxy modes
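
A sketch of parsing that NAME=VALUE format; the real tmcc output has quoting and multi-line cases this simple parser ignores.

```python
# Hypothetical sketch: parse tmcc-style NAME=VALUE output, one record per line.
def parse_tmcc(output):
    records = []
    for line in output.splitlines():
        pairs = {}
        for field in line.split():
            name, sep, value = field.partition("=")
            if sep:                         # keep only NAME=VALUE tokens
                pairs[name] = value.strip('"')
        if pairs:
            records.append(pairs)
    return records
```

Each output line typically describes one object (one interface, one account, one route), which is why the sketch returns one dictionary per line.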

  35. TMCC API • Usage: tmcc command argument … • Returns NAME=VALUE pairs (usually) • Commands and their actions:
      nodeid      Returns Emulab node name
      status      Returns project and experiment ID
      ifconfig    Returns IP info for network interfaces
      accounts    Returns user names and public keys to install
      rpms        Returns list of RPMs to install
      mounts      Returns list of NFS directories to mount
      routing     Returns list of static routes to install
      state       Sets current node state
      vnodelist   Returns list of virtual nodes for this physical node
      fullconfig  Bulk return of all info needed at boot time
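
Examples (an illustrative session; the node name, field names, and exact output format are assumptions, not verbatim tmcd output):

```
$ tmcc nodeid
pc42
$ tmcc status
ALLOCATED=myproj/myexpt NICKNAME=node0
$ tmcc state ISUP        # "state" sets the node's state rather than returning info
```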

  36. TMCC Security (local node) • Nodes identify their server via a config file, a compiled-in name, or DNS • Authentication at the server is based solely on IP address • Vulnerabilities: • Malicious impersonation of the server on the control net • Malicious impersonation of another node

  37. TMCC Security (wide-area nodes) • SSL: a single node private key used by all nodes • Nodes can ensure they are talking to the server • Server can ensure it is talking to some node • Vulnerability: the node private key is in the filesystem of every node; crack one node and you have them all

  38. Issues • Scaling • Every client made 20+ calls at boot time • Mitigated with “bulk transfer” and proxies • Still a hot spot (e.g., ISALIVE messages) • Security • Highly DoS-able • Mitigate with caching, a per-experiment proxy?
