Distributed Computing: Software-based Solutions to Parallel Computing
Distributed Computing • Cluster-based computing • Networks of Workstations • Duality of standalone workstation vs. parallel computing environment • Distributed applications • RC5 clients • SETI clients
Issues of Distributed Computing • Homogeneous networks • Networks of similar machines; requires only one version of the executables. • Heterogeneous networks • Different machine architectures (Alpha, x86); each architecture requires its own version of the executables. • Issues of data representation, system-level interface differences, and suitability.
Support Model • Application support is critical. • Three basic models • Application is built with full awareness and support • User-land facilities are provided to facilitate awareness and support • Kernel-level support
Application Support • Application modeled as a self-contained entity capable of client-server and peer-to-peer cooperation models • RC5 client • SETI client • Support can be streamlined for the specific application • Costs: loss of transparency and load balancing, lower QoS (a minimal client sketch follows below)
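The RC5 and SETI@home clients follow a simple fetch-work / crunch / report-result loop. The sketch below shows that shape in C; the server address, port, and single-integer "protocol" are invented purely for illustration, and error handling is minimal.

```c
/* Hypothetical sketch of a self-contained, RC5/SETI-style client in the
   application-support model: fetch a work unit, process it locally,
   report the result. Host, port, and protocol are illustrative only. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv;

    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                     /* illustrative port */
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* illustrative server */

    if (connect(sock, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }

    uint32_t work, result;
    read(sock, &work, sizeof work);     /* fetch a work unit from the server */
    work = ntohl(work);

    result = work * work;               /* stand-in for the real crunching */

    result = htonl(result);
    write(sock, &result, sizeof result); /* report the result back */

    close(sock);
    return 0;
}
```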
User Level Support • Provides basic parallel mechanisms • Message/data passing • Synchronization • Process handling (coarse-grained control) • Provides greater transparency • Less dependence on architecture; transparency supports heterogeneous solutions
Kernel Support • Provides system-level support • Scheduling • Load balancing • Process migration • Resource allocation • Doesn’t work well in heterogeneous configurations • Process migration increases communication costs
Three Solutions • PVM - Parallel Virtual Machine • User-land parallel solution • Message passing • MPI - Message Passing Interface • A standard with several implementations • User-land support
Three Solutions - continued • MOSIX - Linux kernel extensions • Provides dynamic scheduling • Process migration • System-level integration
PVM - Parallel Virtual Machine • Message-passing paradigm • Heterogeneous computing • Deals effectively with data representation issues • Supports a variety of systems, including MPPs, SMPs, and vector machines • Uses a daemon to provide parallel facilities
PVM - continued • Tracks tasks using a unique system-assigned ID called a TID • Supports grouping and group-level activities (a master/worker sketch follows below)
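A minimal master/worker exchange using the standard PVM 3 C interface is sketched below. The worker executable name and message tags are illustrative; PVM's default (XDR) encoding is what lets the same exchange work across heterogeneous nodes.

```c
/* Sketch of a PVM master: spawn workers by TID, send each a work unit,
   collect results. Assumes the PVM 3 API; "worker" is a placeholder name. */
#include <stdio.h>
#include <pvm3.h>

#define NWORKERS 4
#define MSG_WORK 1
#define MSG_DONE 2

int main(void)
{
    int mytid = pvm_mytid();          /* enroll in PVM and get our TID */
    int tids[NWORKERS];
    int n = pvm_spawn("worker", NULL, PvmTaskDefault, "", NWORKERS, tids);

    for (int i = 0; i < n; i++) {
        int chunk = i;                /* trivial "work unit" */
        pvm_initsend(PvmDataDefault); /* XDR encoding handles heterogeneity */
        pvm_pkint(&chunk, 1, 1);
        pvm_send(tids[i], MSG_WORK);
    }

    for (int i = 0; i < n; i++) {
        int result;
        pvm_recv(-1, MSG_DONE);       /* -1: accept from any TID */
        pvm_upkint(&result, 1, 1);
        printf("master %d got result %d\n", mytid, result);
    }

    pvm_exit();
    return 0;
}
```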
MPI or LAM • Message Passing Interface • MPI is the standard; several implementations exist. One is LAM, maintained at Notre Dame. • Similar to PVM • Heterogeneous • Uses a daemon; also includes a peer-to-peer mode
MPI - LAM - continued • The LAM environment must be started explicitly • Provides compiler shells to handle program compilation • Nodes are dynamic • Suite of utilities to maintain the message-passing virtual machine • The parallelism is explicitly programmed (see the sketch below)
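A minimal MPI program in C is sketched below; with LAM it would be compiled through the provided compiler shell (mpicc) and launched with mpirun after lamboot has started the environment. The rank-0 collection pattern is illustrative only.

```c
/* Explicitly parallel MPI sketch: every non-zero rank sends one integer
   to rank 0, which prints what it receives. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* rank 0 collects one integer from every other rank */
        for (int src = 1; src < size; src++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 received %d from rank %d\n", value, src);
        }
    } else {
        int value = rank * rank;      /* stand-in for real work */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```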
MPI - LAM - continued • LAM addresses resource limitations: it uses a property called Guaranteed Envelope Resources (GER) to maintain the integrity of communicating processes.
MOSIX and Linux 2.2.7 • Linux, not much more to say. • MOSIX was developed to extend several Unix operating systems • Homogeneous • Transparent and preemptive process migration • Dynamic process reassignment
Mosix - continued • Global resource assignment • Ideal for cluster-based computing • Offers memory ushering • Prevents thrashing • Maintains a home system • Requires that the nodes of the cluster be well networked
Mosix - Continued • MOSIX maintains a given process’ association with a home node • Unique home node • Implements a bi-level approach to maintain this association through migration • User context (remote) - the system-independent context
Mosix - continued • Deputy - the node-dependent interface; it remains at the home node when the process migrates • All system-dependent calls are routed from the remote back to the deputy at the home system • gettimeofday • Initial assignment must be done by PVM or MPI; MOSIX doesn’t handle this aspect (see the sketch below)
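Because migration is transparent, an ordinary fork-based Unix program needs no MOSIX-specific API; a sketch is shown below. The work loop is invented for illustration; the point is that each child is a candidate for preemptive migration, while calls such as gettimeofday() are forwarded back to the deputy on the home node.

```c
/* Plain fork-based Unix program: under MOSIX each child may be migrated
   preemptively to another node, while system-dependent calls are routed
   back to the deputy at home. The workload itself is illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static double burn_cpu(long iterations)
{
    double x = 0.0;
    for (long i = 1; i <= iterations; i++)
        x += 1.0 / (double)i;         /* CPU-bound: a good migration candidate */
    return x;
}

int main(void)
{
    const int nchildren = 4;

    for (int i = 0; i < nchildren; i++) {
        if (fork() == 0) {            /* child: may end up on any cluster node */
            struct timeval tv;
            double r = burn_cpu(50000000L);
            gettimeofday(&tv, NULL);  /* system call routed via the deputy */
            printf("child %d done: %.6f at %ld\n", i, r, (long)tv.tv_sec);
            exit(0);
        }
    }

    while (wait(NULL) > 0)            /* parent stays on the home node */
        ;
    return 0;
}
```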
Bad Mosix • Overhead is increased • Delayed system calls • Socket and file I/O doesn’t route well (stays tied to the home node) • Migration time adds overhead • Needs further work on migratable sockets and files
You’re a good audience • Are you asleep yet? • But I tried! • Thank you. • Any questions?