
Computer Networks - Theory and Practice

Presentation Transcript


  1. Computer Networks - Theory and Practice CSE 434 / 598 Spring 2001 Sourav Bhattacharya Computer Science & Engineering Arizona State University

  2. Class Objectives • Technical Goals: • Provide basic training in the area of “Computer and Communication Networks” • A comprehensive protocol/algorithm level understanding of the “essentials” of a network • Concept driven, not implementation/package driven • Focus on core communication aspects, and not on cosmetics • Achieve a level where you are ready to learn about specific network implementations • Other Goals: • Learn to learn, Job well done, intellectual honesty, mutual “good wish”, promote research careers, …

  3. Success Criteria • At the end of the class • Class does well in the tests, and projects • Class has learnt the subject matter from the instructor • Instructor has inspired a few (at least !!) career advancements • Instructor has improved the class material … • Don’t Do List • Instructor: Demonstration of “research” • Class: Inhibitions, being shy to ask questions or to interrupt...

  4. Text and Syllabus • Computer Networks, by Andrew S. Tanenbaum, 3rd ed., Prentice Hall, 1996 • Flow of Discussion • Chapters 1 and 2 - background, assumed !! • You are graduate students or undergrad seniors !! • Chapter 4 - Medium Access Sublayer • Chapter 3 - Data Link Layer • Chapter 5 - Network Layer • Chapter 6 - Transport Layer • Sporadic Coverage: Security and Encryption, Network Management, Multimedia, WWW, ... (as time permits)

  5. References • High-Speed Networks: TCP/IP and ATM Design Principles, by William Stallings, Prentice Hall • Network Analysis with Applications, by William D. Stanley, Prentice Hall • Local and Metropolitan Area Networks, by William Stallings, Prentice Hall • Protocol Design for Local and Metropolitan Area Networks, by Pawel Gburzynski, Prentice Hall • Introduction to Data Communications: A Practical Approach, by Larry Hughes, Jones and Bartlett Publishers • High-Speed LANs Handbook, by Stephen Saunders, McGraw-Hill

  6. The Network Design Problem: At A Glance • Design Analogy: N persons can successfully and efficiently communicate amongst themselves, sharing individual, group and global views • Step 1: Two remote persons can communicate • Step 2: Three or more remote persons can efficiently share a common medium to exchange distinct views (but these people have to do the entire co-ordination by themselves) • Step 3: Increasingly convenient ways of doing Step 2 • Abstraction • Quality of Service • Value added features...

  7. Layered Protocol Hierarchies • Basic data transfer occurs at the lowest layer • The rest is merely solving “human problems” • Abstraction and convenience of access • Inter-operability • Making sure that multiple users do not fight • Or, if they do, at least gracefully, and with a recourse • [Diagram: peer layers N … 3, 2, 1 on two hosts, joined only through the physical medium]

  8. Quality of Connections • Issues • Layered Protocol Interfaces • Protocol Header, and Body • Network Architecture • Connection Types • Simplex, vs. Duplex • Connection-oriented, vs. Connection-less (datagram) • Life of connection, vs. Delay of setting up a new connection • QoS of Connections

  9. OSI Model • Open System Interconnection (OSI) Model • Seven layers: Application, Presentation, Session, Transport, Network, Data Link, Physical • Data header added at each layer • Real data transfer only at the lowest layer, over the physical medium • Logical data flow between peer layers above it

  10. TCP / IP Model • Application layer (Telnet, FTP, SMTP, DNS, ...) controls everything above the Transport Layer (theme: “reduce the overhead”) • Transport layer: TCP or UDP • Network layer: IP • Data Link / Physical: ARPANET, NSFNet, various LANs, ... over the physical medium

  11. Network Standardization • International Organization for Standardization (ISO) • Various TCs, and Working Groups • ANSI (Am. Nat’l Standards Inst.) • NIST • IEEE • Internet Engineering Task Force (IETF) • Produces a stream of RFCs

  12. Medium Access Control Chapter 4 of the Text

  13. Problem Introduction • Two or more contenders for a common medium • Contenders: Independent nodes or stations, each with its own data/information to distribute • Distribute: one-to-one, one-to-many, one-to-all (routing, multicast, broadcast) • Data/Information: anything from a bit to a long message stream • Common medium • Fiber, cable, radio frequency channel, ... • Characteristics of the medium -- refer Chapter 2

  14. The Most Obvious Solution • N cars to share a common road • Two approaches • Slice the road width into N parallel parts, i.e., Lanes (hopefully each part will still be wide enough for a car) • Each car drives in its own Lane • Regulate the cars to drive on a rotation basis, i.e., one after the other • Careful co-ordination is critical • No width restriction. Each car can enjoy the entire road width !! • Problems • Naive and simplistic • Opportunity for resource wastage

  15. The Two Naive Solutions... • Frequency Division Multiplexing (FDM) • For N user stations, partition the bandwidth into N (equally sized?) frequency bands • Each user transmits onto a particular bandwidth slot • No contention. But, likely under-utilization of bandwidth. • Time Division Multiplexing (TDM) • For N user stations, create a cycle of N (equally sized ?) time slots • Each user takes its turn, and transmits only during the corresponding time slot • No contention. But, likely under-utilization of the time slots

  16. Channel Allocation on “as needed” Basis • Instead of a priori partitioning of the channel resource (bandwidth, time) - employ dynamic resource management • Advantages include: reduced channel resource wastage • Disadvantages: • Requires explicit (or, implicit) co-ordination of transmission schedules • Co-ordination can be of several categories • Detection and Correction • Avoidance • Prevention (contention-free !)

  17. Model and Assumptions • User stations or Nodes • Probability of a frame being generated in an interval T is L*T, where L (the arrival rate) is a constant for a particular user - a Poisson arrival model (see the sketch below) • Independent in their transmissions. Can transmit a frame any time. • Concerns: • This model is not valid for correlated transmissions (e.g., performance analysis for a set of parallel/distributed programs or threads) • Single channel Assumption • No second medium is available among the stations to communicate (data, and/or control information) • Concern: this assumption is not true for many environments, where the control information may be carried on a second channel.
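
A minimal sketch (in Python, not from the slides) of the frame-generation assumption above: with Poisson arrivals at an assumed rate L, the probability of seeing at least one frame in a short interval T is approximately L*T. The rate and interval values are illustrative.

import random

L = 2.0          # assumed arrival rate (frames per second)
T = 0.01         # assumed short observation interval (seconds)
trials = 100_000

hits = 0
for _ in range(trials):
    # Exponential inter-arrival time; a frame falls inside [0, T) iff the
    # first arrival occurs before T.
    if random.expovariate(L) < T:
        hits += 1

print(f"simulated P(frame in T) = {hits / trials:.4f}")
print(f"approximation L*T       = {L * T:.4f}")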

  18. Model (contd...) • Carrier Sense, or No Carrier Sense • Before transmission nodes can (or, cannot) sense if the channel is currently busy due to another user’s message • Protocols can be a lot more efficient if “Carrier Sense” is true • Issue: It is hardware, and analog device specific • Activation Instances • Continuous time: a message can be attempted for transmission at any time. There is no master clock. • Slotted time: a message can be delivered only at a fixed set of points in time. Time axis is discretized. Requires a master clock.

  19. ALOHA - A Simple Multiple Access Protocol • N user stations, randomly generating data frames • Anytime data is ready ---> transmit on the media (without checking for collisions) • Listen to the channel, and find out if there is/was a collision • If collision, then wait for a random time and goto step 1 • Collision vulnerability period • If frame time = t, then vulnerability period = 2t • Reason: two frames can collide (head, tail) or (tail, head) at the extreme ends • Refer Figure 4-2

  20. [Figure 4-2: the 2t collision vulnerability period of a frame in pure ALOHA]

  21. Performance of ALOHA • A lot of nodes are suddenly jumping into the shared, common channel - What can you expect about the performance ? • G = mean # frame transmission attempts per frame time (including new, and re-transmissions) • Thus, during a 2-frame vulnerability period (refer Fig 4-2) there will be 2G frames generated on average • Probability that k frames are generated during a given vulnerability period = ((2G)^k * e^(-2G)) / k! • Probability that no frame will be generated, i.e., k=0, => e^(-2G) • Successful transmissions, or throughput = rate * prob(none else transmits) = G * e^(-2G)

  22. ALOHA => Slotted ALOHA • Best case performance of ALOHA • G = 0.5, Throughput = 1/(2e), nearly 18% • What else can you expect from purely random, no carrier sense protocols ? • Slotted ALOHA • Like ALOHA, in every sense, except when a transmission request can originate • Discretize the time axis into slots, 1 slot = 1 frame width • A node can only transmit a frame at a slot beginning • Requires a master clock, typically one node transmitting a special control signal at the beginning of each slot • Issue: Is clock synchronization that easy ?

  23. Performance of Slotted ALOHA • Effect of restricted transmission request time instants • Vulnerability period is reduced from 2t to t, where t is the frame width (refer Figure 4-2, and explain why ?) • Probability of no other transmission during one frame = e^(-G) • Thus, Throughput = G * e^(-G) • Best throughput is for G=1, with nearly 37% throughput • 37% utilization, 37% empty slots and 26% collisions • About twice the throughput of pure ALOHA (see the sketch below) • Exercise • Increasing G would reduce the # of empty slots. Why will that not increase the throughput ? • Work out a few examples...
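
The two throughput formulas above drop straight into code. A small Python sketch using the slides’ expressions S = G*e^(-2G) (pure ALOHA) and S = G*e^(-G) (slotted ALOHA); the sample values of G are illustrative.

import math

def pure_aloha(G: float) -> float:
    # Throughput of pure ALOHA at offered load G (attempts per frame time).
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    # Throughput of slotted ALOHA at offered load G.
    return G * math.exp(-G)

for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G={G:4.2f}  pure={pure_aloha(G):.3f}  slotted={slotted_aloha(G):.3f}")

# Maxima quoted in the slides: 1/(2e) ~ 18% at G = 0.5, and 1/e ~ 37% at G = 1.
print(f"max pure    = {pure_aloha(0.5):.3f}")
print(f"max slotted = {slotted_aloha(1.0):.3f}")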

  24. ALOHA ==> Slotted ALOHA: [Figure 4-3: throughput versus offered traffic G for pure and slotted ALOHA]

  25. Carrier Sense Protocols • Best performance of Slotted ALOHA = 1/e • Because nodes cannot sense the carrier prior to transmission • In other words, they cannot avoid collisions, only detect them • Carrier Sense Protocols • Can listen for the carrier, i.e., for the shared channel to become idle, and then transmit • Carrier Sense Multiple Access (CSMA) class of protocols • Persistent CSMA • Also called 1-persistent, since it transmits with a probability = 1 • A node with ready data • Listen for idle channel, if line is busy then WAIT Persistently • When channel is free, transmit the packet, and then listen for a collision • If collision, then sleep for a random time and goto Step 1 (sketched below)
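
A minimal Python sketch of the 1-persistent procedure listed above. The channel object and its methods (is_busy, transmit, collision_detected) are hypothetical placeholders standing in for real carrier-sense hardware, not an actual API.

import random
import time

def send_1_persistent(channel, frame, max_backoff=1.0):
    while True:
        # Step 1: sense the carrier and wait persistently while it is busy.
        while channel.is_busy():
            pass                      # keep listening; grab the channel the instant it frees up
        # Step 2: channel looks idle -- transmit with probability 1.
        channel.transmit(frame)
        # Step 3: listen for a collision with another station that also
        # jumped in the moment the channel went idle.
        if not channel.collision_detected():
            return                    # success
        # Step 4: collision -- back off for a random time and retry.
        time.sleep(random.uniform(0, max_backoff))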

  26. Persistent CSMA • How does contention resolution occur ? • Depends on the “randomness” of the wait periods • If a set of random wait periods, one from each user, is in effect then eventually everyone will get through... • Role of Propagation Delay • Collision detection time depends on the propagation delay • If d is the propagation delay, then worst case collision detection time = 2d • Even if d = 0, there may still be some collisions • Analogous to round table conference discussions among human users • Improvement over ALOHA • Nodes do not jump in at the middle of another node’s transmission

  27. Non-Persistent CSMA • Persistent CSMA • When looking for an idle channel, it keeps a continuous wait • A greedy mode for “seize asap” • Consequence: multiple contenders, each in the “seize asap” mode, will lead to follow-up collisions • Non-Persistent CSMA • If an idle channel is not found, the node desiring to transmit does not wait in a “grab as soon as available” mode • Instead, the node attempting to transmit goes into a random wait period. It wakes up at the end of the random wait, and re-tries for an idle channel • Benefit: reduced contention (Note: it includes a 2-level randomness) • Random wait if no idle channel is found • Random wait if an idle channel was found and the frame was transmitted, but a collision occurred

  28. Non-Persistent CSMA => p-Persistent CSMA • Contention reduction strategy • Involve more and more random delays in each user’s activity • Throughput will increase, but individual user delays will also increase • p-Persistent CSMA • Channel is time slotted, similar to Slotted ALOHA • A node with ready data • Look for an idle channel, if channel is busy then wait for the next slot • If idle channel found then transmit with probability = p (i.e., defer until the next slot with prob = 1-p) • If next slot is also idle, then transmit with prob = p, and defer for the second next slot with prob = 1-p • Continue until the data is transmitted, or some other node starts transmitting • If so, wait for a random time and goto Step 1 (see the sketch below)
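
A minimal Python sketch of the p-persistent rule just described. Again the channel interface (sense_idle_at_slot_start, transmit, collision_detected, random_backoff) is hypothetical, and p = 0.1 is an illustrative value.

import random

def send_p_persistent(channel, frame, p=0.1):
    while True:
        # Wait for a slot in which the channel is sensed idle.
        while not channel.sense_idle_at_slot_start():
            pass
        # Idle slot: transmit with probability p, otherwise defer one slot.
        if random.random() < p:
            channel.transmit(frame)
            if not channel.collision_detected():
                return                 # success
            channel.random_backoff()   # collision: random wait, then start over
        # With probability 1-p we fall through and re-examine the next slot.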

  29. Why p-Persistent CSMA ? • More probabilistic events and randomness => less contention and increased throughput • Degrees of uncertainty • Persistent CSMA = 1, random delay when a collision occurs • Non-Persistent CSMA = 2, random delay both at the channel seek, and at the collision • p-Persistent CSMA = 2 (but a different kind from Non-Persistent) • Random delay at collision (as Non-Persistent) • Deterministic seizure attitude at channel seek time (like Persistent) • Slotted time (like Slotted ALOHA) • But, non-deterministic transmission even when the channel is idle • An additional level of uncertainty beyond Persistent CSMA

  30. Performance of CSMA Class of Protocols • Throughput and individual user delays trade off against each other • Throughput • Non-persistent is better than Persistent • Non-Persistent vs. p-Persistent • Depends on the value of p • Both have 2 degrees of uncertainty, but of different kinds • Refer Figure 4-4 for an aggregate performance depiction • In order of increasing throughput: • Pure ALOHA • Slotted ALOHA • 1-Persistent, or Persistent CSMA • 0.5-Persistent CSMA • (Non-Persistent, 0.1-Persistent) CSMA • 0.01-Persistent CSMA

  31. [Figure 4-4: aggregate throughput versus offered load for the random access protocols listed above]

  32. CSMA with Collision Detection • CSMA does not abort a transmission when a collision occurs • Colliding transmissions will continue (until the frame completion) • A fair (!!) amount of garbage is generated once a collision occurs • Why not abort a transmission as soon as a collision is detected ? • CSMA with Collision Detection • IEEE 802.3, Ethernet protocol • Quickly terminate damaged frames • Contention periods are a single slot each, not a frame width (Fig 4-5) • Resource wastage = width of the slots (and not that of the frames) • Slot width = worst case signal propagation delay • Actually, twice that (a round trip), as sketched below • Includes the delay of the analog devices as well
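
A small worked example of the slot-width argument above, with assumed numbers (10 Mbps, 2500 m end-to-end, signal speed about 2/3 of c). The figures are illustrative; the real 802.3 minimum frame is larger, partly because repeater and analog-device delays add to the round trip, as the slide notes.

# A station must still be transmitting when news of a collision returns,
# so a frame must last at least 2 * tau, where tau is the worst-case
# one-way propagation delay.
bit_rate = 10e6          # assumed: 10 Mbps classic Ethernet
cable_length = 2500.0    # assumed: metres of cable end to end
signal_speed = 2e8       # assumed: ~2/3 of c on copper, metres per second

tau = cable_length / signal_speed       # one-way propagation delay (seconds)
slot_time = 2 * tau                     # round trip: the collision detection window
min_frame_bits = slot_time * bit_rate   # bits needed to fill the whole slot

print(f"tau = {tau*1e6:.1f} us, slot = {slot_time*1e6:.1f} us, "
      f"min frame = {min_frame_bits:.0f} bits")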

  33. [Figure 4-5: CSMA/CD alternating contention, transmission, and idle periods]

  34. Collision-Free Protocols • Channel co-ordination can be of several categories • Detection and Correction • Avoidance • Prevention (contention-free !) • Static MAC Policies • Collision-free by design, i.e., avoidance • Resource utilization may be questionable • Dynamic MAC with Collision Detection • Like CSMA/CD • Dynamic MAC with contention prevention • Protocol does a few extra steps at run time to prevent collisions

  35. Reservation-Based Dynamic MAC Protocols • Protocols consist of two phases • Reservation or bidding process • Actual usage, after the bidding process • Reservation phase • All nodes with data to transmit go through the reservation phase • Result: one or more winners ==> implicit reservations • Transmission phase • The winning node(s) transmit (one after another) • Bit-Map Protocol - One Reservation Policy • Basic idea stems from a linked-list approach • Refer Figure 4-6

  36. [Figure 4-6: the basic bit-map reservation protocol]

  37. Bit-Map Protocol • N Contention Slots for N stations • Node i transmits a “1” in Slot i, iff node i has data to send • The collection of 1’s in the Contention Slots will indicate which stations have data (to transmit) • Followed by Transmission Phase • Allocate Frames only for those Nodes with a 1 in the Contention Slots • Performance • Low load :- • Few data frames; channel time is dominated by the repeating contention slots • Contention Slot delay for a Low numbered station -- 1.5N (why ?) • Contention Slot delay for a High numbered station -- 0.5N (why ?) • Average wait = N slots (sloppy analysis !!) • For d-bit data frames, efficiency = d / (d + N)

  38. Performance of Bit-Map Protocol • At high load • Multiple (k) frames per group of N Contention Slots • Efficiency = k*d / (N + k*d) • For k ==> N, efficiency = d/(d+1) • Questions ? • Is this a realistic analysis ? • Can you do a queueing analysis for this protocol ? • Is there any fundamental bottleneck ? • (See the efficiency sketch below)
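
A one-function Python sketch of the efficiency expression above; the values of N, d and k below are illustrative, not from the text.

def bitmap_efficiency(N: int, d: int, k: int) -> float:
    # k data frames of d bits each are carried per N-bit contention map.
    return (k * d) / (N + k * d)

N, d = 16, 1000
print("low load  (k=1):", round(bitmap_efficiency(N, d, 1), 3))   # ~ d/(d+N)
print("high load (k=N):", round(bitmap_efficiency(N, d, N), 3))   # ~ d/(d+1)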

  39. Binary Countdown Protocol • 2-phase Protocol : Reservation followed by Transmission • Reservation phase • Each station with ready data transmits its binary address, in msb to lsb order • At each bit position the channel carries the binary OR of the respective bits from all surviving nodes. If a node that sent a 0-bit observes a 1 after the OR operation, it withdraws from the competition. The last surviving node is the winner. • Transmission phase: the (single) winner transmits the data • Example: nodes 3, 4 and 6 have data to transmit • Node ids (0011), (0100) and (0110) get transmitted • First bit: 0, 0, and 0 • Second bit: 0, 1 and 1 ==> Node 3 withdraws • Third bit: none, 0, and 1 ==> Node 4 withdraws • Node 6 is the winner. Node 6 transmits its data frame.
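
A minimal Python sketch of one binary countdown arbitration round, reproducing the slide’s example with stations 3, 4 and 6. The wired-OR of each bit position is modeled with max(); the 4-bit address width is an assumption.

def binary_countdown(ready_stations, address_bits=4):
    survivors = set(ready_stations)
    for bit in range(address_bits - 1, -1, -1):        # msb -> lsb
        sent = {s: (s >> bit) & 1 for s in survivors}
        channel_or = max(sent.values())                 # wired-OR of the transmitted bits
        if channel_or == 1:
            # A station that sent 0 but hears 1 withdraws from the competition.
            survivors = {s for s in survivors if sent[s] == 1}
    return max(survivors)                               # the highest address wins

# The slide's example: stations 3 (0011), 4 (0100) and 6 (0110) contend.
print(binary_countdown({3, 4, 6}))   # -> 6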

  40. Performance of Binary Countdown Protocol • Note: only a single winner in this approach • The node with the highest bit address • This approach may starve the lower numbered users • For N nodes, log2(N) address bits will be transmitted • d-bit frame ==> efficiency = d / (d + log2(N)) • Enhancements: • Bit ordering different from the (msb --> lsb) type • Parallelized version of binary countdown, instead of serial • Efficiency can reach up to 100%

  41. Limited Contention Protocols • Design features: • Low traffic load - Collision detection approaches are better; they offer low delay, and few collisions occur anyway • High traffic load - Collision-free protocols are better; they have higher delay, but at least the channel efficiency is much better... • What if we combine the advantages of the two ? • Limited Contention Protocols • Idea: Do not let every station compete for the channel with equal probability. Allow different groups of nodes to compete at different times... • Refer Figure 4-8, for Success Probability = f(# ready stations) • Question: give an analogy of this idea using the car/road domain...

  42. [Figure 4-8: success (acquisition) probability as a function of the number of ready stations]

  43. Adaptive Tree Walk - Limited Contention Protocol • Group the N nodes as a binary tree of height log2(N) • Tree leaves are the N nodes • Starting phase, or immediately after a successful transmit • All N nodes can compete for the channel • If one of the nodes acquires the channel, then repeat with all “N nodes” as the contenders’ list • Else, if collision, then narrow the contenders’ list to the left subgroup of nodes • If one of those nodes acquires the channel, then shift to the right sibling group of nodes for the next slot • Else, if there is a further collision, narrow down the contenders’ list to the leftward children subtree (Repeat...) • Refer Figure 4-9; essentially walk around the tree, with various subgroups of the leaves as the Contenders’ list at each time (a sketch follows)
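
A minimal Python sketch of the tree walk just described, written as a depth-first probe over station subgroups. The probe primitive and the ready-station set are hypothetical stand-ins for the real slot-by-slot contention; the actual protocol runs this walk one slot at a time rather than as a recursive call.

def tree_walk(group, probe):
    # Let `group` contend for one slot; on a collision, split it and walk the halves.
    outcome = probe(group)
    if outcome != 'collision' or len(group) == 1:
        return                          # idle slot, or exactly one station got through
    mid = len(group) // 2
    tree_walk(group[:mid], probe)       # descend into the left subgroup first
    tree_walk(group[mid:], probe)       # then its right sibling subgroup

# Toy probe: stations 2 and 5 (assumed) have frames; a group with exactly one
# ready station transmits successfully, more than one collides.
ready = {2, 5}
def probe(group):
    hits = [s for s in group if s in ready]
    if not hits:
        return 'idle'
    if len(hits) == 1:
        ready.discard(hits[0])
        return 'success'
    return 'collision'

tree_walk(list(range(8)), probe)             # walk the 8-leaf tree from the root
print("all frames delivered:", not ready)    # -> True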

  44. [Figure 4-9: the binary tree of station subgroups used by the adaptive tree walk]

  45. Wavelength Division Multiplexed MAC Protocol • Analogous to FDM; used popularly for optical networks • Partition the wavelength spectrum into (equal ?) slices • One slice for each node / user • Can apply TDM in conjunction as well • Useful for implementation of broadcast topologies • Refer Figure 4-10; each wavelength slice has two parts - for control information, and for data values • Can also implement point-to-point network topologies (how ?) • Collectively it is called the TWDM (time-wavelength-division multiplexed) MAC protocol • Key design issue: #transmitters, and #receivers at each node • Frequencies and tunability of the transceivers...

  46. [Figure 4-10: wavelength slices, each with a control portion and a data portion]

  47. WDMA - A Particular WDM MAC • WDMA - a broadcast based protocol • Each node is assigned two channels, for Control and for Data • The data channel is slotted • One slot for every other node • One slot for status information of the host node itself • The control channel is also slotted • Supports three classes of traffic • Constant data rate connection-oriented traffic • Variable data rate connection-oriented traffic • Datagram traffic, e.g., UDP packets • Each node has two receivers (one fixed frequency, one tunable) and two transmitters (one fixed frequency, one tunable)

  48. Arbitrary Topology Configurations using WDM and TDM • Consider any graph topology • Replace every bi-directional edge with two back-to-back simplex edges • Assign each simplex edge of the graph topology to one slot in the (frequency, time) grid • Select the # of time slots just large enough so that #frequencies * #time slots >= #simplex edges • Work out an example (a sketch follows)
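
A minimal Python sketch of the assignment step described above: give every simplex edge of an arbitrary topology its own (wavelength, time-slot) pair. The 3-node topology and the wavelength count are illustrative assumptions.

from itertools import product

links = [("A", "B"), ("B", "C"), ("C", "A")]           # assumed bidirectional edges
# Replace each bidirectional edge with two back-to-back simplex edges.
simplex = [(u, v) for u, v in links] + [(v, u) for u, v in links]

num_wavelengths = 2
# Choose just enough time slots that wavelengths * slots >= simplex edges.
num_slots = -(-len(simplex) // num_wavelengths)        # ceiling division

# Hand out (wavelength, slot) pairs, one per simplex edge.
schedule = dict(zip(simplex, product(range(num_wavelengths), range(num_slots))))
for edge, (wavelength, slot) in schedule.items():
    print(f"{edge[0]}->{edge[1]}: wavelength {wavelength}, slot {slot}")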

  49. Wireless LAN Protocols • Consider a Cellular Network, with Cell sizes anywhere from a few meters to several miles • Frequency reuse is adopted, as a feature of the Cellular system • What could be a typical MAC ? Can CSMA work ? • No, since there is no common broadcast channel which everyone eventually listens to • Refer Figure 4-11 • Design difficulty: how to detect interference at the Receiver ? • Hidden station problem: Two nodes transmit to a common receiver located in the middle; the competing station is too far away for the sender to hear it • Exposed station problem: Two adjacent nodes transmitting in opposite directions get a false sense of competition...

  50. [Figure 4-11: wireless LAN hidden station and exposed station scenarios]
