
Lecture 2: Service and Data Management



  1. Lecture 2: Service and Data Management Ing-Ray Chen CS 6204 Mobile Computing Virginia Tech Fall 2005 Courtesy of G.G. Richard III for providing some of the slides for this chapter

  2. Service Management in PCN Systems • Managing services in mobile environments • How does the server find the new VLR (Visitor Location Register) when the mobile user (MU) moves? • Contacting the HLR (Home Location Register) on every request incurs too much overhead [Figure: server communicating with an MU that moves between VLRs]

  3. PCN Systems: Proxy-Based Solution • Per-User Service Proxy [Figure: data path Server -> Proxy -> Client]

  4. Per-User Service Proxy • Service proxy • Maintains service context information • Forwards client requests to servers • Forwards server replies to clients • Tracks the location of the MU, thereby reducing communication costs

  5. Static Service Proxy • Proxy is located at a fixed location • Data delivery route can be inefficient [Figure: Server -> fixed Proxy -> Client]

  6. Mobile Service Proxy • Proxy could move with the client if necessary • Proxy informs servers of location changes [Figure: Server -> Proxy/Client, with the proxy moving along with the client]

  7. Location/Service Management: Decoupled Model • Traditionally, service and location management are decoupled • Problem: extra communication cost incurred for updating the service proxy [Figure: HLR/VLR location databases kept separate from the Proxy and Client]

  8. Integrated Location and Service Management • “Per-user” based proxy services • Service proxy co-locates (and moves) with the MU’s location database • Four possible schemes: centralized, fully distributed, dynamic anchor, and static anchor

  9. Integrated: Centralized Scheme • The proxy is centralized and co-located with the HLR to minimize the communication costs of tracking the MU • When the MU moves to a different VLR, a location update is performed at the HLR/proxy • Data delivery/call delivery incurs a search operation at the HLR/proxy to locate the MU • Data service route: Server -> proxy/HLR -> MU

  10. Integrated: Fully Distributed Scheme • Location and service handoffs occur when the MU moves to a new VLR • The service proxy co-locates (moves) with the location database at the current VLR • When the proxy moves to a new VLR, a location update is sent to the HLR/server and a service context transfer is performed • A call requires a search operation at the HLR to locate the MU • Data service route: Server -> proxy/MU

  11. Integrated: Static Anchor Scheme • VLRs are grouped into anchor areas • HLR points to the current anchor • The proxy is co-located with the anchor at a fixed location until the MU moves to a new anchor area • Intra-anchor movement • Anchor/proxy is not moved; a location update is sent to the anchor without updating the HLR • Inter-anchor movement • Anchor/proxy is moved, with a context transfer cost, and a location update is sent to the HLR/servers • Data/call delivery performs a search at the HLR to locate the current anchor and then the MU • Service route: Server -> proxy/anchor -> MU

  12. Integrated: Dynamic Anchor Scheme • Same as static anchor except that the anchor/proxy moves to the current VLR when there is a call delivery • On a call delivery • A search at the HLR is performed • If the anchor is not the current serving VLR • Move the anchor/proxy, inform the HLR/server of the address change, and perform a context transfer • Service route: Server -> proxy/anchor -> MU • More advantageous than static anchor when the CMR (call-to-mobility ratio) and SMR (service-to-mobility ratio) are high

  13. Model Parameters [Parameter table not captured in the transcript]

  14. Cost Model • Performance metric – total communication cost per time unit (see the sketch below) • 3 basic operations • Location update – cost for updating the location of the MU and the service proxy (sometimes including a service context transfer) • Call delivery – cost for locating an MU to deliver a call • Data service request – cost for the MU to communicate with the server through the proxy
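  A hedged sketch of how these combine (the symbols below are assumptions; the slide's original notation did not survive transcription): with sigma the MU's mobility (VLR-crossing) rate, lambda_c the call arrival rate, and lambda_s the service request rate, the total cost per time unit is the rate-weighted sum

      C_total = sigma * C_LU + lambda_c * C_CD + lambda_s * C_SR

  where C_LU, C_CD, and C_SR are the per-operation costs of the three operations above.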

  15. Costs for Centralized and Fully Distributed Schemes [Cost expressions not captured in the transcript]

  16. Performance Evaluation – Results [Figure: cost rate under different CMR and SMR values]

  17. Performance Evaluation – Results • Mobility rate fixed at 10 changes/hour; SMR = 1, to study the effect of varying CMR • Low CMR – static/dynamic anchor perform better than centralized and fully distributed • High CMR – centralized is the best; dynamic anchor is better than static anchor. The reason is that dynamic anchor updates the HLR and moves the anchor to the current VLR, thereby reducing service request costs and location update costs [Figure: cost rate under different CMR values]

  18. Performance Evaluation – Results • Mobility rate is fixed at 10 changes/hour; CMR = 1, to study the effect of SMR on the cost rate • Low SMR – the fully distributed scheme is the worst, due to frequent movement of the service proxy with mobility • High SMR – the fully distributed scheme performs the best, since the service proxy is co-located with the current VLR, avoiding the triangular routing cost [Figure: cost rate under different SMR values]

  19. Performance Evaluation – Results • For each SMR value, the “best” integrated scheme was compared with the “best” decoupled scheme • The integrated scheme converges with the decoupled scheme at high SMR, where the “influence” of mobility is smaller • The integrated scheme is better than the basic scheme at high SMR, because the basic scheme suffers the triangular route between the server and the MU via the HLR [Figure: comparison of integrated with decoupled scheme]

  20. Integrated Location and Service Management in PCS: Conclusions • Design concept: position the service proxy along with the location database of the MU • Centralized scheme: suited for low SMR and high CMR • Distributed scheme: best at high SMR and high CMR • Dynamic anchor scheme: works best over a wide range of CMR and SMR values, except when service context transfer costs are high • Static anchor scheme: works reasonably well over a wide range of CMR and SMR values • The best integrated location/service scheme always outperforms both the best decoupled scheme and the basic scheme

  21. Communications Asymmetry in Mobile Wireless Environments • Network asymmetry • In many cases, downlink bandwidth far exceeds uplink bandwidth • Client-to-server ratio • Large client population, but few servers • Data volume • Small requests, large responses • Downlink bandwidth more important • Update-oriented communication • Updates likely affect a number of clients

  22. Disseminating Data to Wireless Hosts • Broadcast-oriented dissemination makes sense for many applications • Can be one-way or with feedback • Sports • Stock prices • New software releases (e.g., Netscape) • Chess matches • Music • Election Coverage • Weather/traffic …

  23. Dissemination: Pull • Pull-oriented dissemination can run into trouble when demand is extremely high • Web servers crash • Bandwidth is exhausted [Figure: many clients pulling from a single server; the server cries “help!”]

  24. Dissemination: Push • Server pushes data to clients • No need to ask for data • Ideal for broadcast-based media (wireless) [Figure: the server pushes to many clients; “Whew!”]

  25. Broadcast Disks [Figure: a server cycling through a schedule of data blocks 1–6 to be transmitted]

  26. Broadcast Disks: Scheduling [Figure: a round-robin schedule over blocks 1–6 vs. a priority schedule that repeats hot blocks more often]

  27. Priority Scheduling (2) • Random • Randomize the broadcast schedule • Broadcast "hotter" items more frequently • Periodic • Create a schedule that broadcasts hotter items more frequently… • …but the schedule is fixed, which allows mobile hosts to sleep • "Broadcast Disks: Data Management…" paper uses this approach • Simplifying assumptions • Data is read-only • Schedule is computed once and doesn't change, i.e., access patterns are assumed to stay the same

  28. "Broadcast Disks: Data Management…" • Order pages from "hottest" to coldest • Partition into ranges ("disks")—pages in a range have similar access probabilities • Choose broadcast frequency for each "disk" • Split each disk into "chunks" • maxchunks = LCM(relative frequencies) • numchunks(J) = maxchunks / relativefreq(J) • Broadcast program is then: for I = 0 to maxchunks - 1 for J = 1 to numdisks Broadcast( C(J, I mod numchunks(J) )

  29. Sample Schedule, From Paper [Figure: sample broadcast schedule with relative disk frequencies 4 : 2 : 1]

  30. Hot For You Ain't Hot for Me • Hottest data items are not necessarily the ones most frequently accessed by a particular client • Access patterns may have changed • Higher priority may be given to other clients • Might be the only client that considers this data important… • Thus: need to consider not only probability of access (standard caching), but also broadcast frequency • Observation: hot items are more likely to be cached!

  31. Broadcast Disks Paper: Caching • Under traditional caching schemes, usually want to cache "hottest" data • What to cache with broadcast disks? • Hottest? • Probably not—that data will come around soon! • Coldest? • Ummmm…not necessarily… • Cache data with access probability significantly higher than broadcast frequency

  32. Caching • PIX algorithm (Acharya) • Eject the page from the local cache with the smallest value of: (probability of access) / (broadcast frequency) • Means that pages that are more frequently accessed may be ejected if they are expected to be broadcast frequently…
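  A compact Python sketch of PIX eviction (the dict-based interface is illustrative, not Acharya's code):

      def pix_victim(cache_pages, access_prob, bcast_freq):
          # PIX: evict the page with the smallest
          # (probability of access) / (broadcast frequency) ratio.
          return min(cache_pages, key=lambda p: access_prob[p] / bcast_freq[p])

      # Page "x" is accessed more often but also broadcast often, so it is the victim:
      print(pix_victim(["x", "y"],
                       access_prob={"x": 0.5, "y": 0.2},
                       bcast_freq={"x": 4.0, "y": 1.0}))   # -> x  (0.125 < 0.2)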

  33. Hybrid Push/Pull • Balancing push and pull for data broadcast • B = B0 + Bb • B0 is the bandwidth dedicated to on-demand, pull-oriented requests from clients • Bb is the bandwidth allocated to broadcast • B0 = 100%: the schedule is totally request-based • B0 = 0%: "pure" push; clients needing a page simply wait

  34. Optimal Bandwidth Allocation between On Demand and Broadcast • Assume there are n data items, each of size S • Each request packet is of size R • The average time for the server to service an on-demand request is D = (S+R)/B0; let mu = 1/D be the service rate • Each client generates requests at an average rate of r • There are m clients, so the cumulative request rate is lambda = m*r • For on-demand requests, the average response time per request is To = (1 + queue length)*D, where "queue length" is utilization/(1 - utilization) and "utilization" is lambda/mu (ref: M/M/1 queueing theory – take CS 5214, to be offered in Spring 2006)
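  As a sketch, the slide's computation in Python (assuming consistent units, e.g., S and R in bits, B0 in bits/second, r in requests/second):

      def on_demand_response_time(S, R, B0, m, r):
          D = (S + R) / B0                  # mean service time per on-demand request
          mu = 1.0 / D                      # service rate
          lam = m * r                       # cumulative request rate from m clients
          rho = lam / mu                    # utilization; must be < 1 for stability
          assert rho < 1, "on-demand channel overloaded"
          return (1 + rho / (1 - rho)) * D  # To = (1 + queue length) * D = D / (1 - rho)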

  35. Optimal Bandwidth Allocation between On Demand and Broadcast • What are the best frequencies for broadcasting data items? • Imielinski and Viswanathan showed that if there are n data items with popularity ratios p1, p2, …, pn, they should be broadcast with frequencies f1, f2, …, fn, where fi = sqrt(pi) / [sqrt(p1) + sqrt(p2) + … + sqrt(pn)], in order to minimize the average latency Tb for accessing a broadcast data item.
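  This square-root rule is straightforward to compute (a minimal Python sketch; the function name is illustrative):

      from math import sqrt

      def optimal_frequencies(popularity):
          # fi = sqrt(pi) / [sqrt(p1) + sqrt(p2) + ... + sqrt(pn)]
          roots = [sqrt(p) for p in popularity]
          total = sum(roots)
          return [x / total for x in roots]

      print(optimal_frequencies([0.5, 0.3, 0.2]))  # hotter items get proportionally higher fi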

  36. Optimal Bandwidth Allocation between On Demand and Broadcast • T = Tb + To is the average time to access a data item • Imielinski and Viswanathan's algorithm:

      Assign D1, D2, …, Di to the broadcast channel
      Assign Di+1, Di+2, …, Dn to the on-demand channel
      Determine the optimal Bb, B0 that minimize T = Tb + To:
          Compute To by modeling the on-demand channel as M/M/1 (or M/D/1)
          Compute Tb by using the optimal frequencies f1, f2, …, fn
          Compute the optimal Bb which minimizes T to within an acceptable threshold L
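  A hedged Python sketch of the bandwidth split for a fixed item partition (popularity lists the broadcast items; m*r is the request rate reaching the on-demand channel). To follows the M/M/1 model above; for Tb, the closed-form mean wait (S/(2*Bb)) * (sum sqrt(pi))^2 under the square-root frequencies is a standard result assumed here, since the slide only says "compute Tb"; the grid search stands in for the slide's threshold-L iteration:

      from math import sqrt

      def total_access_time(B0, B, popularity, S, R, m, r):
          # To: on-demand channel modeled as M/M/1, as on the previous slide.
          D = (S + R) / B0
          rho = m * r * D                  # utilization = lambda / mu = m*r*D
          if rho >= 1:
              return float("inf")          # unstable split: unbounded delay
          To = D / (1 - rho)
          # Tb: mean wait for a broadcast item under the square-root frequencies
          # (closed form assumed, see lead-in above).
          Bb = B - B0
          Tb = (S / (2 * Bb)) * sum(sqrt(p) for p in popularity) ** 2
          return To + Tb

      def best_split(B, popularity, S, R, m, r, steps=999):
          # Grid search over B0 in place of the slide's threshold-L iteration.
          candidates = (B * k / (steps + 1) for k in range(1, steps + 1))
          return min(candidates,
                     key=lambda B0: total_access_time(B0, B, popularity, S, R, m, r))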

  37. Mobile Caching: General Issues • Mobile user/application issues: • Data access pattern (reads vs. writes?) • Data update rate • Communication/access cost • Mobility pattern of the client • Connectivity characteristics • disconnection frequency • available bandwidth • Data freshness requirements of the user • Context dependence of the information

  38. Mobile Caching (2) • Research questions pertaining to mobile computing: • How can client-side latency be reduced? • How can consistency be maintained among all caches and the server(s)? • How can we ensure high data availability in the presence of frequent disconnections? • How can we achieve high energy/bandwidth efficiency? • How to determine the cost of a cache miss, and how to incorporate this cost into the cache management scheme? • How to manage location-dependent data in the cache? • How to enable cooperation between multiple peer caches?

  39. Mobile Caching (3) • Cache organization issues: • Where do we cache? (client? proxy? service?) • How many levels of caching do we use (in the case of hierarchical caching architectures)? • What do we cache (i.e., when do we cache a data item and for how long)? • How do we invalidate cached items? • Who is responsible for invalidations? • What is the granularity at which the invalidation is done? • What data currency guarantees can the system provide to users? • What are the (real $$$) costs involved? How do we charge users? • What is the effect on query delay (response time) and system throughput (query completion rate)?

  40. Weak vs. Strong Consistency • Strong consistency • Value read is most current value in system • Invalidation on each write • Disconnections may cause loss of invalidation messages • Can also poll on every access • Impossible to poll if disconnected! • Weak consistency • Value read may be “somewhat” out of date • TTL (time to live) associated with each value • Can combine TTL with polling • e.g., Polling to update TTL or retrieval of new copy of data item if out of date
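  A minimal Python sketch of the TTL mechanism for weak consistency (illustrative, not from any particular system):

      import time

      class TTLCache:
          # Weak consistency: each cached value carries an expiry time (TTL).
          def __init__(self):
              self._store = {}                       # key -> (value, expires_at)

          def put(self, key, value, ttl_seconds):
              self._store[key] = (value, time.time() + ttl_seconds)

          def get(self, key):
              value, expires_at = self._store[key]
              if time.time() >= expires_at:
                  # Out of date: the caller should poll the server to renew the
                  # TTL or fetch a fresh copy, as the slide suggests.
                  raise KeyError(f"{key}: TTL expired")
              return value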

  41. Invalidation Report for Strong Cache Consistency • Stateless: the server does not maintain information about the cache contents of the clients • Synchronous: an invalidation report is broadcast periodically, e.g., the Broadcast Timestamp scheme • Asynchronous: reports are sent on data modification • Property: a client cannot afford to miss an update (say, because of sleep or disconnection); otherwise, it must discard its entire cache content • Stateful: the server keeps track of the cache contents of its clients • Synchronous: none • Asynchronous: a proxy is used for each client to maintain state information for the items cached by the client and their last modification times; invalidation messages are sent to the proxy asynchronously • Clients can miss updates and resynchronize with the proxy (agent) upon reconnection

  42. Asynchronous Stateful (AS) Scheme • Whenever the server updates any data item, an invalidation report message is sent to the MH's (mobile host's) home agent (HA) via the wired line • A home location cache (HLC) is maintained at the HA to keep track of the data cached by the MH • The HLC is a list of records (x, T, invalid_flag) for each data item x locally cached at the MH, where x is the data item ID and T is the timestamp of the last invalidation of x • Invalidation reports are transmitted asynchronously and are buffered at the HA until an explicit acknowledgment is received from the specific MH; invalid_flag is set to true for data items for which an invalidation message has been sent to the MH but no acknowledgment has been received • Before answering queries from the application, the MH verifies whether the requested data item is in a consistent state. If it is valid, the MH satisfies the query; otherwise, an uplink request to the HA is issued

  43. Asynchronous Stateful (AS) Scheme • When the MH receives an invalidation message from the HA, it discards that data item from its cache • Each client maintains a cache timestamp indicating the timestamp of the last message it received from the HA • The HA discards from the HLC any invalidation messages with timestamps less than or equal to the cache timestamp t received from the MH, and only sends invalidation messages with timestamps greater than t • In sleep mode, the MH is unable to receive any invalidation messages • When an MH reconnects, it sends a probe message with its cache timestamp to its HA upon receiving a query; in response, the HA sends an invalidation report • The AS scheme can handle arbitrary sleep patterns of the MH
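  The HA-side bookkeeping could be sketched as follows (class and method names are hypothetical, not from Kahol et al.):

      class HomeLocationCache:
          # Per-MH state kept at the HA: item -> (last invalidation timestamp,
          # whether an invalidation is still unacknowledged).
          def __init__(self):
              self.records = {}

          def on_server_update(self, item, ts):
              # Buffer the invalidation until the MH explicitly acknowledges it.
              self.records[item] = (ts, True)

          def on_ack(self, item):
              ts, _ = self.records[item]
              self.records[item] = (ts, False)

          def report_for(self, mh_cache_ts):
              # Send only invalidations newer than the MH's cache timestamp,
              # e.g., in response to a probe after the MH wakes up.
              return {item: ts for item, (ts, pending) in self.records.items()
                      if pending and ts > mh_cache_ts}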

  44. Broadcast Timestamp Scheme (Synchronous Stateless) vs. Asynchronous Stateful Scheme [Figures from: A. Kahol, S. Khurana, S. K. S. Gupta, P. K. Srimani, "An Efficient Cache Maintenance Scheme for Mobile Environment"]

  45. Analysis of AS Scheme • Goals: • Cache miss probability • Mean query delay • Model constraint: a single Mobile Switching Station (MSS) with N mobile hosts

  46. Assumptions • M data items, each of size ba bits • No queuing of queries while the MH is disconnected • Single wireless channel of bandwidth C • All messages are queued and serviced FCFS • A query is of size bq bits • An invalidation is of size bi bits • Processing overhead is ignored

  47. MH Queries • Queries arrive according to a Poisson process with mean rate lambda • Queries are uniformly distributed over all M items in the database

  48. Data Item Updates • Time between two consecutive updates to a data item is exponentially distributed with rate μ

  49. MH States • Mobile hosts alternate between sleep and awake modes • s is the fraction of time spent in sleep mode; 0 ≤ s ≤ 1 • ω is the rate at which the state changes (sleeping or awake), i.e., the time t between two consecutive wake-ups is exponentially distributed with rate ω

  50. [Figure from: A. Kahol, S. Khurana, S. K. S. Gupta, P. K. Srimani, "An Efficient Cache Maintenance Scheme for Mobile Environment"]
