
Performance Tuning



  1. Performance Tuning • Next, we focus on lock-based concurrency control and look at reducing lock contention • The key is to combine the theory of concurrency control with practical DBMS knowledge • Goal: maximise DBMS throughput (not the response time of any single transaction)

  2. Lock Tuning (I) • Use special system facilities for long reads • Create a version for reading purposes -- multiversion read consistency • Snapshot Isolation • Eliminate locking when it is unnecessary • Single transaction: bulk loading • Read only transaction: statistical analysis
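
A minimal sketch of how such facilities are typically switched on, assuming the snapshot isolation option available in later versions of SQL Server and Oracle's read-only transaction mode; the database name perfdb is made up for illustration:

  -- SQL Server (later versions): enable and use snapshot isolation
  -- (multiversion read consistency); perfdb is a hypothetical database name
  alter database perfdb set allow_snapshot_isolation on;
  set transaction isolation level snapshot;

  -- Oracle: run a long statistical analysis as a read-only transaction,
  -- so its reads see a consistent snapshot and take no locks
  set transaction read only;
  select sum(balance) from accounts;
  commit;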

  3. Lock Tuning (II) • Take advantage of the transactional context to chop transactions into small pieces • Atomicity is only guaranteed on the small pieces! • Longer transactions hold more locks and wait longer (blocking others for a longer time) • Weaken isolation guarantees when the application allows it • SQL allows a choice among 4 isolation (consistency) levels
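
A hedged sketch of chopping, using the accounts table from the later slides and a hypothetical audit_log table: a unit of work that debits an account and records an audit row can run as two short transactions instead of one long one, provided the application can tolerate the intermediate state (atomicity now holds only per piece).

  -- One long transaction ...
  begin transaction
    update accounts set balance = balance - 100 where number = 94496;
    insert into audit_log values (94496, -100);
  commit transaction

  -- ... chopped into two short ones, so locks on accounts are released sooner
  begin transaction
    update accounts set balance = balance - 100 where number = 94496;
  commit transaction
  begin transaction
    insert into audit_log values (94496, -100);
  commit transaction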

  4. Lock Tuning (III) • Select the appropriate locking granularity • Table, record, field… • Run DDL statements when the system is not busy • The catalog is accessed by all transactions • Avoid such updates when the system is busy • Think about partitioning • Use multiple physical disks, insertion points
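
As a sketch of choosing a coarser granularity deliberately, most systems let a bulk operation take one table-level lock instead of many row locks; the Oracle/DB2-style statement below is illustrative:

  -- Take a single table lock for a bulk update instead of thousands of row locks
  lock table accounts in exclusive mode;
  update accounts set balance = balance * 1.01;
  commit;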

  5. Lock Tuning (IV) • Circumventing hot spots • Partitioning • Delay the access until a late stage of processing • Use special database operations • Tune the deadlock/time-out intervals • How long before a transaction should time out? • …
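
Tuning the time-out behaviour is usually a one-line session setting; the SQL Server statements below are hedged examples:

  -- give up on any lock wait after 5 seconds (value in milliseconds)
  set lock_timeout 5000;
  -- prefer this session as the victim if it becomes involved in a deadlock
  set deadlock_priority low;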

  6. Understand Your DBMS • Each DBMS product may have different default locking behaviours • Example: SQL Server and Sybase • Write locks are held to the end of a transaction • Read locks are released immediately after use • Not 2PL!
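
If an application needs 2PL-style read stability under such defaults, one hedged option in SQL Server is to hold the shared lock explicitly with a locking hint (or to raise the isolation level, as discussed on the following slides):

  -- keep the shared lock until the end of the transaction
  -- instead of releasing it right after the read
  select balance from accounts with (holdlock) where number = 94496;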

  7. Understand Your Applications • What are the requirements of your transaction? • Which transactions will run in parallel with yours? • Where does your transaction start and end?

  8. Understand Your Run-Time Environment • What are the concurrent transactions? • What are their priorities? • How has your system performed so far?

  9. Snapshot Isolation • Each transaction executes against the version of the data items that was committed when the transaction started • No locks for reads • Costs space (old copies of data must be kept) • Almost serialisable level: T1: x:=y, T2: y:=x, initially x=3 and y=17 • Serial execution: x,y=17 or x,y=3 • Snapshot isolation: x=17, y=3 if both transactions start at the same time • [Timeline diagram: initially X=Y=Z=0; T1 writes Y:=1; T2 writes X:=2, Z:=3; T3 then reads Y=1, Z=0, X=0, seeing only the updates committed before it started]
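
A hedged SQL rendering of the x/y example, assuming a hypothetical two-row table vars(name, val) initialised to ('x', 3) and ('y', 17), with T1 and T2 both starting before either commits:

  -- T1: x := y
  update vars set val = (select val from vars where name = 'y') where name = 'x';
  -- T2: y := x
  update vars set val = (select val from vars where name = 'x') where name = 'y';
  -- Serial execution ends with x = y (both 17 or both 3); under snapshot isolation
  -- each transaction reads the other's old value, leaving x = 17 and y = 3,
  -- an outcome (write skew) that no serial order can produce.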

  10. Isolation • Correctness vs. performance • Number of locks held by each transaction • Kinds of locks • Length of time a transaction holds locks • Life is full of compromises • High performance, at the cost of allowing some bad things to happen • The application programmer and DBA should make the decision • An informed decision!

  11. SQL Isolation Levels • Degree 0: • Allows dirty reads and lost updates/nonrepeatable reads • Write-locks released immediately after writing, and no read locks • Degree 1 (read uncommitted) • Degree 2 (read committed) • Degree 3 (serialisable)

  12. Isolation Levels • Read Uncommitted (no lost updates) • Write-locks held for the duration of the transaction • No read-locks • Read Committed (no dirty retrieval) • Read-locks released immediately after the read operation • SQL Server default option • Repeatable Read (no unrepeatable reads for read/write) • Two-phase locking • Serialisable (read/write/insert/delete model) • Table locking or index locking to avoid phantoms
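
In SQL the level is chosen per session or per transaction; the statements below use the standard syntax (Oracle accepts only the read committed and serializable forms):

  set transaction isolation level read uncommitted;
  set transaction isolation level read committed;
  set transaction isolation level repeatable read;
  set transaction isolation level serializable;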

  13. Value of Serializability: Data • Settings: accounts( number, branchnum, balance); create clustered index c on accounts(number); • 100 000 rows • Cold buffer; same buffer size on all systems. • Row level locking • Isolation level (SERIALIZABLE or READ COMMITTED) • SQL Server 7, DB2 v7.1 and Oracle 8i on Windows 2000 • Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

  14. Value of Serializability: Transactions Concurrent Transactions: • T1: summation query [1 thread]: select sum(balance) from accounts; • T2: swap balance between two account numbers (in order of scan to avoid deadlocks) [N threads]:
  valX := select balance from accounts where number = X;
  valY := select balance from accounts where number = Y;
  update accounts set balance = valX where number = Y;
  update accounts set balance = valY where number = X;

  15. Value of Serializability: Results • With SQL Server and DB2 the scan returns incorrect answers if the read committed isolation level is used (the default setting)

  16. Cost of Serializability • Because the update conflicts with the scan, correct answers are obtained at the cost of decreased concurrency and thus decreased throughput

  17. Logical Bottleneck: Sequential Key Generation • Consider an application that requires a sequential number to act as a key in a table, e.g. invoice numbers for bills • Ad hoc approach: a separate table holding the last invoice number; fetch and update that number in each insert transaction • Counter approach: use a facility such as Sequence (Oracle) / Identity (MSSQL)
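
A sketch of the counter approach in both systems named above; the sequence name invoice_seq and the invoice table are illustrative:

  -- Oracle: a sequence object hands out key values without a hot counter row
  create sequence invoice_seq;
  create table invoice (invoice_num int, amount float);
  insert into invoice values (invoice_seq.nextval, 42.0);

  -- SQL Server: an identity column generates the key automatically
  create table invoice (invoice_num int identity, amount float);
  insert into invoice (amount) values (42.0);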

  18. Counter Facility: Data Settings: • Default isolation level: READ COMMITTED; empty tables • Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.
  accounts( number, branchnum, balance );
  create clustered index c on accounts(number);
  counter ( nextkey );
  insert into counter values (1);

  19. Counter Facility: Transactions No Concurrent Transactions: • System [100 000 inserts, N threads] • SQL Server 7 (uses Identity column): insert into accounts values (94496,2789); • Oracle 8i: insert into accounts values (seq.nextval,94496,2789); • Ad hoc [100 000 inserts, N threads]:
  begin transaction
    NextKey := select nextkey from counter;
    update counter set nextkey = NextKey + 1;
  commit transaction
  begin transaction
    insert into accounts values (NextKey, ?, ?);
  commit transaction

  20. Avoid Bottlenecks: Counters • A system-generated counter (system) performs much better than a counter managed as an attribute value within a table (ad hoc)

  21. Insertion Points: Transactions No Concurrent Transactions: • Sequential [100 000 inserts, N threads]: insertions into the account table with a clustered index on ssnum; data is sorted on ssnum; single insertion point • Non Sequential [100 000 inserts, N threads]: insertions into the account table with a clustered index on ssnum; data is not sorted (uniform distribution); 100 000 insertion points • Hashing Key [100 000 inserts, N threads]: insertions into the account table with an extra attribute att and a clustered index on (ssnum, att); att contains a hash key (1021 possible values); 1021 insertion points
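
A hedged sketch of the hashing-key variant, assuming the hash attribute leads the clustered index so that sequential ssnum inserts spread over 1021 positions; the table definition and mod-based hash below are illustrative:

  -- account table with an extra hash attribute spreading the insertion points
  create table account (ssnum int, att int, balance float);
  create clustered index ci on account (att, ssnum);

  -- each insert derives att from the key; % is SQL Server's modulo operator
  insert into account values (94496, 94496 % 1021, 2789);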

  22. Insertion Points • Page locking: a single insertion point is a source of contention (sequential key with clustered index, or heap) • Row locking: no contention between successive insertions • DB2 v7.1 and Oracle 8i do not support page locking

  23. Summary • In today’s lecture, we have covered • A review of concurrency control in DBMS • How to reduce lock contention • Concurrency control (isolation) levels, their implications for applications, and their overheads in different systems
