Reliable Verification Using Symbolic Simulation with Scalar Values
Chris Wilson and David L. Dill
Computer Systems Laboratory, Stanford University
June 2000
Verification Bottleneck
[Figure: bug rate over time. Directed testing finds many "easy" bugs early; random testing finds fewer "hard" bugs; the long tail before tapeout is "purgatory".]
Current Approach
[Figure: bug rate over time, with directed testing, random testing, model checking, emulation, and semi-formal methods each applied along the curve.]
Our Approach
[Figure: bug rate over time using symbolic simulation. Key issue: reliability!]
Reliability • Definition: • always gives some coverage when resource limits are encountered; • gives coverage proportional to effort. • Ease of use: • predictable coverage • useful feedback • easy to debug
Efficiency • Efficiency = Coverage/Unit Effort • Coverage • specified functionality • “input space” • Effort • manpower • computer resource usage • schedule
Reliability vs. Efficiency
[Figure: % of bugs found; model checking, emulation, random testing, and directed testing plotted on axes of reliability vs. efficiency.]
Goal • Have the reliability and ease of use of directed testing, • AND… • efficiency equal to or greater than random testing.
Reliability vs. Efficiency
[Figure: the same reliability vs. efficiency plot, with a target area in the high-reliability, high-efficiency corner beyond all four existing methods.]
Symbolic Simulation
Symbolic test = directed test with symbolic values
[Figure: a DUT driven by a symbolic test: a "write" and a "read" with symbolic data <d1,d2,d3,d4> and address <a1,a2,a3,a4>; checks include the pass/fail condition datain = dataout.]
Symbolic Simulation • Efficiency: • 1 symbolic test <=> many directed tests. • Ease of use: • short tests => easy to write and debug. • Blow-up? • BDDs are too unpredictable. How to prevent blow-up?
Quasi-symbolic simulation • Symbolic simulation externally, • scalar values internally; • a simulation run requires constant memory. • Key ideas: • Don't compute an exact value unless necessary; • large designs have many don't-cares. • Trade time for memory: • multiple runs generate exact values.
Basic Algorithm
[Figure: AND gates evaluated under different value systems. The "traditional" X value is a conservative approximation but loses information; the scalar values a and -a obey the law of excluded middle; don't-care variables drive don't-care logic.]
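The value system sketched on this slide can be illustrated in a few lines of Python. This is a hedged reconstruction, not the authors' encoding: values are the constants 0 and 1, the conservative X, or a scalar literal of a single symbolic variable, so that a AND -a reduces to 0 (the law of excluded middle) where a traditional three-valued X would not.

```python
# Minimal sketch (an assumption, not the authors' exact encoding) of a
# quasi-symbolic value system.  A wire carries 0, 1, the unknown X, or
# a literal of one symbolic variable ("a" or "-a").  Unlike the
# traditional X, opposite literals of the same variable cancel.

ZERO, ONE, X = "0", "1", "X"

def lit(var, positive=True):
    """A scalar literal: the variable itself or its negation."""
    return (var, positive)

def neg(v):
    if v == ZERO: return ONE
    if v == ONE: return ZERO
    if v == X: return X
    var, pos = v
    return (var, not pos)

def and2(p, q):
    if ZERO in (p, q): return ZERO      # 0 is controlling, even vs. X
    if p == ONE: return q
    if q == ONE: return p
    if p == q and p != X: return p      # a AND a = a
    if p != X and q != X and p[0] == q[0]:
        return ZERO                     # a AND -a = 0: excluded middle
    return X                            # conservative approximation

def or2(p, q):
    return neg(and2(neg(p), neg(q)))    # De Morgan

a = lit("a")
print(and2(a, neg(a)))                  # "0", not "X"
print(or2(a, neg(a)))                   # "1"
```

A traditional ternary simulator would return X for both expressions; keeping the variable identity in the scalar value is what recovers the exact answer without building a BDD.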
Decision Procedure
[Figure: an AND gate with symbolic inputs a and b; with X values the output is unknown, and the decision procedure must resolve it.]
Davis-Putnam Algorithm
• Tree search: Davis, Logemann, Loveland [DPLL62].
[Figure: search tree. A case split on a (a=1, a=0) is followed by unit propagation on b; each branch evaluates to 0.]
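The tree search on this slide (case split plus unit propagation) can be sketched compactly. This is an illustrative DPLL over CNF clauses, assuming the usual integer-literal encoding; it is not the authors' implementation.

```python
# Illustrative DPLL (Davis-Putnam-Logemann-Loveland) tree search.
# A clause is a set of integer literals; a negative literal is a
# negated variable.  Assumed encoding for illustration only.

def unit_propagate(clauses, assignment):
    """Repeatedly satisfy unit clauses; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for c in clauses:
            live = {l for l in c if -l not in assignment}
            if any(l in assignment for l in live):
                continue                        # clause already satisfied
            if not live:
                return None                     # conflict: clause falsified
            if len(live) == 1:
                assignment.add(next(iter(live)))  # forced (unit) literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None                             # UNSAT under this branch
    unassigned = ({abs(l) for c in clauses for l in c}
                  - {abs(l) for l in assignment})
    if not unassigned:
        return assignment                       # all variables decided: SAT
    v = min(unassigned)                         # variable selection heuristic
    for branch in (v, -v):                      # case split: v=1, then v=0
        result = dpll(clauses, assignment | {branch})
        if result is not None:
            return result
    return None

# (a or b) and (not a or b) and (not b or c)
cnf = [{1, 2}, {-1, 2}, {-2, 3}]
print(dpll(cnf))                                # a satisfying assignment
```

The variable selection heuristic here is a placeholder (`min`); the talk's heuristic picks a relevant variable by propagating from the inputs.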
Decision Procedure
[Figure: worked example. Case splitting on a=1 and a=0 drives the AND output to 0 on every branch: the test is unsatisfiable!]
• Variable selection heuristic: pick a relevant variable by propagating from the inputs.
Reactivity • Reactive test: • test behavior depends on the circuit. • Most tests require reactivity: • since the goal is to find all bugs, • reactivity must be supported efficiently.
Reactivity example
Set 'request' = READ;
Set 'reqv' = "1";
wait for 'ack';
check that 'data' = expected_data;
stop;
What if 'ack' = "X"?
Wait Statement
[Figure: wait for 'ack' == "1" with 'ack' = "X" forks a virtual thread per outcome: on 'ack' == T the test proceeds to check 'data' and stop; on 'ack' == F it keeps waiting.]
Cycle 2
[Figure: in the next cycle the still-waiting thread splits on 'ack' again; the tree of virtual threads grows by one level per cycle while 'ack' = "X".]
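The virtual-thread forking above can be sketched as follows. All names (`step_threads`, the thread dictionaries) are illustrative assumptions, not the authors' simulator code; the point is only that an X-valued wait guard forks one virtual thread per outcome, so the tree grows by one waiting thread per cycle.

```python
# Hedged sketch of virtual threads for reactive tests.  When a
# 'wait for' guard evaluates to X, the test forks: one thread assumes
# the guard is true and proceeds to the check, the other keeps
# waiting.  Structure is an assumption for illustration only.

def step_threads(threads, ack_value):
    """Advance every thread one cycle given 'ack' (0, 1, or 'X')."""
    advanced = []
    for t in threads:
        if t["state"] != "waiting":
            advanced.append(t)                      # already past the wait
        elif ack_value == 1:
            advanced.append({**t, "state": "checking"})   # guard true
        elif ack_value == 0:
            advanced.append(t)                      # keep waiting
        else:  # ack = 'X': fork one virtual thread per outcome
            advanced.append({**t, "state": "checking",
                             "assumes": t["assumes"] + ["ack=1"]})
            advanced.append({**t,
                             "assumes": t["assumes"] + ["ack=0"]})
    return advanced

threads = [{"state": "waiting", "assumes": []}]
threads = step_threads(threads, "X")   # cycle 1: fork on X
threads = step_threads(threads, "X")   # cycle 2: waiting thread forks again
print(len(threads))                    # 3: tree grows one level per cycle
```

Each thread's `assumes` list records the case-split assumptions under which it is live, which is what the decision procedure later discharges.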
Stopping
[Figure: at 'check that data = expected_data; stop;' the stop guard = "X". Stop? or not stop?]
Stopping • Modify Davis-Putnam: • if the guard condition = "X" when stopped, • prove that the test can really stop in this cycle. • Case split on the guard condition; • do not allow unit propagation. • Case split on the fail/pass condition only if stop = "1"; • unit propagation is allowed. • Disallowing unit propagation on the guard keeps the method complete.
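The stopping policy can be sketched as a small recursive check. The callbacks `eval_guard` and `eval_check` and the assumption strings are hypothetical names for illustration; the real decision procedure works on circuit values, but the branching order is the one described above: split on the guard first, and only on branches where stop = 1 go on to split on pass/fail.

```python
# Hedged sketch of the modified stopping check.  When the stop guard
# is "X", case split on the guard to prove the test can really stop
# this cycle; only where stop = 1 is the pass/fail condition split.
# eval_guard / eval_check are hypothetical callbacks, not from the talk.

def check_stop(eval_guard, eval_check, assumptions=()):
    """Return a list of (assumptions, verdict) pairs for this cycle."""
    g = eval_guard(assumptions)
    if g == "X":
        results = []
        for branch in ("guard=1", "guard=0"):   # case split on the guard
            results += check_stop(eval_guard, eval_check,
                                  assumptions + (branch,))
        return results
    if g == 0:
        return [(assumptions, "still running")]  # test does not stop yet
    c = eval_check(assumptions)                  # stop = 1: split pass/fail
    if c == "X":
        return [(assumptions + ("check=1",), "pass"),
                (assumptions + ("check=0",), "fail")]
    return [(assumptions, "pass" if c else "fail")]

# toy evaluators: each value is "X" until an assumption decides it
def guard(a):
    if "guard=1" in a: return 1
    if "guard=0" in a: return 0
    return "X"

def check(a):
    if "check=1" in a: return 1
    if "check=0" in a: return 0
    return "X"

for assumptions, verdict in check_stop(guard, check):
    print(assumptions, verdict)      # 3 branches: pass, fail, still running
```

The "still running" branch is why the proof matters: declaring a verdict while the guard could still be 0 would report coverage the test never earned.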
Related Work • BDD-based Symbolic Simulation • STE [BryantSeger95], Innologic. • Sequential ATPG • SAT/ATPG-based Model Checking • BMC [Biere99], [Boppana99] • Other SAT-based Semi-Formal Methods • [Ganai99]
Experiments • Show that quasi-symbolic simulation can find bugs: • test-case bugs do not cause bottlenecks. • Demonstrate graceful degradation: • get good coverage even if the simulation time limit is hit.
Experiment 1 • Write/debug a test case for a "hard" bug. • 140K-gate industrial design. • Bug not found in simulation or bringup! • Four possible results: • SAT: test case error. • TIMEOUT: test case error (device timeout). • UNSAT: no bug found. • BUG: bug found.
Experiment 1

result    cases   evals   time (sec.)
SAT       19      3.8     31.4
TIMEOUT   22      1.6     49.0
UNSAT     9       52.3    445.9
BUG       1       78      863.0
Experiment 2
[Figure: coverage results showing the highest covered sub-node when the time limit is hit.]
Experiment 2
[Figure: maximum tree size vs. number of dependent variables in the test.]
Conclusions • Want to find all bugs, faster. • Reliability is key. • Use quasi-symbolic simulation: • it has the efficiency of random testing • and the reliability of directed testing. • Experiments show it can be used as a primary verification method.