
Formation et Analyse d’Images Session 6



  1. Formation et Analyse d’Images Session 6 Daniela Hall 18 November 2004

  2. Course Overview • Session 1: • Homogeneous coordinates and tensor notation • Image transformations • Camera models • Session 2: • Camera models • Reflection models • Color spaces • Session 3: • Review color spaces • Pixel based image analysis • Session 4: • Gaussian filter operators • Scale space

  3. Course overview • Session 5: • Contrast description • Hough transform • Session 6: • Kalman filter • Tracking of regions, pixels, and lines • Session 7: • Stereo vision • Epipolar geometry • Session 8: exam

  4. Session overview • Kalman filter • Robust tracking of targets • Tracking of point neighborhoods • using SSD • using CC and NCC • using Gaussian receptive fields

  5. Kalman filter • The Kalman filter is an optimal recursive estimator. • Kalman filtering has been applied in areas such as • aerospace, • marine navigation, • nuclear power plant instrumentation, • manufacturing, and many others. • The typical problem tries to estimate position and speed from the measurements. Tutorial: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html G. Welch, G. Bishop: An Introduction to the Kalman Filter, TR 95-041, Univ of N. Carolina, USA

  6. Kalman filter • The Kalman filter tries to estimate the state x of a discrete time controlled process that is governed by the equation xk = A xk-1 + B uk-1 + wk-1 • with measurement zk = H xk + vk • Process noise wk ~ N(0, Q) • Measurement noise vk ~ N(0, R) • Process noise covariance Q and measurement noise covariance R might in practice change over time, but are assumed constant. • Matrix A relates the state at time k to the state at the previous time k-1, in the absence of noise. In practice A might change over time, but is assumed constant. • Matrix B relates the optional control input uk-1 to the state x. We set it aside for the moment. • Matrix H relates the state to the measurement.

  7. Notations • Measurement zk • Measurement noise covariance R • Process noise wk • Process noise covariance Q • Kalman gain K

  8. Kalman filter notations • A priori state estimate ^xk- • A posteriori state estimate ^xk • A priori estimate error ek- = xk – ^xk- • A posteriori estimate error ek = xk – ^xk • A priori estimate error covariance Pk- = E[ek- ek-ᵀ] • A posteriori estimate error covariance Pk = E[ek ekᵀ]

  9. Kalman filter • Goal: find the a posteriori state estimate ^xk as a linear combination of the a priori estimate ^xk- and a weighted difference between the actual measurement zk and the measurement prediction H^xk-: ^xk = ^xk- + K(zk – H^xk-) • The difference (zk – H^xk-) is called the innovation or residual. • K is the gain or blending factor that minimizes the a posteriori error covariance.

  10. Kalman gain K • Matrix K is the gain that minimizes the a posteriori error covariance Pk = E[ek ekᵀ], with ek = xk – ^xk. • How to minimize: substitute ^xk = ^xk- + K(zk – H^xk-) into this expectation, take the derivative of the trace of Pk with respect to K, set it to zero, and solve for K.

  11. Kalman gain K • One form of the result is Kk = Pk- Hᵀ (H Pk- Hᵀ + R)⁻¹ • When the measurement noise covariance R is small, the residual is weighted more heavily. • When the a priori estimate error covariance Pk- is small, the residual is weighted little.

  12. Kalman gain K • As the measurement noise covariance R approaches 0, the actual measurement zk is trusted more, while the predicted measurement H^xk- is trusted less. • As the a priori estimate error covariance Pk- approaches 0, the actual measurement zk is trusted less, while the predicted measurement H^xk- is trusted more.

  13. Discrete Kalman filter algorithm • The Kalman filter estimates a process by using a form of feedback control. • The filter estimates the process state at some time and then obtains feedback in the form of noisy measurements. • The Kalman filter equations fall into two groups: • time update equations • measurement update equations • The time update equations project the current state and error covariance estimates forward in time to obtain a priori estimates. • The measurement update equations implement the feedback. They incorporate a new measurement into the a priori estimate to form an improved a posteriori estimate.

  14. Kalman filter algorithm • Time update (« Predict »): projects the current state estimate ahead in time. • Measurement update (« Correct »): adjusts the projected estimate by an actual measurement.

  15. Kalman filter • Initial estimates for ^xk-1 and Pk-1 • Time update equations (predict): • Project the state estimate forward in time: ^xk- = A ^xk-1 (the control term B uk-1 is set aside) • Project the error covariance forward in time: Pk- = A Pk-1 Aᵀ + Q • Measurement update equations (correct): • Compute the Kalman gain: Kk = Pk- Hᵀ (H Pk- Hᵀ + R)⁻¹ • Measure the process zk and compute the a posteriori estimate: ^xk = ^xk- + Kk (zk – H ^xk-) • Compute the a posteriori error covariance estimate: Pk = (I – Kk H) Pk-
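
For concreteness, here is a minimal sketch of one predict/correct cycle in Python with numpy. The control term B u is omitted, as on the slides; the matrices A, H, Q, R and the initial estimates are application-dependent choices, not values from the course.

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/correct cycle of the discrete Kalman filter."""
        # Time update (predict): project the state and covariance forward.
        x_prior = A @ x
        P_prior = A @ P @ A.T + Q
        # Measurement update (correct): compute the gain and fold in z.
        S = H @ P_prior @ H.T + R               # innovation covariance
        K = P_prior @ H.T @ np.linalg.inv(S)    # Kalman gain Kk
        x_post = x_prior + K @ (z - H @ x_prior)
        P_post = (np.eye(len(x)) - K @ H) @ P_prior
        return x_post, P_post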

  16. Filter parameters and tuning • R: the measurement noise covariance can be measured prior to filter operation (off-line). • Q: the process noise covariance cannot be measured, because we cannot directly observe the process we are estimating. If we choose Q large enough (lots of uncertainty), a poor process model can still produce acceptable results. • Parameter tuning: we can increase filter performance by tuning the parameters R and Q. We can even use a distinct Kalman filter for the tuning. • If R and Q are constant, the estimation error covariance Pk and the Kalman gain Kk stabilize quickly and then stay constant. In this case, Pk and Kk can be precomputed.
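
A sketch of this precomputation, assuming constant A, H, Q, R: iterate the covariance recursion until Pk and Kk stabilize. The iteration count here is an arbitrary choice.

    import numpy as np

    def steady_state_gain(A, H, Q, R, iterations=200):
        """Iterate the predict/correct covariance recursion until
        P and K stabilize (assumes constant A, H, Q, R)."""
        P = np.eye(A.shape[0])
        K = None
        for _ in range(iterations):
            P_prior = A @ P @ A.T + Q
            K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
            P = (np.eye(A.shape[0]) - K @ H) @ P_prior
        return K, P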

  17. Session overview • Kalman filter • Robust tracking of targets • Tracking of point neighborhoods • using SSD • using CC and NCC • using Gaussian receptive fields

  18. Robust tracking of objects • Tracking loop (diagram): Predict produces a list of predictions from the list of targets; Detection takes measurements within the predicted regions; Correct updates the list of targets with the measurements. • New targets are initialised by detection in trigger regions.

  19. Robust tracking of objects • Measurement: the observed target position zk = (i, j)ᵀ • State vector: target position and velocity xk = (i, j, i', j')ᵀ • State equation: xk = A xk-1 + wk-1 • Source: M. Kohler: Using the Kalman Filter to Track Human Interactive Motion, Research Report No 629/Feb 1997, University of Dortmund, Germany

  20. Robust tracking of objects • Measurement noise error covariance R • Temporal matrix A • Process noise error covariance Q, with parameter a • The parameter a affects the computation speed (a large a increases the uncertainty and therefore the size of the search regions).

  21. Form of the temporal matrix A • Matrix A relates the a posteriori state estimate ^xk-1 to the a priori state estimate ^xk-: ^xk- = A ^xk-1 • The new a priori state estimate requires the temporal derivative of the position. • According to a first order Taylor series we can write x(t + Δt) ≈ x(t) + Δt·x'(t), which gives, per coordinate, the constant velocity temporal matrix A = [[1, Δt], [0, 1]] acting on (position, speed).
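
As an illustration, the corresponding temporal matrix for the full 2D state (i, j, i', j') could be built as follows; the time step dt = 1 frame is an assumption.

    import numpy as np

    dt = 1.0  # time between two frames, in frame units (assumed)
    # First order Taylor expansion x(t + dt) ~ x(t) + dt * x'(t),
    # applied to each position coordinate of the state (i, j, i', j'):
    A = np.array([[1, 0, dt,  0],
                  [0, 1,  0, dt],
                  [0, 0,  1,  0],
                  [0, 0,  0,  1]])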

  22. Kalman filter notations • A priori state estimate ^xk- • A posteriori state estimate ^xk • A priori estimate error ek- = xk – ^xk- • A posteriori estimate error ek = xk – ^xk • A priori estimate error covariance Pk- = E[ek- ek-ᵀ] • A posteriori estimate error covariance Pk = E[ek ekᵀ]

  23. Kalman filter • Initial estimates for ^xk-1 and Pk-1 • Time update equations (predict): • Project the state estimate forward in time: ^xk- = A ^xk-1 • Project the error covariance forward in time: Pk- = A Pk-1 Aᵀ + Q • Measurement update equations (correct): • Compute the Kalman gain: Kk = Pk- Hᵀ (H Pk- Hᵀ + R)⁻¹ • Measure the process zk and compute the a posteriori estimate: ^xk = ^xk- + Kk (zk – H ^xk-) • Compute the a posteriori error covariance estimate: Pk = (I – Kk H) Pk-

  24. Example • A 1D point moves with a certain speed on a continuous scale • We have a sensor that gives only integer values • Design a Kalman filter for the process.

  25. Example results • true position p: 2.2, 6.4, 10.6, 14.8, 19 • true speed p': 4.2, 4.2, 4.2, 4.2 • measured position z: 2, 6, 11, 15, 19 • measured gradient z': 4, 5, 4, 4 • estimated position ^x: 2, 5.6, 10.9, 15.1 • estimated gradient ^x': 0, 4.2, 5.3, 4.2 • K and P converge quickly.
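
A sketch of this example reusing the kalman_step function above, assuming dt = 1, a state (position, speed), and a measurement of the position only (the slide also lists a measured gradient, which would add a second row to H). The noise covariances are illustrative guesses, not the values used in the course.

    import numpy as np

    A = np.array([[1.0, 1.0],    # position advances by the speed each step
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])   # the sensor observes the rounded position only
    Q = 0.01 * np.eye(2)         # small process noise (assumed)
    R = np.array([[1.0 / 12]])   # variance of rounding to integers (uniform on [-0.5, 0.5])

    x = np.array([2.0, 0.0])     # initial estimate: first measurement, speed unknown
    P = np.eye(2)

    for z in [6.0, 11.0, 15.0, 19.0]:
        x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
        print(x)  # the estimates approach the true motion; P converges quickly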

  26. Session overview • Kalman filter • Robust tracking of targets • Tracking of point neighborhoods • using SSD • using CC and NCC • using Gaussian receptive fields

  27. Tracking of point neighborhoods • When we have additive Gaussian noise, the Euclidean norm is the optimal method for neighborhood matching, because it minimises the error probability. • Goal: find the position (i,j) of the image I that is most similar to the pattern X(m,n). • Hypotheses: • additive Gaussian noise • no image rotation (2D) • no rotation in space (3D) • no scale changes • The Euclidean norm is known as SSD (« sum of squared distances ») • The method is efficient and precise, but sensitive.

  28. Sum of squared distances (SSD) • Definition: • Let X(m,n) be the pattern with 0 ≤ m ≤ M-1, 0 ≤ n ≤ N-1 • Let I(i,j) be the image with 0 ≤ i ≤ I-1, 0 ≤ j ≤ J-1, (M << I, N << J) • The position (i,j) of the image I that is most similar to the pattern X is computed as (i,j)* = argmin(i,j) SSD(i,j), with SSD(i,j) = Σm Σn (I(i+m, j+n) – X(m,n))²

  29. Sum of squared distances • Searching for a pattern X(m,n) within an image I(i,j) corresponds to placing the pattern at all possible positions (i,j) and computing SSD(i,j). • Depending on the size of the pattern and the image, this can be costly. • SSD is sensitive to rotations, scale changes, and noise.
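
A minimal sketch of this exhaustive SSD search in Python, assuming grayscale images as 2D numpy arrays; the names are illustrative.

    import numpy as np

    def ssd_search(image, pattern):
        """Return the position (i, j) minimising SSD(i, j)."""
        M, N = pattern.shape
        best, best_pos = np.inf, (0, 0)
        for i in range(image.shape[0] - M + 1):   # every position where X fits
            for j in range(image.shape[1] - N + 1):
                d = image[i:i+M, j:j+N].astype(float) - pattern
                s = np.sum(d * d)                 # SSD(i, j)
                if s < best:
                    best, best_pos = s, (i, j)
        return best_pos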

  30. Pattern as a feature vector • Any image patch can be seen as a vector. • To transform a 2D image patch into a vector, you concatenate its rows one after another. For a patch of size MxN, you obtain a vector with M*N dimensions.

  31. SSD using feature vectors • Transform the pattern X(m,n) and the neighborhood of size MxN at the position (i,j) of the image I to vectors. • SSD is the squared Euclidean norm of the difference of these two vectors.

  32. Cross correlation (CC) • Another method for pattern matching is cross correlation (the scalar product): CC(i,j) = Σm Σn I(i+m, j+n)·X(m,n). The best match is characterised by maximising the product. • In the case of normalised vectors, the scalar product is the cosine of the angle between the vectors. This is the definition of the normalised cross correlation (NCC): NCC = ⟨x, y⟩ / (||x||·||y||), with -1 ≤ NCC ≤ 1.
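
A sketch of the NCC between one image patch and the pattern, under the same assumptions as the SSD sketch above:

    import numpy as np

    def ncc(patch, pattern):
        """Cosine of the angle between the flattened vectors, in [-1, 1]."""
        a = patch.astype(float).ravel()
        b = pattern.astype(float).ravel()
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))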

  33. Relation of SSD and NCC • The best match minimises SSD and maximises NCC. • We note: ||x – y||² = ||x||² + ||y||² – 2⟨x, y⟩, so for vectors of fixed norm, minimising the SSD is equivalent to maximising the scalar product, and hence the NCC.

  34. Tracking by correlation • The computation time of tracking by correlation depends on the size of the pattern (target) and the size of the image. • When all possible positions in the image are tested, this is slow. • How can we optimise tracking by correlation (reduce the computation time)? • Reduce the number of tests by testing only one position out of two in each direction. This increases speed by a factor of 4, but reduces the precision of the result. Problem: if too few positions are tested, the target might be missed. • Reduce the number of tests by restricting the search to a small search region (region of interest, ROI).

  35. Speed-up of tracking • The search region can be determined from the position of the target at time t-1 and its maximum speed, measured in pixels per Δt. • If we can reduce the search region, we can process more images (reduce Δt), which in turn allows us to reduce the search region further, and so on. • Problem: the apparent speed depends on the distance of the object to the camera. Close objects have higher apparent speeds than objects far away.

  36. Example

  37. Example • A person traverses the entry hall in 5.2 s (130 frames at 25 frames/s) • The distance is 288 pixels, the target size is 45x35 pixels • Speed: 288 pixels / 5.2 s = 55.4 pixels/s • Let the maximum speed be twice the measured speed: 110.8 pixels/s • Then we need a search region of size target size + the maximum displacement per frame: 110.8 pixels/s / 25 frames/s = 4.4 pixels • ROI = target size +/- 4.4 pixels ≈ 54 x 44 pixels.

  38. Example • Number of tests for exhaustive search (searching the whole image of size 384x288 pixels): (384-45)(288-35) = 85767 tests • Number of tests using the search region (54x44 pixels): (54-45)(44-35) = 81 tests • Speed-up factor: 85767/81 ≈ 1059
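
A sketch of the ROI-restricted search, reusing the ssd_search function from the SSD sketch above; the margin parameter is a hypothetical name for the rounded maximum displacement per frame.

    def ssd_search_roi(image, pattern, pred_i, pred_j, margin=5):
        """Search only a small region around the predicted position."""
        M, N = pattern.shape
        i0, j0 = max(pred_i - margin, 0), max(pred_j - margin, 0)
        roi = image[i0:i0 + M + 2 * margin, j0:j0 + N + 2 * margin]
        di, dj = ssd_search(roi, pattern)   # far fewer tests than exhaustive search
        return i0 + di, j0 + dj             # back to image coordinates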

  39. Session overview • Kalman filter • Robust tracking of targets • Tracking of point neighborhoods • using SSD • using CC and NCC • using Gaussian receptive fields

  40. Discussion of pattern matching using SSD and NCC • Matching is fast (when we use a small search region) and precise (when we test pixelwise positions). • It is sensitive to rotation and scale changes: under such changes, matching fails. • Approach: a regular update of the target reduces the sensitivity to rotation.

  41. Regular target update For each frame: • Compute the search region • Detect the best matching position • Update the target by copying the target region from the current frame • Update the target position • Tracking must be fast relative to the changes in rotation; the regular update then compensates for slow rotation changes. • It can also compensate for scale changes, but we do not know whether we need to increase or diminish the target size. A sketch of this loop follows below.
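
A sketch of the update-per-frame loop, reusing the ssd_search_roi function above; frames is assumed to be an iterable of grayscale images.

    def track(frames, pattern, pos):
        """Track by SSD within a ROI, refreshing the target template each frame."""
        M, N = pattern.shape
        for frame in frames:
            pos = ssd_search_roi(frame, pattern, *pos)   # best match in the ROI
            i, j = pos
            pattern = frame[i:i+M, j:j+N].copy()         # update the target template
        return pos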

  42. Point neighborhood description using Gaussian receptive fields • An approach that uses a scale and orientation invariant description of point neighborhoods. • Matching of point neighborhoods is possible by evaluating the distance in feature space. • Computationally more expensive than SSD or NCC, due to the scale and orientation normalisation. • This argument is less critical nowadays, because we have sufficient computing power to do real-time point matching using receptive fields.

  43. Target localisation using receptive fields • Model the target by a grid of local receptive fields. • For tracking within the ROI, extract the scale and rotation invariant receptive field responses. • Search for the point neighborhoods that are most similar to the model by evaluating the distance in feature space.

  44. Extraction of invariant features For all positions within the ROI: • Determine the dominant orientation of the point neighborhood. • Determine the intrinsic feature scale of the point neighborhood. • Project the point neighborhood onto the scale and orientation normalised receptive field vector. The result is the feature signature. See session 4 for details.

  45. Search for matching candidates • Matching candidates are found by evaluating the distance in feature space. • All elements within a sphere of radius e around the query vector are matching candidates. • Tree structures allow efficient nearest neighbor search.
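
One possible tree-based candidate search, sketched with scipy.spatial.cKDTree; the feature dimensionality, the placeholder data, and the radius e are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    model_features = np.random.rand(100, 8)   # one signature per model grid point (placeholder)
    tree = cKDTree(model_features)

    query = np.random.rand(8)                 # signature of a point in the ROI
    # All model signatures within distance e of the query are matching candidates.
    candidates = tree.query_ball_point(query, r=0.2)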

  46. Discussion of matching using receptive field vectors • Allows matching invariant to position, orientation and size: the target can be found even when its scale and orientation (2D) have changed. • Computationally more expensive than SSD or NCC. • Real-time tracking and identification are possible by restricting the search to a search region.
