
Combined wavelet-domain and motion-compensated video denoising based on video codec motion estimation methods. Ljubomir Jovanov, Aleksandra Pižurica, Stefan Schulte, Peter Schelkens, Adrian Munteanu, Etienne Kerre, Wilfried Philips. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 19, No. 3, March 2009.


Presentation Transcript


  1. Combined wavelet-domain and motion-compensated video denoising based on video codec motion estimation methods. Ljubomir Jovanov, Aleksandra Pižurica, Stefan Schulte, Peter Schelkens, Adrian Munteanu, Etienne Kerre, Wilfried Philips. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 19, NO. 3, MARCH 2009

  2. Outline • Introduction • Motion field refinement step • Motion-compensated temporal filter • Spatial filter • Results • Motion refinement algorithm • Denoising results

  3. Introduction • Noise in video sequences increases image entropy, thereby reducing the effective compression performance. • We reuse motion estimation resources from the video coding module for video denoising. • Motion fields produced by real-time video codecs cannot be directly employed in video denoising, since codecs, as opposed to noise filters, tolerate errors in the motion field → a novel motion-field filtering step refines the accuracy of the estimated motion. • A novel temporal filter is proposed that is robust against errors in the estimated motion field.

  4. Motion field refinement step • Motion estimators, such as the half-pixel motion field estimator defined in the MPEG-4 standard used in our work, do not capture realistic motion fields, because they do not use the neighboring motion vectors to impose a structure on the motion field. • We propose a motion field filtering technique that eliminates spurious motion vectors from the spatial areas in the video frames where no actual motion exists. • We compare the MAD between corresponding blocks with the average MAD, and based on that we decide whether motion is present or not. (Figure: block (i, j) and pixel (m, n) in frames k−1 and k)

  5. Motion field refinement step • We define a threshold γ for motion detection in the k-th frame by scaling the average absolute block difference over the frame; the scalar value 0.45 yields the best results for most of the test sequences. • We then decide whether motion exists in each block simply by comparing the block's absolute difference with the previously calculated threshold γ: below the threshold → the motion vector is set to zero; otherwise → it keeps its original value.
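The refinement step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, the use of the zero-displacement block difference as the MAD, and the function name are assumptions; only the scaled-average threshold (scalar 0.45) is taken from the slide.

```python
import numpy as np

def refine_motion_field(prev, cur, mvs, block=8, c=0.45):
    """Sketch of the motion-field refinement step (illustrative only).

    prev, cur : grayscale frames (2-D float arrays, sides divisible by block)
    mvs       : per-block motion vectors, shape (H//block, W//block, 2)
    c         : scaling factor for the motion-detection threshold
                (0.45 reported best for most test sequences)
    """
    H, W = cur.shape
    nby, nbx = H // block, W // block
    # MAD between corresponding (same-position) blocks of frames k-1 and k
    mad = np.zeros((nby, nbx))
    for by in range(nby):
        for bx in range(nbx):
            y, x = by * block, bx * block
            diff = cur[y:y+block, x:x+block] - prev[y:y+block, x:x+block]
            mad[by, bx] = np.mean(np.abs(diff))
    # Threshold: scaled average block difference over the whole frame
    gamma = c * mad.mean()
    refined = mvs.copy()
    refined[mad < gamma] = 0   # no motion detected -> zero the vector
    return refined
```

Blocks whose difference stays below γ are declared static and have their (spurious) vectors zeroed; all other vectors keep their original values.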

  6. Motion-compensated temporal filter • Denoising based on motion-compensated filtering along the estimated trajectory is a very powerful approach. • However, this approach can yield very disturbing artifacts at positions where the MVs are incorrect. • The main idea behind the proposed filter is to control switching between weaker and stronger temporal smoothing based on a motion detection variable: = 0 (no motion detected) → apply a standard recursive temporal filter; = 1 (moving positions) → filter along the motion trajectory using different filter coefficients.

  7. Motion-compensated temporal filter • Moreover, we take into account an estimate of the reliability of the estimated motion through the prediction errors. • Expressing the filtering unreliability through ε (with 0 < ε < 1) avoids wrongly averaging different pixel values along the estimated motion trajectory, and hence avoids motion blur and ghosting artifacts. • The proposed motion-compensated filter recursively combines frame k with the (motion-compensated) output of frame k−1; α and β are fixed parameters of the recursive filters in the static and moving areas, with optimal values α = 0.45 and β = 0.85.
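A rough illustration of the switching idea on slides 6–7 follows. This is not the paper's exact blending formula: the mapping from prediction error to ε and the particular weight combination are assumptions; only the static/moving switch, the constraint 0 < ε < 1, and the values α = 0.45, β = 0.85 come from the slides.

```python
import numpy as np

def temporal_filter(prev_out, cur, motion_mask, mc_prev, pred_err,
                    alpha=0.45, beta=0.85):
    """Illustrative switching recursive temporal filter (not the exact paper formula).

    prev_out    : previous filtered output frame (same position, for static areas)
    cur         : current noisy frame
    motion_mask : per-pixel motion detection variable (0 static, 1 moving)
    mc_prev     : motion-compensated previous output (along the trajectory)
    pred_err    : per-pixel prediction error of the motion estimate
    """
    # Map the prediction error to an unreliability eps in (0, 1);
    # this particular mapping is an assumption for illustration only.
    eps = pred_err / (pred_err + 1.0)
    # Static areas: standard recursive filter with fixed coefficient alpha
    static = (1 - alpha) * prev_out + alpha * cur
    # Moving areas: filter along the trajectory; larger eps means the
    # trajectory is less trusted, so more weight goes to the current frame
    w = beta + (1 - beta) * eps
    moving = (1 - w) * mc_prev + w * cur
    return np.where(motion_mask == 0, static, moving)
```

The key design point from the slides survives in the sketch: unreliable trajectories (large prediction error) are smoothed less along the motion path, which is what suppresses motion blur and ghosting.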

  8. Spatial filter • Aiming at low complexity and a hardware-friendly solution, we start from the fuzzy filter of [11]. • This filter applies to each wavelet coefficient a shrinkage factor, which is a function of two measurements: the coefficient magnitude and a local spatial activity indicator (LSAI), i.e., the average of the wavelet coefficients in the (2K + 1) × (2K + 1) neighborhood around a given position (i, j). • The shrinkage factor expresses the degree of activation of Fuzzy Rule 1: 1 → signal of interest; 0 → not of interest; otherwise → not sure.
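A minimal sketch of fuzzy shrinkage in this spirit is given below. The piecewise-linear membership function (with thresholds at t1·σ and t2·σ) and the product combination of the two measurements are illustrative assumptions, not the exact rule of [11]; only the two inputs (magnitude and LSAI over a (2K+1)×(2K+1) window) follow the slide.

```python
import numpy as np

def fuzzy_shrink(coeffs, sigma, K=1, t1=1.0, t2=2.0):
    """Illustrative fuzzy wavelet shrinkage (assumed membership functions).

    coeffs : one wavelet subband (2-D array)
    sigma  : noise standard deviation in this subband
    """
    mag = np.abs(coeffs)
    # LSAI: local mean of |w| over the (2K+1)x(2K+1) window (zero-padded)
    win = 2 * K + 1
    padded = np.pad(mag, K)
    lsai = np.zeros_like(mag)
    for dy in range(win):
        for dx in range(win):
            lsai += padded[dy:dy + mag.shape[0], dx:dx + mag.shape[1]]
    lsai /= win * win

    def membership(x):
        # degree to which x indicates "signal of interest":
        # 0 below t1*sigma, 1 above t2*sigma, linear in between
        return np.clip((x - t1 * sigma) / ((t2 - t1) * sigma), 0.0, 1.0)

    # Fuzzy AND (product t-norm) of the two measurements as shrinkage factor:
    # large magnitude AND high local activity -> keep the coefficient
    gamma = membership(mag) * membership(lsai)
    return gamma * coeffs
```

Coefficients that are large and sit in an active neighborhood pass nearly unchanged; isolated small coefficients, the typical noise signature, are shrunk toward zero.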

  9. Spatial filter • We propose a modification of the FuzzyShrink method that makes it adaptive to spatially non-stationary noise by estimating σ locally, since the noise after temporal filtering has non-uniform variance. • We use 16 × 16 overlapping windows and shift them in steps of 8 pixels along each direction. • For each window we use Donoho's wavelet-domain median estimator.
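The local noise estimation can be sketched as follows. Donoho's median estimator σ = median(|w|)/0.6745, applied to the finest-scale diagonal (HH) subband, is the standard form; the averaging of overlapping window estimates into a per-pixel map is an assumption for illustration.

```python
import numpy as np

def local_sigma_map(hh, win=16, step=8):
    """Local noise estimate via Donoho's median estimator (sketch).

    hh   : finest-scale diagonal (HH) wavelet subband of a frame
    win  : window size (16 x 16 per the slide)
    step : window shift in pixels along each direction (8 per the slide)
    """
    H, W = hh.shape
    acc = np.zeros((H, W))   # accumulated sigma estimates
    cnt = np.zeros((H, W))   # number of windows covering each pixel
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            block = hh[y:y + win, x:x + win]
            s = np.median(np.abs(block)) / 0.6745
            acc[y:y + win, x:x + win] += s
            cnt[y:y + win, x:x + win] += 1
    # Average the overlapping estimates (aggregation choice is an assumption)
    return acc / np.maximum(cnt, 1)
```

Feeding this per-pixel σ map into the shrinkage rule is what makes the spatial filter adapt to the non-uniform residual noise left by the temporal filter.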

  10. Results – motion refinement algorithm • We compare the mean squared error (MSE) in the motion field with and without the motion field refinement step. • We observe that the MSE of the motion compensation decreases for most of the test sequences, which demonstrates the effectiveness of the motion field filtering step.

  11. Results – motion refinement algorithm • The algorithm sets the motion vectors to zero in smooth areas where no actual motion exists.

  12. Results – denoising results • We use four sequences with additive white Gaussian noise of σ = 10, 15, 20. • Gains over SEQWT [7]: 0.5–1.4 dB; over WST [6]: around 1 dB. • [6] tends to slightly degrade the textures in the image, while preserving static image edges well. • "Miss America" contains fewer textures than the other test sequences.

  13. Results – denoising results • Preserves texture! • Less motion blur!

  14. Results – denoising results • Comparison with [10]. • It is important to note that the method of [10] is much more complex: 1 m 40 s per frame for a frame size of 384 × 288 on a powerful 8 × 3 GHz processor. • Moreover, the method of [10] requires up to 7 frames to be stored, while the proposed method uses only the current and previous frames.
