
Image Enhancement and Filtering Techniques



Presentation Transcript


  1. Image Enhancement and Filtering Techniques EE4H, M.Sc 0407191 Computer Vision Dr. Mike Spann m.spann@bham.ac.uk http://www.eee.bham.ac.uk/spannm

  2. Introduction • Images may suffer from the following degradations: • Poor contrast due to poor illumination or finite sensitivity of the imaging device • Electronic sensor noise or atmospheric disturbances leading to broad band noise • Aliasing effects due to inadequate sampling • Finite aperture effects or motion leading to spatial blurring

  3. Introduction • We will consider simple algorithms for image enhancement based on lookup tables • Contrast enhancement • We will also consider simple linear filtering algorithms • Noise removal

  4. Histogram equalisation • A low contrast image has grey levels concentrated in a narrow band • Define the grey level histogram of an image h(i) where: • h(i) = number of pixels with grey level = i • For a low contrast image, the histogram will be concentrated in a narrow band • The full grey level dynamic range is not used

  5. Histogram equalisation

  6. Histogram equalisation • Can use a sigmoid lookup to map input to output grey levels • A sigmoid function g(i) controls the mapping from input to output pixel • Can easily be implemented in hardware for maximum efficiency

  7. Histogram equalisation

  8. Histogram equalisation • θ controls the position of maximum slope • λ controls the slope • Problem - we need to determine the optimum sigmoid parameters θ and λ for each image • A better method would be to determine the best mapping function from the image data
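The sigmoid mapping above can be sketched as a 256-entry lookup table. The function names and the particular θ and λ values below are illustrative choices, not from the slides:

```python
import numpy as np

def sigmoid_lut(theta=128.0, lam=0.05):
    """Build a lookup table g(i) = 255 / (1 + exp(-lam * (i - theta))).

    theta sets the grey level of maximum slope; lam sets the slope.
    """
    i = np.arange(256, dtype=np.float64)
    g = 255.0 / (1.0 + np.exp(-lam * (i - theta)))
    return np.round(g).astype(np.uint8)

def apply_lut(image, lut):
    """Pointwise remap: each output pixel is lut[input pixel]."""
    return lut[image]

# A mid-grey, low-contrast patch gets stretched toward the extremes.
lut = sigmoid_lut(theta=128.0, lam=0.05)
low_contrast = np.array([[100, 120], [136, 156]], dtype=np.uint8)
stretched = apply_lut(low_contrast, lut)
```

Because the mapping is a table indexed by grey level, it is a pure pointwise operation and (as the slide notes) maps naturally onto hardware.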

  9. Histogram equalisation • A general histogram stretching algorithm is defined in terms of a transformation g(i) • We require a transformation g(i) such that, for any histogram h(i):

  10. Histogram equalisation • Constraints (N x N x 8 bit image) • No ‘crossover’ in grey levels after transformation

  11. Histogram equalisation • An adaptive histogram equalisation algorithm can be defined in terms of the ‘cumulative histogram’ H(i) :

  12. Histogram equalisation • Since the required h(i) is flat, the required H(i) is a ramp

  13. Histogram equalisation • Let the actual histogram and cumulative histogram be h(i) and H(i) • Let the desired histogram and desired cumulative histogram be h’(i) and H’(i) • Let the transformation be g(i)

  14. Histogram equalisation • Since g(i) is an ‘ordered’ transformation

  15. Histogram equalisation • Worked example, 32 x 32 image with grey levels quantised to 3 bits

  16. Histogram equalisation • Worked example (g(i) is the equalising transformation; h'(i) is the resulting histogram):

  i    h(i)    H(i)    g(i)     h'(i)
  0    197     197     1.351    -
  1    256     453     3.103    197
  2    212     665     4.555    -
  3    164     829     5.676    256
  4     82     911     6.236    -
  5     62     973     6.657    212
  6     31    1004     6.867    246
  7     20    1024     7.07     113
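The worked example can be reproduced with the mapping g(i) = round((L-1) · H(i) / N²). The exact fractional values of g(i) depend on the scaling convention used, but the rounded output levels and the resulting histogram agree with the table:

```python
import numpy as np

# Histogram from the worked example: 32 x 32 image, 3-bit grey levels (0..7).
h = np.array([197, 256, 212, 164, 82, 62, 31, 20])
assert h.sum() == 1024  # 32 * 32 pixels

H = np.cumsum(h)        # cumulative histogram H(i)
L = 8                   # number of grey levels

# Equalising transformation: map H(i) onto the ramp (L-1) * H(i) / N^2.
g = np.round((L - 1) * H / H[-1]).astype(int)

# New histogram: accumulate each h(i) at its mapped level g(i).
h_new = np.bincount(g, weights=h, minlength=L).astype(int)
```

Note how levels 3 and 4 both map to output level 6 (164 + 82 = 246 pixels), and levels 5-7 all map to 7: an ordered transformation can merge levels but never reorder them.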

  17. Histogram equalisation

  18. Histogram equalisation

  19. Histogram equalisation

  20. Histogram equalisation • ImageJ demonstration • http://rsb.info.nih.gov/ij/signed-applet

  21. Image Filtering • Simple image operators can be classified as 'pointwise' or 'neighbourhood' (filtering) operators • Histogram equalisation is a pointwise operation • More general filtering operations use neighbourhoods of pixels

  22. Image Filtering

  23. Image Filtering • The output g(x,y) can be a linear or non-linear function of the set of input pixel grey levels {f(x-M,y-M) … f(x+M,y+M)}

  24. Image Filtering • Examples of filters:

  25. Linear filtering and convolution • Example • 3x3 arithmetic mean of an input image (ignoring floating point byte rounding)
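A minimal sketch of the 3x3 arithmetic mean (border pixels are left unfiltered for simplicity; the function name is illustrative):

```python
import numpy as np

def mean_filter_3x3(f):
    """3x3 arithmetic mean; border pixels are copied through unfiltered."""
    f = f.astype(np.float64)
    g = f.copy()
    for y in range(1, f.shape[0] - 1):
        for x in range(1, f.shape[1] - 1):
            g[y, x] = f[y - 1:y + 2, x - 1:x + 2].mean()
    return g

# A single bright outlier is averaged down with its eight neighbours.
img = np.array([[1, 1, 1, 1],
                [1, 9, 1, 1],
                [1, 1, 1, 1],
                [1, 1, 1, 1]], dtype=np.float64)
smoothed = mean_filter_3x3(img)   # smoothed[1, 1] == (8*1 + 9) / 9
```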

  26. Linear filtering and convolution • Convolution involves ‘overlap – multiply – add’ with ‘convolution mask’

  27. Linear filtering and convolution

  28. Linear filtering and convolution • We can define the convolution operator mathematically • Defines a 2D convolution of an image f(x,y) with a filter h(x,y)

  29. Linear filtering and convolution • Example – convolution with a Gaussian filter kernel • σ determines the width of the filter and hence the amount of smoothing
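A sampled, normalised Gaussian kernel can be built as follows. The ±3σ truncation radius is a common rule of thumb, not something fixed by the slides:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled 2D Gaussian h(x,y) = exp(-(x^2 + y^2) / (2 sigma^2)),
    normalised to sum to 1 so that flat regions are unchanged."""
    if radius is None:
        radius = int(np.ceil(3 * sigma))   # +/- 3 sigma captures ~99.7% of the mass
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

k1 = gaussian_kernel(1.5)   # narrower kernel, less smoothing
k2 = gaussian_kernel(3.0)   # wider kernel, more smoothing
```

Doubling σ doubles the kernel radius, which is why σ = 3.0 smooths the noisy image more heavily than σ = 1.5 in the slide's comparison.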

  30. Linear filtering and convolution

  31. Linear filtering and convolution • Noisy original vs. filtered results (σ=1.5 and σ=3.0)

  32. Linear filtering and convolution • ImageJ demonstration • http://rsb.info.nih.gov/ij/signed-applet

  33. Linear filtering and convolution • We can also define convolution as a frequency domain operation • Based on the discrete Fourier transform F(u,v) of the image f(x,y)

  34. Linear filtering and convolution • The inverse DFT is defined by
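The DFT/inverse-DFT pair can be checked numerically; numpy's FFT routines compute the same transform, so a forward-then-inverse round trip should recover the image, and the zero-frequency term F(0,0) equals the sum of the pixel values:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))          # small test "image"

F = np.fft.fft2(f)              # forward DFT F(u,v)
f_back = np.fft.ifft2(F)        # inverse DFT recovers f(x,y)

roundtrip_err = np.abs(f_back - f).max()   # should be at machine precision
```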

  35. Linear filtering and convolution

  36. Linear filtering and convolution

  37. Linear filtering and convolution • F(u,v) is the frequency content of the image at spatial frequency position (u,v) • Smooth regions of the image contribute low frequency components to F(u,v) • Abrupt transitions in grey level (lines and edges) contribute high frequency components to F(u,v)

  38. Linear filtering and convolution • We can compute the DFT directly using the formula • An N point DFT would require N² floating point multiplications per output point • Since there are N² output points, the computational complexity of the DFT is N⁴ • N⁴ ≈ 4 x 10⁹ for N=256 • Bad news! Many hours on a workstation

  39. Linear filtering and convolution • The FFT algorithm was developed in the 1960s for seismic exploration • Reduced the DFT complexity to 2N² log₂ N • 2N² log₂ N ~ 10⁶ for N=256 • A few seconds on a workstation

  40. Linear filtering and convolution • The ‘filtering’ interpretation of convolution can be understood in terms of the convolution theorem • The convolution of an image f(x,y) with a filter h(x,y) is defined as:

  41. Linear filtering and convolution

  42. Linear filtering and convolution • Note that the filter mask is shifted and inverted prior to the ‘overlap – multiply – add’ stage of the convolution • Define the DFTs of f(x,y), h(x,y) and g(x,y) as F(u,v), H(u,v) and G(u,v) • The convolution theorem states simply that:
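The theorem can be verified numerically for circular convolution, which is what the DFT computes: a direct 'overlap-multiply-add' sum with wrap-around indexing should match the inverse DFT of the product of the transforms:

```python
import numpy as np

def circular_convolve2d(f, h):
    """Direct circular convolution of two N x N arrays: indices wrap mod N,
    and the mask is effectively shifted and inverted by the (y-n, x-m) indexing."""
    N = f.shape[0]
    g = np.zeros_like(f, dtype=np.float64)
    for y in range(N):
        for x in range(N):
            for n in range(N):
                for m in range(N):
                    g[y, x] += f[n, m] * h[(y - n) % N, (x - m) % N]
    return g

rng = np.random.default_rng(1)
N = 8
f = rng.random((N, N))
h = rng.random((N, N))

g_spatial = circular_convolve2d(f, h)
# Convolution theorem: G(u,v) = F(u,v) H(u,v)
g_freq = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)).real

max_err = np.abs(g_spatial - g_freq).max()
```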

  43. Linear filtering and convolution • As an example, suppose h(x,y) corresponds to a linear filter with frequency response defined as follows: • Removes low frequency components of the image
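A sketch of such an ideal high-pass filter, assuming a circular cut-off around the zero-frequency term (the function name and cut-off radius are illustrative). Applied to a perfectly smooth image, whose only energy is the DC component, it removes everything:

```python
import numpy as np

def ideal_highpass(f, cutoff):
    """Zero all DFT components within `cutoff` of the zero-frequency term."""
    N = f.shape[0]
    F = np.fft.fftshift(np.fft.fft2(f))   # move (0,0) frequency to the centre
    u = np.arange(N) - N // 2
    uu, vv = np.meshgrid(u, u)
    H = (np.sqrt(uu**2 + vv**2) > cutoff).astype(float)   # 0 inside, 1 outside
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real

img = np.ones((16, 16))              # constant image: only a DC component
edges = ideal_highpass(img, cutoff=2)   # everything is removed
```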

  44. Linear filtering and convolution • Frequency domain filtering pipeline: DFT → multiply by H(u,v) → IDFT

  45. Linear filtering and convolution • Frequency domain implementation of convolution • Image f(x,y): N x N pixels • Filter h(x,y): M x M mask points • Usually M << N • In this case the filter mask is 'zero-padded' out to N x N • The output image g(x,y) is of size (N+M-1) x (N+M-1) pixels. The filter mask ‘wraps around’, truncating g(x,y) to an N x N image

  46. Linear filtering and convolution

  47. Linear filtering and convolution

  48. Linear filtering and convolution • We can evaluate the computational complexity of implementing convolution in the spatial and spatial frequency domains • An N x N image is to be convolved with an M x M filter • Spatial domain convolution requires M² floating point multiplications per output point, or N²M² in total • Frequency domain implementation requires 3 x (2N² log₂ N) + N² floating point multiplications (2 DFTs + 1 IDFT + N² multiplications of the DFTs)

  49. Linear filtering and convolution • Example 1, N=512, M=7 • Spatial domain implementation requires 1.3 x 10⁷ floating point multiplications • Frequency domain implementation requires 1.4 x 10⁷ floating point multiplications • Example 2, N=512, M=32 • Spatial domain implementation requires 2.7 x 10⁸ floating point multiplications • Frequency domain implementation requires 1.4 x 10⁷ floating point multiplications
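The example counts follow directly from the two formulas:

```python
import math

def spatial_cost(N, M):
    """Spatial convolution: M^2 multiplications per pixel, N^2 pixels."""
    return N**2 * M**2

def frequency_cost(N):
    """2 forward DFTs + 1 inverse (2 N^2 log2 N each) + N^2 pointwise products."""
    return 3 * (2 * N**2 * math.log2(N)) + N**2

c_s7 = spatial_cost(512, 7)       # 12,845,056  (~1.3e7)
c_f = frequency_cost(512)         # 14,417,920  (~1.4e7)
c_s32 = spatial_cost(512, 32)     # 268,435,456 (~2.7e8)
```

The frequency domain cost depends only on N, so it stays fixed at ~1.4 x 10⁷ while the spatial cost grows with M²; the crossover sits around M = 7 or 8 for N = 512.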

  50. Linear filtering and convolution • For smaller mask sizes, spatial and frequency domain implementations have about the same computational complexity • However, we can speed up the frequency domain implementation by tessellating the image into sub-blocks and filtering these independently • Not quite that simple – we need to overlap the filtered sub-blocks to remove blocking artefacts • Overlap and add algorithm
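The overlap-and-add idea is easiest to see in 1D (the 2D version tiles the image the same way): filter each block independently, then add the overlapping tails where adjacent filtered blocks extend past their boundaries. The block size and names here are illustrative:

```python
import numpy as np

def overlap_add_convolve(f, h, block=8):
    """1D overlap-add convolution: split f into fixed-size blocks, convolve
    each with h, and sum the results where the filtered blocks overlap."""
    out = np.zeros(len(f) + len(h) - 1)
    for start in range(0, len(f), block):
        seg = np.convolve(f[start:start + block], h)   # length <= block + len(h) - 1
        out[start:start + len(seg)] += seg             # tails overlap and add
    return out

rng = np.random.default_rng(2)
f = rng.random(50)
h = rng.random(7)

direct = np.convolve(f, h)                      # one big convolution
blocked = overlap_add_convolve(f, h, block=8)   # block-by-block, same result
```

By linearity of convolution the two results agree exactly; the payoff in 2D is that each small tile's FFT is cheap and the tiles can be processed independently.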
