
Chapter 8 Image Representation & Analysis

Chapter 8 Image Representation & Analysis. Chuan-Yu Chang (張傳育), Ph.D. Dept. of Computer and Communication Engineering, National Yunlin University of Science & Technology. chuanyu@yuntech.edu.tw http://mipl.yuntech.edu.tw Office: EB212 Tel: 05-5342601 Ext. 4337.


  1. Chapter 8 Image Representation & Analysis. Chuan-Yu Chang (張傳育), Ph.D. Dept. of Computer and Communication Engineering, National Yunlin University of Science & Technology. chuanyu@yuntech.edu.tw http://mipl.yuntech.edu.tw Office: EB212 Tel: 05-5342601 Ext. 4337

  2. Image Representation • To perform a computerized analysis of an image, it is important to establish a hierarchical framework of processing steps representing the image and knowledge domain. • The bottom-up analysis starts with the analysis at the pixel-level representation and moves up toward the understanding of the scene or the scenario. • The top-down analysis starts with the hypothesis for the presence of an object and then moves toward the pixel-level representation to verify or reject the hypothesis using the knowledge-based models.

  3. Figure: hierarchical image representation. Top-down analysis proceeds from Scenario → Scene-1 … Scene-I → Object-1 … Object-J → S-Region-1 … S-Region-K → Region-1 … Region-L → Edge-1 … Edge-M → Pixel (i,j) … Pixel (k,l); bottom-up analysis proceeds in the reverse direction.

  4. Image Representation • Knowledge-based models can be used at different stages of processing. • The knowledge of physical constraints and tissue properties can be very useful in imaging and image reconstruction. • Anatomical locations of various organs in the body often impose a challenge in imaging the desired tissue or part of the organ. • An object representation model usually provides the knowledge about the shape or other characteristic features of a single object for the classification analysis.

  5. Figure: the processing hierarchy pairing the data domain with the knowledge domain. Data domain: Raw Data from Imaging System → Image Reconstruction → Image Segmentation (Edge and Region) → Feature Extraction and Representation → Classification and Object Identification → Analysis of Classified Objects → Single Image Understanding → Multi-Modality/Multi-Subject/Multi-Dimensional Registration, Visualization and Analysis → Multi-Modality/Multi-Subject/Multi-Dimensional Image Understanding. Knowledge domain models at the corresponding levels: Physical Property/Constraint Models, Edge/Region Representation Models, Feature Representation Models, Object Representation Models, Scene Representation Models.

  6. Feature Extraction • After segmentation, specific features representing the characteristics and properties of the segmented regions in the image need to be computed for object classification and understanding. • There are four major categories of features for region representation: • Statistical Features • Provide quantitative information about the pixels within a segmented region. • Ex: Histogram, Moments, Energy, Entropy, Contrast, Edges

  7. Image Analysis: Feature Extraction • Shape Features • Provide information about the characteristic shape of the region boundary. • Ex: Boundary encoding, Moments, Hough Transform, Region Representation, Morphological Features • Texture Features • Provide information about the local texture within the region or the corresponding part of the image. • Ex: second-order histogram statistics, co-occurrence matrix, wavelet processing. • Relational Features • Provide information about the relational and hierarchical structure of the regions associated with a single or a group of objects.

  8. Statistical Pixel-Level Features • The histogram of the gray values of pixels in the region • Mean of the gray values of the pixels: $\mu = \frac{1}{N}\sum_{(x,y)\in R} f(x,y)$ • Variance and central moments in the region: $\mu_n = \frac{1}{N}\sum_{(x,y)\in R}\left(f(x,y)-\mu\right)^n$, where n=2 is the variance of the region, n=3 is a measure of noncentrality, and n=4 is a measure of flatness of the histogram.
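The statistics above can be sketched with numpy; this is a minimal illustration (the function name, the 256-level default, and the dictionary layout are assumptions, not from the slides):

```python
import numpy as np

def pixel_statistics(region, n_bins=256):
    """Histogram, mean, and n-th central moments of a gray-level region."""
    pixels = region.astype(float).ravel()
    hist, _ = np.histogram(pixels, bins=n_bins, range=(0, n_bins))
    mean = pixels.mean()

    def central_moment(n):
        # n = 2: variance; n = 3: noncentrality; n = 4: histogram flatness
        return float(np.mean((pixels - mean) ** n))

    return {"histogram": hist, "mean": float(mean),
            "variance": central_moment(2),
            "moment3": central_moment(3),
            "moment4": central_moment(4)}
```

For a real segmented region, `region` would be the set of pixels inside the segmentation mask rather than a full rectangle.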

  9. Statistical Pixel-Level Features • Energy: total energy of the gray values of pixels • Entropy • Local contrast • Maximum and minimum gray values
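The slide's formulas did not survive extraction, so the sketch below uses the common histogram-based definitions of energy and entropy (an assumption; the function name is also invented):

```python
import numpy as np

def histogram_energy_entropy(region, n_levels=256):
    """Energy and entropy computed from the normalized gray-level histogram."""
    hist, _ = np.histogram(region, bins=n_levels, range=(0, n_levels))
    p = hist / hist.sum()      # probability of each gray value
    p = p[p > 0]               # drop empty bins so log2 is defined
    energy = float(np.sum(p ** 2))          # 1.0 for a uniform region
    entropy = float(-np.sum(p * np.log2(p)))  # 0.0 for a uniform region
    return energy, entropy
```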

  10. Shape Features • The shape of a region is defined by the spatial distribution of boundary pixels. • Circularity, compactness, and elongatedness can be computed through the minimum bounded rectangle that covers the region. • Several features using the boundary pixels of the segmented region can be computed as • Chain code for boundary contour • Fourier descriptor of boundary contour • Central moments based shape features for segmented region • Morphological shape descriptors

  11. Some Shape Features (for a region with boundary points E, F, G, H, center O, and minimum bounded rectangle ABCD) • Longest axis GE • Shortest axis HF • Perimeter and area of the minimum bounded rectangle ABCD • Elongation ratio: GE/HF • Perimeter p and area A of the segmented region • Circularity: $C = 4\pi A / p^2$ • Compactness: $p^2 / A$
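These descriptors are simple ratios once the boundary measurements are available; a minimal sketch using the standard definitions (circularity $4\pi A/p^2$, compactness $p^2/A$), with an invented function name:

```python
import numpy as np

def shape_features(perimeter, area, longest_axis, shortest_axis):
    """Circularity, compactness and elongation from boundary measurements."""
    circularity = 4 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
    compactness = perimeter ** 2 / area              # minimized (4*pi) by a circle
    elongation = longest_axis / shortest_axis        # GE/HF in the slide's notation
    return circularity, compactness, elongation
```

A unit circle (perimeter 2π, area π) gives circularity 1 and compactness 4π, the extremal values against which other shapes are compared.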

  12. Boundary Encoding: Chain Code • Define a neighborhood matrix with orientation primitives with respect to the center pixel. • Codes for specific orientations are set for the 8-connected neighborhood directions. • The orientation directions are coded with a numerical value ranging from 0 to 7. • The boundary contour needs to be approximated as a list of segments that have pre-selected lengths and directions.

  13. Boundary Encoding: Chain Code • To obtain boundary segments representing a piecewise approximation of the original boundary contour, a "divide and conquer" strategy is applied. • Select two points on a boundary contour as vertices. • A straight line joining the two selected vertices can be used to approximate the respective curve segment if it satisfies a "maximum-deviation" criterion for no further division of the curve segment. • The maximum-deviation criterion is based on the perpendicular distance between any point on the original curve segment between the selected vertices and the corresponding approximated straight-line segment.

  14. Boundary Encoding: Chain Code • If the perpendicular distance or deviation of any point on the curve segment from the approximated straight-line segment exceeds a pre-selected deviation threshold, the curve segment is further divided at the point of maximum deviation. • This process of dividing the segments with additional vertices continues until all approximated straight-line segments satisfy the maximum-deviation criterion. • The representation is further approximated using the orientation primitives of the 8-connected neighborhood. • Two parameters can change the chain code: the number of orientation primitives and the maximum deviation threshold used in approximating the curve.

  15. Boundary Encoding: Chain Code • Figure: the 8-connected neighborhood codes (left) and the orientation directions (right) with respect to the center pixel xc; codes 0–7 are assigned counter-clockwise starting from the east neighbor (0 = east, 2 = north, 4 = west, 6 = south).

  16. Chain Code: 110000554455533 • A schematic example of developing the chain code for a region with boundary contour ABCDE. From top left to bottom right: the original boundary contour; two points A and C with maximum vertical distance parameter BF (two vertices with clear changes in direction and gradient are selected as starting points); since BF exceeds the preset threshold, the contour AC is divided into two segments AB and BC; five segments approximating the entire contour ABCDE; the contour approximation represented in terms of orientation primitives; and the respective chain code of the boundary contour.
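Once the boundary has been approximated by 8-connected steps, generating the chain code is a table lookup per step. A sketch using the 0–7 convention of the slides (0 = east, counter-clockwise); the dictionary and function name are assumptions:

```python
# Orientation primitives for the 8-connected neighborhood, keyed by the
# (row, col) step from one boundary pixel to the next:
# 0 = east, 2 = north, 4 = west, 6 = south; odd codes are the diagonals.
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Chain code of a boundary given as consecutive 8-connected (row, col) pixels."""
    return [DIRECTIONS[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]
```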

  17. Boundary Encoding: Fourier Descriptor • A Fourier series may be used to approximate a closed boundary of a region. • Assume that the boundary of an object is expressed as a sequence of N points with the coordinates u[n] = {x(n), y(n)}, 0 ≤ n ≤ N−1, encoded as the complex sequence u[n] = x(n) + jy(n). • The discrete Fourier transform of the sequence u[n] is the Fourier descriptor Fd[k] of the boundary and is defined as $F_d[k] = \frac{1}{N}\sum_{n=0}^{N-1} u[n]\,e^{-j2\pi nk/N}$.

  18. Boundary Encoding: Fourier Descriptor • Rigid geometric transformations of a boundary, such as translation, rotation and scaling, can be represented by simple operations on its Fourier transform. • The Fourier descriptors can therefore be used as shape descriptors for region matching under translation, rotation and scaling.
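A minimal sketch of the descriptor using numpy's FFT (the 1/N normalization is one common convention, assumed here):

```python
import numpy as np

def fourier_descriptor(boundary):
    """Fourier descriptor of a closed boundary: the DFT of x(n) + j*y(n)."""
    u = np.array([x + 1j * y for x, y in boundary])
    return np.fft.fft(u) / len(u)   # 1/N scaling so Fd[0] is the centroid
```

Translation changes only the DC coefficient Fd[0]; rotation multiplies every coefficient by a unit phase and scaling by a real factor, which is what makes magnitude ratios of the non-DC coefficients usable for matching.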

  19. Moments for Shape Description • The shape of a boundary or contour can be represented quantitatively by central moments for matching. • The central moments represent specific geometrical properties of the shape and are invariant to translation; normalized and suitably combined, they also become invariant to rotation and scaling. • The central moments mpq of a segmented region or binary image f(x,y) are given by $m_{pq} = \sum_{x}\sum_{y}(x-\bar{x})^p (y-\bar{y})^q f(x,y)$, where $(\bar{x},\bar{y})$ is the centroid of the region.

  20. Moments for Shape Description • The normalized central moments are defined as $\eta_{pq} = m_{pq}/m_{00}^{\gamma}$ with $\gamma = (p+q)/2 + 1$. • There are seven invariant moments (Hu's moments) for shape matching, built from the second- and third-order normalized central moments; the first two are $\phi_1 = \eta_{20}+\eta_{02}$ and $\phi_2 = (\eta_{20}-\eta_{02})^2 + 4\eta_{11}^2$.
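The moment definitions above translate directly into numpy; this sketch computes central and normalized moments and the first two Hu invariants (function names are assumptions):

```python
import numpy as np

def central_moment(f, p, q):
    """Central moment mu_pq of a binary or gray-level image f."""
    grid = np.mgrid[:f.shape[0], :f.shape[1]].astype(float)
    y, x = grid[0], grid[1]
    m00 = f.sum()
    xc, yc = (x * f).sum() / m00, (y * f).sum() / m00   # region centroid
    return float((((x - xc) ** p) * ((y - yc) ** q) * f).sum())

def normalized_moment(f, p, q):
    """Normalized central moment eta_pq = mu_pq / mu_00**gamma."""
    gamma = (p + q) / 2 + 1
    return central_moment(f, p, q) / central_moment(f, 0, 0) ** gamma

def hu_first_two(f):
    """The first two of Hu's seven invariant moments."""
    n20, n02 = normalized_moment(f, 2, 0), normalized_moment(f, 0, 2)
    n11 = normalized_moment(f, 1, 1)
    return n20 + n02, (n20 - n02) ** 2 + 4 * n11 ** 2
```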

  21. Morphological Processing for Shape Description • Mathematical morphology • A tool for extracting image components that are useful in the representation and description of region shape, such as boundaries, skeletons, and convex hull. • Sets in mathematical morphology represent objects in an image. • 2D integer space Z2 • (x,y) coordinates • Z3: gray-scale digital images • (x,y) coordinates, and gray-level value

  22. Morphological Processing for Shape Description (cont.) • Let A be a set in Z2. If a = (a1, a2) is an element of A, we write a ∈ A (9.1-1). • If a is not an element of A, we write a ∉ A (9.1-2). • The set with no elements is called the null or empty set and is denoted by the symbol ∅. • The elements of the sets with which we are concerned are the coordinates of pixels representing objects. • Ex: C = {w | w = −d, for d ∈ D} is the set of elements w such that w is formed by multiplying each of the two coordinates of all the elements of set D by −1.

  23. Morphological Processing for Shape Description (cont.): Basic Concepts from Set Theory • Subset: if every element of a set A is also an element of another set B, then A is said to be a subset of B, written A ⊆ B (9.1-3). • Union: the set of all elements belonging to either A, B, or both, C = A ∪ B (9.1-4). • Intersection: the set of all elements belonging to both A and B, D = A ∩ B (9.1-5).

  24. Morphological Processing for Shape Description (cont.) • Disjoint (mutually exclusive): the two sets have no common elements, A ∩ B = ∅ (9.1-6). • Complement: the complement of a set A is the set of elements not contained in A, Ac = {w | w ∉ A} (9.1-7). • Difference: A − B, the set of elements that belong to A but not to B, A − B = {w | w ∈ A, w ∉ B} (9.1-8).
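These set operations map one-to-one onto Python sets of pixel coordinates; a small illustration (the example coordinates are invented):

```python
# Pixel sets as Python sets of (x, y) coordinates.
A = {(0, 0), (0, 1), (1, 0)}
B = {(0, 1), (1, 1)}

union = A | B                  # elements in A, B, or both
intersection = A & B           # elements in both A and B
difference = A - B             # elements of A that are not in B
disjoint = (A & B == set())    # True only when A and B share no element
```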

  25. Morphological Processing for Shape Description (cont.)

  26. Preliminaries (cont.) • Reflection: B̂ = {w | w = −b, for b ∈ B} (9.1-9) • Translation: (A)z = {c | c = a + z, for a ∈ A} (9.1-10)

  27. Logic Operations Involving Binary Images • The principal logic operations used in image processing are AND, OR, and NOT. • The three basic logical operations are performed on a pixel-by-pixel basis between corresponding pixels of two or more images. • Logical operations are restricted to binary variables. • These operations are functionally complete in the sense that they can be combined to form any other logic operation.

  28. Logic Operations Involving Binary Images • Black indicates a binary 1. • White indicates a binary 0.
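With numpy boolean arrays, these pixel-by-pixel operations are the element-wise operators; the functional-completeness point can be illustrated by building XOR from the three basic operations (the toy images are invented):

```python
import numpy as np

# Two small binary images; True plays the role of binary 1.
A = np.array([[1, 1], [0, 0]], dtype=bool)
B = np.array([[1, 0], [1, 0]], dtype=bool)

AND = A & B       # pixel-by-pixel AND
OR = A | B        # pixel-by-pixel OR
NOT_A = ~A        # pixel-by-pixel NOT
# Functional completeness: XOR combined from the three basic operations
XOR = (A & ~B) | (~A & B)
```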

  29. Dilation and Erosion • For sets A and B in Z2, the dilation of A by B, denoted A ⊕ B, is defined as A ⊕ B = {z | (B̂)z ∩ A ≠ ∅} (9.2-1), equivalently A ⊕ B = {z | [(B̂)z ∩ A] ⊆ A} (9.2-2), where set B is referred to as the structuring element. • The dilation of A by B is the set of all displacements, z, such that B̂ and A overlap by at least one element.

  30. Dilation and Erosion (cont.)

  31. Morphological Processing for Shape Description • Figure: a large square region representing the set A and a small rectangular region representing the structuring element set B.

  32. Figure: the dilation of set A by the structuring element set B (top left), the erosion of set A by the structuring element set B (top right), and the result of two successive erosions of set A by the structuring element set B (bottom).

  33. Dilation and Erosion (cont.) • Example of dilation • bridging gaps • The maximum length of the breaks is known to be two pixels. • A simple structuring element that can be used for repairing the gaps is shown in Fig. 9.5(b)

  34. Dilation and Erosion • For sets A and B in Z2, the erosion of A by B, denoted A ⊖ B, is defined as A ⊖ B = {z | (B)z ⊆ A}, where set B is referred to as the structuring element. • The erosion of A by B is the set of all points z such that B, translated by z, is contained in A.
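The two set definitions can be sketched directly (and inefficiently) in numpy; zero padding outside the image and an odd-sized structuring element centered at its middle are assumptions of this sketch:

```python
import numpy as np

def dilate(A, B):
    """A (+) B: 1 where the reflected element, centered at z, hits A."""
    br, bc = B.shape
    pr, pc = br // 2, bc // 2
    padded = np.pad(A, ((pr, pr), (pc, pc)))
    Bhat = B[::-1, ::-1]              # reflection of the structuring element
    out = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            out[i, j] = np.any(padded[i:i + br, j:j + bc] & Bhat)
    return out

def erode(A, B):
    """A (-) B: 1 where B, translated to z, fits entirely inside A."""
    br, bc = B.shape
    pr, pc = br // 2, bc // 2
    padded = np.pad(A, ((pr, pr), (pc, pc)))
    out = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            out[i, j] = np.all(padded[i:i + br, j:j + bc][B.astype(bool)])
    return out
```

Production code would use `scipy.ndimage.binary_dilation` / `binary_erosion` instead of the explicit loops.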

  35. Dilation and Erosion

  36. Morphological Features • Figure: sets A and B.

  37. Dilation and Erosion • Example of erosion: eliminating irrelevant detail. • Erosion of image (a) using a 13x13 square structuring element. • Dilation of the eroded image (b) using the same 13x13 square structuring element.

  38. Opening and Closing • Opening • Generally smoothes the contour of an object, breaks narrow isthmuses, and eliminates thin protrusions. • The opening of A by B, denoted A ∘ B, is the erosion of A by B, followed by a dilation of the result by B: A ∘ B = (A ⊖ B) ⊕ B. • View the structuring element B as a flat "rolling ball". • The boundary of A ∘ B is then established by the points in B that reach the farthest into the boundary of A as B is rolled around the inside of this boundary.

  39. Opening and Closing • Closing • Tends to smooth sections of contours, fuses narrow breaks and long thin gulfs, eliminates small holes, and fills gaps in the contour. • The closing of set A by structuring element B is denoted A • B. • The closing of A by B is simply the dilation of A by B, followed by the erosion of the result by B: A • B = (A ⊕ B) ⊖ B.

  40. Opening and Closing

  41. Opening and Closing • The opening operation satisfies the following properties • A ∘ B is a subset of A • If C is a subset of D, then C ∘ B is a subset of D ∘ B • (A ∘ B) ∘ B = A ∘ B

  42. Opening and Closing • The properties of the closing operation • A is a subset of A • B • If C is a subset of D, then C • B is a subset of D • B • (A • B) • B = A • B • Multiple openings or closings of a set have no effect after the operator has been applied once (idempotence).
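The smoothing behavior and the idempotence property can be checked with scipy.ndimage; the two toy images below (a block with a thin protrusion, and a block with a one-pixel hole) are invented for illustration:

```python
import numpy as np
from scipy import ndimage

B = np.ones((3, 3), dtype=bool)   # 3x3 square structuring element

# A solid block with a thin one-pixel protrusion: opening removes it.
A1 = np.zeros((11, 11), dtype=bool)
A1[2:7, 2:7] = True
A1[4, 7] = True
opened = ndimage.binary_opening(A1, structure=B)

# A solid block with a one-pixel hole: closing fills it.
A2 = np.zeros((11, 11), dtype=bool)
A2[2:7, 2:7] = True
A2[4, 4] = False
closed = ndimage.binary_closing(A2, structure=B)
```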

  43. Opening and Closing

  44. Morphological Processing for Shape Description • The morphological opening and closing of set A (top left) by the structuring element set B (top right): opening of A by B (bottom left) and closing of A by B (bottom right).

  45. Example of morphological operations on MR images

  46. Texture Features • Texture is an important spatial property. • There are three major approaches to representing texture: • Statistical • Based on region histograms, their extensions and their moments. • Higher-order distributions of gray values in the image are used to represent texture. • Structural • Based on arrangements of pre-specified primitives, such as a repetitive arrangement of square and triangular shapes. • Spectral • Based on the autocorrelation function of a region or on the power distribution in the Fourier transform domain. • Texture is represented by a group of specific spatio-frequency components, such as Fourier and wavelet transform coefficients.

  47. Texture Features • Gray-level co-occurrence matrix (GLCM) • Exploits the second-order distribution of gray values of pixel pairs that are defined with a specific distance or neighborhood criterion. • The GLCM P(i,j) is the distribution of the number of occurrences of a pair of gray values i and j separated by a distance vector d = [dx, dy]. • The GLCM can be normalized by dividing each value in the matrix by the total number of occurrences, providing the probability of occurrence of a pair of gray values separated by a distance vector. • Statistical texture features are computed from the normalized GLCM. • The second-order histogram H(yq, yr, d) represents the probability of occurrence of a pair of gray values yq and yr separated by a distance vector d.
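A direct sketch of the GLCM and its normalization (the function names are assumptions; displacement d = (-1, 1) corresponds to the 45° direction in image coordinates with rows increasing downward):

```python
import numpy as np

def glcm(image, d, levels=None):
    """Gray-level co-occurrence counts for displacement d = (d_row, d_col)."""
    if levels is None:
        levels = int(image.max()) + 1
    P = np.zeros((levels, levels), dtype=int)
    dr, dc = d
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1
    return P

def glcm_normalized(image, d):
    """Normalized GLCM: joint probability of gray-value pairs at displacement d."""
    P = glcm(image, d).astype(float)
    return P / P.sum()
```

For production use, `skimage.feature.graycomatrix` computes the same matrix with symmetric and multi-angle options.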

  48. Gray Level Co-occurrence Matrix (GLCM) • Figure: the four directions (0°, 45°, 90°, 135°) for the GLCM, and an example 4x4 image with three gray values: rows [1 1 2 2], [1 1 2 2], [3 3 1 1], [3 3 1 1]. The resulting co-occurrence matrix for the 45° direction, with rows and columns indexed by gray levels 1–3, is [2 2 0], [0 1 0], [2 1 1].

  49. Gray Level Co-occurrence Matrix (GLCM) • Figure 8.11. (a) A matrix representation of a 5x5 pixel image with three gray values; (b) the GLCM P(i,j) for d = [1,1].
