
Mathematical Preliminaries



  1. Mathematical Preliminaries

  2. Matrix Theory
  • Vectors: the $n$th element of a vector $\mathbf{u}$ is denoted $u(n)$; $\mathbf{u}$ is treated as a column vector.
  • Matrices: the element in the $m$th row and $n$th column of $\mathbf{A}$ is denoted $a(m,n)$.

  3. Lexicographic Ordering (Stacking Operation)
  • Row-ordered form of an $M \times N$ matrix $\mathbf{U}$: scan the rows in sequence, giving $\mathbf{u}_r = [u(1,1), u(1,2), \ldots, u(1,N), u(2,1), \ldots, u(M,N)]^T$.
  • Column-ordered form: scan the columns in sequence, giving $\mathbf{u}_c = [u(1,1), u(2,1), \ldots, u(M,1), u(1,2), \ldots, u(M,N)]^T$.
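As a quick illustration, the two orderings correspond to C-order and Fortran-order flattening in numpy; a minimal sketch (the matrix values are arbitrary):

```python
import numpy as np

U = np.array([[1, 2, 3],
              [4, 5, 6]])        # an example 2 x 3 matrix

u_row = U.flatten(order="C")     # row-ordered (lexicographic) form
u_col = U.flatten(order="F")     # column-ordered form

print(u_row)                     # [1 2 3 4 5 6]
print(u_col)                     # [1 4 2 5 3 6]
```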

  4. Transposition and Conjugation Rules
  • $(\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T$, $(\mathbf{A}\mathbf{B})^* = \mathbf{A}^*\mathbf{B}^*$, and $(\mathbf{A}\mathbf{B})^{*T} = \mathbf{B}^{*T}\mathbf{A}^{*T}$.
  • Toeplitz matrices: $a(m,n)$ depends only on the difference $m-n$, so every diagonal holds a constant value.
  • Circulant matrices: each row is a circular shift of the row above it, so $a(m,n)$ depends only on $(m-n) \bmod N$.

  5. Linear Convolution Using a Toeplitz Matrix
  • The linear convolution $y(n) = \sum_{k} h(n-k)\,x(k)$ of a length-$N$ input $x(n)$ with a length-$M$ filter $h(n)$ produces a length-$(N+M-1)$ output.
  • It can be written as the matrix-vector product $\mathbf{y} = \mathbf{H}\mathbf{x}$, where $\mathbf{H}$ is an $(N+M-1) \times N$ Toeplitz matrix built from $h(n)$.

  6. For example, with $M = 3$ filter taps and $N = 3$ input samples,
     $\mathbf{H} = \begin{bmatrix} h(0) & 0 & 0 \\ h(1) & h(0) & 0 \\ h(2) & h(1) & h(0) \\ 0 & h(2) & h(1) \\ 0 & 0 & h(2) \end{bmatrix}$  (a Toeplitz matrix).
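A small runnable sketch of this construction (the particular h and x are made up for illustration); it checks the Toeplitz product against np.convolve:

```python
import numpy as np
from scipy.linalg import toeplitz

x = np.array([1.0, 2.0, 3.0])           # input, length N = 3
h = np.array([1.0, -1.0, 0.5])          # filter, length M = 3

N, M = len(x), len(h)
# First column of H: h padded to length N + M - 1; first row: h(0) then zeros.
col = np.concatenate([h, np.zeros(N - 1)])
row = np.concatenate([[h[0]], np.zeros(N - 1)])
H = toeplitz(col, row)                   # (N+M-1) x N Toeplitz matrix

y_matrix = H @ x
y_direct = np.convolve(h, x)             # reference linear convolution
assert np.allclose(y_matrix, y_direct)
```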

  7. Circular Convolution Using a Circulant Matrix
  • $N$-point circular convolution: $y(n) = h(n) \circledast_N x(n) = \sum_{k=0}^{N-1} h\big((n-k) \bmod N\big)\,x(k)$, $0 \le n \le N-1$.

  8. In matrix form, $\mathbf{y} = \mathbf{C}\mathbf{x}$, where $\mathbf{C}$ is the $N \times N$ circulant matrix whose first column is $h(0), h(1), \ldots, h(N-1)$.
  • Circular convolution + zero padding $\Rightarrow$ linear convolution: if both sequences are zero-padded to a period $N \ge N_1 + N_2 - 1$, the circular convolution gives the same result as the linear convolution.

  9. (ex) Linear convolution as a Toeplitz matrix operation; (ex) circular convolution as a circulant matrix operation (a worked sketch of the latter follows below).
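A sketch of the circulant case (again with made-up sequences), also checking the DFT property that circular convolution is pointwise multiplication of spectra, and the zero-padding equivalence from slide 8:

```python
import numpy as np
from scipy.linalg import circulant

x = np.array([1.0, 2.0, 3.0, 0.0])      # zero-padded input, period N = 4
h = np.array([1.0, -1.0, 0.0, 0.0])     # zero-padded filter

C = circulant(h)                         # N x N circulant matrix, first column h
y_circ = C @ x                           # N-point circular convolution

# Same result via the DFT: circular convolution <-> pointwise multiplication.
y_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
assert np.allclose(y_circ, y_fft)

# With N >= N1 + N2 - 1, it also matches the linear convolution.
assert np.allclose(y_circ, np.convolve([1.0, -1.0], [1.0, 2.0, 3.0]))
```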

  10. Orthogonal and Unitary Matrices
  • Orthogonal: $\mathbf{A}^{-1} = \mathbf{A}^T$, i.e., $\mathbf{A}\mathbf{A}^T = \mathbf{A}^T\mathbf{A} = \mathbf{I}$.
  • Unitary: $\mathbf{A}^{-1} = \mathbf{A}^{*T}$, i.e., $\mathbf{A}\mathbf{A}^{*T} = \mathbf{A}^{*T}\mathbf{A} = \mathbf{I}$.
  • Positive definiteness and quadratic forms: $\mathbf{R}$ is called positive definite if $\mathbf{R}$ is a Hermitian matrix and $\mathbf{x}^{*T}\mathbf{R}\mathbf{x} > 0$ for every $\mathbf{x} \ne \mathbf{0}$; it is called positive semidefinite (nonnegative definite) if $\mathbf{R}$ is Hermitian and $\mathbf{x}^{*T}\mathbf{R}\mathbf{x} \ge 0$.
  • Theorem: if $\mathbf{R}$ is a symmetric positive definite matrix, then all its eigenvalues are positive, and the determinant of $\mathbf{R}$, being their product, satisfies $|\mathbf{R}| > 0$.

  11. Eigenvalues and Eigenvectors
  • Eigenvalue $\lambda_k$ and eigenvector $\boldsymbol{\phi}_k$ of $\mathbf{R}$: $\mathbf{R}\boldsymbol{\phi}_k = \lambda_k\boldsymbol{\phi}_k$.
  • Diagonal forms: for any Hermitian matrix $\mathbf{R}$ there exists a unitary matrix $\boldsymbol{\Phi}$ such that $\boldsymbol{\Phi}^{*T}\mathbf{R}\boldsymbol{\Phi} = \boldsymbol{\Lambda}$, where $\boldsymbol{\Lambda}$ is the diagonal matrix containing the eigenvalues of $\mathbf{R}$ and the columns of $\boldsymbol{\Phi}$ are the corresponding eigenvectors.
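A brief numerical check of slides 10-11 (the matrix R below is an arbitrary symmetric positive definite example):

```python
import numpy as np

R = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])          # a small symmetric positive definite matrix

lam, U = np.linalg.eigh(R)               # eigenvalues (ascending), orthonormal eigenvectors
assert np.all(lam > 0)                   # positive definite: all eigenvalues positive
assert np.isclose(np.linalg.det(R), np.prod(lam))   # determinant = product of eigenvalues

# Diagonal form: U^T R U = diag(lam), with U unitary (orthogonal here).
assert np.allclose(U.T @ R @ U, np.diag(lam), atol=1e-10)
```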

  12. Block Matrices
  • Block matrices: matrices whose elements are themselves matrices.
  • (ex) A 2-D array $y(m,n)$, indexed by row $m$ and column $n$, can be turned into a single vector by the column stacking operation.

  13. Let $\mathbf{x}_n$ and $\mathbf{y}_n$ be the column vectors obtained by stacking; then $\mathbf{y} = \mathcal{A}\mathbf{x}$, where $\mathcal{A} = \{\mathbf{A}_{m,n}\}$ is a block matrix.

  14. Kronecker Products
  • Definition: if $\mathbf{A}$ is $M \times N$, then $\mathbf{A} \otimes \mathbf{B} = \{a(m,n)\,\mathbf{B}\}$, the block matrix obtained by replacing each element $a(m,n)$ with the block $a(m,n)\,\mathbf{B}$.
  • Properties (Table 2.7) include $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D})$ and $(\mathbf{A} \otimes \mathbf{B})^T = \mathbf{A}^T \otimes \mathbf{B}^T$ (a sketch verifying the first follows below).
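A sketch verifying the mixed-product property with random matrices of compatible (arbitrarily chosen) shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 2))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((2, 5))

# Definition: each element a(m,n) of A is replaced by the block a(m,n) * B.
K = np.kron(A, B)                        # shape (2*4, 3*2)

# Mixed-product property: (A kron B)(C kron D) = (AC) kron (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)
```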

  15. Separable Transformation
  • Transformation on an $N \times M$ image $\mathbf{U}$: consider $\mathbf{V} = \mathbf{A}\mathbf{U}\mathbf{B}^T$ (matrix form).
  • Vector form: if $\mathbf{v}$ and $\mathbf{u}$ are the row-ordered forms of $\mathbf{V}$ and $\mathbf{U}$, then $\mathbf{v} = (\mathbf{A} \otimes \mathbf{B})\,\mathbf{u}$.
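A sketch of the equivalence of the two forms, using numpy's row-major flatten as the row ordering (A, B, U are random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 3
U = rng.standard_normal((N, M))          # N x M "image"
A = rng.standard_normal((N, N))          # transform applied to the columns
B = rng.standard_normal((M, M))          # transform applied to the rows

V = A @ U @ B.T                          # matrix form of the separable transform

# Vector form: with row-ordered (C-order) stacking, v = (A kron B) u.
u = U.flatten()                          # row-ordered vector
v = np.kron(A, B) @ u
assert np.allclose(v, V.flatten())
```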

  16. Random Signals
  • Definitions: a random signal is a sequence of random variables $u(n)$.
  • Mean: $\mu(n) = E[u(n)]$
  • Variance: $\sigma^2(n) = E\big[|u(n) - \mu(n)|^2\big]$
  • Covariance: $r(m,n) = E\big[(u(m) - \mu(m))(u(n) - \mu(n))^*\big]$
  • Cross covariance: $r_{uv}(m,n) = E\big[(u(m) - \mu_u(m))(v(n) - \mu_v(n))^*\big]$
  • Autocorrelation: $a(m,n) = E[u(m)\,u(n)^*] = r(m,n) + \mu(m)\,\mu(n)^*$
  • Cross correlation: $a_{uv}(m,n) = E[u(m)\,v(n)^*]$
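These ensemble quantities can be estimated by sample averages; a sketch with i.i.d. Gaussian data and a made-up mean profile, checking the relation between autocorrelation and covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 8, 200_000
mu_true = np.linspace(0.0, 1.0, N)                  # an illustrative mean profile
X = mu_true + rng.standard_normal((n_samples, N))   # i.i.d. Gaussian samples

mu = X.mean(axis=0)                                 # mean mu(n)
R = np.cov(X, rowvar=False)                         # covariance r(m, n)
A = X.T @ X / n_samples                             # autocorrelation a(m, n)

# a(m, n) = r(m, n) + mu(m) mu(n)*, up to sampling error
assert np.allclose(A, R + np.outer(mu, mu), atol=0.02)
```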

  17. Representation for an $N \times 1$ vector $\mathbf{u}$:
  • Mean vector: $\boldsymbol{\mu} = E[\mathbf{u}]$ ($N \times 1$)
  • Covariance matrix: $\mathbf{R} = E\big[(\mathbf{u} - \boldsymbol{\mu})(\mathbf{u} - \boldsymbol{\mu})^{*T}\big]$ ($N \times N$)
  • Gaussian (or normal) distribution: $p(\mathbf{u}) = (2\pi)^{-N/2}\,|\mathbf{R}|^{-1/2}\exp\!\big[-\tfrac{1}{2}(\mathbf{u} - \boldsymbol{\mu})^T\mathbf{R}^{-1}(\mathbf{u} - \boldsymbol{\mu})\big]$
  • Gaussian random processes: a process is Gaussian if the joint probability density of any finite sub-sequence is a Gaussian distribution with the corresponding mean vector and covariance matrix.

  18. Stationary Processes
  • Strict-sense stationary: the joint density of any partial sequence is the same as that of the correspondingly shifted sequence.
  • Wide-sense stationary: $\mu(n) = \mu$ (constant) and $r(m,n) = r(m-n)$, i.e., the covariance depends only on the lag; the covariance matrix is then Toeplitz.
  • Gaussian process: wide-sense stationary $\Leftrightarrow$ strict-sense stationary.

  19. Markov Processes
  • $p$-th order Markov: $p\big(u(n) \mid u(n-1), u(n-2), \ldots\big) = p\big(u(n) \mid u(n-1), \ldots, u(n-p)\big)$.
  • Orthogonal: $E[x\,y^*] = 0$
  • Independent: $p(x, y) = p(x)\,p(y)$
  • Uncorrelated: $E[x\,y^*] = E[x]\,E[y]^*$
  • (ex) The covariance matrix of a first-order stationary Markov sequence $u(n)$ with correlation parameter $\rho$ has $r(m,n) = \sigma^2\rho^{|m-n|}$: a Toeplitz matrix (see the sketch below).
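A sketch of that covariance matrix (the values of sigma^2, rho, and N are illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

sigma2, rho, N = 1.0, 0.9, 6
# r(m, n) = sigma^2 * rho^|m - n|: Toeplitz, so the first column determines it.
R = sigma2 * toeplitz(rho ** np.arange(N))

assert np.allclose(R, R.T)               # symmetric
assert np.all(np.linalg.eigvalsh(R) > 0) # positive definite
```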

  20. Karhunen-Loeve (KL) Transform
  • KL transform of $\mathbf{u}$: $\mathbf{y} = \boldsymbol{\Phi}^{*T}\mathbf{u}$, where $\boldsymbol{\Phi}^{*T}$ is an $N \times N$ unitary matrix.
  • $\boldsymbol{\Phi}^{*T}$ is called the KL transform matrix; its rows are the conjugate eigenvectors of the covariance matrix $\mathbf{R}$ of $\mathbf{u}$.
  • Property: the elements of $\mathbf{y}$ are orthogonal, $E[y(k)\,y(l)^*] = \lambda_k\,\delta(k-l)$.
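A sketch of the KL transform for the first-order Markov covariance above, checking that it decorrelates the vector:

```python
import numpy as np
from scipy.linalg import toeplitz

N, rho = 8, 0.9
R = toeplitz(rho ** np.arange(N))        # covariance of a first-order Markov sequence

lam, Phi = np.linalg.eigh(R)             # columns of Phi are the eigenvectors of R
A = Phi.conj().T                         # KL transform matrix: rows = conjugate eigenvectors

# Covariance of y = A u is A R A^H: diagonal, holding the eigenvalues.
Ry = A @ R @ A.conj().T
assert np.allclose(Ry, np.diag(lam), atol=1e-10)
```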

  21. Discrete Random Fields
  • Definitions: a discrete random field is a 2-D sequence in which each sample $u(m,n)$ is a random variable.
  • Mean: $\mu(m,n) = E[u(m,n)]$
  • Covariance: $r(m,n;m',n') = E\big[(u(m,n) - \mu(m,n))(u(m',n') - \mu(m',n'))^*\big]$
  • White noise field: $r(m,n;m',n') = \sigma^2(m,n)\,\delta(m-m',\,n-n')$
  • Symmetry: $r(m,n;m',n') = r^*(m',n';m,n)$

  22. Separable and Isotropic Image Covariance Functions
  • Separable: $r(m,n;m',n') = r_1(m,m')\,r_2(n,n')$ (nonstationary case).
  • Separable stationary covariance function: $r(m,n) = \sigma^2\rho_1^{|m|}\rho_2^{|n|}$, $|\rho_1| < 1$, $|\rho_2| < 1$.
  • Nonseparable exponential function: $r(m,n) = \sigma^2\exp\big(-\sqrt{\alpha_1 m^2 + \alpha_2 n^2}\,\big)$; when $\alpha_1 = \alpha_2$ it is isotropic (circularly symmetric).
  • Estimation of the mean and autocorrelation: replace ensemble averages by sample averages over the observed image.
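A sketch constructing both covariance models on a small lag grid (all parameter values are illustrative):

```python
import numpy as np

sigma2, rho1, rho2 = 1.0, 0.95, 0.9
m = np.arange(-4, 5)
n = np.arange(-4, 5)

# Separable stationary covariance: r(m, n) = sigma^2 * rho1^|m| * rho2^|n|
r_sep = sigma2 * np.outer(rho1 ** np.abs(m), rho2 ** np.abs(n))

# Nonseparable exponential covariance (isotropic when a1 == a2):
a1 = a2 = 0.05
M, Nn = np.meshgrid(m, n, indexing="ij")
r_exp = sigma2 * np.exp(-np.sqrt(a1 * M**2 + a2 * Nn**2))
```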

  23. SDF (Spectral Density Function)
  • Definition: the Fourier transform of the autocorrelation function.
  • 1-D case: $S(\omega) = \sum_{n=-\infty}^{\infty} r(n)\,e^{-j\omega n}$
  • 2-D case: $S(\omega_1, \omega_2) = \sum_m \sum_n r(m,n)\,e^{-j(\omega_1 m + \omega_2 n)}$
  • Average power: $r(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)\,d\omega$

  24. (ex) The SDF of a stationary white noise field: since $r(m,n) = \sigma^2\,\delta(m,n)$, the spectrum is flat, $S(\omega_1, \omega_2) = \sigma^2$.
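A short numerical check of this example in 1-D (the 2-D case is identical in form); sigma^2 is an arbitrary value:

```python
import numpy as np

sigma2, N = 2.0, 64
r = np.zeros(N)
r[0] = sigma2                            # white noise: r(n) = sigma^2 * delta(n)

S = np.fft.fft(r)                        # DFT of the autocorrelation sequence
assert np.allclose(S, sigma2)            # flat spectrum, value sigma^2 everywhere
```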

  25. Estimation Theory
  • Mean square estimates: estimate the random variable $x$ from observations $\mathbf{y}$ by a suitable function $g(\mathbf{y})$ such that the mean square error $E\big[(x - g(\mathbf{y}))^2\big] = \int\!\!\int (x - g(\mathbf{y}))^2\,p(x \mid \mathbf{y})\,p(\mathbf{y})\,dx\,d\mathbf{y}$ is minimized.
  • Since the integrand is non-negative, it is sufficient to minimize the inner integral $\int (x - g(\mathbf{y}))^2\,p(x \mid \mathbf{y})\,dx$ for every $\mathbf{y}$.

  26. Minimizing the inner integral gives the minimum mean square estimate (MMSE), $\hat{x} = g(\mathbf{y}) = E[x \mid \mathbf{y}]$, which is also an unbiased estimator since $E[\hat{x}] = E[x]$.
  • Theorem: let $\mathbf{y}$ and $x$ be jointly Gaussian with zero mean. Then the MMSE estimate is $\hat{x} = \sum_{i=1}^{N} a_i\,y(i)$, where the $a_i$ are chosen such that the error $x - \hat{x}$ is orthogonal to every observation, $E\big[(x - \hat{x})\,y(k)\big] = 0$ for all $k = 1, 2, \ldots, N$.
  • (Pf) The random variables $x - \sum_i a_i\,y(i),\ y(1), \ldots, y(N)$ are jointly Gaussian. Since the first one is uncorrelated with all the rest, it is independent of them. Thus the error is independent of the random vector $\mathbf{y}$.

  27. Writing $\epsilon = x - \hat{x}$ for the estimation error, the orthogonality conditions $E[\epsilon\,y(n)] = 0$ yield the normal equations $\sum_{i=1}^{N} a_i\,E[y(i)\,y(n)] = E[x\,y(n)]$, $n = 1, 2, \ldots, N$.

  28. The coefficient vector $\mathbf{a} = \{a_i\}$ is determined by solving these $N$ linear equations.
  • The estimation error is minimized if $E[\epsilon\,y(n)] = 0$, $n = 1, 2, \ldots, N$: the orthogonality principle.
  • If $x$ and $\{y(n)\}$ are independent, then $\hat{x} = E[x]$.
  • If they are zero-mean Gaussian random variables, $\hat{x}$ is a linear combination of $\{y(n)\}$.

  29. Orthogonality Principle
  • Since $\hat{x} = E[x \mid \mathbf{y}]$ is a function of $\mathbf{y}$, the principle generalizes: the minimum mean square estimation error is orthogonal to every random variable functionally related to the observations, i.e., $E\big[(x - \hat{x})\,g(\mathbf{y})\big] = 0$ for any $g(\mathbf{y})$.
  • Substituting matrix notation, the normal equations read $\mathbf{R}_y\,\mathbf{a} = \mathbf{r}_{xy}$, where $\mathbf{R}_y = E[\mathbf{y}\mathbf{y}^T]$ and $\mathbf{r}_{xy} = E[x\,\mathbf{y}]$.

  30. Minimum MSE: $\sigma_\epsilon^2 = E[\epsilon\,x] = \sigma_x^2 - \mathbf{a}^T\mathbf{r}_{xy}$.
  • If $x$ and $y(n)$ are nonzero-mean random variables, apply the same result to the centered variables: $\hat{x} = \mu_x + \sum_i a_i\big(y(i) - \mu_y(i)\big)$.
  • If $x$ and $y(n)$ are non-Gaussian, the results still give the best linear mean square estimate (a sketch follows below).
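A sketch of the linear MMSE estimator obtained by solving the normal equations on synthetic data; the model, its weights w_true, and the noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, N = 100_000, 3
y = rng.standard_normal((n_samples, N))              # observations
w_true = np.array([0.5, -1.0, 2.0])                  # hypothetical true weights
x = y @ w_true + 0.1 * rng.standard_normal(n_samples)

# Normal equations from the orthogonality principle: E[y y^T] a = E[x y].
Ryy = y.T @ y / n_samples
rxy = y.T @ x / n_samples
a = np.linalg.solve(Ryy, rxy)                        # linear MMSE coefficients

x_hat = y @ a
print(a)                                 # close to w_true
print(np.mean((x - x_hat) ** 2))         # close to the noise variance 0.01
```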

  31. Information Theory
  • Information and entropy.
  • For a binary source with $P(0) = p$ and $P(1) = 1 - p$: $H = -p\log_2 p - (1-p)\log_2(1-p)$, which is maximized ($H = 1$ bit) at $p = 1/2$.

  32. Let $x$ be a discrete random variable with sample space $S_x = \{1, 2, \ldots, K\}$, and let $A_k$ denote the event $\{x = k\}$ with $p_k = \Pr[x = k]$.
  • The uncertainty of $A_k$ is low if $p_k$ is close to one, and high if $p_k$ is small.
  • Uncertainty of an event: $I(A_k) = \log(1/p_k)$, so $I(A_k) = 0$ if $\Pr(x = k) = 1$.
  • Entropy: $H(x) = -\sum_{k=1}^{K} p_k\log p_k$; the unit is the bit when the logarithm is base 2.

  33. Entropy as a Measure of Information
  Consider the event $A_k$, describing the emission of symbol $s_k$ by the source with probability $p_k$.
  1) If $p_k = 1$ and $p_i = 0$ for all $i \ne k$: no surprise $\Rightarrow$ no information when $s_k$ is emitted by the source.
  2) If $p_k$ is low: more surprise $\Rightarrow$ more information when $s_k$ is emitted by the source.
  • $I(s_k) = \log_2(1/p_k)$: the amount of information gained after observing the event $s_k$.
  • $H = \sum_k p_k\log_2(1/p_k)$: the average information per source symbol (see the sketch below).
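A minimal entropy helper illustrating these cases (the function name entropy is my own):

```python
import numpy as np

def entropy(p):
    """Entropy H = -sum p_k log2 p_k, in bits; terms with p_k = 0 contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(entropy([0.5, 0.5]))               # 1.0 bit: fair binary source
print(entropy([1.0, 0.0]))               # 0.0 bits: no surprise, no information
print(entropy([0.25] * 4))               # 2.0 bits: uniform over 4 symbols
```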

  34. (ex) 16 balls: 4 balls labeled "1", 4 balls labeled "2", 2 balls labeled "3", 2 balls labeled "4", and 1 ball each labeled "5", "6", "7", "8".
  Question: find out the number of the ball through a series of yes/no questions.
  1) Ask sequentially: "x = 1?", "x = 2?", ..., "x = 7?"; after a "no" to "x = 7?", the answer must be x = 8.
  The average number of questions asked is $\sum_k p_k\,q_k = \tfrac{1}{4}(1) + \tfrac{1}{4}(2) + \tfrac{1}{8}(3) + \tfrac{1}{8}(4) + \tfrac{1}{16}(5 + 6 + 7 + 7) \approx 3.19$.

  35. 2) Ask bisecting questions instead: "x ≤ 2?"; if yes, "x = 1?"; if no, "x ≤ 4?", then "x = 3?"; and so on down to "x = 7?".
  With this tree, balls 1 and 2 take 2 questions, balls 3 and 4 take 3, and balls 5-8 take 4, so the average is $\tfrac{1}{2}(2) + \tfrac{1}{4}(3) + \tfrac{1}{4}(4) = 2.75$ questions: exactly the entropy of $x$.
  ⇒ The problem of designing the series of questions to identify $x$ is exactly the same as the problem of encoding the output of an information source.

  36. Fixed 3-bit code vs. variable-length code (yes → 1, no → 0):

  symbol   p_k    fixed code   questions (yes/no)        variable code
  x=1      1/4    000          yes / yes                 1 1
  x=2      1/4    001          yes / no                  1 0
  x=3      1/8    010          no / yes / yes            0 1 1
  x=4      1/8    011          no / yes / no             0 1 0
  x=5      1/16   100          no / no / yes / yes       0 0 1 1
  x=6      1/16   101          no / no / yes / no        0 0 1 0
  x=7      1/16   110          no / no / no / yes        0 0 0 1
  x=8      1/16   111          no / no / no / no         0 0 0 0

  ⇒ This is a Huffman code: a short code for a frequent source symbol, a long code for a rare source symbol.
  ⇒ The entropy of $x$ represents the minimum average number of bits required to identify the outcome of $x$.
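A sketch that builds a Huffman code for the 16-ball source with Python's heapq and compares the average code length with the entropy; the merge-based length bookkeeping is one standard way to implement Huffman's algorithm:

```python
import heapq
from math import log2

weights = {k: w for k, w in enumerate([4, 4, 2, 2, 1, 1, 1, 1])}  # counts out of 16
total = sum(weights.values())

H = -sum((w / total) * log2(w / total) for w in weights.values())  # entropy, 2.75 bits

# Huffman construction: repeatedly merge the two least probable subtrees;
# each merge adds one bit to the code length of every symbol inside them.
heap = [(w, k, (k,)) for k, w in weights.items()]
heapq.heapify(heap)
lengths = dict.fromkeys(weights, 0)
tiebreak = len(heap)
while len(heap) > 1:
    w1, _, t1 = heapq.heappop(heap)
    w2, _, t2 = heapq.heappop(heap)
    for k in t1 + t2:
        lengths[k] += 1
    heapq.heappush(heap, (w1 + w2, tiebreak, t1 + t2))
    tiebreak += 1

avg_len = sum(weights[k] * lengths[k] for k in weights) / total
print(H, avg_len)                        # both 2.75 bits/symbol for this dyadic source
```

Because all probabilities here are powers of 1/2, the Huffman average length meets the entropy exactly; in general it lies within one bit of it.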

  37. Noiseless Coding Theorem (Shannon, 1948)
  • $\min(R) = H(x) + \varepsilon$ bits/symbol,
  • where $R$ is the transmission rate and $\varepsilon$ is a positive quantity that can be made arbitrarily close to zero by a sophisticated coding procedure utilizing an appropriate amount of encoding delay.

  38. Rate Distortion Function
  • Distortion: $D = E\big[(x - y)^2\big]$, where $y$ is the reproduced value of $x$.
  • Rate distortion function of a Gaussian source: for $x$ a Gaussian random variable of variance $\sigma^2$, $R(D) = \max\!\big(0,\ \tfrac{1}{2}\log_2(\sigma^2/D)\big)$.
  • For Gaussian random variables $x_k$, $k = 1, \ldots, N$, with variances $\sigma_k^2$ and reproduced values $y_k$, at a fixed average distortion $D$: $R = \frac{1}{N}\sum_k \max\!\big(0,\ \tfrac{1}{2}\log_2(\sigma_k^2/\theta)\big)$, where $\theta$ is determined by solving $D = \frac{1}{N}\sum_k \min(\theta, \sigma_k^2)$.
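A sketch of the scalar Gaussian rate distortion function (the helper name is my own; the example values are arbitrary):

```python
import numpy as np

def rate_distortion_gaussian(sigma2, D):
    """R(D) = max(0, 0.5 * log2(sigma^2 / D)) for a memoryless Gaussian source."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

print(rate_distortion_gaussian(1.0, 0.25))   # 1.0 bit per sample
print(rate_distortion_gaussian(1.0, 1.0))    # 0.0 bits: allowed distortion = variance
```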
