
Scientific Computing



Presentation Transcript


  1. Scientific Computing Linear Least Squares

  2. Interpolation vs Approximation Recall: Given a set of (x,y) data points, Interpolation is the process of finding a function (usually a polynomial) that passes through these data points.
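For concreteness, here is a minimal Matlab sketch (not from the slides; the data points are made up) of interpolation using the built-in polyfit and polyval, with the polynomial degree chosen as one less than the number of points so the curve passes through all of them:

% Sketch: interpolating three made-up data points with a degree-2 polynomial
xi = [0 1 2];                         % x values (illustrative)
yi = [1 2 2];                         % y values (illustrative)
p  = polyfit(xi, yi, length(xi)-1);   % degree n for n+1 points
xx = 0:.05:2;
plot(xi, yi, 'o', xx, polyval(p, xx), '-')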

  3. Interpolation vs Approximation Given a set of (x,y) data points, Approximation is the process of finding a function (usually a line or a polynomial) that comes the “closest” to the data points. Because the data has “noise,” no single line passes through every point, so we cannot find an interpolating line and instead look for the closest one.

  4. General Least Squares Idea Given data and a class of functions F, the goal is to find the “best” function f in F that approximates the data. Consider the data to be two vectors of length n + 1. That is, x = [x_0 x_1 … x_n]^t and y = [y_0 y_1 … y_n]^t.

  5. General Least Squares Idea Definition: The error, or residual, of a given function with respect to this data is the vector r = y − f(x). That is, r = [r_0 r_1 … r_n]^t where r_i = y_i − f(x_i). We want to find a function f such that the error is made as small as possible. How do we measure the size of the error? With vector norms.
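A minimal sketch (the data and the candidate line are made up) of computing the residual vector for one candidate function:

% Sketch: residual of a candidate line f(x) = 0.5*x + 1 on made-up data
x  = [0 1 2 3];
y  = [1.1 1.4 2.1 2.4];
fx = 0.5*x + 1;      % candidate function values f(x_i)
r  = y - fx;         % residual vector, r_i = y_i - f(x_i)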

  6. Vector Norms A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). Our previous examples for vectors in R^n: the 1-norm (Manhattan), ||v||_1 = Σ |v_i|; the 2-norm (Euclidean), ||v||_2 = sqrt( Σ v_i² ); and the ∞-norm (Chebyshev), ||v||_∞ = max_i |v_i|. For Least Squares, the best norm to use will be the 2-norm.
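Continuing the sketch, Matlab's built-in norm computes all three of these measures of a residual vector:

% Sketch: three norms of an example residual vector
r = [0.1 -0.1 0.1 -0.1];
n1   = norm(r, 1);      % Manhattan: sum of |r_i|
n2   = norm(r, 2);      % Euclidean: sqrt(sum of r_i^2)
ninf = norm(r, inf);    % Chebyshev: max |r_i|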

  7. Ordinary Least Squares Definition The least-squares best approximating function to a set of data, x, y from a class of functions, F, is the function f* in F that minimizes the 2-norm of the error. That is, if f* is the least squares best approximating function, then ||y − f*(x)||_2 ≤ ||y − f(x)||_2 for every f in F. This is often called the ordinary least squares method of approximating data.

  8. Linear Least Squares Linear Least Squares: We assume that the class of functions is the class of all possible lines. By the method in the last slide, we then want to find a linear function f(x) = ax + b that minimizes the 2-norm of the vector y − f(x), i.e. that minimizes ||y − f(x)||_2 = sqrt( Σ (y_i − (a x_i + b))² ).

  9. Linear Least Squares Linear Least Squares: To minimize this 2-norm it is enough to minimize the term inside the square root. So, if f(x) = ax + b, we need to minimize Σ (y_i − (a x_i + b))² over all possible values of a and b. From calculus, we know that the minimum will occur where the partial derivatives with respect to a and b are zero.

  10. Linear Least Squares Linear Least Squares: Setting the partial derivatives with respect to a and b to zero, these equations can be written as: a Σ x_i² + b Σ x_i = Σ x_i y_i and a Σ x_i + b (n + 1) = Σ y_i. These last two equations are called the Normal Equations for the best line fit to the data.

  11. Linear Least Squares Linear Least Squares: Note that these are two equations in the unknowns a and b. Let d11 = Σ x_i², d12 = d21 = Σ x_i, d22 = n + 1, e1 = Σ x_i y_i, and e2 = Σ y_i. Then, by Cramer's rule, the solution is a = (d22·e1 − d12·e2) / (d11·d22 − d12·d21) and b = (d11·e2 − d21·e1) / (d11·d22 − d12·d21).
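As a quick sanity check of these formulas on three made-up points (0,1), (1,2), (2,2): d11 = 5, d12 = d21 = 3, d22 = 3, e1 = 6, e2 = 5, so d11·d22 − d12·d21 = 6, giving a = (3·6 − 3·5)/6 = 1/2 and b = (5·5 − 3·6)/6 = 7/6; the least squares line is y = 0.5x + 7/6.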

  12. Matrix Formulation of Linear Least Squares Linear Least Squares: Want to minimize Σ (y_i − (a x_i + b))². Let A be the (n + 1) × 2 matrix whose i-th row is [x_i 1], and let c = [a b]^t, so that Ac is the vector with entries a x_i + b. Then, we want to find a vector c that minimizes the length squared of the error vector Ac − y (or y − Ac). That is, we want to minimize ||y − Ac||_2² = (y − Ac)^t (y − Ac). This is equivalent to minimizing the Euclidean distance from y to Ac.
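A small sketch (made-up data again) showing that with A built this way, A*c reproduces the candidate line's values a·x_i + b at every data point:

% Sketch: A*c equals a*x + b at each data point
x = [0 1 2 3]';                  % column vector of x values (illustrative)
A = [x, ones(length(x), 1)];     % i-th row of A is [x_i 1]
c = [0.5; 1];                    % candidate line: a = 0.5, b = 1
Ac = A*c;                        % same as 0.5*x + 1 elementwise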

  13. Matrix Formulation of Linear Least Squares Linear Least Squares: Find a vector c to minimize the Euclidean distance from y to Ac. Equivalently, minimize ||y − Ac||_2 or (y − Ac)^t (y − Ac). If we take all partials of this expression (with respect to c_0, c_1) and set these equal to zero, we get A^t A c = A^t y.

  14. Matrix Formulation of Linear Least Squares The equation A^t A c = A^t y is also called the Normal Equation for the linear best fit. The equations one gets from A^t A c = A^t y are exactly the same equations we had before: a Σ x_i² + b Σ x_i = Σ x_i y_i and a Σ x_i + b (n + 1) = Σ y_i. The solution c = [a b]^t of A^t A c = A^t y gives the constants for the line ax + b.

  15. Matlab Example
% Example of how to find the linear least squares fit to noisy data
x = 1:.1:6;                       % x values
y = .1*x + 1;                     % linear y-values
ynoisy = y + .1*randn(size(x));   % add noise to y values
plot(x, ynoisy, '.')              % Plot the noisy data
hold on                           % So we can plot more data later
% Find d11, d12, d21, d22, e1, e2
D = [sum(x.^2), sum(x); sum(x), length(x)];
e1 = x*ynoisy';
e2 = sum(ynoisy);
% Solve for a and b
det = D(1,1)*D(2,2) - D(1,2)*D(2,1);
a = (D(2,2)*e1 - D(1,2)*e2)/det;
b = (D(1,1)*e2 - D(2,1)*e1)/det;
% Create a vector of y-values for the linear best fit
fit_y = a.*x + b;
plot(x, fit_y, '-')               % Plot the best fit line

  16. Matlab Example: A^t A c = A^t y
% Example of how to find the linear least squares fit to noisy data
x = 1:.1:6;                       % x values
y = .1*x + 1;                     % linear y-values
ynoisy = y + .1*randn(size(x));   % add noise to y values
plot(x, ynoisy, '.')              % Plot the noisy data
hold on                           % So we can plot more data later
% Create matrix A
A = zeros(length(x), 2);
A(:,1) = x;
A(:,2) = ones(length(x), 1);
D = A'*A;                         % A^t A
e = A'*ynoisy';                   % A^t y
% Solve for constants a and b
c = D \ e;
fit_y = c(1).*x + c(2);
plot(x, fit_y, 'O')               % Plot the best fit line
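As a follow-up note (not in the original slides), Matlab can also solve this least squares problem directly: the backslash operator applied to the tall matrix A returns the least squares solution, and polyfit with degree 1 fits the same line. Continuing with A, x, and ynoisy from the code above:

% Alternative sketch: letting Matlab solve the least squares problem directly
c_direct = A \ ynoisy';           % backslash on a tall A gives the least squares c
p = polyfit(x, ynoisy, 1);        % p(1) is the slope a, p(2) is the intercept b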
