
APPENDIX C: PROBABILITY THEORY

Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall.


Presentation Transcript


  1. Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall, APPENDIX C: PROBABILITY THEORY

  "…you can never know too much probability theory. If you are well grounded in probability theory, you will find it easy to integrate results from theoretical and applied statistics into the analysis of your applications." (Daniel McFadden, 2000 Nobel Prize in Economics)

  Organization of appendix in ISSO:
  • Basic properties
    • Sample space
    • Expected value
  • Convergence theory
    • Definitions (four modes)
    • Examples and counterexamples
    • Dominated convergence theorem
    • Convergence in distribution and central limit theorem

  2. Probability Theory
  • Random variables, distribution functions, and expectations are critical
  • Central tools in stochastic search, optimization, and Monte Carlo methods
  • Probabilistic convergence is important in building the theoretical foundation for stochastic algorithms
  • Most theoretical results for algorithms rely on asymptotic arguments (i.e., convergence)

  3. Expectation
  • Let X ∈ ℝ^m, m ∈ {1, 2, …}, be distributed according to the density function p_X(x)
  • Then the expected value of a function f(X) is
    E[f(X)] = ∫ f(x) p_X(x) dx,
    provided that ∫ ‖f(x)‖ p_X(x) dx < ∞
  • Obvious analogue to the above for discrete random vectors (sums replace the integrals)
  • Important special cases for the expected value:
    • Mean: f(X) = X
    • Covariance matrix: f(X) = [X − E(X)] [X − E(X)]^T
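  As an illustration of the two special cases on this slide, here is a minimal Monte Carlo sketch (not from the slides; the Gaussian distribution, its parameters, and the sample size are arbitrary choices for the example) that approximates the mean and the covariance matrix by sample averages:

  ```python
  import numpy as np

  # Hypothetical example (not from ISSO): approximate E[f(X)] by sample averages
  # for f(X) = X (mean) and f(X) = [X - E(X)][X - E(X)]^T (covariance matrix),
  # using a 2-dimensional Gaussian X with known parameters.
  rng = np.random.default_rng(0)
  true_mean = np.array([1.0, -2.0])
  true_cov = np.array([[2.0, 0.5],
                       [0.5, 1.0]])

  n = 100_000
  x = rng.multivariate_normal(true_mean, true_cov, size=n)  # n draws of X

  mean_hat = x.mean(axis=0)                  # estimate of E[X]
  centered = x - mean_hat
  cov_hat = centered.T @ centered / (n - 1)  # estimate of E[(X - E X)(X - E X)^T]

  print("estimated mean:      ", mean_hat)
  print("estimated covariance:\n", cov_hat)
  ```

  For large n the printed estimates should be close to true_mean and true_cov, mirroring the integral definition above with an empirical average.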

  4. Probabilistic Convergence
  • Finite-sample results are usually hopeless
  • Asymptotic (convergence) results provide a means to analyze stochastic algorithms
  • Four famous modes of convergence:
    • almost surely (a.s.)
    • in probability (pr.)
    • in mean-square (m.s.)
    • in distribution (dist.)
  • First three modes above pertain to the sense in which X_k → X as k → ∞ (formal definitions are sketched after this slide)
  • Last mode (dist.) pertains to convergence of the distribution function of X_k to the distribution function of X
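  For reference, the standard textbook definitions of the four modes, written for a random sequence X_k and limit X (this is the usual notation, not a transcript of the slide):

  ```latex
  % Standard definitions of the four modes of convergence (textbook
  % conventions; not copied from the slide).
  \begin{align*}
  \text{a.s.:}\quad  & P\Bigl(\lim_{k\to\infty} X_k = X\Bigr) = 1,\\
  \text{pr.:}\quad   & \lim_{k\to\infty} P\bigl(\lVert X_k - X\rVert > \varepsilon\bigr) = 0
                       \quad\text{for every } \varepsilon > 0,\\
  \text{m.s.:}\quad  & \lim_{k\to\infty} E\bigl(\lVert X_k - X\rVert^2\bigr) = 0,\\
  \text{dist.:}\quad & \lim_{k\to\infty} F_{X_k}(x) = F_X(x)
                       \quad\text{at every continuity point } x \text{ of } F_X.
  \end{align*}
  ```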

  5. Implications for Four Modes of Convergence
  [The slide depicts the standard implication diagram among the four modes: a.s. ⇒ pr., m.s. ⇒ pr., and pr. ⇒ dist.; in general, neither of a.s. and m.s. implies the other.]
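  A small simulation can make the weakest of the X_k → X modes concrete. The sketch below (illustrative only, not from the slides; the uniform distribution and the trial counts are arbitrary) estimates P(|sample mean − μ| > ε) for growing sample sizes, showing convergence in probability of the sample mean, which by the diagram above also gives convergence in distribution:

  ```python
  import numpy as np

  # Illustrative only: the sample mean of i.i.d. Uniform(0, 1) draws converges
  # in probability to the true mean 0.5 (weak law of large numbers). We estimate
  # P(|mean_k - 0.5| > eps) by repeating the experiment over many trials.
  rng = np.random.default_rng(1)
  true_mean, eps, n_trials = 0.5, 0.05, 1000

  for k in (10, 100, 1000, 10000):
      sample_means = rng.uniform(0.0, 1.0, size=(n_trials, k)).mean(axis=1)
      prob_far = np.mean(np.abs(sample_means - true_mean) > eps)
      print(f"k = {k:6d}   P(|mean - {true_mean}| > {eps}) ~ {prob_far:.3f}")
  ```

  The printed probabilities shrink toward zero as k grows, which is exactly the "pr." mode defined earlier.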
