
False discovery rate: setting the probability of false claim of detection








  1. False discovery rate: setting the probability of false claim of detection
Lucio Baggio, INFN and University of Trento, Italy

  2. Where does this FDR come from? I met Prof. James T. Linnemann (MSU) at PHYSTAT2003 and explained to him the problem we had in assessing a frequentist probability for the IGEC results:
• "We made many different (almost independent) background and coincidence counts using different values for the target signal amplitude (thresholds)."
• "In the end, one of the 90% confidence intervals did not include the null hypothesis… but when one accounts for the many trials, one can compute that with 30% probability at least one of the tests would falsely reject the null hypothesis by chance. So, no true discovery, after all."
• "Perhaps next time we should use 99.99% confidence intervals, in order to keep the probability of a false claim low after tens of trials… But I'm afraid that signals can no longer emerge under such a stringent requirement."
He pointed out that what I was looking for were probably false discovery rate methods… Thanks, Jim!!

  3. When should I care about multiple test procedures? Why FDR?
• All-sky surveys: many source directions and polarizations are tried
• Template banks
• Wide-open-eyes searches: many analysis pipelines are tried altogether, with different amplitude thresholds, signal durations, and so on
• Periodic updates of results: every new science run is a chance for a "discovery". "Maybe the next one is the good one."
• Many graphical representations or aggregations of the data: "If I change the binning, maybe the signal shows up better…"

  4. Preliminary (1): hypothesis testing
[Diagram: the reported signal candidates comprise detected signals (true positives) and false discoveries (false positives); signals missed by the selection constitute the inefficiency.]

  5. Preliminary (2): p-level
Assume you have a model for the noise that affects the measurement x. You derive a test statistic t(x) from x. F(t) is the distribution of t when x is sampled from noise only (off-signal). The p-level associated with t(x) is the value of that distribution at t(x): p = F(t) = P(t > t(x)).
• Example: χ² test. p is the "one-tail" χ² probability associated with n counts (assuming d degrees of freedom).
• The distribution of p is always linearly rising in case of agreement of the noise with the model: P(p) = p, hence dP/dp = 1.
Usually the alternative hypothesis is not known. However, for our purposes it is sufficient to assume that the signal can be distinguished from the noise, i.e. dP/dp ≠ 1. Typically, the measured values of p are biased toward 0.
[Plot: probability density of the p-level on [0, 1]; the background is flat at 1, while the signal density is peaked toward p = 0.]
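To make the p-level construction concrete, here is a minimal Python sketch for the χ² example above; the degrees of freedom and the observed statistic are illustrative values, not numbers from the talk.

```python
import numpy as np
from scipy import stats

df = 4        # degrees of freedom of the chi-squared statistic (illustrative)
t_obs = 11.3  # observed value of the test statistic t(x) (illustrative)

# One-tail probability P(t > t_obs) under the noise-only (null) model.
p = stats.chi2.sf(t_obs, df)
print(f"p-level: {p:.4f}")

# Under the null, p-levels are uniform on [0, 1]: P(p) = p, dP/dp = 1.
t_noise = stats.chi2.rvs(df, size=100_000, random_state=0)
p_noise = stats.chi2.sf(t_noise, df)
print("mean of null p-levels (expect ~0.5):", round(p_noise.mean(), 3))
```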

  6. Usual multiple testing procedures
For each hypothesis test, the condition {p < α ⇒ reject null} leads to false positives with probability α. In case of multiple tests (they need not share the same test statistic, nor the same tested null hypothesis), let p = {p1, p2, … pm} be the set of p-levels; m is the trial factor. We select "discoveries" using a threshold T(p): {pj < T(p) ⇒ reject null}.
• Uncorrected testing: T(p) = α. The probability that at least one rejection is wrong is P(B > 0) = 1 − (1 − α)^m ≈ mα, hence a false discovery is guaranteed for m large enough.
• Fixed total 1st-type errors (Bonferroni): T(p) = α/m. This controls the familywise error rate in the most stringent manner: P(B > 0) ≤ α. It makes mistakes rare… but in the end the efficiency (2nd-type errors) becomes negligible!!
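A short sketch comparing the two thresholds discussed above; α = 0.05 and m = 100 are illustrative choices.

```python
alpha = 0.05  # per-test false-positive probability (illustrative)
m = 100       # trial factor: number of independent tests (illustrative)

# Uncorrected testing, T(p) = alpha: a false discovery is almost guaranteed.
fwer_uncorrected = 1 - (1 - alpha) ** m
print(f"uncorrected P(B>0) = {fwer_uncorrected:.3f}")  # ~0.994

# Bonferroni, T(p) = alpha/m: the familywise error rate stays near alpha...
fwer_bonferroni = 1 - (1 - alpha / m) ** m
print(f"Bonferroni  P(B>0) = {fwer_bonferroni:.3f}")   # ~0.049
# ...but the per-test threshold (5e-4 here) is so strict that weak
# signals are likely to be missed (2nd-type errors grow).
```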

  7. Controlling the false discovery fraction
We want to control (= bound) the ratio of false discoveries over the total number of claims: B/R = B/(B + S) ≤ q. The level T(p) is then chosen accordingly. Let us consider the simple case in which the signals are easily separable (e.g. high SNR).
[Plots: p-level density with m0 flat background counts B and a signal population S peaked near p = 0; cumulative counts versus p, showing the claims R split into B and S at the threshold set by q.]
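The bound B/(B + S) ≤ q can be illustrated with a toy simulation; the population sizes, the signal p-level distribution (a Beta biased toward 0), and the fixed threshold are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, m1 = 9_000, 1_000            # null tests and true signals (illustrative)
p_null = rng.uniform(0, 1, m0)   # null p-levels: uniform on [0, 1]
p_sig = rng.beta(0.05, 10, m1)   # signal p-levels: biased toward 0

T = 0.01                         # some fixed threshold T(p) (illustrative)
B = np.sum(p_null < T)           # false discoveries
S = np.sum(p_sig < T)            # true discoveries
print(f"claims R = {B + S}, false discovery fraction B/R = {B / (B + S):.3f}")
```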

  8. Benjamini & Hochberg FDR control procedure
Among the procedures that accomplish this task, one simple recipe was proposed by Benjamini & Hochberg (JRSS-B (1995) 57:289-300):
• choose your desired FDR q (don't ask too much!);
• compute the p-values {p1, p2, … pm} for the set of tests, and sort them in increasing order;
• define c(m) = 1 if the p-values are independent or positively correlated; otherwise c(m) = Σj (1/j);
• determine the threshold T(p) = pk by finding the largest index k such that pk ≤ (q/m) · k / c(m); reject H0 for every j ≤ k.
[Plot: sorted p-values pj versus rank j; the straight line of slope q/(m c(m)) separates the rejected hypotheses (j ≤ k, below the line) from the rest.]
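A minimal implementation of the recipe above, assuming the c(m) convention given on the slide; the function name and the test data are illustrative, not from the talk.

```python
import numpy as np

def bh_threshold(p_values, q, independent=True):
    """Return the Benjamini-Hochberg rejection threshold T(p) = p_k
    (0.0 if no hypothesis is rejected)."""
    p = np.sort(np.asarray(p_values))  # increasing order
    m = len(p)
    c = 1.0 if independent else np.sum(1.0 / np.arange(1, m + 1))
    j = np.arange(1, m + 1)
    passed = p <= (q / m) * j / c      # compare p_(j) with the line (q/m) j/c(m)
    if not passed.any():
        return 0.0                     # no discoveries at this q
    k = np.flatnonzero(passed).max()   # largest index satisfying the bound
    return p[k]

# Usage: reject every null hypothesis whose p-level is <= T(p).
rng = np.random.default_rng(1)
p_values = np.concatenate([rng.uniform(0, 1, 900),    # background
                           rng.beta(0.05, 10, 100)])  # injected "signals"
T = bh_threshold(p_values, q=0.05)
print(f"T(p) = {T:.4g}, discoveries: {np.sum(p_values <= T)}")
```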

  9. Summary
In case of multiple tests one wants to control the false claim probability, but it is advisable to relax the strict requirement of NO false claims, which could end up burying the signals as well. Controlling the FDR seems to be a wise suggestion.
This talk was based mainly on Miller et al., AJ 122:3492-3505, Dec 2001, http://arxiv.org/abs/astro-ph/0107034 and Jim Linnemann's talk http://user.pa.msu.edu/linnemann/public/FDR/. However, there is a fairly wide literature about this, if one looks for references in biology, imaging, HEP, and recently also astrophysics, at last!
