    FDR Error-Control Methods in Multiple Testing and the Correction of p-values

     

     
    A common problem in data analysis is multiple testing. Benjamini proposed a method in 1995 that chooses the p-value threshold by controlling the FDR (False Discovery Rate). Suppose you select R genes as differentially expressed, of which S are truly differentially expressed and V are not, i.e. are false positives. In practice we want the error proportion Q = V/R not to exceed, on average, some preset value (say 0.05); statistically, this is equivalent to controlling the FDR at no more than 5%. According to the theorem Benjamini proved in his paper, the procedure for controlling the FDR is actually very simple. Suppose there are m candidate genes in total, with p-values sorted in increasing order p(1), p(2), ..., p(m). To keep the FDR below q, just find the largest integer i such that p(i) <= (i*q)/m, and then declare the genes corresponding to p(1), p(2), ..., p(i) differentially expressed. This guarantees, in the statistical sense, that the FDR does not exceed q; a small R sketch of the rule follows below.
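
    A minimal R sketch of this step-up rule, assuming made-up p-values and a target level q; the helper name bh_select is not from the original text:

        # Benjamini-Hochberg step-up rule: find the largest i with p(i) <= i*q/m
        bh_select <- function(p, q = 0.05) {
          m   <- length(p)
          ord <- order(p)                          # indices that sort p in increasing order
          ps  <- p[ord]                            # p(1) <= p(2) <= ... <= p(m)
          ok  <- which(ps <= seq_len(m) * q / m)
          if (length(ok) == 0) return(integer(0))  # nothing passes
          ord[seq_len(max(ok))]                    # original indices of the selected genes
        }

        p <- c(0.0003, 0.0001, 0.02, 0.8, 0.04)    # hypothetical p-values
        bh_select(p, q = 0.05)                     # returns 2 1 3 5

    Here the four smallest sorted p-values satisfy p(i) <= i*q/m, so the genes with original indices 2, 1, 3 and 5 are selected.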

    The False Discovery Rate (FDR) of a set of predictions is the expected proportion of false predictions in the set. For example, if the algorithm returns 100 genes with a false discovery rate of 0.3, then we should expect about 70 of them to be correct.

    The FDR is very different from a p-value, and as such a much higher FDR can be tolerated than would be acceptable for a p-value. In the example above, a set of 100 predictions of which 70 are correct might be very useful, especially if there are thousands of genes on the array, most of which are not differentially expressed. In contrast, a p-value of 0.3 is generally unacceptable in any circumstance, while an FDR as high as 0.5, or even higher, might still be quite meaningful.

     

    For how to compute it, see:

    http://stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html

     

        > p <- c(0.0003, 0.0001, 0.02)
        > p
        [1] 3e-04 1e-04 2e-02
        > p.adjust(p, method = "fdr", length(p))
        [1] 0.00045 0.00030 0.02000
        > p * length(p) / rank(p)
        [1] 0.00045 0.00030 0.02000
        > length(p)
        [1] 3
        > rank(p)
        [1] 2 1 3
        > sort(p)
        [1] 1e-04 3e-04 2e-02
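
    The simple formula p * length(p) / rank(p) agrees with p.adjust(..., method = "fdr") here only because these particular adjusted values happen to be monotone in p; p.adjust additionally enforces monotonicity with a running minimum over the sorted values. A small made-up example where the two differ:

        p2 <- c(0.010, 0.011, 0.030)      # hypothetical p-values
        p2 * length(p2) / rank(p2)        # 0.0300 0.0165 0.0300
        p.adjust(p2, method = "fdr")      # 0.0165 0.0165 0.0300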

     

    1) A p-value is the probability, under H0, of obtaining a test statistic as extreme as (or more extreme than) the one observed in the experiment. It is not the probability that H1 is true. Suppose the difference between the apple-eating group and the non-apple-eating group is D; p-value = 0.2 means that, purely by chance (i.e. when H0 is true), the probability of observing a difference as large as or larger than D is 20%.
    2) The p-value in essence controls the FPR (false positive rate), and the purpose of a hypothesis test is to make a decision. Traditionally the probability of a rare event is set at 0.05 or 0.01, but not always; it mainly depends on the goal of the study. In a single experiment (note: one experiment, i.e. a single test), a cutoff of 0.05 or 0.01 is strict enough (imagine a bag with 100 balls, 95 white and 5 red, and you are allowed to draw only once: how likely are you to draw a red one?). The emphasis here is on a single test; in multiple testing one usually does not use the p-value but the stricter q-value. Unlike the p-value, the q-value controls the FDR (false discovery rate).
    3) An example. Suppose a diagnostic reagent for HIV has a validated accuracy of 99% (one false positive per 100 diagnoses). For a single person being tested (a single test), this accuracy is sufficient. But for a hospital (multiple testing) it is far from enough, because for every 10,000 individuals diagnosed, 100 would be falsely diagnosed as HIV-positive.
    4) In short, if you care a lot about false positives, the p-value cutoff has to be very low. If you care a lot about false negatives (the "better to flag a thousand wrongly than to miss a single one" situation), the p-value cutoff can be relaxed to 0.1 or even 0.2. *******************

    The multiple testing problem has been getting more and more attention lately :)
    Actually I have always had a question: starting from Benjamini, there are now no fewer than ten FDR control methods. Why is Storey's the most popular? In practice, apart from Benjamini's method, all the others behave essentially the same. How exactly did the q-value manage to stand out?
    The q-value arose with multiple testing. In a multiple-testing setting (say 10,000 tests), if you cut at p-value = 0.05 and 1,000 tests come out significant, then among those 1,000 as many as 10,000 * 0.05 = 500 could be false positives. That is clearly unacceptable; it is far too loose.
    Bonferroni-type correction controls the FWER: in the example above the cutoff becomes 0.05/10,000 = 0.000005. This does control false positives, but it is useful only in rare situations, because it is so strict that a large number of true alternatives get missed.
    The q-value is in effect a compromise between the two approaches: it controls false positives without missing too many true alternatives.
    For details see Storey's paper published in PNAS (2003).
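
    A rough R simulation of this trade-off; the mix of 9,000 nulls and 1,000 true effects and the effect size of 3 are arbitrary assumptions, not taken from the discussion above:

        set.seed(1)
        m     <- 10000
        truth <- c(rep(FALSE, 9000), rep(TRUE, 1000))  # assumed mix of nulls and true effects
        z     <- rnorm(m, mean = ifelse(truth, 3, 0))  # test statistics; effect size 3 is arbitrary
        p     <- 2 * pnorm(-abs(z))                    # two-sided p-values

        summarise <- function(sig) c(called = sum(sig), false = sum(sig & !truth))
        summarise(p < 0.05)                            # raw cutoff: roughly 9000 * 0.05 = 450 false positives
        summarise(p < 0.05 / m)                        # Bonferroni/FWER: almost no false positives, few calls
        summarise(p.adjust(p, method = "fdr") < 0.05)  # BH/FDR: roughly 5% of the calls are false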

    *************************

    Agreed :) But my question is not about FWER control, it is about FDR control. Benjamini and Hochberg first introduced the concept of FDR in 1995, motivated precisely by the conservativeness of Bonferroni, and gave a procedure for controlling it (the ancestor of all FDR control methods). Their procedure, however, is itself conservative, so people went on to develop more powerful methods; existing ones include Storey's, Broberg's, Dalmasso's, Guan's, Strimmer's, and so on. Benjamini's method controls the FDR below a given level, whereas all the later methods try to estimate the FDR accurately, which makes them more powerful. The price they pay is robustness.
    The biggest weakness of existing FDR control methods is that they assume the p-values under the null hypothesis are (1) independent and (2) uniformly distributed on (0, 1). Judging from real data, these assumptions are often unreasonable, especially the second one. (Incidentally, Storey and Leek published a paper in PLOS Genetics in 2007 that specifically addresses the validity of the second assumption; it is quite impressive and worth a look.)
    My question now is: Storey's method is no more accurate than the methods that came later, nor does it show any advantage in robustness. How exactly did it win out? Why is it the most popular FDR control procedure?

    From: http://cos.name/cn/topic/13846

    Bonferroni correction: if n independent hypotheses are tested on the same data set, the statistical significance level used for each individual hypothesis should be 1/n of the level that would be used if only one hypothesis were tested. For example, to test two independent hypotheses on the same data set at the usual significance level of 0.05, the stricter level 0.025 (= 0.05 * 1/2) should be used for each of the two tests. The method was developed by Carlo Emilio Bonferroni and is therefore called the Bonferroni correction. The rationale is the fact that, when many hypotheses are tested on the same data set, one in every 20 hypotheses may reach the 0.05 significance level purely by chance.

    The original Wikipedia text: Bonferroni correction. Bonferroni correction states that if an experimenter is testing n independent hypotheses on a set of data, then the statistical significance level that should be used for each hypothesis separately is 1/n times what it would be if only one hypothesis were tested. For example, to test two independent hypotheses on the same data at 0.05 significance level, instead of using a p value threshold of 0.05, one would use a stricter threshold of 0.025. The Bonferroni correction is a safeguard against multiple tests of statistical significance on the same data, where 1 out of every 20 hypothesis-tests will appear to be significant at the α = 0.05 level purely due to chance. It was developed by Carlo Emilio Bonferroni. A less restrictive criterion is the rough false discovery rate, giving (3/4)0.05 = 0.0375 for n = 2 and (21/40)0.05 = 0.02625 for n = 20.

    The FDR error-control method is the one proposed by Benjamini in 1995 and described above: sort the p-values of all candidate genes in increasing order; to keep the FDR below q, find the largest integer i such that p(i) <= (i*q)/m and declare the genes corresponding to p(1), p(2), ..., p(i) differentially expressed, which guarantees statistically that the FDR does not exceed q. Accordingly, the FDR-adjusted value is computed as q-value(i) = p(i) * length(p) / rank(p).

    References:

    1. Audic, S. and Claverie, J. M. (1997). The significance of digital gene expression profiles. Genome Res 7(10): 986-995.

    2. Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics 29: 1165-1188.
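
    A quick check of the "rough false discovery rate" numbers quoted in the Wikipedia excerpt, assuming the criterion is alpha * (n + 1) / (2 * n):

        rough_fdr <- function(alpha, n) alpha * (n + 1) / (2 * n)  # assumed form of the rough FDR criterion
        rough_fdr(0.05, 2)    # 0.0375
        rough_fdr(0.05, 20)   # 0.02625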

    =================================================================================================

    The goal of FDR correction is to keep the false discovery rate (falsely activated voxels / reported activated voxels) below a given value. For example, controlling the FDR at Q < 0.05 means that, after the correction, for every 100 activated voxels you report only about 5 are expected to be false positives. To achieve this, a p threshold has to be computed from the p-values of all the voxels, such that declaring voxels with p below that threshold keeps the false discovery rate under the chosen Q (e.g. 0.05). The algorithm for finding this threshold is to sort all the p-values and then locate the largest sorted p(i) that satisfies p(i) <= (i*Q)/m, as described above.

    1. Why correct: the multiple comparison problem. The multiple comparison problem potentially arises whenever you would like to test multiple hypotheses simultaneously. If you don't correct for the number of comparisons, then the more hypotheses you test, the higher the probability of obtaining at least one false positive. Since imaging statistics often involve many thousands of simultaneous tests (typically one in each voxel in the brain), it's important to correct for the number of comparisons properly.
    Just to be a little more concrete, consider a brain with 10,000 voxels. If we used a statistical test with a univariate false positive rate of 0.05, then we would expect to see 500 false positives in a functional brain map containing no genuine effect. Usually, that rate of spurious activation is unacceptable. Even worse, spatial smoothing can make it appear that the spurious activation forms coherent, highly plausible blobs. You can verify this easily with resting BOLD data.

    2. What FWER correction does
    The goal of correction, loosely speaking, is to meet community standards for statistical significance. At the moment, the prevailing standard is that reported results should come from a test with a family-wise error rate (FWER) of 0.05. The "family," in this case, is the entire set of tests in a single brain map, so this is sometimes called the map-wise false positive rate. Basically, this means that if it turns out that your manipulation has no effect whatsoever, anywhere in the brain (imagine the projector wasn't working), the probability of seeing so much as a single voxel above statistical threshold should be 0.05. There is obviously an arbitrary element to this, and it's open to debate how quickly research would progress if this standard were adjusted. At least one of the approaches noted below (false discovery rate control, or FDR) suggests an alternative standard that does not meet this criterion, but could be viewed as preferable.

    3. Bonferroni correction (very common in behavioral experiments)
    Bonferroni correction is the simplest and most conservative approach to correction, often used in behavioral and other types of non-imaging research. To correct, you simply re-calculate your threshold to correspond to your desired map-wise alpha criterion divided by your total number of comparisons (in this case voxels). So if you have 10 voxels, you should only count as statistically significant results that are independently associated with a p value below 0.005. For a given voxel, your probability of a spurious result is tiny (and your threshold is often painfully high). However, you're guaranteed that the probability of seeing so much as a single supra-threshold voxel in the absence of an experimental effect will be at most 0.05. By the prevailing standards of the community, you can report all of the voxels surviving this threshold as statistically significant.
    Bonferroni correction offers provably adequate control over the false positive rate (given the validity of the univariate statistical test). But it is overly conservative in general, more so when the number of independent observations is much smaller than the nominal number of observations. This is often the case in imaging, where spatial smoothness (intrinsic and extrinsic) usually means many fewer observations than voxels.
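
    As a small illustration of the 10-voxel example above (the three p-values are invented), the per-voxel threshold and the equivalent p.adjust call, where n gives the total number of tests:

        alpha <- 0.05
        n_vox <- 10
        alpha / n_vox                                           # per-voxel threshold: 0.005

        p <- c(0.004, 0.02, 0.3)                                # hypothetical p-values for 3 of the 10 voxels
        p < alpha / n_vox                                       # TRUE FALSE FALSE
        p.adjust(p, method = "bonferroni", n = n_vox) < alpha   # same decisions via adjusted p-values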

    =================================================================================================


    Suppose there are 10 tests; after correction, the new significance level p' can be computed as p' = 1 - (1 - p)^(1/10), where p is the original significance level. (Strictly speaking this is the Šidák form of the correction; the plain Bonferroni threshold would simply be p/10.)
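
    A quick comparison of the two thresholds for 10 tests at the usual 0.05 level:

        alpha <- 0.05
        k     <- 10
        1 - (1 - alpha)^(1 / k)   # Sidak-style threshold: about 0.00512
        alpha / k                 # plain Bonferroni threshold: 0.005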

    Sequential Bonferroni correction, posted by 子非吾 on 2006-12-19 19:20:00

    Sequential Bonferroni correction (sequential Bonferroni technique)

    If a healthy person goes to the hospital and has every available test run, it is quite likely that some of the results will come back "abnormal". The reason is that every measurement and judgment carries error, and when measurements and judgments are made one after another, some "abnormal" findings are likely to turn up even when everything is in fact normal. In statistical terms, this inflates the probability of a Type I error.

     

    When tests are carried out one after another, some "significant differences" will always show up even when there are in fact no differences at all. One way to deal with this is the Bonferroni correction. The Bonferroni correction was originally proposed by Bonferroni, C. E. (1935); what I consulted, however, is Rice (1989), which in turn cites Holm (1979). There is also a web page (http://mathworld.wolfram.com/BonferroniCorrection.html) describing the method.

     

    The specific procedure is as follows:

    For k independent tests at a given significance level (α), arrange the corresponding P values in increasing order (P1, ..., Pk). Start with the smallest P value (P1): if P1 ≤ α/k, the corresponding test is declared significant at the table-wide level α; if not, all of the tests are declared non-significant. Only when P1 ≤ α/k do you go on to the second P value (P2): if P2 ≤ α/(k-1), the corresponding test is declared significant at the table-wide level α, and you go on to the next P value. Continue this process until Pi ≤ α/(k-i+1) fails to hold, and conclude that test i and all subsequent tests are non-significant. (A sketch of this procedure in R is given below.)
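
    A sketch of this sequential (step-down) procedure in R; the helper name holm_select and the p-values are illustrative, and p.adjust(method = "holm") reproduces the same decisions through adjusted p-values:

        # Sequential Bonferroni (Holm): compare the i-th smallest p with alpha/(k - i + 1)
        holm_select <- function(p, alpha = 0.05) {
          k    <- length(p)
          ord  <- order(p)
          ps   <- p[ord]
          pass <- ps <= alpha / (k - seq_len(k) + 1)
          # stop at the first failure; that test and everything after it is non-significant
          n_sig <- if (all(pass)) k else which(!pass)[1] - 1
          ord[seq_len(n_sig)]                      # original indices of the significant tests
        }

        p <- c(0.001, 0.013, 0.04, 0.3)            # hypothetical p-values, k = 4
        holm_select(p)                             # returns 1 2
        p.adjust(p, method = "holm") <= 0.05       # TRUE TRUE FALSE FALSE: same decisions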

     

    This method has been criticized as too conservative (Moran 2003). There have since been further debate and improvements (see the papers citing Moran 2003 listed on Moran's web page).

    References:

    Bonferroni, C. E. (1935) Il calcolo delle assicurazioni su gruppi di teste. In Studi in Onore del Professore Salvatore Ortu Carboni. Rome: Italy, pp. 13-60.

    Rice, W. R. (1989). Analyzing tables of statistical tests. Evolution 43: 223-225.

    Moran, M. D. (2003). Arguments for rejecting the sequential Bonferroni in ecological studies. Oikos 100(2): 403-405. doi: 10.1034/j.1600-0706.2003.12010.x (http://www.blackwell-synergy.com/doi/full/10.1034/j.1600-0706.2003.12010.x?prevSearch=allfield:(Moran))

     

    See also here.

    http://privatewww.essex.ac.uk/~scholp/bonferroni.htm

  • Original source: https://www.cnblogs.com/pangairu/p/4223916.html