Design of Statistical Investigations
10 Random Effects
Stephen Senn
More Than One Random Term
• So far only one term has been treated as random: the disturbance term
• It is possible to have models in which more than one source of variation is taken to be random
• We now consider such models
The Example of Clinical Trials
• So far we have always taken patient effects to be fixed
• Suppose, however, that we ran a parallel group trial
• Some patients have one treatment, some have the other
• Patients and treatments are confounded
• If we treat patient effects as fixed, we cannot estimate the treatment effect
Solution
• We treat patient effects as random
• This is done implicitly in parallel group trials. Writing the model as
  Yij = μ + τi + bij + eij,
  bij is the effect for patient j of treatment group i
• If we declare this to be random, we can form a new disturbance term
  εij = bij + eij,  so that  Yij = μ + τi + εij
So What?
• We do not even need to model this explicitly
• We just have a model in which we say "response = treatment + noise", without worrying about what terms the noise is made up of
• However, for more complicated designs such distinctions may be useful
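A minimal sketch of this point in S/R on simulated data (the variable names pef, treat and patient, and all numerical values, are assumptions made for illustration): with one measurement per patient, the patient effect and the within-patient error cannot be separated, so ordinary least squares on "treatment + noise" is all that is needed.

# Simulated parallel group trial: one measurement per patient, so the
# patient effect b and the within-patient error e merge into a single
# disturbance term of an ordinary linear model.
set.seed(1)
n     <- 20                                      # patients per group (assumed)
treat <- factor(rep(c("A", "B"), each = n))      # two parallel groups
b     <- rnorm(2 * n, sd = 60)                   # random patient effects
e     <- rnorm(2 * n, sd = 30)                   # within-patient errors
pef   <- 300 + 45 * (treat == "B") + b + e       # response = treatment + noise
summary(lm(pef ~ treat))                         # OLS; the noise term is (b + e)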
Cross-over Trials
• We shall now take a simple example
• AB/BA cross-over
• We shall, however, ignore period effects to make it even simpler
• Just consider the following
  • Treatment effects
  • Patient effects
  • Within-patient error
Model
Yij = μ + τj + bi + eij
where Yij is the response of patient i under treatment j, τj is the treatment effect, bi is the patient effect and eij is within-patient error, with bi ~ N(0, σb²) and eij ~ N(0, σ²).
Here all the bi and eij terms are assumed independent of each other.
Consequences
The variance-covariance structure of the Yij then has the following block-diagonal form: measurements on different patients are independent, while for the two measurements on the same patient
Var(Yij) = σb² + σ²,  Cov(Yi1, Yi2) = σb²
Variance-Covariance Matrix
For each patient the 2 × 2 block is
  σb² + σ²    σb²
  σb²         σb² + σ²
and the full matrix V is block diagonal with one such block per patient. Here it is assumed that measurements in successive rows of Y are on the same patient.
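An illustrative construction of V in S/R; the numerical values of the standard deviations and the number of patients are assumptions, not estimates from the data.

# Block-diagonal variance-covariance matrix for an AB/BA cross-over
sigma.b <- 66     # between-patient standard deviation (assumed)
sigma.e <- 29     # within-patient standard deviation (assumed)
n       <- 4      # number of patients shown (assumed)
block <- matrix(c(sigma.b^2 + sigma.e^2, sigma.b^2,
                  sigma.b^2,             sigma.b^2 + sigma.e^2), nrow = 2)
V <- kronecker(diag(n), block)   # one 2 x 2 block per patient
V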
Alternative Representation
Estimation 1
If we now write this as a linear model with only one error term, εij = bi + eij, we must now have var(ε) equal to the block-diagonal matrix V above rather than σ²I.
We can no longer use ordinary least squares but must use generalised least squares instead.
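A sketch of the generalised least squares estimator written out explicitly (illustration only; in practice V is unknown and has to be estimated, for example by REML as lme does).

# Generalised least squares: beta-hat = (X' V^-1 X)^-1 X' V^-1 y
gls.estimate <- function(X, y, V) {
  W <- solve(V)                                   # V^-1
  solve(t(X) %*% W %*% X) %*% t(X) %*% W %*% y
}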
Estimation 2
We are not going to cover the details of GLS. However, as it turns out, the estimator here is exactly the same as in the model in which we treat the patient effects as fixed rather than random.
This equivalence does not generally hold. It does hold for certain balanced designs.
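A hedged check of this equivalence on simulated balanced AB/BA data (the data themselves, and the use of the nlme library as the R home of lme, are assumptions made for illustration).

# Fixed-effects and random-effects estimates of the treatment effect agree
# for a balanced AB/BA design (simulated data).
library(nlme)
set.seed(2)
n <- 13
patient <- factor(rep(1:n, each = 2))
treat   <- factor(rep(c("A", "B"), times = n))
pef     <- 300 + 45 * (treat == "B") +
           rep(rnorm(n, sd = 66), each = 2) + rnorm(2 * n, sd = 29)
dat <- data.frame(pef, treat, patient)
coef(summary(lm(pef ~ patient + treat, data = dat)))["treatB", ]   # fixed
fixef(lme(pef ~ treat, random = ~ 1 | patient, data = dat))        # random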
Illustration: Exp_5
• This experiment was an AB/BA cross-over
• We previously analysed this using a fixed effects model for the patient effect
• We now analyse it treating the patient effect as random
• To do this we use the S-PLUS lme function (lme = linear mixed effects)
Exp_5: Random Effects Analysis with S-PLUS
> fit4 <- lme(pef ~ treat, random = ~ 1 | patient)
> summary(fit4)
Linear mixed-effects model fit by REML
 Data: NULL
       AIC      BIC    logLik
  271.8428 276.5551 -131.9214

Random effects:
 Formula:  ~ 1 | patient
        (Intercept) Residual
StdDev:    66.24667 28.70328

Fixed effects: pef ~ treat
               Value Std.Error DF  t-value p-value
(Intercept) 295.7692  20.02402 12 14.77072  <.0001
      treat  45.3846  11.25835 12  4.03120  0.0017
Question
In this balanced case, the estimator is the same for the random effect model as for the fixed effect model.
• Show that the variance is the same whether patient is treated as a fixed or a random effect
An Example Where this Equivalence Does Not Apply
• In Exp_5, treating the patient effect as fixed or random produces the same result
• This is not the case for all designs
• We now consider an example where the equivalence does not apply
• Exp_12, an incomplete blocks design, is a case in point
Exp_12: Analysis with S-PLUS
Random effects:
 Formula:  ~ 1 | patient
 numeric matrix: 1 rows, 2 columns.
        (Intercept)  Residual
StdDev:   0.7055782 0.2263807

Fixed effects: FEV1 ~ treat
                Value Std.Error DF   t-value p-value
(Intercept)  2.021375 0.1568938 23  12.88372  <.0001
   treatF24  0.034457 0.0946403 22   0.36409  0.7193
     treatP -0.492521 0.0890777 22  -5.52912  <.0001

Compare these with the fixed effects solution:
            Value Std. Error t value Pr(>|t|)
treatF24   0.0388     0.0955  0.4059   0.6888
treatP    -0.5029     0.0897 -5.6051   0.0000
Notes
• The estimates are no longer the same
• The variances are (logically) no longer the same either
• The variances for the random effects approach are (slightly) smaller
Fixed Effects
• Any effect we nominate as fixed has to be 'eliminated' when estimating any other effect
• If we nominate patient as fixed, then patient must be eliminated in estimating the treatment effect
• In Exp_12 each patient effect appears twice: once in period 1, once in period 2
• A patient effect can only be eliminated by forming the difference between period 1 and period 2
• The analysis uses such differences (a sketch follows)
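A minimal sketch of this elimination by differencing; the toy data frame and its column names are assumptions, not the Exp_12 data.

# Differencing eliminates a fixed patient effect: it appears in both of a
# patient's periods and so cancels from (period 1 - period 2).
dat <- data.frame(patient = factor(c(1, 1, 2, 2)),
                  period  = c(1, 2, 1, 2),
                  FEV1    = c(2.10, 2.55, 1.80, 1.35))   # toy values (assumed)
y1 <- dat$FEV1[dat$period == 1]
y2 <- dat$FEV1[dat$period == 2]
d  <- y1 - y2   # patient effects cancel; only the treatment contrast and error remain
d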
Random Effects
• An effect that is random does not have to be eliminated: on average it is zero
• Nominating an effect as random increases the range of possible unbiased estimators
• The minimum variance estimator may or may not be the same as in the fixed effects case
A Further Source of Information
• If the patient effect is random, the totals for patients vary randomly from patient to patient
• These totals do not all reflect the same effects
• By comparing F12 + F24 with F12 + P we can estimate the difference between F24 and P
• This is a further source of information, in general referred to as inter-block information
• This has been "recovered" by S-PLUS
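A sketch of the idea using patient totals; the treatment labels F12, F24 and P follow the slide, but the totals themselves are invented for illustration.

# Inter-block information: a patient's total reflects the pair of treatments
# received (the random patient effect has mean zero), so comparing totals
# between groups of patients estimates a treatment contrast.
total.F12.F24 <- c(4.1, 4.3, 3.9)   # totals for patients given F12 and F24 (assumed)
total.F12.P   <- c(3.6, 3.5, 3.8)   # totals for patients given F12 and P (assumed)
mean(total.F12.F24) - mean(total.F12.P)   # between-patient estimate of F24 - P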
Other Sorts of Random Effects
• In the example just considered, the main effect of block was random
• More unusual is a true random treatment effect (but this can happen)
• Quite common are block-by-treatment interactions that are considered random
• We consider an example of the former in the next lecture
• We consider an example of the latter here
Exp_14: Shumaker and Metzler
• Trial to compare two formulations of phenytoin
• T = test, R = reference
• So-called bioequivalence study
• Four-period cross-over
• Each of 26 subjects received each formulation twice
Data from Shumaker and Metzler, 1998, Drug Information Journal, 32, 1063-1072.
Area under the concentration-time curve (AUC) for phenytoin. Data are log-transformed.
Subject by Formulation Interaction: Heuristic Explanation
• We can estimate the treatment effect twice independently for each subject, for example by comparing periods 2 and 1 and periods 3 and 4
• We can see whether these estimates differ more between subjects than within
• This enables us to assess whether there is an "interaction" (a sketch follows)
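A sketch of the heuristic on simulated data (the number of subjects matches the slide, but the per-subject estimates and their variances are invented): two independent within-subject estimates of the formulation difference are formed for each subject and their between-subject and within-subject variation compared.

# Two independent T - R estimates per subject; compare their variation
# between subjects with their variation within subjects.
set.seed(3)
n <- 26
interaction.effect <- rnorm(n, sd = 0)                    # 0 => no interaction (assumed)
est1 <- 0.005 + interaction.effect + rnorm(n, sd = 0.06)  # from periods 2 vs 1
est2 <- 0.005 + interaction.effect + rnorm(n, sd = 0.06)  # from periods 3 vs 4
var((est1 + est2) / 2)   # between-subject variance of the per-subject means
var((est1 - est2) / 2)   # within-subject ("pure error") variance
# With no interaction both quantities estimate the same variance.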
Exp_14: Two Fits
> aov.1 <- aov(lAUC ~ Subject + Formulation)
> summary(aov.1)
            Df Sum of Sq   Mean Sq  F Value     Pr(F)
    Subject 25  7.747866 0.3099146 90.50657 0.0000000
Formulation  1  0.002529 0.0025288  0.73851 0.3928048
  Residuals 77  0.263665 0.0034242

> aov.2 <- aov(lAUC ~ Subject * Formulation)
> summary(aov.2)
                    Df Sum of Sq   Mean Sq  F Value     Pr(F)
            Subject 25  7.747866 0.3099146 82.32023 0.0000000
        Formulation  1  0.002529 0.0025288  0.67172 0.4161950
Subject:Formulation 25  0.067898 0.0027159  0.72141 0.8112634
          Residuals 52  0.195767 0.0037647
Different Philosophy
• The residual MS in the first fit includes the subject-by-treatment interaction
• The residual MS in the second does not
• Hence the F test in the first uses such variation from subject to subject to assess the precision of the overall treatment estimate
• The second F test excludes it
Questions
In this example the residual MS is actually higher when the subject-by-formulation interaction is fitted.
• Is this phenomenon to be expected in general?
• What does it imply?
• Can you think of an explanation in this case?