Questions to Ask Yourself Regarding ANOVA
History
• ANOVA is extremely popular in psychological research
• When experimental approaches to data analysis dominated the research setting, up until and through the 60s, this made sense, and statistical and practical effects were more closely aligned
• Due to the cognitive revolution and related developments, questions began to involve situations in which experiments could not be carried out or would be inappropriate
• Research either required or began using other techniques, while textbooks and courses still pretended everyone was running experiments
• As a result, people applied ANOVA as they typically had even when it did not make sense to do so
• Some software ‘advances’ just made it easier to continue this unfortunate trend
• ANOVA became a shining example of what can go wrong in the application of statistics, with people even consistently bending theory to fit their model into an ANOVA design
Are you doing an experiment?
• Yes?
• Experimental design is well suited to ANOVA
• Effects are independent
• Total factor effects and interactions sum to the overall SSmodel
• Balanced/proportional design
• Lends itself easily to planned comparisons
• Interactions are easily interpretable
• Other factors are controlled for
• Though they still contribute to error variance
• No?
• You should probably take a different approach
• Your design will be unbalanced, and not only do you have correlated effects (is main effect A really A, or due to what it shares with B?), you also have to worry about things like which type of sums of squares you're going to use (see the sketch after this list)
• You're not doing an experiment that controls for other variables via randomization, so restricting the model to a couple of categorical predictors would almost always leave it notably misspecified
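A minimal sketch of why the sums-of-squares question arises once a design is unbalanced. The data, factor names (A, B), and cell counts below are invented for illustration; the point is only that Type I (sequential) and Type II (partial) tables no longer agree when factors are correlated.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Deliberately unbalanced cell counts: the factors are now correlated
df = pd.DataFrame({
    "A": ["a1"] * 8 + ["a2"] * 4,
    "B": (["b1"] * 5 + ["b2"] * 3) + (["b1"] * 1 + ["b2"] * 3),
    "y": [3, 4, 5, 4, 6, 7, 8, 6, 9, 10, 11, 12],
})

model = smf.ols("y ~ A * B", data=df).fit()

# Type I SS depend on the order the factors enter the model;
# Type II SS partial each factor out of the others first.
print(anova_lm(model, typ=1))
print(anova_lm(model, typ=2))
```

In a balanced design the two tables would be identical and the factor and interaction sums of squares would add up to SSmodel; here they do not.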
Which comparisons are you actually interested in?
• Post hocs
• Are you really interested in testing every possible group comparison (including every cell difference for every interaction)?
• If so, are you willing to lose a lot of statistical power in order to do that?
• Are you going to apply the post hoc mentality to all situations?
• Control familywise (FW) error among all simple effects for any interactions?
• Control FW error among all group differences within simple effects?
• Planned
• Are there specific comparisons of interest that it would make sense to test?
• Are these going to tell you anything more than the main effect/interaction results, a good graph, and a complete understanding of the measures already would? (a sketch of both approaches follows)
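A hypothetical sketch contrasting a blanket post hoc (every pairwise difference, familywise error controlled) with a single planned comparison. The group labels and scores are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["control", "drugA", "drugB"], 10),
    "score": np.concatenate([rng.normal(10, 2, 10),
                             rng.normal(12, 2, 10),
                             rng.normal(12.5, 2, 10)]),
})

# Post hoc: every pairwise comparison, with familywise error control
print(pairwise_tukeyhsd(df["score"], df["group"]))

# Planned: one focused question, e.g. control vs. the average of the two drugs.
# With 'control' as the reference level the parameters are
# [Intercept, drugA - control, drugB - control], so testing the sum of the two
# drug coefficients against zero tests control vs. the drug average.
model = smf.ols("score ~ C(group)", data=df).fit()
print(model.t_test([0, 1, 1]))
```

The planned contrast answers one theoretically motivated question at full power; the Tukey table answers every question and pays for it.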
What graphics are you going to use to display group differences?
• Bar plots, as they are typically used, are not a very viable way to display group differences
• Typical use lacks some information and obscures other kinds
• Stacked and 3-D ones never were (shudders)
• If you plot error bars, which ones you're displaying needs to be made explicit
• Std dev?
• Std error?
• CI?
• Is your graph attempting to display the statistical test result? Does it?
• Will the graph stand out? Are you putting in as much information as you can without being overwhelming? (see the plotting sketch below)
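A small sketch of plotting group means with error bars whose meaning is stated explicitly in the legend. The groups and scores are invented, and the 95% CI uses the normal approximation for brevity.

```python
import numpy as np
import matplotlib.pyplot as plt

groups = ["control", "drugA", "drugB"]
rng = np.random.default_rng(1)
samples = [rng.normal(m, 2, 30) for m in (10, 12, 12.5)]

means = [s.mean() for s in samples]
sds = [s.std(ddof=1) for s in samples]                 # standard deviation
ses = [sd / np.sqrt(len(s)) for sd, s in zip(sds, samples)]  # standard error
ci95 = [1.96 * se for se in ses]                        # approx. 95% CI half-width

fig, ax = plt.subplots()
x = np.arange(len(groups))
ax.errorbar(x, means, yerr=ci95, fmt="o", capsize=4,
            label="mean ± 95% CI")   # say exactly what the bars represent
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.set_ylabel("score")
ax.legend()
plt.show()
```

Swapping `ci95` for `sds` or `ses` changes the visual story dramatically, which is exactly why the choice has to be labeled.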
Just why exactly are you examining gender, race, etc.? Habit or theory?
• Including irrelevant predictors in a model is itself a form of model misspecification
• If you're throwing them in ‘just to see’, then you are in an exploratory paradigm and should probably be doing analyses more appropriate to that approach
Are you doing multiple ANOVAs?
• If the dependent variables examined are notably correlated (and theoretically so), you would be better off in terms of power examining them all at once with MANOVA…
• And leaving the analysis multivariate
• There is no good reason to follow up a MANOVA with univariate tests
• Several post hoc procedures are available to tease out differences at the multivariate level (a minimal MANOVA sketch follows)
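A minimal sketch of analyzing two correlated outcomes jointly rather than running two separate ANOVAs. The variable names (dv1, dv2, group) and data are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
n = 20
df = pd.DataFrame({"group": np.repeat(["g1", "g2", "g3"], n)})
base = rng.normal(0, 1, 3 * n) + np.repeat([0.0, 0.5, 1.0], n)
df["dv1"] = base + rng.normal(0, 0.5, 3 * n)   # two outcomes that share variance
df["dv2"] = base + rng.normal(0, 0.5, 3 * n)

# One multivariate test of the group effect across both outcomes
mv = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(mv.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```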
Why are you doing ANCOVA again?
• While it makes some sense in the experimental setting for reducing error due to known covariates, it makes little sense outside the controlled setting
• Partialling out a covariate means you are likely partialling out part of the actual effect you are interested in
• It comes with an additional assumption (homogeneity of regression slopes) that is unlikely to hold
• d effect sizes on the adjusted means are likely inappropriate in a non-experimental setting (see the sketch below)
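A hedged sketch of ANCOVA as a linear model with a covariate, plus a check of the homogeneity-of-regression-slopes assumption via the group by covariate interaction. The variables (pretest, posttest, group) and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 30
df = pd.DataFrame({"group": np.repeat(["treatment", "control"], n)})
df["pretest"] = rng.normal(50, 10, 2 * n)
df["posttest"] = (df["pretest"] * 0.8
                  + np.where(df["group"] == "treatment", 5, 0)
                  + rng.normal(0, 5, 2 * n))

# ANCOVA: the group effect adjusted for the covariate
ancova = smf.ols("posttest ~ pretest + group", data=df).fit()
print(anova_lm(ancova, typ=2))

# Assumption check: the group x covariate interaction should be negligible
slopes = smf.ols("posttest ~ pretest * group", data=df).fit()
print(anova_lm(ancova, slopes))   # nested model comparison
```

If the interaction is non-trivial, the slopes differ across groups and a single "adjusted" group difference is not a meaningful quantity.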
Summary
• Analysis of Variance refers to the partitioning of variance into model components, and as such it can also be used to test competing models in terms of how much variance is attributed to each
• Proposed Model vs. Null
• Experiments, basic regression, etc.
• Comparing a full model vs. a subset
• When investigating a model with grouping factors, experimental settings lend themselves to minimal factors and contrasts, and ANOVA can easily break the model variance down into the contributions of factors and interactions, and from there into orthogonal contrasts and/or simple effects for those factors and interactions
• The test still technically concerns the reduction in residual variance with the inclusion of the factors, just as it always has with regression and continuous predictors
• However, even with an experiment the design can easily become very complex
• When the approach is applied to non-experimental settings, variance components do not add up, the effects seen are difficult to tease out, post hoc procedures become cumbersome, ANCOVA is not justifiable, etc. (a model-comparison sketch follows)
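A sketch of the model-comparison view summarized above: each F test asks how much the residual variance drops as predictors enter, whether those predictors are continuous or categorical. The variables (x, g, y) and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n = 100
df = pd.DataFrame({"x": rng.normal(0, 1, n),
                   "g": rng.choice(["a", "b"], n)})
df["y"] = 2 * df["x"] + np.where(df["g"] == "b", 1.0, 0.0) + rng.normal(0, 1, n)

null_model = smf.ols("y ~ 1", data=df).fit()        # proposed model vs. null starts here
subset     = smf.ols("y ~ x", data=df).fit()        # continuous predictor only
full       = smf.ols("y ~ x + g", data=df).fit()    # add the grouping factor

# Each row compares the residual variance of the smaller and the larger model
print(anova_lm(null_model, subset, full))
```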