
OOM not Doom: A Novel Method for Improving Psychological Science

Bradley D. Woods, University of Canterbury


Presentation Transcript


  1. OOM not Doom: A Novel Method for Improving Psychological Science. Bradley D. Woods, University of Canterbury

  2. “You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland and I show you how deep the rabbit-hole goes...” (Morpheus, The Matrix)

  3. Overview • 1. The Blue Pill: • Measurement • NHST • IEV vs. IAV Research Methods • 2. The Red Pill: • Observation Oriented Modeling

  4. Measurement • The modern psychological measurement paradigm really began to take shape in the 1940s, from events associated with publication of the Ferguson Report, which charged that psychophysical measurement was impossible because of the lack of fundamental units of mass, length, and time upon which all derived units of temperature, pressure, etc., rely. • S.S. Stevens replied by redefining measurement. Assuming isomorphism between the properties of objects and the properties of the number system, he constructed an operational interpretation of representational number theory. • Stevens' Dictum: “The number system is merely a model, to be used in whatever way we please. It is a rich model, to be sure, and one or another aspect of its syntax can often be made to portray one or another property of objects or events. It is a useful convention, therefore, to define as measurement the assigning of numbers to objects or events in accordance with a systematic rule” (Stevens, 1959, p. 609). • This led to the creation of the NOIR scales and their differing permissible operations of identity, order, difference, and equality. It also wedded measurement to the popular statistical methodology of the Pearson-Fisherian tradition, virtually guaranteeing widespread adoption (Grice, 2011).

  5. Measurement • Stevens thus turned the classical definition of measurement on its head. The classical concept asserted that numerical measurements supervene on quantitative attributes; according to Stevens, however, measurable attributes supervene on numerical assignment. • Aside from the major philosophical problem associated with operationalism, that of theoretical pluralism, Stevens' definition is simply wrong, a point effectively settled some 50 years earlier by Hölder. • Building on Euclid, who in modern terms had located the ratio of magnitudes relative to the series of rational numbers, Hölder (1901) proved that if an attribute is quantitative then it is, in principle, measurable. Using the example of a continuous series of points on a straight line, and invoking ten axioms, Hölder (1901) showed that a relation of addition amongst a series of three points must implicitly exist. • The possession of quantitative structure, then, is the precise reason why some attributes are measurable and others not. Hence, scientific measurement is properly defined as “…the estimation or discovery of the ratio of some magnitude of a quantitative attribute to a unit of the same attribute” (Michell, 1997).
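In symbols (my rendering of Michell's definition, not from the slides): if $a$ is a magnitude of a quantitative attribute and $u$ is a unit magnitude of the same attribute, then measurement is the estimation or discovery of the positive real number $r$ such that

$$ a = r \cdot u, \qquad r = \frac{a}{u} \in \mathbb{R}^{+} $$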

  6. Measurement • As Hölder's axioms prove, there is no logical necessity for any attribute, even an ordered series of attributes, to possess quantitative structure. To think so commits the psychometrician's fallacy (Michell, 2007). Thus, the contention that any attribute possesses quantitative structure cannot be assumed outright and must be subject to validation. • In psychology such validation has developed along two main paths, namely the Theory of Conjoint Measurement and Rasch Analysis. • The former tries to establish quantitative structure via derived measurement, as in physics, by showing equivalence between an undetermined structure and two or more quantitative structures. The problem is that there are few examples of already existing quantitative structures in psychology. This has resulted in a paucity of development so striking it has been described as a revolution that never happened (Cliff, 1992). • Rasch analysis involves the formal testing of a measurement scale against a mathematical model that purports to meet the fundamental axioms of measurement (see below). However, it is taken as given, and has never been proved, that test performance is a conjoint structure comprising only person ability and item difficulty. It is also hostage to several conceptual conundrums such as the Rasch Paradox.
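For reference, the standard dichotomous Rasch model (not reproduced on the slides) expresses the probability of person $p$ answering item $i$ correctly as an additive conjoint function of person ability $\theta_p$ and item difficulty $b_i$:

$$ P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}} $$

It is exactly this assumed two-component conjoint structure that, as noted above, has never been independently proved.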

  7. Measurement • The measurement problem is that, aside from a very few examples, we have no foundation or basis for the claim that we are really accurately measuring psychological variables. Some might argue for justification along the lines that measurement predicts behaviour, but that raises an important scientific question; it doesn't answer one. Without being able to show how or why, we remain open to critics such as Trendler (2009), amongst others, who assert that few, if any, psychological variables have ever been measured, or will ever be able to be.

  8. NHST • Hubbard, Parsa, and Luthy (1997) tracked the uptake of NHST in the Journal of Applied Psychology from 1917 to 1994 and documented an increasing reliance on statistical significance tests. Referencing more than 1,000 articles, NHST was found to appear in only 17% of articles during the 1920s but over 90% of articles in the 1990s. • The story is the same for other psychology journals. Hubbard (2008) provides more recent figures: sampling 1,750 papers from 12 psychology journals, he found the use of NHST averaged 94% between 1990 and 2002. Indeed, the Journal of Developmental Psychology and the Journal of Abnormal Psychology both averaged more than 99%. • Efforts at reforming, supplementing, and mitigating the use of NHST, such as changes in editorial policy and publication standards, have gained little traction. Effect sizes, confidence intervals, and validation of the assumptions underpinning NHST are severely underrepresented in the psychological literature (Fidler et al., 2005). • This echoes and confirms the earlier sentiments of Finch, Cumming, and Thomason (2001) who, noting that many important aspects of statistical inference in psychology remain much as they were in the 1940s, concluded: “the cogent, sustained efforts of the reformers have been a dismal failure.” (p. 205).

  9. NHST • Keselman et al. (1998) reviewed more than 400 analyses published in psychology journals during 1994 and 1995 and found few studies validated the assumptions of the statistical tests employed. Similarly, Max and Onghena (1999) found corresponding levels of neglect in speech, language, and hearing research journals, while Glover and Dixon (2004) found only 35% of the tests employed in a range of psychology journals were utilised in a manner consistent with the logic of NHST. • Keselman et al. (1998) also conducted a variance ratio (VR) review of ANOVA analyses in 17 educational and child psychology journals. In studies using a one-way design, the mean VR was found to be 4:1, and in factorial studies the mean VR was even higher at 7.84:1, well outside the 1:1 ratio assumed under homogeneity of variance (Keselman et al., 1998; see the sketch below). Similar results have been evidenced in a range of other clinical and experimental journals (Erceg-Hurn & Mirosevich, 2008). • Micceri (1989) examined 440 large data sets from the psychological and educational literatures encompassing a wide range of ability, aptitude, and psychometric measures. None of the data were found to be normally distributed, and few distributions even remotely resembled the normal curve. Instead, the distributions were frequently multimodal, skewed, and heavy-tailed. This again violates a fundamental tenet of NHST.
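The VR check itself is trivial to run, which makes its absence from the literature all the more striking. A minimal sketch with hypothetical data (the group values are mine, for illustration only):

```python
import numpy as np

# Hypothetical scores for three groups in a one-way design.
groups = [
    np.array([12.1, 14.3, 11.8, 13.9, 15.2]),
    np.array([10.4, 18.7, 9.1, 21.3, 16.8]),
    np.array([13.0, 12.5, 14.1, 13.4, 12.9]),
]

variances = [g.var(ddof=1) for g in groups]  # unbiased sample variances
vr = max(variances) / min(variances)         # largest-to-smallest ratio

# Ratios far above 1:1 strain ANOVA's homogeneity-of-variance assumption.
print(f"Variance ratio: {vr:.2f}:1")
```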

  10. NHST • Haller and Krauss (2002) provided psychology faculty with a statistical printout and asked them to answer questions concerning interpretation of the results. Depressingly, the best result from all groups was that of lecturers teaching statistical methods classes, 80% of whom were found to agree with at least one erroneous statistical conception (Haller & Krauss, 2002). • This follows on from earlier work by Oakes (1986), who asked 70 academic psychologists for their interpretation of p < .01 and found only 8 (11%) gave the correct interpretation. Lest it be thought that these results were an aberration, similar findings have been established by many other authors (Gigerenzer, Krauss, & Vitouch, 2004; Hubbard & Bayarri, 2003). • These and other similar examples lend credence to Kline's (2004) contention that a substantial gap exists between NHST as described in the literature and its use in practice. • Aside from the well-known conceptual problems with NHST being a poor method, it is poorly understood and even more poorly implemented by psychologists. To top it off, it is overutilised. As Hubbard (2008) duly summarises: “The end result is that applications of classical statistical testing in psychology are largely meaningless.” (p. 297).

  11. IEV vs. IAV • An issue of real import for psychological research is the granularity of research methods, i.e., at which level, and by which methods, research should be conducted to generate robust psychological knowledge. Traditionally, there have been two competing and contrasting approaches, which have led to the fragmentation of psychological science (Salvatore & Valsiner, 2010). • The first is concerned with the averages and aggregates of people and groups of people, interindividual variation (IEV), whereas the contrasting approach focuses on the study of events and dispositions within a single individual history, intraindividual variation (IAV) (Krauss, 2008; Molenaar, 2004; Salvatore & Valsiner, 2010). In the former case, regression analysis, correlations, t-tests, and ANOVAs are the most common analytical tools; in the latter, visual analysis of single-case designs predominates (Saville, 2008). • Danziger (1990) has shown that during the period 1914-1916 the ratio of published IAV to IEV research was more than 2 to 1 in several leading psychology journals. These numbers shifted dramatically, however, and by the 1950s IEV research comprised more than 80% of articles in the same journals. The situation remains the same today and is described by Danziger as “the triumph of the aggregate” (p. 68).

  12. IEV vs. IAV • Although contemporary psychology has progressively identified itself as a nomothetic science, nomotheticity has been taken to mean ergodicity of psychological phenomena. That is, an individual's variability (IAV) has been assumed to be identical to the variation between persons within a given population (IEV) (Molenaar, 2004; Salvatore & Valsiner, 2010). • Classical theorems of ergodic mathematics show that analysis of IAV will fail to correspond to the pattern of IEV when a mean trend changes over time, when a covariance structure changes over time, or when a process occurs differently for different members of the population (stated formally below). Unfortunately, these are precisely the conditions that characterise much psychological research (Krauss, 2008; Molenaar, 2004; Salvatore & Valsiner, 2010). • Borkenau and Ostendorf (1998) had participants self-report items indicative of the Big 5 factors of personality, daily, over a period of 90 days. Though substantial consistency was evidenced for the factor structure of longitudinal data that had been averaged across all participants, the five-factor model fitted the intraindividual organisation of psychological dispositions for fewer than 10% of the individual participants. That is, for most participants the supposed 5 factors of personality gave way to 2-, 3-, 6-, and even 8-factor structures.
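Stated compactly (my paraphrase of the ergodicity conditions discussed in this literature, not from the slides), a process $x_i(t)$ for person $i$ is ergodic only if its moments are invariant over time and identical across persons:

$$ E[x_i(t)] = \mu \quad \text{and} \quad \mathrm{Cov}\big(x_i(t),\, x_i(t+k)\big) = \Sigma(k) \qquad \text{for all persons } i \text{ and times } t $$

i.e., stationarity plus population homogeneity; violating either condition breaks the assumed IAV-IEV equivalence.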

  13. IEV vs. IAV • Similar findings have been obtained by Cervone (2005), Cervone and Shoda (1999), Epstein (2010), Grice (2004), and Grice, Jackson, and McDaniel (2006). • The implication of the overemphasis on IEV research is that, aside from neglecting a valuable research methodology and a potential treasure trove of psychological knowledge in IAV methods, we are drawing false inferences from purely IEV research. Loftus (1996) summarises succinctly: “What we do, I sometimes think, is akin to trying to build a violin using a stone mallet and a chain saw. The tool-to-task fit is not very good, and, as a result, we wind up building a lot of poor quality violins.” (p. 161).

  14. OOM • The challenges posed by the preceding issues result largely from the mindless, ritualised, one-size-fits-all application of traditional research approaches and methods to psychological research, without regard for relevant conceptual and philosophical considerations. This view echoes the earlier sentiments of the editors of 24 scientific journals, who asserted that traditional, variable-orientated, sample-based research strategies were ill-suited to accounting for the complex causal processes undergirding psychological phenomena (NIMH consortium of editors on development and psychopathology, 2000). • The trick, then, is to get the tool-to-task fit right. • Created in direct response to criticisms of the prevailing research practices, Observation Oriented Modeling (OOM) is a novel methodology and software program for conceptualising and evaluating psychological data, created by James Grice. • OOM is premised on realism, from which seven principles follow; primacy is given to real, accurate, repeated, observable events, allowing an alternative method of explaining patterns of observations in terms of their causal structure.

  15. OOM • At the core of OOM are the deep structures of qualitatively and quantitatively ordered observations. Such structures are obtained by translating data elements into binary form. For example, the deep structure of biological sex can be represented as “1 0” for females and “0 1” for males. Similarly, on a 5-point Likert scale with highly disagree, disagree, neutral, agree, and highly agree categories, a highly disagree response would be recorded as “1 0 0 0 0” whereas an agree response would be coded “0 0 0 1 0” (see the sketch below). • In matrix form, deep structures can be manipulated according to the rules of matrix algebra, which allows for addition, subtraction, and other logical operations such as if, and, not, etc. The primary mathematical technique employed in OOM analysis, however, is Binary Procrustes Rotation, a modified form of the Procrustes rotation often used in personality psychology and factor analysis, in which sets of vectors are rotated to maximal agreement to establish common/latent variables (Grice, 2011). • The goal of OOM is to align the columns (units) in such a way that co-occurrence of 1s in the conforming matrix (the Likert responses) with 1s in the target matrix (gender) is maximised. This is accomplished using a transformation matrix which is multiplied against the conforming matrix to establish the fully rotated deep structure.
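As a concrete illustration, deep structures are simply binary indicator matrices and can be built in a few lines. A minimal sketch (the function and variable names are mine, not the OOM software's):

```python
import numpy as np

def deep_structure(responses, n_categories):
    """Encode categorical/ordered responses (0-indexed) as a binary
    deep-structure matrix: one row per observation, one column per unit."""
    m = np.zeros((len(responses), n_categories), dtype=int)
    m[np.arange(len(responses)), responses] = 1
    return m

# Biological sex: 0 = female ("1 0"), 1 = male ("0 1").
target = deep_structure([0, 0, 1, 1], 2)

# 5-point Likert scale: 0 = highly disagree ... 4 = highly agree.
conforming = deep_structure([0, 1, 4, 3], 5)

print(target)      # rows like [1 0] and [0 1]
print(conforming)  # rows like [1 0 0 0 0] and [0 0 0 1 0]
```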

  16. OOM Construction of the transformation matrix: [slide figure not reproduced in the transcript]

  17. OOM The fully rotated matrix compared to the original target matrix: [slide figure not reproduced in the transcript]. In the slide's example the two are identical, indicating that the 7-unit deep-structure depression ratings could be conformed and reduced to the 2-unit gender deep structure. That is, gender is the causal factor for depression in this model.
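Since the slide figures are not reproduced, here is a minimal sketch of one way to construct a transformation matrix, rotate, and compare. I use the classic least-squares (oblique) Procrustes solution; Grice's (2011) binary Procrustes procedure normalises differently, so treat this as illustrative only:

```python
import numpy as np

# Deep structures from the earlier sketch: 4 observations.
target = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])        # gender
conforming = np.array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0],
                       [0, 0, 0, 0, 1], [0, 0, 0, 1, 0]])  # Likert units

# Least-squares Procrustes solution for the transformation matrix,
# T = pinv(C'C) C'Z (illustrative; not the OOM software's exact rule).
C, Z = conforming.astype(float), target.astype(float)
T = np.linalg.pinv(C.T @ C) @ C.T @ Z

# Multiply the conforming matrix by T to obtain the fully rotated
# deep structure, then compare it with the target.
rotated = np.rint(C @ T).astype(int)
print(rotated)
print(np.array_equal(rotated, target))  # True: the units fully conform
```

Here the rotated matrix reproduces the target exactly, mirroring the slide's conclusion that the conforming units reduce to the target's deep structure.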

  18. OOM • Three statistics are employed in OOM to ascertain the success of the rotation and, thus, how confident we can be in attributing causes from the conforming matrix back to the target matrix (see the sketch below). • Classification Strength Index (CSI): measures the degree to which values from the target and rotated matrices match, ranging from 0 (no match) to 1 (complete agreement). • Percent Correct Classification (PCC): measures the similitude between rows of the transformed and target matrices, e.g. 0 1 0 vs. 0 0 1 or 0 .8 0, etc. Rows are classified as correct, incorrect, or ambiguous depending on whether a perfect, imperfect, or partial match has been obtained. • Chance Value (CVAL): a randomised resampling technique that reshuffles rows and recalculates the PCC values to see how often chance data are at least as accurate as the actual results. A lower CVAL indicates a more unique result. • In the OOM software all this information is summarised in an output screen and graphically via a multilevel frequency histogram, or multigram.
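A minimal sketch of the PCC and CVAL logic (my simplification; the OOM software's exact matching and ambiguity rules may differ). It reuses the `target` and `conforming` matrices and the rotation step from the previous sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate(C, Z):
    """Illustrative least-squares Procrustes rotation (as above)."""
    C, Z = C.astype(float), Z.astype(float)
    return C @ (np.linalg.pinv(C.T @ C) @ C.T @ Z)

def pcc(rotated, target):
    """Percent correct classification: share of observations whose
    binarised rotated row exactly matches the target row."""
    return 100 * np.all(np.rint(rotated) == target, axis=1).mean()

def cval(conforming, target, n_trials=1000):
    """Chance value: proportion of row-reshuffled datasets whose PCC
    is at least as high as the observed PCC (lower = more unique)."""
    observed = pcc(rotate(conforming, target), target)
    hits = sum(
        pcc(rotate(rng.permutation(conforming), target), target) >= observed
        for _ in range(n_trials)
    )
    return hits / n_trials
```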

  19. OOM The output screen: [slide figure not reproduced in the transcript]

  20. OOM The Multigram: [slide figure not reproduced in the transcript]

  21. OOM • Matrix Restrictions: to allow for successful rotation, there must be common elements between the target and conforming matrices. For example, though a 2x6 matrix could be rotated with a 10x6 matrix, a 4x3 matrix could not be conformed to a 2x5 matrix. • CVAL Assumptions: while fewer than those of traditional statistical analyses, particularly NHST, the CVAL procedure assumes that observations between the target and conforming matrices are independent. • Rich Analysis Options: aside from a full suite of descriptive statistics, OOM also includes other analysis tools such as model-observation separation and pairwise rotation analyses. OOM can replace the chi-square tests, t-tests, correlation analyses, ANOVAs, MANOVAs, bivariate and multiple regressions, and item comparison procedures of traditional statistical inference (Grice, 2011). • Ease of Use and Efficiency: in replacing many analyses with one tool, and avoiding the work required to learn and validate traditional statistical analyses, OOM can greatly simplify and economise the data analytic process. This also makes it very easy to learn and use (Grice, 2011). • The greatest advantage of OOM, however, is its respect for the philosophical and conceptual considerations of the issues outlined above.

  22. OOM • OOM avoids the dubious and oft-violated assumptions of NHST and respects the mathematical theorems of ergodicity by working at the observational level and using replication as the means of generalisation. • Potentially the most important advantage of OOM is that it allows for an alternative method of inferring causal attributions without assuming quantitative structure. If, in fact, many psychological variables are not quantitative, we still have a powerful method for undertaking robust psychological research. • OOM has been used to challenge conclusions drawn from previous psychological studies that relied on traditional methods of statistical inference, such as the bystander effect (Grice, 2011). • Grice, J., Barrett, P., Schlimgen, L., & Abramson, C. (2012). Toward a brighter future for psychology as an observation oriented science. Behavioral Sciences, 2, 1-22. • Grice, J. (2011). Observation oriented modeling: Analysis of cause in the behavioral sciences. New York: Elsevier. • The OOM software and excerpts from the accompanying textbook can be downloaded free from: http://booksite.academicpress.com/grice/oom • Which leaves just one question...

  23. Which Do You Choose?
