
Validity. Test Validity & Experiment Validity.



  1. Validity. Test Validity & Experiment Validity. I N A O E Niusvel Acosta Mendoza

  2. Criterion Validity. I N A O E

  3. Criterion validity • Split into concurrent validity (other criteria assessed simultaneously) and predictive validity (predicting future or past events) sub-areas. • Scores on a test or simulated exercise are correlated with measures of actual on-the-job performance. • Extent to which a selection technique can accurately predict one or more important elements of job behavior.

  4. Concurrent validity • It relates a test to an existing, similar measure. • It is the relationship between test scores and some criterion measure of job performance or training performance obtained at the same time. • It refers to a measurement's ability to correlate, or vary directly, with an accepted measure of the same construct. • It demonstrates that scores on one test are related to scores on another test, which could be administered at the same time or in place of the other test.

  5. Predictive validity • It asks whether the test predicts later performance on a related criterion. • It refers to a measurement's ability to predict scores on another, related measurement. • It demonstrates that scores on one test are related to scores on an outcome that cannot (or will not) occur until some time in the future.

  6. Concurrent & predictive validities • The difference between concurrent validity and predictive validity rests solely on the time at which the two measures are administered. • Concurrent validity applies to validation studies in which the two measures are administered at approximately the same time. • Concurrent validity may be used as a practical substitute for predictive validity.
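The timing distinction above can be sketched numerically. Below is a minimal Python simulation of a concurrent-validation study (all data, reliabilities, and variable names are hypothetical): a new test and a criterion measure collected at the same time are correlated to give a validity coefficient.

```python
# Minimal sketch of a concurrent-validation study with synthetic data:
# a new selection test and a criterion (a supervisor rating) are both
# driven by the same latent ability and measured at the same time.
import numpy as np

rng = np.random.default_rng(0)
n = 200
true_ability = rng.normal(size=n)                          # latent trait
test_score = true_ability + rng.normal(scale=0.5, size=n)  # new test
job_rating = true_ability + rng.normal(scale=0.5, size=n)  # criterion, same time

r = np.corrcoef(test_score, job_rating)[0, 1]              # validity coefficient
print(f"concurrent validity coefficient: r = {r:.2f}")
```

A predictive-validity study would use the very same correlation, except that the criterion (e.g., the performance rating) would be collected months later; only the timing of the two measures differs.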

  7. Internal validity • The quality of a metric or model that allows sampling free of bias. • Unlike construct validity, it does not imply that what you are measuring is related to the modelled phenomenon; it is concerned only with measurement or modelling bias. • Internal validity is fully achieved when there are irrefutable arguments showing that the intervention has had (or has not had) a certain effect. • More often than not, it requires a controlled experiment (with a control group). • Remember, there may be confounds; e.g., an alternative hypothesis may explain the results, so there is no construct validity, but you may still have internal validity. • It confirms that your experiment was performed correctly. • It is concerned with causality (yet it does not by itself establish causality!)

  8. Internal validity • Internal validity guarantees that evidence can be communicated directly. • Internal validity may be at risk when: • The analysis does not support causal relations adequately • Groups being compared are not sufficiently homogeneous • Results may not reach statistical significance

  9. Experimental Validity. I N A O E

  10. Experimental validity • The validity of the design of experimental research studies is a fundamental part of the scientific method, and a concern of research ethics. Without a valid design, valid scientific conclusions cannot be drawn.

  11. Statistical Conclusion Validity • Is the degree to which conclusions about the relationship among variables, based on the data, are correct or reasonable. • Involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. • As this type of validity is concerned solely with the relationship that is found among variables, the relationship may be solely a correlation.

  12. Statistical Conclusion Validity • Fundamentally, two types of errors can occur: • finding a difference or correlation when none exists • finding no difference when one exists • Concerns the qualities of the study that make these types of errors more likely. • Involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.

  13. Common threats • The most common threats to statistical conclusion validity are: • Low statistical power • Violated assumptions of the test statistics • Fishing and the error rate problem • Unreliability of measures • Restriction of range • Heterogeneity of the units under study • Threats to Internal Validity

  14. Common threats • Low statistical power • Power is the probability of correctly rejecting the null hypothesis when it is false. • Experiments with low power have a higher probability of incorrectly retaining a false null hypothesis. • Low power occurs when the sample size of the study is too small given the other factors (small effect sizes, large group variability, unreliable measures, etc.)
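The effect of sample size on power can be illustrated with a short Monte-Carlo sketch. The effect size (d = 0.3 SD) is hypothetical, and a two-sample z-test with known variance is assumed purely for simplicity:

```python
# Monte-Carlo sketch: power is the fraction of simulated experiments that
# correctly reject a false H0. Effect size d = 0.3 SD is hypothetical;
# a two-sample z-test with known sigma = 1 is assumed for simplicity.
import numpy as np

rng = np.random.default_rng(1)

def power(n, d=0.3, sims=2000):
    """Estimate two-sided power at alpha = 0.05 with n subjects per group."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)          # control group
        b = rng.normal(d, 1.0, n)            # treatment group, true effect d
        z = (b.mean() - a.mean()) / np.sqrt(2.0 / n)
        if abs(z) > 1.96:                    # reject H0 at alpha = 0.05
            hits += 1
    return hits / sims

print(f"power at n=20 per group:  {power(20):.2f}")
print(f"power at n=200 per group: {power(200):.2f}")
```

With 20 subjects per group the simulated study detects this small effect only rarely; with 200 per group it does so most of the time.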

  15. Common threats • Violated assumptions of the test statistics • Most statistical tests (particularly inferential statistics) involve assumptions about the data that make the analysis suitable for testing a hypothesis. • Violating the assumptions of statistical tests can lead to incorrect inferences about the cause-effect relationship. • The robustness of a test indicates how insensitive it is to such violations.
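As a concrete (hypothetical) illustration, the simulation below applies a pooled-variance t-test to groups with very unequal variances and sizes; under a true null hypothesis it rejects far more often than the nominal α = 0.05:

```python
# Sketch of an assumption violation: a pooled-variance t-test applied to
# groups with very unequal variances and unequal sizes rejects a true H0
# far more often than the nominal alpha = 0.05 (all parameters hypothetical).
import numpy as np

rng = np.random.default_rng(4)

def false_positive_rate(sims=2000):
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 3.0, 10)    # small group, large variance
        b = rng.normal(0.0, 1.0, 100)   # large group, small variance
        sp2 = (9 * a.var(ddof=1) + 99 * b.var(ddof=1)) / 108  # pooled variance
        t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / 10 + 1 / 100))
        if abs(t) > 1.98:               # ~critical t for 108 df, alpha = 0.05
            hits += 1
    return hits / sims

print(f"Type I error rate under violated equal-variance assumption: "
      f"{false_positive_rate():.2f}")
```

A Welch-style test, which does not assume equal variances, would keep the error rate near 0.05 in this situation; that is what robustness (or its absence) looks like in practice.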

  16. Common threats • Fishing and the error rate problem • If a researcher fishes through their data, testing many different hypotheses to find a significant effect, they are inflating their error rate. • The more the researcher repeatedly tests the data, the higher the chance of observing an error and making an incorrect inference about the existence of a relationship.
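The inflation is easy to quantify: for k independent tests each run at α = 0.05, the chance of at least one false positive is 1 − (1 − α)^k. A quick sketch:

```python
# Familywise error rate when "fishing" through k independent hypothesis
# tests, each run at the same per-test alpha.
alpha = 0.05
for k in (1, 5, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {familywise:.2f}")
```

With 20 looks at the data, the chance of at least one spurious "significant" finding is already about 64%, even though each individual test holds its 5% error rate.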

  17. Common threats • Unreliability of measures • If the dependent and/or independent variable(s) are not measured reliably (i.e., with large amounts of measurement error), incorrect conclusions can be drawn.
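Classical test theory calls this attenuation: measurement error shrinks an observed correlation toward zero relative to the true one. A simulation with hypothetical noise levels:

```python
# Sketch of attenuation: adding measurement error to both variables
# shrinks the observed correlation well below the true correlation.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x_true = rng.normal(size=n)
y_true = 0.8 * x_true + rng.normal(scale=0.6, size=n)  # strong true relation

x_noisy = x_true + rng.normal(scale=1.0, size=n)       # unreliable measure of x
y_noisy = y_true + rng.normal(scale=1.0, size=n)       # unreliable measure of y

r_true = np.corrcoef(x_true, y_true)[0, 1]
r_obs = np.corrcoef(x_noisy, y_noisy)[0, 1]
print(f"true r = {r_true:.2f}, observed r with noisy measures = {r_obs:.2f}")
```

Here a true correlation of about 0.8 is observed as roughly 0.4 once both measures carry substantial error, which is exactly how unreliable measurement leads to incorrect conclusions about effect size.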

  18. Common threats • Restriction of range • Restriction of range, such as floor and ceiling effects or selection effects, reduces the power of the experiment.
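A hypothetical selection effect makes the point concrete: computing a correlation only among top scorers (as in a validation study restricted to hired applicants) visibly weakens the observed relationship.

```python
# Sketch of restriction of range: correlating x and y only among the
# top ~16% of x (a selection effect) weakens the observed correlation.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.7, size=n)

r_full = np.corrcoef(x, y)[0, 1]
keep = x > 1.0                       # only high scorers are "selected"
r_restricted = np.corrcoef(x[keep], y[keep])[0, 1]
print(f"full-range r = {r_full:.2f}, restricted-range r = {r_restricted:.2f}")
```

The relationship has not changed; the truncated sample simply no longer contains enough variation in x to reveal it at full strength.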

  19. Common threats • Heterogeneity of the units under study • Greater heterogeneity of individuals participating in the study can also impact interpretations of results by increasing the variance of results or obscuring true relationships.

  20. Common threats • Threats to Internal Validity • Any effect that can impact the internal validity of a research study may bias the results and impact the validity of statistical conclusions reached. • These threats to internal validity include unreliability of treatment implementation (lack of standardization) or failing to control for extraneous variables.

  21. I N A O E

  22. Bibliography • http://www.nicheconsulting.co.nz/validity_reliability.htm • http://www.socialresearchmethods.net/kb/measval.php • S. Wolming, C. Wikström. The concept of validity in theory and practice. Assessment in Education: Principles, Policy & Practice. Vol. 17(2):117-132, 2010. • P.C. Cozby. Methods in behavioral research. 10th ed. Boston: McGraw-Hill Higher Education, 2009. • W. Shadish, T.D. Cook, D.T. Campbell. Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, 2006. • D. Borsboom, G.J. Mellenberg, J. van Heerden. The Concept of Validity. Psychological Review. Vol. 111(4):1061-1071, 2004. • R.J. Cohen, M.E. Swerdlik. Psychological testing and assessment. 6th ed. Sydney: McGraw-Hill, 2004. • T.D. Cook, D.T. Campbell, A. Day. Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin, 1979.

  23. Validity. Test Validity & Experiment Validity. I N A O E Niusvel Acosta Mendoza
