
MGTO 324 Recruitment and Selections



  1. MGTO 324 Recruitment and Selections Validity I (Construct Validity) Kin Fai Ellick Wong Ph.D. Department of Management of Organizations Hong Kong University of Science & Technology

  2. Prologue • Why validity? Why is it important to personnel selection? • According to the Standards for Educational and Psychological Testing (1985) from the American Psychological Association, employment testing should meet the following standards • “Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.” (p. 9) • In particular, when discussing employment testing • “In employment settings tests may be used in conjunction with other information to make predictions or decisions about individual personnel actions. The principal obligation of employment testing is to produce reasonable evidence for the validity of such predictions and decisions.” (p. 59)

  3. Prologue • A refreshment of some essential concepts • What is validity? • The extent to which a test measures what it is supposed to measure • In the APA’s standards • Validity “refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores” (p. 9) • What is test validation? • The establishment of validity • The process of examining the extent to which a test is valid • It may or may not be statistical, depending on which type of evidence is being examined

  4. Prologue • A refreshment of some essential concepts • What is the relationship between reliability and validity? • When a test is very low in reliability, it implies that the test scores represent a large amount of random error… so can a test with low reliability be a valid test? • When a test is high in reliability, it implies that the test scores represent something meaningful (i.e., not errors)… so is it also a valid test? • Maximum validity coefficient (r12max)
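
The bound behind this last bullet is the standard correction-for-attenuation result; a minimal statement of it, writing r11 for the reliability of the test and r22 for the reliability of the criterion:

```latex
% Maximum possible validity coefficient between a test (1) and a criterion (2),
% given their reliabilities r_{11} and r_{22}.
% Example: r_{11} = 0.70 and r_{22} = 0.80 give r_{12max} ≈ 0.75.
\[
  r_{12\max} = \sqrt{r_{11}\, r_{22}}
\]
```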

  5. Prologue • How Reliability Affects Validity* • *The first column shows the reliability of the test. The second column shows the reliability of the validity criterion. The numbers in the third column are the maximum theoretical correlations between tests, given the reliability of the measures. Source: Psychological Testing: Principles, Applications, and Issues (5th ed.), p. 150.
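
The table itself is an image in the original deck and does not survive in the transcript. The sketch below computes the kind of values its third column contains from the first two; the reliability pairs are illustrative, not the textbook's exact rows.

```python
import math

# Maximum theoretical validity coefficient given the reliability of the test (r11)
# and the reliability of the criterion (r22): r12_max = sqrt(r11 * r22).
# The reliability pairs below are illustrative only.
pairs = [(1.0, 1.0), (0.9, 0.9), (0.8, 0.7), (0.6, 0.6), (0.4, 0.5)]

for r11, r22 in pairs:
    r12_max = math.sqrt(r11 * r22)
    print(f"test reliability = {r11:.1f}, criterion reliability = {r22:.1f}, "
          f"maximum validity = {r12_max:.2f}")
```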

  6. Prologue • A refreshment of some essential concepts • Three types of validity • Content-related validity • Construct-related validity • Criterion-related validity • Face validity??? • Some remarks • Validation is a necessary step in test development • Do not be too rigid about making distinctions among these three types of validity • “All validation is one, and in a sense all is construct validation” (Cronbach, 1980)

  7. Outline

  8. Outline

  9. Part 1: Psychological constructs • What is a psychological construct? • Something constructed by mental synthesis • Does not exist as a separate thing we can touch or feel • Cannot be used as an objective criterion • Intelligence, love, curiosity, mental health • Some constructs are conceptually related • Self-esteem, General self-efficacy, Self-image • Need for power, Aggressiveness, Need for achievement • Need for cognition, curiosity • Some constructs are conceptually not so related • Need for power vs. Conscientiousness • Intelligence vs. Emotional Stability

  10. Part 1: Psychological constructs • What is construct validity? • The extent to which a test measures a theoretical concept/construct • What does it mean? • What are the relationships with other constructs? • A series of activities in which a researcher simultaneously defines some constructs and develops the instrumentation to measure them • A gradual process • Each time a relationship is demonstrated, a bit of meaning is attached to the test

  11. Outline

  12. Part 2: Assessing construct validity • Construct-related validation • Step 1: Defining the construct • Step 2: Identifying related constructs • Step 3: Identifying unrelated constructs • Step 4: Preliminary assessment of the test validity • converging (convergent) evidence • divergent (discriminant) evidence • Step 5: Assessing validity using statistical methods • Multi-Trait-Multi-Method (MTMM) Technique • Factor Analysis

  13. Part 2: Assessing construct validity • Construct-related validation • Step 1: Defining the construct • Defining Aggressiveness:

  14. Part 2: Assessing construct validity • Construct-related validation • Steps 2 & 3: Identifying related and unrelated constructs

  15. Part 2: Assessing construct validity • Construct-related validation • Steps 2 & 3: Identifying related and unrelated constructs

  16. Part 2: Assessing construct validity • Construct-related validation • Step 4: Preliminary assessment of the test validity • converging (convergent) and divergent (discriminant) evidence • A = Aggressiveness; N = Need for Power; H = Honesty
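
The correlation table on this slide is an image and is not reproduced in the transcript. The sketch below simulates what Step 4 looks for; the population correlations (0.6 between A and N, 0.1 elsewhere) are assumptions chosen only to illustrate the expected convergent/discriminant pattern.

```python
import numpy as np

# Simulated scores on a new Aggressiveness test (A), an established Need for
# Power test (N, conceptually related), and an Honesty test (H, conceptually
# unrelated), drawn from an assumed population correlation structure.
rng = np.random.default_rng(0)
true_corr = np.array([
    [1.0, 0.6, 0.1],   # A with A, N, H
    [0.6, 1.0, 0.1],   # N
    [0.1, 0.1, 1.0],   # H
])
scores = rng.multivariate_normal(mean=[0, 0, 0], cov=true_corr, size=500)
observed = np.corrcoef(scores, rowvar=False)

print("r(A, N) =", round(observed[0, 1], 2))  # convergent evidence: should be high
print("r(A, H) =", round(observed[0, 2], 2))  # discriminant evidence: should be low
```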

  17. Part 2: Assessing construct validity • Construct-related validation • Step 4: Preliminary assessment of the test validity • converging (convergent) and divergent (discriminant) evidence • Problems… • Tests using similar methods are likely to be correlated with each other to a certain extent • Teacher rating • Conservative teachers vs. liberal teachers

  18. Part 2: Assessing construct validity • Construct-related validation • Step 4: Preliminary assessment of the test validity • Problems…

  19. Part 2: Assessing construct validity • Construct-related validation • Step 5: Assessing validity using statistical methods • Solutions • The multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959) • Multi-trait • Measuring more than one construct • Honesty, Aggressiveness, Intelligence • Multi-method • Measured by more than one method • Teacher rating, Tests, Observers’ rating

  20. Part 2: Assessing construct validity • Construct-related validation • Step 5: Assessing validity using statistical methods • Solutions • The multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959) • Tests measuring different constructs with different methods should have low intercorrelation • Tests measuring different constructs with the same method should have moderate intercorrelation (i.e., divergent evidence) • Tests measuring the same construct with different methods should have high intercorrelation (i.e., convergent evidence)

  21. Part 2: Assessing construct validity • Tests measuring different constructs with different methods should have low intercorrelation

  22. Part 2: Assessing construct validity • Tests measuring different constructs with the same method should have moderate intercorrelation (i.e., divergent evidence)

  23. Part 2: Assessing construct validity • Tests measuring the same construct with different methods should have high intercorrelation (i.e., convergent evidence)
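
A minimal MTMM sketch, assuming three traits (Honesty, Aggressiveness, Intelligence) and two methods (teacher rating and a written test); the trait, method, and error weights below are made up purely so that the resulting correlation matrix shows the three patterns just described.

```python
import numpy as np
import pandas as pd

# Simulate each measure as trait signal + method bias + unique error.
rng = np.random.default_rng(1)
n = 500
traits = {t: rng.standard_normal(n) for t in ["Honesty", "Aggressiveness", "Intelligence"]}
methods = {m: rng.standard_normal(n) for m in ["TeacherRating", "Test"]}

measures = {}
for t, trait_score in traits.items():
    for m, method_bias in methods.items():
        measures[f"{t}_{m}"] = (0.7 * trait_score                 # shared trait variance
                                + 0.4 * method_bias               # shared method variance
                                + 0.5 * rng.standard_normal(n))   # unique error

mtmm = pd.DataFrame(measures).corr().round(2)
print(mtmm)

# Expected pattern in the printed matrix:
#  - same trait, different method      -> highest correlations (convergent evidence)
#  - different trait, same method      -> moderate correlations (method effect)
#  - different trait, different method -> lowest correlations
```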

  24. Part 2: Assessing construct validity • Factor Analysis • Is there a general (common) factor determining performance in various subjects? • M = Mathematics, P = Physics, C = Chemistry, E = English, H = History, F = French • Is performance determined by a single factor? • i.e., General Intelligence (I) explains an individual’s performance in all subjects • Or are there two factors? • i.e., one group of subjects is determined by Quantitative Ability (Q), and another group is determined by Verbal Ability (V)

  25. Part 2: Assessing construct validity • A single-factor model • A two-factor model

  26. Part 2: Assessing construct validity • Which one is more correct? • It is determined by the eigenvalues, i.e., the total amount of variance accounted for by a particular factor • When a two-factor model accounts for significantly more variance than a single-factor model, we believe that there are two factors • When there is no such evidence, a single-factor model is preferred • I’ll show you the steps and interpretation of Factor Analysis results in the coming Workshop
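
A sketch of the eigenvalue comparison described on this slide, using a hypothetical correlation matrix for the six subjects in which the quantitative group (M, P, C) and the verbal group (E, H, F) correlate 0.7 within groups and 0.3 across groups.

```python
import numpy as np

# Hypothetical correlations among the six subjects (M, P, C, E, H, F).
subjects = ["M", "P", "C", "E", "H", "F"]
R = np.array([
    [1.0, 0.7, 0.7, 0.3, 0.3, 0.3],
    [0.7, 1.0, 0.7, 0.3, 0.3, 0.3],
    [0.7, 0.7, 1.0, 0.3, 0.3, 0.3],
    [0.3, 0.3, 0.3, 1.0, 0.7, 0.7],
    [0.3, 0.3, 0.3, 0.7, 1.0, 0.7],
    [0.3, 0.3, 0.3, 0.7, 0.7, 1.0],
])

# Each eigenvalue is the amount of total variance accounted for by one factor.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigenvalues.round(2))
# Two eigenvalues clearly stand out for this matrix, so a two-factor (Q + V)
# model accounts for substantially more variance than the remaining factors.
```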

  27. Part 2: Assessing construct validity • How can I use Factor Analysis to examine the construct validity of a test? • If the test is assumed to measure a single construct (e.g., need for power, self-efficacy), a single-factor model is expected • Construct validity is evident when Factor Analysis yields a single-factor model • If the test is assumed to have multiple facets (e.g., Intelligence, including memory, verbal ability, spatial-visual ability, etc.) • Construct validity is evident when Factor Analysis yields a model that (a) has multiple factors and (b) theoretically related items load onto the same factor
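
A sketch of the loading check for a multi-facet test. The item structure is assumed (items 1-3 are built from one hypothetical facet, items 4-6 from another); with real test data the analyst would instead check whether the estimated loadings match the theoretical assignment of items to facets.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate six items: items 1-3 draw mainly on facet 1, items 4-6 on facet 2.
rng = np.random.default_rng(2)
n = 500
facet1 = rng.standard_normal(n)
facet2 = rng.standard_normal(n)
items = np.column_stack([
    0.9 * facet1 + 0.4 * rng.standard_normal(n),  # item 1 (facet 1)
    0.9 * facet1 + 0.4 * rng.standard_normal(n),  # item 2 (facet 1)
    0.9 * facet1 + 0.4 * rng.standard_normal(n),  # item 3 (facet 1)
    0.6 * facet2 + 0.4 * rng.standard_normal(n),  # item 4 (facet 2)
    0.6 * facet2 + 0.4 * rng.standard_normal(n),  # item 5 (facet 2)
    0.6 * facet2 + 0.4 * rng.standard_normal(n),  # item 6 (facet 2)
])

fa = FactorAnalysis(n_components=2).fit(items)
# Rows = factors, columns = item loadings; each item should load mainly on
# the factor corresponding to its intended facet.
print(np.round(fa.components_, 2))
```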
