Slides on case selection, case studies, and interviewing.
Knowing what to observe • Causal inference is the objective of social science • Involves learning about facts we do not observe based on what we do observe • Recall that we cannot observe causation directly • This defines what we select for observation
Correlation and causation • We observe association… • …but make inferences about causation • Making the case for causation: • Temporal precedence: change in the IV occurs prior to change in the DV • Alternative explanations for the association have been rebutted: • Endogeneity: the supposed DV in fact causes the IV (reverse causation) • Spuriousness: a third variable causes changes in both the supposed IV and DV • Another variable is the real IV – after controlling for it, the association between the supposed IV and DV disappears
Random selection • Each case from the population has a known probability of being selected • The selection is probabilistic rather than intentional • Not haphazard or arbitrary • The real advantage is that this selection criterion is independent of the cases’ values on the DV and IV • Unbiased results • Different types of random selection methods; more on that later (see the sketch below)
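As a minimal sketch of this logic (the population size, seed, and sample size are all hypothetical), simple random sampling gives every case a known, equal inclusion probability of n/N, chosen without any reference to the cases' values on the IV or DV:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
population = np.arange(10_000)          # sampling frame: one ID per case

# Each case has a known, equal probability of selection (500/10000 = 0.05),
# determined by the random draw, not by values on the IV or DV
sample = rng.choice(population, size=500, replace=False)
print(len(sample), sample[:5])
```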
Intentional selection • Involves selecting cases using knowledge of the cases’ values on the IVs (sometimes the DV too) • When random selection is impossible or inappropriate • No sampling frame • Small populations and/or samples • Important cases cannot be missed
Eliminating alternative causal interpretations • Control for other causes (Independent Variables: IVs) • Experimental design • Random assignment of values of the IV to observations – only then do we observe values on DV • Use of control group • Use of pre-treatment measurements of DV
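A small simulation (all numbers invented, including the assumed effect of 5.0) of the design just described: random assignment of the IV, a control group, and a pre-treatment measure of the DV, after which a simple difference in means recovers the causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pre_dv = rng.normal(50, 10, n)            # pre-treatment measurement of the DV
treated = rng.random(n) < 0.5             # random assignment to treatment/control

true_effect = 5.0                         # assumed effect, for the simulation only
post_dv = pre_dv + treated * true_effect + rng.normal(0, 5, n)

# Because assignment is random, a simple difference in means is unbiased
estimate = post_dv[treated].mean() - post_dv[~treated].mean()
print(f"estimated treatment effect: {estimate:.2f}")
```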
Quasi-experiments are more common in political science • The “natural” experiment (it would be more accurate to call this non-experimental research) • Use of a control group • But the researcher has not assigned observations to the treatment or control group • Often no pre-treatment measure
Statistical control • Examine the relationship between an IV and a DV while holding constant the value of other variables • E.g. Height and test scores controlling for age • Simpson’s paradox: the direction of the association changes when controlling for another variable
[Figure: an example of a spurious relationship. Scatterplot of math test score against height, where the plotted numbers (8, 10, 12) refer to the ages of the pupils.]
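To make the figure concrete, here is a minimal simulated version (ages, heights, and scores are all invented): pooling the age groups produces a strong positive height-score correlation, but within each age group the correlation is roughly zero, i.e. the relationship is spurious once age is held constant.

```python
import numpy as np

rng = np.random.default_rng(1)
groups = []
for age in (8, 10, 12):
    height = rng.normal(100 + 5 * age, 5, 100)   # older pupils are taller
    score = rng.normal(5 * age, 5, 100)          # ...and score higher, independently of height
    groups.append((age, height, score))

heights = np.concatenate([h for _, h, _ in groups])
scores = np.concatenate([s for _, _, s in groups])
print("pooled correlation:", round(np.corrcoef(heights, scores)[0, 1], 2))   # strongly positive
for age, h, s in groups:
    print(f"age {age} correlation:", round(np.corrcoef(h, s)[0, 1], 2))      # roughly zero
```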
Regression to the mean • We observe random / non-systematic variation in the values of variables • E.g. yearly oscillations in crime statistics • Implication for the selection of cases for making causal inference • A high value, followed by a policy intervention, followed by a lower value, does not necessarily indicate an effect of the intervention • Solution: larger number of observations before and after
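A small simulation (invented crime-style figures fluctuating randomly around a mean of 100) of regression to the mean: a high year tends to be followed by a lower one even though nothing intervened, which is exactly why a high value, an intervention, and a lower value do not establish an effect.

```python
import numpy as np

rng = np.random.default_rng(2)
years = rng.normal(100, 15, 10_000)       # purely random variation, no trend, no policy

high = years[:-1] > 120                   # "crisis" years that might trigger an intervention
change = years[1:][high] - years[:-1][high]
print("mean change after a high year:", round(change.mean(), 1))   # negative on average
```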
Avoid indeterminacy • Research designs that are indeterminate • Where, due to the cases selected, it is impossible to make valid inferences about causal effects • Can result from having more inferences than observations • …or from multicollinearity: where the IVs are correlated very strongly • A point of difference between quantitative and qualitative research
Multicollinearity • One of the supposed IVs is a perfect function of the other • No variation in one of the IVs at a given value of the other IV • E.g. Democracies (IV1) and trading partners (IV2) do not go to war (DV) with each other – then only examining democracies that trade • Success in joint research collaboration (DV) is caused by non-rivalry: pairs of countries that are different in size (IV1) and that do not trade with the same third countries (IV2) – then only examining differently-sized countries that do not trade
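A minimal sketch of perfect multicollinearity (data and the "democracy" label are hypothetical, echoing the democracies-that-trade example above): if one IV is a perfect function of the other in the cases selected, the design matrix is rank-deficient and the two effects cannot be separated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
iv1 = rng.random(n)               # e.g. a "democracy" score (hypothetical)
iv2 = 1.0 - iv1                   # perfectly determined by iv1 in the cases selected
X = np.column_stack([np.ones(n), iv1, iv2])

# Only 2 linearly independent columns out of 3: no unique solution exists
# for the separate coefficients of iv1 and iv2
print(np.linalg.matrix_rank(X))   # -> 2
```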
Avoid selection bias • Where the cases are not representative of the population you are trying to make inferences about • The associations are distorted, and apply only to the group of specific cases selected • Worst and most obvious: selecting cases to support favorite hypothesis • But there are other, more subtle manifestations • Selecting on the availability of data • What the historical record preserves
Avoid selecting on the DV • No variation in the DV • Akin to Mill’s method of agreement • Cannot be sure that the absence of the observed values on the IVs would be associated with different values on the DV • Limited variation in the DV • Underestimates the size of causal effects (see the simulation below) • Van Evera: is this “a methodological myth”?
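A minimal simulation (invented data, with the true slope set at 2.0) of how limiting variation in the DV attenuates the estimated causal effect: keeping only the high-DV cases shrinks the estimated slope well below its true value.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 5_000)                    # IV
y = 2.0 * x + rng.normal(0, 1, 5_000)          # DV, true slope 2.0 (assumed)

slope_all = np.polyfit(x, y, 1)[0]
keep = y > 1.0                                 # selecting cases on the DV
slope_sel = np.polyfit(x[keep], y[keep], 1)[0]
print(f"full sample: {slope_all:.2f}, selected on DV: {slope_sel:.2f}")
# the selected-sample slope is well below 2.0: the effect is underestimated
```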
Some examples of selection bias • Porter (1990) The Competitive Advantage of Nations • Rational Deterrence Theory (see the article on the reading list by Achen and Snidal (1989)) • Causes of industrial unrest / strikes • And a study that recognised and avoided selection bias: • Tilly (1975) The Formation of Nation States in Western Europe
Selecting on IV • Selecting on cases with only a restricted set of values on the IV • Does not bias causal inferences • But if there is no variation on the IV, no causal inferences can be made • E.g. the effect of single-party government on pledge fulfilment by studying only single-party governments • E.g. the effect of industrialisation on the prestige attributed to various occupations by studying only industrialised countries • In general, maximise variation on IV
Making the most of the available information • Maximising the number of observations • Examining lower levels of aggregation than the entire “case” • E.g. specific decisions within the Cuban missile crisis • Avoid throwing away data by aggregating them
Case study research • “Intensive study of a single unit for the purpose of understanding a larger class of (similar) units” (Gerring 2004) • Case: “a phenomenon for which we report and interpret only a single measure on any pertinent variable” (Eckstein 1975) • Qualitative, small-n, ethnographic, clinical, participant-observation or otherwise “in the field” (Yin 1994) • Characterised by process-tracing (see Van Evera 1997, Chap. 2)
The ontological position (Gerring 2004) • [Figure: the utility of the case study design as a function of the assumed comparability of potential units, ranging from the idiographic to the nomothetic, with a “case study ideal” indicated]
n=1 case studies • Rare, although thought to be typical of case study research • When n=1 might be appropriate: • A theory may generate precise predictions • Leading to a crucial case study • A “least likely” observation (Eckstein 1975)
Single observations cannot provide sufficient evidence • When there are alternative explanations • Indeterminate research design: more inferences than observations • When there is measurement error • In either dependent or independent variables • When the causes are not deterministic • Either because of unknown conditions or fundamental probabilistic nature
Where can more observations be found in a single “case”? • Ask: What are the possible observable implications of the theory or hypothesis being tested? • How many instances of these implications can be found in this case?
Look within units • Comparability of observations • Spatial variation • E.g. subnational regions and communities • Sectoral variation • E.g. within different policy areas / government agencies • Temporal variation • NB: may not be “independent” but still provide additional information
Measure new implications of theory • Same observations, but different measurements • E.g. a hypothesis that predicts social unrest may also have implications for voting behaviour, business investment and/or emigration • E.g. Putnam’s (1993) many measures of government performance
Considerations that drive the need for more observations • Variability in DV • Need for certainty about existence and magnitude of cause • Multicollinearity: large overlap between different IVs • Little variation on IV
What are case studies good for? • Different schools of thought • “Historical wisdom about the limits of current theory, and empirical generalization to be explained by future theory”…but not… “theory construction and theory verification” (Achen and Snidal 1989) • Theory testing, causal inference (see Van Evera 1997)
Achen & Snidal on Rational Deterrence Theory • The theory: a very general set of propositions • The rational actor assumption • Variation in outcomes explained by differences in actors’ opportunities • States act as if they are unitary and rational • Predictions: use of threats to make other actors behave in desirable ways • Real-world implications • “rationality of irrationality” • Dangers of disarmament • Balance of power
Case study evidence is said to refute deterrence theory • Many examples where deterrence failed to avert conflict • Many other variables that inform policymakers’ choices (e.g. domestic factors, ideology) • Decision-makers do not carry out rational calculations
Selection bias • Researchers have studied crises • Cases where deterrence failed • Cannot study crises that have been averted • Could, however, study pairs of countries with serious conflicts, whether or not these result in use of force • Problem of no variance on the DV
Lists of other important variables • A theory’s explanatory power does not mean that it has to match the historical record of a particular case • Lists of other important variables are not “theory” in the sense of a set of general assumptions about how people act from which hypotheses are derived
Decision-makers’ calculations • Process tracing (see Van Evera 1997) • Identify mechanisms through which a particular outcome was reached (e.g. how bipolarity leads to peace) • Look at the individual decisions leading up to the final outcome to be explained • Often involves identification of decision-makers’ perceptions and “reasons” for action
The descriptivist fallacy • Rational deterrence theory does not refer to decision-makers’ perceptions or beliefs • In general, rational choice theory does not refer to mental calculations • Holds only that they act “as if” they solved certain problems, whether or not they actually solved them
Survey research • When standardised questionnaires are appropriate: • Research questions about large populations of individuals and/or organisations • Well defined concepts • Confidence about relevant variables
Sampling error in survey research • Even when using random samples from populations, and with high response rates, there will still be error • Error due to having a sample rather than all of the cases we are interested in • Causes uncertainty, but not bias • Reduced by increasing number of observations
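As a small worked illustration of why sampling error creates uncertainty but not bias, and why it shrinks with more observations: the standard error of a sample proportion p is sqrt(p(1-p)/n) (the proportion of 0.5 below is assumed, as the worst case).

```python
import math

p = 0.5                               # assumed population proportion (worst case)
for n in (100, 400, 1_600):
    se = math.sqrt(p * (1 - p) / n)   # standard error of the sample proportion
    print(f"n={n}: standard error = {se:.3f}")
# quadrupling n halves the standard error: more observations, less uncertainty
```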
Non-response • When non-respondents differ from respondents on one or more variables of interest • Attempts to reduce the problem by: • Call-backs • Weighting cases/individuals from underrepresented parts of the population more heavily (see the sketch below)
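A minimal sketch of the weighting idea, with hypothetical group names and shares: cases from underrepresented groups are weighted up so that the weighted sample matches known population shares.

```python
# Hypothetical groups and shares, purely for illustration
population_share = {"young": 0.30, "old": 0.70}   # known from e.g. a census
sample_share = {"young": 0.15, "old": 0.85}       # young people under-respond

# Weight = population share / sample share, so the weighted sample
# matches the population composition
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)   # {'young': 2.0, 'old': 0.82...}
```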
Validity issues in measurement • Question wording • Keep it simple (see Pettigrew on the Holocaust question) • Avoid double-barrelled questions • Avoid value-laden terms • Use closed-ended questions where possible • Question ordering • Try to put open-ended questions first
Levels of analysis • Political science survey research often involves moving between micro and macro levels • So beware of: • Compositional fallacy • Ecological fallacy
Ronald Inglehart (2003). “How Solid is Mass Support for Democracy – And How Can We Measure It?” PS Online, January 2003. www.apsanet.org
Semi-structured interviews • Face-to-face, open-ended, qualitative • E.g. questions about lobbying activities • When the researcher has little information about the variables of interest • For particular types of interviewee (elites, highly educated)