The victory of one candidate over another in a U.S. election on the basis of the victor's personality or an accidental slip of the tongue during a televised debate might be a random factor that could have affected the likelihood of cooperation between the USSR and the United States during the Cold War. But if the most effective campaign appeal to voters had been the promise of reduced tensions with the USSR, consistent victories of conciliatory candidates would have constituted a systematic factor explaining the likelihood of cooperation.
Systematic factors are persistent and have consistent consequences when the factors take a particular value. Nonsystematic factors are transitory: we cannot predict their impact. But this does not mean that systematic factors represent constants. Campaign appeals may be a systematic factor in explaining voting behavior, but that fact does not mean that campaign appeals themselves do not change. It is the effect of campaign appeals on an election outcome that is constant-or, if it is variable, it is changing in a predictable way. When Soviet-American relations were good, promises of conciliatory policies may have won votes in U.S. elections; when relations were bad, the reverse may have been true. Similarly, the weather can be a random factor (if intermittent and unpredictable shocks have unpredictable consequences) or a systematic feature (if bad weather always leads to fewer votes for candidates favoring conciliatory policies).
In short, summarizing historical detail is an important intermediate step in the process of using our data, but we must also make descriptive inferences distinguishing between random and systematic phenomena. Knowing what happened on a given occasion is not sufficient by itself. If we make no effort to extract the systematic features of a subject, the lessons of history will be lost, and we will learn nothing about what aspects of our subject are likely to persist or to be relevant to future events or studies.
2.7 CRITERIA FOR JUDGING DESCRIPTIVE INFERENCES.
In this final section, we introduce three explicit criteria that are commonly used in statistics for judging methods of making inferences-unbiasedness, efficiency, and consistency. Each relies on the random-variable framework introduced in section 2.6 but has direct and powerful implications for evaluating and improving qualitative research. To clarify these concepts, we provide only the simplest possible examples in this section, all from descriptive inference. A simple version of inference involves estimating parameters, including the expected value or variance of a random variable (µ or σ², respectively) for a descriptive inference. We also use these same criteria for judging causal inferences in the next chapter (see section 3.4). We save for later chapters specific advice about doing qualitative research that is implied by these criteria and focus on the concepts alone for the remainder of this section.
2.7.1 Unbiased Inferences.
If we apply a method of inference again and again, we will get estimates that are sometimes too large and sometimes too small. Across a large number of applications, do we get the right answer on average? If yes, then this method, or "estimator," is said to be unbiased. This property of an estimator says nothing about how far removed from the average any one application of the method might be, but being correct on average is desirable.
Unbiased estimates occur when the variation from one replication of a measure to the next is nonsystematic and moves the estimate sometimes one way, sometimes the other. Bias occurs when there is a systematic error in the measure that shifts the estimate more in one direction than another over a set of replications. If in our study of conflict in West Bank communities, leaders had created conflict in order to influence the study's results (perhaps to further their political goals), then the level of conflict we observe in every community would be biased toward greater conflict, on average. If the replications of our hypothetical 1979 elections were all done on a Sunday (when they could have been held on any day), there would be a bias in the estimates if that fact systematically helped one side and not the other (if, for instance, Conservatives were more reluctant to vote on Sunday for religious reasons). Or our replicated estimates might be based on reports from corrupt vote counters who favor one party over the other. If, however, the replicated elections were held on various days chosen in a manner unrelated to the variable we are interested in, any error in measurement would not produce biased results even though one day or another might favor one party. For example, if there were miscounts due to random sloppiness on the part of vote counters, the set of estimates would be unbiased.
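The distinction can be made concrete with a small simulation. The sketch below is not from the text; all numbers are invented for illustration (a "true" Labor vote share of 0.45 and hypothetical miscount sizes). It shows that purely random miscounts leave the average of repeated estimates at the true value, while a systematic undercount pulls the average away from it no matter how many replications we average over.

```python
# A minimal simulation sketch (not from the text) of random versus systematic
# measurement error. All parameter values are hypothetical.
import random

random.seed(0)
TRUE_SHARE = 0.45          # hypothetical true Labor vote proportion
REPLICATIONS = 10_000

def sloppy_count(true_share):
    """Random miscounting: errors average to zero across replications."""
    return true_share + random.gauss(0, 0.02)

def corrupt_count(true_share):
    """Systematic error: the counter shaves 0.03 off Labor every time."""
    return true_share - 0.03 + random.gauss(0, 0.02)

sloppy = [sloppy_count(TRUE_SHARE) for _ in range(REPLICATIONS)]
corrupt = [corrupt_count(TRUE_SHARE) for _ in range(REPLICATIONS)]

print(sum(sloppy) / REPLICATIONS)   # ~0.45: unbiased despite the noise
print(sum(corrupt) / REPLICATIONS)  # ~0.42: biased, however many replications we average
```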
If the British elections were always held by law on Sundays, or if a vote-counting method that favored one party over another were built into the election system (through the use of a particular voting scheme or, perhaps, even persistent corruption), we would want an estimator that varied around the mean vote that could be expected under the circumstances that included these systematic features. Thus, bias depends on the theory that is being investigated and does not just exist in the data alone. It makes little sense to say that a particular data set is biased, even though it may be filled with many individual errors.
In this example, we might wish to distinguish our definition of "statistical bias" in an estimator from "substantive bias" in an electoral system. An example of the latter is polling hours that make it harder for working people to vote-a not uncommon substantive bias of various electoral systems. As researchers, we may wish to estimate the mean vote of the actual electoral system (the one with the substantive bias), but we might also wish to estimate the mean of a hypothetical electoral system that does not have a substantive bias due to the hours the polls are open. This would enable us to estimate the amount of substantive bias in the system. Whichever mean we are estimating, we wish to have a statistically unbiased estimator.
Social science data are susceptible to one major source of bias of which we should be wary: people who provide the raw information that we use for descriptive inferences often have reasons for providing estimates that are systematically too high or low. Government officials may want to overestimate the effects of a new program in order to shore up their claims for more funding or underestimate the unemployment rate to demonstrate that they are doing a good job. We may need to dig deeply to find estimates that are less biased. A telling example is in Myron Weiner's qualitative study of education and child labor in India (1991). In trying to explain the low level of commitment to compulsory education in India compared to that in other countries, he had to first determine if the level of commitment was indeed low. In one state in India, he found official statistics that indicated that ninety-eight percent of school-age children attended school. However, a closer look revealed that attendance was measured once, when children first entered school. They were then listed as attending for seven years, even if their only attendance was for one day! Closer scrutiny showed the actual attendance figure to be much lower.
A Formal Example of Unbiasedness. Suppose, for example, we wish to estimate µ in equation (2.2) and decide to use the average as an estimator, ȳ = (1/n) Σ y_i. In a single set of data, ȳ is the proportion of Labor voters averaged over all n = 650 constituencies (or the average level of conflict across West Bank communities). But considered across an infinite number of hypothetical replications of the election in each constituency, the sample mean becomes a function of 650 random variables, Ȳ = (1/650) Σ Y_i. Thus, the sample mean becomes a random variable, too. For some hypothetical replications, Ȳ will produce election returns that are close to µ, and other times they will be farther away. The question is whether Ȳ will be right, that is, equal to µ, on average across these hypothetical replications. To determine the answer, we use the expected value operation again, which allows us to determine the average across the infinite number of hypothetical elections. The rules of expectations enable us to make the following calculation:

E(Ȳ) = E((1/n) Σ Y_i) = (1/n) Σ E(Y_i) = (1/n) Σ µ = µ    (2.4)
Thus, Ȳ is an unbiased estimator of µ. (This is a slightly less formal example than appears in formal statistics texts, but the key features are the same.)
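Readers who want to check equation (2.4) numerically can do so with a short Monte Carlo sketch. The values below are assumptions chosen only for illustration (µ = 0.45, σ = 0.05, n = 650 constituencies): the average of the sample means across hypothetical replications lands very close to µ, which is exactly what unbiasedness asserts.

```python
# A small Monte Carlo check of equation (2.4) under assumed parameter values.
import random

random.seed(1)
MU, SIGMA, N, REPS = 0.45, 0.05, 650, 2_000

def one_replication():
    # One hypothetical rerun of the election: 650 constituency vote shares.
    y = [random.gauss(MU, SIGMA) for _ in range(N)]
    return sum(y) / N            # the sample mean, Y-bar

ybars = [one_replication() for _ in range(REPS)]
print(sum(ybars) / REPS)         # close to 0.45, i.e., E(Y-bar) = mu
```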
2.7.2 Efficiency.
We usually do not have an opportunity to apply our estimator to a large number of essentially identical applications. Indeed, except for some clever experiments, we only apply it once. In this case, unbiasedness is of interest, but we would like more confidence that the one estimate we get is close to the right one. Efficiency provides a way of distinguishing among unbiased estimators. Indeed, the efficiency criterion can also help distinguish among alternative estimators with a small amount of bias. (An estimator with a large bias should generally be ruled out even without evaluating its efficiency.) Efficiency is a relative concept that is measured by calculating the variance of the estimator across hypothetical replications. For unbiased estimators, the smaller the variance, the more efficient (the better) the estimator. A small variance is better because our one estimate will probably be closer to the true parameter value. We are not interested in efficiency for an estimator with a large bias because low variance in this situation will make it unlikely that the estimate will be near the true value (because most of the estimates would be closely clustered around the wrong value). As we describe below, we are interested in efficiency in the case of a small amount of bias, and we may often be willing to incur a small amount of bias in exchange for a large gain in efficiency.
Suppose again we are interested in estimating the average level of conflict between Palestinians and Israelis in the West Bank and are evaluating two methods: a single observation of one community, chosen to be typical, and similar observations of, for example, twenty-five communities. It should be obvious that twenty-five observations are better than a single observation-so long as the same effort goes into collecting each of the twenty-five as into the single observation. We will demonstrate here precisely why this is the case. This result explains why we should observe as many implications of our theory as possible, but it also demonstrates the more general concept of statistical efficiency, which is relevant whenever we are evaluating different ways of combining gathered observations into an inference.
Efficiency enables us to compare the single-observation case-study estimator of µ (n = 1) with the large-n estimator (n = 25), that is, the average level of conflict found from twenty-five separate week-long studies in different communities on the West Bank. If applied appropriately, both estimators are unbiased. If the same model applies, the single-observation estimator has a variance of V(Y_typical) = σ². That is, we would have chosen what we thought was a "typical" community, which would, however, still be affected by random variables. The variance of the large-n estimator is V(Ȳ) = σ²/25, that is, the variance of the sample mean. Thus, the single-observation estimator is twenty-five times more variable (i.e., less efficient) than the estimator based on n = 25. Hence, we have the obvious result that more observations are better.
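The variance comparison can also be checked by simulation. The sketch below uses invented numbers (a hypothetical mean conflict level of 5.0 with standard deviation 2.0): both estimators center on the truth, but the mean of twenty-five observations shows roughly one twenty-fifth of the variance of a single observation.

```python
# A simulation sketch of the n = 1 versus n = 25 efficiency comparison,
# with invented parameter values.
import random, statistics

random.seed(2)
MU, SIGMA, REPS = 5.0, 2.0, 20_000   # hypothetical mean conflict level and spread

single = [random.gauss(MU, SIGMA) for _ in range(REPS)]
mean25 = [statistics.fmean(random.gauss(MU, SIGMA) for _ in range(25))
          for _ in range(REPS)]

print(statistics.mean(single), statistics.variance(single))   # ~5.0, ~4.0
print(statistics.mean(mean25), statistics.variance(mean25))   # ~5.0, ~0.16 = 4.0/25
```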
More interesting are the conditions under which a more detailed study of our one community would yield results as good as or better than those of our large-n study. That is, although we should always prefer studies with more observations (given the resources necessary to collect them), there are situations where a single case study (as always, containing many observations) is better than a study based on more observations, each one of which is not as detailed or certain.
All conditions being equal, our analysis shows that the more observations, the better, because variability (and thus inefficiency) drops. In fact, the property of consistency is such that as the number of observations gets very large, the variability decreases to zero, and the estimate equals the parameter we are trying to estimate.26 But often, not all conditions are equal. Suppose, for example, that any single measurement of the phenomenon we are studying is subject to factors that make the measure likely to be far from the true value (i.e., the estimator has high variance). And suppose that we have some understanding-from other studies, perhaps-of what these factors might be. Suppose further that our ability to observe and correct for these factors decreases substantially with the increase in the number of communities studied (if for no other reason than that we lack the time and knowledge to make corrections for such factors across a large number of observations). We are then faced with a tradeoff between a case study that has additional observations internal to the case and twenty-five cases in which each contains only one observation.
If our single case study is composed of only one observation, then it is obviously inferior to our 25-observation study. But case-study researchers have significant advantages, which are easier to understand if formalized. For example, we could first select our community very carefully in order to make sure that it is especially representative of the rest of the country or that we understand the relationship of this community to the others. We might ask a few residents or look at newspaper reports to see whether it was an average community or whether some nonsystematic factor had caused the observation to be atypical, and then we might adjust the observed level of conflict to arrive at an estimate of the average level of West Bank conflict, µ. This would be the most difficult part of the case-study estimator, and we would need to be very careful that bias does not creep in. Once we are reasonably confident that bias is minimized, we could focus on increasing efficiency. To do this, we might spend many weeks in the community conducting numerous separate studies. We could interview community leaders, ordinary citizens, and school teachers. We could talk to children, read the newspapers, follow a family in the course of its everyday life, and use numerous other information-gathering techniques. Following these procedures, we could collect far more than twenty-five observations within this one community and generate a case study that is also unbiased and more efficient than the twenty-five-community study.
Consider another example. Suppose we are conducting a study of the international drug problem and want a measure of the percentage of agricultural land on which cocaine is being grown in a given region of the world. Suppose further that there is a choice of two methods: a case study of one country or a large-scale, statistical study of all the countries of the region. It would seem better to study the whole region. But let us say that to carry out such a study it is necessary (for practical reasons) to use data supplied to a UN agency from the region's governments. These numbers are known to have little relationship to actual patterns of cropping since they were prepared in the Foreign Office and based on considerations of public relations. Suppose, further, that we could, by visiting and closely observing one country, make the corrections to the government estimates that would bring that particular estimate much closer to a true figure. Which method would we choose? Perhaps we would decide to study only one country, or perhaps two or three. Or we might study one country intensively and use our results to reinterpret, and thereby improve, the government-supplied data from the other countries. Our choice should be guided by which data best answer our questions.
To take still another example, suppose we are studying the European Community and want to estimate the expected degree of regulation of an industry throughout the entire Community that will result from actions of the Commission and the Council of Ministers. We could gather data on a large number of rules formally adopted for the industrial sector in question, code these rules in terms of their stringency, and then estimate the average stringency of a rule. If we gather data on 100 rules with similar a priori stringency, the variance of our measure will be the variance of any given rule divided by 100 (σ²/100), or less if the rules are related. Undoubtedly, this will be a better measure than using data on one rule as the estimator for regulatory stringency for the industry as a whole.
However, this procedure requires us to accept the formal rule as equivalent to the real regulatory activity in the sector under scrutiny. Further investigation of rule application, however, might reveal a large variation in the extent to which nominal rules are actually enforced. Hence, measures of formal rules might be systematically biased-for instance, in favor of overstating regulatory stringency. In such a case, we would face the bias-efficiency trade-off once again, and it might make sense to carry out three or four intensive case studies of rule implementation to investigate the relationship between formal rules and actual regulatory activity. One possibility would be to substitute an estimator based on these three or four cases-less biased and also less efficient-for the estimator based on 100 cases. However, it might be more creative, if feasible, to use the intensive case-study work for the three or four cases to correct the bias of our 100-case indicator, and then to use a corrected version of the 100-case indicator as our estimator. In this procedure, we would be combining the insights of our intensive case studies with large-n techniques, a practice that we think should be followed much more frequently than is the case in contemporary social science.
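A stylized sketch of this correction strategy, with entirely invented numbers, may help. Here the coded stringency of 100 formal rules systematically overstates actual enforcement; a handful of intensive case studies estimate the average gap, which is then used to correct the large-n indicator. Nothing here comes from the text itself; it is one hypothetical way the combination could work.

```python
# A hypothetical sketch of correcting a biased large-n indicator with a few
# intensive case studies. All values are invented for illustration.
import random, statistics

random.seed(3)
true_enforced = [random.uniform(0.2, 0.8) for _ in range(100)]   # unobserved reality
formal_rules = [min(1.0, s + 0.25 + random.gauss(0, 0.05))       # nominal rules overstate stringency
                for s in true_enforced]

naive = statistics.mean(formal_rules)                            # biased upward

case_ids = random.sample(range(100), 4)                          # four intensive case studies
gap = statistics.mean(formal_rules[i] - true_enforced[i] for i in case_ids)
corrected = statistics.mean(formal_rules) - gap                  # bias-corrected 100-case indicator

print(statistics.mean(true_enforced), naive, corrected)
```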
The argument for case studies made by those who know a particular part of the world well is often just the one implicit in the previous example. Large-scale studies may depend upon numbers that are not well understood by the naive researcher working on a data base (who may be unaware of the way in which election statistics are gathered in a particular locale and assumes, incorrectly, that they have some real relationship to the votes as cast). The researcher working closely with the materials and understanding their origin may be able to make the necessary corrections. In subsequent sections we will try to explicate how such choices might be made more systematically.
Our formal analysis of this problem in the box below shows precisely how to decide what the results of the trade-off are in the example of British electoral constituencies. The decision in any particular example will always be better when using logic like that shown in the formal analysis below. However, deciding this issue will almost always require qualitative judgments as well.
Finally, it is worth thinking more specifically about the trade-offs that sometimes exist between bias and efficiency. The sample mean of the first two observations in any larger set of unbiased observations is also unbiased, just as is the sample mean of all the observations. However, using only two observations discards substantial information; this does not change unbiasedness, but it does substantially reduce efficiency. If we did not also use the efficiency criterion, we would have no formal criteria for choosing one estimator over the other.
Formal Efficiency Comparisons. The variance of the sample mean Ȳ is denoted V(Ȳ), and the rules for calculating variances of random variables in the simple case of random sampling permit the following: V(Ȳ) = V((1/n) Σ Y_i) = (1/n²) Σ V(Y_i). Furthermore, if we assume that the variance across hypothetical replications of each district election is the same as that of every other district and is denoted by σ², then the variance of the sample mean is

V(Ȳ) = σ²/n    (2.5)
In the example above, n = 650, so the large-n estimator has variance σ²/650 and the case-study estimator has variance σ². Unless we can use qualitative, random-error corrections to reduce the variance of the case-study estimator by a factor of at least 650, the statistical estimate is to be preferred on the grounds of efficiency.
Suppose we are interested in whether the Democrats would win the next presidential election, and we ask twenty randomly selected American adults which party they plan to vote for. (In our simple version of random selection, we choose survey respondents from all adult Americans, each of whom has an equal probability of selection.) Suppose that someone else also did a similar study with 1,000 citizens. Should we include these additional observations with ours to create a single estimate based on 1,020 respondents? If the new observations were randomly selected, just as the first twenty, it should be an easy decision to include the additional data with ours: with the new observations, the estimator is still unbiased and now much more efficient.
However, suppose that only 990 of the 1,000 new observations were randomly drawn from the U.S. population and the other ten were Democratic members of Congress who were inadvertently included in the data after the random sample had been drawn. Suppose further that we found out that these additional observations were included in our data but did not know which ones they were and thus could not remove them. We now know a priori that an estimator based on all 1,020 respondents would produce a slight overestimate of the likelihood that a Democrat would win the nationwide vote. Thus, including these 1,000 additional observations would slightly bias the overall estimate, but it would also substantially improve its efficiency. Whether we should include the observations therefore depends on whether the increase in bias is outweighed by the increase in statistical efficiency. Intuitively, it seems clear that the estimator based on the 1,020 observations will produce estimates fairly close to the right answer much more frequently than the estimator based on only twenty observations. The bias introduced would be small enough that we would prefer the larger-sample estimator, even though in practice we would probably apply both. (In addition, we know the direction of the bias in this case and could even partially correct for it.) If adequate quantitative data are available and we are able to formalize such problems as these, we can usually make a clear decision. However, even if the qualitative nature of the research makes evaluating this trade-off difficult or impossible, understanding it should help us make more reliable inferences.
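The intuition can be checked with a simulation sketch. The numbers are made up (a true Democratic share of 0.50): the twenty clean respondents give an unbiased but noisy estimate, while the 1,020 respondents including ten certain Democratic votes give a slightly biased but far more stable one, and the contaminated estimator's typical error around the truth is much smaller.

```python
# A simulation sketch of the bias-efficiency trade-off in the survey example,
# with invented parameter values.
import random, statistics

random.seed(4)
P_DEM, REPS = 0.50, 5_000

def clean20():
    return sum(random.random() < P_DEM for _ in range(20)) / 20

def contaminated1020():
    dems = sum(random.random() < P_DEM for _ in range(1010)) + 10   # ten known Democrats
    return dems / 1020

small = [clean20() for _ in range(REPS)]
big = [contaminated1020() for _ in range(REPS)]

def rmse(xs):
    # Root-mean-square error around the true value: combines bias and variance.
    return statistics.fmean((x - P_DEM) ** 2 for x in xs) ** 0.5

print(statistics.fmean(small), rmse(small))   # unbiased, RMSE about 0.11
print(statistics.fmean(big), rmse(big))       # biased up by ~0.005, RMSE about 0.016
```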
Formal Comparisons of Bias and Efficiency. Consider two estimators: one from a large-n study by someone with a preconception, and therefore slightly biased, and the other from a very small-n study that we believe is unbiased but relatively less efficient, done by an impartial investigator. As a formal model of this example, suppose we wish to estimate µ and the large-n study produces the estimator d:

d = (1/650) Σ Y_i + 0.01

We model the small-n study with a different estimator of µ, c:

c = (Y_1 + Y_2)/2

where districts 1 and 2 are average constituencies, so that E(Y_1) = µ and E(Y_2) = µ.
Which estimator should we prefer? Our first answer is that we would use neither and instead would prefer the sample mean; that is, a large-n study by an impartial investigator. However, the obvious or best estimator is not always applicable. To answer this question, we turn to an evaluation of bias and efficiency.
First, we will assess bias. We can show that the first estimator d is slightly biased according to the usual calculation:

E(d) = E((1/650) Σ Y_i + 0.01) = (1/650) Σ E(Y_i) + 0.01 = µ + 0.01

We can also show that the second estimator c is unbiased by a similar calculation:

E(c) = E((Y_1 + Y_2)/2) = (E(Y_1) + E(Y_2))/2 = µ

By these calculations alone, we would choose estimator c, the result of the efforts of our impartial investigator's small-n study, since it is unbiased. On average, across an infinite number of hypothetical replications, for the investigator with a preconception, d would give the wrong answer, albeit only slightly so. Estimator c would give the right answer on average.
The efficiency criterion tells a different story. To begin, we calculate the variance of each estimator:

V(d) = V((1/650) Σ Y_i + 0.01) = (1/650²) Σ V(Y_i) = σ²/650

This variance is the same as the variance of the sample mean because the constant 0.01 does not change (has zero variance) across samples. Similarly, we calculate the variance of c as follows:27

V(c) = V((Y_1 + Y_2)/2) = (1/4)[V(Y_1) + V(Y_2)] = σ²/2

Thus, c is considerably less efficient than d because V(c) = σ²/2 is 325 times larger than V(d) = σ²/650. This should be intuitively clear as well, since c discards most of the information in the data set.
Which should we choose? Estimator d is biased but more efficient than c, whereas c is unbiased but less efficient. In this particular case, we would probably prefer estimator d. We would thus be willing to sacrifice unbiasedness, since the sacrifice is fairly small (0.01), in order to obtain a significantly more efficient estimator. At some point, however, more efficiency will not compensate for a little bias, since we end up guaranteeing that estimates will be farther from the truth. The formal way to evaluate the bias-efficiency trade-off is to calculate the mean square error (MSE), which is a combination of bias and efficiency. If g is an estimator for some parameter γ (the Greek letter gamma), MSE is defined as follows:

MSE(g) = V(g) + [E(g) - γ]²    (2.6)
Mean square error is thus the sum of the variance and the squared bias (see Johnston 1984:27-28). The idea is to choose the estimator with the minimum mean square error since it shows precisely how an estimator with some bias can be preferred if it has a smaller variance.
For our example, the two MSEs are as follows:

MSE(d) = V(d) + [E(d) - µ]² = σ²/650 + (0.01)² = σ²/650 + 0.0001    (2.7)

and

MSE(c) = V(c) + [E(c) - µ]² = σ²/2    (2.8)

Thus, for most values of σ², MSE(d) < MSE(c), and we would prefer d as an estimator to c.
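The comparison in equations (2.7) and (2.8) can be tabulated numerically, treating σ² as unknown. The short sketch below is only a worked check of the formulas above; the grid of σ² values is arbitrary.

```python
# A numerical comparison of MSE(d) and MSE(c) from equations (2.7) and (2.8).
def mse_d(sigma2):
    return sigma2 / 650 + 0.01 ** 2   # variance plus squared bias

def mse_c(sigma2):
    return sigma2 / 2                 # unbiased, so MSE is just the variance

for sigma2 in (0.0001, 0.001, 0.01, 0.1):
    better = "d" if mse_d(sigma2) < mse_c(sigma2) else "c"
    print(sigma2, mse_d(sigma2), mse_c(sigma2), better)
# d is preferred unless sigma^2 is smaller than roughly 0.0002.
```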
In theory, we should always prefer unbiased estimates that are as efficient (i.e., use as much information) as possible. However, in the real research situations we analyze in succeeding chapters, this trade-off between bias and efficiency is quite salient.
CHAPTER 3.
Causality and Causal Inference.
WE HAVE DISCUSSED two stages of social science research: summarizing historical detail (section 2.5) and making descriptive inferences by partitioning the world into systematic and nonsystematic components (section 2.6). Many students of social and political phenomena would stop at this point, eschewing causal statements and asking their selected and well-ordered facts to "speak for themselves."
Like historians, social scientists need to summarize historical detail and to make descriptive inferences. For some social scientific purposes, however, analysis is incomplete without causal inference. That is, just as causal inference is impossible without good descriptive inference, descriptive inference alone is often unsatisfying and incomplete. To say this, however, is not to claim that all social scientists must, in all of their work, seek to devise causal explanations of the phenomena they study. Sometimes causal inference is too difficult; in many other situations, descriptive inference is the ultimate goal of the research endeavor.
Of course, we should always be explicit in clarifying whether the goal of a research project is description or explanation. Many social scientists are uncomfortable with causal inference. They are so wary of the warning that "correlation is not causation" that they will not state causal hypotheses or draw causal inferences, referring to their research as "studying association and not causation." Others make apparent causal statements with ease, labeling unevaluated hypotheses or speculations as "explanations" on the basis of indeterminate research designs.28 We believe that each of these positions evades the problem of causal inference.
Avoiding causal language when causality is the real subject of investigation either renders the research irrelevant or permits it to remain undisciplined by the rules of scientific inference. Our uncertainty about causal inferences will never be eliminated. But this uncertainty should not suggest that we avoid attempts at causal inference. Rather we should draw causal inferences where they seem appropriate but also provide the reader with the best and most honest estimate of the uncertainty of that inference. It is appropriate to be bold in drawing causal inferences as long as we are cautious in detailing the uncertainty of the inference. It is important, further, that causal hypotheses be disciplined, approximating as closely as possible the rules of causal inference. Our purpose in much of chapters 4-6 is to explicate the circumstances under which causal inference is appropriate and to make it possible for qualitative researchers to increase the probability that their research will provide reliable evidence about their causal hypotheses.
In section 3.1 we provide a rigorous definition of causality appropriate for qualitative and quantitative research, then in section 3.2 we clarify several alternative notions of causality in the literature and demonstrate that they do not conflict with our more fundamental definition. In section 3.3 we discuss the precise assumptions about the world and the hypotheses required to make reliable causal inferences. We then consider in section 3.4 how to apply to causal inference the criteria we developed for judging descriptive inference. In section 3.5 we conclude this chapter with more general advice on how to construct causal explanations, theories, and hypotheses.
3.1 DEFINING CAUSALITY.
In this section, we define causality as a theoretical concept independent of the data used to learn about it. Subsequently, we consider causal inference from our data. (For discussions of specific problems of causal inference, see chapters 4-6.) In section 3.1.1 we give our definition of causality in full detail, along with a simple quantitative example, and in section 3.1.2 we revisit our definition along with a more sophisticated qualitative example.
3.1.1 The Definition and a Quantitative Example.
Our theoretical definition of causality applies most simply and clearly to a single unit.29 As defined in section 2.4, a unit is one of the many elements to be observed in a study, such as a person, country, year, or political organization. For precision and clarity, we have chosen a single running example from quantitative research: the causal effect of incumbency status for a Democratic candidate for the U.S. House of Representatives on the proportion of votes this candidate receives. (Using only a Democratic candidate simplifies the example.) Let the dependent variable be the Democratic proportion of the two-party vote for the House. The key causal explanatory variable is then dichotomous: either the Democrat is an incumbent or not. (For simplicity throughout this section, we only consider districts where the Republican candidate lost the last election.) Causal language can be confusing, and our choice here is hardly unique. The "dependent variable" is sometimes called the "outcome variable." "Explanatory variables" are often referred to as "independent variables." We divide the explanatory variables into the "key causal variable" (also called the "cause" or the "treatment variable") and the "control variables." Finally, the key causal variable always takes on two or more values, which are often denoted by "treatment group" and "control group."
Now consider only the Fourth Congressional District in New York, and imagine an election in 1998 with a Democratic incumbent and one Republican (nonincumbent) challenger. Suppose the Democratic candidate received the fraction y_4^I of the vote in this election (the subscript 4 denotes the Fourth District in New York and the superscript I refers to the fact that the Democrat is an Incumbent). y_4^I is then a value of the dependent variable. To define the causal effect (a theoretical quantity), imagine that we go back in time to the start of the election campaign and everything remains the same, except that the Democratic incumbent decides not to run for re-election and the Democratic Party nominates another candidate (presumably the winner of the primary election). We denote the fraction of the vote that the Democratic (nonincumbent) candidate would receive by y_4^N (where N denotes a Democratic candidate who is a Non-incumbent).30

This counterfactual condition is the essence behind this definition of causality, and the difference between the actual vote y_4^I and the likely vote in this counterfactual situation y_4^N is the causal effect, a concept we define more precisely below. We must be very careful in defining counterfactuals; although they are obviously counter to the facts, they must be reasonable, and it should be possible for the counterfactual event to have occurred under precisely stated circumstances.

A key part of defining the appropriate counterfactual condition is clarifying precisely what we are holding constant while we are changing the value of the treatment variable. In the present example, the key causal (or treatment) variable is incumbency status, and it changes from "incumbent" to "non-incumbent." During this hypothetical change, we hold everything constant up to the moment of the Democratic Party's nomination decision: the relative strength of the Democrats and Republicans in past elections in this district, the nature of the nomination process, the characteristics of the congressional district, the economic and political climate at the time, and so on. We do not control for qualities of the candidates, such as name recognition, visibility, and knowledge of the workings of Congress, or anything else that follows the party nomination. The reason is that these are partly consequences of our treatment variable, incumbency. That is, the advantages of incumbency include name recognition, visibility, and so forth. If we did hold these constant, we would be controlling for, and hence disregarding, some of the most important effects of incumbency and, as a result, would misinterpret its overall effect on the vote total. In fact, controlling for enough of the consequences of incumbency could make one incorrectly believe that incumbency had no effect at all.31

More formally, the causal effect of incumbency in the Fourth District in New York-the proportion of the vote received by the Democratic Party candidate that is attributable to incumbency status-would be the difference between these two vote fractions: y_4^I - y_4^N. For reasons that will become clear shortly, we refer to this difference as the realized causal effect and write it in more general notation for unit i instead of only district 4:32

(Realized causal effect for unit i) = y_i^I - y_i^N    (3.1)
Of course, this effect is defined only in theory, since in any one real election we might observe either y_i^I or y_i^N or neither, but never both. Thus, this simple definition of causality demonstrates that we can never hope to know a causal effect for certain. Holland (1986) refers to this problem as the fundamental problem of causal inference, and it is indeed a fundamental problem since no matter how perfect the research design, no matter how much data we collect, no matter how perceptive the observers, no matter how diligent the research assistants, and no matter how much experimental control we have, we will never know a causal inference for certain. Indeed, most of the empirical issues of research designs that we discuss in this book involve this fundamental problem, and most of our suggestions constitute partial attempts to avoid it.
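The bookkeeping behind equation (3.1) can be sketched in a few lines of code. The district values below are purely hypothetical; the point is only that each unit carries two potential vote shares, that the causal effect is their difference, and that at most one of the two can ever be observed, which is the fundamental problem of causal inference.

```python
# A minimal sketch of the potential-outcomes idea behind equation (3.1),
# with invented numbers.
from dataclasses import dataclass

@dataclass
class District:
    name: str
    y_incumbent: float        # potential vote share with an incumbent Democrat (theoretical)
    y_nonincumbent: float     # potential vote share with a non-incumbent Democrat (theoretical)
    incumbent_ran: bool       # what actually happened

    @property
    def causal_effect(self) -> float:
        # Defined in theory; never directly observable for a real district.
        return self.y_incumbent - self.y_nonincumbent

    @property
    def observed_vote(self) -> float:
        return self.y_incumbent if self.incumbent_ran else self.y_nonincumbent

d4 = District("NY-4 (hypothetical)", y_incumbent=0.58, y_nonincumbent=0.52,
              incumbent_ran=True)
print(d4.causal_effect)   # 0.06 in theory
print(d4.observed_vote)   # 0.58 -- the counterfactual 0.52 is never seen
```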
Our working definition of causality differs from Holland's, since in section 2.6 we have argued that social science always needs to partition the world into systematic and nonsystematic components, and Holland's definition does not make this distinction clearly.33 To see the importance of this partitioning, think about what would happen if we could rerun the 1998 election campaign in the Fourth District in New York, with a Democratic incumbent and a Republican challenger. A slightly different total vote would result, due to nonsystematic features of election campaigns-aspects of politics that do not persist from one campaign to the next, even if the campaigns begin on identical footing. Some of these nonsystematic features might include a verbal gaffe, a surprisingly popular speech or position on an issue, an unexpectedly bad performance in a debate, bad weather during one candidate's rally or on election day, or the results of some investigative journalism. We can therefore imagine a variable that would express the values of the Democratic vote across hypothetical replications of this same election.
As noted above (see section 2.6), this variable is called a "random variable" since it has nonsystematic features: it is affected by explanatory variables not encompassed in our theoretical analysis or contains fundamentally unexplainable variability.34 We define the random variable representing the proportion of votes received by the incumbent Democratic candidate as Y_4^I (note the capital Y) and the proportion of votes that would be received in hypothetical replications by a Democratic nonincumbent as Y_4^N.
We now define the random causal effect for district 4 as the difference between these two random variables. Since we wish to retain some generality, we again switch notation from district 4 to unit i:

(Random causal effect for unit i) = Y_i^I - Y_i^N    (3.2)
(Just as in the definition of a random variable, a random causal effect is a causal effect that varies over hypothetical replications of the same experiment but also represents many interesting systematic features of elections.) If we could observe two separate vote proportions in district 4 at the same time, one from an election with and one without a Democratic incumbent running, then we could directly observe the realized causal effect in equation (3.1). Of course, because of the Fundamental Problem of Causal Inference, we cannot observe the realized causal effect. Thus, the realized causal effect in equation 3.1 is a single unobserved realization of the random causal effect in equation 3.2. In other words, across many hypothetical replications of the same election in district 4 with a Democratic incumbent, and across many hypothetical replications of the same election but with a Democratic non-incumbent, the (unobserved) realized causal effect becomes a random causal effect.
Describing causality as one of the systematic features of random variables may seem unduly complicated. But it has two virtues. First, it makes our definition of causality directly analogous to those systematic features (such as a mean or variance) of a phenomenon that serve as objects of descriptive inference: means and variances are also systematic features of random variables (as in section 2.2). Secondly, it enables us to partition a causal inference problem into systematic and nonsystematic components. Although many systematic features of a random variable might be of interest, the most relevant for our running example is the mean causal effect for unit i. To explain what we mean by this, we return to our New York election example.
Recall that the random variable refers to the vote fraction received by the Democrat (incumbent or nonincumbent) across a large number of hypothetical replications of the same election. We define the expected value of this random variable-the vote fraction averaged across these replications-for the nonincumbent as

E(Y_i^N) = µ_i^N

and for the incumbent as

E(Y_i^I) = µ_i^I
Then, the mean causal effect of incumbency in unit i is a systematic feature of the random causal effect and is defined as the difference between these two expected values (again generalized to unit i instead of to district 4):

β_i = Mean causal effect for unit i
    = E(Random causal effect for unit i)
    = E(Y_i^I - Y_i^N)
    = E(Y_i^I) - E(Y_i^N)
    = µ_i^I - µ_i^N    (3.3)
where in the first line of this equation, β (beta) refers to this mean causal effect. In the second line, we indicate that the mean causal effect for unit i is just the mean (expected value) of the random causal effect, and in the third and fourth lines we show how to calculate the mean. The last line is another way of writing the difference in the means of the two sets of hypothetical elections. (The average of the difference between two random variables equals the difference of the averages.) To summarize in words: the causal effect is the difference between the systematic component of observations made when the explanatory variable takes one value and the systematic component of comparable observations when the explanatory variable takes on another value.
The last line of equation 3.3 is similar to equation 3.1, and as such, the Fundamental Problem of Causal Inference still exists in this formulation. Indeed, the problem expressed this way is even more formidable because even if we could get around the Fundamental Problem for a realized causal effect, we would still have all the usual problems of inference, including the problem of separating out systematic and nonsystematic components of the random causal effect. From here on, we use Holland's phrase, the Fundamental Problem of Causal Inference, to refer to the problem that he identified as well as to these standard problems of inference, which we have added to his formulation. In the box on page 95, we provide a more general notation for causal effects, which will prove useful throughout the rest of this book.
Many other systematic features of these random causal effects might be of interest in various circumstances. For example, we might wish to know the variance in the possible (realized) causal effects of incumbency status on the Democratic vote in unit i, just as with the variance in the vote itself that we described in equation 2.3 in section 2.6. To calculate the variance of the causal effect, we apply the variance operation:

(Variance of the causal effect in unit i) = V(Y_i^I - Y_i^N)
in which we avoid introducing a new symbol for the result of the variance calculation. Certainly new incumbents would wish to know the variation in the causal effect of incumbency so they can judge how close their experience will be to that of previous incumbents and how much to rely on their estimated mean causal effect of incumbency from previous elections. It is especially important to understand that this variance in the causal effect is a fundamental part of the world and is not uncertainty due to estimation.
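The distinction between the mean and the variance of the random causal effect can also be illustrated by simulation. The parameters below are invented (µ_i^I = 0.58, µ_i^N = 0.52, with nonsystematic noise in each hypothetical replication): incumbency adds 0.06 on average, but each replication's realized effect varies around that mean, and that variation is a feature of the world rather than of our estimation.

```python
# A simulation sketch of the mean and variance of the random causal effect,
# using invented parameter values.
import random, statistics

random.seed(5)
MU_I, MU_N, NOISE, REPS = 0.58, 0.52, 0.03, 50_000

effects = []
for _ in range(REPS):
    y_i = MU_I + random.gauss(0, NOISE)   # one hypothetical election with an incumbent
    y_n = MU_N + random.gauss(0, NOISE)   # the same election with a non-incumbent
    effects.append(y_i - y_n)             # one realization of the random causal effect

print(statistics.fmean(effects))          # ~0.06: the mean causal effect, beta
print(statistics.variance(effects))       # ~0.0018: the variance of the causal effect
```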
3.1.2 A Qualitative Example.
We developed our precise definition of causality in section 3.1. Since some of the concepts in that section are subtle and quite sophisticated, we illustrated our points with a very simple running example from quantitative research. This example helped us communicate the concepts we wished to stress without also having to attend to the contextual detail and cultural sensitivity that characterize good qualitative research. In this section, we proceed through our definition of causality again, but this time via a qualitative example.
Political scientists would learn a lot if they could rerun history with everything constant save for one investigator-controlled explanatory variable. For example, one of the major questions that faces those involved with politics and government has to do with the consequences of a particular law or regulation. Congress passes a tax bill that is intended to have particular consequences-to lead to particular investments, increase revenue by a certain amount, and change consumption patterns. Does it have this effect? We can observe what happens after the tax is passed to see if the intended consequences appear; but even if they do, it is never certain that they result from the law. The change in investment policy might have happened anyway. If we could rerun history with and without the new regulation, then we would have much more leverage in estimating the causal effect of this law. Of course, we cannot do this. But the logic will help us design research to give us an approximate answer to our question.
Consider now the following extended example from comparative politics. In the wake of the collapse of the Soviet system, numerous governments in the ex-Soviet republics and in Eastern Europe have instituted new governmental forms. They are engaged-as they themselves realize-in a great political experiment: they are introducing new constitutions, constitutions that they hope will have the intended effect of creating stable democratic systems. One of the constitutional choices is between parliamentary and presidential forms of government. Which system is more likely to lead to a stable democracy is the subject of considerable debate among scholars in the field (Linz 1993; Horowitz 1993; Lijphart 1993). The debate is complex, not the least because of the numerous types of parliamentary and presidential systems and the variety of the other constitutional provisions that might accompany and interact with this choice (such as the nature of the electoral system). It is not our purpose to provide a thorough analysis of these choices but rather a greatly simplified version of the choice in order to define a causal effect in the context of this qualitative example. In so doing, we highlight the distinction between systematic and nonsystematic features of a causal effect.
The debate about presidential versus parliamentary systems involves varied features of the two systems. We will focus on two: the extent to which each system represents the varied interests of the citizenry and the extent to which it encourages strong and decisive leadership. The argument is that parliamentary systems do a better job of representing the full range of societal groups and interests in the government since there are many legislative seats to be filled, and they can be filled by representatives elected from various groups. In contrast, the all-or-nothing character of presidential systems means that some groups will feel left out of the government, be disaffected, and cause greater instability. On the other hand, parliamentary systems-especially if they adequately represent the full range of social groups and interests-are likely to be deadlocked and ineffective in providing decisive government. These characteristics, too, can lead to disaffection and instability.35

The key purpose of this section is to formulate a precise definition of a causal effect. To do so, imagine that we could institute a parliamentary system and, periodically over the next decade or so, measure the degree of democratic stability (perhaps by actual survival or demise of democracy, attempted coups, or other indicators of instability), and in the same country and at the same time, institute a presidential system, also measuring its stability over the same period with the same measures. The realized causal effect would be the difference between the degree of stability observed under a presidential system and that under a parliamentary system. The impossibility of measuring this causal effect directly is another example of the fundamental problem of causal inference.
As part of this definition, we also need to distinguish between systematic and nonsystematic effects of the form of government. To do this, we imagine running this hypothetical experiment many times. We define the mean causal effect to be the average of the realized causal effects across replications of these experiments. Taking the average in this way causes the nonsystematic features of this problem to cancel out and leaves the mean causal effect to include only systematic features. Systematic features include indecisiveness in a parliamentary system or disaffection among minorities in a presidential one. Nonsystematic features might include the sudden illness of a president that throws the government into chaos. The latter event would not be a persistent feature of a presidential system; it would appear in one trial of the experiment but not in others.36

Another interesting feature of this example is the variance of the causal effect. Any country thinking of choosing one of these political systems would be interested in its mean causal effect on democratic stability; however, this one country gets only one chance-only one replication of this experiment. Given this situation, political leaders may be interested in more than the average causal effect. They may wish to understand what the maximum and minimum causal effects, or at least the variance of the causal effects, might be. For example, it may be that presidentialism reduces democratic stability on average but that the variability of this effect is enormous-sometimes increasing stability a lot, sometimes decreasing it substantially. This variance translates into risk for a polity. In this circumstance, it may be that citizens and political leaders would prefer to choose an option that produces only slightly less stability on average but has a lower variance in causal effect and thus minimizes the chance of a disastrous outcome.
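The risk argument can be made concrete with a stylized simulation. Everything below is invented (a stability scale on 0 to 1, a "disaster" threshold of 0.2, and two hypothetical systems A and B): system A yields slightly higher stability on average but with a large variance, while system B yields slightly less on average but is far more predictable, so a country that gets only one draw may reasonably prefer B.

```python
# A stylized sketch of the mean-versus-variance (risk) argument, with
# entirely invented numbers.
import random

random.seed(6)
REPS, DISASTER = 100_000, 0.2   # hypothetical stability scale in [0, 1]

system_a = [random.gauss(0.55, 0.25) for _ in range(REPS)]   # higher mean, high variance
system_b = [random.gauss(0.50, 0.05) for _ in range(REPS)]   # lower mean, low variance

for name, draws in (("A", system_a), ("B", system_b)):
    mean = sum(draws) / REPS
    p_disaster = sum(d < DISASTER for d in draws) / REPS
    print(name, round(mean, 3), round(p_disaster, 3))
# A has the better average but roughly an 8% chance of disaster; B almost never fails.
```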