Designing Social Inquiry: Scientific Inference in Qualitative Research.

Gary King, Robert O. Keohane, Sidney Verba.

Preface.

IN THIS BOOK we develop a unified approach to valid descriptive and causal inference in qualitative research, where numerical measurement is either impossible or undesirable. We argue that the logics of good quantitative and good qualitative research designs do not fundamentally differ. Our approach applies equally to these apparently different forms of scholarship.

Our goal in writing this book is to encourage qualitative researchers to take scientific inference seriously and to incorporate it into their work. We hope that our unified logic of inference, and our attempt to demonstrate that this unified logic can be helpful to qualitative researchers, will help improve the work in our discipline and perhaps aid research in other social sciences as well. Thus, we hope that this book is read and critically considered by political scientists and other social scientists of all persuasions and career stages-from qualitative field researchers to statistical analysts, from advanced undergraduates and first-year graduate students to senior scholars. We use some mathematical notation because it is especially helpful in clarifying concepts in qualitative methods; however, we assume no prior knowledge of mathematics or statistics, and most of the notation can be skipped without loss of continuity.



University administrators often speak of the complementarity of teaching and research. Indeed, teaching and research are very nearly coincident, in that they both entail acquiring new knowledge and communicating it to others, albeit in slightly different forms. This book attests to the synchronous nature of these activities. Since 1989, we have been working on this book and jointly teaching the graduate seminar "Qualitative Methods in Social Science" in Harvard University's Department of Government. The seminar has been very lively, and it often has spilled into the halls and onto the pages of lengthy memos passed among ourselves and our students. Our intellectual battles have always been friendly, but our rules of engagement meant that "agreeing to disagree" and compromising were high crimes. If one of us was not truly convinced of a point, we took it as our obligation to continue the debate. In the end, we each learned a great deal about qualitative and quantitative research from one another and from our students and changed many of our initial positions. In addition to its primary purposes, this book is a statement of our hard-won unanimous position on scientific inference in qualitative research.

We completed the first version of this book in 1991 and have revised it extensively in the years since. Gary King first suggested that we write this book, drafted the first versions of most chapters, and took the lead through the long process of revision. However, the book has been rewritten so extensively by Robert Keohane and Sidney Verba, as well as Gary King, that it would be impossible for us to identify the authorship of many passages and sections reliably.

During this long process, we circulated drafts to colleagues around the United States and are indebted to them for the extraordinary generosity of their comments. We are also grateful to the graduate students who have been exposed to this manuscript both at Harvard and at other universities and whose reactions have been important to us in making revisions. Trying to list all the individuals who were helpful in a project such as this is notoriously hazardous (we estimate the probability of inadvertently omitting someone whose comments were important to us to be 0.92). We wish to acknowledge the following individuals: Christopher H. Achen, John Aldrich, Hayward Alker, Robert H. Bates, James Battista, Nathaniel Beck, Nancy Burns, Michael Cobb, David Collier, Gary Cox, Michael C. Desch, David Dessler, Jorge Dominguez, George Downs, Mitchell Duneier, Matthew Evangelista, John Ferejohn, Andrew Gelman, Alexander George, Joshua Goldstein, Andrew Green, David Green, Robin Hanna, Michael Hiscox, James E. Jones, Sr., Miles Kahler, Elizabeth King, Alexander Kozhemiakin, Stephen D. Krasner, Herbert Kritzer, James Kuklinski, Nathan Lane, Peter Lange, Tony Lavelle, Judy Layzer, Jack S. Levy, Daniel Little, Sean Lynn-Jones, Lisa L. Martin, Helen Milner, Gerardo L. Munck, Timothy P. Nokken, Joseph S. Nye, Charles Ragin, Swarna Rajagopalan, Shamara Shantu Riley, David Rocke, David Rohde, Frances Rosenbluth, David Schwieder, Collins G. Shackelford, Jr., Kenneth Shepsle, Daniel Walsh, Carolyn Warner, Steve Aviv Yetiv, Mary Zerbinos, and Michael Zurn. Our appreciation goes to Steve Voss for preparing the index, and to the crew at Princeton University Press, Walter Lippincott, Malcolm DeBevoise, Peter Dougherty, and Alessandra Bocco. Our thanks also go to the National Science Foundation for research grant SBR-9223637 to Gary King. Robert O. Keohane is grateful to the John Simon Guggenheim Memorial Foundation for a fellowship during the term of which work on this book was completed.

We (in various permutations and combinations) were also extremely fortunate to have had the opportunity to present earlier versions of this book in seminars and panels at the Midwest Political Science Association meetings (Chicago, 2-6 April 1990), the Political Methodology Group meetings (Duke University, 18-20 July 1990), the American Political Science Association meetings (Washington, D.C., 29 August-1 September 1991), the Seminar in the Methodology and Philosophy of the Social Sciences (Harvard University, Center for International Affairs, 25 September 1992), the Colloquium Series of the Interdisciplinary Consortium for Statistical Applications (Indiana University, 4 December 1991), the Institute for Global Cooperation and Change seminar series (University of California, Berkeley, 15 January 1993), and the University of Illinois, Urbana-Champaign (18 March 1993).

Gary King.

Robert O. Keohane.

Sidney Verba.

Cambridge, Massachusetts.

CHAPTER 1.

The Science in Social Science.

1.1 INTRODUCTION.

THIS BOOK is about research in the social sciences. Our goal is practical: designing research that will produce valid inferences about social and political life. We focus on political science, but our argument applies to other disciplines such as sociology, anthropology, history, economics, and psychology and to nondisciplinary areas of study such as legal evidence, education research, and clinical reasoning.

This is neither a work in the philosophy of the social sciences nor a guide to specific research tasks such as the design of surveys, conduct of field work, or analysis of statistical data. Rather, this is a book about research design: how to pose questions and fashion scholarly research to make valid descriptive and causal inferences. As such, it occupies a middle ground between abstract philosophical debates and the hands-on techniques of the researcher and focuses on the essential logic underlying all social scientific research.

1.1.1 Two Styles of Research, One Logic of Inference.

Our main goal is to connect the traditions of what are conventionally denoted "quantitative" and "qualitative" research by applying a unified logic of inference to both. The two traditions appear quite different; indeed they sometimes seem to be at war. Our view is that these differences are mainly ones of style and specific technique. The same underlying logic provides the framework for each research approach. This logic tends to be explicated and formalized clearly in discussions of quantitative research methods. But the same logic of inference underlies the best qualitative research, and all qualitative and quantitative researchers would benefit by more explicit attention to this logic in the course of designing research.

The styles of quantitative and qualitative research are very different. Quantitative research uses numbers and statistical methods. It tends to be based on numerical measurements of specific aspects of phenomena; it abstracts from particular instances to seek general description or to test causal hypotheses; it seeks measurements and analyses that are easily replicable by other researchers.

Qualitative research, in contrast, covers a wide range of approaches, but by definition, none of these approaches relies on numerical measurements. Such work has tended to focus on one or a small number of cases, to use intensive interviews or depth analysis of historical materials, to be discursive in method, and to be concerned with a rounded or comprehensive account of some event or unit. Even though they have a small number of cases, qualitative researchers generally unearth enormous amounts of information from their studies. Sometimes this kind of work in the social sciences is linked with area or case studies where the focus is on a particular event, decision, institution, location, issue, or piece of legislation. As is also the case with quantitative research, the instance is often important in its own right: a major change in a nation, an election, a major decision, or a world crisis. Why did the East German regime collapse so suddenly in 1989? More generally, why did almost all the communist regimes of Eastern Europe collapse in 1989? Sometimes, but certainly not always, the event may be chosen as an exemplar of a particular type of event, such as a political revolution or the decision of a particular community to reject a waste disposal site. Sometimes this kind of work is linked to area studies where the focus is on the history and culture of a particular part of the world. The particular place or event is analyzed closely and in full detail.

For several decades, political scientists have debated the merits of case studies versus statistical studies, area studies versus comparative studies, and "scientific" studies of politics using quantitative methods versus "historical" investigations relying on rich textual and contextual understanding. Some quantitative researchers believe that systematic statistical analysis is the only road to truth in the social sciences. Advocates of qualitative research vehemently disagree. This difference of opinion leads to lively debate; but unfortunately, it also bifurcates the social sciences into a quantitative-systematic-generalizing branch and a qualitative-humanistic-discursive branch. As the former becomes more and more sophisticated in the analysis of statistical data (and their work becomes less comprehensible to those who have not studied the techniques), the latter becomes more and more convinced of the irrelevance of such analyses to the seemingly non-replicable and nongeneralizable events in which its practitioners are interested.

A major purpose of this book is to show that the differences between the quantitative and qualitative traditions are only stylistic and are methodologically and substantively unimportant. All good research can be understood-indeed, is best understood-to derive from the same underlying logic of inference. Both quantitative and qualitative research can be systematic and scientific. Historical research can be analytical, seeking to evaluate alternative explanations through a process of valid causal inference. History, or historical sociology, is not incompatible with social science (Skocpol 1984: 374-86).

Breaking down these barriers requires that we begin by questioning the very concept of "qualitative" research. We have used the term in our title to signal our subject matter, not to imply that "qualitative" research is fundamentally different from "quantitative" research, except in style.

Most research does not fit clearly into one category or the other. The best often combines features of each. In the same research project, some data may be collected that is amenable to statistical analysis, while other equally significant information is not. Patterns and trends in social, political, or economic behavior are more readily subjected to quantitative analysis than is the flow of ideas among people or the difference made by exceptional individual leadership. If we are to understand the rapidly changing social world, we will need to include information that cannot be easily quantified as well as that which can. Furthermore, all social science requires comparison, which entails judgments of which phenomena are "more" or "less" alike in degree (i.e., quantitative differences) or in kind (i.e., qualitative differences).

Two excellent recent studies exemplify this point. In Coercive Cooperation (1992), Lisa L. Martin sought to explain the degree of international cooperation on economic sanctions by quantitatively analyzing ninety-nine cases of attempted economic sanctions from the post-World War II era. Although this quantitative analysis yielded much valuable information, certain causal inferences suggested by the data were ambiguous; hence, Martin carried out six detailed case studies of sanctions episodes in an attempt to gather more evidence relevant to her causal inference. For Making Democracy Work (1993), Robert D. Putnam and his colleagues interviewed 112 Italian regional councillors in 1970, 194 in 1976, and 234 in 1981-1982, and 115 community leaders in 1976 and 118 in 1981-1982. They also sent a mail questionnaire to over 500 community leaders throughout the country in 1983. Four nationwide mass surveys were undertaken especially for this study. Nevertheless, between 1976 and 1989 Putnam and his colleagues conducted detailed case studies of the politics of six regions. Seeking to satisfy the "interocular traumatic test," the investigators "gained an intimate knowledge of the internal political maneuvering and personalities that have animated regional politics over the last two decades" (Putnam 1993:190).

The lessons of these efforts should be clear: neither quantitative nor qualitative research is superior to the other, regardless of the research problem being addressed. Since many subjects of interest to social scientists cannot be meaningfully formulated in ways that permit statistical testing of hypotheses with quantitative data, we do not wish to encourage the exclusive use of quantitative techniques. We are not trying to get all social scientists out of the library and into the computer center, or to replace idiosyncratic conversations with structured interviews. Rather, we argue that nonstatistical research will produce more reliable results if researchers pay attention to the rules of scientific inference-rules that are sometimes more clearly stated in the style of quantitative research. Precisely defined statistical methods that undergird quantitative research represent abstract formal models applicable to all kinds of research, even that for which variables cannot be measured quantitatively. The very abstract, and even unrealistic, nature of statistical models is what makes the rules of inference shine through so clearly.

The rules of inference that we discuss are not relevant to all issues that are of significance to social scientists. Many of the most important questions concerning political life-about such concepts as agency, obligation, legitimacy, citizenship, sovereignty, and the proper relationship between national societies and international politics-are philosophical rather than empirical. But the rules are relevant to all research where the goal is to learn facts about the real world. Indeed, the distinctive characteristic that sets social science apart from casual observation is that social science seeks to arrive at valid inferences by the systematic use of well-established procedures of inquiry. Our focus here on empirical research means that we sidestep many issues in the philosophy of social science as well as controversies about the role of postmodernism, the nature and existence of truth, relativism, and related subjects. We assume that it is possible to have some knowledge of the external world but that such knowledge is always uncertain.

Furthermore, nothing in our set of rules implies that we must run the perfect experiment (if such a thing existed) or collect all relevant data before we can make valid social scientific inferences. An important topic is worth studying even if very little information is available. The result of applying any research design in this situation will be relatively uncertain conclusions, but so long as we honestly report our uncertainty, this kind of study can be very useful. Limited information is often a necessary feature of social inquiry. Because the social world changes rapidly, analyses that help us understand those changes require that we describe them and seek to understand them contemporaneously, even when uncertainty about our conclusions is high. The urgency of a problem may be so great that data gathered by the most useful scientific methods might be obsolete before it can be accumulated. If a distraught person is running at us swinging an ax, administering a five-page questionnaire on psychopathy may not be the best strategy. Joseph Schumpeter once cited Albert Einstein, who said "as far as our propositions are certain, they do not say anything about reality, and as far as they do say anything about reality, they are not certain" (Schumpeter [1936] 1991:298-99). Yet even though certainty is unattainable, we can improve the reliability, validity, certainty, and honesty of our conclusions by paying attention to the rules of scientific inference. The social science we espouse seeks to make descriptive and causal inferences about the world. Those who do not share the assumptions of partial and imperfect knowability and the aspiration for descriptive and causal understanding will have to look elsewhere for inspiration or for paradigmatic battles in which to engage.

In sum, we do not provide recipes for scientific empirical research. We offer a number of precepts and rules, but these are meant to discipline thought, not stifle it. In both quantitative and qualitative research, we engage in the imperfect application of theoretical standards of inference to inherently imperfect research designs and empirical data. Any meaningful rules admit of exceptions, but we can ask that exceptions be justified explicitly, that their implications for the reliability of research be assessed, and that the uncertainty of conclusions be reported. We seek not dogma, but disciplined thought.

1.1.2 Defining Scientific Research in the Social Sciences.

Our definition of "scientific research" is an ideal to which any actual quantitative or qualitative research, even the most careful, is only an approximation. Yet, we need a definition of good research, for which we use the word "scientific" as our descriptor.1 This word comes with many connotations that are unwarranted or inappropriate or downright incendiary for some qualitative researchers. Hence, we provide an explicit definition here. As should be clear, we do not regard quantitative research to be any more scientific than qualitative research. Good research, that is, scientific research, can be quantitative or qualitative in style. In design, however, scientific research has the following four characteristics:

1. The goal is inference. Scientific research is designed to make descriptive or explanatory inferences on the basis of empirical information about the world. Careful descriptions of specific phenomena are often indispensable to scientific research, but the accumulation of facts alone is not sufficient. Facts can be collected (by qualitative or quantitative researchers) more or less systematically, and the former is obviously better than the latter, but our particular definition of science requires the additional step of attempting to infer beyond the immediate data to something broader that is not directly observed. That something may involve descriptive inference-using observations from the world to learn about other unobserved facts. Or that something may involve causal inference-learning about causal effects from the data observed. The domain of inference can be restricted in space and time-voting behavior in American elections since 1960, social movements in Eastern Europe since 1989-or it can be extensive-human behavior since the invention of agriculture. In either case, the key distinguishing mark of scientific research is the goal of making inferences that go beyond the particular observations collected.

2. The procedures are public. Scientific research uses explicit, codified, and public methods to generate and analyze data whose reliability can therefore be assessed. Much social research in the qualitative style follows fewer precise rules of research procedure or of inference. As Robert K. Merton ([1949] 1968:71-72) put it, "The sociological analysis of qualitative data often resides in a private world of penetrating but unfathomable insights and ineffable understandings.... [However,] science ... is public, not private." Merton's statement is not true of all qualitative researchers (and it is unfortunately still true of some quantitative analysts), but many proceed as if they had no method-sometimes as if the use of explicit methods would diminish their creativity. Nevertheless they cannot help but use some method. Somehow they observe phenomena, ask questions, infer information about the world from these observations, and make inferences about cause and effect. If the method and logic of a researcher's observations and inferences are left implicit, the scholarly community has no way of judging the validity of what was done. We cannot evaluate the principles of selection that were used to record observations, the ways in which observations were processed, and the logic by which conclusions were drawn. We cannot learn from their methods or replicate their results. Such research is not a public act. Whether or not it makes good reading, it is not a contribution to social science.

All methods-whether explicit or not-have limitations. The advantage of explicitness is that those limitations can be understood and, if possible, addressed. In addition, the methods can be taught and shared. This process allows research results to be compared across separate researchers and research projects, studies to be replicated, and scholars to learn.

3. The conclusions are uncertain. By definition, inference is an imperfect process. Its goal is to use quantitative or qualitative data to learn about the world that produced them. Reaching perfectly certain conclusions from uncertain data is obviously impossible. Indeed, uncertainty is a central aspect of all research and all knowledge about the world. Without a reasonable estimate of uncertainty, a description of the real world or an inference about a causal effect in the real world is uninterpretable. A researcher who fails to face the issue of uncertainty directly is either asserting that he or she knows everything perfectly or that he or she has no idea how certain or uncertain the results are. Either way, inferences without uncertainty estimates are not science as we define it. (A minimal numerical sketch of reporting an estimate together with its uncertainty appears after this list.)

4. The content is the method. Finally, scientific research adheres to a set of rules of inference on which its validity depends. Explicating the most important rules is a major task of this book.2 The content of "science" is primarily the methods and rules, not the subject matter, since we can use these methods to study virtually anything. This point was recognized over a century ago when Karl Pearson (1892: 16) explained that "the field of science is unlimited; its material is endless; every group of natural phenomena, every phase of social life, every stage of past or present development is material for science. The unity of all science consists alone in its method, not in its material."
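To make the third characteristic concrete, here is a minimal sketch of reporting an inference together with an explicit uncertainty estimate. It is our illustration, not an example from the book, and the survey numbers are hypothetical; the point is simply that the estimate and its uncertainty are reported side by side.

```python
import math

# Hypothetical survey: 412 of 1,000 randomly sampled voters support a policy.
# The descriptive inference is about the unobserved population proportion,
# not merely the sample itself.
successes, n = 412, 1000
p_hat = successes / n                     # point estimate
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the estimate
z = 1.96                                  # normal approximation, 95% confidence
low, high = p_hat - z * se, p_hat + z * se

# Reporting the estimate without the interval would hide how uncertain it is.
print(f"Estimated support: {p_hat:.3f} (95% interval: {low:.3f} to {high:.3f})")
```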

These four features of science have a further implication: science at its best is a social enterprise. Every researcher or team of researchers labors under limitations of knowledge and insight, and mistakes are unavoidable, yet such errors will likely be pointed out by others. Understanding the social character of science can be liberating since it means that our work need not be beyond criticism to make an important contribution-whether to the description of a problem or its conceptualization, to theory or to the evaluation of theory. As long as our work explicitly addresses (or attempts to redirect) the concerns of the community of scholars and uses public methods to arrive at inferences that are consistent with rules of science and the information at our disposal, it is likely to make a contribution. And the contribution of even a minor article is greater than that of the "great work" that stays forever in a desk drawer or within the confines of a computer.

1.1.3 Science and Complexity.

Social science constitutes an attempt to make sense of social situations that we perceive as more or less complex. We need to recognize, however, that what we perceive as complexity is not entirely inherent in phenomena: the world is not naturally divided into simple and complex sets of events. On the contrary, the perceived complexity of a situation depends in part on how well we can simplify reality, and our capacity to simplify depends on whether we can specify outcomes and explanatory variables in a coherent way. Having more observations may assist us in this process but is usually insufficient. Thus "complexity" is partly conditional on the state of our theory.

Scientific methods can be as valuable for intrinsically complex events as for simpler ones. Complexity is likely to make our inferences less certain but should not make them any less scientific. Uncertainty and limited data should not cause us to abandon scientific research. On the contrary: the biggest payoff for using the rules of scientific inference occurs precisely when data are limited, observation tools are flawed, measurements are unclear, and relationships are uncertain. With clear relationships and unambiguous data, method may be less important, since even partially flawed rules of inference may produce answers that are roughly correct.

Consider some complex, and in some sense unique, events with enormous ramifications. The collapse of the Roman Empire, the French Revolution, the American Civil War, World War I, the Holocaust, and the reunification of Germany in 1990 are all examples of such events. These events seem to be the result of complex interactions of many forces whose conjuncture appears crucial to the event having taken place. That is, independently caused sequences of events and forces converged at a given place and time, their interaction appearing to bring about the events being observed (Hirschman 1970). Furthermore, it is often difficult to believe that these events were inevitable products of large-scale historical forces: some seem to have depended, in part, on idiosyncrasies of personalities, institutions, or social movements. Indeed, from the perspective of our theories, chance often seems to have played a role: factors outside the scope of the theory provided crucial links in the sequences of events.

One way to understand such events is by seeking generalizations: conceptualizing each case as a member of a class of events about which meaningful generalizations can be made. This method often works well for ordinary wars or revolutions, but some wars and revolutions, being much more extreme than others, are "outliers" in the statistical distribution. Furthermore, notable early wars or revolutions may exert such a strong impact on subsequent events of the same class-we think again of the French Revolution-that caution is necessary in comparing them with their successors, which may be to some extent the product of imitation. Expanding the class of events can be useful, but it is not always appropriate.

Another way of dealing scientifically with rare, large-scale events is to engage in counterfactual analysis: "the mental construction of a course of events which is altered through modifications in one or more 'conditions' " (Weber [1905] 1949:173). The application of this idea in a systematic, scientific way is illustrated in a particularly extreme example of a rare event from geology and evolutionary biology, both historically oriented natural sciences. Stephen J. Gould has suggested that one way to distinguish systematic features of evolution from stochastic, chance events may be to imagine what the world would be like if all conditions up to a specific point were fixed and then the rest of history were rerun. He contends that if it were possible to "replay the tape of life," to let evolution occur again from the beginning, the world's organisms today would be completely different (Gould 1989a).
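Gould's thought experiment can be mimicked, very loosely, with a simulation. The sketch below is our own toy illustration, not Gould's procedure or the authors' method: it "replays" the same stochastic process many times from identical starting conditions, which makes visible how much of any single history is chance rather than the systematic component.

```python
import random

def replay(steps=1000, drift=0.01, seed=None):
    """One simulated 'history': a fixed systematic drift plus random shocks."""
    rng = random.Random(seed)
    position = 0.0
    for _ in range(steps):
        position += drift + rng.gauss(0, 1)
    return position

# Replay the 'tape' twenty times from the same initial conditions.
outcomes = [replay() for _ in range(20)]
print("Systematic expectation after 1000 steps:", 0.01 * 1000)
print("Twenty replayed histories:", [round(x, 1) for x in outcomes])
# The wide spread across replays shows outcomes driven largely by chance,
# even though the systematic rule (the drift) never changes.
```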

A unique event on which students of evolution have recently focused is the sudden extinction of the dinosaurs 65 million years ago. Gould (1989a:318) says, "we must assume that consciousness would not have evolved on our planet if a cosmic catastrophe had not claimed the dinosaurs as victims." If this statement is true, the extinction of the dinosaurs was as important as any historical event for human beings; however, dinosaur extinction does not fall neatly into a class of events that could be studied in a systematic, comparative fashion through the application of general laws in a straightforward way.

Nevertheless, dinosaur extinction can be studied scientifically: alternative hypotheses can be developed and tested with respect to their observable implications. One hypothesis to account for dinosaur extinction, developed by Luis Alvarez and collaborators at Berkeley in the late 1970s (W. Alvarez and Asaro, 1990), posits a cosmic collision: a meteorite crashed into the earth at about 72,000 kilometers an hour, creating a blast greater than that from a full-scale nuclear war. If this hypothesis is correct, it would have the observable implication that iridium (an element common in meteorites but rare on earth) should be found in the particular layer of the earth's crust that corresponds to sediment laid down sixty-five million years ago; indeed, the discovery of iridium at predicted layers in the earth has been taken as partial confirming evidence for the theory. Although this is an unambiguously unique event, there are many other observable implications. For one example, it should be possible to find the meteorite's crater somewhere on Earth (and several candidates have already been found).3

The issue of the cause(s) of dinosaur extinction remains unresolved, although the controversy has generated much valuable research. For our purposes, the point of this example is that scientific generalizations are useful in studying even highly unusual events that do not fall into a large class of events. The Alvarez hypothesis cannot be tested with reference to a set of common events, but it does have observable implications for other phenomena that can be evaluated. We should note, however, that a hypothesis is not considered a reasonably certain explanation until it has been evaluated empirically and passed a number of demanding tests. At a minimum, its implications must be consistent with our knowledge of the external world; at best, it should predict what Imre Lakatos (1970) refers to as "new facts," that is, those formerly unobserved.
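As a purely illustrative sketch of evaluating one observable implication mentioned above, consider the comparison below. The numbers are simulated placeholders, not real geochemical measurements; the sketch simply asks whether a boundary-layer sample is enriched in iridium far beyond ordinary background variation.

```python
import statistics

# Simulated, illustrative iridium concentrations (parts per billion); not real data.
background_layers = [0.28, 0.31, 0.25, 0.30, 0.27, 0.29, 0.26, 0.32]
boundary_layer = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]

bg_mean = statistics.mean(background_layers)
bg_sd = statistics.stdev(background_layers)
kt_mean = statistics.mean(boundary_layer)

# A crude check of the observable implication: how many background standard
# deviations above the background mean does the boundary layer sit?
enrichment = (kt_mean - bg_mean) / bg_sd
print(f"Background mean: {bg_mean:.2f} ppb, boundary mean: {kt_mean:.2f} ppb")
print(f"Boundary layer is about {enrichment:.0f} background SDs above background.")
```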

The point is that even apparently unique events such as dinosaur extinction can be studied scientifically if we pay attention to improving theory, data, and our use of the data. Improving our theory through conceptual clarification and specification of variables can generate more observable implications and even test causal theories of unique events such as dinosaur extinction. Improving our data allows us to observe more of these observable implications, and improving our use of data permits more of these implications to be extracted from existing data. That a set of events to be studied is highly complex does not render careful research design irrelevant. Whether we study many phenomena or few-or even one-the study will be improved if we collect data on as many observable implications of our theory as possible.

1.2 MAJOR COMPONENTS OF RESEARCH DESIGN.

Social science research at its best is a creative process of insight and discovery taking place within a well-established structure of scientific inquiry. The first-rate social scientist does not regard a research design as a blueprint for a mechanical process of data-gathering and evaluation. To the contrary, the scholar must have the flexibility of mind to overturn old ways of looking at the world, to ask new questions, to revise research designs appropriately, and then to collect more data of a different type than originally intended. However, if the researcher's findings are to be valid and accepted by scholars in this field, all these revisions and reconsiderations must take place according to explicit procedures consistent with the rules of inference. A dynamic process of inquiry occurs within a stable structure of rules.

Social scientists often begin research with a considered design, collect some data, and draw conclusions. But this process is rarely a smooth one and is not always best done in this order: conclusions rarely follow easily from a research design and data collected in accordance with it. Once an investigator has collected data as provided by a research design, he or she will often find an imperfect fit among the main research questions, the theory and the data at hand. At this stage, researchers often become discouraged. They mistakenly believe that other social scientists find close, immediate fits between data and research. This perception is due to the fact that investigators often take down the scaffolding after putting up their intellectual buildings, leaving little trace of the agony and uncertainty of construction. Thus the process of inquiry seems more mechanical and cut-and-dried than it actually is.

Some of our advice is directed toward researchers who are trying to make connections between theory and data. At times, they can design more appropriate data-collection procedures in order to evaluate a theory better; at other times, they can use the data they have and recast a theoretical question (or even pose an entirely different question that was not originally foreseen) to produce a more important research project. The research, if it adheres to rules of inference, will still be scientific and produce reliable inferences about the world.

Wherever possible, researchers should also improve their research designs before conducting any field research. However, data has a way of disciplining thought. It is extremely common to find that the best research design falls apart when the very first observations are collected-it is not that the theory is wrong but that the data are not suited to answering the questions originally posed. Understanding from the outset what can and what cannot be done at this later stage can help the researcher anticipate at least some of the problems when first designing the research.

For analytical purposes, we divide all research designs into four components: the research question, the theory, the data, and the use of the data. These components are not usually developed separately and scholars do not attend to them in any preordained order. In fact, for qualitative researchers who begin their field work before choosing a precise research question, data comes first, followed by the others. However, this particular breakdown, which we explain in sections 1.2.1-1.2.4, is particularly useful for understanding the nature of research designs. In order to clarify precisely what could be done if resources were redirected, our advice in the remainder of this section assumes that researchers have unlimited time and resources. Of course, in any actual research situation, one must always make compromises. We believe that understanding the advice in the four categories that follow will help researchers make these compromises in such a way as to improve their research designs most, even when in fact their research is subject to external constraints.

1.2.1 Improving Research Questions.

Throughout this book, we consider what to do once we identify the object of research. Given a research question, what are the ways to conduct that research so that we can obtain valid explanations of social and political phenomena? Our discussion begins with a research question and then proceeds to the stages of designing and conducting the research. But where do research questions originate? How does a scholar choose the topic for analysis? There is no simple answer to this question. Like others, Karl Popper (1968:32) has argued that "there is no such thing as a logical method of having new ideas.... Discovery contains 'an irrational element,' or a 'creative intuition.' " The rules of choice at the earliest stages of the research process are less formalized than are the rules for other research activities. There are texts on designing laboratory experiments on social choice, statistical criteria for drawing a sample for a survey of attitudes on public policy, and manuals on conducting participant observation of a bureaucratic office. But there is no rule for choosing which research project to conduct, nor, if we should decide to conduct field work, are there rules governing where we should conduct it.

We can propose ways to select a sample of communities in order to study the impact of alternative educational policies, or ways to conceptualize ethnic conflict in a manner conducive to the formulation and testing of hypotheses as to its incidence. But there are no rules that tell us whether to study educational policy or ethnic conflict. In terms of social science methods, there are better and worse ways to study the collapse of the East German government in 1989 just as there are better and worse ways to study the relationship between a candidate's position on taxes and the likelihood of electoral success. But there is no way to determine whether it is better to study the collapse of the East German regime or the role of taxes in U.S. electoral politics.

The specific topic that a social scientist studies may have a personal and idiosyncratic origin. It is no accident that research on particular groups is likely to be pioneered by people of that group: women have often led the way in the history of women, blacks in the history of blacks, immigrants in the history of immigration. Topics may also be influenced by personal inclination and values. The student of third-world politics is likely to have a greater desire for travel and a greater tolerance for difficult living conditions than the student of congressional policy making; the analyst of international cooperation may have a particular distaste for violent conflict.

These personal experiences and values often provide the motivation to become a social scientist and, later, to choose a particular research question. As such, they may constitute the "real" reasons for engaging in a particular research project-and appropriately so. But, no matter how personal or idiosyncratic the reasons for choosing a topic, the methods of science and rules of inference discussed in this book will help scholars devise more powerful research designs. From the perspective of a potential contribution to social science, personal reasons are neither necessary nor sufficient justifications for the choice of a topic. In most cases, they should not appear in our scholarly writings. To put it most directly but quite indelicately, no one cares what we think-the scholarly community only cares what we can demonstrate.

Though precise rules for choosing a topic do not exist, there are ways-beyond individual preferences-of determining the likely value of a research enterprise to the scholarly community. Ideally, all research projects in the social sciences should satisfy two criteria. First, a research project should pose a question that is "important" in the real world. The topic should be consequential for political, social, or economic life, for understanding something that significantly affects many people's lives, or for understanding and predicting events that might be harmful or beneficial (see Shively 1990:15). Second, a research project should make a specific contribution to an identifiable scholarly literature by increasing our collective ability to construct verified scientific explanations of some aspect of the world. This latter criterion does not imply that all research that contributes to our stock of social science explanations in fact aims directly at making causal inferences. Sometimes the state of knowledge in a field is such that much fact-finding and description is needed before we can take on the challenge of explanation. Often the contribution of a single project will be descriptive inference. Sometimes the goal may not even be descriptive inference but rather will be the close observation of particular events or the summary of historical detail. These, however, meet our second criterion because they are prerequisites to explanation.

Our first criterion directs our attention to the real world of politics and social phenomena and to the current and historical record of the events and problems that shape people's lives. Whether a research question meets this criterion is essentially a societal judgment. The second criterion directs our attention to the scholarly literature of social science, to the intellectual puzzles not yet posed, to puzzles that remain to be solved, and to the scientific theories and methods available to solve them.

Political scientists have no difficulty finding subject matter that meets our first criterion. Ten major wars during the last four hundred years have killed almost thirty million people (Levy 1985:372); some "limited wars," such as those between the United States and North Vietnam and between Iran and Iraq, have each claimed over a million lives; and nuclear war, were it to occur, could kill billions of human beings. Political mismanagement, both domestic and international, has led to economic privation on a global basis-as in the 1930s-as well as to regional and local depression, as evidenced by the tragic experiences of much of Africa and Latin America during the 1980s. In general, cross-national variation in political institutions is associated with great variation in the conditions of ordinary human life, which are reflected in differences in life expectancy and infant mortality between countries with similar levels of economic development (Russett 1978:913-28). Within the United States, programs designed to alleviate poverty or social disorganization seem to have varied greatly in their efficacy. It cannot be doubted that research which contributes even marginally to an understanding of these issues is important.

While social scientists have an abundance of significant questions that can be investigated, the tools for understanding them are scarce and rather crude. Much has been written about war or social misery that adds little to the understanding of these issues because it fails either to describe these phenomena systematically or to make valid causal or descriptive inferences. Brilliant insights can contribute to understanding by yielding interesting new hypotheses, but brilliance is not a method of empirical research. All hypotheses need to be evaluated empirically before they can make a contribution to knowledge. This book offers no advice on becoming brilliant. What it can do, however, is to emphasize the importance of conducting research so that it constitutes a contribution to knowledge.

Our second criterion for choosing a research question, "making a contribution," means explicitly locating a research design within the framework of the existing social scientific literature. This ensures that the investigator understands the "state of the art" and minimizes the chance of duplicating what has already been done. It also guarantees that the work done will be important to others, thus improving the success of the community of scholars taken as a whole. Making an explicit contribution to the literature can be done in many different ways. We list a few of the possibilities here:

1. Choose a hypothesis seen as important by scholars in the literature but for which no one has completed a systematic study. If we find evidence in favor of or opposed to the favored hypothesis, we will be making a contribution.

2. Choose an accepted hypothesis in the literature that we suspect is false (or one we believe has not been adequately confirmed) and investigate whether it is indeed false or whether some other theory is correct.

3. Attempt to resolve or provide further evidence of one side of a controversy in the literature-perhaps demonstrate that the controversy was unfounded from the start.

4. Design research to illuminate or evaluate unquestioned assumptions in the literature.

5. Argue that an important topic has been overlooked in the literature and then proceed to contribute a systematic study to the area.

6. Show that theories or evidence designed for some purpose in one literature could be applied in another literature to solve an existing but apparently unrelated problem.

Focusing too much on making a contribution to a scholarly literature without some attention to topics that have real-world importance runs the risk of descending to politically insignificant questions. Conversely, attention to the current political agenda without regard to issues of the amenability of a subject to systematic study within the framework of a body of social science knowledge leads to careless work that adds little to our deeper understanding.

Our two criteria for choosing research questions are not necessarily in opposition to one another. In the long run, understanding real-world phenomena is enhanced by the generation and evaluation of explanatory hypotheses through the use of the scientific method. But in the short term, there may be a contradiction between practical usefulness and long-term scientific value. For instance, Mankiw (1990) points out that macroeconomic theory and applied macroeconomics diverged sharply during the 1970s and 1980s: models that had been shown to be theoretically incoherent were still used to forecast the direction of the U.S. economy, while the new theoretical models designed to correct these flaws remained speculative and were not sufficiently refined to make accurate predictions.

The criteria of practical applicability to the real world and contribution to scientific progress may seem opposed to one another when a researcher chooses a topic. Some researchers will begin with a real-world problem that is of great social significance: the threat of nuclear war, the income gap between men and women, the transition to democracy in Eastern Europe. Others may start with an intellectual problem generated by the social science literature: a contradiction between several experimental studies of decision-making under uncertainty or an inconsistency between theories of congressional voting and recent election outcomes. The distinction between the criteria is, of course, not hard and fast. Some research questions satisfy both criteria from the beginning, but in designing research, researchers often begin nearer one than the other.4

Wherever it begins, the process of designing research to answer a specific question should move toward the satisfaction of our two criteria. And obviously our direction of movement will depend on where we start. If we are motivated by a social scientific puzzle, we must ask how to make that research topic more relevant to real-world topics of significance-for instance, how laboratory experiments might better illuminate real-world strategic choices by political decision-makers, or what behavioral consequences the theory might have. If we begin with a real-world problem, we should ask how that problem can be studied with modern scientific methods so that it contributes to the stock of social science explanations. It may be that we will decide that moving too far from one criterion or the other is not the most fruitful approach. Laboratory experimenters may argue that the search for external referents is premature and that more progress will be made by refining theory and method in the more controlled environment of the laboratory. And in terms of a long-term research program, they may be right. Conversely, the scholar motivated by a real-world problem may argue that accurate description is needed before moving to explanation. And such a researcher may also be right. Accurate description is an important step in explanatory research programs.

In either case, a research program, and if possible a specific research project, should aim to satisfy our two criteria: it should deal with a significant real-world topic and be designed to contribute, directly or indirectly, to a specific scholarly literature. Since our main concern in this book is making qualitative research more scientific, we will primarily address the researcher who starts with the "real-world" perspective. But our analysis is relevant to both types of investigator.

If we begin with a significant real-world problem rather than with an established literature, it is essential to devise a workable plan for studying it. A proposed topic that cannot be refined into a specific research project permitting valid descriptive or causal inference should be modified along the way or abandoned. A proposed topic that will make no contribution to some scholarly literature should similarly be changed. Having tentatively chosen a topic, we enter a dialogue with the literature. What questions of interest to us have already been answered? How can we pose and refine our question so that it seems capable of being answered with the tools available? We may start with a burning issue, but we will have to come to grips both with the literature of social science and the problems of inference.

1.2.2 Improving Theory.

A social science theory is a reasoned and precise speculation about the answer to a research question, including a statement about why the proposed answer is correct. Theories usually imply several more specific descriptive or causal hypotheses. A theory must be consistent with prior evidence about a research question. "A theory that ignores existing evidence is an oxymoron. If we had the equivalent of 'truth in advertising' legislation, such an oxymoron should not be called a theory" (Lieberson 1992:4; see also Woods and Walton 1982).
