Quasi-experiment

From Wikipedia, the free encyclopedia

A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on its target population without random assignment. Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to treatment or control. Instead, quasi-experimental designs typically allow the researcher to control assignment to the treatment condition, but using some criterion other than random assignment (e.g., an eligibility cutoff mark).[1]

Quasi-experiments are subject to concerns regarding internal validity, because the treatment and control groups may not be comparable at baseline. In other words, it may not be possible to convincingly demonstrate a causal link between the treatment condition and observed outcomes. This is particularly true if there are confounding variables that cannot be controlled or accounted for.[2]

With random assignment, study participants have the same chance of being assigned to the intervention group or the comparison group. As a result, differences between groups on both observed and unobserved characteristics are due to chance rather than to a systematic factor related to treatment (e.g., illness severity). Randomization itself does not guarantee that groups will be exactly equivalent at baseline, but it makes systematic baseline differences unlikely, so any change in characteristics post-intervention is likely attributable to the intervention.

Design

The first part of creating a quasi-experimental design is to identify the variables. The independent variable will be the x-variable, the variable that is manipulated in order to affect a dependent variable. "X" is generally a grouping variable with different levels. Grouping means two or more groups, such as two groups receiving alternative treatments, or a treatment group and a no-treatment group (which may be given a placebo; placebos are more frequently used in medical or physiological experiments). The predicted outcome is the dependent variable, which is the y-variable. In a time series analysis, the dependent variable is observed over time for any changes that may take place. Once the variables have been identified and defined, a procedure should then be implemented and group differences should be examined.[3]
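
As a minimal illustration of these steps (with simulated data and illustrative variable names), the sketch below defines a grouping variable by an eligibility cutoff rather than by random assignment, records a dependent variable, and then examines the group difference:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Simulated study units with an eligibility score (all names are illustrative).
n = 200
score = rng.normal(50, 10, n)

# Grouping (quasi-independent) variable "X": assignment by a cutoff, not at random.
treated = (score >= 55).astype(int)

# Dependent variable "Y": outcome influenced by the treatment plus noise.
outcome = 0.1 * score + 2.0 * treated + rng.normal(0, 1, n)

df = pd.DataFrame({"score": score, "treated": treated, "outcome": outcome})

# Examine group differences on the dependent variable.
print(df.groupby("treated")["outcome"].mean())
print(stats.ttest_ind(df.loc[df.treated == 1, "outcome"],
                      df.loc[df.treated == 0, "outcome"]))
```

Because assignment here depends on the eligibility score, the raw group difference mixes the treatment effect with pre-existing differences between the groups, which is the internal-validity concern discussed above.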

In an experiment with random assignment, study units have the same chance of being assigned to a given treatment condition. As such, random assignment ensures that both the experimental and control groups are equivalent. In a quasi-experimental design, assignment to a given treatment condition is based on something other than random assignment. Depending on the type of quasi-experimental design, the researcher might have control over assignment to the treatment condition but use some criterion other than random assignment (e.g., a cutoff score) to determine which participants receive the treatment, or the researcher may have no control over the treatment condition assignment and the criteria used for assignment may be unknown. Factors such as cost, feasibility, political concerns, or convenience may influence how or whether participants are assigned to a given treatment condition, and as such, quasi-experiments are subject to concerns regarding internal validity (i.e., can the results of the experiment be used to make a causal inference?).

Quasi-experiments are also effective because they use pre-post testing. This means that tests are done before any data are collected to see whether there are any person confounds or whether any participants have certain tendencies. The actual experiment is then conducted, with post-test results recorded. These data can be compared as part of the study, or the pre-test data can be included in an explanation of the actual experimental data. Quasi-experiments have independent variables that already exist, such as age, gender, and eye color. These variables can be either continuous (age) or categorical (gender). In short, naturally occurring variables are measured within quasi-experiments.[4]
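
A minimal sketch of the pre-post idea, assuming two simulated, pre-existing (nonequivalent) groups with hypothetical pre-test and post-test scores: the pre-test is first used to gauge baseline comparability, and the pre-to-post change is then compared across groups.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Two pre-existing (nonequivalent) groups, e.g. two intact classrooms.
n = 100
group = np.repeat([0, 1], n)                      # 0 = comparison, 1 = treatment
pre = rng.normal(70, 8, 2 * n) + 3 * group        # groups already differ at baseline
post = pre + 5 * group + rng.normal(0, 4, 2 * n)  # treatment adds about 5 points

df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# Pre-test: check how comparable the groups are before the treatment.
print(df.groupby("group")["pre"].mean())

# Post-test: compare the pre-to-post change, which partially adjusts
# for the baseline difference between the nonequivalent groups.
df["change"] = df["post"] - df["pre"]
print(df.groupby("group")["change"].mean())
```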

There are several types of quasi-experimental designs, each with different strengths, weaknesses and applications. These designs include (but are not limited to):[5]

  • Difference in differences (pre-post with-without comparison; a sketch follows this list)
  • Nonequivalent control groups design
    • no-treatment control group designs
    • nonequivalent dependent variables designs
    • removed treatment group designs
    • repeated treatment designs
    • reversed treatment nonequivalent control groups designs
    • cohort designs
    • post-test only designs
    • regression continuity designs
  • Regression discontinuity design
  • Case-control design
  • Time-series designs
    • multiple time series design
    • interrupted time series design
  • Propensity score matching
  • Instrumental variables
  • Panel analysis
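
As a simple illustration of the first design listed above, the following sketch (simulated data, illustrative variable names) computes a difference-in-differences estimate by regressing the outcome on group, period, and their interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical observations from a treated group and an untreated comparison
# group, each observed before and after the intervention.
n = 400
treated = rng.integers(0, 2, n)    # 1 = group exposed to the intervention
post = rng.integers(0, 2, n)       # 0 = before, 1 = after
y = (10
     + 2.0 * treated               # stable pre-existing group difference
     + 1.5 * post                  # common time trend shared by both groups
     + 3.0 * treated * post        # true treatment effect used for simulation
     + rng.normal(0, 2, n))

df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# The interaction coefficient is the difference-in-differences estimate:
# it nets out the stable group difference and the common time trend.
fit = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(fit.params["treated:post"])
```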

Of all of these designs, the regression discontinuity design comes the closest to the experimental design, as the experimenter maintains control of the treatment assignment and it is known to "yield an unbiased estimate of the treatment effects".[5]: 242  It does, however, require large numbers of study participants and precise modeling of the functional form between the assignment and the outcome variable, in order to yield the same power as a traditional experimental design.
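
A minimal sketch of the regression discontinuity idea, assuming a simulated assignment score, a known cutoff, and an arbitrary bandwidth; it estimates the treatment effect as the jump in the outcome at the cutoff using a local linear fit on each side:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical assignment variable with a known eligibility cutoff.
n = 2000
x = rng.uniform(0, 100, n)
cutoff = 50.0
treated = (x >= cutoff).astype(float)
y = 0.05 * x + 4.0 * treated + rng.normal(0, 1, n)   # simulated jump of 4 at the cutoff

# Local linear regression within a bandwidth on each side of the cutoff;
# the treatment effect is the estimated discontinuity (jump) at the cutoff.
bandwidth = 10.0
window = np.abs(x - cutoff) <= bandwidth
xc = x[window] - cutoff                               # centre the running variable
design = sm.add_constant(np.column_stack([treated[window], xc, treated[window] * xc]))
fit = sm.OLS(y[window], design).fit()
print(fit.params[1])   # coefficient on the treatment indicator, i.e. the jump
```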

Though quasi-experiments are sometimes shunned by those who consider themselves to be experimental purists (leading Donald T. Campbell to coin the term “queasy experiments” for them),[6] they are exceptionally useful in areas where it is not feasible or desirable to conduct an experiment or randomized controlled trial. Such instances include evaluating the impact of public policy changes, educational interventions or large-scale health interventions. The primary drawback of quasi-experimental designs is that they cannot eliminate the possibility of confounding bias, which can hinder one's ability to draw causal inferences. This drawback is often used to discount quasi-experimental results. However, such bias can be controlled for using various statistical techniques such as multiple regression, if one can identify and measure the confounding variable(s). Such techniques can be used to model and partial out the effects of confounding variables, thereby improving the accuracy of the results obtained from quasi-experiments. Moreover, the developing use of propensity score matching to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results. In fact, data derived from quasi-experimental analyses have been shown to closely match experimental data in certain cases, even when different criteria were used.[7] In sum, quasi-experiments are a valuable tool, especially for the applied researcher. On their own, quasi-experimental designs do not allow one to make definitive causal inferences; however, they provide necessary and valuable information that cannot be obtained by experimental methods alone. Researchers, especially those interested in investigating applied research questions, should move beyond the traditional experimental design and avail themselves of the possibilities inherent in quasi-experimental designs.[5]
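
A minimal sketch of propensity score matching on simulated observational data (illustrative covariate names): a logistic model estimates each unit's probability of treatment, each treated unit is then matched to the control unit with the nearest score, and outcomes are compared within the matched sample:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Hypothetical observational data: treatment uptake depends on two covariates
# that also influence the outcome (i.e., they confound a naive comparison).
n = 1000
age = rng.normal(40, 10, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 40) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 1.0 * severity + 0.05 * age + rng.normal(0, 1, n)

# Step 1: estimate the propensity score (probability of treatment given covariates).
X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control unit with the closest score.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.argmin(np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]), axis=1)]

# Step 3: compare outcomes in the matched sample (effect of treatment on the treated).
print(outcome[t_idx].mean() - outcome[matches].mean())
```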

Ethics

Quasi-experiments are commonly used in social sciences, public health, education, and policy analysis, especially when it is not practical or reasonable to randomize study participants to the treatment condition. A true experiment would, for example, randomly assign children to a scholarship in order to control for all other variables.

As an example, suppose we divide households into two categories: Households in which the parents spank their children, and households in which the parents do not spank their children. We can run a linear regression to determine if there is a positive correlation between parents' spanking and their children's aggressive behavior. However, to simply randomize parents to spank or to not spank their children may not be practical or ethical, because some parents may believe it is morally wrong to spank their children and refuse to participate.
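
A minimal sketch of this kind of analysis on simulated data (illustrative variable names), comparing the unadjusted regression of aggression on spanking with a regression that also includes a measured confounder:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical household data: whether parents spank is related to household
# stress, which also affects the child's measured aggression (a confounder).
n = 500
stress = rng.normal(0, 1, n)
spanks = rng.binomial(1, 1 / (1 + np.exp(-stress)))
aggression = 0.5 * spanks + 0.8 * stress + rng.normal(0, 1, n)

df = pd.DataFrame({"spanks": spanks, "stress": stress, "aggression": aggression})

# Unadjusted association between spanking and aggression.
unadjusted = smf.ols("aggression ~ spanks", data=df).fit()

# Adjusting for the measured confounder changes the estimate; an unmeasured
# confounder would still bias it, which is the core limitation noted above.
adjusted = smf.ols("aggression ~ spanks + stress", data=df).fit()
print(unadjusted.params["spanks"], adjusted.params["spanks"])
```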

Some authors distinguish between a natural experiment and a "quasi-experiment".[1][5] The difference is that in a quasi-experiment the criterion for assignment is selected by the researcher, while in a natural experiment the assignment occurs 'naturally,' without the researcher's intervention.

Quasi-experiments have outcome measures, treatments, and experimental units, but do not use random assignment. Quasi-experiments are often chosen over true experiments because they are usually easier to conduct: they bring in features from both experimental and non-experimental designs, so measured variables can be included as well as manipulated variables. Experimenters usually choose quasi-experiments because they maximize internal and external validity.[8]

Advantages

Since quasi-experimental designs are used when randomization is impractical or unethical, they are typically easier to set up than true experimental designs, which require[9] random assignment of subjects. Additionally, utilizing quasi-experimental designs minimizes threats to ecological validity, as natural environments do not suffer the same problems of artificiality as a well-controlled laboratory setting.[10] Since quasi-experiments are natural experiments, findings in one may be applied to other subjects and settings, allowing for some generalizations to be made about the population. This method of experimentation is also efficient in longitudinal research that involves longer time periods and follow-up in different environments.

Other advantages of quasi-experiments include the experimenter's freedom to apply whatever manipulations they choose. In natural experiments, researchers have to let manipulations occur on their own and have no control over them whatsoever. Using self-selected groups in quasi-experiments also reduces the chance of ethical, conditional, and similar concerns arising while conducting the study.[8]

Disadvantages

Quasi-experimental estimates of impact are subject to contamination by confounding variables.[1] In the example above, variation in the children's response to spanking is plausibly influenced by factors that cannot be easily measured and controlled, for example the child's intrinsic wildness or the parent's irritability. The lack of random assignment in the quasi-experimental design method may allow studies to be more feasible, but it also poses many challenges for the investigator in terms of internal validity. This deficiency in randomization makes it harder to rule out confounding variables and introduces new threats to internal validity.[11] Because randomization is absent, some knowledge about the data can be approximated, but conclusions about causal relationships are difficult to determine due to the variety of extraneous and confounding variables that exist in a social environment. Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.[12]

Disadvantages also include that the study groups may provide weaker evidence because of the lack of randomness. Randomness brings a lot of useful information to a study because it broadens results and therefore gives a better representation of the population as a whole. Using unequal groups can also be a threat to internal validity. If groups are not equal, which is sometimes the case in quasi-experiments, then the experimenter might not be certain of the causes of the results.[4]

Internal validity

Internal validity is the approximate truth about inferences regarding cause-effect or causal relationships. Validity is especially important for quasi-experiments because they are all about causal relationships. Internal validity is supported when the experimenter tries to control all variables that could affect the results of the experiment. Statistical regression, history, and the participants are all possible threats to internal validity. The question to ask while trying to keep internal validity high is, "Are there any other possible reasons for the outcome besides the intended one?" If so, then internal validity might not be as strong.[8]

External validity

External validity is the extent to which results obtained from a study sample can be generalized "to" some well-specified population of interest, and "across" subpopulations of people, times, contexts, and methods of study.[13] Lynch has argued that generalizing "to" a population is almost never possible because the populations to which we would like to project are measures of future behavior, which by definition cannot be sampled.[14] Therefore, the more relevant question is whether treatment effects generalize "across" subpopulations that vary on background factors that might not be salient to the researcher. External validity depends on whether the treatments studied have homogeneous effects across different subsets of people, times, contexts, and methods of study or whether the sign and magnitude of any treatment effects change across subsets in ways that may not be acknowledged or understood by the researchers.[15] Athey and Imbens and Athey and Wager have pioneered machine learning techniques for inductive understanding of heterogeneous treatment effects.[16][17]
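
The machine learning approaches of Athey and colleagues (causal trees and forests) are not reproduced here; as a much simpler illustration of heterogeneous treatment effects, the following sketch (simulated data, illustrative variable names) tests whether a treatment effect is homogeneous across a subpopulation by including an interaction term:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Simulated data in which the treatment effect differs across a subpopulation.
n = 1000
subgroup = rng.integers(0, 2, n)     # e.g., two contexts or types of participant
treated = rng.integers(0, 2, n)
y = 1.0 * treated + 2.0 * treated * subgroup + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "treated": treated, "subgroup": subgroup})

# A non-zero interaction indicates that the treatment effect does not
# generalize "across" this subpopulation with a single homogeneous magnitude.
fit = smf.ols("y ~ treated * subgroup", data=df).fit()
print(fit.params["treated"], fit.params["treated:subgroup"])
```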

Design types

"" designs are the most common type of quasi experiment design. In this design, the experimenter measures at least one independent variable. Along with measuring one variable, the experimenter will also manipulate a different independent variable. Because there is manipulating and measuring of different independent variables, the research is mostly done in laboratories. An important factor in dealing with person-by-treatment designs are that random assignment will need to be used in order to make sure that the experimenter has complete control over the manipulations that are being done to the study.[18]

An example of this type of design was performed at the University of Notre Dame. The study was conducted to see whether being mentored for one's job led to increased job satisfaction. The results showed that many people who had a mentor reported very high job satisfaction. However, the study also showed that those who did not receive a mentor also included a high number of satisfied employees. Seibert concluded that although the workers who had mentors were happy, he could not assume that the reason for it was the mentors themselves, because of the high number of non-mentored employees who said they were satisfied. This is why prescreening is very important, so that any flaws in the study can be minimized before they are seen.[19]

"Natural experiments" are a different type of quasi-experiment design used by researchers. It differs from person-by-treatment in a way that there is not a variable that is being manipulated by the experimenter. Instead of controlling at least one variable like the person-by-treatment design, experimenters do not use random assignment and leave the experimental control up to chance. This is where the name "natural" experiment comes from. The manipulations occur naturally, and although this may seem like an inaccurate technique, it has actually proven to be useful in many cases. These are the studies done to people who had something sudden happen to them. This could mean good or bad, traumatic or euphoric. An example of this could be studies done on those who have been in a car accident and those who have not. Car accidents occur naturally, so it would not be ethical to stage experiments to traumatize subjects in the study. These naturally occurring events have proven to be useful for studying posttraumatic stress disorder cases.[18]

References

  1. ^ Dinardo, J. (2008). "natural experiments and quasi-natural experiments". The New Palgrave Dictionary of Economics. pp. 856–859. doi:10.1057/9780230226203.1162. ISBN 978-0-333-78676-5.
  2. ^ Rossi, Peter Henry; Mark W. Lipsey; Howard E. Freeman (2004). Evaluation: A Systematic Approach (7th ed.). SAGE. p. 237. ISBN 978-0-7619-0894-4.
  3. ^ Gribbons, Barry; Herman, Joan (1997). "True and quasi-experimental designs". Practical Assessment, Research & Evaluation. 5 (14). Archived from the original on 2013-05-02.
  4. ^ Morgan, G. A. (2000). "Quasi-Experimental Designs". Journal of the American Academy of Child & Adolescent Psychiatry. 39 (6): 794–796. doi:10.1097/00004583-200006000-00020. PMID 10846316.
  5. ^ Shadish; Cook; Campbell (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin. ISBN 0-395-61556-9.
  6. ^ Campbell, D. T. (1988). Methodology and epistemology for social science: selected papers. University of Chicago Press. ISBN 0-226-09248-8.
  7. ^ Armstrong, J. Scott; Patnaik, Sandeep (2009-06-01). "Using Quasi-Experimental Data To Develop Empirical Generalizations For Persuasive Advertising" (PDF). Journal of Advertising Research. 49 (2): 170–175. doi:10.2501/s0021849909090230. ISSN 0021-8499. S2CID 14166537. Archived (PDF) from the original on 2017-08-17.
  8. ^ DeRue, Scott (September 2012). "A Quasi Experimental Study of After-Event Reviews". Journal of Applied Psychology. 97 (5): 997–1015. doi:10.1037/a0028244. hdl:1813/71444. PMID 22506721.
  9. ^ CHARM-Controlled Experiments Archived 2012-07-22 at the Wayback Machine
  10. ^ http://www.osulb.edu/~msaintg/ppa696/696quasi.htm[permanent dead link]
  11. ^ Lynda S. Robson, Harry S. Shannon, Linda M. Goldenhar, Andrew R. Hale (2001) Quasi-experimental and experimental designs: more powerful evaluation designs Archived September 16, 2012, at the Wayback Machine, Chapter 4 of Guide to Evaluating the Effectiveness of Strategies for Preventing Work Injuries: How to show whether a safety intervention really works Archived March 28, 2012, at the Wayback Machine, Institute for Work & Health, Canada
  12. ^ Research Methods: Planning: Quasi-Exper. Designs Archived 2013-03-18 at the Wayback Machine
  13. ^ Cook, Thomas D. and Donald T. Campbell (1979), Quasi-experimentation: Design & Analysis Issues for Field Settings. Boston: Houghton-Mifflin
  14. ^ Lynch, John G., Jr. (1982), "On the External Validity of Experiments in Consumer Research," Journal of Consumer Research, 9 (December), 225–239.
  15. ^ Cronbach, Lee J. (1975), "Beyond the two disciplines of scientific psychology", American Psychologist, 30 (2), 116.
  16. ^ Athey, Susan, and Guido Imbens (2016), "Recursive partitioning for heterogeneous causal effects." Proceedings of the National Academy of Sciences 113, (27), 7353–7360.
  17. ^ Wager, Stefan, and Susan Athey (2018), "Estimation and inference of heterogeneous treatment effects using random forests." Journal of the American Statistical Association 113 (523), 1228–1242.
  18. ^ Meyer, Bruce (April 1995). "Quasi & Natural Experiments in Economics" (PDF). Journal of Business and Economic Statistics. 13 (2): 151–161. doi:10.1080/07350015.1995.10524589. S2CID 56341672.
  19. ^ Seibert, Scott (1999). "The Effectiveness of Facilitated Mentoring A Longitudinal Quasi Experiment". Journal of Vocational Behavior. 54 (3): 483–502. doi:10.1006/jvbe.1998.1676.
