Theory-driven evaluation explained
Theory-driven evaluation (also theory-based evaluation) is an umbrella term for any approach to program evaluation that develops a theory of change and uses it to design and implement the evaluation and to analyze and interpret its findings.[1] [2] [3] More specifically, an evaluation is theory-driven if it:[4]
- formulates a theory of change using some combination of social science, beneficiary lived experience, and program-related professionals' expertise;
- develops and prioritizes evaluation questions using the theory;
- uses the theory to guide the design and implementation of the evaluation;
- uses the theory to operationalize contextual, process, and outcome variables; and
- provides a causal explanation of how and why outcomes were achieved, including whether the program worked and/or had any unintended consequences (desirable or harmful), and what moderates outcomes.
By investigating the mechanisms through which outcomes are achieved, theory-driven approaches facilitate learning to improve programs and how they are implemented, and help knowledge to accumulate across apparently different programs.[5] [6] This is in contrast to methods-driven "black box" evaluations, which focus on following the steps of a method (for instance, randomized experiment or focus group) and only assess whether a program leads to its intended outcomes.[7] Theory-driven approaches can also improve the validity of evaluations, for instance leading to more precise estimates of impact in randomized controlled trials.[8]
History
Theory-driven evaluation emerged in the 1970s and 1980s in response to the limitations of methods-driven "black box" evaluations. The term theory-driven evaluation was coined by Huey T. Chen and Peter H. Rossi.[9] Chen (1990)[10] wrote the first comprehensive introduction to conducting theory-driven evaluations, for example explaining how to develop a program theory of change and the different types of evaluation design. Its origins have been traced[11] to a book by Carol Weiss (1972)[12] and a rarely cited article by Carol Taylor Fitz-Gibbon and Lynn Lyons Morris (1975).[13] However, "the first published use of what we would recognize as program theory" was in an evaluation of training programs, by Don Kirkpatrick in 1959.[14]
Funnell and Rogers (2011, pp. 23–24) comment on the confused nomenclature of the field, enumerating 22 approaches such as theory-based evaluation and program theory-driven evaluation science that are equivalent to or overlap significantly with theory-driven evaluation. The first definition of theory-based evaluation, by Fitz-Gibbon and Morris (1975), is near-identical to theory-driven evaluation:[15]
A theory-based evaluation of a program is one in which the selection of program features to evaluate is determined by an explicit conceptualization of the program in terms of a theory […] which attempts to explain how the program produces the desired effects. The theory might be psychological […] or social psychological […] or philosophical […]. The essential characteristic is that the theory points out a causal relationship between a process A and an outcome B.
Consequently, the terms theory-driven and theory-based evaluation are often used interchangeably in the literature.[16] [17] [18] However, theory-based evaluation is sometimes interpreted more narrowly to mean qualitative or small-n case study-based evaluations conducted without a comparison group, for example using process tracing or qualitative comparative analysis.[19] [20]
What is meant by "theory"?
The theory in a theory-driven evaluation seeks to stay as close as possible to the proximal causes of a social problem and the site of intervention, rather than, for instance, being a "grand" theory that tries to provide an overarching understanding of society, or a metaphysical theory about the nature of social reality:[21]
It advances evaluation practice very little to adopt one or another of current global theories in attacking, say, the problem of juvenile delinquency, but it does help a great deal to understand the authority structure in schools and the mechanisms of peer group influence and parental discipline in designing and evaluating a program that is supposed to reduce disciplinary problems in schools. [...T]he theory-driven perspective is closer to what econometricians call "model specification" than are more complicated and more abstract and general theories.
A distinction is also drawn between:
- normative theory, concerning what a program is supposed to do and how it should be implemented; and
- causal theory, which specifies how the program is thought to work.[22]
There are then two broad ways in which a program can fail to produce the desired outcomes: (1) the program is implemented as intended according to the normative theory, but the causal theory is incorrect; or (2) the causal theory is correct, but the program is not implemented as intended.[23]
Chen's action model/change model schema
Chen's action model/change model schema[24] provides an example of how a program theory and its context are conceptualized. The elements of the schema are then completed for each particular program.
The change model specifies how a program's intervention leads to outcomes via determinants, also known as intermediate or mediating variables.
The action model specifies how staff and delivery organizations deliver the intervention to beneficiaries:
- The target population specifies who the participants are and how they are recruited.
- The implementing organization (for instance a clinic or school) and its staff of implementers (for instance therapists or teachers) are responsible for allocating resources, training, and delivering the interventions.
- Intervention and service delivery protocols would include therapy manuals or subject curricula.
- Associated organizations and community partners refers to organizations other than the implementing organization. In the case of a psychotherapy intervention, these may include schools or general practitioners who advertise the program or refer beneficiaries to it.
- Ecological context refers to aspects of the environment, for instance family, friends, co-workers, other students, etc., that may moderate the effects of a program.
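One way to make the schema concrete is to treat it as a data structure whose fields are filled in for each program. The sketch below does this in Python for a hypothetical psychotherapy program echoing the examples above; all field names and values are illustrative choices, not part of Chen's formulation.

```python
from dataclasses import dataclass

@dataclass
class ChangeModel:
    intervention: str
    determinants: list[str]      # intermediate/mediating variables
    outcomes: list[str]

@dataclass
class ActionModel:
    target_population: str
    implementing_organization: str
    implementers: str
    protocols: list[str]
    partners: list[str]          # associated organizations and community partners
    ecological_context: list[str]

@dataclass
class ProgramTheory:
    change_model: ChangeModel
    action_model: ActionModel

# Completing the schema for a hypothetical psychotherapy program.
theory = ProgramTheory(
    change_model=ChangeModel(
        intervention="cognitive behavioural therapy sessions",
        determinants=["coping skills", "negative thought patterns"],
        outcomes=["reduced depression symptoms"],
    ),
    action_model=ActionModel(
        target_population="adults referred with mild-to-moderate depression",
        implementing_organization="community clinic",
        implementers="trained therapists",
        protocols=["therapy manual"],
        partners=["general practitioners who refer beneficiaries"],
        ecological_context=["family", "friends", "co-workers"],
    ),
)
print(theory.change_model.determinants)
```

Representing the theory explicitly like this makes it easy to check that every element of both models has actually been specified before the evaluation is designed.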
Theory-driven methods
The full range of research methods has been argued to apply to theory-driven evaluation. For instance, Chen (2015) provides examples using randomized experiments, quasi-experimental designs, process and outcome monitoring, and qualitative methods.[25] Although proponents of theory-driven evaluation are critical of "black box" experiments, Chen and Rossi (1983, p. 292)[26] argue that theory-driven experiments are possible and desirable:
[A]dvocates of the black box experimental paradigm often neglect the fact that after randomization exogenous variables are still correlated with outcome variables. Knowing how such exogenous factors affect outcomes makes it possible to construct more precise estimates of experimental effects by controlling for such exogenous variables.
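The precision gain Chen and Rossi describe can be illustrated with a small Monte Carlo simulation (a hypothetical sketch, not drawn from the evaluations cited here): in a randomized experiment where a baseline covariate predicts the outcome, an estimator that adjusts for the covariate has a smaller sampling variance than the simple difference in arm means. All parameter values below are invented for illustration.

```python
import random
import statistics

def simulate(n=200, tau=1.0, beta=2.0, reps=500, seed=0):
    """Compare a naive and a covariate-adjusted treatment-effect
    estimator across many replications of a randomized experiment."""
    rng = random.Random(seed)
    naive, adjusted = [], []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]       # prognostic covariate
        t = [rng.random() < 0.5 for _ in range(n)]    # random assignment
        y = [tau * ti + beta * xi + rng.gauss(0, 1)   # outcome model
             for ti, xi in zip(t, x)]

        # Naive estimator: difference in mean outcomes between arms.
        y1 = [yi for yi, ti in zip(y, t) if ti]
        y0 = [yi for yi, ti in zip(y, t) if not ti]
        naive.append(statistics.mean(y1) - statistics.mean(y0))

        # Adjusted estimator: remove the covariate's contribution by
        # regressing y on x (pooled), then difference the residual means.
        xm, ym = statistics.mean(x), statistics.mean(y)
        slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
                 / sum((xi - xm) ** 2 for xi in x))
        r = [yi - slope * (xi - xm) for xi, yi in zip(x, y)]
        r1 = [ri for ri, ti in zip(r, t) if ti]
        r0 = [ri for ri, ti in zip(r, t) if not ti]
        adjusted.append(statistics.mean(r1) - statistics.mean(r0))

    # Empirical standard deviation of each estimator across replications.
    return statistics.stdev(naive), statistics.stdev(adjusted)

naive_sd, adjusted_sd = simulate()
print(naive_sd, adjusted_sd)  # the adjusted estimator varies less
```

Both estimators are unbiased under randomization; the adjusted one is more precise because the covariate's share of the outcome variance no longer contributes to the sampling error, which is exactly the point Chen and Rossi make about knowing how exogenous factors affect outcomes.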
It has been argued that theory-driven evaluation focuses too much on statistical approaches, such as randomized experiments, quasi-experiments, and structural equation modelling;[27] however, a case has also been made for the importance of qualitative methods, particularly when developing program theories and understanding implementation.[28]
There is also methodological debate concerning whether realist evaluations, considered a particular kind of theory-driven approach, may include randomized controlled trials in any form. Some evaluators think they may and conduct what they call "realist trials".[29] [30] [31] Others argue that a realist trial is an "oxymoron", and recommend instead calling them "theory-oriented trials".[32] A 2023 review of purported realist trials concluded that whether they are really realist depends on the "ontological and epistemological" commitments of evaluators and that the differences "cannot be resolved" by reviewing the studies that have been conducted.[33]
Examples
Examples discussed in a 2011 systematic review of 45 theory-driven evaluations include:[34]
- An evaluation of the Fort Bragg Child and Adolescent Mental Health Demonstration, a managed mental health care system with a single point of entry, which used individual interviews, focus groups, and document review to assist the development of a theory of change.[35] The theory articulated why it was thought that an integrated care system would be more cost-effective than a fragmented system.
- An evaluation of a board game created to help teach secondary school business education.[36] This evaluation developed a theory of change and used it to select measures and design regression analyses of process and outcome.
- An evaluation of a garbage reduction program.[37] The program attempted to encourage residents to reduce the volume of garbage they produce by reducing the frequency of collection; however, an unintended negative consequence identified by the evaluation was that residents produced the same volume as before, simply storing their garbage in their homes on non-collection days. This effect was identified using a comparative interrupted time series analysis with an autoregressive integrated moving average (ARIMA) model.
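The garbage study used ARIMA modelling; the following is a deliberately simplified sketch of the underlying interrupted-time-series idea, fitting only a pre-intervention linear trend and projecting it forward as the counterfactual. All numbers are invented for illustration and do not come from the Taiwan evaluation.

```python
import random
import statistics

def fit_line(ts, ys):
    """Ordinary least squares for y = a + b*t; returns (a, b)."""
    tm, ym = statistics.mean(ts), statistics.mean(ys)
    b = (sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
         / sum((t - tm) ** 2 for t in ts))
    return ym - b * tm, b

# Hypothetical weekly garbage volumes (tonnes): a slight upward trend,
# then a policy change at week 30 that shifts the level down by ~5.
rng = random.Random(1)
weeks = list(range(60))
volume = [100 + 0.2 * w + rng.gauss(0, 1) - (5 if w >= 30 else 0)
          for w in weeks]

# Fit the pre-intervention trend, project it into the post period, and
# estimate the level change as the mean gap between observed and
# projected values.
a, b = fit_line(weeks[:30], volume[:30])
projected = [a + b * w for w in weeks[30:]]
effect = statistics.mean(o - p for o, p in zip(volume[30:], projected))
print(round(effect, 1))  # negative, near the simulated shift of -5
```

A full comparative ITS adds a control series and an ARIMA error structure to absorb autocorrelation; the projection-gap logic, however, is the same.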
A 2014 review of theory-driven evaluation in school psychology[38] highlighted two illustrative examples:
- An evaluation of conjoint behavioral consultation, a "strength-based intervention focused on building behavioral and social competence in children".[39] The evaluation tested a theory of change using a cluster-randomized controlled trial and mediation analysis.
- An evaluation of repeated reading and vocabulary previewing which tested causal theory using case study methodology, an adapted alternating treatments design with six students.[40]
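Mediation analysis of the kind used in the conjoint behavioral consultation trial can be sketched with the product-of-coefficients approach on simulated data. This is a hypothetical, simplified illustration (it ignores the clustering present in the actual trial, and all path values are invented): treatment T is hypothesized to improve a mediator M, which in turn improves the outcome Y.

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    xm, ym = statistics.mean(xs), statistics.mean(ys)
    return (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
            / sum((x - xm) ** 2 for x in xs))

# Simulated trial with true paths a = 1.5 (T -> M), b = 2.0 (M -> Y),
# and a direct effect of 0.5 (T -> Y not via M).
rng = random.Random(0)
n = 4000
t = [rng.random() < 0.5 for _ in range(n)]
m = [1.5 * ti + rng.gauss(0, 1) for ti in t]
y = [0.5 * ti + 2.0 * mi + rng.gauss(0, 1) for ti, mi in zip(t, m)]

# a-path: effect of T on M, i.e. the difference in mediator means.
a_hat = (statistics.mean([mi for mi, ti in zip(m, t) if ti])
         - statistics.mean([mi for mi, ti in zip(m, t) if not ti]))

# b-path: slope of Y on M within each arm (holding T fixed), averaged.
b_hat = statistics.mean(
    slope([mi for mi, ti in zip(m, t) if ti == arm],
          [yi for yi, ti in zip(y, t) if ti == arm])
    for arm in (True, False))

indirect = a_hat * b_hat  # product-of-coefficients estimate
print(round(indirect, 1))  # near the true indirect effect 1.5 * 2.0 = 3.0
```

Testing a theory of change here means checking that the a- and b-paths the theory posits are both present, not merely that the total effect of T on Y is nonzero.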
Notes and References
- Chen, H.-T., & Rossi, P. H. (1980). The Multi-Goal, Theory-Driven Approach to Evaluation: A Model Linking Basic and Applied Social Science. Social Forces, 59, 106–122.
- Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011, p. 201). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199–226.
- Donaldson, S. I. (2022, p. 9). Introduction to Theory-Driven Program Evaluation (2nd ed.). Routledge.
- Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011, pp. 203–205). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199–226.
- Chen, H. T. (2012). Theory-driven evaluation: Conceptual framework, application and advancement. In R. Strobl, O. Lobermeier, & W. Heitmeyer (Eds.), Evaluation von Programmen und Projekten für eine demokratische Kultur (pp. 17–40). Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-531-19009-9_2
- Weiss, C. H. (1995). Nothing as Practical as Good Theory: Exploring Theory-Based Evaluation for Comprehensive Community Initiatives for Children and Families. In J. P. Connell, A. C. Kublsch, L. B. Schorr, & C. H. Weiss (Eds.), New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts (Issue 7, pp. 65–92). The Aspen Institute.
- Chen, H.-T., & Rossi, P. H. (1980). The Multi-Goal, Theory-Driven Approach to Evaluation: A Model Linking Basic and Applied Social Science. Social Forces, 59, 106–122.
- Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.
- Chen, H.-T., & Rossi, P. H. (1980). The Multi-Goal, Theory-Driven Approach to Evaluation: A Model Linking Basic and Applied Social Science. Social Forces, 59, 106–122.
- Chen, H. T. (1990). Theory-driven evaluations. Thousand Oaks, CA: Sage.
- Worthen, B. R. (1996). Editor’s Note: The Origins of Theory-Based Evaluation. Evaluation Practice, 17(2), 169–171.
- Weiss, C. H. (1972). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
- Fitz-Gibbon, C. T., & Morris, L. L. (1975). Theory-based evaluation. Evaluation Comment, 5(1), 1–4. Reprinted in Fitz-Gibbon, C. T., & Morris, L. L. (1996). Theory-based evaluation. Evaluation Practice, 17(2), 177–184.
- Funnell, S. C., & Rogers, P. J. (2011, p. 16). Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Jossey-Bass.
- Fitz-Gibbon, C. T., & Morris, L. L. (1975, p. 1).
- Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in practice: What do we learn? Evaluation Review, 24(4), 407–431.
- Matta, C., Lindvall, J., & Ryve, A. (2024). The Mechanistic Rewards of Data and Theory Integration for Theory-Based Evaluation. American Journal of Evaluation, 45(1), 110–132. https://doi.org/10.1177/10982140221122764
- Dahler-Larsen, P. (2018, p. 9). Theory-Based Evaluation Meets Ambiguity: The Role of Janus Variables. American Journal of Evaluation, 39(1), 6–23.
- Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range of designs and methods for impact evaluations. Institute for Development Studies.
- HM Treasury (2020). The Magenta Book.
- Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.
- Chen, H. T. (1989). The conceptual framework of the theory-driven perspective. Evaluation and Program Planning, 12(4), 391–396.
- Weiss, C. H. (1972, p. 38). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
- Chen, H. T. (2015, Chapter 3). Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation Perspective. SAGE Publications Ltd.
- Chen, H. T. (2015). Practical program evaluation: Theory-driven evaluation and the integrated evaluation perspective (2nd edition). Sage Publications.
- Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.
- Smith, N. L. (1994). Clarifying and Expanding the Application of Program Theory-driven Evaluations. Evaluation Practice, 15(1), 83–87.
- Chen, H. T. (1994). Theory-driven Evaluations: Need, Difficulties, and Options. American Journal of Evaluation, 15(1), 79–82. https://doi.org/10.1177/109821409401500109
- Martin, P., & Tannenbaum, C. (2017). A realist evaluation of patients’ decisions to deprescribe in the EMPOWER trial. BMJ Open, 7(4), e015959.
- Bonell, C., Fletcher, A., Morton, M., et al. (2012). Realist randomised controlled trials: A new approach to evaluating complex public health interventions. Social Science & Medicine, 75(12), 2299–2306.
- Bonell, C., Melendez-Torres, G. J., & Warren, E. (2024). Realist Trials and Systematic Reviews: Rigorous, Useful Evidence to Inform Health Policy. Cambridge: Cambridge University Press.
- Marchal, B., Westhorp, G., Wong, G., Van Belle, S., Greenhalgh, T., Kegels, G., & Pawson, R. (2013). Realist RCTs of complex interventions—An oxymoron. Social Science & Medicine, 94, 124–128.
- Nielsen, S. B., Jaspers, S. Ø., & Lemire, S. (2023). The curious case of the realist trial: Methodological oxymoron or unicorn? Evaluation.
- Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199–226. https://doi.org/10.1177/1098214010389321
- Bickman, L. (1996). The application of program theory to the evaluation of a managed mental health care system. Evaluation and Program Planning, 19, 111–119.
- Hense, J., Kriz, W. C., & Wolfe, J. (2009). Putting theory-oriented evaluation into practice: A logic model approach for evaluating SIMGAME. Simulation & Gaming, 40, 110–133.
- Chen, H. T., Weng, J. C. S., & Lin, L.-H. (1997). Evaluating the process and outcome of a garbage reduction program in Taiwan. Evaluation Review, 21, 27–42.
- Mercer, S. H., Idler, A. M., & Bartfai, J. M. (2014). Theory-Driven Evaluation in School Psychology Intervention Research: 2007–2012. School Psychology Review, 43(2), 119–131.
- Sheridan, S. M., Bovaird, J. A., Glover, T. A., Andrew Garbacz, S., Witte, A., & Kwon, K. (2012). A Randomized Trial Examining the Effects of Conjoint Behavioral Consultation and the Mediating Role of the Parent–Teacher Relationship. School Psychology Review, 41(1), 23–46.
- Hawkins, R. O., Hale, A., Sheeley, W., & Ling, S. (2011). Repeated reading and vocabulary‐previewing interventions to improve fluency and comprehension for struggling high‐school readers. Psychology in the Schools, 48(1), 59–77.