QED evaluation designs compare a treatment group to a control group, but researchers cannot control who is assigned to which group. Such studies therefore cannot establish with certainty that the treatment directly produced the observed positive outcomes: because the comparison groups cannot be shown to be truly equivalent prior to treatment, the possibility remains that factors other than the intervention influenced the outcomes. Providers have a stronger argument for a program's effectiveness if its evaluation uses RCT methods, because these better account for differences between groups, thereby increasing confidence that the outcome is due to the program. Under an RCT design, participants are randomly assigned to the treatment and control groups, allowing researchers to balance out pre-existing characteristics, such as motivation to seek treatment. RCTs have limitations, however. It may be problematic to assume that randomization alone creates unbiased treatment and control groups; researchers must know who composes the sample and whether differences exist between the control and experimental groups despite randomization. Similarly, randomization does not correct for volunteer bias in the sample as a whole. Thus, no matter which research method is used, researchers must explain the reasons for their methodological decisions, be aware of any limitations that arise from the chosen method, and understand how those limitations influence the results and their interpretation. The report suggests ways of improving evaluation research designs for IPV programs. 1 table and 10 references