
Practitioner and Researcher Concerns About RCTs

Date Published
December 1, 2014

Sidebar to the article Services for IPV Victims: Encouraging Stronger Research Methods to Produce More Valid Results by Melissa Rorie, Bethany Backes and Jaspreet Chahal.

Although the use of randomized controlled trials (RCTs) can improve the validity of research results, it is not without complications. Practitioners and researchers commonly cite six concerns about RCTs:

Practitioner Concerns

  1. Victim safety: When it comes to implementing experimental designs and collecting data in intimate partner violence (IPV) research, the victim's anonymity must be protected. The most dangerous time for IPV victims is when they decide to leave the relationship or reach out for services.[1] Providers take great care to ensure victims are out of harm's way — for example, shelters are unmarked, victims must be a certain distance from the shelter in order to meet anyone, and the Health Insurance Portability and Accountability Act protects any medical treatment they seek. It is imperative that victims' anonymity be preserved to keep them safe. Practitioners' concern for victim safety is justified and should be weighed carefully when using experimental designs.
  2. Equity in services: In RCTs, the control group is not deprived of services. In most designs, the control group receives the agency's standard services, whereas the treatment group receives the agency's standard services and the enhanced intervention being tested.
    Still, is it ethical to deny additional services to potential recipients based on the luck of the draw? Will victims receiving only basic treatment face increased harm compared with the treatment (enhanced services) group? Conversely, is it ethical to deliver additional services without knowing whether they are helpful or even detrimental to victims? Is it ethical to deliver services that often require significant time and emotional investment from the victims without knowing whether the services work? Some researchers argue that RCTs might actually be more ethical than other methods because of the improved ability to determine whether a program is helpful and whether there are any adverse effects.[2]
  3. Agency burden of evaluation: Many victims' services agencies are small and based on a highly collaborative, mission-driven model. Being mandated to evaluate its programs requires an agency to spend staff time and funding on research instead of on direct service. Without adequate funding to ease these additional pressures, this demand on resources might be seen as harmful to victims seeking help.[3] Further, although experimental designs are the most rigorous method of evaluation, they can also be more expensive than other methods. As such, RCTs may be more costly than small programs can afford.
    There are also the potential consequences of doing an evaluation. If an evaluation indicates that the services an organization provides are not effective, then what? Has the nonprofit organization used its resources for "nothing" because the evaluation showed no effect? Where does the organization go from here? Program staff may fear that unfavorable findings will put them out of business. They also may argue that quantitative results do not accurately depict their work or the successes they see on a daily basis. The agency may measure success using anecdotal evidence, like victims' stories, and may argue that the program's success cannot be measured with quantitative data, regardless of the strength of the methods.
    In cases in which the treatment does not demonstrate the desired effect, researchers and practitioners should collaborate to understand why the program is not working and what benefits the program is having. They should then modify the program based on the findings. Practitioners and researchers must work together to establish within-agency routines that allow for continuous program performance evaluations after the researchers withdraw.

Researcher Concerns

  1. Implementation: The greatest concern for researchers is that practitioners — who know their clients well and are in a good position to make recommendations to them — will use anecdotal information to alter the random assignment of the experiment or trial. When practitioners feel certain that the program is effective or when they fear the effect that unfavorable or nonsignificant findings will have on support or funding, concerns with implementing the research design become more salient. This is a problem with all evaluations, not just RCTs.[4] But with RCTs in particular, practitioners might not fully understand the underlying assumptions and methods and, therefore, might not know what role they are supposed to take in the evaluation.[5]
    It is important for researchers to work with practitioners and provide feedback throughout the evaluation. Assuring practitioners that they will have an opportunity to voice their interpretations about the results might help alleviate their fears about the consequences of the evaluation.[6] The Enhanced Nurse Family Partnership Study in Los Angeles offers a prime example of a successful collaboration between researchers and practitioners. In this project, researchers and the Multnomah County Health Department worked together to integrate an IPV prevention program into the department's long-standing nurse-family partnership program and to test it using RCT methods.[7]
  2. Confidentiality issues: If a researcher does not take sufficient care to ensure data confidentiality and fails to comply with institutional review board (IRB) review, data might be subpoenaed and a victim's privacy compromised.[8] Because of this concern, IRBs place most experiments that evaluate victims of IPV under special scrutiny. It is critical that researchers follow IRB recommendations, as well as human subjects and confidentiality regulations dictated by their funding agencies, and that they work to address any confidentiality and safety issues when designing the study.[9]
  3. Cost to researchers: Researchers and practitioners alike find that designing and implementing an RCT is incredibly time-consuming. Practitioners are crunched for time as they struggle with overwhelming caseloads, and researchers might prefer to do something less arduous so that they can publish their results quickly. One study asked researchers why they would choose randomized methods over less rigorous designs; one of the most important motivators cited was funding agency priorities. Subsidizing the randomization process might be an important way to increase the use of such designs.[10]

These concerns need to be addressed before deciding that an RCT is the best method to evaluate a specific program. Collaboration and funding might help alleviate some discomfort with this design. However, if concerns for victim safety and privacy remain high, then using alternative methods might be warranted.

About This Article

This article appeared in NIJ Journal Issue 274, December 2014, as a sidebar to the article Services for IPV Victims: Encouraging Stronger Research Methods to Produce More Valid Results by Melissa Rorie, Bethany Backes and Jaspreet Chahal.