Research methods
Research is critical to ensure that suicide prevention activities are effective in preventing suicide, reducing distress and improving wellbeing.
While knowing how effective an intervention is at preventing suicide is essential, it is equally important to understand how interventions can be implemented successfully across different settings and communities.
What is an intervention?
In research, an intervention is something researchers do to impact an outcome. For example, they might develop a training program (the intervention) in hopes it will improve people’s knowledge of suicide (the outcome).
Suicide prevention interventions come in many different shapes and sizes (e.g., projects, policies, services, training, campaigns or other approaches). They are organised efforts to improve the health and wellbeing of individuals and communities and, ultimately, prevent suicide. Approaches can include:
- Promoting wellbeing (e.g. ensuring access to housing, education and employment)
- Preventing the onset of suicidal behaviour (e.g. safe media reporting, reducing access to means)
- Supporting people in suicidal crisis through indicated interventions (e.g. safety planning, aftercare, gatekeeper training)
- Providing postvention support to people and communities impacted by suicide (e.g. practical and psychological support).
How well an intervention is implemented will also impact its effectiveness. Even if an intervention has worked well before, it might not work the same in a different context or setting.
Factors like resources, funding, the complexity of interventions, competing priorities, people’s needs, and equity considerations all impact how well an intervention is implemented and, ultimately, how well it works.
Effective interventions may be poorly implemented, and interventions with limited evidence may be well-resourced and widely implemented.
What is implementation research?
Implementation research aims to understand how and why interventions work in real-world settings and what can be done to improve them.1
Implementation research outcomes differ from clinical or effectiveness outcomes, which might include quality of life, distress, morbidity, and mortality. Proctor et al.2 define eight distinct outcomes in implementation research:
- Acceptability – An individual’s perception that an intervention is agreeable or satisfactory.
- Adoption – The intention, decision or action to try or implement an intervention.
- Appropriateness – How well the intervention ‘fits’ a setting, provider or person.
- Cost – How much the intervention costs to deliver.
- Feasibility – How successfully an intervention can be implemented in a setting.
- Fidelity – Whether an intervention is implemented as described in the original plans.
- Penetration – How well an intervention is integrated into the setting and its subsystems.
- Sustainability – How an intervention is maintained within a setting or system.
Considerations in suicide prevention
There are some considerations for implementation research in suicide prevention.3
- Co-design with people with lived experience of suicide is vital.
All suicide prevention activities, including implementation research, need to be supported by lived experience. A lack of lived experience input is often considered a leading barrier to applying evidence in policy and practice.
Learn more about the role of lived experience in implementation.
- Suicide prevention interventions (and settings) can be complex.
The more moving parts an intervention has, the more important it is to consider implementation research methods. Implementation research can embrace this complexity by exploring how and why interventions do or do not work.
View Skivington et al.’s (2021) framework for developing and evaluating complex interventions.
What are complex interventions?
An intervention might be considered complex if it (Skivington et al., 2021):
- Has many components
- Targets a range of behaviours
- Requires particular expertise and skills to deliver or receive
- Targets many groups
- Targets many settings
- Targets many levels (e.g., individual level, societal level)
- Allows a degree of flexibility in how it is delivered
- Is implemented in a complex setting.
How are studies designed?
Researchers take different steps to make sure the results of their study are accurate. This might involve strategies such as:
- Having a control group who do not receive an intervention
- Randomly assigning people to groups
- Accounting for factors that could also impact the results (for example, age or gender)
- Following strict, standardised procedures.
These strategies help to improve the internal validity of a study, meaning researchers can be more confident that the intervention, rather than other factors, caused a change in the outcome.
However, strict experimental conditions can make it harder to know whether the same results would be seen in the real world. This is known as external validity.
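As a concrete (and entirely hypothetical) illustration of two of these strategies, random assignment and a control group, the Python sketch below simulates assigning 100 participants to an intervention or control group and comparing their mean outcome scores. The participant names and outcome values are invented placeholders, not data from any real study.

```python
import random
import statistics

# Hypothetical illustration only: randomly assign participants to an
# intervention or control group, then compare mean outcome scores.
# Participant IDs and outcome values are invented for this example.

participants = [f"participant_{i}" for i in range(1, 101)]

random.seed(42)               # fixed seed so the example is reproducible
random.shuffle(participants)  # random assignment reduces systematic bias between groups

intervention_group = participants[:50]
control_group = participants[50:]  # the control group does not receive the intervention

# In a real study these scores would come from validated outcome measures;
# here they are simulated placeholder values.
outcome_scores = {p: random.gauss(70, 10) for p in participants}

intervention_mean = statistics.mean(outcome_scores[p] for p in intervention_group)
control_mean = statistics.mean(outcome_scores[p] for p in control_group)

print(f"Intervention group mean: {intervention_mean:.1f}")
print(f"Control group mean:      {control_mean:.1f}")
print(f"Difference:              {intervention_mean - control_mean:.1f}")
```

In a real trial, researchers would also account for factors such as age or gender (for example, by stratifying the randomisation or adjusting for them in the analysis), which is not shown in this simplified sketch.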
Different study designs impact the internal and external validity of the research. Common designs used in implementation research include:
- Experimental designs
Experimental designs involve researchers changing one variable to see how it affects another while controlling other variables that could also impact the outcome.
A common example of an experimental design is a randomised controlled trial. Due to their strict experimental conditions, randomised controlled trials are the gold standard in research. However, they are time-intensive and may not always be feasible. Types of implementation trials are described in the section below.
Designing and undertaking randomised implementation trials: guide for researchers
- Quasi-experimental designs
Quasi-experimental designs are a type of experimental study where participants are not randomised into groups. They attempt to establish causation in other ways, such as measuring a variable before and after an intervention.
These studies are often less expensive and time-consuming, but they have weaker internal validity, making it harder to show that the intervention, rather than something else, caused an outcome (a simple pre/post comparison is sketched after this list).
Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research
- Qualitative and mixed methods
Qualitative methods (such as interviews and focus groups) explore how and why an intervention works in a particular context, while mixed methods combine qualitative and quantitative approaches to capture both implementation processes and outcomes.
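To make the pre/post idea mentioned under quasi-experimental designs more concrete, here is a minimal, purely illustrative Python sketch of a before-after comparison: the same group is measured before and after an intervention, with no randomisation and no control group. The scores are invented placeholder values.

```python
import statistics

# Hypothetical illustration of a simple pre/post (before-after) comparison,
# a common quasi-experimental approach. The scores are invented placeholders.

knowledge_before = [52, 61, 48, 55, 67, 59, 50, 63, 58, 54]
knowledge_after = [68, 72, 60, 70, 75, 66, 64, 74, 69, 65]

# Change score for each participant (after minus before)
changes = [after - before for before, after in zip(knowledge_before, knowledge_after)]

print(f"Mean score before intervention: {statistics.mean(knowledge_before):.1f}")
print(f"Mean score after intervention:  {statistics.mean(knowledge_after):.1f}")
print(f"Mean change:                    {statistics.mean(changes):.1f}")

# Because there is no randomised control group, other factors (such as the
# passage of time or outside events) could also explain the change, which is
# why these designs have weaker internal validity than randomised trials.
```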
Researching implementation and effectiveness outcomes
Just as randomised controlled trials provide a high standard of evidence for the effectiveness of interventions, randomised implementation trials can provide a rigorous assessment of implementation strategies.
Trials can assess implementation outcomes alone or alongside intervention effectiveness. Hybrid effectiveness-implementation designs allow us to examine intervention effectiveness and implementation outcomes simultaneously.
There are three types of hybrid designs:
- Type 1 = Mainly exploring effectiveness whilst partly looking at implementation outcomes.
Example: Evaluation of a youth-focused suicide prevention HOPE aftercare service: protocol for a non-randomized hybrid effectiveness-implementation type 1 design
- Type 2 = Equally focused on effectiveness and implementation outcomes.
Example: Implementing eScreening for suicide prevention in VA post-9/11 transition programs using a stepped-wedge, mixed-method, hybrid type-II implementation trial: a study protocol
- Type 3 = Mainly exploring implementation whilst partly looking at effectiveness outcomes.
Example: Study protocol: Type III hybrid effectiveness-implementation study implementing Age-Friendly evidence-based practices in the VA to improve outcomes in older adults
Notes
1. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013;347. doi:10.1136/bmj.f6753
2. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76. doi:10.1007/s10488-010-0319-7
3. Reifels L, Krishnamoorthy S, Kõlves K, Francis J. Implementation science in suicide prevention. Crisis. 2022;43(1):1-7. doi:10.1027/0227-5910/a000846