Suicide, self-harm and online safety for young people

What's the issue?

Suicide is the leading cause of death for young people in Australia. Many factors influence suicide in young people, and some, such as engaging in self-harm behaviours, can increase suicide risk. Research suggests that as many as 83% of young people have seen self-harm or suicide content on social media, such as images and videos of others engaging in or encouraging self-harm or suicide.1,2 Social media algorithms may expose, or overexpose, young people to harmful content, and viewing this content may increase suicide risk in some young people.

Young people use social media to connect with peers and find community around certain topics, including suicide and suicide loss and bereavement. They also use social media to learn more about topics like self-harm and suicide, and to access help and support. There is great potential in harnessing social media to deliver public health campaigns and population-level interventions that support suicide prevention.

Online safety is largely absent from national suicide prevention strategies. Given that young people are a priority group for suicide prevention and key users of social media platforms, online safety has an important role to play in preventing suicide and self-harm.

In the absence of self-harm and suicide-specific guidance in national strategy and policy, many social media organisations have developed their own policies to create safe online environments. The level of evidence and expert guidance informing these policies varies between organisations.

What was done?

The study explored what young people, policymakers, and social media professionals see as the challenges and concerns around young people discussing self-harm and suicide online, and what they believe social media companies and governments can do to make the internet safer for young people.

Researchers held six focus groups between June and August 2022. The focus groups consisted of:

  • Young people (one focus group, seven participants)
  • Australian policymakers (two focus groups, 14 participants)
  • Individuals employed by social media companies (three focus groups, seven participants).

Young people were recruited via advertising on the #chatsafe social media pages or were invited directly by the researchers if they had previous involvement with #chatsafe. Young people were eligible to participate if they were aged between 12 and 25 years; were able to speak and read English; and, if under 18, had parent or guardian consent.

Policymakers and professionals from social media companies were invited via email by the researchers. They were eligible to participate if they were older than 18 years and were:

  1. employed by a government department or in a policymaking role with responsibility for mental health, youth or online safety, or
  2. employed in an online safety or policy team within the social media industry.

There were no exclusion criteria.

Semi-structured interview questions were used to guide the focus groups, which ran for 60 to 120 minutes. The questions centred on the challenges of social media and suicide and self-harm content, and on what young people, policymakers, and social media organisations can do to keep young people safe online.

What was found?

The social media platforms most frequently used by participants were Facebook, Instagram and YouTube. Participants across all three stakeholder groups acknowledged the potential for harm through exposure to suicide and self-harm content on social media, and agreed that social media may not meet young people's needs to safely express their views and seek support.

The researchers organised the findings into key topic areas:

Reasons for, and challenges related to, young people using social media to communicate about self-harm and suicide

Participants in all focus groups agreed:

  • Social media is used to share personal experiences and/or seek support for suicidal thoughts and behaviours and self-harm.
  • Social media is accessible and affordable.
  • Anonymity makes it easier to talk about sensitive issues such as suicide.
  • There are concerns about the young age at which people can access social media, and about the difficulty of controlling what users are exposed to because of algorithms.
  • There are threats to safety from unmonitored live streaming, and potential for contagion: an increase in suicide and suicidal behaviour as a result of exposure to the suicide or suicidal behaviour of others.

Focus group participants also felt there was uncertainty over what is classified as harmful content and who is responsible for censoring or removing it. Young people felt more needed to be done to prevent individuals from seeing harmful content.

Participants from the focus groups with policymakers and social media industry representatives highlighted the complexity of balancing freedom of speech with the removal of harmful content.

Public perception of harm

All focus groups felt that there was a negative public perception of the link between social media and mental health.

Participants from the focus groups with policymakers highlighted the great potential to support young people through interventions based on social media platforms.

Participants from the focus groups with social media industry representatives felt their organisations were judged unfairly, and that there was limited research evidence of the harms to online safety attributed to social media.

Roles and responsibilities

  • Focus group participants representing social media organisations acknowledged their role in defining socially acceptable behaviour online (and offline) through their community guidelines or standards; however, they were not medical experts and lacked the knowledge to do so with clinical, evidence-based oversight.
  • Focus group participants representing social media organisations felt regulation may improve safety, but could also limit their ability to develop their platforms and could affect profits.
  • Policymakers described the difficulty of regulating content internationally given the globalisation of social media.
  • All groups acknowledged that younger users could not take sole responsibility for their own safety within environments created by, and often for, adults.
  • Young people felt strongly that the government could help by providing more education and digital literacy training, particularly when they were younger and still attending school.

The need for better collaborations

Participants from all groups highlighted the need for a collaborative approach across sectors to achieve meaningful change in ensuring online safety.

  • No focus group participants could point to examples of viable models of partnership between sectors.
  • Collaboration is hampered by data-sharing and privacy constraints.
  • Collaborations with experts can help to guide policy and public health approaches.

Future approaches and potential solutions

  • Young people and policymakers in this study were aware that some platform safety features currently exist, but users have little confidence in their effectiveness.
  • Focus groups believed that artificial intelligence (AI) could be optimised to support online safety for young people. AI tools could be used to detect risk, respond in real-time to risk or distress, and deliver services or support.
  • Current legislative frameworks may prevent the use of AI in this way and require review.
  • Improving safety features of social media platforms can help prevent exposure to suicide and self-harm content.
  • Innovative approaches could integrate evidence-based interventions into platform content.
  • Young people were concerned the use of AI and in-platform interventions might compromise their privacy.

Why are the findings important?

Exposure to suicide and self-harm content through social media may increase suicide risk in some young people. It is important to determine who is accountable for safety measures to prevent this harm, as focus group participants had differing perspectives on who is responsible for the online safety of young people accessing harmful content.

Focus group discussions about whether online safety is the responsibility of social media organisations, policymakers, or young people raised more questions than answers.

Social media platform representatives felt that their organisations were not equipped to make clinical and evidence-based decisions on content shared on social media. This highlights the potential for collaboration or new approaches involving multiple sectors to support online safety for young people.

Investment in AI to mitigate harm can support online safety, acknowledging that online platforms and global trends are constantly changing. Given the complexity of applying Australian-specific policy to global organisations, AI, alongside better collaboration between sectors, can help keep young people safe online.

Notes

1. Samaritans. How social media users experience self-harm and suicide content. Samaritans; 2022.

2. Robinson J, La Sala L, Kenny B, Cooper C, Lamblin M, Spittal M. How do Australian social media users experience self-harm and suicide-related content? A national cross-sectional survey comparing young people and adults. Internet Psychol. 2024. doi: 10.31234/osf.io/5rh93.

Study information

Authors

  • Louise La Sala
  • Amanda Sabo
  • Maria Michail
  • Pinar Thorn
  • Michelle Lamblin
  • Vivienne Browne
  • Jo Robinson

Study originally published

10 March 2025

Translated on Life in Mind

8 April 2025

Citation

La Sala L, Sabo A, Michail M, Thorn P, Lamblin M, Browne V, Robinson J. Online safety when considering self-harm and suicide-related content: qualitative focus group study with young people, policy makers, and social media industry professionals. J Med Internet Res. 2025;27:e66321. doi: 10.2196/66321.