



For the first thirty years of its existence, our views on the Internet largely focused on how it improved our lives. Information that was once buried in encyclopaedias became accessible with a click, interactive maps of the entire planet were available at our fingertips, and we could connect and converse with people anywhere in the world in seconds. Alongside these remarkable benefits, it became increasingly evident that the Internet also has the potential to cause significant harm. Extremist posts have been linked to terrorism, disinformation and “fake news” have undermined democratic elections, and both charities and governments have highlighted the alarming amount of child sexual abuse imagery circulating online. The risks associated with online harm became even more pronounced in 2020, as national lockdowns during the COVID-19 pandemic led many of us to spend more time online than ever before.




While a definitive list of the types of online harm is difficult to pin down, the following forms appear to be the most prevalent:


·       terrorism-related content

·       child sexual exploitation and abuse content

·       hate speech

·       sale of illegal drugs and weapons

·       disinformation

·       violent content

·       sexually explicit content involving adults

·       cyberbullying

·       promotion of eating disorders

·       promotion of self-harm or suicide


Online harm can occur anywhere online, most commonly on social media, text messages and messaging apps, email, online chats, online gaming, and live-streaming sites.




For some time, governments, societies, and major online platforms have agreed that more needs to be done to combat online harms. Many governments worldwide are seeking to replace the current patchwork of discrete laws and voluntary initiatives with more comprehensive regulation. The challenge for regulators is to strike a balance between protecting against harm and upholding fundamental human rights. Regulation is also complicated by the need to distinguish between what is unlawful and what is merely harmful. For example, disinformation is regulated in the UK but not in Ireland or the United States. For cyberbullying, there is no commonly agreed definition in Europe or internationally: Italy passed legislation against cyberbullying in 2017, whereas in Ireland the relevant law only came into force in 2021. In 2018, Hungary proposed criminalising cyberbullying in its penal code, but nothing has been finalised to this day.




With all these international and European legislative disparities, where does that leave us in the UK with regulating online harm? After years of debate, the government's controversial Online Safety Bill, designed to make the internet safer for UK citizens, particularly for children, became law in October 2023. This legislation requires tech firms to take greater responsibility for the content on their platforms. Technology Secretary Michelle Donelan stated that it "ensures the online safety of British society not only now, but for decades to come." The Act has five policy objectives:


·       to increase user safety online.

·       to preserve and enhance freedom of speech online.

·       to improve law enforcement’s ability to tackle illegal content online.

·       to improve users’ ability to keep themselves safe online.

·       to improve society’s understanding of the harm landscape.




In a nutshell, the new law requires firms to protect children from harmful content, with Ofcom granted additional enforcement powers. It includes measures such as requiring age verification on pornography sites, and mandates that platforms remove illegal content, including:


·       child sexual abuse

·       controlling behaviour

·       extreme sexual violence

·       illegal immigration

·       promoting suicide or self-harm

·       animal cruelty

·       selling illegal drugs or weapons

·       terrorism


New offences include cyber-flashing and sharing "deepfake" pornography. The Act also helps bereaved parents obtain information from tech firms and, controversially, allows Ofcom, the UK communications regulator, to compel messaging services to check encrypted messages for child abuse material. Platforms such as WhatsApp, Signal, and iMessage have threatened to leave the UK over privacy concerns, while Proton's CEO, Andy Yen, argues the legislation threatens internet privacy. The government claims Ofcom will only act when "feasible technology" is available. Wikipedia has also expressed concerns about compliance, and over 20,000 small businesses may be affected by the Act.




Ofcom now has the authority to request information from and inspect online service providers, levy fines of up to £18 million or 10% of global annual revenue, whichever is higher, and seek court orders to restrict services. Senior managers within affected organisations could face criminal liability if their company fails to comply with Ofcom's information requests. According to Ofcom's plan, the codes of practice under the Act are expected to be phased in from Spring 2025 to Spring 2026. On its website, the UK online safety regulator says:


“We’re not responsible for removing online content, and we won’t require companies to remove content or particular accounts. Our job is to help build a safer life online by making sure firms have effective systems in place to prevent harm and protect the people using their services. We will have a range of tools to make sure services follow the rules – including setting out codes of practice and guidance for companies falling under the scope of the new legislation. We’re now consulting on these, and the new rules will come into force once the codes and guidance are approved by Parliament. Under these new rules, we will have powers to take enforcement action, including issuing fines to services if they fail to comply with their duties. Our powers are not limited to service providers based in the UK.”




Cyber Wellbeing, sometimes called Cyber Wellness or Cyber Health, refers to the positive well-being of Internet users. It encompasses an understanding of online behaviour and an awareness of how to protect oneself in cyberspace. The primary goal of Cyber Wellbeing is to help everyone become responsible digital learners. When navigating the online world, individuals should demonstrate respect for themselves and others, while practising safe and responsible use of technology. Additionally, we should strive to be positive peer influences by using technology for collaboration, learning, and productivity, and by advocating for the positive use of technology for the benefit of the community. Cyber Wellbeing has three key principles:


  • Respect for Self and Others

  • Safe and Responsible Use

  • Positive Peer Influence




Cyber London, whose vision is to establish London as a world-leading centre of excellence for cyber, is focused on four work streams, one of which is Online Harm and Cyber Wellbeing. This work stream will explore techniques and tools that can be developed to help internet users monitor their own online behaviours and those of others in cyberspace, so that any unusual behaviours can be identified and mitigated through peer and community support. Cyber London is uniquely positioned to encourage best behaviour online in order to maintain good mental health and Cyber Wellbeing. For more information and to become a member, reach out to Cyber London here.






