Pushing Back on Big Tech: Insights from the UK’s OSA

By Liv Bush-Moline
DOI: 10.57912/30585059
Digital media consumption has become a fact of life for the masses; the deep entanglement of social media platforms with the socio-political sphere presents a plethora of issues that have long gone unaddressed. The combination of recently decommissioned fact-checking processes, the absence of external regulatory bodies, and a lack of accountability for Big Tech platforms has created an environment ripe for polarization and radicalization for the sake of profit. At the crux of this issue lie predatory algorithms that monopolize users’ time with addictive models and expose audiences to radical, harmful, and misinforming content. Without proper regulation, unchecked social media platforms act as the sole gatekeepers of what information spreads and to whom. It’s crucial for social cohesion, anti-radicalization efforts, and violence reduction to enact comprehensive domestic and international policy that holds platforms to a high standard of transparency, prioritizes factuality, and is not subject to manipulation for political ends. Yet as the need for regulation grows, the haste to produce ambitious legislation without due consideration for user data privacy and freedom of expression risks becoming a double-edged sword, as exemplified by the impacts of the UK's Online Safety Act (OSA).
To understand the scope of the harms of social media algorithms, it’s imperative to establish the trifecta of lenses that shape the issue: profit, psychology, and polarization. Beginning with profit, social media platforms are businesses first and online social spaces second. By now, they have honed the craft of monetizing social connection; from TikTok’s TikTok Shop and Meta’s Instagram and Facebook Shops to advertisements, capitalistic roots run deep in Big Tech. Algorithms become relevant through what drives clicks, impressions, and sales: negativity. It’s not a new phenomenon that negativity drives media consumption, but the rapid dissemination and oversaturation of online content have created extreme competition for consumers’ time and attention. To dominate in such a competitive environment, sophisticated algorithms are meticulously engineered to calculate which characteristics and sequences of content will keep users hooked for as long as possible. The longer users stay engaged on a platform, the more impressions platforms earn on their paid advertisements, and the more likely users are to buy products.
Psychologically, algorithms are designed to be addictive by pushing content that elicits strong emotions. Consistent engagement with social media platforms has been shown to alter dopamine pathways—an essential element of the brain’s reward processing—creating a dependency comparable to substance addiction. The psychological phenomenon of negativity bias indicates that humans tend to learn and respond more to negative stimuli in comparison to neutral or positive stimuli. Exploiting that principle, social media algorithms push controversial and upsetting content to maintain high user engagement. In turn, the correlation between social media usage and mental health issues has been widely documented across academia in numerous studies and reports. Harmful content pertaining to eating disorders, self-harm, toxic masculinity, and unattainable beauty standards, among other issues, has been observed to be organically amplified to audiences on social media platforms.
Similarly, many scholars point to the link between algorithmic design and accelerated, exacerbated polarization and radicalization. By creating echo chambers, amplifying divisive political content, and exploiting negativity bias, these systems inflate and sharpen political polarization the longer users spend on social media. Algorithms have also been found to recommend and disseminate content related to extremism and radicalizing pipelines. Even when content is not directly recommended or pushed by the algorithms themselves, terrorist and violent extremist groups have been documented successfully using social media platforms to manipulate audiences by exploiting vulnerabilities in moderation.
On top of predatory and divisive algorithms, the entanglement between political interests and the media-information sphere carries severe ramifications. A 2024 report found that a third of Americans say they regularly get news from Facebook and YouTube. Whether it’s outright propaganda, foreign interference, fake news, or AI-generated deepfakes, misinformation remains a significant threat to democracy. The removal of fact-checking and content moderation programs will open social media platforms up to greater exploitation by malicious actors, especially considering those actors’ past successes even when anti-disinformation processes were in place. That’s when unregulated social media becomes a threat not only to democracy but to national security; bad actors utilizing social media platforms can recruit, radicalize, and promote terrorism, a trend that traces back nearly a decade, to 2016, when the UN Security Council Counter-Terrorism Committee convened to discuss the issue.
It’s critical to establish comprehensive regulation of social media platforms in both the international and domestic spheres. The most recent and salient example emerged this past March, when the UK’s Online Safety Act went into effect: if social media platforms fail to introduce and implement robust processes for moderating and removing illegal content, they will be subject to fines of up to £18 million ($23,488,110) or 10% of worldwide revenue, whichever is greater. OSA became law in 2023, although full enforcement did not begin until March 2025. The policy designates 130 topics considered “priority offenses,” including terrorism, drugs, fraud, and weapons, among many others.
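For illustration only, the short sketch below encodes the penalty rule stated above, the greater of £18 million or 10% of worldwide revenue; the revenue figure used is purely hypothetical, and this is a minimal sketch of the stated formula rather than legal guidance.

```python
# Minimal sketch of the OSA maximum-penalty rule described above:
# the cap is the greater of a fixed £18 million or 10% of worldwide revenue.
# The revenue figure in the example is hypothetical, for illustration only.

FIXED_CAP_GBP = 18_000_000   # fixed statutory cap stated above
REVENUE_SHARE = 0.10         # 10% of worldwide revenue

def max_osa_fine(worldwide_revenue_gbp: float) -> float:
    """Return the larger of the fixed cap and 10% of worldwide revenue."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * worldwide_revenue_gbp)

# Hypothetical example: a platform with £2 billion in worldwide revenue
print(f"£{max_osa_fine(2_000_000_000):,.0f}")  # £200,000,000, since 10% exceeds £18 million
```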
OSA stands as the first implemented large-scale legislation of its kind, prompting some critics to raise alarms about censorship and restrictions on free speech, while others argue that it has taken far too long for regulation to come to fruition. Advocacy for and against greater social media regulation largely divides those concerned with user data privacy and free speech from those concerned with children’s online safety. As major platforms scrambled to introduce age verification methods in compliance with OSA, Reddit offered one notable example: despite the platform’s reputation for anonymity, UK users found themselves prompted to submit a photo of their government ID or a live selfie to its contracted age verification vendor, Persona, in order to access subreddits dedicated to topics ranging from LGBTQ+ identity to international journalism. Outrage over the perceived censorship overreach spread quickly, and a petition calling for the repeal of OSA garnered over 500,000 signatures.
Along with concerns over over-censorship, the question of data privacy and storage must be addressed. When it comes to sensitive and personal documents and data, OSA fails to ensure proper privacy protections. Its mandated age verification provisions allow platforms to choose how to comply, raising concerns about the data privacy consequences of creating a substantial new market for largely untested age verification services. Stipulations ensuring the protection of sensitive data must be enshrined in online safety legislation prior to implementation; their absence is a grave omission within OSA.
Without standards or mandates for the storage, deletion, or security of such private information, scenarios akin to the Tea app breach this past August are likely to occur more frequently. The Tea app was created as a women-only app to promote women’s safety in online dating, where users could post anonymously about men as well as conduct background and criminal record checks and reverse image searches. To ensure a women-only space, the app asked for an ID or image to verify gender. Despite claims of immediate deletion, the sensitive data and government IDs collected through the Tea app’s verification process remained poorly protected and vulnerable in storage, leading to a massive data breach. Within only a few hours, the exposed data was exploited by misogynistic groups, who published maps pinning 33,000 users’ locations and created sites to rank and compare the appearances of the exposed users. This sort of cyber-harassment will likely see an uptick in frequency and scope without safeguards to ensure data privacy and security on third-party verification platforms.
Additionally, content moderation addresses only a portion of the greater issue. Regulating content without ensuring ethical algorithmic design leaves the door open for addictive, divisive algorithms to continue wreaking havoc on the average user, merely limiting the content with which they can do so. Arguably, this would reduce harm, assuming the policy’s language is precise and accurate, but it fails to address the root of the issue. Ideally, future legislation ensuring sociotechnical transparency within platforms would work in tandem with content moderation laws like OSA to form a comprehensive policy response.
OSA is a landmark piece of legislation, the first of its kind to reach implementation. Its boldness in trailblazing a path for oversight of Big Tech provides two key insights: that regulating Big Tech and the digital sphere can be done, and that future legislation must strive to improve upon OSA’s shortcomings. In the U.S., a similar piece of legislation, the Kids Online Safety Act (KOSA), was introduced to Congress in 2022 and revised and reintroduced in 2025 to address the same concerns plaguing the UK’s OSA. Despite widespread bipartisan support among lawmakers, with 65 co-sponsors in the Senate including both the majority and minority leaders, KOSA still falls short of appeasing its critics. Ideally, KOSA and future iterations of similar policies across the international sphere can build upon these first steps taken by the UK. As the impacts of OSA continue to reverberate throughout the digital sphere and the world of Big Tech, the act serves as both an example and a crucial lesson for more effective future policies that seek to curb the rapidly growing influence and harms of Big Tech.
