UK balances safety, free speech in tackling digital regulation

MANCHESTER, England (CN) — While transatlantic differences over how to protect people online become starker, Britain’s online regulator is gaining new powers over internet services, including social media and search engines.

Both the U.K. and the European Union have taken tougher lines on regulation than the United States, where freedom of speech and anti-regulatory attitudes are firmly rooted concerns.

Under the U.K.’s Online Safety Act, which has been implemented in phases since becoming law in October 2023, internet services must detail how they will identify and curb risks from illegal content and material harmful to children. They must submit these risk assessments to the regulator, Ofcom, by March 31.

The act’s first phase concentrates on illegal content, including child sexual abuse, inciting violence, hate speech, fraud, promoting self-harm and terrorism. It also introduced new offenses that came into effect on Jan. 31, 2024, such as cyberflashing, sending false information intended to cause harm and epilepsy trolling — intentionally sending triggering images to someone with epilepsy.

Ofcom has broad enforcement powers. Companies that fail to comply can face fines of up to 18 million pounds (about $23 million) or 10% of annual global turnover, whichever is greater. Senior managers can even be held criminally liable.

These would be the most extreme cases, said Lorna Woods, a professor of internet law at the University of Essex. “While Ofcom has fining powers … enforcement will usually start by Ofcom informing the service of where it thinks the problem is and allowing the service opportunity to respond or to comply.”

How the risk assessments are to be enforced remains unclear.

“It is hard to say at the moment how this might work out in practice as we’ve not had the opportunity to see the risk assessments working yet and the services don’t have to make their risk assessments public,” said Woods, an expert on internet safety policy. She expects the regulator to digest the information from the assessments before making changes to safety measures.

The Online Safety Act follows the EU’s 2022 Digital Services Act, which aims to strengthen online safety and prevent illegal and harmful content, including disinformation.

The U.S. — especially under President Donald Trump and his close adviser, billionaire Elon Musk — has complained publicly about the effects such regulations have on Big Tech.

At the Munich Security Conference last month, Vice President JD Vance tore into the EU’s new online regulations, calling them an “undemocratic power grab” that allow unelected bureaucrats in Brussels to dictate online discourse.

Vance didn’t spare the U.K. either. During Prime Minister Keir Starmer’s visit to the White House to meet Trump, Vance directly challenged the U.K.’s record on free expression, which Starmer quickly rebutted.

“There is some concern that I have with respect to the approach that Europe is taking with the DSA in particular,” said U.S. Federal Communications Commission chair Brendan Carr on March 3, at the Mobile World Congress telecoms trade show in Barcelona. “There’s a risk that that regulatory regime imposes excessive rules with respect to free speech,” he said. 

Regulation and free speech

There is a deep skepticism in the U.S. about European-style internet regulation.

As Trump hits Europe with tariffs and geopolitical and economic disputes escalate, technology policy is becoming a deepening battleground in the transatlantic relationship, raising concerns about free speech.

Trump issued a memo, “Defending American Companies and Innovators From Overseas Extortion and Unfair Fines and Penalties,” in which he called on government agencies to develop tariffs or other retaliation for “regulations imposed on United States companies by foreign governments that could inhibit the growth or intended operation of United States companies.”

In the memo, Trump appears to take aim at the EU specifically, saying: “Foreign countries have additionally adopted regulations governing digital services that are more burdensome and restrictive on United States companies than their own domestic companies.”

In an in-depth study on free speech in the U.K., The Policy Institute at King’s College London found the public almost evenly divided three ways: 35% believe people are too easily offended, 35% believe people should be more sensitive in their speech, and 30% fall somewhere in between.

The U.K. Parliament debated freedom of expression as the act progressed through the chambers. Woods pointed to protections incorporated in the legislation.

“While services are expected to have moderation systems in place, there is no provision requiring them to take down specific items of content,” said Woods. In addition, services must give users ways to appeal takedowns of their content and to lodge complaints.

“The line between illegal and legal but harmful content may be a fine one and depend heavily on the relevant factual circumstances,” said Michael Szlesinger, an associate who advises on online safety regulations. “Like with all new regimes, there will be early fringe cases, for example in distinguishing coercive behavior and bullying. Legal challenges are also likely to arise involving the distinction between content that is harmful versus not harmful. That is where the main tension may lie with protecting users’ freedom of expression.”

Under the act, certain services, including large social media platforms, must comply with duties covering freedom of expression, content of democratic importance and journalistic content.

Woods said, “Fundamentally, however, the approach to freedom of expression is different in the U.K. from the U.S.; it is one right among many and is not more important than those rights.”

Szlesinger noted that freedom of expression in the U.K., unlike under the First Amendment in the U.S., is not an unqualified right.

“The U.K. is generally more relaxed than the U.S. in limiting freedom of expression, and with that comes a risk of limiting it in inappropriate circumstances,” Szlesinger said. “In practice, it is still too early to tell whether the Online Safety Act’s online safety duties will favor freedom of expression in borderline cases. We will need to see how it is enforced.”

Public support

The British public backs tighter regulations on social media platforms to create a safer online space. Before the introduction of the act, an Ipsos poll found that 4 in 5 adults were concerned about harmful content online. Now that it’s being implemented, concerns remain; only 9% of parents think enough is being done to protect children online.

Frustration with the current measures is clear; 85% of parents call for a stronger Online Safety Act to provide more protection.

When the bill was first introduced, under the previous Conservative government, ministers said the “world-leading bill” would put rules in place to make the U.K. the “safest place in the world to be online.”

But safety campaigners believe the Online Safety Act doesn’t go far enough to protect children, pointing to a lack of specific rules around livestreaming, the slow phased implementation and the absence of scrutiny of private messaging.

Ian Russell, whose 14-year-old daughter Molly took her own life in 2017 after viewing thousands of images promoting self-harm and suicide, wrote to Starmer, urging him to tighten the rules.

“No one can credibly claim that Ofcom’s implementation of the Online Safety Act is anything other than a disaster,” he wrote in an open letter. The way the law is being implemented has highlighted “intrinsic structural weaknesses,” Russell said, with a regulator that has “fundamentally failed to grasp the urgency and scale of its mission.”

In his letter, Russell urged the government to “act swiftly” and pointed to the recent policy shifts of large social media platforms.

“Mark Zuckerberg and Elon Musk are at the leading edge of a wholesale recalibration of the industry,” Russell wrote, moving away from online safety and toward an “anything-goes model.” He warned: “In this bonfire of digital ethics and online safety features, all of us will lose, but our children lose the most.”

It’s not only online safety campaigners who are urging a review of the law. London Mayor Sadiq Khan, a member of the governing Labour Party, has said the new act is “not fit for purpose.”

Khan said it failed to tackle the misinformation that fueled the violent far-right riots in England and Northern Ireland last year, which followed a mass stabbing that killed three girls in Southport, near Liverpool.

Rioters targeted immigrants and Muslims following false claims on social media that the killer was a Muslim asylum-seeker. Police arrested more than 1,000 people and charged just under 800.

How risk assessments will impact internet services

Woods said the regulator is responding to public concerns.

“Ofcom is under pressure from civil society and parliamentarians to do more, so it said it would iterate the codes once its information gathering powers were in place,” she said. “It is currently consulting on guidance on protecting women and girls online,” and will look to block accounts that have shared child sexual abuse material.

The regulator is investigating “using AI to tackle illegal harms, using hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content, and crisis response protocols for emergency situations,” Woods said.
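
Hash-matching works by comparing a compact digital fingerprint of an uploaded file against a database of fingerprints of known illegal images, so a platform can detect matches without ever storing or redistributing the images themselves. The sketch below is a rough illustration only: the database contents and function names are hypothetical, it uses an exact cryptographic hash, and production systems such as Microsoft’s PhotoDNA instead use perceptual hashes so that resized or re-encoded copies of an image still match.

    import hashlib

    # Hypothetical stand-in for an industry database of fingerprints of
    # known illegal images. In practice such lists are compiled and shared
    # by organizations like the Internet Watch Foundation.
    KNOWN_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(image_bytes: bytes) -> str:
        # SHA-256 produces a fixed-length fingerprint of the exact file contents.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block(image_bytes: bytes) -> bool:
        # Block the upload if its fingerprint matches known illegal material.
        return fingerprint(image_bytes) in KNOWN_HASHES

The key property is that only fingerprints change hands between platforms and regulators; the underlying images never do.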

Ofcom has warned it will act quickly against companies that fail to comply with risk assessments.

“We’ve identified a number of online services that may present particular risks of harm to U.K. users from illegal content — including large platforms as well as smaller sites — and are requiring them to provide their illegal harms risk assessment to us this month,” said Suzanne Cater, enforcement director at Ofcom, in a statement. “We’re ready to take swift action against any provider who fails to comply.”

The process is extensive.

Woods said, “When doing the risk assessment, the service has to take into account the risk profile … that includes over 1,000 individual sources and 130 grouped priority offenses.” This helps services identify key risk factors and associated harms.

All services need to consider risk factors, user complaints, user data and analysis of any previous incidents of harm.

“Larger or multi-risk services may also need to consider additional enhanced evidence, such as product testing results, content moderation results, views of external experts and audits, and views of users and representative groups,” she said.

How the law compares to the EU and U.S.

The Digital Services Act is part of the EU’s digital regulation strategy and was introduced as a way to better regulate search engines and online platforms. The act puts online services into categories, with large platforms and search engines subject to the strictest rules.

Google, X (formerly Twitter), TikTok, Facebook, Instagram and others are required to publish annual risk assessments and take action to mitigate risks across a broad range of content, including illegal content; disinformation affecting public health, security and elections; gender-based violence; and child safety.

Large technology companies must also publish transparency reports every six months, including information on their content moderation practices, such as the amount of content removed.

Failure to comply can lead to fines of up to 6% of the organization’s global annual turnover.

The Digital Services Act has already put the EU at odds with big technology companies, with the bloc bringing actions against X and TikTok over alleged breaches of rules on the protection of minors and advertising transparency. Other tech giants, including Amazon, lost a legal bid to delay their compliance with the act.

While the British act focuses on illegal and harmful content, the EU law also covers advertising, illegal goods and services, intellectual property and dark patterns (manipulative interface designs). The EU is implementing some of the toughest digital safety laws in the world while attempting to limit the power of influential platforms and search engines.

The U.S. has gone down a more decentralized path.

Utah, for example, recently became the first state to pass legislation requiring app stores to verify users’ ages and obtain parental consent when a child tries to download an app.

At the federal level, the U.S. Kids Online Safety Act would enhance protections for minors on the internet. If enacted, it would require social media platforms to apply the highest privacy settings by default for minors and to prevent their exposure to harmful content.

The bill has bipartisan support, having passed the Senate 91-3, and has been praised by children’s advocacy groups. Yet it faces delays amid concerns about potential overreach.

In the U.K., Ofcom continues to roll out the Online Safety Act. Phase two, expected to be complete by July, will look specifically at child safety, age assurances and protection for women and girls. The third and final phase will include final consultations and guidance, set to be completed by spring 2026.

If you are having thoughts of suicide, call or text 988, or call the National Suicide Prevention Lifeline at 1-800-273-8255 (TALK). Visit SpeakingOfSuicide.com/resources for a list of additional resources.

