
U.K. Regulators Propose Broad Social Media Regulations To Counter 'Online Harms'

Facebook, Messenger and Instagram apps are displayed on an iPhone. U.K. regulators are proposing broad new rules on social media platforms to combat online harms. (Jenny Kane / AP)

Terrorist groups using the Internet to spread propaganda. Criminal gangs inciting violence over social media. Sexual predators going online to groom children for exploitation.

These are some of the harms the United Kingdom has identified that spread via social media platforms. To counter those harms, the U.K. plans to require social media companies to be much more active in removing and responding to harmful material on their platforms. The sweeping 102-page white paper, released Monday, envisions requirements for everything from ensuring an accurate news environment, to combating hate speech, to stamping out cyberbullying.

The proposal is the latest in a series of increased efforts by governments around the world to respond to harmful content online. Last year, Germany began imposing fines of up to $60 million on social media companies that fail to delete illegal content posted to their services. After the massacre in Christchurch, New Zealand, Australian lawmakers passed legislation subjecting social media executives to imprisonment if they don't quickly remove violent content. The U.K. proposal goes further than most by proposing the creation of a new regulatory body to monitor a broad array of harms and ensure companies comply.

"I'm deeply concerned that social media firms are still not doing enough to protect users from harmful content," Prime Minister Theresa May said. "So today, we're putting a legal duty of care on these companies to keep users safe. And if they fail to do so, tough punishments will be imposed. The era of social media firms regulating themselves is over."

The new duty of care has yet to be fleshed out, but U.K. officials offered plenty of suggestions for what they expect a new regulator to include in a code of practice. Officials expect companies to do what they can to "counter illegal content and activity." That could include requirements to actively "scan or monitor content for tightly defined categories of illegal content," such as threats to national security or material that sexually exploits children.

Officials also expect any code of practice to require that social media companies use fact-checking services during elections, promote "authoritative" news services, and generally try to counter the echo-chamber effect, in which many users are exposed only to viewpoints they already agree with.

If companies don't comply with the new rules, they could face "substantial" fines, and individual members of the company's senior management could face personal liability. It's a big change from the current system of liability, in which online platforms aren't held responsible for illegal content that users upload until they become aware of it.

"We cannot allow these harmful behaviours and content to undermine the significant benefits that the digital revolution can offer," said Jeremy Wright, the U.K.'s digital secretary, in a statement accompanying the proposal. "While some companies have taken steps to improve safety on their platforms, progress has been too slow and inconsistent overall. If we surrender our online spaces to those who spread hate, abuse, fear and vitriolic content, then we will all lose."

Regulators proposed that the new rules apply to companies that "allow users to share or discover user-generated content or interact with each other online." The broad language encompasses "a very wide range of companies of all sizes," the report said, "including social media platforms, file hosting sites, public discussion forums, messaging services and search engines."

Online companies said they supported the idea of ensuring a safer Internet, but noted that they have already poured substantial resources into countering online harms. "We haven't waited for regulation, we've created new technology, hired experts and specialists, and ensured our policies are fit for the evolving challenges we face online," Google public policy manager Claire Lilley told TechCrunch.

TechUK, a consortium of technology companies in the U.K. including Facebook, said in a statement that many of the proposals are too vague. "The duty of care is a deceptively straightforward sounding concept. However it is still not clearly defined and open to broad interpretation," the group said. "Government will need to clarify the legal meaning and how it expects companies to comply with such a potentially broad obligation which could conflict with other fundamental rights - particularly in relation to private communications on their platforms."

Berin Szoka, president of the libertarian think tank TechFreedom, cautioned that the proposal "mandates a sweeping system of self-censorship" that could cause companies to "over-censor in order to avoid the wrath of the regulator." In an e-mail to NPR, Szoka said he worries that such a system could "legitimize the same kind of system in Russia, China and other countries." Authoritarian governments around the world "will all undoubtedly exploit the UK's model to justify their own system of censoring content they deem 'harmful,' " he said.


Matthew S. Schwartz is a reporter with NPR's news desk. Before coming to NPR, Schwartz worked as a reporter for Washington, DC, member station WAMU, where he won the national Edward R. Murrow Award for feature reporting in large market radio. Previously, Schwartz worked as a technology reporter covering the intricacies of Internet regulation. In a past life, Schwartz was a Washington telecom lawyer. He got his J.D. from Georgetown University Law Center, and his B.A. from the University of Michigan ("Go Blue!").