The UK is open to banning social media for children as the government begins a feasibility study


The UK government is not ruling out further strengthening existing online safety rules by adding an Australian-style ban on social media for under-16s, Peter Kyle, the UK’s technology minister, has said.

Last summer, the government warned it might tighten laws on technology platforms in the wake of riots believed to have been fueled by online misinformation following a knife attack that killed three young girls.

Since then, it has emerged that some of the people prosecuted for rioting were minors, raising concerns about the impact of social media on impressionable developing minds.

Speaking to the Today programme on BBC Radio 4 on Wednesday, Kyle was asked whether the government would ban social media for under-16s. He replied: “Everything is on the table with me.”

Kyle was interviewed as the Department for Science, Innovation and Technology (DSIT) set out its priorities for implementing the Online Safety Act (OSA), which was passed by Parliament last year.

The OSA targets a wide range of online harms, from cyberbullying and hate speech to intimate image abuse, fraudulent advertising and animal cruelty, with UK lawmakers saying they wanted to make the country the safest place in the world to go online. The strongest motivation, though, was protecting children, with lawmakers responding to concerns about minors accessing harmful and inappropriate content.

DSIT’s draft Statement of Strategic Priorities continues this theme, placing children’s safety at the top of the list.

Strategic priorities for online safety

Here are DSIT’s five priorities for the OSA:

1. Safety by design: Embed safety by design to deliver safe online experiences for all users, especially children; tackle violence against women and girls; and work towards ensuring there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.

2. Transparency and accountability: Ensure industry transparency and accountability from platforms to deliver online safety outcomes, promoting increased trust and expanding the evidence base for safer experiences for users.

3. Agile regulation: Deliver an agile approach to regulation, ensuring the framework is robust in monitoring and tackling emerging harms, such as AI-generated content.

4. Inclusivity and resilience: Create an inclusive, informed and vibrant digital world that is resilient to potential harms, including disinformation.

5. Technology and innovation: Foster innovation in online safety technologies to improve user safety and drive growth.

The mention of “illegal disinformation” is notable, because the last government stripped clauses focused on this area out of the bill over freedom of expression concerns. But following the summer riots, the current government said it would review the OSA’s powers and could seek to strengthen them in light of how social media was used during the unrest.

“It is essential that we learn from these events and hold platforms to account for their role in securing the UK’s online information environment and protecting the UK from future crises,” the government wrote.

The full draft statement, published on Wednesday, also had this to say about online misinformation:

“One particular area of focus for the government is the significant amount of misinformation and disinformation that users can encounter online. Platforms must have robust policies and tools in place to mitigate this content where it relates to their duties under the Act. Tackling misinformation and disinformation is a profound challenge for services, given the need to preserve legitimate debate and freedom of expression online. However, the growing prevalence of disinformation poses a unique threat to the UK’s democratic processes and societal cohesion, and must be robustly confronted. Services must also remain resilient to emerging information threats, retaining the flexibility needed to respond quickly and decisively and to reduce harmful impacts on users, especially vulnerable groups.”

DSIT’s intervention will guide how Ofcom applies the law by requiring it to report on the government’s priorities.

For more than a year, Ofcom, the regulator charged with overseeing internet platforms’ and services’ compliance with the OSA, has been preparing to implement the law by consulting on and producing detailed guidance in areas such as age-verification technology.

The regime is finally expected to come into force next spring, when Ofcom will actively take up enforcement powers that could see technology companies fined up to 10% of global annual turnover for failing to meet the duty of care the law imposes.

Kyle also said on children and social media: “What I want to do is look at the evidence,” noting the simultaneous launch of a “feasibility study” that he said “will look at areas where evidence is lacking.”

According to DSIT, this study will explore the effects of smartphone and social media use on children, to help advance the research and strengthen the evidence needed to build a safer online world.

Kyle also told the BBC: “There are assumptions about the impact (of social media) on children and young people, but there is no conclusive, peer-reviewed evidence,” suggesting that any UK ban on children’s use of social media would need to be evidence-led.

During the interview with the BBC’s Emma Barnett, Kyle was also pressed on what the government had done to address the loopholes he had previously suggested the Online Safety Act contained. He responded by flagging a change that had been enacted that would require platforms to be more proactive in tackling the abuse of intimate images.

Addressing intimate image abuse

In September, DSIT announced that it was making the sharing of intimate images without consent a “priority offence” under the OSA, requiring social media and other in-scope platforms and services to clamp down on the practice or risk significant fines.

DSIT spokesperson Glenn McAlpine confirmed: “This move has effectively increased the seriousness of the crime of sharing intimate images under the Online Safety Act, so platforms must be proactive in removing content and preventing it from appearing in the first place.”

In further comments to the BBC, Kyle said the change means social media companies must use algorithms to prevent intimate images from being uploaded in the first place.

“They should have proactively made clear to our regulator, Ofcom, that algorithms would prevent this material from appearing in the first place. If an image does appear online, it should be removed as quickly as can reasonably be expected after being flagged,” he added, warning of “heavy fines” for non-compliance.

“It’s one of those areas where you can see that harm is being prevented, rather than going out into the community and then dealing with it afterwards — which was what happened before,” he added. “Now, thousands and thousands of women are protected — prevented from being insulted, humiliated, and sometimes pushed toward suicidal thoughts — because of that one power we enacted.”
