UK open to banning social media for children as government launches feasibility study
The UK government isn’t ruling out further tightening its existing online safety rules by adding an Australian-style social media ban for under-16s, Technology Secretary Peter Kyle has said.
The government warned over the summer that it could tighten rules on tech platforms in the wake of unrest that was deemed to have been sparked by online disinformation following a stabbing attack that left three young girls dead.
It has since emerged that some of those charged over the riots are minors, raising concerns about the impact of social media on impressionable, developing minds.
Speaking to the Today program on BBC Radio 4 on Wednesday, Kyle was asked whether the government would ban under-16s from using social media. He responded: “Everything is on the table for me.”
Kyle was interviewed as the Department for Science, Innovation and Technology (DSIT) set out its priorities for enforcing the Online Safety Act (OSA), which Parliament passed last year.
The OSA targets a broad range of online harms, from cyberbullying and hate speech to intimate image abuse, misleading advertising and animal cruelty, and British lawmakers say they want to make the country the safest place in the world to go online. Child protection was the strongest driver, with lawmakers responding to concerns about children accessing harmful and inappropriate content.
DSIT’s Statement of Strategic Priorities continues this theme by placing child safety at the top of the list.
Strategic online safety priorities
Here are DSIT’s five priorities for OSA in full:
1. Safety by design: Embed safety by design to ensure a safe online experience for all users, but especially children; combat violence against women and girls; and work to ensure there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.
2. Transparency and accountability: Ensure industry transparency and accountability from platforms to deliver online safety outcomes, promoting greater trust and expanding the evidence base to provide safer user experiences.
3. Agile regulation: Ensure an agile approach to regulation by providing a robust framework for monitoring and addressing emerging harms, such as AI-generated content.
4. Inclusion and resilience: Create an inclusive, informed and vibrant digital world that’s resilient to potential harm, including disinformation.
5. Technology and innovation: Support innovation in online safety technologies to improve user safety and drive growth.
The mention of “illegal disinformation” is interesting, since the last government removed clauses from the bill that focused on this area over free speech concerns.
In an accompanying ministerial statement, Kyle also wrote:
“A particular area of concern for the government is the enormous amount of misinformation and disinformation that users may encounter online. Platforms should have robust policies and tools in place to minimize such content where it relates to their obligations under the Act. Combating misinformation and disinformation is a challenge for services, given the need to preserve legitimate debate and freedom of expression on the Internet. However, the growing presence of disinformation poses a unique threat to our democratic processes and social cohesion in the UK that must be vigorously countered. Services should also respond to emerging information threats, providing the flexibility to respond quickly and decisively and minimize harmful effects on users, especially vulnerable groups.”
DSIT’s intervention may shape how Ofcom enforces the law, as the regulator is required to report on how it delivers against the government’s priorities.
For over a year, Ofcom, the regulatory body tasked with overseeing platforms’ and services’ compliance with the OSA, has been preparing for its implementation by consulting on and developing detailed guidance, including in areas such as age verification technology.
Enforcement of the regime is expected to begin next spring, when Ofcom formally takes up its powers, which could see financial penalties of up to 10% of global annual turnover imposed on technology companies that fail to comply with their legal duty of care.
“I want to look at the evidence,” Kyle said in the interview when discussing children and social media, pointing to the simultaneous launch of a “feasibility study” that he said “will look at areas where the evidence is lacking.”
According to DSIT, this study “will examine the impact of smartphone and social media use on children to help strengthen the research and evidence needed to build a safer online world.”
“There are assumptions about the impact of social media on children and young people, but there is no solid, peer-reviewed evidence,” Kyle also told the BBC, suggesting that any UK ban on children using social media would have to be evidence-based.
During the interview with the BBC’s Emma Barnett, Kyle was also pressed on what the government is doing to close loopholes he believes previously existed in online safety laws. In response, he pointed to a change requiring platforms to be more proactive in combating intimate image abuse.
Combating intimate image abuse
In September, DSIT announced that it was designating the non-consensual sharing of intimate images as a “priority offense” under the OSA, requiring social media and other in-scope platforms and services to clamp down on the abuse or face heavy financial penalties.
“This move effectively raises the seriousness of the offense of sharing intimate images under the Online Safety Act, so platforms must proactively remove content and prevent it from appearing in the first place,” confirmed DSIT spokesperson Glen Mcalpine.
In further comments to the BBC, Kyle said the change means social media companies must use algorithms to prevent intimate images from being uploaded in the first place.
“They have to proactively demonstrate to our regulator, Ofcom, that their algorithms will prevent the spread of this material in the first place. And if an image does appear online, it must be removed as quickly as can reasonably be expected after notification,” he said, warning of “heavy penalties” for non-compliance.
“This is one area where you can see that harm is being prevented, rather than leaking into society and then being dealt with afterwards, which was the case before,” he added. “Thousands of women are now protected – prevented from being degraded, humiliated and sometimes pushed towards suicidal thoughts because of this one power I introduced.”