
UK open to banning social media for children as government launches feasibility study


The UK government isn’t ruling out further tightening its existing online safety rules with an Australian-style social media ban for under-16s, Technology Secretary Peter Kyle has said.

The government warned over the summer that it could tighten rules on tech platforms in the wake of unrest deemed to have been sparked by online disinformation following a stabbing attack that left three young girls dead.

It has since emerged that some of those charged over the riots are minors, increasing concerns about the impact of social media on impressionable, developing minds.


Speaking to the Today programme on BBC Radio 4 on Wednesday, Kyle was asked whether the government would ban people under 16 from using social media. He responded by saying, “Everything is on the table for me.”

Kyle was interviewed as the Department for Science, Innovation and Technology (DSIT) set out its strategic priorities for enforcing the Online Safety Act (OSA), which Parliament passed last year.

The OSA targets a wide range of online harms, from cyberbullying and hate speech to intimate image abuse, deceptive advertising and animal cruelty, and British lawmakers say they want to make the country the safest place in the world to be online. Child protection was the strongest driver, with lawmakers responding to concerns about children accessing harmful and inappropriate content.

DSIT’s Statement of Strategic Priorities continues this theme by placing child safety at the top of the list.


Strategic online safety priorities

Here are DSIT’s five priorities for the OSA in full:

1. Safety by design: Embed safety by design to deliver a safe online experience for all users, especially children, combat violence against women and girls, and work to ensure there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.

2. Transparency and accountability: Ensure transparency and accountability from platforms in delivering online safety outcomes, promoting greater trust and expanding the evidence base to support safer experiences for users.

3. Agile regulation: Ensure a flexible approach to regulation, providing a robust framework for monitoring and addressing emerging harms such as AI-generated content.

4. Inclusion and resilience: Create an inclusive, informed and vibrant digital world that’s resilient to potential harm, including disinformation.


5. Technology and innovation: Support innovation in online safety technologies to improve user safety and drive growth.

The mention of “illegal disinformation” is interesting, since the previous government removed clauses from the bill that focused on this area over free speech concerns.

In a ministerial statement accompanying the announcement, Kyle also wrote:

“A particular area of concern for the government is the enormous amount of misinformation and disinformation that users may encounter online. Platforms should have robust policies and tools in place to minimize such content where it relates to their obligations under the Act. Combating misinformation and disinformation is a challenge for services, given the need to preserve legitimate debate and freedom of expression on the internet. However, the growing presence of disinformation poses a unique threat to our democratic processes and social cohesion in the UK that must be vigorously countered. Services should also respond to emerging information threats with the agility to act quickly and decisively, minimizing harmful effects on users, especially vulnerable groups.”

DSIT’s intervention may shape how Ofcom enforces the law, as the regulator is required to report on how it is delivering against the government’s priorities.


For over a year, Ofcom, the regulator tasked with overseeing online platforms’ and services’ compliance with the OSA, has been preparing for the law’s implementation by consulting on and developing detailed guidance, including in areas such as age verification technology.

Enforcement of the regime is expected to start next spring, when Ofcom’s powers become active, and could see financial penalties of up to 10% of global annual turnover imposed on tech companies that fail to comply with their legal duty of care.

“I want to look at the evidence,” Kyle said of children and social media, pointing to the simultaneous launch of a “feasibility study” that he said “will look at areas where the evidence is lacking.”

According to DSIT, this study “will examine the impact of smartphone and social media use on children to help strengthen the research and evidence needed to build a safer online world.”


“There are assumptions about the impact of social media on children and young people, but there is no solid, peer-reviewed evidence,” Kyle also told the BBC, suggesting that any UK ban on children using social media would have to be evidence-based.

During an interview with the BBC’s Emma Barnett, Kyle was also pressed about what the government is doing to close loopholes he believes existed in earlier online safety laws. In response, he pointed to a change requiring platforms to be more proactive in combating intimate image abuse.

Combating abuse related to intimate images

In September, DSIT announced that it was designating the non-consensual sharing of intimate images as a “priority offense” under the OSA, requiring social media and other covered platforms and services to clamp down on the abuse or face heavy financial penalties.

“This move effectively increases the seriousness of the offense of sharing intimate images under the Online Safety Act, so platforms must proactively remove content and prevent it from appearing in the first place,” DSIT spokesperson Glen McAlpine confirmed.


In further comments to the BBC, Kyle said the change means social media companies must use algorithms to prevent intimate images from being uploaded in the first place.

“They have to actively demonstrate to our regulator, Ofcom, that their algorithms will prevent the spread of this material in the first place. And if a photo does appear online, it must be removed as quickly as can reasonably be expected after being flagged,” he said, warning of “heavy penalties” for non-compliance.

“This is one area where you can see that harm is being prevented, rather than leaking out into society and being dealt with afterwards, which was the case before,” he added. “Thousands of women are now protected – prevented from being degraded, humiliated and sometimes pushed towards suicidal thoughts because of this one power I introduced.”

This article was originally published on techcrunch.com.


Meta introduces limited teen accounts on Facebook and Messenger


The apps Instagram, Facebook and WhatsApp can be seen on the display of a smartphone in front of the logo of the Meta internet company.

Meta is introducing Teen Accounts on Facebook and Messenger. The feature, which automatically enrolls young users into an app experience with built-in protections, will be available on those platforms in the US, UK, Australia and Canada before expanding to additional regions in the future.

Teen Accounts first appeared on Instagram last September, after Instagram and other popular social networks were grilled by American lawmakers for not doing enough to protect teenagers. As part of Tuesday’s announcement, Meta said it is also bringing new built-in protections to Teen Accounts on Instagram.

With the expansion to Facebook and Messenger, teenagers will automatically be placed in an experience designed to limit inappropriate content and unwanted contact. Teens under 16 need a parent’s permission to change any of the settings.


While Meta’s blog post about the launch doesn’t spell out the exact restrictions teens will be placed under, the company told TechCrunch via email that teens will only receive messages from people they follow or have previously messaged.

In addition, only a teen’s friends can see and reply to their stories. Tags, mentions and comments will also be limited to people they follow or who are their friends.

Teens will also receive reminders to leave the apps after using them for an hour a day, and they will be enrolled in “quiet mode” overnight.

As for the new Instagram restrictions, teens under 16 will not be able to go live on the platform unless their parents give them permission. In addition, teens under 16 will need parental consent to turn off the app feature that blurs images suspected of containing nudity in DMs.


The changes announced on Tuesday mark Meta’s latest step toward addressing concerns about teen mental health and social media. Those concerns have been raised by the US Surgeon General and several states, some of which have even begun restricting teens from using social media without a parent’s consent.

Meta also shared insight into how Teen Accounts are performing on Instagram, saying it has moved 54 million teens into Teen Accounts, with many more to come as the feature continues to roll out around the world. The company also said that 97% of teens aged 13-15 keep the built-in protections turned on.

Meta also commissioned an Ipsos study, which found that nearly all surveyed parents (94%) say Teen Accounts are helpful for parents, and 85% believe they make it easier to help teens have positive experiences on Instagram.


This article was originally published on techcrunch.com.


Meta’s benchmarks for its new AI models are somewhat misleading



One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a test in which human raters compare model outputs and choose which they prefer. But it appears that the version of Maverick Meta deployed to LM Arena differs from the version that is widely available to developers.

As several AI researchers pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an “experimental chat version.” A chart on the official Llama website, meanwhile, reveals that Meta’s LM Arena testing was conducted using “Llama 4 Maverick optimized for conversation.”

As we’ve written before, LM Arena has never been the most reliable measure of an AI model’s performance, for various reasons. But AI companies generally haven’t customized or otherwise tuned their models to score better on LM Arena, or at least haven’t admitted to doing so.


The problem with tailoring a model to a benchmark, withholding that version, and then releasing a “vanilla” variant of the same model is that it makes it hard for developers to predict how well the model will perform in specific contexts. It is also misleading. Ideally, benchmarks, inadequate as they are, provide a snapshot of a single model’s strengths and weaknesses across a range of tasks.

Indeed, researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emoji and give extremely long answers.

We have reached out to Meta and Chatbot Arena, the organization that maintains LM Arena, for comment.


This article was originally published on techcrunch.com.


Trump delays the TikTok ban



Donald Trump has signed a new executive order to “save TikTok.”


TikTok will live to see another day, at least for now. On April 4, President Donald Trump signed a new executive order delaying the ban on the popular social app by another 75 days. The app had been set to go dark in the US on April 5.

The app, owned by the Chinese company ByteDance, is now on its second extension of the year. In 2024, President Biden signed bipartisan legislation to ban TikTok, citing national security concerns, and Congress passed it by a wide margin. Although Trump has signed an executive order to “save” the app, many have questioned the legality of the move. As with many of the president’s actions at the start of his term, critics complain that he appears to be exceeding the authority of the executive office.


Trump announced his move to pause the ban on Truth Social, saying that his administration is still working on a deal.

“My administration has worked very hard on a deal to save TikTok, and we have made great progress,” Trump wrote on April 4. “The deal requires more work to ensure all necessary approvals are signed, which is why I am signing an executive order to keep TikTok running for an additional 75 days.”

Trump cited his newly imposed tariffs on China as a key reason negotiations with a buyer have stalled.

“We hope to continue working in good faith with China, which, I understand, is not very happy about our reciprocal tariffs – necessary for fair and balanced trade between China and the USA,” Trump wrote. “This proves that tariffs are the most powerful economic tool and very important for our national security. We do not want TikTok to go dark. We look forward to working with TikTok and China to close the deal.”


This marks the second time Trump has stepped in to delay the ban. In January, just days after returning to office, he signed the first extension to keep TikTok, used by over 170 million Americans, available to users.

A potential sale of TikTok is drawing major attention from top players in the business world. According to The Hill, many private equity firms, venture capital groups, and leading tech investors have submitted offers for the popular app.

Among the companies apparently in the mix are Blackstone, Oracle, Amazon, founded by Jeff Bezos, and OnlyFans founder Tim Stokely. Interest in purchasing TikTok has grown as uncertainty about its future in the US continues to mount.

The app, used by 170 million Americans, sits at the center of ongoing political and economic negotiations between the United States and China. With pressure and deadlines mounting, the prospect of a sale has opened the door to the biggest names in tech and finance.

This article was originally published on www.blackenterprise.com.