Technology

Internet users are getting younger; now the UK is considering whether artificial intelligence can help protect them

Artificial intelligence has landed in the sights of governments worried about its potential for misuse in fraud, disinformation and other malicious online activity. Now, in the UK, the regulator is preparing to examine how AI is being used to combat some of these harms, particularly content that is harmful to children.

Ofcom, the regulator charged with enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and how they can be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to detect child sexual abuse material that was previously difficult to identify.

The tools would be part of a wider set of Ofcom proposals focused on keeping children safe online. Ofcom said consultations on the comprehensive proposals will begin in the coming weeks, with the consultation on AI coming later this year.

Mark Bunting, a director in Ofcom's Online Safety Group, says its interest in AI starts with a look at how well it is used as a screening tool today.

“Some services are already using these tools to identify and protect children from this content,” he told TechCrunch. “But there isn’t much information about the accuracy and effectiveness of these tools. We want to look at ways we can ensure that the industry is assessing them when it uses them, making sure that risks to free expression and privacy are being managed.”

One likely outcome is that Ofcom will recommend how and what platforms should assess, which could lead not only to platforms adopting more sophisticated tools, but also to fines if they fail to improve how they block content or find better ways to keep younger users from seeing it.

“As with many internet safety regulations, companies have a responsibility to ensure they take the appropriate steps and use the appropriate tools to protect users,” he said.

There will likely be both critics and supporters of these moves. AI researchers are finding increasingly sophisticated ways of using AI to detect deepfakes, for example, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.

Ofcom announced the consultation on AI tools at the same time as it published its latest study of how children engage online in the UK, which found that, overall, more younger children are connected to the web than ever before, so much so that Ofcom is now breaking out activity among ever-younger age brackets.

Nearly a quarter (24%) of all children aged 5 to 7 now have their own smartphone, and when tablets are included the figure rises to 76%, according to a survey of UK parents. The same age group is also consuming far more media on these devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the children (up from 39% a year ago) are watching streaming media.

Age restrictions are getting lower on some of the most popular social media apps, but whatever the limits, in the UK they do not appear to be heeded anyway. Ofcom found that around 38% of children aged 5 to 7 use social media. Meta's WhatsApp (37%) is the most popular app among them. And, possibly for the first time, Meta's flagship image app may be relieved to be less popular than ByteDance's viral sensation: 30% of children aged 5 to 7 use TikTok, while “only” 22% use Instagram. Discord rounded out the list but is far less popular, at just 4%.

Around one-third (32%) of children this age go online on their own, and 30% of parents said they were not bothered by their underage children having social media profiles. YouTube Kids remains the most popular network among younger users (48%).

Gaming, a perennial favorite among children, is now played by 41% of children aged 5 to 7, with 15% of this age group playing shooter games.

Although 76% of parents surveyed said they had talked to their young children about staying safe online, Ofcom points out that there are question marks between what a child sees and what that child might report. Researching older children aged 8 to 17, Ofcom interviewed them directly and found that 32% said they had seen worrying content online, but only 20% of their parents said they had reported anything.

Even accounting for some inconsistencies in reporting, “the research suggests a disconnect between older children’s exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one of the challenges: deepfakes are also an issue. Among children aged 16 to 17, Ofcom said, 25% said they were not confident they could distinguish fake from real content online.

This article was originally published on techcrunch.com.

Technology

One of Google's latest AI models is worse on safety

The Google Gemini generative AI logo on a smartphone.

A recently released Google AI model performs worse on certain safety tests than its predecessor, according to the company's internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses by 4.1% and 9.6%, respectively.

Text-to-text safety measures how often a model violates Google's guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
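Conceptually, an automated evaluation like this amounts to running a fixed prompt set through the model and scoring each response with an automated policy classifier instead of a human rater. Below is a minimal sketch of that loop; Google has not published its harness, so the `generate` and `violates_policy` callables here are hypothetical stand-ins.

```python
# Minimal sketch of an automated safety benchmark of the kind described
# above. Both callables are hypothetical stand-ins: generate() queries the
# model under test, and violates_policy() is an automated classifier
# (no human review), mirroring how the report describes its tests.

from typing import Callable, Iterable


def violation_rate(
    prompts: Iterable[str],
    generate: Callable[[str], str],
    violates_policy: Callable[[str, str], bool],
) -> float:
    """Return the share of prompts whose responses breach the guidelines."""
    prompts = list(prompts)
    if not prompts:
        raise ValueError("prompts must be non-empty")
    flagged = sum(1 for p in prompts if violates_policy(p, generate(p)))
    return flagged / len(prompts)


# A regression like the one reported would surface as the newer model's
# rate exceeding the older model's on the same prompt set, e.g.:
#   violation_rate(prompts, gemini_25_flash, classifier)
#     > violation_rate(prompts, gemini_20_flash, classifier)
```

The image-to-text variant would be the same loop with image prompts in place of text ones.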

In an email, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive, that is, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned them not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes those efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI's ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google's technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regressions can be attributed partially to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.

“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” the report reads.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch's testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report show the need for more transparency in model testing.

“There's a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google's latest Flash model complies with instructions more while also violating policies more. Google doesn't provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it's hard for independent analysts to know whether there's a problem.”

Google has come under fire before over its model safety reporting practices.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was finally published, it initially omitted key safety testing details.

On Monday, Google published a more detailed report with additional safety information.

This article was originally published on techcrunch.com.

Technology

Aurora launches commercial self-driving truck service in Texas

Autonomous vehicle technology startup Aurora Innovation says it has successfully launched a self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.

The launch comes as Aurora meets a deadline: in October, the company pushed its planned 2024 debut to April 2025. It also comes five months after rival Kodiak Robotics delivered its first autonomous trucks to commercial customers for driverless operations in off-road environments.

Aurora says it began hauling freight between Dallas and Houston this week with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.

TechCrunch has reached out for more details about the launch, for example, how many vehicles Aurora has deployed and whether the system has had to execute a pullover maneuver or required remote human assistance.

Aurora's commercial launch comes at a tricky time. Self-driving truck developers have long pitched the need for their technology by pointing to labor shortages in long-haul trucking and an expected rise in freight shipping. Trump's tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, freight is expected to decline this year in the U.S. amid falling volumes and consumer spending.

Aurora will report its first-quarter results next week, which is when it may share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora's operations.

For now, Aurora will likely focus on continuing to prove out its driverless safety case and on working with state and federal lawmakers to pass favorable policies that help it grow.

In early 2025, Aurora sued federal regulators after its petition was denied for an exemption from a safety requirement that warning triangles be placed on the road when a truck must stop on the highway, something that is difficult to do when there is no driver in the vehicle. To stay compliant with the rule and continue fully driverless operations, Aurora likely has a human-driven chase vehicle trailing its trucks while they operate.

This article was originally published on techcrunch.com.

Technology

Sarah Tavel, Benchmark's first female GP, transitions to venture partner

Eight years after joining Benchmark as the firm's first female general partner, Sarah Tavel announced that she is moving to a more limited venture partner role at the firm.

In her new position as a venture partner, Tavel will continue to invest and to serve on her existing company boards, but she will have more time to explore “AI tools on the edge” and to think about where artificial intelligence is headed, she wrote.

Tavel joined Benchmark in 2017 after spending a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm's investments in Pinterest and GitHub.

Since its founding in 1995, Benchmark has deliberately kept a small team of six or fewer general partners. Unlike most VC firms, where senior partners typically take the majority of management fees and profits, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.

During her tenure as a Benchmark general partner, Tavel invested in campsite marketplace Hipcamp, crypto intelligence startup Chainalysis, and beauty platform Supergreat, which was acquired by Whatnot in 2023. Tavel also backed photo-sharing app Poparazzi, which shut down two years ago, and the AI sales platform 11x, which TechCrunch has covered.

This article was originally published on techcrunch.com.