Technology

Internet users are getting younger; now the UK is considering whether artificial intelligence can help protect them

Artificial intelligence has come into the sights of governments concerned about its potential for misuse in fraud, disinformation, and other malicious online activity. Now the UK's regulator is preparing to examine how AI is being used to combat some of those harms, particularly content that is harmful to children.

Ofcom, the regulatory body responsible for enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and how they could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to detect child sexual abuse material that has previously been hard to catch.

These tools would be part of a wider set of Ofcom proposals focused on keeping children safe online. Ofcom said consultation on the comprehensive proposals would begin in the coming weeks, with the consultation on AI coming later this year.

Mark Bunting, a director in Ofcom's online safety group, says Ofcom's interest in AI starts with a look at how well it is currently used as a screening tool.

“Some services are already using these tools to identify and shield children from this content,” he told TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways we can ensure that the industry is assessing that when it uses them, making sure that risks to free expression and privacy are being managed.”

One likely outcome is Ofcom recommending how and what platforms should assess, which could potentially lead not only to platforms adopting more sophisticated tools, but also to fines if they fail to improve how they block content or to find better ways of keeping younger users from seeing it.

“As with many internet safety regulations, companies have a responsibility to ensure they take the appropriate steps and use the appropriate tools to protect users,” he said.

There will likely be both critics and supporters of these moves. AI researchers are finding ever more sophisticated ways of using AI to detect deepfakes, for example, as well as to verify users online. And yet there are just as many skeptics who note that AI detection is far from foolproof.

Ofcom announced the consultation on AI tools at the same time as it published its latest research into children's online engagement in the UK, which found that, overall, more younger children are online than ever before, so much so that Ofcom now breaks out activity among ever-younger age groups.

Nearly a quarter (24%) of all children aged 5 to 7 now have their own smartphones, and when tablets are included, that figure rises to 76%, according to a survey of UK parents. The same age group is also much more likely to consume media on those devices: 65% have made voice and video calls (up from 59% just a year ago), and half of the children (up from 39% a year ago) watch streaming media.

Age limits are getting lower on some popular social media apps, but whatever the limits are, they don't appear to be heeded in the UK anyway. Ofcom found that around 38% of children aged 5 to 7 use social media. The most popular app among them is Meta's WhatsApp (37%). And in what may be a first-ever relief for Meta, its flagship image app proved less popular than ByteDance's viral sensation: 30% of children aged 5 to 7 use TikTok, while “only” 22% use Instagram. Discord rounded out the list, but is far less popular at just 4%.

About one-third (32%) of children this age go online on their own, and 30% of parents said they were not bothered by their underage children having social media profiles. YouTube Kids remains the most popular network among younger users (48%).

Gaming, a perennial favorite among children, is now played by 41% of children aged 5 to 7, and 15% of children this age play shooter games.

Although 76% of parents surveyed said they had talked to their young children about staying safe online, Ofcom points out that there are question marks between what a child sees and what that child might report. Surveying older children aged 8 to 17, Ofcom interviewed them directly. It found that 32% of those children reported having seen worrying content online, but only 20% of their parents said they had reported anything.

Even accounting for some inconsistencies in reporting, “the research suggests a disconnect between older children’s exposure to potentially harmful content online and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one of the challenges: deepfakes are also an issue. Among 16- to 17-year-olds, Ofcom reported, 25% said they were not confident they could distinguish fake content from real content online.

This article was originally published on techcrunch.com.
Trump to sign bill criminalizing revenge porn and explicit deepfakes

President Donald Trump is expected to sign the Take It Down Act, a bipartisan law that introduces stiffer penalties for distributing nonconsensual explicit images, including deepfakes and revenge porn.

The act criminalizes the publication of such images, whether they are authentic or AI-generated. Anyone who publishes the photos or videos can face penalties, including fines, prison time, and restitution.

Under the new law, media companies and internet platforms must remove such material within 48 hours of notice from the victim. Platforms must also take steps to remove duplicates of the content.

Many states have already banned sexually explicit deepfakes and revenge porn, but this marks the first time federal regulators will step in to impose restrictions on internet companies.

First lady Melania Trump lobbied for the law, which was sponsored by senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.). Cruz has said he was inspired to act after hearing that Snapchat refused for nearly a year to remove a deepfake of a 14-year-old girl.

Free speech advocates and digital rights groups have raised concerns, saying the law is too broad and could lead to the censorship of legal images, such as legal pornography, as well as of government critics.

This article was originally published on techcrunch.com.
Microsoft's Satya Nadella chooses chatbots over podcasts

Satya Nadella at Microsoft Ignite 2023

While Microsoft CEO Satya Nadella says he likes podcasts, he may not actually listen to them anymore.

That tidbit comes near the end of a longer Bloomberg profile of Nadella, focused on Microsoft's AI strategy and its complicated relationship with OpenAI. To illustrate how much he uses Microsoft's AI assistant Copilot in his daily life, Nadella said that instead of listening to podcasts, he now sends the transcript to Copilot and then talks with Copilot about the content while driving to the office.

In addition, Nadella, who jokingly described his job as being an “email driver,” said he relies on at least 10 custom agents built in Copilot Studio to summarize emails and messages, prepare him for meetings, and perform other office tasks.

It seems AI may already be transforming Microsoft in more significant ways, with programmers reportedly the hardest hit in the company's latest layoffs, which came shortly after Nadella said that as much as 30% of the company's code is now written by AI.

This article was originally published on techcrunch.com.
The planned OpenAI data center in Abu Dhabi would be bigger than Monaco

Sam Altman, CEO of OpenAI

OpenAI is set to help develop a staggering 5-gigawatt data center campus in Abu Dhabi, positioning the company as the primary anchor tenant in what could become one of the world's largest AI infrastructure projects, according to a new Bloomberg report.

The facility would reportedly span a massive 10 square miles and consume power equivalent to the output of five nuclear reactors, dwarfing any existing AI infrastructure announced by OpenAI or its competitors. (OpenAI had not yet responded to TechCrunch's request for comment, but for perspective: the campus would be bigger than Monaco.)

The UAE project, developed in partnership with Abu Dhabi-based conglomerate G42, is part of OpenAI's ambitious Stargate initiative, a joint venture announced in January that envisions massive data centers around the world powering AI development.

While the first Stargate campus in the United States, already underway in Abilene, Texas, is expected to reach 1.2 gigawatts, this Middle Eastern counterpart will be more than four times that size.

The project arrives amid a broader wave of AI deals between the US and the UAE that have been years in the making and that have irked some legislators.

OpenAI's ties with the UAE date back to a 2023 partnership with G42 aimed at driving AI adoption in the Middle East. Speaking earlier in Abu Dhabi, OpenAI CEO Sam Altman himself praised the UAE, saying it “has been talking about AI since before it was cool.”

As with much of the AI world, these relationships are... complicated. Founded in 2018, G42 is chaired by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE's national security adviser and the younger brother of the country's ruler. Its embrace by OpenAI raised concerns in late 2023 among American officials, who feared G42 could give the Chinese government access to advanced American technology.

Those fears centered on G42's “active relationships” with blacklisted entities, including Huawei and the Beijing Genomics Institute, as well as with individuals linked to Chinese intelligence efforts.

After pressure from American legislators, G42's CEO told Bloomberg in early 2024 that the company had changed its strategy, saying: “All our China investments that were previously made have been divested. Because of that, of course, we have no need anymore for any physical presence in China.”

Shortly afterward, Microsoft, OpenAI's major shareholder with its own wider interests in the region, announced a $1.5 billion investment in G42, with Microsoft president Brad Smith joining G42's board.

This article was originally published on techcrunch.com.