Technology
Internet users are getting younger; now the UK is considering whether artificial intelligence can help protect them
Artificial intelligence has come into the sights of governments concerned about its potential for misuse in fraud, disinformation and other malicious online activity. Now, in the UK, the regulator is preparing to examine how AI is being used to combat some of these harms, particularly content that is harmful to children.
Ofcom, the regulatory body responsible for enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how artificial intelligence and other automated tools are used today, and how they could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to detect child sexual abuse material that was previously difficult to detect.
These tools would be part of a wider set of Ofcom proposals focused on keeping children safe online. Ofcom said consultation on the comprehensive proposals would begin in the coming weeks, with the consultation on artificial intelligence following later this year.
Mark Bunting, a director in Ofcom's online safety group, says the regulator's interest in artificial intelligence starts with examining how well it is currently used as a screening tool.
“Some services are already using these tools to identify and protect children from such content,” he told TechCrunch. “But there is not much information about the accuracy and effectiveness of those tools. We want to look at ways we can ensure that the industry is assessing them when it uses them, making sure that risks to free speech and privacy are managed.”
One likely outcome is that Ofcom will recommend how and what platforms should assess, which could lead not only to platforms adopting more sophisticated tools, but also to potential fines if they fail to improve how they block content or to create better ways of keeping younger users from seeing it.
“As with many internet safety regulations, companies have a responsibility to ensure they take the appropriate steps and use the appropriate tools to protect users,” he said.
These moves will have both critics and supporters. Researchers are finding increasingly sophisticated ways of using artificial intelligence to detect deepfakes, for example, as well as to verify users online. And yet there are just as many skeptics who note that AI detection is far from foolproof.
Ofcom announced the consultation on artificial intelligence tools at the same time as it published its latest study of children's online engagement in the UK, which found that, overall, more younger children are online than ever before, to the point that Ofcom is now tracking activity among ever-younger age groups.
Nearly a quarter (24%) of all children aged 5 to 7 now have their own smartphones, and when tablets are included that figure rises to 76%, according to a survey of UK parents. The same age group is much more likely to consume media on these devices: 65% have made voice and video calls (compared with 59% just a year ago), and half of children (up from 39% a year ago) watch streamed media.
Age limits are getting lower on some popular social media apps, but whatever the restrictions, they do not appear to be enforced in the UK anyway. Ofcom found that around 38% of children aged 5 to 7 use social media. Meta's WhatsApp is the most popular app among them (37%). And in what may be the first instance of Meta's flagship image app proving less popular than ByteDance's viral sensation, 30% of children aged 5 to 7 were found to use TikTok, with “only” 22% using Instagram. Discord rounded out the list but is far less popular, at just 4%.
About a third (32%) of children this age go online on their own, and 30% of parents said they were not bothered by their underage children having social media profiles. YouTube Kids remains the most popular network among younger users (48%).
Gaming, a longtime favorite among children, is now played by 41% of children aged 5 to 7, and 15% of this age group play shooter games.
Although 76% of parents surveyed said they had talked to their young children about staying safe online, Ofcom points out that there is a gap between what a child sees and what that child might report. When studying older children aged 8 to 17, Ofcom interviewed them directly. It found that 32% of these children said they had seen worrying content online, but only 20% of their parents said they had reported anything.
Even allowing for some inconsistencies in reporting, “the research suggests a disconnect between older children's exposure to potentially harmful content online and what they share with their parents about their online experiences,” Ofcom writes. And disturbing content is just one of the challenges: deepfakes are also an issue. Among 16- to 17-year-olds, Ofcom said, 25% were not confident they could distinguish fake content from real content online.