Technology

The British agency releases tools for testing the safety of AI models

The AI Safety Institute, the UK’s recently established AI safety body, has released a toolset designed to “strengthen AI safety” by making it easier for industry, research organizations and academia to develop AI evaluations.

A set of tools called Inspect, available under an open source MIT License, aims to assess certain capabilities of AI models, including the models’ underlying knowledge and reasoning abilities, and to generate a score based on the results.

In a press release announcing the news on Friday, the Safety Institute said Inspect marks “the first time an artificial intelligence safety testing platform led by a state-backed body has been made available for wider use.”

Take a look at the Inspect dashboard.

“Successful collaboration on AI safety testing means a shared, accessible approach to evaluations, and we hope Inspect can become a building block,” Safety Institute chair Ian Hogarth said in a statement. “We hope the global AI community will use Inspect not only to conduct its own model safety testing, but also to help adapt and evolve the open source platform so that we can produce high-quality evaluations across the board.”

As we have written before, AI benchmarking is hard, not least because today’s most sophisticated AI models are black boxes whose infrastructure, training data and other key details are kept under wraps by the companies that build them. So how does Inspect tackle this challenge? Mainly by being extensible and adaptable to new testing techniques.

Inspect consists of three basic components: datasets, solvers and scorers. Datasets provide samples for evaluation tests. Solvers do the work of running the tests. Scorers evaluate the solvers’ output and aggregate scores from the tests into metrics.

Inspect’s built-in components can be extended with third-party packages written in Python.
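The dataset/solver/scorer split described above can be sketched roughly as follows. This is a simplified illustration of the pattern, not Inspect’s actual API; the names (`Sample`, `run_eval`, the canned stand-in solver) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    prompt: str   # input given to the model
    target: str   # expected answer

# A "dataset" is just a collection of samples for evaluation testing.
dataset = [
    Sample(prompt="2 + 2 = ?", target="4"),
    Sample(prompt="Capital of France?", target="Paris"),
]

# A "solver" runs the test; here, a canned stand-in for a real model call.
def solver(prompt: str) -> str:
    canned = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "unknown")

# A "scorer" grades a single solver output against its target.
def scorer(output: str, target: str) -> bool:
    return output.strip() == target.strip()

def run_eval(samples: list[Sample],
             solve: Callable[[str], str],
             score: Callable[[str, str], bool]) -> float:
    # Aggregate per-sample scores into a single metric (accuracy).
    results = [score(solve(s.prompt), s.target) for s in samples]
    return sum(results) / len(results)

print(run_eval(dataset, solver, scorer))  # → 1.0
```

Extensibility then amounts to swapping in a different solver (a real model client) or scorer (say, a model-graded rubric) without touching the rest of the pipeline.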

In a post on X, Deborah Raji, a Mozilla research fellow and renowned AI ethicist, called Inspect “a testament to the power of public investment in open source tools to increase accountability for AI.”

Clément Delangue, CEO of artificial intelligence startup Hugging Face, floated the idea of integrating Inspect with Hugging Face’s model library or creating a public leaderboard with the results of the toolset’s evaluations.

Inspect’s release comes after a US government agency, the National Institute of Standards and Technology (NIST), launched NIST GenAI, a program to assess various generative AI technologies, including text- and image-generating AI. NIST GenAI plans to offer benchmarks, help create systems to detect content authenticity, and encourage the development of software to detect false or misleading information generated by artificial intelligence.

In April, the US and UK announced a partnership to jointly develop advanced AI model testing, following commitments announced at the UK AI Safety Summit at Bletchley Park last November. As part of the collaboration, the US plans to launch its own AI safety institute, which will largely be tasked with assessing risks from AI and generative AI.

This article was originally published on : techcrunch.com

Technology

India Considers Easing Market Share Caps for UPI Payments Operators

PhonePe UPI being used to accept payments at a roadside sunglasses stall.

The regulator that oversees India’s popular UPI payments rail is considering relaxing a proposed market share cap for operators like Google Pay, PhonePe and Paytm as it grapples with enforcing the restrictions, two people familiar with the matter told TechCrunch.

The National Payments Corporation of India (NPCI), which is regulated by the Indian central bank, is considering raising the market share cap for UPI operators to more than 40%, said two of the people, who requested anonymity because the information is confidential. The regulator had earlier proposed a 30% market share limit to encourage competition in the space.

UPI has become the most popular way to send and receive money in India, with the mechanism processing over 12 billion transactions monthly. Walmart-backed PhonePe holds about 48% market share by volume and 50% by value, while Google Pay has a 37.3% share by volume.

Once an industry heavyweight, Paytm has seen its market share fall to 7.2% from 11% late last year amid regulatory challenges.

According to several industry executives, raising the market share cap is likely to be a controversial move, as many UPI providers were counting on regulatory action to curb the dominance of PhonePe and Google Pay.

NPCI, which has previously declined to comment on market share, didn’t reply to a request for comment on Thursday.

The regulator originally planned to implement the market share caps in January 2021 but extended the deadline to January 1, 2025. It has since struggled to find a workable way to enforce the proposed caps.

The stakes are high, especially for PhonePe, India’s most valuable fintech startup, valued at $12 billion.

Sameer Nigam, co-founder and CEO of PhonePe, said last month that the startup cannot go public “if there is uncertainty on regulatory issues.”

“If you buy a share at Rs 100 and value it assuming we have 48-49% market share, there is uncertainty whether it will come down to 30% and when,” Nigam told a fintech conference last month. “We are reaching out to them (the regulator) whether they can find another way to at least address any concerns they have or tell us what the list of concerns is,” he added.

This article was originally published on : techcrunch.com

Technology

Bluesky addresses trust and safety issues related to abuse, spam and more

Bluesky butterfly logo and Jay Graber

Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it’s approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety and more.

To address malicious users or those who harass others, Bluesky says it’s developing new tools that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment when a bad actor creates several different personas to attack their victims.

Another new experiment will help detect “rude” replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in early access. But further down the road, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of these replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it says.

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was recently introduced to Starter Packs, a type of shared list that can help new users find people to follow on the platform (check out TechCrunch’s Starter Pack).

Bluesky will also scan for lists with offensive names or descriptions to cut down on the potential to harass others by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden from the app until the list owner makes changes that align with Bluesky’s policies. Users who continue to create offensive lists will also face further action, though the company didn’t provide details, adding that lists are still an area of active discussion and development.

In the coming months, Bluesky also intends to move to handling moderation reports through its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming or sending spam to users. Combined with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.

One of the more interesting developments is how Bluesky plans to comply with local laws while still allowing free speech. It will use geography-specific labels that allow it to hide certain content from users in a particular area in order to comply with the law.

“This allows Bluesky’s moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions,” the company shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible.”

To address potential trust and safety issues with videos, which have recently been added, the team is adding features like the ability to turn off autoplay, making sure videos are labeled, and providing the ability to report videos. It’s still evaluating what else may need to be added, which will be prioritized based on user feedback.

When it comes to abuse, the company says its general framework weighs “how often something happens versus how harmful it is.” The company focuses on addressing high-harm and high-frequency issues, as well as “tracking edge cases that could result in significant harm to a few users.” The latter, while affecting only a small number of people, causes enough “ongoing harm” that Bluesky will take action to prevent the abuse, it says.
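Bluesky has not published a formula for this triage, but the frequency-versus-harm idea can be illustrated with a toy ranking. The categories and weights below are invented for illustration only.

```python
# Hypothetical moderation issues scored on invented 1-10 scales.
issues = [
    {"name": "spam replies",       "frequency": 9, "harm": 3},
    {"name": "harassment lists",   "frequency": 4, "harm": 8},
    {"name": "targeted edge case", "frequency": 1, "harm": 10},
]

def priority(issue: dict) -> int:
    # Multiplying the two scores lets either a very common or a very
    # harmful issue rise toward the top of the queue.
    return issue["frequency"] * issue["harm"]

ranked = sorted(issues, key=priority, reverse=True)
print([i["name"] for i in ranked])
# → ['harassment lists', 'spam replies', 'targeted edge case']
```

Note that under a pure product of the two scores, a rare but severe “edge case” can still rank below common issues, which is why the company says it tracks such cases separately rather than relying on frequency alone.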

User concerns can be raised via reports, emails, and mentions of the @safety.bsky.app account.

This article was originally published on : techcrunch.com

Technology

Apple AirPods Now Have an FDA-Approved Hearing Aid Feature

The newest AirPods are part of a growing group of hearing aids available over the counter.


Apple’s latest AirPods could help people with hearing impairments. The tech company’s software update has been approved by the FDA for use as a hearing aid.

The FDA approved Apple’s hearing aid feature on September 12. The free update, available on AirPods Pro 2, will amplify sounds for the hearing impaired. However, the feature is only available to adults 18 and older with an iPhone or iPad compatible with iOS 18.

“Today’s approval of over-the-counter hearing aid software for a commonly used consumer audio product is another step that will increase the availability, affordability, and acceptability of hearing support for adults with mild to moderate hearing loss,” said Dr. Michelle Tarver, acting director of the FDA’s Center for Devices and Radiological Health, in a press release.

The agency cleared the feature after a clinical trial with 118 participants. The results showed that users “achieved similar perceived benefits to those who received a professional fit on the same device.” Apple announced the new development just days before the agency’s approval.

“Hearing health is an essential part of our overall well-being, yet it is often overlooked — in fact, according to Apple’s Hearing Study, as many as 75 percent of people diagnosed with hearing loss go untreated,” said Sumbul Desai, MD, vice president of Health at Apple, in a press release. “We’re excited to deliver breakthrough software features in AirPods Pro that put users’ hearing health first, offering new ways to test and get help for hearing loss.”

What’s more, Apple intends its new AirPods to offer a “world-first” hearing health experience. Noting that 1.5 billion people live with hearing loss, the device also aims to prevent and detect hearing problems.

“Your AirPods Pro will transform into your own personalized hearing aid, amplifying the specific sounds you need in real time, such as parts of speech or elements of your environment,” Desai added in a video announcing the event.

The latest AirPods are part of a growing number of over-the-counter (OTC) hearing aids, which are not only more accessible but also significantly cheaper than prescription medical devices. They are designed for people with mild to moderate hearing loss and can serve as an initial treatment option.

AirPods Pro 2 are available now for $249.


This article was originally published on : www.blackenterprise.com