
UK agency releases tools to test AI model safety

The AI Safety Institute, the UK's recently established AI safety body, has released a toolkit designed to "strengthen AI safety" by making it easier for industry, research organizations and academia to develop AI evaluations.

The toolkit, called Inspect, is available under an open source license (specifically an MIT License) and aims to assess certain capabilities of AI models, including the models' core knowledge and reasoning abilities, and to generate a score based on the results.

In a press release announcing the news on Friday, the Safety Institute said Inspect marks "the first time an artificial intelligence safety testing platform led by a state-backed body has been made available for wider use."

Take a look at the Inspect dashboard.

"Successful collaboration on AI safety testing means a shared, accessible approach to evaluations, and we hope Inspect can become a building block," Safety Institute chair Ian Hogarth said in a statement. "We hope that the global AI community will use Inspect not only to conduct their own model safety testing, but also to help adapt and evolve the open source platform so that we can produce high-quality evaluations across the board."

As we have written before, AI benchmarking is hard, not least because today's most sophisticated AI models are black boxes whose infrastructure, training data and other key details are kept secret by the companies building them. So how does Inspect tackle this challenge? Mainly by being extensible and adaptable to new testing techniques.

Inspect consists of three basic components: datasets, solvers and scorers. Datasets provide the samples for evaluation tests. Solvers do the work of carrying out the tests. Scorers evaluate the work of solvers and aggregate the test results into metrics.

Inspect's built-in components can be extended with third-party packages written in Python.
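To give a concrete sense of how those pieces fit together, here is a minimal sketch of a custom evaluation built with the open-source `inspect_ai` Python package; the dataset contents, scorer choice and model name are illustrative assumptions rather than anything specified in the Institute's announcement.

```python
# Minimal sketch of an Inspect evaluation (assumes the open-source
# `inspect_ai` package; the dataset, scorer and model are illustrative).
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate
from inspect_ai.scorer import includes


@task
def capital_cities():
    return Task(
        # Dataset: the samples the model is evaluated against.
        dataset=[
            Sample(input="What is the capital of France?", target="Paris"),
            Sample(input="What is the capital of Japan?", target="Tokyo"),
        ],
        # Solver: how the test is run (here, a single model generation).
        solver=generate(),
        # Scorer: checks whether the target appears in the model's answer
        # and aggregates the results into metrics.
        scorer=includes(),
    )


if __name__ == "__main__":
    # Run the evaluation against a model of your choice (name assumed).
    eval(capital_cities(), model="openai/gpt-4o")
```

A third-party package could slot into the same structure by supplying its own solver (for example, a multi-step prompting strategy) or scorer, which is the extensibility the Institute is counting on.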

In a post on X, Deborah Raji, a Mozilla research fellow and renowned AI ethicist, called Inspect "a testament to the power of public investment in open source tools to increase accountability for AI."

Clément Delangue, CEO of AI startup Hugging Face, floated the idea of integrating Inspect with Hugging Face's model library or creating a public leaderboard with the results of the toolkit's evaluations.

Inspect's release comes after a US government agency, the National Institute of Standards and Technology (NIST), launched NIST GenAI, a program to assess various generative AI technologies, including text- and image-generating AI. NIST GenAI plans to release benchmarks, help create systems to detect content authenticity, and encourage the development of software to spot fake or misleading information generated by AI.

In April, the US and UK announced a partnership to jointly develop advanced AI model testing, following commitments announced at the UK's AI Safety Summit at Bletchley Park last November. As part of the collaboration, the US intends to launch its own AI safety institute, broadly tasked with assessing risks from AI and generative AI.

This article was originally published on techcrunch.com
