NIST Launches New Generative Artificial Intelligence Assessment Platform

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that develops and tests technology for the U.S. government, companies, and the broader public, announced on Monday the launch of NIST GenAI, a new program to evaluate generative AI technologies, including those that generate text and images.

NIST GenAI will publish benchmarks, help build "content authenticity" detection systems (i.e., deepfake checking), and encourage the development of software that identifies the source of false or misleading AI-generated information, NIST explains on the newly launched NIST GenAI website and in a press release.

“The NIST GenAI program will publish a series of challenges designed to assess and measure the capabilities and limitations of generative artificial intelligence technologies,” the press release reads. “These assessments will be used to identify strategies to promote information integrity and guidance for the safe and responsible use of digital content.”

The first NIST GenAI project is a pilot study to build systems that can reliably distinguish human-created media from AI-generated media, starting with text. (While many services claim to detect deepfakes, research and our own testing have shown them to be unreliable, particularly for text.) NIST GenAI is inviting teams from academia, industry, and research labs to submit either "generators", AI systems that produce content, or "discriminators", systems that attempt to identify AI-generated content.

Generators in the study must produce summaries given a topic and a set of documents, while discriminators must detect whether a given summary was written by AI. To ensure fairness, NIST GenAI will provide the data needed to train generators and discriminators; systems trained on publicly available data will not be accepted, including open models such as Meta's Llama 3.

Registration for the pilot will begin on May 1, and the results will be announced in February 2025.

The launch of NIST GenAI, and a study focused on deepfakes, comes amid exponential growth in the volume of deepfakes.

According to data from Clarity, a deepfake detection company, 900% more deepfakes have been created this year compared with the same period last year. The concern this raises is understandable: a recent YouGov poll found that 85% of Americans said they were worried about the spread of misleading deepfakes online.

The launch of NIST GenAI is part of NIST's response to President Joe Biden's executive order on artificial intelligence, which set rules requiring AI companies to be more transparent about how their models work and established a number of new standards, including for labeling AI-generated content.

It is also NIST's first AI-related announcement following the appointment of Paul Christiano, a former OpenAI researcher, to the agency's AI Safety Institute.

Christiano was a controversial choice due to his "doomerist" views; he once predicted a "50% chance that the development of artificial intelligence will end in [the destruction of humanity]." Critics, reportedly including scientists at NIST, fear that Christiano may encourage the AI Safety Institute to focus on "fantasy scenarios" rather than realistic, more immediate threats from artificial intelligence.

NIST says NIST GenAI will inform the work of the AI Safety Institute.

This article was originally published on : techcrunch.com
