The National Institute of Standards and Technology (NIST) — part of the US Department of Commerce — is asking the public for input on an AI risk management framework, which the agency is developing as a way to “manage the risks posed by artificial intelligence.”

The Artificial Intelligence Risk Management Framework (AI RMF) will be a voluntary document that can be used by developers, evaluators and others as a way to “improve the trustworthiness of AI systems.”

NIST noted that the request for input comes after Congress and the White House asked the organization to create a framework for AI.

Deputy Commerce Secretary Don Graves said in a statement that the document “could make a critical difference in whether or not new AI technologies are competitive in the marketplace.”

“Each day it becomes more apparent that artificial intelligence brings us a wide range of innovations and new capabilities that can advance our economy, security and quality of life. It is critical that we are mindful and equipped to manage the risks that AI technologies introduce along with their benefits,” Graves said. 

“This AI Risk Management Framework will help designers, developers and users of AI take all of these factors into account — and thereby improve US capabilities in a very competitive global AI market.”

There is increasing demand for some form of regulation around AI’s use across industries, especially as it is incorporated into a growing number of critical, sensitive processes. Studies have shown that hundreds of AI systems are rife with biases that have not been addressed by their creators. 

The insurance company Lemonade Insurance faced significant backlash in May for announcing that its AI system would judge whether a person was lying about a car accident based on a video submitted after the incident. 

There is already a longstanding movement to end the use of AI in facial recognition software adopted by public and private institutions. 

NIST’s Elham Tabassi, federal AI standards coordinator and a member of the National AI Research Resource Task Force, explained that for AI to reach its full potential as a benefit to society, it must be a trustworthy technology. 

“While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework through a consensus-driven, collaborative process that we hope will encourage its wide adoption, thereby minimizing these risks,” Tabassi said. 

NIST noted that the development and use of new AI-based technologies, products and services bring “technical and societal challenges and risks.”

“NIST is soliciting input to understand how organizations and individuals involved with developing and using AI systems might be able to address the full scope of AI risk and how a framework for managing these risks might be constructed,” NIST said in a statement. 

NIST is specifically looking for information about the greatest challenges developers face in improving the management of AI-related risks. NIST is also interested in understanding how organizations currently define and manage characteristics of AI trustworthiness. The organization is similarly looking for input about the extent to which AI risks are incorporated into organizations’ overarching risk management, particularly around cybersecurity, privacy and safety.

NIST expects responses by August 19 and plans to hold a workshop in September where experts can help create the outline for the first draft.

Once the first draft is released, NIST will continue to refine it and may seek additional public comment. 

Lynne Parker, director of the National AI Initiative Office in the White House Office of Science and Technology Policy, said the AI Risk Management Framework will “meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways.”

“AI researchers and developers need and want to consider risks before, during and after the development of AI technologies, and this framework will inform and guide their efforts,” Parker added.  

To submit responses to the RFI, download the template response form and email it to AIframework@nist.gov. 


