NIST Issues Guidance to Help Companies and Organizations Operationalize AI Risk Management
The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) last week released its Artificial Intelligence Risk Management Framework (“AI RMF 1.0”). NIST describes the framework as a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems, and it can be used to contextualize and manage the potential risks of harm posed by AI systems, technologies, and practices in all areas where they may be used.
AI-related risk management is an increasingly important issue. Documented harms traceable to AI technologies have been widely reported and threaten to undermine people’s trust in AI. Companies that make AI systems, and those that use AI to automate decisions across their organizations or enterprises, may already have policies and procedures for evaluating general corporate risks from AI. But several states and localities are implementing laws requiring data-centric risk assessments, data privacy impact assessments, and bias audits for data-based technologies like AI (including New York City’s Local Law 144, which requires audits by those who use automated employment decision tools), and Congress is poised to consider national data privacy legislation containing economy-wide risk provisions. It is therefore important for companies and organizations that make or use AI to review their AI risk management approaches to ensure they are comprehensive and comply with applicable laws and regulations.
To read the full post, please visit our Blank Rome Workplace blog.