
AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models

Dr. Rachel Gillum contributed to the UC Berkeley Center for Long-Term Cybersecurity's report on risk management for AI models.

January 2025

Increasingly multi-purpose AI models, such as cutting-edge large language models and other “general-purpose AI” (GPAI) models, “foundation models,” generative AI models, and “frontier models” (hereafter referred to with the umbrella term “GPAI/foundation models” except where greater specificity is needed), can provide many beneficial capabilities but also pose risks of adverse events with profound consequences. This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of GPAI/foundation models. We intend this document primarily for developers of large-scale, state-of-the-art GPAI/foundation models; others who can benefit from this guidance include downstream developers of end-use applications that build on a GPAI/foundation model. This document facilitates conformity with, or use of, leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAI/foundation models.