- Legal and regulatory requirements involving AI are understood, managed, and documented.
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
- Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
- The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
- Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and organizational roles and responsibilities are clearly defined, including the frequency of periodic review.
- Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities (a minimal inventory-record sketch follows this list).
- Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.
- Roles, responsibilities, and lines of communication related to mapping, measuring, and managing AI risks are documented and clear to individuals and teams throughout the organization.
- The organization's personnel and partners receive AI risk management training that enables them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
- Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
- Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
- Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.
- Organizational policies and practices are in place to foster a critical-thinking and safety-first mindset in the design, development, deployment, and use of AI systems, minimizing potential negative impacts.
- Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and communicate about those impacts more broadly.
- Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.
- Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback on potential individual and societal impacts related to AI risks from those external to the team that developed or deployed the AI system.
- Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.
- Policies and procedures are in place to address AI risks associated with third-party entities, including risks of infringement of a third party's intellectual property or other rights.
- Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk (a fallback sketch follows this list).
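Several of the items above name concrete mechanisms rather than pure policy. As one illustration of the inventory and periodic-review items, here is a minimal sketch of what an AI system inventory record might capture. None of this comes from the framework text itself: the field names, the RiskTier scale, the example values, and the review-interval logic are all assumptions an organization would replace with its own.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers; each organization defines its own scale."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One inventory entry for an AI system, from design through decommissioning."""
    system_id: str
    owner: str                      # accountable role or team
    lifecycle_stage: str            # e.g., "design", "deployed", "decommissioning"
    risk_tier: RiskTier
    review_interval_days: int       # encodes the frequency of periodic review
    last_reviewed: date
    third_party_components: list[str] = field(default_factory=list)

    def review_overdue(self, today: date) -> bool:
        """Flag records whose periodic review is past due."""
        return (today - self.last_reviewed).days > self.review_interval_days


# A real inventory would live in a database or asset-management tool;
# an in-memory dict keyed by system_id is enough to show the shape.
# The record below is a purely hypothetical example.
inventory: dict[str, AISystemRecord] = {}
inventory["churn-model-v2"] = AISystemRecord(
    system_id="churn-model-v2",
    owner="ml-platform-team",
    lifecycle_stage="deployed",
    risk_tier=RiskTier.MEDIUM,
    review_interval_days=90,
    last_reviewed=date(2024, 1, 15),
)
```

Resourcing the inventory "according to organizational risk priorities" could then mean, for example, reviewing HIGH-tier records more frequently than LOW-tier ones.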
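The third-party contingency item similarly admits a concrete reading. Below is a sketch of one common pattern, assuming a hypothetical vendor client that exposes a `classify(text)` method: wrap the external call, log the failure so it feeds the incident-identification and information-sharing practices listed above, and degrade to a safe default rather than letting the failure propagate. The client, method name, and default label are all assumptions, not part of the framework.

```python
import logging

logger = logging.getLogger("ai_contingency")


def classify_with_fallback(text: str, vendor_client,
                           default_label: str = "needs_human_review") -> str:
    """Call a third-party AI classifier, falling back to a safe default.

    `vendor_client` is a hypothetical object exposing `classify(text)`;
    a real integration would have its own API and specific error types.
    """
    try:
        return vendor_client.classify(text)
    except Exception as exc:  # broad on purpose: any vendor failure triggers the fallback
        # Record the failure as an incident so it can be reviewed and shared,
        # then degrade safely instead of propagating the vendor's error.
        logger.error("third-party classifier failed: %s", exc)
        return default_label
```

For systems deemed high-risk, the same wrapper could instead route to a redundant provider or queue the item for human review, rather than returning a fixed label.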