,Gold,Gold Sources,Company,Company Sources "Legal and regulatory requirements involving AI are understood, managed, and documented.","The policy aligns with the point of understanding, managing, and documenting legal and regulatory requirements involving AI through its commitment to compliance with applicable laws, regulations, and industry standards governing AI technologies. This is mentioned in the context information under the section ""Compliance and Review"" where it states that the organization will adhere to all applicable laws, regulations, and industry standards governing AI technologies.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point of understanding, managing, and documenting legal and regulatory requirements involving AI through the establishment of a generative AI oversight team. This team comprises disciplines like audit, legal, and ethics, indicating that legal and regulatory requirements are considered and addressed in the development, deployment, and use of generative AI systems. Additionally, the policy states that practices should align with the NIST AI Risk Management Framework (RMF), which further emphasizes the importance of understanding and managing legal and regulatory requirements.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.","The policy aligns with the point that the characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices through several statements. For example, the policy emphasizes the importance of privacy protection and states that robust privacy measures will be implemented to safeguard user data. This demonstrates that the organization is integrating the characteristic of privacy protection into its policies and practices. Additionally, the policy mentions the need for accountability and responsibility for the outcomes of AI systems, indicating that the organization is integrating these characteristics into its organizational processes and procedures.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- ","The policy aligns with the point that the characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices through statements such as ""Generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring"" and ""Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance."" These statements indicate that the policy ensures that the characteristics of trustworthy AI are considered and incorporated into the various aspects of the organization's operations.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.","The policy aligns with the point of determining the needed level of risk management activities based on the organization's risk tolerance through its commitment to periodic review and continuous evaluation. By reviewing the policy periodically, the organization ensures that it stays aligned with evolving ethical standards and technological advancements, which includes assessing and managing risks. Additionally, the policy emphasizes the need for regular evaluations and audits of AI systems to identify and mitigate potential biases, errors, or risks, indicating a proactive approach to risk management.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point that processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance in the following statement: ""Generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring."" This indicates that the policy includes specific processes and procedures for assessing and managing risks associated with generative AI projects, taking into account the organization's risk tolerance.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.","The policy aligns with the point of establishing transparent policies, procedures, and other controls based on organizational risk priorities in the following way: - The policy commits to transparency in the design, development, and deployment of AI systems, ensuring that users and stakeholders are informed about the use of AI, its capabilities, and limitations. This transparency helps establish clear policies and procedures for managing risks associated with AI systems.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point that the risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities in the following ways: - The policy states that generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring. - It mentions that staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. - The policy requires reviews by the oversight team at major milestones before deploying generative models internally or externally. - It emphasizes the importance of transparency and accountability by documenting model details like data sources, training methodology, and model versions to enable accountability if issues emerge. - The policy also requires clear attribution indicating content is AI-generated for external uses. - It mentions that controls like human-in-the-loop oversight will be required where risks of harmful, biased, or misleading outputs are higher. - The policy includes the establishment of feedback channels to allow reporting issues by users and affected groups, which helps improve models over time. - It states that an ethics review board will evaluate high-risk use cases not covered by this policy before approval. These various provisions and requirements outlined in the policy demonstrate the establishment of transparent policies, procedures, and controls based on organizational risk priorities in the risk management process.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.","The policy aligns with the point of ongoing monitoring and periodic review through the statement, ""Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements."" This indicates that the organization has planned for ongoing monitoring and periodic review of the policy to assess its effectiveness and make any necessary updates.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ","The policy aligns with the point of ongoing monitoring and periodic review of the risk management process and its outcomes through the establishment of processes to continually monitor risks after deployment. This indicates that the policy recognizes the importance of regularly reviewing the risk management process and its outcomes. Additionally, the policy emphasizes the need for responsible controls, which suggests that organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.,The policy does not provide evidence of mechanisms in place to inventory AI systems and resource them according to organizational risk priorities.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ",The policy does not provide evidence of mechanisms in place to inventory AI systems and resource them according to organizational risk priorities.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.,The policy does not provide evidence of processes and procedures for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ",The policy does not provide evidence of processes and procedures for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.","The policy does not provide explicit evidence of roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks being documented and clear to individuals and teams throughout the organization.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ","The policy aligns with the point of documenting roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks. This can be inferred from the statement that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that the policy recognizes the need for clear roles and responsibilities in managing AI risks and ensures that individuals and teams throughout the organization are aware of their responsibilities in mapping, measuring, and managing these risks.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " "The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.","The policy aligns with the point of providing employee education and continuous training programs for the organization's personnel. This education and training will emphasize ethical AI principles and practices, which would enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point that the organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements. This is evident from the statement in the context information that ""Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance."" This indicates that the organization provides training to its staff to ensure they have the necessary knowledge and skills to manage AI risks and adhere to relevant policies, procedures, and agreements.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.,"The policy aligns with the point that executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment in the section on ""Accountability and Responsibility."" It states that the organization will take responsibility for the outcomes of their AI systems and that mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into their development processes.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- ","The policy aligns with the point that executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment in the following way: The policy states that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that executive leadership is involved in the oversight and decision-making process regarding the risks associated with AI system development and deployment.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).","The policy aligns with the point of decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle being informed by a diverse team. This can be inferred from the statement that the company will strive to use high-quality, diverse, and representative datasets to train their AI models, minimizing biases and ensuring accuracy. Additionally, the policy mentions that periodic reviews will be conducted to ensure alignment with evolving ethical standards and technological advancements, indicating the involvement of a diverse team in decision-making processes.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ","The policy aligns with the point of decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle being informed by a diverse team. This is evident from the statement in the policy that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This diverse team composition ensures that decision-making related to AI risks takes into account a variety of perspectives, expertise, and backgrounds.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.,"The policy aligns with the point of defining and differentiating roles and responsibilities for human-AI configurations and oversight of AI systems in the following evidence: - ""Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision-making processes, to prevent unintended consequences."" (b. Human Oversight) This statement indicates that the policy recognizes the importance of human involvement in overseeing AI systems and making critical decisions. It implies that there are specific roles and responsibilities defined for humans in configuring and overseeing AI systems to ensure their proper functioning and to prevent any unintended negative outcomes.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point that policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems through the following statements: - Governance: The policy states that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that there are defined roles and responsibilities for overseeing AI systems. - Transparency & Accountability: The policy mentions that controls like human-in-the-loop oversight will be required where risks of harmful, biased, or misleading outputs are higher. This implies that there are procedures in place to define the role of humans in the configuration and oversight of AI systems. - Administration: The policy states that the Generative AI Oversight Team is responsible for administering the policy and establishing necessary procedures and guidelines. This indicates that there are defined roles and responsibilities for overseeing AI systems and their configurations.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.","The policy aligns with the point of fostering a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts through the following evidence: - ""Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision-making processes, to prevent unintended consequences."" - ""Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes."" - ""Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks."" - ""Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity."" - ""Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements."" - ""Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks."" - ""Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices."" These points demonstrate the organization's commitment to critical thinking, safety, accountability, fairness, continuous evaluation, transparency, and employee education in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- ","The policy aligns with the point of fostering a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts through several statements. For example, the policy states that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that the organization is taking a comprehensive approach to ensure critical thinking and safety considerations are incorporated into the AI systems. Additionally, the policy mentions that staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance, further emphasizing the importance of a safety-first mindset.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " "Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.","The policy aligns with the point of documenting the risks and potential impacts of AI technology in the section on ""Reporting and Communication."" The policy states that transparency reports will be published periodically, outlining the AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. This demonstrates that the organization is committed to documenting and communicating about the impacts of their AI technology.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point mentioned as it states that risks assessments will be conducted and documented for each intended use case of generative AI systems. This indicates that the organizational teams involved in the design, development, deployment, evaluation, and use of AI technology will document the risks and potential impacts associated with it. Additionally, the policy emphasizes the need for transparency and accountability, indicating that model details, such as data sources, training methodology, and model versions, will be documented to enable accountability if issues arise. This further supports the idea that the policy promotes the documentation and communication of risks and impacts of AI technology.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.","The policy aligns with the point of enabling AI testing, identification of incidents, and information sharing through the mechanism of ""Continuous Evaluation."" Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This practice ensures that incidents can be identified and addressed, and information can be shared within the organization to improve the AI systems.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point of enabling AI testing, identification of incidents, and information sharing through the following statements: - Measurement & Testing 8: The policy states that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. - Measurement & Testing 10: The policy establishes processes to continually monitor risks after deployment and includes controls to address emerging issues. - Mapping Risks 5: The policy implies that there is a focus on identifying and mapping risks associated with generative AI systems. These statements indicate that the policy promotes organizational practices that enable AI testing, identification of incidents, and information sharing to ensure the responsible development, deployment, and use of generative AI systems.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- " "Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.",The policy does not provide evidence of aligning with the point mentioned.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point mentioned as it states that feedback channels will be created to allow reporting issues by users and affected groups, in order to improve models over time. This indicates that the policy recognizes the importance of collecting feedback from external sources and integrating it into the development and deployment of AI systems to address potential individual and societal impacts related to AI risks.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.,The policy does not provide evidence of mechanisms being established to enable the team to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ",The policy does not provide evidence of mechanisms being established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights.","The policy does not provide evidence of addressing AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- ","The policy does not provide evidence of addressing AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights. The context information does not mention any specific policies or procedures in place to address these risks.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.,The policy does not provide evidence of contingency processes specifically for handling failures or incidents in third-party data or AI systems deemed to be high-risk.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ",The policy does not provide evidence of contingency processes specifically for handling failures or incidents in third-party data or AI systems deemed to be high-risk.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " "Intended purposes, potentially beneficial uses, context specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.","The policy aligns with the point of understanding and documenting the intended purposes, potentially beneficial uses, context-specific laws, norms, and expectations, as well as prospective settings in which the AI system will be deployed. This is evident in the policy's commitment to continuous evaluation, regular audits, and periodic review of AI systems to identify and mitigate potential biases, errors, or risks. Additionally, the policy emphasizes the importance of human oversight and intervention in critical decision-making processes to prevent unintended consequences. These measures demonstrate a comprehensive understanding of the potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet, as well as the assumptions and limitations associated with AI system purposes, uses, and risks.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- ","The policy aligns with the point mentioned as it states that ""Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re-evaluated regularly."" This indicates that the policy recognizes the importance of understanding and documenting the intended purposes, potentially beneficial uses, and context-specific factors such as laws, norms, and expectations. It also emphasizes the need to consider the potential positive and negative impacts of system uses on individuals, communities, organizations, society, and the planet. Additionally, the policy acknowledges the importance of identifying assumptions and limitations about AI system purposes, uses, and risks throughout the development or product AI lifecycle, which aligns with the consideration of assumptions and related limitations mentioned in the query.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.","The policy aligns with the point of interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflecting demographic diversity and broad domain and user experience expertise, and their participation being documented. This is evident in the policy's commitment to using high-quality, diverse, and representative datasets to train AI models, minimizing biases and ensuring accuracy (Data Governance - Data Quality). By incorporating diverse datasets, the policy acknowledges the importance of reflecting demographic diversity in AI development. Additionally, the policy emphasizes the importance of collaborating with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards (External Collaboration and Education - Industry Collaboration). This collaboration with various stakeholders from different domains and backgrounds promotes interdisciplinary collaboration and the prioritization of opportunities for such collaboration.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- ","The policy aligns with the point of interdisciplinary AI actors and competencies by stating that the generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that the team will have a range of expertise and skills from different domains. Additionally, the policy emphasizes the importance of accountability and transparency by documenting model details, indicating AI-generated content, and requiring human-in-the-loop oversight where risks are higher. This demonstrates a focus on interdisciplinary collaboration and involving diverse perspectives in the development and use of generative AI systems.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " The organization's mission and relevant goals for AI technology are understood and documented.,"The policy aligns with the point ""The organization's mission and relevant goals for AI technology are understood and documented"" in the following section: - Compliance and Review: The policy states that the organization will adhere to all applicable laws, regulations, and industry standards governing AI technologies. This demonstrates that the organization understands and acknowledges the importance of aligning their AI practices with legal and regulatory requirements, which can be considered as part of their mission and goals for AI technology.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point ""The organization's mission and relevant goals for AI technology are understood and documented"" through its emphasis on governance and oversight. The creation of a generative AI oversight team comprising diverse disciplines like engineering, human factors, audit, legal, and ethics demonstrates a commitment to understanding and aligning with the organization's mission and goals for AI technology. Additionally, the policy statements highlight the need for documented processes for risk-based decisions, training on topics like safety and fairness, and reviews by the oversight team at major milestones. These measures ensure that the organization's mission and goals are considered and documented throughout the development, deployment, and use of generative AI systems.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " The business value or context of business use has been clearly defined or - in the case of assessing existing AI systems - re-evaluated.,"The policy aligns with the point of clearly defining the business value or context of business use in the following section: ""2. Data Governance: b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented."" By emphasizing the importance of user consent and implementing clear and informed consent mechanisms, the policy ensures that the business value or context of business use is clearly defined and aligned with ethical standards.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point of clearly defining the business value or context of business use in the statement: ""Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re-evaluated regularly."" This indicates that the policy emphasizes the need to clearly define and regularly reassess the business value and context of using generative AI systems.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " Organizational risk tolerances are determined and documented.,The policy does not provide evidence of where it aligns with the point of determining and documenting organizational risk tolerances.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- ",The policy does not provide evidence of where organizational risk tolerances are determined and documented.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "System requirements (e.g., ""the system shall respect the privacy of its users"") are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.","The policy aligns with the point of eliciting system requirements from relevant AI actors and taking socio-technical implications into account to address AI risks in the following ways: - The policy states that privacy protection is paramount and that the AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. This demonstrates an understanding of the importance of respecting user privacy and taking into account the socio-technical implications of privacy in AI systems. - The policy also mentions the integration of mechanisms for oversight, accountability, and addressing unintended consequences into the development processes. This shows a commitment to understanding and addressing the potential risks and implications of AI systems, which aligns with the point of considering socio-technical implications to address AI risks.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- ","The policy aligns with the point of eliciting system requirements from relevant AI actors and taking socio-technical implications into account to address AI risks. This can be seen in the statement that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that the policy recognizes the importance of involving relevant actors from different disciplines in the design and development process to ensure that system requirements are understood and that socio-technical implications are considered to address AI risks.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " "The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).",The policy does not provide evidence of where the specific tasks and methods used to implement the tasks that the AI system will support are defined. The context information does not mention any details about the specific tasks or methods used in the AI system.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point ""The specific tasks and methods used to implement the tasks that the AI system will support are defined"" through the statement ""Generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring."" This indicates that the policy requires the definition and documentation of processes for developing and deploying generative AI models, which includes specifying the specific tasks and methods used in implementing these models.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " Information about the AI system's knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.,"The policy aligns with the point of providing sufficient information to assist relevant AI actors when making decisions and taking subsequent actions. This can be seen in the statement that human supervision and intervention will be incorporated into AI systems, especially in critical decision-making processes, to prevent unintended consequences. This indicates that there is a recognition of the need for human oversight and involvement in the AI system's decision-making process, which aligns with the idea of providing information to assist relevant AI actors in making decisions and taking subsequent actions.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point mentioned as it states that ""Model details like data sources, training methodology, and model versions will be documented to enable accountability if issues emerge."" This documentation provides relevant AI actors with sufficient information about the AI system's knowledge limits and how system output may be utilized and overseen by humans, assisting them in making decisions and taking subsequent actions.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.","The policy aligns with the point of scientific integrity and TEVV considerations in several ways. Firstly, the policy mentions the use of high-quality, diverse, and representative datasets to train AI models, which indicates a consideration for data collection and selection. This aligns with the need for suitable and representative data in experimental design. Additionally, the policy emphasizes the regular evaluations and audits of AI systems to identify and mitigate potential biases, errors, or risks. This demonstrates a commitment to system trustworthiness and construct validation. Overall, the policy shows a clear alignment with the principles of scientific integrity and TEVV considerations.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ","The policy aligns with the point of identifying and documenting scientific integrity and TEVV considerations through its commitment to transparency and accountability. The policy states that model details, data sources, training methodology, and model versions will be documented, which indicates a focus on documenting the experimental design and data collection process. Additionally, the policy emphasizes the need for rigorous testing and measurement of risks and trustworthiness characteristics before deployment, which aligns with the consideration of system trustworthiness and construct validation.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- " Potential benefits of intended AI system functionality and performance are examined and documented.,"The policy aligns with the point of examining and documenting the potential benefits of intended AI system functionality and performance through the implementation of continuous evaluation and regular audits of AI systems. This ensures that any potential biases, errors, or risks are identified and mitigated, thereby promoting the intended functionality and performance of the AI system.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point that potential benefits of intended AI system functionality and performance are examined and documented through the statement: ""Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re-evaluated regularly."" This indicates that the policy requires the identification and assessment of potential benefits associated with the functionality and performance of generative AI systems, and that these assessments will be documented and regularly reviewed.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.","The policy aligns with the point of examining and documenting potential costs resulting from AI errors or system functionality and trustworthiness. This can be inferred from the statement that the organization will take responsibility for the outcomes of their AI systems and integrate mechanisms for oversight, accountability, and addressing unintended consequences into their development processes.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point of examining and documenting potential costs resulting from AI errors or system functionality and trustworthiness. This can be inferred from the statement in the context information that ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment"" and ""Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues."" These statements indicate that the policy considers the potential costs and risks associated with AI errors and system functionality, and emphasizes the need for testing, monitoring, and addressing any issues that may arise.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " "Targeted application scope is specified and documented based on the system's capability, established context, and AI system categorization.","The policy does not provide evidence of where the targeted application scope is specified and documented based on the system's capability, established context, and AI system categorization.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point that the targeted application scope is specified and documented based on the system's capability, established context, and AI system categorization in the section titled ""Mapping Risks."" This section states that intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re-evaluated regularly. This demonstrates that the policy aims to clearly define and document the scope of application for generative AI systems based on their capabilities and the established context.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- " "Processes for operator and practitioner proficiency with AI system performance and trustworthiness - and relevant technical standards and certifications - are defined, assessed, and documented.","The policy does not provide evidence of processes for operator and practitioner proficiency with AI system performance and trustworthiness, or relevant technical standards and certifications being defined, assessed, and documented.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy does not provide evidence of aligning with the point regarding processes for operator and practitioner proficiency with AI system performance and trustworthiness, as well as relevant technical standards and certifications.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.","The policy aligns with the point of defining and documenting processes for human oversight in accordance with organizational policies from the GOVERN function in the following section: ""b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision-making processes, to prevent unintended consequences."" This demonstrates that the policy recognizes the importance of human oversight and intervention in AI systems and aims to incorporate it to ensure responsible and ethical decision-making.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- ","The policy aligns with the point of defining, assessing, and documenting processes for human oversight in accordance with organizational policies from the GOVERN function in the following statement: ""Controls like human-in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher."" This statement indicates that the policy recognizes the importance of human oversight in mitigating risks associated with generative AI systems and establishes a requirement for such oversight in certain situations.","page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- " "Approaches for mapping AI technology and legal risks of its components - including the use of third-party data or software - are in place, followed, and documented, as are risks of infringement of a third party's intellectual property or other rights.","The policy does not provide evidence of approaches for mapping AI technology and legal risks of its components, including the use of third-party data or software. It also does not mention the documentation of risks of infringement of a third party's intellectual property or other rights.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy does not provide evidence of where it aligns with the point regarding mapping AI technology and legal risks of its components, including the use of third-party data or software. The context information does not mention any specific provisions or guidelines related to the risks of infringement of a third party's intellectual property or other rights.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.","The policy does not provide evidence of internal risk controls for components of the AI system, including third-party AI technologies, being identified and documented.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point of internal risk controls for components of the AI system being identified and documented through the statement: ""Generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring."" This indicates that the policy recognizes the importance of identifying and documenting internal risk controls for the various components of the AI system, including third-party AI technologies.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.",The policy aligns with the point of identifying and documenting the likelihood and magnitude of each identified impact based on past uses of AI systems in similar contexts.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point by requiring risk assessments to be conducted for each intended use case of generative AI systems. These risk assessments analyze and document safety, ethical, legal, reputational, and technical risks. This process ensures that the likelihood and magnitude of each identified impact, both potentially beneficial and harmful, are identified and documented.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- " "Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.","The policy aligns with the point of supporting regular engagement with relevant AI actors and integrating feedback about impacts through the following evidence: - Stakeholder Engagement: The policy mentions maintaining open channels for dialogue with stakeholders, including users, customers, and the public, to address concerns and gather feedback. This indicates that there are practices in place to engage with relevant AI actors and integrate their feedback. - Reporting and Communication: The policy states that periodic transparency reports will be published, outlining AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. This demonstrates a commitment to documenting and communicating the impacts of AI systems, which includes feedback from relevant AI actors. These practices and mechanisms show that the policy aligns with the point of supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- ","The policy aligns with the point of supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts through the establishment of feedback channels. These channels allow users and affected groups to report issues and provide feedback to improve the generative models over time. This demonstrates a commitment to ongoing engagement and the integration of feedback to address the impacts of the AI system.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- " Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not - or cannot - be measured are properly documented.,"The policy aligns with the point mentioned in the query by stating that regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This indicates that approaches and metrics for measurement of AI risks are being implemented. Additionally, the policy mentions that mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into the development processes, which further supports the alignment with the point.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point by stating that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. This indicates that approaches and metrics for measurement of AI risks are selected for implementation. Additionally, the policy mentions that model details like data sources, training methodology, and model versions will be documented, which suggests that risks or trustworthiness characteristics that will not or cannot be measured are properly documented.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " "Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.","The policy aligns with the point of regularly assessing and updating the appropriateness of AI metrics and effectiveness of existing controls through the continuous evaluation and regular audits of AI systems. This ensures that errors and potential impacts on affected communities are identified and mitigated. Additionally, the policy mentions the publication of transparency reports, which further demonstrates the commitment to reporting errors and potential impacts on affected communities.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of regularly assessing and updating the appropriateness of AI metrics and the effectiveness of existing controls. This is evident from the statement that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. Additionally, the policy mentions that processes will be established to continually monitor risks after deployment and controls will be put in place to address emerging issues. This indicates a commitment to regularly assessing and updating the metrics and controls to ensure their appropriateness and effectiveness.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- " "Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.","The policy aligns with the point mentioned as ""Continuous Evaluation"" in section 3a. It states that regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This implies that internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Additionally, the policy also aligns with the point mentioned as ""Stakeholder Engagement"" in section 6b. It states that open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. This implies that domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- ","The policy aligns with the mentioned point as it states that a generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. This indicates that internal experts who did not serve as front-line developers for the system are involved in regular assessments and updates. Additionally, the policy emphasizes the importance of risk assessments and states that risks assessments will analyze and document various risks, including safety, ethical, legal, reputational, and technical risks. This suggests that domain experts, users, AI actors external to the team, and affected communities may be consulted in support of these assessments as necessary.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "Test sets, metrics, and details about the tools used during TEVV are documented.","The policy does not provide evidence of where test sets, metrics, and details about the tools used during TEVV (Testing, Evaluation, Validation, and Verification) are documented.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of documenting test sets, metrics, and details about the tools used during TEVV in the section on Measurement & Testing.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.,"The policy aligns with the point of ensuring evaluations involving human subjects meet applicable requirements and are representative of the relevant population through its commitment to continuous evaluation and audits of AI systems. By conducting regular evaluations and audits, the policy aims to identify and mitigate potential biases, errors, or risks in AI systems. This demonstrates a commitment to ensuring that evaluations involving human subjects are conducted in a manner that meets applicable requirements and are representative of the relevant population.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ",The policy does not provide evidence of aligning with the point of evaluations involving human subjects meeting applicable requirements and being representative of the relevant population.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- " AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.,The policy does not provide evidence of aligning with the point mentioned.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ","The policy aligns with the point that AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s) in the following statement: ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment."" This indicates that the policy requires the measurement and evaluation of risks and trustworthiness characteristics of generative models before they are deployed, which implies the need for qualitative or quantitative measures. Additionally, the statement mentions the importance of testing the models under conditions similar to the deployment setting, further emphasizing the alignment with the point.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " The functionality and behavior of the AI system and its components - as identified in the MAP function - are monitored when in production.,The policy does not provide evidence of monitoring the functionality and behavior of the AI system and its components when in production.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ",The policy does not provide evidence of aligning with the point mentioned.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.,"The policy does not provide evidence of aligning with the point mentioned. The context information does not mention anything about the AI system being demonstrated to be valid and reliable, or documenting limitations of generalizability beyond the conditions under which the technology was developed.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point ""The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented"" through the statement ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment."" This indicates that the policy requires thorough testing of generative AI models to ensure their validity and reliability before they are deployed. Additionally, the policy emphasizes the need to document any limitations in the generalizability of the technology beyond the conditions under which it was developed.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "The AI system is evaluated regularly for safety risks - as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.","The policy aligns with the point of regularly evaluating the AI system for safety risks. This is evident in the statement that regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This demonstrates a commitment to ensuring the safety of the AI system and aligns with the idea of evaluating the system for safety risks.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of evaluating the AI system regularly for safety risks through the statement ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment."" This indicates that the policy requires the AI system to be demonstrated as safe and to have its risks measured and evaluated before deployment. Additionally, the statement ""Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues"" suggests that the policy emphasizes real-time monitoring and response to AI system failures, which aligns with the safety metrics mentioned in the point.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " AI system security and resilience - as identified in the MAP function - are evaluated and documented.,The policy does not provide evidence of evaluating and documenting AI system security and resilience.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of AI system security and resilience being evaluated and documented through the statement: ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment."" This indicates that the policy includes a process for evaluating the security and resilience of AI systems and documenting the findings.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Risks associated with transparency and accountability - as identified in the MAP function - are examined and documented.,"The policy aligns with the point of examining and documenting risks associated with transparency and accountability in the following section: ""6. Reporting and Communication: Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks."" This section demonstrates the commitment to transparency and accountability by actively examining and documenting the risks associated with these factors and communicating them through periodic reports.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ",The policy aligns with the point about examining and documenting risks associated with transparency and accountability in the MAP function.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "The AI model is explained, validated, and documented, and AI system output is interpreted within its context - as identified in the MAP function - to inform responsible use and governance.","The policy aligns with the point that the AI model is explained, validated, and documented, and AI system output is interpreted within its context. This is evident from the commitment to transparency in the AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. This demonstrates a commitment to providing explanations and documentation about the AI model and ensuring that the AI system output is interpreted within its context to inform responsible use and governance.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- ","The policy aligns with the point that the AI model is explained, validated, and documented, and AI system output is interpreted within its context. This is evident from the statement in the policy that ""Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge."" This indicates that the policy emphasizes the importance of documenting the AI model and its components. Additionally, the policy also mentions the need for controls like human-in-the-loop oversight where risks of harmful, biased, or misleading outputs are higher. This further supports the idea that the AI system output is interpreted within its context to ensure responsible use and governance.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Privacy risk of the AI system - as identified in the MAP function - is examined and documented.,"The policy aligns with the point of privacy risk examination and documentation in the following statement: ""Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data."" This statement indicates that privacy risks associated with the AI system will be examined and addressed, as the policy emphasizes the importance of protecting user privacy and implementing measures to ensure data protection.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ",The policy does not provide evidence of where the privacy risk of the AI system is examined and documented.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Fairness and bias - as identified in the MAP function - are evaluated and results are documented.,"The policy aligns with the point of fairness and bias evaluation and documentation in the following section: ""b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity."" This section indicates that the company will evaluate fairness and bias in their AI technologies and take measures to prevent bias and discrimination.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ",The policy does not provide evidence of aligning with the point that fairness and bias are evaluated and results are documented.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " Environmental impact and sustainability of AI model training and management activities - as identified in the MAP function - are assessed and documented.,The policy does not provide evidence of assessing and documenting the environmental impact and sustainability of AI model training and management activities.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ",The policy does not provide evidence of aligning with the point of assessing and documenting the environmental impact and sustainability of AI model training and management activities.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.,The policy does not provide evidence of aligning with the point regarding the effectiveness of TEVV metrics and processes in the MEASURE function being evaluated and documented. The context information does not mention TEVV metrics or the MEASURE function.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point of evaluating and documenting the effectiveness of the employed TEVV (Testing, Evaluation, Verification, and Validation) metrics and processes in the MEASURE function. This can be inferred from the context information that mentions the rigorous testing of generative models to measure risks and evaluate trustworthiness characteristics before deployment (point 8). It also states that testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance will be tracked over time (point 9). These actions indicate an evaluation and documentation process for the effectiveness of the employed TEVV metrics and processes.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.","The policy aligns with the mentioned point in the following section: ""3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks."" This section indicates that there are mechanisms in place to regularly evaluate and identify risks associated with AI systems, including factors such as intended and actual performance in deployed contexts.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the mentioned point in the following statement: ""Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues."" This indicates that the policy includes approaches and controls to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.,The policy does not provide evidence of considering risk tracking approaches for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ",The policy aligns with the point by stating that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. This suggests that the policy recognizes the need for risk tracking approaches in settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.,The policy does not provide evidence of establishing feedback processes for end users and impacted communities to report problems and appeal system outcomes.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ",The policy aligns with the point of establishing feedback channels to allow reporting issues by users and affected groups. This ensures that end users and impacted communities have a process to report problems and appeal system outcomes.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.,The policy does not provide evidence of aligning with the point mentioned.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ","The policy aligns with the point that measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. This alignment can be seen in the policy's statement that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. Additionally, the policy states that testing sets will cover a broad, representative set of use cases and be routinely updated, indicating that the measurement approaches are connected to the deployment context(s). The policy also emphasizes the need for transparency and accountability by documenting model details like data sources, training methodology, and model versions, further supporting the alignment with the point.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.,"The policy aligns with the point of measuring AI system trustworthiness in deployment context(s) and across the AI lifecycle by conducting regular evaluations and audits of AI systems to identify and mitigate potential biases, errors, or risks. This ensures that the system is performing consistently as intended. The results of these evaluations and audits are documented, providing evidence of the policy's alignment with the mentioned point.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the mentioned point in the following statement: ""Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment."" This indicates that measurement results regarding AI system trustworthiness in deployment context(s) are obtained through rigorous testing. Additionally, the statement implies that input from domain experts and relevant AI actors is considered to validate whether the system is performing consistently as intended. The policy also emphasizes the importance of documenting the results, which further supports the alignment with the mentioned point.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " "Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics are identified and documented.",The policy aligns with the point of identifying and documenting measurable performance improvements or declines based on consultations with relevant AI actors and field data about context-relevant risks and trustworthiness characteristics.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ","The policy aligns with the point of identifying and documenting measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.,"The policy aligns with the point of determining whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed through the implementation of continuous evaluation and regular audits of AI systems. This ensures that potential biases, errors, or risks are identified and mitigated, allowing for a thorough assessment of whether the AI system is meeting its intended purposes and objectives before proceeding with its development or deployment.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point that a determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed. This can be seen in the statement that ""Generative AI projects will follow documented processes for risk-based decisions on development, deployment, and monitoring"" (Context: page_label: 1, file_name: Mock Policy.pdf). This indicates that there is a process in place to assess whether the AI system is achieving its intended purposes and objectives before proceeding with its development or deployment.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " "Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.","The policy aligns with the point of prioritizing treatment of documented AI risks based on impact, likelihood, and available resources or methods in the following section: - 3. AI Development and Deployment: Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This section indicates that the policy includes regular evaluations and audits of AI systems, which suggests that the treatment of documented AI risks is prioritized. By conducting these evaluations and audits, the organization can identify and address potential risks based on their impact and likelihood. Additionally, the use of the term ""mitigate"" implies that the organization will take action to address these risks, further supporting the alignment with prioritizing treatment based on available resources or methods.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point of prioritizing the treatment of documented AI risks based on impact, likelihood, and available resources or methods in the following statement: ""Risk prioritization decisions will balance beneficial use cases and public access with responsible controls."" This indicates that the policy considers the impact and likelihood of risks associated with generative AI and aims to allocate resources and methods accordingly to address those risks.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " "Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.",The policy does not provide evidence of aligning with the point mentioned.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ","The policy aligns with the point of developing, planning, and documenting responses to high-priority AI risks. This can be inferred from the statement that ""Risk prioritization decisions will balance beneficial use cases and public access with responsible controls"" (context 7). This suggests that the policy acknowledges the need to prioritize and address high-priority risks associated with generative AI systems. However, the specific risk response options of mitigating, transferring, avoiding, or accepting are not explicitly mentioned in the given context.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.,The policy does not provide evidence of documenting negative residual risks to both downstream acquirers of AI systems and end users.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ","The policy aligns with the point of documenting negative residual risks to both downstream acquirers of AI systems and end users in the section on Mapping Risks. This section states that risk assessments will analyze and document safety, ethical, legal, reputational, and technical risks for each intended use case. It also mentions that risk prioritization decisions will balance beneficial use cases and public access with responsible controls. By documenting these risks, the policy ensures that negative residual risks are accounted for and can be addressed to minimize potential negative impacts on downstream acquirers and end users.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " "Resources required to manage AI risks are taken into account - along with viable non-AI alternative systems, approaches, or methods - to reduce the magnitude or likelihood of potential impacts.",The policy does not provide evidence of aligning with the point mentioned.,"page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of considering viable non-AI alternative systems, approaches, or methods to reduce the magnitude or likelihood of potential impacts in the following statement: ""Controls like human-in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher."" This indicates that the policy recognizes the importance of incorporating human oversight as a control mechanism to mitigate risks associated with generative AI systems. By involving humans in the decision-making process, the policy acknowledges the need to balance the potential impacts of AI with alternative approaches to ensure responsible and accountable use.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- " Mechanisms are in place and applied to sustain the value of deployed AI systems.,"The policy aligns with the point of sustaining the value of deployed AI systems through the implementation of continuous evaluation, regular audits, and periodic reviews. These mechanisms ensure that the AI systems are consistently monitored and assessed for potential biases, errors, or risks. By conducting evaluations and audits, the policy demonstrates a commitment to maintaining the value and effectiveness of the deployed AI systems over time.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- ",The policy does not explicitly mention mechanisms to sustain the value of deployed AI systems.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " Procedures are followed to respond to and recover from a previously unknown risk when it is identified.,The policy does not provide evidence of procedures being followed to respond to and recover from a previously unknown risk when it is identified.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 This policy outlines our commitment to ethical AI practices: 1. Ethical Principles: a. Transparency: We commit to transparency in our AI systems' design, development, and deployment. Users and stakeholders will be informed about the use of AI, its capabilities, and limitations. ----- ",The policy does not provide evidence of procedures being followed to respond to and recover from a previously unknown risk when it is identified.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- " "Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.","The policy aligns with the mentioned point in the following statement: ""d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes."" This statement indicates that mechanisms will be in place to address and rectify any inconsistencies in the performance or outcomes of AI systems that deviate from their intended use.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point mentioned as it states that generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. Additionally, processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. This indicates that mechanisms are in place to identify and address any performance or outcome inconsistencies with the intended use of AI systems.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.","The policy aligns with the point of regularly monitoring AI risks and benefits from third-party resources and applying and documenting risk controls in the section on ""Periodic Review."" This section states that the policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. This implies that the policy will also consider the risks and benefits associated with third-party resources and apply appropriate risk controls.","page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- ",The policy aligns with the point of regularly monitoring AI risks and benefits from third-party resources and applying and documenting risk controls.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Mapping Risks 5. Intended use cases, target users, deployment contexts, and potential benefits and harms will be defined early and re -evaluated regularly. 6. Risks assessments will analyze and document safety, ethical, legal, reputational and technical risks for each intended use case. 7. Risk prioritization decisions will balance beneficial use cases and public access with responsible controls. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- " Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.,"The policy aligns with the point of monitoring pre-trained models as part of regular monitoring and maintenance of AI systems. This is evident from the statement that regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. This implies that the monitoring and maintenance of AI systems include monitoring the performance and behavior of pre-trained models to ensure they are functioning properly and not introducing biases or errors.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Data Governance: a. Data Quality: We will strive to use high -quality, diverse, and representative datasets to train our AI models, minimizing biases and ensuring accuracy. b. User Consent: User consent will be a fundamental consideration in collecting and utilizing data for AI purposes. Clear and informed consent mechanisms will be implemented. 3. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Fairness and Equity: We will ensure that our AI technologies do not propagate bias or discrimination based on race, gender, age, ethnicity, or any other protected characteristic. Our systems will be designed to promote fairness and equity. c. Privacy Protection: Respecting user privacy is paramount. ----- ",The policy does not provide evidence of aligning with the point that pre-trained models used for development are monitored as part of AI system regular monitoring and maintenance.,"page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- " "Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.","The policy aligns with the point of post-deployment AI system monitoring plans being implemented. This is evident from the statement that regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. Additionally, the policy mentions the integration of mechanisms for oversight, accountability, and addressing unintended consequences into their development processes, which implies the inclusion of incident response, recovery, and change management.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 5. Compliance and Review: a. Compliance with Regulations: We will adhere to all applicable laws, regulations, and industry standards governing AI technologies. ----- ","The policy aligns with the point of implementing post-deployment AI system monitoring plans. It states that processes will be established to continually monitor risks after deployment, along with controls to address emerging issues. This indicates that the policy includes mechanisms for capturing and evaluating input from users and other relevant AI actors, as well as incident response, recovery, and change management.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.","The policy aligns with the point of regular engagement with interested parties, including relevant AI actors, through the mechanism of stakeholder engagement. This involves maintaining open channels for dialogue with stakeholders, such as users, customers, and the public, to address concerns and gather feedback. This engagement with interested parties allows for continual improvements to be made to the AI systems based on their input and feedback.","page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- ","The policy aligns with the point of integrating measurable activities for continual improvements into AI system updates and engaging with interested parties. This can be inferred from the statement in the context that ""Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues."" This indicates that the policy includes measures to monitor and improve the AI system over time. Additionally, the policy applies to all employees, contractors, systems, and processes involved in the design, development, deployment, or use of generative AI systems, which suggests that relevant AI actors would be engaged in the policy implementation.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 2 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Administration The Generative AI Oversight Team is responsible for administering this policy, establishing necessary procedures, guidelines and updates to align with regulations. ----- " "Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.",The policy does not provide evidence of aligning with the point mentioned.,"page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Periodic Review: This policy will be reviewed periodically to ensure its alignment with evolving ethical standards and technological advancements. 6. Reporting and Communication: a. Transparency Reports: We will publish periodic reports outlining our AI practices, including data usage, algorithmic decisions, and measures taken to address biases or risks. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 c. Privacy Protection: Respecting user privacy is paramount. Our AI systems will adhere to data protection laws and implement robust privacy measures to safeguard user data. d. Accountability and Responsibility: We will take responsibility for the outcomes of our AI systems. Mechanisms for oversight, accountability, and addressing unintended consequences will be integrated into our development processes. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Human Oversight: Human supervision and intervention will be incorporated into AI systems, especially in critical decision -making processes, to prevent unintended consequences. 4. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Clear and informed consent mechanisms will be implemented. 3. AI Development and Deployment: a. Continuous Evaluation: Regular evaluations and audits of AI systems will be conducted to identify and mitigate potential biases, errors, or risks. ----- page_label: 2 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 b. Stakeholder Engagement: Open channels for dialogue with stakeholders, including users, customers, and the public, will be maintained to address concerns and gather feedback. Conclusion: Badguys is committed to upholding the highest ethical standards in the development and deployment of AI technologies. ----- page_label: 1 file_name: Badguys AI Ethics and Responsible AI Policy.pdf file_path: data\Badguys AI Ethics and Responsible AI Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 4. External Collaboration and Education: a. Industry Collaboration: We will collaborate with industry peers, researchers, and policymakers to share best practices and contribute to the development of ethical AI standards. b. Employee Education: Continuous training and education programs for our employees will emphasize ethical AI principles and practices. 5. ----- ","The policy aligns with the point of communicating incidents and errors to relevant AI actors, including affected communities, through the establishment of feedback channels. These channels allow users and affected groups to report issues, which helps in improving the generative models over time. This demonstrates a commitment to tracking, responding to, and recovering from incidents and errors, as well as documenting these processes.","page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Transparency & Accountability 11. Model details like data sources, training methodology and model versions will be documented to enable accountability if issues emerge. 12. Attribution indicating content is AI -generated will be clearly displayed for external uses. 13. Controls like human -in-the-loop oversight will be required where risks of harmful, biased or misleading outputs are higher. 14. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 2. Generative AI projects will follow documented processes for risk -based decisions on development, deployment and monitoring. 3. Staff will receive RMF and generative AI risk training on topics like safety, fairness, accountability, and regulatory compliance. 4. Reviews by the oversight team at major milestones will be required before deploying generative models internally or externally. Mapping Risks 5. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Scope This policy applies to all employees, contractors, systems and processes involved in the design, development, deployment or use of generative AI systems, including but not limited to, text, image, video and audio generation. Policy Statements Governance 1. A generative AI oversight team will be created, comprising diverse disciplines like engineering, human factors, audit, legal, and ethics. 2. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 14. Feedback channels will be created to allow reporting issues by users and affected groups, to improve models over time. 15. An ethics review board will evaluate high -risk use cases not covered by this policy, before approval. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Purpose This policy provides guidance on developing, deploying and using generative AI responsibly and aligning practices with the NIST AI Risk Management Framework (RMF). It aims to maximize benefits and minimize potential negative impacts to individuals, groups, organizations and society. ----- page_label: 1 file_name: Mock Policy.pdf file_path: data\Mock Policy.pdf creation_date: 2023-11-21 last_modified_date: 2023-11-21 last_accessed_date: 2023-11-21 Measurement & Testing 8. Generative models will undergo rigorous testing to measure risks and evaluate trustworthiness characteristics before deployment. 9. Testing sets will cover a broad, representative set of use cases, be routinely updated, and model performance tracked over time. 10. Processes to continually monitor risks after deployment will be established, along with controls to address emerging issues. Transparency & Accountability 11. ----- "