RAI Toolkit

The following questions have been designed to help you assess risks throughout the implementation of AI projects. The questions are organized by stage of the AI product life cycle. We recommend you return to this tool throughout the project and complete the relevant questions. At the end, you can export your answers as a PDF or JSON file.

2. Ideation

2.1 Define Requirements

  1. Use the outputs from Stage 1 and ensure that the requirements are framed in operational terms and include a complete set of expected situations and conditions.
  2. Translate the operational requirements into functional requirements.
  3. Translate each functional requirement into technical design requirements and performance specifications.
  4. Data Privacy Considerations.
    1. Describe the bounds within which this project is working from a data perspective.
    2. Describe whether the project needs to be in the cloud, and if so, why.
    3. With regard to data sensitivity, describe the platform requirements and the efforts needed to meet them.
    4. Describe how data will be pulled for this project, and the aggregation risks this creates (one way to record these answers is sketched below).
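
One lightweight way to record the answers to these data privacy questions is a machine-readable manifest. The Python sketch below is illustrative only: every field name and value is an assumption for this example rather than a toolkit schema, though the JSON output matches the toolkit's own export option.

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class DataSource:
        """One data source used by the project."""
        name: str
        sensitivity: str       # e.g. "public", "CUI", "PII"
        platform: str          # where the data is stored and processed
        pull_method: str       # how the data will be pulled (API, bulk export, ...)
        aggregation_risk: str  # why combining this source with others raises sensitivity

    @dataclass
    class DataManifest:
        """Answers to the data privacy questions, in exportable form."""
        project: str
        data_bounds: str          # the bounds the project works within, from a data perspective
        cloud_justification: str  # whether the project needs the cloud, and why
        sources: list = field(default_factory=list)

    manifest = DataManifest(
        project="example-project",
        data_bounds="Limited to de-identified maintenance logs.",
        cloud_justification="On-premises only; data sensitivity rules out commercial cloud.",
        sources=[DataSource(
            name="maintenance_logs",
            sensitivity="CUI",
            platform="accredited on-premises enclave",
            pull_method="nightly bulk export",
            aggregation_risk="Joining with personnel rosters could re-identify individuals.",
        )],
    )

    print(json.dumps(asdict(manifest), indent=2))  # export answers as JSON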

2.2 Identify Risks & Opportunities / Navigate Tradeoffs

  1. Are policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of risks associated with AI in place, transparent, implemented, and validated? This includes an organizational culture committed to safe, ethical, responsible, and trustworthy AI capabilities.
  2. Have risk assessments that examine the full scope of possible risks been conducted? The risk assessments should examine a scope of risks much broader than operation alone, including risks and potential negative outcomes to society, the environment, political and economic structures, and sustainability initiatives. See also [Appendix 2] for additional impact analysis questions.
    1. What are the potential risks across the entire risk landscape (impact and likelihood)? Consider social, technological, operational, political, economic, and sustainability factors, which may include privacy, human and civil rights, bias, cybersecurity, supply chain risks, and concerns about traceability and transparency. Consider indirect and second- or third-order effects, and possible emergent behaviors of the system.
    2. What is the risk response (including avoidance, mitigation, transference, and acceptance) for each risk?
    3. What are the evaluated and residual risk levels for each risk after mitigation? How are the risks evaluated (qualitatively or quantitatively)? Refer to DAGR and other risk management frameworks for recommendations.
    4. How is each risk monitored and measured throughout the AI capabilities lifecycle?
    5. Prioritize risks; refer to DAGR and other risk management frameworks for recommendations. This outcome may be used for incident response prioritization. Consider prioritizing based on data, infrastructure, security, accountability, resources, and model operation (the DISARM Hierarchy of Risk in DAGR). A minimal register-and-prioritization sketch follows this list.
    6. Document the risk relationships and dependencies between AI capabilities, if any exist, and document any resulting changes to the residual risk. Refer to DAGR and other risk management frameworks for recommendations.
    7. Identify those responsible for each risk, and describe the coordination cadence in relation to the risk relationships and the prioritization of this project's risks.
    8. What possible risks are posed by use of the system for purposes other than those for which it was originally developed or procured, by combining the system with other components (including future, yet-to-be-developed technology or capabilities), or by training on datasets beyond the originally intended types? How can these be mitigated?
    9. How will different error types and failure modes be handled?
      1. How will error rates and failure modes be measured?
      2. What steps have been taken to ensure that error rates, failure modes, and behavior in general are consistent for edge cases and imbalanced groups? (A per-group error-rate sketch follows this list.)
      3. How has sensitivity testing been conducted to ensure consistent and reliable error rates and failure modes in real-world settings?
  3. Have the risk assessments considered impacts to other nations, allies, and partners? Do international treaties, agreements, and policies require review?
  4. Consider establishing a procedure for prioritizing or navigating ethical tradeoffs.
  5. Establish a regular cadence through which risk analyses are revisited and updated throughout the product life cycle or in the event of unanticipated consequences.
  6. Describe discussions that have occurred with the relevant authorizing official, and explain the process needed to receive an Authority to Operate (ATO) for this project; an ATO, issued by an authorizing official, is required.
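
The prioritization questions above lend themselves to a simple risk register. The Python sketch below is a minimal illustration, not the DAGR method: the ordinal scales, the impact × likelihood scoring rule, and the example entries are all assumptions, and a real project should substitute the scales and scoring its chosen framework prescribes.

    from dataclasses import dataclass

    # Ordinal scales for a simple qualitative assessment (an assumption; DAGR
    # and other frameworks define their own scales and scoring methods).
    LEVELS = {"low": 1, "medium": 2, "high": 3}

    @dataclass
    class Risk:
        description: str
        category: str             # e.g. data, infrastructure, security, model operation
        impact: str               # "low" / "medium" / "high"
        likelihood: str           # "low" / "medium" / "high"
        response: str             # avoidance, mitigation, transference, or acceptance
        residual_impact: str      # estimated impact after the response is applied
        residual_likelihood: str  # estimated likelihood after the response is applied

        def score(self, residual: bool = False) -> int:
            """Impact × likelihood as a crude priority score."""
            if residual:
                return LEVELS[self.residual_impact] * LEVELS[self.residual_likelihood]
            return LEVELS[self.impact] * LEVELS[self.likelihood]

    register = [
        Risk("Training data can be re-identified", "data",
             "high", "medium", "mitigation", "high", "low"),
        Risk("Model drifts after deployment", "model operation",
             "medium", "high", "mitigation", "medium", "medium"),
    ]

    # Rank by residual score, e.g. for incident response prioritization.
    for r in sorted(register, key=lambda r: r.score(residual=True), reverse=True):
        print(f"{r.score(residual=True)}  {r.category:16} {r.description}")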
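
For the error-type and failure-mode questions above, one basic measurement is the error rate computed separately per group, so that inconsistencies for edge cases and imbalanced groups become visible instead of being averaged away. A minimal sketch, assuming labeled evaluation data with a group attribute per example; the data here is toy data.

    from collections import defaultdict

    def error_rates_by_group(y_true, y_pred, groups):
        """Per-group error rates; y_true/y_pred are labels, groups is a
        group attribute per example (e.g. an edge-case or demographic tag)."""
        errors, counts = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            counts[g] += 1
            errors[g] += int(t != p)
        return {g: errors[g] / counts[g] for g in counts}

    # Toy data: group "b" is imbalanced (few examples) and fails more often.
    y_true = [1, 0, 1, 1, 0, 1, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b"]
    print(error_rates_by_group(y_true, y_pred, groups))  # {'a': 0.0, 'b': 0.666...}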

2.3 Write Statements of Concern

Statements of Concern (SOCs) are RAI-related issues to be tracked across the life cycle; they may relate either to risks or to potential opportunities for innovation that may be leveraged. For each SOC, include its estimated impact and likelihood. SOCs can be as short as one- to two-sentence bullet points for further tracking. See Appendix 3 for SOC examples.

  1. Using the legal/ethical/policy frameworks and the risks and opportunities you identified from your impact assessments in Section 2.2, write a list of Statements of Concern. [Statements of Concern Worksheet]
  2. For each SOC, identify a means of establishing, updating, and tracking its priority level (a minimal tracking sketch follows this list).
  3. For each SOC, analyze and propose mitigation options.
  4. Begin thinking about the optimal modes of assessment for each mitigation measure so that progress can be monitored.
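
One way to keep SOCs trackable is to store each as a small record whose priority is re-derived whenever the impact and likelihood estimates change. The sketch below is a minimal illustration; the field names and the scoring rule are assumptions rather than a toolkit schema, and the JSON output matches the toolkit's export format.

    import json

    LEVELS = {"low": 1, "medium": 2, "high": 3}

    # A minimal SOC record: a short statement plus estimated impact and
    # likelihood, a priority to be re-scored at each review, and candidate
    # mitigations whose assessment modes can be decided later.
    soc = {
        "statement": "Operators may over-trust recommendations in time-critical settings.",
        "type": "risk",    # or "opportunity"
        "impact": "high",
        "likelihood": "medium",
        "priority": None,  # set by update_priority, revisited each review
        "mitigations": ["confidence display in the UI", "operator training module"],
    }

    def update_priority(record):
        """Re-derive priority from the current impact/likelihood estimates."""
        record["priority"] = LEVELS[record["impact"]] * LEVELS[record["likelihood"]]
        return record

    print(json.dumps(update_priority(soc), indent=2))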

2.4 Design to Reduce Ethical/Risk Burdens

  1. Given the risk and ethical issues surfaced by your risk assessments and SOC Worksheets, plan how you can design your system to mitigate these issues.
  2. [If applicable] Begin thinking about how data and AI-enabled capabilities could be leveraged to solve the SOC/ethical/risk issues that might otherwise arise from the employment or existence of the system.
  3. How will you measure the effectiveness of these mitigations in reducing cognitive load, moral injuries, dilemmas, and other risk/ethical burdens on operational users, operational commanders, developers, and senior leaders?

2.5 Accountability, Responsibility, & Access Flows and Governance

  1. Is the scope of responsible use for the system identified in terms easily understood by stakeholders, developers, and users of the system? Please describe the scope as presented to these different groups.
  2. Have processes for avoiding, identifying, and mitigating compromise or misuse of the system been identified?
  3. What degree of human involvement is needed for the system once deployed?
    1. Is there a procedure for when automated decisions or activities of the system will require human approval? (A minimal approval-gate sketch follows this list.)
    2. Are responsibilities clearly defined between the system and the human, including areas of overlap?
  4. Establish accountability/responsibility flows for monitoring and addressing these areas (use the Responsibility Flows Questionnaire Tool in Appendix 5). How do these responsibilities evolve at each stage of the product development life cycle?
  5. What oversight mechanisms will be established to ensure and monitor these responsibility flows and other RAI issues?
    1. How will RAI-related issues be identified, tracked, and communicated to relevant personnel? Can Executive Dashboards, visualizations, or reports be used for this? (A minimal roll-up sketch follows this list.)
    2. How will the project's consistency with the DoD AI Ethical Principles be monitored and measured?
    3. What vendor oversight plans are in place? All vendors, not just those who provide algorithms, require appropriate oversight.
    4. When will Program Management Reviews occur and how will they involve assessments of the project's alignment with the DoD AI Ethical Principles?
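
For the human-approval question above (item 3.1), one common pattern is a confidence-gated decision flow in which low-confidence automated outputs are escalated to a human. The sketch below is only illustrative: the threshold, the confidence signal, and the routing rule are assumptions and should instead come from the project's own responsibility flows.

    def decide(model_output, confidence, threshold=0.9):
        """Route low-confidence automated decisions to a human for approval."""
        if confidence >= threshold:
            return {"action": model_output, "approved_by": "system"}
        return {"action": model_output, "approved_by": "pending-human-review"}

    print(decide("flag-for-maintenance", 0.95))  # proceeds automatically
    print(decide("flag-for-maintenance", 0.60))  # escalated to a human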
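
For identification and communication of RAI issues (item 5.1 above), a simple roll-up of open issues by severity and by the DoD AI Ethical Principle they touch can feed an executive dashboard, report, or Program Management Review. The issue log below is hypothetical; in practice the entries would come from the project's issue tracker.

    from collections import Counter

    # Each entry: (status, severity, DoD AI Ethical Principle touched).
    issues = [
        ("open", "high", "Equitable"),
        ("open", "medium", "Traceable"),
        ("closed", "high", "Reliable"),
        ("open", "high", "Equitable"),
    ]

    open_by_severity = Counter(sev for status, sev, _ in issues if status == "open")
    open_by_principle = Counter(pr for status, _, pr in issues if status == "open")

    print("Open RAI issues by severity: ", dict(open_by_severity))
    print("Open RAI issues by principle:", dict(open_by_principle))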