Overview of RAI Activities Throughout the Product Life Cycle

3. SHIELD Assessment

3.1 HONE: Assess Requirements, Statements of Concern, Mitigations, and Metrics

  1. Ensure all mitigation actions, measures, and controls have a method of being assessed and monitored throughout the life cycle.
    1. Do all requirements (operational, functional, and technical) have appropriate modes of assessment, and benchmarks of success and failure?
    2. Do all statements of concern and mitigation actions have appropriate modes of assessment, and benchmarks of success and failure?
    3. Do all tradeoffs, trustworthiness, and confidence measures have appropriate modes of assessment, and benchmarks of success and failure?
    4. How will performance metrics be established?
    5. How will baseline metrics for system performance be established?
    6. How will errors be detected?
    7. Given the use case, potential consequences, and affected stakeholders for this context, is it better to minimize certain types of error (e.g., considerations of precision vs. recall, or Type I vs. Type II error)?
      1. How will error rates be measured, both overall and for the different sub-populations they affect? (See the sketch following this list.)
      2. Will error rates be recorded in the Impact Assessment?
    8. Will the metrics need to evolve as system behavior changes during use (e.g., through feedback loops)?
    9. How will user understanding be measured?
    10. Are there any measurement gaps or limits to the precision of measurement?
      1. Are there latent constructs or other factors that will be difficult to operationalize or measure?
      2. How will these issues affect risk calculations/impact analyses – do these need to be revisited?
    11. Is the system’s ontology appropriate for the use case and for tracking alignment with the DoD AI Ethical Principles?
  2. Are all of your Statements of Concern and all aspects of your legal/ethical/policy frameworks sufficiently addressed?
    1. If not, re-conduct activities under the Intake and Ideation phases.
  3. What are the anticipated failures? How will these be detected?
  4. Is there a process for system rollback and/or stoppage?
  5. Describe the artifacts your organization requires, such as data ethics reviews, and your team's plans to complete them.
  6. Describe the access controls that ensure the model is accessed only by approved personnel and that each person's access is appropriate for their specific role. Please see DoDD 5411 for guidance.
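
The question of how error rates will be measured across sub-populations (item 1.7 above) can be made concrete with a small amount of code. The following is a minimal sketch, assuming binary ground-truth labels, binary predictions, and a subgroup attribute are available for each record; the function name and example data are illustrative, not drawn from any specific system.

```python
# Minimal sketch: per-subgroup precision, recall, and Type I error rate for a
# binary classifier. All names and example data are illustrative assumptions.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (y_true, y_pred, group) tuples with binary labels."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for y_true, y_pred, group in records:
        c = counts[group]
        if y_pred == 1 and y_true == 1:
            c["tp"] += 1
        elif y_pred == 1 and y_true == 0:
            c["fp"] += 1
        elif y_pred == 0 and y_true == 0:
            c["tn"] += 1
        else:
            c["fn"] += 1

    report = {}
    for group, c in counts.items():
        predicted_pos = c["tp"] + c["fp"]
        actual_pos = c["tp"] + c["fn"]
        actual_neg = c["fp"] + c["tn"]
        report[group] = {
            "precision": c["tp"] / predicted_pos if predicted_pos else None,
            "recall": c["tp"] / actual_pos if actual_pos else None,               # 1 - Type II error rate
            "false_positive_rate": c["fp"] / actual_neg if actual_neg else None,  # Type I error rate
        }
    return report

# Illustrative records: (y_true, y_pred, group)
records = [(1, 1, "A"), (0, 1, "A"), (1, 0, "B"), (0, 0, "B"), (1, 1, "B"), (0, 1, "B")]
for group, metrics in subgroup_error_rates(records).items():
    print(group, metrics)
```

Reporting these per-group figures alongside the aggregate metrics makes it easier to record them in the Impact Assessment and to check whether the error trade-off chosen in item 1.7 holds for every affected sub-population.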

3.2 HONE: Exploratory Data Analysis

  1. How was the data collected or acquired?
    1. Could certain classes or populations have been undersampled?
    2. Is the data representative of the use case/deployment context? (See the representation-check sketch following this list.)
    3. Has the data become stale? How often will it need to be updated?
    4. Given the above, does the data need to be re-collected?
    5. Will re-collection of the data place additional burdens upon the sampled population?
    6. What other steps can be taken to improve the quality of the data?
  2. How was the data labeled?
    1. Is ground truth accessible given the data type?
    2. Could human biases have affected how the data was labeled?
    3. Could societal context have affected how the data was labeled?
    4. Given the above, does the data need to be re-labeled, or is it insufficient to proceed to development?
  3. Data Provenance, Protection, and Access
    1. How is the data accessed? Who has access and how is it controlled?
    2. Where is the data stored?
    3. How is the data protected?
    4. What ensures data provenance? How are transformations and cleaning recorded?
    5. Is any of the data generated synthetically, or should it be?
    6. How is the data used?
  4. Data Exploration
    1. What abnormalities, outliers, or irregularities are present in the data? (See the outlier-check sketch following this list.)
    2. Were these irregularities the result of human error, sensor error, processing error, or natural or adversarial perturbation? What mitigations are required for greater accuracy?
  5. Will data or feedback used to update or fine-tune the model at later stages (such as through Reinforcement Learning from Human Feedback [RLHF]) have any of the issues identified in 3.2.1 or 3.2.2? How will these be mitigated?
  6. Utilize bias identification and mitigation techniques.
    1. Have you determined which operationalization of 'fairness' is appropriate for your purposes? (See the fairness-metric sketch following this list.)
    2. Have the underlying dataset and the model been checked for unintended bias, and (if applicable) have mitigations been applied (including dataset, in-processing, or post-processing bias mitigations)? Has the team considered how the underlying datasets may reflect the biases of the institution or individuals that collected them (including prejudice bias), of the sampling or measurement methods used (measurement or sample/exclusion bias), or of the individuals represented in the dataset?
    3. Are stakeholders being consulted in terms of their domain knowledge regarding sources of unintended bias?
    4. Have any biases and possible biases (including cognitive biases, such as automation bias) of the designers and operational users been addressed through training or system design, or leveraged in ways that contribute to system success?
  7. Update Data Card with the information above. [Data Card Template]
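
The representativeness and undersampling questions in 3.2.1 can be supported by comparing observed class proportions against the proportions expected in the deployment context. The following is a minimal sketch; the class names, expected proportions, and tolerance are illustrative assumptions, not values from any actual dataset.

```python
# Minimal sketch: flag classes whose share of the collected data deviates from
# the share expected in the deployment context (possible undersampling).
# Classes, proportions, and tolerance are illustrative assumptions.
from collections import Counter

def representation_gaps(labels, expected_proportions, tolerance=0.05):
    """Return classes whose observed share deviates from the expected share by more than `tolerance`."""
    counts = Counter(labels)
    total = sum(counts.values())
    gaps = {}
    for cls, expected in expected_proportions.items():
        observed = counts.get(cls, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[cls] = {"observed": round(observed, 3), "expected": expected}
    return gaps

labels = ["vehicle"] * 800 + ["pedestrian"] * 150 + ["cyclist"] * 50    # illustrative
expected = {"vehicle": 0.6, "pedestrian": 0.3, "cyclist": 0.1}          # illustrative
print(representation_gaps(labels, expected))
```

Classes flagged by such a check feed into the re-collection and data-quality questions in 3.2.1.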
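
The data-exploration questions in 3.2.4 can likewise be supported with a basic outlier screen. This is a minimal sketch using the interquartile-range (IQR) rule on a single numeric feature; the sensor readings and the multiplier k are illustrative, and any flagged values still require the human/sensor/processing-error review in 3.2.4.2.

```python
# Minimal sketch: flag numeric outliers with the interquartile-range (IQR) rule
# during exploratory data analysis. Feature values are illustrative.
def iqr_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    ordered = sorted(values)

    def quantile(q):
        pos = q * (len(ordered) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(ordered) - 1)
        return ordered[lo] + (ordered[hi] - ordered[lo]) * (pos - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]

sensor_readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 55.3, 9.7]   # illustrative; 55.3 may be sensor error
print(iqr_outliers(sensor_readings))
```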
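
Finally, the fairness question in 3.2.6.1 requires choosing a concrete, computable definition. This is a minimal sketch of one common operationalization, the demographic parity difference (the gap in positive-prediction rates between groups); other operationalizations such as equalized odds may better fit a given use case, and the predictions and group labels below are illustrative.

```python
# Minimal sketch: demographic parity difference, one possible operationalization
# of group fairness. Predictions and group labels are illustrative assumptions.
from collections import defaultdict

def demographic_parity_difference(predictions):
    """predictions: iterable of (y_pred, group); returns (max gap in positive rates, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y_pred, group in predictions:
        totals[group] += 1
        positives[group] += int(y_pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

preds = [(1, "A"), (1, "A"), (0, "A"), (1, "B"), (0, "B"), (0, "B")]   # illustrative
gap, rates = demographic_parity_difference(preds)
print(rates, gap)
```

Whichever definition the team selects should be recorded with the Data Card update in 3.2.7 so the same measure can be re-run as the system and data evolve.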

3.3 Conduct AI Suitability Assessment

  1. According to your legal/ethical/policy frameworks, SOCs, mitigations, use case, and mission domain – is AI suitable for this use case? Be sure to answer the following:
    1. Is AI suitable for the task at hand? Is the model type appropriate for the task at hand? Do the advantages outweigh the disadvantages, known or possible? Does utilizing AI in this case achieve something that a non-AI tool could not accomplish?
    2. What is the specific task that the system performs?
    3. What is the system input and output required to perform that task?

3.4 Update Documentation

  1. Update the SOCs worksheet as necessary.