Appendix 2. Impact and Harm Assessment

Have impact assessments, harms analyses, opportunity scoping, and risk assessments been conducted to address the following concerns:

  • Privacy
    1. How is sensitive, identifying, or impactful data about individuals or groups safeguarded in the development and use of the system?
    2. What are the bounds on the types of sensitive information that will be gathered and stored?
      1. Under what conditions will sensitive information be used or continue to be stored, and when will it not?
      2. What is the plan for prompt and auditable data deletion once it is no longer required?
    3. How will de-identifying, anonymizing, or aggregation techniques be used for datasets containing sensitive information? (A minimal sketch follows this subsection.)
      1. Can the data be stored and processed directly on users’ personal devices?
      2. At the model development stage, could training techniques such as federated learning be used to protect sensitive information?
    4. How does your system implement purpose-based access controls to limit access to the data?
      1. How is data encrypted when moved or stored?
    5. How will the data collection method maintain the trust and well-being of relevant stakeholders?
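
For Privacy question 3, the sketch below illustrates two common safeguards in miniature: keyed pseudonymization of direct identifiers and suppression of small groups before aggregate counts are released. The PEPPER constant, the field names, and the k = 5 threshold are illustrative assumptions, not recommendations; a real deployment would draw keys from a key-management service and set thresholds by policy.

```python
import hashlib
import hmac
from collections import Counter

# Hypothetical secret; in practice, retrieve this from a key-management
# service rather than embedding it in source code.
PEPPER = b"replace-with-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Keyed hashing (rather than a bare hash) resists dictionary attacks
    against low-entropy identifiers such as names or phone numbers.
    """
    return hmac.new(PEPPER, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def aggregate_with_suppression(records, group_key, k=5):
    """Release group counts, folding groups smaller than k into one bucket.

    A minimal k-anonymity-style safeguard: no released count isolates
    fewer than k individuals.
    """
    counts = Counter(r[group_key] for r in records)
    released, suppressed = {}, 0
    for group, n in counts.items():
        if n >= k:
            released[group] = n
        else:
            suppressed += n
    if suppressed:
        released["<suppressed>"] = suppressed
    return released

if __name__ == "__main__":
    rows = [("alice", "east"), ("bob", "east"), ("carol", "east"),
            ("dan", "east"), ("erin", "east"), ("frank", "west")]
    records = [{"user": pseudonymize(u), "site": s} for u, s in rows]
    print(aggregate_with_suppression(records, "site"))  # {'east': 5, '<suppressed>': 1}
```
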
  • Human Rights and Civil Liberties
    1. What positive or negative outcomes could occur from the use of the system (including effects on material or economic interests and opportunity; impacts on human rights and civil liberties; emotional, moral, or psychological injury or benefit to individuals or groups; physical injury or benefit; effects on trust or reputation; and impacts on social and democratic values)?
    2. Does the system pose risks to individuals from vulnerable populations? How will the project avoid over- or under-sampling them in a way that disadvantages them? How have these considerations been weighed? What burdens does collecting, storing, or using their data place on individuals?
  • Protection of Property
    1. What are the implications for protection of public property that could occur from the use of the system?
    2. What are the implications for protection of private property that could occur from the use of the system?
  • Deterrence and Self-Defensibility
    1. What are the implications for deterrence and self-defensibility that arise from the employment of the system?
    2. What are the implications for deterrence and self-defensibility that arise from the existence of the system?
    3. What are the implications for deterrence and self-defensibility due to the non-existence of the system?
  • Traceability and Transparency
    1. How will data/model/system cards be created and maintained? (A skeletal model-card example follows this subsection.)
    2. How will the system balance explainability against performance?
    3. Are human-understandable explanations critical for this case?
    4. Is the explanation for the system’s behavior/decision included in the decision report or output?
    5. Will the system be transparent (it is possible to know how it made a decision), opaque (post-hoc techniques can yield an accurate inference of how the decision came about), or a black box (not human-understandable)?
      1. Why was this design choice made?
    6. How will sufficient documentation of the system’s development and functioning be ensured and made available (even if no human understandable explanation for how it arrived at a particular decision is possible)?
    7. How will the project ensure the system can be understood by stakeholders with different levels of technical expertise and domain knowledge?
    8. How are system functionality, development, and changes being communicated to stakeholders?
      1. What are the risks of not communicating sufficiently or of providing too much information?
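
For Traceability question 1, the sketch below shows one lightweight way to keep a model card machine-readable and versioned alongside the model artifact. The field set loosely follows Mitchell et al., “Model Cards for Model Reporting” (2019); the schema and every value here are illustrative placeholders rather than a fixed standard.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    """A skeletal, machine-readable model card (illustrative schema)."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_data: str = ""
    metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    change_log: list = field(default_factory=list)  # supports traceability over time

# All values below are placeholders.
card = ModelCard(
    model_name="example-classifier",
    version="1.2.0",
    intended_use="Decision support; outputs reviewed by a human operator.",
    out_of_scope_uses=["Fully automated adjudication"],
    training_data="Internal dataset v3 (see companion data card).",
    evaluation_data="Held-out split of dataset v3.",
    metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Degrades on inputs outside the training distribution."],
    change_log=["1.2.0: retrained after drift detected during monitoring."],
)

# Persisting the card next to the model artifact keeps documentation auditable.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```
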
  • Fairness and Unintended Bias
    1. How will the underlying dataset and models be checked for unintended bias, and (if applicable) how will mitigations be applied (including dataset, in-processing, or post-processing bias mitigations)? How has the team considered the ways the underlying datasets may reflect the biases of the institution or individuals that collected them (including prejudice bias), of the sampling or measurement methods used (measurement and sample/exclusion bias), or of the individuals represented in the dataset?
    2. Have the biases of the designers and operational users been assessed?
      1. How will these affect the functioning of the system?
      2. How can these biases be mitigated through training or system design, or leveraged in ways that contribute to system success?
    3. Have stakeholders been consulted (or has a diverse team been assembled for the design and testing of the system) to provide domain knowledge regarding sources of unintended bias stemming from protected characteristics, including age, gender, sexual orientation, race or ethnicity, socio-economic status, physical attributes, level of education, degree of ability, religion, etc.?
    4. Which operationalization of fairness (e.g., demographic parity, equalized odds) is appropriate for your purposes? (A minimal metric sketch follows this subsection.)
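
For Fairness items 1 and 4, the hedged sketch below computes two widely used group-fairness metrics: the demographic parity difference and an equalized-odds gap (in the sense of Hardt et al., 2016). The toy arrays are placeholders, and which operationalization is appropriate, if either, depends on the use case; several common fairness criteria are mutually incompatible, so the choice must be deliberate.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # TPR for group g
        fprs.append(y_pred[m & (y_true == 0)].mean())  # FPR for group g
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Illustrative toy data only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # 0.0
print(equalized_odds_gap(y_true, y_pred, group))     # ~0.33
```
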
  • Supply Chain & Architecture Security, and Open-Source Dependencies
    1. What are potential risks to your hardware supply chain?
      1. How might this compromise your system functionality?
      2. What mitigations are available?
    2. How are you ensuring that any open-source resources or dependencies are secure and will continue to receive regular updates until the system is sunsetted? (A dependency-pinning sketch follows this subsection.)
    3. What vulnerabilities to your architecture exist?
      1. Are you able to implement zero-trust architecture?
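
For question 2 in this subsection, one small, auditable control is pinning vendored third-party artifacts to hashes reviewed at intake, as sketched below. The third_party/ layout and manifest format are assumptions for illustration; hash pinning complements, rather than replaces, vulnerability scanning (e.g., pip-audit for Python dependencies) and monitoring of upstream maintenance activity.

```python
import hashlib
import json
import pathlib
import sys

# Manifest maps artifact file names to reviewed SHA-256 digests,
# e.g. {"libfoo-1.4.2.tar.gz": "ab12..."}; the path is an assumption.
MANIFEST = pathlib.Path("third_party/manifest.json")

def sha256_of(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(base: pathlib.Path = pathlib.Path("third_party")) -> bool:
    """Return True only if every vendored artifact matches its pinned digest."""
    expected = json.loads(MANIFEST.read_text())
    ok = True
    for name, pinned in expected.items():
        actual = sha256_of(base / name)
        if actual != pinned:
            print(f"MISMATCH {name}: expected {pinned}, got {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify() else 1)  # nonzero exit fails a CI gate
```
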
  • Sustainability
    1. How have the energy and manpower costs of your approach been weighed? (A back-of-envelope estimation sketch follows this subsection.)
    2. What more sustainable approaches to storage, training, compute, collection, or labeling could be leveraged?
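
For the Sustainability questions, a back-of-envelope estimate can make the energy and emissions costs of competing approaches comparable before one is chosen. The sketch below multiplies device power by run time and a data-center overhead factor (PUE), then scales by grid carbon intensity; every numeric input is an illustrative assumption to be replaced with measured or vendor-supplied values.

```python
def training_footprint(num_devices: int,
                       avg_device_power_w: float,
                       hours: float,
                       pue: float = 1.5,
                       kg_co2_per_kwh: float = 0.4):
    """Estimate (energy in kWh, emissions in kg CO2e) for a training run.

    pue: power usage effectiveness, the data-center overhead multiplier.
    kg_co2_per_kwh: carbon intensity of the local grid (varies widely).
    """
    energy_kwh = num_devices * avg_device_power_w * hours * pue / 1000.0
    return energy_kwh, energy_kwh * kg_co2_per_kwh

if __name__ == "__main__":
    kwh, kg = training_footprint(num_devices=8, avg_device_power_w=300, hours=72)
    print(f"~{kwh:,.0f} kWh, ~{kg:,.0f} kg CO2e")  # ~259 kWh, ~104 kg CO2e
```
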