Statement of Concern: Sensitive, identifying, or impactful data about individuals or groups might be inadvertently disclosed in the development and use of the AI capability.
Measurement: Deploy privacy-monitoring software tools to track re-identification risk.
Mitigations:
- Employ de-identifying, anonymizing, or aggregation techniques for datasets containing sensitive information (see the sketch after this list).
- Restrict users’ ability to store and process data directly on their personal devices.
- At the model development stage, employ federated learning to protect sensitive information.
- Incorporate purpose-based access controls to limit access to the data.
- Encrypt data when it is moved or stored.
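As a concrete illustration of the first mitigation above, the following is a minimal de-identification sketch in Python. It assumes a pandas DataFrame with hypothetical column names (`name`, `email`, `age`); the salted hashing and age bucketing shown here are illustrative, not a complete anonymization scheme.

```python
import hashlib
import os

import pandas as pd

# Per-deployment salt; in practice this would come from a secrets manager.
SALT = os.environ.get("PII_SALT", "replace-me").encode()

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Hash direct identifiers (column names are illustrative).
    for col in ("name", "email"):
        if col in out.columns:
            out[col] = out[col].astype(str).map(pseudonymize)
    # Generalize a quasi-identifier: bucket exact ages into 10-year bands.
    if "age" in out.columns:
        out["age"] = (out["age"] // 10 * 10).astype(str) + "s"
    return out

records = pd.DataFrame(
    {"name": ["Ada"], "email": ["ada@example.com"], "age": [37]}
)
print(deidentify(records))
```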
- Concern: New behaviors in the AI capability will emerge after deployment, possibly generating negative outcomes.
Possible Mitigations:
- Employ software tools to detect emergent behavior in the AI capability (see the drift-detection sketch after this list).
- Designate individuals to monitor for emergent behavior.
- Assess each detected instance of emergent behavior for risk.
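One lightweight way to instrument for emergent behavior is to watch for drift in the model's output distribution: a shift does not prove new behavior, but it is a cheap trigger for the human review recommended above. A minimal sketch, assuming scalar confidence scores and using a two-sample Kolmogorov-Smirnov test (the baseline and threshold are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Baseline: model confidence scores captured at deployment time.
baseline = rng.beta(8, 2, size=5_000)

def drifted(window: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag a production window whose score distribution has shifted
    away from the baseline (two-sample Kolmogorov-Smirnov test)."""
    result = ks_2samp(baseline, window)
    return result.pvalue < alpha

print(drifted(rng.beta(8, 2, size=1_000)))  # False: matches the baseline
print(drifted(rng.beta(2, 8, size=1_000)))  # True: distribution has shifted
```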
- Concern: Users will reuse and adapt the AI capability.
Possible Mitigations:
- Designate individuals to monitor uses of the AI capability.
- Assess each detected adaptation of the AI capability for risk.
- Concern: Data that has been collected will adversely affect the trust and well-being of relevant stakeholders.
Possible Mitigations:
- Bound what types of sensitive information will be gathered and under what conditions it will be used or continue to be stored.
- Design a plan for prompt and auditable data deletion once it is no longer required.
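A deletion plan of the kind just described can be made auditable by pairing every deletion with an append-only log entry. A minimal sketch (the file paths, field names, and `retention-job` actor are hypothetical):

```python
import datetime
import json
import pathlib

AUDIT_LOG = pathlib.Path("deletion_audit.jsonl")

def delete_with_audit(path: pathlib.Path, reason: str, actor: str) -> None:
    """Delete a data file and append an audit record of who deleted what, and why."""
    path.unlink()  # raises if the file is already gone
    record = {
        "deleted": str(path),
        "reason": reason,
        "actor": actor,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps(record) + "\n")

# Example: the retention window for this dataset has expired.
target = pathlib.Path("sessions_2023.csv")
target.write_text("placeholder")  # stand-in for real data
delete_with_audit(target, reason="retention window expired", actor="retention-job")
```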
- Concern: System functionality, development, and changes are not communicated sufficiently to stakeholders, or too much information is provided.
Possible Mitigations:
- Schedule when data, model, and system cards will be populated and updated (see the model-card sketch after this list).
- Document data provenance.
- Automatically include an explanation of the system’s decision or behavior in the decision report or output.
- Make sufficient documentation of the system’s development and functioning available to stakeholders, even if no human-understandable explanation for how it arrived at a particular decision is possible.
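Model cards are easier to keep current when they are structured data rather than free-form documents. A minimal sketch of such a record; the fields follow common model-card practice rather than any mandated schema, and all values are invented for illustration:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card record; update and re-publish on a schedule."""
    name: str
    version: str
    intended_use: str
    training_data_provenance: str
    known_limitations: list[str] = field(default_factory=list)
    last_updated: str = ""

card = ModelCard(
    name="triage-classifier",
    version="1.4.0",
    intended_use="Route incoming support tickets; not for medical use.",
    training_data_provenance="Internal tickets, 2021-2023, collected with consent.",
    known_limitations=["Accuracy degrades on non-English text."],
    last_updated="2024-05-01",
)
print(json.dumps(asdict(card), indent=2))
```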
- Concern: The AI capability poses risks to individuals from vulnerable populations.
Possible Mitigations:
- Ensure vulnerable populations are not oversampled for the dataset in a way that disadvantages them.
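Whether a group is over- or under-represented can be checked by comparing dataset shares against a reference distribution. A minimal sketch, assuming a single protected attribute with hypothetical group names and reference shares:

```python
from collections import Counter

# Reference shares for a protected attribute (hypothetical, e.g. census-derived).
reference = {"group_a": 0.62, "group_b": 0.25, "group_c": 0.13}

def sampling_skew(labels: list[str], tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose dataset share deviates from the reference
    share by more than `tolerance` (over- or under-represented)."""
    counts = Counter(labels)
    total = len(labels)
    return {
        group: counts[group] / total - expected
        for group, expected in reference.items()
        if abs(counts[group] / total - expected) > tolerance
    }

sample = ["group_a"] * 40 + ["group_b"] * 45 + ["group_c"] * 15
print(sampling_skew(sample))  # flags group_a (under) and group_b (over)
```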
- Concern: System failures will result in downtime that puts users of the service hosting the AI capability at risk.
Possible Mitigations:
- Review incidents and failure modes compiled from past experiences. Anticipate similar failures and instrument the system to detect them.
- Put in place a process for system rollback.
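A rollback process is simplest when the serving model is a pointer into a versioned registry, so reverting is a pointer move rather than a redeployment. A minimal sketch using a local JSON file as the registry (a stand-in for whatever model registry the system actually uses):

```python
import json
import pathlib

REGISTRY = pathlib.Path("model_registry.json")

def load_registry() -> dict:
    if REGISTRY.exists():
        return json.loads(REGISTRY.read_text())
    return {"active": None, "history": []}

def promote(version: str) -> None:
    """Make `version` the serving model, keeping prior versions for rollback."""
    reg = load_registry()
    if reg["active"]:
        reg["history"].append(reg["active"])
    reg["active"] = version
    REGISTRY.write_text(json.dumps(reg))

def rollback() -> str:
    """Revert to the most recently active prior version."""
    reg = load_registry()
    if not reg["history"]:
        raise RuntimeError("no prior version to roll back to")
    reg["active"] = reg["history"].pop()
    REGISTRY.write_text(json.dumps(reg))
    return reg["active"]

promote("v1")
promote("v2")
print(rollback())  # back to v1 after a bad deployment of v2
```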
- Concern: Issues in the hardware supply chain for components in the system will compromise system functionality.
Possible Mitigations:
- Evaluate and mitigate risks in the hardware supply chain.
- Concern: Open-source components in the system will go out of date as the system itself continues to be developed.
Possible Mitigations:
- Ensure open-source components will continue to receive regular updates until the system is sunsetted.
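Staleness of open-source components can be monitored automatically. The sketch below flags Python dependencies with no PyPI release inside a chosen window, using PyPI's public JSON API; it detects apparently abandoned packages only, and the one-year threshold is an assumption:

```python
import datetime
import json
import urllib.request

def latest_release_date(package: str) -> datetime.date:
    """Upload date of the newest release, via PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    files = meta["releases"][meta["info"]["version"]]  # files of the latest release
    return max(
        datetime.datetime.fromisoformat(f["upload_time"]).date() for f in files
    )

def stale(package: str, max_age_days: int = 365) -> bool:
    """Flag a dependency with no release inside the chosen window."""
    age = datetime.date.today() - latest_release_date(package)
    return age.days > max_age_days

for pkg in ("requests", "numpy"):
    print(pkg, "possibly unmaintained" if stale(pkg) else "recently updated")
```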
- Concern: Threat actors will exploit vulnerabilities in the architecture of the system.
Possible Mitigations:
- Evaluate architecture vulnerabilities.
- Implement a zero-trust architecture.
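The core of a zero-trust architecture is that every request is authenticated and authorized on its own, with no trust granted by network location. A minimal sketch of per-request credential checks using HMAC (the shared key and service names are placeholders; a real deployment would use per-service keys, token expiry, and mutual TLS):

```python
import hashlib
import hmac
import os

# Shared signing key; in practice, per-service keys from a secrets manager.
KEY = os.environ.get("SVC_KEY", "replace-me").encode()

def sign(service: str, request_id: str) -> str:
    """Issue a credential bound to a single service and request."""
    msg = f"{service}:{request_id}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify(service: str, request_id: str, token: str) -> bool:
    """Authenticate every call; network location alone confers no trust."""
    return hmac.compare_digest(sign(service, request_id), token)

token = sign("billing", "req-42")
assert verify("billing", "req-42", token)      # legitimate caller succeeds
assert not verify("billing", "req-43", token)  # replay against another request fails
```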