4.1 Improve & Innovate: Instrument AI to promote Assurance
Have the acquisition requirements been developed in accordance with the DoD AI Ethical Principles and (if applicable) in ways that avoid vendor lock-in? This includes issues such as:
Documentation requirements
Usage rights
Permissions
Data and data pipeline access (e.g., avoiding proprietary formats)
Distribution protocols
Access for model auditing
If something goes wrong with an externally procured system while it is in use, have you and the AI supplier established and agreed upon who is responsible and who is accountable under each scenario?
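One lightweight way to record such an agreement is a machine-readable responsibility matrix that both parties sign off on. The sketch below is illustrative only; the scenario names and party labels are hypothetical placeholders, and the real assignments must come out of negotiation with the supplier.

```python
# Illustrative RACI-style responsibility matrix for failure scenarios.
# All scenario names and parties are hypothetical placeholders; the real
# agreement should be negotiated with the supplier and kept under version
# control alongside the contract.
RESPONSIBILITY_MATRIX = {
    "model_performance_degradation": {"responsible": "vendor", "accountable": "program_office"},
    "training_data_quality_defect":  {"responsible": "vendor", "accountable": "vendor"},
    "misuse_outside_intended_scope": {"responsible": "operating_unit", "accountable": "program_office"},
}

def parties_for(scenario: str) -> dict:
    """Look up who is responsible and who is accountable for a scenario."""
    try:
        return RESPONSIBILITY_MATRIX[scenario]
    except KeyError:
        raise ValueError(f"No responsibility agreement recorded for: {scenario}")
```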
Have you and the vendor budgeted for RAI activities, including:
Data/model/system card and traceability matrix creation and updating
Continuous monitoring (a minimal monitoring sketch follows this list)
Model retraining and system updating
Continuous harms and impact modeling
Stakeholder engagement
Human systems integration/human machine teaming testing
User training
Assurance and Trust metrics testing
Routine system (and component) auditing
Sunset procedures
Uploading lessons learned into use case and incident repositories
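Several of these budgeted activities, in particular continuous monitoring and assurance metrics testing, can be partially automated. Below is a minimal monitoring sketch, assuming a performance metric is logged per evaluation window; the metric name and alert threshold are placeholders to be set in the program's monitoring plan.

```python
# Minimal continuous-monitoring sketch: compare a logged metric against an
# agreed alert threshold each evaluation window. The metric name and
# threshold below are placeholders, not mandated values.
from dataclasses import dataclass

@dataclass
class MetricReading:
    window: str      # e.g., "2024-W09"
    name: str        # e.g., "precision_at_operating_point"
    value: float

ALERT_THRESHOLDS = {"precision_at_operating_point": 0.90}  # placeholder floor

def needs_review(reading: MetricReading) -> bool:
    """Return True if the reading breaches its threshold and needs RAI review."""
    floor = ALERT_THRESHOLDS.get(reading.name)
    return floor is not None and reading.value < floor

if needs_review(MetricReading("2024-W09", "precision_at_operating_point", 0.87)):
    print("Alert: metric below agreed floor; trigger RAI review.")
```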
Ensure appropriate documentation procedures are in place:
Have you updated the data/model/system cards?
Are these understandable by and accessible to various personas/roles and stakeholders, and at various levels of technical expertise?
Will the documentation be regularly monitored and updated at each stage of product development and deployment?
Are there plans to document data provenance, with information such as where the data was sourced, why it was collected, who collected it, who labeled it, what transformations were applied, and how the data was modified (see the provenance sketch after this list)?
Are there plans to use a traceability matrix for tracking model versions and validation and verification results?
Is the explanation for the system's decision/behavior automatically included in the decision report/output?
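For the provenance and traceability questions above, a machine-readable record is easier to keep current than free-form prose. The sketch below is one possible shape, not a mandated schema; all field names and example values are illustrative.

```python
# Sketch of machine-readable provenance and traceability records covering
# the fields named above. Field names are illustrative, not a mandated schema.
from dataclasses import dataclass, field

@dataclass
class DataProvenance:
    source: str                  # where the data was sourced
    purpose: str                 # why it was collected
    collected_by: str            # who collected the data
    labeled_by: str              # who labeled the data
    transformations: list = field(default_factory=list)  # applied transforms
    modifications: list = field(default_factory=list)    # later modifications

@dataclass
class TraceabilityEntry:
    model_version: str
    training_data: DataProvenance
    vv_results: dict             # verification and validation outcomes

trace = TraceabilityEntry(
    model_version="1.3.0",                     # placeholder version
    training_data=DataProvenance(
        source="internal sensor archive",      # placeholder values
        purpose="object classification",
        collected_by="data team",
        labeled_by="contracted labeling team",
        transformations=["deduplication", "normalization"],
    ),
    vv_results={"holdout_accuracy": 0.91, "robustness_suite": "pass"},
)
```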
Establish procedure and scope for user testing:
What are the possible sources of human error?
How will operator performance be evaluated, and how can it be improved?
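One concrete way to evaluate operator performance and surface sources of human error is to log each human-machine decision alongside later-established ground truth, then compute override statistics. A minimal sketch, with hypothetical field names:

```python
# Sketch of operator-performance metrics from logged human-machine decisions.
# Each record holds the model recommendation, the operator's decision, and
# ground truth established after the fact. Field names are illustrative.
def operator_metrics(records):
    """Compute override rate and how often overrides helped or hurt."""
    overrides = [r for r in records if r["operator"] != r["model"]]
    helpful = sum(1 for r in overrides
                  if r["operator"] == r["truth"] and r["model"] != r["truth"])
    harmful = sum(1 for r in overrides
                  if r["operator"] != r["truth"] and r["model"] == r["truth"])
    n = len(records)
    return {
        "override_rate": len(overrides) / n if n else 0.0,
        "helpful_overrides": helpful,
        "harmful_overrides": harmful,  # candidate source of human error
    }

log = [
    {"model": "threat", "operator": "threat", "truth": "threat"},
    {"model": "benign", "operator": "threat", "truth": "threat"},  # helpful
    {"model": "threat", "operator": "benign", "truth": "threat"},  # harmful
]
print(operator_metrics(log))
```

A high rate of harmful overrides points at training or interface issues; a high rate of helpful overrides suggests the model's operating point needs revisiting.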
Establish procedures through which trust and assurance will be measured and supported:
Has justified confidence/trust of the operational users been measured? Is it at acceptable levels? Can it be increased?
Has justified confidence of other stakeholders been measured? Is it at acceptable levels? Can it be increased?
Have other tools been integrated to promote assurance and justified confidence in the system?
Have tools for explainability, uncertainty quantification, or competence estimation been used to increase assurance and reduce human error? How are you verifying that users interpret these metrics correctly?
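Justified confidence can be checked quantitatively. One common measure is expected calibration error (ECE), which compares a model's stated confidence against its observed accuracy; a large gap means the displayed confidences may mislead operators. A minimal sketch using numpy, assuming per-prediction confidences and binary correctness labels:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |confidence - accuracy| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: overconfident predictions produce a large gap (0.35 here).
print(expected_calibration_error([0.9, 0.9, 0.8, 0.6], [1, 0, 1, 1]))
```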
Have you established a cadence and procedure through which new data will be collected, models will be retrained, and the system will be updated?
Describe the periodicity and process for adding new data.
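A retraining cadence often combines a fixed schedule with a data-drift trigger. One common drift statistic is the population stability index (PSI); the sketch below is a minimal numpy version, and the 0.2 trigger threshold is a conventional rule of thumb rather than a mandated value.

```python
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """PSI between a reference feature distribution and newly collected data."""
    edges = np.histogram_bin_edges(reference, bins=n_bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert to proportions; small epsilon avoids division by zero / log(0).
    eps = 1e-6
    ref_pct = ref_counts / ref_counts.sum() + eps
    cur_pct = cur_counts / cur_counts.sum() + eps
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # training-time distribution
current = rng.normal(0.5, 1.0, 5000)     # shifted operational data
if population_stability_index(reference, current) > 0.2:  # rule-of-thumb trigger
    print("Drift detected: schedule retraining ahead of the fixed cadence.")
```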
Revisit 2.4. How are you designing your system or leveraging other data/AI-enabled capabilities to reduce the ethical/risk burden on operational users, decision makers, senior leaders, developers, and impacted stakeholders that would otherwise be present due to the employment or existence of the system?
4.2 Update Documentation
Update SOCs and data/model cards as necessary. Have the team consult and update the DAGR to support continuous risk identification as new risks (or opportunities) are identified.