The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves the alignment of AI projects with RAI best practices and the DoD AI Ethical Principles, while capitalizing on opportunities for innovation. The RAI Toolkit offers an intuitive flow that guides the user through tailorable, modular assessments, tools, and artifacts across the AI product lifecycle. The process enables traceability and assurance of responsible AI practice, development, and use.
Explore the SHIELD Assessment
The SHIELD Assessment is built around the AI product lifecycle described in the RAI Strategy and Implementation Pathway.
RAI Toolkit
A collection of tools and resources to help you design, develop, and deploy responsible AI applications.
The Responsible Artificial Intelligence (RAI) Defense AI Guide on Risk (DAGR) provides DoD AI stakeholders with guiding principles, best practices, and other governing Federal and DoD guidance.