AI is reshaping defence faster than governance frameworks can keep up:
- Human agency and accountability are increasingly unclear
- Testing and verification practices are inconsistent
- There are no common standards across jurisdictions
- Industry, militaries and states are not aligned
The non-negotiable baseline is that all critical AI-assisted decisions must remain with human operators. GC REAIM* defines what Responsibility by Design requires across the development, deployment and oversight of military AI, and who is accountable for what**.
FOUR THINGS YOU CAN START DOING RIGHT NOW
1. Audit your current position
Map your existing AI applications against the GC REAIM lifecycle framework, e.g.:
- Identify actors, audit trails, compliance requirements and incident reporting structures
- Identify where human oversight and accountability are unclear or undocumented
- Check whether your current testing and verification practices go beyond predefined milestones
2. Embed responsibility by design
Building on your audit, integrate ethical and legal considerations into your processes, e.g.:
- Involve legal and ethical experts from the start of development, not just at review stages
- Design systems so that humans can override AI whenever decisions have legal or safety implications
- Empower all team members to raise ethical concerns
3. Build organisational capability
Invest in verification, training and incident-reporting mechanisms before they become mandatory, e.g.:
- Establish TEVV (Test, Evaluate, Validate, Verify) standards that include data quality requirements as well as milestones
- Conduct comprehensive risk assessments at every stage, not just pre-deployment
- Offer continuous training so individuals can exercise informed human agency
- Design and test features that strengthen human understanding and oversight
4. Engage beyond your organisation
GC REAIM calls on stakeholders to participate in the related governance structures, e.g.:
- Join the permanent multilateral dialogue on responsible AI in the military domain
- Use internationally agreed standards and best practices as a baseline, not a ceiling
- Be transparent about system capabilities, limitations and testing outcomes
Sopra Steria has extensive experience implementing AI solutions for organisations in the public and defence sectors. Contact us to find out how we can help you.
[dennis.struyk@soprasteria.com]
*Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM)
**Based on the GC REAIM Strategic Guidance Report: Responsible by Design, 2025