Overcoming the governance gap: 4 steps to responsible AI in defence


AI is reshaping defence faster than governance frameworks can keep up.  

  • Human agency and accountability are increasingly unclear 
  • Testing and verification practices are inconsistent 
  • There are no common standards across jurisdictions 
  • Industry, militaries and states are not aligned 

The non-negotiable baseline is that all critical AI-assisted decisions must remain with human operators. 

GC REAIM* defines what Responsibility by Design requires for the development, deployment and oversight of military AI, and who is accountable for what**.

FOUR THINGS YOU CAN START DOING RIGHT NOW 

1. Audit your current position 

Map your existing AI applications against the GC REAIM lifecycle framework, e.g.: 

  • Identify actors, audit trails, compliance requirements and incident reporting structures  
  • Identify where human oversight and accountability are unclear or undocumented  
  • Check whether your current testing and verification practices go beyond predefined milestones 

2. Embed responsibility by design 

Building on your audit, embed ethical and legal considerations into your processes, e.g.: 

  • Involve legal and ethical experts from the start of development, not just at review stages 
  • Design systems so that humans can override AI whenever decisions have legal or safety implications 
  • Empower all team members to raise ethical concerns 

3. Build organisational capability 

Invest in verification, training and incident-reporting mechanisms before they become mandatory, e.g.: 

  • Establish TEVV (Test and Evaluation, Verification and Validation) standards that include data quality requirements as well as milestones 
  • Conduct comprehensive risk assessments at every stage, not just pre-deployment  
  • Offer continuous training so individuals can exercise informed human agency 
  • Design and test features that strengthen human understanding and oversight 

4. Engage beyond your organisation 

GC REAIM calls on stakeholders to participate in international governance structures, e.g.: 

  • Join the permanent multilateral dialogue on responsible AI in the military 
  • Use internationally agreed standards and best practices as a baseline, not a ceiling 
  • Be transparent about system capabilities, limitations and testing outcomes 

Sopra Steria has extensive experience in implementing AI-related solutions for organisations in the public and defence sectors. Contact us to find out how we can help you. 

Dennis Struyk 

Head of Public Sector 

dennis.struyk@soprasteria.com 

*Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM) 

**Based on the GC REAIM Strategic Guidance Report: Responsible by Design, 2025 

