As artificial intelligence becomes part of everyday operations, organisations must ensure that innovation goes hand in hand with strong governance and trust.
For a specialised international organisation, this challenge is particularly significant. The organisation manages large volumes of sensitive information and operates in an international environment where transparency, accountability, and responsible technology use are essential.
At the same time, it does not operate under a single binding legislative framework. Instead, governance must be built on internationally recognised standards and best practices.
To support the responsible adoption of AI while strengthening data protection, the organisation set out to reinforce its privacy governance and AI risk management capabilities.
From governance frameworks to daily practice
As AI-enabled tools began to appear across projects and internal operations, the organisation needed a consistent way to identify and manage both privacy risks and AI-related risks.
Privacy practices were strengthened in line with the ISO 27701 privacy information management standard, reinforcing processes such as privacy impact assessments, management of data subject rights requests, and monitoring of personal data breaches.
In parallel, a structured AI risk management framework was developed in alignment with the principles of the ISO 42001 AI management system standard.
The framework enables teams to assess AI-enabled use cases, identify operational and ethical risks, including those related to generative AI, and implement mitigation measures.
More than 40 AI-enabled tools and solutions have already been assessed as part of this governance process.
To support long-term oversight, the organisation also digitalised its AI risk assessment process through a governance and risk management platform, enabling consistent evaluations and clearer traceability of risks and mitigation actions.
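To make the digitalised assessment process concrete, the record-keeping it implies can be sketched in a few lines of code. This is a minimal illustration only: the names (`Risk`, `AIUseCaseAssessment`, `RiskLevel`) and fields are hypothetical and do not describe any specific governance platform, but they show how per-use-case risk entries, severity ratings, and mitigation traceability fit together.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    description: str
    category: str          # e.g. "operational", "ethical", or "privacy"
    level: RiskLevel
    mitigation: str = ""   # planned or implemented mitigation measure


@dataclass
class AIUseCaseAssessment:
    tool_name: str
    owner: str
    risks: list[Risk] = field(default_factory=list)

    def highest_risk(self) -> RiskLevel:
        """The overall rating is driven by the most severe identified risk."""
        if not self.risks:
            return RiskLevel.LOW
        return max((r.level for r in self.risks), key=lambda lvl: lvl.value)

    def unmitigated(self) -> list[Risk]:
        """Risks still awaiting a mitigation measure, for follow-up tracking."""
        return [r for r in self.risks if not r.mitigation]


# Hypothetical example: assessing one generative-AI tool.
assessment = AIUseCaseAssessment(tool_name="GenAI drafting assistant",
                                 owner="Communications team")
assessment.risks.append(Risk("Output may expose personal data", "privacy",
                             RiskLevel.HIGH,
                             mitigation="Input filtering and human review"))
assessment.risks.append(Risk("Fabricated facts in published text", "ethical",
                             RiskLevel.MEDIUM))

print(assessment.highest_risk().name)   # → HIGH
print(len(assessment.unmitigated()))    # → 1
```

Keeping mitigation text alongside each risk entry is what gives the traceability the text describes: an assessor can list every risk that still lacks a mitigation action and follow it up.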
Embedding privacy and AI risk management into operational processes helps organisations turn governance into a practical enabler of responsible innovation.
Maria-Alexandra Enescu
Senior GRC Consultant, Sopra Steria
The impact
This initiative enabled the organisation to move from ad hoc practices to a structured, scalable governance model for privacy and AI risks.
- 40+ AI-enabled tools assessed to support responsible deployment
- Improved readiness for ISO 27701 certification
- 50+ staff members trained on privacy and data protection practices
- 20+ security champions engaged on emerging AI risks
- Digitalised AI risk assessments enabling consistent governance processes
By strengthening governance and internal awareness, the organisation can continue adopting new technologies while maintaining the high standards of trust and accountability expected from an international organisation.