AI Assurance – what is it about?
In simple terms, AI assurance encompasses the processes, frameworks and standards needed to ensure that AI systems are safe, reliable, ethical and aligned with their intended purpose. While ‘explainability’ is about making AI decisions understandable, ‘assurance’ goes further by verifying that the system behaves in a consistent, transparent and fair manner. Ultimately, assured AI helps mitigate risks, protect stakeholders, and build confidence among users and organisations that these systems will perform as expected in the real world.
Why AI assurance is essential
There are four elements that demonstrate why AI assurance is so fundamental when deploying AI in an organisation:
- Building Trust: Trust is essential for AI adoption, since users need to believe that AI systems will make accurate, fair and ethical decisions.
- Mitigating Risk: AI systems, like all technological advances, can produce unintended outcomes, and without assurance these risks can lead to regulatory issues, reputational damage or even harm to individuals and society.
- Regulatory Compliance: Many regions are beginning to implement standards and regulations for AI. Assurance helps companies comply with evolving standards, avoid legal repercussions and maintain a positive public image. The EU AI Act (adopted in 2024) is just the start of this new regulatory era.
- Reducing Bias and Ensuring Fairness: AI systems learn from data, which can sometimes carry historical biases. Assuring AI means continually checking and correcting for biases, ensuring the technology benefits all users impartially.
The Challenges
As with any new technology, AI presents great opportunities alongside great challenges, which any user must be wary of. As AI systems become more complex, understanding and verifying every decision becomes increasingly difficult. Furthermore, assurance depends on high-quality, unbiased data, so identifying and mitigating historical biases within datasets is a continuous challenge.
There is also a self-learning consideration. Unlike traditional software, many AI models adapt over time, which requires continuous assurance efforts as systems evolve. And then there is the challenge of transparency. Some AI systems – especially those based on deep learning – operate as black boxes, making it hard to understand and assure their behaviour. Subtle differences have recently been highlighted between Microsoft’s and OpenAI’s approaches to large language models (LLMs).
Microsoft’s Python SDK for Azure Machine Learning includes a model explainability setting, which in recent versions is enabled by default. This gives developers insight into how a model reaches its decisions, meaning they can check that those decisions are made fairly and ethically. On the other hand, OpenAI – creator of ChatGPT and the image generation model DALL-E – has been accused of failing to be transparent about what data is used to train its models. This has led to lawsuits from artists and writers claiming that their material was used without permission. Furthermore, some believe that OpenAI’s users could face legal action in the future if copyright holders are able to successfully argue that material created with the help of OpenAI’s tools also infringes their IP rights. This example demonstrates how opacity around training data can lead to a breakdown in trust between an AI service provider and its customers.
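Tooling differences aside, the kind of visibility an explainability feature provides can be illustrated with a library-agnostic technique such as permutation importance. The sketch below uses scikit-learn on synthetic data; it shows the general idea rather than any vendor’s actual implementation.

```python
# A framework-agnostic interpretability check using scikit-learn's
# permutation importance, run here on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop marks a feature the model's decisions genuinely depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Surfacing which inputs drive a model’s decisions is precisely the visibility that assurance requires, whether it comes built into a vendor’s SDK or is computed independently as here.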
Key elements to success
AI assurance should not be viewed as a one-off task. Keeping it robust and versatile requires a series of ongoing activities, conducted periodically throughout a system’s lifecycle. These are:
- Robust Testing and Validation: Continuous testing, both during development and deployment, ensures AI systems perform as intended across a variety of scenarios.
- Bias Audits and Fairness Checks: Periodic audits can identify and correct biases, ensuring that AI outcomes remain fair and just (see the first sketch after this list).
- Interpretability and Explainability: Enabling stakeholders to understand AI decision-making builds confidence and accountability.
- Ethical and Compliance Reviews: Assurance frameworks need to include ethical reviews and checks for compliance with industry regulations and standards.
- Continuous Monitoring and Feedback Loops: AI systems should be monitored throughout their lifecycle, with feedback mechanisms in place to update and adjust models as needed (a drift-monitoring sketch follows below).
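To make the bias-audit item concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference, computed with plain NumPy. The predictions and group labels are entirely hypothetical.

```python
# Minimal bias-audit sketch: demographic parity difference.
# y_pred holds a model's binary decisions; the group labels are hypothetical.
import numpy as np

y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])           # model decisions
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

# Selection rate per group: the share of positive decisions each group receives.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}

# Demographic parity difference: the gap between the best- and worst-treated
# group. A value near 0 suggests parity; a large gap warrants investigation.
dpd = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity difference = {dpd:.2f}")
```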
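And for continuous monitoring, a sketch of one simple drift check: the population stability index (PSI), which compares a feature’s live distribution against its training-time baseline. The data here is synthetic, and the 0.2 alert threshold is a common rule of thumb rather than a standard.

```python
# Minimal monitoring sketch: population stability index (PSI) for one feature.
import numpy as np

def psi(baseline, live, bins=10):
    """PSI over shared bins; higher values indicate distribution drift.
    Live values outside the baseline range fall out of the bins, which is
    acceptable for a rough check like this one."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero and log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return np.sum((live_pct - base_pct) * np.log(live_pct / base_pct))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution at training time
live = rng.normal(0.4, 1.2, 5000)      # same feature in production, drifted

print(f"PSI = {psi(baseline, live):.3f}")  # > 0.2 commonly triggers an alert
```

In practice a check like this would run on a schedule against production traffic, feeding the feedback loop described above.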
Establishing a Framework
Establishing a framework for AI assurance involves setting standards to ensure that AI systems are developed and deployed responsibly, balancing innovation with ethical considerations and societal impact. The following steps are important when setting up a governance framework for AI assurance:
- Developing Internal Policies and Standards: Organisations can start by creating clear policies for AI assurance, defining what “trustworthy AI” means to them.
- Industry Standards and Guidelines: Familiarity with industry standards, like those from the Institute of Electrical and Electronics Engineers (IEEE) or the International Organization for Standardization (ISO), can help organisations implement robust assurance practices.
- Integrating Cross-functional Expertise: Effective assurance requires input from data scientists, ethicists, domain experts, and legal teams. Collaboration across disciplines is essential to develop holistic and effective AI assurance frameworks.
- Investing in Education and Training: Developing skills around responsible AI, fairness and ethics will empower teams to make more informed decisions throughout the AI lifecycle.
The Future
Looking ahead, we see several developments that will strengthen AI assurance. The first is AI-specific regulation. As governments establish more stringent regulations, proactive assurance frameworks will become a competitive advantage. The second is advances in assurance tools. Just as AI technology is evolving, so are the tools and techniques for assurance; from explainability tools to bias detection software, innovation is making assurance more feasible. The third is the role of transparency in building public confidence. Transparency in AI processes can further instil public confidence and provide a clear view of what an AI system is and is not intended to do.
Assuring AI is not only about minimising risks, but also about maximising the potential benefits of AI in a responsible and ethical manner. As AI continues to reshape our world, building robust assurance practices today is essential for a future where AI enhances human life safely, fairly and transparently. Organisations that prioritise assurance will be better equipped to navigate the regulatory landscape, earn the trust of their users and ultimately lead the way in the responsible adoption of AI.