How to Measure Trust in AI

Initiative with NTU, The Alan Turing Institute and more

As AI increasingly drives critical sectors such as healthcare, finance, and defense, ensuring it is also trustworthy is essential. ‘Trust’ is a term commonly associated with AI, but what lies behind it? What practical measures should be adopted to ensure that the ultimate outcomes for consumers are always taken into account?

To build Trusted AI, three key pillars must be addressed: Safety, Assurance, and Accountability. Each pillar addresses different risks with different methods, and it is imperative that all three are measured effectively. For example:

  • You can’t trust an AI system that is reliable but unsafe (missing safety).
  • You can’t trust a safe system if you can’t verify or validate its claims (missing assurance).
  • You can’t trust any system if no one is responsible for its actions (missing accountability).
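The bullets above describe a conjunctive requirement rather than an average: a system failing any one pillar fails as a whole. A minimal sketch of that check in Python (the class, field names, and the 0.8 threshold are illustrative assumptions, not part of the initiative):

```python
from dataclasses import dataclass

@dataclass
class PillarScores:
    """Hypothetical per-pillar scores in [0, 1]."""
    safety: float
    assurance: float
    accountability: float

def is_trusted(scores: PillarScores, threshold: float = 0.8) -> bool:
    """Trusted only if EVERY pillar clears the threshold: the weakest
    pillar, not the mean, determines the outcome."""
    return min(scores.safety, scores.assurance, scores.accountability) >= threshold

# Strong safety and assurance cannot compensate for missing accountability:
print(is_trusted(PillarScores(safety=0.95, assurance=0.90, accountability=0.10)))  # False
```

Using the minimum rather than a weighted sum mirrors the examples above: a reliable but unsafe system, or a safe but unverifiable one, is still untrusted.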

Page printed on July 10, 2025. Please see https://www.feedzai.com/how-to-measure-trust-in-ai for the latest version.