Transparency & Fairness in Biometrics

Motivation & Goals

Face recognition systems have frequently been labelled "biased", "racist", "sexist", or "unfair" by numerous media outlets, organisations, and researchers. Since this is an emerging challenge, further research in this area is required to ensure equal treatment across different demographic groups. The goal of this thesis is to use explainable AI (XAI) methods to increase the transparency of biometric recognition systems and to tackle bias and fairness issues for given demographic groups. Either new methodologies or a fusion of established approaches can be used.


  • Study the state-of-the-art in explainability and/or fairness for biometric systems
  • Develop and evaluate new methods to add transparency and increase fairness
  • Benchmark the developed methods against the state-of-the-art
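As a starting point for the fairness evaluation mentioned above, one common approach is to compare biometric error rates across demographic groups at a fixed decision threshold. The sketch below (all group names, scores, and the threshold are synthetic and purely illustrative) computes a per-group false match rate (FMR) and a simple fairness gap:

```python
import numpy as np

# Illustrative sketch: quantify demographic fairness of a biometric
# verification system by comparing the false match rate (FMR) of
# impostor (non-mated) comparisons across groups at a fixed threshold.
# All data here is synthetic.

rng = np.random.default_rng(0)

# Synthetic impostor comparison scores per demographic group.
impostor_scores = {
    "group_a": rng.normal(0.30, 0.10, 5000),
    "group_b": rng.normal(0.35, 0.10, 5000),
}

threshold = 0.5  # a comparison is accepted if its score >= threshold

# FMR per group: fraction of impostor comparisons wrongly accepted.
fmr = {g: float(np.mean(s >= threshold)) for g, s in impostor_scores.items()}

# A simple fairness indicator: the largest FMR gap between any two groups.
fmr_gap = max(fmr.values()) - min(fmr.values())

print(fmr, fmr_gap)
```

A benchmark of a developed method could then report such per-group error rates and gaps before and after the proposed fairness intervention.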


Marta Gomez-Barrero (