
Advancing Fairness in AI: Key Insights from the MAMMOth Project’s 5th Plenary Meeting in Vienna
16 January 2025
Day 1: Progress Presentations and Key Findings
The first day of the 5th Plenary Meeting of the MAMMOth project in Vienna began with a series of insightful presentations by Swati Swati, Arjun Roy, and Prof. Eirini Ntoutsi, highlighting recent advances in fairness-aware machine learning within the scope of MAMMOth. With consortium members gathered to discuss progress across multiple domains, the presentations set a strong tone for collaborative work on tackling fairness and bias in AI systems.
The presentations covered key findings from several research areas:
- Bias in Multimodal Fusion: The challenges and implications of bias that arise when fusing different data modalities in AI systems were addressed, along with proposed solutions for ensuring fairness across the integrated modalities.
[Swati Swati, Arjun Roy, Eirini Ntoutsi. Exploring Fusion Techniques in Multimodal AI-Based Recruitment: Insights from FairCVdb. Proceedings of the 2nd European Workshop on Algorithmic Fairness (EWAF'24).]
- Bias Mitigation in Multi-task Learning and Federated Environments: Progress was shared on bias mitigation strategies that operate across multiple learning tasks, along with an exploration of privacy-discrimination trade-offs in federated learning systems.
[Arjun Roy, Christos Koutlis, Symeon Papadopoulos, Eirini Ntoutsi. FairBranch: Mitigating Bias Transfer in Fair Multi-task Learning. International Joint Conference on Neural Networks (IJCNN'24).]
- Synthetic Data and Counterfactual Explanations: Ongoing work on generating synthetic data for bias mitigation was presented, along with the use of counterfactual explanations to assess fairness in mixed-type tabular data.
[Emmanouil Panagiotou, Arjun Roy, Eirini Ntoutsi. Synthetic Tabular Data Generation for Class Imbalance and Fairness: A Comparative Study. 4th BIAS Workshop, co-located with ECML PKDD 2024 (BIAS'24).]
[Emmanouil Panagiotou, Manuel Heurich, Tim Landgraf, Eirini Ntoutsi. TABCF: Counterfactual Explanations for Tabular Data Using a Transformer-Based VAE. 5th ACM International Conference on AI in Finance (ICAIF'24).]
- Adversarial Robustness and Subgroup Disparities: Research was discussed on the robustness of models under adversarial attacks and the resulting disparities across intersectional demographic subgroups.
[Chethan Krishnamurthy Ramanaik, Arjun Roy, Eirini Ntoutsi. Adversarial Robustness of VAEs across Intersectional Subgroups. 4th BIAS Workshop, co-located with ECML PKDD 2024 (BIAS'24).]
Day 2: Public Engagement and Workshop
On the second day, a hands-on workshop introduced the MAMMOth toolkit to a broader audience and gathered valuable feedback. Prof. Ntoutsi also gave a presentation on the many facets of bias in AI, categorizing them into i) useful biases, such as inductive biases that enable learning, ii) problematic biases that hinder model generalization (illustrated with the crop classification use case from the STELAR project), and iii) harmful biases that lead to discrimination.
Looking Ahead
The meeting in Vienna set new goals for refining the MAMMOth toolkit, drawing on insights from both internal teams and external participants. With UniBwM's contributions central to the project's development, the team is excited to continue working towards fairness and transparency in multimodal AI.