MAMMOth Project Concludes with UniBw Advancing Multidimensional Bias Mitigation in AI
31 October 2025
The Horizon Europe-funded project MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) officially concluded on 31 October 2025 after three years of pioneering work to make Artificial Intelligence fairer, more inclusive, and more accountable. The project brought together academic, industrial, and societal partners to address bias in AI systems and to deliver practical tools and knowledge for fairness-aware AI development. Its outcomes provide policymakers, technology developers, and society with concrete strategies for embedding fairness at the centre of AI innovation.
As the leading contributor to the project’s work on multidimensional bias mitigation, team UniBw played a key role as a scientific, research, and technical partner, shaping the project’s outcomes through the development of methods and tools for defining, analysing, mitigating, and explaining bias in multidimensional settings.
Contributions from team UniBw
Throughout the project, team UniBw developed a broad set of theoretical, methodological, and software contributions addressing fairness across multiple protected attributes, data modalities, and algorithmic pipelines. These include:
- A generic formulation of multidimensional discrimination and contributions to the AI Fairness Definition Guide
- Multi-objective optimisation approaches for multi-attribute fairness (the multi-attribute setting is illustrated in the sketch after this list)
- Methods for bias mitigation across multiple tasks and bias analysis in multimodal fusion
- A method for socio-economic fairness aligned with principles of the EU AI Act
- MMM-Fair, an open-source library for multidimensional fairness analysis with interactive visualisation, tradeoff exploration, a no-code chat-based interface alongside a CLI, and LLM-powered explanations
- Fairness-aware synthetic tabular data generation, including:
  - synthetic data generators for class imbalance and fairness
  - the TABFAIRGDT tabular data generator
  - methods for fairness-aware and privacy-preserving synthetic data
- Techniques for assessing disparity in adversarial robustness and adversarial attacks on autoencoders
- Methods for analysing the fairness–utility tradeoff in graph clustering
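To make the idea of multi-attribute (intersectional) fairness concrete, the sketch below computes a demographic-parity gap across every combination of two protected attributes. It is a minimal, plain-Python illustration written for this article; the attribute names, the metric choice, and the function names are assumptions for illustration and do not come from MAMMOth's deliverables or the MMM-Fair library.

```python
# Illustrative only: a simple intersectional (multi-attribute) fairness check.
# The metric (demographic-parity gap across subgroups) and all names are
# chosen for illustration, not taken from MAMMOth or MMM-Fair.
from itertools import product

def subgroup_positive_rates(records, attributes, prediction_key="prediction"):
    """Positive-prediction rate for every combination of the given
    protected attributes (e.g. gender x age group)."""
    values = {a: sorted({r[a] for r in records}) for a in attributes}
    rates = {}
    for combo in product(*(values[a] for a in attributes)):
        group = [r for r in records
                 if all(r[a] == v for a, v in zip(attributes, combo))]
        if group:
            rates[combo] = sum(r[prediction_key] for r in group) / len(group)
    return rates

def multi_attribute_parity_gap(records, attributes):
    """Largest difference in positive rates across all intersectional
    subgroups; 0.0 means perfect demographic parity for every combination."""
    rates = subgroup_positive_rates(records, attributes)
    return max(rates.values()) - min(rates.values())

# Toy data: a disparity that only appears at the intersection of two attributes.
data = [
    {"gender": "f", "age": "young", "prediction": 1},
    {"gender": "f", "age": "old",   "prediction": 1},
    {"gender": "m", "age": "young", "prediction": 1},
    {"gender": "m", "age": "old",   "prediction": 0},
]
print(multi_attribute_parity_gap(data, ["gender"]))         # single attribute: 0.5
print(multi_attribute_parity_gap(data, ["gender", "age"]))  # intersectional: 1.0
```

On this toy data the gap measured on gender alone is 0.5, while the gap across combinations of gender and age group is 1.0: a disparity that only becomes visible when attributes are examined jointly, which is exactly the multidimensional setting the project's methods address.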
Broader Achievements of the MAMMOth Project
According to the official project press release, MAMMOth delivered wide-reaching results beyond scientific development, including:
- The MAI-BIAS Toolkit, enabling detection, analysis, and mitigation of bias in datasets and AI models
- Fairness-aware libraries and methods applied to finance, identity verification, multimodal fusion, and academic impact analysis
- Extensive training and public engagement, with more than 30 workshops, five webinars, podcasts, and public exhibitions
- Policy briefs aligned with the EU AI Act, offering practical guidance for fairness-oriented AI governance
- Outreach to more than 12,000 organisations, supported by broad dissemination activities
Impact and Legacy
MAMMOth has significantly advanced the foundations of intersectional, multimodal, and multidimensional fairness in AI. By pairing methodological innovation with practical tools, the project has set new standards for evaluating and mitigating bias in real-world AI systems.
The project’s open-source software, datasets, methodological frameworks, and policy recommendations remain accessible via the AI-on-Demand platform, GitHub, and mammoth-ai.eu, ensuring sustained impact beyond the project’s completion.