19th EAWE PhD Seminar in Hannover 1 September 2023 Manolis is organizing the 19th EAWE PhD seminar in Hannover.
MMM-Fair presented in the Demo Track at CIKM 2025 “MMM-Fair: An Interactive Toolkit for Exploring and Operationalizing Multi-Fairness Tradeoffs” was presented in the Demo Track at CIKM 2025, held in Seoul, Republic of Korea.
Kick-off Workshop of the DZdA Project – German Center for Digital Tasks in Higher Education Teaching The kick-off workshop of the project German Center for Digital Tasks in Higher Education Teaching (DZdA) took place as a two-day intensive event with all nine project partners from universities across Germany. The host was OTH Amberg-Weiden, which, together with the strategic consulting firm pictomind GmbH, facilitated the event. The goal of the workshop was to establish a shared understanding of the project, define strategic objectives, and lay the foundation for successful collaboration.
MAMMOth Project Concludes with UniBw Advancing Multidimensional Bias Mitigation in AI The Horizon Europe-funded project MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) officially concluded on 31 October 2025 after three years of pioneering work to make Artificial Intelligence fairer, more inclusive, and more accountable. The project brought together academic, industrial, and societal partners to address bias in AI systems and to deliver practical tools and knowledge for fairness-aware AI development. Its outcomes provide policymakers, technology developers, and society with concrete strategies for embedding fairness at the centre of AI innovation.
Paper accepted: IEEE Big Data 2025 Our paper "A Deep Latent Factor Graph Clustering with Fairness-Utility Trade-off Perspective" was accepted at IEEE Big Data 2025. Congratulations, Siamak!
The Multifaceted Nature of Bias in AI: Impact on Model Generalization, Robustness, and Fairness At last week’s EU AI Fairness Cluster meeting in Brussels, Eirini presented insights on the different forms of bias in AI in a talk titled "The Multifaceted Nature of Bias in AI: Impact on Model Generalization, Robustness, and Fairness".
EU AI Fairness Cluster Brussels Eirini participated in the final event of the EU AI Fairness Cluster in Brussels. It was a great opportunity to explore different aspects of fairness in AI.
Paper accepted: IEEE ICDM 2025 Our paper "TABFAIRGDT: A Fast Fair Tabular Data Generator using Autoregressive Decision Trees" was accepted at IEEE ICDM 2025. Congratulations to Manolis and everyone involved!