Bias, Risk, and Opacity in Artificial Intelligence

Abstract

The research investigates the types of bias arising in the use of data in AI, analyses the risks of AI-based technologies, and discusses the criteria for transparent and trustworthy AI.

Keywords

Bias, risk, artificial intelligence, algorithms, transparency

Publications and ongoing projects

PRIN 2020 BRIO – Bias, Risk, Opacity in AI (Project No. 2020SSKZ7R)
https://sites.unimi.it/brio/

  • G. C. M. Amaral, T. P. Sales, G. Guizzardi, D. Porello. Towards a Reference Ontology of Trust. In On the Move to Meaningful Internet Systems (OTM 2019), Rhodes, Greece, October 21-25, 2019. Lecture Notes in Computer Science 11877, Springer, pp. 3-21, 2019.
  • R. Confalonieri, P. Galliani, O. Kutz, D. Porello, G. Righetti, N. Troquard. Towards Knowledge-driven Distillation and Explanation of Black-box Models. In Proceedings of the Workshop on Data Meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18-19, 2021. CEUR Workshop Proceedings 2998, 2021.
  • G. C. M. Amaral, D. Porello, T. P. Sales, G. Guizzardi. Modeling the Emergence of Value and Risk in Game Theoretical Approaches. In Advances in Enterprise Engineering XIV - 10th Enterprise Engineering Working Conference (EEWC 2020), Bozen-Bolzano, Italy, September 28, October 19, and November 9-10, 2020, Revised Selected Papers. Lecture Notes in Business Information Processing 411, Springer, 2021.

Departments

Dipartimento di Antichità, Filosofia e Storia (Department of Antiquity, Philosophy and History)

Contacts

Prof. Daniele Porello (daniele.porello@unige.it)

Last updated: 21 April 2022