Bias, risk and opacity in Artificial Intelligence

Abstract

The research studies the types of bias arising from the use of data in AI, analyses the risks posed by AI-based technologies, and discusses the criteria for transparent and reliable AI.

Keywords

Bias, risk, artificial intelligence, algorithms, transparency

Publications and ongoing projects

PRIN 2020 BRIO – Bias, Risk, Opacity in AI (Project n. 2020SSKZ7R)
https://sites.unimi.it/brio/

  • G. C. M. Amaral, T. P. Sales, G. Guizzardi, D. Porello. Towards a Reference Ontology of Trust. In On the Move to Meaningful Internet Systems (OTM 2019), Rhodes, Greece, October 21-25, 2019. Lecture Notes in Computer Science 11877, Springer, pp. 3-21, 2019.
  • R. Confalonieri, P. Galliani, O. Kutz, D. Porello, G. Righetti, N. Troquard. Towards Knowledge-driven Distillation and Explanation of Black-box Models. In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18-19, 2021. CEUR Workshop Proceedings 2998, 2021.
  • G. C. M. Amaral, D. Porello, T. P. Sales, G. Guizzardi. Modeling the Emergence of Value and Risk in Game Theoretical Approaches. In Advances in Enterprise Engineering XIV - 10th Enterprise Engineering Working Conference (EEWC 2020), Bozen-Bolzano, Italy, September 28, October 19, and November 9-10, 2020, Revised Selected Papers. Lecture Notes in Business Information Processing 411, Springer, 2021.

Departments

Department of Antiquities, Philosophy and History

Contacts

Prof. Daniele Porello (daniele.porello@unige.it)

Last update 17 January 2023