Fundamental Rights and AI Impact Assessment: an innovative quali-quantitative framework

Nominated for the Qorus-Infosys Finacle Banking Innovation Awards 2024

Submitted by

Intesa Sanpaolo

25/06/2024 Banking Innovation
The new EU AI Act requires deployers of high-risk AI systems to assess their impact on fundamental rights. We offer a two-stage tool for businesses: (1) a survey identifies potential threats; (2) a matrix measures the impact on each right. The framework ensures accountability and concreteness.
Innovation details
Country
Italy
Category
Social, Sustainable & Responsible Banking
Keyword
AI & Generative AI, Regulation, Innovation, ESG & Sustainability, Risk management

Innovation presentation

The European Artificial Intelligence Act requires that deployers of AI systems perform a Fundamental Rights Impact Assessment (FRIA) when such systems are classified as high-risk under Annex III of the Regulation. The aim of our project – conducted jointly by the Data Science & Responsible AI and Compliance Digital Transformation departments – is to offer a comprehensive framework, specifically designed for private businesses, to assess the impact of AI systems on the fundamental rights of individuals. The assessment approach we elaborated consists of two stages:

(1) an open-ended survey that gathers contextual information and technical features, so as to properly identify potential threats to fundamental rights, and

(2) a quantitative matrix that considers each right guaranteed by the Charter of Fundamental Rights of the European Union and measures the potential impacts through a traceable and robust procedure.

In light of the increasingly pervasive use of AI systems in the banking sector, as in all industries, we believe that a structured, quantitative process for assessing the impact on the fundamental rights of individuals has so far been lacking and could be of great importance in discovering and remedying possible violations.
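As an illustration only, the second stage could be sketched as a likelihood-times-severity scoring model applied per Charter right. The submission does not disclose its actual scoring method, so the rights subset, the score scales, and the band thresholds below are all hypothetical assumptions:

```python
from dataclasses import dataclass

# Illustrative subset of rights from the EU Charter of Fundamental Rights.
RIGHTS = ["Human dignity", "Non-discrimination", "Data protection", "Consumer protection"]

@dataclass
class Score:
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int    # assumed scale: 1 (negligible) .. 5 (critical)

    @property
    def impact(self) -> int:
        # Simple multiplicative model; the real framework may differ.
        return self.likelihood * self.severity

def impact_matrix(scores: dict[str, Score]) -> dict[str, str]:
    """Map each assessed right to a traceable impact band (assumed thresholds)."""
    bands = {}
    for right, s in scores.items():
        if s.impact >= 15:
            bands[right] = "high"
        elif s.impact >= 8:
            bands[right] = "medium"
        else:
            bands[right] = "low"
    return bands

# Example assessment for a hypothetical credit-scoring AI system.
assessment = {
    "Non-discrimination": Score(likelihood=4, severity=4),
    "Data protection": Score(likelihood=3, severity=2),
}
print(impact_matrix(assessment))
# {'Non-discrimination': 'high', 'Data protection': 'low'}
```

Keeping each score and threshold explicit is what makes the procedure traceable: every band in the output can be tied back to the survey answers that motivated the likelihood and severity inputs.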

Indeed, the framework we constructed allows an organization to:

(1) be accountable and transparent in assessing the risks of implementing AI systems that affect people;

(2) gain insight into whether any right is threatened or any group of people is particularly vulnerable;

(3) put in place, where necessary, remediation strategies before the deployment of AI systems through demonstrable mitigating actions, with the aim of complying with the Regulation and limiting reputational damage.
