Article Open access Peer-reviewed

Algorithmic Fairness in AI

2023; Springer Nature; Volume: 65; Issue: 2; Language: English

10.1007/s12599-023-00787-x

ISSN

2363-7005

Authors

Jella Pfeiffer, Julia Gutschow, Christian Haas, Florian Möslein, Oliver Maspfuhl, Frederik Borgers, Suzana Alpsancar

Topic(s)

Explainable Artificial Intelligence (XAI)

Abstract

In 2016, the investigative journalism group ProPublica analyzed COMPAS, a machine-learning-based recidivism prediction algorithm used in the U.S. criminal justice sector. This instrument assigns risk scores to defendants that are supposed to reflect how likely a person is to commit another crime upon release. The group found that the instrument was much more likely to falsely flag black defendants as high risk, and less likely to falsely assess them as low risk, than was the case for white defendants. ProPublica judged this to be highly problematic, as false decisions in this area of application can have a major impact on defendants' lives, possibly affecting their prospects of early release, probationary conditions, or the amount of bail posted (Angwin et al. 2016). This example from the criminal justice sector shows that discrimination is a problem not only of human but also of algorithmic decision-making. Algorithmic fairness is particularly interesting in the context of machine learning algorithms because they typically learn from past data, which might already be biased. Furthermore, a machine learning algorithm that tends to make unfair decisions might lead to systematic discrimination because, once trained, the algorithm might decide a large number of future cases. As such AI algorithms are used in many contexts, such as personalized advertising, recruiting, credit business, or pricing (Dastile et al. 2020; Lambrecht and Tucker 2019; Raghavan et al. 2020; Sweeney 2013), they can gravely impact the further development of people's lives both at the individual and the societal level, e.g., by widening the wealth gap, but can also impact organizations, e.g., by violating equal opportunity policies (Kordzadeh and Ghasemaghaei 2022). It is, therefore, of utmost importance not only to ensure that AI systems do not discriminate systematically but, going one step further, to also understand them as a chance to mitigate potential unfairness stemming from human-based decision-making. This discussion paper mainly draws from a symposium on algorithmic fairness that was held in March
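The disparity ProPublica reported is a gap in group-wise error rates. As a minimal sketch of how such a gap can be measured, the following code computes the false positive rate (FPR) and false negative rate (FNR) per group; the data and the function name `error_rates` are illustrative assumptions, not ProPublica's actual analysis or data.

```python
# Illustrative sketch: quantify the fairness disparity described above by
# comparing false positive and false negative rates across two groups.
# All data here is hypothetical and chosen only to show the computation.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical outcomes: 1 = reoffended (true) / flagged high risk (pred).
group_a_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_a_pred = [1, 1, 0, 0, 1, 1, 1, 0]  # more false positives
group_b_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_b_pred = [1, 0, 0, 0, 1, 1, 0, 0]  # more false negatives

fpr_a, fnr_a = error_rates(group_a_true, group_a_pred)
fpr_b, fnr_b = error_rates(group_b_true, group_b_pred)
print(f"group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")  # FPR=0.50, FNR=0.25
print(f"group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")  # FPR=0.25, FNR=0.50
```

A fairness criterion such as equalized odds would require these per-group FPR and FNR values to match; the COMPAS case illustrates what it looks like when they do not.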
