Notions of Fairness in Automated Decision Making: An Interdisciplinary Approach to Open Issues


Abstract

Artificial Intelligence (AI) systems share complex characteristics, including opacity, that often prevent transparent reasoning behind a given decision. As the use of Machine Learning (ML) systems in decision-making contexts increases exponentially, the inability to understand why and how decisions were made raises concerns about possible discriminatory outcomes that are not in line with shared fundamental values. However, mitigating (human) discrimination by applying the concept of fairness in ML systems leaves room for further study. This work gives an overview of the problem of discrimination in Automated Decision-Making (ADM) and assesses the existing literature for possible legal and technical approaches to defining fairness in ML systems.

Citation (APA)

Yousefi, Y. (2022). Notions of Fairness in Automated Decision Making: An Interdisciplinary Approach to Open Issues. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13429 LNCS, pp. 3–17). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-12673-4_1
