Minds and Machines
Minds & Machines publishes on the relation between human beings and technologies.
Socio-technical systems imbued with artificial intelligence (AI) increasingly shape societies worldwide by making important decisions, for example, in public administration (AlgorithmWatch, 2019), medicine (Rajkomar et al., 2019), media (Thurman et al., 2019), and the legal system (Chouldechova, 2017). AI can improve decision-making because algorithms are impartial, do not grow tired, and are not distorted by emotions (Lee, 2018). However, unfair AI systems can also undermine fundamental ethical principles. They have been shown to systematically reinforce racial or gender stereotypes, marginalize communities, or denigrate certain members of society (Veale & Binns, 2017). For instance, AI systems have wrongly excluded citizens from food support programs, mistakenly reduced their disability benefits, or falsely accused them of fraud (Richardson et al., 2019).
To mitigate AI-based social inequalities and discrimination, fairness is endorsed as a main principle for trustworthy AI by the OECD (2019) and the European Commission (2019), and it has been mentioned in more than 80 percent of guidelines for AI ethics (Jobin et al., 2019). AI fairness has further emerged as a growing research field in philosophy, computer science, social science, and legal science. As different notions of fairness exist, we aim to bring different perspectives together and advance the productive dialogue on AI (un)fairness.
In this Special Issue, we want to explore interdisciplinary perspectives of AI (un)fairness. We bring together researchers from different academic disciplines to zoom in on the intricate ethical, social, psychological, and regulatory issues that arise when transferring decision authority to an AI-based system.
We invite papers focusing on, but not restricted to, the following topics:
- How can we advance conceptual and epistemological work on AI (un)fairness?
- How does AI sustain vs. change existing social power structures?
- Which trade-offs emerge between AI fairness, accuracy, explainability, security, privacy, and accountability? And how can we address them?
- What are the drivers and consequences of AI (un)fairness?
- How should humans be integrated into AI decision-making (Human-in-the-loop)?
- How can and should policymakers implement fairness in AI regulation?
The Special Issue is guest edited by members of the interdisciplinary project Human(e) AI, funded by the University of Amsterdam as a Research Priority Area. To streamline the editing process, we invite all authors to submit a short declaration of interest clearly stating the research question, objectives, and main argument of the paper (max. 500 words) via humane-ai@uva.nl. Positive feedback on the declaration of interest does not guarantee acceptance into the Special Issue. All full paper submissions will undergo rigorous double-blind peer review by at least two reviewers.
Submission Details
To submit a paper for this Special Issue, authors should go to the journal's Editorial Manager: https://www.editorialmanager.com/mind/default.aspx. The author (or a corresponding author, in the case of co-authored papers) must register in Editorial Manager. The author must then select the special article type "Interdisciplinary Perspectives on the (Un)fairness of Artificial Intelligence" from the selection provided in the submission process. This is needed in order to assign the submission to the Guest Editors. Submissions will then be assessed according to the following procedure: New Submission => Journal Editorial Office => Guest Editor(s) => Reviewers => Reviewers' Recommendations => Guest Editor(s)' Recommendation => Editor-in-Chief's Final Decision => Author Notification of the Decision. The process will be repeated in case of requests for revisions.
For any further information, please contact: Dr. Christopher Starke (christopher.starke@uva.nl)
Assistant Professor of The Human Factor in New Technologies, University of Amsterdam
Full Professor of Artificial Intelligence and Humanities, University of Amsterdam
Full Professor of Artificial Intelligence and Society, University of Amsterdam
Full Professor of Law and Digital Technology, University of Amsterdam
Full Professor of Logic and Epistemology, University of Amsterdam