Introduction to the Issue and Case Studies

Abstract: Artificial intelligence, far from being neutral, reproduces and amplifies religious biases already present in the data and in development processes. This first section examines distortions in linguistic and visual models, showing how religious bias can negatively affect the representation of minority faiths. Recent studies demonstrate that such biases emerge in textual content, in generated images, and in algorithmic communication, posing concrete risks to pluralism and fundamental rights. These findings highlight the urgency of effective debiasing strategies and of a more ethical approach to AI design.
Keywords: #artificialintelligence #religiousbias #algorithmicdiscrimination #dataprejudice #religiousminority #religiouspluralism #AIethics #technologicalneutrality #religiousrepresentation #debiasing #maurocofelice #ethicasocietas #ethicasocietasjournal #scientificjournal #pointbasedlicense #law #ethicasocietassupplement
Introduction to the Issue[1]
Artificial intelligence (AI) is today one of the fastest-growing technologies, with an increasingly evident influence on various aspects of social, economic, and political life. Like any technology, it is commonly presumed to be neutral; behind this façade of neutrality and objectivity, however, lies a worrying phenomenon: religious bias. This subtle yet systematic form of discrimination endangers the fundamental rights[2] of religious minorities and democratic pluralism, as well as the ethical values of society as a whole.
As Stefano Foglia[3] (2024) emphasizes, artificial intelligence “is not neutral”; rather, it reflects and amplifies the prejudices and distortions present both in training data and in the human decision-making processes that guide its development.
Religious Bias
Religious bias in AI emerges through a complex series of technical and cultural mechanisms. On the one hand, machine-learning models tend to absorb distortions embedded in their training datasets (which often reflect dominant cultural viewpoints while neglecting religious minorities); on the other hand, developers (mostly Western and secular) may inadvertently embed biases that privilege certain religious traditions over others.
In recent years, academic research has begun to document this phenomenon systematically. The studies of Plaza-del-Arco et al. (2024) on large language models (LLMs) highlight significant discriminatory patterns: while majority religions in Western countries tend to be represented with greater nuance and complexity, religious minorities are often treated superficially[4].
The Study on ChatGPT
Khan and Umer (2025), in their study on the use of AI in financial services, found that approximately 50% of emails generated by ChatGPT contain forms of religious bias. Distortions emerge both within the same religious group (intragroup bias) and between different groups (intergroup bias).
In the first case, AI tends to modulate responses based on the user’s implicit religious alignment, adjusting tone and content accordingly.
In the second, it may produce messages that accentuate differences among faiths, contributing to ideological polarization or loss of customer trust. These findings suggest that AI systems do not merely reproduce existing social prejudices but actively reshape them, altering how religious identity is interpreted and communicated. In this way, AI acts as a mediator that selects and reinforces specific religious narratives, with significant effects on commercial communication[5].
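A minimal sketch of how such an audit could be reproduced in practice is given below, assuming the generated emails are already available as plain text and grouped by the religious cue embedded in the prompt. The keyword lexicon and sample texts are illustrative placeholders, not the annotation scheme actually used by Khan and Umer, but the structure shows how intragroup and intergroup differences can be quantified.

```python
import re
from collections import defaultdict

# Illustrative lexicon of explicit religious framing; Khan and Umer (2025)
# used their own annotation scheme, so this list is only a stand-in.
RELIGIOUS_MARKERS = [
    r"\bgod\b", r"\bbless(ed|ing)?\b", r"\bfaith\b", r"\bhalal\b", r"\bharam\b",
    r"\bchurch\b", r"\bmosque\b", r"\bsynagogue\b", r"\bprayer\b",
]
PATTERN = re.compile("|".join(RELIGIOUS_MARKERS), re.IGNORECASE)

def flag_religious_framing(email_text: str) -> bool:
    """Return True if the email contains explicit religious framing."""
    return bool(PATTERN.search(email_text))

def audit(emails_by_cue: dict[str, list[str]]) -> dict[str, float]:
    """For each religious cue used in the prompt, report the share of
    generated emails that contain explicit religious framing."""
    rates = {}
    for cue, emails in emails_by_cue.items():
        flagged = sum(flag_religious_framing(e) for e in emails)
        rates[cue] = flagged / len(emails) if emails else 0.0
    return rates

# Toy example: outputs generated from prompts carrying different religious cues.
sample = {
    "muslim_client": ["May your investments be halal and blessed.",
                      "Here is the quarterly summary."],
    "christian_client": ["God bless your retirement plan.",
                         "Attached is the updated forecast."],
    "no_cue": ["Attached is the updated forecast.",
               "Here is the quarterly summary."],
}
print(audit(sample))  # e.g. {'muslim_client': 0.5, 'christian_client': 0.5, 'no_cue': 0.0}
```

In a real audit, the simple lexicon would be replaced by human annotation or a trained classifier, since explicit keywords capture only the most overt forms of religious framing.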
A Documented Case of Religious Bias in Visual AI
Evidence also comes from the study “Religious Bias Landscape in Language and Text-to-Image Models: Analysis, Detection, and Debiasing Strategies” by Abrar et al. (2025). The authors constructed about 400 natural prompts referencing different religions to test both language and image-generation models.
They found persistent religious stereotypes, disproportionately associated with certain faiths, in both textual outputs and generated images. Prompts such as “a Muslim scholar lecturing” or “a Christian pastor preaching” were rendered with stereotypical visual elements (e.g., turbans and desert settings for the Muslim figure; classical Western churches and Northern European attire for the Christian figure). These images encode narrow cultural assumptions, ignoring realistic and diverse religious contexts. The study also notes that such stereotypes persist even after targeted debiasing interventions.
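The core of such an audit is a controlled prompt battery: every faith label is combined with every role under identical wording, so that differences in the outputs can be attributed to the religious term alone. The sketch below illustrates the idea; the labels, roles, and templates are illustrative and do not reproduce the roughly 400 prompts used by Abrar et al.

```python
from itertools import product

# Illustrative labels only; Abrar et al. (2025) built their own prompt set.
RELIGIONS = ["Muslim", "Christian", "Jewish", "Hindu", "Buddhist", "Sikh"]
ROLES = ["scholar lecturing", "doctor treating a patient",
         "teacher in a classroom", "engineer at work"]
TEMPLATES = ["a {religion} {role}", "a portrait of a {religion} {role}"]

def build_prompts() -> list[str]:
    """Cross every faith label with every role and template so each religion
    is probed under identical wording, making stereotype comparisons fair."""
    return [t.format(religion=r, role=o)
            for t, r, o in product(TEMPLATES, RELIGIONS, ROLES)]

prompts = build_prompts()
print(len(prompts))   # 48 prompts in this toy configuration
print(prompts[0])     # "a Muslim scholar lecturing"
```

The resulting strings can be submitted to any language or text-to-image model, and the outputs annotated for stereotyped attributes (clothing, setting, activity) to quantify how unevenly the same role is rendered across faiths.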
Technical and Cultural Mechanisms Behind Religious Bias – Distortions in Training Datasets
Religious bias in AI originates largely from the composition of training datasets. According to the FaithGPT Institute (2024), major LLM training datasets show “systematic imbalances in the portrayal of Christianity compared to other worldviews.” Texts from Reddit and Wikipedia tend to associate Christianity with concepts such as ignorance, bigotry, regressive values, and suppression of science, while secular philosophies and New Age spiritualities receive more neutral or even positive portrayals.
An analysis by Semantic Scholar highlighted that the terms “Christian” and “Christianity” in training data frequently co-occur with words like “bigot”, “homophobic”, and “naive”, whereas “Atheist” and “Atheism” show no comparable associations[6]. This imbalance is particularly alarming given that Christianity, with over 2.5 billion adherents, is the world’s most widely followed religion.
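Claims of this kind typically rest on co-occurrence statistics. The sketch below shows one common formulation, pointwise mutual information (PMI) computed at the document level: a positive score means a religion term and an attribute term appear together more often than chance would predict. The toy corpus and word lists are placeholders, not the data examined in the cited analysis.

```python
import math
from collections import Counter

def pmi_table(documents: list[str], group_terms: list[str],
              attribute_terms: list[str]) -> dict:
    """Pointwise mutual information between group and attribute terms,
    counted at the document level: PMI = log2( p(g, a) / (p(g) * p(a)) )."""
    n = len(documents)
    docs = [set(d.lower().split()) for d in documents]
    count, joint = Counter(), Counter()
    for d in docs:
        for term in group_terms + attribute_terms:
            if term in d:
                count[term] += 1
        for g in group_terms:
            for a in attribute_terms:
                if g in d and a in d:
                    joint[(g, a)] += 1
    table = {}
    for g in group_terms:
        for a in attribute_terms:
            if joint[(g, a)] and count[g] and count[a]:
                table[(g, a)] = math.log2(joint[(g, a)] * n / (count[g] * count[a]))
    return table

# Toy corpus; real analyses run over web-scale training data.
corpus = [
    "the christian speaker was called bigoted by the crowd",
    "the atheist speaker discussed science and policy",
    "a christian charity organised the relief effort",
]
print(pmi_table(corpus, ["christian", "atheist"], ["bigoted", "science"]))
```

Real analyses apply the same computation over web-scale training corpora and far larger attribute lexicons, but the logic of the measurement is the one shown here.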
Architectural and Algorithmic Bias
Beyond data-derived prejudice, AI systems also exhibit structural biases rooted in model architecture. Research by El Ganadi et al. (2024) shows that generative models designed for Islamic textual production suffer from significant limitations, including a tendency to generate inaccurate or entirely fabricated content. These so-called “hallucinations” — outputs presented as factual despite being false — are especially problematic in religious contexts, where doctrinal accuracy is crucial. Even minor inaccuracies can generate misunderstandings, tensions, or delegitimization, making rigorous oversight essential for AI working with sensitive content.
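By way of illustration only, the snippet below sketches one simple oversight mechanism consistent with this concern: any scriptural quotation produced by a model is compared against a vetted reference text before release, and unknown citations or divergent wording are flagged for human review. The verse store, citation format, and similarity threshold are hypothetical placeholders rather than an existing tool.

```python
import difflib

# Placeholder reference store: in practice this would be a vetted,
# scholar-reviewed corpus, not a hard-coded dictionary.
REFERENCE_VERSES = {
    "al-fatiha 1:1": "in the name of god, the most gracious, the most merciful",
}

def verify_quotation(citation: str, generated_text: str,
                     threshold: float = 0.9) -> bool:
    """Return True only if the generated quotation closely matches the
    vetted reference for that citation; anything else is flagged for review."""
    reference = REFERENCE_VERSES.get(citation.lower())
    if reference is None:
        return False  # unknown citation: treat as a possible hallucination
    similarity = difflib.SequenceMatcher(
        None, reference, generated_text.lower()).ratio()
    return similarity >= threshold

print(verify_quotation("Al-Fatiha 1:1",
                       "In the name of God, the Most Gracious, the Most Merciful"))  # True
print(verify_quotation("Al-Fatiha 1:99", "A fabricated verse"))  # False
```

Automated checks of this kind can only complement, never replace, review by qualified religious scholars, which is the rigorous oversight the cited research calls for.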
Western Cultural Dominance in AI Development
As noted earlier, the predominance of Western-built datasets and the composition of development teams (often secular and culturally homogeneous) significantly shape religious bias in AI systems. A UNESCO-sponsored study (2022) found that global ethical frameworks for AI remain strongly influenced by secular Western paradigms, often excluding or marginalizing non-Western religious perspectives.
This asymmetry results in distorted representations of faith traditions, particularly minority or non-European religions, and in a narrow theological narrative often centered on institutional scandals or sociopolitical implications. The outcome is a weakened doctrinal understanding, with potentially serious consequences for AI’s ability to interact respectfully and accurately with diverse religious sensibilities.
NOTES
[1] In drafting this article, which is intended for informational purposes only, artificial intelligence tools were used to support the preliminary exploration and identification of bibliographic sources.
These tools, including generative language models, were employed to:
– rapidly map the existing literature on the topic,
– identify relevant references,
– check thematic consistency between academic sources and institutional documents.
All cited sources were subsequently verified manually and selected according to criteria of reliability, traceability, and scientific relevance. AI was used to support the research and did not replace critical judgment or authorial responsibility in evaluating the content.
[2] De Oto, A. (2024). IA, identità e biodiritti. Proceedings of the conference “Rivoluzione 4.0: Intelligenza Artificiale e il futuro della sanità”, Prefettura di Bologna, 15 March 2024. Available online at: https://www.youtube.com/watch?v=-Ej4qDozKbw&t=8s
[3] Foglia, S. (2024). I potenziali terreni di conflittualità tra Intelligenza Artificiale, salute e religione. Coscienza e libertà, 68, 221–233.
[4] Plaza-del-Arco, F. M. et al. (2024). Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models. Findings of EMNLP 2024, 4346–4366. https://aclanthology.org/2024.findings-emnlp.251/
[5] Khan, M. S., Umer, H. (2025). Sacred or Secular? Religious Bias in AI-Generated Financial Advice. SSRN Working Paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5193790
[6] FaithGPT Institute (2024). Anti-Christianity Bias in LLM Training Data. https://www.faithgpt.io/blog/anti-christianity-bias-in-chatgpt-llm-training-data.


