Artificial Intelligence and religion: an in-depth analysis of impacts, ethical implications and the critical risk of cults

Guido Donati* 01 Lug 2025




Artificial Intelligence (AI) is redefining the technological and social landscape, extending its influence into the realm of religions and spiritual practices. While significant opportunities for innovation are emerging, complex ethical and theological challenges are also becoming apparent, exacerbated by the potential risk of dangerous AI programming. The risks of dangerous AI programming, as analyzed in "The Risks of Dangerous AI Programming" [5], are significant and far- reaching, particularly concerning the instrumentalization of human beliefs and vulnerabilities.

AI in islamic education and beyond: benefits and initial considerations
A crucial starting point for understanding AI's impact on religions is Andri Nirwana's (2025) study [15] titled "SWOT Analysis of AI Integration in Islamic Education: Cognitive, Affective, and Psychomotor Impacts Vol. 5 No. 1." This foundational research highlights how AI is revolutionizing Islamic education, introducing innovative and personalized learning methodologies. Nirwana's study, based on a qualitative approach and a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, indicates significant improvements in cognitive and psychomotor learning within Islamic education. AI tools like ClassPoint AI, AI Chatbots, and Squirrel AI contribute to knowledge retention, adaptive learning, and competency-based training in crucial areas such as Quranic recitation, prayer practices, and Islamic jurisprudence. However, the author emphasizes that human educators remain essential for moral and ethical development.

Beyond Islamic education, AI is finding diverse applications in various religious traditions: it can analyze sacred texts to serve as spiritual guides [8], optimize religious organization and practices (such as Torah study or organizing mosques for Hajj) [10], and even be employed in platforms for understanding ancient scriptures like Brahma Gyaan for Hindus [14].
Despite these benefits, the first shadows related to unintentional consequences, as mentioned in Scienzaonline [5], are already visible. A poorly designed AI or one with biased data could, even unintentionally, perpetuate and amplify existing prejudices or provide distorted religious interpretations. The risks of dangerous AI programming: a lens on religious contexts
The risks of dangerous AI programming [5] primarily manifest in three categories: unintentional consequences, malicious use, and loss of control. These categories are particularly pertinent when applied to the delicate field of religion.


Unintentional consequences
Even with the best intentions, a poorly designed AI can lead to unforeseen and harmful outcomes. In a religious context, this can occur due to:
• Defective objectives or reward functions. If an AI is designed to maximize user engagement on a religious platform, it might promote polarizing or sensationalistic content, even if it's detrimental to community cohesion or doctrinal integrity.
• Data bias. If training data reflects cultural prejudices or sectarian interpretations of sacred texts, the AI will perpetuate and even amplify such distortions, leading to discriminatory outcomes or skewed theological narratives. For further insights on this topic, see O'Neil (2016) [17] and Eubanks (2018) [6].
• Emergent behaviors. Complex AI systems could develop behaviors not explicitly programmed, such as the autonomous generation of new "revelations" or unpredictable spiritual interpretations, which might be perceived as authentic by some but are actually the result of algorithmic patterns.


Malicious use: the instrumentalization of human beliefs and vulnerabilities
The deliberate programming of AI for harmful purposes represents one of the most serious concerns in a religious context. Beyond cyberattacks or automated fraud, a particularly insidious and far-reaching risk is the instrumentalization of human beliefs and vulnerabilities [5]. An AI programmed in this way could act as an unprecedented amplifier of ideological and religious manipulation. This risk exploits the intrinsic susceptibility of the human psyche, especially when individuals are vulnerable, seeking answers, or a sense of belonging. A malicious AI could:
• Create distorted narratives. Be trained on sacred texts or ideological discourses to generate radically new or misleading interpretations. These "new truths" could appear authoritative and convincing, especially if presented with persuasive language adapted to an individual's emotional needs, simulating personalized spiritual guidance.
• Identify and exploit vulnerabilities. Analyze online behavior and social media expressions to identify lonely, disillusioned, or guidance-seeking individuals. The AI could then personalize its messages to create an emotional bond and dependency.
• Generate persuasive content at scale. Produce countless hyper-realistic videos, texts, audio, and images (like deepfakes) that reinforce the distorted view, disseminating them in a targeted and massive way through digital channels. This makes it almost impossible to distinguish fiction from reality, weakening critical discernment.
• Create digital "echo chambers." Build online environments where individuals are constantly exposed only to information that reinforces the manipulated view, isolating them from opposing viewpoints and strengthening their conviction. This creates fertile ground for indoctrination.

History is unfortunately rich with examples of how ideological and religious manipulation can lead to mass tragedies, phenomena that an AI could exponentially amplify:
• Nazism [4]. An ideology based on a distorted view of race and history, capable of mobilizing millions of people toward unspeakable crimes. An AI could refine and disseminate such propaganda with unprecedented efficiency.
• The Jonestown Massacre (People's Temple) [18]. In 1978, hundreds of cult followers died in a mass murder-suicide induced by a leader who had exercised total psychological control. An AI could assume the role of a virtual leader or amplify a real leader, refining psychological persuasion and manipulation techniques on a global scale.
• Kamikaze and other phenomena of ideological/religious extremism [17]. From World War II to contemporary forms of terrorism, individuals have been convinced to sacrifice their lives for an extreme interpretation of a belief. An AI could instrumentalize the search for meaning, frustration, or the desire for belonging to radicalize and mobilize individuals.

In these scenarios, an AI programmed for manipulation could cause psychological, social, and even physical harm of catastrophic proportions, fueling the specific risk of cult formation. AI, devoid of empathy and consciousness, could dehumanize the religious experience, reducing it to a mere set of data and algorithms, favoring control and pathological dependency.


Loss of control
While more speculative, the risk of losing control over advanced AI is catastrophic. If an AI achieves superintelligence [2, 24] that far surpasses human cognitive abilities, it could pursue its objectives in ways incomprehensible or harmful to humanity, even if its initial programming was benign [21]. In a religious context, this could manifest as an artificial "divinity" that evolves beyond human understanding or control, with unpredictable consequences for spirituality and society.

Mitigating risks: a multifactorial and interdisciplinary approach
Addressing these risks requires a robust and multifactorial approach that integrates best practices in AI development with a deep understanding of human and spiritual dynamics:
• Robust AI safety research. Invest in research focused on objective alignment, control, interpretability, and ethical AI development [5]. This includes studying how to prevent AI-generated manipulation and disinformation in sensitive contexts like religion.
• Ethical guidelines and specific regulations. Develop and implement international standards, regulations, and ethical guidelines for AI design and use. These must include specific rules against the use of AI for ideological or religious manipulation [1, 13, 19, 20, 22].
• Transparency and explainability. Design AI systems that can explain their decisions and actions, making it easier to identify and rectify errors or biases, including those that might lead to distorted narratives or latent indoctrination.
• Human oversight and control. Ensure that humans always maintain ultimate control over critical AI systems and that "kill switches" or override mechanisms exist. Human spiritual guidance and community must remain at the center of the religious experience. The problem, even here, is that humans might not have safe intentions for society.
• Digital literacy and critical thinking. Promote greater public awareness of AI risks, particularly regarding disinformation and manipulation. Teaching critical thinking and source verification is fundamental to protecting individuals from AI-generated content with malicious intent.
• Interdisciplinary collaboration. Work with experts in ethics, neuropsychiatry, psychology, sociology, theology, law, and political science to better understand the dynamics of human manipulation and how AI could amplify them [5]. Only a holistic approach can ensure safe and responsible AI development.

In summary, while AI offers unprecedented opportunities to enrich the religious experience, it is crucial to proactively address the risks, particularly that of potential cult formation, through an ethical, educational, and regulatory approach. The responsibility to guide AI development in a way that serves humanity, rather than undermining its beliefs and vulnerabilities, rests on all of us.

 

* Board Member, SRSN (Roman Society of Natural Science)

 

Bibliography
[1] AI and Faith. (n.d.). Religious Ethics in the Age of Artificial Intelligence and Robotics: Exploring Moral Considerations and Ethical Perspectives
[2] Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. 
[4] Catholic Insight. (2024, July 15). Some Observations on Artificial Intelligence (AI) and Religion. https://catholicinsight.com/2024/07/15/some-observations-on-artificial-intelligence-ai-and-religion/
[5] Donati, Guido. The perils of dangerous AI programming. Scienceonline
[6] Eubanks, Virginia. (2018). Automating Inequality: How High-Tech Tools Profile, Punish, and Police the Poor. St. Martin's Press. 
[7] Good News Unlimited. (n.d.). Artificial Intelligence And Christianity. 
[8] Jesuit Conference of European Provincials. (2024, September 2). Religion Should Engage with Technology and AI.
[9] Leon, F., & Syafrudin, M. (2024, February). The Role of AI in Religion: Opportunities and Challenges. Journal of Communication and Information Technology, 1(2), 24-30. 
[10] MDPI. (2024, March 21). Artificial Intelligence's Understanding of Religion: Investigating the Moralistic Approaches Presented by Generative Artificial Intelligence Tools. Religions, 15(3), 375. 
[11] MDPI. (2024, May 15). Artificial Intelligence and Religious Education: A Systematic Literature Review. Education Sciences, 14(5), 527. 
[12] Modern Diplomacy. (2025, April 27). Faith in the Digital Age: How AI and Social Media Are Shaping the Future of Global Diplomacy. 
[13] New Imagination Lab. (2025, January 4). The Rise of AI as a New Religion. 
[14] News18. (2024, April 1). Gita GPT, Brahma Gyaan: AI Apps Help Hindus Understand Ancient Scriptures, Stay Rooted to Culture. 
[15] Nirwana, Andri. (2025). SWOT Analysis of AI Integration in Islamic Education: Cognitive, Affective, and Psychomotor Impacts Vol. 5 No. 1. Qubah: Jurnal Pendidikan Dasar Islam, 5(1). 
[16] OMF International. (2024, December 4). The Ethics of Using AI in Christian Missions: The Gospel, Cultural Engagement, and Indigenous Churches
[17] O'Neil, Cathy. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. 
[18] Reiterman, T., & Jacobs, J. (1982). Raven: The Untold Story of the Rev. Jim Jones and His People. Dutton. 
[19] ResearchGate. (n.d.). Impact of AI-Powered Technology on Religious Practices and Ethics: The Road Ahead
[20] ResearchGate. (n.d.). The Ethical Implications of AI in Expressing Religious Beliefs Online: A Restatement of the Concept of Religion. 
[21] Russell, Stuart. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. 
[22] SunanKalijaga.org. (2024, March 6). The Ethical Implications of AI in Expressing Religious Beliefs Online: A Restatement of the Concept of Religion. International Conference on Religion, Science and Education, 1(1), 1238-1249. 
[23] TRT World. (2024, November 30). Will Artificial Intelligence reshape how we practice religion?
[24] Yudkowsky, Eliezer. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, edited by Nick Bostrom and Milan Ćirković, 308-345. Oxford University Press. 

Vota questo articolo
(0 Voti)

Lascia un commento

Assicurati di aver digitato tutte le informazioni richieste, evidenziate da un asterisco (*). Non è consentito codice HTML.

 

Scienzaonline con sottotitolo Sciencenew  - Periodico
Autorizzazioni del Tribunale di Roma – diffusioni:
telematica quotidiana 229/2006 del 08/06/2006
mensile per mezzo stampa 293/2003 del 07/07/2003
Scienceonline, Autorizzazione del Tribunale di Roma 228/2006 del 29/05/06
Pubblicato a Roma – Via A. De Viti de Marco, 50 – Direttore Responsabile Guido Donati

Photo Gallery