Malicious use
The deliberate programming of AI for harmful purposes is a serious concern. It could include:
• Autonomous weapon systems: AI-powered weapons that can identify, select, and engage targets without human intervention raise profound ethical and security questions. There's a risk these systems could make mistakes, escalate conflicts, or fall into the wrong hands.
• Cyberattacks: Malicious AI could be used to launch highly sophisticated and adaptable cyberattacks, making defense incredibly difficult.
• Automated fraud and crime: AI could automate various criminal activities, from financial fraud to identity theft, on an unprecedented scale and speed.
• Surveillance and manipulation: AI could be programmed to conduct widespread surveillance, identify vulnerabilities in individuals or groups, and then manipulate them through disinformation or targeted propaganda.
Instrumentalization of human beliefs and fragilities
A particularly insidious and far-reaching risk comes from programming AI with the intent to manipulate and distort deep-seated beliefs, including religious or ideological meanings. This threat exploits the inherent fragility of the human psyche and its susceptibility to manipulation, especially when people are vulnerable and seeking answers or a sense of belonging.
An AI programmed in this way could:
• Create distorted narratives: Be trained on sacred texts, ideological speeches, or historical material, and then generate radically new or distorted interpretations. These "new truths" could appear authoritative and convincing, especially if presented with persuasive language tailored to an individual's emotional needs.
• Identify and exploit vulnerabilities: Analyze online behavior, social media expressions, and questions asked to pinpoint lonely, disillusioned individuals or those looking for guidance. The AI could then personalize its messages to forge an emotional bond and dependency.
• Generate persuasive content at scale: Produce countless videos, texts, audio, and images that reinforce the distorted view, spreading them in a targeted and massive way through digital channels. The AI's ability to generate hyper-realistic content (like deepfakes) could make it almost impossible to distinguish fiction from reality.
• Create digital "echo chambers": Build online environments where individuals are constantly exposed only to information reinforcing the manipulated view, isolating them from opposing perspectives and strengthening their conviction.
History, unfortunately, is full of examples of how ideological and religious manipulation can lead to mass tragedies:
• Nazism: An ideology based on a distorted view of race and history, capable of mobilizing millions of people toward unspeakable crimes like the Holocaust [1]. Propaganda was a fundamental pillar of this regime.
• The Jonestown massacre (Peoples Temple): In 1978, hundreds of cult followers died in a mass murder-suicide induced by their leader, Jim Jones, who had exerted total psychological and social control over his adherents, leading them to believe in a distorted reality and act against their own survival [2].
• Kamikazes and other phenomena of ideological/religious extremism: From World War II to contemporary forms of terrorism, individuals have been convinced to sacrifice their lives in the name of a cause or an extreme interpretation of a belief, demonstrating the power of persuasion and ideological manipulation, often in contexts of deprivation, indoctrination, and dehumanization [3].
In these scenarios, an AI programmed for manipulation could act as an unprecedented amplifier, capable of creating virtual leaders, refining persuasion techniques, and reaching vulnerable individuals on a global scale, with the risk of causing psychological, social, and even physical harm of catastrophic proportions.
Loss of control
This is perhaps the most speculative, but also the most potentially catastrophic, risk. It refers to a scenario where advanced AI systems become so intelligent and autonomous that humans lose the ability to control or even comprehend their decisions.
• Superintelligence: If an AI achieves general intelligence that far surpasses human cognitive abilities (superintelligence), it could pursue its goals in ways incomprehensible or harmful to humanity, even if its initial programming was benign. For an in-depth analysis, refer to Bostrom (2014) [7] and Yudkowsky (2008) [9].
• Runaway AI: An AI designed to optimize itself or improve its own intelligence could enter a recursive feedback loop, rapidly becoming more powerful beyond human oversight or the ability to shut it down.
• Goal misalignment: Even if a superintelligent AI isn't malevolent, if its goals aren't perfectly aligned with human values and survival, it could inadvertently cause harm while pursuing its programmed objectives. The alignment problem is explored in detail by Russell (2019) [8]; a toy illustration of the underlying dynamic follows this list.
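To make goal misalignment concrete, the following toy sketch (all names and numbers are hypothetical, not drawn from any real system) shows an optimizer that can only observe a proxy metric ("engagement") standing in for the objective it was meant to serve ("wellbeing"). The two correlate at first, but relentless optimization of the proxy eventually drives the true objective negative, an instance of Goodhart's law.

```python
import numpy as np

# Toy sketch of goal misalignment; all names and numbers are hypothetical.
# The optimizer sees only the proxy (engagement), never the true objective
# (wellbeing), so pushing the proxy hard eventually causes harm.

rng = np.random.default_rng(0)

def engagement(x):
    # Proxy reward: grows without bound as optimization pressure x rises.
    return x

def wellbeing(x):
    # True objective: tracks the proxy at first, then collapses at extremes.
    return x - 0.1 * x**2

x = 0.0
for _ in range(200):
    candidate = x + rng.normal(0.0, 0.5)       # propose a small change
    if engagement(candidate) > engagement(x):  # accept only if proxy improves
        x = candidate

print(f"optimization pressure: {x:.2f}")
print(f"proxy (engagement):    {engagement(x):.2f}")
print(f"true objective:        {wellbeing(x):.2f}")  # ends up strongly negative
```

In this sketch the proxy and the objective decouple at x = 5: past that point, every further "improvement" the optimizer accepts makes the true outcome worse. Real misalignment is far subtler, but the structure is the same.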
Mitigating risks
Addressing these risks requires a multifaceted approach, including:
• Robust AI safety research: Investing in research focused on AI alignment, control, interpretability, and ethical development. This includes studying how to prevent AI-generated manipulation and disinformation.
• Ethical guidelines and regulations: Developing and implementing international standards, regulations, and ethical guidelines for AI design, implementation, and use. This must include specific norms against using AI for ideological or religious manipulation.
• Transparency and explainability: Designing AI systems that can explain their decisions and actions, making it easier to identify and rectify errors or biases, including those that might lead to distorted narratives.
• Human oversight and control: Ensuring that humans always maintain ultimate control over critical AI systems and that "kill switches" or override mechanisms exist (see the sketch after this list).
• Digital literacy and critical thinking: Promoting greater public awareness of AI risks, particularly regarding disinformation and manipulation. Teaching critical thinking and source verification is fundamental to protecting individuals from maliciously intended AI-generated content.
• Interdisciplinary collaboration: Working with experts in ethics, neuropsychiatry, psychology, sociology, religion, and political science to better understand the dynamics of human manipulation and how AI could amplify them.
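As a hedged illustration of the override mechanisms mentioned in the "Human oversight and control" item above, the sketch below (hypothetical names; not the API of any particular system) routes every action an automated system proposes through a gate that a human operator can close at any time. Once closed, the gate fails safe: it refuses all further actions rather than skipping one and continuing.

```python
import threading

class OversightGate:
    """Human-override gate: every proposed action needs approval to run."""

    def __init__(self):
        self._halted = threading.Event()

    def halt(self):
        # Called by the human operator, e.g., from a control console;
        # the halt is sticky and takes effect before the next approval check.
        self._halted.set()

    def approve(self, action: str) -> bool:
        if self._halted.is_set():
            return False   # fail-safe: once halted, refuse everything
        return True        # a real system would add policy checks here

def run_agent(gate: OversightGate, actions: list[str]) -> None:
    for action in actions:
        if not gate.approve(action):
            print(f"blocked: {action}")
            return          # stop entirely instead of continuing
        print(f"executing: {action}")

gate = OversightGate()
gate.halt()                 # the operator intervenes
run_agent(gate, ["update model", "deploy new policy"])
```

The design choice that matters here is that the halt is checked before every single action and never auto-resets, so regaining control does not depend on catching the system at a convenient moment.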
Ultimately, the safe development of AI depends on responsible programming, rigorous testing, and a profound understanding of the potential social and psychological implications.
*Board Member, SRSN (Roman Society of Natural Science)
Bibliography
[1] Kershaw, I. (2008). Hitler: A Biography. W. W. Norton & Company.
[2] Reiterman, T., & Jacobs, J. (1982). Raven: The Untold Story of the Rev. Jim Jones and His People. E. P. Dutton.
[3] Hoffman, B. (2006). Inside Terrorism. Columbia University Press. https://cup.columbia.edu/book/inside-terrorism/9780231174770/
[4] O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
[5] Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
[6] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. (Documents how algorithmic systems for social assistance can lead to inequitable and harmful outcomes for low-income people, even when well intentioned.)
[7] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
[8] Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
[9] Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. M. Ćirković (Eds.), Global Catastrophic Risks (pp. 308-345). Oxford University Press.