VOLUME 11 ISSUE 2 FALL 2025

Spirituality Studies 11-2 Fall 2025 73 Gábor Pék, Gejza M. Timčák 3.3 Security Concerns of Using AI Another key concern regarding the use of AI as a spiritual guide for self-study is the potential risk posed by threat actors who could intentionally manipulate AI systems to mislead or deceive aspirants. Security researchers (Kang et al. 2024), for example, have also observed that instruction following LLMs have abilities like standard computer programs. Based on the concept of Return-Oriented Programming (Roemer et al. 2012), LLMs have several “gadgets” that can be chained together to create an instruction sequence, for example, by assigning variables, concatenating strings or branching. Moreover, these so-called Prompt Injection (PI) attacks originally suggested by (Perez and Ribeiro 2022) can “either hijack the original use-case of the model or leak the original prompts and instructions of the application” (Greshake et al. 2023, 2). PI can be triggered directly (Kang et al. 2024) via interacting with the model itself (e.g., ChatGPT, GPT4 prompt interface) or indirectly (Greshake et al. 2023) as an augmented component of a remote service or system. More precisely, (Greshake et al. 2023, 2) shows “that Indirect Prompt Injection can lead to full compromise of the model at inference time analogous to traditional security principles. This can entail remote control of the model, persistent compromise, theft of data, and denial of service.” It has been also demonstrated that even custom-tailored malicious content or scam can be generated by competitive models such as ChatGPT (Kang et al. 2024) that has a reportedly state-ofthe-art defence mechanism (Markov et al. 2023) against such manipulations. Another interesting study (Yu et al. 2024) reveals that even a single critical parameter called super weight can destroy an LLM’s text-generation capability. Others (Zhang et al. 2025) demonstrated how to persistently poison LLMs with only the 0.1 % of a model’s pre-training dataset to successfully accomplish three out of four security attack objectives (e.g., belief manipulation). Further understanding on how to attack LLMs can be found in (Bagdasaryan et al. 2023; Zhu et al. 2023; Zou et al. 2023) while defensive strategies against such adversarial manipulations are evaluated, for example, in (Jain et al. 2023). A comprehensive list of LLM Safety, Security and Privacy studies is maintained by ThuCCSLab on GitHub (ThuCCSLab 2024). 4 AI as a Guru? With the recent advancement of LLMs, there is a strong interest from scientists, researchers and philosophers to answer and resolve the uncertainty around AI and its relation to consciousness. Many hope to “awaken” AI to enable conscious actions so that the scope of agency which was so far attributed to humanity (see kalā of kañ cukā s [2]) could be now owned by a “deified” machine. The 2024 Physics Nobel prize winner Geoffrey Hinton, considered to be the godfather of AI, claims that current multimodal AI chatbots (i.e., processes various data inputs such as text, video, voice, etc.) are already conscious (This Is World 2025 b, 2:16). In this section, we elaborate a bit more on the idea whether an AI can be conscious and replace a real guru who helps and observes the whole process of svādhyāya. To achieve this, an AI should get qualified to access the subtle dimensions of existence which only opens up for prepared sādhakas after completing all the preliminary tests brought about by Guardian principles (Timčák 2017, 17). Thus, to conjoin traditional and contemporary approaches we must lay down the frameworks of possible collaboration. 4.1 AI and its Potential Relation to Consciousness As discussed above one of the leading interpretations of consciousness today is phenomenal or “subjective” that is related to the experience of thinking, emotions and so on. In accordance with this definition, computer science hopes for creating a conscious AI that could resolve the most painful and urging problems that humanity failed to resolve so far. New initiatives like technophilosophy aims at discussing these aforementioned questions as David Chalmers phrases it in his talk with Swami Sarvapriyananda (Vedanta Society 2022a, 8:53): “I see as a two-way interaction between philosophy and technology. So, it’s partly thinking philosophically about technology. Taking new technologies like Artificial Intelligence, Virtual Reality… What can we know? What kind of realities will these create.” At the same time, investigations related to a “conscious” AI raise hot debates and public concerns also as building such systems may carry high threats to the society, culture, and private zone. One of the main events that triggered the as-

RkJQdWJsaXNoZXIy MTUwMDU5Ng==