
…sumption of such an AI was when, in June 2022, one of Google's employees, Blake Lemoine, claimed that the company's LLM has a soul. Google responded as follows: "Our team has reviewed Blake's concerns and has informed him the evidence doesn't support his claims. He was told there was no evidence that LaMDA was sentient and lots of evidence against it" (quoted in Chalmers 2022, 1).

Philosophers like Dennett have said that we are still far from a conscious AI, even with the recent advances in machine learning; however, Dennett expected that in the future, perhaps in 10–50 years, "it will be absolutely possible" (Tufts University 2020, 0:31). He added, though, that "we don't need artificial colleagues" who may rule over humanity. Chalmers partly shares this view, stating that "it's reasonable to have a significant credence that we'll have conscious LLMs within a decade" (Chalmers 2022, 19). His conclusion assumes that, in time, all the challenges we currently face in building a conscious AI might be resolved: "Within the next decade, even if we don't have human level artificial general intelligence, we may have systems that are serious candidates for consciousness. Although there are many challenges and objections… meeting those challenges yields a potential research program for conscious AI" (Chalmers 2022, 20).

One interesting approach to relating AI to subjective consciousness is computational functionalism, which is a "claim about the kinds of properties of systems with which consciousness is correlated" (Butlin 2023, 13). According to this approach, the consciousness of a system depends on features that are more abstract than the lowest-level details of its physical make-up. Without going into the details, Butlin et al. conclude that building a conscious AI based on computational functionalism is feasible: "Our analysis suggests that no current AI systems are conscious but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators" (Butlin 2023, 1). On the other end of the scale, proponents of integrated information theory (Tononi and Koch 2015, 1; Oizumi et al. 2014, 1–24) hold that even a system implementing the very same algorithms as the brain's functions is unlikely ever to be conscious.

4.2 Transhumanism and the Realisation of Consciousness through Living Systems

Another interesting field of study is transhumanism, the modification of humans via emerging sciences such as genetic engineering or digital technology in order to alleviate suffering and enhance human capabilities. Anil Seth, a proponent of materialist explanations of consciousness, explores "the idea that true consciousness can only be realised by living systems" (Ghosh 2025). He says that "a strong case can be made that it isn't computation that is sufficient for consciousness but being alive… in brains, unlike computers, it's hard to separate what they do from what they are" (quoted in Ghosh 2025). Without this separation, he argues, it is difficult to believe that brains "are simply meat-based computers" (quoted in Ghosh 2025; TED 2017, 15:21). We note that Seth's observation on the difficulty of separating functionality (i.e., doing) and identity (i.e., being something) points in the direction of recognising the sameness of kriyā and jñāna, which is fundamentally the realisation of the essential oneness of all things (Abhinavagupta 2023, 124).
Muruganar records a similar teaching from Ramana Maharshi in verse 435 of Guru Vachaka Kovai (Muruganar 2004, 120): "The natural consciousness of existence [I am], which does not rise to know other things, is the Heart [note: (Sa. hṛdayaṁ)]. Since the truth of Self is clearly known only by this actionless Consciousness, which [merely] remains as Self, this [i.e. the Heart] alone is the supreme Knowledge."

In contrast to transhumanism, Seth's vision is a technology consisting of tiny collections of nerve cells called "cerebral organoids" or "mini brains", a more advanced and larger version of which could give rise to the emergence of subjective consciousness instead of their silicon-based counterparts (Ghosh 2025). While this approach seems much more "realistic" at first glance than a computational alternative, approximating life via "mini brains" also assumes the replication of the complex functional model of the kośas (Sa. "sheaths"), a system that is rooted in ignorance (Timčák and Pék 2024, 50). According to Abhinavagupta, ajñāna (Sa. partial or dualistic knowledge) is not the complete negation of knowledge, which is the typical understanding of ignorance, but the perception of
