Not Just Hallucinations: The Virtual in the GAI Era

by Chia-rong Tsao, Dec. 2023

The “old” virtual

Although the concept of the virtual has a long history, today we usually associate “the virtual” with information technology, computers, and the internet. To put it another way, at least since the internet became widespread in the 1990s, phenomena like virtual spaces and virtual reality have become integral parts of contemporary daily life. Our imaginations and perceptions of the virtual often involve three dimensions. The first, and perhaps the earliest, is that the virtual meant “not real.” We can see this kind of observation in Sherry Turkle’s book Life on the Screen: Identity in the Age of the Internet, published in 1995. Turkle argued that although the interpersonal relationships the internet brought about seemed unreal and deceptive, they nonetheless profoundly and authentically influenced people’s identities.

Secondly, with the advancement and widespread adoption of information technology, we began to take virtual space more seriously and to desire that the virtual resemble reality more closely. In other words, we hoped that virtual space could fully replicate real life and the real world. This is exemplified by the development and use of various head-mounted VR displays, such as those designed for museum tours. The virtual has thus shifted from being “not real” to something people hope can be “true to life.”

In the third dimension, however, we can observe that both the belief that “the virtual is not real” and the desire that “the virtual can be true to life” reflect an anthropocentric imagination. That is to say, what counts as reality is judged from a human-centric perspective. In the early stages of internet development, we believed that online interactions and experiences were different from real life offline and therefore not to be taken seriously. With the advancement of VR technologies, we began to hope that they could simulate our life “in reality.” What these notions of the “real” or “reality” refer to is premised on human physical perception and understanding, which is an anthropocentric point of view.

In other words, anything outside of the anthropocentric perspective would be considered fantasy or not sufficiently real, which sometimes even implies a form of devaluation or a potential threat. For example, Michael Heim (1993) once argued that the most dangerous aspect of the virtual space created by the internet is that we may lose touch with our inner states, that is, “to lose the acute sensitivity to our bodies, the simplest kinds of awareness like kinesthetic body movement, organic discomfort, and propriosensory activities like breathing, balance, and shifting weight.”

However, with recent breakthroughs in artificial intelligence, it seems necessary to reconsider the concept of the virtual.

Generative AI and the “new” virtual

The third wave of artificial intelligence began around 2010, following two previous AI winters. This wave has witnessed significant advances in machine learning and deep learning algorithms. From AlphaGo, which showcased the power of AI in board games, to ChatGPT, built on Large Language Models (LLMs), we now face a moment in which the boundaries between humans and nonhumans are gradually becoming blurred and even crossed.

In particular, technologies like ChatGPT, a prominent example of generative AI, have brought about a transformation that highlights the necessity of rethinking the concept of the virtual. Generative AI refers to “an artificial intelligence field that concentrates on generating new and original information by machine learning on massive databases of experiences” (Aydin & Karaarslan, 2023). Its operation is primarily “performed by using a model that has been trained on a large dataset of examples and constructing new instances that are comparable to the training dataset” (Aydin & Karaarslan, 2023). For example, ChatGPT uses this mechanism to generate text that appears as if it were written by a human, while Midjourney creates images based on user instructions.
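To make this mechanism concrete, the following minimal sketch in Python samples new text from a model pretrained on a large corpus. It uses the open-source Hugging Face transformers library and the small GPT-2 model, both chosen here as illustrative stand-ins rather than the proprietary systems discussed in this essay; the prompt and parameters are likewise assumptions for demonstration only.

from transformers import pipeline

# A model pretrained on a large text corpus; GPT-2 stands in here for the
# much larger proprietary models behind systems such as ChatGPT.
generator = pipeline("text-generation", model="gpt2")

prompt = "The virtual is"
outputs = generator(
    prompt,
    max_new_tokens=40,       # length of each generated continuation
    do_sample=True,          # sample from the learned distribution instead of
                             # always taking the single most probable token
    temperature=0.9,         # higher values give more varied, and more
                             # "hallucination-prone", continuations
    num_return_sequences=2,  # produce two different continuations
)

for out in outputs:
    # Each continuation is plausible new text comparable to the training
    # data, not a statement checked against any external "reality."
    print(out["generated_text"])

The point of the sketch is simply that the output is constructed, token by token, from patterns in the training data; nothing in the process consults an external standard of truth.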

Regarding the development of generative AI, borrowing from the discussion by Coeckelbergh and Gunkel (2023), there are generally two attitudes. Firstly, some argue that while ChatGPT may appear to speak and respond like a human, it is not truly intelligent and may often produce nonsensical responses. Secondly, others believe that generative AI has indeed acquired real cognitive capabilities. While it may make mistakes at times, it can also reflect on its own statements and make corrections, suggesting it has evolved some form of consciousness.

What is remarkable is that these two attitudes correspond precisely to the two earlier views of the virtual: some people believe it to be false and unreal, while others believe, or expect, it to be real enough. At the same time, regardless of which stance one takes, as Coeckelbergh and Gunkel (2023) argue, both presuppose a distinction between appearance and reality. In other words, both argue on the premise that human intelligence is real and that machines can only simulate, imitate, or fake intelligence. In my own words, this is a form of anthropocentric criterion. Definitions of AI, such as the one initially proposed at the Dartmouth workshop (to simulate “human” intelligence), as well as well-known thought experiments like the Chinese Room, are in fact anthropocentric criteria.

However, the emergence of generative AI like ChatGPT challenges this anthropocentric premise. Therefore, just as Coeckelbergh and Gunkel (2023) argue, we need to move beyond the distinction between appearance and reality. I believe that generative AI such as ChatGPT makes us reconsider what “the virtual” means.

The assembling generative process: a de-anthropocentric perspective

Firstly, the virtual created by generative AI is neither “less real” nor a pursuit of being “real enough.” On the one hand, in terms of its capacity to handle vast amounts of data beyond human capabilities, generative AI like ChatGPT may be considered “closer to reality” than any individual. This is why some people firmly believe that generative AI not only possesses genuine cognitive abilities but may even achieve what is known as artificial general intelligence (AGI).

On the other hand, strictly speaking, generative AI does not actually pursue being “real enough” but rather seems to be creating its own version of reality. For example, in its early stages of generating images, Midjourney, another generative AI, often depicted human hands with six fingers instead of five. From the perspective of a human user, this might be considered an error, but what if we step outside the anthropocentric criterion? As Coeckelbergh and Gunkel (2023) argue when discussing the phenomenon of hallucination in ChatGPT, rather than calling it the “illusion” of text (or a “hallucination”), it is simply text, more specifically, text produced in and through a process and performance that contains human and nonhuman elements (Coeckelbergh & Gunkel, 2023: 5).

In other words, the “errors” of generative AI, or its “hallucinations,” highlight why we need to rethink the concept of the virtual. When viewed from an anthropocentric perspective, these hallucinations are problems that need correction, like the adjustments made to Midjourney. However, this can lead us to miss the opportunity to reconsider the notion of the virtual, trapping us in an anthropocentric view and overlooking the hidden issues that the virtual may pose in the era of generative AI. That is to say, when “errors” are no longer as obvious as six fingers, will they be accepted as reality? Or, within the confines of the distinction between appearance and reality, will whatever is not discerned as a “hallucination” be regarded as real and thus accepted?

As Coeckelbergh and Gunkel (2023) argue, it is only when we step out of the anthropocentric perspective that we may be able to see whether the so-called “hallucinations” are actually “errors” or should instead be considered products co-assembled by humans and nonhumans. Generative AI like ChatGPT generates content from models trained on vast datasets and refined through reinforcement learning. This implies that, in this process, humans are no longer the primary authors, nor are they capable of fully comprehending what the AI has done within it. Consequently, relying solely on an anthropocentric perspective and attempting to understand “reality” through an external criterion may overlook the intrinsic meaning inherent in the generative process itself.

Only by stepping out of an anthropocentric perspective and reexamining what humans and nonhumans (including generative AI) have jointly accomplished in the generative process can we contemplate, from a different and more ethically and politically charged perspective, how each generative process unfolds, how it acquires meaning as a virtual entity, and how it generates impacts. In other words, each “hallucination” may not necessarily be an “error” but rather a manifestation of a certain “virtual” that has the potential to generate real-world impacts. Conversely, every seemingly correct generation could also be a form of “reality” imbued with specific ethical and political significance.

Drawing on the words of Coeckelbergh and Gunkel (2023: 5) as a conclusion, this implies that we need to acknowledge: “No pre-existing metaphysical reality or real/appearance dichotomy is presupposed. Instead, the process and the performance create what is (taken to be) real; it produces a particular reality-experience.”


AUTHOR
Chia-rong Tsao, Assistant Professor, Graduate Institute for Social Transformation Studies, Shih Hsin University, Taiwan


WORKS CITED

Aydin, Ö. & Karaarslan, E., (2023). “Is ChatGPT leading generative AI? What is beyond expectations?” Academic Platform Journal of Engineering and Smart Systems. 11(3), 118-134.

Coeckelbergh, M. & Gunkel, D. J., (2023). “ChatGPT: Deconstructing the debate and moving it forward.” AI & SOCIETY. https://doi.org/10.1007/s00146-023-01710-4

Heim, M. (1993). The Metaphysics of Virtual Reality. Oxford University Press.

Turkle, S. (1995). Life on the Screen: Identity in the Age of the Internet. Simon & Schuster.
