AI’s Perfect Crime

by François Debrix

“If you’ve created a conscious machine, it’s not the history of man. It’s the history of Gods.”
— Ex Machina (2014), written and directed by Alex Garland.

“Le crime parfait serait l’élimination du monde réel.”
“The perfect crime would be the elimination of the real world.”
— Jean Baudrillard, Le Crime Parfait [The Perfect Crime], in Mots de Passe [Passwords] (Paris: Pauvert, 2000), p. 75.

Alex Garland’s 2014 film Ex Machina exemplifies today’s growing fear that humans may soon be overtaken, overwhelmed, or overruled by intelligent machines, robotized systems, and digital applications that humans have designed or, at least, have established a technological platform for. A bit like the monstrous creature brought to life by Dr. Frankenstein in Mary Shelley’s classic tale,[1] Ex Machina’s human-looking, human-thinking, human-plotting, and possibly human-feeling Ava eventually sets herself free from her human creator and evaluator (by embodying, replacing, and perfecting their rational capacities, their physical traits, but also their emotional dispositions) to enter the human world and make herself part of society. In a way, Ex Machina’s Ava is a metaphor for the danger that Geoffrey Hinton, the so-called “Godfather of AI,” recently expressed when interviewed about the future of AI technology. Hinton’s belief is that, “in five years’ time, it [AI] may well be able to reason better than us [humans].”[2] Hinton adds: “These things do understand, and because they understand, we need to think hard about what’s next, and we just don’t know.”[3]

Once again reminiscent of Ex Machina and Frankenstein, the main concerns for Hinton and others like him who see the rise of AI as both inevitable and a threat to humankind have to do with a lack of control, with escape, and with the need to devise ways to stop AI before it is too late. In particular, AI systems are designed to absorb and make use of “more and more information from things like famous works of fiction, election media cycles, and everything in between” in order to “get smarter.”[4] Thus, Hinton and his interviewer predict, “AI will just keep getting better at manipulating people.”[5] In part, this is due to the fact that, unlike previous generations of machines or even computers and digital systems, AIs “might escape control” through their capacity to “write their own computer code to modify themselves.”[6]

Key to Hinton’s and others’ fears is the prospect of singularity.[7] Singularity is reached when machines not only are able to think by processing information or data that they have received (presumably, from a human source), but also can singlehandedly, on their own terms, modify their thought patterns, select alternate analytical pathways, reflect upon what knowledge or new technology or objects they have produced, use this production of information, data, or technology for their own purposes, possibly manipulate outcomes for objectives that are no longer clearly those of their human users or mentors, and crucially develop not just new thought patterns in relation to what they have created, but also express a range of feelings or emotions (whether real or simulated) as a result of what they have achieved. Put simply, singularity occurs when so-called machines are able to surpass humans cognitively,[8] physically (although this dimension is not new since for decades robots have been created and used to perform physical tasks that humans could not or did not want to do), and possibly emotionally too, something that, again, Ex Machina’s Ava seems to embody.

With AI singularity also comes the specter of what some have started to call an “extinction-level” threat to humanity as a whole.[9] “Extinction-level” risks to humans are triggered when Ava-like AIs (in other words, seemingly friendly, benevolent, subservient, human-controlled, and perhaps even seductive AI systems) are actually revealed to be “nonhuman minds that… eventually outnumber, outsmart, obsolete, and replace us [humans].”[10] Such a prospect—or, put differently, singularity as the path to humanity’s extinction—apparently demands national security attention today too since smarter-than-human machines and systems, perhaps indistinguishable from humans in shape and appearance (already, “deepfakes” as perfect simulacra are able to provide images and sounds—voices, in particular—as real as and perhaps even more real than actual human bodies, faces, and voices across a wide range of social media), will soon produce “catastrophic security risks” if left unchecked.[11] At least, this is what a recent US State Department report suggested, concluding that the threat of what some have termed “runaway AI”[12] represents a “clear and urgent need” for security intervention, and adding that both the US “President and Vice President will continue to work with international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emergent technologies.”[13] Ironically, this US State Department report on AI’s security risks to humans (and apparently, to US citizens first and foremost) was based on a study that the US government had commissioned Gladstone AI to conduct (Gladstone AI is a company/network that designs and generates studies on the basis of surveys, interviews, and polls, using a wide range of technologies—including AI—to try, as they claim, “to promote the responsible development and adoption of AI”[14]).

One hope among some computer scientists to prevent singularity and what many see as the subsequent “extinction level” threat to humanity is to “align” AI “with human goals” before it is too late.[15] AI’s “alignment project” consists in anticipating how artificially intelligent systems will inevitably make use of rational thinking to obtain desired results and products, even if it means turning to manipulation of rules, changing the rules along the way, designing short-cuts, or even producing lies, falsehood, or deception (at least, what humans may take to be lies, falsehood, or deception), sometimes against the will of their human designers and users. Thus, in a somewhat paradoxical manner, the alignment thesis proposes to make AIs even more human-like by inserting in their algorithms some human moral rules and codes so as to tame AIs’ future outcomes, or at least to make them more compatible with (thus, less threatening to) “us humans.”[16] The alignment thesis with regard to AI seems to hark back to Isaac Asimov’s famous “three laws of robotics,”[17] thus seeking to guarantee that AI’s existence (and AI’s intelligent designs) will remain subordinated to human needs, desires, and ultimately control. Of course, loading up AIs with human moral codes and values for the sake of alignment and control (and, ultimately, for humanity’s survival) may well produce a range of opposite—and seemingly unintended, but perhaps inevitable—effects.
As The New Yorker chronicler Matthew Hutson noted: “communicate a wish to an AI and you may get exactly what you ask for, which isn’t actually what you wanted.”[18] In other words, it is quite possible that AIs will seek to perfect the human moral codes and values (and their applications) inserted in their algorithms. Such perfecting may produce new forms of intelligence and emotional outcomes that, still on the basis of human moral values, end up limiting humans’ reliance on their own moral codes, and thus ultimately undermine humans’ capacities to act and think as moral agents (as free-willing human subjects). After all, in most common situations, the exercise of human morality depends on a sense of flexibility, adaptation, and context that is needed to modulate, for the benefit of life in society, a strict application of moral principles, and that a perfected system of human morality administered through AIs may have to set aside or ignore (a scenario explored in many SciFi stories[19]).

Crucially, the idea of “aligning” AI’s intelligent systems with human needs by inserting in them human moral rules and codes so as to try to produce a future when humans can remain in control of AIs while becoming increasingly dependent on them is symptomatic of a condition that, some 30 years or so ago, Jean Baudrillard had already anticipated and started to diagnose. Baudrillard called this condition “the perfect crime.” As Baudrillard writes, the perfect crime is the “elimination of the real world.”[20] In the perfect crime, Baudrillard adds, it is perfection itself—its quest, and eventually its realization—that is criminal. “To perfect the world,” Baudrillard notes, “is to achieve it, to accomplish it—thus, it is to give it a final solution.”[21] With the perfect crime, everything in the human (real) world is completed and verified. Everything has been proven, realized, or demonstrated. There is nothing more to discover or explain. All truths have been achieved, all puzzles have been solved, by way of perfected and final modes of calculation. As Baudrillard puts it, this amounts to the “extermination of the world by way of its ultimate verification.”[22] And, of course, this is done not by humans themselves, at least not directly, but by machines, or through modes of digital computation and information (as Baudrillard suggests).

Although the perfect crime or the annihilation of the (real) human world is achieved by machines, informational technologies, and digital systems and their computations, the objective of finishing it all, or finally discovering and explaining everything in and about the world, and of affirming and verifying all truths was always a human dream, a project desired and initiated by humans. This was the promethean objective of a rational human intelligence that,[23] as Baudrillard intimates, sought to reduce everything in the real world to itself, to make sure that everything in the world, every truth and reality, could ultimately be identified with the human self or subject. The quest for a total identification of the real world with the human self and its rational designs led to the search for a perfection of “criminal” (that is to say, final, all-verifying, and annihilating) technologies—culminating with AI—that could realize such goals on behalf of and, presumably, under the control of humans. Put differently, humans turned to advanced technologies, information media, and digital intelligence in order to perfect the extermination of everything in the real world that was not or could not be (or refused to be) subjected to identification and to completion by way of verification or confirmation. As Baudrillard writes: “In a literal sense, to exterminate means to deprive something of its own end, of its own form of completion. It means to eradicate duality, to eliminate the antagonism between life and death, to reduce everything to some sort of unique principle—to some sort of ‘unique thought’—about the world, something which can be found in all our technologies, and today above all in our virtual technologies.”[24]      

If we follow Baudrillard’s thinking, the advent of AI technologies today and the possibility that AIs may soon “escape” human control and become a singularity that renders human subjects and selves obsolete should not come as a surprise. Perversely, humanity’s turn to AI to commit the perfect crime, to exterminate itself (that is to say, both to render itself extinct and to deprive itself of its own way of disappearing), was always a key part of the deal that, at least implicitly, humans made with machines, technologies, robots, computers, and more recently digital forms of intelligence. In Baudrillard’s language, it was a sort of “pact” that human subjects made with machines and media (with what Baudrillard called an “evil intelligence”[25]) to ensure the termination of otherness, alterity, and negativity in the real world, and thus also to facilitate the hegemony of the human self/subject, of human rationality (gradually turned into machinic computation) in a world where technologically boosted (and if need be, simulated) reality and truth could only be about the confirmation of the unique and the same, about identification, and about the completion or, better yet, the exhaustion of everything that claims to be real by way of digital systems and modalities of total verification (as self-perfected forms of human-like intelligence). Or, as Baudrillard would have it, “by eliminating every negative principle, we could arrive at a unified and homogenized world, a totally verified world, in a way, and thus, in my view, an exterminated world. Extermination would now be our new mode of disappearance, one that would replace death.”[26]

In a way, AI’s perfect crime was always planned or fated to happen, since our (human) designs for the real world were always pushing towards hegemony by way of complete realization, verification, completion, and thus extermination. In this context, the obsolescence of the human self or subject was always anticipated or scheduled (in a complementary register, human cloning, according to Baudrillard, was similarly geared towards both the perfection and the planned obsolescence of the human self[27]). Thus, AI singularity is also an ironic expression of the fateful elimination or achievement of the human subject (perhaps of human intelligence) as a result of the drive to eliminate negativity or alterity in a completely realized, uniform, and verified world. Speaking directly about the prospects of artificial intelligence, Baudrillard notes: “In this way, the entire system of computerized technology would be the achievement of the [human self’s] perverse desire to vanish into a virtual mode of equivalence, just like the entire human species plans to vanish into a genetic form of sameness.” Baudrillard adds: “Similar to the way the advent of the clone is the final solution to sexuality and reproduction, artificial intelligence is the final solution to thinking.”[28]

Thus, when pundits, scientists, moralists, and even national security specialists today desperately sound the alarm about AI singularity, its “extinction-level” threat, and the risk that humans may soon lose control over “their” smarter machines, what perhaps they are already reporting and, in a way, mourning is the planned extinction of the human subject, of the individual self, and of the principle of human-centric identification or sameness. Projects aimed at “aligning” AI with human needs or recent demands that digital technology and media conglomerates like Amazon, Google, Apple, Microsoft, IBM, and many others not so much get rid of AI (this would be seen as self-defeating for the overall human enterprise, or what’s left of it) but rather be mindful about their AI products or applications, and thus try to “recalibrate” their goals so as to match human beings’ level of “comfort with AI”[29] are all likely pointless endeavors. Studies prompted by seemingly vital questions like “What can we do today to prevent uncontrollable expansion of AI’s power?”[30] are probably not really intended to change much (hardly any scientific institute, technological conglomerate, media company, firm or business interest, university, or government entity today seriously wishes to slow down AI). Apart from a few “feel-good” moments produced by these kinds of projects or studies that profess that they will urgently seek ways to keep AI under human control (and thus will try to slow down the fateful advent of singularity), what again they are mostly expressing is a half-hearted longing for a soon-to-be exterminated human subject or self, even though the process of extermination has already been in the making for decades. This is more or less what Baudrillard recognizes when he writes that “today, what provides the notion of the ‘individual subject’ a foundation is no longer the idea of a philosophical subject or that of a critical subject of history. [Rather, the individual subject today] is a perfectly operational/digital molecule that,… without any destiny, will only follow a pre-coded unfolding and will reproduce itself infinitely, always identical to itself.”[31]

While many are eager to talk about the so-called existential threat to humans that AI poses, few are willing to take seriously (let alone to accept) the idea of AI’s perfect crime. In a way, as I intimated above, recognizing AI’s perfect crime implies that one understands the role that humanity has played in its own planned undoing, often by way of a transmutation of its own (human) intelligence into technologies, machines, media, and systems of its own making that humans started to rely on to perform their project of hegemonic saturation, completion, verification, and domination of the real and the world. Thus, unlike Baudrillard’s notion of the perfect crime, the argument about the danger of singularity and the threat of extinction to humans posed by AI often insists on maintaining some sort of ontological distinction between the real world of humans and the simulated world of AI (and of AI’s creations, many of which are still presented as fakes, artifacts, illusions, or duplicates). This is often the case with the phenomenon of deepfakes.

For many scholars, deepfakes—“videos created or manipulated using artificial intelligence techniques”[32]—are nothing more than contemporary versions of Baudrillard’s notion of simulation, and particularly of Baudrillard’s “third order of the image” whereby the image/simulacrum (in the mode of the trompe l’oeil, for instance) stands in for the real and thus masks its absence.[33] Similar to simulation as the third order of the visual, when reality or representation is no longer possible and all that we, human subjects, get to experience is simulated reality (or hyperreality) as neither true nor false, deepfakes “show us the contours of the environment in which we all now live…, an environment in which resistance and consent to digital exploitation are both being made meaningless.”[34] According to this fairly typical account, deepfakes are very credible and effective as they make us, humans, think and believe that the alternate, simulated reality they depict is our so-called truthful, verifiable, and human-created and controlled reality. Such a take on deepfakes still relies on the presence of a somehow identifiable distinction between simulation (the so-called fake video from AI) and the real world (where an original subject or object allegedly still resides), as if the real world were still different or meaningful. And yet, chroniclers who write about deepfakes also express some uncertainty or even uneasiness about the ontological status of deepfakes since they create “an environment in which all human experience is just content and data to be manipulated and remixed,”[35] and manipulated and remixed not just by human users/manipulators anymore, but by AIs themselves.
In other words, using the Baudrillardian language of simulation, deepfakes pave the way for the passage of the image/simulacrum from the third to the fourth order of the visual (or simulation), when and where the image or artefact “bears no relation to any reality whatever; it is its own pure simulacrum.”[36]

Just as with deepfakes, it is a bit too convenient to insist, as many do, on keeping a distinction between the real world of humans and the supposedly not-quite-so-real-yet world of AI. Such a distinction enables humans to keep the prospect of AI’s perfect crime at bay, at least for a while longer. It also allows humans (starting with some academics in fields such as computer science, sociology, and even philosophy) to defer the realization that singularity is already in the making while it authorizes them to offer warnings about security risks and threats of extinction to come. Crucially, the insistence on a still meaningful distinction between AI’s reality and human reality, and thus between machinic intelligence and human intelligence, keeps alive the belief that “we humans” are somehow still in control, that human-looking, human-thinking, human-behaving, and perhaps human-sensing AIs like Ex Machina’s Ava remain confined to labs, research facilities, human-managed computer programs and algorithms, or SciFi narratives. As science and technology studies scholar and former President of the European Research Council Helga Nowotny recently argued (or, perhaps, willed herself to believe), with AI, what we, humans, are doing is only “creating a mirror world that contains digital entities built to interact with us and to intervene in our world.”[37] Ultimately, and following Baudrillard’s line of thinking once again, what these narratives eager to preserve a distinction between human reality and intelligence and the so-called world of AI try to do is hide the complicity of humans in the making of AI’s own singularity, in AI’s perfect crime, and in our own (humanity’s own) extermination.

While one might excuse computer scientists, engineers, software programmers, sociologists, and even national security experts for their inability or unwillingness to see how humans have been intricately bound to the prospects and designs of AI and its perfect crime (since, after all, for many of them, their job and scholarly reputation depend upon guarding against the risks of AI, trying to devise answers to its threats, and yet continuing to turn to AI for more knowledge and answers to so-called real human problems), it is more difficult to let philosophers desperate to preserve humans’ hoped-for mastery over AI technology off the hook. One such philosopher is German-Korean thinker Byung-Chul Han, who has recently attempted to argue that artificial intelligence and human thinking are fundamentally different and that, consequently, human understanding may still be safe or protected from artificial intelligence since, according to Han, “genuine thinking” will never be available to AI.[38] Relying on a fundamental difference between thinking (that humans, and supposedly only humans, for Han, possess) and computation (that is presumably for Han both the basis but also the limit condition for AI’s intelligence), Han writes that “artificial intelligence may compute very quickly, but it lacks spirit.”[39] Here, Han draws the notion of spirit, and of thinking more generally, from Martin Heidegger and his understanding (via the notion of the human being/body as “Dasein”) that “the world as a totality is pre-reflexively [that is to say, before thinking even takes place] disclosed to humans.”[40] To be human as/with Dasein is thus “to be attuned” to the “totality” of the world, to be in the world’s “grip,” even before one “is aware.”[41] For Han (and presumably for Heidegger too[42]), this means that, originally, “in its initial being gripped [by the world, once again], thinking is so to speak outside of itself.”[43] Thinking is an attribute of humans, and only of humans, because only humans can display this capacity of spirit, or Geist, which “originally means being-outside-of-oneself, or being gripped.”[44] Thus, attributing thinking (presumably, human-like thinking) to machines, computers, or AI is impossible, according to Han. It is impossible, or put differently, AI cannot “think because it is never outside of itself,”[45] or it is never pre-reflexively gripped by the world. When AI is asked to “think,” it merely computes. Artificial intelligence is based on computation only, and in this way, it somehow remains derivative of human thinking, and it can never be confused with or substitute itself for it. As Han would have it, without “pathos” or “passion,” without “heart” or “divination” (all concepts that Han takes to be crucial to Heidegger’s understanding of thinking, that is to say, human thinking), AI is “worldless” since it is deprived of the “totality” that is required in order to be able to think.[46] Han concludes, once again, that, instead of “genuine thinking,” artificial intelligence can only offer computation, which means that “artificial intelligence… [simply] processes pre-given, unchanging facts. It cannot provide new facts to be processed.”[47]

Several of Han’s assumptions about AI are in the process of being rendered inaccurate or obsolete. As many have noted (starting with the so-called “Godfather of AI,” Geoffrey Hinton), more and more AIs do not just compute or “remain limited to correlations and pattern recognition in which… nothing is understood” (as Han further claims),[48] but rather have started to develop the capacity to draw analytical insights from the world (albeit, a world that AIs access through technological devices, information and media interfaces, and digital algorithms, but this also happens to be the same “real” world where human subjects increasingly live and interact). Not only is AI able to “understand the results it computes” (unlike what Han affirms),[49] but AI can start to act on these results, can adapt its future knowledge and indeed understanding patterns to its computations, and, in a way, can generate new worlds in the course of its discoveries. In this way, and as Nowotny had already mentioned (although she may not have measured the full impact of her statement), AIs are now able “to interact in our world,”[50] to impact it, and to change it (in part, by already forcing humans to align their so-called world, or totality, or spirit perhaps, to AIs’ own worlds of more-than-computational intelligent designs).[51]

In the hope of saving the alleged uniqueness of human thinking, Han wants to believe that artificial intelligence “remains rudimentary” (thus, different from and inferior to human thinking) and that it is merely “additive.”[52] Yet, as Hinton clarifies, “the most advanced AI systems can understand, are intelligent, and can make decisions based on their own experiences.”[53] While it is true that humans (and human thinking) “designed the learning algorithm” for each of today’s operating AIs, Hinton adds that, “when this learning algorithm… interacts with data [data that was input by humans into the AI systems or data collected and discovered by the AIs themselves], [this interaction] produces complicated neural networks that are good at doing things,… [and] we don’t really understand exactly how they do these things.”[54] As recent situations (and ethical dilemmas at times too) involving the ChatGPT AI application (to borrow Han’s language, a fairly “rudimentary” AI launched a few years ago by the San Francisco based startup company OpenAI) have shown,[55] an AI is now capable of creating from scratch a term paper for a high school student, of producing the narrative part of a publishable scientific paper for academic researchers, of writing and reciting a complete poem for a college student majoring in English, of coming up with and potentially telling a bedtime story for a parent in need of new materials to read to their children before they fall asleep, and (with the help of deepfakes) of mimicking the voice and image of celebrities and placing them in a digital audio or video file (such as a song, an ad, or even a movie) to make them say something the AI has chosen for them to say.

ChatGPT, and other AIs like it, are not search engines (which is what, by and large, Han would still want to believe AIs are). They do not scour the internet for materials to replicate or reproduce (unlike the search engines that many students have used to plagiarize and cheat over the past few decades), although they still can do this too, of course.[56] Rather, these “rudimentary” AIs have already shown that, on the basis of only a few initial data points or keywords given to them or borrowed from another digital application, they can generate new original content that has not existed before and has not been put together by a human mind. Some of these AIs are also able to adapt the materials they create (for example, a research paper) to various contextual settings (for instance, they will not deliver an academic level research paper if the objective is to produce a high school level document). Thus, while AIs like ChatGPT are perhaps not yet able to “bring forth new worlds” (something that, Han claims, only human thinking can achieve), they certainly do more than compute or “process unchanging facts.”[57] Instead, they display a capacity to imagine, divine, craft, and create, in efficient ways, but also in ways that reveal a bit of human-like “passion” or “heart” (whether this degree of sensibility or pathos in AIs’ creations—and crucially in the process of arriving at these creations—is real or simulated is a different and probably unanswerable question, but this can also be said about many human beings today).

Critical thinkers and philosophers like Han seem desperate to demonstrate that artificial intelligence cannot and should not be compared to human intelligence, or that when smart machines and digital applications think, they only process data and compute (since “true” thinking must remain the sole prerogative of human subjects). For Han, since the superiority of human thinking is not to be doubted, the main fear is not that humans will be made obsolete by AI. Rather, the main danger is that humans and their thinking will go to the dark side (instead of using their thinking to “brighten and clear the world,” as Han writes[58]). Humans, Han bemoans, will be seduced by artificial intelligence and thus will become satisfied with limiting their own thinking and their world or spirit to machine-like computations and digital applications. As Han puts it: “the main danger that arises from machine intelligence is that human thinking will adapt to it and itself become mechanical.”[59]

Ironically, Han ends up taking his prophecy about artificial intelligence fairly close to where Baudrillard led us with the idea of the perfect crime. Indeed, as Han concludes, while seemingly remaining different from AI (and presumably superior to it), humans will easily fall for AI’s seduction, which is for Han the start of humanity’s own undoing. As Baudrillard had noted several decades ago, seduction is always about play and undecidability, and what seduction primarily plays with are signs and their supposed truths (Baudrillard stated that “to be seduced is to be diverted from one’s truth,” and he added that “seduction never stops at the truth of signs… and inaugurates a mode of circulation which is… secretive and ritualistic”[60]). As a final and possibly fatal figure of seduction, artificial intelligence plays with human truths and the modalities of thought that humans have long established to represent or signify these truths. Seduction as an endless play with and confusion about truth, as much as artificial intelligence itself, is perhaps what terrifies Han about AI’s thinking.

In one of his last essays posthumously published in 2010, Baudrillard wrote the following about the end of the hegemonic system of human representation, of our so-called “real” world: “The system cannot prevent its destiny from being accomplished, integrally realized, and therefore driven into automatic self-destruction by the ostensible mechanisms of its reproduction… If negativity [today] is totally engulfed by the system, if there is no more work of the negative, positivity sabotages itself in its completion. At the height of its hegemony, power cannibalizes itself—and the work of the negative is replaced by an immense work of mourning.”[61] It seems that many of the recent discussions and debates (and publications) about the “extinction-level” threat of artificial intelligence or about AI “reaching singularity” are illustrations of the “immense work of mourning” mentioned by Baudrillard. In a way, AI’s perfect crime is a symptom of this announced—and probably already realized—loss or disappearance (the loss of the real, of truth, the disappearance of human thinking and the human subject, or, perhaps worse yet, the undecidability about human intelligence and its continued relevance).

With this condition of mourning, with desperate attempts at rescuing human intelligence or thinking from AI, perhaps the disappearance of the human subject/self and its hegemony over reality and truth (thus, over the world, over totality, and over spirit too) is not the worst thing that could happen. Perhaps a human future similar to the one depicted by novelist Ted Chiang in his short story “The Evolution of Human Science” is a much more daunting—though perhaps more likely—prospect.[62] In Chiang’s story, set in a future world where “metahumans” with superior cognitive and creative capacities (and initially engineered by humans themselves) have developed methods of knowledge (and, presumably, modes of life too) aligned with their greater intellect and relevant to their social needs, humans (and human thinking) remain. Interestingly, humans are not extinct; nor have they been made obsolete by metahumans. In fact, human subjects continue to pursue their own quest for scientific knowledge and understanding, and their intellectual and scholarly practices are still feasible through “human science.” Yet, human science (and human thinking), as much as it may try, cannot comprehend “metahuman science” and cannot access the “digital neural transfer” technology by way of which metahumans produce, publish, and openly exchange knowledge.[63] Thus, human science and human scientific publications produced by human scholars and researchers become coarse, incomplete, and at best approximate “vehicles of popularization” of metahuman scientific discoveries and developments.[64] Instead, most human scientists choose to become hermeneuticists and devote their thinking and work to offering various textual interpretations of metahuman research, many of which become popular readings (among humans). Through these narratives, the human public gains a modicum of understanding (although often a poor one) of metahuman science, as well as a degree of entertainment and even therapy.
In this hermeneutic fashion, “human culture is likely to survive well into the future,”[65] and human thinking can remain a fairly pleasant pastime for some and a scholarly vocation for others, even though it is clearly unable to make any claims to intellectual superiority or mastery (claims that, in an older age, human science was able to make). As Chiang ironically puts it at the end of the story: “We [humans] need not be intimidated by the accomplishments of metahuman science. We should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we.”[66]

Contemplating the state of contemporary anxiety-driven discussions, in popular discourse and in scholarly circles, about the advent of AI and its perfect crime, one cannot help but wonder whether, in their desperate attempt to keep alive a likely already lost “real” world of hegemonic human thinking, today’s human scientists and thinkers are not setting the stage for the future of the minor humans depicted in Chiang’s story, and for the future of their (or really, our) minor, outdated, yet possibly quaint human thinking and intelligence.


[1] Mary Shelley, Frankenstein (Garden City, NY: Dover Publications, 1994 [1818]).
[2] Kyle Moss, “Godfather of AI Tells ’60 Minutes’ He Fears the Technology Could One Day Take over Humanity,” Yahoo! Entertainment, October 9, 2023; no page given. Available at: https://www.yahoo.com/entertainment/ai-geoffrey-hinton-60-minutes-fears-technology-take-over-humanity-073704715.html.
[3] Ibid., no page given.
[4] Ibid., no page given.
[5] Ibid., no page given.
[6] Ibid., no page given.
[7] For more on AI singularity, see, for example, Amir Hayeri, “Are We Ready to Face Down the Risk of AI Singularity?,” Forbes, November 10, 2023. Available at: https://www.forbes.com/sites/forbestechcouncil/2023/11/10/are-we-ready-to-face-down-the-risk-of-ai-singularity/.
[8] As Adam Greenfield writes: “the available evidence suggests that autonomous algorithmic systems will acquire an effectively human level of capability in the relative near future, far more quickly than the more skeptical among us might imagine.” See Adam Greenfield, Radical Technologies: The Design of Everyday Life (London: Verso, 2018), p. 270.
[9] Matt Egan, “AI Could Pose ‘Extinction-Level’ Threat to Humans and the US Must Intervene, State Dept.-Commissioned Report Warns,” CNN Business, March 12, 2024. Available at: https://www.cnn.com/2024/03/12/business/artificial-intelligence-ai-report-extinction/index.html?utm_source=business_ribbon.
[10] As a group of AI research experts recently put it. See Matthew Hutson, “Can We Stop Runaway A.I.?”, The New Yorker, May 16, 2023; no page given. Available at: https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity.
[11] Egan, “AI Could Pose ‘Extinction-Level’ Threat to Humans…,” no page given.
[12] See Hutson, “Can We Stop Runaway A.I.?”, no page given.
[13] Ibid., no page given.
[14] See Gladstone AI, “About Gladstone,” available at: https://www.gladstone.ai/about#:~:text=Gladstone%20AI’s%20mission%20is,weaponization%20and%20loss%20of%20control.
[15] Hutson, “Can We Stop Runaway A.I.?,” no page given.
[16] Ibid., no page given.
[17] Asimov’s “Three Laws of Robotics” first appeared in a series of short stories, later compiled in the I, Robot anthology. See Isaac Asimov, I, Robot (New York: Del Rey, 2020 [1950]).
[18] Hutson, “Can We Stop Runaway A.I.?”, no page given.
[19] A now classic treatment of the dilemma presented when nonhuman but sentient entities adopt and use human moral codes and values, express free will, and embark upon the search for perfection of human knowledge is offered by the television show Star Trek, The Next Generation in its “The Measure of a Man” episode (Season 2, Episode 9), which first aired in the United States in February 1989.
[20] Baudrillard, “Le Crime Parfait,” Mots de Passe, p. 75; my translation.
[21] Ibid., p. 76; my translation.
[22] Ibid., p. 76; my translation.
[23] For a recent insightful overview of the promethean myth and critical theory, see Samuel Beckenhauer, “Prometheanism, Obsolescence, and the Politics of Conspiracy Theory,” The Montreal Review (May 2024). Available at: https://www.themontrealreview.com/Articles/Prometheanism_Obsolescence_and_the_Politics_of_Conspiracy_Theory.php.
[24] Baudrillard, “Le Crime Parfait,” Mots de Passe, p. 77; my translation.
[25] See Jean Baudrillard, Le Pacte de Lucidité ou L’Intelligence du Mal [The Pact of Lucidity or the Intelligence of Evil] (Paris: Galilée, 2004).
[26] Baudrillard, “Le Crime Parfait,” Mots de Passe, p. 78; my translation.
[27] As Baudrillard put it, with cloning, humans display a “fantasy of repetition, [which] is only one side of the biogenetic endeavor; the other side is perfection.” See Jean Baudrillard, “The Clone, or the Xerox Degree of the Species,” in Écran Total [Screened Out] (Paris: Galilée, 1997), p. 224; my translation.
[28] Jean Baudrillard, L’Échange Impossible [Impossible Exchange] (Paris: Galilée, 1999), pp. 142-143; my translation.
[29] As reported by Hutson. See Matthew Hutson, “Can We Stop Runaway A.I.?”, no page given.
[30] Ibid., no page given.
[31] Baudrillard, “Le Crime Parfait,” Mots de Passe, p. 79; my translation.
[32] Graham Meikle, Deepfakes (Cambridge: Polity, 2023), p. 2.
[33] Jean Baudrillard, Simulations (New York: Semiotext(e), 1983), p. 11.
[34] Meikle, Deepfakes, p. 8.
[35] Ibid., p. 8.
[36] Baudrillard, Simulations, p. 11.
[37] Helga Nowotny, In AI We Trust: Power, Illusion, and Control of Predictive Algorithms (Cambridge: Polity, 2021), p. 61.
[38] Byung-Chul Han, Non-things: Upheaval in the Lifeworld (Cambridge: Polity, 2022), p. 43.
[39] Ibid., p. 38.
[40] Ibid., p. 38.
[41] Ibid., p. 38.
[42] Particularly in the context of Heidegger’s well-documented concern about technology as the downfall of human thinking. For example, at the onset of The Question Concerning Technology, Heidegger writes: “Everywhere we remain unfree and chained to technology… But we are delivered over to it [technology] in the worst possible way when we regard it as something neutral: for this conception of it, to which today we particularly like to pay homage, makes us utterly blind to the essence of technology.” See Martin Heidegger, “The Question Concerning Technology,” in Martin Heidegger, Basic Writings, ed. David Farrell Krell (San Francisco: HarperCollins Publishers, 1993), pp. 311-12.
[43] Han, Non-things, p. 38.
[44] Ibid., p. 38.
[45] Ibid., p. 38.
[46] Ibid., pp. 40-41.
[47] Ibid., p. 41.
[48] Ibid., p. 42.
[49] Ibid., p. 42.
[50] Nowotny, In AI We Trust, p. 61.
[51] As Greenfield notes: “We’re already past having to reckon with what happens when machines replicate the signature moves of human mastery… [since] what we now confront is machines transcending our definitions of mastery… [A]lgorithmic systems, set free to optimize within whatever set of parameters they are given, do things in ways no human would ever think to.” See Greenfield, Radical Technologies, p. 268.
[52] Han, Non-things, p. 42.
[53] See Hinton, quoted in Moss, “Godfather of AI Tells ’60 Minutes’ He Fears the Technology Could One Day Take Over Humanity,” no page given.
[54] Ibid., no page given.
[55] See, for example, Catherine Thorbecke, “Now You Can Speak to ChatGPT—and It Will Talk Back,” CNN Business, September 25, 2023, available at: https://www.cnn.com/2023/09/25/tech/chatgpt-open-ai-humanlike-update/index.html; or “In the Age of ChatGPT, What’s It Like To Be Accused of Cheating?”, Drexel News, September 12, 2023, available at: https://drexel.edu/news/archive/2023/September/ChatGPT-cheating-accusation-analysis/.
[56] The distinction between AIs that are merely advanced search engines or information filters, and thus exclusively rely on initial data input by human (or other) agents, and AIs like ChatGPT that create brand-new content and products is sometimes referred to as the difference between non-generative AIs and generative AIs. See, for example, Jason Valenzano, “Unveiling AI’s Secrets: The Interplay of Generative and non-Generative Techniques,” Medium, March 23, 2024; available at: https://medium.com/@jason.s.valenzano/unveiling-ais-secrets-the-interplay-of-generative-and-non-generative-techniques-1800f98cbbd2.
[57] As Han still believes. See Han, Non-things, p. 41.
[58] Ibid., p. 43.
[59] Ibid., p. 43.
[60] Jean Baudrillard, “On Seduction,” in Jean Baudrillard: Selected Writings, first edition, ed. Mark Poster (Stanford: Stanford University Press, 1988), p. 160.
[61] Jean Baudrillard, The Agony of Power (Los Angeles: Semiotext(e), 2010), pp. 61-62.
[62] Ted Chiang, “The Evolution of Human Science,” in Ted Chiang, Stories of Your Life and Others (New York: Vintage Books, 2002), pp. 201-204.
[63] Ibid., p. 201.
[64] Ibid., p. 201.
[65] Ibid., p. 203.
[66] Ibid., p. 203.
