A friend of mine, Shahal Khoso, a PhD student in Spain, shared with me the other week a short paragraph on Lahore’s smog written in the style of German philosopher Martin Heidegger (1889-1976). “A basic but not a bad note,” was my initial response. “Who wrote this?” I inquired. ChatGPT (an OpenAI project) was the answer.
I was somewhat startled. I knew of, and had at one point in Barcelona even written about, the artificial intelligence (AI) research laboratories producing voices, images and texts that mimic humans. But I had no idea the mimicry had reached a point where an AI system could robotically apply late Heidegger’s philosophy to the pollution situation in a city (Lahore) he never took note of in his life. This is what the ChatGPT text says:
“The smog in Lahore is a thick, suffocating presence that envelops the city in a shroud of haze. It is a reminder of the human activity that has so thoroughly altered the natural world and the ways in which we have lost touch with the earth. The smog is a Dasein, a ‘being-there’ that is both part of and separate from the human experience. As we move through the streets, our visibility obscured and our lungs filled with toxic fumes, we are forced to confront the consequences of our actions and the ways in which we have neglected our relationship with the world around us. The smog is a constant reminder of our fallenness, of the ways in which we have strayed from the path of being-in-the-world.”
Intrigued by this standardised (but perhaps not completely awful) reading of Heidegger by OpenAI, I immediately tried a slightly broader query: “Heidegger and smog”. Thereafter, I tried a large number of combinations and came across interesting results, although, after some time, the overlaps outgrew the differences. At one point, I found that ChatGPT had exhausted itself.
The rise of and easy access to AI-generated content raises several questions about authenticity and the ethics of using such material. But could it also have implications for democracy?
How do large language models such as ChatGPT actually work? Trained on the data (books, newspapers, journal articles, etc.) available on the internet up until 2021, ChatGPT learns statistical patterns among words and, in response to a prompt, produces grammatically correct text by repeatedly predicting a likely next word.
Although the results might appear sophisticated to some, the underlying operation is a simple one, involving the familiar interplay between data, patterns, algorithms and probability.
Take the example of the Heidegger-smog interface. The word-and-concept-play that we see in action would revolve around some of Heidegger’s most popular terms: ‘being there’ (Dasein), enframing (Gestell), standing reserve (Bestand), forgetfulness of the Earth and so on. ChatGPT would take into account the most common interpretations of these Heideggerian concepts and try to relate them to a phenomenon such as smog.
Basically, the language model is completely dependent on what’s been written on (and by) Heidegger and smog. Its job is to make sense of the data, decipher common patterns and write “meaningful” sentences following the principles of some kind of coherence and transition.
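The pattern-tracing described above can be caricatured with a toy word-prediction model. This is a deliberately crude sketch: real systems such as ChatGPT use transformer neural networks trained on vast corpora, not simple word counts, and the miniature corpus below is invented purely for illustration.

```python
from collections import defaultdict, Counter

# A tiny invented corpus standing in for the model's training data.
corpus = (
    "the smog envelops the city "
    "the smog is thick "
    "the smog is a reminder"
).split()

# Count which word tends to follow which: the "common patterns".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("the"))   # "smog": it follows "the" most often here
```

The point of the caricature is the essay’s own: the model cannot say anything its training data has not already said. It only redistributes what is there.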
Growing more thoughtful, as Heidegger would urge, let’s quickly look at the ChatGPT response I’ve quoted above. While it adequately draws attention to our dangerous divergence from the lived sense of being-in-the-world, it reflects a very superficial understanding of Heidegger on a phenomenon such as smog.
Yes, we fill our lungs with toxic fumes while walking through the streets of Lahore in wintertime, but Heidegger would likely not simply call it an anthropogenic event. He would more likely narrow it down, relating it to technogenic, modernity-driven or occidental causalities. And that’s the kind of nuance ChatGPT would continue to miss. Because it’s entirely dependent on what’s been written already, it’s incapable of both original thinking and novel or nuanced interpretation.
At times, it might even be completely wrong. Let’s see what ChatGPT has to say on “Heidegger and renewable energy”:
“Heidegger’s philosophy has been interpreted in various ways but, in general, he emphasised the importance of ‘being-in-the-world’ and the need to find a more authentic way of living in the world. In this context, renewable energy can be seen as a way to reconnect with the earth and to live in a more sustainable way. Renewable energy sources such as solar, wind and hydro power are not only a more environmentally friendly alternative to fossil fuels, but also a way to break free from the enframing mode of thinking that sees the earth as a standing reserve of resources to be exploited and manipulated. By using renewable energy sources, humans can take a step towards a more authentic and sustainable way of living in the world, and reconnecting with the earth.”
Instead of looking at the existing large-scale, privatised renewable energy operations as ways “to reconnect with the earth”, Heidegger would look at them as structures not too different from fossil fuel plants.
Because they’re placed within the commodity chain as raw materials to be ordered, controlled and exploited, they too end up reducing nature — wind and sunlight, for instance — to a standing reserve. Far from breaking “free from the enframing mode of thinking”, centralised, large-scale renewable structures are a mere distraction: a distraction from arriving at our Gelassenheit moment.
What is our Gelassenheit moment in the current ecological context? Translated as “releasement toward things”, Gelassenheit can be ecologically interpreted as a proposal of releasing ourselves from the compulsions imposed by Cartesian, Western technological thinking largely responsible for the climate crisis.
Gelassenheit, in other words, raises an important question: is renewable energy in its existing form a breaking free from calculative thinking, a liberation from the will to power?
A detailed account of why Heidegger would not think so and what he’d regard as “a more authentic and sustainable way of living” is out of the scope of this short piece, but Heidegger’s appreciation for renewable energy would be conditional on a certain combination of scale, control and creative involvement.
That is, he’d privilege small-scale, decentralised, locally produced and managed assemblages over large-scale, centralised and corporately owned complexes. At the heart of Heidegger’s proposal lies the idea of ecological repair by discovering care-based foundations for the human-nature relationship.
But the point here is not to prove ChatGPT wrong. The fact that it drives some of us to deconstruct its misleading responses, that we fall for its half-truths and are impressed by their plausibility, says a lot about the future.
A few weeks ago, bioRxiv published a study revealing that academics mistook 32 percent of ChatGPT-generated abstracts for real ones, and 14 percent of original abstracts for AI-generated ones. Earlier this year, New York City schools banned ChatGPT, fearing AI-assisted plagiarism.
More alarming, however, is the possibility of AI-generated texts being instrumentalised as political content.
Not long ago, we saw how Facebook gave Cambridge Analytica access to the data of 50 million users, data that were then sold to political campaigns (Trump’s, most notably). Imagine a possible confluence taking place between AI, advanced human-mimicking content, big capital and politics.
Not only would capital-intensive, tech-savvy political campaigns be able to plug AI-generated content into their digital components, but they’d also be able to pump artificial life into their overall campaigns.
This is a situation where the ability to manipulate and instrumentalise AI-generated content can define the new norms of democracy. This is a situation in which politics is subjugated to the question: who can subjugate AI?
The writer is the founder of Folio Books, teaches Philosophy at Beaconhouse National University, Lahore, and is the editor of Rethinking Pakistan: A 21st-Century Perspective.
He tweets @BilalZahuur
A version of this essay was published in The Philosophical Salon
Published in Dawn, EOS, February 26th, 2023