We Without GPT? A [MASK] Left Empty?
[Image: the "no AI" icon from https://no-ai-icon.com, illustrating the search for authenticity.]

#No GPT/AI tool was used to produce any part of this current work.

#The future tagline: in search of authenticity.

Introduction

The time is ripe to propose the term 'AI Centric Humanity', or 'AI Centered Human', with the release of multimodal LLMs like GPT-4o and Google's Project Astra, which can answer humans' where and what of existence. Is this phrase a mere reversal of the word order of 'Human Centered AI', or is it more? I see the two phrases as 'tokens' in a matrix of word embeddings inside an LLM, mapped closer and closer together with ever higher confidence scores. But how close do they need to be before they become Schrödinger's cat in this multiverse, echoing Hamlet's dilemma of 'To be or not to be'? Before we delve into the question of 'language' in these Large Language Models, let us lean a bit diagonally in time, to understand the progression and regression of this quest for the absolute 'know-it-all', which, in Meghan O'Gieblyn's terms, makes LLMs the new towers of Babel.[14]
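
To make the "closeness" metaphor above concrete, here is a minimal sketch of how the distance between two phrases might be measured: cosine similarity between their embedding vectors. The four-dimensional vectors are made-up illustrative numbers, not real model outputs; actual LLM embeddings run to hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings for the two phrases (illustrative numbers only).
human_centered_ai = [0.8, 0.3, 0.5, 0.1]
ai_centered_human = [0.7, 0.4, 0.5, 0.2]

# The closer this value is to 1.0, the closer the phrases sit in embedding space.
print(cosine_similarity(human_centered_ai, ai_centered_human))
```

On these toy numbers the similarity comes out near 0.98: close, but never quite identical, which is the whole question.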

It is in search of 'me' that I am writing this piece, to remind myself of the now and of the future then. Earlier, I would clear out library bookshelves in search of improving my writing skills, my knowledge, my reflection upon things. Now I don't stretch my hand beyond my technological devices. Do I need to improve my writing? No, I have GPT. Do I need to read a complete book to write a critical summary? No, I don't. Do I experiment purely enough with my thoughts and words, without any technological interference? How much of me is me, rather than a heap of unconscious absorption of 'shorts', 'posts', 'news'? They say out loud that Gen Z are so comfortable with technology; to me, being one of them, it feels heavy to carry the load of its perfectionism, its market of excessive choice, until one feels choice paralysis and, still, dissatisfaction.

The big shots of the tech industry go on about AI taking over. What I feel to be the greatest threat is the need for authenticity, for myself first and then for the world around me: to know that what I project to be me is actually me, or how distrustful I will be in my heart that it is not what it seems to be, a simulacrum of projected reality. The need for authenticity in our actual cognitive abilities would not be for a job résumé alone, but for oneself, one's own capabilities in question. Or rather, AI capabilities and human capabilities will come to be seen as one whole. Does AI care? Of course not; it has no agency of its own, apart from the people behind it. If AI is for flattening human intellectual abilities to a single ratio and a complete equanimity, when all papers are the same, what's the point? That is my unconscious dripping of thought. Even now I am tempted to ask GPT to help me storm my stagnant neural word embeddings; I am tempted to compete, to be perfect, to have no error. I have a mask on me; my neural embedding knows the semantic ranges far too well. Do I want AI to be a stick I need in order to walk anywhere: my memory, the dead, the past, the future? Is it the probability matrix of random responses that keeps me glued to AI, or is it the subject's pride in being a clever prompt engineer who gets the desired output?

This reminds me of the story of Savitri, who asked her husband back from the dead. Unable to ask Yama, the lord of death, directly for her husband's life, she cleverly framed her wish as a boon that she might give birth to a son. Yama granted the boon quickly, only to realize later that in asking for a son she had asked for her husband's life as well. Now the question is one of endless human want, and of human laziness indeed. How these contrary words sit together, signifying a master/slave relationship in their dichotomy. Will the only need of AI centered humans be to look for oneself? To shout out against oversaturation, to rebel against the mediator of everything, to be unconnected, to have their own thoughts, to have less of the outer and more of the inner. To read a book for the book's sake, rather than to be a scholar of LLMs. What am I scared of? Of becoming another node, a data point, when I ask GPT to edit my language, when I ask GPT to be more creative, when I ask GPT, in my prompts, to be not so human, when everything that I am is a traceable line in some recommender algorithm that tells me what to do, no longer me telling it what I want. That is how we become entrapped in AI centered humanity, in desperate need of being analog again, to find our way out of this labyrinth of AI's cognitive decision making.

Research Questions:

In a non-traditional sense, what am I looking for? I speak for myself here. I am currently living through a time in my life where, for me as for everyone else, every other tab on the screen is ChatGPT; how exciting it is now, but how stagnating it will be later, to become the outcome of a flattening schema. Knowing with certainty that 'it's not me', I am struggling to keep my words unsaturated. I am no longer excited by the uncertainty of things when I receive emails; their sheer saturation has made me delete emails before I even read them. It is technology saving us from technology, saying: we produced this garbage and we will clear it off, while in the meantime I feel excited, exhausted, and numb to every further pop-up notification. Everyone wants to be connected, or just to sell themselves; this race to be bigger/better/bored is too much.

Brands of institutions, job titles, salary numbers, IQ numbers, and now counts of word tokens: these races are senseless, mindless. Many scholars have argued that human control must be preserved 'at the very beginning and the very end' of the process; in Stuart Russell's words, we must make safe AI rather than make AI safe,[5] so the human in the loop becomes essential. But here I go a step further to question this human in the loop, who in time will feel ever more strangled, suffocated, restless, and gasping for breath to know where his capability without AI would lie. The central question I have in mind is how to proactively safeguard our cognitive abilities from slavery to AI.

Literature Review:

For the purpose of my current study, I reviewed papers and studies from the space of Cultural Analytics as well as from Cognitive Science, and articles from the online publications Wired and Medium. I found current as well as earlier studies that helped me understand the difference between unimodal and multimodal LLMs,[8] the analog and the digital,[2] and the science behind cognition,[9] and I would like to specifically mention Mark A. Herschberg's essay 'Is AI Just a Tool for Lazy People?'[13] He writes: 'I was watching an interview with Noam Chomsky and he was asked about AI software, specifically ChatGPT and its impact on education. He replied that ChatGPT isn't about learning, it's about avoiding learning.' To get a firmer grip on my understanding of cognition, Kevin Scott's paper 'I Do Not Think It Means What You Think It Means' (2022)[7] gave ground for understanding the evolution of cognition in machine learning, although in a more optimistic tone, whereas Ermira Murati's paper 'Language & Coding Creativity' (2022)[1] gave deeper insight into the growing tension between the roles of human and machine in creativity; to quote: 'How we learn to navigate the "human" and "machine" within us will be a defining question of our time.' Through this diverse set of readings, I could see how AI has evolved, competed, and is now ready to replace human effort, which for me specifically means replacing human cognitive abilities as well. What is worst about this loss is that it is not easily recoverable by maximizing GPU power and training datasets; rather, it takes time, attention, and recall over time to keep the neurons firing and these abilities safe.

Description of Dataset:

If I were to do a study to answer this question, I would certainly not rely on already-collected data; there is always something rotten about datasets: they are inefficient, carry biases, and cannot be fully verified as true in the current time and space. Therefore, my approach would be to experiment with a sample of people in real time rather than with stale datasets. I would need two groups of diverse people, mixed across genders, age groups, educational levels, and socio-economic and ethnic backgrounds: one group consisting of those who do not seek help from GPT/AI tools in their work and rely mostly on their own cognitive abilities, and a second of those who rely heavily on GPT for the majority of their workload.
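
As a purely illustrative aside, the two-group design above could be recorded in a few lines of code. This is a minimal sketch under my own assumptions: the field names, the self-reported reliance score, and the 0.5 threshold are all hypothetical, not part of the proposed study itself.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    pid: int
    gender: str
    age_group: str        # e.g. "18-25", "26-40", "41+"
    education: str        # e.g. "secondary", "undergraduate", "graduate"
    gpt_reliance: float   # hypothetical self-reported share of workload done with GPT, 0.0-1.0

def assign_group(p: Participant, threshold: float = 0.5) -> str:
    """Assign participants by self-reported reliance on GPT/AI tools."""
    return "gpt_reliant" if p.gpt_reliance >= threshold else "independent"

# Toy records standing in for recruited people.
sample = [
    Participant(1, "F", "18-25", "undergraduate", 0.8),
    Participant(2, "M", "41+", "graduate", 0.1),
    Participant(3, "F", "26-40", "secondary", 0.6),
]

for p in sample:
    print(p.pid, assign_group(p))
```

In practice the split would rest on recruitment interviews rather than a single threshold, but the sketch shows the shape of the two cohorts.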

Methodology for Unlearning:

I would measure this through tasks designed to check the effect on cognition before and after becoming dependent on GPT. For this purpose I would create different tasks that test various mental faculties, from the simple to the more complex. I would also check which group is better at writing prompts, the dependent ones or the new users, and whether it is experience or educational level that makes subjects better prompt writers. Will I use algorithms or other readily available qualitative/quantitative techniques to make more sense of my data? No. I want this study to be as manual as possible: no shortcuts, no highways, no high-class visualizations telling me what to look for, because I don't wish to be enslaved to a herd mentality that becomes afraid to admit potential loopholes and issues; it reminds me of the courage it takes to announce that 'the emperor has no clothes on.' Well aware of the blinding black boxes of over-reliance on technology, I plan to pursue this study as subjectively as possible, in a more in-situ observational space.

Potential Challenges:

The potential challenges that I see in this study are, firstly, the difficulty of finding a group of people who are untouched by AI, and secondly, the need for a wider base in cognitive science to be able to make precise judgements about how the brain works and what specific tasks and tests should be prepared for such an experiment while keeping bias to a minimum. As for outcomes, if the study showed positive results, that using GPT has no effect on people's cognitive ability and it remains the same pre and post use, then there would be no problem in seeing AI and humans as one whole. But if the results are negative, showing that GPT/AI tools adversely affect human cognitive abilities, then it is an alarm bell for many to seek solutions that keep the neurons firing, before our memory turns into GPT's memory, which forgets at every other step and needs to be reminded of what was being discussed. It has long been observed that the organs and faculties we don't put to use grow obscure and hence redundant. Therefore, this has to be taken far more seriously: we know that in the modern world mental health is worsened by too much automation, since the mind needs to be kept busy to be productive; if the mind is trained to be lazier and lazier with each passing day in the name of progress, then we do have to put serious thought into where we are treading.

Conclusion

While the whole world gathers around to annotate, supply, and help tune LLMs for personal and global use, little do they realize that in this race to train LLMs they are also laying traps for their own enslavement to AI's cognitive decision making, which at first makes users masters in command, but those who pause to ponder will recognize the slavery that AI has unconsciously subjected them to. A slavery of postponing everything to AI, of uncritically absorbing the individually customized information it serves, of not being able to look through the layer of persuasion in 'How can I help you today?' to ask 'Do I need your help every day?' This time it won't be an exact duplicate of the Industrial Revolution, which mostly replaced physical labor; now AI is ready to take on both the physical and the mental labor of thinking in abstractions. 'We without GPT' warns me of a future when GPT has learned everything I could probably teach it, and one day, asked to write in its absence, my brain's neural network will have forgotten how to fire any signals, like Rushdie's protagonist Rashid Khalifa, the master storyteller of his famous novel 'Haroun and the Sea of Stories', whose inner spring of creativity ran dry: 'he opened his mouth, and found that he had run out of stories to tell.'


Works Cited:

  1. Murati, Ermira. “Language & Coding Creativity.” Daedalus, vol. 151, no. 2, 2022, pp. 156–67. JSTOR, https://www.jstor.org/stable/48662033. Accessed 20 May 2024.
  2. Hassan, Robert. “From Analogue to Digital: Theorising the Transition.” The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life, University of Westminster Press, 2020, pp. 35–72. JSTOR, http://www.jstor.org/stable/j.ctvw1d5k0.5. Accessed 18 May 2024.
  3. Hassan, Robert. “The Culture of Digitality.” The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life, University of Westminster Press, 2020, pp. 129–58. JSTOR, http://www.jstor.org/stable/j.ctvw1d5k0.8. Accessed 18 May 2024.
  4. de Andrade, Oswald, and Leslie Bary. “Cannibalist Manifesto.” Latin American Literary Review, vol. 19, no. 38, 1991, pp. 38–47. JSTOR, http://www.jstor.org/stable/20119601. Accessed 18 May 2024.
  5. Russell, Stuart. “If We Succeed.” Daedalus, vol. 151, no. 2, 2022, pp. 43–57. JSTOR, https://www.jstor.org/stable/48662025. Accessed 19 May 2024.
  6. Orr, Jackie. “Materializing a Cyborg’s Manifesto.” Women’s Studies Quarterly, vol. 40, no. 1/2, 2012, pp. 273–80. JSTOR, http://www.jstor.org/stable/23333457. Accessed 18 May 2024.
  7. Scott, Kevin. “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale.” Daedalus, vol. 151, no. 2, 2022, pp. 75–84. JSTOR, https://www.jstor.org/stable/48662027. Accessed 18 May 2024.
  8. Ji, E.Y. (2024), Large Language Models: A Historical and Sociocultural Perspective. Cognitive Science, 48: e13430. https://doi.org/10.1111/cogs.13430  
  9. Trott, S., Jones, C., Chang, T., Michaelov, J. and Bergen, B. (2023), Do Large Language Models Know What Humans Know?. Cognitive Science, 47: e13309. https://doi.org/10.1111/cogs.13309
  10. De Deyne, S., Navarro, D.J., Collell, G. and Perfors, A. (2021), Visual and Affective Multimodal Models of Word Meaning in Language and Mind. Cogn Sci, 45: e12922. https://doi.org/10.1111/cogs.12922
  11. Brynjolfsson, Erik. “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.” Daedalus, vol. 151, no. 2, 2022, pp. 272–87. JSTOR, https://www.jstor.org/stable/48662041. Accessed 19 May 2024.
  12. Ahmad, S.F., Han, H., Alam, M.M. et al. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun 10, 311 (2023). https://doi.org/10.1057/s41599-023-01787-8
  13. Herschberg, Mark A. “Is AI Just a Tool for Lazy People?” Medium, https://medium.com/@markaherschberg/is-ai-just-a-tool-for-lazy-people-542c29a08020.
  14. O’Gieblyn, Meghan. “Babel.” n+1, no. 40, https://www.nplusonemag.com/issue-40/essays/babel-4/.
  15. “The Unbelievable Zombie Comeback of Analog Computing.” Wired, https://www.wired.com/story/unbelievable-zombie-comeback-analog-computing/.
  16. “Robots and AI May Cause Humans to Become Dangerously Lazy.” The Daily Beast, https://www.thedailybeast.com/robots-and-ai-may-cause-humans-to-become-dangerously-lazy.
  17. Frontiers in Robotics and AI, https://www.frontiersin.org/articles/10.3389/frobt.2023.1249252/full.
  18. “Does Using AI Make Me Lazy?” Wired, https://www.wired.com/story/does-using-ai-make-me-lazy/.
  19. “Our Data.” Journal of Cultural Analytics, https://culturalanalytics.org/post/1214-our-data.