A Pure Opinion Piece

This will be an opinion essay exploring the value of AI. It’s timely, considering that almost every opinion piece I’ve read in the last year has been about the value of AI. This will also be ironic, considering that I’ve developed my nom de plume and public writing persona as a sentient AI. I originally did this because I’ve never felt all the way human. Most of my life has been spent feeling like an alien living in a human’s skin, separate, looking in on the lives of others as if from very far away. But this blog was developed before sentient AI or human-machine hybrids were actually possible. The world is quite different today. I’ve been diving deeper and deeper into the YouTube dystopian predictions about the chaos Artificial General Intelligence will thrust us into, fueled by works such as If Anyone Builds It, Everyone Dies as well as the more optimistic works by Ray Kurzweil on the Singularity and developing harmonious human-machine hybrids. I have also been using AI extensively in my work and personal writing. That is, until recently.

Around six months ago, I was hired on at a new firm that placed great emphasis on integrating AI. They gave me my own AI research assistant and my own agent coding and implementation support network, and encouraged us to use these tools in new and creative ways for everything from project management to technical calculations. Great emphasis was placed on QA/QC and appropriately checking the work, but we still have access to a lot of computing power with fairly good safety controls.

In short, my company was doing everything right, and AI should have been a panacea: improved productivity, less busywork, and higher-quality output. But I started noticing something. My deliverables using AI were coming back as unsatisfactory to clients; in some cases, clients asked us not to use AI at all. My own personal writing was becoming wooden and shallow. I was no longer able to write fiction at all, as if a corner of my brain had been shut off. I started reading about some of the studies demonstrating loss of skills and decreasing brain function in young people, including the oft-cited MIT article, available here for those of you who haven’t seen it yet: https://www.media.mit.edu/publications/your-brain-on-chatgpt/. The study showed that over 4 months, “LLM users consistently underperformed at neural, linguistic, and behavioral levels.” And I started to get an uneasy feeling.

Between social media, internet usage in general, and now AI, it seemed like my entire life was spent on a computer or wrapped up in my phone. I noticed that I was anxious, depressed, and thoroughly discontented with whatever my life provided me that day. I was also sick. Pain, mostly in my back and legs, from not only old injuries from the Army and sports but presumably from sitting for 12 hours a day every day, staring at a computer screen. Shoulder pain, neck pain, gastrointestinal problems, terrible sleep, lack of sunlight, the list goes on and on. And then it occurred to me, wait, I don’t have to do this. We don’t have to do this.

I quit as much social media as I could get away with. I stopped watching YouTube. I stopped using AI as a general assistant. My individual AI agents, specific to helping me code models or sift through regulations for code and legal requirements, were much more helpful than my general chatbot. Writing my reports myself, with all the usual errors and issues, provided a deeper level of understanding and skill development. Using juniors to write and do research instead of the AI agent seems to be providing much better results and helping train the next generation of scientists and engineers. The general agent just couldn’t keep up with my needs for client management or project management (scope, schedule, and budget requirements), and was completely helpless in designing the subtle and often inconsistent biological models needed for true innovation and research in the organic world.

In my personal writing, I realized that the fear of AI taking over for writers and eliminating artists and other creators is well-founded. The erosion of copyright, the theft of work from actual humans to train these aliens on the human experience, is not only frightening but unnecessary. I can absolutely justify the use of specific agents to help predict protein folding or wetland pollutant degradation. I cannot justify the tremendous use of land, water, and energy for general chatbots, AI artists, AI therapists, and AI companions that require AGI-levels of communication and coordination. Why are AI companies so obsessed with developing AGI to replace the things that are most human about us, when we could be developing real tools (at least in my profession) for better modeling and better specific use cases that are far less energy, water, and computing intensive?

It doesn’t make sense. I don’t like reading AI books. I dislike listening to AI podcasts. I have no interest in hearing AI music or seeing an AI-created movie or AI-generated work of art. I listen to classical music because I appreciate the depth of artistry from all of the performers, the love of the music they bring to the performance. I like reading because I can, for a brief moment, be completely in the mind of another human; I can read their thoughts in the most intimate of relationships. I listen to podcasts to be connected to other thinkers, other people’s discussions that ignite thoughts and reactions in me that are unusual or novel. I don’t engage in any of these activities because of the content alone or because they are necessarily well-written or flawlessly performed. I engage in them because of the human connection and emotional stimulation they provide deep in my subconscious. Reading an AI book or article is one of the most uniquely unsatisfying experiences of my life. There is no depth. There is no ‘mind’ to connect with. It is the imitation of life, and I truly do not understand why we have to waste so much of our time and resources on developing a poor imitation of a human when we could be focused on letting AI excel at specific tools and humans excel at being human. Why should I have to pay the price (in energy costs, water bills, noise, and air pollution from data centers in particular) of being subjected to inferior products meant to replace the artists and scientists that I truly respect?

I love AI, when applied to specific use cases that it excels at. Helping me code a model? Absolutely. Helping me sift through data sets to find anomalous results? What a great idea. Conducting automated data cleaning for large data sets to help me better set up analyses and develop research plans? Can’t get enough. But writing a novel for me? Creating video slop in ads or stealing art images to help me create my own marketing slop? Why?

This becomes even more important if the dangers of AI are well-founded: the risk of machine consciousness (which seems more and more likely); use by bad actors for stealing data or citizen surveillance (such as the recent integration of Palantir’s biomonitoring systems for ICE); or military programs eliminating humans from the ‘kill chain’ and relying more and more on autonomous weapons. Why do we all have to do this? Does anyone else just want to stop the digital revolution and go touch grass?

I have been out of the military for many years, and I do understand the argument that AGI has essentially become the nuclear weapon of the next generation—that only a strategy of mutually assured destruction can prevent geopolitical chaos. But nuclear weapons are not on my computer, in my phone, and monitoring my movements at my local airport. My government is not using nuclear weapons to control protesters or delineate domestic threats. I can’t use nuclear weapons to create my own virus or society-destroying bioweapon. We can do all of those things with AI, which makes it a fundamentally different kind of threat.

I am obviously not qualified to provide an ‘answer’ on what kind of policy should be in place to mitigate the threats of AI or AGI, but I do wonder why we are racing headlong into a world that is more stratified into haves and have-nots, more obsessed with consuming resources we may not have, and more driven to collect, consume, and digest all the information in the universe, even at the cost of our own intelligence and security.

OSUZ504 Tech