“With this kind of stance, you might alienate a lot of people.”
My mother means well—I’ve just had a fight with a family friend. She tweeted that she had used ChatGPT to help her whip up a meal out of leftovers. She had included an unappetizing photo.
I slid into her DMs, “really? you used unethical technology that’s bad for the environment to make something that looks disgusting?”
Yikes. Even for me, this felt brash. She felt similarly.
I feel bad for speaking like a brat, but I don’t feel bad for how I feel: disgusted. When I find out someone I know has willingly opted into using generativeAI¹, I feel revulsion and, at times, even betrayal.
Let me tell you a few specifics about me—because I want to use data to support my feelings since I know my staunch stance about generativeAI is a product of some very personal intersections:
I have been an early adopter of nearly every social media platform, and their precursors: ICQ, IRC, MSN Messenger. I’ve long been a part of digital discourse. I love 2 be online.
I was raised by a linguist who worked in Silicon Valley—I grew up exposed to the intersection of language and technological advancement.
I’m a doctoral candidate in Creative Writing and English.
I teach 50-100 students a year at the college level and have since 2020.
My second job out of college was working in an anti-human trafficking division for a District Attorney. It was a harrowing and strange job. I was exposed to the most vulnerable people—digitally and physically—who had been violated in every conceivable dimension. I saw monsters and their mothers weeping for them; I read reports on the most foul things you can imagine a person doing to another. I sat in on hearings. I saw the world differently—and realized how easy it was to ignore things that were inconvenient, ugly, violent. And I saw how my beloved Internet was one of the great perpetrators.
All of these factors together make me predisposed to not be a fan of generativeAI.
But when ChatGPT was first made public, I was extremely curious. I’ve had my own personal computer since I was nine years old, and I’ve been tooling around every corner of the Internet, easily accessible and not, ever since. I had used chatbots—my first was SmarterChild on AIM. I had seen, over many years online, how interactive “artificial intelligence” quickly degraded with human influence. Most importantly: I teach undergraduates.
I researched ChatGPT, generativeAI LLMs, and OpenAI. Even in 2022, the information I found felt grim: Peter Thiel was an investor; there was labor exploitation abroad and at home; there were biases, a large carbon footprint, and, of course, the problems of digital privacy.
I still made an account, still gave it a shot—asked it to write me a poem (they all rhymed) and make me a workout schedule.
Did I know how dire things were after my first foray with ChatGPT? I mean, I spent the next three days preparing a 20-point slideshow on generativeAI technology for my students. I tried to keep the slides neutral: informing my students of the pros and cons, letting them know about the ethical quandaries but also about the personal benefits (meal planning, study buddy).
I immediately started to redesign assignments. What value does the work I give my students have if it can be replicated by an algorithm? I rethought classroom dynamics: what does it say about the state of my classroom, and about education itself, that my students want to opt out of it?
(Note: As an educator, I am told I cannot share my political opinions in my capacity as an educator. I’m writing this on summer break, during which I am not getting paid; it’s 10pm locally, and this is my personal student blog. In case anyone asks.)
Now, in 2025, I no longer use this slideshow. The policy in my classroom is that if you use generative AI, you fail the class.
Look, I don't blame my students for succumbing to an irresistible apple placed before them², but I will not be complicit in them taking a bite. I'm in the process of generating (ha!) a completely analog syllabus for the fall.
What students do in other classrooms is beyond my control. I am an English teacher, I am a writer—and generativeAI is not conducive to my classroom.
As generativeAI has barrelled its way into our lives, it has become unavoidable. I mean that literally: even if I hate the technology, I know it is being deployed around me dozens of times a day—a thousand tiny cuts, I suppose. My disdain has grown into full-blown hatred and horror.
But why? Well, it turns out every corner of my life is impacted by generativeAI in a negative way—one that outweighs the benefit of outsourcing an email.
Here is what I have learned:
Predators are using GenerativeAI to make explicit images of children at such a large rate that it is impeding law enforcement’s ability to catch the predators. (UN Study)
Oh, and now there’s the problem of digital violence against women through simulated rape.
ChatGPT in general is plagued by bias.
GenerativeAI takes a lot of energy. It’s already impacting air quality in Memphis, its “hidden water costs” are causing water stress across the US, North American data centers doubled their energy demands from late 2022 to 2023, and it’s making your electricity at home worse.
(Yes, I know that storing any data from the Internet requires these data centers, so they were already a huge contributor to electrical consumption. But every study I have read attributes the increase at least in part to the emergence of generative AI.)
Studies about generativeAI are slowly emerging, albeit from small test groups. They do seem to have some commonalities in how the process of outsourcing thinking leads to a decreased ability to critically reason, problem-solve, or address bias.
The use of generativeAI has impacts on cognitive health; people have been killed, and have died by suicide, in part because of their use of generativeAI. It is absolutely causing delusions.
Using AI chatbots doesn’t have a significant impact on earnings, and the chatbots themselves are a security risk.
The hallucinations are getting worse. And what happens when AI-generated material is fed AI-generated material? How will results degrade? Will models collapse?
God, not to mention when Meta illegally uploaded seven million books to train their AI. Do you know how many books SEVEN MILLION is? My mother’s architectural monograph was uploaded! Niche articles my aunt wrote in the 1980s on the economics of South American countries were uploaded!
GenerativeAI cannot generate anything—it studies patterns, it is very good at reading data, and the answers it gives are the result of a complicated equation, not complex thought or inspiration. The art it makes (visual, written) is the product of theft.
GenerativeAI cannot think. It cannot reason (this is a blog post, but a compelling one!).
The world is fucked, and it is impossible to opt out of its exploitation. Writing this on a laptop means participating in the exploitation needed to extract its materials, while an air conditioner cools me and warms the outside. But generativeAI is something you can opt out of. Nothing it provides is worth the astronomical ethical, environmental, or cognitive cost.
This is also why I feel so strongly. Because it is optional.
To me, a person’s decision to continually use generativeAI is a reflection of their moral values: they do not care about the environmental impacts or the cognitive impacts; they do not care about the suffering and exploitation of the most vulnerable. A willingness to give up critical reasoning is horrifying to me—especially in an age of systemic defunding of education.
Lastly, people who opt into using generativeAI sure as shit don’t care about art. I have devoted my life to art—to (trying to) make it, to (trying to) teach it. It is at the core of my being. I cannot imagine making space for someone who does not respect something so intrinsic to me.
Of course her meal looked disgusting: it was antithetical to the core of my being, my profession, my experiences, and my planet.
¹ Some clarifications: in this post, I am referring to LLMs such as ChatGPT, Claude, Gemini, and Grok, and to image/video generators such as MidJourney, DeepAI, etc. I know there are incredible uses of generativeAI in the medical field, data processing, archeology, etc.
² They have been sold an unfair bill of goods, told that in order to get a good-paying job they must get a college degree, no matter how they felt about high school, or school at all. There is only one path; it is prohibitively expensive and not a conducive place for many.
THANK YOU!!!!! I work as a freelance editor and I have to have this conversation seemingly every other day with prospective clients.
I absolutely agree. I gave generative AI a chance, especially since it was pushed by a few of my graduate-level professors, but it's getting a weird foothold, and I can't help but see and hear more about the ill it's doing to the environment, people's safety, and their reasoning abilities. For me, a therapist, it's scary because people are using it to write notes or even "give" therapy. But what about the AI is therapeutic? Most of it isn't done well or correctly, and it's also unable to think, so it's giving bad information. It's usually keeping people in their "bubbles" or making things worse. Not to mention the need for human connection. So many issues in my field alone, and it scares me to see what things will be like worldwide in just 3-5 years without restrictions and regulations.