ALG Blog 2: How GenAI Works
Published on:
Exploring how generative AI mirrors patterns, raises ethical concerns, and challenges fairness for people and the planet.
Case study:
How Generative AI Works and How It Fails
Summary of the Case Study
This case study dives into the functioning of generative artificial intelligence. Specifically, it explores the mechanics underlying the successes of deep learning in generating images and text. It then turns to the harms generative AI can cause, such as the lack of factuality, misuse like non-consensual deepfakes, and labor exploitation.
Discussion Topics
Learning
One point that stood out to me is that the way AI “learns” isn’t learning the way we do. Neural networks don’t understand meaning; they recognize statistical patterns in the massive datasets they were trained on. I find this interesting because I sometimes think of AI as “smart,” but it’s more like a mirror reflecting patterns from the data it was trained on. This connects to how I learn as a student. When I study, I don’t just memorize patterns; I try to connect them to concepts and apply them in new contexts and experiences. That’s a big difference, and it makes me think twice about relying too heavily on AI for tasks that require deeper reasoning. For example, I once gave an AI a math question and it returned a wrong answer; when I pointed that out, it just tried a different approach instead of reasoning about where it had gone wrong.
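A toy example helped me see the difference. The sketch below is my own illustration, not how real language models are built: it trains a k-nearest-neighbor model to “add” numbers by memorizing training pairs. Inside the range it saw during training it looks competent, but on new, larger numbers it can only echo the closest memorized pattern.

```python
# Toy illustration: a pattern-matching "learner" vs. actual understanding.
# A 1-nearest-neighbor model memorizes training examples instead of
# learning the rule a + b. (Illustrative sketch, not a real LLM.)
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training data: every sum of two digits 0..9
X_train = np.array([[a, b] for a in range(10) for b in range(10)])
y_train = X_train.sum(axis=1)

model = KNeighborsRegressor(n_neighbors=1).fit(X_train, y_train)

print(model.predict([[4, 5]]))    # 9  -- looks "smart" inside the training range
print(model.predict([[50, 60]]))  # 18 -- nowhere near 110; it just echoes
                                  # the nearest pattern it memorized, (9, 9)
```

Real neural networks generalize better than this toy, but the failure mode felt familiar to me: fluent inside the training data, brittle outside it.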
The use of creative work for training
Much of the data used to train generative AI comes from creative work by journalists, writers, photographers, artists, and others, generally without consent, credit, or compensation. This made me think about fairness. It doesn’t seem right to use someone’s work without their consent or credit; it would be fairer if these creative workers also benefited when their work is consumed. For example, I use tools like ChatGPT that benefit from this data. It feels like a tension between appreciating the convenience of AI and recognizing the exploitation behind it. This reminds me of the debates around Spotify and streaming, where artists often don’t get fair compensation even though people enjoy easy access to music.
Next-word prediction
The idea that all this technology is based on predicting the next word still amazes me. But I’ve seen how unreliable it can be. Once, I asked for details on an economics paper, and the AI completely made up citations. Even writing this blog, I noticed autocomplete predictions sometimes pull me away from what I actually mean to say. That showed me the danger: AI is fluent but not always truthful or accurate. Ethically, this raises questions about accountability. If a false “fact” spreads, who is responsible: the user, the developer, or the model itself? Or maybe the wording of a prompt can trick the algorithm into giving false information.
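To convince myself that “just predicting the next word” can sound fluent without being true, I wrote a tiny bigram generator. This is a minimal sketch of the idea, nothing like a real transformer, and the mini-corpus is made up for illustration:

```python
# Minimal next-word predictor: count which word follows which,
# then generate by sampling from those counts. It has no concept
# of truth, only of which words tend to come next.
import random
from collections import defaultdict, Counter

corpus = ("the model predicts the next word from patterns . "
          "the model never checks whether the next word is true . "
          "the citation looked real but the citation was made up .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word, output = "the", ["the"]
for _ in range(12):
    candidates = follows[word]
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)

print(" ".join(output))  # fluent-looking fragments, e.g.
                         # "the citation looked real but the model predicts ..."
```

Even this toy strings together grammatical-looking fragments, which made the “fluent but not truthful” point very concrete for me.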
Environmental impact
The case study also pointed out that AI isn’t just “virtual.” Training models consumes massive amounts of energy and water. That hit home because today we are encouraged to recycle and save electricity, while companies running AI models are quietly consuming enormous resources. It feels contradictory. To me, this raises a tough question: are the benefits of AI worth the environmental costs, especially when many applications are about convenience or entertainment rather than essential needs?
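To make the scale feel concrete, I tried a back-of-envelope estimate. Every number below is an assumption I picked for illustration (GPU count, power draw, training time, grid carbon intensity, cooling water per kWh), not a figure from the case study:

```python
# Back-of-envelope sketch of a hypothetical training run.
# All inputs are illustrative assumptions, not measured figures.
num_gpus        = 1_000   # assumed accelerator count
gpu_power_kw    = 0.7     # assumed draw per GPU, in kilowatts
training_days   = 30      # assumed wall-clock training time
co2_kg_per_kwh  = 0.4     # assumed grid carbon intensity
water_l_per_kwh = 1.8     # assumed data-center cooling water

energy_kwh = num_gpus * gpu_power_kw * training_days * 24
print(f"Energy: {energy_kwh:,.0f} kWh "
      f"(~{energy_kwh / 10_000:,.0f} years of a 10,000 kWh/yr household)")
print(f"CO2:    {energy_kwh * co2_kg_per_kwh / 1000:,.0f} tonnes")
print(f"Water:  {energy_kwh * water_l_per_kwh:,.0f} litres")
```

Even with these modest made-up numbers, the totals (about 504,000 kWh, roughly fifty household-years of electricity) dwarf anything one person can offset by recycling, which is exactly the contradiction the case study points at.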
My Discussion Question:
If generative AI depends on massive datasets and computation, how can we balance innovation with fairness for creators whose work is used and for the environment that bears the cost? I included this question because the reading showed me how creativity, exploitation, and sustainability are interconnected. It pushes us to think about whether we should slow down progress until we can make AI more fair and sustainable.
Reflection
Writing this blog helped me connect the technical side of AI with real-world ethical issues. At first, I thought of AI mainly in terms of usefulness, but now I see how questions of consent, labor, and environmental cost are just as important. I also realized my own role as a user: I benefit from these tools, but I have to think critically about where they come from and what they cost. AI can be valuable, but we can’t ignore what it takes to sustain it.