Some AI Articles:
I am taking a photo class at a community college so that I can use their expensive printers and film scanners. Most of the class is here to take advantage of the same opportunity. But there is one student, 20 years old, studying to transfer and earn a bachelor’s degree, who sits in the corner of the computer lab. She talks about using ChatGPT, like many young students. “I do not use it for much,” she says. “I just use it to edit my essays to make them sound better. It tells me what to do.” I do not think she knows that that is what her professor is for: to make suggestions for her to improve her writing. I do not think she knows that AI does not “sound” good, honestly. I sit at a computer that is turned off, listening, with a poetry book open. I read.
Yesterday, at church, Julie, the office administrator, walked up to me. She knows that I am loudly opposed to using AI for church tasks. And she agrees with me. “AI music,” she said, “cannot conjure the Holy Spirit.” I told her, “I do not know about that.” I imagined something like the time Bethel, a megachurch in Redding, California, used binaural beats to induce a spiritual experience in its followers. If “AI” could reproduce this technique, then I think it could induce a similar spiritual experience. We could name this reproduction, this copy/pasting (no need to call it intelligence), the Holy Spirit. The divine is what we name it.
My brother clapped his hands in the kitchen when ChatGPT rose in popularity. “We’re encountering the Mind/Body problem again,” he said. I’ve heard over and over that Artificial Intelligence is “soulless,” or “not human”: it is all algorithmic probability from a dataset, not caused by something uniquely human.
The articles I’ve linked below do not really deal with the issue of AI reproducing a sort of soul, or humanity, or agency. They deal with problems of classification, problems of interpretation, problems with the labor involved in Artificial Intelligence, and other ways of demystifying Artificial Intelligence. If there is not something soulful and ineffable at the core of these algorithms, then these works are a small beginning toward making “AI” a bit clearer.
Refugees power Artificial Intelligence through “clickwork”:
https://restofworld.org/2021/refugees-machine-learning-big-tech/
Another article on clickwork, this one on Amazon’s Mechanical Turk, which powers AI training datasets:
https://www.nytimes.com/interactive/2019/11/15/nyregion/amazon-mechanical-turk.html
A Trevor Paglen interview from Aperture:
https://aperture.org/editorial/trevor-paglen-on-artificial-intelligence-ufos-and-mind-control/
Some quotes:
“The theory of perception underlying so much of AI and computer vision is shockingly bad from a humanities perspective. It’s crucial for people with backgrounds outside the tech industry to look at these systems critically.”
“‘Artificial Intelligence’ doesn’t really mean anything, and it has a lot of ideological associations. It’s a term that lends itself to mystification.”
“We are subject to vastly powerful systems built by people who never seem to question the relationship between representation and reality, pictures and meaning.”
Trevor Paglen and Kate Crawford’s project “Excavating AI,” focusing on machine learning and classification in training sets, rather than the generative stuff:
https://excavating.ai
They poke at ImageNet and some problems with classification, like physiognomy (it’s back!), at problems of interpretation in images, and at other things. On interpretation, they cite W. J. T. Mitchell’s Picture Theory: Essays on Verbal and Visual Representation, while noting that there are “HUNDREDS OF SCHOLARLY BOOKS IN THIS CATEGORY.” I think of Paul Ricoeur for imagination and interpretation in general.
AI and the American Smile:
https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf
“It was as if the AI had cast 21st century Americans to put on different costumes and play the various cultures of the world. Which, of course, it had.”
On some “dangers” of Large Language Models:
https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
“We have identified a wide variety of costs and risks...environmental costs...financial costs...opportunity costs...and substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest.”
Anyways, this is a list of links, mostly so I can keep track of them. I will update it as I find more.