This was supposed to be posted on October 8th, but I didn’t want to change the narrative so as to encompass my thoughts and feelings at the time. My apologies for any confusion!
Howdy folks! Hope y’all have been well. It is almost Fall Break, and I could not be more grateful to be in the home stretch (I stayed up until 4:30 AM yesterday to finish a midterm project… writing this post feels like a dream). Now, I’ll kick off the end of the week by sharing what I’ve done since the last update!
We worked on a node (aka module) revolving around Artificial Intelligence. Over the past couple of weeks, we used a number of different mediums, including Perusall and Colab notebooks, to analyze the functionality and presence of AI. I unfortunately missed completing the first few assignments (which I am making up) due to working on the theatre show and a bit of confusion on my part. One of the questions we were tasked to answer was: “what is AI?”
AI is often painted as a dystopian figure set to take over the world and strip away human autonomy. My immediate definition was a digital entity with the ability to make complex decisions, though others in class brought up good points, such as AI having “the ability to possess knowledge” or a “measure of learning”. AIs mirror human learning through three actions: possessing knowledge, learning from it, and making complex decisions.
What differentiates this from human learning is the “artificial” part of AI – it is unnatural, and at its base it is an imitation. Look at these examples of AI in media and real life that we compiled as a class. See how almost all of them align with human traits such as gender presentation, possession of a voice, moral alignment, and race/ethnicity?
I guess that’s where the imitation and the artificiality come in. The morality part is probably the most prominent in media today, and as a class we got to try to answer the trolley problem. It revolves around a runaway trolley that cannot be stopped and must continue down one of two tracks, each of which would hit a different number of people. I really enjoyed the narrative of Pippin Barr’s interactive Trolley Problem game, which can be played here.
Right? Your decision is very important so try to choose the right thing. […] You pulled the switch. Okay.

– Pippin Barr’s Trolley Problem
The unbiased wording of these phrases somehow sent chills down my spine. What is right? You made a decision, but what does it mean? We did another one in class called the “Moral Machine” which is similar, but brings in more factors like age and gender when making your decision. Here is a summary of the decisions my table and I made. We weren’t even aware of some of our biases around size or gender, since we mostly focused on the legality of one’s actions and the number of people involved.
I wanted to now touch briefly on the correlation between AI and race. We annotated a chapter from Black Software: The Internet & Racial Justice by Charlton D. McIlwain about the Kansas City ‘burning’ in the 1960s.
Ghettos were not considered too threatening at the time; while segregated, they were simply separate and hardly dangerous. That changed after a peaceful protest: black youth staged a walkout because they wanted Martin Luther King Jr.’s funeral treated as a day off (the rest of the nation had classes canceled), and the local police responded with force. I commented that this is just like the seeding of an algorithm, such as YouTube’s recommendations. By clicking on one video – by taking that one action – you define your further interactions for a long time.
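To make that analogy concrete, here is a toy sketch of my own (not how YouTube actually works, and all of the videos, topics, and weights are made up) showing how a single click can bias every recommendation that follows:

```python
import random

# Toy recommender: one click nudges all future recommendations
# toward the clicked video's topic. A deliberately tiny illustration.
videos = {
    "cat video": "pets",
    "dog video": "pets",
    "news clip": "news",
    "protest footage": "news",
}

weights = {"pets": 1.0, "news": 1.0}  # start with no bias


def click(video):
    # Each click doubles the clicked topic's weight.
    weights[videos[video]] *= 2.0


def recommend():
    # Pick a topic in proportion to its weight, then suggest its videos.
    topics = list(weights)
    topic = random.choices(topics, [weights[t] for t in topics])[0]
    return [v for v, t in videos.items() if t == topic]


click("protest footage")  # one action...
# ...and "news" is now twice as likely to be recommended as "pets",
# which in turn invites more news clicks: a feedback loop.
print(weights)
```

The point of the sketch is the loop: the output of one decision becomes the input that shapes the next one, which is exactly the “one action defines further interactions” dynamic.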
A police beat algorithm was created to determine where crime occurs and how it should be responded to. McIlwain notes, very poignantly, that black people had no hand in its creation; and as I discussed in my last reflection, algorithms are inherently biased. In this case, the bias is racial. We can see in today’s discourse how this image of “black thuggery and lawlessness” persists, which can affect AI interpretations of things like education and government.
Whoops. I am forgetting how long-winded I am, so I will try to make this short(er).
The last subject we discussed was the intersection between AI and creativity. Do AIs have the ability to be truly creative? Can their artworks, which are synthesized from pre-existing art, be distinguished from those of people?
I tried my hand at that. We played a Kahoot in class called “Bot or Not?” where we were tasked to determine whether various traditional and written artworks were made by humans or AI. I had a 10-answer correct streak that covered pretty much all of the images, but when it came to the writing, I got most of them wrong. While there is freedom in fine art, how a brush stroke is applied and how an aesthetic is judged passes through the biased human eye.
However, I think writing can be easier for AIs to analyze and replicate because the construction of natural language can be quantified. Humans also take advantage of writing on computers to create cool structures, such as various types of repetition that can be deemed “robotic”. I challenge you to give it a shot on a quiz I found here.
The last thing I wanted to share is the featured image of this post, which comes from an assignment where we were tasked to create an image based on an AI prompt generated by Janelle Shane. My prompt was “gangly moonlit grave rabbits lurk outside the windows,” and I used sites such as Pixlr and PhotoMosh for the image itself and the TV monitor GIF effect, respectively.
I used two different rabbits for the parts, including the whiskers, as well as a wolf whose body I distorted for the body. Is it spooky? Or would you want one as a pet?
Does this mean that AI can inspire great ideas? Can it be credited for the piece I created? These are rhetorical questions that will be tackled more and more over time. This node has made me excited to witness, and perhaps participate in, the growing discourse.