Reflection:「Node 1」

Howdy y’all! Welcome back to my blog. This week’s topic is a reflection on Node 1, part of which I explored in my last post.

Node 1 covered a fair amount of ground, including basic coding with dictionaries and the nature of algorithms. I was surprised at how broad some of these topics can be: I have long known that algorithms control my interactions with media, but I had never dwelled on how complacent I am. I sit through five, sometimes more, advertisements in a row when listening to Spotify, and while it irritates me, I take it. (And I stubbornly don’t invest in Premium. Though is there a point to ‘rebelling’ against it?)

Maybe it’s because I got used to watching Charlie the Unicorn and almost every other video without ads growing up.

Notice how the number, length, and placement of ads have grown over the years?

Because of this unit, we can now consider ourselves more digitally fluent: I can tell you what some functions are for and how they work (especially thanks to my basic Arduino background). The most interesting parts to me were when we dipped into digital citizenship, which requires ethical thinking. We got to try to answer the question: “What should we do with or about it?”

And while there is never a right answer, trying to answer these questions (emphasis on trying) has encouraged me to become digitally fluent. According to Douglas Rushkoff in his talk Program or Be Programmed, we can’t tell whether the inherent bias in programming comes from the medium itself or from the programmer unless we understand the technology. Once we do understand it, both our understanding and our use of these tools can be intentional.

“The more humans become involved in their design, the more humanely inspired these tools will end up behaving.”

Douglas Rushkoff

I felt as if we could start to see more ‘humanity’ in our code with Code Practice #2. Obviously, the very nature of quantifying traits isn’t very representative of the human condition, but it is a start. I made sure to incorporate values that represent various parts of myself: my general physicality, my needs, and my mindset.
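
Here’s a minimal sketch of the idea in Python (the trait names and numbers are illustrative stand-ins, not my exact submission):

```python
# A rough sketch of my Code Practice #2 dictionary. Each value quantifies
# one part of me -- my general physicality, my needs, or my mindset --
# on a 1-10 scale. (Illustrative numbers, not my exact submission.)
me = {
    "stamina": 6,
    "sleep": 8,
    "creativity": 9,
}

for trait, value in me.items():
    print(f"{trait}: {value}/10")
```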

Can you tell which value belongs to which category? Should I have rated creativity lower?

In the podcast “Biased Algorithms, Biased World,” Cathy O’Neil explains that when we quantify different aspects of something in order to measure it (success, for example), each stand-in value is called a proxy. One of Google’s definitions of a proxy is “a figure that can be used to represent the value of something in a calculation.”

Sure, pretty straightforward. But the real key to that definition is the word represent. Your interests, demographics, and even your face are represented by values as part of a calculation. O’Neil further mentions that no one can truly determine whether an algorithm is working: there is no standard for them. Now, I’m not trying to paint them in a negative light. I just want to express how all these things I have taken for granted mean a little something more to me now.
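
To make that concrete, here is a toy proxy calculation (entirely made up; the inputs and weights are whatever a hypothetical designer decided):

```python
# A toy proxy calculation, entirely made up for illustration.
# An algorithm can't measure "success" directly, so it represents a person
# with stand-in values (proxies) and combines them in a weighted sum.
person = {"followers": 1200, "posts_per_week": 4, "avg_likes": 85}
weights = {"followers": 0.5, "posts_per_week": 0.2, "avg_likes": 0.3}

success_score = sum(person[k] * weights[k] for k in weights)
print(f"success score: {success_score:.1f}")
```

Whether that number means anything is exactly O’Neil’s point: someone chose those proxies and those weights.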

My favorite point from “Biased Algorithms, Biased World” – and probably the most thought-provoking for me – is that the creator of an algorithm can define success in a way that is the opposite of how its targets would. Consider where that can apply in your own life.

This can be applied to Season 4, Episode 4 of the TV show Black Mirror, titled “Hang the DJ”. Take a peek at my pseudocode from Code Practice #3 below:

SPOILERS BELOW. BEWARE.

This episode is about a program that tries to find one’s “perfect” match (with a 99.8% success rate!™), and this example focuses on two of Frank’s partners: Amy and Nicola. The AI, named Coach, collects data from various interactions with various partners. However, as Amy speculates, it seems as if part of the system is to wear its users down through perpetual re-pairing, to the point where they treat s*x as data too – just something to get over with. Of course, this contrasts with the interest of the users: to find someone who matches the values they already hold.

Coach may consider intimacy and humor as prominent traits and assign them values like the ones we used in the practice. You can discern Frank’s ‘preferences’ (as determined by Coach) in the sketch below.
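
Here is a simplified Python sketch along the lines of my Code Practice #3 pseudocode (the trait values and the scoring rule are my own guesses at how Coach might work, not anything the episode confirms):

```python
# A speculative sketch of how Coach might score Frank's matches.
# All trait names and numbers are invented for illustration.
frank_preferences = {"intimacy": 9, "humor": 7}

partners = {
    "Amy":    {"intimacy": 8, "humor": 8},
    "Nicola": {"intimacy": 3, "humor": 2},
}

def match_distance(prefs, partner):
    # Smaller distance between Frank's preferences and a partner's
    # traits means a better match (in this made-up scoring).
    return sum(abs(prefs[t] - partner[t]) for t in prefs)

for name, traits in partners.items():
    print(f"{name}: distance {match_distance(frank_preferences, traits)}")
```

Run it and Amy comes out far ‘closer’ to Frank than Nicola, which tracks with the episode.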

A fun little tidbit: we got to try StoryFace and its near-parody of what it means to be authentic. I won’t spoil anything, but I kept trying to date Pam Beesly from The Office. See the results below (but try it for yourself).

Thanks for checking out my blog! Even if you didn’t learn something brand-new, I hope you will think critically about how algorithms present content in your everyday life, and about the limits of the information that can be discerned about your being.

Sincerely,
Sterling

Return + The Soft Truth

Howdy folks! Welcome to my blog. I see that I have been getting some traffic here in the past few days, which is a really nice surprise. I hope some of y’all will stick around for my upcoming content for DGST 395: Applied Digital Studies.

It will be a different kind of work – the stuff I wrote and created for Digital Storytelling was erratic and vulnerable. Those posts came from a different person at a different time (of course, I’m still the same in many ways, but I last posted in Spring 2020). I hope to invest some of the same creativity and heart in these works as well, though they will revolve more around class readings and activities for now.

I will endeavor to structure my blog in such a way that you can browse my Digital Storytelling and Applied Digital Studies content separately. (Should I keep the 80s/They Live theme? I see some of you come from DS106 – does your class have a theme?) But without further ado, let’s get into today’s work!

A video that would have made me hoot and holler a couple of years ago, but it’s still fun (prepare your ears)!

My first task was to read The Soft Truth by Leigh Alexander, “an ‘algowave’ short fiction.” It is a short narrative written so vividly (despite its surreal elements) that it surprises me that it is a work of fiction. To summarize, the story revolves around a woman who keeps seeing an alternate version of herself: one that she seems to inwardly blame for some of her delays, longings, and misfortunes. This journey to connect with her other self happens alongside her search for a certain “satisfying” gelatin sphere video, as well as the aftermath of being fired from her consulting research job. I don’t want to spoil much, but essentially all three threads converge in a way that leaves the narrator feeling satisfied in more senses than one.

A gelatin sphere such as this. I am surprised the cover isn’t clickbait. Click it. I dare you.
Any fellow players think this would be cool with a gelatinous cube?

If you look at the video, you can see how easy it is to destroy what is often considered the quintessential image of perfection.

Farewell, gelatin sphere. Even the idea of a sphere dissolves. A great and susurrating wave of pleasure washes coolly over the surface of my brain like one of those old mouthwash advertisements, and suddenly everything — I mean all of it, everything I know — makes exquisite sense.

Leigh Alexander, The Soft Truth

To further reflect on the ending of this story, I think it is important to determine what ‘algowave fiction’ is. And the answer isn’t on Google (I checked) – rather, it is in the text. Essentially, the narrator creates her own internal algorithms: one for pinpointing the aforementioned gelatin sphere video, and one for deciding when to answer her boss Veronica, which depends on her employment status and other factors. Her pre-firing routine is basically an algorithm too. I think the very existence of her “other me” symbolizes how one can make different choices and still arrive at the same conclusion: the gelatinous sphere.
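
If I had to write her Veronica-answering rule out as code, it might look something like this (my own speculative rendering; the factors and thresholds are invented, not from the text):

```python
# A speculative rendering of the narrator's internal "algorithm" for
# deciding when to answer Veronica. Everyday decisions really are
# if-then rules; all factors and thresholds here are invented.
def should_answer(employed: bool, hour: int, messages_today: int) -> bool:
    if not employed:
        return False  # post-firing, Veronica can wait indefinitely
    if hour < 9 or hour > 18:
        return False  # outside working hours
    return messages_today < 5  # only so much patience per day

print(should_answer(employed=True, hour=10, messages_today=2))  # True
```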

Therefore, algorithms are inherently flawed, and they do not necessarily serve the user’s interests.

I’m still in the learning process, so I can’t quite agree or disagree yet. Regardless, it was a neat read that I hope you check out as well! As a writer, I am still hanging on to the simple yet poignant imagery of the shoe; here is my favorite section:

The day I got fired, while I was waiting for the bus, I looked in my box of things and saw the Footprint Consulting foam sneaker, commissioned by Veronica as a staff gift. I compressed it in my hand as tightly as possible; I dug in my thumbnail and carved neat rows of shallow gills into it. I thought about how unfair it was that I never, no matter how much I searched and clicked around, got to see the red mesh sink slowly and ruthlessly into the firm face of the blue gelatin sphere.

Leigh Alexander, The Soft Truth

Note how organized the ‘gills’ seem to be – how the only way to deal with multiple deeply unsatisfying things is to make something satisfying. Anyways, the last part of my assignment here is to discuss my process on the coding exercises we have been working on.

Last semester, I took Honors Intro to Computer Science, which revolved entirely around programming an Arduino Uno. Many of the concepts and terms are very similar, such as strings, printing, and if-then conditionals. I’ve been referring to my class notes to help me with the Colab notebooks, taking note of the differences and Googling alternative solutions to jot down in my notebook or at the bottom of the Colab documents. Here are some of my Arduino notes (spot any Python equivalences?):
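
For anyone curious, here is roughly how a few of those notes translate, with the Arduino version in the comments (my own rough mapping, based on what we’ve covered so far):

```python
# A few Arduino-to-Python equivalences from my notes (rough mapping).

# Arduino: Serial.println("hello");
print("hello")

# Arduino: String name = "Sterling";
name = "Sterling"

# Arduino: if (x > 5) { Serial.println("big"); } else { Serial.println("small"); }
x = 7
if x > 5:
    print(name + " says: big")
else:
    print(name + " says: small")
```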

These coding exercises relate to the readings, videos, and podcasts in the sense that we must “program or be programmed,” as Douglas Rushkoff puts it. By learning these basic building blocks, we can understand more of Cathy O’Neil’s “weapons of math destruction” by observing how large-scale algorithms, such as those behind standardized testing, shape society. And after reading Leigh Alexander’s story, one can see that we create algorithms in our own lives every single day. Now we can learn the art of designing choices while always keeping ethics and biases in mind.

Kudos to anyone who stuck around for this long. Thank you for reading, and I hope you found my thoughts interesting!

Sincerely,
Sterling