No one asked for my opinions, but here’s my first Collection post about my favorite games, music, and other stuff. ’Cause why not? (I’m joking. This series is inspired by Khoi Vinh.)
Sayonara Wild Hearts is a beautifully crafted piece about the journey of heart mending. Who knew that motorcycle racing, electronic pop music, and neon colors would go so well with a story about heartbreak?
The last time I came across a game with such a strong emotional impact was Life is Strange. Sayonara stages its story in an impressionist world full of enchanting metaphors. The gold-collecting system encourages you to follow your heart to find the way out. The final chapter confronts the biggest enemy of heart mending in an unexpectedly sentimental way. The thrilling ride lasted about an hour, but the aftertaste and the fantastic soundtrack linger much longer.
The controls weren’t perfect. I haven’t touched racing games for years, and it’s pretty difficult to complete some levels in one pass. Playing on Apple TV with the tiny remote is straight-up painful. I had to skip multiple scenes in the final chapter, a decision I somewhat regretted. But it doesn’t matter: you can feel the creativity bursting off the screen as you fly past the cityscape mending your broken heart. I loved it.
On Kate Rutter’s recommendation, I took Rob Walker’s advice in The Art of Noticing: 131 Ways to Spark Creativity, Find Inspiration, and Discover Joy and mapped out my perception of my home through the sense of touch.
Before I started the investigation, I looked around my house: there are literally no natural objects! Well, except for the last three oranges in the fruit basket. After this realization, my house suddenly felt like a depressing place to live.
Here are some of the observations from my investigation.
Based on my feelings (……), I produced a set of overly generalized tactile maps of my home.
Touch is probably one of the most ignored senses in interaction design, and I’m surprised by how powerful it is at producing emotional responses. When I tried to describe the tactile sensations, many of my words were extremely subjective. Physical contact elicits stronger emotions than I anticipated. I’m happy that I picked this investigation and became more aware of its significance.
What does a more transparent YouTube look like? Over the last three weeks, I set out to re-imagine a YouTube experience that helps us understand algorithmic personalization and make better sense of the world.
Thanks to Google’s deep neural networks, only dozens of videos show up on our home page. The algorithm is capable of sifting through billions of videos and serving different ones to individuals in real time. Google describes it as “one of the largest scale and most sophisticated industrial recommendation systems in existence.”
The algorithm allows us to find the most viral cat videos, the epic fails on The Voice, and vlogs about the world’s most expensive food. It also lets inappropriate videos autoplay for children after their favorite animated series. It also allows pedophiles to discover homemade videos of kids in swimsuits playing in the backyard pool. It also allows conspiracy believers to binge-watch a daily dose of radical political theories.
What videos are shown to us? Many of us might already know that YouTube recommends videos based on our activity history, but our understanding ends there. When the impact is amplified to a global scale, we are left in a vulnerable position.
We don’t have agency over these video recommendations, or over many other streams of information in our digital lives today.
Transparency is the first step towards agency. To start giving people control over what’s served to them, we could try to answer:
How might we make the process and implications of video recommendations more transparent to viewers?
In an iterative control cycle, a personalization algorithm should demonstrate transparency at the moments below.
The algorithm should state its objectives: to find the videos I’m more likely to engage with.
For each video, I should be able to see YouTube’s prediction of my engagement and the specific attributes.
I should be able to see how my actions influence YouTube’s understanding of my watching habits in real-time.
Here are two concepts that aim to help viewers gain a better understanding of video recommendations within the existing YouTube experience.
When viewers visit YouTube, they see attention scores under the recommended videos: a metric representing YouTube’s prediction of how likely viewers are to engage with each video or topic.
Attention scores declare the presence of algorithms. They provide entrances to understanding video recommendations.
Attention scores are understandable. They transform the numeric values of algorithmic output into percentages that viewers can understand and compare between videos.
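As a rough illustration of that transformation, raw model outputs could be normalized so that the scores of the videos on a page are directly comparable. The softmax normalization below is purely my assumption for this sketch; YouTube has not published how such a score would actually be computed.

```python
import math

def attention_scores(raw_scores):
    """Normalize raw engagement predictions into percentages summing to 100."""
    # Softmax: exponentiate each raw score, then divide by the total,
    # so higher model outputs map to visibly larger percentages.
    exps = [math.exp(s) for s in raw_scores]
    total = sum(exps)
    return [round(100 * e / total, 1) for e in exps]

# Hypothetical raw model outputs for three recommended videos:
print(attention_scores([2.0, 1.0, 0.5]))  # [62.9, 23.1, 14.0]
```

Showing the scores as percentages relative to the other videos on the page, rather than as raw model numbers, is what makes them comparable at a glance.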
Hovering over an attention score reveals attribute labels, the factors that influence the recommendation and contribute to the score of each video.
When playing a video, viewers can also see the attention score and the attribute labels at the side.
Hovering over labels reveals short descriptions and relevant statistics.
Attribute labels add clarity to why and how videos are recommended. They demonstrate the capabilities and limitations of the algorithm.
Attribute labels are contextual. They surface an appropriate amount of behind-the-scenes information based on viewers’ preferences without disrupting their exploration.
The attention score stays on the page and updates in real time as viewers continue to interact with the video (watch, like/dislike, comment, etc.).
Through long-term usage, viewers can further understand how their different behaviors refine the recommendations.
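To make that real-time behavior concrete, here is a toy sketch of a score nudged by each interaction. The signal names and weights are invented for illustration; the actual features and update rules of YouTube’s system are not public.

```python
# Invented interaction signals and weights, for illustration only.
SIGNAL_WEIGHTS = {"watch_minute": 1.0, "like": 5.0, "comment": 3.0, "dislike": -5.0}

class AttentionScore:
    def __init__(self, initial=50.0):
        self.score = initial  # percentage displayed next to the video

    def observe(self, signal):
        """Update the displayed score after each interaction, clamped to 0-100."""
        self.score = max(0.0, min(100.0, self.score + SIGNAL_WEIGHTS[signal]))
        return self.score

# A viewer watches two minutes of a video, then likes it:
score = AttentionScore(initial=50.0)
for signal in ["watch_minute", "watch_minute", "like"]:
    score.observe(signal)
print(score.score)  # 57.0
```

The point of the sketch is the feedback loop: each observable action immediately moves the number the viewer sees, so over time they can connect their own behavior to the recommendations they get.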
YouTube developed a sophisticated machine learning algorithm that remains largely opaque to most people. Explaining the algorithm and its impact to non-engineers is a challenge in itself. How can we make this technical information more understandable to viewers?
To define an appropriate scope, I drafted the principles that helped me unpack the meaning of transparency from the perspective of YouTube viewers. Inspired by the design frameworks that teams like Oracle Design have used for automation, I listed the principles above based on the popular iterative feedback loop models, such as plan-do-check-act and build-measure-learn.
From the principles, I started looking at the opportunities on the YouTube homepage, where most videos are recommended to us.
When brainstorming concepts and key uses, I found it essential to treat the new features as organic additions to the current YouTube experience. The behind-the-scenes information should not change the way viewers prefer to explore videos.
To complement a design process driven by secondary research, I sought early feedback on the concept and its usability. I reached out to six people in my school community who use YouTube frequently.
I believe people understand a concept and its implications better when they can see their own data in the prototype. Therefore, each conversation started with an exploratory interview, from which I created a low-fidelity prototype populated with the interviewee’s data for testing.
Interviewees were asked to pick a video they wanted to watch at the moment and explain the reasons. I populated the attribute labels based on their described watching habits.
Most interviewees found the features difficult to understand at a glance due to the limited interactions available in the prototype. The iteration therefore focused on providing additional explanations and statistics to make the information more useful.
How will seeing the process behind video recommendations change our watching habits? The interviewees answered this question while sharing stories about their lives with YouTube.
“I would find a feature like this interesting in the beginning, but after a while, it might feel creepy to see all the things YouTube knows about me.”
Julie, a fine arts college student, frequently looks at the home feed when she’s looking for videos to watch with a meal. She subscribed to multiple vloggers who travel and explore food.
When I asked Julie to show me her homepage, she started explaining why videos are there. “My friend recommended this food channel to me, so I searched for it, and it has been on my homepage since.” “Why is this video even here?” I asked her to pick a video she wants to watch the most at the moment, and she pointed to a food vlog in Taipei. “This was from my favorite vlogger, but I haven’t looked at her stuff recently. This looks delicious.”
When asked what would happen if features like these existed in reality, she said, “I don’t know how it would change my habits in the future, but I guess ultimately I would have a more dynamic relationship with YouTube, because I’d know what it thinks of me when I use it.”
Compared to Julie, Lynn is much more aware of the time she spends on YouTube. A busy architect, she turns to YouTube for tutorials and immediate relaxation. She is very tactical in searching for the right episodes of drama shows, or the white-noise videos that help her fall asleep.
She said, “I feel guilty that I spent time watching mostly useless stuff. I’ve wanted to watch news or something more meaningful on YouTube, but I haven’t found a reliable channel yet. It doesn’t help that the platform keeps pushing the same things to my homepage.”
Lynn continued, “I think I spend a lot of time on YouTube not knowing what I’m doing. YouTube keeps recommending video after video, as if it keeps feeding me while I’m trapped in this place.”
“If you’re conscious of it, this feature could act like a report card, a summary of the things I watched over the last month. I might treat it like the Screen Time feature. When I look at the labels, I would wonder: what have I watched?”
Taylor, an industrial designer and music lover, avoids interacting with YouTube as much as possible.
She describes watched videos as information she has already processed. “I’m tired of searching for something random just for research, and then seeing similar videos on my homepage. Seeing a large pile of them thrown back at me makes me feel like I’m being bombarded with trash. It’s overwhelming.”
At the same time, she finds the intimate connections of the videos to her life uncomfortable. “It somehow seems that all the videos on the homepage have something to do with my life. Opening the homepage, especially in front of others, is almost like exposing my private life out in the open.”
She’s aware that many of her actions leave an impression on YouTube. “I tend to linger on music videos that I like for a bit longer and hope YouTube will learn about my interests.”
“Sometimes, it feels like I’m not using a platform. It feels like I’m talking to an AI. Every action I take is recorded and mined for meaning. It’s scary sometimes that it’s become more human and passes judgment on my activity.”
“If the feature existed, I would run away from YouTube and force myself to use it less. It feels creepy when my personal info hangs next to the video I’m watching, because it suggests that my information is in someone else’s hands. It feels like I have no control over the situation.”
“It’s brutal. I’m already concerned about the tracking behaviors from companies like Google. People like me will probably be scared away when the camouflage of these behaviors disappears, and everything gets exposed.”
In June 2019, YouTube shared updates on its efforts to deal with harmful content on its platform: mostly changes to the algorithms that promote, demote, and remove videos based on its policies. In October 2019, Mozilla questioned the effectiveness of YouTube’s approach and stated that “the era of ‘trust us’ is over.” It listed recommendations for YouTube to give independent researchers better tools to understand the problem. Meanwhile, Mozilla’s #YouTubeRegrets project shed light on how people’s lives are affected by dangerous recommendations from YouTube. These individual stories are often dismissed in our public discourse, but they remind us that designers aren’t only building experiences: design at this scale has a social and cultural impact on all of us.
When we make products with transparency and honesty in mind, we can create a culture where more dialogue exists between humans and technology, and a future where we are collectively empowered to understand our decisions.
All Eyes on It is a part of Aosheng Ran's senior thesis project, Semilattice.
This writing is also published on Medium.
Especially grateful to Kate Rutter for being an amazing mentor who motivates and challenges me to push forward.
Special thanks to Apurva Shah and Weiwei Hsu for giving enlightening feedback on drafts of this article.
Thanks to the lovely anonymous people who agreed to have conversations with me and share vulnerable moments of their lives.
Thanks to the Oracle Design team for sharing insightful knowledge with the CCA community.
Thanks to Nathalia Kasman for many inspiring discussions.