Seriously, it feels so amazing to bring a concept out of the misty air and down to a prototype that I can show around and explain to other people.
Last week, when I looked at the problem of algorithmic personalization on YouTube, I saw two issues: 1. viewers' lack of awareness of the existence and implications of the recommendation engine; 2. their lack of agency in controlling their experience and the information they receive.
The prototype I’m building currently explores awareness/transparency in the problem space. To an extent, I think agency requires awareness. I do want to build something people can experience themselves. To do so, I built Blinds, a Chrome extension that lets people gain some control over their YouTube viewing experience by hiding video recommendations on the homepage and when watching a video. It is a very simple CSS-injecting extension. If you’re reading this, check it out on the Chrome Web Store!
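As a rough sketch of how a CSS-injecting extension like this can be wired up (the file names and CSS selectors below are illustrative assumptions, not Blinds' actual source), the manifest registers a stylesheet as a content script on YouTube pages:

```json
{
  "manifest_version": 3,
  "name": "Blinds (illustrative sketch)",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["*://www.youtube.com/*"],
      "css": ["blinds.css"]
    }
  ]
}
```

The stylesheet then hides the recommendation containers:

```css
/* blinds.css — these selectors are assumptions about YouTube's DOM
   and may need updating as the site changes. */

/* Hide the homepage recommendation grid. */
ytd-browse[page-subtype="home"] ytd-rich-grid-renderer {
  display: none !important;
}

/* Hide the "Up next" / related-videos sidebar on watch pages. */
ytd-watch-next-secondary-results-renderer {
  display: none !important;
}
```

No JavaScript is needed for this approach; the browser applies the CSS on every matching page load.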
Back to the prototype. What does transparency look like? That question guided me to build a prototype that shows when a viewer could see the algorithm and understand how it influences the choices they see. I drafted a definition of “transparency” in video recommendation algorithms through secondary research, including YouTube’s explanation of its neural network algorithms and my learnings from the Design for Trust class at CCA.
The definition is below:
I brainstormed the concepts and built a prototype with the following features:
I’m at the end of the first sprint of my senior project, and I feel the need to extend it. I think it is worth doing because there is so much promise in the first prototype – I’d love to see how other people react to it.