Interested in the gaps in algorithmic personalization, I talked to Apurva Shah, my professor at CCA, who is also the CTO and cofounder of Duality, a startup that develops a QA and prediction platform for autonomous robotics. I was inspired by his thoughts on the different relationships between customers and providers of algorithmic personalization. Below are two types and their examples.
I started to reflect on the moments when I feel overwhelmed, especially overloaded with information. Did I ask for it? If not, why is it there? These questions struck me. Answering them matters because it helps distinguish which digital experiences take advantage of people's lack of consent.
In my short exploratory interviews with friends, YouTube came up as a well-understood example of a platform that recommends deeply personalized content. It makes use of cutting-edge deep learning technology from Google. This effectively designed algorithm boosts engagement by exploiting human desire. YouTube has been condemned for pushing inappropriate videos to children and for accelerating political radicalization, consequences that stem in part from people gaming YouTube's algorithm.
Coming back to the project, what can be done here? After getting my thoughts flowing with sticky notes, I wrote down the molecules (people/problem/solution statements). Wow, so fast! Who knew?
It seems easier to decide which problem I want to tackle when the project is only two weeks long. I'm more comfortable rolling with assumptions and experimenting, because there is less commitment to the idea and thus less fear of failure.
The personal challenge is to find something worth making for the rest of the week. I can't change YouTube's algorithm, but I can find more impactful ways to address the problem. Below are two facets of the problem/solution at different levels of fidelity.
Wow, I just gave myself so much work.