
Conceptual Design: Cross-Platform Recommendation Engine

Angie Bellanich and Alex Wang


INFOT780
October 27, 2021
Table of Contents

Problem Statement
Brainstorm Process
Ideas
Conceptual Design Considerations
Conclusion
References

Problem Statement
Media streamers in America need a way to leverage their preference data across
multiple applications so that streaming platforms can provide smarter recommendations and
create a more ubiquitous experience.

Brainstorm Process
To conceptualize designs for the Cross-Platform Recommendation engine (CPR), we met
virtually via Google Meet due to our geographic distance. We began the brainstorming process
by setting a five-minute timer and writing down ideas individually, challenging ourselves to
generate as many ideas as possible without influencing each other. Once the five minutes were
up, we reconvened with six ideas in total. Our discussion then generated four more. It is worth
noting that the first few ideas came rather easily, while the remaining four pushed us to think
outside the box and to explore the realm of speculative design.

Ideas
1. Aggregate users’ data from multiple streaming applications and provide
recommendations for new content to explore in a newly created CPR mobile app.
2. Create a voice user interface where users can ask the AI for recommendations based on
data from multiple streaming applications.
3. Aggregate users’ data from multiple streaming applications and provide in-app
recommendations for new content to explore within the existing streaming apps.
4. Create a Chrome extension that aggregates data from streaming applications to provide
recommendations in a web-based interface.
5. Make recommendations based on the mood/genre of the media content.
○ E.g., if a user liked a “sad” song, CPR would recommend a “sad” movie.
6. Create a VR experience where users can select media in an “immersive” world
(gesture-based interaction).
7. Create an AR experience that provides recommendations based on the content users are
currently interacting with.

8. Project a hologram from a smartwatch that users can interact with, much as they would
with another person, to ask for content recommendations.
9. Create an app experience controlled by a user’s knee (Figure 1).
10. Create an app experience controlled by a user’s brainwaves.

Figure 1: Additional devices used in the first comparative evaluation of a mouse: (a) Joystick, (b)
Lightpen, (c) knee-controlled lever, (d) Grafacon [5].

After discussing our ideas, we decided to choose idea three: aggregate users’ data from
multiple streaming applications (focusing on Spotify and Netflix) and provide in-app
recommendations for new content to explore within the existing Spotify and Netflix apps. We
considered creating a dedicated app with a compiled list of recommendations, but decided
against this because we predict that users will be more likely to use CPR if it integrates into
the existing Spotify and Netflix user interfaces. The other ideas, though interesting, are
simply not feasible given the current state of technology. In particular, idea ten (an interface
controlled by brainwaves) is deep in the realm of “pseudoscience,” according to Prof.
Youngmoo Kim at Drexel University. Despite what Elon Musk would have you believe is possible
with Neuralink, it is not suitable for CPR at this moment.
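
To make the chosen direction more concrete, the sketch below illustrates one way such
cross-platform aggregation could work. It is a minimal illustration, not a committed design:
the `spotify_likes` and `netflix_history` inputs, the genre-based profile, and the signal
weights are all hypothetical stand-ins for whatever data the final system actually ingests
(Netflix, in particular, exposes no public API, so data access remains an open question).

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    """Unified, cross-platform view of a user's tastes (illustrative)."""
    genre_weights: dict = field(default_factory=dict)

    def add_signal(self, genres, weight):
        # Each liked or watched item adds weight to its genres.
        for g in genres:
            self.genre_weights[g] = self.genre_weights.get(g, 0.0) + weight

def build_profile(spotify_likes, netflix_history):
    """Merge signals from both platforms into a single profile.

    Both inputs are assumed to be lists of records with a "genres"
    field, e.g. {"title": "...", "genres": ["indie", "folk"]}; how
    that data is actually obtained is left open here.
    """
    profile = PreferenceProfile()
    for track in spotify_likes:
        profile.add_signal(track["genres"], weight=1.0)   # explicit "heart"
    for show in netflix_history:
        profile.add_signal(show["genres"], weight=0.5)    # implicit watch signal
    return profile

# Example with toy records:
profile = build_profile(
    spotify_likes=[{"title": "cardigan", "genres": ["indie", "folk"]}],
    netflix_history=[{"title": "The Crown", "genres": ["drama", "history"]}],
)
print(profile.genre_weights)
# {'indie': 1.0, 'folk': 1.0, 'drama': 0.5, 'history': 0.5}
```

The explicit/implicit weight split reflects the design intuition that a deliberate “heart” is
a stronger preference signal than passively watching something; the exact values would need
tuning in a concrete design.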

Conceptual Design Considerations


Many AI researchers have begun crafting ethical frameworks to evaluate the impact of
an AI system on people and societies. Keyes et al. proposed an ethical framework built on the
three pillars of fairness, accountability, and transparency [3]. Within this framework, an AI
system is considered fair when no biases create unfair or discriminatory outcomes, accountable
when it is answerable to the people using the system, and transparent when it is open about
when and why decisions are made [3]. Like the Keyes framework, Guszcza et al. propose an
ethical framework resting on the three pillars of impact, justice, and autonomy [2]. Impact
focuses on the moral quality of a technology, promoting non-maleficence and beneficence;
justice ensures people are treated fairly, both procedurally and distributively; and autonomy
looks to give people their own choice, free of manipulative forces, through comprehension and
control [2].
As we design CPR, we look to uphold the values of these ethical frameworks to ensure
users have an equitable experience. Historically, one of the greatest concerns with
recommendation systems is that over-recommending one type of content can have negative side
effects, as in the unethical study conducted by Facebook. Users were divided into two groups:
one group was exposed to a reduced number of posts containing positive emotions, and the other
to a reduced number of posts containing negative emotions. Concern was raised because Facebook
users were not made aware of their participation in the study and were not given a choice to
opt out. Facebook argued that users agreed to the study when they accepted the terms and
conditions during signup. However, many researchers and members of the public noted that
exposure to an increased number of negative posts could harm mental health, especially for
those already battling mental illnesses such as anxiety and depression [4].
As we continue to create a concrete design for CPR, we will keep in mind the guidelines
laid out by Amershi et al. [1]. In many recommendation systems, unlike Netflix and Spotify,
there are limited ways to provide feedback to the system, which makes the recommendations
opaque. Netflix currently provides transparency by displaying text explaining that recommended
content is similar to something previously added to a user’s watchlist (Figure 2), allowing
users to give content a thumbs up or thumbs down, and displaying a percentage indicating how
well an item matches other content a user has watched (the greater the percentage, the more it
matches previously watched content). Within the Spotify UI, users can provide feedback that
they like content by “hearting” it.

Figure 2: Netflix provides feedback to users on recommended content to create transparency for
recommendation algorithms [9].
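
As a rough illustration of the kind of score that could sit behind a Netflix-style match
percentage, the sketch below computes a cosine similarity between a user’s genre weights and
an item’s genres, scaled to 0–100. Netflix’s actual scoring model is proprietary; this
function and its genre-vector representation are our own illustrative assumptions.

```python
import math

def match_percentage(genre_weights, item_genres):
    """Illustrative stand-in for a Netflix-style "% match": cosine
    similarity between the user's genre weights and the item's genre
    set, scaled to 0-100. Netflix's real model is proprietary."""
    dot = sum(genre_weights.get(g, 0.0) for g in item_genres)
    norm_user = math.sqrt(sum(w * w for w in genre_weights.values()))
    norm_item = math.sqrt(len(item_genres))
    if norm_user == 0 or norm_item == 0:
        return 0
    return round(100 * dot / (norm_user * norm_item))

# A profile that leans heavily toward documentaries:
weights = {"documentary": 3.0, "comedy": 1.0}
print(match_percentage(weights, ["documentary", "history"]))  # 67
print(match_percentage(weights, ["comedy"]))                  # 32
```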

Our goal is to make CPR transparent and to give users control over the content that is
recommended to them, in order to minimize harm. We will achieve transparency by providing
users with explanations of why certain content is recommended. Instagram has recently adopted
this approach to add transparency to its opaque, black-box algorithm (Figure 3). At the bottom
of the post, the text reads, “Because you saved a post from allthingsgood”. Instagram’s
phrasing is similar to the feedback that Netflix displays.

Figure 3: Instagram provides feedback to users on recommended content to create transparency
for recommendation algorithms [6].

We believe providing users with the information needed to understand why certain content is
recommended will enable them to have better control over their experience and hold CPR
accountable for its actions.
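
A minimal sketch of how such explanation strings could be generated is shown below. It assumes
each recommendation is stored with metadata about the user action that triggered it; the
action names, templates, and function signature are hypothetical, not an existing API.

```python
def explain_recommendation(action, item, platform):
    """Build a short reason string in the style of Netflix's and
    Instagram's explanations. Assumes the triggering user action is
    logged alongside each recommendation; names here are hypothetical."""
    templates = {
        "hearted": 'Because you hearted "{item}" on {platform}',
        "watched": 'Because you watched "{item}" on {platform}',
        "listed":  'Because you added "{item}" to your list on {platform}',
    }
    # Fall back to a generic explanation for unrecognized actions.
    template = templates.get(action, "Based on your recent activity on {platform}")
    return template.format(item=item, platform=platform)

print(explain_recommendation("hearted", "folklore", "Spotify"))
# Because you hearted "folklore" on Spotify
```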

Once users are presented with the information needed to understand why certain content
is recommended, we plan to design mechanisms that give them control by providing explicit
feedback to CPR. Many websites now ask visitors what information they want collected via
cookies, and users have the option to turn certain cookies on and off to best fit their needs
(Figure 4). With CPR, users will have the ability to provide explicit input on what content
they do and do not want to see.

Figure 4: GOV.UK asks users for their preference on data collection [7].

While the exact design will be revealed in the concrete design stage, we envision that users
will have multiple ways to interact with their content. For example, if a user no longer wants
to receive recommended content about Taylor Swift, we imagine they will have the ability to
reduce the amount of Taylor Swift content or stop seeing it altogether, without any time
delay. Users may also have the option to set timers for when they want Taylor Swift content to
reappear in their recommendations. Keeping humans in the loop [8] lets CPR take immediate
action instead of relying on the system to take the necessary time to learn new interaction
and behavior patterns. We plan to build on these existing design mechanisms based on our
exploration of Instagram and of cookie collection on websites. We want users to feel empowered
by the control they have over their content streaming experience.
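
One possible shape for these controls is sketched below: a hard filter layered on top of the
recommender, so that blocking or snoozing a topic takes effect immediately and snoozed topics
reappear once their timers expire. The class and method names are illustrative, not a
committed design.

```python
from datetime import datetime, timedelta

class ContentControls:
    """Explicit user controls applied as a hard filter on top of the
    recommender, so changes take effect immediately instead of waiting
    for the model to relearn preferences. Names are illustrative."""

    def __init__(self):
        self.blocked = set()    # topics never shown
        self.snoozed = {}       # topic -> datetime when it may reappear

    def block(self, topic):
        self.blocked.add(topic)

    def snooze(self, topic, days):
        # Hide the topic now; it reappears automatically after the timer.
        self.snoozed[topic] = datetime.now() + timedelta(days=days)

    def allows(self, topic):
        if topic in self.blocked:
            return False
        until = self.snoozed.get(topic)
        return until is None or datetime.now() >= until

controls = ContentControls()
controls.snooze("Taylor Swift", days=30)
recommendations = ["Taylor Swift", "Phoebe Bridgers"]
print([r for r in recommendations if controls.allows(r)])  # ['Phoebe Bridgers']
```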

Conclusion
Many Americans want media recommendations based on their preferences across
different platforms. We limited the scope of this project to two platforms, Spotify and
Netflix, as these are among the most popular streaming platforms in the country. During our
brainstorming process, we generated ten ideas with varied levels of creativity and
feasibility. Ultimately, we chose to provide recommendations through the existing Spotify and
Netflix user interfaces to create seamless integration for a ubiquitous user experience.
There are many ethical concerns about content presented to users, often without their
consent, and we want to reduce the possibility of harm by giving users more control over the
recommendations they see. We explored existing feedback mechanisms in the Netflix and Spotify
UIs, as well as those in Instagram and on the web. As we further develop our concrete design,
we will continue to reflect on existing ethical frameworks and AI design guidelines.

References
1. Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny
Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, and Kori Inkpen. 2019. Guidelines for
human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in
Computing Systems, 1–13.
2. James Guszcza, Michelle A. Lee, Beena Ammanath, and Dave Kuder. Human values in the
loop: Design principles for ethical AI. 28.
3. Os Keyes, Jevan Hutson, and Meredith Durbin. 2019. A mulching proposal: Analysing and
improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended
Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11.
4. Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock. 2014. Experimental evidence
of massive-scale emotional contagion through social networks. Proceedings of the National
Academy of Sciences 111, 24: 8788–8790.
5. I. Scott MacKenzie. 2012. Human-Computer Interaction: An Empirical Research Perspective.
Morgan Kaufmann.
6. Instagram. Retrieved October 26, 2021 from https://www.instagram.com/
7. Cookies. Retrieved October 26, 2021 from https://design-system.service.gov.uk/cookies/
8. Humans in the Loop: The Design of Interactive AI Systems. Stanford HAI. Retrieved October
20, 2021 from https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems
9. Netflix. Retrieved October 26, 2021 from https://www.netflix.com/browse
