RCA Questions for Product Managers
Uploaded by sombunakki4u

Top 100 RCA Questions w/ Solutions

I am Malay Krishna, director of product management at Vyapar, India's fastest-growing business management software. I mentor aspiring product managers in their journey of cracking product manager interviews.
I have built a mock-interview-led, personalised 1:1 program for product managers that helps candidates with the following:
CV Reviews
Building a Portfolio
Assignments by Potential Recruiters
1:1 Live Mock Interviews for Every Product Module (Not Much Gyaan,
Focus on Practical Application in Interviews)
Unlimited Warzone Interviews (Mock Interviews just before your
Actual Interview)
Endless Job Application Support & Mentorship
If you think this is something that interests you, I would love to get on a call
with you and explain the program in detail. You can block time with me by
clicking on this link.
List of Questions
1. If you could choose 5 metrics for TikTok's customer service platform,
what would those be?
How I would approach this question:
1. Understand the goal of the customer service platform:
1. What is the main business goal of this company, and how does the customer service platform serve that goal?
2. For this question, I'm assuming the user journey is that customers come to the customer service platform when they run into issues. Ultimately, the platform feeds into the top-line business metric of customer retention: do customers keep using TikTok even after hitting issues?

2. Brainstorm other assumptions I have about the customer service platform goals:
1. CS platform goals:
1. Resolve customer issues
2. Customer satisfaction with service (note: this would likely influence whether customers are more likely to return to the product)
2. Assuming there are no other issues with the CS platform funnel (e.g. what if the link to contact CS suddenly stopped working), overall it is important to TikTok to have fewer customer service requests, meaning fewer bugs/issues.
3. Lay out the user journey so I can identify key actions and metrics when customers interact with the customer service platform:
1. Customer hits an issue
2. Customer goes to the CS platform to ask for help
3. Customer waits for a response
4. Customer gets a response: either the issue is resolved, or it is put in the company backlog / left unresolved
5. Customer is satisfied with the service (or not)
6. Customer keeps using the product (or not)
4. Based on the goals I've defined and the user journey, I'm now going to brainstorm metrics, saving prioritization for the last step:
1. # of incoming issues that come into the CS platform
2. Response rate
3. Average handle time
4. % of issues that get resolved
5. CS quality rating / NPS
6. Customer retention after interacting with CS
5. Prioritization:
1. Customer retention after interacting with CS: this is the most correlated with our top-line business metric and goal.
2. # of issues that come into customer service: this can be a reflection of issues with the product itself (e.g. a sudden spike in bug reports).
3. CS quality rating / NPS: even if all issues got resolved, if customers aren't happy, does it even matter?
4. % of issues that get resolved: this metric can be used to address bug resolution internally and product quality.
5. Response rate: this can potentially be used to evaluate the quality of service your CS team is providing. However, if the metrics above are doing well, this one probably carries less weight even if it's doing poorly.
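To make these metrics concrete, here is a minimal sketch of how a few of them could be computed from a customer service ticket log. The ticket data and field names are my own illustrative assumptions, not TikTok's actual schema:

```python
from datetime import datetime

# Hypothetical ticket log; field names are illustrative assumptions.
tickets = [
    {"user": "u1", "opened": datetime(2024, 1, 1, 9), "first_response": datetime(2024, 1, 1, 10),
     "resolved": True, "csat": 5, "active_30d_after": True},
    {"user": "u2", "opened": datetime(2024, 1, 2, 9), "first_response": datetime(2024, 1, 3, 9),
     "resolved": False, "csat": 2, "active_30d_after": False},
    {"user": "u3", "opened": datetime(2024, 1, 3, 9), "first_response": datetime(2024, 1, 3, 12),
     "resolved": True, "csat": 4, "active_30d_after": True},
]

# Metric 1: volume of incoming issues.
incoming_issues = len(tickets)
# Metric 4: share of issues that get resolved.
resolution_rate = sum(t["resolved"] for t in tickets) / len(tickets)
# Metric 5: average CS quality rating.
avg_csat = sum(t["csat"] for t in tickets) / len(tickets)
# Top-priority metric: share of users still active 30 days after CS contact.
retention_after_cs = sum(t["active_30d_after"] for t in tickets) / len(tickets)
# Metric 2/3: mean hours from ticket open to first response.
avg_response_hours = sum(
    (t["first_response"] - t["opened"]).total_seconds() / 3600 for t in tickets
) / len(tickets)

print(incoming_issues, round(resolution_rate, 2), round(avg_csat, 2),
      round(retention_after_cs, 2), round(avg_response_hours, 1))
```

In a real interview you would only name these ratios; the point of the sketch is that each prioritized metric reduces to a simple count or ratio over the ticket log.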
2. Imagine you are a PM at Meta, launching a new video conference
product. Define success metrics.
Clarifying questions:
Is this tied to any existing Meta ecosystem? WhatsApp or Messenger?
Who is this video conference product targeted at? Consumer or Business?
Assumptions:
It is a consumer product
It is integrated into WhatsApp
Meta's mission is to build community and bring people closer together. A video conference app aligns with that mission by helping groups of people come together as a community and meet each other.
Product mission: A video conference app - that builds community interactions by
allowing a group of people to come together and meet via video chat
Product goal: Make it easy for groups to organize video meetings that can be
attended by as many participants as desired
There are two User segments:
Organizers
Participants
North Star Success metrics:
Organizers:
Number of meetings per day/month
Participants:
Number of active meeting user-attendees per day/month

Engagement Metrics:
Average duration of meetings
Average Number of attendees per meeting
Number of recurring meetings
3. How would you improve the experience for Airbnb users following a
negative booking?
The problem statement: a group of users had a negative booking experience, which they reported to Airbnb via feedback/complaints.
Airbnb's goal is to help users find the perfect place to stay, anytime, anywhere.
First I would identify and bucket the issues to understand what kind of negative experience the users faced. It could be:
less information about host
unable to connect with host
unable to find places as per requirement
flow is breaking mid journey
price break up not correctly given
too expensive
booking methods are insufficient
My goals would be a) to reduce the impact of negative PR, as unhappy customers will leave bad reviews on X, TripAdvisor, etc., and b) to ensure we can retain these customers.
The above issues can be divided into a short-term and a long-term resolution plan:
1. Short-term fixes:
less information about host
unable to connect with host
flow is breaking mid journey
price break-up not correctly given

I will work with the CS team to ensure the right messaging: inform users that we are dedicated to improving our services. I will work with the engineering team to release hot fixes for the breaking booking journey flow, and to check whether messages/emails from the portal to hosts are failing to be delivered. I will also call a few numbers to check whether hosts are not picking up or whether incorrect phone number mapping has happened. Then I will work with the Supply team to find out whether those hosts have terminated their contracts with us, or whether there is a law & order issue explaining why phone lines are not working. Finally, I will work with the business & strategy team to see if we can show an accurate price break-up.
2. Long-term product improvements:
unable to find places as per requirement
too expensive
booking methods are insufficient
I will work with the supply team to get more inventory so that we have relevant search results for users. I will work with my team to create a roadmap to improve filters and the overall search experience. I will also find out what the major booking methods are and add them to our flow. Finally, I will work with business & strategy to do a competitive analysis and value evaluation, to provide good, cheaper options for users who find the current options too expensive.

4. How would you measure success for Facebook Events?


Interesting question: I have to measure success for Facebook Events.
I will approach this question by first defining the goals of Facebook Events, both product goals and business goals, and how they align with Facebook's vision. I will then think about the user's actions and related metrics to define how we can measure the success of Facebook Events.
Before I begin, I have a few clarifying questions & assumptions which I would like to state.
Clarifying questions

1. Are Facebook Events pre-release or post-release for the context of this question? Assume that the interviewer says it is pre-release.
Assumptions
1. I will assume that this will be launched on mobile first, with no desktop app.
2. I will assume that Facebook Events is in the main FB app itself, and is not extended to Instagram or other Meta products.
3. I will also assume that we plan to roll this out to a specific user segment, which I will define as we proceed.
What is Facebook Events? As a product, I am defining the scope I am considering. Facebook Events helps users discover and attend both online and offline events curated to their interests and location. A user can initiate an event and ask the members of their network to join, or decide to join a listed event.
The vision of the company & the product
1. Facebook's vision is to make the world and people more connected.
2. Facebook Events is completely in line with this vision, as it brings people together both online & offline.
Goals of Facebook Events
1. User & product goals:
1. Discovering events of their interest
2. RSVPing for online and offline events
3. Getting a reminder before the event
4. Attending the event
5. Post-event: giving feedback on the event
2. Business goals:
1. Increase in ad revenue: users spending more time on the platform
2. Earning via sponsored events: higher ranking when users are discovering events

Actions
This section covers the actions users should take to achieve the above goals. I will divide the actions into three sub-sections:
1. Acquisition & Activation
1. How does the user discover the Facebook Events tab/listing in the FB app?
2. How do users discover events of their choice within FB Events? What info does the user need to provide to onboard themselves onto FB Events?
3. How do users RSVP for their first event? What do users prefer: online events or offline events?
4. How many of the RSVP'd users actually attended the event?
2. Retention & Engagement
1. How many users come back and give feedback on their previous event?
2. How many users RSVP for another event within 30 days?
3. How many users create an event?
3. Monetization & Revenue
1. Have some users started to sponsor their events to promote them to other users? These might be businesses conducting meet-ups.
2. Are users willing to pay to attend an event?
Now that we have the user actions that will help achieve the above goals, we can start defining the metrics.
Also, since the product is pre-release, I will give acquisition and activation metrics higher priority.
Metrics
We will continue with the same structure as Actions to define metrics.
1. Acquisition & Activation
1. Discovery of FB Events -> users who have visited the FB Events tab / users coming to the FB app, day-on-day. This helps monitor how easy or difficult it is for users to discover FB Events within the FB app.
2. Discovery of specific events -> users who click on an event to see its details / users coming to the FB Events page. This metric helps us understand how well the recommendation logic for events is working and whether users are finding relevant events. A few additional metrics to judge recommendation & discovery of specific events: users scrolling on the events listing page; users using filters or search options to look for specific events.
3. RSVP -> users who RSVP to a particular event / users coming to the FB Events page. This metric is important to judge whether users have found an event that is really relevant for them.
4. Actually attending the event -> this is a very important metric but might be difficult to measure directly, so we may need a proxy. It represents the actual activation of the user: users who actually attended an event / users who RSVP'd to the event.
2. Retention & Engagement
1. Number of users who gave feedback on an event / users who RSVP'd to an event
2. Number of users who RSVP for another event within 30 days of a previous RSVP (30-day retention on RSVP)
3. Number of users who create an event / number of users who have attended an event
3. Monetisation & Revenue -> In the interview, I would check in with the interviewer and focus more on the above two categories. If users are discovering relevant events and are engaged, the ways to monetize FB Events are fairly straightforward.
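To make the ratio metrics concrete, here is a minimal sketch of the acquisition/activation funnel as simple counter ratios. The counter names and values are illustrative assumptions, not real FB data:

```python
# Hypothetical daily counters for the FB Events funnel (names are assumptions).
counters = {
    "fb_app_visitors": 1_000_000,
    "events_tab_visitors": 120_000,
    "event_detail_clicks": 60_000,
    "rsvps": 15_000,
    "attended": 9_000,
}

def ratio(numerator: str, denominator: str) -> float:
    """A funnel-step conversion rate between two counters."""
    return counters[numerator] / counters[denominator]

discovery_rate = ratio("events_tab_visitors", "fb_app_visitors")    # tab / app
relevance_rate = ratio("event_detail_clicks", "events_tab_visitors")
rsvp_rate = ratio("rsvps", "events_tab_visitors")
attendance_rate = ratio("attended", "rsvps")  # proxy for real activation

print(f"discovery={discovery_rate:.1%} relevance={relevance_rate:.1%} "
      f"rsvp={rsvp_rate:.1%} attendance={attendance_rate:.1%}")
```

Each metric in the list above is one such ratio; tracking them day-on-day shows which funnel step is leaking users.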
Guardrail metrics
These metrics ensure that we don't impact the things which are already working well, and act as guardrails while we track the metrics above.
1. Discovery of other products -> while we focus on enabling more users to discover FB Events, it shouldn't impact the discovery of FB's other products.
Pitfalls with the metrics defined
Things we need to be careful about in the metrics we have defined:
1. RSVP -> we need to monitor that users aren't RSVPing to an event without the intention of attending it.
2. We need to bifurcate business users from normal users once we see some traction on FB Events.
5. Tell me about your most effective team. What made them effective?
What methods do you use to make them effective?
Context: As a people manager, my biggest focus is fostering psychological safety for my team members. The way I do that: building rapport with my direct reports, holding weekly team meetings where I ensure all voices are heard, and having people share about themselves and encouraging them to get to know one another. Team members had a good understanding of each other's working styles.
One of the recent projects my team was responsible for was building out the Espresso test framework on Android. This was a 4-month project, and our goal was to automate P0 and P1 tests for the rider and driver apps. For this project, the SDETs pair-programmed with the QEs on automation, and the QEs paired with the SDETs on automatable test scenarios. It was a well-balanced team composed of a good mix of senior, mid, and junior-level engineers.
One of the ways the team was effective was that they brought diverse perspectives from their different skill sets and backgrounds. They shared their knowledge within the team as well as outside it, through brownbags and All Hands presentations.
As their manager, my role was to ensure they had the resources and support needed: helping the team with code reviews, acting as a liaison with infra stakeholders (i.e. the Client Tooling team for any CI needs), sharing comms with the rest of the QE team about framework updates, and working to get more resources from other teams (i.e. Driver).
Outcome of this project: we delivered on schedule, automated the P0 and P1 test scenarios, and reduced manual testing by 25% every week.

6. Friend requests are down 10%, what would you do?


Clarifying question 1: Define friend requests -- the number of friend requests sent on the platform through the "Add Friend" button.

Clarifying question 2: Time period for comparison - 10% - WoW or DoD or MoM? --
WoW
Gathering context:
1. Is the decline progressive or a one-time event? --> Progressive.
Because the decline is progressive, I'm ruling out technical glitches, downtime, or any other reason impacting the feature's uptime.
2. Is this decline global or regional? --> Global.
Because the decline is global, I'm ruling out regional influencing factors: no change of preference in any region, no competing product in a particular region, no regulatory constraints.
3. Is the decline on any specific platform: web (Chrome, Firefox, IE, Safari), mobile (Android, iOS), or a particular handset (Mi, iPhone, Galaxy, etc.)? --> Decline on mobile (both iOS and Android); no statistically significant difference observed across handsets.
4. Is there a decline in any other feature usage on mobile? --> No. Other metrics dipped, but not by 10%; no correlation with any other usage metric was found.
5. Is there a prominent user segment where we see the decline? --> Yes, ages 19-24.
Reframing the problem statement with the context:
A progressive decline in friend requests sent on mobile platforms globally, concentrated in the 19-24 user segment, with no correlation to any other product usage metric.
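The drill-down above (platform, region, segment) can be sketched as a simple comparison of week-over-week friend-request counts by slice. The slices and numbers below are invented for illustration:

```python
# Hypothetical weekly friend-request counts by (platform, age_segment).
last_week = {("mobile", "19-24"): 50_000, ("mobile", "25+"): 80_000,
             ("web", "19-24"): 20_000, ("web", "25+"): 30_000}
this_week = {("mobile", "19-24"): 40_000, ("mobile", "25+"): 79_000,
             ("web", "19-24"): 19_800, ("web", "25+"): 29_900}

def wow_change(slice_key):
    """Week-over-week relative change for one slice."""
    prev, cur = last_week[slice_key], this_week[slice_key]
    return (cur - prev) / prev

# Rank slices by steepest decline to localize the problem.
ranked = sorted(this_week, key=wow_change)
for key in ranked:
    print(key, f"{wow_change(key):+.1%}")
# Here the (mobile, 19-24) slice shows a -20% drop, pointing the investigation there.
```

In practice this is exactly what the clarifying questions do: slice the metric until the decline concentrates in one segment, then hypothesize about that segment.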
Hypothesis:
1. One of the sources for adding new friends is broken.
2. People are not able to view friend requests and so are not accepting them, discouraging users from sending friend requests.
3. Users are concerned about adding bots/fake accounts that scrape information, and are not confident about the authenticity of the other person.
4. A new feature negates the need for a friend connection to interact.
Hypothesis testing:
1. Sources:
1. The sources for finding and adding a new friend are "People You May Know" and the "Add Friend" button on profiles. Has there been a change in the algorithm recommending friends or ordering people in "People You May Know"? Need to investigate.
2. Lower/less frequent placement of "People You May Know" on mobile.
3. UI issues with "People You May Know".
4. A broken "Add Friend" feature on mobile would likely have been detected within a week, so it is an unlikely cause.
2. People not accepting friend requests:
1. QA the friend request display and acceptance functionality on mobile.
2. Check the friend acceptance metrics for a decline over the same time period.
3. Bots/fake accounts:
1. Work with engineering to identify and crack down on fake accounts; check whether there has been a rise in fake accounts or related news articles in the same time period.
2. Run focus groups to identify inhibitions and work with engineering to resolve them (FB Verified, or something else that assures users the other person is real).
4. New feature:
1. Is there an interactive feature in the FB ecosystem (across all social products) that lets users interact without being friends, launched in the same time period?
2. Has there been an increase in that particular feature's usage in this user segment? Considering FB's policies and vision, this reason is unlikely.
In summary, after drilling down through the problem, the probable reasons for the decline in friend requests on mobile are algorithmic or UI changes in "People You May Know", or concerns over fake accounts. The next step is to test these hypotheses to find the exact cause and fix it.

7. You're the PM for Facebook Reactions. Reactions are up 20% but


comments are down 10%. What would you do?
Problem: Reactions are up by 20% and comments are down by 10%.
Purpose and value of reactions (goal): reactions and comments are the key ways for Meta users to engage on the platform. Viewers get value when they find content that resonates with them and decide to engage, in the form of a reaction or a comment. Creators get value when the content they have posted gets engagement from viewers in the form of reactions or comments.
Hypothesis: Reactions take less time and are easy for users to give, hence the number is up. Comments take more time, so users prefer to avoid them, hence the number is down.
Metrics I will use to test "more interactions across FB": avg # of posts with >1 reaction per week; avg # of posts with >1 comment per week.
Metrics I will use to test "more posts": avg # of posts per week; avg # of posts with >1 reaction per week / avg # of posts per week.
Metrics I will use to test "more users": daily active users; avg time spent on the platform (watching for cannibalization).
Key success metrics I will use to test my hypothesis: avg # of posts per week, weekly active users, avg time spent by users per week.
Data-driven decision => ship, no ship, or retest:
a) Ship: avg # of posts per week (up), weekly active users (up), avg time spent by users per week (up/down; if it is not down significantly, I will be OK with shipping). Reason: the metrics are up, proving the hypothesis correct.
b) No ship: avg # of posts per week (down), weekly active users (up/down), avg time spent by users per week (up/down). Reason: if average content is going down WoW, then in the long run viewers will have less content to engage with and will prefer other platforms where they can find more content.
c) Retest: avg # of posts per week (up), weekly active users (down), avg time spent by users per week (up/down). Reason: WAU, while important, could be down due to external factors (holidays) or internal factors (a bug). So I would rerun the A/B test for a longer duration so that other factors can be smoothed out.
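The a)/b)/c) decision table above can be sketched as a small rule function. The threshold on "significantly down" time spent is my own simplification of the stated logic, not a real launch policy:

```python
def launch_decision(posts_up: bool, wau_up: bool, time_spent_down_significantly: bool) -> str:
    """Map the three key metric directions to ship / no ship / retest,
    following the a)/b)/c) cases above."""
    if not posts_up:
        return "no ship"   # case b): shrinking content pool hurts long-run engagement
    if not wau_up:
        return "retest"    # case c): WAU dip may be external (holidays) or a bug
    if time_spent_down_significantly:
        return "no ship"   # guardrail: don't trade away time spent
    return "ship"          # case a): all key metrics healthy

print(launch_decision(posts_up=True, wau_up=True, time_spent_down_significantly=False))
print(launch_decision(posts_up=False, wau_up=True, time_spent_down_significantly=False))
print(launch_decision(posts_up=True, wau_up=False, time_spent_down_significantly=False))
```

Writing the decision as an explicit rule makes the implied priority order visible: content supply first, user counts second, time spent as a guardrail.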
8. How would you determine success for Instagram Reels?
To answer this question I am going to first clarify the product offering and what it does, talk about the goals of IG Reels for Facebook, go through user actions, and then come up with some key metrics to measure the success of the product based on those goals.

Product Overview
IG Reels allows creators to build short-form videos with a bunch of different editing features and post them to their followers or to the broader IG network. It also allows regular IG users to create their own content and build their following.
Goals
IG designed the product to allow its users (creators) to build their community on IG by providing them an outlet to reach their massive IG network. IG Reels was also launched to combat competition from TikTok, which is growing rapidly among the younger demographic through short-form video. Given this background, the main measures of success should be rapid growth in usage and engagement, and the ability of the product to build a community of creators/influencers on Instagram.
Business Goals
Grow the # of users on the product
Increase the number of videos being uploaded
Grow the time spent in the product
Build strong creator networks (habit forming)
Compete with TikTok in the short-form video space (prevent user churn /
reduced engagement)
User goals
Share your creativity
Build a following
Build a revenue source (becoming an influencer)
View compelling video content
Actions
Breaking down the users into two groups here are the main actions by the group
Content creators

Create a video
Edit video
Share video with the broader IG network
Grow followers
Respond to messages
Content consumers
Browse/view videos
Comment on videos
Share / like videos
Follow creators
Metrics
Given the phase this product is in, success should be measured by the ability to increase usage and engagement and to successfully build a creator community. To that end, the top three success metrics I would track are as follows.
The top 3 Success Metrics
Usage Metric —> DAUs for IG reels (Are DAU’s growing over time)
Engagement Metric —> Avg. time spent in IG Reels per week (how has the product
impacted user time spent on IG )
Community Metric —> # of creators with greater than 200K followers (Are we
building a community of creators / influencers. This points to compelling growth
and stickiness of the product offering)
Counter Metrics
I would look at the following counter metrics to ensure we don't impact other parts of IG:
Changes to Engagement/Time spent on other IG product or features such as
the newsfeed / stories

Summary:
We would measure success of IG reels based on whether it is able to increase
usage (users) and engagement (time) on the IG app and create a community of
creators/entertainers, each of whom have an active and engaged follower base.

9. You are a PM for Instagram Stories. You are considering increasing the expiration time of Stories. What metrics would you look at to make this decision?
Clarifying questions
To confirm, stories are available for 24 hrs
What is the target increase for stories?
Will we be applying this change globally vs. only in the close friends story view?
Do we have any particular OKRs that we are trying to achieve? <increase time on
platform>
Mission
Facebook is focused on creating and connecting communities. Instagram fits
within FB's portfolio of products as it connects individuals often through a more
creative and light-hearted medium. In terms of user demographics, Instagram
targets generally a younger demographic.
Business goal:
Our business objective for the next year is to increase time spent on platform, and
as an extension of that, increase engagement.
High level considerations with increasing stories
Pros
Instagram now has more content inventory to display to users
May encourage businesses / influencers to invest more time on stories as they are
no longer as fleeting

Encourage longer avg. session times with more content to view
Potential to increase revenue with more time spent on stories
Cons
The 24 hr / the ephemeral nature of stories may serve as a motivator for users to
engage on the platform to stay up-to-date
Extending the time may take away from the "spur of the moment" / ad hoc nature
of content creation
A/B Test
With these high level considerations in mind, I would like to proceed with an A/B
test to see if extending story expiration supports our overarching objective of
increasing time on the platform.
Control: No change in story expiration
Test: Increased story expiration for users
Before we jump into the precise metrics we would be tracking, I would also like to
take a moment to list out the users that would be impacted by this change:
Content creators (individuals, influencers, businesses)
Content consumers
Advertisers
Metrics
# of stories created / user
# of views / story
# of engagements / story
$ / 1k impressions
avg. session length
Since we are focused on increasing time spent in the app, I would compare average session length across the two groups. If the test group shows a longer average session length, that supports extending story expiration; if not, it argues against it.
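A minimal sketch of that final comparison: compute average session length per arm and a simple z-statistic on the difference of means. The sample data is invented; a real analysis would use proper power calculations and a significance test suited to the data:

```python
import math
import statistics

# Hypothetical per-user average session lengths in minutes for each arm.
control = [10.2, 11.5, 9.8, 12.0, 10.9, 11.1, 10.4, 11.8]  # no change
test = [11.0, 12.4, 11.9, 13.1, 12.2, 11.7, 12.8, 12.5]    # extended expiration

def z_stat(a, b):
    """z-statistic for the difference of two sample means."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

lift = statistics.mean(test) - statistics.mean(control)
print(f"lift={lift:.2f} min, z={z_stat(control, test):.2f}")
```

A positive lift with a large z-statistic would support extending expiration; a flat or negative lift would argue for keeping the 24-hour window.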
10. If you were the PM for FB marketplaces, what goals would you set?

Clarification
I would first try to understand what FB Marketplace is since I haven't really used it
more than a few times.
My understanding is the following
1. Launched 2-3 years ago in the US, it is similar to web classifieds services such as Craigslist
2. The sellers are not businesses but rather individuals, primarily in local areas
3. Broad range of products with no particular focus: ranging from electronics, furniture, and used cars to even apartments for rent. Sometimes posters are willing to give things away for free
4. FB doesn't charge listing fees
5. FB runs ads on Marketplace
Why does FB Marketplace exist?
FB Marketplace is beneficial to the business in three key ways:
1. Engagement
1. FB wants a greater share of users' time. There is only so much time people are willing to spend browsing the FB/IG feed and messaging their friends before the marginal return from doing so turns negative.
2. This is probably why FB is aggressively pushing FB Watch and Live (as evidenced by their prominent positioning in the FB app): they are different kinds of content and reduce news feed fatigue.
3. Higher engagement also boosts monetization. However, I do not believe this is a primary goal: the ad density on FB Marketplace is a lot lower than on News Feed, such that it may even be net-negative to nudge people here (assuming some cannibalization of News Feed).
2. Retention
1. By becoming a go-to place for people to buy/sell things, FB is increasing the utility of the platform, making it stickier for users.
2. FB MAU has plateaued over time, and while this is not disclosed in FB's investor presentations, the user churn number should be considerable.
3. Marketplace can reduce churn by countering the "social media is a waste of time" sentiment and even resurrect some users.
3. Strategic reasons
1. As FB becomes home to more activities from users in the community, it can launch a competitor to Nextdoor (a private, neighborhood-only social network).
2. FB Marketplace can also be used to promote adoption of Facebook Pay -> more users linking cards with FB -> lower barriers to getting users to adopt other future products, for example:
1. A creators' marketplace selling custom stickers for Messenger/WhatsApp (the LINE strategy)
2. "Tipping" live streamers (FB Gaming recently launched, and this would be critical to attracting streamers)
I believe #2 (retention) is the most important reason FB created Marketplace, for the following reasons:
1. The natural frequency of FB Marketplace usage is not daily. Realistically, you are not going to go there every day. Hence, it's not going to have a meaningful impact on engagement metrics (as measured by time spent or # of sessions per week).
2. #3 is unlikely to be the main goal here. There are better ways for FB to promote 'local' usage; for example, FB Groups can be further optimized for local. FB Pay adoption can also be better boosted by lower-barrier products such as FB Fundraising (unlike Marketplace, users don't have to go pick up goods).
Metrics
We would like FB Marketplace to be successful as a classifieds product, providing utility to both sellers and buyers. The key metrics I would measure are focused on user satisfaction. For each set of users, I will list the goals.
Sellers
% of listings that get engagement (likes, comments, a buyer's message) in
<7d, 7-14d, 14-30d, and >30d
If most of the items that sellers list never get any views, likes, or
comments, sellers will feel their listings went into a black hole and will
not post more items in the future

The time 'bucket' is to account for the different 'velocity' of each
vertical: selling a used car will take a lot longer than selling an
AirPod. We should not punish a car listing if it doesn't get interest
quickly enough
% of listings that result in closed transactions (measured by # of closed sales
/ # of listings posted by the seller) in <7d, 7-14d, 14-30d, and >30d
A transaction might not close for several reasons
Buyer/seller never showed up
Product condition doesn't match the listing
Buyer tried to further negotiate after having agreed
% of listings that were closed outside of Facebook
This means the seller sold the item on another platform, because FB
doesn't have enough buyers or does a poor job showing the right listing
to the right people
Buyers
% of buyers' inquiries that get a response in <24 hours, 24-48 hours, and
>48 hours
Similar to the seller scenario, if a buyer reaches out to several sellers and
receives no response, or a very laggy one, they will quickly stop using
Marketplace
% of buyers' inquiries that result in closed transactions (measured by # of
closed sales / # of inquiries made by the buyer) in <7d, 7-14d, 14-30d, and
>30d
Transaction experience rating (out of 5 stars)
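To make the time-bucketed metrics concrete, here is a minimal sketch of bucketing listings by time-to-first-engagement. The listing data and bucket edges are illustrative assumptions:

```python
# Hypothetical listings: days until first engagement (None = never engaged).
days_to_engagement = [2, 5, None, 12, 40, 1, None, 25, 8, 3]

# Buckets mirror the <7d, 7-14d, 14-30d, >30d split used above.
BUCKETS = [("<7d", 0, 7), ("7-14d", 7, 14), ("14-30d", 14, 30), (">30d", 30, float("inf"))]

def bucket_shares(values):
    """Share of listings whose first engagement falls in each time bucket."""
    total = len(values)
    shares = {}
    for name, lo, hi in BUCKETS:
        hits = sum(1 for d in values if d is not None and lo <= d < hi)
        shares[name] = hits / total
    # Listings that never got any engagement: the "black hole" share.
    shares["never"] = sum(1 for d in values if d is None) / total
    return shares

print(bucket_shares(days_to_engagement))
```

The bucketing is what keeps slow verticals (used cars) from being unfairly compared to fast ones (AirPods): each vertical can be judged against the bucket appropriate to its natural velocity.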
Why the above metrics?
We want to focus on retention, which ultimately boils down to user happiness
Active users (Marketplace DAU/WAU/MAU) should be measured, but are not
the goal here, because they do not measure the 'value' that users get
from Marketplace
Picking active users as a goal could lead to FB excessively pushing the
Marketplace product while having a sub-par user experience, leading to churn

The 'depth' of the marketplace, as measured by the number of new listings
posted in each category/market, is not the goal. Having a lot of listings does
not necessarily imply transactions. Given that FB Marketplace has been
around for several years in mature markets, it should not lack depth (that
said, depth should still be measured as a counter-metric)
Depth alone does not create a sufficient moat to retain users. As a
marketplace, eBay has the most 'depth', but startups focusing on a better
experience in a particular vertical (e.g. GOAT for high-end sneakers,
Chrono24 for watches) have chipped away at eBay's market share
However, in a new-market-entry scenario, these metrics would
become more important
Risks
Optimizing for response rate or for the percentage of listings that lead to a
successful transaction could lead us to impose a higher bar for listings
(requiring more pics, more item details, etc.). Being too aggressive here
could prevent a good number of items from being posted, reducing marketplace depth
Number of new listings posted in each category should be monitored as a
guardrail metric
11. How would you measure the success of Facebook dating?
Why do we have FB dating? How does it fit within FB's mission of connecting
people and building community?
Research shows that lonely people have worse health outcomes, get sick more
easily, die faster, etc. There are many ways to solve for loneliness. FB already has
Groups, which connect people based on shared interests/hobbies. FB also has
Messenger and other apps that help you connect with people you already know.
Dating is an important way FB can help people form more intimate connections and
is a missing part of the FB portfolio today.
Assumptions
FB Dating was launched about 1 year ago. It remains free of charge and ad-free
today, so monetization is not a focus right now
FB Dating aims at users looking for actual dating, not hookups. The profile
displays job, education and other interests; this information isn't really
relevant unless you are looking to date
FB Dating is primarily designed for single people looking to get into an
exclusive relationship, not people looking to get into a poly relationship
Goals
FB-level Goals
Engagement Boost: People spend a lot of time on different dating apps
(Tinder, Hinge, OKC, CMB, Bumble, Match - each with a different flavor). FB
wants to capture this significant portion of 'time spent' on other apps today
and does not want to monetize this usage for the time being
User acquisition: There are some people who do not have, or have deleted, FB
accounts. Dating gives them a reason to re-activate their FB account
FB Dating Success Measure
Given the FB-level goals, we want to keep users on FB dating as long as their
relationship status remains 'single'. User retention (ex those who got into a
relationship) should be our success measure. Retention boils down to making our
users satisfied with using the dating feature. Crucially, optimizing for retention
means optimizing for different things depending on the lifecycle of the users.
Goals
What matters for users in dating?
1. Users have enough options to 'swipe right' on
(# of profiles swiped right / total profiles shown)
2. Users get a high enough match rate
(# of matches / # of profiles swiped right)
3. A match leads to a good conversation
(# of matches with more than 3 messages exchanged / total # of matches)
(# of bad convos, as defined by FB Dating guideline violations / total # of
matches)
4. A good portion of matches leads to a meetup (virtual, or real-life pre-
covid)
(# of meetups / # of matches with conversation exchanged)
5. Users have a good experience meeting up with the match and decide to meet again
6. A meetup eventually leads to an exclusive relationship (relationship status
update) and the user drops off from FB Dating for some time
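The funnel above is a chain of ratios; a minimal sketch with hypothetical counts:

```python
def dating_funnel(shown, right_swipes, matches, good_convos, meetups):
    """Each stage's conversion rate along the dating funnel (inputs are raw counts)."""
    return {
        "swipe_right_rate": right_swipes / shown,
        "match_rate": matches / right_swipes,
        "good_convo_rate": good_convos / matches,
        "meetup_rate": meetups / good_convos,
    }

# Hypothetical weekly cohort: 2,000 profiles shown, 400 right swipes,
# 80 matches, 40 conversations with 3+ messages, 10 meetups
rates = dating_funnel(2000, 400, 80, 40, 10)
print(rates)
# swipe_right_rate 0.2, match_rate 0.2, good_convo_rate 0.5, meetup_rate 0.25
```

Expressing the metrics as stage-to-stage conversions makes it easy to see which stage each lifecycle cohort is dropping off at.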
As discussed, each metric should be carefully optimized depending on the user
lifecycle.
#1 is important to retain users within the first weeks. If the user doesn't see anyone
appealing showing up, they will stop using the product pretty quickly.
#2 is critical to retain users in the first month. If users get a very low match rate
after swiping for a few weeks, they are less likely to continue using FB Dating.
#4 is important for medium-term retention (month 2 to 3) - if users repeatedly
get matched with people who aren't compatible (interest-wise, personality-wise,
or aren't looking for the same thing), the matches are unlikely to lead to a
meetup.
#5 is important in keeping users on FB Dating longer-term (month 4 and
beyond).
Evaluations
Optimizing for #1 means FB needs to 'acquire' more FB users to become FB Dating
users. This could mean showing the FB Dating icon more prominently in the FB app,
reducing the traffic to other corners of the app (Marketplace, videos and more).
Optimizing for #1 also means showing more people 'out of the user's league'
(people tend to like those who are ~20% more attractive than themselves), which
will hurt the second metric of focus (match rate)
Feasibility: One advantage FB has over other apps is that pretty much everyone is
on FB (but not everyone is on FB Dating). The pool of potential FB Dating users is
theoretically larger than that of any other app. This means FB can show more
desirable profiles to each user.

Optimizing for #2 could hurt #3 and #4, potentially by surfacing the profiles of
people who swipe right on pretty much everyone but have relatively low
conversation/meetup percentages.
Optimizing for #5 may not be feasible, as it's difficult to predict a person's
compatibility in real life vs. on paper.

12. How would you measure success for preventing misinformation on Facebook?

Clarify:
I will assume that misinformation includes posts, videos and pictures, and that
sources could be Groups, Pages and individuals.
High-level structure: Mission, Information funnel and drop-off points, Metrics &
trade-offs, Mitigations.
Mission/Strategy: Facebook's mission is to bring the world closer together and
empower people to build community. Reliable information is key to fostering a
healthy community. In light of recent world events and public sentiment, it is
important for Facebook to earn back the trust of its users, governments and
organizations. On the other hand, in an effort to quash misinformation, we
should not threaten another core value: free speech.
Information funnel: Laying out my understanding of how information flows in
facebook
1. User (individual/ group/ page) creates a post
2. Post is flagged by the algorithm (some potential for false positives). I will
assume that the algorithm is more front-footed on posts about trending topics.
The most obvious misinfo posts are suppressed at this point while others are
sent for human review.
3. Facebook employee reviews posts flagged by the algorithm: My assumption is
that most of these require some contextual understanding / human
judgement. Here too, there is an opportunity for false positives (non-misinfo
posts labelled as misinfo) and false negatives (misinfo posts passed off as
reliable posts)
4. Facebook user flags posts: Sent back for algorithmic/ human review based on
number of flags raised and impact of trending topic.

Metrics: My metrics are distributed along the funnel to understand where the
biggest issues are.
1. % of posts labelled as misinfo by the algorithm / total posts reviewed by the
algorithm: This will give us a sense of how much is captured by the algorithm
on the first pass. We can slice this by % for trending topics vs overall and
over different time slices, e.g. last 12 hours, 24 hours, 72 hours, to give us a
better sense of virality. We can also look at the % of false positives and false
negatives, and slice by region, demographic and device type to detect any
bot activity. Blind spot: accuracy of the algorithm is not known.
2. # of posts flagged as misinfo by Facebook users / total posts viewed: ideally
we want this % to be small. Blind spot: doesn't tell us whether the user is
flagging misinfo because their opinion differs from that of the post.
3. Probability distribution of users flagging content: 20% of users are likely
flagging 80% of the content. These are likely self-appointed guardians of the
system (similar to Wikipedia editors). Long term we might want to leverage this
community as an additional layer of checks.
4. Virality score of misinfo posts in 6, 12, 24 hours: This would be a weighted
average score of likes, shares, comments, messages, etc. As an example, a
like will get the lowest score as it has the lowest impact, while a share will get
the highest score since it promotes virality. We will need to work with data
science to determine the scoring system. This will help us understand the
severity of the problem.
5. Lead time to taking down misinfo posts from time of creation: should go down
over time, but if system load is too high, especially for top trending topics, this
may surge initially.
The NSM is #1, as it tells us the efficacy of the algorithm; ideally we want to catch
misinfo as high up in the funnel as possible to maintain scale and to ensure that our
mechanisms are strong enough to stem misinfo before it gets to the end user.
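The weighted virality score described above could be sketched like this; the weights are placeholders that data science would calibrate:

```python
# Placeholder weights: shares promote virality most, likes least
WEIGHTS = {"like": 1, "comment": 3, "message": 4, "share": 5}

def virality_score(engagement_counts):
    """Weighted sum of engagement actions on a post within a time window.

    `engagement_counts` maps action name -> count for that post.
    """
    return sum(WEIGHTS[action] * n for action, n in engagement_counts.items())

# Hypothetical misinfo post after 6 hours
print(virality_score({"like": 100, "comment": 10, "share": 20, "message": 5}))
# 100*1 + 10*3 + 20*5 + 5*4 = 250
```

Computing the score at 6, 12 and 24 hours then gives a growth curve, which is the severity signal the metric is after.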
Guardrail metrics
1. % of false positives / all posts labelled as misinfo: I would slice this at each
stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally we would want this to decrease as we go lower in the funnel.
2. % of false negatives / misinfo posts not labelled: I would slice this at each
stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally we would want this to decrease as we go lower in the funnel.
3. Long term:
I would also look at NPS to ensure that community trust maintains an
upward trend, i.e. that users trust us.
Track repeat offenders and take them down after x legitimate offenses (x TBD
with data science based on virality score).
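The false-positive and false-negative guardrails above boil down to precision/recall-style rates at each review stage; a minimal sketch over hypothetical labelled decisions:

```python
def misinfo_error_rates(decisions):
    """decisions: list of (flagged_as_misinfo, actually_misinfo) boolean pairs.

    Returns the FP rate (share of flagged posts that weren't misinfo) and
    the FN rate (share of misinfo posts that weren't flagged).
    """
    flagged = [d for d in decisions if d[0]]
    misinfo = [d for d in decisions if d[1]]
    fp_rate = sum(1 for flag, actual in flagged if not actual) / len(flagged)
    fn_rate = sum(1 for flag, actual in misinfo if not flag) / len(misinfo)
    return fp_rate, fn_rate

# Hypothetical stage: 4 posts flagged (1 wrongly), 4 truly misinfo (1 missed)
stage = [(True, True), (True, True), (True, True), (True, False),
         (False, True), (False, False)]
print(misinfo_error_rates(stage))  # (0.25, 0.25)
```

Running this separately per stage (algorithm, human reviewer, user flags) shows whether errors shrink as posts move down the funnel, which is exactly what the guardrail asks for.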
Mitigations: For top trending topics,
1. Establish a center which is the single source of truth, e.g. similar to the
COVID and Climate Change portals
2. Surface the center in the hamburger menu
3. Establish a task force of human reviewers to focus on evolving trending topics
to ensure that there is sufficient manpower.
13. You are the PM of Messenger, you noticed that its DAU has gone
down significantly. How would you go about looking for the root cause?
First, I want to get a better sense of what DAU means here. What is considered
"active"? A user sending a message, reading a message, opening the app?
All of them - any activity within the Messenger app
Second, it’s important to know the context of the change. What timeframe was this
over?
Let’s assume this was over the past 30 days.
Ok, a follow up to this would be to check if this is a seasonal pattern? For example,
there could be a correlation to Messenger usage and school starting.
This is not seasonal.

Great, the way I want to address this is by thinking of (1) user issues; (2) feature
issues; (3) funnel issues (i.e. ways users access messenger); and (4) external
factors.
(1) Let’s first think about user issues.
I would want to see if we can use data to identify the segment of the user base that
is causing this problem. Examples of dimensions to analyze - platform,
device, geography, demographics, FB version, age of the account, usage of
Messenger (low, medium, high)
Also, I think it's important to understand Messenger DAU relative to other metrics
where there could be a correlation, for example FB and/or WhatsApp
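The segment-level slicing described above could be sketched as follows, assuming we can pull DAU per segment for the current and a prior period (all names and numbers are illustrative):

```python
def biggest_dau_drop(prior, current):
    """Return segments ordered by relative DAU change (worst decline first).

    `prior` and `current` map segment name -> DAU for the two periods;
    both are assumed to cover the same set of segments.
    """
    changes = {
        seg: (current[seg] - prior[seg]) / prior[seg]
        for seg in prior
    }
    return sorted(changes.items(), key=lambda kv: kv[1])

prior = {"iOS": 1_000_000, "Android": 2_000_000, "Web": 500_000}
current = {"iOS": 990_000, "Android": 1_600_000, "Web": 495_000}
print(biggest_dau_drop(prior, current)[0])  # Android, down 20%, is the outlier
```

If one segment's relative drop dwarfs the rest, the root cause is likely localized (e.g. an app release on that platform); a uniform drop across segments points to external factors instead.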
(2) Next, I would want to look at potential feature issues.
Here I would take a look at some of the health metrics of the product. Examples:
Messages sent and open rates
Notifications sent and the open rate
Message delivery latency
Breakdown of messages by type - text, image, video, url, etc.
Breakdown by sender type - bot, company, friend, stranger
(3) I also want to assess the funnel, or the ways people access Messenger.
Breakdown of messages by channel - direct, facebook, API, 3rd party/widget
Breakdown of referral source by channel - direct, facebook (notifications,
messenger widget)
(4) Lastly, let’s think about potential outside forces that could be having an impact.
FB brand reputation
User behavior
Competition - iMessage, Google, Skype
Major event - Olympics, Elections

Let's say that we cannot find a pattern based upon user info; however, we do find
there is a drop in messages being created from FB. Where would you go next?
I think here it's important to break down the messages by sender type as well as
message type. From looking at those segments, we could start forming a few
theories about what is occurring. A few potential ideas that come to mind:
'Send a message to a friend' is deprioritized in the UI
Messaging is being cannibalized by other features (e.g. Facebook videos,
commenting, etc.)
Sharing via message is deprioritized in the UI
14. YouTube comments are up but watch time is down. What do you do?

Clarifying Qns:
1. Did watch time go down and comments go up for the same videos, or are we
talking about total watch time and total comments on YouTube? (Total)
2. I am assuming we are not including YouTube Live videos, because those
typically have live chats with 1000s of comments on a single video. (Correct)
3. Do we know if these two could be independent problems not correlated with
each other? (No)
Here is my approach to diagnose the root cause:
1. Will look at some general trends first
2. Internal factors
3. External factors
4. Potential reasons / hypotheses
General Trends: Slice and dice the data by the various factors below -
1. Was this happening for certain types of videos (Gaming, Entertainment, News,
etc.)?
2. Is the change specific to a certain geography (China, India, US, etc.)?
3. Is this an issue on certain OS/device types?
4. Is this change observed with certain user segments (demographics)?
5. Is this a gradual change or a sudden shift?
Internal factors:
1. Did we recently change the approach we use to track these metrics?
2. Did we launch any new feature, like short form videos, that will contribute less
to watch time but have high engagement in comments?
3. Did we run any PR campaign to encourage commenting on videos?
Before thinking of the next set of factors: are we observing the same trend with our
competitors? (No)
External factors:
1. Was there any social issue recently that led to more discussion on the
platform?
2. Was there any new product or feature launch from competitors?
Hypothesis: Since this trend is observed only for YouTube, I will look at internal
factors. #1 and #3 are easy to identify. Most likely, the change is due to the launch
of short form videos:
1. Short form videos are typically 30 sec-1 min in length, compared to 5-10 min
long form videos
2. Short form videos have the commenting feature inline, leading to more comments
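The short-form hypothesis can be sanity-checked with back-of-the-envelope numbers; every figure below is hypothetical:

```python
# Hypothetical per-view averages: long-form vs short-form videos
LONG_WATCH_MIN, LONG_COMMENT_RATE = 8.0, 0.02    # 8 min/view, 2% comment
SHORT_WATCH_MIN, SHORT_COMMENT_RATE = 0.75, 0.05  # 45 s/view, 5% comment

def totals(long_views, short_views):
    """Total watch minutes and total comments for a given mix of views."""
    watch = long_views * LONG_WATCH_MIN + short_views * SHORT_WATCH_MIN
    comments = long_views * LONG_COMMENT_RATE + short_views * SHORT_COMMENT_RATE
    return watch, comments

before = totals(1_000_000, 0)
# A mix shift: 200k long-form views replaced by 1M short-form views
after = totals(800_000, 1_000_000)
print(before, after)
# Watch time falls (8.0M -> 7.15M min) while comments rise (20k -> 66k)
```

Even a modest mix shift toward shorts can move the two metrics in opposite directions, which is exactly the observed pattern.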
15. You're a PM on FB Watch. Your team is working on a redesign. This
redesign improved the watch time, but it also caused a drop in likes and
comments. Should you ship this?
Clarify
Functionality: Watch is a tab in FB app where people can upload and view
Videos.
What's the redesign? We collapsed the reaction and comment bar so people
can see more videos and spend more time watching videos.
What's the objective of the redesign? To help people discover and watch
more videos they'll enjoy.

Structure: To decide whether or not to ship this redesign, I'd like to go over:
1. How does the Watch tab fit into FB's mission and what's its goal?
2. What's the long-term impact on users and their user journey?
3. What are the pros and cons in the short and long term?
4. What's our decision criteria?
Mission & Goal: FB's mission is to give people the power to build community and
bring the world closer together. The Watch tab furthers that mission by helping
people stay entertained together. The goal of the Watch tab is to drive FB
engagement by allowing people to share, discover and watch videos together.
Hypothesized Impact: The Watch tab is a two-sided marketplace of producers and
consumers of videos. The redesign collapsed the reaction and comment bar,
resulting in an increase in watch time and fewer likes and comments. Let's start
with the hypothesized impact on consumers, given they're the primary users of the
reaction and comment bar.
Consumers get success when they discover, watch, and engage with video
content. The redesign allowed consumers to discover more videos per vpv and
hence their watch time went up. However, it's harder to discover and access the
reaction and comment bar, which, over time, may result in consumers being less
engaged, missing the opportunity to respond to the content or connect with others
over the content. Engagement often begets more engagement.
Producers get success when consumers watch and engage with their videos.
Overall watch time went up, meaning more videos are being discovered and
watched, contributing more value to Producers. This may incentivize Producers to
create more videos. However, there are fewer reactions and comments, which
doesn't let Producers know whether Consumers are actually getting value from
their videos. Over time, producers may also miss out on the feedback that tells
them what consumers like or don't like, and may create lower quality videos.
FB gets success when people create, discover and watch videos together.
While total watch time has gone up, FB may be relying on reactions and comments
on videos to understand video quality and user interest, and how best to distribute
videos to which type of audience. Over time, FB may make lower quality and
less effective video recommendations to users.

Pros and Cons in the Short vs Long Term
Short-Term - Pros: more watch time. Cons: fewer reactions and comments.
Long-Term - Pros: more videos created. Cons: less watch time per video; fewer
FB sessions / less time spent.
Decision
We should consider what's the short and long-term impact on FB's mission and
Watch goals when deciding whether or not to ship the redesign.
Ship - furthers FB's mission and the Watch goal
No Ship - doesn't further FB's mission and the Watch goal
Re-test - furthers FB's mission or the Watch goal
Based on the hypothesized impact on users, we should not ship the redesign as is,
given it'll hurt both FB's mission and the Watch goal in the long run. We should
iterate on the redesign and see how we can drive up discovery of additional
videos and watch time without the regression in engagement.

16. You're the PM for Facebook Newsfeed. How would you decide
whether or not to auto-play videos or make the videos click-to-play?
Clarifying
1. How does autoplay work? I presume it's how it's done today (no audio, starts
playing once the video is scrolled to be completely in the user's screen)
2. Desktop or mobile?
3. Which market is this? Users in US / Western Europe vs developing markets
have different behaviors
Goals & Metrics

FB's mission is to help people build deeper connections. As a News Feed PM, the
goal is to make sure FB News Feed is valuable to users by being a great way to
keep in touch with friends. Deriving from that, these should be the primary goals:
1. Boost time spent on FB News Feed (= less time on Youtube)
2. Boost user interactions - a user discovers a funny video and shares it on
their own feed or in a private message in Messenger; this helps strengthen /
maintain connections between Facebook users
Other stakeholders' goals
This decision will impact the Facebook Videos team, the Ads team and the
Instagram TV team. These are their primary metrics:
Number of videos watched, % completed, interacted, skipped and shared
[Videos team]
Autoplay will most likely increase most of these metrics
Number of ad impressions [Ads team]
Users spending more time watching video could mean less 'scrolling',
which triggers fewer ads
Revenue per 1,000 ad impressions [Ads team]
More video watching could shift the mix toward video ads, which are
expected to monetize better
Engagement (time spent, number of videos watched, etc.) on IGTV
[Instagram team]
This is a direct substitute
Decision Making Criteria
I would run an experiment on autoplay vs click-to-play and make the decision
based on which one has a more positive impact on the following metrics:
Frequency of interactions with friends (by private messaging, commenting
and reacting to posts)
Time spent on FB news feed
To reiterate, the assumption is that autoplaying videos will save users a click,
lowering the barrier for users to discover great videos to share with their friends.
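The experiment readout could be sketched as a simple two-sample comparison of mean time spent per user (a standard z-test; the arm data below is illustrative):

```python
import math
from statistics import mean, stdev

def z_score(a, b):
    """Two-sample z statistic for the difference in mean time spent per user."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Illustrative minutes-per-day samples for the two experiment arms
autoplay = [22, 25, 19, 28, 24, 26, 21, 27]
click_to_play = [20, 18, 22, 19, 21, 17, 23, 20]
z = z_score(autoplay, click_to_play)
print(round(z, 2))  # |z| > 1.96 would suggest a significant difference at ~95%
```

Real experiments would use far larger samples and look at the interaction-frequency metric the same way; the point is only that the decision rests on a significance test, not on raw deltas.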

Countermetrics
The primary ones to watch would be monetization-related metrics, but they would
not be the main criteria, since video ads are expected to be more monetizable than
news feed ads in the long term. However, if they are disproportionately impacted,
we will have to revisit the criteria.
The video-related metrics are left out since autoplay would most likely win on
them, and thus they would not be a fair comparison. IGTV is also left out since we
work for FB News Feed and are primarily concerned with FB metrics.

17. You're a PM at a food delivery app. There's a 10% decline in restaurant supply over the past week.

AA is Interviewer and BB is Interviewee


BB: I would like to ask a few clarifying questions.
AA: Sure, go ahead.
BB: Is it happening across the country or is it region specific?
AA: It's happening across all of North America.
BB: Has it happened in the past? With that I mean, is it a seasonality issue?
AA: No, we are facing this issue for the first time.
BB: Are our delivery competitors also facing the same issue?
AA: Since this is a recent issue, we don't have much info about our rivals in the market.
BB: Are we seeing a decline in revenue due to our restaurant partners' supply-side issue?
AA: Yes, we have started seeing a gradual decline in the no. of orders.
BB: Is the supply shortage scoped to any particular cuisine or is it uniform across all cuisines?
AA: It's affecting the overall operations of the restaurants.
BB: Have we made any recent UI changes on our app? Maybe our users are facing issues while ordering?
AA: Yes, we have made some changes. We changed our onboarding process and moved from pagination to lazy loading of restaurant listing pages, which is good in terms of data consumption on the user end.
BB: Cool. I would like to brainstorm to further scope down the root cause and steps to counter the situation.
AA: Go ahead.
BB: So here's the framework that I would like to go ahead with:
Brainstorm a list of possible reasons by diving into use cases
Identify the root cause
Brainstorm ways to fix
Make a recommendation
Sounds good to you?
AA: Go ahead.
BB: By daily active users, we mean users who visit our app, search for the meals they want to have, scroll through restaurants which offer that meal and place the order. Is that a safe assumption?
AA: Yes, that's the correct use case.
BB: Okay, since our partner restaurants are facing the issue, I believe we should look at external factors, as we would only be able to fulfil demand if we have sufficient supply. So based on the above discussion, I have thought of three potential reasons:
Trade sanctions among countries through which restaurants import their supply
On and off Covid-19 lockdown issues among various states around the US and Canada
False rumors about a particular restaurant which sparked protests, leading to supply-side shortage
Are we facing any of the above issues?
AA: Yes, the on and off lockdown situation is making it worse for restaurants to get enough supplies. Can you think of a few solutions to counter it?
BB: Okay. Let me brainstorm for a bit.
AA: Sure.
BB: Since the pandemic is a global issue and we don't have the power to stop the inevitable, we can surely think of other ways to provide value to our users. This involves:
Start partnering with local cafes, bakeries and giving out discounts on food
items which can keep the business going.
Get into grocery & medicine delivery segment since we have sufficient drivers
to fulfil the demand.
Acquire new customers via marketing & promoting cuisines which are
sufficient in supply.
I would like to go with the 2nd solution, as it's unique and provides much higher
value to our customers in the current crisis. We have drivers who can deliver
groceries, medicines and other products by following the same safety protocols
which they are accustomed to by now. Although it would be a fundamental change
in our value proposition and we would gain a new set of rivals in the market, we
can expand our scope of business. We would also have to change our
compensation model for drivers for delivering the new set of products. The
success metrics I would like to track would be:

No. of users adding groceries and medicines to their cart
% increase in DAU/WAU
% of DAU placing orders relative to total users
18. If you were the PM for FB marketplaces, what goals would you set?
Clarification
I would first try to understand what FB Marketplace is, since I haven't really used it
more than a few times.
My understanding is the following:
1. Launched 2-3 years ago in the US, it is similar to web classifieds services such
as Craigslist
2. The sellers are not businesses but rather individuals, primarily in local areas
3. Broad range of products with no particular focus: ranging from electronics,
furniture and used cars to even apartments for rent. Sometimes posters are
willing to give things away for free
4. FB doesn't charge listing fees
5. FB runs ads on Marketplace
Why does FB Marketplace exist?
FB Marketplace is beneficial to the business in three key ways:
1. Engagement
FB wants a greater share of users' time. There is only so much time
people are willing to spend browsing the FB/IG feed and messaging their
friends before the marginal return from doing so turns negative
This is probably the reason FB is aggressively pushing FB Watch and Live
(as evidenced by their prominent positioning in the FB app), since they
are different kinds of content and reduce news feed fatigue
Higher engagement also boosts monetization - however, I do not believe
this to be a primary goal; the ad density on FB Marketplace is a lot lower
than News Feed, such that it may even be net-negative to nudge people
to come here (assuming some cannibalization of News Feed)
2. Retention
By becoming a go-to place for people to buy/sell things, FB is increasing
the utility of the platform, making it stickier for users
FB MAU has plateaued over time, and while this is not disclosed in FB's
investor presentations, the user churn number should be considerable
Marketplace can reduce churn by countering the "social media is a waste
of time" sentiment and even resurrect some users
3. Strategic Reasons
As FB becomes home to more activities from users in the community, it
can launch a competitor to Nextdoor (a private, neighborhood-only social
network)
FB Marketplace can also be used to promote adoption of Facebook Pay -
> more users linking cards with FB -> lower barriers to get users to adopt
other future products, e.g.:
A creators marketplace selling custom stickers for Messenger /
WhatsApp (the LINE strategy)
"Tipping" live streamers (FB Gaming recently launched, and this
would be critical to attract streamers)
I believe #2 (Retention) is the most important reason FB created Marketplace, for
the following reasons:
1. The natural frequency of FB Marketplace usage is not daily. Realistically, you
are not going to go there every day. Hence, it's not going to have a
meaningful impact on engagement metrics (as measured by time spent or #
of sessions per week)
2. #3 is unlikely to be the main goal here. There are better ways for FB to
promote 'local' usage - for example, FB Groups can be more optimized
towards local. FB Pay adoption can also be better boosted by lower-barrier
products such as FB Fundraising (unlike Marketplace, users don't have to go
pick up the goods)
Metrics
We would like FB Marketplace to be successful as a classifieds product, providing
utility to both sellers and buyers. The key metrics I would measure are going to be
focused on user satisfaction. For each set of users, I will list the goals.

Sellers
% of listings that get engagement (likes, comments, buyer's message) in
<7d, 7-14d, 14-30d and >30d
If most of the items that sellers list never get any views, likes or comments,
they will feel that the listing went into a black hole, and will not post more
items in the future
The time 'bucket' is to account for the different 'velocity' of each vertical -
selling a used car will take a lot longer than selling an AirPod. We should
not punish a car listing if it doesn't get interest quickly enough
% of listings that result in closed transactions (measured by # of closed sales
/ # of listings posted by the seller) in <7d, 7-14d, 14-30d and >30d
A transaction might fail to close for several reasons:
Buyer/seller never showed up
Product condition doesn't match the listing
Buyer tried to further negotiate after having agreed
% of listings that were closed outside of Facebook
This means the seller sold the item on another platform because FB doesn't
have enough buyers or does a poor job showing the right listing to the
right people
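A minimal sketch of the listing-level shares behind the seller metrics above, using an illustrative listing log (all field names are hypothetical):

```python
def listing_funnel(listings):
    """Share of listings that got engagement, closed on FB, or closed off-platform.

    Each listing is a dict with illustrative boolean outcome fields.
    """
    n = len(listings)
    return {
        "engaged": sum(l["got_engagement"] for l in listings) / n,
        "closed_on_fb": sum(l["closed_on_fb"] for l in listings) / n,
        "closed_off_fb": sum(l["closed_off_fb"] for l in listings) / n,
    }

sample = [
    {"got_engagement": True, "closed_on_fb": True, "closed_off_fb": False},
    {"got_engagement": True, "closed_on_fb": False, "closed_off_fb": True},
    {"got_engagement": True, "closed_on_fb": False, "closed_off_fb": False},
    {"got_engagement": False, "closed_on_fb": False, "closed_off_fb": False},
]
print(listing_funnel(sample))
# {'engaged': 0.75, 'closed_on_fb': 0.25, 'closed_off_fb': 0.25}
```

In practice these shares would be computed per time bucket (<7d, 7-14d, etc.) and per category, to account for the different velocities of each vertical.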
Buyers
% of buyers' inquiries that get a response in <24 hours, 24-48 hours, and
>48 hours
Similar to the seller scenario, if a buyer reaches out to several sellers and
receives no response, or only very laggy ones, they will quickly stop using
Marketplace
% of buyers' inquiries that result in closed transactions (measured by # of
closed sales / # of inquiries made by the buyer) in <7d, 7-14d, 14-30d and
>30d
Transaction experience rating (out of 5 stars)
Why the above metrics?
We want to focus on retention which ultimately boils down to happiness

Active users (Marketplace DAU/WAU/MAU) should be measured, but are not
the goals here because they do not measure the 'values' that the users get
from Marketplace
Picking it as a goal can lead to FB excessively pushing the Marketplace
product while having sub-par user experience, leading to churn
The 'depth' of the marketplace as measured by the number of new listings
posted in each category/market is not the goal. Having a lot of listings does
not necessarily imply transactions. Given that in mature markets FB Marketplace
has been around for several years, it should not lack marketplace depth (that
being said, it should still be measured as a counter-metric)
Depth alone does not create a sufficient moat to retain users. As a
marketplace, eBay has the most 'depth', but startups focusing on better
experience in a particular vertical (e.g. GOAT for high-end sneakers,
Chrono24 for watches) have chipped away at eBay's market share
However, in a new market entry scenario, these sets of metrics would
become more important
Risks
Optimizing for the response rate / the percentage of listings that lead to a
successful transaction could lead us to impose a higher bar for a listing
(requiring more pics, more item details, etc.). Being too aggressive here can
prevent a good number of items from being posted, reducing marketplace depth
Number of new listings posted in each category should be monitored as a
guardrail metric
19. How would you measure the success of Facebook dating?
Why do we have FB dating? How does it fit within FB's mission of connecting
people and building community?
Research shows that lonely people have worse health outcomes, get sick more
easily, die faster, etc. There are many ways to solve for loneliness. FB already has
Groups which connect people based on shared interests/hobbies. FB also has
Messengers and other apps that help you connect with people you already know.
Dating is an important way FB can help people form more intimate connections and
is a missing part of FB portfolio today.

37
Assumptions
FB Dating is launched about 1 year ago. It remains free of charge and ad-free
today. So monetization is not a focus right now
FB Dating aims toward users looking for actual dating and not hooking up.
The profile displays job, education and other interests. These information
aren't really relevant unless you are looking to date
FB Dating is primarily designed for single people looking to get into an
exclusive relationship. Not people looking to get into a poly relationship
Goals
FB-level Goals
Engagement Boost: People spend a lot of time on different dating apps
(Tinder, Hinge, OKC, CMB, bumble, match - each with a different flavor). FB
wants to capture this significant portion of time spent' on other apps today
and does not want to monetize this usage for the time being
User acquisition: There are some people who do not have / deleted FB
accounts. The dating gives them a reason to re-activate their FB account
FB Dating Success Measure
Given the FB-level goals, we want to keep users on FB dating as long as their
relationship status remains 'single'. User retention (ex those who got into a
relationship) should be our success measure. Retention boils down to making our
users satisfied with using the dating feature. Crucially, optimizing for retention
means optimizing for different things depending on the lifecycle of the users.
Goals
What matters for users in dating?
1. Users have enough options to 'swipe right' on
   (# of profiles swiped right / total profiles shown)
2. Users get enough matches
   (# of matches / # of profiles swiped right)
3. A match leads to a good conversation
   (# of matches with more than 3 messages exchanged / total # of matches)
   (# of bad convos, as defined by FB Dating guideline violations / total # of matches)
4. A good portion of matches leads to a meetup (virtual, or a real-life meetup pre-Covid)
   (# of meetups / # of matches with conversation exchanged)
5. Users have a good experience meeting the match and decide to meet again
6. A meetup eventually leads to an exclusive relationship (relationship status update) and the user drops off from FB Dating for some time
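The funnel ratios above can be sketched as a quick Python computation. All counts and metric names here are hypothetical, purely for illustration of how the stages chain together:

```python
# Hypothetical funnel counts for one week of FB Dating activity (made-up numbers).
counts = {
    "profiles_shown": 500,
    "swiped_right": 120,
    "matches": 30,
    "matches_with_3plus_messages": 18,
    "guideline_violations": 2,
    "meetups": 6,
}

def ratio(numerator, denominator):
    """Safe division so an empty funnel stage doesn't crash the report."""
    return numerator / denominator if denominator else 0.0

metrics = {
    "swipe_right_rate": ratio(counts["swiped_right"], counts["profiles_shown"]),
    "match_rate": ratio(counts["matches"], counts["swiped_right"]),
    "good_convo_rate": ratio(counts["matches_with_3plus_messages"], counts["matches"]),
    "bad_convo_rate": ratio(counts["guideline_violations"], counts["matches"]),
    "meetup_rate": ratio(counts["meetups"], counts["matches_with_3plus_messages"]),
}

for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

Note how each denominator is the previous stage's numerator, which is what makes these funnel metrics rather than independent ratios.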
As discussed, each metric should be carefully optimized depending on user lifecycle:
#1 is important for retaining users within the first weeks. If users don't see anyone appealing showing up, they will stop using the product pretty quickly.
#2 is critical for retaining users in the first month. If users get a very low match rate after swiping for a few weeks, they are less likely to continue using FB Dating.
#4 is important for medium-term retention (month 2 to 3). If users repeatedly get matched with someone who isn't compatible (interest-wise, personality-wise, or not looking for the same thing), the match is unlikely to lead to a meetup.
#5 is important for keeping users on FB Dating longer term (month 4 and beyond).
Evaluations
Optimizing for #1 means FB needs to 'acquire' more FB users as FB Dating users. This could mean showing the FB Dating icon more prominently in the FB app and reducing traffic to other corners of the app (Marketplace, videos and more). Optimizing for #1 also means showing more people 'out of the user's league' (people tend to like those who are ~20% more attractive than themselves), which will hurt the second metric of focus (match rate).

Feasibility: One advantage FB has over other apps is that pretty much everyone is on FB (but not everyone is on FB Dating). The pool of potential FB Dating users is theoretically larger than any other app's, which means FB can show more desirable profiles to each user.
Optimizing for #2 could hurt #3 and #4, e.g. by surfacing the profiles of people who swipe right on pretty much everyone but have relatively low conversation/meetup percentages.
Optimizing for #5 may not be feasible, as it's difficult to predict a person's compatibility in real life vs. on paper.

20. How would you measure success for preventing misinformation on Facebook?
Clarify: I will assume that misinformation includes posts, videos and pictures, and that sources could be Groups, Pages and individuals.
High-level structure: mission, information funnel and drop-off points, metrics & trade-offs, mitigations.
Mission/Strategy: Facebook's mission is to bring the world closer together and empower people to build community. Reliable information is key to fostering a healthy community. In light of recent world events and public sentiment, it is important for Facebook to earn back the trust of its users, governments and organizations. On the other hand, in an effort to quash misinformation, we should not threaten another core value: free speech.
Information funnel: Laying out my understanding of how information flows in Facebook:
1. User (individual/group/page) creates a post.
2. Post is flagged by the algorithm (some potential for false positives). I will assume the algorithm is more front-footed on posts about trending topics. The most obvious misinfo posts are suppressed at this point, while others are sent for human review.

3. Facebook employee reviews posts flagged by the algorithm: My assumption is that most of these require some contextual understanding/human judgement. Here too, there is an opportunity for false positives (non-misinfo posts labelled as misinfo) and false negatives (misinfo posts passed off as reliable posts).
4. Facebook user flags posts: Sent back for algorithmic/human review based on the number of flags raised and the impact of the trending topic.
Metrics: My metrics are distributed along the funnel to understand where the biggest issues are.
1. % of posts labelled as misinfo by the algorithm / total posts reviewed by the algorithm: This gives us a sense of how much is captured by the algorithm on the first pass. We can slice this by % for trending topics vs. overall, and over different time slices (e.g. last 12, 24, 72 hours) to get a better sense of virality. We can also look at the % of false positives and false negatives, and slice by region, demographic and device type to detect any bot activity. Blind spot: accuracy of the algorithm is not known.
2. # of posts flagged as misinfo by Facebook users / total posts viewed: Ideally we want this % to be small. Blind spot: doesn't tell us whether the user is flagging misinfo simply because their opinion differs from that of the post.
3. Probability distribution of users flagging content: 20% of users are likely flagging 80% of the content. These are likely self-appointed guardians of the system (similar to Wikipedia editors). Long term, we might want to leverage this community as an additional layer of checks.
4. Virality score of misinfo posts at 6, 12 and 24 hours: This would be a weighted average score of likes, shares, comments, messages, etc. As an example, a like gets the lowest score as it has the lowest impact, while a share gets the highest score since it promotes virality. We would need to work with data science to determine the scoring system. This helps us understand the severity of the problem.
5. Lead time to taking down misinfo posts from time of creation: This should go down over time, but if system load is too high, especially for top trending topics, it may surge initially.
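The weighted virality score in metric #4 could be sketched as follows. The weights and interaction names here are illustrative assumptions, not Facebook's real scoring, which the answer itself says would be set with data science:

```python
# Illustrative interaction weights: a like counts least, a share counts most
# because it directly propagates the post. Real weights would come from data science.
WEIGHTS = {"like": 1, "comment": 3, "message": 4, "share": 5}

def virality_score(interactions):
    """Weighted sum of interactions on a post within a time window (6/12/24h)."""
    return sum(WEIGHTS.get(kind, 0) * count for kind, count in interactions.items())

# Hypothetical misinfo post: interactions in its first 6 hours.
post_6h = {"like": 200, "comment": 40, "share": 90, "message": 10}
print(virality_score(post_6h))
```

Computing the score at 6, 12 and 24 hours and comparing the deltas is what tells you whether a post is accelerating, which is the severity signal this metric is after.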
The NSM is #1, as it tells us the efficacy of the algorithm. Ideally we want to catch misinfo as high up in the funnel as possible to maintain scale and to ensure that our mechanisms are strong enough to stem misinfo before it reaches the end user.
Guardrail metrics
1. % of false positives / all posts labelled as misinfo: I would slice this at each stage of the funnel, i.e. by algorithm, human reviewer and Facebook user. Ideally we would want this to decrease as we go lower in the funnel.
2. % of false negatives (misinfo posts not labelled) / all misinfo posts: I would slice this at each stage of the funnel as well, and again we would want it to decrease as we go lower in the funnel.
3. Long term:
   I would also look at NPS to ensure that community trust maintains an upward trend, i.e. that users trust us.
   Track repeat offenders and take them down after x legitimate offenses (x TBD with data science based on virality score).
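The two guardrail rates above can be written out as a small sketch. All counts are hypothetical, and in practice each would be computed per funnel stage (algorithm, human reviewer, user flags):

```python
def guardrail_rates(true_pos, false_pos, false_neg):
    """
    fp_rate: non-misinfo wrongly labelled / everything labelled as misinfo.
    fn_rate: misinfo that slipped through / all actual misinfo.
    """
    labelled = true_pos + false_pos          # everything we called misinfo
    actual_misinfo = true_pos + false_neg    # everything that really was misinfo
    fp_rate = false_pos / labelled if labelled else 0.0
    fn_rate = false_neg / actual_misinfo if actual_misinfo else 0.0
    return fp_rate, fn_rate

# Hypothetical stage: 90 correct takedowns, 10 wrongful, 30 missed.
fp, fn = guardrail_rates(true_pos=90, false_pos=10, false_neg=30)
```

Note the denominators differ: the false positive rate is over everything labelled, while the false negative rate is over everything that was actually misinfo, which is why both need tracking separately.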
Mitigations: For top trending topics,
1. Establish a center that is the single source of truth (similar to the COVID and climate change portals).
2. Surface this center in the hamburger menu.
3. Establish a task force of human reviewers focused on evolving trending topics to ensure there is sufficient manpower.
21. You are the PM of Messenger, you noticed that its DAU has gone
down significantly. How would you go about looking for the root cause?
First, I want to get a better sense of what is a DAU. What is considered “Active”?
User sending message, reading message, opening app?
All of them: any activity within the Messenger app.
Second, it’s important to know the context of the change. What timeframe was this
over?
Let’s assume this was over the past 30 days.

Ok, a follow up to this would be to check if this is a seasonal pattern? For example,
there could be a correlation to Messenger usage and school starting.
This is not seasonal.
Great, the way I want to address this is by thinking of (1) user issues; (2) feature
issues; (3) funnel issues (i.e. ways users access messenger); and (4) external
factors.
(1) Let’s first think about user issues.
I would want to see if we can use data to identify a segment of the userbase that is
at the cause of this problem. Examples of dimensions to analyze – platform,
device, geography, demographics, FB version, age of the account, usage of
Messenger - low, medium, high
Also, I think it’s important to understand DAU’s relative to that of other metrics
where there could be a correlation, example FB and/or WhatsApp
(2) Next, I would want to look at potential feature issues.
Here I would take a look at some of the health metrics of the product. Examples:
Messages sent and open rates
Notifications sent and the open rate
Message delivery latency
Breakdown of messages by type - text, image, video, url, etc.
Breakdown by sender type - bot, company, friend, stranger
(3) I also want to assess the funnel or the way people access messenger.
Breakdown of Message by channel - direct, facebook, API, 3rd party/widget
Breakdown of referral source by channel - direct, facebook (notifications,
messenger widget)
(4) Lastly, let’s think about potential outside forces that could be having an impact.

FB brand reputation
User behavior
Competition - iMessage, Google, Skype
Major event - Olympics, Elections
Let’s say that we cannot find a pattern based upon user info, however, we do find
there is a drop of messages being created from FB. Where would you go next?
I think it's important here to break down the messages by sender type as well as message type. From looking at those segments, we could start forming a few theories about what is occurring. A few potential ideas that come to mind:
Send a message to a friend is deprioritized in UI
Messaging is being cannibalized by other features (e.g. Facebook videos,
commenting, etc.)
Sharing via message is deprioritized in UI
22. You're a PM at a food delivery app. There's a 10% decline in
restaurant supply over the past week.
AA is Interviewer and BB is Interviewee
BB: I would like to ask a few clarifying questions.
AA: Sure, go ahead.
BB: Is it happening across the country, or is it region-specific?
AA: It's happening across all of North America.
BB: Has it happened in the past? That is, is it a seasonality issue?
AA: No, we are facing this issue for the first time.
BB: Are our delivery competitors also facing the same issue?
AA: Since this is a recent issue, we don't have much info about our rivals in the market.
BB: Are we seeing a decline in revenue due to our restaurant partners' supply-side issues?
AA: Yes, we have started seeing a gradual decline in the number of orders.
BB: Is the supply shortage scoped to any particular cuisine, or is it uniform across all cuisines?
AA: It's affecting the overall operations of the restaurants.
BB: Have we made any recent UI changes on our app? Maybe our users are facing issues while ordering?
AA: Yes, we have made some gradient changes, changed our onboarding process, and moved from pagination to lazy loading of restaurant listing pages, which is good in terms of data consumption on the user end.
BB: Cool. I would like to brainstorm to further scope down the root cause and steps to counter the situation.
AA: Go ahead.
BB: So here's the framework that I would like to go ahead with:
Brainstorm list of possible reasons by diving into use cases
Identifying root cause
Brainstorm ways to fix
Make a recommendation
Sounds good to you?
AA: Go ahead.
BB: By daily active users, we mean users who visit our app, search for the meals they want, scroll through restaurants offering that meal, and place an order. Is that a safe assumption?
AA: Yes, that's the correct use case.
BB: Okay. Since our partner restaurants are facing the issue, I believe we should look at external factors, as we can only fulfil demand if we have sufficient supply. Based on the above discussion, I have thought of three potential reasons:
Trade sanctions among countries through which restaurants import their
supply
On and off Covid-19 lockdown issues among various states around US and
Canada.
False rumors about any particular restaurant which sparked protest which led
to supply-side shortage.
BB: Are we facing any of the above issues?
AA: Yes, the on-and-off lockdown situation is making it worse for restaurants to get enough supplies. Can you think of a few solutions to counter it?
BB: Okay, let me brainstorm for a bit.
AA: Sure.
BB: Since the pandemic is a global issue and we don't have the power to stop the inevitable, we can surely think of other ways to provide value to our users. This involves:
Start partnering with local cafes, bakeries and giving out discounts on food
items which can keep the business going.
Get into grocery & medicine delivery segment since we have sufficient drivers
to fulfil the demand.
Acquire new customers via marketing & promoting cuisines which are
sufficient in supply.
I would like to go with the 2nd solution, as it's unique and provides much higher value to our customers in the current crisis. We have drivers who can deliver groceries, medicines and other products while following the same safety protocols they are accustomed to now. Although it would be a fundamental change in our value proposition and we would gain a new set of rivals in the market, we could expand our scope of business. We would also have to change our compensation model for drivers delivering the new set of products. Success metrics I would like to track:
No. of users adding groceries and medicines to cart
% increase in DAU/WAU
% of DAU placing orders relative to total users
23. Should Uber Eats be a different app from Uber Rides?

Approach:
1. Clarify the question with the interviewer
When we say "app", I'd like to confirm the app for which user. For example, in
Rides, we have an app for the rider and an app for the driver. For Eats, we
have an experience for a customer purchasing food but I don't know of the
experience for the restaurant. Could you confirm which users' experience
you're describing when saying "app"?
Let's say rider/customer purchasing food.
Also is this a situation prior to launch of Uber Eats, which was launched after
Rides? Or is the question geared towards the current state of Eats and Rides?
Let's say prior to Eats launch.
2. Dig into the user experiences: I want to start by observing the user experience for riders/eaters. It's helpful to look at their experiences to see how similar or different they are, which helps us answer whether they should be in the same app.
Rider - As a rider:
I already know where I need to go when I open the app. I need to enter an
address/location.
when I open the app, my highest priority is finding a ride quickly (could
be an assumption but I would share some sort of personal insight to
empathize with the rider experience)
There are very few steps I take before ordering a ride (decide ride type,
enter location, view time until pickup and cost, reserve ride).

Eater - As someone looking to order food:
I may or may not have decided what kind of food I want to order. I may
need help deciding this - often the biggest task for the Eater.
I want to see restaurant options available to help me decide what I want.
When I find a restaurant, I want to see the menu and make selections.
There are multiple steps I take before ordering food and each step can
take me some time to deliberate.
3. Now that I've covered the customer experience, I want to dig into the impact of each option on business goals, customer experience, and engineering experience.
Combined app experience
Business
(+) Potential for faster adoption and growth - Leverage existing
customer base with Rides and get Eats to existing customers faster
without having them discover and download a separate app. I could
see this as a benefit to Uber and possibly align with a business
strategy for growth.
(+) Potential opportunity for bigger innovations combining the
experience - what if Uber wanted to make it so that a rider can order
and pick up food while on a ride? I don't know that the combined app
experience is necessary to do this but I could see that it may make it
easier to test.
Customer experience
( - ) Combining Ride and Eats in a single app makes a larger app. This
can be off-putting as it takes up more space on a user's phone.
( - ) The processes for getting a ride and ordering food differ greatly.
The UX for a combined app would need to accommodate for the
different processes involved for each task so it might make sense to
completely separate the app to accommodate the differences.
Engineering
( - ) Testing each new app release can take more time with a much
larger app. With the combined experiences, there is more room and
risk of bugs.
Separate apps
Business

( + ) Opportunity to create two independently valuable lines of
business to minimize risks - if one suffers due to market conditions or
other factors, we could minimize negative impact to the other
product line because customers compartmentalize app uses (totally
oddball hypothesis but it's possible that the usage of one product
line would not drop because users stop using the other)
Customer experience
( + ) Easier to accommodate the different processes involved for
getting rides vs. ordering food
Engineering
( + ) Easier to test each new release with smaller apps and more
targeted UX.
4. Decide and summarize
I really want to advocate strongly for the customer's experience here. From what I
know about the tasks they go through to hail rides and order food, the processes
differ greatly enough that the experience should be different. I feel that the best
way to do this would be to have separate apps. I don't want to ignore the possible
opportunity for more innovative thinking in the future but if we're trying to decide
how we launch Eats, it's best for the customer experience to launch separately. We
can still solve the problems of new customer growth and tapping into our existing
Rider customer base with marketing in the ride app to drive Eats app downloads -
we don't need a combined app to leverage our existing customer base.

24. What metrics would you define for success of Facebook Lite?
First, I have a clarifying question: what is Facebook Lite?
If I understand correctly, this is a version of the Facebook application for Android with the same functionality but lighter in terms of size. It is aimed at emerging markets where bandwidth is limited.
What are the tradeoffs for the size of the app?
I would guess that some visual elements are stripped from the app to reduce its size.

Does this application already exist, or am I launching it? What is the timeframe for the product: before launch, or as it is today? The reason I ask is that if it's today, since Facebook Lite already has a meaningful user base, I'd focus on engagement rather than growth.
Goals
So the idea behind this application is to bring the same value of the Facebook
application to other regions and bring more users to the platform. The users can
then share their lives through posts or stories, react to other posts and comments,
and of course, they can connect to others.
The Facebook mission is to bring people together, and to break the technological
barriers and bringing people from emerging markets to the platform, serves the
Facebook mission.
Actions
The actions I would like users to do on Facebook lite are also similar to the
Facebook app, and I would like them to engage in one of the ways:
1. Download the application
2. Engage, and increase engagement over time
Advanced user
Created a group
Opened a shop
Created an event
Everyday user
Became friends
Posted
Shared a post
Commented on a post
Basic user
Open the app and scroll, but did not react
Metrics

So the metrics to track my goals are:
The first bucket is how Facebook Lite supports the Facebook mission, and here I can track:
User base growth
Activation rate: how many accounts are created on this app (MoM)
Acquisition rate: how many users downloaded the app (MoM)
User engagement: monthly active users, where an active user is one who engaged with the platform. I also need to be able to score the engagement level as a weighted score of the activity
The second bucket is product health:
Retention rate, and how it compares to the main Facebook application
Churn rate: how many users did not return to the platform
I think the true north would be the monthly active users, and a second metric is the
retention rate
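The "weighted score of the activity" mentioned above could be sketched like this. The action names and weights are hypothetical, loosely mapping the basic/everyday/advanced user tiers from the Actions section:

```python
# Hypothetical per-action weights; heavier actions signal deeper engagement.
ACTION_WEIGHTS = {
    "open_and_scroll": 1,   # basic user
    "comment": 2,
    "share_post": 3,
    "post": 3,              # everyday user
    "create_event": 5,
    "open_shop": 5,
    "create_group": 5,      # advanced user
}

def engagement_score(monthly_actions):
    """Weighted activity score for one user over a month."""
    return sum(ACTION_WEIGHTS.get(action, 0) * n
               for action, n in monthly_actions.items())

basic = engagement_score({"open_and_scroll": 20})
advanced = engagement_score({"open_and_scroll": 20, "post": 4, "create_group": 1})
```

A score like this lets MAU counts be complemented with a depth dimension, so two users who both "open the app" are not counted as equally engaged.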
Evaluate
The success of this app could cannibalize the Facebook application for Android: users may switch to the Lite app to enjoy its optimized performance. If overall this does not lead to churn, it may be okay.
If the engagement of both applications is the same, this may lead to the
question - do we really need all the features that we stripped down?
Abuse of the platform in emerging markets - fake accounts that are used to
share fake news for example.
25. One of our merchants is noticing an increase in fraud. How would
you solve the problem?
I'd approach this first with clarifying questions geared towards fraud prevention, as these are going to be important to actually finding a root cause. I'd want to take cues from the interviewer in order to then identify potential hypotheses.
Clarifying questions:

How is fraud calculated? It's important to clarify the metric used to know that
fraud is increasing. For example, is this a percent of all customers or
transactions?
Is the fraud that the merchant reported confirmed fraud (i.e., confirmed
fraudster vs. suspected fraud)? This is important to know as there is a
distinction between fraudster attacks and potential false positives. False
positives could lead us down the path of incorrect fraud ML scoring.
What type of fraud have they noticed? Fraud is categorized into different
types of chargebacks. It is important to know what type of chargeback is
occurring and the modus operandi (e.g., stolen payment method, stolen
identity, friendly fraud, triangulation, etc.) of fraud in order to solve the root
issue of the problem.
What type of business is this merchant in? Depending on answer, I'd dig to
see if other merchants in a similar industry are finding this issue.
What is the time period of increased fraud? Sudden or gradual increase?
Identifying fraud typically takes at least a month due to the nature of
chargebacks needing to mature. We would want to know if this is a continuing
trend or if there is a period of seasonality (e.g., peak volume periods in
shopping, holidays, pay periods, spikes in foreign exchange rates, etc. can all
influence increases in fraud).
Is fraud for this merchant occurring in a particular region (e.g., USA, UK, CAN,
etc.)? I ask as it may be possible that new regulations have come out for
which the merchant may not have prepared and fraudsters will take
advantage of.
Is this a relatively new merchant? I ask as it is possible that a merchant new to
Stripe or new to e-commerce may not be taking advantage of appropriate
fraud prevention tactics.
What were the merchant's fraud prevention tactics? What are our fraud
prevention tactics for this merchant? Knowing this can help us pinpoint a
failure in the system.
Once clarifying questions are answered, I would cover the types of issues at a
high-level. In general when digging into this type of a problem, we want to cover
our bases on the types of issues at hand at a high level.
I'd share a few examples of hypotheses, depending on answers from the
interviewer on the above:

External issues:
Merchant was unprepared for seasonality of high fraudster activity, which
could be confirmed based on peak period volume for merchant or similar
merchant in similar industry.
Bot attacks led to high fraud rate, which could be confirmed based on IP,
sign-in traffic, attempted sign-in rate spikes.
Something we changed recently:
We adjusted a feature/ML scoring on fraud that could have allowed more
fraudsters to act on the merchant. This could be confirmed if there is
uniqueness to this merchant's type of business and customer base that
makes it so that their bar for unwanted activity is misaligned to that of
other merchants on Stripe.
Something the merchant changed recently:
Merchant launched a new country/region or product at the time of fraud
spike. This could be confirmed based on fraud metrics on merchant
products and other segments.
Bugs:
Timeframe of fraud spikes occurred during an outage or unexpected
downtime of fraud prevention systems.
I'd ask the interviewer for context and if they'd like to pick a scenario. Once a
scenario is selected, we would go down 5 Why's to identify the root cause and
solution.
Example scenario: Interviewer tells me engineering changed something recently on
risk scoring.
If this is the case, I would check with the fraud engineering team on the changes
released approximately one month prior to fraud spike (this is due to fraud metrics
taking time to mature post feature launches).

26. 10% of Netflix users are inactive. How would you investigate?
Problem Statement: 10% of Netflix users are inactive. How would you investigate?
Clarifying Question:

1. Define inactive users: assuming these are users who have not logged into Netflix in the last week/month.
2. Time span to measure inactive users: assuming these users have been inactive for the last week.
Investigation Questions:
1. Has there been a change or error logged in the analytics tool we use to measure the "inactive users" metric?
2. Does the metric show Netflix inactive users across desktop/mobile/TV, or across all devices?
3. Does the metric highlight any particular demography?
4. Does the metric show a spike of 10%, or a gradual increase in inactive users over the last 1-2 weeks?
5. Have there been changes/updates as part of Netflix releases in the past week or so?
6. Have there been any errors logged for logins/signups in the past week?
7. Has there been a network issue reported in the last week?
8. Has there been a launch of a competitive app in the market in the last week or so?
Note: The RCA can go in different directions depending on external factors, or on app/analytics tool/network issues.

27. How would you measure success metrics for Facebook Fundraisers?
Clarifying Questions/Statements:
Clarify understanding of Fundraisers:
My answer: product that lets users create fundraisers for a particular
cause, and invite/share it with other users to donate to that cause.
Usually set for a certain target donation amount to be met, and a
timeframe for which a user can donate to the cause

Are we assuming the lifecycle of the Fundraisers product today? If so, then I’d
focus on engagement, since we have a meaningful amount of users on FB
app today.
My assumed answer: yes, focus on engagement
High-level product strategy discussion:
FB’s mission is to give people the power to build community and bring the
world closer together
The concept of fundraising, even outside the context of FB, drives very much
to both aspects (community, world closer) of FB’s mission
Communities often come together to contribute money towards a
common cause that the community is passionate about
As a result of rallying behind a common cause, that community is closer
together
From a business standpoint, Fundraisers also make sense because it
strengthens FB’s moat via its Social Graph
As more users express interest, share, and contribute to fundraisers, FB
is able to gather more data on user interests, people they may be
interested in connecting with, etc.
Ultimately, the northstar, qualitative goal of Fundraisers is to be the most
trusted platform for people to contribute to causes they’re passionate about
with others also passionate about the same thing
Actions a user can take in Fundraisers:
My approach to identifying success metrics for Fundraisers will be to first
identify key actions users can take as it relates to Fundraisers, then prioritize
which actions we’d want to focus on for engagement
Actions a user can take
Fundraiser admin
Create fundraiser
Set terms (amount, timeframe)
Fill in info
Share fundraiser with initial group of users
+everything a Fundraiser donor or potential donor can do
Fundraiser donor or potential donor
Share fundraiser with users

Browse fundraisers
Contribute to a fundraiser
Post on a fundraiser page
I’d want to focus on actions that drive towards the value prop of Fundraisers
(i.e. understanding a cause and donating towards it). I.e.
Viewing a cause
Donating towards a cause
Since we’re prioritizing engagement, given the lifecycle of the product (i.e.
launched and presumably already has a large amount of regular users), I’d
want to focus on depth. I.e.
Avg. number of fundraisers viewed per user
Avg. number of fundraisers contributed to per user
I’d also still want to keep in mind general health metrics of the product to
make sure it’s not falling apart
MAUs
Active defined as interacting (viewing, sharing, commenting in one,
donating, etc.) with fundraisers in some way
Chose monthly because monthly seems to be the natural frequency with which someone may contribute to a cause
% of fundraisers that are successful
"Successful" is defined relative to what admins themselves consider successful: survey admins on whether they felt their fundraiser was successful, compute the average fraction of the requested amount raised among those who said yes, and use that fraction as the threshold
E.g. if the fundraisers of admins who felt successful raised on average 76% of the total requested, then any fundraiser raising at least 76% of its requested amount would be considered successful
Number of fundraisers created per month
If I had to choose one metric to prioritize, it would be average number of
fundraisers viewed per user.
If we see this metric grow, we know that we are driving towards our
northstar, and doing it well since people are finding a lot of value in the
product
It is also worth considering the downstream effects of greater
engagement with Fundraisers that drive towards FB’s mission

The more people view and contribute to Fundraisers, the more likely
they are to come across and meet new people that share a passion,
become friends, DM each other, etc. ultimately bringing the world
closer together
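The admin-survey-based success threshold described above can be sketched in a few lines. The survey amounts are made-up illustrations:

```python
# Hypothetical survey data: (amount_raised, amount_requested) for fundraisers
# whose admins reported feeling the fundraiser was successful.
admin_successful = [(760, 1000), (400, 500), (1800, 2500)]

# Threshold = average fraction of the ask raised among admin-reported successes.
threshold = sum(raised / asked for raised, asked in admin_successful) / len(admin_successful)

def is_successful(raised, asked):
    """A fundraiser counts as successful if it hit the admin-derived threshold."""
    return raised / asked >= threshold

print(f"threshold: {threshold:.0%}")
```

The appeal of this design is that "success" is calibrated against admin sentiment rather than an arbitrary 100% target, since many fundraisers that fall short of their full ask are still felt to be successful.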
Considerations:
If we over optimize for the metrics above, we might miss out on some key
things:
Integrity
We’d want to make sure that the Fundraisers that people are creating
are actually for safe/worthwhile causes
One way we could track this is through user reporting of Fundraisers
Cannibalization
If people are spending too much time browsing and contributing to
Fundraisers, that may take away from other valuable parts of the FB
product
We can control for this by also tracking non-fundraiser related
actions for Fundraiser MAUs
28. Set the success metrics for Google Maps.
Assuming we are focusing on mobile experience…
Clarify the product: Overall, it is a directions app:
Gets you from point a to point b
Allows for several methods of transportation - car, public transit, walking,
biking, rideshare
After entering a location, you pick your transport method and select "get directions", which leads you through the steps to your location; if you are driving or walking, it speaks the directions aloud based on your settings
Shows businesses
Offers suggestions
Allows for you to search along your route, etc

Now that we have this clear, I would like to approach this problem by first thinking about the pain points this product solves, then the main goals for the product and how it fits into the larger goals of the business, and finally the actions and related metrics we would want to look at to define success for the product. Does that sound like a good approach?
Now, I would like to think about the pain points this app solves:
I need an easy way to know how to get from point a to point b
I need to know how far x is from y
I need to figure out when to leave for an appointment (time management)
I need to know what things are near me
After looking at the pain points, I believe the overall mission of the app is to easily allow people to get directions to somewhere they want to go. This definitely ladders up to the main mission of the business, to organize the world's information and make it accessible to everyone, as the app allows people to understand how to get around, their surroundings, and geography as a whole.
Since Google Maps is a well established product at this point, I believe that the
main goals we should focus on are retention and engagement. Are users
continuously gaining value from our app and wanting to come back and keep using
it? This is very important as Google Maps has many different competitors at this time: Apple Maps, Waze, etc.
Next, I am going to brainstorm actions and then a couple of core tactical metrics to
track from them:
Search for a location from current location
Search for location with multiple stops
Search for closest restaurants/coffee/etc
Go through direction steps
End directions
Arrive at location and end session
Metrics:
# of searches per user per month

# of trips per user per month
# of completed trips
Look at directions & hitting end before arriving at location
Look at # of completed trips from end to end - arriving at location and
ending there
# of searches “near me” per month
MAU - looking at the stability of users month over month
North star: # of completed trips
I would like to look at both scenarios because I think both bring value, but if I
had to prioritize one, I would choose looking at # of completed trips from end
to end as our main focus because it ensures that the user successfully found
their location, they are where they wanted to be, and their transportation goal
was met.
Some counter-metrics I would also like to look at are:
# of times a user’s route has to be re-routed - if this is possible. I would like to
talk to data scientists to see how this might be tracked. Once we look at this
we could try to dig in to understand, why are users being rerouted so much?
Is it because we aren't warning them about the next step early enough? Are
the directions ineffective? etc.
Rating on how the trip was - if it was bad, looking at why it was a bad
experience. Are people not getting value from our directions and why?
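The north-star metric above (completed trips, end to end) can be sketched as a simple calculation over trip events. This is an illustrative assumption: the field names (`started`, `ended_at_destination`) are hypothetical, not Google Maps' actual schema.

```python
# Sketch: computing the "completed trips" north-star metric from a
# hypothetical trip-event log. Field names are illustrative assumptions.

def completed_trip_rate(trips):
    """Share of started trips that ended at the destination (end to end)."""
    started = [t for t in trips if t["started"]]
    if not started:
        return 0.0
    completed = [t for t in started if t["ended_at_destination"]]
    return len(completed) / len(started)

trips = [
    {"user_id": 1, "started": True,  "ended_at_destination": True},
    {"user_id": 1, "started": True,  "ended_at_destination": False},  # ended early
    {"user_id": 2, "started": True,  "ended_at_destination": True},
    {"user_id": 3, "started": False, "ended_at_destination": False},  # search only
]
print(completed_trip_rate(trips))  # 2 of 3 started trips completed
```

Tracking this rate alongside the raw trip count separates "users are taking fewer trips" from "users are abandoning trips mid-way", which call for different fixes.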
29. What's your favorite fitness product, and what would you do if you
found that its users dropped off after 3 months?
My favourite fitness product is a social fitness app called Cult Fit. Users can book
fitness classes on this app and workout with peers. There are multiple programs on
the app, and you can shuffle between them as you wish and choose to attend
classes at your preferred time.
If users start to drop off after three months of attending the classes, I will do the
following things:
I will try to go deep into the problem by identifying patterns:

Check for app store reviews by users
Check for feedback and support tickets
Check for the social media of the company (usually, a lot of users put
complaints over there)
Try to understand patterns in the demographics of users that are dropping off
from the product.
Try to understand if the drop is seasonal. Does it happen that users who register in April-May generally see a high drop after 30 days?
Speak to a few users personally to try and understand the reason if any of the
above things don't give a direct clue.
Based on these tasks, I will most likely get to a list of reasons. I will prioritise the problems by evaluating the effort and impact of each. The objective will be to start with low-effort, high-impact problems.

30. You're a PM at LinkedIn and your data science team tells you that
64% of outbound messages receive no response. What do you do?
Structure: 1. Ask clarifying questions 2. Look at external factors 3. Look at internal factors (slice the data across different cuts and plan accordingly)
Clarifying Questions:
1. What's an outbound message? Is it something LinkedIn users send to each other?
2. Decline trends: Is the decline steep or gradual? Since when are we seeing this decline (say, a week, fortnight, month, etc.)?
External Factors:
1. Have we seen any competition-led changes, new campaigns, etc.?
2. Have we seen any bad PR?
3. Does the timeline of the decline fall on some holiday when most people are inactive?

Internal Factors:
1. Are there any data logging or instrumentation issues in the way we measure this?
2. Is the dip skewed across any of these cuts?
Geographies
Nature of customer (candidate: normal/premium, recruiter)
Any industry where we're seeing a drop
Hour/day of the message
LinkedIn usage of the user who initiates the message (dormant, MAU, WAU, DAU)
Experience of the candidate who initiates the message
Nature of device (Android, iOS, web)
App version
3. Any rise in sponsored content?
4. In the same timeframe as the decline, what is the trend on user engagement metrics (WAU/DAU/MAU, avg time spent, and retention)?
5. Did we make any product/UX/design change in this timeline, say w.r.t. the home page, messaging page, push notifications, etc.?
Based on the answers to the above questions, the discussion can be taken forward.
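Slicing the no-response rate across the cuts listed above can be sketched with pandas. The column names here (`geo`, `device`, `user_type`, `responded`) are hypothetical, not LinkedIn's real schema; the point is the groupby pattern for finding a disproportionate dip in one slice.

```python
# Sketch: no-response rate per cut; a slice whose rate is far above the
# overall 64% points to where the root cause lives (e.g. a broken build).
import pandas as pd

messages = pd.DataFrame({
    "geo":       ["US", "US", "IN", "IN", "IN", "DE"],
    "device":    ["ios", "android", "android", "web", "ios", "web"],
    "user_type": ["premium", "normal", "normal", "recruiter", "normal", "normal"],
    "responded": [True, False, False, True, False, False],
})

for cut in ["geo", "device", "user_type"]:
    # Mean of a boolean column is the response rate; complement it.
    rate = 1 - messages.groupby(cut)["responded"].mean()
    print(f"No-response rate by {cut}:\n{rate}\n")
```

The same pattern extends to multi-column cuts (e.g. `groupby(["geo", "device"])`) when a single dimension doesn't explain the skew.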

31. There's a 15% decline in cart additions on Flipkart. What's the issue?
For the larger audience: Flipkart is an e-commerce website in India, similar to Amazon.
Gather more information
15% decline in comparison to last week, last month, last quarter or last year
or across a particular sale duration that runs at a fixed frequency like Big
Billion Days (similar to Great Indian Festival which Amazon puts up)
Across a particular item, category, brand or overall.
From which flow: wishlist to cart, product detail page to cart, or product comparison to cart.

For which regions like Tier 1, Tier 2 or overall
Of which user segment: users less than 18 years, between 18-35 years, 35-60, or 60+.
Let's just say that these numbers are for month-over-month across a particular
category covering user flows from wishlist and product details page to cart for
the entire user base across all regions.
Identifying the possible reasons
1. System bugs
Any possible bugs that may be hindering cart addition in certain scenarios.
2. User experience or purchase-decision issues
Better prices or offers available on competitor websites.
Deals expiring very soon.
Recent changes in the CTA placement or icon leading to confusion or lack of visibility.
Purchase decision-making information repositioned, or language difficult to comprehend.
3. Performance issues
App crashes while loading any of those pages or performing actions in any of those flows.
Increased page load times, slow actions, or even timeouts.
Non-delivery or delay of reminder notifications after the user adds products to the wishlist.
4. Logistics issues
Increase in delivery charges or estimated delivery timelines.
5. Feedback issues
The most recent/popular reviews are not up to the mark.
Narrowing down the reasons
1. System bugs
Cross-check with other PMs or the engineering team about any recent development rolled out across those flows or features.
Do a sanity check myself across the flows for the category to see if everything is working as expected.
Ask the QA team to do a thorough investigation.
2. User experience or purchase-decision issues
Visit direct competitor websites to get an idea of their offers and prices.
Check with the pricing team on the deal uptime duration compared to last time. Also request them to run a quick offer and/or price check against competitors.
Cross-check with the PMs or the design team about any recent changes.
Check with the content team for any recent language updates.
3. Performance issues
Check the app analytics tool for possible crash events and load-time data.
Speak with the DevOps and QA teams about the observed load times and load-testing results for mobile internet, Wi-Fi, hotspot, etc.
Gather data from the system to analyse the delivery success rate of the reminder notifications (applicable to the wishlist flow) and any delay beyond the stipulated time.
4. Logistics issues
Check with the team about any serviceability issues and the reasons behind them.
5. Feedback issues
Go through the most recent and most popular reviews to see if the issue lies in the category being less preferred for some reason, or in any Flipkart-specific issue reported across the category.
Through this process we should be able to identify the probable issue.

32. Facebook wants to split its News Feed into two. What metrics would
you look at to validate this decision?
Clarification Qs:
What does "split" mean here? - split the UI and if so how (through tabs?)
What counts as "media" - clarified already

What is prompting us to consider this change? - any negative feedback from
the users/ decrease in certain Newsfeed metrics like engagement etc/ any
particular business goal?
Is this across all platforms, e.g. desktop/mobile?
Is this for all regions?
Mission of FB: FB's mission is to bring people closer together by enabling them to build communities.
Mission of Newsfeed:
To show users the stories/content that are the most relevant to them.
Overall goal of Newsfeed: Being a primary source of content, the goal of Newsfeed
is to keep people connected with their communities by sharing content and
interacting with the content - this ties back into the overall mission of FB.
With this the Primary goals of Newsfeed are:
1. To increase time spent on FB Newsfeed
2. To increase user interactions
Decision making strategy:
I would conduct A/B experimentation to validate split vs no-split.
If we find that the split view has better results, then I will conduct further A/B experiments on the UI for the same, e.g. two tabs (two separate pages) vs. a colored outline for different posts, etc.
Hypotheses:
1. Splitting the newsfeed into two parts would give users a structured view where they can bifurcate between content from their family and friends (close circle) and ads/posts from businesses etc. (interests), thus increasing engagement with the newsfeed.
Metrics that I would monitor to validate this are:

1. DAUs and MAUs for Facebook as a platform - since the newsfeed is the primary source of content and holds a good amount of real estate - should increase - P0
2. Avg time spent on newsfeed per user per day - P0
3. No. of sessions per user per day
4. No. of interactions per user per day through newsfeed posts - P0
5. % time spent on newsfeed / total time spent on Facebook - per session, per user - P1
6. Avg revenue per user per day from ads - P1
7. NPS metric for Facebook
8. Time spent and no. of interactions for UI 1 vs UI 2 - P2
Conclusion:
I will prioritise the P0 metrics above based on the primary goals for the newsfeed, which are time spent on the newsfeed and user interactions, along with overall retention of the Facebook platform.
If these metrics show an increase for the split view vs the non-split view, I would then also look into:
ARPU from ads, to make sure it is not drastically affected negatively --> as it can severely affect the revenue from Facebook Ads.
% time spent on newsfeed / total time spent on Facebook - per session, per user --> to check if the newsfeed is cannibalising users' time and how it is affecting other FB features.
Further consideration:
In the future, ML can be used to inform UI changes based on a user's interests, behavior, etc.
ML can be used to improve the recommendation algorithms within the split views to show better ads and ad placements:
Example: if a user has spent a lot of time on the friends-and-family view, the first time they switch to the media view, should the first page be an ad or a business post, etc.?

33. How would you measure the success of Facebook Stories?
To answer this question, I'd like to first clarify my understanding of FB Stories and the key elements that are essential to setting the success goal.
FB Stories: Very similar to IG Stories, and in fact built on the success seen from IG Stories, FB Stories allows users to create ephemeral/disappearing content lasting 24 hours. The content could be a video, a collection of pictures, or even text with added animations and other enrichment, emulating a storytelling style. Stories appear right on top of a user's feed and can be created by individual users as well as by brands for advertisements. Stories can be created within FB itself or even imported from IG. Additionally, unlike other FB posts, reactions and comments on a story are visible only to the user who created it.
FB has been around for a while now and has reached a stage where adoption has saturated; in fact, it has started losing users to other social media products. FB needed a new way to drive meaningful engagement among its existing users outside of the features that existed. Following the success of IG Stories in driving engagement within the platform, FB decided to launch Stories as part of its core app as well. IG has mainly been adopted by and caters to a younger demographic; however, FB still has a larger user base and a broader demographic spectrum, which can be tapped to drive similar levels of engagement.
Clarifying questions:
You'd like me to define success for the Stories feature in its current state? YES
Similar to the rest of the feed, users can filter the stories that they want to see on top of their feed? YES
I'd now like to:
establish the main goal of FB Stories in its current phase and how it ties to the overall mission of FB,
identify the relevant user segments,
walk through the customer journey for each of the user segments,
define relevant metrics as applicable to each phase of the customer journey and how these metrics can be used to derive a complete picture of the success of FB Stories,
define the NSM,
identify trade-offs/risks and potential ways to mitigate them.
Assumptions:
FB Stories has been around for over 1.5 years now and is available to both individual users as well as brands. However, given that the main intent at present is to use it as a hook to keep the user base engaged, I'd like to eliminate monetization as one of the key objectives for now.
The mission of FB is to give people the power to create community and bring the world closer together. People use FB to stay connected with their friends and family, see what's going on in the world, and express and share what matters to them.
The main goal of FB Stories is to drive increased meaningful engagement and retention among its users, with creative ways to share content and direct interaction with their immediate network. Additionally, FB Stories enables IG users to extend their reach to their FB network easily without having to onboard their audience separately onto the IG platform. This ties back directly to FB's main goal of bringing the world closer together by giving people a way to express and share what they care about through a storytelling style, as opposed to just posts and pictures.
User segments (Individual Users) :
Creators : Users who upload a story
Consumers : Viewers and responders to the story
Customer journey
Creators:
Click on create story > upload picture/short video/enter text > adjust audience if needed > post story > click notifications received on the story (reactions, comments through messages, someone having shared the story) > respond to reactions/comments > click on add story to upload more content
Click on create story within IG > import story from IG > adjust audience if needed > post story > click notifications received on the story (reactions, comments through messages, someone having shared the story) > respond to reactions/comments > click on add story to upload more content
Consumers:
Go to feed > scroll through stories appearing on top of the News Feed > click on a story to view > view the complete story > react/comment/share via messaging to another friend > continue to the next story > continue till all latest stories are viewed
Relevant Metrics:
Acquisition & Adoption:
Number of stories created per week
# of users exclusively creating within FB
Number of stories imported from IG
Engagement:
DAU, WAU, MAU
% of stories viewed per session per viewer
% of stories completely viewed per session per viewer
% time spent on stories per session vs % time spent on News Feed per session
Retention (for the Stories feature exclusively):
Stickiness = DAU/MAU
NSM: I would focus on viewer engagement level as the NSM for Stories at this point in time, as this would be the driving factor in creators adding more stories as a result of the response rate, driving meaningful 1:1 engagement, and letting us identify the correlation between users with high engagement levels and story creation volumes.
Weekly viewer engagement level = number of viewers who responded / number of views per story
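One reading of the NSM formula above, using unique viewers per story per week, can be sketched as follows. The event structure is an illustrative assumption, not FB's actual data model.

```python
# Sketch: weekly viewer engagement level for one story.
# "Responded" here means reacted or commented (an assumption).

def engagement_level(views, responses):
    """views/responses are sets of viewer ids for one story in one week."""
    if not views:
        return 0.0
    # Only count responders who actually viewed the story.
    return len(responses & views) / len(views)

week_views = {"ana", "bob", "cara", "dev"}
week_responses = {"ana", "cara"}          # reacted or commented
print(engagement_level(week_views, week_responses))  # 0.5
```

Averaging this per-story ratio across all stories in a week gives the platform-level NSM trend to track.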
Guardrail metrics:
Stickiness should be able to provide a view into engagement trends and the
impact of Stories in retention of its users
Additionally, % time spent on stories per session vs % time spent on News
feed per session will also give us a view into where users are spending their
time within the app and determine next moves
Trade-offs/Risks and mitigations:
Excessive number of stories to scroll through, resulting in UX impacts and losing viewers after stories have been viewed. FB can determine the number of stories to be shown per session based on analysis of % time spent on stories per session vs % time spent on News Feed per session.
Loss in depth of engagement: a lot of users may choose only to use reactions to respond to stories viewed, which not only impacts engagement quality but also poses challenges in obtaining deeper insights into user behavior.
Not all users are encouraged by 1:1 engagement; higher levels of gratification, in the form of public view counts, likes, and responses, may be needed for users to come back and post more stories.
34. You are the PM of Lyft, and there is a sudden increase in users canceling
rides. How would you analyze this issue, and what steps would you take to
correct it?

I'd like to start by understanding the basic scope of the change we've observed.
1. Who is the user? Lyft is a two-sided marketplace so it could be the driver or rider. - Driver
2. When did the change happen? Was it slow or a sharp drop-off? - sharp, 2 weeks ago
3. Was there a specific geographic region? Urban vs suburban? - no trends
4. Cancelling can happen at several points in ride acceptance; is there a particular point where we see most cancellations? - just after accepting a ride
5. Is there any other metric which has changed significantly over the same timeframe? - no
Ok, so we're thinking about driver cancellations across all regions which occurred
suddenly two weeks ago and primarily after accepting the ride.
This gives me some pointers about where to look, but I'd like to start by eliminating
any possible errors and obvious issues first, so I'll start with measurement and
technical issues.
If we don't find any issues, I'd like to take a deeper dive into the issue by examining
the 5Cs: climate, company, competitors, customer and collaborators.
Firstly, measurement: Have we changed the way we measure cancellations? Have
we made any changes to the systems we use to measure cancellations? - no

Ok, let's think about technical factors:
Internal - Has there been any change in the availability of any services, eg. average
uptime? Have any of our diagnostics surfaced issues with our reporting? -no
External - Have there been any updates to other services we integrate with? Any
OS updates? -no
If it's not an obvious error or issue, we need to look more deeply.
Starting with the climate, essentially the overall trends:
Static - Do we see any fluctuations like this on a weekly, monthly or annual basis? - no
Dynamic - Were there any recent major events that could affect cancellations - since it's national, events like the election etc.? - no
Unlikely to be anything localised since it's national.
Moving on to what the company has done recently:
Did we start anything? Examples - changes to incentives, changes to ratings, matching algorithm, payment method, A/B tests/experiments, changes to other features? - no
Did we stop anything? Examples - promotions, incentives? - no
What about our competitors? The users in this case are the drivers. The issue is
occurring after drivers have made themselves available for a ride so we can
eliminate indirect competition like drivers getting other jobs. The competition is
likely to be other on-demand services like Uber, Grubhub, Instacart etc. Have they
started any new incentives or communications recently? -yes, Uber launched
something
Ok, I'd like to know more about that. What was launched? - new incentives for being online and taking rides. When did they launch these incentives? - 2 weeks ago
Ok, this sounds like our cause, but the other possible causes I would explore would be:
Customer: Are there specific segments of drivers impacted, i.e. when they were onboarded, which class of Lyft they are in, types of rides they are getting, distance to travel? Have we changed the number of drivers onboarded recently? - no
Collaborators: Lyft is a two-sided marketplace, so we also need to consider the riders. Has there been any change in their behaviour? Examples - change in length of rides requested, more customers onboarded, increase in number of requests? - no

Since it looks like the incentives from our competitors are the cause of the increase
in cancellations, my first step would be to get in the shoes of drivers - either by
talking to them or using our existing insights.
Some of their needs might be (in order of priority):
maximising earnings for the amount of time they have
reducing idle (non-rider) driving time
getting rides to and from preferred areas
being matched with higher rated riders
A few ideas to counter our competitor incentives:
raise our own incentives per ride
add penalties for high driver cancellation rates
add bonuses for low driver cancellation rates
add bonuses for less popular trips (based on travel time, overall length etc.)
We might not want to get into an incentives bidding war with our competitors so we
might stick to incentivising behaviour that we want, especially if we can align the
incentives with an increase in our own revenue.

35. Rider cancellations have increased by 5%. Investigate what happened.
What is rider cancellation? Is it a rider requesting a ride but hitting the cancel
button after the driver is on their way? Yes.
I can see why it's concerning since that affects our # of completed rides and driver
earnings. Let's try to understand the root cause.
Rider cancellations have increased in what time period? We noticed it last week.
Was it slowly dropping for a while or did we notice it only last week? It was a
sudden drop last week

How did other metrics behave last week? For e.g., aggregate cancellations increased, but if cancellations/rides remained constant, then it's expected, because cancellations increased in proportion to rides requested. No, rides were the same WoW but cancellations increased. DAU for riders and drivers was the same as well.
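The proportionality check above can be sketched as comparing the cancellation *rate* (cancellations / rides requested) week over week instead of raw counts. The numbers below are made up for illustration.

```python
# Sketch: if rides are flat but the rate rises, it's a real behavioural
# change rather than volume growth. Figures are hypothetical.

def cancellation_rate(cancellations, rides_requested):
    return cancellations / rides_requested if rides_requested else 0.0

last_week = cancellation_rate(cancellations=5_000, rides_requested=100_000)   # 5%
this_week = cancellation_rate(cancellations=10_000, rides_requested=100_000)  # 10%

print(f"WoW change in cancellation rate: {this_week - last_week:+.1%}")  # +5.0%
```

Normalising like this is what justifies moving on to segment and root-cause analysis: the raw count alone can't distinguish "more rides" from "more cancelling".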
Data quality issue? I will check with data analytics team to make sure it's not a data
quality issue.
Let's see if it's in a specific segment:
Was it in a specific region? All across US
Was it for a specific Lyft car type? No specific type
Was it across a specific user segment? 1) Maybe we have some new users of a specific type (maybe older users who don't know how the app works) who are canceling their rides? No, we don't know of any big enough segment that caused this drop. 2) Is it only on the iOS app or the Android app? No.
Ok so let's look at different types of hypotheses I will investigate to find a root
cause:
Internal: 1) I will check if there's a new product launch. E.g., if users see the 'Wait & Save' option when it's new but select the usual Lyft due to habit, they might cancel and go back to try the new 'Wait & Save' option. This might especially be true if the usual Lyft gives a similar pickup time as Wait & Save. So that's the next thing I will check.
2) Avg pickup time estimates. Did they increase last week compared to the previous week, and the same time last month? This might be because of lower supply.
3) Did the cancellations increase due to weaker driver ratings? Once the rider is matched to a driver, they see the avg rating, and maybe the supply we had last week was lower quality, which caused higher cancellations.
Technical: 1) Latency issues. Is the app taking longer to load a driver match and by
the time the app surfaces it to user they get frustrated and cancel?

2) Did we deploy something that made the car order flow fail for a group of users?
3) Any errors in the car ordering funnel? Are there higher client-side errors in our developer dashboards?
4) Is there an increase in bot traffic that made the cancellation metric look artificially high?
5) Were there network or connectivity issues that canceled rides automatically?
External:
1) Competitor factors. Did Uber launch any promotion last week that caused riders to comparison shop and cancel the Lyft ride after ordering because Uber was cheaper?
2) External factors like protests etc. that made users feel worried after they ordered a Lyft and then got a Google/Apple alert in that area. Unlikely, since they might not order a Lyft at all if they are worried about protests.
3) Was there any PR issue? Negative Lyft news? Unlikely, because riders might not request a ride if this were the case.
These are the things I'd investigate. Depending on the root cause, I will think about how to fix it. If it's internal:
1) I will see if more people are shifting to 'Wait & Save' in the upcoming weeks; if so, it might be ok, since it's a new feature and riders are adopting it more.
2) If avg pickup time estimates keep increasing, I will work with the Supply team to get more drivers.
3) If the quality of drivers is declining, I will work with the driver team to brainstorm solutions on how we can improve quality.
Technical: I will check next week if the cancellations normalize; otherwise, work with the Engg team to find the technical bug and fix it.

External: I will find more details about Uber's promotion and work with marketing to ideate on whether we should launch a promotion for our rides. If it's due to things like protests, I will keep monitoring the metric until the protests calm down and check the cancellations metric again to make sure it goes back to previous levels.

36. How would you measure success for Instagram shops? Don't
consider the Ads revenue

Clarify: IG Shops is the marketplace on IG, where users can view items listed for sale by creators/businesses and also search for specific items or categories. Users can actually buy through Shops instead of being navigated to the original business website. Correct?
Clarify: Can only sellers/businesses which own their inventory have a "shop"? Or can a social media influencer (a different IG account) sell on behalf of a business?
Structure: In order to figure out what success looks like, I'd first want to outline the product goals, the user actions that need to happen to contribute to those goals, and metrics to measure those actions or performance in general. Finally, I will evaluate the metrics, i.e. confirm a north star, prioritize the top 3-4 metrics, and outline any counter-metrics or trade-offs.
Goals: IG Shops is fairly recent within the broader product/app, but it is also the
main product for commerce to take place (is that correct?). So based on where it is
in its lifecycle + the mission of the product (to empower
creators/makers/businesses to set up their shop and give businesses an immersive
storefront for people to explore and shop their best products), I might want to
focus on these 2 as the main high level goals. (1) Adoption and Activation on the
creator/seller side (2) Engagement and conversion (to sales) on the user/buyer side
Actions: As briefly mentioned above, IG Shops is a 2-sided marketplace product; below are some user actions within the goal buckets outlined:
(A) Creators/Sellers (adoption/activation): Create shops -> Add/create catalogs -> Maintain inventory -> Add items to posts and stories
(B) Users/Buyers (engagement -> conversion): Search/discover shops -> Browse and click through catalog on shop page/profile -> Click through links on stories or posts to product detail page -> Add to cart -> Complete purchase -> Message/DM seller about the product

Metrics: (A) Creators/Sellers (adoption->activation)
# of shops created per month
# of new sellers per month
# of active sellers per month (active - maintaining catalog and inventory and
posting product links)
Avg # of products per shop/Avg # of items (inventory health) per shop > may be
indicative of how much sellers are bringing to the platform
(B) Users/Buyers (engagement -> conversion)
Impressions and CTR on shop items through posts
Impressions and CTR on shop items through stories
CTR on the "view on website" button (not sure if we get data on purchases/conversion if users choose this option, but may be good to collect as a proxy for sales/purchases)
# of purchases per month; Month over month
Delta between # of items in cart vs purchased > may be indicative of
poor/confusing checkout CX
Evaluate: Of all the tracking metrics outlined above, I would pick # of purchases (or
sales) per month as the north star for this product, because it signals
adoption/activation on the seller side as well as awareness, engagement and
conversion (complete funnel) on the buyer side. A trade-off here is that we may
not have a good baseline and just this # does not indicate if the CX is good. So we
should also keep an eye on conversion rates to understand missed sale
opportunities and # of active sellers for general health of the marketplace.

37. Nike's sitewide conversion has declined year over year. How would
you evaluate this?
I'd like to first clarify the scope of the problem and the definition of conversion...
Let's assume that this applies to Nike's worldwide ecommerce platform and that
"conversion" means (total users completing purchases)/(total visitors to site). Let's
also assume that this isn't a logging issue and we've double checked that it's a real
drop.

I'll start by testing for overall demand for Nike goods:
0. How are global economic conditions / demand for our goods? We can
measure this by looking at trends in retail and/or footwear specifically, our
competitors' sales, etc
1. Is there any negative perception of Nike that might be impacting us? We can
look at news cycles and our ratings
2. Do we know of any targeted campaigns by competitors against us? We can
look at competitor research here (e.g., are we seeing more targeted Reebok
ads to those who visit Nike properties).
Assuming nothing there is indicative of problems, I'll next move to testing for
interest / demand in buying from Nike directly:
1. Is it possible consumers are buying more from our distributors than from us directly? We can check out sales and/or conversion data, where available, on sales via distributors.
2. If yes, is it possible we've got higher prices on our site than our partners? We can track pricing on given items for us versus distributors.
Assuming there's no compensatory switch from us to partners, I'd hypothesize that
the root cause is somewhere in the purchase experience and next move to figuring
out exactly where in the sales funnel we are losing users.
Broadly, I'd think about the user journey on [Link] as these stages:
1. Landing page: could be our homepage, could be directly to women's shoes
2. Product page: the specific product in question (e.g., Women's Air Force 1s)
3. Add to cart: making the decision to add the product to the cart
4. Checkout: inputting payment information
5. Confirmation: hitting "buy"!
I'd want to look at conversion from stage to stage over time, average clicks for
each stage, and segmentation of the user journeys (e.g., how many people
navigate directly to product page, how many add multiple products to cart then
check out versus check out after a single product).
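The stage-to-stage conversion analysis described above can be sketched as follows. The stage names mirror the journey outlined; the counts are hypothetical, and the point is to locate where the year-over-year drop concentrates.

```python
# Sketch: stage-to-stage funnel conversion. Counts are illustrative.

FUNNEL = ["landing", "product_page", "add_to_cart", "checkout", "confirmation"]

def stage_conversion(counts):
    """counts: number of users reaching each stage, keyed by stage name."""
    return {
        f"{a} -> {b}": counts[b] / counts[a]
        for a, b in zip(FUNNEL, FUNNEL[1:])
        if counts[a]  # guard against division by zero
    }

counts = {"landing": 100_000, "product_page": 40_000,
          "add_to_cart": 12_000, "checkout": 6_000, "confirmation": 3_000}
for step, rate in stage_conversion(counts).items():
    print(f"{step}: {rate:.0%}")
```

Comparing this dictionary for this year vs last year pinpoints the stage whose conversion dropped, which then focuses the hypothesis work that follows.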

Let's say that we see that average clicks haven't changed, and segmentation of
user journeys is consistent (and ultimate conversion rate is consistent across these
segments). However, we find that there's been a steep drop in conversion from
checkout to confirmation: people aren't, for whatever reason, taking that last leap.
I have a few hypotheses for this:
1. People are getting sticker shock and choosing not to move forward
2. There's some kind of block preventing the transaction from going through,
either as an error or detected fraud
3. Products are no longer available to be checked out
Given that our prices haven't changed much and/or are in line with the market,
which itself is healthy, I would prioritize investigating the last two hypotheses as
the causes of the declines.
I'd look into errors via reported bugs, as well as customer support and service
tickets, and into our fraud identification rates. Let's say these aren't revealing any
insightful trends.
I'd then look into product availability and see how many users are being served
with the note that their chosen product is no longer available. This seems to be a
big cause: lots of folks are adding items to their carts and getting all the way
through, only to be unable to check out because of product availability.
In terms of next steps, I'd segment all users at checkout (versus confirmation) and
see the proportion being served that notice. I'd also want to more deeply
understand these users: are they repeat buyers? possibly bot buyers? what kind of
transactions (or not) do they go on to have with Nike?
Once I confirmed this as the root cause and better understood its context and end
impact, I'd be able to make a recommendation on how to address this, whether by
checking availability of a good more often during the checkout process, holding
goods in cart for longer, or something else altogether.

38. How would you measure the success of Facebook Rooms?

First I would like to clarify my understanding of the product:
FB Rooms is a video call product, similar to Zoom, in which a user can create a
room and then invite people to join, either through Facebook (if they are already
friends) or with a link or invite.
Users are able to use filters and virtual backgrounds.
Now that we have aligned on the understanding of the product, I would like to think
about the goals for the product.
Goals: Business goals:
Rooms was FB’s answer to video chat and ensuring that through the
pandemic (and beyond) people are still able to connect and see each other
near or far, virtually.
I would also assume that this feature was meant to compete with rivals like
Zoom, Teams chat, Google Chat, Facetime, etc.
With that said, I believe that the main goals the business was looking at were:
Acquisition - how can we draw more users away from our competitors
and over to FB?
Retention - how can we ensure that we are keeping our users on the FB
platform and allowing them to connect and build deeper connections via
video calls on our platform rather than going elsewhere?
User goals/pain points:
Goal: allow users to connect with others without actually being with them
Non-tech savvy folks might want to use video call functionality and they’re
already familiar with Facebook, so it's less intimidating
Connecting via video is so much more personal than chat or voice calls
Connecting to the larger mission: I can definitely see why FB has chosen to build
out this product, because it ties to the larger mission - giving users the tools
to connect and build communities across the world. Video chat provides an even

more meaningful and personal way of connecting and communicating to bring the
world closer together.
Now that we have identified the goals for the product, I would next like to think
about the actions users will take within the product.
Actions: Host:
Create a room
Invite people via FB
Invite people via text, email, etc.
End chat
Attendees
Accept invite and join room
Add other people to the chat
Use filters/backgrounds
Leave Room
Next, I would like to think through the metrics we could take a look at that would
ensure the health of the product through an acquisition and retention lens.
Metrics:
# of rooms/calls created per week
Avg time spent in the room
# of Facebook & non-Facebook members joining calls
# of first-time calls made
# of attendees turned hosts MoM (viral coefficient)
# of second calls made
# of new FB accounts made post-chat
North star metrics:
Acquisition: # of first-time calls made
Retention: # of rooms/calls created per week/month
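The "attendees turned hosts" metric above is effectively a viral coefficient; a minimal sketch, using made-up monthly numbers purely for illustration:

```python
# Illustrative month-over-month numbers (assumptions, not real data).
attendees_this_month = 50_000
attendees_turned_hosts_next_month = 4_000

# Fraction of this month's attendees who go on to create their own room:
# a simple viral coefficient for Rooms.
viral_coefficient = attendees_turned_hosts_next_month / attendees_this_month
```

A value sustained above 1 would mean each attendee cohort generates more than one new host, i.e. self-sustaining growth; here it is 0.08.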

Countermetrics:
# of calls lasting 1 minute or less
This would help us to dig into performance. Why is this happening? Are
calls dropping? Are people just testing the tool? Are invites not being
sent, are people not joining, etc?
Regular messenger engagement - text based messaging back and forth - is
engagement with this dropping, increasing, or staying the same?
39. You're the PM for all of Instagram Stories. What is your North Star
metric?
Took a moment to think about and review the structure with the interviewer:
1. Alignment on product / assumptions
2. Goals of IG Stories
3. Audiences and user paths
4. Metrics
5. Counter-metrics, etc.
6. Other thoughts
Assumptions: Focused on the United States
Audience:
Creators - think of something you want to put out → create content →
decide on who you want to view it → post content → engage with viewers
    Influencers / journalists
    Regular folks
    Shops
Consumers - look at IG → view content → explore content → comment /
engage with posts → view more
    Friends
    People you don't know
    Global reach
Goal of product in the funnel: Engagement
Metrics:
Creators:
    Number of views per story
    Number of reactions per story
    Number of comments per story
    # of followers gained from a particular story
Consumers:
    # of stories reacted to
    # of stories watched before dropoff
    # of different creators seen
    # of stories commented on
North Star metric:
    # of reactions per story in relation to the number of viewers

Reasoning: we are in the engagement phase, and tying back to Facebook's mission
to build community, an active participant in stories means engagement, and that's
where we want to be.
40. Define success metrics for Facebook pay
Questions: A couple of questions to make sure we are on the same page with respect
to my overall product understanding and its positioning in the market.
Product understanding: FB Pay is a payment/wallet offering with which
customers can securely make peer-to-peer payments using
Messenger/WhatsApp, or make purchases on FB/Instagram, etc. Also,
competitors with similar offerings are Apple Pay and Google Pay (to name a few).
Is this a fair understanding of the product?
Ans: Yes.
Current state: Are we in the testing phase? Or is it already live across the
United States? This will help with getting a better sense of should we focus
on Growth or getting new users on the platform etc.
Ans: The product is live globally but limited to FB main app. In a few
countries it's available on FB and Messenger app, and in some countries
on IG.
Goals: Facebook's mission is to bring the world closer together and empower
people to build community. By offering a unified payment solution like FB Pay, we
are simplifying payments across the FB ecosystem (Messenger, IG, and WhatsApp),
which in turn will improve user engagement. For example, in order to split a bill and
pay, an FB user doesn't have to log in to PayPal/Venmo.
Metrics
Core Product Metrics
Daily number of transactions using FB Pay (applicable to all FB products:
main application, WhatsApp, IG, and Messenger)
Checkout Conversion rate - carts abandoned vs carts attempted to pay. (Not
applicable to WhatsApp/messenger)
Secondary Metrics

Successful vs. unsuccessful payments
Number of chargebacks/disputes in 30 days (counter metric)
Detailed transactional volume across all FB products (main application,
WhatsApp, IG, and Messenger)
Ratio of users using FB Pay to total FB users (7-day average; the same metric
can be extended to other FB products)
Summary: "Daily number of transactions using FB Pay" will help track the health
of the FB Pay product as a whole, sliced across various FB products. We need to
be sure that an increase in the number of daily transactions doesn't result in
an increase in the chargeback/dispute metric, so I would choose "Number of
chargebacks/disputes in 30 days" as the counter metric.
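One way to operationalize that counter-metric pairing is a simple guardrail check; the counts and the 0.1% threshold below are assumptions for illustration, not real FB Pay figures:

```python
# Illustrative 30-day rolling numbers (assumptions, not real data).
chargebacks_last_30_days = 1_800
transactions_last_30_days = 3_000_000

chargeback_rate = chargebacks_last_30_days / transactions_last_30_days

# Guardrail: growth in daily transactions only counts as healthy if the
# dispute rate stays below an agreed threshold (0.1% here, an assumption).
ALERT_THRESHOLD = 0.001
needs_review = chargeback_rate > ALERT_THRESHOLD
```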

41. How would you prioritize solving for the worst post-booking
experience for Task Rabbit?
What is TaskRabbit? A marketplace connecting freelance labor with local
demand, allowing consumers to find help with everyday tasks, including furniture
assembly, moving, delivery, and handyperson work.
Explain the user flow in short:
User logs in, searches for the services, sorts, picks, and books the service
The gig worker gets 24 hours to pick the request, accepting or declining
If it is accepted, it is considered a "booked" gig
What is the post-booking experience?
Gig worker shows up and does a great job
Gig worker doesn't show up
Gig worker shows up late
Gig worker does a bad job
Gig worker fights and asks for additional money
Gig worker is abusive, unsafe to be with, steals, etc.
Segregate these experiences by volume, %, and frequency of the bad experiences,
location, demographics, categories of the purchase for which it happened, etc.
Identify the top bad experiences. Gather with DS, tech, and ops to find a

solution. E.g., the highest complaints came for gig workers who had abusive behaviour.
Long-term impact: platform engagement is reducing, bookings are reducing, there is
bad press about the platform, and the company may shut down.
Brainstorm solutions:
1. Get background checks or verification
2. Block these individuals immediately
3. Offer online emergency support that links to the police or law enforcement
Prioritize solutions (impact/effort analysis):
Solution 1: Impact: M/H, Effort: H, Risk: High (of data sharing and data verification)
Solution 2: Impact: L, Effort: L; may not move the needle in terms of the long-term impact
Solution 3: Impact: M/H, Effort: H
Prioritize 2 followed by 3, and monitor the metrics.
Quantify this impact in terms of the metrics:
# of complaints received (if it has reduced or not)
# of bookings growth (if it has grown after the changes)
42. You are the PM for an e-commerce site. You see that sales have
dropped 7% in the last few days. What could be wrong?
Since you have indicated that the drop has been from the past few days and it is
the sales drop.
Assumption 1: It is not a gradual decrease but something that has been seen in the
last few days. My assumption is the e-commerce site is the Google Store, where
Google sells all its products.
At a very high level, the decline in sales could be caused by customers buying
less on the site, customer churn, a decline in new customers, a higher cart
abandonment rate, or issues in the checkout process.
I would structure the following and narrow down the possibilities.

1. Gather week-over-week (WoW) data on total orders and the conversion
process to identify a few cohorts.
2. Look into the internal factors of the product and company.
3. Look into the customer journey.
4. External factors
5. Summary
6. Recommendations
First, I would like to look into the WoW data on total orders and the conversion
process to identify a few cohorts:
1. Geography (US, Asia, UK, etc.)
2. Devices (iOS, Android, any other)
3. Browsers (Safari, IE, Chrome, Opera, any other)
4. Product line (Phones / Connected Home / Laptops / Gaming / Accessories)
If there is a dip in sales in any of the above, I would further analyze my
data to formulate a hypothesis. My assumption is the sales drop is across all
cohorts, which also rules out a bug in metrics reporting.
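The cohort check just described can be sketched as follows; the segments and order counts are made up for illustration:

```python
# Week-over-week orders per cohort (illustrative numbers, not real data).
orders = {
    "US":   {"last_week": 40_000, "this_week": 37_200},
    "Asia": {"last_week": 25_000, "this_week": 23_250},
    "UK":   {"last_week": 10_000, "this_week": 9_300},
}

def wow_change(segment):
    """Relative week-over-week change for one cohort."""
    return (segment["this_week"] - segment["last_week"]) / segment["last_week"]

changes = {name: wow_change(seg) for name, seg in orders.items()}

# If every cohort drops by roughly the same amount, the cause is likely
# global (site-wide bug, pricing, macro); one outlier cohort would point
# at something local (geo, device, browser).
uniform_drop = max(changes.values()) - min(changes.values()) < 0.01
```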
Now I would look into a few internal factors.
Did we introduce any new features or bug fixes at the top of the
funnel, checkout, pricing, or catalogs?
Ask tech-ops/system admins to check the logs to rule out
infrastructure issues.
Is there any change in the product catalog? Is the catalog displaying
the right quantity of products, or is there a bug?
Check the abandoned-cart metrics to see if the drop is due to cart
abandonment. A higher % would indicate that sales are not being
converted and there is an issue in order conversion.
Check with Marketing whether any promotions have ended. Customers may have
been placing orders using a promotion, and now that it has ended, they are
looking for a better price on the products.
Did we announce an upgrade to existing product lines which is
cannibalizing current sales? If so, we need to refine the pricing
strategy for older-generation products.

Does gross merchandise volume remain constant, or is there a dip in
volume?
Now coming to the customer journey of placing the order below are the things I
would like to evaluate.
Is there any drop in metrics in top-of-funnel activities (search engine,
Google Shopping)?
Is there any decline in new and recurring customers? Any customer churn
metrics?
Is there any issue with adding items to the cart page? Any logs or errors to
cross-check?
Are there any issues with the payment process or payment rails? (Are we
getting declines for the orders placed? Are we sending the correct payment
information to the payment processors?)
Now looking into the external factors.
Competitors launched a new product (Apple, Samsung, etc.), which resulted
in a drop in sales.
Natural causes (COVID, recession, natural disasters, etc.)
Any PR issues (like security concerns for Connected Home or data privacy
concerns)
Any country-specific issues or sanctions for Google products.
Are customers waiting longer to upgrade to new products? If this is a
general trend across the industry, we need to either launch a cheaper
model or improve our existing trade-in program to convert more sales.
Summary: The probable reasons would be
A better price may be available from third-party sellers/stores, which might
be cannibalizing sales on the e-commerce site, since overall gross volume
remains the same.
Check for shipping delays or order delays in the past few weeks, which may
have shifted users to buy from third-party providers with stock on hand to
fulfill their shipping needs.
Here are a few recommendations:

Currently, when searching for Google products, the listing is shown below the
sponsored ads. Work with the Google Shopping team to list the Google Store
above the third-party sponsored ad content to drive better sales to Google
stores.
Build an A/B test offering upgraded, faster shipping to drive better sales on
the Google Store e-commerce site.
44. How would you measure the success of Facebook Live?
What is Facebook Live?
Facebook Live is the posting of a live video -> Virtual events, QA, conversation,
presentation/performance. It can be on your Facebook page, event, or group.
Mission: Facebook's mission is to connect people and bring the world closer
together.
The mission of Facebook Live, from my perspective, is to enable individuals to
engage ( message), discuss, and learn in real-time with their role models,
influencers, and others to discover and establish stronger connections.
High-level user flows and important metrics.
Creators: Find the feature, plan a live event, and publish it. Advertise for the
event, start it, interact with the audience (active users, messages in chats,
reactions), and then broadcast the live event footage ( check the views, likes,
and comments).
Measure the average duration of the event, The average length of
attendance at the event
Consumers: Learn more about the event. Register for the event, Interactions
with the host and other guests, Time spent at the event, meet individuals with
similar interests, and plan to attend a similar event in the future.
Advertisers: The percent of click-through on the items advertised in the live
video.
Prioritized final metrics:

The number of events created per week - health check of the feature being
adopted
The number of users who have attended the events- engagement
Time spent on these events- this will help with measuring the stickiness of
the feature
Tradeoff:
Increased time spent on social media platforms has a social stigma, but one
method to solve this is by educating people on being able to meet in real-time with
influencers, role models, professors, social cause leaders/groups, and other groups
with similar interests. It is a larger concept than simply going on Facebook to like
photos/ posts; it is a medium for learning and building deeper connections.
Newbie/upcoming creators may be discouraged if their event does not get a large
number of attendees. We may assist these users by linking them with
groups/events on similar issues and assisting them in building their connections in
order to acquire more traction. We could also share tips to have a successful event.

45. Diagnose an issue with TikTok's usage decline.


My approach (attempted before watching the video):
Confirm my understanding of TikTok (short format video content app)
Define usage?
Articulate out the structure:
Clarifying questions
Look at the internal reasons
Explore external reasons
Summarize
Clarifying questions:
1. App availability in regions: any countries where we have stopped operating?
2. Have the aggregate number of app installs dropped?
3. How do the following metrics look?

    1. # videos uploaded / day / user
    2. # sessions / day / user
    3. Amount of time spent on the app / day / user
4. Is the decline gradual?
Exploring internal reasons:
1. Specific to Android or iOS? Is the decline more stark in any specific app
version?
2. How is the app performance looking? Latency, app stability issues, # support
tickets, etc.
3. Any changes to the algorithm? (Maybe the videos shown are not as relevant
to users as they were before.)
4. Any important changes to the UI or overall UX which might be affecting the
user viewership or upload experience?
5. Any changes in the policy for creators?
Exploring external reasons:
1. Changes in regulations in the countries of operation
2. How is the public sentiment towards the app/brand? Any negative press
recently, or during the time of usage decline?
3. Have we looked at various forums of our users (viewers and creators, both)?
What kind of discussions are happening there?
4. Any new players/competitors in the market?
5. Have our competitors introduced a new feature which is viral or attracting
much attention?
6. Have our competitors launched a marketing campaign recently?
46. Imagine you were in charge of Facebook Watch, what metric would
you want to measure?
1. Clarify:

    1. Facebook Watch offers users video content including news, sports, live
    stream shopping, gaming, etc. It works on both desktop and
    mobile. It's a fairly recent feature.
2. Mission:
    1. Facebook's overall mission is to connect people and help them build
    communities, and Facebook Watch can help Facebook achieve this by
    keeping users up to date with local news, or connecting them to people
    with common interests through live stream gaming. Through video
    content, Facebook Watch helps users develop a deeper connection with
    their peers and communities.
    2. Given that Facebook Watch is a fairly recent product, I would like to focus
    on Retention. I don't want to focus on Acquisition and Activation because
    Facebook is an established product that's used by billions of users. Since
    Facebook Watch is one of the main features on Facebook, it's relatively
    frictionless and easy to get people to try. On the other hand, there are so
    many video streaming services users can choose from, and Facebook
    Watch needs to prove a unique value proposition so that users keep
    coming back. So in my opinion, Retention is the key metric to optimize for.
3. Main Stakeholders:
    1. Publishers - established publisher partners that make content such as
    news channels and shows
    2. Creators - any Facebook users that want to upload video content or
    stream live
    3. Consumers - Facebook users that watch videos on Facebook
    4. With the focus on retention, out of the 3 stakeholders here, I would
    prioritize consumers. I think consumers are the most important
    stakeholders because their support and viewership can drive more
    content creation on the platform. Would you agree? (Agreed)
4. North Star Metric:
    1. Retention Curve - the % of users that stay on Facebook Watch over time
Follow-up: If you needed another metric, what would you choose?
    Every metric has its own data constraints. While this retention curve gives
    me a high-level understanding of user retention, it doesn't tell me why
    users decide to stay or why they decide to leave. I need more metrics and
    qualitative data to understand how we should improve Facebook Watch.
    Metrics such as viewership across different video categories or NPS can
    provide these important details.
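The retention curve named as the north star above is straightforward to compute from cohort data; a minimal sketch, with made-up numbers rather than real Facebook Watch data:

```python
# Users from one signup cohort still active in each week after first
# using the product (illustrative numbers, not real data).
cohort_size = 10_000
active_by_week = [10_000, 4_500, 3_200, 2_800, 2_700, 2_650]  # weeks 0..5

retention_curve = [active / cohort_size for active in active_by_week]

# A curve that flattens (here around 27%) suggests a retained core of
# users; one that keeps decaying toward zero suggests the product has
# not proven a lasting value proposition.
flattened = (retention_curve[-2] - retention_curve[-1]) < 0.01
```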

Follow-up: If you were introducing ads, how would you decide between pre-roll and
mid-roll ads?
1. Clarify how pre-roll vs. mid-roll ads work.
2. Talk about the pros and cons of each option:
    1. Pre-roll ads force all users to view the ad before the video. They
    help drive ad impressions but potentially hinder viewership.
    2. Mid-roll ads display ads in the middle of the video content. Users
    might drop off from the video before the ads, which might impact ad
    slots on the platform. Showing mid-roll ads may also create a
    disruptive user experience.
3. Identify metrics that would move and the trade-offs:
    1. Viewership metrics: retention curve, avg # of videos watched per user
    2. Ads metrics: ad impressions, revenue, advertiser retention
4. Talk about setting up an A/B test:
    1. Control group with no pre-roll or mid-roll ads
    2. A group to test pre-roll ads
    3. A group to test mid-roll ads
    4. The impact would likely vary based on content type. For live
    stream videos, consumers might not want to stay for a mid-roll ad, as
    opposed to shows. It would be more accurate to segment the A/B test
    by content type.
    5. Factors such as ad duration and ad placement will affect the test
    results. We should ensure those factors are controlled during the
    experiment, and we should also get prior metrics to understand what
    ads work best.
5. Talk about decision making:
    1. The results from the A/B test will tell us how pre-roll and mid-roll ads
    impact viewership and advertising respectively, and we should
    leverage this information to make the right decision at the time. If we
    want to focus more on revenue, we can choose the path that leads to
    higher revenue. In the long term, we should always optimize for user
    experience, but ultimately the decision depends on the company's
    strategic goal at the moment.
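To make the A/B comparison concrete, the pre-roll and mid-roll arms could be compared with a standard two-proportion z-test on, say, video completion rate; the counts below are hypothetical, not real experiment data:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: video completion in the pre-roll arm vs. the
# mid-roll arm (10,000 viewers each).
z, p = two_proportion_z(6_200, 10_000, 5_800, 10_000)
significant = p < 0.05
```

In practice the same test would be run per content-type segment, as noted above, since a pooled result can hide opposite effects on live streams and shows.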
47. If you were the PM for Google drive, what metrics would you care
about?

1. Acquisition
    1. New sign-ups for the Google Drive service
    2. Google Drive app installs
2. Activation
    1. Number of users with at least one file since sign-up
    2. Paid Google Drive users
3. Engagement
    1. DAU, MAU
    2. Sessions, frequency
    3. Time spent
    4. Files accessed per session
    5. Task completion - sharing a file, setting up automatic backup, etc.
4. Retention
    1. % repeat users
    2. % users churning out
    3. Google Drive app uninstalls
5. Referral
    1. Inviting other users for collaboration
    2. NPS
6. Negative
    1. Errors
    2. Accounts with illegal files
    3. Privacy issues
    4. Reported bugs
7. Monetization
    1. Total revenue
    2. ARPU
    3. LTV of a customer
    4. Conversion rate
48. How do you define success for Yelp reviews?
Success for Yelp reviews (assuming these are standard reviews posted by
customers for a restaurant, bar, or cafe listed on Yelp, with a 5-star rating
that users can enter and free-form text).

Selected metrics should span the entire funnel across adoption, engagement,
quality, and overall impact for Yelp.
1. Adoption metrics: total # of reviews written, avg reviews / listing
2. Engagement metrics: # reviews being scanned, marked as useful, etc.
3. Quality metrics: % of disputed reviews, % reviews by engaged members
(elite badge, or >3 Yelp friends), % reviews with check-ins, pictures, or
other verifications.
And then there are third-order metrics that track ripple effects for the business.
We know that Yelp is almost synonymous with reviews for users, and reviews are a
core part of the value prop. We want to be able to track the business impact of the
review feature for Yelp's business:
4. Overall impact to Yelp: # new users attributed to an incremental Yelp review,
# orders attributed to an incremental Yelp review
49. During sign-up, users are required to upload a profile picture on
Facebook. Should we remove this requirement?
We are trying to decide whether or not to include a step to upload a profile picture
in the onboarding flow.
Why does Facebook have a profile picture? What value does it add?
User value - connections: having a picture on your profile helps drive
friending and follows, because users can put a face to the name. They
can also recognize people they may not immediately know from a name (or a
common name) and want to friend them.
Business value -
Integrity: making users add a profile picture can help weed out fake
accounts

Engagement / retention: when a user is new to Facebook, they likely don't
have any posts or albums: things that other users can view and interact
with. A profile picture offers that first profile post that can jump-start
interactions. These are super important for a new user, because they drive
notifications and bring the user back to the platform to view the reactions.
They also enable making connections, which leads to content to show the
user and notify them about.
Test a set of hypotheses.
Uploading a profile picture at time of sign up increases new user retention (1
week)
Setting up test
Control group: no profile picture
Variant 1: optional profile picture step
Variant 2: required profile picture step
All new users are considered for treatment. No specific targeting outside of new
users entering the flow.
Metrics I’d want to consider:
# of completed onboarding flows / # of started (probably want the entire
funnel here to debug if we see a low completion rate)
# of profile pic uploads (so I can tell the difference between control (0),
optional, and required)
# of friends made in 1 week
# of interactions initiated or received in 1 week
# active days since joined
# of users retaining after 1 week
Time spent per user
How would I determine what to launch?

Ideally, we would launch the treatment that has the largest gain in 1-week
retention. If none of the variants increase 1-week retention and all are
neutral, we should consider still launching the variant that shows increases in
friends and interactions made, given this helps improve a lot of Facebook
features and could lead to longer-term retention.
The main trade-off we would want to consider is drop-offs in the onboarding
process. We know adding this additional friction (the profile pic step) will result in
more users abandoning the flow, thus not becoming FB users. I think a good
way to decide if this is an OK trade-off is to make sure we are not dropping
overall new, retained users on the platform. It is likely we will see a drop in
newly created accounts, but some of the accounts that didn't have a profile
picture are likely to churn quickly and not actually convert to retained new
users. So, as long as retained new users is neutral (and hopefully
increasing), this is fine. I'd look at retained users at 1 week, 2 weeks, and 1
month.
One note: if we see that requiring a profile picture is highly effective at driving
long-term retention, we can explore research to understand the barriers to
uploading a picture during the flow (i.e., do they not have access to good old
photos? Is the background they are currently in not ideal for a selfie upload?).
After we have the key barriers, we can add features to the profile picture
step to increase usage and conversions/uploads.
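The launch decision described above (retention first, friends/interactions as the tie-breaker) can be expressed as a small decision rule; the variant metric values below are invented for illustration:

```python
# Hypothetical 1-week results per onboarding variant (not real data).
variants = {
    "control":  {"retention_1w": 0.40, "friends_1w": 2.1},
    "optional": {"retention_1w": 0.40, "friends_1w": 2.6},
    "required": {"retention_1w": 0.40, "friends_1w": 3.0},
}

retentions = [v["retention_1w"] for v in variants.values()]
retention_is_flat = max(retentions) - min(retentions) < 0.005

if retention_is_flat:
    # Retention is neutral across variants: fall back to friends made.
    winner = max(variants, key=lambda name: variants[name]["friends_1w"])
else:
    winner = max(variants, key=lambda name: variants[name]["retention_1w"])
```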
50. How would you set a quarterly goal for Stripe Atlas?
I'd start by trying to understand the high level north star for Stripe Atlas with
respect to Stripe's mission of increasing the GDP of the internet.
Example north star: Increase successfully created startups MoM, YoY
I'd ask the interviewer to confirm my understanding on the following: Atlas allows
entrepreneurs to easily launch their startups within a matter of only a few days
from signup to receiving a tax ID.
I would use this understanding to identify the input metrics that feed the
north star.
Basic Atlas funnel understanding and metrics:

93
How much of the addressable market are we reaching? If high, I wouldn't
worry about this. If it's on the low end, there's possibly a goal to take on
increasing customer awareness and engagement with Atlas.
Out of applications created, what percent are officially submitted? If this
percentage is high, I would not focus efforts here. If it is on the low end, this
is probably ripe for digging into dropoff in the application process steps and
setting a goal for reducing dropoff.
From the point at which applications are submitted, what percent
successfully receive a tax ID? If this percentage is high, I would not focus
efforts here. If it is on the low end, it's possible that we need to do more to
better empower customers with the right tools to reach a successfully
created company.
Of the above basic inputs toward successfully created startups, I would want to
know which contributes the most to the north star. To find out, we should draw
this out as a full funnel and identify the possible drop-off points. Where we
have the most drop-off and the most opportunity for impact should be the goal(s)
set for Atlas.

51. You're the PM for Coinbase and DAU is down by 10%, what do you
do to find out why?
Part 1 - Initial diagnostic questions
Question: To start, I want to confirm that this is actually an issue. Have we changed
any logging related to DAU in past couple of months?
Answer: No
Question: Ok, now I'm wondering if this could be a seasonal issue. In the past 6
months, have we seen a similar trend where DAU is down by 10%?
Answer: No

Question: Ok, to zoom out for a second, let's think about any macro-level factors
that may have impacted DAU. Is there any negative public sentiment with the
company?
Answer: No
Question: Ok, let's think about competitors then. Do we know if any of them have
done anything to impact our usage (i.e. promotional campaigns, new features,
etc.)?
Answer: No
Part 2 - Deeper dive on different types of errors
Question: Ok, so let's move onto different errors. Let's start with network errors.
Do we know if there's been an outage with a specific carrier (i.e. Verizon)?
Answer: No
Question: Ok, what about any errors specific to hardware (i.e. Apple, Samsung,
etc.) or OS (i.e. iOS/Android)?
Answer: No
Question: All right, what about app-specific errors. Are there any bugs? Or
correlation between app version and users that are no longer active, who normally
were?
Answer: No
Question: Ok, what about any live experiments. Is there an experiment that may be
affecting a group of users and thusly DAU?
Answer: No
Question: How about server-side? Do we see any sign in the logs that an endpoint
is 500'ing?

Answer: No
Question: Ok, lastly I am interested if there's any correlation between users who
are no longer active and their specific demographics. For example, is the decrease
in usage related to a specific geography, age, etc?
Answer: Yes, it looks like most of the users who have "dropped off" are based out
of Brazil. And, looking at the news, it looks like they have a nationwide outage
there. They expect to be back up in a couple of days.

52. What metrics would you pick as True North for Facebook Story?
First, I want to look at how Facebook Stories overlaps with Facebook's mission. I
can see pretty instantly that Stories stays true to Facebook's mission of bringing
the world together and connecting friends and families.
I want to make sure I understand the product journey. Facebook Stories allows
users to post videos and photos, while also being able to add filters, music, and
stickers. Users are also able to comment on other stories and react to them.
Does that sound correct? Interviewer: Yes
Next, I want to understand where Stories is in its product lifecycle. I believe this is
something we launched within the past few years, so it is still pretty new. We may
still be looking at adoption while also looking at engagement.
Since Facebook Stories has been out for a few years now, I would like to assume we
have spent some time on adoption metrics.
Can I assume that? Interviewer: Up to you
Since we are looking for a true North Star metric, and a metric to keep the product
healthy, I believe we should look more into engagement. If users do not engage
with Facebook Stories, we will not have a healthy product, regardless of whether
users adopted it at launch.
Next I want to look at the user journey.

There are two high-level users that we know of:
Creators and Viewers
Creator User Journey:
The goal for this user is to be able to post a 24-hour video or photo to Facebook
Stories. They may start by taking a video or picture, or uploading an existing one.
Next, this user may want to add filters, music, and stickers to make their media
more entertaining.
Viewer User Journey:
The goal for this user is to be able to view Facebook stories and react or comment.
This user will start by viewing a friend's or an influencer's story. Next, they have the
option to comment, share, or react to the video.
If we go back to our goal, which is engagement, I want to break down the parts of
each user's journey that would have the most impact on engagement.
Creator:
Avg # of stories created /mo
Avg # of comment responses /mo
Tradeoff - This metric would be great to track to see if creators are
engaging with viewers who comment on their posts. However, I don't
think this would be our North Star for viewing the product's health.
Viewer:
Avg # of views /mo
Avg # of comments /mo
Tradeoff - Not all users comment on stories that are posted. I think we
should still keep an eye on this metric, but I don't think it has the weight
to be the North Star.
Avg # of reactions /mo

Tradeoff - Some users may not choose to react, so this metric can
possibly miss out on some great insight.
I believe our North Star metric based on our engagement goals should be:
Creator:
Avg # of stories created /mo
Viewer:
Avg # of views /mo
By monitoring these North Star metrics each month, we can maintain Facebook
Stories' health.
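The creator-side North Star above can be sketched as a small aggregation over story-creation events. This is a hedged illustration; the event shape (creator_id, month) and the function name are assumptions, not Facebook's actual schema:

```python
from collections import defaultdict

def avg_stories_per_creator(creation_events):
    """Average # of stories created per active creator, by month.
    creation_events: list of (creator_id, month) records."""
    per_month = defaultdict(lambda: defaultdict(int))
    for creator_id, month in creation_events:
        per_month[month][creator_id] += 1  # stories per creator per month
    return {m: sum(counts.values()) / len(counts)
            for m, counts in per_month.items()}

events = [("a", "2024-01"), ("a", "2024-01"), ("b", "2024-01"), ("a", "2024-02")]
print(avg_stories_per_creator(events))  # {'2024-01': 1.5, '2024-02': 1.0}
```

The viewer-side metric (avg # of views /mo) would follow the same pattern over view events.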

53. If you had to create a dashboard for Netflix with only 3 metrics,
what would they be?
First, I would like to ask some clarifying questions. Are we talking about a business
metrics dashboard or a health metrics dashboard (like response time)? - It is up to you
Do I understand right that Netflix is a subscription-based video streaming service
for viewers? I know that they also create original content for Netflix, but that is out
of the scope of this question. - Right
My approach will be to: define the business goal from Netflix's mission; break down
the actions taken by users at each stage of the customer journey; set metrics for
each user/stage; and define the 3 main metrics. Does that sound like a reasonable
approach? - Yes
The mission of Netflix is to entertain the world. It is a mature product, but in
streaming services they are second: YouTube wins on audience, but not on
subscriptions. So I think their goal is to engage more and more people, and to earn
money to invest in services that engage more people on the platform.
User Actions: Acquisition & Activation
sign up for the free period
scroll recommendations
search for something
Engagement/Retention

watch video/trailer, recommendations, collections
create lists
download
subscribe for new comings
come back next days
open notifications
Conversion
subscribe for the paid version
I want to pause here just for a second to see if you have any questions before I
move on to the next section, which is listing the metrics. Considering those
actions, we can list several essential metrics:
- Cost per acquisition: reflects how easy it is for us to expand
- Time spent on the platform per user/day: this metric tells us how successful we
are in entertaining users
- # Active users (who watch videos/trailers, recommendations, and collections,
create lists, download, or subscribe to new releases)
- Time between sessions: reflects how engaged our users are and helps us induce
more frequent use (for example, we can track it against new content)
- Conversion rate: reflects how well users understood the value of the service
during the complimentary subscription period
- LTV: highly connected with the duration of the subscription, so by increasing this
metric we reduce the number of unsubscriptions (associated with a decrease in
engagement or high costs)
- Number or share of paid subscriptions: a straightforward revenue metric
- Health metrics (errors, response time, avg quality of viewed content): I want to
track them, but I don't think they are North Star candidates. Our North Star metric
should reflect whether something in health isn't good
Does it make sense?
Now I'll start to drill down to the 3 essential metrics based on how they tie back to
our business goal. Since Netflix is a mature product, my primary concern is not
with acquisition but with users' satisfaction, engagement, and monetization.
So, first of all, I want to track time spent on the platform per user/day, because it
best reflects user engagement for a video product. Second, I want to track
# active users per day. Thus, I follow how we engage people at the scale

of the world. And the third one is LTV. It reflects both engagement and
monetization, which drives R&D. To summarize, my three most important metrics
are time spent on the platform per user/day, # active users per day, and LTV.
Tradeoffs: Time spent on the platform is not always an accurate metric; for
example, users can fall asleep or walk away from the TV player. Over-optimizing it
can earn us a reputation as a time-wasting service. LTV can be raised quickly by
increasing the fee, or by optimizing the service only for countries with high
salaries, which is not aligned with the mission of entertaining the world.
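The three-metric dashboard above can be sketched in a few lines. Everything here is illustrative: the function name, data shapes, and especially the LTV formula (monthly fee x expected tenure) are deliberate simplifications for the interview context, not Netflix's actual definitions.

```python
def netflix_dashboard(day_sessions, subscribers):
    """day_sessions: list of (user_id, minutes_watched) for one day.
    subscribers: {user_id: (monthly_fee, expected_tenure_months)}."""
    active_users = {u for u, _ in day_sessions}
    dau = len(active_users)
    avg_minutes = sum(m for _, m in day_sessions) / dau if dau else 0.0
    avg_ltv = (sum(fee * tenure for fee, tenure in subscribers.values())
               / len(subscribers)) if subscribers else 0.0
    return {"dau": dau, "avg_minutes_per_user": avg_minutes, "avg_ltv": avg_ltv}

sessions = [(1, 60), (1, 30), (2, 45)]
subs = {1: (15.0, 24), 2: (10.0, 12)}
print(netflix_dashboard(sessions, subs))
# {'dau': 2, 'avg_minutes_per_user': 67.5, 'avg_ltv': 240.0}
```

Note how the "asleep in front of the TV" tradeoff shows up directly: minutes_watched is counted whether or not anyone is actually watching.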

54. Improve Buy and Sell Groups in Facebook

Mission: to help users buy and sell products from groups. If we think about the
value here, users want to join groups that sell/buy products of interest (young
moms or vinyl), or location of interest, or trust, or some other factor - either from
other users or SMBs.
User Actions:
create groups, join groups
active in groups - make posts to sell, or show interest in buying (liking,
commenting, sharing, or ideally transacting)
deeper engagement - sharing, saving, searching more based on what they
saw - potentially Marketplace
Metrics:
Growth (groups) - #groups recommendations views, #group searches, #new join
requests, #new groups created, #new group users
Reach (groups) - #groups, #active groups, group size distribution (pct 25, pct 50,
pct 90), #users in groups
Engagement - #sell posts created, #interactions with these posts, #feed views,
#timespent in feed-> no, #DAU in these groups

Deeper engagement - value of a Buy/Sell group interaction to FB - in terms of
engagement, interacts with other posts, or invites more users, or creates more
groups. other HVAs that this user takes.
To align with our goal of helping users buy/sell products, we would want
transactions, but we may not have that data. Furthermore, transactions do not
capture the browse experience, which is also value-adding and important to users.
In that scenario, we will take indicators of engagement as signs that users are
finding the product valuable - some weighted metric of interactions with posts
(which would cover all health metrics for the funnel, including DAUs).
Caveat/tradeoffs - a weighted metric may be complicated to communicate to
outside teams and difficult to benchmark against other products. We can weight
the components, or just keep them individually separate.
I will also track red-flag metrics: #DAU, to make sure there are no bots; #active
groups, to make sure there is no drop in one segment; and a check that it isn't a
small set of users or groups driving the activity, with negative sentiment or bots
alienating a large set of users. Together, these tell me the product is healthy.
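The weighted engagement metric described above could be sketched like this. The weights and action names are placeholder assumptions; a real team would calibrate them against downstream signals such as completed transactions or retention, which is part of why the answer flags it as hard to communicate and benchmark.

```python
# Hypothetical weights for Buy/Sell group interactions (illustrative only).
WEIGHTS = {"sell_post": 3.0, "comment": 2.0, "share": 2.5, "like": 1.0}

def weighted_engagement(interactions):
    """interactions: list of (user_id, action) events inside Buy/Sell groups.
    Returns a single composite engagement score."""
    return sum(WEIGHTS.get(action, 0.0) for _, action in interactions)

events = [(1, "sell_post"), (2, "comment"), (3, "like"), (3, "like")]
print(weighted_engagement(events))  # 7.0
```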

55. You are the PM for Uber when it was only a ride hailing app. Would
you expand into Uber Eats? (Trade-off Question)
I would structure my answer in the following way:
What's the situational context?
Mission of Uber
Making transportation easy by creating a platform for connecting drivers
and riders
Key market trends: In top tier cities there are players in the food delivery
market and the size of the pie is growing fast

What would feed into the decision-making? The new platform should be able to
connect customers and restaurants. The key things needed to make this platform
successful are customers (people ordering in), restaurants, tech to connect these
two stakeholders, and operations to enable food deliveries. Besides these 4 key
players, there could be additional government regulations around delivering food,
such as safety and hygiene standards.
Positive Indicators:
Customers: Uber already has a sizeable active userbase and is an
established brand. This userbase is already tech-savvy, convenience-
driven, and relatively better off in terms of affordability, and the order-in
culture among the present earning generation is on the rise.
Tech : They already have a well established product which optimizes for
users across the two sides of the marketplace i.e. riders and drivers,
taking real time requests, routing via shortest distances,
starting/stopping the trips with live tracking, feedback etc. Adding food
delivery capabilities to existing tech should be relatively simple.
Ops: They have a significant driver base and automobile fleet, so they
have ops capability at scale - no other significant human skills are
needed, except perhaps simple additional training on tech usage, for
picking up and dropping off food.
Negative Indicators:
Restaurants: They do NOT have tie-ups with restaurants for food delivery,
while there are existing players with already established connections.
From the pov of restaurants, it should not be difficult to onboard
another food delivery platform because it would only help them get
more orders.
From the pov of Uber, they might need to compete on
commission/delivery rates with these restaurants, since there are
other players also offering the same services to the restaurants.
What would be the key hypothesis? H1: It might not be profitable to invest in food
delivery because the additional cost of delivering food would be higher than the
commissions Uber can make from restaurants or customers.
Unlikely, because the majority of the fixed cost comes from repurposing existing
tech & ops infra, which puts Uber at an advantage compared to new players
trying to enter the market

H2: Since the parent business is not a food delivery company, appending
transportation to food delivery might need adherence to related regulations
This is a solvable issue
H3: Drivers might not want to sign up for food deliveries because taking riders is
more profitable than taking delivery orders.
Analyzing capacity utilization rates for existing drivers, gathering data on
comparative hourly earnings of competitors' delivery staff, and surveying Uber's
drivers with additional capacity on whether they would opt in to food delivery
services are some ways to gauge whether this hypothesis could be true.
If this hypothesis is true, since the food delivery market size is increasing and
Uber is well funded at this time, adding more drivers specifically for food
delivery services could be an option too.
Proposal: Since all of the plausible issues from the negative indicators can be
worked around and the positive signals are significantly stronger, I would
recommend expanding into Uber Eats.

56. Pretend you are a PM at Lyft, and requests were down 5% in a given
city. Why?
In order to answer this question, I am going to do the following:
Ask some clarifying questions
Go through some major external and internal possible causes
Hypothesize and test/valid
Conclusion
1. Ask clarifying questions
Any particular market? Is it a US market or a new market with less than 12-18
months of full operation?
US market

What do you mean by a given city? - A metropolitan area with plenty of public
transportation (subway, bus, etc.), a large city with limited transportation
(only bus), or a small city?
I will let you decide.
Okay, a US major metropolitan area with plenty of public transportation
Did the requests drop in any particular category or subcategory - Overall,
economy, extra seats, luxury, regular Lyft or Lyft XL?
Overall, no significant difference in any category/subcategory
Did the requests drop over any particular period? - Week over week, month,
quarter? Gradual or sudden change?
Gradual over a month period
2. Go through some possible internal and external causes
Internal
System downtime, major bug, data collection issue
Since it’s a gradual drop, this is highly unlikely. I would love to validate
this with the engineers and data analysts.
App / mobile web site Issue
I would check the following:
Understand where the requests are coming from - mobile app v
mobile web site
If the majority of the requests are coming from the mobile app, I
would check if there is any major update in iOS or Android
operating system and new handset adoption (new iPhone,
Samsung, MI, etc.)
Conversion and retention issue
If there is a major variance in conversion (requests -> paid rides), this
could impact the request drop. Again, I would check with a data analyst.
If there is any major spike in customer complaints or the number of
unsatisfied customers, this could also impact requests. Check
prior-week customer complaints, ride ratings, and feedback.
User segments
I would like to understand better the user segments to isolate the
possible cause of the requests
User requests ride for office commuting, ride to/from the airport,
school, ride from tourist areas, sporting events, concerts, etc.
External

Seasonality
Once I have a strong understanding of user segments request
breakdown, I would like to understand better if it’s driven by
seasonality.
For example, if it’s office commuting or school - vacation breaks tend
to happen over the mid to late summertime or year-end during the
holiday period. In particular, many parents take vacation breaks
based on their children’s school year calendar.
Notable Events
I would also check if there is any particular event that might have any
major impacts on ridership - CV19, corporate layoffs, protests, etc.
Competition
Are there any new direct competitors entering the market? Are they
offering discounts to get early adopters to try?
Any major competitors like Uber offering a promotion or promoting a
subscription product?
Any indirect competitors trying to make a strategic move to gain
market share? Taxis, bikes, scooters, Tesla Mobility?
Regulation
Did the local municipal government issue any new policy change over
regulations related to ride-sharing?
For example, airport pick up and drop off rules could also impact
requests
3. Hypothesize & Test/Valid
Competition - Uber is promoting its subscription product, Ride Pass to target
daily commuters
Check if Uber is running any promotion during that period. For example,
they could offer the first-month fee is waived in order to try out the Ride
Pass
I would check the daily commuters figure to validate this. Riders often
mark the destination locations as Home and Office.
Seasonality - Vacation breaks
Check if the daily commutes and schools account for the request drop
off
I would check if the school is open or close during that period. Vacation
breaks explain the seasonality drop off as expected.

4. Conclusion
If the drop is attributed to key user segments such as office and/or school
commuters, I want to understand whether it's related to seasonality, as
highlighted above. In addition, I want to learn more about other factors, such as
competitors running promotions to persuade riders, either to gain market share
or to get them to try premium subscription products. If this is the case, then this
analysis is actionable for the company to take the appropriate steps.
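The segment breakdown this answer relies on (commuters vs. airport vs. events, etc.) can be sketched as a period-over-period comparison. Segment names and numbers below are purely illustrative assumptions; the point is that a flat overall 5% drop often concentrates in one segment.

```python
def request_change_by_segment(prev_period, curr_period):
    """prev_period/curr_period: {segment: request_count}.
    Returns the percentage change per segment."""
    return {seg: round(100.0 * (curr_period.get(seg, 0) - n) / n, 1)
            for seg, n in prev_period.items() if n}

prev = {"office_commute": 1000, "airport": 400, "nightlife": 600}
curr = {"office_commute": 880, "airport": 402, "nightlife": 610}
print(request_change_by_segment(prev, curr))
# {'office_commute': -12.0, 'airport': 0.5, 'nightlife': 1.7}
```

A commute-only drop like this would point toward seasonality (vacation breaks) or a commuter-targeted competitor promotion, which are exactly the two hypotheses tested above.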
57. Determine if we should launch a feature of Instagram Store that lets
people buy stuff without logging into Instagram.
Business goal of the product: Instagram Shop helps influencers share products
that they love so their audience can benefit from them.
Mission of the company: Instagram - to capture and share the world's moments.
Why are we evaluating this? (Assume: to reduce friction and establish Instagram
as a legitimate social channel to sell, competing with TikTok.)
Over what time frame? 6 months. US or worldwide? Worldwide.
Did we make any commitments? (Nope.)
Who are all the stakeholders in this ecosystem? Influencers, buyers, store
backends (Shopify/WooCommerce), website builders.
Who are our competitors? TikTok, Facebook (cannibalization), Snapchat.
What are the implications of each choice in the short, medium, and long term?
Short term: Pros - friction-free, more revenue. Cons - less data (but mitigated by
FB acquiring information anyway), less ad revenue (because of attribution
difficulties), traffic impact, follower count impact, future purchase impact. Cost of
data loss: quantify the amount of data lost because a user didn't log in, against LTV.
Medium term: Pros - non-users of Instagram can also buy, and might convert or be
resurrected via recommendation-algorithm FOMO. Cons - history and long-term
profile building are hard, and there is a further reason not to resurrect (when you
can buy without logging in).
Long term: Pros - establish a new habit of social selling. Cons - may tip the
balance toward more influencers (but people don't create Instagram accounts to
shop).
Can you boost the pros? Instagram checkout for non-US customers? Can you
blunt the cons? Incentivize login: faster checkout, returns, purchase tracking.
Ecosystem impacts: regulations, environment, public opinion, investors.
Ultimate decision: Yes, as long as we can still make attribution (by device, IP,
credit card name details, etc.) and can evaluate the data loss from
non-attribution. Test this out (force login for a small % and see the impacts), and
incentivize login with faster checkout, returns, and purchase tracking.

58. How would you measure success for Facebook Groups?
1) Product Just to make sure we are on the same page. Facebook Groups is a
functionality/feature inside Facebook App where people can create and interact
within specific groups. These groups are managed by admins, who define the
rules. Is my understanding correct? 2) Clarification
Do we want to measure success for a specific user segment?
Are we looking at specific types of groups? What about location?
What about devices, have you any in mind, web, android, iOS?
I will assume that everything is broad, without focusing on any specifics.
3) Goals Well, Meta's goal is to bring and connect people together all over the
world. I believe that Groups is at this goal's core, because it allows people to talk
about and interact within groups in which they share that same interest. Ultimately,
it fosters connection among people. For the purpose of this exercise, I would like to
focus on "engagement" to measure the success of Facebook Groups, since it's a
mature product and we want to make sure people are getting benefits from it and
improving the community at the same time. Does it make sense?
4) Metrics Brainstorming about some metrics, I could come up with the following
ones:
Number of (reactions, comments, shares, etc) per user over time
Avg session time of users per number of groups they participate in
% clicks groups notifications vs others
# or % friends expansion after a user joins a group
number of posts on average per group over week
number of reactions on average per group over week
number of events on average created per group over week
what are the trending topics among groups
number of unfollow groups per user over week
5) Prioritization So, given this set of metrics, I would like to select

North Star: as a north star metric and aligned with Meta's goal, I would want
to measure and follow the average session time of users per number of
groups they participate in. This would tell us if participating in groups
generates more engagement by spending more time on the platform.
Guardrail: I would like to keep an eye on "Number of (reactions, comments,
shares, etc) per user over time", because we do not want people to stop
interacting with each other. So, this number should increase or be stable over
time.
6) Trade-Offs Some trade-offs from those metrics that come to mind right now:
Participating in a group does not necessarily mean you are taking any action
towards it. That's the reason measuring reactions is important.
Maybe there are some people who are part of groups to avoid the newsfeed,
thus they might spend less time on the platform by participating in many
groups.
There are some rules in groups, such as limited posts per day, etc. This might
also need to be taken into account on those metrics.
7) Summary Wrapping all up, we need to measure Facebook Groups success
without any specifics. As a goal aligned with Meta's, I am focusing on engagement.
After brainstorming some metrics, I selected one metric as North Star, avg session
time, and one to keep a closer eye, number of reactions. In the end, I talked about
some trade-offs.
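The chosen North Star, average session time per number of groups a user participates in, can be sketched as a simple bucketing. The data shape here is an assumption for illustration; the idea is to test whether group membership correlates with time spent on the platform.

```python
from collections import defaultdict

def session_time_by_group_count(users):
    """users: list of (avg_session_minutes, n_groups_joined) per user.
    Buckets users by group-membership count and averages session time."""
    buckets = defaultdict(list)
    for minutes, n_groups in users:
        buckets[n_groups].append(minutes)
    return {n: sum(v) / len(v) for n, v in sorted(buckets.items())}

users = [(10.0, 0), (20.0, 2), (30.0, 2), (45.0, 5)]
print(session_time_by_group_count(users))  # {0: 10.0, 2: 25.0, 5: 45.0}
```

An upward trend across buckets would support the claim that Groups participation drives engagement; the trade-off noted above (group members avoiding the newsfeed) would show up as a flat or inverted trend.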

59. How would you define success for FB Pages?


Clarifying Question:
1. Are we talking about mission-oriented success or business-oriented
success? [Let's assume mission-oriented]
Let's start by talking about where Pages fit into the mission of Facebook. Then
we'll cover some relevant metrics, identify which are most important for measuring
the success of Pages, and then talk about some trade-offs as part of a final
recommendation.

Mission: Facebook's mission is to empower people to build community and to bring
people closer together. Pages are very much aligned with this top-level mission as
they are a great way to create a community around a shared interest.
Metrics: There are several perspectives that we can consider metrics for that
speak to success for Pages. Let's brainstorm a bit:
1. Creators/Administrators:
a. # of new Pages
b. # of new Posts (by type: text, image, video, etc.) per day
c. # of connected FB products used by the Page for its fans (e.g. Events,
Jobs, Watch Parties)
d. # of invites responded to (= invite sent + recipient joined)
e. CTR on promoted actions (e.g. "Buy," "Donate," "Subscribe," "Book
appointment," etc.)
f. Conversion rate (%) on promoted actions
g. # of Likes per Page
h. Frequency of new posts (does not speak to quality, must consider in
context with fan response/engagement)
2. Fans:
a. # of Likes
b. # of direct engagements with posts ("on-Page" engagement; comments,
reactions)
c. # of engagements with posts in News Feed ("off-Page" engagement;
reactions, shares)
d. # of engagements with non-post activity (this is the flip side of 1c)
e. # of mentions of the Page in posts and # of mentions of Page-related
hashtags ("off-Page" engagement)
f. # of successful Friend invites to a Page (= invite + Like)
g. # of Friend Requests sent to other fans
h. # of messages sent between fans through Messenger
... As far as a primary metrics go, let's first think about which of these are most
relevant to mission-oriented success. In particular, let's focus on 1b, 1c, 2a, 2b, 2c,
2d, 2g, 2h. Together, these all speak to actions that indicate Pages are working as
intended to generate engagement and strengthen connections between a Page
and its fans. You could categorize this bundle, which reflects the Page's ability to

engage its users with content and activate its users to engage with each other as
"Page Effectiveness."
We should also bear in mind a few things:
1. Privacy concerns with respect to tracking messages and the volume of
communication between members through Facebook's messaging channels
2. The response to the question changes quite a bit if we were to focus on
business-oriented success. This would significantly prioritize items 1e and 1f.
3. We always want to prioritize quality, which is why engagement metrics of fans
(e.g. 2b, 2c, 2d) are just as important as content metrics (e.g. 1b).
In short: Successful Pages on Facebook should reflect increasing "Page
Effectiveness" metrics, which ultimately bubble up into increased time-on-
site/time-in-app and DAU.
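The "Page Effectiveness" bundle described above could be rolled into one composite score. The weights and stat keys here are illustrative assumptions, not Facebook's actual metric; the answer itself notes this bundle mixes content signals (1b) with fan engagement signals (2b, 2c, 2h).

```python
def page_effectiveness(stats):
    """stats: {metric_name: value} for one Page over a period.
    Returns a hypothetical weighted composite of the bundled metrics."""
    weights = {
        "posts_per_day": 1.0,         # content supply (1b)
        "on_page_engagements": 2.0,   # comments/reactions on the Page (2b)
        "feed_engagements": 1.5,      # reactions/shares in News Feed (2c)
        "fan_to_fan_messages": 2.5,   # fans connecting with each other (2h)
    }
    return sum(w * stats.get(k, 0) for k, w in weights.items())

print(page_effectiveness({"posts_per_day": 4, "on_page_engagements": 10,
                          "feed_engagements": 6, "fan_to_fan_messages": 2}))
# 4*1.0 + 10*2.0 + 6*1.5 + 2*2.5 = 38.0
```

Weighting fan-to-fan actions highest mirrors the mission framing: Pages succeed when they activate fans to engage with each other, not just with content.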

60. How would you measure success for YouTube search?


The goal of YT search is to enable quick access to a video of interest to the user.
So success for YT search means the user finds videos of interest and actually
clicks on one or more search results to watch them. It also means reduced time
spent searching overall, with highly relevant results shown for the search
string.
Actions performed by users are:
Goes to the YT site or Mobile app
Clicks on the Search window to type
Enters search string and presses enter
Browses the list, scrolls down till she finds something of interest
Clicks and views the video and any other related videos that are
recommended later
Metrics:
Number of searches done
Average time spent searching before clicking on at least 1 video to watch
Number of scrolls done before clicking on a video to watch (relevance)
Average video watch time after a search per month
Average number of videos watched after a search per month
Northstar metric(s):
Monthly average number of minutes watched from search results
Monthly average number of searches that resulted in at least 1 video view
click
Evaluate tradeoffs: Counter metrics:
Bounce rate from search
DAU, MAU should not have changed much to cause increase/decrease in num
of searches.
Bot-farm-based searches - monthly fake accounts detected on YT
Guardrail metrics:
NPS score, Account closure rate, YT revenue
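The second North Star candidate, the share of searches that result in at least one video view, can be sketched directly. The event shape is an assumption for illustration:

```python
def search_success_rate(search_events):
    """search_events: list of (query, n_result_videos_clicked) per search.
    Returns the fraction of searches with at least one video view."""
    if not search_events:
        return 0.0
    successful = sum(1 for _, clicks in search_events if clicks > 0)
    return successful / len(search_events)

events = [("lo-fi mix", 1), ("python tutorial", 2), ("asdfgh", 0), ("news", 1)]
print(search_success_rate(events))  # 0.75
```

Tracked alongside the bounce-rate counter metric above, a rising success rate with stable search volume is a reasonable signal of improving relevance.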
64. How would you measure the success of discovery features on
Instagram?
Clarifying Questions:
What are Discovery features on IG?
My assumed answer: posts from creators you don't already follow, shown
on the Discovery tab of IG. The posts consist of video clips, IGTV
posts, and pics. Users can browse through an endless scroll of posts, and
can filter by category (e.g. sports, comedy, etc.) on the Discovery tab.
Exclude IGTV and IG Shopping, which can also be accessed from the
Discovery tab, and just focus on discovery of users you don't already
follow, as shown upon first view of the Discovery tab.
Timeframe of product? Is it before launch, or as it is today? The reason I ask
is because if it’s today, since IG already has a meaningful amount of DAU, I’d
focus on engagement rather than growth.
My assumed answer: Discovery as it is launched today

Approach:
High-level product discussion
FB mission: give people power to build community and bring world closer
together
IG drives directly towards that mission because it’s a visual-first
communication platform for people to discover new things that are
interesting to them (Discovery), and interact with those that have the
same interests. In addition, visual-first communication (especially
Stories) is more emotionally meaningful and less friction to use, driving
towards more likelihood of people interacting with each other.
Define northstar vision statement for Discovery on IG
Quickly Discover new and fun people, places, and things that you can
indulge in and share with others
Actions that users can take on Discovery tab
User
Click into post
Follow creator of post
Comment on post
Like post
Share post
Scroll
Prioritize actions that drive towards vision statement and mission of FB
Clicking into post
I choose this because by clicking into a post, we know that we’ve at
least served up something interesting enough to the user to click into
it
Following content creator
If the user starts following the content creator, it’s a strong measure
that we’ve helped the user discover something they really like and
identify with
Comment/Liking (interaction with post)
If the user has actually interacted with the post found of Discovery
tab, then that drives towards the possibility of connecting further
with others about the post or subject content of post
Sharing post

If the user has actually shared the post with someone else, they have
the intent of actually driving further, deeper interaction with someone
else
High-level metrics to track - health metric
DAU
Natural frequency of Discovery tab on IG is daily
Measure of prioritized actions
Clicking into post (fourth priority)
Avg. # of posts clicked on Discovery tab per DAU
Following content creator (third priority)
Avg. # of follows found on Discovery tab per DAU
Comment/Liking (interaction with post) (second priority)
Avg. # of comments/likes on posts clicked on Discovery tab per DAU
Sharing post (first priority)
Avg. # of shares on posts clicked on Discovery tab per DAU
If i had to dwindle down list of metrics, i’d limit it to the following
DAU
Want to make sure people are actually using product regularly and
still engaged
Sharing post
Drives directly to mission of FB, and is a good assessment of
engagement too
Considerations
Is the sharing harmful?
When users share posts, they may be doing it in a hurtful way (e.g.
sharing a meme about being fat to someone you make fun of for
being fat)
Can control for this by tracking the reporting of share posts in DMs
Cannibalization to other products
If users are spending a lot of time on Discovery tab, it may take away
from other features (Stories, Shopping, etc.)
Can control for this by tracking overall time spent on other features
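The prioritized per-DAU metrics above can be sketched as one aggregation over Discovery-tab events. Action names and the event shape are illustrative assumptions:

```python
from collections import Counter

def discovery_metrics_per_dau(events):
    """events: list of (user_id, action) from the Discovery tab, where
    action is assumed to be: click, follow, comment, like, or share."""
    dau = len({user for user, _ in events})  # distinct users active that day
    counts = Counter(action for _, action in events)
    return {f"{a}_per_dau": counts[a] / dau
            for a in ("click", "follow", "comment", "like", "share")}

events = [(1, "click"), (1, "like"), (2, "click"), (2, "share")]
print(discovery_metrics_per_dau(events))
# {'click_per_dau': 1.0, 'follow_per_dau': 0.0, 'comment_per_dau': 0.0,
#  'like_per_dau': 0.5, 'share_per_dau': 0.5}
```

In the prioritization above, shares_per_dau would be watched most closely, with the harmful-sharing and cannibalization counters tracked alongside it.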
65. Live stream participation in US is down but in Spain increasing over
time. What would you do?
Clarifying questions:

1. What are we defining as "participation"? Does it include creating a live
stream? Or is it just viewing a stream? Engaging (commenting/reacting) with
other viewers during a live stream? ["Participation" = watching, commenting,
or reacting to a live stream as it's being broadcast, not during replays]
2. Are these live streams from major broadcasters expecting large audiences?
Or individuals making use of live broadcasting from their phones? [Across the
board]
3. What is the drop and rise in participation and over what period of time? [10%
drop (US) in the last 30 days, 5% rise (Spain) in the last 30 days]
4. Are the numbers reflecting users who identify as American and Spanish? Or
are they reflecting users who are connecting to Facebook while they are
physically in America or Spain? [The latter]
Let's start by ruling out specific technical issues, any in-app issues, and try to
understand if these changes are from particular groups of users. Afterward, let's
talk about how we're defining success through metrics before discussing
recommendations and trade-offs.
1. Let's talk through some technical conditions potentially impacting US and
Spanish participants.
a. Are they using a particular technical platform? (desktop/mobile/TV;
iOS/Android; Chrome/Edge/Safari; tvOS/Android TV;
macOS/Windows/Linux)
b. Are they using a particular mobile provider or ISP?
c. Are they using a particular version of the Facebook app?
[Nothing unusual stands out regarding these segmentations]
2. OK, how about the application itself?
a. Have we made any region-specific changes to the app or to Facebook
Live or its component elements (video player, commenting, reactions)
which might appear in the US but not in Spain (or vice versa)? [We're
always making changes to the app; no experiments running that should
have a significant impact here]
3. What about the way in which users arrive at Facebook Live?
   a. Have we made any modifications to algorithms that would impact the
   likelihood of someone engaging with Live video in the US or Spain? For
   instance, has a Live announcement been given more or less weight in
   News Feed? [We're always optimizing the algorithm, but no major
   changes pushed]
   b. Has the way in which users find Facebook Live or live streams on the
   website or mobile app changed? [No recent updates]
4. Is there a particular demographic or psychographic that is participating less
in the US and more in Spain? [No, across the board]
5. What about certain groups of users?
   a. Users who are new to Facebook (< 30 days)
   b. Users who have been long-time (5+ years) Facebook users
   c. Frequent (daily) Facebook users
   d. Infrequent (monthly) Facebook users
   e. How often were the users who dropped off in the US participating in
   Facebook Live (DAU/MAU)?
   [These data are available but do not suggest a particular group to hone in on]
6. Have there been any major holidays or identifiable events (expected or
unexpected) in the US or in Spain which could explain the change in
participation? [Nothing that we can identify; consider seasonal impacts in
traffic/engagement]
Based on this information, there doesn't seem to be a whole lot connecting the US
drop to the Spanish rise in Live participation.
What does stand out to me is the definition of the metric itself. Have we changed
the definition of the metric lately? [No].
This is interesting. We seem to define "participation" to involve "active" actions
(e.g. commenting, reacting, sharing, etc.) as well as the "passive" action of simply
watching a video being streamed by someone. If we look to what's going on with
online video right now at the industry level, new services with high-quality content
such as HBO Max and Disney+ have put greater demands on the attentional
appetite of Americans, reducing available time for other activities like participating
in Facebook Live events. Recent domestic events have also increased the appetite
for TV news consumption; this often takes the form of beginning a search for
content directly from a news source (e.g. CNN, MSNBC, Fox News, etc.) rather
than from a social media platform. These two issues don't seem to be impacting
Facebook users in Spain, so we can perhaps expect Spanish Facebook Live
participation to continue on an upward trajectory, but it is reasonable to expect
that these issues would trigger a dip in US participation under its current definition.
It seems like we should reconsider the definition of Participation to include only
active engagement rather than passive engagement to better capture how well
Facebook Live is serving as an interactive and social video experience tied to
Facebook's mission. This, of course, would leave out how well "just the content" on
Facebook Live is performing, so additional metrics like Peak Concurrent Views, %
of Stream Completed (e.g. does someone watch the whole stream or just a small
portion?), and Total Minutes Streamed could be utilized to get a better
understanding of performance. We can then perform additional studies over time
to better assess how active engagement drives continued passive engagement,
growth of Live, and other network effects on the platform.
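To make the proposed active/passive split concrete, here is a minimal Python sketch; the event-log shape and action names are hypothetical (not Facebook's actual schema) and illustrate how the share of active participants could be computed:

```python
from collections import defaultdict

# Hypothetical event log: (user_id, action) tuples during live streams.
# "watch" is passive; commenting/reacting/sharing are active.
ACTIVE_ACTIONS = {"comment", "react", "share"}

def participation_rates(events):
    """Return (active_rate, passive_only_rate) over unique participants."""
    actions_by_user = defaultdict(set)
    for user_id, action in events:
        actions_by_user[user_id].add(action)
    participants = len(actions_by_user)
    if participants == 0:
        return 0.0, 0.0
    active = sum(1 for a in actions_by_user.values() if a & ACTIVE_ACTIONS)
    return active / participants, (participants - active) / participants

events = [("u1", "watch"), ("u1", "comment"), ("u2", "watch"), ("u3", "react")]
print(participation_rates(events))  # roughly (0.67, 0.33): 2 of 3 users active
```

Tracking these two rates separately would show whether a drop like the US one is coming from passive viewers or from the active, "social" core of the product.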
Lastly, many Facebook users rely on the platform to receive news, but not
necessarily in the form of Facebook Live. It may be interesting for the company to
investigate how it could partner with different local, national, and international TV
News broadcasters (or reputable newsrooms with live video capabilities) to
increase 'share of mind' for Facebook Live vis-à-vis consuming live TV news.

66. If you were the PM for the fundraising product on FB, what goals
would you set?
Product Description
To start out with, I would like to make sure I understand how the product functions.
To my understanding, the fundraising product is for:
Individuals and non-profits to set up fundraisers to receive donations either
to support a specific cause / initiative or to ask for help on a big expense like
a medical bill
Individuals within these networks are able to donate
At times, Facebook will charge a small fee to help facilitate the money
transfers

Facebook Mission
Facebook's goal is to create and connect communities.
Product Mission
This aligns with the overarching product goal of creating a way for people in your
network to help / support causes that are most important to you. Fundraisers
create a venue for individuals to connect by showcasing different causes they may
care about.
Users
When we want to set goals for a product, we need to take a moment to consider
the users within this ecosystem. I'd like to focus on the use case where the
fundraiser is created to support a non-profit, as I believe this is the most common
use case of setting up a fundraiser.
Non-profit: listed on FB and accepting donations
Fundraiser creator:
Locate a cause / non-profit
Set a target goal, write a message
Publish and post fundraiser on wall
Donator
1. Sees fundraiser in newsfeed, notifications
2a. Comment / share / react to the fundraiser
2b. Click to donate
3. See suggested donation amount
4. Enter payment information
Metrics
Now that we have a good idea of the users within the fundraising product, I'd like
to take a look at what metrics would be most appropriate to support our goal of
creating a platform to support causes that are important to people in your network.
Creator

# of fundraisers created / user
% of fundraisers successfully published
avg. # donators / fundraiser
% of target goal achieved
avg. # of fundraisers created / user / year (are they creating multiple?)
Donator
# of engagement / fundraiser
% successfully donate
# of repeat donators
North star metric
For my north star metric, I would use # of donators / fundraiser, as it best
supports our goal of understanding whether people in your network are helping to
support your cause. The drawback of this metric is that it isn't necessarily a
metric to optimize for; after all, we shouldn't necessarily be optimizing for many
small donations. Thus, a secondary metric worth considering is % of
target goal achieved, which would better ensure that people are reaching their
target goals.
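As a quick illustration, the two headline metrics could be computed from fundraiser records like so (a sketch with made-up field names and data):

```python
# Sketch of the two headline metrics, using hypothetical fundraiser records.
def fundraiser_metrics(fundraisers):
    """fundraisers: list of dicts with 'donators', 'raised', 'target'."""
    n = len(fundraisers)
    avg_donators = sum(f["donators"] for f in fundraisers) / n
    # Cap each fundraiser at 100% so overshooters don't mask underperformers.
    pct_goal = [min(f["raised"] / f["target"], 1.0) for f in fundraisers]
    avg_pct_goal = sum(pct_goal) / n
    return avg_donators, avg_pct_goal

data = [
    {"donators": 10, "raised": 500, "target": 1000},   # 50% of goal
    {"donators": 30, "raised": 1200, "target": 1000},  # capped at 100%
]
print(fundraiser_metrics(data))  # (20.0, 0.75)
```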

67. Set the goals/metrics for Instagram Store. What happens if we
launch the store and purchases are up, but engagement is down in
other parts of the app?
Clarifying question: Is there a goal that we have in mind?
FB's mission: empower people to create community and bring the world closer
together. IG Shops, as I understand it, is the marketplace on IG, where users can
view items listed for sale by creators/businesses and also search for specific
items or categories. Users can buy directly through Shops instead of being
navigated to the original business website.
IG Shop goals: Businesses continue to face the challenges of selling online as
they shift their business models during the COVID-19 crisis. Quoting the launch
announcement: "Today, we're announcing a new shopping experience to keep
businesses going and make it easy for people to shop the things they'll love."
Competitors: There are a few competitors in the space, e.g., Amazon, Wayfair.
Users Segments
1. Buyers
2. Sellers
Goal: Given the phase/maturity of the product, I would choose Engagement as the
goal.
Metrics:
1. Avg. no. of checkouts per week per user (North Star) [Of all the tracking
metrics outlined here, I would pick this as the north star for this product,
because it signals adoption/activation on the seller side as well as awareness,
engagement, and conversion (the complete funnel) on the buyer side.]
2. Avg. DAU on Shops (e.g., consumer side: added to wishlist, clicked to view
something; seller side: no. of items added per day)
3. DAU/WAU
4. Avg. time spent in the shopping experience
5. Avg. no. of carts abandoned per user
6. Avg. time spent on the Shop platform
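The DAU/WAU stickiness ratio above can be sketched from daily active-user sets; the data below is illustrative, not real Shops numbers:

```python
# Stickiness sketch: average DAU over the week divided by WAU.
def dau_wau(daily_active_sets):
    """daily_active_sets: one set of user ids per day of the week."""
    wau = len(set().union(*daily_active_sets))
    avg_dau = sum(len(day) for day in daily_active_sets) / len(daily_active_sets)
    return avg_dau / wau if wau else 0.0

week = [{"a", "b"}, {"a"}, {"a", "c"}, {"a"}, {"b"}, {"a"}, {"a", "b"}]
print(round(dau_wau(week), 2))  # 0.48 — users visit roughly half the days
```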
Guardrail:
1. DAU (as the average can be misleading)
2. 7-day retention
3. Quality of the product sold
Tradeoff:
1. Cannibalization of other parts of FB
What happens if we launch the store and purchases are up, but engagement is
down in other parts of the app? What would you do? Clarifying questions:
1. When we say down, how much is it down? Is it significant?
2. When you say engagement is down, is there a particular type of engagement
that is down?
3. Is there a particular demographic that is seeing this drop?
4. Any bug reports?
Let's say the interviewer says that overall engagement is down on Insta as people
are spending more time on Shops.
Decision: FB's mission is to empower people to create community and bring the
world closer together, so engagement is very key. If there is a significant drop in
engagement in other parts of the app, in the long term it might affect FB's core
mission and even our ability to retain users on our platform. I would work with the
team to see if there is a particular demographic seeing this huge drop, and how we
can potentially create a win-win situation, e.g., nudge users to engage with other
features of the app via notifications, etc.

68. You're a PM on the Onboarding team. You have to increase conversion of
the onboarding funnel by 15% in 3 months.
I broke down my answer into Understanding the goal, investigating the current
state and its challenges, prioritizing opportunities, brainstorming solutions, and
selecting solutions.
To understand the goal and current state, I asked questions about the onboarding
funnel and the conversion percentages. They have clear metrics to share, with the
specific conversion rates for each stage.
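With per-stage conversion rates in hand, the overall funnel conversion is the product of the stage rates, which makes the biggest-drop stage easy to spot. A sketch with illustrative numbers (not the interviewer's actual data):

```python
# Hypothetical onboarding funnel: users remaining at each stage.
stages = {"signup": 1000, "profile": 800, "subscription": 320, "done": 288}

counts = list(stages.values())
names = list(stages)[:-1]
# Conversion from each stage to the next one.
stage_rates = {name: counts[i + 1] / counts[i] for i, name in enumerate(names)}
overall = counts[-1] / counts[0]
worst = min(stage_rates, key=stage_rates.get)
target = overall * 1.15  # the required 15% lift on overall conversion

print(stage_rates)        # the drop after "profile" (subscription screen) stands out
print(worst, overall, target)
```

In this made-up funnel the subscription screen converts at only 40%, so even a modest improvement there moves the overall rate far more than polishing the 80-90% stages would.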
I then identified the subscription screen in the flow as one of the key opportunities,
as most of the drop happened there. I asked for more customer research
information, and he said research pointed towards customers not understanding
the value of each plan and that the "free plan" option was relatively hidden.
Throughout the case, the interviewer always asked for my assumptions before
answering anything. After identifying the lack of a clear alternative to paying for a
subscription as the foremost opportunity, I proposed two solutions: A/B testing
the subscription screen to improve conversion, and experimenting with a
contextual subscription screen that wouldn't be shown during onboarding.

The interviewer was able to answer most of my questions, but it was pretty hard to
get a read on what he was thinking. After I proposed the solution, he asked a few
times if I was sure there was nothing else to add. I'm unsure if I forgot something or
if he was testing my decisiveness, to which I answered this would be my approach
to such a problem.

69. Imagine you are PM for a video conferencing app. Define success
metrics for this app
Clarifying questions:
What do we mean by a video conferencing app? Zoom or Messenger? A:
More like Zoom
Is the app for B2B users or C2C users? A: Dealer's choice
What stage is this app at? Early-stage or has been in the market for a while?
A: Current Player
This question will help decide if you will focus on Adoption or
Engagement and retention...
Is there a specific market or user group we need to focus on? A: Dealer's
choice
Is it a paid or free service? A: Both
The structure I would like to follow to find the right north star metrics:
Talk about the product to understand better how it is used and the job to be
done for users
Define a goal that we would like to focus on
To achieve the goal, we need to identify the key actions carried out by
users.
Now we have the possible actions, we can define metrics to measure and
monitor
Lastly, evaluate and discuss counter metrics and tradeoffs
Summarize
The Product: Video conferencing is a virtual service that allows two or more users
to converse utilizing audio and video features. Users can also share files and
screens, invite other users to join the call, and chat.
The goal that I want to focus on is Engagement and Retention:
Business Goal:
Keep users using the product more often (Retention)
Maintain or increase the number of paid users (Retention)
Product goals:
Service Uptime above 90% (Engagement)
Actions:
The user journey can be divided into two sides:
Common actions:
open cam
mute mic
Share screen
share a file
send a message in chat
Inviter:
Send invite
Starts the call
Ends the call
kick out users from the call
invite others during the call
Invitee:
Join a call via a link
invite others during the call
drop out of a call

rejoin
Metrics:
For Retention:
Number of invites sent across all users daily or weekly
Average number of calls per user per week (WAU)
Number of the paid users (Monthly)
Counter Metric: Number of cancellations
Counter Metric: number of net new paid customers
For Engagement:
Create a dashboard with the following:
Overall platform uptime to understand if service interruptions are
frequent impacting the user experience
Aggregate score with various weights for the different features
associated with the engagement actions above
Average call time per user per week
Average time on camera per user per session (not a robust metric,
but you want to know whether users are using the service as a phone or
for video conferencing)
Evaluate: Uptime is essential to measure the product's health and ensure we don't
have significant service disruptions. Engagement of users with the platform is
essential to monitor: are they using the features for better communication and
interactions? For retention, keep our eyes on the number of active users on a
weekly basis, and set up guardrail or counter metrics to avoid being blindsided by
an over-optimized North Star Metric.

70. What is the value of a Google Photos user to Google?
Think about 3 types of users:
User who only uses Google Photos and no other Google product. What is the value
that Google gets out of such a user?

User who uses all the Google products, including Photos, but does not spend a
penny on Google Photos (think of storage fees, buying photo albums, etc.). How do
Photos increase the value provided by that user in conjunction with other Google
products?
User who spends money on Google Photos.
The value provided by the first user is through their photo data, which could be
used to improve ML processes inside Google.
The value provided by the second user is again through data, but this time Google
has complete data on this user. Google knows deeply about their likes, dislikes,
family, friends, etc. through Google Photos, which gives a deeper insight into a
person's life. In conjunction with Gmail, Google Maps, etc., the data on each user is
very rich and can be used for better advertisement targeting, the main revenue
model of Google.
For the third kind of user analyze what makes them buy storage or photo albums
and how can Google retain them.
71. You are the PM of OpenTable's booking experience. How do you solve
their biggest issue?
Clarify the Issue: Begin by asking for more context about the specific issue they're
referring to. This demonstrates your ability to gather information and make sure
you have a clear understanding of the problem at hand.
Assumption: the biggest issue is a decrease in the north star metric: # of bookings
Prioritize and Define Goals: Once you have a clear understanding of the issue,
outline the goals you'd like to achieve in solving it. Increasing booking conversions,
reducing friction in the booking process
Research and Data Analysis: Explain how you would conduct further research to
gather insights and data that would inform your solution. This could include user
surveys, A/B testing, user behavior analysis, and competitor research.
Walk through the user journey:

1. Customer opens OpenTable: are there any issues with loading the site or the
mobile app?
2. Customer intent: wants to go out to eat
   a. Restaurant isn't available on the platform
   b. Restaurant is no longer in business but was not removed from the
   platform, impacting # of bookings via false bookings
   c. Not able to view specific details. Example: dress code, kid-friendly, vegan
   options, affordability
   d. Restaurant search is unavailable
   e. Not enough filters for the user to find what they're looking for
3. Customer action: make a booking
   a. No times available; restaurant availability isn't updated on OpenTable
   b. User error: makes a booking for a restaurant in a different city
4. Customer receives a booking confirmation
   a. Backend issue with sending out booking confirmations
   b. User error: inputs wrong personal details
Propose Solutions: Offer a few potential solutions to the identified issue. These
solutions should be based on your research and analysis. For instance, you might
consider streamlining the booking process, improving the search and filter
functionality, enhancing the mobile user experience, or introducing personalized
recommendations.
1. Provide a better recommendation experience for the end user based on their
last reservations
2. Automatically fill out personal details
Evaluate Impact and Feasibility: Discuss the potential impact of each solution on
addressing the issue and achieving the defined goals. Consider factors like
implementation complexity, resources required, and potential risks.
Iterative Approach: Highlight that product management is an iterative process.
Explain how you would start with a minimum viable solution (MVP), gather
feedback from users, and iterate based on their responses and further data
analysis.

Measuring Success: Describe how you would measure the success of your
solution. Metrics might include increased booking conversion rates, improved user
engagement, lower customer support inquiries related to booking issues, and
positive user feedback.
Long-Term Vision: Discuss your long-term vision for OpenTable's booking
experience. How would you continue to iterate and innovate to stay ahead of
industry trends and evolving user needs?

72. You are the PM for Alexa. Decide which metrics you would prioritize.

Clarifying questions -
Alexa HW, Alexa SW, or a particular Alexa feature?
Assumption - Pick any. Picking HW due to relevance with my background.
Any particular pain point that we are trying to address, or any specific goal
that I should be aware of?
Assumption - None. Mine to pick.
The goal for Amazon is to be the most customer-centric company. To extend
the same vision: with Alexa HW, our goal is to provide the best customer
experience possible.
Restating the problem statement with assumptions: I am the PM for Alexa HW. My
goal is to ensure Alexa HW is delivering on the customer promise. To enable the
same, what metrics would I prioritize to get the right signals and drive
changes/improvements?
There are multiple buckets of metrics that may be valuable. I would bucket them
into the following categories -
Adoption/ Increasing user base -
New HW purchases per month
Type of HW being purchased
Engagement. How well the existing users like the product -
Accuracy of input (Speech accuracy)
Comments/ratings on the Alexa product webpage

Weekly HW/Quality bugs reported
Daily activity per unit -
activity could be - asking a question, making a request, or running an
app.
Monetization/revenue increase -
New HW purchases
Enabling SW purchases - Amazon shopping etc.
I would prioritize Engagement as the highest category given the stated vision.
Things to watch out for -
Daily activity may encompass negative behavior/emotion associated with
Alexa experience -
Need to ensure they are categorized/filtered appropriately.
Keeping user privacy front and center while capturing user engagement
information.
73. You are the PM for Alexa. Decide which metrics you would prioritize.
Clarifying questions: How many metrics are we looking at? Who is the audience for
these metrics? Is this specific to a geography or region?
Let me start by clarifying what Alexa does and the goals of Amazon and Alexa as a
whole. Alexa is Amazon's virtual assistant, which can be used on a host of different
devices to perform different tasks, from placing orders to playing music to
checking the weather. It is integrated with different Amazon products and can be
connected to different smart devices as well. Is my understanding right?
Amazon aims to be the world's most customer-centric company, and while I don't
know what the mission of Alexa is, I'm going to assume that it is to make
customers' lives easier. For a virtual assistant, the main metric that I would want to
track is basically how useful the product is. If customers are finding the product
useful, then they would use it quite extensively, and that is indicative of the
success of the product. So if I had to choose one north star metric, it would be the
number of successful actions that customers take using Alexa.
Now, to arrive at more detailed metrics, let's look at all the use cases and what
metrics are important for customers in each of them:

1. Understanding the customer: Alexa should be able to correctly understand
customers, without them having to repeat themselves. Metric: speech
recognition accuracy.
2. Alexa should be capable of supporting the actions that customers want to
take. This involves 2 parts:
   a. Integrated actions controlled by Amazon (ordering, Amazon Music,
   weather, etc.)
   b. Third-party integrations, so Alexa can help with things like controlling
   smart devices or taking actions on other websites.
   Here I would track the total number of actions provided in each of the
   categories. I would also track the number of actions that customers wanted
   to take but could not.
3. Customers should not be frustrated: DAU, one-time users who never adopted
the technology, and customers who churned are important metrics.
4. I would also like to track usage compared to competitors, and metrics like the
number of actions supported compared to competitors, as these are important
factors for onboarding new customers and could lead to churn.
Finally, while a lot of data is collected and analyzed to improve the product, I would
want to track the number of cases where there was a negative impact on
customers' privacy, as this is critical to customer satisfaction.

75. As the owner of a burger shop, how would you determine the cause
of a sales decline?
The first step is to clarify what the interviewer means by a sales decline. Questions
I would ask here include:
1. Did sales drop suddenly or gradually?
2. What is the magnitude of the sales drop?
3. Are sales down for a specific product?
4. Are sales down for a specific channel (drive-thru, walk-in, etc.)?
This series of questions is to make sure we are on the same page when we say
'sales are down.' We may not know the why yet, but the what should be easy to
align on. Otherwise, how would we know to ask, 'why are sales down?'!

From there, I would use a series of flow-chart types of questions to narrow down
the possible causes and let the interviewer guide us down the right path.
For example, let's suppose sales dropped suddenly, are down 30% versus one
week ago, are down equally across all products, and are down uniformly across all
channels.
The critical insight here is the sudden drop from one week ago. Now we can
explore this with another series of questions.
1. Perhaps in the right context, sales haven't declined, and this is a non-issue
   a. Are there any anomalies about our sales one week ago, such as a movie
   promotion that brought in more customers?
   b. Are there any anomalies about the recent period, such as a holiday break
   that reduced traffic?
2. Are orders also down 30%?
   a. Did we change our branding?
   b. Do we have a new competitor nearby?
   c. Did an existing competitor launch a new product?
   d. Do customers have the same access to the restaurant?
      i. Did our hours change?
      ii. Were we closed for unexpected reasons?
      iii. Did weather issues prevent customers from reaching the store?
      iv. Was there a significant public opinion shift on burgers due to a news
      article?
   e. Do we have any technical issues taking orders?
   f. Is a particular method of payment having issues?
3. Is the average order value down?
   a. Do we have inventory issues?
      i. Have we run out of an essential ingredient, such as buns, fries,
      drinks, etc.?
      ii. Are there any issues with our suppliers?
   b. Did we change pricing recently?
   c. Did we change our promotional strategy in favor of lower-priced
   products?
After determining this issue, I would make recommendations on how to move
forward. Some problems are out of our control, such as weather, but others are
actionable.
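The "are orders down?" and "is average order value down?" branches amount to a revenue decomposition: sales = # of orders × average order value. A sketch comparing two periods (made-up numbers):

```python
def decompose(before_orders, before_aov, after_orders, after_aov):
    """Split a sales change into order-count and AOV effects (as fractions)."""
    sales_change = (after_orders * after_aov) / (before_orders * before_aov) - 1
    orders_change = after_orders / before_orders - 1
    aov_change = after_aov / before_aov - 1
    return sales_change, orders_change, aov_change

# Sales down ~30%, driven entirely by fewer orders; AOV flat:
result = decompose(1000, 8.0, 700, 8.0)
print([round(x, 3) for x in result])  # [-0.3, -0.3, 0.0]
```

Seeing which factor moved tells you which branch of the question tree to pursue: fewer orders points to traffic/access issues, while a lower AOV points to pricing, promotions, or inventory.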

Say, for example, the issue was that we couldn't accept credit cards for several
hours one day because the store's internet was down. Is there a backup network
we can use, such as cellular or satellite? Could we take down customers' card
details and process them later? Could we offer a discount for other forms of payment?
Could we give away food for free as a goodwill gesture to our customers?
In this type of question, I treat the interviewer as a proxy for our database,
engineers, marketing manager, and designers. For example, asking, 'did we have
any technical issues accepting orders?' would be getting feedback from engineers.
Asking 'is the average order value down?' would be akin to querying the database.
And asking, 'did we shift our branding?' is getting feedback from our designers or
marketing managers on any aesthetic or messaging changes we have made.

80. Facebook Events startup time slowed. What would you do?
Clarify:
Has the metric definition changed for the NSM or for app startup time?
--> No
Are we sure that this is causal and not just a correlation?
E.g., we would need to dive into a few different technical buckets
around region, version, and other features launched --> No
Time period:
Is it over the same time period?--> Yes
Goal:
FB goal:
Empower people to build community and bring world closer together.
Doing so through driving meaningful connections.
The app is the gateway to all other interactions on the platform, and given
the app is where most time is spent, a noticeable startup delay would drive
down sessions per user per day, ultimately driving down interactions in
Events as well.
Do we have data from similar slowdowns in the past to benchmark
against?--> No
Product goal: drive meaningful connections through Events. If the app itself cannot
be opened, this spiffy new feature cannot be accessed.

Establish common goal:
Common goal for app and product is to increase total number of
meaningful interactions per user per day.
Hypothesis:
# of meaningful interactions at the platform level per user per day will
increase: meaningful interactions count things such as likes, shares, and
comments across News Feed, Events, and Friends. We would need to
establish how much of this increase was driven directly and indirectly by
this new Events feature to determine whether this slowdown is
acceptable.
Guardrail:
% of abandoned app opens per day must not increase in a
statistically significant manner.
Secondary:
ARPU decline must not be statistically significant.
Median # of successful app opens per day must not decrease.
Guardrail metrics will help us establish whether the slowdown is "noticeable" from
a user perspective.
Long term, if the slowdown isn't "noticeable" per the guardrails but meaningful
interactions driven by the feature have increased, I would keep the feature;
otherwise I would deprecate it.

81. At Netflix, you notice that about 1M users drop off roughly 6 months
after signing up. What is happening here and how would you deal with
this?
Clarifying questions: 1M users- are these paying users? Daily active users?
Once we know which users, we look at certain areas of initial investigation for
patterns:
1. Dashboards: Is this a flaw in the dashboard, or metrics pipelines not running?
(Assuming not a problem)
2. Geography: Are these users in a certain geolocation? (Ex: China bans the use
of Netflix?)
3. User segments: Are these a certain age group (kids, where parents might
have restricted usage) or first-time users (they watched only one show and
dropped off)?
4. Surfaces: Were they on laptops/TV or some other surface where they
watched the content a few times (buggy experience, broken experience) and
dropped off? Any patterns emerging here?
5. Technical bugs: Were any new experiments rolled out that could have
affected the user experience, or the matching algorithm for first-time users?
6. Content consumption: Do we know if these dropped users had started a
particular show and never returned, or did not start watching anything?
7. Funnel: Where in the funnel did users drop off most? After the payment step?
Of these, if there are any directional responses from the interviewer, I would dig
into that area. Here, I'm assuming no UI changes and no changes to the algorithm;
these users signed up, added credit card information, started/viewed a show,
never returned, and forgot to cancel their subscription. After 6 months they came
and cancelled their subscriptions:
To deal with this situation, there are a couple of ways, depending on why the users
left us:
Dropping off because users don't like the content/have watched majority of shows/
finding nothing interesting:
1. Notify the users to come back (e.g., if they started a show) and give
them multiple recommendations
2. Diverse recommendations: If users don't like the content
recommendations, it is likely that they will not return. Pair ML
recommendations with editorial content recommendations, considering
what's award-winning, popular, and trending, with at least one
recommendation completely outside of the user's preferences.
3. When new Original content is added, let the user know
Dropping off because competition provides more value for money

1. When competition like Hulu or YouTube TV provides more features, like live
video content (even at a higher price) or sports content, and the user can get
more in a single subscription, evaluate which features make sense for
Netflix's strategy
   a. This would be a defensive play and not necessarily align with Netflix's
   mission
2. Netflix's business goal is to provide the best entertainment globally as a
distribution service. If users don't find the content entertaining, they will
leave, so look at users' needs for entertainment.
   a. Social + entertainment = live watch parties
   b. Immersive experience = AR effects
   c. Bring on more types of entertainment shows: interactive shows like the
   Ellen show, or comedies where the user is a part of the reality show.
Ultimately, evaluate whether this is a long-term trend of users dropping off, posing
an inherent risk to the business, or something more transactional with a temporary
effect, like a new competitor on the market.
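To confirm the 6-month pattern in the first place, a cohort view helps: group users by signup date and compute the share still subscribed N months later. A minimal sketch with a hypothetical cohort (months encoded as integers):

```python
def month_n_retention(users, n):
    """users: list of (signup_month, churn_month or None), months as ints.
    Returns the share of users still subscribed n months after signup."""
    retained = sum(1 for signup, churn in users
                   if churn is None or churn - signup > n)
    return retained / len(users)

# Hypothetical cohort: most users churn right around month 6.
cohort = [(0, 6), (0, 6), (0, None), (0, 6), (0, None)]
print([month_n_retention(cohort, n) for n in (3, 6)])  # [1.0, 0.4]
```

A cliff at exactly month 6 across cohorts (rather than a gradual decay) would point to something structural, such as a 6-month promotional price expiring or annual-billing users reviewing their charges.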

82. Venmo is seeing a decrease in users adding their bank accounts.
Why might this be happening?
Clarifying questions:
When we say a decrease in users adding bank accounts, I would first like to
understand how users are making payments within Venmo. I assume they are
either using their credit cards or debit cards?
I would also like to understand why adding bank accounts is integral to Venmo,
since users can already pay with debit and credit cards.
My understanding is that when payments happen over debit card rails, Venmo
pays higher interchange fees; adding bank accounts also reduces losses incurred
from stolen cards, lets us verify the account holder, and gives us the KYC
information.
Interviewer: Something along those lines; also, adding a bank account saves users
the 3% fee incurred for paying friends through a credit card.

I would like to understand if it is a gradual decrease vs sudden decrease.
Interviewer: For the sake of this exercise I would like to say it is a gradual decrease.
First, I would look into the data to identify patterns MoM and WoW. This also rules
out any issue with the reporting of metrics.
1. Segment the data into different buckets (geography, device, platform,
age, new users vs. recurring users, current method of payment)
If we see the drop only for new users, I would look into the
registration journey flow, where the user is able to add a bank account.
If we see the drop in both new and recurring users, we would look into all the
flows to identify the issues (registration, pay to friends, payments, adding in the
profile).
Interviewer: The decrease is throughout, in both new and recurring users.
So let me look into a few things.
1. Check for new changes/bugs introduced in the add-bank flows. I would cross-check
the decrease against the metrics for new users over the same period, and check the
page-failure logs to identify the step where users fail to add their banks.
2. Adding a bank can happen in two ways: instant verification via Plaid, or manual
verification where we send a small deposit to the bank to verify the account. I
would check the logs to rule out any issues with the API integrations with these
third-party providers.
I would look at the # of successes, # of failures and average response times for the API
calls made to Plaid and the bank integrations. Failures would indicate failed
authentication of bank details, and higher response times would suggest that users
abandoned the process because of slowness.

3. Looking into the front-end analytics, I would check the drop-off rate from the
add-bank page. Here I can hypothesize two issues: either users are unable to find
their bank during instant verification, or users don't have their bank account and
routing numbers handy and so don't provide the details.
Interviewer: Yes, the drop-off rate is higher.
Here are a few recommendations, measured by the % of users who add a bank account:
1. If the user is unable to find their bank in the Plaid list during instant
verification, route them to the manual verification page where they can provide the
bank account and routing numbers. Currently we exit back to the main payment-method
page; diverting users to the manual verification page reduces the number of clicks
and drives users to re-attempt adding a bank account.
2. Look for bank-name searches that return no results, and work with the third-party
provider to update the bank list periodically.
3. Introduce the option to scan a blank check during manual verification to capture
the account and routing numbers, helping users who don't remember them.
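The API-log check described above can be sketched as a small script. This is an illustrative sketch only; the log field names, provider labels and sample numbers are assumptions, not Venmo's actual schema.

```python
# Hypothetical sketch: summarizing add-bank API logs per provider.
# Field names ("provider", "success", "latency_ms") are illustrative.
from collections import defaultdict

def summarize_api_calls(logs):
    """Aggregate failure rate and average latency (ms) per provider."""
    stats = defaultdict(lambda: {"calls": 0, "failures": 0, "latency_ms": 0})
    for entry in logs:
        s = stats[entry["provider"]]
        s["calls"] += 1
        s["latency_ms"] += entry["latency_ms"]
        if not entry["success"]:
            s["failures"] += 1
    return {
        provider: {
            "failure_rate": s["failures"] / s["calls"],
            "avg_latency_ms": s["latency_ms"] / s["calls"],
        }
        for provider, s in stats.items()
    }

logs = [
    {"provider": "plaid", "success": True, "latency_ms": 320},
    {"provider": "plaid", "success": False, "latency_ms": 4100},
    {"provider": "bank_micro_deposit", "success": True, "latency_ms": 650},
]
summary = summarize_api_calls(logs)
# summary["plaid"] -> failure_rate 0.5, avg_latency_ms 2210.0
```

A spike in failure rate points at authentication problems with the provider, while a rising average latency supports the hypothesis that users abandon the flow before completing it.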
83. Facebook is considering adding a 7th reaction. How would you
determine whether this is necessary and how would you measure
success?
Lay out the mission of Facebook and how reactions relate to it (FB's mission is
to connect people; the FB app has posts that make up a feed; reactions enable
more engagement with posts).
Describe the benefits of reactions for users (less friction to engage) and for
FB (more user engagement with posts, which helps both viewer and poster, and
improves the newsfeed algo).
Notice that you can evaluate demand for a 7th reaction by looking at posts
with a high ratio of comments to reactions, where many comments are short or
a single word.
Evaluate success by looking at an increase in post responses (reactions +
comments), with a goal of increasing overall user engagement with FB.

84. A super-app like Grab decides to launch a new vertical (perhaps
peer-to-peer car rental). In the first six months, what metrics would you
look at to see if this new vertical looks promising?
A peer-to-peer car rental is a two-sided marketplace with two types of users: 1)
those who want to rent a car (demand), and 2) those looking to rent out their car
(supply). So there are two main challenges for us to solve through our MVP: 1)
the classic critical-mass marketplace problem of ensuring enough demand and supply,
since either side will not find the product useful unless there is enough of the
other group of users, and 2) in the case of peer-to-peer car rental, we are also
trying to change user behaviour (this is hard), because today users rent cars from
Hertz and other car rental companies.
To tackle the first problem, we will have to recruit our supply in areas where the
demand for rental cars is high in general (e.g. Near Airports, Railway Stations etc.)
We probably also have to recruit supply of cars (which have different features /
price points to what current car rentals have - for e.g. cars with baby seats pre-
installed). For the second problem, let's put up display banners at these Airports
and also partner with popular, local Travel Agents in these areas to market our app.
Partnering with Airbnb can also be an option to get demand. We can even throttle
demand (through an invite-only process) until we have acquired the supply side and
brought it to critical mass.
This means that the metrics we will monitor in the first 6 months of launch will also
need to track the above two challenges. So, I'd divide the metrics into Supply and
Demand, and track the funnel of each to see the health of both numbers.
I. User Acquisition:
Demand Funnel
1. # of App Downloads
2. # of Sign-ups/Registrations (% Sign-up Conversion)
Supply Funnel
1. # of Sign-ups/Registrations (% Sign-up Conversion)
2. % Completion/Upload of Inventory (adding details about the car + uploading
pictures)

3. % of Inventory passing Quality Checks (no junk values etc.)
II. Conversion (Critical Actions):
1. # of Rental bookings completed (% drop-offs and at what part of the funnel)
2. # of booking attempts that failed due to lack of supply / # of requests from
under-served areas (in order to scale supply in those areas)
3. # of contacts from customers (support queries and reason for contact)
4. NPS score and feedback (from both supply and demand sides)
III. Upselling/Cross-Selling Potential
1. # of Customers that rented a car and also performed another High-Value
Grab Action in the next 30 days (e.g. ordered food, added money to wallet)
These metrics help you understand if the rental vertical is growing (has PMF) and
also help you understand how this vertical serves the larger Grab business (feeds
into the flywheel). For e.g., can Grab drivers themselves rent out their cars when
unused (during off-peak seasons)?
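The supply and demand funnel metrics above boil down to step-to-step conversion rates. A minimal sketch, assuming we only have ordered step counts (the step names and numbers here are made up for illustration):

```python
def funnel_conversion(step_counts):
    """step_counts: ordered list of (step_name, users).
    Returns each step's conversion relative to the previous step."""
    rates = {}
    for (_, prev), (name, curr) in zip(step_counts, step_counts[1:]):
        rates[name] = curr / prev if prev else 0.0
    return rates

# Hypothetical demand-funnel counts for one month
demand = [("app_downloads", 10000), ("sign_ups", 4000), ("bookings", 800)]
rates = funnel_conversion(demand)
# rates == {"sign_ups": 0.4, "bookings": 0.2}
```

Tracking the same structure for the supply funnel lets you compare where each side of the marketplace leaks.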

85. Facebook Logins are going down. What would you do?
Methodology:
1. Clarify some of the terms in our question
2. List high-level causes by category and gather context, ruling out the ones that
have low probability or are out of scope
3. Drill down into the causes and establish a few hypotheses
4. Investigate and test the hypotheses I have and try to fix the problem
Clarifying questions: Define FB logins. Let's assume these are Facebook Login usages
on 3rd-party apps.
1. Assuming our data is accurate, double-check with the 3rd-party apps that our
understanding is correct, or cross-check the data somehow
2. How is engagement/login behavior on those 3rd-party apps? Is it also going down
to the same extent? If so, then this might not be a concern.
3. How bad is the decrease in logins? Let's assume it is 10%.
4. What is the time frame? Is it MoM, WoW or DoD? Assume MoM.
5. Is this something sudden or steady? Assume steady.

Gathering more context to aid my understanding, using the process of elimination.
The causes most likely fall into these 4 buckets:
1. Technical issues? Slice by platform (updates like iOS 14), slice by carrier,
look into CS tickets in particular categories, and talk to the FB 3rd-party app
point of contact to check if there is anything unusual or an outage. Is the
login API broken? Assume no.
2. Other product updates? Within FB, product updates are unlikely to be the cause
here, as there is little interaction within the FB platform, but we need to
check if FB's own logins are also down. Is these users' FB engagement level
also 10% lower, to the same extent or worse? Assume no. Outside of FB, did
those 3rd-party login systems remove or deprioritize FB Login, or change the
UI in some way? Did they stop aggressively pushing users to log in? Assume no.
3. FB Login updates? Any new release in the past 2 months? Any A/B test running?
Is this decline coming from one type of 3rd-party app or is it global? From a
specific geo location? Or from a cohort of users? Are they existing FB users,
or new users who do not have FB accounts, for whom it might be easier to just
create an account directly on those 3rd-party apps? Any particular user
segment (e.g. existing <25 users)?
4. Assuming it is related to FB Login itself, check the onboarding funnel for
where the user dropped off: did they start to "connect to Facebook" or simply
avoid this option during 3rd-party app login?
It seems like we have the following hypotheses:
1. Trust in the 3rd-party app: the purpose of FB Login is convenience, so it makes
sign-up or login through the 3rd-party apps much faster, and those apps can
cross-reference FB user data. If the user does not trust those 3rd-party apps,
they may not want to share their FB data.
2. Trust in the FB app: due to some recent issues, some users are concerned about
how FB is using their data, so they do not want to use FB Login in a 3rd-party
app. They are afraid FB might serve them more ads because they logged in to,
say, a baby food shopping app.
3. More and more 3rd-party apps use phone authentication for even more
convenience, so FB Login is no longer the fastest onboarding route and is a
less attractive option for users.
4. The user drops off during login: check connectivity and performance decay over
the last month. Maybe FB Login takes too long due to multiple API calls, and
maybe the performance/user experience gets worse over time, especially when
there is no wifi or a low network connection, causing users to drop off.
How would I improve/next steps?
1. Investigate those 3rd-party apps and drill down to a specific category. If the
problem is in the nature of the 3rd-party apps, like a decrease in casino
mobile game apps etc., then it might not be a concern for us. Also, we should
educate those 3rd-party apps on how valuable FB Login is for them because of
the data, and maybe they can incentivize their users to connect to FB later on
in the app, post-onboarding.
2. If it is a lack of trust in the FB brand, inform the PR team and maybe initiate
a campaign, or simply A/B test a banner or a 🔐 icon to inform the user ("we
won't share your data such as XYZ") to provide peace of mind, and see if this
helps.
3. Similar to 1, mention the pros of FB Login and educate 3rd-party apps on how
valuable it will be for a more personalized experience in their apps.
4. Work with the engineering team to improve performance, and champion the
importance of 3rd-party login for FB as a company: understanding user
preference and behavior lets us serve more targeted ads -> revenue.
Recap everything (1-4) and ask the interviewer if there are any other areas we need
to explore further.

86. What goals would you set for Reels recommendation engine?
Clarification: Is this Reels on Instagram or on FB? Which one do you mean? Instagram.
What does this feature cover? The Reels recommendation engine showcases a new video
to the user when tapped. The user can like, comment, add the reel to their story,
remix the reel, save it, report it, or mark it as not interested.
Why is this done? Competition from TikTok: make Reels appealing to the Meta user
base so that Meta doesn't lose users, creators and brands to TikTok.

Mission: Meta's mission is about building communities and giving people the power
to connect with them.
Product goal: this feature brings together users, content creators and brand
advertisers on Meta for an immersive, entertaining, engaging experience.
User segments
Content creators / influencers: star creators, SMB
Content consumers: frequent, casual, low frequency
Advertisers: large, SMB
Where the product is in maturity: Instagram is used by 1B+ users, so the product is
at a mature stage; hence engagement and retention are key from a metrics standpoint.
Reels: rolled out to all users (assumption).
User segment prioritization: picking content consumers, as the product is at a
mature stage and, as mentioned, engagement and retention are key.
Funnel: activation, adoption, engagement, monetization.
Metrics: over a certain duration, engagement metrics are key here: time spent on
Reels, # of reels viewed, % of video viewed.
NSM: average # of reels viewed would be the key NSM.
Guardrail metric: # of reels reported as 'not interested', reported as spam, etc.
87. How would you measure the success of Facebook dating?
Why do we have FB dating? How does it fit within FB's mission of connecting
people and building community?

Research shows that lonely people have worse health outcomes, get sick more
easily, die faster, etc. There are many ways to address loneliness. FB already has
Groups, which connect people based on shared interests/hobbies. FB also has
Messenger and other apps that help you connect with people you already know.
Dating is an important way FB can help people form more intimate connections, and
it is a missing part of the FB portfolio today.
Assumptions
FB Dating launched about 1 year ago. It remains free of charge and ad-free
today, so monetization is not a focus right now.
FB Dating aims at users looking for actual dating, not hooking up. The profile
displays job, education and other interests; this information isn't really
relevant unless you are looking to date.
FB Dating is primarily designed for single people looking to get into an
exclusive relationship, not people looking for a poly relationship.
Goals
FB-level Goals
Engagement boost: people spend a lot of time on different dating apps
(Tinder, Hinge, OKC, CMB, Bumble, Match, each with a different flavor). FB
wants to capture this significant portion of time spent on other apps today,
and does not want to monetize this usage for the time being.
User acquisition: there are some people who do not have, or have deleted, their
FB accounts. Dating gives them a reason to re-activate their FB account.
FB Dating Success Measure
Given the FB-level goals, we want to keep users on FB Dating as long as their
relationship status remains 'single'. User retention (excluding those who got into a
relationship) should be our success measure. Retention boils down to making our
users satisfied with the dating feature. Crucially, optimizing for retention
means optimizing for different things depending on the lifecycle stage of the user.
Goals

What matters for users in dating?
1. Users have enough options to 'swipe right' on
(# of profiles swiped right / total profiles shown)
2. Users get a high enough match rate
(# of matches / # of profiles swiped right)
3. A match leads to a good conversation
(# of matches with more than 3 messages exchanged / total # of matches)
(# of bad convos, as defined by FB Dating guideline violations / total # of
matches)
4. A good portion of matches leads to a meetup (virtual, or real-life meetup
pre-covid)
(# of meetups / # of matches with conversation exchanged)
5. Users have a good experience meeting the match and decide to meet again
6. A meetup eventually leads to an exclusive relationship (relationship status
update), and the user will drop off from FB Dating for some time
As discussed, each metric should be carefully optimized depending on user
lifecycle.
#1 is important to retain users within the first weeks. If the user doesn't see
anyone appealing showing up, they will stop using the product pretty quickly.
#2 is critical to retain users in the first month. If users get a very low match
rate after swiping for a few weeks, they are less likely to continue using FB
Dating.
#4 is important for medium-term retention (month 2 to 3). If users repeatedly get
matched with someone who isn't compatible (interest-wise, personality-wise, or not
looking for the same thing), the match is unlikely to lead to a meetup.
#5 is important in keeping users on FB Dating longer-term (month 4 and beyond).
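The first four ratio metrics above can be computed directly from event counts. A sketch under assumed counts (all field names and numbers are hypothetical, not FB Dating data):

```python
def dating_metrics(counts):
    """Compute the funnel ratios defined in the metrics list above."""
    return {
        "right_swipe_rate": counts["right_swipes"] / counts["profiles_shown"],
        "match_rate": counts["matches"] / counts["right_swipes"],
        "good_convo_rate": counts["matches_3plus_msgs"] / counts["matches"],
        "meetup_rate": counts["meetups"] / counts["matches_with_convo"],
    }

# Illustrative counts for one cohort over one month
counts = {
    "profiles_shown": 2000,
    "right_swipes": 500,
    "matches": 100,
    "matches_3plus_msgs": 40,
    "matches_with_convo": 60,
    "meetups": 15,
}
metrics = dating_metrics(counts)
# e.g. metrics["match_rate"] == 0.2 (100 matches / 500 right swipes)
```

Computing these per lifecycle cohort (week 1, month 1, months 2-3) mirrors how the answer ties each ratio to a different retention stage.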
Evaluations

142
Optimizing for #1 means FB needs to 'acquire' more FB users as FB Dating
users. This could mean showing the FB Dating icon more prominently in the FB app,
reducing the traffic to other corners of the app (Marketplace, videos and more).
Optimizing for #1 also means showing more people 'out of the user's league'
(people tend to like those who are ~20% more attractive than themselves), which
will hurt the second metric of focus (match rate).
Feasibility: One advantage FB has over other apps is that pretty much everyone is
on FB (but not everyone is on FB dating). The pool of FB dating users is
theoretically larger than any other apps. This means that they can show more
desirable 'profiles' to each user.
Optimizing for #2 could hurt #3 and #4, for example by showing the profiles of
people who swipe right on almost everyone but have a relatively low convo/meetup
percentage.
Optimizing for #5 may not be feasible, as it's difficult to predict a person's
compatibility in real life vs. on paper.

88. How would you measure success for preventing misinformation on Facebook?
Clarify:
1. I will assume that misinformation includes posts, videos and pictures, and
that sources could be Groups, Pages and individuals.
High-level structure: mission, information funnel and drop-off points, metrics &
trade-offs, mitigations.
Mission/Strategy: Facebook's mission is to bring the world closer together and
empower people to build community. Reliable information is key to fostering a
healthy community. In light of recent world events and public sentiment, it is
important for Facebook to earn back the trust of its users, governments and
organizations. On the other hand, in an effort to quash misinformation, we should
not threaten another core value: free speech.
Information funnel: laying out my understanding of how information flows on
Facebook:

1. User (individual/group/page) creates a post.
2. Post is flagged by the algorithm (some potential for false positives). I will
assume the algorithm is more front-footed on posts about trending topics. The
most obvious misinfo posts are suppressed at this point, while others are sent
for human review.
3. Facebook employee reviews posts flagged by the algorithm: my assumption is
that most of these require some contextual understanding/human judgement. Here
too, there is an opportunity for false positives (non-misinfo posts labelled
as misinfo) and false negatives (misinfo posts passed off as reliable posts).
4. Facebook user flags posts: sent back for algorithmic/human review based on the
number of flags raised and the impact of the trending topic.
Metrics: my metrics are distributed along the funnel to understand where the
biggest issues are.
1. % of posts labelled as misinfo by the algorithm / total posts reviewed by the
algorithm: this gives us a sense of how much is captured by the algorithm on
the first pass. We can slice this by % for trending topics vs. overall, and
over different time slices (e.g. last 12 hours, 24 hours, 72 hours) to get a
better sense of virality. We can also look at the % of false positives and
false negatives, and slice by region, demographic and device type to detect
any bot activity. Blind spot: the accuracy of the algorithm is not known.
2. # of posts flagged as misinfo by Facebook users / total posts viewed: ideally
we want this % to be small. Blind spot: doesn't tell us whether the user is
flagging misinfo just because their opinion differs from that of the post.
3. Probability distribution of users flagging content: 20% of users are likely
flagging 80% of the content. These are likely self-appointed guardians of the
system (similar to Wikipedia editors). Long term, we might want to leverage
this community as an additional layer of checks.
4. Virality score of misinfo posts at 6, 12 and 24 hours: this would be a
weighted-average score of likes, shares, comments, messages, etc. As an
example, a like gets the lowest score as it has the lowest impact, while a
share gets the highest score since it promotes virality. We will need to work
with data science to determine the scoring system. This will help us
understand the severity of the problem.

5. Lead time to take down misinfo posts from time of creation: this should go
down over time, but if system load is too high, especially for top trending
topics, it may surge initially.
The NSM is #1, as it tells us the efficacy of the algorithm; ideally we want to
catch misinfo as high up in the funnel as possible to maintain scale and ensure
that our mechanisms are strong enough to stem misinfo before it gets to the end
user.
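The weighted virality score described above could look like the sketch below. The weights are placeholders to be tuned with data science, not real Facebook values:

```python
# Assumed engagement weights: shares promote virality most, likes least.
VIRALITY_WEIGHTS = {"like": 1, "comment": 3, "message": 4, "share": 5}

def virality_score(engagements):
    """engagements: dict of action -> count within a time window
    (e.g. the first 6 hours after posting)."""
    return sum(VIRALITY_WEIGHTS.get(action, 0) * count
               for action, count in engagements.items())

post_first_6h = {"like": 120, "comment": 30, "share": 10}
score = virality_score(post_first_6h)  # 120*1 + 30*3 + 10*5 = 260
```

Comparing the score at 6, 12 and 24 hours for the same post gives the growth curve that signals how urgently a flagged post needs review.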
Guardrail metrics
1. % of false positives / all posts labelled as misinfo: I would slice this at
each stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally we would want this to decrease as we go lower in the funnel.
2. % of false negatives / misinfo posts not labelled: I would slice this at each
stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally we would want this to decrease as we go lower in the funnel.
3. Long term:
I would also look at NPS to ensure that community trust maintains an upward
trend, i.e. that users trust us.
Track repeat offenders and take them down after x legitimate offenses (x TBD
with data science, based on the virality score).
Mitigations: for top trending topics,
1. Establish a center which is the single source of truth, e.g. similar to the
COVID and climate change portals.
2. Surface the center in the hamburger menu.
3. Establish a task force of human reviewers to focus on the evolving trending
topic, to ensure that there is sufficient manpower.
89. You are the PM of Messenger, you noticed that its DAU has gone
down significantly. How would you go about looking for the root cause?

First, I want to get a better sense of what is a DAU. What is considered “Active”?
User sending message, reading message, opening app?

All of them; any activity within the Messenger app.
Second, it’s important to know the context of the change. What timeframe was this
over?
Let’s assume this was over the past 30 days.
Ok, a follow up to this would be to check if this is a seasonal pattern? For example,
there could be a correlation to Messenger usage and school starting.
This is not seasonal.
Great, the way I want to address this is by thinking of (1) user issues; (2) feature
issues; (3) funnel issues (i.e. ways users access messenger); and (4) external
factors.
(1) Let’s first think about user issues.
I would want to see if we can use data to identify a segment of the userbase that is
at the cause of this problem. Examples of dimensions to analyze – platform,
device, geography, demographics, FB version, age of the account, usage of
Messenger - low, medium, high
Also, I think it’s important to understand DAUs relative to other metrics
where there could be a correlation, for example FB and/or WhatsApp.
(2) Next, I would want to look at potential feature issues.
Here I would take a look at some of the health metrics of the product. Examples:
Notifications sent and the open rate
Message delivery latency
Breakdown of messages by type - text, image, video, url, etc.
Breakdown by sender type - bot, company, friend, stranger
(3) I also want to assess the funnel or the way people access messenger.

146
Breakdown of Message by channel - direct, facebook, API, 3rd party/widget
Breakdown of referral source by channel - direct, facebook (notifications,
messenger widget)
(4) Lastly, let’s think about potential outside forces that could be having an impact.
FB brand reputation
User behavior
Competition - iMessage, Google, Skype
Major event - Olympics, Elections
Let’s say we cannot find a pattern based on user info; however, we do find
there is a drop in messages being created from FB. Where would you go next?
I think here it’s important to break down the messages by sender type as well as
message type. From looking at those segments, we could start forming a few
theories. A few potential ideas that come to mind:
Send a message to a friend is deprioritized in UI
Messaging is being cannibalized by other features (e.g. Facebook videos,
commenting, etc.)
Sharing via message is deprioritized in UI
90. YouTube comments are up but watch time is down. What do you do?
Clarifying Qns:
1. Did watch time go down and comments go up for the same videos, or are we
talking about total watch time and total comments on YouTube? (Total)
2. I am assuming we are not including YouTube Live videos, because those
typically have live chats with 1000s of comments on a single video. (Correct)
3. Do we know if these could be two independent problems, not correlated with
each other? (No)
Here is my approach to diagnosing the root cause:
1. Look at some general trends first
2. Internal factors
3. External factors
4. Potential reasons/hypotheses
General trends: slice and dice the data by the factors below.
1. Is this happening for certain types of videos (gaming, entertainment, news,
etc.)?
2. Is the change specific to a certain geography (China, India, US, etc.)?
3. Is this an issue on certain OS/device types?
4. Is this change observed with certain user segments (demographics)?
5. Is this a gradual change or a sudden shift?
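Slicing a metric along these dimensions is mechanically simple. A stdlib-only sketch with made-up event records (field names are illustrative, not YouTube's schema):

```python
from collections import defaultdict

def slice_metric(events, dimension, metric):
    """Sum `metric` grouped by the values of `dimension`."""
    totals = defaultdict(float)
    for event in events:
        totals[event[dimension]] += event[metric]
    return dict(totals)

# Illustrative watch-time events
events = [
    {"geo": "US", "device": "ios", "watch_minutes": 12.0},
    {"geo": "US", "device": "android", "watch_minutes": 7.5},
    {"geo": "IN", "device": "android", "watch_minutes": 20.0},
]
by_geo = slice_metric(events, "geo", "watch_minutes")
# by_geo == {"US": 19.5, "IN": 20.0}
```

Running the same slice for comment counts alongside watch time shows whether the divergence is global or concentrated in one segment.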
Internal factors:
1. Did we recently change how we track these metrics?
2. Did we launch any new feature, like short-form videos, that contributes less
to watch time but has high engagement through comments?
3. Did we run any PR campaign to encourage commenting on videos?
Before considering the next set of factors: are we observing the same trend with our
competitors? (No) External factors:
1. Was there any social issue recently that led to more discussion on the
platform?
2. Was there any new product or feature launch from competitors?
Hypothesis: since this trend is observed only on YouTube, I will look at internal
factors. #1 and #3 are easy to verify. Most likely, the change is due to the launch
of short-form videos:
1. Short-form videos are typically 30 sec to 1 min in length, compared to
5-10 min long-form videos
2. Short-form videos have the commenting feature inline, leading to more comments
91. You're a PM on FB Watch. Your team is working on a redesign. This
redesign improved the watch time, but it also caused a drop in likes and
comments. Should you ship this?

Clarify
Functionality: Watch is a tab in FB app where people can upload and view
Videos.
What's the redesign? We collapsed the reaction and comment bar so people
can see more videos and spend more time watching videos.
What's the objective of the redesign? To help people discover and watch
more videos they'll enjoy.
Structure: To decide whether or not to ship this redesign, I'd like to go over:
1. How does the Watch tab fit into FB's mission, and what's its goal?
2. What's the long-term impact on users and their user journey?
3. What are the pros and cons in the short and long term?
4. What's our decision criteria?
Mission & Goal: FB's mission is to give people the power to build community and
bring the world closer together. The Watch tab furthers that mission by helping
people stay entertained together. The goal of the Watch tab is to drive FB
engagement by allowing people to share, discover and watch videos together.
Hypothesized impact: the Watch tab has a two-sided marketplace of producers and
consumers of videos. The redesign collapsed the reaction and comment bar,
resulting in an increase in watch time and fewer likes and comments. Let's start
with the hypothesized impact on consumers, given they're the primary users of the
reaction and comment bar.
Consumers get success when they discover, watch, and engage with video
content. The redesign allowed consumers to discover more videos per vpv, and
hence their watch time went up. However, it's harder to discover and access the
reaction and comment bar, which, over time, may result in consumers being less
engaged, missing the opportunity to respond to the content or connect with others
over it. Engagement often begets more engagement.
Producers get success when consumers watch and engage with their videos.
Overall watch time went up meaning more videos are being discovered and being

watched, contributing more value to producers. This may incentivize producers to
create more videos. However, there are fewer reactions and comments, which doesn't
let producers know if consumers are actually getting value from their videos. Over
time, producers may miss out on feedback telling them what consumers like or don't
like, and may create lower-quality videos.
FB gets success when people create, discover and watch videos together.
While total watch time has gone up, FB may be relying on reactions and comments
on videos to understand video quality and user interest, and how best to
distribute videos to which types of audiences. Over time, FB may end up making
lower-quality, less effective video recommendations to users.
Pros and Cons in Short vs. Long Term
Short-term: Pros: more watch time. Cons: fewer reactions and comments.
Long-term: Pros: more videos created. Cons:
Less watch time per video
Fewer FB sessions / less time spent
Decision
We should consider what's the short and long-term impact on FB's mission and
Watch goals when deciding whether or not to ship the redesign.
Ship: furthers FB's mission and the Watch goal. No ship: doesn't further FB's
mission and the Watch goal. Re-test: furthers either FB's mission or the Watch
goal, but not both.
Based on the hypothesized impact on users, we should not ship the redesign as-is,
given it'll hurt both FB's mission and the Watch goal in the long run. We should
iterate on the redesign and see how we can drive up discovery of additional
videos and watch time without the regression in engagement.

92. You're the PM for Facebook Newsfeed. How would you decide
whether or not to auto-play videos or make the videos click-to-play.

Clarifying
1. How does autoplay work? I presume it's how it's done today (no audio, start
playing once the video is scrolled to be completely in the user's screen)
2. Desktop or mobile?
3. Which market is this? Users in US / Western Europe vs. developing markets
have different behaviors
Goals & Metrics
FB's mission is to help people build deeper connections. As a News Feed PM, the
goal is to make sure FB News Feed is valuable to users by being a great way to
keep in touch with friends. Deriving from that, these should be the primary goals:
1. Boost time spent on FB News Feed (= less time on YouTube)
2. Boost user interactions: a user discovers a funny video and shares it on
their own feed or in a private message in Messenger; this helps strengthen or
maintain connections between Facebook users
Other stakeholders goals
This decision will impact the Facebook Videos team, the Ads team and the
Instagram TV team. These are the primary metrics
Number of videos watched, % completed, interacted, skipped and shared
[Videos team]: autoplay will most likely increase most of these metrics
Number of ad impressions [Ads team]: users spending more time watching video
could mean less 'scrolling', which triggers fewer ads
Revenue per 1,000 ad impressions [Ads team]: more video watching
Engagement (time spent, number of videos watched, etc.) on IGTV
[Instagram team]: this is a direct substitute
Decision Making Criteria

151
I would run an experiment on autoplay vs. click-to-play and make the decision
based on which one has a more positive impact on the following metrics:
Frequency of interactions with friends (via private messaging, commenting
and reacting to posts)
Time spent on FB News Feed
To reiterate, the assumption is that autoplaying videos saves users a click,
lowering the barrier for users to discover great videos to share with their friends.
Countermetrics
The primary one to watch would be monetization-related metrics, but they would
not be the main criteria since video ads are expected to be more monetizable than
news feed ads in the long-term. However, if they are disproportionately impacted,
we will have to revisit the criteria.
The video-related metrics are left out since autoplay would most likely win on
them, and thus it would not be a fair comparison. IGTV is also left out since we
work on FB News Feed and are primarily concerned with FB metrics.

93. You're a PM at a food delivery app. There's a 10% decline in restaurant supply over the past week.
AA is the interviewer and BB is the interviewee.
BB: I would like to ask few clarifying questions: AA: Sure, Go ahead. BB: Is it
happening across the country or its region specific? AA: It's happening across all
North America BB: Has it happened in the past? With that I meant, is it a
seasonality issue? AA: No, we are facing this issue for the first time. BB: Are our
delivery competitors also facing the same issue? AA: Since this is the recent issue,
we don't have much info about our rivals in the market. BB: Are we seeing decline
in revenue due to our restaurant partners supply-side issue? AA: Yes, we have
started seeing gradual decline in the no. of orders. BB: Are supply shortage scoped
to any particular cuisine or is it uniform across all the cuisines? AA: It's affecting

152
the overall operations of the restaurants. BB: Have we made any recent UI changes
on our app? Maybe our users would be facing issues while ordering? AA: Yes we
have made some gradient changes. Changed our onboarding process. We have
moved from pagination to lazy loading of restaurant listing pages which is good in
terms of data consumption on the user end. BB: Cool. I would like to brainstorm to
further scope down the root cause and steps to counter the situation. AA: Go
ahead. BB: So here's the framework that I would like to go ahead with:
Brainstorm list of possible reasons by diving into use cases
Identifying root cause
Brainstorm ways to fix
Make a recommendation
BB: Sounds good to you?
AA: Go ahead.
BB: By daily active users, we mean users who visit our app, search for the meals they want to have, scroll through restaurants which are offering that meal and place the order. Is that a safe assumption?
AA: Yes, that's the correct use case.
BB: Okay, since our partner restaurants are facing an issue, I believe we should look at external factors, as we would only be able to fulfil demand if we have sufficient supply. Based on the above discussion, I have thought of three potential reasons:
Trade sanctions among the countries through which restaurants import their
supplies
On-and-off Covid-19 lockdowns across various states in the US and
Canada
False rumors about a particular restaurant which sparked protests and led
to a supply-side shortage
BB: Are we facing any of the above issues?
AA: Yes, the on-and-off lockdown situation is making it worse for restaurants to get enough supplies. Can you think of a few solutions to counter it?
BB: Okay. Let me brainstorm for a bit.
AA: Sure.
BB: Since the pandemic is a global issue and we don't have the power to stop the inevitable, we can surely think of other ways to provide value to our users. This involves:
Start partnering with local cafes and bakeries and giving out discounts on food
items, which can keep the business going.
Get into the grocery & medicine delivery segment since we have sufficient drivers
to fulfil the demand.

Acquire new customers via marketing and by promoting cuisines which are
in sufficient supply.
I would like to go with the 2nd solution as it's unique and provides much higher value
to our customers in the current crisis. We have drivers who can deliver groceries,
medicines and other products by following the same safety protocols they are
accustomed to now. Although it would be a fundamental change in our value
proposition and we would gain a new set of rivals in the market, we can expand
the scope of our business. We would also have to change our compensation model
for drivers delivering the new set of products. The success metrics I would like to track
would be:
No. of users adding groceries and medicines to the cart
% increase in DAU/WAU
% of DAU placing orders relative to total users
94. If you were the PM for FB marketplaces, what goals would you set?
Clarification
I would first try to understand what FB Marketplace is since I haven't really used it
more than a few times.
My understanding is the following
1. Launched 2-3 years ago in the US, it is similar to web classified services such
as Craigslist
2. The sellers are not businesses but rather individuals, primarily in local areas
3. Broad range of products with no particular focus: ranging from electronics,
furniture and used cars to even apartments for rent. Sometimes posters are willing
to give things away for free
4. FB doesn't charge listing fees
5. FB runs ads on Marketplace
Why does FB Marketplace exist?
FB Marketplace is beneficial to the business in three key ways

1. Engagement
a. FB wants a greater share of users' time. There is only so much time
people are willing to spend browsing the FB/IG feed and messaging their
friends before the marginal return from doing so turns negative
b. This is probably the reason FB is aggressively pushing FB Watch and Live
(as evidenced by their prominent positioning in the FB app), since they
are different kinds of content and reduce news feed fatigue
c. Higher engagement also boosts monetization - however, I do not believe
this to be a primary goal - the ad density on FB Marketplace is a lot lower
than on News Feed, such that it may even be net-negative to nudge people
to come here (assuming some cannibalization of News Feed)
2. Retention
a. By becoming a go-to place for people to buy/sell things, FB is increasing
the utility of the platform, making it stickier for the users
b. FB MAU has plateaued over time, and while this is not disclosed in FB's
investor presentations, the user churn number should be considerable
c. Marketplace can reduce churn by countering the "social media is a waste
of time" sentiment and even resurrect some users
3. Strategic Reasons
a. As FB becomes home to more activities from users in the community, it
can launch a competitor to Nextdoor (a private, neighborhood-only social
network)
b. FB Marketplace can also be used to promote adoption of Facebook Pay ->
more users linking cards with FB -> lower barriers to get users to adopt
other future products
Creators marketplace selling custom stickers for Messenger /
WhatsApp (the LINE strategy)
"Tipping" live streamers (FB Gaming launched recently and this
would be critical to attract streamers)
I believe #2 is the most important reason FB created Marketplace for the following
reasons
1. The natural frequency of FB Marketplace usage is not daily. Realistically, you
are not going to go there every day. Hence, it's not going to have a
meaningful impact on engagement metrics (as measured by time spent or #
of sessions per week)

2. #3 is unlikely to be the main goal here. There are better ways for FB to
promote 'local' usage - for example, FB Groups can be better optimized
towards local. FB Pay adoption can also be better boosted by lower-barrier
products such as FB Fundraising (unlike Marketplace, users don't have to go
pick up the goods)
Metrics
We would like FB marketplace to be successful as a classified product, providing
utilities to both the sellers and the buyers. The key metrics I would measure are
going to be focused on user satisfaction. For each set of users, I will list the goals.
Sellers
% of listings that get engagement (likes, comments, buyer's messages) in
<7d, 7-14d, 14-30d and >30d
If most of the items that sellers list never get any views, likes or comments,
sellers will feel that their listings went into a black hole and will not post more
items in the future
The time 'bucket' is to account for the different 'velocity' of each vertical -
selling a used car will take a lot longer than selling AirPods. We should
not punish a car listing if it doesn't attract interest quickly enough
% of listings that result in closed transactions (measured by # of closed sales
/ # of listings posted by the seller) in <7d, 7-14d, 14-30d and >30d
A transaction might not close for several reasons:
Buyer/Seller never showed up
Product condition doesn't match the listing
Buyer tried to further negotiate after having agreed
% of listings that were closed outside of Facebook
This means the seller sold it on another platform because FB doesn't
have enough buyers or does a poor job showing the right listing to the
right people
Buyers
% of buyer's inquiries that get a response in <24 hours, 24-48 hours, and
>48 hours

Similar to the seller scenario, if a buyer reaches out to several sellers and
receives no response or a very laggy one, s/he will quickly stop using
Marketplace
% of buyers' inquiries that result in closed transactions (measured by # of
closed sales / # of inquiries made by the buyer) in <7d, 7-14d, 14-30d and
>30d
Transaction experience rating (out of 5 stars)
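To make the bucketed metrics above concrete, here is a minimal sketch of how the listing-engagement share per time bucket could be computed. The listing records and dates are illustrative assumptions, not real Marketplace fields:

```python
from datetime import date

# Hypothetical listing records: (posted_on, first_engagement_on or None).
# All data here is made up for illustration.
listings = [
    (date(2023, 1, 1), date(2023, 1, 3)),   # engaged in <7d
    (date(2023, 1, 1), date(2023, 1, 10)),  # engaged in 7-14d
    (date(2023, 1, 1), date(2023, 1, 20)),  # engaged in 14-30d
    (date(2023, 1, 1), None),               # never engaged
]

def bucket(days):
    """Map days-to-first-engagement onto the buckets used above."""
    if days < 7:
        return "<7d"
    if days < 14:
        return "7-14d"
    if days < 30:
        return "14-30d"
    return ">30d"

def engagement_share_by_bucket(listings):
    """Share of all listings whose first engagement fell in each bucket."""
    counts = {"<7d": 0, "7-14d": 0, "14-30d": 0, ">30d": 0}
    for posted, engaged in listings:
        if engaged is not None:
            counts[bucket((engaged - posted).days)] += 1
    total = len(listings)
    return {b: n / total for b, n in counts.items()}

print(engagement_share_by_bucket(listings))
```

The same bucketing would apply to the closed-transaction metric; only the event date being bucketed changes.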
Why the above metrics?
We want to focus on retention which ultimately boils down to happiness
Active users (Marketplace DAU/WAU/MAU) should be measured, but are not
the goals here because they do not measure the 'value' that users get
from Marketplace
Picking it as a goal can lead to FB excessively pushing the Marketplace
product while having a sub-par user experience, leading to churn
The 'depth' of the marketplace as measured by the number of new listings
posted in each category/market is not the goal. Having a lot of listings does
not necessarily imply transactions. Given that FB Marketplace has been around
for several years in matured markets, it should not lack depth (that
being said, it should still be measured as a counter-metric)
Depth alone does not create a sufficient moat to retain users. As a
marketplace, eBay has the most 'depth', but startups focusing on a better
experience in a particular vertical (e.g. GOAT for high-end sneakers,
Chrono24 for watches) have chipped away at eBay's market share
However, in a new-market-entry scenario, this set of metrics would
become more important
Risks
Optimizing for response rate / the percentage of listings that lead to a successful
transaction could lead to us imposing a higher bar for a listing (requiring more
pics, more item details, etc.). Being too aggressive here can prevent a good
number of items from being posted, reducing marketplace depth
Number of new listings posted in each category should be monitored as a
guardrail metric
95. How would you measure the success of Facebook dating?

Why do we have FB dating? How does it fit within FB's mission of connecting
people and building community?
Research shows that lonely people have worse health outcomes, get sick more
easily, die faster, etc. There are many ways to solve for loneliness. FB already has
Groups, which connect people based on shared interests/hobbies. FB also has
Messenger and other apps that help you connect with people you already know.
Dating is an important way FB can help people form more intimate connections and
is a missing part of the FB portfolio today.
Assumptions
FB Dating launched about 1 year ago. It remains free of charge and ad-free
today, so monetization is not a focus right now
FB Dating is aimed at users looking for actual dating and not hooking up.
The profile displays job, education and other interests. This information
isn't really relevant unless you are looking to date
FB Dating is primarily designed for single people looking to get into an
exclusive relationship, not people looking to get into a poly relationship
Goals
FB-level Goals
Engagement Boost: People spend a lot of time on different dating apps
(Tinder, Hinge, OKC, CMB, Bumble, Match - each with a different flavor). FB
wants to capture this significant portion of 'time spent' on other apps today
and does not want to monetize this usage for the time being
User acquisition: There are some people who do not have / have deleted their
FB accounts. The dating feature gives them a reason to re-activate their FB account
FB Dating Success Measure
Given the FB-level goals, we want to keep users on FB Dating as long as their
relationship status remains 'single'. User retention (excluding those who got into a
relationship) should be our success measure. Retention boils down to making our
users satisfied with the dating feature. Crucially, optimizing for retention
means optimizing for different things depending on the lifecycle of the users.
Goals
What matters for users in dating?
1. Users have enough options to 'swipe right' on
(# of profiles swiped right / total profiles shown)
2. Users get a high enough match rate
(# of matches / # of profiles swiped right)
3. A match leads to a good conversation
(# of matches with more than 3 messages exchanged / total # of
matches)
(# of bad convos as defined by FB Dating guideline violations / total # of
matches)
4. A good portion of matches leads to a meetup (virtual, or a real-life meetup pre-
Covid)
(# of meetups / # of matches with conversation exchanged)
5. Users have a good experience meeting the match and decide to meet again
6. A meetup eventually leads to an exclusive relationship (relationship status
update) and the user drops off from FB Dating for some time
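As an illustration, these funnel ratios amount to simple divisions of successive counts. All of the numbers below are made up for the sketch, not real FB Dating data:

```python
# Illustrative funnel counts (assumed numbers for the sketch)
profiles_shown = 1000
swiped_right = 250   # profiles the user swiped right on
matches = 60         # right-swipes that became matches
convos_3plus = 30    # matches with more than 3 messages exchanged
meetups = 12         # conversations that led to a meetup

funnel = {
    "right-swipe rate": swiped_right / profiles_shown,
    "match rate": matches / swiped_right,
    "good-convo rate": convos_3plus / matches,
    "meetup rate": meetups / convos_3plus,
}
for name, value in funnel.items():
    print(f"{name}: {value:.0%}")
```

Because each ratio corresponds to one step of the funnel, a drop in a single ratio points directly at the lifecycle stage that needs attention.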
As discussed, each metric should be carefully optimized depending on user
lifecycle
#1 is important to retain users within the first weeks. If users don't see anyone
appealing showing up, they will stop using it pretty quickly.
#2 is critical to retain users in the first month. If users get a very low match rate after
swiping for a few weeks, they are less likely to continue using FB Dating.
#4 is important for medium-term retention (month 2 to 3) - if users
repeatedly get matched with someone who isn't compatible (interest-wise,
personality-wise, or not looking for the same thing), the match is unlikely to lead to
a meetup.

#5 is important in keeping the users on FB dating longer-term (month 4 and
beyond).
Evaluations
Optimizing for #1 means FB needs to 'acquire' more FB users to become FB dating
users. This could mean showing FB dating icon more prominently in the FB apps,
reducing the traffic to other corners of FB app (marketplace, videos and more).
Optimizing for 1 also means showing more people 'out of the user's league'
(people tend to like those who are ~20% more attractive than themselves) which
will hurt the second metric of focus (match rate)
Feasibility: One advantage FB has over other apps is that pretty much everyone is
on FB (but not everyone is on FB dating). The pool of FB dating users is
theoretically larger than any other apps. This means that they can show more
desirable 'profiles' to each user.
Optimizing for #2 could hurt #3 and #4, potentially by showing the profile of
people who pretty much swipe right on everyone but have relatively low
convo/meetup percentage.
Optimizing for #5 could be not feasible as it's difficult to predict a person
compatibility in real life vs on paper.

96. How would you measure success for preventing misinformation on
Facebook?
Clarify:

1. I will assume that misinformation includes posts, videos and pictures, and that
the sources could be Groups, Pages and Individuals.
High-level structure: Mission, Information funnel and drop-off points, Metrics &
Trade-offs, Mitigations
Mission/Strategy: Facebook's mission is to bring the world closer
together and empower people to build community. Reliable information is key
to fostering a healthy community. In light of recent world events and public
sentiment, it is important for Facebook to earn back the trust of its users,
governments and organizations. On the other hand, in an effort to quash
misinformation, we should not threaten another core value: free speech.
Information funnel: Laying out my understanding of how information flows in
Facebook
1. A user (individual / group / page) creates a post
2. The post is flagged by an algorithm (some potential for false positives). I will
assume that the algorithm is more front-footed on posts about trending topics. The
most obvious misinfo posts are suppressed at this point while others are sent for
human review.
3. A Facebook employee reviews posts flagged by the algorithm: My assumption is
that most of these require some contextual understanding / human
judgement. Here too, there is an opportunity for false positives (non-misinfo
posts labelled as misinfo) and false negatives (misinfo posts passed off as
reliable posts)
4. A Facebook user flags posts: Sent back for algorithmic / human review based on
the number of flags raised and the impact of the trending topic.
Metrics: My metrics are distributed along the funnel to understand where the
biggest issues are.
1. % of posts labelled as misinfo by the algorithm / total posts reviewed by the
algorithm: This will give us a sense of how much is captured by the algorithm
on the first pass. We can slice this by % for trending topics vs. overall and
over different time slices, e.g. the last 12 hours, 24 hours, 72 hours, to give us a
better sense of virality. We can also look at the % of false positives and false
negatives. We can also slice by region, demographic and device type to
detect any bot activity. Blind spot: the accuracy of the algorithm is not known.
2. # of posts flagged as misinfo by Facebook users / total posts viewed: ideally
we want this % to be small. Blind spot: doesn't tell us whether the user is
flagging misinfo because their opinion differs from that of the post.

3. Probability distribution of users flagging content: 20% of users are likely
flagging 80% of the content. These are likely self-appointed guardians of the
system (similar to Wikipedia editors). Long term, we might want to leverage this
community as an additional layer of checks.
4. Virality score of misinfo posts in 6, 12, 24 hours: This would be a weighted
average score of likes, shares, comments, messages, etc. As an example, a
like will get the lowest score as it has the lowest impact, while a share will get the
highest score since it promotes virality. We will need to work with data
science to determine the scoring system. This will help us understand the
severity of the problem.
5. Lead time to taking down misinfo posts from the time of creation: this should go
down over time, but if system load is too high, especially for top trending topics,
this may surge initially.
The NSM is #1, as it tells us the efficacy of the algorithm, and ideally we want to
catch misinfo as high up in the funnel as possible to maintain scale and ensure that
our mechanisms are strong enough to stem it before it gets to the end user.
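The weighted virality score described in the metrics above could be sketched as follows. The weights here are placeholder assumptions that would, as noted, be tuned with data science:

```python
# Placeholder weights - a share promotes virality most, a like least.
WEIGHTS = {"like": 1, "comment": 3, "message": 4, "share": 5}

def virality_score(interactions):
    """Weighted sum of a post's interactions within one time window."""
    return sum(WEIGHTS[kind] * count for kind, count in interactions.items())

# A hypothetical misinfo post's interactions in its first 6 hours
post_6h = {"like": 120, "comment": 10, "message": 5, "share": 40}
print(virality_score(post_6h))  # 120*1 + 10*3 + 5*4 + 40*5 = 370
```

Computing the score per 6/12/24-hour window then shows whether a post is accelerating and how urgent the takedown is.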
Guardrail metrics
1. % of false positives / all posts labelled as misinfo: I would slice this at each
stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally, we would want this to decrease as we go lower in the funnel.
2. % of false negatives / misinfo posts not labelled: I would slice this at each
stage of the funnel, i.e. by algorithm, human reviewer and Facebook user.
Ideally, we would want this to decrease as we go lower in the funnel.
3. Long term:
a. I would also look at NPS to ensure that community trust maintains
an upward trend, i.e. that users trust us.
b. Track repeat offenders and take them down after x legitimate offenses (x TBD
with data science based on the virality score).
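The two guardrail ratios above are essentially false-positive and false-negative rates at each stage of the funnel. A minimal sketch, with assumed counts:

```python
# Illustrative confusion counts for one funnel stage (assumed numbers)
labeled_misinfo = 200   # posts this stage labelled as misinfo
false_positives = 30    # of those, posts that were actually fine
actual_misinfo = 500    # misinfo posts that reached this stage
false_negatives = 80    # misinfo posts this stage failed to label

fp_rate = false_positives / labeled_misinfo   # guardrail 1
fn_rate = false_negatives / actual_misinfo    # guardrail 2
print(f"FP rate: {fp_rate:.0%}, FN rate: {fn_rate:.0%}")
```

Computing both rates separately for the algorithm, the human reviewers and user flags shows which stage of the funnel is the weakest link.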
Mitigations: For top trending topics,
1. Establish a center which is the single source of truth,
e.g. similar to the COVID and climate change portals
2. Surface the center in the hamburger menu
3. Establish a task force of human reviewers to focus on evolving trending topics
to ensure that there is sufficient manpower.
97. You are the PM of Messenger, you noticed that its DAU has gone
down significantly. How would you go about looking for the root cause?
First, I want to get a better sense of what a DAU is. What is considered "active"?
A user sending a message, reading a message, opening the app?
All of them - any activity within the Messenger app.
Second, it’s important to know the context of the change. What timeframe was this
over?
Let’s assume this was over the past 30 days.
Ok, a follow-up to this would be to check whether this is a seasonal pattern. For example,
there could be a correlation between Messenger usage and school starting.
This is not seasonal.
Great, the way I want to address this is by thinking of (1) user issues; (2) feature
issues; (3) funnel issues (i.e. ways users access messenger); and (4) external
factors.
(1) Let’s first think about user issues.
I would want to see if we can use data to identify a segment of the user base that is
the cause of this problem. Examples of dimensions to analyze: platform,
device, geography, demographics, FB version, age of the account, and usage of
Messenger (low, medium, high)
Also, I think it's important to understand Messenger DAUs relative to other metrics
where there could be a correlation, for example FB and/or WhatsApp
(2) Next, I would want to look at potential feature issues.

Here I would take a look at some of the health metrics of the product. Examples:
Messages sent and open rates
Notifications sent and the open rate
Message delivery latency
Breakdown of messages by type - text, image, video, url, etc.
Breakdown by sender type - bot, company, friend, stranger
(3) I also want to assess the funnel or the way people access messenger.
Breakdown of Message by channel - direct, facebook, API, 3rd party/widget
Breakdown of referral source by channel - direct, facebook (notifications,
messenger widget)
(4) Lastly, let’s think about potential outside forces that could be having an impact.
FB brand reputation
User behavior
Competition - iMessage, Google, Skype
Major event - Olympics, Elections
Let's say that we cannot find a pattern based on user info; however, we do find
there is a drop in messages being created from FB. Where would you go next?
I think here it's important to break down the messages by sender type as well as
message type. Looking at those segments, we could start forming a few
theories about what is occurring. A few potential ideas come to mind:
'Send a message to a friend' is deprioritized in the UI
Messaging is being cannibalized by other features (e.g. Facebook videos,
commenting, etc.)
Sharing via message is deprioritized in the UI
98. You're a PM at a food delivery app. There's a 10% decline in
restaurant supply over the past week.
AA is the Interviewer and BB is the Interviewee.

BB: I would like to ask a few clarifying questions.
AA: Sure, go ahead.
BB: Is it happening across the country or is it region-specific?
AA: It's happening across all of North America.
BB: Has it happened in the past? By that I mean, is it a seasonality issue?
AA: No, we are facing this issue for the first time.
BB: Are our delivery competitors also facing the same issue?
AA: Since this is a recent issue, we don't have much info about our rivals in the market.
BB: Are we seeing a decline in revenue due to our restaurant partners' supply-side issue?
AA: Yes, we have started seeing a gradual decline in the no. of orders.
BB: Is the supply shortage scoped to any particular cuisine or is it uniform across all cuisines?
AA: It's affecting the overall operations of the restaurants.
BB: Have we made any recent UI changes on our app? Maybe our users are facing issues while ordering?
AA: Yes, we have made some gradual changes. We changed our onboarding process and moved from pagination to lazy loading of restaurant listing pages, which is good in terms of data consumption on the user end.
BB: Cool. I would like to brainstorm to further scope down the root cause and the steps to counter the situation.
AA: Go ahead.
BB: So here's the framework that I would like to go ahead with:
Brainstorm list of possible reasons by diving into use cases
Identifying root cause
Brainstorm ways to fix
Make a recommendation
BB: Sounds good to you?
AA: Go ahead.
BB: By daily active users, we mean users who visit our app, search for the meals they want to have, scroll through restaurants which are offering that meal and place the order. Is that a safe assumption?
AA: Yes, that's the correct use case.
BB: Okay, since our partner restaurants are facing an issue, I believe we should look at external factors, as we would only be able to fulfil demand if we have sufficient supply. Based on the above discussion, I have thought of three potential reasons:
Trade sanctions among the countries through which restaurants import their
supplies
On-and-off Covid-19 lockdowns across various states in the US and
Canada
False rumors about a particular restaurant which sparked protests and led
to a supply-side shortage

BB: Are we facing any of the above issues?
AA: Yes, the on-and-off lockdown situation is making it worse for restaurants to get enough supplies. Can you think of a few solutions to counter it?
BB: Okay. Let me brainstorm for a bit.
AA: Sure.
BB: Since the pandemic is a global issue and we don't have the power to stop the inevitable, we can surely think of other ways to provide value to our users. This involves:
Start partnering with local cafes and bakeries and giving out discounts on food
items, which can keep the business going.
Get into the grocery & medicine delivery segment since we have sufficient drivers
to fulfil the demand.
Acquire new customers via marketing and by promoting cuisines which are
in sufficient supply.
I would like to go with the 2nd solution as it's unique and provides much higher value
to our customers in the current crisis. We have drivers who can deliver groceries,
medicines and other products by following the same safety protocols they are
accustomed to now. Although it would be a fundamental change in our value
proposition and we would gain a new set of rivals in the market, we can expand
the scope of our business. We would also have to change our compensation model
for drivers delivering the new set of products. The success metrics I would like to track
would be:
No. of users adding groceries and medicines to the cart
% increase in DAU/WAU
% of DAU placing orders relative to total users
99. Should Uber Eats be a different app from Uber Rides?
Approach:
1. Clarify the question with the interviewer
When we say "app", I'd like to confirm the app for which user. For example, in
Rides, we have an app for the rider and an app for the driver. For Eats, we
have an experience for a customer purchasing food, but I don't know of the
experience for the restaurant. Could you confirm which user's experience
you're describing when saying "app"?
Let's say rider/customer purchasing food.

Also, is this a situation prior to the launch of Uber Eats, which was launched after
Rides? Or is the question geared towards the current state of Eats and Rides?
Let's say prior to the Eats launch.
2. Dig into the user experiences - I want to start by first observing the user
experience for riders/eaters. It's helpful to look at their experiences to see
how similar or different they are, which will help us answer whether they should
be in the same app.
Rider - As a rider:
I already know where I need to go when I open the app. I need to enter an
address/location.
When I open the app, my highest priority is finding a ride quickly (this could
be an assumption, but I would share some sort of personal insight to
empathize with the rider experience)
There are very few steps I take before ordering a ride (decide ride type,
enter location, view time until pickup and cost, reserve ride).
Eater - As someone looking to order food:
I may or may not have decided what kind of food I want to order. I may
need help deciding this - often the biggest task for the Eater.
I want to see restaurant options available to help me decide what I want.
When I find a restaurant, I want to see the menu and make selections.
There are multiple steps I take before ordering food and each step can
take me some time to deliberate.
3. Now that I've covered the customer experience, I want to dig into the impact
of each of the options on business goals, customer experience, and
engineering experience.
Combined app experience
Business
(+) Potential for faster adoption and growth - Leverage existing
customer base with Rides and get Eats to existing customers faster
without having them discover and download a separate app. I could
see this as a benefit to Uber and possibly align with a business
strategy for growth.
(+) Potential opportunity for bigger innovations combining the
experience - what if Uber wanted to make it so that a rider can order
and pick up food while on a ride? I don't know that the combined app
experience is necessary to do this but I could see that it may make it
easier to test.
Customer experience

( - ) Combining Rides and Eats in a single app makes a larger app. This
can be off-putting as it takes up more space on a user's phone.
( - ) The processes for getting a ride and ordering food differ greatly.
The UX for a combined app would need to accommodate the
different processes involved in each task, so it might make sense to
completely separate the apps to accommodate the differences.
Engineering
( - ) Testing each new app release can take more time with a much
larger app. With the combined experiences, there is more room and
risk of bugs.
Separate apps
Business
( + ) Opportunity to create two independently valuable lines of
business to minimize risk - if one suffers due to market conditions or
other factors, we could minimize the negative impact on the other
product line because customers compartmentalize app uses (a totally
oddball hypothesis, but it's possible that the usage of one product
line would not drop because users stop using the other)
Customer experience
( + ) Easier to accommodate the different processes involved for
getting rides vs. ordering food
Engineering
( + ) Easier to test each new release with smaller apps and more
targeted UX.
4. Decide and summarize
I really want to advocate strongly for the customer's experience here. From what I
know about the tasks they go through to hail rides and order food, the processes
differ greatly enough that the experiences should be different. I feel that the best
way to do this is to have separate apps. I don't want to ignore the possible
opportunity for more innovative thinking in the future, but if we're trying to decide
how to launch Eats, it's best for the customer experience to launch it separately. We
can still solve the problems of new-customer growth and tapping into our existing
Rides customer base with marketing in the Rides app to drive Eats app downloads -
we don't need a combined app to leverage our existing customer base.

100. What metrics would you define for success of Facebook Lite?

So first, I have some clarifying questions. What is Facebook Lite?
If I understand correctly, this is a version of the Facebook application for Android,
with the same functionality but lighter in terms of size. It is aimed to be used in
emerging markets where bandwidth is limited.
What are the trade-offs for the size of the app?
I guess that some visual elements are stripped out of the app to reduce its
size.
Does this application already exist, or am I launching it? What is the timeframe of
the product - is it before launch, or as it is today? The reason I
ask is that if it's today, since Facebook Lite already has a meaningful user base, I'd
focus on engagement rather than growth.
Goals
So the idea behind this application is to bring the same value as the Facebook
application to other regions and bring more users to the platform. The users can
then share their lives through posts or stories, react to other posts and comments,
and of course, they can connect with others.
Facebook's mission is to bring people together, and breaking the technological
barriers to bring people from emerging markets onto the platform serves
the Facebook mission.
Actions
The actions I would like users to take on Facebook Lite are similar to those in the
Facebook app, and I would like them to engage in one of these ways:
1. Download the application
2. Engage, and increase engagement over time
Advanced user

Created a group
Opened a shop
Created an event
Everyday user
Became a friend
Created a post
Shared a post
Commented on a post
Basic user
Opened the app and scrolled, but did not react
Metrics
So the metrics to track my goals are:
The first would be how Facebook Lite supports the Facebook mission,
and here I can track
User base growth
Activation rate - how many accounts were created on this app (MoM)
Acquisition rate - how many users downloaded the app (MoM)
User engagement - monthly active users, where an active user is a user
that engaged with the platform. I also need to be able to score the
engagement level - a weighted score of the activity
The second bucket would be product health
Retention rate, and how it compares to the main Facebook application
Churn rate - how many users did not return to the platform
I think the true north metric would be monthly active users, and the second metric
is the retention rate
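The 'weighted score of the activity' mentioned above could be sketched like this. The actions mirror the advanced / everyday / basic user tiers described earlier, and the weights are purely illustrative assumptions:

```python
# Assumed per-action weights reflecting the user tiers described earlier
ACTION_WEIGHTS = {
    "create_group": 10, "open_shop": 10, "create_event": 8,  # advanced
    "friend": 5, "post": 4, "share": 3, "comment": 2,        # everyday
    "scroll": 1,                                             # basic
}

def engagement_score(actions):
    """Monthly engagement score: weighted count of a user's actions."""
    return sum(ACTION_WEIGHTS.get(action, 0) for action in actions)

monthly_actives = [
    ["post", "comment", "scroll"],      # everyday user
    ["create_group", "post", "share"],  # advanced user
    ["scroll"],                         # basic user
]
print([engagement_score(u) for u in monthly_actives])  # [7, 17, 1]
```

Averaging this score across monthly active users gives a single engagement-depth number that can be compared against the main Facebook app.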
Evaluate
The success of this app could hurt or cannibalize the Facebook application
for Android: users may switch to the Lite app to enjoy its optimized
performance. As long as this does not lead to churn overall, it may
be okay.
If the engagement in both applications is the same, this may lead to the
question - do we really need all the features that we stripped out?

Abuse of the platform in emerging markets - for example, fake accounts that are
used to spread fake news.
