
Human-Computer Interaction and AI/ML

CS339H #HCIAI
Professors: Daniel M. Russell <[email protected]> & Peter Norvig <[email protected]>
Course Assistant: Alejandrina Gonzalez Reyes <[email protected]>

When/Where: Tu/Th 10:30-12:20 Hewlett 101


This document: bit.ly/HCIAI-Stanford-2022
Website for the course: sites.google.com/view/hciai339h/
Aka… cs339h.stanford.edu/

Week 1.A: Sep 27, 2022

* Introduction to the course


- What you need to know about the course / schedule / etc.
- Initial definitions: AI, ML, DL, HCAI. What’s the difference?
- Overview of topics
- Grading issues; Expectations
- Student survey (Please fill this out)

* The human aspects of designing and building AI/ML systems (Dan)


- How do we design complex systems for human use and understanding?
- How does AI/ML fit into an engineering practice? Is it a way of building
systems, or is it a property of designed systems?
- What’s the difference between a mental model and a user model?
Readings:
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal,
S. T., Bennett, P., Inkpen, K., Teevan, J., Kikin-Gil, R., and Horvitz, E. (2019) Guidelines for
Human-AI Interaction (CHI 2019).

Bogost, Ian “The end of manual transmission” Atlantic, Aug 8, 2022. (The sense/illusion
of control. Why is a manual transmission important, but ABS brakes are not?)

Week 1.B: Sep 29, 2022

* A History of Artificial Intelligence and People (Peter)


- A few minutes of background information: how we got to this place
- What the goals of AI were (and how they have changed over time)
- Recap of the symbol system hypothesis; expert systems; “knowledge as stuff”
- Beginnings of neural networking and the evolution into modern ML

Readings:

Alan Turing’s classic paper “Computing machinery and intelligence” (aka “The imitation
Game” paper) Mind LIX (1950): 433-460. Or, you could watch the movie: The Imitation
Game ;-)

Chapter 1 of Artificial Intelligence: A Modern Approach, 4th ed. (S. Russell and P. Norvig)
(a fairly long excerpt from the book, but remarkably comprehensive in tracing the ideas
of AI from the earliest stages)

* A History of Humans Interacting with AI (Dan)


- Things we thought were intelligent, e.g.,
- Mechanical Turk (the original), Clever Hans (the horse)
- Where is the AI vs. merely-complex-system boundary?
- When is it really AI? How do you know?
- Classic avionics user-interaction examples (“what’s it doing??”)
Readings:
Ben Shneiderman and Pattie Maes. Direct Manipulation vs. Interface Agents.
ACM Interactions, 1997 (a famous CHI 1997 debate about the role of AI agents vs.
human-directed control)

Eric Horvitz. Reflections on Challenges and Promises of Mixed-Initiative Interaction.
AI Magazine 28, Special Issue on Mixed-Initiative Assistants (2007) (what will work in
designing interactions with AI agents using interleaved actions by computers and
people)

▶ Assignment 1 handed out: “What is the nature of intelligence?” (Reflection)

Week 2.A: Oct 4, 2022 (Dan)

* Communicating Predictions & Recommendations with Users


(Ed Chi talk)
- Recommendation engines
- How people perceive and understand rankings and recs

Readings:
Zhao, Z., Hong, L., Wei, L., Chen, J., Nath, A., Andrews, S., ... & Chi, E. (2019, September).
Recommending what video to watch next: a multitask ranking system. In Proceedings
of the 13th ACM Conference on Recommender Systems (pp. 43-51). (How YouTube
recommendations work; a toy scoring sketch follows the readings.)

Konstan and Riedl Recommender systems: from algorithms to user experience
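
To make the multitask-ranking idea concrete, here is a toy, hypothetical sketch in plain
Python (not the architecture from the Zhao et al. paper): each candidate carries
model-predicted scores for a few objectives, and the ranker combines them with weights
that encode a product decision about what "good" means. The objective names and
weights here are assumptions for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        video_id: str
        p_click: float         # predicted engagement objective
        p_satisfaction: float  # predicted satisfaction objective

    def rank(candidates, w_click=0.5, w_satisfaction=0.5):
        """Combine per-objective predictions into one score; sort descending."""
        def combined(c):
            return w_click * c.p_click + w_satisfaction * c.p_satisfaction
        return sorted(candidates, key=combined, reverse=True)

    videos = [
        Candidate("a", p_click=0.9, p_satisfaction=0.2),  # clickbait-like
        Candidate("b", p_click=0.6, p_satisfaction=0.8),
    ]
    print([c.video_id for c in rank(videos)])  # ['b', 'a'] with equal weights

Changing the weights changes the ordering; that invisible choice is part of what users are
perceiving (without seeing) when they react to recommendations.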

* How people understand AI systems: AI and mental models


- How do people THINK that AI systems work?
- Why understanding the mental model of the user is important
- Defining a mental model

Readings:
Google Handbook on Mental Models

* What kinds of UX does AI/ML afford?


- What are the N different kinds of AI interactions you’ll have (or have to design
for / understand)?

Week 2.B: Oct 6, 2022 (Dan)

* Designing for AI failures: Feedback to Users


- When things go wrong: how to detect it, and how to communicate it
- Probably ought to discuss Google Flu Trends (how did it go wrong?)

Readings:
Kocielnik, R., Amershi, S., and Bennett, P. (2019) Will You Accept an Imperfect AI?
Exploring Designs for Adjusting End-User Expectations of AI Systems. (CHI 2019).

Cai, C., et al. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for
Human-AI Collaborative Decision-Making. (2019).

Can we make Montreal’s buses more predictable? No. But machines can. (Case study of
Montreal transit using ML to improve predictions of buses.)

Pappas, S. Data Fail! How Google Flu Trends Fell Way Short (LiveScience.com, 2014)

◀ Assignment 1 due: “What is the nature of intelligence?”


▶ Assignment 2 handed out: “Reflection on Weeks 1 and 2” (Reflection)

Week 3.A: Oct 11, 2022 (Peter)

* Data & Knowledge. Where does it come from? Who owns it?
Who should own it?
- Creating training data sets
- Ethical issues involved
- Pragmatics of creating a representative data set

Readings:
Lindvall, et al. From Machine Learning to Machine Teaching, Interactions v 25, n 6
(2018)

Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw,
John Zimmerman, Matt Lease, and John Horton. 2013. The Future of Crowd Work. In
Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW
’13), 1301–1318.

Mary L. Gray and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building
a New Global Underclass. Houghton Mifflin Harcourt, Boston. Introduction and Chapter
1. [Read blog posts about this chapter from Virginia Tech] (2019)

Nithya Sambasivan, Rajesh Veeraraghavan. The deskilling of domain expertise in AI
development. CHI 2022.

Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh,
Lora M Aroyo. “Everyone wants to do the model work, not the data work”: Data
Cascades in High-Stakes AI. Proceedings of CHI 2021.

* Data ethics and laws


- Where’s the dividing line between ethics, regulation, and morality?

Readings:
Jeffrey Bigham, Michael Bernstein, and Eytan Adar. Human-Computer Interaction and
Collective Intelligence
Zimmerman, J., Tomasic, A., Garrod, C., Yoo, D., Hiruncharoenvate, C., Aziz, R., ... &
Steinfeld, A. (2011, May). Field trial of tiramisu: crowd-sourcing bus arrival times to spur
co-design. CHI 2011.

Week 3.B: Oct 13, 2022 (Nithya; Kelly Moran)


* AI Ethics, Fairness, Social Acceptability, and Trust
- What you need to know about these topics.

Readings:
Medical devices: the Therac-25 by Nancy Leveson

Carrie Cai et al (2019) Human-Centered Tools for Coping with Imperfect Algorithms
during Medical Decision-Making

Pitfalls of algorithmic de-biasing…
Residual unfairness in fair machine learning from prejudiced data. Kallus & Zhou

Kidney Allocation Algorithms:
https://slate.com/technology/2022/08/kidney-allocation-algorithm-ai-ethics.html

* Making AI/ML systems fair: What do practitioners need to know?


- fairness in ML systems: What do industry practitioners need to know?
- Understanding your data set characteristics (PAIR What-If tool)

Readings:
Pitfalls of algorithmic de-biasing…
Residual unfairness in fair machine learning from prejudiced data. Kallus & Zhou

◀ Assignment 2 due: Reflection on Weeks 1 and 2 (Reflection)


▶ Assignment 3 handed out: “On fairness” – here’s a data set; create an
analysis that shows the ways in which it is fair (or not fair). Connect with the
PAIR example (what’s fair). A minimal analysis sketch follows below.
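
As a starting point for that analysis, here is a minimal, hypothetical sketch in plain pandas,
in the spirit of the PAIR What-If tool. The column names (group, label, prediction) and the
file name are placeholders, not part of the actual assignment data set.

    import pandas as pd

    # Assumed columns: "group" (protected attribute), "label" (ground truth 0/1),
    # and "prediction" (model output 0/1). These names are placeholders.
    df = pd.read_csv("dataset.csv")

    def group_stats(g):
        negatives = g[g["label"] == 0]
        return pd.Series({
            "n": len(g),
            # Demographic-parity view: how often each group gets a positive prediction.
            "positive_rate": (g["prediction"] == 1).mean(),
            # One slice of equalized odds: false positive rate per group.
            "false_positive_rate": (negatives["prediction"] == 1).mean()
                                   if len(negatives) else float("nan"),
        })

    print(df.groupby("group").apply(group_stats))

Large gaps between groups on either rate are the kind of evidence the assignment asks you
to surface and interpret.
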
Week 4.A: Oct 18, 2022 (Guest speaker: Merrie Morris)

* Accessibility as a North Star Challenge for Human-Centered AI Research

- Abstract: The World Health Organization estimates that more than one billion
people worldwide experience some form of disability; beyond the 15% of the
population experiencing permanent or long-term disability, nearly everyone
experiences temporary or situational impairments that would benefit from
accessible technology solutions. Emerging AI technologies offer huge potential
for enhancing or complementing people's sensory, motor, and cognitive
abilities. Designing human-centered systems to address accessibility scenarios
is a "north star" goal that not only has great societal value, but also provides
challenging and meaningful problems that, if solved, will fundamentally
advance the state of the art of both AI and HCI. Here I will reflect on challenges
and opportunities in designing human-centered AI systems for scenarios
including automated image description for people who are blind, efficient and
accurate predictive text input for people with limited mobility, and AI-enhanced
writing support for people with dyslexia.

Week 4.B: Oct 20, 2022 (Dan)

* Data Communication / Data Visualizations to improve HAI interactions

- See PAIR’s Facets and Know Your Data (to understand data sets with inferences
and labeling run over them)
- Datasets have World Views (PAIR tool)

Readings:
Carrie Cai et al (2019) Human-Centered Tools for Coping with Imperfect Algorithms
during Medical Decision-Making
Matthew Kay, Tara Kola, Jessica R. Hullman, Sean Munson (2016) When (ish) is My Bus?
User-centered Visualizations of Uncertainty in Everyday, Mobile Predictive Systems

Lab: PAIR Facets

◀ Assignment 3 due: “On fairness”


▶ Assignment 4 handed out: “Visualizing data for AI” Project to visualize /
understand the biases in a data set

Week 5.A: Oct 25, 2022 (Carrie Cai; Dan away)

* Interpreting and Explaining AI Algorithms and Systems


- Does telling people how an algorithm works change their experience or
understanding?

Readings:
Intelligible Artificial Intelligence Dan Weld (2018)
Why these Explanations? Selecting Intelligibility Types for Explanation Goals
The Mythos of Model Interpretability
Explainability and Trust (Google AI Handbook)

* How people think about risk with respect to AI/ML systems


- Disparate Interactions

Readings:
How humans interact with risk predictions, and how that leads to differences in fairness.
Do artifacts have politics?
Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day
Readmission
Week 5.B: Oct 27, 2022 (Been Kim)

* Building AI/ML with humans in the loop


- Building handoffs into AI systems
- Creating tools that express the inexpressible (CAV systems)

Readings:
Julian Ramos: personalized, context-aware health interventions.

Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, Eric Horvitz
(2019): Updates in Human-AI Teams: Understanding and Addressing the
Performance/Compatibility Tradeoff

Sharon Zhou, Melissa Valentine, Michael S. Bernstein (2018) In Search of the Dream
Team: Temporally Constrained Multi-Armed Bandits for Identifying Effective Team
Structures

Ting-Hao (Kenneth) Huang, Joseph Chee Chang, and Jeffrey P. Bigham Evorus: A
Crowd-powered Conversational Assistant Built to Automate Itself Over Time

Anca Dragan on self-driving cars

◀ Assignment 4 due: “Visualizing data for AI”


▶ Assignment 5 handed out: Project to build an AI system with a human in the loop.
Wizard of Oz example? Perhaps a group project! We’ll assign groups to get a
diversity of skills and backgrounds. (Everyone gets the same grade for the
assignment.)

Week 6.A: Nov 1, 2022 (Peter)

* Natural Language (part 1)


- building conversational AI systems
- Eliza (for history and as a cautionary tale; a minimal pattern-matching sketch follows this list)
- Her (Spike Jonze movie) for cultural resonance
- Dialogflow / LaMDA / MUM / Duplex
- chatbots in general
- chatbots for a specific domain
- chatbot architecture
- assistants (Google Assistant; Alexa; Siri; others)
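
To ground the ELIZA discussion, here is a minimal, hypothetical sketch of the classic
pattern-match-and-reflect loop in plain Python. The rules shown are illustrative, not
Weizenbaum's original script.

    # Tiny ELIZA-style responder: match a keyword pattern, reflect pronouns,
    # and fill a canned template. Rules here are illustrative only.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"(.*)", re.I), "Please tell me more."),
    ]

    def reflect(text: str) -> str:
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.match(utterance.strip())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please go on."

    print(respond("I need my coffee"))  # -> "Why do you need your coffee?"

The cautionary-tale part: responses like these carry no understanding at all, yet users
readily attribute intelligence to them.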

Readings:
ELIZA—a computer program for the study of natural language communication between
man and machine
Voice Interfaces in Everyday Life Porcheron, et al. CHI 2018
Calendar.help: Designing a Workflow-Based Scheduling Agent with Humans in the Loop

Week 6.B: Nov 3, 2022 (Peter)

* Natural Language (part 2)


- large language models
- GPT-3, BERT, PaLM, LaMDA, etc.
- What Have Language Models Learned? (PAIR tool; a small probing sketch follows this list)
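
In the spirit of the What Have Language Models Learned? explorable, here is a minimal
sketch of probing a masked language model's fill-in-the-blank predictions. It assumes the
Hugging Face transformers library and the bert-base-uncased checkpoint are available.

    # Probe what a masked language model "believes" fills a blank.
    # Assumes: pip install transformers (downloads bert-base-uncased on first run).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    for sentence in [
        "The doctor said [MASK] would be back soon.",
        "The nurse said [MASK] would be back soon.",
    ]:
        print(sentence)
        for p in fill_mask(sentence, top_k=3):
            print(f"  {p['token_str']:>6}  {p['score']:.3f}")

Comparing the pronoun probabilities across the two sentences is a quick, concrete way to
see what associations the model has picked up from its training data.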

◀ Assignment 5 due: “Project on Human in the AI loop”


▶ Assignment 6 handed out: Reflection on Weeks 5 and 6 (Reflection)

Week 7.A: Nov 8, 2022 (Dan here; Peter away)

* Computer vision
- Labeling parts of images
- Case study: Google Lens
- Other examples…
- Clearview AI - face recognition (what are the ethical implications of their
scraping behavior? What can be done about it? Should it be used?)
- PimEyes

Readings:
Walsh, Toby. "The troubling future for facial recognition software" Communications of
the ACM 65.3 (2022): 35-36.

A Face Search Engine Anyone Can Use Is Alarmingly Accurate. Kashmir Hill, NYTimes,
March, 2022.

Week 7.B: Nov 10, 2022 (Peter away)

* Computer perception (perception more generally)


- << TBD >>

◀ Assignment 6 due: Reflection on Weeks 5 & 6


▶ Assignment 7 handed out: Reflection on Week 7

Week 8.A: Nov 15, 2022 (Doug Eck, Peter away)

* AI & Art
- Doug Eck’s work on music synthesis.
- See also: deepmusic.ai (relevant?), other music+AI companies?
- See: g.co/tonetransfer (ML-based transformation of tone from A to B)

Readings:
Deep Dream github (iPython Notebook)

Hayes, B. Computer vision and computer hallucinations. American Scientist.

Field Guide for Making AI Art Responsibly Medium post By Claire Leibowicz and Emily
Saltz. Points to Field Guide

Google AI Turns Text Into Images (Petapixel) - overview

Imagen Outperforms DALL-E 2 (Medium post by Teemu Määttä)

AI-Designed Drugs (Financial Times article)

Google’s Imagen (Google’s website). Photorealistic Text-to-Image Diffusion Models with
Deep Language Understanding, technical paper by Chitwan Saharia, et al. DrawBench
spreadsheet (prompts for images)

Copyright issues caused by stable diffusion algorithms? Medium post by Aaron Brand

Week 8.B: Nov 17, 2022 (Dan)

* AI & Synthesis systems (DALL-E, et al.)


- Deep Dream
- Parti / Imagen / Craiyon / DALL-E / Midjourney / DreamStudio

- A (Google) Imagen; Parti article: Google has released their latest text-to-image
generation model, Parti. They provide a few prompts and showcase the differences
between versions of the model at 350M, 750M, 3B, and 20B parameters.

- One difference from last week's Imagen is that Parti is not diffusion-based: Imagen and
DALL-E 2 are diffusion models, whereas Parti is an autoregressive sequence-to-sequence
model, a scaled-up Transformer generating image tokens from a ViT-VQGAN tokenizer.

- DreamStudio
- OpenAI’s text-to-image generator DALL-E (and others… including Google's Imagen)
- Diffusion models - what they are / how they work
- How people can understand what they’re doing.
- How can AI build creativity support tools?
- A small diffusion system you can run on your laptop (a minimal sketch follows this list)
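
As a starting point for experimenting locally, here is a minimal, hypothetical sketch using
the Hugging Face diffusers library. The checkpoint name, prompt, and settings are
assumptions for illustration; expect slow generation without a GPU.

    # Minimal text-to-image diffusion sketch.
    # Assumes: pip install diffusers transformers torch, plus acceptance of the
    # model license on the Hugging Face Hub.
    import torch
    from diffusers import StableDiffusionPipeline

    model_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint; swap in any you have access to
    device = "cuda" if torch.cuda.is_available() else "cpu"

    pipe = StableDiffusionPipeline.from_pretrained(model_id).to(device)

    # Fewer inference steps = faster but lower quality; the guidance scale trades
    # prompt fidelity against image diversity.
    image = pipe(
        "a watercolor painting of a bicycle in the rain",
        num_inference_steps=25,
        guidance_scale=7.5,
    ).images[0]
    image.save("bike.png")

Exposing knobs like the number of denoising steps and the guidance scale is one concrete
way into the question above of how people can understand what these systems are doing.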

◀ Assignment 7 due: Reflection on Week 7


▶ Assignment 8 handed out: Project to create and analyze some AI art

Thanksgiving Week: Nov 21 - 25, 2022

● No classes this week

Week 9.A: Nov 29, 2022 (Dan, Peter)

* AI and writing natural language

- Epistemic neural networks
- The dividing line between symbolic reasoning and neural approaches
- “Prompt engineering” (to control systems like image-synth diffusion or
GPT-3-like text generators); a small prompt-construction sketch follows this list
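
To make “prompt engineering” concrete, here is a minimal, hypothetical sketch of
assembling a few-shot prompt for a GPT-3-like completion model. The examples and
template are illustrative; the final string would be sent to whichever text-completion
API you are using.

    # Build a few-shot prompt: instructions, worked examples, then the new input.
    # The completion model is expected to continue the pattern.
    EXAMPLES = [
        ("The movie was a waste of two hours.", "negative"),
        ("I couldn't stop smiling the whole way home.", "positive"),
    ]

    def build_prompt(new_review: str) -> str:
        lines = ["Classify the sentiment of each review as positive or negative.", ""]
        for review, label in EXAMPLES:
            lines.append(f"Review: {review}")
            lines.append(f"Sentiment: {label}")
            lines.append("")
        lines.append(f"Review: {new_review}")
        lines.append("Sentiment:")  # the model completes this line
        return "\n".join(lines)

    print(build_prompt("The plot dragged, but the ending surprised me."))

The choice and ordering of examples, and even small wording changes in the instruction
line, can noticeably change what the model produces.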

* Can AI systems write code too? (see this Tweet) (Peter)


- What about the cherry-picking of examples?
- Can they write Python? How about SQL? XML?

Readings:
A.I. Is Mastering Language. Should We Trust What It Says? Steven Johnson, NYTimes.

Osband, Ian, Zheng Wen, Mohammad Asghari, Morteza Ibrahimi, Xiyuan Lu, and
Benjamin Van Roy. "Epistemic neural networks." arXiv preprint arXiv:2107.08924 (2021).

Week 9.B: Dec 1, 2022

* How to engineer the data needed to build system X

- What kinds of data do you need to collect?
- Data is an asset / liability (do you really want to own it?)
- Do you really have the data you need?

* How to build human-in-the-loop systems
- The whole process: deciding what data is needed, then designing / building a system to
elicit it. Use Mechanical Turk? Train the people? Have a labeling process, with quality
checks like the agreement sketch below?
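
One concrete piece of that labeling process is checking whether two annotators agree well
enough to trust the labels. Here is a minimal sketch, assuming scikit-learn is available; the
two label lists are hypothetical.

    # Measure inter-annotator agreement on a shared batch of labeling tasks.
    # Low agreement usually means the labeling guidelines need work, not the workers.
    from sklearn.metrics import cohen_kappa_score

    rater_a = ["spam", "ham", "spam", "spam", "ham", "ham"]
    rater_b = ["spam", "ham", "ham",  "spam", "ham", "spam"]

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level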

◀ Assignment 8 due: Create and analyze some AI art


▶ Assignment 9 handed out: Reflection on Week 9
▶▶ Final Assignment 10 handed out: Small group presentation on HCI/AI.
Same groups as before?

Week 10.A: Dec 6, 2022 (Peter)

* Where the future leads? (Peter’s vision)


- The coming AI spring/summer/winter … or apocalypse?
- Thinking about where AI will go in the next 5-10 years.
- Can modern ML systems do symbolic reasoning? (What would this mean for the
UX?)
- Common sense reasoning –ever possible? What will it take to get there? What
will it mean for your mental model of AI systems?
- Sentience?
- Will we get to General AI anytime soon? (GAI) Contrasting visions. What would
that be like?
- Comparing visions of Musk / Gates / Norvig / Russell / Dean / LeCun (etc.)

Readings:
Can chain-of-thought prompting make large language models do reasoning? (An example prompt follows.)
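
For a concrete sense of what that reading is about, here is a hypothetical chain-of-thought
prompt in the style of the chain-of-thought prompting work: the worked example spells out
its reasoning, which encourages the model to reason step by step on the new question.

    # A one-shot chain-of-thought prompt; the worked example shows its reasoning.
    COT_PROMPT = """\
    Q: A class has 12 students. Each student brings 3 pencils. How many pencils are there?
    A: Each of the 12 students brings 3 pencils, so there are 12 * 3 = 36 pencils. The answer is 36.

    Q: A library has 5 shelves. Each shelf holds 40 books, and 27 books are checked out. How many books remain on the shelves?
    A:"""

    print(COT_PROMPT)  # send this to a large language model; compare with a prompt that omits the reasoning steps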

Week 10.B: Dec 8, 2022 (Dan)


* Where the future leads? (Dan’s vision)

Additional Resources for Students:


- RAI-HCT Resources for AI Curricula (PAIR teachables)
- Designing Machine Learning (from Michelle Carney, d.school at Stanford)
- PAIR’s AI Explorables (widgets to explain complex AI concepts, e.g., differential
privacy, language model learning, why some models leak data, measuring
diversity, hidden bias)
- Teachable Machine (ML exploration)
- AI Transparency and explainability (Carrie Cai)
- Fundamental Challenges in Human-AI interaction (Ed Chi; 2018)

Finals Week 11: Dec 15, 2022, 3:30 - 6:30 PM

* Final exam: Group presentations

◀◀ Final Assignment 10 due: Small group presentation on HCI/AI topic

==============================================
LINK to additional notes (for Dan)

DETAILS:

There are 10 weeks to the term. Each week we’ll have 2 classes, each 90 minutes long.
Monday, November 7, 2022 (5:00 p.m.) is the term withdrawal deadline: the last day to submit a
Leave of Absence to withdraw from the university with a partial refund, per the Stanford
Academic Calendar 2022-2023.

Tuesday, November 8 2022 is a holiday (Democracy Day), a day of civic service (Stanford
Holidays 2022).

Stanford Thanksgiving break 2022 is from Monday, November 21 2022 to Friday, November
25 2022.

Friday, December 9, 2022 is the last day of classes. It is also the last opportunity to
arrange an Incomplete in a course.

The final will be held in class on Dec 15, 2022 at 10:30AM. Each team will make a
presentation of 5 minutes on a topic of their choosing. (Per Stanford schedule.)
