AI Redefining The Future of Psychology

The document discusses the transformative impact of artificial intelligence (AI) on the field of psychology, emphasizing the need for psychologists to actively engage with AI technologies to enhance mental health care and research. It outlines various training programs for psychologists to integrate AI into their practice while addressing ethical considerations and patient privacy. The document also highlights the potential benefits of AI in improving therapeutic relationships and the efficiency of mental health services.


Artificial Intelligence

Redefining the Future of Psychology



Professional Coach Certification for Psychologists

Looking to use your skills in a new way with growth-minded clients? Earn your specialty coaching credential from the accredited post-graduate executive coaching institute founded by psychologists.

FIVE OPTIONS
How to Use AI in Coaching: 8 HOURS
Certified Professional Coach Certification: 72 HOURS
Advanced Personal and Executive Coach Certification: 128 HOURS
Positive Psychology-based Wellness Coach Certification: 75 HOURS
EQI2.0 and EQ360 Assessment Certification: 16 HOURS

“The presentations were dynamic and content-rich. My coaching practice has benefited as a result of my participation at College of Executive Coaching. I look forward to integrating the training into my coaching and leadership work at Mayo Clinic.”
LISA HARDESTY, PH.D., L.P., ACPEC

All training is delivered virtually—live and online—supplemented by on-demand video modules.

APA: The College of Executive Coaching is approved by the American Psychological Association to sponsor continuing education for psychologists. College of Executive Coaching maintains responsibility for this program and its content. Accredited by the International Coach Federation for ICF Credentials. Accredited by the National Board of Health and Wellness Coaches for the National Board Certified Health and Wellness Coach Credential delivered by the National Board of Medical Examiners, the same prestigious board that licenses physicians.

Ph.D. and Master’s Level Faculty

DOWNLOAD THE FIRST CHAPTER AND ARTICLES FREE AT WEBSITE
(800) 475-1837
executivecoachcollege.com
CONTENTS
4 AI’s Profound Impact

5 Harnessing the Power of AI

9 Artificial Intelligence: A New Chapter in Psychology

10 Addressing Equity and Ethics of Artificial Intelligence

14 Steps to Evaluate an AI-Enabled Clinical or Administrative Tool

17 Artificial Intelligence and Psychological Research: Can AI Replace Human Participants?

19 Classrooms Are Adapting to the Use of Artificial Intelligence

24 The Promise and Perils of Using AI for Research and Writing

28 Quoting or Reproducing ChatGPT Text or Other Generative AI Tools

30 Five Questions for Melissa Smith

32 Survey Reveals Job Loss and Privacy Fears Over Workplace AI

33 In a World with Artificial Intelligence, What Can I Do With a Psychology Degree?


COVER: VITALII GULENOK/GETTY IMAGES



AI’s Profound Impact
Psychology’s enormous opportunities to shape AI’s influence
BY ARTHUR C. EVANS JR., PHD
From Monitor On Psychology, July 2024

These days, artificial intelligence (AI) is a common topic of conversation with strong—but not always recognized—connections to psychology. These connections often fall into two broad categories, both of which require our field to be proactive and strategic.

First, AI will have a profound impact on every aspect of our field. Whether you provide mental health services, conduct research, teach, consult, or facilitate the application of psychological science in different settings, industries, or systems, AI will affect what we do and how we do it. We already see evidence of this disruption taking place around issues like scholarly publishing, complex statistical analyses, and the precision and efficiency of psychological assessment. The advancement of AI offers extraordinary opportunities for innovation, and we do not have the luxury of shying away. Instead, we must position ourselves to shape and evolve along with these technologies. We must demonstrate that, as a field, we serve in a wide range of important roles that can be supplemented—but not supplanted—by AI.

Additionally, psychological science can inform the development and use of AI. Every area of psychology can and should contribute—human factors, cognitive, social, developmental, and more. We can use our scientific understanding to help AI minimize algorithmic biases and bring a human-centered perspective to its safe and effective design. We have expertise on issues like ethics and the psychology of privacy to ensure that AI is promoting positive outcomes, not generating harm or fueling manipulation. This provides a tremendous opportunity to demonstrate the breadth of our field by fully engaging in this evolving topic, intentionally seeking out opportunities to apply our science and knowledge to AI, and elevating the invaluable role psychology plays.

As an association, APA is taking a multipronged approach to AI—both supporting the adaptation of our profession and discipline and infusing psychology into the global conversation. I hope you will help us embrace the unprecedented possibilities before us.

Arthur C. Evans Jr., PhD, is the chief executive officer of APA. You can follow him on LinkedIn.



Harnessing the Power of AI

As artificial intelligence transforms our world, psychologists are working to channel its power and limit its harm
BY ZARA ABRAMS
From Monitor On Psychology, January/February 2025



Artificial intelligence (AI), which is driving what some call the fourth industrial revolution, has been a harbinger of change. Its power and potential have wrought excitement and fear across nearly every sector of society, from finance and transportation to education and health care.

“Because AI has surpassed the limits of pattern recognition that individuals can do, naturally it becomes impressive to us when generative AI tools work well,” said Ericka Rovira, PhD, a professor of engineering psychology at the United States Military Academy at West Point who studies human-technology interaction. “But as psychologists, because our work impacts people’s lives in such serious ways, we cannot trade magic for the explainable.”

Far from observing the AI revolution from the sidelines, psychologists are rolling up their sleeves to develop, examine, and integrate AI tools—both into the discipline and across society at large. APA’s policy statement “Artificial Intelligence and the Field of Psychology,” released in August 2024, outlines how psychologists can shape AI and how the new technologies are changing the field.

In clinical practice, psychologists are exploring how large language models (a type of generative AI focused on natural language processing) can increase efficiency, expand access to mental health support, and even deepen the therapeutic relationship. On the research front, AI tools are accelerating studies across psychology while generating new courses of inquiry in the human-technology interaction field. And psychologists increasingly have a seat at the table in broader conversations about ethical data use and product development, as well as the role AI should have in our health, work, and relationships.

“As psychologists, we have the opportunity—the responsibility, even—to guide and shape the future of mental health,” said Jessica Jackson, PhD, a Houston-based licensed psychologist, chair of the APA Mental Health Technology Advisory Committee, and a member of the new U.S. Food and Drug Administration Digital Health Advisory Committee. “We do not have to observe this process of development from the sidelines.”

AI in the Clinic
Bringing AI into the clinic can be a daunting task—one that requires technological know-how and ethical deliberation—but experts say it’s time to do so. One helpful tool for clinicians is the “Companion Checklist: Evaluation of an AI-Enabled Clinical or Administrative Tool” created by APA’s Office of Health Care Innovation.

“The time is now to start integrating AI,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine, who wrote an APA guidebook on integrating AI into practice, due out in 2025. “Behavioral health professionals need to be competent on the boundaries and risks of AI but also on how it can benefit their practice.”

The forthcoming guidebook explores how AI can help with everything from administrative and back-office tasks to assessment, direct care, and patient self-care. It is intended as a practical manual to give psychologists and other providers of mental health care the knowledge they need to get started.

One tool that uses large language models and may soon become standard: chatbots that assess for depression, suicide risk, and other issues in a conversational and empathetic way. Luxton is one of several researchers developing such a chatbot, which he said could offer advantages over the standard nine-item Patient Health Questionnaire (PHQ-9), which can feel redundant for patients over time, affecting the accuracy of their responses.

“What we hope to assess is: How do both clinicians and clients feel about using an automated interactive conversational tool? Could these systems actually be more accurate and reliable than traditional screening tools and assessments?” Luxton said.

More and more therapists are also using AI to transcribe and analyze their session notes. While it can improve efficiency, psychologists should also consider the very real privacy concerns with storing patient data in the cloud, said Margaret Morris, PhD, a clinical psychologist in private practice and an affiliate faculty member at the University of Washington who studies digital mental health interventions. For example, patients who live in states where abortion is banned might hesitate to discuss the procedure during a session.

[Photo: AI has the potential to increase access to care and help psychologists monitor certain conditions, such as suicide risk.]

“If you as a patient don’t feel your conversations with a therapist are really your own, I think the whole pursuit changes,” Morris said.

Clinicians using such tools should explore whether patient data must be stored in the cloud. Morris encourages clinicians to ask questions such as: Who has access to the data? Who will have access to it in the future? Is it used to train an AI model? Can patients later retract their data?

It’s also important to consider how switching to automated transcription may change the workflow and risk shortchanging patients, Morris said. How can providers continue to do the critical analysis and formulation they previously did while writing notes?

Promising advances include opportunities to run AI transcription software on powerful local computers, using open-source large language models, such as Llama from Meta, tuned for mental health. Seeing clinicians opt for that method is promising, Morris said, because it shows a sense of agency in using tools to serve a specific purpose rather than simply adopting a popular new product.

“If you can put the therapeutic relationship first and then use the technology in service of that relationship, it’s possible to get some value from these tools—and not be the product yourself,” she said.

New Directions In Practice
Morris and a team at the University of Washington, led by graduate student Zachary Englhardt and computer scientist Vikram Iyer, PhD, are also studying how generative AI could help therapists analyze data from patients’ daily lives, for example, from wearable devices. In a study where clinicians interacted with ChatGPT around its analysis of wearable data, they envisioned using such data to foster coinvestigation with patients. Clinicians wanted to query the AI collaboratively with patients, with the patients deciding what data to share and how to interpret patterns (Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 8, No. 2, 2024).

For example, the meaning of activity and phone usage metrics varies from one person to the next. A patient who experiences high anxiety at the start of new romantic relationships might track the number of times they unlock their phone screen as an indicator of their shifting emotional states.

“What I love about this is that it’s putting AI in the service of this human relationship, the therapeutic alliance that we know to be so important for therapy,” Morris said.

Large language models can also analyze session transcripts to aid in supervision and training, including by highlighting instances where a trainee departed from a treatment protocol, then providing real-time feedback. For example, mpathic and Lyssn are AI models trained by mental health care providers to recognize qualities such as empathy during conversations, including reflective listening, affirmations, and open-ended questions. Such technology can help busy supervisors target key moments for feedback—or the AI itself can offer trainees an alternative way to phrase a response, for instance, by rephrasing a directive as a question.

While products on the market today are relatively simplistic—for instance, scoring adherence to a cognitive behavioral therapy protocol—Morris is hopeful that future tools will provide insights on psychodynamic and other insight-based approaches. Other training opportunities include simulations that afford therapists-in-training a safe place to explore various approaches.

Key Points
■ Experts say it’s time to start integrating AI tools into psychological practice, while prioritizing patient privacy.
■ Large language models and wearable technology offer exciting ways to improve therapy and training.
■ Ongoing research explores how to safely integrate AI, including questions about trust and what happens when technology makes mistakes.

ESSENTIAL RESOURCES ON AI AND PSYCHOLOGY
Psychologists are leading a range of efforts focused on safely leveraging AI to improve research, education, and mental health. To get involved, explore the following resources.
■ Practitioners can use the “Companion Checklist: Evaluation of an AI-Enabled Clinical or Administrative Tool” created by APA’s Office of Health Care Innovation to guide their use of AI tools in therapy.
■ The Society for Digital Mental Health unites experts for the advancement of digital mental health, including through an annual meeting.
■ The International Society for Research on Internet Interventions connects international researchers studying digital mental and behavioral health.
■ The Journal of Medical Internet Research publishes cutting-edge research on digital health.
■ APA’s Mental Health Technology Advisory Committee engages in a range of activities aimed at centering psychology in the development of new digital mental health technologies.
■ The Chronicle of Higher Education reports on AI’s growing impact on colleges and universities.
■ The journal AI & Society: Knowledge, Culture and Communication publishes interdisciplinary research on AI’s broader impacts, ranging from social and cognitive to ethical and philosophical.
■ The Royal Society’s report Science in the Age of AI explores how artificial intelligence is changing scientific research across disciplines.
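For reference, the nine-item PHQ-9 mentioned earlier reduces to simple arithmetic: each item is rated 0–3, and the 0–27 total maps onto the instrument’s standard severity bands. A minimal sketch of that scoring follows; the function name and band labels are my own choices, while the item scale and cut points are the scale’s published ones. A chatbot that elicits these ratings conversationally would still bottom out in the same sum.

```python
# Standard PHQ-9 severity bands: (low, high, label) over the 0-27 total.
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(item_ratings):
    """Sum nine 0-3 item ratings and label the standard severity band."""
    if len(item_ratings) != 9 or any(not 0 <= r <= 3 for r in item_ratings):
        raise ValueError("PHQ-9 expects nine ratings, each 0-3")
    total = sum(item_ratings)
    label = next(band for lo, hi, band in SEVERITY_BANDS if lo <= total <= hi)
    return total, label

print(score_phq9([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # total of 9 falls in "mild"
```

In practice the score is a screening aid, not a diagnosis; the clinical judgment the article emphasizes sits on top of this arithmetic.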



“For example, imagine a trainee interacting with a simulated patient—one that has realistic facial expressions and body language and responds in predictable ways,” said Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction.

It’s well known that when generative AI is trained on incomplete data, it can draw incorrect conclusions that perpetuate bias in society. But AI can also help improve health equity if used strategically, said psychologist Adrian Aguilera, PhD, an associate professor at the University of California, Berkeley, and the University of California, San Francisco, who studies how technology can reduce health disparities.

For instance, psychologists might prefer to build culturally adapted mental health interventions from the ground up, but resources to do so are often limited. With much less effort, large language models can help providers tailor an intervention for a specific population—not just translating a protocol into Spanish, for example, but adapting it for Spanish speakers from a Latin American country, complete with relevant cultural metaphors.

“That becomes a deeper adaptation that certainly needs to be reviewed, updated, and refined by humans, but it allows us to create these tailored interventions with much less effort than needed in the past,” Aguilera said.

At mpathic, researchers are also exploring how AI can be used to detect high versus low cultural attunement during therapy sessions, then provide real-time feedback when needed.

Research And Society
AI has already been a major boon for the scientific research process, including by generating R code, a programming language often used for statistical analysis, and by creating experimental stimuli such as text or video vignettes. But generative tools such as ChatGPT also invent false references (often referred to as AI hallucinations) and may even fabricate data, Rovira said.

Another open question is to what degree AI can or should be used for manuscript writing. Doing so could save time, accelerate scientific discovery, and make science more accessible to non-native English speakers, Rovira said, but it is not without its risks. Journals may require disclosure when AI is used for manuscript writing, but psychology as a field should also consider what degree of generative AI use is acceptable at any stage of the research process (Tay, A., Nature Index, Aug. 17, 2021).

“This explosion in manuscripts could break the peer review system as we know it,” Rovira said. “So, what is our role as a society of professionals in deciding the limits of AI for these tasks?”

Rovira and other human factors researchers are also asking new questions about human-technology interaction that could prove critical to using AI safely. For example, past research shows that people with better working memory and attentional control tend to be less easily duped when automation fails (Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63, No. 1, 2019). Does that trend hold true for AI agents, such as when ChatGPT provides bogus references? In another line of research, Rovira and her colleagues are exploring whether people are more likely to trust AI agents when they use good social etiquette.

“People need to stay engaged in the decision-making, which can be hard when these systems operate incredibly quickly and smoothly,” she said. Human factors psychologists can help structure human-AI interactions in a way that keeps people in the loop to provide context for safety reasons.

Before using AI tools in mental health care and beyond, psychologists have a responsibility to look past marketing language and evaluate each product with a critical eye, Behrend said. They can also help shape ethical development principles set by government regulators and oversight organizations, such as collecting the least amount of data necessary and providing transparency about exactly how it will be used, Aguilera said.

“All of this is moving so rapidly, so that regulation needs to be an iterative process,” he said. “But having basic principles can at least provide some guard rails.”

It would be foolish to ignore the dangers AI can pose when used with bad intent. Social media algorithms already manipulate us in a very personalized way, Luxton said, and similar data can be used for reputational harm and psychological manipulation. Disparities in who has access to and control of AI are also likely to cause problems across society, he said. Far from a deterrent, he hopes those truths can galvanize psychologists to get involved.

“Change brings anxiety, and being critical is extremely important,” Aguilera said. “But the AI revolution is happening, so let’s engage and try to steer things in a direction that’s for better rather than for worse.”

FURTHER READING
Artificial intelligence. APA Services, 2024.
Lawrence, H. R., et al. The opportunities and risks of large language models in mental health. Journal of Medical Internet Research Mental Health, 2024.
Stade, E. C., et al. Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. npj Mental Health Research, 2024.
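The statistical scripts that researchers now ask generative tools to draft are typically short. As a concrete sense of the task, here is a hedged sketch of that kind of script; the group scores are invented, and it uses Python’s standard library rather than the R the article mentions, so it stays self-contained. The point the article makes still applies: any such generated code needs human verification before it drives a published result.

```python
import statistics as st

# Made-up scores for two groups in a hypothetical two-arm study.
control = [12, 14, 11, 13, 12, 15, 13, 14]
treatment = [16, 18, 15, 17, 19, 16, 18, 17]

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic for two groups."""
    na, nb = len(a), len(b)
    va, vb = st.variance(a), st.variance(b)  # sample variances (n - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

print(round(independent_t(treatment, control), 2))
```

The resulting t would then be compared against a t distribution with n_a + n_b − 2 degrees of freedom (14 here), which is exactly the step where a hallucinated formula or fabricated dataset would quietly corrupt a finding.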



Artificial Intelligence: A New Chapter in Psychology
AI has the potential to revolutionize psychology, offering powerful tools to
advance research, improve clinical practice, and transform education. Yet, it
also brings significant risks that demand careful attention. Here’s an overview
of the opportunities and challenges AI presents for the field of psychology.

BENEFIT
Efficient Data Analysis: AI can process and analyze vast amounts of research data quickly, identifying trends and insights that might take humans much longer to uncover, thus accelerating the pace of psychological research.
RISK
Bias and Inequity: AI systems can perpetuate or even amplify existing biases in data, leading to unfair treatment of marginalized groups. If algorithms are trained on biased data, they may produce skewed results that exacerbate inequalities in mental health care.

BENEFIT
Increased Accessibility and Affordability: AI-driven tools, such as chatbots and virtual therapists, can provide support to individuals who may not have easy access to traditional mental health services, especially in remote or underserved areas. By streamlining processes and increasing efficiency, AI can help reduce the costs associated with mental health care, making it more affordable and accessible.
RISK
Lack of Accountability: When AI systems make mistakes, it can be challenging to identify accountability. This can lead to confusion and frustration for clients and practitioners alike, complicating the healing process.

BENEFIT
Enhanced Diagnosis: AI can analyze large datasets to identify patterns and correlations, aiding in more accurate and timely diagnoses of mental health conditions.
RISK
Misdiagnosis and Mismanagement: AI systems may lack the nuanced understanding that a trained clinician possesses. Misdiagnoses or inappropriate treatment recommendations can result from overly simplistic algorithms.

BENEFIT
Personalized Treatment: By leveraging data, AI can help create tailored treatment plans that consider individual client characteristics, preferences, and responses, leading to more effective interventions. Wearable devices and apps can track emotional and physiological responses, allowing for continuous monitoring and timely interventions when needed.
RISK
Over-reliance on Technology: There is a risk that practitioners might over-rely on AI tools, potentially undermining the human element of therapy. This could lead to a diminished therapeutic relationship and neglect of individual client needs.

BENEFIT
Support for Therapists: AI tools can assist clinicians in administrative tasks, documentation, and treatment recommendations, freeing up more time for direct patient interaction.
RISK
Privacy Concerns: The use of AI often involves handling sensitive personal data. Inadequate safeguards can lead to breaches of confidentiality, compromising clients’ trust and safety.

BENEFIT
Improved Training: AI can simulate clinical scenarios for training purposes, providing psychology students and practitioners with valuable hands-on experience in a controlled environment.
RISK
Ethical Dilemmas: The rapid development of AI can outpace ethical guidelines, leading to practices that prioritize efficiency over care. This raises concerns about the moral implications of decisions made by machines.
Source: OpenAI. (2023). ChatGPT [Large language model]. https://chat.openai.com/chat



Addressing Equity and Ethics of Artificial Intelligence
Algorithms and humans both contribute to bias in artificial intelligence, but AI
may also hold the power to correct or reverse inequities among humans
BY ZARA ABRAMS
From Monitor On Psychology, April 2024

As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who spoke at the 2024 Consumer Electronics Show (CES) on “Harnessing the Power of AI Ethically.”

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risks Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who also spoke at CES on harnessing AI ethically. “We can’t just be dismissive and say, ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”

With that complexity in mind, world leaders are exploring how to maximize AI’s benefits and minimize its harms. In 2023, the Biden administration released an executive order on Safe, Secure, and Trustworthy AI, and the European Union came close to passing its first comprehensive AI legislation, the AI Act. Psychologists, with expertise on cognitive biases and cultural inclusion, as well as in measuring the reliability and representativeness of datasets, have a growing role in those discussions.

[Photo: Experts discuss harnessing the power of AI ethically at an APA-sponsored session at the Consumer Electronics Show in January. Left to right: Drs. Lindsay Childress-Beatty, Nathanael Fast, Rhoda Au, and David Luxton.]

“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”

Bias in algorithms
Government officials and researchers are not the only ones worried that AI could perpetuate or worsen inequality. Research by Mindy Shoss, PhD, a professor of psychology at the University of Central Florida, shows that people in unequal societies are more likely to say AI adoption carries the threat of job loss (Technology, Mind, and Behavior, Vol. 3, No. 2, 2022).

Those worries about job loss appear to be connected to overall mental well-being. For example, about half of employees who said they were worried that AI might make some or all of their job duties obsolete also said their work negatively impacted their mental health. Among those who did not report such worries about AI, only 29% said their work worsened their mental health, according to APA’s 2023 Work in America survey.



“In places where there’s a lot of
inequality, those systems essen-
tially create winners and losers,”
Shoss said, so there is greater con-
cern about how AI tools could
be used irresponsibly—even
maliciously—to make things worse.
Those fears are not unfounded. Biased algorithmic decision-making has been reported in health care, hiring, and other settings. It can happen when the data used to train a system is inaccurate or does not represent the population it intends to serve. With generative AI systems, such as ChatGPT, biased ­decision-making can also happen unexpectedly due to the “black box” issue, which refers to the fact that even an algorithm’s developers may not understand how it derives its answers.

[Photo caption: Biased algorithmic decision-making, reported in areas like health care and hiring, raises significant concerns, prompting calls for careful auditing of AI tools, transparency in algorithmic learning processes, and testing.]

“Even if we give a system the best available data, the AI may start doing things that are unpredictable,” Luxton said.

Examples include a recruiting tool at Amazon that preferred male candidates for technical jobs and Replika, an AI companion that harassed some of its users. Avoiding such issues requires careful auditing of AI tools—including testing them in extreme scenarios before they are released—but it also requires significantly more transparency about how a given algorithm learns from data, Luxton said.

On top of technical audits, Behrend and Richard Landers, PhD, a professor of industrial-organizational psychology at the University of Minnesota Twin Cities, have published guidelines for conducting a “psychological audit” of an AI model, or evaluating how it might impact humans (American Psychologist, Vol. 78, No. 1, 2023). That includes direct effects, such as who is recommended by a hiring algorithm, as well as broader ripple effects on organizations and communities.

The audit employs basic principles of psychological research to evaluate fairness and bias in AI systems. For example: Where is the data used to train an AI model coming from, and does it generalize to the population the tool intends to serve? Were the data collected using sound research methods, or were limitations introduced? Are developers making appropriate inferences from that data?

Conversations about algorithmic bias often center around high-stakes decision-making, such as educational and hiring selection, but Behrend said other applications of this technology are just as important to audit. For example, an AI-driven career guidance system could unintentionally steer a woman away from jobs in STEM (science, technology, engineering, and math), influencing her entire life trajectory.

“That can be potentially hugely consequential for a person’s future decisions and pathways,” she said. “It’s equally important to think about whether those tools are designed well.”

Even if an algorithm is well designed, it can be applied in an unfair way, Shoss said. For example, a system that determines salaries and bonuses could be implemented without transparency or human input—or it could be used as one of a series of factors that guide human ­decision-making. In that sense, using AI ethically requires asking the same questions that evaluate any other organizational change: Is it done with trust, transparency, and accountability?

Human error

An algorithm itself may be biased, but humans can also introduce inaccuracies based on how they use AI tools.

“AI has many biases, but we’re often told not to worry because there will always be a human in control,” said Helena Matute, PhD, a professor of experimental psychology at Universidad de Deusto in Bilbao, Spain. “But how do we know that AI is not influencing what a human believes and what a human can do?”

In a study she conducted with graduate student Lucía Vicente, participants classified images for a simulated medical diagnosis either with or without the help of AI. When the AI system made errors, humans inherited the same biased ­decision-making, even when they stopped using the AI (Scientific Reports, Vol. 13, 2023).

Artificial Intelligence: Redefining the Future of Psychology 11


“If you think of a doctor working with this type of assistance, will they be able to oppose the AI’s incorrect advice?” said Matute, adding that human users need the training to detect errors, the motivation to oppose them, and the job security to speak up about it.

Decades of psychological research clearly show that once humans inherit a bias or encounter misinformation, those beliefs are hard to revise. Celeste Kidd, PhD, an assistant professor of psychology at the University of California, Berkeley, argues that assumptions about AI’s capabilities, as well as the way many tools present information in a conversational, matter-of-fact way, make the risk of inheriting stubborn biases particularly high (Science, Vol. 380, 2023).

“By the point [that] these systems have transmitted the information to the person . . . it may not be easy to correct,” Kidd said in a press release from the university (Berkeley News, June 22, 2023).

Companies also can—and do—intentionally leverage AI to exploit human biases for gain, said Matute. In a study of simulated AI dating recommendations, she and graduate student Ujué Agudo found that participants were more likely to agree to date someone whose profile they viewed more than once, a choice she said is driven by the familiarity heuristic (PLOS ONE, Vol. 16, No. 4, 2021). Guidelines for ethical AI should consider how it can be designed to intentionally play on cognitive biases and whether that constitutes safe use, she added.

“We all have cognitive biases, and AI can be used to exploit them in a very dangerous way,” Matute said.

Working toward “fairness”

While poorly designed algorithms can perpetuate real-world biases, AI may also hold the power to correct or reverse inequities among humans. For example, an algorithm could detect whether a company is less likely to hire or promote women, then nudge leaders to adjust job ads and decision-making criteria accordingly.

“There are risks here, too, and it’s equally important to have transparency about these types of systems—how they’re deriving answers and making decisions—so they don’t create distrust,” Luxton said.

Using AI to reverse bias also requires agreeing on what needs to change in society. The current approach



[Photo caption: The algorithms powering dating apps Tinder, Hinge, and the League are designed to addict users to these platforms rather than help them meet suitable partners, say plaintiffs in a federal lawsuit filed in February 2024.]

to building AI tools involves collecting large quantities of data, looking for patterns, then applying them to the future. That strategy preserves the status quo, Behrend said—but it is not the only option.

“If you want to do something other than that, you have to know or agree what is best for people, which I don’t know that we do,” she said.

As a starting point, Behrend is working to help AI researchers, developers, and policymakers agree on how to conceptualize and discuss fairness. She and Landers distinguish between various uses of the term, including statistical bias versus equity-based differences in group outcomes, in their recent paper.

“These are noncomparable ways of using the word ‘fairness,’ and that was really shutting down a lot of conversations,” Behrend said.

Establishing a common language for discussing AI is an important step for regulating it effectively, which a growing contingent is seeking to do. In addition to Biden’s 2023 executive order, New York State passed a law requiring companies to tell employees if AI is used in hiring or promotion. At least 24 other states have either proposed or passed legislation aiming to curtail the use of AI, protect the privacy of users, or require various disclosures (U.S. State-by-State AI Legislation Snapshot, BCLP Law, 2023).

“It’s pretty difficult to stay on top of what the best practice is at any given moment,” Behrend said. “That’s another reason why it’s important to emphasize the role of psychology, because basic psychological principles—reliability, validity, fairness—don’t change.”

Luxton argues that executive orders and piecemeal legislation can be politicized or inefficient, so policymakers should instead focus on establishing standard guidelines and best practices for AI. That includes requiring developers to show an audit trail, or a record of how an algorithm makes decisions. (Luxton is also writing a guidebook for behavioral health practitioners on integrating AI into practice.) When challenges arise, he suggests letting those play out through the judicial system.

“Government does need to play a role in AI regulation, but we also want to reduce the inefficiencies of government roadblocks in technological development,” Luxton said.

One thing is clear: AI is a moving target. Using it ethically will require continued dialogue as the technology grows ever more sophisticated.

“It’s not entirely clear what the shelf life of any of these conversations about bias will be,” said Shoss. “These discussions need to be ongoing, because the nature of generative AI is that it’s constantly changing.”

FURTHER READING

How psychology is shaping the future of technology
Straight, S., & Abrams, Z., APA, 2024

Speaking of Psychology: How to use AI ethically with Nathanael Fast, PhD
APA, 2024

Worried about AI in the workplace? You’re not alone
Lerner, M., APA, 2024

The unstoppable momentum of generative AI
Abrams, Z., Monitor on Psychology, January/February 2024



Steps to Evaluate an AI-Enabled Clinical or Administrative Tool

This step-by-step guide discusses how to decide which AI tools are right for your practice

Artificial intelligence (AI) continues to develop rapidly and is being integrated into many facets of daily life. Increasingly, AI-enabled tools are being developed for use in mental health care. These tools have a wide range of functionality; some focus on streamlining administrative tasks like scheduling or documentation, while others focus on providing clinical supports to augment traditional therapy practices. Given the proliferation of tools on the market, it is important for psychologists to develop a process to assess which tools may be right for their practice.

Following is a step-by-step guide that highlights many of the important considerations when assessing digital tools that use AI technology.

1. Company (vendor/device maker/developer)

It is important to understand who is on the leadership team of the company. If a tool is designed for use by mental and behavioral health (MBH) clinicians, are psychologists or other MBH professionals represented in leadership? For example, MBH professionals may be represented in the roles of chief medical officer or clinical director, or they may serve on advisory boards.

2. Tool functionality

Does the tool have the function(s) that is valuable to you?
• Does it integrate with software or the electronic health record (EHR) that you may already be using?
• Does it fit within your workflow and save you administrative time?
• Is it a cost-effective tool for your practice needs?
• Does the company offer demos of their product or a limited free trial?
• What kind of technical support is offered by the company? (e.g., is tech support limited to business hours or is it available 24/7?)

3. Clinical evidence

If the tool provides a clinical intervention, treatment, or support (rather than an administrative tool, like automating documentation), is there clinical evidence to support the tool’s safety and effectiveness? For example, is the tool an FDA-cleared digital therapeutic? Or has the company done research on its product that is available for you to review? Research could include a randomized controlled trial (RCT) or real-world effectiveness study.

4. HIPAA compliance

Does the company attest that it complies with HIPAA, GDPR, and/or other applicable privacy standards in the jurisdiction(s) where you practice (e.g., state consumer data privacy laws)? Additionally, do they offer a business associate agreement (BAA)?

5. Data security

Does the company have a clear and easily understandable data security policy? Usually, this information is found under a heading titled “Data Security” and is often found in the Privacy Policy, but some companies also have a separate “Security” or “Privacy and Compliance” webpage or document available on their website.

Is the data encrypted? The HIPAA Security Rule requires that electronic protected health information (e-PHI) be secured when it is in transit and at rest; however, it does not specify which security measures an organization must use.

The National Institute of Standards and Technology (NIST) has developed guidance to assist organizations in complying with the Security Rule. Advanced Encryption Standard (AES) is a widely used and U.S. government-approved algorithm



used to encrypt and decrypt data. You will often see products listed as using AES-128, AES-192, or AES-256-bit encryption. The numbers (128, 192, 256) refer to the bit length of the encryption/decryption key.

Does the company have any additional certifications? HITRUST certification is a voluntary certification that demonstrates strong cybersecurity and data security practices. SOC 2 is a voluntary compliance standard that demonstrates a service organization’s strong security, availability, processing integrity, confidentiality, or privacy practices.

6. Privacy policy

Is the privacy policy readily available for review before purchasing the tool or signing up for the service?

Read the privacy policy in full. Carefully review what data are collected. This information is generally found under a heading such as “Personal Information We Collect.” Typically, companies collect user data such as your email, login credentials, and payment information to provide the service. Use of cookies and data analytics often are discussed here.

Carefully review how the data are used and with whom they can be shared. This information is generally found under a heading such as “Personal Data Use/How We Use Your Information” or “Data Sharing/Disclosure of Your Information.” Common data uses are providing the service, training employees, research, and marketing. Take note of whether you can “opt out” of some types of data sharing (e.g., can you decline data sharing for marketing purposes?). Common parties with whom data may be shared include third-party service providers/vendors, marketers, and law enforcement agencies (as applicable).

Be aware of whether the company makes any statements about selling data. Selling personally identifiable data is a violation of HIPAA and possibly other applicable data privacy and security laws.
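The key-length naming covered under step 5 can be made concrete with a short Python sketch. This is illustrative only: Python’s standard library has no AES implementation, so the snippet simply shows the relationship between the AES-128/192/256 labels and the key sizes; real encryption should use a vetted cryptography library.

```python
import secrets

# AES-128, AES-192, and AES-256 are the same algorithm with different
# key sizes: the suffix names the key length in bits (bits / 8 bytes).
# A longer key means a larger keyspace for an attacker to search.
for bits in (128, 192, 256):
    key = secrets.token_bytes(bits // 8)  # a random key of the right size
    print(f"AES-{bits}: {len(key)}-byte key, keyspace of 2**{bits}")
```

`secrets.token_bytes` is the standard-library way to generate cryptographically strong random bytes, which is why it stands in here for key generation.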



Selling appropriately deidentified data (as described in the HIPAA Privacy Rule) is not, however, a violation of HIPAA; you have to decide whether you are comfortable with this practice.

Many jurisdictions (such as California, Colorado, Nevada, and Virginia) have specific state-based data privacy laws that may apply. Privacy policies often have a separate state-specific section. You may look for sections regarding additional protections such as “Requests to Delete Data” or “Right to Correct Data.”

Carefully review how long data are retained. This generally can be found under a “Data Retention” heading.

7. Terms of service (TOS)

Is the TOS readily available for review before purchasing/signing up for the service?

Read the TOS in full. Carefully review the section on “Customer Data,” which also may be labeled “Protected Health Information” or “User Data.” This section will generally discuss how personal health information (PHI) is stored and maintained. It also may discuss business associates and BAAs.

8. Location of relevant data policies

While companies should provide the information described in steps 5–7, sometimes there is variability in whether that information resides in the Privacy Policy, TOS, BAA, or some combination of those documents.

9. Informed consent

Does the company give any guidance or provide a sample consent form, and/or require an attestation from the provider that patient informed consent has been gained prior to using a tool that accesses PHI?

10. Contact the company if you have questions

After reviewing all this information, contact the company should you have any further questions or if you were not able to answer any of the listed questions based on the company’s website. You may wish to consult with a local attorney if there are provisions in the privacy policy, TOS, and/or BAA that are unclear.

11. Base your decision on the needs of your practice

A decision about which tools to incorporate into your practice is an individual one based on your practice needs. However, these steps will help you gather the relevant information needed to make an informed decision.

12. Document your review

It is important to document your initial review of the above information (see the Companion checklist [PDF, 60KB]) to demonstrate your due diligence in selecting a tool.

13. Review policies for updates

Privacy policies and TOS can be periodically updated, and you are encouraged to review these updates.
This “Steps to evaluate an AI-enabled clinical or administrative tool” is provided by APA as a preliminary guide for psychologists considering
the integration of clinical tools utilizing AI into their practice. It is intended to serve as a starting point for evaluation and is not exhaustive. Users
are encouraged to apply their own professional judgment and seek additional resources and guidance as needed, including legal consultation
to ensure compliance with applicable laws and regulations. APA does not endorse any specific AI tools and assumes no responsibility for the
outcomes of their use. Always ensure compliance with relevant ethical guidelines and legal requirements.



Artificial intelligence and psychological research: Can AI replace human participants?

APA’s Essential Science Conversations webinar series brings together panelists and audience members for dynamic discussions on emerging topics in psychological science. In this April 2024 session, APA’s Chief of Science, Mitch Prinstein, PhD, led an engaging dialogue with five leading researchers about the potential of AI as a substitute for human participants in psychological research. They explored both the promises and challenges of using AI to enhance research models and reduce biases.

In this excerpt of their discussion, the researchers responded to Prinstein’s prompt:

“If you had the opportunity to meet with the leaders of AI, the people who are creating it, generating and guiding its future, what do you think they need to know about psychological science to make AI better serve their purposes for which it is designed?”

Rose Sokol, PhD, Publisher, APA Journals and Books


I think I’d take advantage to first ask to respect copyright and
confidentiality in the systems and the creations and training
of the systems. Beyond that, I do think it’s important to infuse
psychologists at each step, at each iteration. Slow down. Let
psychologists play in the system and point out biases and point
out the challenges. You need to know your assumptions going
into a system, and psychologists can really help figure that out.
What could use improving in the next iteration to keep making it
better instead of just making it faster?

Jerri Lynn Hogg, PhD, researcher and global speaker on AI; chair, APA’s Artificial Intelligence Task Force
I do think that technology developers, in general, often think first,
“Oh, that’s cool. Let’s see if we can do it.” When they do it, they
think, “Hey, what else can we do with it?” As opposed to thinking
about the ramifications of the psychological impact. Once it’s out
of the gate, it’s often hard to pull it back in. It would be great if we
could be involved in the creation phase. How can they design it so
it can be used ethically in powerful ways to understand ourselves
better and support well-being?



Kurt Gray, PhD, professor in psychology and neuroscience at
UNC Chapel Hill
I also think that we should get our work to computer scientists.
When I work with them, they often are not familiar with our
latest psychological theories about how the mind works. Their
field is focused less on mirroring human cognition and more
focused on pushing technical benchmarks. But they rapidly
become excited about connecting their work to psychological
theory once you learn to speak each other’s language. By getting
our work to computer scientists, it will help them make models
that better approximate how we think and feel: Cognition,
the nature of categorization, how people feel emotion, maybe
psychopathologies, the structure of psychopathology, etc. This
is the work that we all do every day. Popular psychology books
about how the brain or mind works can even be a good start.

Sang Eun Woo, PhD, professor of psychological science, Purdue University
I echo all those suggestions. If I may add one more, it would be
the psychological measurement principles, how psychological
constructs are measured and assessed in a way that is actually
reliable and valid. I think sometimes the measurement piece is
really the first step towards creating a reliable tool for us to use
to go further, to investigate the psychological phenomenon. If the
measurement is not done in a way that makes sense theoretically,
we’re not talking about the same construct in the first place.

Mohammad Atari, PhD, assistant professor of psychology at UMass Amherst
Speaking with the leaders of AI, I would probably tell them that
there are more humans than English-speaking and Western,
educated, industrialized, rich and democratic (WEIRD) people.
We have talked a lot about demographic biases, which are really
important—like gender bias or racial bias. One of the things that
I want to point our attention to is the cultural bias and linguistic
bias that we have in our own existing data. More than 95% or
96% of our knowledge base in psychology is from a thin slice of
human diversity. When we have a more inclusive database, we
can definitely do more bottom-up exploratory data analysis for
picking up interesting theories that we did not know before.

Visit Essential Science Conversations for an on-demand version of the webinar.



CLASSROOMS ARE ADAPTING TO THE USE OF ARTIFICIAL INTELLIGENCE

Young people’s use of artificial intelligence is forcing change in classrooms. Psychologists can help maximize the smart adoption of these tools to enhance learning.

BY ZARA ABRAMS
From Monitor On Psychology, January/February 2025

Generative artificial intelligence (AI) promises to touch nearly every part of our lives, and education is one of the first sectors grappling with this fast-moving technology. With easy and free-to-access tools like ChatGPT, everything related to teaching, learning, and assessment is subject to change.

“In many ways, K–12 schools are at the forefront of figuring out practical, operational ways to use AI, because they have to,” said Andrew Martin, PhD, a professor of educational psychology and chair of the educational psychology research group at the University of New South Wales in Sydney. “Teachers are facing a room full of people who are very much at the cutting edge of a technology.”



AI has been used in classrooms for years, quietly powering learning management tools, such as Google Classroom, Canvas, and Turnitin. But the recent democratization of generative AI tools such as ChatGPT, and the rush to commercialize similar technologies across sectors, is providing new challenges and opportunities for students and educators alike.

In a growing movement to find out how to safely and effectively use AI to enhance learning, educational psychologists are playing a critical role. They are studying how AI tools can lighten the workload on teachers—without interfering with the social aspects of learning—as well as how intelligent tutoring systems can personalize education while keeping students motivated. They are also exploring whether educators can leverage tools such as ChatGPT without hindering the broader goals of learning.

One question should always be at the forefront, said educational psychologist Ally Skoog-Hoffman, PhD, senior director of research and learning at the Collaborative for Academic, Social, and Emotional Learning (CASEL): “How are we using AI and technology as tools to elevate the conditions and the experiences of education for students without sacrificing the human connection that we absolutely know is integral to learning?”

How Children View AI

Psychologists have studied human-technology interaction for decades. A new line of research now seeks to understand how people, including children, interact with chatbots and other virtual agents.

“Little kids learn from characters, and our tools of education already [rely on] the parasocial relationships that they form,” said David Bickham, PhD, a health communication researcher based at Boston Children’s Hospital, during a panel discussion on AI in the classroom. “How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?”

In a series of qualitative studies, Randi Williams, PhD, a program manager at the Algorithmic Justice League, a nonprofit focused on making AI more equitable, observed playful interactions between young children and robots, including the children’s attempts to both teach the agents and learn from them. Williams and her colleagues also found that children viewed agents with a more humanlike and emotive voice as friendlier and more intelligent (Proceedings of the 2017 Conference on Interaction Design and Children, 2017). But many questions remain, including how to study and foster such relationships while protecting the safety and privacy of minors—issues that psychologists are well poised to address.

Among adolescents, the use of generative AI is already widespread. Of the 7 in 10 who reported using at least one such tool in a 2024 Common Sense Media survey of 1,045 teenagers ages 13 to 18, homework help was the most common reason. About half of those who used generative AI for schoolwork did so with permission from a teacher. A similar number checked the veracity of generative AI outputs using outside sources, suggesting that many students are aware of the fallibility of such tools (The Dawn of the AI Era, Common Sense Media, 2024).

“Teens have quite a sophisticated and nuanced view of AI,” said Beck Tench, PhD, an information scientist based at the Center for Digital Thriving, which explores the role of technology in young people’s lives and is part of the Project Zero initiative at the Harvard Graduate School of Education. “They report that they feel conflicted, and are having just as many excitements and concerns as we do as adults,” including worries about misinformation, awareness that it will change their work prospects, and enthusiasm about its potential to advance science, creativity, and humanity (Teen and Young Adult Perspectives on Generative AI, Common Sense Media, Hopelab, and Center for Digital Thriving, 2024).

The Center for Digital Thriving offers guidelines for talking to youth about generative AI, including asking children what school rules seem fair and whether they have ever heard about AI getting something wrong.

Intelligent Tutoring

Much of the conversation so far about AI in education centers around how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.

A majority of teachers surveyed by the Center for Democracy and Technology, a nonprofit focused on technology policy, said they have used AI detection software to check whether a student’s work was their own, but those tools can also be fallible—in a way that could exacerbate

Key Points
■ AI has been in use in classrooms for years, but a specific type of AI—generative models—could transform personalized learning and assessment.
■ Teenagers are quick adopters, with 7 in 10 using generative AI tools, mostly for help with homework.
■ Educational psychologists are studying how these tools can be used safely and effectively, including to support social and emotional learning in children and adolescents.
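The survey proportions quoted above can be turned into rough headcounts. These are back-of-envelope estimates derived only from the quoted “7 in 10” and “about half,” not subgroup sizes reported by the survey itself:

```python
# Illustrative arithmetic from the Common Sense Media figures quoted
# above (1,045 respondents; "7 in 10" used a generative AI tool; "about
# half" of users had teacher permission). Estimates only.
respondents = 1045
used_gen_ai = round(respondents * 7 / 10)   # roughly 732 teens
with_permission = round(used_gen_ai * 0.5)  # roughly 366 teens

print(used_gen_ai, with_permission)
```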



inequities (Up in the Air, Center for Democracy and Technology, 2024). Black teenagers were about twice as likely as their peers to tell Common Sense that they had schoolwork incorrectly flagged as being AI-generated (The Dawn of the AI Era, Common Sense Media, 2024).

Some schools are adapting by changing the nature of assessment, Martin said. In Australia, for example, senior year science projects are traditionally submitted in written form, but students must now also present their findings orally and respond to questions in real time. On the whole, teachers told the Center for Democracy and Technology they need better guidance and training on what responsible use is and how to respond if they suspect a student is cheating by using AI tools.

On the bright side, educators are increasingly relying on AI such as Curipod, Gradescope, and Twee to automate certain tasks and lighten their workload, said Nicole Barnes, PhD, APA’s senior director for schools and education. That includes generating new ideas for lesson plans and activities, writing parent-teacher letters, adapting materials for different age groups and neurodiverse learners, and getting a second opinion on how to improve existing materials.

Intelligent tutoring systems are another major focus for researchers, developers, and education technology companies. These AI-powered systems promise to help personalize the learning experience for each student, tailoring style, pace, and assessment to the individual and making lessons more accessible to students learning English or those with disabilities. Khan Academy, McGraw Hill, and Carnegie Learning are among the companies offering AI tools, while the Los Angeles Unified School District invested millions in “Ed,” a custom chatbot that survived for just a few months after the financial collapse of the company that built it.

“It’s sort of a gold rush right now for edtech companies to sell districts the right thing, without having any data to support their claims,” said educational psychologist Stephen Aguilar, PhD, an associate professor of education at the University of Southern California who studies how such technologies relate to student motivation and engagement.

As an alternative to commercial offerings, which are expensive and difficult to customize, some researchers are working on open-source intelligent tutoring systems. OATutor—built by Zachary Pardos, PhD, an associate professor of education at the University of California, Berkeley, and his colleagues—uses generative AI to learn from an instructor’s own teaching style and materials, then creates new and improved worksheets and lesson plans. This bespoke learning tool can allow teachers to replace textbook homework questions with interactive exercises that cater to each student’s mastery level and do not require grading.

“The teacher can spend less time adapting to the technology, so it feels more like an extension of her class that helps unburden her, rather than another professional development task,” said Pardos, who is also publishing journal articles on OATutor to add to the knowledge base about adapting and scaling generative AI in education.

A key task for psychologists, Aguilar said, will be to study how using AI tools relates to students’ motivation to learn. Intelligent tutoring systems still lag far behind human teachers, Barnes said, in their ability to detect whether a student is feeling frustrated, anxious, or uncertain about the content they’re learning.

“These systems often treat responses as black and white, but the reality is far more nuanced,” Barnes said. “Every answer elicits an emotional response from students, whether positive or negative.” Teachers detect these nuances and adjust instruction accordingly—existing AI tutors do not.

Future intelligent tutors are poised to collect more nuanced data on students as they learn—including everything from heart rate to facial expressions, Bickham said—and know when to call on a teacher to step in. That could ultimately shift teachers into more of a facilitator role.

“The teacher role has the potential to evolve from the person who’s really directing the education to a person who is kind of managing the experience,” he said.

The Center for Digital Thriving suggests the following questions for starting conversations with youth about generative AI:
1. Do you know any kids your age who are using generative AI?
2. Has your school or teachers set any rules about using generative AI?
3. What kinds of uses do you think should be allowed in school?
4. Have you ever seen an AI tool get something wrong?
5. What kinds of questions feel easier to ask AI than a human?

Social and Relational Shifts

Ask ChatGPT for homework help and you’ll get a polite, friendly response, Martin said, which makes it easy to forget you’re not interacting with a sentient being. The tool may therefore represent a social opportunity cost if children use it to answer questions they might otherwise ask their parents, peers, or siblings.

“The more you rely on generative AI to help you with your schoolwork, the less you might be inclined to meet up with friends in person or online after school to brainstorm around an essay,” Martin said.

Teenagers also report talking to generative AI about relationships, identity, and sexuality, including to find answers to questions they’re
afraid to ask adults and to have the feeling of

Artificial Intelligence: Redefining the Future of Psychology 21


talking to a friend who won't judge them (Teen and Young Adult Perspectives on Generative AI, Common Sense Media, Hopelab, and Center for Digital Thriving, 2024).

"It's striking to me that young people are sharing their deepest, darkest secrets and questions to a company that can collect that information and use it," Tench said.

To help students learn about the downsides of using such technologies, CASEL has partnered with Common Sense Media to apply its five social and emotional learning (SEL) competencies (self-awareness, self-management, responsible decision-making, relationship skills, and social awareness) to the digital space. The goal is to empower students to bring social and emotional awareness to difficult online situations. For example, how can teenagers with body image concerns navigate a social media feed rife with photos edited by AI?

CASEL is also exploring whether AI can be used to teach SEL. Because young people today are beginning to enmesh their online and offline lives, virtual SEL lessons could be useful, Skoog-Hoffman said.

Young people may develop a cyber identity that differs from their real-world social identity. How do those concepts relate to one another and influence behavior, both online and in person? Before AI can safely be used to teach SEL, more research is needed to understand these concepts, Skoog-Hoffman said, as well as whether skills such as empathy can be practiced and acquired in a digital context (with a chatbot, for example).

"For youth, online and in-person interactions are starting to become more seamless," she said. "That could change the way teens are learning about relationships and interpersonal skills, and as educators, it's time for us to adapt." ■

FURTHER READING

"My doll says it's ok": A study of children's conformity to a talking doll
Williams, R., et al.
IDC '18: Proceedings of the 17th ACM Conference on Interaction Design and Children, 2018

More teachers are using AI-detection tools. Here's why that might be a problem
Prothero, A.
EducationWeek, Apr. 5, 2024

Artificial intelligence and social-emotional learning are on a collision course
Prothero, A.
EducationWeek, Nov. 13, 2023

AI in the classroom: Technology and the future of learning
Family Online Safety Institute, 2023

Using artificial intelligence tools in K–12 classrooms
Diliberti, M. K., et al.
RAND Corporation, 2024


The Promise and
Perils of Using AI for
Research and Writing
Psychologists and students may tap AI tools for an assist
in some scenarios, but human oversight—including
vetting all output and citing all uses—is essential.
BY CHARLOTTE HUFF

As artificial intelligence (AI) tools proliferate, the goals of ethical research and writing remain the same: to be transparent, preserve the integrity of authorship, and verify reported findings. What's changed is that AI can provide somewhat of an assist, as long as researchers and students retain rigorous oversight.


Among the ways AI tools can be useful are helping with more routine tasks, cleaning up grammar, and streamlining time-consuming steps involved with finalizing manuscripts, such as citations and the submission process, according to APA leaders whose work involves providing guidance on the use of AI. The technology can also enable non-native English speakers to improve syntax and readability, as well as to translate academic terms prior to submitting to English-language journals, said Rose Sokol, PhD, publisher of APA Journals and Books.

In addition, as AI continues to evolve, it could support the initial or brainstorming stages of research, said Emily Ayubi, senior director of APA Style. If a researcher is considering the pursuit of an avenue of study and wants to gain a better sense of gaps in the existing knowledge base, she said, generative AI "would theoretically be able to review the existing literature more expediently than a human being could. But you would still have to vet the output because there may be fabrications. It could make up studies that don't actually exist."

A good guideline is that although AI tools can support more routine steps of research and writing, they should not be relied upon, Ayubi and Sokol stressed.

At the heart of the APA Publishing policies related to generative AI, Sokol said, "is that to be an author you must be a human. The threat for students and researchers is really the same—over-relying on the technology." When that happens, you are at risk of essentially ceding control of intellectual property to the machine, she noted. "You've handed that over. The machine has no accountability and no responsibility."

Citing transparently
How and whether psychologists and students can incorporate AI tools into their research will vary depending upon the circumstances involved, said Samantha Denneny, development manager for APA Style. AI use can be unavoidable if it comprises the heart of a research project on, for instance, the role of the technology in psychology, she said.

Policies also may differ depending upon the journal or university or instructor involved, Denneny said. "You might have a professor who says, 'Do not open ChatGPT or you're in trouble,' and then you might have a professor who has you use it throughout an assignment."

Still, APA policy about generative AI use has developed consensus on several key points, as outlined in a blog post published in late 2023, including that AI cannot be listed as an author in any one of APA's 88 scholarly publications.

"An author needs to be someone who can provide consent, who can affirm that they followed the ethical protocols of research, that they did the steps as they said they would," said Chelsea Lee, instructional lead for APA Style. "You need a human to be able to give that consent."


When AI is used during research, that involvement should be cited in the methods or a comparable section of the paper, said Timothy McAdoo, a manager on the APA Style team, who wrote a blog post providing guidance about how to appropriately cite ChatGPT. Although quotes and other details that are not retrievable are typically cited as personal communication, that's not an option with ChatGPT as there's no human involved, he noted.

The recommended citation approach is to include the precise text of the prompt, and then provide a reference that incorporates the author of the AI tool, the date of the version used, and other details. If AI produces a lengthy response, researchers should add that text to the paper's appendix or online supplemental materials, given that the same prompt will generate a unique response each time that it's used, McAdoo wrote.

Detection and bias
Plagiarism software is typically not able to flag writing that AI has produced, as the tools generate sentences that haven't existed in that combination before, members of the APA Style team pointed out. "When you plagiarize another person, there is evidence for that—you found that information from somewhere," Denneny said. "But the output from AI is not something that is public record—it's not trackable."

Plus, research indicates that instructors may not be skilled in their ability to detect AI-produced text, Denneny said. She cited one recent study that found that both novice and experienced teachers struggled to identify AI versus student-written text and yet found that both groups were overconfident in their capacity to do so (Fleckenstein, J., et al., Computers and Education: Artificial Intelligence, Vol. 6, 2024).

AI detection software exists but is not reliable, according to APA Style team members. When they fed the text of the U.S. Constitution into one program, it reported back that 99% of the historical document had been AI generated.

Another concern: the potential for bias against non-native English speakers. In one study, which evaluated seven ChatGPT detectors against 91 Test of English as a Foreign Language essays, the detectors incorrectly labeled more than half of the essays as AI-generated, with an average false-positive rate of 61.3%. But only 5.1% of 88 essays written by U.S. 8th grade students were similarly misclassified (Liang, W., et al., Patterns, Vol. 4, No. 7, 2023).

Scholars also should be aware of the potential for bias and lack of inclusivity in the research that AI identifies and thus dig further to determine if the technology has missed key elements, Ayubi said. In addition, AI may draw from older studies that can use outdated terminology that doesn't align
with APA’s Inclusive Language Guide, Second Edition and
Key points
lacks gender-inclusive language, for instance, she said.
■ AI tools can be useful for some of the routine
tasks, time-consuming steps, and initial stages of
Verify and verify
psychology research and writing. But researchers
Along with assisting with the mechanics of writing, such
must vet AI output and retain control over their
as checking grammar and phrasing in a paragraph, AI may scholarship because of the technology’s potential
provide initial insights on a subject, Denneny said. For for bias and fabrication.
instance, someone can ask for a quick summary, similar ■ In addition to carefully checking all output, APA
to checking out a Wikipedia page, she said. “And then you Style guidelines for using AI include citing its use,
take that and delve deeper.” including noting both the prompt used and the text
Above all, the researcher or student must remain in the generated. Long responses may be incorporated
driver’s seat, checking everything that falls beneath their into the paper’s appendix or online supplemental
materials.
name, APA leaders stress. The Style Team has conducted
■ Ideally, AI tools will progress to the point that they
various test runs with generative AI, and the results have
can enable researchers to focus on the more
not always been encouraging. complex and cognitively demanding elements of
In one instance, Lee asked ChatGPT for five scholarship.
peer-reviewed sources on a topic with which she had
familiarity.“It sounded exactly like what I was looking for,”
she said, noting that the citations included authors who asking AI to do something where you’re not in control,”
had studied that subject. “I went looking for [the studies], she said.
and none of them were real at all.” Moreover, the goal of scholarship is to add something
Lee returned to ChatGPT and asked if it was certain that new to the conversation, Lee said, noting that AI only sum-
those references existed. “It said, ‘Sorry for any confusion. marizes existing information. “The texts generated by AI on
These sources are illustrative. I don’t have access to the the whole tend to be on the surface level. Whereas in sci-
information that you’re actually asking for.’” ence, you want to be very precise and think about, ‘What is
In his blog post about citing ChatGPT, McAdoo describes the thing that I’m trying to share with my audience here?
how he requested five sources related to ideas about What’s new? Why does it matter if anybody reads this?’”
brain lateralization and how the brain operates. ChatGPT As these tools continue to evolve, APA Style team mem-
provided five, only four of which he was able to locate bers welcome ongoing input, Ayubi said. “What we’re
online. The fifth reference included a real digital object presenting now is a snapshot in time,” she said. “We con-
identifier (DOI), but it was one that was assigned to a tinue to research these technologies.”
different article. Ideally, as AI tools progress, they hold the potential to
With these potential pitfalls in mind, researchers and enable researchers to focus on the more complex and cog-
students should verify not only the legitimacy of the nitively demanding elements of scholarship, Sokol said.
sources identified, but “it may be better to read those orig- “If you can automate the routine parts that are less
inal sources to learn from that research and paraphrase essential, then you have more time for that creative pro-
or quote from those articles, as applicable, than to use the cess,” she said. “You have more time to think about, ‘What’s
model’s interpretation of them,” McAdoo wrote. the interpretation? Was there bias introduced in my
In short, generative AI should be viewed as similar to an research design, and how might that affect my interpreta-
electric bike, with the capacity to augment but not replace tion?’ You have more time to think through the [research]
one’s own skill set, Denneny said. “You don’t want to be process.” n

More resources
Want to keep up on the latest APA Style guidance regarding AI? Follow updates on the APA Style blog.

Related
Learn more about this topic in a recent webinar from APA Style, Process Over Product: Setting Students Up for Success in Writing APA Style Papers. Check out the APA Style playlists on YouTube for more webinars and trainings.


Quoting or Reproducing ChatGPT Text
or Other Generative AI Tools
Adapted from APA Style Blog, Feb. 23, 2024, By Timothy McAdoo, updated November 2024

I f you’ve used ChatGPT or other AI tools in your research, describe how you used the tool in your Method section or in
a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you
might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any por-
tion of the relevant text that was generated in response.
Unfortunately, the results of a ChatGPT “chat” are not retrievable by other readers, and although nonretrievable data
or quotations in APA Style papers are usually cited as personal communications, with ChatGPT-generated text there is no
person communicating. Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output;
thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.

The reference and in-text citations for ChatGPT are formatted as follows:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

• Parenthetical citation: (OpenAI, 2023)
• Narrative citation: OpenAI (2023)

Let's break that reference down and look at the four elements (author, date, title, and source):

Author: The author of the model is OpenAI.

Date: The date is the year of the version you used. Following the template in Section 10.10, you need to include only the year, not the exact date. The version number provides the specific date information a reader might need.

Title: The name of the model is "ChatGPT," so that serves as the title and is italicized in your reference, as shown in the template. Although OpenAI labels unique iterations (i.e., ChatGPT-3, ChatGPT-4), they are using "ChatGPT" as the general name of the model, with updates identified with version numbers.

In the example above, the version number is included after the title in parentheses. If a platform does not provide the version number, that is simply omitted from the reference. ChatGPT does not currently show users the version number. Different large language models or software might use different version numbering; use the version number in the format the author or publisher provides, which may be a numbering system (e.g., Version 2.0) or other methods.

Bracketed text is used in references for additional descriptions when they are needed to help a reader understand what's being cited. References for a number of common sources, such as journal articles and books, do not include bracketed descriptions, but things outside of the typical peer-reviewed system often do. In the case of a reference for ChatGPT, provide the descriptor "Large language model" in square brackets. OpenAI describes ChatGPT-4 as a "large multimodal model," so that description may be provided instead if you are using ChatGPT-4. Later versions and software or models from other companies may need different descriptions, based on how the publishers describe the model. The goal of the bracketed text is to briefly describe the kind of model to your reader.

Source: When the publisher name and the author name are the same, do not repeat the publisher name in the source element of the reference, and move directly to the URL. This is the case for ChatGPT. The URL for ChatGPT is https://chat.openai.com/chat. For other models or products for which you may create a reference, use the URL that links as directly as possible to the source (i.e., the page where you can access the model, not the publisher's homepage). ■

APA Policies on Use of Generative AI
For other issues about generative AI, the APA Style team follows APA Journals policies. APA Journals has published policies on the use of generative AI in scholarly materials. For this policy, AI refers to generative LLM AI tools and does not include grammar-checking software, citation software, or plagiarism detectors.
■ When a generative artificial intelligence (AI) model is used in the drafting of a manuscript for an APA publication, the use of AI must be disclosed in the methods section and cited.
■ AI cannot be named as an author on an APA scholarly publication.
■ When AI is cited in an APA scholarly publication, the author must employ the software citation template, which includes specifying in the methods section how, when, and to what extent AI was used. Authors in APA publications are required to upload the full output of the AI as supplemental material.
■ The authors are responsible for the accuracy of any information in their article. Authors must verify any information and citations provided to them by an AI tool. Authors may use but must disclose AI tools for specific purposes such as editing.
■ No submitted content may be entered into generative AI tools as this violates the confidentiality of the process.
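The four-element template above is mechanical enough to express in code. Below is a minimal Python sketch; the `AIModelReference` class and its field names are my own illustration, not part of APA Style. It assembles the author, date, title, and source elements into a reference entry and the two in-text citation forms, dropping the version label when a platform does not provide one:

```python
# Illustrative sketch of the APA Style reference template for a
# generative AI model. Class and field names are hypothetical; the
# output format follows the template discussed above.
from dataclasses import dataclass


@dataclass
class AIModelReference:
    author: str        # company that made the model, e.g. "OpenAI"
    year: int          # year of the version used
    title: str         # general model name, italicized in print
    descriptor: str    # bracketed description, e.g. "Large language model"
    url: str           # page where the model can be accessed
    version: str = ""  # version label; omitted when not provided

    def reference_entry(self) -> str:
        # "(Mar 14 version)" follows the title only if a version exists
        version = f" ({self.version} version)" if self.version else ""
        return (f"{self.author}. ({self.year}). {self.title}{version} "
                f"[{self.descriptor}]. {self.url}")

    def parenthetical_citation(self) -> str:
        return f"({self.author}, {self.year})"

    def narrative_citation(self) -> str:
        return f"{self.author} ({self.year})"


chatgpt = AIModelReference(
    author="OpenAI",
    year=2023,
    title="ChatGPT",
    descriptor="Large language model",
    url="https://chat.openai.com/chat",
    version="Mar 14",
)

print(chatgpt.reference_entry())
# OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
print(chatgpt.parenthetical_citation())  # (OpenAI, 2023)
print(chatgpt.narrative_citation())      # OpenAI (2023)
```

Leaving `version` empty yields "OpenAI. (2023). ChatGPT [Large language model]. …", matching the guidance that the version label is simply omitted when a platform does not show one.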



Five Questions for Melissa Smith
The Google Workspace researcher is embracing the
possibilities of AI, shaping how we work, and creating
user-friendly products for all types of tech consumers
BY LUCY TU
From Monitor On Psychology, September 2024

As artificial intelligence (AI) and automation revolutionize work, employers worldwide are striving to keep pace with the latest developments, maintain productivity, and reduce employee stress.

Applied cognitive psychologist Melissa Smith, PhD, is studying the best ways to help companies and organizations do that as a senior user experience (UX) researcher at Google Workspace, based in Raleigh, North Carolina. The group designs and integrates Google's vast suite of productivity tools, including Gmail, Google Docs, and Google Meet, into a cohesive service. Using the latest cognitive science, Smith and her team are building more intuitive, user-friendly programs, such as the mobile versions of popular applications like Google Drive and Calendar. Their goal is to boost both employee performance and well-being.

Smith underscores the need for workplaces to adapt to AI and other emerging technologies. She sees these advances not as threats to replace people but as tools to aid in mundane or risky tasks, enabling people to prioritize what truly defines human work: collaboration and creativity. "The beauty of user experience research is discovering what makes someone care deeply about a product, then developing that technology to support their learning and growth," said Smith.

The Monitor talked with Smith about how she came to UX research and its implications for the future workforce.

How do your team's strategies and goals stand out from those of other companies developing tools to improve how people work?
Google Workspace products have always been known for their collaborative nature. When I was in early college and Google first introduced Docs, it was revolutionary to be able to have multiple people working on one document at the same time. Today, those collaborative features are an industry norm, and our team is still pushing the cutting-edge boundaries of collaborative work. We are currently incorporating generative AI features across Gmail and Workspace to simplify organization tasks. Soon, you will be able to use Gmail's side panel to summarize emails and highlight the most important action items. Also, the "Help me write" feature in Gmail and Docs, which uses AI to draft messages based on your prompts, will support Spanish and Portuguese.

Our team also prioritizes tech accessibility as we build new features, making sure that we don't inadvertently exclude people who, for instance, rely on screen readers or high-contrast screens to interact with our services. Accessibility considerations can be easily overlooked if you don't actively engage with the many types of consumers who use your services. There are always opportunities for us to improve in creating technology that caters to people with diverse needs or disabilities.


How is your research at Google enhancing employee well-being and shaping how the next generation will work?
User experience research is vital in product development because we are actively incorporating the voices of customers and users. My work focuses on talking with people who use our products to accomplish the diverse tasks relevant for their roles. For example, the needs of a general consumer using our products to complete schoolwork or organize family events differ from those of a small business owner who uses Google Workspace to manage a team.

By making productivity tools more user-friendly, our services streamline workflows and reduce employee stress. Overly complex software and information overload can cause mental fatigue. If we can simplify these processes and present information more clearly, we can help workers focus on essential tasks. This is especially important as workplaces increasingly adopt hybrid work models and communication among workers is fragmented. Our research helps us develop products that better support remote work, such as improved virtual collaboration and scheduling tools that help employees maintain work-life balance.

For example, my team has gained valuable information from users about the importance of seamless connection across multiple platforms and devices that has inspired us to improve the mobile interface for Google Workspace products. Just 5 years ago, I would have never opened a Google Doc on my phone. Now, mobile Docs is far more accessible and offers expanded features for collaboration among employees working from many different locations and platforms.

What led you to user experience research?
During middle school and high school, I was involved with a nonprofit organization called FIRST, which fosters excitement for science and technology among K–12 students through annual robotics competitions. It's been more than 20 years since I first participated in the program, but that excitement hasn't stopped. I serve on the FIRST Robotics board and help connect FIRST students with alumni at Google.

One of my goals is to show students the diverse STEM (science, technology, engineering, and mathematics)-related careers available to them, beyond the already well-known roles like engineer, lab scientist, or doctor. This is partly influenced by my own experiences. I spent my undergraduate years as a mechanical engineering major because I wanted to work in robotics. But when I discovered human-robot interactions, I found that exploring how people engage with and trust artificial agents, and how robots can improve human lives, interested me far more. So, I changed my major and pursued a PhD in applied cognitive psychology and eventually realized that my research interests aligned with the user experience field.

Your dissertation looked at people's trust in automation and robotics. How do you bring that knowledge into your current work?
No matter what the technology is—you could insert whichever technology buzzword you want, whether it's AI, machine learning, or big data—people's fundamental approaches to adopting new systems follow a similar pattern. There will be the early adopters, who embrace the new technology and trust it even if it's still being workshopped. Then, there is a larger chunk of intermediary users, who prefer to test the waters and wait for the technology to take off before they immerse themselves in it. Finally, there are the people who resist change altogether—the "if it's not broken, why fix it?" users, who probably wouldn't mind using an old-school flip phone.

That research taught me that you need to adapt to each set of users. I emphasize that perspective in every product my team creates because most of us on the development team belong to that first group, who generally trust and understand technology. But we aren't representative of most consumers, so it's essential to reach out to our end users, not to convince them to trust our product but to hear their concerns so we can build a product worth trusting.

How will AI continue to influence UX research?
AI is unique in that it doesn't just offer incremental improvements over existing technologies; it represents a whole new paradigm in how people think about and interact with technology. Consequently, we need to exercise much greater caution when building new products and proactively anticipate how users will interact with these systems. At the same time, AI opens many more opportunities to create magical moments—to push productivity, problem-solving, and collaboration forward. That kind of entirely new technology hasn't emerged in many years, so it is an incredibly interesting time to be a user experience researcher. ■



Survey Reveals Job Loss and Privacy Fears Over Workplace AI

Worries about AI-driven job loss and workplace surveillance are taking a toll on mental health for some workers. APA's 2023 Work in America survey reveals these anxieties are disproportionately associated with some groups of workers.

■ 38% of workers worry AI might replace some or all of their job duties.
■ 64% of workers worried about AI feel stressed during the workday, compared with 38% of workers not worried about AI.
■ Worry about AI replacing jobs, by education: high school or less, 44%; college degree or more, 34%.
■ Worry about AI, by ethnicity: Black, 50%; Hispanic, 46%; Asian, 44%; White, 34%.

Source: APA 2023 Work in America Survey: Artificial intelligence, monitoring technology, and psychological well-being


In a World with Artificial Intelligence,
What Can I Do With a Psychology Degree?
While AI can automate a lot of jobs, there are some unique human skills it can’t replicate (yet)
BY CORY PAGE, MPH
From Psychology Student Network, February 2024

One well-intentioned question I hated during college was, "What are you going to do with your degree after you graduate?" How could I answer that? Sure, I knew what I wanted to do, but I also knew those jobs would be highly competitive. And with the advent of artificial intelligence (AI), this question is likely scarier for current students. Will the job you want even exist in 10 years?

AI is Already Here
The McKinsey Global Institute suggested in 2017 that technologies of the time could automate 30% of activities for most occupations (McKinsey Global Institute, Nov. 28, 2017, Jobs lost, jobs gained: What the future of work will mean for jobs, skills and wages). With generative AI improvements, that estimate has undoubtedly expanded. In 2023, Goldman Sachs claimed 300 million full-time equivalent (FTE) jobs worldwide could move to automation, with AI automating at least some part of roughly two-thirds of U.S. jobs (Goldman Sachs, Apr. 5, 2023, Generative AI could raise global GDP by 7%). Future psychology jobs are undoubtedly going to be affected.
But "automate" does not necessarily mean "replace." Younger members of the U.S. workforce are more concerned than older members about AI replacing them (APA, 2023b, 2023 Work in America Survey: Artificial intelligence, monitoring technology, and psychological well-being). This fear is valid, but current studies suggest a future working alongside AI, not competing against it. So, what job prospects do current students have?

Your Skills Are Valuable
Undergraduates with a degree in psychology are perfectly suited for our changing labor market. AI can't do certain things yet, and these are things psychology programs instill in students.
For one thing, current AI can't replace counseling professions. Navigating complex emotional expressions with empathy and competency is something AI simply cannot replicate (Morgan, K., July 13, 2023, The jobs AI won't take yet, BBC). If you plan on a clinical career, then a job will be waiting for you after graduation.
But what if you don't want to be a clinician? Well, another thing all industries in the future workforce need is leaders. According to the World Economic Forum, machines fail at simulating leadership and social influence (Shine, I., May 17, 2023, These are the jobs that AI can't replace, World Economic Forum). And those with training in psychology are well on their way to becoming effective leaders. The British Psychological Society highlights that understanding human behavior is key to effecting change at any level; by deeply understanding your coworkers, you will be better able to motivate them to meet work challenges together (Gervais, R., Nov. 3, 2002, Leading the way with psychology, British Psychological Society).
And if clinical practice or leadership roles aren't of interest, you're also in luck. Soft skills like curiosity, humility, and emotional intelligence are things AI will struggle to imitate (Tong, G. C., May 9, 2023, Here are the top skills you will need for an "AI-powered future," according to new Microsoft data, CNBC). These skills are in line with the five learning goals of APA's Guidelines for the Undergraduate Psychology Major: Version 3.0 (APA, 2023a). In other words, what you're learning in your undergraduate program will make you valuable in the future labor market.



So, What Now?
By studying psychology, you've begun learning unique human skills that can't easily be automated. But AI integration is the future of the workforce. Rather than planning an AI-proof career, it's more important to learn how AI might assist you with whatever future you pursue.
If you pursue a clinical career, you may find a combination of natural-language processing and generative AI doing your notetaking for you (Capoot, A., Mar. 20, 2023, OpenAI-powered app from Microsoft will instantly transcribe patient notes during doctor visits, CNBC). This unbillable work that used to take hours can now be finished in minutes by technology that listens to sessions, transcribes everything said, and then condenses the conversation into summary topics. Other AI tools can help train psychologists by analyzing transcripts of their sessions with clients (Allen, S., Nov. 3, 2022, Improving Psychotherapy With AI: From the Couch to the Keyboard, IEEE Engineering in Medicine and Biology Society). These AI tools then coach psychologists on their use of evidence-based methods, active listening, and empathic affect for their patients.
If you don't pursue a clinical career, you may also find yourself using AI in your future work. In the field of research, AI can more quickly parse through big data models with deep-learning algorithms to spot patterns in how humans behave and why (Abrams, Z., Jul. 2023, AI is changing every aspect of psychology. Here's what to watch for, Monitor on Psychology, 54(5), 46). And across all industries, AI implementation will require workers with new skills, like prompt engineering (Clark, P. A., Feb. 2, 2023, AI's rise generates new job title: Prompt engineer, Axios) and output verification (Sauer, M., Jul. 29, 2023, AI is making some common side hustles more lucrative-these can pay up to $100 an hour, CNBC), to help run the algorithms. In other words, learning how to tell AI what to do in a way that gives you the answer you want and being able to double-check its work efficiently could both be marketable skills in coming years.
Learning about AI does not have to be a daunting task. APA has free resources online about the subject, including articles and webinars. And you can consider learning more about how generative AI tools like ChatGPT or Bard work by experimenting with them yourself. While many colleges and universities have policies against using generative AI tools for classwork, learning how these tools operate on your own could be insightful and empowering.
And now you can answer the dreaded question of what you'll likely be doing after graduation!


