AI Redefining The Future of Psychology
AI’s Profound Impact
These days, artificial intelligence (AI) is a common topic of conversation with strong—but not always recognized—connections to psychology. These connections often fall into two broad categories, both of which require our field to be proactive and strategic.

Additionally, psychological science can inform the development and use of AI. Every area of psychology can and should contribute—human factors, cognitive, social, developmental, and more. We can use our scientific understanding to make the most of the unprecedented possibilities before us.

Arthur C. Evans Jr., PhD, is the chief executive officer of APA. You can follow him on LinkedIn.
“The time is now to start integrating AI,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine, who wrote an APA guidebook on integrating AI into practice, due out in 2025. “Behavioral health professionals need to be competent on the boundaries and risks of AI but also on how it can benefit their practice.”

■ ... on AI’s broader impacts, ranging from social and cognitive to ethical and philosophical.
■ The Royal Society’s report Science in the Age of AI explores how artificial intelligence is changing scientific research across disciplines.
... AI, including questions about trust and what happens when technology makes mistakes.

... tools will provide insights on psychodynamic and other insight-based approaches. Other training opportunities include simulations that afford therapists-in-training a safe place to explore various approaches.
BENEFITS

Efficient Data Analysis: AI can process and analyze vast amounts of research data quickly, identifying trends and insights that might take humans much longer to uncover, thus accelerating the pace of psychological research.

Increased Accessibility and Affordability: AI-driven tools, such as chatbots and virtual therapists, can provide support to individuals who may not have easy access to traditional mental health services, especially in remote or underserved areas. By streamlining processes and increasing efficiency, AI can help reduce the costs associated with mental health care, making it more affordable and accessible.

Enhanced Diagnosis: AI can analyze large datasets to identify patterns and correlations, aiding in more accurate and timely diagnoses of mental health conditions.

Personalized Treatment: By leveraging data, AI can help create tailored treatment plans that consider individual client characteristics, preferences, and responses, leading to more effective interventions. Wearable devices and apps can track emotional and physiological responses, allowing for continuous monitoring and timely interventions when needed.

Support for Therapists: AI tools can assist clinicians in administrative tasks, documentation, and treatment recommendations, freeing up more time for direct patient interaction.

Improved Training: AI can simulate clinical scenarios for training purposes, providing psychology students and practitioners with valuable hands-on experience in a controlled environment.

RISKS

Bias and Inequity: AI systems can perpetuate or even amplify existing biases in data, leading to unfair treatment of marginalized groups. If algorithms are trained on biased data, they may produce skewed results that exacerbate inequalities in mental health care.

Lack of Accountability: When AI systems make mistakes, it can be challenging to identify accountability. This can lead to confusion and frustration for clients and practitioners alike, complicating the healing process.

Misdiagnosis and Mismanagement: AI systems may lack the nuanced understanding that a trained clinician possesses. Misdiagnoses or inappropriate treatment recommendations can result from overly simplistic algorithms.

Over-reliance on Technology: There is a risk that practitioners might over-rely on AI tools, potentially undermining the human element of therapy. This could lead to a diminished therapeutic relationship and neglect of individual client needs.

Privacy Concerns: The use of AI often involves handling sensitive personal data. Inadequate safeguards can lead to breaches of confidentiality, compromising clients’ trust and safety.

Ethical Dilemmas: The rapid development of AI can outpace ethical guidelines, leading to practices that prioritize efficiency over care. This raises concerns about the moral implications of decisions made by machines.
As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who spoke at the 2024 Consumer Electronics Show (CES) on “Harnessing the Power of AI Ethically.”

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risks Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”

Bias in algorithms
Government officials and researchers are not the only ones worried that AI could perpetuate or worsen inequality. Research by Mindy Shoss, PhD, a professor of psychology at the University of Central Florida, shows that people in unequal societies are more likely to say AI adoption carries the threat of job loss (Technology, Mind, and Behavior, Vol. 3, No. 2, 2022).

Those worries about job loss appear to be connected to overall mental well-being. For example, about half of employees who said they were worried that AI might make some or all of their job duties obsolete also said their work negatively impacted their mental health. Among those who did not report such worries about AI, only 29% said their work worsened their mental health, according to APA’s 2023 Work in America survey.
“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who also spoke at CES on harnessing AI ethically. “We can’t just be dismissive and say, ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”

With that complexity in mind, world leaders are exploring how to maximize AI’s benefits and minimize its harms. In 2023, the Biden administration released an executive order on the safe, secure, and trustworthy development and use of AI. ...
... developers making appropriate inferences from that data?

Conversations about algorithmic bias often center around high-stakes decision-making, such as educational and hiring selection, but Behrend said other applications of this technology are just as important to audit. For example, an AI-driven career guidance system could ...

Psychologist Celeste Kidd, PhD, of the University of California, Berkeley, argues that assumptions about AI’s capabilities, as well as the way many tools present information in a conversational, matter-of-fact way, make the risk of inheriting stubborn biases particularly high (Science, Vol. 380, 2023).

“By the point [that] these systems have transmitted the information to the person . . . it may not be easy to correct,” Kidd said in a press release from the university (Berkeley News, June 22, 2023).

Companies also can—and do—intentionally leverage AI to exploit human biases for gain, said Matute. In a study of simulated AI dating recommendations, she and graduate student Ujué Agudo found that participants were more likely to agree to date someone whose profile they viewed more than once, a choice she said is driven by the familiarity heuristic (PLOS ONE, Vol. 16, No. 4, 2021). Guidelines for ethical AI should consider how it can be designed to intentionally play on cognitive biases and whether that constitutes safe use, she added.

... and decision-making criteria accordingly. In a study she conducted with graduate student Lucía Vicente, participants classified images for a simulated medical diagnosis either with or without the help of AI. When the AI system made errors, humans inherited the same biased decision-making, even when they stopped using the AI (Scientific Reports, Vol. 13, 2023).

“AI has many biases, but we’re often told not to worry because there will always be a human in control. But how do we know that AI is not influencing what a human believes and what a human can do?”
—Helena Matute, PhD, professor of experimental psychology, Universidad de Deusto in Bilbao, Spain

“There are risks here, too, and it’s equally important to have transparency about these types of systems—how they’re deriving answers and making decisions—so they don’t create distrust,” Luxton said.

Using AI to reverse bias also requires agreeing on what needs to change in society.
The current approach to building AI tools involves collecting large quantities of data, looking for patterns, then applying them to the future. That strategy preserves the status quo, Behrend said—but it is not the only option.

“If you want to do something other than that, you have to know or agree what is best for people, which I don’t know that we do,” she said.

As a starting point, Behrend is working to help AI researchers, developers, and policymakers agree on how to conceptualize and discuss fairness. She and Landers distinguish between various uses of the term, including statistical bias versus equity-based differences in group outcomes, in their recent paper.

“These are noncomparable ways of using the word ‘fairness,’ and that was really shutting down a lot of conversations,” Behrend said.
Establishing a common language for discussing AI is an important step for regulating it effectively, which a growing contingent is seeking to do. In addition to Biden’s 2023 executive order, New York State passed a law requiring companies to tell employees if AI is used in hiring or promotion. At least 24 other states have either proposed or passed legislation aiming to curtail the use of AI, protect the privacy of users, or require various disclosures (U.S. State-by-State AI Legislation Snapshot, BCLP Law, 2023).

“It’s pretty difficult to stay on top of what the best practice is at any given moment,” Behrend said. “That’s another reason why it’s important to emphasize the role of psychology, because basic psychological principles—” ...

... requiring developers to show an audit trail, or a record of how an algorithm makes decisions. (Luxton is also writing a guidebook for behavioral health practitioners on integrating AI into practice.) When challenges arise, he suggests letting those play out through the judicial system.

“Government does need to play a role in AI regulation, but we also want to reduce the inefficiencies of government roadblocks in technological development,” Luxton said.

One thing is clear: AI is a moving target. Using it ethically will require continued dialogue as the technology grows ever more sophisticated.

“It’s not entirely clear what the shelf life of any of these conversations about bias will be,” said Shoss. “These discussions need to be ongoing, because the nature of generative AI is that it’s constantly changing.”

FURTHER READING
How psychology is shaping the future of technology
Straight, S., & Abrams, Z., APA, 2024
Speaking of Psychology: How to use AI ethically with Nathanael Fast, PhD
APA, 2024
Worried about AI in the workplace? You’re not alone
Lerner, M., APA, 2024
The unstoppable momentum of generative AI
Artificial intelligence (AI) continues to develop rapidly and is being integrated into many facets of daily life. Increasingly, AI-enabled tools are being developed for use in mental health care. These tools have a wide range of functionality; some focus on streamlining administrative tasks like scheduling or documentation, while others focus on providing clinical supports to augment traditional therapy practices. Given the proliferation of tools on the market, it is important for psychologists to develop a process to assess which tools may be right for their practice.

Following is a step-by-step guide that highlights many of the important considerations when assessing digital tools that use AI technology.
1. Company (vendor/device maker/developer)
It is important to understand who is on the leadership team of the company. If a tool is designed for use by mental and behavioral health (MBH) clinicians, are psychologists or other MBH professionals represented in leadership? For example, MBH professionals may be represented in the roles of chief medical officer or clinical director, or they may serve on advisory boards.

2. Tool functionality
Does the tool have the function(s) that is valuable to you?
• Does it integrate with software or the electronic health record (EHR) that you may already be using?
• Does it fit within your workflow and save you administrative time?
• Is it a cost-effective tool for your practice needs?
• Does the company offer demos of their product?

3. Research evidence
Is the tool an FDA-cleared digital therapeutic? Or has the company done research on its product that is available for you to review? Research could include a randomized controlled trial (RCT) or real-world effectiveness study.

4. HIPAA compliance
Does the company attest that it complies with HIPAA, GDPR, and/or other applicable privacy standards in the jurisdiction(s) where you practice (e.g., state consumer data privacy laws)? Additionally, do they offer a business associate agreement (BAA)?

5. Data security
Does the company have a clear and easily understandable data security policy? Usually, this information is found under a heading titled “Data Security” and is often found in the Privacy Policy, but some companies also have a separate “Security” or “Privacy and Compliance” webpage or document available on their website. Note, however, that a policy may not specify which security measures an organization uses.

6. Privacy policy
Is the privacy policy readily available for review before purchasing the tool or signing up for the service?
Read the privacy policy in full.
Carefully review what data are collected. This information is generally found under a heading such as “Personal Information We Collect.”
Carefully review policies on data sharing (e.g., can you decline data sharing for marketing purposes?). Common parties with whom data may be shared include third-party service providers/vendors, marketers, and law enforcement agencies (as applicable). Be aware whether the company makes any statements about selling data. Selling personally identifiable data is a violation of HIPAA and possibly other applicable data privacy and security laws.
Carefully review options to delete or correct data, generally found under sections such as “Requests to Delete Data” or “Right to Correct Data.”
Carefully review how long data are retained. This generally can be found under a “Data Retention” heading.

7. Terms of service (TOS)
Is the TOS readily available for review before purchasing/signing up for the service?
Read the TOS in full.
Carefully review the section on “Customer Data,” which also may be labeled “Protected Health Information or User Data.” This section will generally discuss how personal health information (PHI) is stored and maintained. It also may discuss business associates and BAAs.

8. Location of relevant data policies
It is important to note that while companies should provide the information described above in steps 5–7, sometimes there is variability in whether that information resides in the Privacy Policy, TOS, BAA, or some combination of those documents.

11. Base your decision on the needs of your practice
A decision about which tools to incorporate into your practice is an individual decision based on one’s practice needs. However, these steps will help you gather the relevant information needed to make an informed decision.

12. Document your review
It is important to document your initial review of the above information (see the Companion checklist [PDF, 60KB]) to demonstrate your due diligence in selecting a tool; a sample entry follows these steps.

13. Review policies for updates
Privacy policies and TOS can be periodically updated, and you are encouraged to review these updates.
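As a rough illustration of step 12, a documentation entry might look like the following (a hypothetical sketch; the tool name, dates, and findings are invented):

Tool reviewed: NotePilot AI (documentation assistant)
Date of review: 3/15/2025
Privacy policy and TOS reviewed: Yes; BAA offered; no statements about selling data
Data retention: 30 days, per the “Data Retention” heading
Decision and rationale: Adopt for administrative documentation only; revisit after the next policy update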
This “Steps to evaluate an AI-enabled clinical or administrative tool” is provided by APA as a preliminary guide for psychologists considering
the integration of clinical tools utilizing AI into their practice. It is intended to serve as a starting point for evaluation and is not exhaustive. Users
are encouraged to apply their own professional judgment and seek additional resources and guidance as needed, including legal consultation
to ensure compliance with applicable laws and regulations. APA does not endorse any specific AI tools and assumes no responsibility for the
outcomes of their use. Always ensure compliance with relevant ethical guidelines and legal requirements.
ARTIFICIAL INTELLIGENCE

Young people’s use of artificial intelligence is forcing change in classrooms. Psychologists can help maximize the smart adoption of these tools to enhance learning.

BY ZARA ABRAMS

From Monitor on Psychology, January/February 2025
Generative artificial intelligence (AI) promises to touch nearly every part of our lives, and education is one of the first sectors grappling with this fast-moving technology. With easy and free-to-access tools like ChatGPT, everything related to teaching, learning, and assessment is subject to change.

“In many ways, K–12 schools are at the forefront of figuring out practical, operational ways to use AI, because they have to,” said Andrew Martin, PhD, a professor of educational psychology and chair of the educational psychology research group at the University of New South Wales in Sydney. “Teachers are facing a room full of people who are very much at the cutting edge of a technology.”
... while keeping students motivated. They are also exploring whether educators can leverage tools such as ChatGPT without hindering the broader goals of learning.

One question should always be at the forefront, said educational psychologist Ally Skoog-Hoffman, PhD, senior director of research and learning at the Collaborative for Academic, Social, and Emotional Learning (CASEL): “How are we using AI and technology as tools to elevate the conditions and the experiences of education for students without sacrificing the human connection that we absolutely know is integral to learning?”

“Teachers are facing a room full of people who are very much at the cutting edge of a technology.”
—Andrew Martin, PhD, professor of educational psychology and chair of the educational psychology research group at the University of New South Wales in Sydney

Key Points
■ AI has been in use in classrooms for years, but a specific type of AI—generative models—could transform personalized learning and assessment.
■ Teenagers are quick adopters, with 7 in 10 using generative AI tools, mostly for help with homework.
■ Educational psychologists are studying how these tools can be used safely and effectively, including to support social and emotional learning in children and adolescents.

How Children View AI
Psychologists have studied human-technology interaction for decades. A new line of research now seeks to understand how people, including children, interact with chatbots and other virtual agents.

“Little kids learn from characters, and our tools of education already [rely on] the parasocial relationships that they form,” said David Bickham, PhD, a health communication researcher based at Boston Children’s Hospital, during a panel discussion on AI in the classroom. “How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?”

In a series of qualitative studies, Randi Williams, PhD, a program manager at the Algorithmic Justice League, a nonprofit focused on making AI more equitable, observed playful interactions between young children and robots, including the children’s attempts to both teach the agents and learn from them. Williams and her colleagues also found that children viewed agents with a more humanlike and emotive voice as friendlier and more intelligent (Proceedings of the 2017 Conference on Interaction Design and Children, 2017). But many questions remain, including how to study and foster such relationships while protecting the safety and privacy of minors—issues that psychologists are well poised to address.

Among adolescents, the use of generative AI is already widespread. Of the 7 in 10 who reported using at least one such tool in a 2024 Common Sense Media survey of 1,045 teenagers ages 13 to 18, homework help was the most common reason. About half of those who used generative AI expressed both concern about how it will change their work prospects and enthusiasm about its potential to advance science, creativity, and humanity (Teen and Young Adult Perspectives on Generative AI, Common Sense Media, Hopelab, and Center for Digital Thriving, 2024).

The Center for Digital Thriving offers guidelines for talking to youth about generative AI, including asking children what school rules seem fair and whether they have ever heard about AI getting something wrong.

Intelligent Tutoring
Much of the conversation so far about AI in education centers around how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.

A majority of teachers surveyed by the Center for Democracy and Technology, a nonprofit focused on technology policy, said they have used AI detection software to check whether a student’s work was their own, but those tools can also be fallible—in a way that could exacerbate ...
FURTHER READING
“My doll says it’s ok”: A study of children’s conformity to a talking doll
Williams, R., et al.
IDC ’18: Proceedings of the 17th ACM Conference on Interaction Design and Children, 2018
More teachers are using AI-detection tools. Here’s why that might be a problem
Prothero, A.
EducationWeek, Apr. 5, 2024
Artificial intelligence and social-emotional learning are on a collision course
Prothero, A.
EducationWeek, Nov. 13, 2023
AI in the classroom: Technology and the future of learning
Family Online Safety Institute, 2023
Using artificial intelligence tools in K–12 classrooms
Diliberti, M. K., et al.
RAND Corporation, 2024
The Promise and Perils of Using AI for Research and Writing

Psychologists and students may tap AI tools for an assist in some scenarios, but human oversight—including vetting all output and citing all uses—is essential.

BY CHARLOTTE HUFF
... submitting to English-language journals, said Rose Sokol, PhD, publisher of APA Journals and Books.

In addition, as AI continues to evolve, it could support the initial or brainstorming stages of research, said Emily Ayubi, senior director of APA Style. If a researcher is considering the pursuit of an avenue of study and wants to gain a better sense of gaps in the existing knowledge base, she said, generative AI “would theoretically be able to review the existing literature more expediently than a human being could. But you would still have to vet the output because there may be fabrications. It could make up studies that don’t actually exist.”

A good guideline is that although AI tools can support more routine steps of research and writing, they should not be relied upon, Ayubi and Sokol stressed.

At the heart of the APA Publishing policies related to generative AI, Sokol said, “is that to be an author you must be a human. The threat for students and researchers is really the same—over-relying on the technology.” When that happens, you are at risk of essentially ceding control of intellectual property to the machine, she noted. “You’ve handed that over. The machine has no accountability and no responsibility.”

“To be an author you must be a human. The threat for students and researchers is really the same—over-relying on the technology.”
—Rose Sokol, PhD, publisher of APA Journals and Books

Policies also may differ depending upon the journal or university or instructor involved, Denneny said. “You might have a professor who says, ‘Do not open ChatGPT or you’re in trouble,’ and then you might have a professor who has you use it throughout an assignment.”

Still, APA policy about generative AI use has developed consensus on several key points, as outlined in a blog post published in late 2023, including that AI cannot be listed as an author in any one of APA’s 88 scholarly publications.

“An author needs to be someone who can provide consent, who can affirm that they followed the ethical protocols of research, that they did the steps as they said they would,” said Chelsea Lee, instructional lead for APA Style. “You need a human to be able to give that consent.”
More resources
Want to keep up on the latest APA Style guidance regarding AI? Follow updates on the APA Style blog.

Related
Learn more about this topic in a recent webinar from APA Style, Process Over Product: Setting Students Up ...
If you’ve used ChatGPT or other AI tools in your research, describe how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response.

Unfortunately, the results of a ChatGPT “chat” are not retrievable by other readers, and although nonretrievable data or quotations in APA Style papers are usually cited as personal communications, with ChatGPT-generated text there is no person communicating. Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.
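For example, a sentence quoting the tool’s output might read as follows (a hypothetical illustration; the prompt and the generated text are invented for demonstration):

When prompted with “Is resilience a fixed trait?,” the ChatGPT-generated text indicated that resilience is generally understood as “a set of skills and behaviors that can be developed over time,” not a fixed trait (OpenAI, 2023).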
The reference and in-text citations for ChatGPT are formatted as follows:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

• Parenthetical citation: (OpenAI, 2023)
• Narrative citation: OpenAI (2023)

Let’s break that reference down and look at the four elements (author, date, title, and source):

Author: The author of the model is OpenAI.

Date: The date is the year of the version you used. Following the template in Section 10.10, you need to include only the year, not the exact date. The version number provides the specific date information a reader might need.

Title: The name of the model is “ChatGPT,” so that serves as the title and is italicized in your reference, as shown in the template. Although OpenAI labels unique iterations (i.e., ChatGPT-3, ChatGPT-4), they are using “ChatGPT” as the general name of the model, with updates identified with version numbers. In the example above, the version number is included after the title in parentheses. If a platform does not provide the version number, that is simply omitted from the reference. ChatGPT does not currently show users the version number. Different large language models or software might use different version numbering; use the version number in the format the author or publisher provides, which may be a numbering system (e.g., Version 2.0) or other methods.
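To illustrate how a different numbering scheme would appear, here is a reference for a hypothetical model whose publisher reports versions as “Version 2.0” (the company name, model name, and URL are invented for demonstration):

ExampleAI. (2025). ExampleChat (Version 2.0) [Large language model]. https://chat.exampleai.com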
Bracketed text is used in references for additional descriptions when they are needed to help a reader understand what’s being cited. References for a number of common sources, such as journal articles and books, do not include bracketed descriptions, but things outside of the typical peer-reviewed system often do. In the case of a reference for ChatGPT, provide the descriptor “Large language model” in square brackets. OpenAI describes ChatGPT-4 as a “large multimodal model,” so that description may be provided instead if you are using ChatGPT-4. Later versions and software or models from other companies may need different descriptions, based on how the publishers describe the model. The goal of the bracketed text is to briefly describe the kind of model to your reader.

Source: When the publisher name and the author name are the same, do not repeat the publisher name in the source element of the reference, and move directly to the URL. This is the case for ChatGPT. The URL for ChatGPT is https://chat.openai.com/chat. For other models or products for which you may create a reference, use the URL that links as directly as possible to the source (i.e., the page where you can access the model, not the publisher’s homepage).

APA Policies on Use of Generative AI
For other issues about generative AI, the APA Style team follows APA Journals policies. APA Journals has published policies on the use of generative AI in scholarly materials. For this policy, AI refers to generative LLM AI tools and does not include grammar-checking software, citation software, or plagiarism detectors.
■ When a generative artificial intelligence (AI) model is used in the drafting of a manuscript for an APA publication, the use of AI must be disclosed in the methods section and cited.
■ AI cannot be named as an author on an APA scholarly publication.
■ When AI is cited in an APA scholarly publication, the author must employ the software citation template, which includes specifying in the methods section how, when, and to what extent AI was used. Authors in APA publications are required to upload the full output of the AI as supplemental material.
■ The authors are responsible for the accuracy of any information in their article. Authors must verify any information and citations provided to them by an AI tool. Authors may use but must disclose AI tools for specific purposes such as editing.
■ No submitted content may be entered into generative AI tools, as this violates the confidentiality of the process.
... their learning and growth,” said Smith.

The Monitor talked with Smith about how she came to UX research and its implications for the future workforce.

... the many types of consumers who use your services. There are always opportunities for us to improve in creating technology that caters to people with diverse needs or disabilities.
[Infographic: Worry about AI in the workplace. 38% of workers worry AI might replace some or all of their job duties. By education, 44% of workers with a high school degree or less worry about AI replacing jobs, compared with 34% of those with a college degree or more. A companion panel breaks down worry about AI by ethnicity. Source: APA 2023 Work in America Survey: Artificial intelligence, monitoring technology, and psychological well-being]