Ethical AI Policies in Education

This document provides a summary and recommendations regarding artificial intelligence and the future of teaching and learning from the U.S. Department of Education. It outlines four foundational principles: 1) Center people (parents, educators, students) and ensure humans remain in control of decisions; 2) Advance equity and address issues of unfair bias and disparities; 3) Focus on learning, emphasizing mastery of skills over test scores; 4) Ensure transparency and oversight to build trust. The document recommends focusing AI applications on assisting and enhancing what humans do, not replacing humans. Overall, the document provides a vision for an ethical and equitable role for AI in education.


Artificial Intelligence and the Future of Teaching and Learning
Insights and Recommendations
May 2023

Artificial Intelligence and the Future of Teaching and Learning
Miguel A. Cardona, Ed.D.
Secretary, U.S. Department of Education
Roberto J. Rodríguez
Assistant Secretary, Office of Planning, Evaluation, and Policy Development
Kristina Ishmael
Deputy Director, Office of Educational Technology
May 2023

Examples Are Not Endorsements


This document contains examples and resource materials that are provided for the user’s
convenience. The inclusion of any material is not intended to reflect its importance nor is it
intended to endorse any views expressed or products or services offered. These materials may
contain the views and recommendations of various subject matter experts as well as hypertext links,
contact addresses, and websites to information created and maintained by other public and private
organizations. The opinions expressed in any of these materials do not necessarily reflect the
positions or policies of the U.S. Department of Education. The U.S. Department of Education does
not control or guarantee the accuracy, relevance, timeliness, or completeness of any information
from other sources that are included in these materials. Other than statutory and regulatory
requirements included in the document, the contents of this guidance do not have the force and
effect of law and are not meant to bind the public.

Contracts and Procurement


This document is not intended to provide legal advice or approval of any potential federal
contractor’s business decision or strategy in relation to any current or future federal procurement
and/or contract. Further, this document is not an invitation for bid, request for proposal, or other
solicitation.

Licensing and Availability


This report is in the public domain and available on the U.S. Department of Education’s
(Department’s) website at https://tech.ed.gov.

Requests for alternate format documents such as Braille or large print should be submitted to the
Alternate Format Center by calling 1-202-260-0852 or by contacting the 504 coordinator via email
at [email protected].

Notice to Limited English Proficient Persons


If you have difficulty understanding English, you may request language assistance services for
Department information that is available to the public. These language assistance services are
available free of charge. If you need more information about interpretation or translation services,
please call 1-800-USA-LEARN (1-800-872-5327) (TTY: 1-800-437-0833); email us at
[email protected]; or write to U.S. Department of Education, Information Resource
Center, LBJ Education Building, 400 Maryland Ave. SW, Washington, DC 20202.

How to Cite
While permission to reprint this publication is not necessary, the suggested citation is as follows:
U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and
the Future of Teaching and Learning: Insights and Recommendations, Washington, DC, 2023.

This report is available at https://tech.ed.gov


Building Ethical, Equitable Policies Together
In this report, we aim to build on the listening sessions the Department hosted to engage and
inform all constituents involved in making educational decisions so they can prepare for and
make better decisions about the role of AI in teaching and learning. AI is a complex and broad
topic, and we are not able to cover everything nor resolve issues that still require more
constituent engagement. This report is intended to be a starting point.

The opportunities and issues of AI in education are equally important in K-12, higher education,
and workforce learning. Due to scope limitations, the examples in this report will focus on K-12
education. The implications are similar at all levels of education, and the Department intends
further activities in 2023 to engage constituents beyond K-12 schools.

Guiding Questions
Understanding that AI increases automation, allowing machines to do some tasks that previously
only people could do, leads us to a pair of bold, overarching questions:

1. What is our collective vision of a desirable and achievable educational system that
leverages automation to advance learning while protecting and centering human agency?

2. How and on what timeline will we be ready with necessary guidelines and guardrails, as
well as convincing evidence of positive impacts, so that constituents can ethically and
equitably implement this vision widely?

In the Learning, Teaching, and Assessment sections of this report, we elaborate on elements of
an educational vision grounded in what today’s learners, teachers, and educational systems need,
and we describe key insights and next steps required. Below, we articulate four key foundations
for framing these themes. These foundations arise from what we know about the effective use of
educational technology to improve opportunity, equity, and outcomes for students and also
relate to the new Blueprint.

Foundation 1: Center People (Parents, Educators, and Students)


Education-focused AI policies at the federal, state, and district levels will be needed to guide and
empower local and individual decisions about which technologies to adopt and use in schools
and classrooms. Consider what is happening in everyday life. Many of us use AI-enabled
products because they are often better and more convenient. For example, few people want to
use paper maps anymore; they find that technology helps them plan the best route to a
destination more efficiently and conveniently. And yet, people often do not realize how much
privacy they are giving up when they accept AI-enabled systems into their lives. AI will bring
privacy and other risks that are hard to address only via individual decision making; additional
protections will be needed.

There should be clear limits on the ability to collect, use, transfer, and maintain our
personal data, including limits on targeted advertising. These limits should put the burden
on platforms to minimize how much information they collect, rather than burdening Americans
with reading fine print.8

As protections are developed, we recommend that policies center people, not machines. To this
end, a first recommendation in this document (in the next section) is an emphasis on AI with
humans in the loop. Teachers, learners, and others need to retain their agency to decide what
patterns mean and to choose courses of action. The idea of humans in the loop builds on the
concept of “Human Alternatives, Consideration, and Fallback” in the Blueprint and ethical
concepts used more broadly in evaluating AI, such as preserving human dignity. A top policy
priority must be establishing "humans in the loop" as a requirement in educational applications,
despite contrary pressures to use AI as an alternative to human decision making. Policies should
not hinder innovation and improvement, nor should they be burdensome to implement. Society
needs an education-focused AI policy that protects civil rights and promotes democratic values
in the building, deployment, and governance of automated systems to be used across the many
decentralized levels of the American educational system.

Foundation 2: Advance Equity

“AI brings educational technology to an inflection point. We can either increase disparities
or shrink them, depending on what we do now.”
—Dr. Russell Shilling

A recent Executive Order9 issued by President Biden sought to strengthen the connection among
racial equity, education, and AI, stating that “members of underserved communities—many of
whom have endured generations of discrimination and disinvestment—still confront significant
barriers to realizing the full promise of our great Nation, and the Federal Government has a
responsibility to remove these barriers” and that the Federal Government shall both “pursue
educational equity so that our Nation’s schools put every student on a path to success” and also
“root out bias in the design and use of new technologies, such as artificial intelligence.” A specific
vision of equity, such as the one described in the Department’s recent report, Advancing Digital
Equity for All,10 is essential to the policy discussion about AI in education. This report defines digital equity as

“the condition in which individuals and communities have the information technology capacity
that is needed for full participation in the society and economy of the United States.”

8. The White House (September 8, 2022). Readout of White House listening session on tech platform accountability. https://www.whitehouse.gov/briefing-room/statements-releases/2022/09/08/readout-of-white-house-listening-session-on-tech-platform-accountability/
9. The White House (February 17, 2023). Executive order on further advancing racial equity and support for underserved communities through the federal government. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity
10. U.S. Department of Education, Office of Educational Technology (2022). Advancing digital equity for all: Community-based recommendations for developing effective digital equity plans to close the digital divide and enable technology-empowered learning. U.S. Department of Education.

Issues related to racial equity and unfair bias were at the heart of every listening session we held.
In particular, we heard a conversation that was increasingly attuned to issues of data quality and
the consequences of using poor or inappropriate data in AI systems for education. Datasets are
used to develop AI, and when they are non-representative or contain undesired associations or
patterns, resulting AI models may act unfairly in how they detect patterns or automate decisions.
Systematic, unwanted unfairness in how a computer detects patterns or automates decisions is
called “algorithmic bias.” Algorithmic bias could diminish equity at scale through unintended
discrimination. As discussed in the Formative Assessment section of this document, this is not a
new conversation. For decades, constituents have rightly probed whether assessments are unbiased
and fair. Just as with assessments, whether an AI model exhibits algorithmic bias or is judged to
be fair and trustworthy is critical as local school leaders make adoption decisions about using AI
to achieve their equity goals.
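As an illustration only (not drawn from this report), the kind of interrogation described above often starts with a simple disaggregated metric: do a model's errors fall evenly across student subgroups? The sketch below, with entirely hypothetical group names and data, compares false-positive rates by subgroup for a flagging model; a large gap between groups is one signal of possible algorithmic bias worth examining before, and after, adoption.

```python
# Hypothetical sketch: probing a flagging model's predictions for uneven error
# rates across student subgroups. All names and data here are invented.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute per-group false-positive rates.

    Each record is (group, predicted_flag, actual_need): a false positive is a
    student the model flagged who did not in fact need intervention.
    """
    flagged = defaultdict(int)    # wrongly flagged students, per group
    negatives = defaultdict(int)  # students who did not need intervention
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives if negatives[g]}

# Invented predictions from a hypothetical early-intervention model:
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]

rates = false_positive_rate_by_group(records)
print(rates)  # group_b is wrongly flagged far more often than group_a
```

This single metric is only a starting point; as the report notes, such checks must be repeated as systems become widely used, since the population a model sees in practice can drift from the data it was built on.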

We highlight the concept of “algorithmic discrimination” in the Blueprint. Bias is intrinsic to
how AI algorithms are developed using historical data, and it can be difficult to anticipate all
impacts of biased data and algorithms during system design. The Department holds that biases
in AI algorithms must be addressed when they introduce or sustain unjust discriminatory
practices in education. For example, in postsecondary education, algorithms that make
enrollment decisions, identify students for early intervention, or flag possible student cheating
on exams must be interrogated for evidence of unfair discriminatory bias—and not only when
systems are designed, but also later, as systems become widely used.

Foundation 3: Ensure Safety, Ethics, and Effectiveness


A central safety argument in the Department’s policies is the need for data privacy and security
in the systems used by teachers, students, and others in educational institutions. The
development and deployment of AI requires access to detailed data. This data goes beyond
conventional student records (roster and gradebook information) to detailed information about
what students do as they learn with technology and what teachers do as they use technology to
teach. AI’s dependence on data requires renewed and strengthened attention to data privacy,
security, and governance (as also indicated in the Blueprint). Because AI models are not generally
developed with educational usage or student privacy in mind, the educational application of these
models may not align with an educational institution’s efforts to comply with federal student
privacy laws, such as FERPA, or with state privacy laws.

Figure 2: The Elementary and Secondary Education Act defines four levels of evidence.

Further, educational leaders are committed to basing their decisions about the adoption of
educational technology on evidence of effectiveness—a central foundation of the Department’s
policy. For example, the requirement to base decisions on evidence also arises in the Elementary
and Secondary Education Act (ESEA), as amended, which introduced four tiers of evidence (see
Figure 2). Our nation’s research agencies, including the Institute of Education Sciences, are
essential to producing the needed evidence. The Blueprint calls for evidence of effectiveness, but
the education sector is already ahead of the game: we must insist that AI-enhanced edtech rises to
meet ESEA standards as well.

Foundation 4: Promote Transparency


The central role of complex AI models in a technology’s detection of patterns and
implementation of automation is an important way in which AI-enabled applications, products,
and services will be different from conventional edtech. The Blueprint introduces the need for
transparency about AI models in terms of disclosure (“notice”) and explanation. In education,
decision makers will need more than notice—they will need to understand how AI models work
in a range of general educational use cases, so they can better anticipate limitations, problems,
and risks.

AI models in edtech will be approximations of reality and, thus, constituents can always ask these
questions: How precise are the AI models? Do they accurately capture what is most important?
How well do the recommendations made by an AI model fit educational goals? What are the
broader implications of using AI models at scale in educational processes?

Building on what was heard from constituents, the sections of this report develop the theme of
evaluating the quality of AI systems and tools using multiple dimensions as follows:

● About AI: AI systems and tools must respect data privacy and security. Humans must be
in the loop.
● Learning: AI systems and tools must align to our collective vision for high-quality
learning, including equity.
● Teaching: AI systems and tools must be inspectable, explainable, and provide human
alternatives to AI-based suggestions; educators will need support to exercise professional
judgment and override AI models, when necessary.

● Formative Assessment: AI systems and tools must minimize bias, promote fairness, and
avoid additional testing time and burden for students and teachers.
● Research and Development: AI systems and tools must account for the context of
teaching and learning and must work well in educational practice, given variability in
students, teachers, and settings.
● Recommendations: Use of AI systems and tools must be safe and effective for students.
They must include algorithmic discrimination protections, protect data privacy, provide
notice and explanation, and provide a recourse to humans when problems arise. The
people most affected by the use of AI in education must be part of the development of
the AI model, system, or tool, even if this slows the pace of adoption.

We return to the idea that these considerations fit together in a comprehensive perspective on
the quality of AI models in the Recommendations section.

Overview of Document
We begin in the next section by elaborating a definition of AI, followed by addressing learning,
teaching, assessment, and research and development. Organizing key insights by these topics
keeps us focused on exploring implications for improving educational opportunity and
outcomes for students throughout the report.

Within these topics, three important themes are explored:

1. Opportunities and Risks. Policies should focus on the most valuable educational
advances while mitigating risks.

2. Trust and Trustworthiness. Trust and safeguarding are particularly important in education
because we have an obligation to keep students out of harm’s way and safeguard their
learning experiences.

3. Quality of AI Models. The process of developing and then applying a model is at the
heart of any AI system. Policies need to support evaluation of the qualities of AI models
and their alignment to goals for teaching and learning during the processes of
educational adoption and use.

“AI in education can only grow at the speed of trust.”
—Dr. Dale Allen

