
sustainability

Article
Faculty and Students’ Perceptions about Assessment in Blended
Learning during Pandemics: The Case of
the University of Barcelona
Ana Remesal 1, * , Elena Cano 2 and Laia Lluch 2

1 Facultat de Psicologia, Universitat de Barcelona, 08035 Barcelona, Spain


2 Facultat d’Educació, Universitat de Barcelona, 08035 Barcelona, Spain; [email protected] (E.C.);
[email protected] (L.L.)
* Correspondence: [email protected]

Abstract: Blended teaching and learning modalities are well established in higher education, particularly after the period of remote learning imposed by the pandemic. This study aims to explore faculty and students'
perceptions about potentially empowering assessment practices in blended teaching and learning
environments during remote teaching and learning. Two samples of 129 university educators and
265 students of the University of Barcelona responded to a survey. The most salient agreement
between faculty and students deals with the accreditation purpose, thus summative function, of
assessment and a lack of students’ participation in assessment practices. At the same time, the results
show some disagreements regarding formative assessment purposes and feedback. Our results offer
implications for future blended teaching and learning designs for training students and faculty in the
pursuit of assessment literacy, and for institutional policies to ensure the sustainability of formative
assessment practices.

Keywords: assessment for learning; blended learning; feedback; generic competencies; higher
education; instructor; student

Citation: Remesal, A.; Cano, E.; Lluch, L. Faculty and Students' Perceptions about Assessment in Blended Learning during Pandemics: The Case of the University of Barcelona. Sustainability 2024, 16, 6596. https://doi.org/10.3390/su16156596

Academic Editor: Vasiliki Brinia

Received: 12 May 2024; Revised: 26 July 2024; Accepted: 31 July 2024; Published: 1 August 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

The COVID-19 pandemic generated a new hybrid scenario in which face-to-face and online teaching (synchronous and asynchronous) eventually blended. For this new instructional context, educators must consider purely technical aspects, such as, for example, the attentional arc in front of screens for learning time management [1] or differential characteristics of students with difficulties in accessing digital information [2]. However, more than these so-called technical conditions are required for a high-quality educational experience to be generated. Educators must also consider pedagogical, or rather techno-pedagogical, aspects, such as the distribution of knowledge sources and the diversity of roles to be exercised in the virtual classroom, the potential for multidirectional communicative interaction, and the very agency of the learning process, mediated by digital tools [3,4]. These specific pedagogical decisions must extend to assessment practices [5]. Therefore, knowing how assessment worked in the exceptional pandemic-affected course can be helpful to avoid perpetuating ineffective practices and to identify training needs among higher education faculty [6]. Further, in this scenario, it is relevant to study the perception of both instructors and students regarding the frequency and quality of certain assessment practices during the first confinement term regarding perceived purposes and features.

In any educational system, two primary purposes or functions of assessment are recognized. On the one hand, the regulation of teaching and learning processes, also known as formative assessment. On the other hand, the accreditation of learning results, or summative assessment. While many studies emphasize the regulatory function and highlight the relevance of assessment for learning [7], minimizing the value of summative
purposes [8], other authors point out the complementarity of both functions and the in-
dissolubility of these in any educational system, precisely because of the social function
that academic education fulfills [9]. Previous research also shows that the tendency to-
wards the summative function or purpose is more present at higher educational levels,
where education adopts a final character [10], which makes teacher training essential to
enable instructors to develop rich and balanced assessment practices [11,12]. Pastore and
Andrade define teacher assessment literacy as the “interrelated set of knowledge, skills,
and dispositions that a teacher can use to design and implement a coherent approach to
assessment within the classroom context and the school system. An assessment literate
teacher understands and differentiates the aims of assessment and articulates a sound, cycli-
cal process of collection, interpretation, use of evidence and communication of feedback”
(pp. 134–135) [13]. Other authors [14] proposed a hierarchical model for teacher assessment
literacy, including six components: knowledge base; teacher conceptions of assessment;
institutional and socio-cultural contexts; teacher assessment literacy in practice; teacher
learning; and teacher identity in its (re)construction as an assessor. Assessment literacy
still needs to be improved, particularly regarding university faculty [15], whose specific
pedagogical training is generally subject to voluntary, individual acceptance of in-service training and professional development refresher programs [6,16].
The characteristics of good assessment practices have been discussed at length in
the previous literature. There is now a broad consensus [17–19] that assessment should
encompass the following:
• Be focused on supporting the learning process.
• Be aligned with didactic goals.
• Take place throughout the learning process.
• Follow clear criteria.
• Progressively foster students’ responsibility in their learning and assessment process
by developing evaluative judgment.
These characteristics are common in face-to-face and hybrid contexts, but some other
features must be added for the virtual context. The Joint Information Systems Committee
(JISC) [20] advocates that assessment practices should encompass the following:
• Be accessible and designed under universal learning design criteria.
• Be easily automated so educators’ workload—especially clerical tasks—can be minimized.
• Be safe, respecting students’ rights and attentive to online risks.
Online assessment, hence, presents some differential features [21] that appeal not only
to technical or technological issues of security and accessibility [22] but also to the instruc-
tional design itself and the new possibilities of interaction with and among students [23].
In that sense, some authors [24] propose productive online assessment tasks rather than
re-productive, where students must elaborate, compare, and revise their productions in a
cyclic way to strengthen evaluative judgment [25]. Indeed, virtual scenarios may constitute
a privileged scenario to promote assessment for learning [23,26].
Formative assessment, in the pursuit of a steady improvement in teaching and learning
processes, seeks to support these processes so that students can benefit from assessment and
develop their abilities to become effective evaluators of their own and others’ work, which
is an essential competence in today’s professional world [27]. It is the so-called assessment
for learning (AFL), which we understand, together with Klenowski, as a "process that
is part of the daily practice of students, teachers and peers by which they seek, reflect
on and respond to information (derived) from dialogue, demonstration and observation
in a way that enhances continuous learning” (p. 264) [28]. The concept was initially
driven by Sadler [29] and extensively developed later on by Hawe and Dixon [30], among
others. Assessment for learning is associated with participatory processes [31], for example,
through peer assessment strategies [32,33] and through self-assessment practices [34].
Involving students explicitly in the assessment process implies helping them develop
their evaluative competence—and assessment literacy—by fostering evaluative judgment.
Previous works define evaluative judgment as the ability to make decisions about the
quality of one’s work and that of others, which involves being able to spot the excellence
of a production or a process, as well as applying that understanding in the assessment of
one's work and others' work [19]. In addition, feedback appears as another element of
impact on learning, focused on processes and with a self-regulatory character [35–37].
The need for formative feedback may have been even more significant during the
pandemic, as online instructional designs need carefully planned feedback to maintain
learner engagement [38]. This requires a specific evaluative literacy [39,40] and awareness
of the formative potential of assessment processes.
We propose to study the purposes and characteristics that both the faculty and the
students attribute to assessment practices implemented in the context of confined terms
during the pandemic, guided by these research questions: Do students and faculty share similar evaluations of the remote assessment practices they experienced? To what extent did students participate appropriately in those assessment practices? Which personal and contextual variables most affect the participants' evaluations?
In this project, we conducted descriptive and exploratory research to explore instruc-
tors and students’ perceptions during the two academic terms affected by COVID-19 at the
University of Barcelona. Data were collected using two different surveys for students and
instructors. The specific research goals (RGs) were as follows:
RG1: Explore faculty and students’ perceptions of the purposes of assessment practices
carried out in blended teaching environments during the academic terms affected by total
or partial lockdown.
RG2: Explore faculty and students’ perceptions of the characteristics of assessment practices
carried out in blended teaching environments during the academic terms affected by total
or partial lockdown.
RG3: Compare student and faculty’s perspectives specifically on those assessment practices
associated with a formative purpose and students’ agency increase.
RG4: Explore likely connections between the two collectives’ perceptions by considering
the following variables: general satisfaction with the experience of remote teaching and
learning, gender, previous experience in online teaching and learning, academic course,
and teaching experience.
The conceptual framework guiding this study is grounded in the dual purposes
of assessment: formative (regulating teaching and learning processes) and summative
(accrediting learning outcomes). While the literature often highlights the importance of
formative assessment, the complementarity and necessity of both functions in educational
systems are acknowledged. Teacher assessment literacy is crucial for developing effective
assessment practices. Also, promoting active participation of both faculty and students in
high-quality assessment practices is essential for sustainable education. Engaging in these
practices ensures continuous improvement and fosters a culture of lifelong learning, making
the educational ecosystem more resilient and adaptable. By investigating these aspects, this
study aims to contribute to sustainable education by promoting assessment literacy among
faculty and students, thereby fostering more balanced and effective assessment practices in
blended learning environments.

2. Method
We conducted descriptive, exploratory survey research. The research team, composed of an interdisciplinary group of instructors, invited all faculty and students from the Schools of Law, Pharmacy, Mathematics, History, Information and Audiovisual Communication, Psychology, and Education to participate.

2.1. Participants
A total of 394 individuals responded to the invitation: 129 instructors and 265 students.
Only 28% of the teaching staff declared having previous experience in online teaching,
while for the students, the percentage of previous e-learning experience dropped to 20%.
Tables 1–4 further describe the participants.

Table 1. Participants: schools and degrees.

Degree (Students %) | Faculty/School (Instructors %)


Mathematics (26.4)
Education (33.3)
Primary Teacher (19.3)
Psychology (16.3)
Pharmacy (18.1)
Geography and History (15.5)
Informatics (10.6)
Pharmacy (14.7)
Archeology (9.4)
Mathematics (8.5)
Management and Public Administration (7.9)
Law (7)
Audiovisual Communication (5.3)
Information and Audiovisual Media (4.7)
Psychology (3)

Table 2. Participants: academic courses.

Course Students (%) Instructors (%)


First course 77 (29.1) 36 (27.9)
Second course 70 (26.4) 26 (20.2)
Third course 49 (18.5) 35 (27.1)
Fourth course 69 (26.0) 32 (24.8)

Table 3. Participants: years of teaching experience.

Teaching Experience Instructors (%)


Less than ten years 42 (32.6)
Between 11 and 20 years 27 (20.9)
Between 21 and 30 years 34 (26.3)
More than 30 years 26 (20.2)

Table 4. Participants: gender.

Gender Female (%) Male (%)


Students 165 (62.26) 100 (37.73)
Instructors 79 (61.24) 50 (38.76)

2.2. Instruments and Data Collection


Data were collected online in March 2021 through surveys for students and instructors.
We disseminated the surveys to potential participants via institutional communication chan-
nels. The responses collected refer to the second semester of the academic year 2019–2020
until the second semester of the academic course 2020–2021 (still active during the data
collection). For the construction and application of these surveys, we followed all the
procedures of responsible research and the institutional Code of Good Research Practices.
All participants provided informed consent, and data were treated anonymously, stored in institutional facilities, and duly returned to participants.
The first section of each survey included the informed consent and demographic data.
Next, 13 items were presented to investigate the perceptions of students and instructors on
the following aspects (Table 5).

Table 5. Dimensions and items of the surveys linked to research goals.

Research Goals | Dimensions in the Student Survey | Dimensions in the Instructor Survey | Items in Surveys

RG1. To explore the primary purposes of assessment practices in the blended courses of confinement | Primary goals of assessment practice as perceived | Primary goals of assessment practice as intentionally designed | P1—Identify students' needs (formative); P2—Identify the level of learning performance (summative); P3—Orient the learning process (formative); P4—Certify learning (summative).

RG2. To explore the features of assessment practices. | Types and characteristics of assessment practices as perceived | Types and characteristics of assessment practices as intentionally designed | C1—Assessment activities are productive, requiring active elaboration on the students' side; C2—Assessment activities are coherent with the course goals and pursued competencies; C3—Students are invited/expected to assume an active role in defining and comprehending assessment goals; C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria; C5—Students may self-assess; C6—Students may carry out peer assessment; C7—Students may integrate feedback into subsequent steps of learning tasks; C8—Students have the chance to reflect upon feedback; C9—Assessment practices promote using digital tools to offer and receive feedback.

RG3. To compare these purposes and features. | Goals and characteristics of assessment practices | Goals and characteristics of assessment practices | Same items as RG1 and RG2.

RG4. To explore links between personal characteristics and online assessment perceptions | Identification data | Identification data | See Tables 1–4.

• The primary purposes of assessment practices in their courses. According to the literature, the survey addressed four purposes (P1 to P4).
• Quality characteristics of the assessment practices in their courses. We considered nine
characteristics according to the previous theoretical review (C1 to C9).
All items were rated on a five-point Likert scale. Participants could answer on a range from 1 to 5, with 1 point meaning "Do not agree at all/Not at all/Never" and 5 points
meaning “Strongly agree/Very much/Very often”, with the additional option of “No
answer/Do not know/Not applicable”.
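As an illustration of how such responses could be coded for analysis, the sketch below (Python/pandas) maps the five agreement levels to numbers and treats "No answer/Do not know/Not applicable" as missing data. The column names and the "NA" label are hypothetical; they are not the actual export format of the surveys.

```python
import pandas as pd

# Hypothetical raw export: one column per survey item, values "1".."5" or "NA"
# ("No answer/Do not know/Not applicable"). Column names are illustrative only.
raw = pd.DataFrame({
    "P1_identify_needs": ["4", "NA", "2", "5"],
    "C5_self_assess":    ["1", "3", "NA", "4"],
})

# 1 = "Do not agree at all/Not at all/Never" ... 5 = "Strongly agree/Very much/Very often"
LIKERT = {str(v): float(v) for v in range(1, 6)}

# "NA" is not in the mapping, so it becomes NaN and is excluded from later
# non-parametric contrasts instead of being counted as a low rating.
coded = raw.apply(lambda col: col.map(LIKERT))
print(coded.describe())
```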

2.3. Data Analysis and Procedure


For the data analysis, we first descriptively explored the data to see their distribution
and behavior. Subsequently, to address RG1, RG2, and RG3, we performed a non-parametric comparison for unpaired ordinal data (Mann–Whitney U-test) to observe whether there were significant differences between the students' and teaching staff's perceptions and to
eventually determine the effect size of the found differences. To address RG4, we carried
out a Chi-square contrast.
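As an illustration of the contrast just described, the following sketch (Python, using numpy and scipy; not the authors' actual analysis code) runs a two-tailed Mann–Whitney U-test between two unpaired groups and computes a pooled-standard-deviation Cohen's d as the effect size. The sample vectors are placeholders standing in for one survey item.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation for two independent samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.std(ddof=1) ** 2 + (ny - 1) * y.std(ddof=1) ** 2)
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

# Placeholder 1-5 ratings for one item, missing answers already dropped.
instructors = np.array([4, 5, 3, 4, 5, 4, 2, 5])
students = np.array([3, 2, 4, 3, 1, 3, 2, 4, 3, 2])

u_stat, p_value = mannwhitneyu(instructors, students, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.5f}, d = {cohens_d(instructors, students):.3f}")
```

With the real item vectors, the U statistic, two-tailed p value, and d correspond to the columns reported in Tables 6 and 7.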

3. Results
We have organized this Results Section into separate subsections dedicated to each
research question. As a first general result, we highlight a higher global satisfaction on the
faculty’s side compared to students’ perspectives, as faculty’s mean value was M = 3.56
and SD = 0.95, while that of students was M = 2.99 and SD = 1.09. We identified indeed a
significant difference in those results (U = 12,088.5, p < 0.00001) between groups, with a
moderate effect size (d = 0.557), which confirms the differential perspective of instructors
and students in this emergency teaching and learning experience, supporting thus the need
for the subsequent analyses.
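Under the usual pooled-standard-deviation definition of Cohen's d (an assumption on our part, since the formula is not spelled out in the text), the reported effect size can be approximately recovered from the group statistics above:

\[
d = \frac{M_{\mathrm{inst}} - M_{\mathrm{stud}}}{SD_{\mathrm{pooled}}}, \qquad
SD_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{inst}}-1)\,SD_{\mathrm{inst}}^{2} + (n_{\mathrm{stud}}-1)\,SD_{\mathrm{stud}}^{2}}{n_{\mathrm{inst}} + n_{\mathrm{stud}} - 2}}
= \sqrt{\frac{128 \cdot 0.95^{2} + 264 \cdot 1.09^{2}}{392}} \approx 1.05,
\]
\[
d \approx \frac{3.56 - 2.99}{1.05} \approx 0.55,
\]

which is close to the reported d = 0.557; the small gap presumably reflects rounding of the published means and standard deviations.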

3.1. Assessment Purposes (RG1)—Differences Found between Collectives (RG3)


As for the four assessment purposes included in the survey (see Table 6), the two items
of the survey relating to the summative purpose of assessment received higher scores—
though different—from both groups: P2 (identify the level of learning performance) and
P4 (certify learning). In contrast, the purposes linked to formative assessment (P1 identify
students’ needs and P3 orient the learning process) show a lower mean for both groups.
The Mann–Whitney U-test showed significant differences in the four items between the
perception of the faculty and that of students, with consistently higher values among
instructors. Effect sizes varied from moderate to high.

Table 6. Purposes of assessment practices as perceived by faculty and students. ** values of p indicate significant differences at 99%.

Assessment Purposes | Students (n = 265) M (SD) | Instructors (n = 129) M (SD) | Mann–Whitney U-Test | p (Two-Tailed) | Effect Size (Cohen's d)
P1—Identify students' needs (formative) | 2.91 (1.37) | 3.48 (1.33) | 12,866 | 0.00003 ** | 0.422
P2—Identify the level of learning performance (summative) | 3.14 (1.20) | 4.07 (1.20) | 9203.5 | <0.00001 ** | 0.775
P3—Orient the learning process (formative) | 3.06 (1.23) | 4.02 (1.13) | 9515 | <0.00001 ** | 0.813
P4—Certify learning (summative) | 3.47 (1.31) | 4.33 (1.17) | 9944.5 | <0.00001 ** | 0.692

Figure 1 shows these results graphically. We can observe that the instructors’ percep-
tions were consistently higher, and that the horizontal axis (summative purpose) predomi-
nates over the vertical axis (formative). Although there was a significant difference among
the participants for all the assessment functions, the effect size was just moderate for the
formative function of needs identification, together with the lowest mean for both groups of participants, which coherently points to a lower presence of this very important formative function of assessment, while the other three functions were much more present in both faculty's and students' perceptions.
Figure 1. Contrasting perceptions of assessment goals: faculty versus students. [Radar chart of the mean ratings of P1 to P4 for both groups.]

3.2. Assessment Features (RG2)—Differences Found between Collectives (RG3)

The results refer to the dimensions and categories presented in Table 7, and they show that, following their professional responsibility, instructors perceive specific characteristics of good assessment practices more frequently than students (see Table 6), since their declared perception was systematically higher. Students' evaluation is below three points in all items but the first one; in other words, their perception of eight out of nine assessment features is rather negative.

Table 7. Characteristics of assessment practices as perceived by faculty and students. * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.

Characteristics of Assessment Practices | Students (n = 265) M (SD) | Instructors (n = 129) M (SD) | Mann–Whitney U-Test | p (Two-Tailed) | Effect Size (Cohen's d)
C1—Assessment activities are productive, requiring active elaboration from students. | 2.57 (1.13) | 3.83 (1.17) | 7652 | <0.00001 ** | 1.095
C2—Assessment activities are coherent with the course goals and pursued competencies. | 3.13 (1.0) | 4.16 (1.02) | 7898.5 | <0.00001 ** | 1.019
C3—Students are invited/expected to assume an active role in defining and comprehending assessment goals. | 2.47 (1.17) | 2.75 (1.20) | 14,865 | 0.01786 * | 0.185
C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria. | 2.12 (1.14) | 2.45 (1.14) | 14,248.5 | 0.00368 ** | 0.289
C5—Students may self-assess. | 2.58 (1.22) | 2.83 (1.47) | 15,546 | 0.07215 | 0.185
C6—Students may carry out peer assessment. | 2.37 (1.20) | 2.72 (1.49) | 15,013.5 | 0.025 * | 0.258
C7—Students may integrate feedback into subsequent steps of learning tasks. | 2.56 (1.15) | 3.88 (1.03) | 7001.5 | <0.00001 ** | 1.209
C8—Students have the chance to reflect upon feedback. | 2.74 (1.23) | 4.0 (0.91) | 7508 | <0.00001 ** | 1.164
C9—Assessment practices promote using digital tools to offer and receive feedback. | 2.91 (1.24) | 3.81 (1.16) | 10,079.5 | <0.00001 ** | 0.749

For both groups, the assessment characteristics with the highest (C2) and lowest (C4) perceived frequency, respectively, coincide. However, for students, the item with the highest reported frequency is the only one with a value of barely over three points (3.13). Meanwhile, the instructors' perceptions of their pedagogical coherence are much higher, revolving around four points in five out of nine items. It is also noteworthy that the four remaining items (C3, C4, C5, and C6), with faculty's declared perception below a mean of three points, refer precisely to those assessment characteristics more focused on students' agency in assessment. These four items also present the lowest effect sizes altogether, contrasting with strong effect sizes for differences between participants for all the other items. Figure 2 depicts the distribution of results and allows us to visualize the coincidences and divergences between the responses of both groups.

Figure 2. Contrasting perceptions of assessment practices: faculty versus students. [Radar chart of the mean ratings of C1 to C9 for both groups.]

3.3. Incidence of Mediating Variables (RG4)

We considered several variables that could affect the perceptions of both collectives regarding the assessment practices. In the case of instructors, we asked them about the following:
• General global satisfaction with the remote teaching experience;
• Gender;
• Previous experience in online teaching;
• The course with main teaching duties in those semesters (first to fourth year of Bachelor's degree);
• Years of teaching experience (up to 10 years, 11–20 years, 21–30 years, more than 30 years).
In the case of students, we asked them about the following:
• General global satisfaction with the remote learning experience;
• Gender;
• Previous experience in online teaching;
• The course enrolled (first to fourth year of Bachelor's degree).
We present the following subsections grouping results from both participant samples, referring first to variables shared by students and instructors and then to the teaching experience of instructors. We will present only those results where significant differences could be identified, together with at least a moderate effect size.
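A minimal sketch of the kind of contingency contrast reported in the following subsections is given below (Python; the data frame and column names are hypothetical, not the study dataset). It cross-tabulates satisfaction (1 to 5) against one item rating (1 to 5), runs the Chi-square test (df = 16 for a 5 x 5 table), derives phi as sqrt(chi2/n), which reproduces most of the Phi values reported in Tables 8–13 (for example sqrt(78.21/265) ≈ 0.543 for P1 in Table 8), and adds a Goodman–Kruskal gamma for the ordinal association.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def goodman_kruskal_gamma(table):
    """Goodman-Kruskal gamma from a contingency table of two ordinal variables."""
    t = np.asarray(table, float)
    n_rows, n_cols = t.shape
    concordant = discordant = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            concordant += t[i, j] * t[i + 1:, j + 1:].sum()  # pairs ordered the same way
            discordant += t[i, j] * t[i + 1:, :j].sum()      # pairs ordered in opposite ways
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical responses: overall satisfaction and the rating of one assessment item (1-5 each).
df = pd.DataFrame({"satisfaction": np.random.randint(1, 6, 265),
                   "item": np.random.randint(1, 6, 265)})
table = pd.crosstab(df["satisfaction"], df["item"])

chi2, p, dof, _ = chi2_contingency(table)
phi = np.sqrt(chi2 / table.values.sum())  # phi-type effect size alongside chi-square
gamma = goodman_kruskal_gamma(table.values)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.3f}, gamma = {gamma:.3f}")
```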

3.3.1. Students’ and Faculty’s Global Satisfaction


First of all, regarding students (see Table 8), all aspects explored through the survey
drew a connection between students’ satisfaction—which was a priori asked—and their
perception of both assessment purposes and special features in the assessment practices
during remote teaching, except for the sixth characteristic, referring to peer assessment and
the coherence between assessment activities and goals of the course. The more satisfied
the students declared themselves, the more sensitive they proved to be towards assess-
ment practices. These results present a strong significance in all cases, but only moderate effect sizes, particularly concerning the four assessment purposes and the first two characteristics of assessment practices, with C1 dealing with the presence of complex elaborative assessment tasks.

Table 8. Students' general satisfaction, purposes, and characteristics of assessment practices. ** values of p indicate significant differences at 99%.

Students (n = 265, df = 16) | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
P1—Identify students' needs (formative) | 78.21 | 0 ** | 0.543 | 0.51
P2—Identify the level of learning performance (summative) | 99.688 | 0 ** | 0.613 | 0.515
P3—Orient the learning process (formative) | 99.39 | 0 ** | 0.612 | 0.549
P4—Certify learning (summative) | 86.145 | 0 ** | 0.57 | 0.484
C1—Assessment activities are productive, requiring active elaboration from students. | 76.377 | 0 ** | 0.537 | 0.496
C2—Assessment activities are coherent with the course goals and pursued competencies. | 75.316 | 0 ** | 0.529 | 0.501

Faculty, on their side, showed some connection between their general satisfaction with
remote teaching and the recognition of the assessment purpose of identifying students’
needs, and there was also a higher perception of peer assessment practices, usable feedback,
and the integration of digital tools in assessment (C6, C7, and C9) in all cases with a
moderate effect size, as shown in Table 9.

Table 9. Faculty's general satisfaction, purposes, and characteristics of assessment practices. ** values of p indicate significant differences at 99%.

Faculty (n = 129, df = 16) | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
P1—Identify students' needs (formative) | 109.078 | 0 ** | 0.522 | 0.294
C6—Students may carry out peer assessment. | 36.366 | 0.0026 ** | 0.529 | 0.403
C7—Students may integrate feedback into subsequent steps of learning tasks | 33.164 | 0.0070 ** | 0.507 | 0.466
C9—Assessment practices promote using digital tools to offer and receive feedback. | 40.14 | 0.0007 ** | 0.56 | 0.443

3.3.2. Students’ and Faculty’s Gender


We searched for differences in participants’ responses in connection with their gender.
In this case, we found no differences among instructors and only weak to moderate con-
nections for students in both assessment characteristics referring to self-assessment and
a slightly stronger connection—medium effect size—for peer assessment (C5 and C6), as
shown in Table 10. Particularly, women appeared to be more sensitive to these practices
and evaluated them higher.

Table 10. Students' gender and features of assessment practices. * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.

Students (n = 265, df = 16) | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
C5—Students may self-assess. | 9.805 | 0.0438 * | 0.193 | −0.239
C6—Students may carry out peer assessment. | 36.366 | 0.0026 ** | 0.529 | 0.403

3.3.3. Faculty’s Previous Experience with Online Teaching and Learning


As we indicated in a previous section, both groups—faculty and students—had scarce
experience in online teaching and learning prior to the pandemic (under 30% in both cases).
Both groups are similar regarding this aspect. This condition is comprehensible, given that
the University of Barcelona is a traditional face-to-face institution where online devices are
considered a complement rather than a requirement.
In searching for connections between participants and prior online experience, we
found no relevant result for faculty in terms of a likely connection between previous online
experience or its lack and the perception of assessment purposes and features in the remote
teaching semester, and only weakly significant results, and a minimal effect size, regarding
the students without previous online experience with respect to the formative purpose
of pinning students’ needs (P1) (χ2 = 9.688 (df = 4), n = 265, p = 0.0460 *, phi = 0.191,
gamma = 0.112).

3.3.4. Students’ Enrolled Course during the Study and Faculty’s Main Course of Teaching
Concerning the course in which students were enrolled during the data collection pro-
cess, very weak differences were found regarding the perception of purposes of assessment,
both summative (P2) and formative (P3). Stronger differences, however, still of minimal
effect size, referred to two features of the assessment, both related to students’ opportunity
to reflect upon learning (C5 and C8). In this case, students of lower courses (first and
second year) were more positive in their perception of these practices (see Table 11).

Table 11. Students' academic course and features of assessment practices. ** values of p indicate significant differences at 99%.

Students (n = 265, df = 16) | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
C5—Students may self-assess. | 28.338 | 0.0049 ** | 0.328 | −0.241
C8—Students have the chance to reflect upon feedback. | 28.106 | 0.0053 ** | 0.275 | −0.068

Regarding instructors, only weak differences were identified with respect to purposes
of assessment P1 and P2, with a minimal effect size. As both collectives showed internal differences regarding the assessment purpose of identifying the learning performance level (P2), we searched for intergroup differences, presented in Table 12. Following the
results, less experienced students (first and second grade) coincided with instructors in their
higher perception of this assessment purpose, whereas in higher courses (third and fourth
grade), we found more differences among participants, with students less sensitive and
satisfied with this aspect and disagreeing more with faculty. In other words, students with
higher education experience prior to the pandemics seemed to be more critical regarding
the assessment of performance levels during remote teaching compared with those students
without previous higher education experience.

Table 12. Difference between faculty and students regarding P2 and academic course. * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.

Students/Faculty | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
First grade (n = 123, df = 4) | 18.188 | 0.0011 ** | 0.358 | 0.533
Second grade (n = 95, df = 4) | 26.273 | 0 ** | 0.526 | 0.745
Third grade (n = 84, df = 4) | 13.074 | 0.0109 * | 0.395 | 0.497
Fourth grade (n = 101, df = 4) | 24.233 | 0.0001 ** | 0.49 | 0.46

3.3.5. Faculty’s Teaching Experience


Table 13 shows the results of the connection between faculty’s teaching experience and
the perception of assessment purpose and practice characteristics. Although significance
was not very strong in any of the cases, effect sizes were close to moderate. Faculty with
more teaching experience appeared to be more negative in their perception of the purpose
of determining students’ performance level in comparison with less experienced faculty.
This latter group also showed themselves as being more positive towards assessment
practices where students could assume an active role in defining assessment criteria (C4)
and also assessment practices with a formative use of feedback (C7, C8, and C9).

Table 13. Faculty's teaching experience, purposes, and characteristics of assessment practices. * values of p indicate significant differences at 95%.

Faculty (n = 129, df = 12) | Chi-Square χ2 | p (Two-Tailed) | Phi (Effect Size) | Gamma
P2—Identify the level of learning performance (summative). | 21.839 | 0.0394 * | 0.411 | 0.115
C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria. | 23.901 | 0.021 * | 0.43 | −0.239
C7—Students may integrate feedback into subsequent steps of learning tasks. | 21.849 | 0.0393 * | 0.412 | −0.337
C8—Students have the chance to reflect upon feedback. | 24.587 | 0.0169 * | 0.437 | −0.266
C9—Assessment practices promote using digital tools to offer and receive feedback. | 21.306 | 0.0461 * | 0.406 | −0.083

4. Discussion
In this paper, we share the results of the perceptions of a sample of instructors and
students of the University of Barcelona on the assessment practices carried out during the
period of blended education affected by the pandemic. In a certain way, these perceptions
also refer to both collectives’ conceptions of assessment.
Firstly, with respect to RG1 and RG2 (to explore faculty and students’ perceptions of
the purposes and characteristics of assessment practices carried out), we must highlight that
it would have been desirable to reveal the formative purposes of the assessment (P1 and P3)
as the predominant perceptions [7]. However, in our results, the participants perceived
summative rather than formative assessment purposes. Both instructors and students
report similar perceptions, which reinforces the consistency and validity of these results [9].
Teachers highly value purposes P2—identify the level of learning performance (sum-
mative) and P4—certify learning (summative). This points to an assessment culture closely
linked to a summative vision. However, the diagnostic purpose of assessment deserves
special attention. The lack of attribution of a diagnostic purpose to assessment (both on the
part of the students and on the part of the faculty) is certainly alarming since the adjustment
of assessment procedures to particular students or the possibility of personalization of
certain proposals is being lost. Higher education is, by itself, oriented towards final certification. However,
research indicates that the diagnostic function of assessment can and should be performed
throughout the educational experience to adjust teaching practices and resources to stu-
dent characteristics, adapt programming and curricular materials, and eventually offer
educational support, specific to those who need it, accomplishing formative assessment.
Formative assessment is valid at any time in the teaching and learning process [11,41],
but, in this case, it was not valued. This result is also consistent with previous studies of
preuniversity educational levels that locate a predominance of summative and accreditive
purposes in conceptions and practices at the end of compulsory education [10].
This seems far from the advocated active role of students in the assessment process [32,42], in which students make sense of instructors' feedback and use it efficiently for further learning, and it reveals the need for sustainable evaluation practices. Further-
more, as reported by some previous studies [7], these imbalances toward summative purposes
would also not be nurturing or supportive of evaluative judgment [19,43].
Regarding the characteristics of the assessment processes, as evaluated by the par-
ticipants in our study, both students and faculty coincide, reinforcing what is indicated
in the literature [9], as far as they value more the second characteristic C2 (assessment
activities are consistent with course goals and pursued competencies) but less the fourth
C4 (students are invited/expected to assume an active role in defining and understanding
assessment criteria). Constructive alignment is valued [17]. However, it is alarming that nei-
ther students nor instructors consider that an active role in understanding and establishing
assessment criteria is important. To strengthen learning self-regulation processes [30,34],
this first phase of appropriation/participation in the criteria is decisive.
Secondly, regarding RG3 (to compare student and faculty’s perspectives), there is
a notable difference in satisfaction with experience; this is significantly greater for the
teaching staff than for the students.
Regarding purposes and characteristics, there are several things that can be com-
mented on. Regarding activities that require creative elaboration or production by students,
such as the coherence between assessment practices and degrees, generic competencies
and course objectives, or the opportunity to reflect on and react to feedback, all these
characteristics of assessment practices were reported more frequently and strongly by
faculty than by students.
Students generally value any function of assessment less than instructors. In other
words, a deeper assessment and feedback literacy of students is required [32]. And, in
summary, instructors’ discourse and practices seem less aligned than expected, since
students do not confirm their perceptions [18,40].
However, regarding the characteristics of assessment practices, there are some note-
worthy similarities between participants. Once again, students place little value on any
of the characteristics of this experience with emergency online assessment. They are only
closer to faculty in relation to characteristic C5 (students may self-assess). This perception
of their chance to make judgments about the quality of their own processes and products
could be the starting point for the development of self-assessment processes, with adequate
training [34].
Finally, in relation to RG4 (to analyze assessment perceptions considering satisfaction,
academic course, gender, or previous experience in online teaching and learning), regarding
the students, and in relation to assessment purposes, the results of our study show that
first- and second-year students were somewhat more positive regarding the assessment as
an opportunity to reflect on learning than older students. This is also consistent with other
recent studies [38] and underscores the importance of articulating first-year experiences
in higher education to consolidate this vision and maintain it throughout the curriculum.
Nevertheless, it is also this subset of less experienced students who reveals more sensitive-
ness to the certifying assessment purpose, as recently coming from secondary education,
where grades continue to have great importance, especially because of the role they play in
access to university.
Also, the variable “level of satisfaction” seems to correlate with students’ sensitiveness,
since the more satisfied the students declared themselves, the more positive they were in
perceiving assessment practices. Specifically, despite the generally lower perception of
formative assessment, our results also point to a relationship, although moderate, between
perceiving this purpose by students with higher levels of satisfaction. This would require
further studies to understand the actual association between both constructs and be able to
make decisions regarding training or institutional assessment policy.
In contrast, those students with experience in higher education before the pandemic
turned out to be more critical regarding online assessment than those students without
previous experience in higher education. Finally, women proved to be more sensitive to
peer assessment practices. This corroborates previous experiences that give women a more
conscious and dedicated role [44].
Regarding instructors, the lack of previous online experience did not appear to influ-
ence their responses. The situation was so unexpected and exceptional that we all made an
extraordinary effort to adapt. In this sense, it is worth considering whether these results
reflect only the urgent measures taken in response to the exceptional situation of the pan-
demic, for which almost three quarters of the teaching staff lacked previous experience in
blended education contexts [45], or if they reveal previous deficiencies [46–48]. The years of
teaching experience, however, showed differences among instructors: the less experienced
instructors did not assign as much importance to the certifying purpose compared to more
experienced colleagues. However, the latter group was more inclined towards assessment
practices where students could take an active role in defining the assessment criteria (C4),
and also towards assessment practices with a formative use of feedback (C7, C8, and C9).
The finding that instructors, globally seen, rarely refer to practices in which students have opportunities to participate actively in the assessment process is worrying.
It is likely that their initial training influenced their beliefs many years back; thus, they
are more traditionally tuned. Older faculty might also be more critical of professional
development programs and more resistant to change. However, previous studies in the
pandemic prevent us from associating age (or gender) with tackling the challenge of
using new digital resources [49]. Other studies, in contrast, do point to instructors’ digital
competence prior to the global crisis, and also more general conceptions of teaching and
learning, to be at the heart of the challenges encountered during the pandemic [50]. As
some authors state [51], faculty assessment literacy, particularly feedback literacy, is at
risk until sustained institutional support by directive positions and administrators is
warranted. In that sense, fostering institutional actions to improve lifelong learning and
lowering barriers to teaching innovation—such as boosting teaching teams or regulating
qualification norms [6]—become crucial. Previous research has also recently warned of
the danger of considering feedback literacy as something purely subjective, linked to the
individual profiles of instructors, and has advocated instead for the need to approach this
construct from a more communal and institutional perspective [39].
Thus, not only individual but also in-team teacher training are critical for the develop-
ment of good assessment practices. The results also show that previous experience with
online teaching allowed for better self-reported use of online assessment strategies, so
that, without diminishing the value of face-to-face education at most universities, perhaps
pandemics and the experience of emergency remote teaching have brought us evidence
of our general need to consider online teaching and learning resources as a continuous
companion, moving from the extraordinary to the ordinary. Also, in institutional terms, we
advocate for the creation of teaching teams that may share and consolidate good assessment
practices and collaborate to foster the progressive development of evaluative judgment
and self-regulated learning [52].
We presented a first exploratory approach to assessment practices in emergency
blended learning in our face-to-face institution. Overall, our results outline assessment
practices that are still far from active and formative proposals, with scarce space for student
participation in negotiating goals, neither for reflection on the practices themselves nor
on assessment criteria, especially in courses approaching the end of the degree program,
where a summative and accreditation perspective dominates. This constitutes a future
challenge to support assessment literacy [32,53].
Regarding our results, the lower perception of some characteristics of assessment
practices in relation to teaching experience deserves special attention. It alerts us of the need
for ongoing professional development for senior faculty. The fading off of the diagnostic
purpose and more participatory practices are other results that point to the need for
a deeper pedagogical reflection on online assessment proposals [5]. Also, as noted by
previous studies, the scarcity of previous experience in online teaching leads to a deficit in digital teaching competence [54] and makes it difficult to draw conclusions from the
instructors’ sample. However, concerning students, our results point to previous online
experience as a differential factor for the higher degree of appreciation of more creative
and productive assessment tasks, with integrated, reusable feedback. One could state that
these assessment features—related to formative assessment—are more salient or accessible
to students’ perceptions in online or hybrid educational contexts, where participants’
actions persist over time [23], and this particularity should be turned to advantage [55]. In the
case of online assessment practices, Forsyth and colleagues [56] suggest four differential
desirable features: (a) there should be a diversity of presentation, grading, and feedback
forms, catering to participants’ diversity; (b) assessment programs should be flexible and
adaptative, fostering innovative uses over the replication of traditional practices; (c) online
assessment should lessen faculty’s workload, facilitating automatic tasks, so that instructors
could focus on nuclear pedagogical issues and actual formative practices; (d) administrative
student profiles should be integrated into the LMS to ease clerical tasks eventually.
In addition, to develop students’ assessment literacy, educators should integrate self-
assessment activities that encourage reflection on learning and promote understanding of
assessment criteria. Implementing peer assessment practices can help students critically
analyze work and provide constructive feedback. Providing timely, specific formative
feedback guides students in closing performance gaps and improving their work. Engag-
ing students in defining assessment criteria demystifies assessment processes and fosters
ownership and accountability in learning. Leveraging digital tools for ongoing assessment
and feedback enhances interactivity and engagement, while offering professional devel-
opment for instructors equips them with the strategies needed to integrate these practices
effectively, thus fostering a culture of continuous feedback and sustainable education.
We encourage future research to deepen the reasons for these descriptive results,
free from pandemic-related constraints. Understanding faculty and students is necessary
to generate more fine-tuned digitally supported educational experiences in iterative de-
signs [57]. The results of this study have crucial implications for future blended educational
proposals; finding a way to implement more competence-based assessment supported by
technology remains challenging. On the one hand, more teacher training is required to
improve assessment literacy [13]. But students, on their side, also need to gain awareness
of their responsibility in the learning and assessment process to be empowered and gain
agency so that instructors' pedagogical efforts increase sustainability [32,58,59]. We must aim, thus, for teaching programs that increase complexity in assessment processes where students become active participants [31], especially in hybrid or blended designs, to promote self-regulated learning and persistent life-long learning skills. These decisions, in
turn, shall improve the quality of programmatic assessment, allowing for more inclusive,
personalized, and coherent assessment proposals [60,61].

5. Conclusions
This paper presents the results of a survey study which aimed to compare and contrast
faculty and students’ perceptions of learning assessment practices during emergency
remote learning during the COVID-19 pandemic lockdown at one of the leading higher
education institutions in Spain. Our findings reveal a predominant emphasis on summative


assessment purposes, highlighting the need for a shift towards more formative assessment
practices that support continuous learning and development.
This research contributes significantly to the existing body of knowledge by identi-
fying a gap between the perceived and ideal purposes of assessment in higher education,
emphasizing the necessity of balancing formative and summative assessment functions
to enhance learning outcomes. It also provides valuable insights into the current state
of assessment literacy among both faculty and students, underlining the importance of
developing assessment competencies to foster self-regulation and lifelong learning skills.
Furthermore, this study offers practical guidelines for educators to develop students’ assess-
ment literacy, such as integrating self-assessment, promoting peer assessment, providing
formative feedback, involving students in defining assessment criteria, leveraging and
normalizing digital tools, and offering professional development for instructors.
Future research should explore the long-term effects of enhanced assessment literacy
on student outcomes, investigating how improved assessment competencies influence aca-
demic performance, self-regulation, and career readiness. Additionally, further studies are
needed to examine the effectiveness of digital tools in supporting sustainable assessment
practices, providing insights into innovative strategies for higher education. Comparative
research across different disciplines and educational contexts could offer a deeper under-
standing of how assessment practices and literacy vary, helping to tailor interventions to
specific needs and promote best practices universally. Finally, research should focus on
the role of institutional support and policy in fostering sustainable assessment practices,
understanding how organizational factors influence assessment literacy and practices and
informing the development of comprehensive strategies to support educators and students.
Our study, while comprehensive, does have some limitations that are important
to acknowledge. First, the participant samples are limited and represent only a small
percentage of the entire volume of faculty and students invited to respond. However, we
considered it acceptable in the context of the pandemic. This response rate is ordinary in online data collection designs, and the emergency context posed an added challenge to the call
for participation. The low response rate did hinder us from exploring likely differences
between disciplinary areas, for example. Also, the results always showed mostly moderate
to slight effect sizes. The second limitation deals with the nature of the data reported, i.e.,
non-observational, in surveys.
Despite these limitations, after presenting and discussing the results we underline two
concluding points. First, assessment practices need improvement: mass access to higher
education, new blended modalities, and the rise of artificial intelligence render a traditional
assessment model, in which educators are the only feedback providers, obsolete. Second,
addressing this need also requires developing both faculty and students’ assessment literacy.
Institutional stakeholders should promote professional development programs that enhance
assessment-for-learning practices from a sustainability point of view. Feedback practices
must become sustainable so that instructors engage in them and, in turn, promote students’
agency in the assessment process. It is crucial to emphasize that this improvement will not
happen without strong institutional commitment.

Author Contributions: Conceptualization, A.R., E.C. and L.L.; methodology, E.C.; formal analysis,
L.L.; investigation, A.R., E.C. and L.L.; resources, A.R., E.C. and L.L.; data curation, L.L.; writing—
original draft preparation, A.R., E.C. and L.L.; writing—review and editing, A.R.; supervision, E.C.;
project administration, E.C.; funding acquisition, E.C. All authors have read and agreed to the
published version of the manuscript.
Funding: This research was funded by the Universitat de Barcelona, Institut de Desenvolupament
Professional (IDP), through the project “Análisis de las prácticas de evaluación en entornos de docencia
mixta orientadas al desarrollo de las competencias transversales” (REDICE20-2380).
Institutional Review Board Statement: The study was conducted in accordance with the Declaration
of Helsinki and approved by the Institutional Review Board of the Universitat de Barcelona.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data from this study are available upon request from the authors.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Lissak, G. Adverse physiological and psychological effects of screen time on children and adolescents: Literature review and case
study. Environ. Res. 2018, 164, 149–157. [CrossRef] [PubMed]
2. Rodrigo, C.; Tabuenca, B. Ecologías de aprendizaje en estudiantes online con discapacidades. Comunicar 2020, 28, 53–65. [CrossRef]
3. Coll, C.; Bustos, A.; Engel, A.; de Gispert, I.; Rochera, M.J. Distributed educational influence and computer-supported collaborative
learning. Digit. Educ. Rev. 2013, 24, 23–42. Available online: https://raco.cat/index.php/DER/article/view/271198 (accessed on
15 February 2024).
4. Stenalt, M.H.; Lassesen, B. Does student agency benefit student learning? A systematic review of higher education research.
Assess. Eval. High. Educ. 2022, 47, 653–669. [CrossRef]
5. Barberá, E.; Suárez-Guerrero, C. Evaluación de la educación digital y digitalización de la evaluación. RIED 2021, 24, 33–40.
[CrossRef]
6. Malagón, F.J.; Graell, M. La formación continua del profesorado en los planes estratégicos de las universidades españolas.
Educación XX1 2022, 25, 433–458. [CrossRef]
7. Yan, Z. Assessment-as-learning in classrooms: The challenges and professional development. J. Educ. Teach. 2021, 47, 293–295.
[CrossRef]
8. Sridharan, B.; Tai, J.; Boud, D. Does the use of summative peer assessment in collaborative group work inhibit good judgement?
High. Educ. 2019, 77, 853–870. [CrossRef]
9. Veugen, M.J.; Gulikers, J.T.M.; den Brok, P. We agree on what we see: Teacher and student perceptions of formative assessment
practice. Stud. Educ. Eval. 2021, 70, 101027. [CrossRef]
10. Remesal, A. Primary and secondary teachers’ conceptions of assessment: A qualitative study. Teach. Teach. Educ. 2011, 27,
472–482. [CrossRef]
11. Cañadas, L. Evaluación formativa en el contexto universitario: Oportunidades y propuestas de actuación. Rev. Digit. Investig.
Docencia Univ. 2020, 14, e1214. [CrossRef]
12. Looney, A.; Cumming, J.; Van Der Kleij, F.; Harris, K. Reconceptualising the role of teachers as assessors: Teacher assessment
identity. Assess. Educ. Princ. Policy Pract. 2018, 25, 442–467. [CrossRef]
13. Pastore, S.; Andrade, H.L. Teacher assessment literacy: A three-dimensional model. Teach. Teach. Educ. 2019, 84, 128–138.
[CrossRef]
14. Xu, Y.; Brown, G.T.L. Teacher assessment literacy in practice: A reconceptualization. Teach. Teach. Educ. 2016, 58, 149–162.
[CrossRef]
15. Remesal, A.; Estrada, F.G. Synchronous Self-Assessment: First Experience for Higher Education Instructors. Front. Educ. 2023, 8,
1115259. [CrossRef]
16. Offerdahl, E.G.; Tomanek, D. Changes in instructors’ assessment thinking related to experimentation with new strategies. Assess.
Eval. High. Educ. 2011, 36, 781–795. [CrossRef]
17. Biggs, J.; Tang, C. Teaching for Quality Learning at University; Open University Press: Oxford, UK, 2011.
18. Laveault, D.; Allal, L. (Eds.) Assessment for Learning: Meeting the Challenge of Implementation; Springer: London, UK, 2016.
19. Tai, J.; Ajjawi, R.; Boud, D.; Dawson, P.; Panadero, E. Developing evaluative judgement: Enabling students to make decisions
about the quality of work. High. Educ. 2018, 76, 467–481. [CrossRef]
20. JISC. The Future of Assessment: Five Principles, Five Targets for 2025. 2020. Available online: https://repository.jisc.ac.uk/7733/1/the-future-of-assessment-report.pdf (accessed on 15 February 2024).
21. García-Peñalvo, F.J.; Corell, A.; Abella-García, V.; Grande, M. Online assessment in higher education in the time of COVID-19.
Educ. Knowl. Soc. 2020, 21, 1–26. [CrossRef]
22. Robertson, S.N.; Humphrey, S.M.; Steele, J.P. Using technology tools for formative assessments. J. Educ. Online 2019, 16, n2.
[CrossRef]
23. Lafuente, M.; Remesal, A.; Álvarez Valdivia, I. Assisting Learning in e-Assessment: A Closer Look at Educational Supports.
Assess. Eval. High. Educ. 2014, 39, 443–460. [CrossRef]
24. Sambell, K.; Brown, S. Changing assessment for good: Building on the emergency switch to promote future-oriented assessment
and feedback designs. In Assessment and Feedback in a Post-Pandemic Era: A Time for Learning and Inclusion; Baughan, P., Ed.;
Advance HE: York, UK, 2021; pp. 11–21.
25. Fischer, J.; Bearman, M.; Boud, D.; Tai, J. How does assessment drive learning? A focus on students’ development of evaluative
judgement. Assess. Eval. High. Educ. 2023, 49, 233–245. [CrossRef]
26. Ruiz-Morales, Y.; García-García, M.; Biencinto, C.; Carpintero, E. Evaluación de competencias genéricas en el ámbito universitario
a través de entornos virtuales: Una revisión narrativa. RELIEVE 2017, 23, 2. [CrossRef]
27. Abelha, M.; Fernandes, S.; Mesquita, D.; Seabra, F.; Ferreira, A.T. Graduate employability and competence development in higher
education—A systematic literature review using PRISMA. Sustainability 2020, 12, 5900. [CrossRef]
28. Klenowski, V. Assessment for learning revisited: An Asia-Pacific perspective. Assess. Educ. Princ. Policy Pract. 2009, 16, 263–268.
[CrossRef]
29. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144. [CrossRef]
30. Hawe, E.; Dixon, H. Assessment for learning: A catalyst for student self-regulation. Assess. Eval. High. Educ. 2017, 42, 1181–1192.
[CrossRef]
31. Molina, M.; Pascual, C.; López Pastor, V.M. Los proyectos de aprendizaje tutorado y la evaluación formativa y compartida en la
docencia universitaria española. Perfiles Educ. 2022, 44, 96–112. [CrossRef]
32. Carless, D.; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assess. Eval. High. Educ. 2018,
43, 1315–1325. [CrossRef]
33. Henderson, M.; Ajjawi, R.; Boud, D.; Molloy, E. The Impact of Feedback in Higher Education: Improving Assessment Outcomes for
Learners; Palgrave Macmillan: London, UK, 2019.
34. Panadero, E.; Jonsson, A.; Strijbos, J.-W. Scaffolding self-regulated learning through self-assessment and peer assessment:
Guidelines for classroom implementation. In Assessment for Learning: Meeting the Challenge of Implementation; Laveault, D., Allal,
L., Eds.; Springer: London, UK, 2016; pp. 311–326. [CrossRef]
35. Hortigüela, D.; Pérez-Pueyo, A.; López-Pastor, V. Implicación y regulación del trabajo del alumnado en los sistemas de evaluación
formativa en educación superior. RELIEVE 2015, 21, ME6. [CrossRef]
36. Nicol, D. The power of internal feedback: Exploiting natural comparison processes. Assess. Eval. High. Educ. 2020, 46, 756–778.
[CrossRef]
37. Nicol, D.; Serbati, A.; Tracchi, M. Competence development and portfolios: Promoting reflection through peer review. AISHE-J.
2019, 11, 1–13. Available online: https://ojs.aishe.org/index.php/aishe-j/article/view/405/664 (accessed on 20 February 2024).
38. Azevedo, R. Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical
issues. Educ. Psychol. 2015, 50, 84–94. [CrossRef]
39. Nieminen, J.H.; Carless, D. Feedback literacy: A critical review of an emerging concept. High. Educ. 2023, 85, 1381–1400.
[CrossRef]
40. O’Donovan, B.; Rust, C.; Price, M. A scholarly approach to solving the feedback dilemma in practice. Assess. Eval. High. Educ.
2016, 41, 938–949. [CrossRef]
41. Lui, A.M.; Andrade, H.L. The Next Black Box of Formative Assessment: A Model of the Internal Mechanisms of Feedback
Processing. Front. Educ. 2022, 7, 751548. [CrossRef]
42. Boud, D. Retos en la reforma de la evaluación en educación superior: Una mirada desde la lejanía. RELIEVE 2020, 26, M3.
[CrossRef]
43. Winstone, N.E.; Mathlin, G.; Nash, R.A. Building feedback literacy: Students’ perceptions of the developing engagement with
feedback toolkit. Front. Educ. 2019, 4, 39. [CrossRef]
44. Ocampo, J.C.; Panadero, E.; Zamorano, D.; Sánchez-Iglesias, I.; Diez Ruiz, F. The effects of gender and training on peer feedback
characteristics. Assess. Eval. High. Educ. 2023, 49, 539–555. [CrossRef]
45. Cano, E.; Lluch, L. Competence-Based Assessment in Higher Education during COVID-19 Lockdown: The Demise of Sustainabil-
ity Competence. Sustainability 2022, 14, 9560. [CrossRef]
46. Mishra, L.; Gupta, T.; Shree, A. Online teaching-learning in higher education during lockdown period of COVID-19 pandemic.
Int. J. Educ. Res. Open 2020, 1, 100012. [CrossRef]
47. Sharma, A.; Alvi, I. Evaluating pre and post COVID-19 learning: An empirical study of learners’ perception in higher education.
Educ. Inf. Technol. 2021, 26, 7015–7032. [CrossRef] [PubMed]
48. Tillema, H.H.; Kremer-Hayon, L. “Practising what we preach”—Teacher educators’ dilemmas in promoting self-regulated
learning: A cross case comparison. Teach. Teach. Educ. 2002, 18, 593–607. [CrossRef]
49. Hidalgo, B.G.; Gisbert, M. La adopción y uso de las tecnologías digitales en el profesorado universitario: Un análisis desde la
perspectiva del género y la edad. RED 2021, 21, 1–19. [CrossRef]
50. Dorfsman, M.; Horenczyk, G. El cambio pedagógico en la docencia universitaria en los tiempos de COVID-19. RED 2021, 21,
1–27. [CrossRef]
51. Carless, D.; Winstone, N. Teacher feedback literacy and its interplay with student feedback literacy. Teach. High. Educ. 2023, 28,
150–163. [CrossRef]
52. Jiang, L.; Yu, S. Understanding Changes in EFL Teachers’ Feedback Practice During COVID-19: Implications for Teacher Feedback
Literacy at a Time of Crisis. Asia-Pac. Educ. Res. 2021, 30, 509–518. [CrossRef]
53. Gulikers, J.T.M.; Biemans, H.J.A.; Wesselink, R.; van der Wel, M. Aligning formative and summative assessments: A collaborative
action research challenging teacher conceptions. Stud. Educ. Eval. 2013, 39, 116–124. [CrossRef]
54. Pérez-López, E.; Yuste, R. La competencia digital del profesorado universitario durante la transición a la enseñanza remota de
emergencia. RED 2023, 23, 1–19. [CrossRef]
55. Gu, X.; Crook, C.; Spector, M. Facilitating innovation with technology: Key actors in educational ecosystems. Br. J. Educ. Technol.
2019, 50, 1118–1124. [CrossRef]
56. Forsyth, R.; Cullen, R.; Stubbs, M. Implementing Electronic Management of Assessment and Feedback in Higher Education.
In Research Handbook on Innovations in Assessment and Feedback in Higher Education; Evans, C., Waring, M., Eds.; Edward Elgar
Publishing: Cheltenham, UK, 2024.
57. Badwan, B.; Bothara, R.; Latijnhouwers, M.; Smithies, A.; Sandars, J. The importance of design thinking in medical education.
Med. Teach. 2018, 40, 425–426. [CrossRef]
58. Brown, G.T.L. Student Conceptions of Assessment: Regulatory Responses to Our Practices. ECNU Rev. Educ. 2022, 5, 116–139.
[CrossRef]
59. Tari, E.; Selfina, E.; Wauran, Q.C. Responsibilities of students in higher education during the COVID-19 pandemic and new
normal period. J. Jaffray 2020, 18, 129–152. [CrossRef]
60. Torre, D.M.; Schuwirth, L.W.T.; Van der Vleuten, C.P.M. Theoretical considerations on programmatic assessment. Med. Teach.
2020, 42, 213–220. [CrossRef] [PubMed]
61. Tai, J.; Ajjawi, R.; Bearman, M.; Boud, D.; Dawson, P.; Jorre de St Jorre, T. Assessment for inclusion: Rethinking contemporary
strategies in assessment design. High. Educ. Res. Dev. 2023, 42, 483–497. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
