Translation Revision and Post-Editing - Industry Practices and Cognitive Processes
Translation Revision and Post-editing looks at the apparently dissolving boundary between
correcting translations generated by human brains and those generated by machines.
It presents new research on post-editing and revision in government and corporate
translation departments, translation agencies, the literary publishing sector and the
volunteer sector, as well as on training in both types of translation checking work.
This collection includes empirical studies based on surveys, interviews and
keystroke logging, as well as more theoretical contributions questioning such
traditional distinctions as translating versus editing. The chapters discuss revision
and post-editing involving eight languages: Afrikaans, Catalan, Dutch, English,
Finnish, French, German and Spanish. Among the topics covered are translator/
reviser relations and revising/post-editing by non-professionals.
The book is key reading for researchers, instructors and advanced students in
Translation Studies as well as for professional translators with a special interest in
checking translations.
Giovanna Scocchera has been a literary translator from English to Italian since
2000, working for major Italian publishers both as translator and reviser. She has
taught translation and revision for publishing purposes at several institutions. She
earned a PhD on revision in the publishing sector in 2015 and has pursued her
research interest in revision training and education.
Introduction

PART I
Post-editing versus revision

PART II
Non-professional revision and post-editing

PART III
Professional revision in various contexts

PART IV
Training

Bibliography
Index
CONTRIBUTORS
Ilse Feinauer is Professor at the University of Stellenbosch, South Africa, where she
teaches Translation Studies and Afrikaans linguistics. Her research focus is socio-
cognitive Translation Studies: processes and networks. She is a founding member and
board member of the Association for Translation Studies in Africa, and a member of
the executive board of the European Society for Translation Studies.
Anne-Kathrin Gros is a PhD student, research assistant and lecturer at the Center
for Translation and Cognition, University of Mainz/Germersheim, Germany. Her
research interests include cognitive Translation Studies, translation process research
and psycholinguistics.
Jean Nitzke is a postdoc and instructor at the Center for Translation and Cognition,
University of Mainz, Germany. In her PhD studies, she focused on post-editing,
which she also teaches to students, trainers and professional translators. Other
research interests include translation process research, translation tools and
technologies as well as cognitive aspects of translation.
Leena Salmi is a Senior Lecturer in French at the University of Turku, Finland. Her
research interests include post-editing, translation quality assessment, information-
seeking as part of translator work and themes related to the production of legally
valid translations. Her teaching focuses on practical translation (French-Finnish),
translation technology, post-editing and translation company simulation, as well as
supervision of MA and PhD theses.
Gys-Walt van Egdom holds a PhD in linguistics and literary studies and a master’s
degree in Translation Studies. He lectures on translation and Translation Studies
at Utrecht University, the Netherlands. His research interests include translation
didactics, translation evaluation, translation ethics, translation processes and human-
computer interaction.
necessarily good revisers, and vice versa; revision competence requires additional
aptitudes and attitudes.
More recently, a group of researchers at the University of Antwerp sought to
validate their model of revision competence (Robert et al. 2018). The initial results
appeared in three publications (Rigouts Terryn et al. 2017; Robert et al. 2017b;
Robert et al. 2018). The researchers conducted an experiment with 21 students
divided into an experimental group, who had taken a module on revision, and a
control group, who had no revision training. The students first performed a set of revision tasks, recorded with keystroke logging, and then filled out an online questionnaire. One finding was that the experimental group used the same search tools as the control group, but more frequently; they performed more searches in order to be able to justify their changes, and their searches were more meticulous. The
experimental group was also more tolerant of wording choices and made fewer
pointless changes. However, the researchers were not able to demonstrate that the students in the experimental group revised better, in the sense of making more necessary changes (an indicator of strategic competence). Their explanation was that the revision training had been brief and that a pilot study has inherent limitations (such as the number of participants).
The literature also includes three experimental studies of revision teaching. In the first (Shreve et al. 2014), the researchers compare the efficacy of screen recording data with that of Integrated Problem and Decision Reporting (Gile 2004) in assisting other-revision by 12 students. They found that the students performed better when they had access to screen recording data on the translation process. The second study (Robert and Brunette 2016) looks at the relationship between think-aloud, revision quality and ability to detect errors with 16 professional revisers, to see whether think-aloud would help learners. The researchers found that the more the revisers verbalized detailed diagnoses of error, the better their detection and correction
work, but the longer they took to complete the task. Applying this to pedagogy, the
researchers concluded that it might be useful to ask revision students to think aloud
when revising at home. Finally, drawing on previous research on revision pedagogy and didactics (Scocchera 2014) and based on a multi-component view of revision competence (Scocchera 2017a), the third study (Scocchera 2019) investigates the general attitudes towards revision and the background skills of revision students taking part in a short-term course, as well as their progress in the acquisition of revision competence. By comparing the students' output on different revision assignments, the study also tests the validity and efficacy of the teaching contents and methods, and provides insights and practical suggestions for revision-specific education and training based on quantitative and qualitative data obtained through an end-of-study questionnaire.
Using think-aloud for three revision tasks performed by ten professionals, Künzli (2006c) found that a lack of clear instructions (a revision brief) had a negative effect on the revised translation. Künzli (2007b) also looked at changes made by revisers, changes which should have been made but were not, and the task definition for revision (the revision brief), and found that defective revision could be due to the reviser not having a clear definition of the task as well as a lack of well-structured procedures. Later, Künzli (2009) found that quality takes time, and that it did not seem to matter whether the reviser read a wording of the translation first or a wording of the source text first when comparing source text and translation.
As far as revision procedures are concerned, the first study was probably that of the GREVIS research group (Brunette et al. 2005), which looked at the revision interventions of 14 professional revisers, first with the source text, then without. Analysis of the results led to the conclusion that unilingual revision is inadvisable. Robert (2013, 2014b; Robert and Van Waes 2014) looked at how revision procedure is related to the quality of the product, the error detection potential and the duration of the task. Sixteen professional revisers each had to revise four translations, each time using a different procedure. Data collection was by think-aloud, keystroke logging and a questionnaire. She found that procedure did indeed have an impact, and she formulated recommendations. Marashi and Okhowat (2013) compared revision with and without the source text by 40 professional revisers, with a focus on the linguistic quality of the revision and the revisers' profiles.
The results were rated by professors of the target language. The finding was that recourse to the source had no effect on quality with this type of purely linguistic revision. Finally, Huang (2018) investigated the self-revision, other-revision and post-editing processes of translation trainees through eyetracking, keystroke logging and retrospection. Drawing on an analysis of the students' reading and typing activities, together with the corresponding cognitive activities and purposes, she observed three working phases in the processes (planning, drafting and final check) and was able to identify four types of working styles, which roughly correspond to different revision procedures. One example is the micro-processing working style, which involves one or two run-throughs of bilingual reading and detailed revision. She also found three types of reviser (habit-oriented, task-oriented and habit/task-oriented). By comparing the observed students' working styles with those of professionals in the existing literature, she hoped "to offer insight into the behaviour of pre-career translators and contribute to translation pedagogy" (176).
Another set of studies pays more attention to the revisers and/or their profiles or expertise. Künzli (2005), for example, asked what principles guide translation revision, and he also looked at the quality of the product and the duration of the process. He found that revisers did not always apply the principles they had verbalized, such as "revising is not retranslating". He then considered the sense of loyalty in decision-making (2006a) and concluded that revisers are faced with a dilemma: loyalty to self versus loyalty to other actors such as the client and the reader.
With regard to profiles, Van Rensburg (2017) designed instruments to measure the quality of revision in order to find a possible link between revision quality and certain variables in the reviser's profile. With 30 revisers and 3 language experts, she found a significant correlation between years of translation experience and just one of the indicators in her measuring instrument, namely necessary corrections to the target language. Scocchera (2017a) conducted a survey in the Italian publishing sector, and the findings enabled her to outline a profile of the reviser in terms of age, gender, expertise, education/training and work practices, providing a valuable contribution to higher visibility for this professional figure. Schaeffer et al. (2019a) used eyetracking and keylogging to compare the revision behaviour of translation students and professional translators. The aim was to investigate the effect of error type on error recognition, the relation between eye-movement patterns and error detection, and the relation between translation expertise and error recognition behaviour. They concluded that professional translators are more strategic in terms of cost/benefit in their revision behaviour.
A last group of studies takes into account the ubiquity of technologies in the translation and revision workflow; these studies were thus carried out within a CAT tool environment. This is the case for Mellinger and Shreve (2016), who observed revision by nine professional translators using keylogging software. They found a tendency to make unnecessary changes. Ipsen and Dam (2016) had nine students perform a revision task in the CAT tool MemSource. Data were gathered using screen capture, interviews and retrospective think-aloud. The researchers found that regardless of the procedure, participants obtained better results when they read the translation before the source during comparison. However, the order of reading was established after completion of the task, through retrospective interviews, rather than during the task using eyetracking. Ciobanu et al. (2019) looked at the embedding of speech technologies in the revision workflow in order to determine their effect on the revisers' preferences, viewing behaviour and revision quality. Five professional translators and six translation trainees were asked to revise the English translation of a French text in a CAT tool, with and without source text sound (via speech synthesis). Their eye movements were tracked and they had to fill in a post-eyetracking questionnaire. The authors found an improvement in revision quality, especially regarding accuracy errors, when sound was present.
Participant-based studies
Studies of participants in the revision process are based on surveys. Scocchera (2015) looked at revision by literary publishers in Italy from two points of view: first, she investigated the relationship between translator and reviser using a survey with 80 respondents; second, she considered the role of computer-based revision in the genesis of translations.2 She found that 43.6% of the translators surveyed always had contact with their reviser, but the same proportion never had contact; the remaining 12.8% had sporadic contact. She also found that very few revisers had had any training in literary revision. The same study was drawn on by Scocchera (2014) to make suggestions for a course in literary revision, and to make a case for the use of specific tools: the Review functions in Word, publishers' house-style guides, Mossop's (2014a) revision parameters and Berman's (1985) typology of forces that tend to domesticate literary translations. Scocchera (2016) provides her personal reflections on the importance of a collaborative approach to revision, through an anecdote taken from her own experience as a reviser. Finally, still drawing on her survey, Scocchera (2017b) looks at translation revision as rereading and analyses the reviser's strategies and purposes.
Rasmussen and Schjoldager (2011) conducted a survey to find out about procedures, revision parameters and reviser profiles at 22 Danish translation agencies, and then interviewed some of the revisers. They found that 10% of translations are not revised at all, while 90% receive a comparative revision. Few agencies had a revision manual, though most of the revisers are trained translators. Schjoldager et al. (2008) used a survey and interviews to develop a precis-writing, editing and revision module for a course. Hernandez-Morin (2009) looked at practices and perceptions of revision in France through a survey with 115 respondents. She found that most freelance translators support the European translation service standard EN 15038, but while they endorse revision, they do not think it is always necessary. Robert (2008) examined the inconsistent terminology of revision through a literature review and then looked at revision procedures using two surveys (48 and 21 respondents). Robert and Remael (2016) conducted a survey (99 respondents) about the revision of subtitles. Lafeber (2012) conducted a survey of in-house translators and revisers in order to identify the importance of various types of skill and knowledge, and the extent to which they are lacking in new recruits. The results confirmed that linguistic competence alone is not sufficient. In 2018, she confirmed these first results, explaining that at both the EU and the UN, the skills required also include analytical and research skills as well as procedural and substantive knowledge.
Product-oriented studies
The quality of post-edited texts has been a central question in product-oriented studies about professional contexts (Guerberof Arenas 2017), post-editing by translation students (Ortiz-Boix and Matamala 2017) and non-professional post-editing of user-generated content (Mitchell et al. 2014). Studies have also addressed the correctness and necessity of changes in post-editing and identified unnecessary changes, changes introducing errors and errors missed by professional translators (de Almeida 2013) and by translation students (Koponen et al. 2019).
Research has also addressed questions related to the usability or acceptability of post-edited texts from the perspective of the end user of a translation. Bowker and Buitrago Ciro (2015) present a study of the acceptability of human-translated, machine-translated and post-edited texts and also take into account the time and cost of producing each version. Van Egdom and Pluymaekers (2019) compare various quality aspects and end-user impressions of texts with various levels of post-editing. Screen (2019) uses eyetracking as well as subjective evaluations of
Process-oriented studies
In the translation industry, the rationale for using MT and post-editing relies on the assumption that it requires less effort than translation 'from scratch'. For this reason, the question of effort has received much interest in post-editing research. The influential model of post-editing effort by Krings (2001) distinguishes three aspects: temporal effort (the time needed for post-editing), technical effort to make corrections (for example, keystrokes) and cognitive effort to identify errors and plan corrections. A survey of studies on post-editing effort is given in Koponen (2016).
Technical and temporal effort have been addressed in studies where translators' productivity during post-editing has been compared to translation from scratch (Plitt and Masselot 2010; Läubli et al. 2013) and to translation with the aid of translation memory matches (Guerberof Arenas 2014a, 2014b; Teixeira 2014). Research has also investigated the effects of interactive and adaptive MT (Alabau et al. 2016) and, recently, the effect of neural MT as compared to earlier technologies (Toral Ruiz et al. 2018; Daems and Macken 2019; Jia et al. 2019; Sanchez-Gijón et al. 2019). Combining product and process, Vieira (2017b) investigates the connections between different measures of post-editing effort and the quality of the post-edited texts.
Cognitive effort is the most difficult of the three aspects to capture and measure, although considerable research interest has been focused on this topic. Some research has investigated it through think-aloud protocols (O'Brien 2005; Vieira 2017a; Koglin and Cunha 2019). One of the approaches to identifying cognitive effort is based on detecting pauses in keylogging data. It has been suggested that extended pauses are points of increased effort (O'Brien 2005; Screen 2017), as are clusters of short pauses (Lacruz and Shreve 2014). Eyetracking technology has also been applied to investigating cognitive effort, on the assumption that more frequent and longer gaze fixations indicate increased cognitive effort (Moorkens 2018a). Vieira (2017a) investigates and compares different indicators of cognitive effort using eyetracking, think-aloud protocols and subjective ratings. Herbig et al. (2019) experiment with physiological measures such as skin- and heart-based indicators.
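As a concrete illustration of the pause-based approach, the sketch below flags inter-keystroke intervals above a fixed threshold in a list of keystroke timestamps. The threshold value, the function name and the sample data are illustrative assumptions, not values taken from the studies cited above:

    # A minimal sketch of pause detection in keylogging data, assuming
    # keystroke timestamps in milliseconds. The 1000 ms threshold is an
    # arbitrary illustrative choice; detecting the clusters of short
    # pauses discussed by Lacruz and Shreve would need a different rule.
    def long_pauses(timestamps_ms, threshold_ms=1000):
        """Return (position, duration) pairs for gaps at or above the threshold."""
        pauses = []
        for prev, curr in zip(timestamps_ms, timestamps_ms[1:]):
            gap = curr - prev
            if gap >= threshold_ms:
                pauses.append((prev, gap))
        return pauses

    # Four keystrokes typed fluently, then a long pause before the fifth:
    print(long_pauses([0, 180, 420, 610, 2900]))  # -> [(610, 2290)]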
Cognitive effort has also been explored in terms of the effort perceived by the person carrying out the post-editing. An annotation scale for collecting sentence-level information about the perceived effort in post-editing was proposed by Specia (2011). Such perceived effort ratings do not, however, always correspond to the technical effort involved in terms of the number of changes, as was found by Koponen (2012). Correlations between the effort perceived by the post-editor and measures of effort like time and number of keystrokes have also been investigated by others (Herbig et al. 2019).
Specific issues in investigations of temporal, technical or cognitive effort include the impact of source text features (O'Brien 2005; Aziz et al. 2014; Koglin and Cunha 2019), different types of errors in the MT output (Koponen 2012; Koponen et al. 2012; Daems et al. 2017b; Carl and Toledo-Baez 2019) and types of post-editing operations (Popović et al. 2014). Da Silva et al. (2017) investigate the effect of L2 to L1 versus L1 to L2 directionality on the post-editing process. Lacruz (2018) investigates stages of processing in post-editing.
Participant-oriented studies
The adoption and perception of MT post-editing in different settings has been a point of interest for participant-oriented studies. Guerberof Arenas (2013) presents a study based on online questionnaires and "debriefing" interviews with professional translators who participated in a post-editing experiment. Adoption and acceptance of MT and post-editing among translators at the European Commission have been explored in two studies using focus groups (Cadwell et al. 2016) and survey data (Rossi and Chevrot 2019). Bundgaard (2017a, 2017b) investigates translators' resistance to and accommodation of post-editing and "translator-computer interaction" more broadly, through interviews and written answers to questions about expectations and experiences. Sakamoto (2019) also examines resistance to post-editing based on a focus group of translation project managers.
Participant-oriented studies have also examined the profiles, skills and competences of post-editors. Guerberof Arenas (2014b) examines the effect of professional experience on post-editing productivity as well as on the quality of the post-edited translations. Alabau et al. (2016) present a longitudinal study of professional translators using an interactive MT system; they examine the effect of post-editing experience, and of personal factors such as typing skills, on the post-editing process. Huang (2018) looks at translation students, comparing their profiles and approaches when post-editing, self-revising and other-revising. Daems et al. (2017c) compare the translation and post-editing processes of professional translators and translation students. Aranberri et al. (2014) examine the productivity of professional translators and of lay users in a specialised field; the authors compare the lay users' performance to that of professional translators. O'Brien et al. (2018) examine the quality of post-edited texts in a case where academic writers machine-translated and then post-edited their own texts as a support for writing scientific articles.
Notes
1 The term originated by contrast with 'pre-editing' (of MT inputs). It has appeared both hyphenated and unhyphenated since the earliest publications, and both spellings continue to be used. In this volume, we use the hyphenated form, which appears to be more common in the 2010s. The acronyms PEMT, for Post-Editing of Machine Translation, and MTPE, for Machine Translation Post-Editing, are also both in use.
2 Translation genesis is a new area of research inspired by the textual genetics approach in literary criticism. This involves examining the different versions of a work (in this case, a translation) as well as manuscripts and other working documents that may have left traces in the work. See issue 14 (2015) of the Translation Studies journal LANS (https://lans-tts.uantwerpen.be/index.php/LANS-TTS/issue/view/16).
PART I
Post-editing versus revision
1
PREFERENTIAL CHANGES IN
REVISION AND POST-EDITING
Jean Nitzke and Anne-Kathrin Gros
competing versions of the target text segment. The participants tend to change the TM match into their own translation.
The same phenomenon can also be observed in PE and revision. Translators might create their own mental concept of the source unit, with their own translation ideas, and are then confronted with the machine translation or a translation created by another person (see Oster 2017 for information on priming and monitoring). These latter translations are not necessarily defective, but they do not always correspond to the translator's representation. Over-editing was previously examined for PE in De Almeida (2013). In her study, she analysed the edits of 20 participants post-editing IT texts (ten translated into French and ten into Brazilian Portuguese). Her main categories were essential changes, preferential changes, unimplemented essential changes and introduction of errors. 'Preferential changes' are synonymous with what we refer to as over-editing. The participants made 45.16 preferential changes on average in a text of 1008 words; in other words, an unnecessary change was made every 22.32 words.
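That rate follows directly from the two reported figures; as a quick check (the variable names are ours, not de Almeida's):

    # Reported by de Almeida (2013): 45.16 preferential changes on
    # average in a 1008-word text.
    words = 1008
    preferential_changes = 45.16
    print(round(words / preferential_changes, 2))  # -> 22.32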
Specific instructions to avoid over-editing are frequently included in the PE brief. For example, one of the instructions in the study by Aikawa et al. (2012: 7) very explicitly states: "Avoid over-editing: don't try to over-edit if the existing translation(s) . . . are grammatical and readable." As over-editing is a rather abstract concept, the usefulness of such an instruction might be questionable. Participants are reminded that they should keep the editing process to a minimum, but the extent to which the translators adhere to the instructions remains unknown. Further, it might be difficult for translators who are not trained in PE to lower their quality standards and correct only what they are asked to correct. Similarly, it might be difficult to revise only what is necessary and to disregard personal style and habits.
This chapter will focus on the over-editing behaviour of translators in PE and revision scenarios. To that end, datasets from three studies will be analysed, in which the participants were required to perform one of the two tasks. In the following sections, we will first describe the three datasets and present the instructions which were given to the participants. Then we will outline how we identified over-editing instances and report the results for the individual tasks with their similarities and differences, which will be discussed in the last section.
1. The studies
The analysis in this chapter is based on the datasets of three different studies, all of which were conducted with English source texts and German as the target language. There were no strict time restrictions for any of the tasks. They all were conducted in Translog II (Carl 2012), a program used to record keystrokes, mouse activities and gaze data with the help of an eyetracker (either Tobii TX300 or SMI mobile). The final product as well as the eyetracking and keylogging data were connected via the alignment tool YAWAT (Germann 2008). All the studies
Study 1
The first subset of data comes from an experiment in which the participants were asked to perform three different tasks: translating from scratch, full post-editing (FPE) and light post-editing (LPE) of texts from either the technical or the medical field (only the PE sessions will be analysed here). Table 1.1 presents the instructions for the tasks, which were adapted from the TAUS PE guidelines (Massardo et al. 2016). The three technical texts were excerpts from a dishwasher manual, while the three medical texts were taken from documentation included with a vaccine against measles, an insulin for the treatment of diabetes patients and a medication for the treatment of cancer. All texts were approximately 150 words long. The MT output was created by Google Translate (at the time, it was still a statistical rather than a neural MT engine).
TABLE 1.1 Instructions for light and full post-editing

Light ('good enough') post-editing
The output of 'good enough' post-editing (or 'light' post-editing) is comprehensible and accurate, but not stylistically compelling. The text may sound like it was generated by a computer, syntax might be somewhat unusual, grammar may not be perfect, but the message is accurate.
• Use as much of the raw MT output as possible!
• Aim for semantically correct translations.
• Ensure that no information has been accidentally added or omitted.
• No need to use correct terminology, as long as it is clear what is meant.
• Edit any inappropriate or culturally unacceptable content.
• Apply rules regarding spelling.
• Don't implement corrections that are of a stylistic nature only.
• Don't restructure sentences solely to improve the natural flow of the text.

Full post-editing
The level of quality is generally defined as being comprehensible, accurate, stylistically fine, though the style may not be as good as achieved by a native-speaker human translator. Syntax is normal, grammar and punctuation are correct.
• Aim for grammatically, syntactically and semantically correct translation.
• Ensure that no information has been accidentally added or omitted.
• Ensure that key terminology is correctly translated and used consistently.
• Edit any inappropriate or culturally unacceptable content.
• Use as much of the raw MT output as possible.
• Apply rules regarding spelling, punctuation and hyphenation.
• Ensure that formatting is correct.
TABLE 1.2 Number of sessions per text and task (translation from scratch / FPE / LPE)

Technical text 1    4    4    4
Technical text 2    4    4    4
Technical text 3    4    4    4
Medical text 1      3    3    3
Medical text 2      3    3    3
Medical text 3      3    3    3
Study 2
This study was also conducted at the University of Mainz, Faculty of Translation Studies, Linguistics and Cultural Studies in Germersheim, in 2012, on behalf of the Center for Research and Innovation in Translation and Translation Technology (CRITT), Copenhagen Business School, Denmark. The data2 are included in the CRITT-TPR database (https://sites.google.com/site/centretranslationinnovation/tpr-db), which collects translation process data for different tasks and in different languages. The source texts consisted of four newspaper articles and two sociology texts with different levels of complexity. The length of the texts varied between 100 and 150 words. The MT output was again created by Google Translate.
In total, 24 participants took part in the study, 12 of them professional translators (with university degrees and some professional work experience) and 12 translation students (students of the university with only a little professional work experience). The participants were asked to translate two texts from scratch, to bilingually post-edit two machine-translated texts (i.e. with the source text at hand) and to monolingually post-edit two machine-translated texts (i.e. without the source text at hand). Only the bilingual PE task will be considered in the present analysis.
The participants were provided with the following guidelines for the PE task (see also Carl et al. 2014: 153):3
A detailed description of the data subset is available in Nitzke (2019) and of the
whole dataset and database in Carl et al. (2016a).
Study 3
In Study 3, we used the results of Study 2 to create a revision task. Of all the translations prepared from scratch in Study 2, we chose six (one translation per source text), independently of the professional status of the participants but based on quality, in order to provide natural translations. Each text contained roughly 100 to 150 words and 5 to 11 sentences. The main criterion for choosing a text was that the original translation prepared in Study 2 was flawless regarding errors and style.
After finding suitable texts, we manipulated them by inserting errors that had occurred in human translations in other sessions of the experiment, to avoid inserting unnatural errors. A total of three to five errors were added to each text, but no sentence contained more than one error. The number of inserted mistakes varied from text to text so that the participants would not be able to detect a pattern. However, all mistake categories appeared equally across all the texts. Further, each text contained at least one sentence without errors. We used Mertin's (2006) typology to categorise which error types needed to be included. However, not all categories suggested by Mertin were suitable for our purposes: we could disregard formatting (because the texts were presented and translated in a simple editor environment) and adherence to predefined terms (because there were none). In the end, we applied six error types to our texts: orthography, grammar, sense, omissions, consistency and coherence. Every error type occurred with equal frequency in the texts.
We created two versions of each text (version A and version B) to distribute the errors equally, to have enough material for each error type and to avoid overloading the texts with errors. The participants saw either version A or version B. Sentences with an error in version A did not have any inserted error in version B. We asked our participants to revise the six translations (either version A or version B), with access to the source text. In total, 38 translators participated in the study (23 professionals, 15 students), each of whom received either text package A or text package B. The following scenario was presented to the participants (originally in German):
A colleague has translated the following texts from English into German for a renowned German newspaper (print) or for an encyclopaedia and has asked you to now revise the translation before it is published.
Please concentrate on correct content and language, and please insert your corrections directly into the texts.
You'll get paid for seven minutes per text.
There is no Internet connection: please revise the text without Internet research.
We did not adhere strictly to the time limit. It was meant only to indicate to the participants that they should not spend too much time on the tasks. However, after about seven to eight minutes, we told them that the initial time was up and that they should slowly come to an end. For more information on the study, see Schaeffer et al. (2019a).
TABLE 1.3 Mean time, text production and text elimination per session

Task                Mean time per session                 Mean text production    Mean text elimination
                                                          (in characters)         (in characters)
Study 1 FPE         18 min, 19.9 s (SD: 7 min, 7.5 s)     386 (SD: 156.4)         199.9 (SD: 72.6)
Study 1 LPE         12 min, 0.8 s (SD: 3 min, 57.3 s)     224.5 (SD: 150.6)       111.8 (SD: 65.5)
Study 2 PE          12 min, 28.1 s (SD: 3 min, 26 s)      346.8 (SD: 107.7)       323 (SD: 168.9)
Study 3 Revision    6 min, 23 s (SD: 2 min, 20.4 s)       97.2 (SD: 101.2)        91.1 (SD: 97.2)
had loose time constraints). The high standard deviations for all parameters and sessions show that the values vary a lot from one participant to another, which is partly caused by different text lengths, by the quality of the MT output for the various tasks and by individual working style and experience. Interestingly, the participants in Study 1 inserted more characters than they erased during the PE sessions, while in the other two studies the numbers of inserted and deleted characters are more balanced. Thus, the general editing effort varied a lot between the different tasks and participants.
All these changes were mapped to the target text word. Square brackets show the characters that were erased during the translation process in chronological order. Thus, the participant decided to insert Wirkung, which they first misspelled but then immediately corrected, erasing first the r and then the u; they then continued the writing process. In the second edit, the participant decided that they preferred the word Wirksamkeit and therefore erased the suffix -ung and inserted -samkeit. The final post-edited heading reads Wie wurde die Wirksamkeit von Hycamtin bisher untersucht? [How was the effectiveness of Hycamtin studied so far?].

TABLE 1.4 Mean number of changes per session

Task                Total changes       Necessary changes    Over-editing instances
Study 1 FPE         27.81 (SD: 2.92)    19.57 (SD: 3.17)     8.24 (SD: 2.66)
Study 1 LPE         23.19 (SD: 4.4)     11.71 (SD: 3.8)      11.48 (SD: 5.0)
Study 2 PE          23.89 (SD: 3.78)    15.75 (SD: 3.02)     8.14 (SD: 3.08)
Study 3 Revision    6.34 (SD: 3.89)     1.95 (SD: 1.15)      4.39 (SD: 3.43)
Depending on the type of change, a text unit could consist of at least one word and at most one sentence. Grammatical changes, for example, could be made by changing only one letter in a word (e.g. changing the case of the German definite article: nominative der to dative dem). Reformulating, on the other hand, could take up to an entire sentence and had to be seen as one editing instance, because if every word were counted individually, it would falsify the results.
In the second step of our analysis, we determined which changes were necessary and which could be characterised as over-editing. Finally, we categorised the over-editing instances and quantified them. Nine areas were defined in which over-editing instances could be observed in our dataset: lexicon, style, spelling, syntax, grammar, punctuation, additions, deletions and insecure typing.
For these last two steps, we divided the texts between us (we are both German native speakers and trained translators with PE experience), identified the instances of over-editing, under-editing and necessary editing, and categorised the over-editing instances. We then checked each other's assessments. If we did not agree on a category, a third rater was asked to decide which category was correct. Inter-rater agreement was calculated with Cohen's Kappa for two raters. The results were κ = 0.95, z = 144 and p < 0.0001, which can be interpreted as an almost perfect agreement.5
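The chapter does not say which software produced these statistics; purely as an illustration of the measure, Cohen's kappa can be computed from two raters' parallel category labels, for example with scikit-learn (the labels below are invented):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical category labels assigned by two raters to the same
    # five editing instances.
    rater_1 = ["over", "necessary", "over", "under", "necessary"]
    rater_2 = ["over", "necessary", "over", "necessary", "necessary"]

    # Kappa corrects raw agreement for the agreement expected by chance.
    print(cohen_kappa_score(rater_1, rater_2))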
An overview of editing instances for all the studies can be found in Table 1.4. Study 1 comprised medical and technical texts that were both fully and lightly post-edited. In view of the guidelines, the editing effort should have been quite different for these two tasks. However, when the changes were counted, the difference was less striking than we expected: on average, 27.81 changes were made per FPE session (SD: 2.92), while 23.19 changes were made per LPE session (SD: 4.4). However, the difference in total changes between full and light PE is still statistically significant (since the data were not distributed normally, a Mann-Whitney U test was
results. They may have expected that we included many more errors in the texts.
This may also have been caused by the experimental situation in all three studies.
The participants may have felt the need to deliver high quality, because they knew
that their translation process was being observed and recorded and they therefore
felt pressure to perform exceptionally well, although we, of course, informed all
participants that the data would be anonymised.
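For readers unfamiliar with the significance test mentioned above, a minimal sketch of such a comparison follows; the per-session change counts are invented placeholders, not the study's data, and scipy's implementation is just one option (a sketch, not the authors' script):

    from scipy.stats import mannwhitneyu

    # Hypothetical total-changes counts per session for the two tasks.
    fpe_changes = [29, 26, 31, 27, 25, 30]
    lpe_changes = [24, 19, 27, 21, 25, 22]

    # Non-parametric rank-based comparison, suitable for non-normal data.
    result = mannwhitneyu(fpe_changes, lpe_changes, alternative="two-sided")
    print(result.statistic, result.pvalue)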
TABLE 1.6 Distribution of over-editing instances for LPE and FPE in Study 1
FPE 41 2 56 13 15 10 4 6 24
LPE 53 39 61 7 4 41 2 17 22
A further reason might be that less obvious mistakes could be found in the revision task, and therefore preferential changes occurred more often. Additions and deletions occurred with similar frequency in all studies, except for additions in the revision task, which made up over 12% of all the over-editing instances in Study 3. Most unnecessary changes in punctuation occurred in Study 1. The reason might be that these domain-specific texts were structured differently from the texts in the other two studies, and therefore punctuation could be handled differently (e.g. commas or semicolons were inserted at the end of each point in an unordered list, or perhaps punctuation marks were avoided). Finally, insecure typing behaviour could also be observed most often in Study 1. This, again, might be caused by the different PE tasks, as the relatively untrained participants could have been uncertain as to when to change a unit and when not.
Finally, let us take a closer look at the differences between light and full PE. As can be seen in Table 1.6, the distribution of over-editing instances is similar for lexicon, style, spelling and insecurities. One reason could be that the guidelines were similar for these characteristics. Additions and deletions occurred twice as often in the FPE task; the total numbers, however, are still quite low in comparison to the other over-editing areas. Still, the difference is not very great. Further, many more syntactic and grammatical flaws were corrected in the LPE task, although they should not have been corrected. A reason for this behaviour might be that it is odd for translators to keep syntactically and grammatically unusual or even incorrect units in the final text, even if the content is still conveyed. The participants probably behaved similarly in the light and full PE tasks when it came to syntax and grammar. This is judged as over-editing in the LPE task, but not in the FPE task.
Accordingly, both revision and PE tasks require practice (in line with Nitzke et al.
2019 and Robert et al. 2017a); not every translator is necessarily suited to revising
translated text or post-editing MT output.
Our results also suggest that revision and PE should be included in university curricula in order to train students for these tasks, which could easily become components of their professional lives. A focus should be placed on adhering to guidelines and predefined quality requirements in training programmes, whether in university courses or in training courses for professionals, as this, understandably, seems to be difficult for translators, who are normally used to creating very high-quality final texts. A perfect translation is not always required, especially when machine-translated output is to be post-edited. Usually, other factors like time and money are essential when MT output is created for PE. Further, quality is a subjective matter which people may judge differently, and a PE/revision job may not pay enough to justify very high quality. The final text simply needs to be acceptable under the quality requirements, not perfect. This, however, partly contrasts with Depraetere's (2010) findings. She reports in her study of 10 translation students that her participants did not change the phrasing of the texts as long as the translation was acceptable. The students did not change anything just to improve the natural flow of the translation, even though there may have been a more idiomatic solution. She concludes that it is not necessary, in PE classes, to emphasise that style is not important in post-editing machine-translated output.
We, however, would recommend that one focus of training in revision and post-editing should be on suppressing personal preferences (as already described in Mossop 2001), especially regarding lexical choices and style. Trainees need to learn to explain how they decided that an edit was or was not necessary. They need to learn how to make reasonable decisions. Further, it seems plausible to train translators to work with what the machine or another human has previously produced and to improve the text with the least possible effort, so that jobs can become as profitable as possible.6 Another issue that is quite essential for LPE is learning to leave grammatical and syntactic errors in the text if requested. This task might be frustrating and even time-consuming at first, but with growing experience it will become easier. Translators who are also trained as post-editors and/or revisers need to be able to rapidly assess which changes are necessary.
Of course, only a few clients would mind additional improvements as long as the costs do not increase, for example, if the job is paid per source or target text word. However, if the job is paid per hour or per implemented change, problems could occur and the post-editor or reviser might have to justify the changes, which is time-consuming and might lead to a less trusting relationship. Further, some colleagues might feel patronised if their translations are revised according to personal preferences during the revision task. Finally, translators lose the most when they put more effort into the task than they are paid for.
This study is only a starting point for further research on editing behaviour in PE and revision tasks. As with the assessment of translation quality, assessing the validity of editing during different tasks and according to different guidelines
that is visible in process data. This topic might therefore need to be addressed in PE and revision training.
Notes
1 The concept session refers to the recording of the process data for one text.
2 The data also formed part of a multilingual study (the same source texts were translated
into six different languages under comparable conditions).
3 The data for the English–German sessions were collected in 2012, and this was one of the
earlier process studies including PE data. Accordingly, the instructions presented for this
study were rather vague and should have been more precise. Still, we wanted to present
the guidelines in order to make the analysis more transparent.
4 Some sessions had to be disregarded because of technical difficulties during or after the
recording.
5 Of course, the agreement is so high due to our methodology. If we had conducted an
independent rating of all texts by both raters, the inter-rater agreement would probably
have been lower. However, due to the amount of data, we decided to split it between us.
6 According to many personal experiences and conversations, the unprofitability of PE
seems to be one of the main reasons professional translators are critical of it.
2
DIFFERENTIATING EDITING,
POST-EDITING AND REVISION
Félix do Carmo and Joss Moorkens
In her article on the transition from statistical to neural machine translation (NMT), Kenny (2018) argues that NMT is a sustaining rather than a disrupting technology, a linear progression along a continuum. Similarly, we believe that for translators whose work processes have evolved alongside translation technology, post-editing (PE) may be just one more step in that progression, with machine translation (MT) suggestions acting as another contributory input to the translation decision process alongside translation memory (TM) matches, terms and concordances. The aim of this chapter is to draw on the editing task that is present in translation, in revision and in PE to clarify the impact of translators' use of MT. The chapter critically analyses views and narratives about PE from Translation Studies, MT research and the industry. The alternative view we propose calls for further discussion and study of the technical dimension of translators' work, and it draws on translation process research to recommend a re-understanding of PE as a translation process rather than a revision one. As a consequence of this new understanding, we claim that, for MT content to be used efficiently, specialised users with specialised tools are required.
The first part of the chapter (sections 1 to 3) draws on existing studies of PE, presenting the terms and assumptions upon which the subsequent sections are built, discussing the existing narratives in academia and industry (in section 2) and drawing out the implications of these views for professional translators. In section 3, we set out reasons for opposing these narratives and for considering PE to be a form of translation, informed by theoretical considerations and practical analyses of industrial workflows. The second part of the chapter (sections 4 to 6) details the role of editing in our alternative view of PE, analysing the consequences of this view in terms of the need for specialised tools and specialised users of technology. We consider in detail what editing is and its role in the study of PE, supported by translation process research. Two fundamental elements arise from our analysis:
the threshold that separates editing from translating, and the description of editing as four actions. Section 5 applies these elements to the description of tools that specifically support editing, while in section 6 the views built up over the chapter converge into our conclusion that, although MT might seem to be a process that has replaced translators in the translation process, in fact it requires even more specialisation by translators.
As for the term 'editing', it is used here to describe a type of writing task that is different from translating. In editing, translators act on a segment of text, be it a suggestion already rendered in the target language or a segment still written in the source language, with either one requiring only a few changes here and there to be ready for validation. The term may be used to describe the actions performed during PE, and it may also be used to describe the actions carried out to update the translation of a fuzzy match from a TM. Editing is a writing action that happens after 'checking', which is a reading task with the purpose of identifying whether the segment should be validated or edited.
In the film or publishing industries, editing is part of the creation process, but it is only performed once all parts of the intended result have been produced. Editors apply surgical actions to remove bad sections from a sequence, they decide what to fit into a specific place, they move sections to their optimum positions and they try to select the best options for the different parts of a film or a book. Likewise, deleting, inserting, moving and replacing are the four actions that compose 'editing'. These will be referred to as 'editing actions', but one may also talk about 'edits'.
"the art of finding the best choice among all choices" and who may face an identity crisis because of industry pressures (2013: 331).
Two examples of studies that have focused on the causes of translators' resistance to PE are Cadwell et al. (2018) and Moorkens and Way (2016). Both studies are based on data from real users and interviews with practising translators. They suggest that translators' use of MT may increase if they feel a greater sense of agency and have greater confidence in the utility of the MT suggestions.
In addition to translators, we can analyse the voice of the industry as expressed
in the documents produced to standardise procedures. This is how the ISO 18587
standard explains the reasons for PE adoption:
using modern CAT tools, designed for optimising work with TM rather than with MT (Moorkens and O'Brien 2017). In these environments, MT suggestions appear intermingled with TM matches as resources for translators to check and edit. In addition, CAT tools show suggestions for terminology, allow for searches of words in context, and show predictive writing suggestions and repaired fuzzy matches. The inclusion of suggestions from MT is not a major disruption to the growing complexity of CAT tools, but a natural evolution. When MT becomes an added resource, the distinction between editing TM suggestions and post-editing MT suggestions becomes less obtrusive. This is borne out by the similarities between high-quality TM matches and NMT output, in terms of editing effort and the comparative perceptions of their usefulness, as reported in Sanchez-Gijón et al. (2019).
In translator training, PE has already become a standard practice. Many translation programmes have, in one way or another, incorporated PE into their curricula since at least 2009 (Guerberof and Moorkens 2019), and the teaching of PE is now expected for all courses included in the European Master's in Translation Network (2017). Researchers have even presented suggestions for PE to be taught as an extension of translators' skills (Kenny and Doherty 2014). We may thus say that there is a whole generation of graduate translators for whom PE is expected to be part of their jobs.
Furthermore, industry statistics reveal that PE has been growing steadily, at least since TAUS published their guidelines (2010). Lommel and DePalma (2016) refer to an estimate by 56 enterprise clients that the percentage of PE in the global translation industry could reach 10% in 2019. If we add to these numbers 'unofficial PE' (when translators are given source language text in the target window and decide to machine-translate it instead of typing over it with their own translation), the numbers show that, notwithstanding natural variations between local realities, PE is much more widely used than the resistance narratives lead us to think.
There may be good reasons why the discourse on translators' resistance caught on so steadily. In the current state of affairs, any type of empirical research will oppose the narrative of resistance, thus serving to reinforce the reasonableness of PE, while in the process contributing to uncritical acceptance. In fact, it is easier to argue against unreasonable fears than to answer difficult questions, such as: 'Have the technological advances really helped translators become more efficient and produce better quality, or is the industry simply selecting the satisficers who are happy to quickly select the least bad of all options?' and 'Does the industry want translators to be trained to be satisficers, or is it ideal to train them to be flexible optimisers?' Whether misgivings are reasonable or not, the choice of whether to post-edit should always be consultative, as the imposition of any process on the translator will lead them to "feel that the material agent gets precedence and is inevitable, no matter how unfitting it might be for the task at hand" (Cadwell et al. 2018: 17).
We propose that new views of PE should be brought forward, so that questions like these are more frequently studied and debated. In the next sections, we present a few arguments towards the claim that PE needs to be studied in more detail, beyond the acceptance/resistance divide.
revision by a different translator before delivery to clients, because that is the only way to guarantee fitness for purpose. In DARPA's handbook for the Global Autonomous Language Exploitation programme, the description of PE production is of a two-stage process that includes a revision pass by a second translator (Dorr et al. 2010). In a broad study of interactive PE carried out by universities and translation companies in collaboration, the outputs of PE were revised by different translators, not for evaluation purposes but for quality assurance (Sanchis-Trilles et al. 2014). This revision of PE could not be accepted in commercial contexts, which are so opposed to redundancy, were it not that PE is in fact a form of translation.
The current context in which PE is performed, within CAT tools, also raises questions about classifying PE as revision: does it make sense to say that if translators edit a TM fuzzy match they are translating, but if they edit an MT suggestion they are revising? Sanchez-Gijón et al. (2019) found both processes to be quite similar. There are ongoing discussions and tests to identify the threshold at which a suggestion from the MT system is more useful than a fuzzy match from the TM, and setting this arbitrarily may harm performance (Moorkens and Way 2016; Zaretskaya 2019). More importantly, this shows how close translating and revising are to each other in PE.
In professional settings, PE is determined by specifications, the requirements of style guides, client terminology and consistency, among many other external factors that change from project to project but are also updated from assignment to assignment. However, some of the most cited studies on PE productivity exclude most or all of these factors, testing only one of two scenarios: very limited test conditions, employing students who focus on language issues without considering any external factors, or difficult-to-reproduce ideal lab conditions with professional translators who are experienced and work daily in the same conditions used for the tests (Vasconcellos 1987b; Zhechev 2014). One study even suggests that PE does not require as much research as translation or revision (Wagner 1985), a claim which can only be valid in those ideal scenarios in which the expertise of translators makes that research redundant. Moreover, many of these studies are done with simplified interfaces, in which sentences appear in isolated text boxes with no TMs or terminological support (Plitt and Masselot 2010). Often in these studies all segments need to be post-edited, excluding the need for the translator to decide whether to validate, to edit or to retranslate a segment from scratch because of conflicting support resources, as so often happens in real scenarios. When studies are done in actual production settings, including TMs, termbases and MT content, productivity gains are not as high as other studies claim (Läubli et al. 2013).
Claims about increased productivity have, nevertheless, been generalised to all work environments. This generalisation is particularly problematic when we know that it is impossible to test and measure in the lab all the external and internal factors involved in professional PE. To mention just one internal factor, the types of errors produced by MT systems are not reproducible if one uses a different system, a different language pair or different training data, or simply changes the test data. This unpredictability of MT output and errors has been especially evident since the advent of neural MT (Castilho et al. 2017; Daems and Macken 2019).
The description of PE, based on these lab tests, is therefore incomplete and
of limited validity. Outside of lab conditions, PE may, for example, require more
complex reading and writing than revision, or even than translation from scratch.
Krings (2001) has shown that reading slows down when MT content is added,
particularly when it is medium-quality content, which requires a careful analysis
to decide whether it is best deleted or retained to be adequately edited (as was also
found with statistical MT by Moorkens and Way in 2016). This also reveals that,
although one may consider that any content already in the target language will be a
welcome help for translators, even 'satisficers' may not like to have all the segments processed by MT. One needs to accept that the decision to delete and retranslate a whole MT sentence may be the most efficient one, in view of contents that require either extensive reading or extensive writing.
Taking into consideration all that happens at professional translators' workbenches during PE, we propose that it should be considered a type of translation. This is not only because PE represents an evolution of industrial translation processes and because it fulfils the same purpose as translation (to produce a good target text in an efficient and effective way), but also because it requires advanced writing and reading skills in two different languages. In the next section, we focus on one of these skills: editing.
non-linear actions ‘editing’, since the translator is reading and intervenes in the text
only to delete, insert, move or replace units. Both forms of writing (translating and
editing) occur in translation, in revision and in PE.
It is not only in Translation Studies that editing is described in terms of actions.
Early studies of edit distance by Damerau (1964) and Levenshtein (1966) estimate the minimum number of operations necessary to transform one segment into a different one. The purpose of those early estimates was to correct spelling errors in typed
text or errors in computer code. This approach was later adopted for Translation
Edit Rate (TER) (Snover et al. 2006), one of the metrics most used by the MT
community for tasks such as automatic PE and quality estimation, or to compare
the quality of MT output from different systems. These metrics assume the shortest
distance between an unedited and an edited string, but inevitably, translators do not
take the shortest possible route from raw MT output to post-edited segment (as
described in the later section on complex editing). Consequently, TER is not a fair
description of what actually happens during the process, but more a description of
translation products (Daems and Macken 2019; do Carmo 2017).
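Since TER plays a central role in this discussion, a minimal illustration may help; the following sketch (ours, not the authors') computes a word-level minimum edit distance by dynamic programming and normalises it by reference length, as TER does. Real TER additionally allows block shifts, so this is only the core of the metric.

def min_edit_distance(source_tokens, target_tokens):
    # dp[i][j] = fewest edits turning the first i source tokens
    # into the first j target tokens
    m, n = len(source_tokens), len(target_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # i deletions
    for j in range(n + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if source_tokens[i - 1] == target_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[m][n]

raw_mt = "the machine translation was edited".split()
post_edited = "the machine output was lightly edited".split()
edits = min_edit_distance(raw_mt, post_edited)
ter_like = 100 * edits / len(post_edited)  # edits per reference word, as a percentage
print(edits, round(ter_like, 1))  # 2 33.3

The gap between this minimum count and the edits translators actually perform is precisely the mismatch between process and product that the authors describe.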
which constrained work to the four editing actions. In that experiment, the global
average editing was also close to the threshold (26% for the autocomplete mode
and 24% for the mode with the four editing actions). The study showed that detailed analyses of the results reveal high variation in editing rates depending on text, user or mode of work.
Further studies of editing could focus on the editing threshold: the effort rates for segments above and below the threshold could be identified in fine-grained studies and the percentage of segments close to upper or lower bounds (with higher or lower quality) could be analysed. Naturally, any threshold will necessarily vary based on different factors, like language pair, TM or MT quality, domain and project specifications. It would also be interesting to try to identify the threshold using typical translation process research methodologies: is it possible to see an effect on effort (be it technical, measured in keystroke frequency, or cognitive, measured by eyetracking) when translators move from segments in which they are below the editing threshold to segments where they need to translate? All this research potential is a strong argument for making the editing threshold an object of study.
further analyses can be made with the four actions. We can, for example, divide the four actions into those that imply writing content (insertion and replacement) and those that do not affect the content but just manipulate position (deletion and movement). Secondary actions are also associated with more efficient methods, like overtyping in the case of replacement, or dragging in the case of movement.
Using the editing actions as a guideline or an instrument of analysis can be very helpful. For example, descriptions of the translation process (usually at the character level, with many recursions and details that do not survive in the final version) could be more interpretable if they followed this model. In addition, since efficiency is a fundamental feature of translation, this simplified view of editing could become an important consideration in translator training (do Carmo 2017).
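To make the four-action model concrete, here is a small sketch (our illustration, not the authors' tooling) that maps the differences between a raw and an edited segment onto the actions using Python's difflib. Note that difflib has no notion of movement: a moved word surfaces as a paired deletion and insertion, which itself shows how easily edit traces understate what translators actually do.

import difflib

def classify_edits(raw_tokens, edited_tokens):
    # Count word-level insertions, deletions and replacements between
    # the raw segment and its edited version.
    actions = {"insertion": 0, "deletion": 0, "replacement": 0}
    matcher = difflib.SequenceMatcher(None, raw_tokens, edited_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "insert":
            actions["insertion"] += j2 - j1
        elif tag == "delete":
            actions["deletion"] += i2 - i1
        elif tag == "replace":
            actions["replacement"] += max(i2 - i1, j2 - j1)
    return actions

raw = "the process of correct a translations".split()
edited = "the process of correcting a translation".split()
print(classify_edits(raw, edited))
# {'insertion': 0, 'deletion': 0, 'replacement': 2}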
Complex editing
We should stress that any description of editing consisting merely of a sequence of edits is a simplification of a complex process. When editing and, most relevantly, when translating, translators manipulate segmentation. For example, they select a word but replace it with a phrase or a clause; they make non-linear edits when they apply scattered actions, moving in both directions within a sentence; they make recursive edits, coming back to the same word several times; they replace a word which may be embedded in a larger change within a group of words; and they make partial edits, when only part of a word is replaced, or discontinuous ones, in which, for example, an 's' for plural is added to different words in the same sentence.
Many changes may not be visible in the final result because of backtracks, and only the final edits survive in the translated and revised target text. This complicates the study of editing behaviour, and it is another argument in favour of the editing threshold: we need to partition editing data according to varying degrees of complexity. But even the editing threshold is difficult to measure if we use tools like TER, which estimates the minimum number of edits from the starting text to the final edited text, when the actual number of edits and keystrokes is inevitably greater.
The main conclusion from their data collection (Carl and Jakobsen 2009) is that the difficulties in capturing a description of actions are related to the lack of alignment with source text words and to recursive operations applied to the same units.
In the discussion section of their paper, the authors comment on how best to
assist human translation processes with automated tools: “At what moment during
the translation would the mechanical help be most welcome? Would a translator be
better supported during the ‘linear’ translation production or during the translation
pauses?” (Carl and Jakobsen 2009: 136). They then discuss the distracting impact of
typing suggestions and how to integrate these and other aids into translation tools.
They advocate that process analysis may help to identify reading patterns and to
develop tools even for the reading task.
By 2016, a few signs seemed to indicate that technology and research might have reached the evolutionary state required to pursue the goal of predicting editing actions.
Predicting the editing actions that translators should perform is one of the toughest challenges for the development of tools that specifically support PE. This is the theme of the next section.
efficient process, even if it means retyping a few words that were in the original sentence. That was the conclusion of a study which compared two modes of editing: traditional PE (the system presents one full MT suggestion and there is no interaction) and Interactive Machine Translation (IMT), where the translator writes and the system presents the next word (Alabau et al. 2016). In that study, almost all users preferred not to work with such interactive help. This shows that, although the technology is available to model and predict PE work, the question of how to offer the help in a useful and usable way remains open.
There is one very important difference between writing a translation and editing it: if you are writing a translation from start to finish, the whole translation is being generated in your brain even if your tool presents you with suggestions for word or sentence completion. Generating means first creating an abstract notion of the meaning and intention of a sentence, and only then, through syntactic processes, giving it form. When this generation process is confronted with a sequence of ever-changing suggested completions for the sentence, a high cognitive load is created by the dynamics of the process (Alabau et al. 2016).
For editing, the generation process should not be triggered, since the user is
looking for mistakes. To actually be ‘editable’, the sentence presented by the MT
system must be good enough for the translator to worry only about certain points
that may be corrected through the application of well-directed actions. When that
cannot be done, the generation process is triggered. At that moment, the translator
decides to delete everything, and this becomes a writing task, which moves us to
the level in which editing becomes translating.
Having a good MT suggestion, however, is not enough to support editing. The interface elements—the mechanisms that build the communication between the MT systems and human actions—must also provide the necessary conditions for the editing to proceed in an efficient way. IMT may not be the best model to support editing since it is inspired by translating.
(Christensen 2011; Pym 2011b). A more recent study compares statistical MT and
NMT interactive systems (Daems and Macken 2019), and another (Coppers et al.
2018) tries to combine several resources in an interface that is intended to be intel-
ligible and practical for users.
Translation throughput is nowadays often limited only by the capacity of the
interfaces between humans and technology, as is recognised by analyses of typ-
ing speed, mobile input interfaces and voice recognition (Moorkens et al. 2016).
Furthermore, technological development must be based on correct models of processes; otherwise, tools risk being neither useful (in the sense that they solve problems) nor usable (in the sense that they improve processes) (Rabadan 2010).
The main conceptual model of how translators produce translations is 'translating': translators write full sentences, typing characters in a linear way. Auto-completion has therefore been the basis for the features that support writing in interactive translation systems—as translators start typing, words or full sentences are suggested to them (Green et al. 2014; Hokamp and Liu 2015). However, while auto-completion supports linear writing, editing is scattered and implies other actions, some of which involve only the position of words. These actions cannot be adequately supported by auto-completion systems and would benefit from systems that were able to predict which words may require deletion, or in which positions they should be placed (do Carmo 2017).
An interactive tool that supports editing should present suggestions, depending
not only on the words selected but also on a contextual choice from the editing
actions available: if there is a learnt model that estimates a high probability that a
certain word is to be deleted, the tool may suggest that action beforehand. The
interactive tool may use predictive writing functionalities to support the insertion
and replacement actions, but it should also incorporate adaptive features to support
frequent deletions and movements.
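As a purely hypothetical sketch of such an adaptive feature (nothing here comes from an existing tool; the threshold value and function names are our inventions), the fragment below scores each token with a stub standing in for a learnt deletion model and proposes delete actions above a confidence threshold, before the translator has typed anything.

DELETE_THRESHOLD = 0.8  # assumed confidence level for surfacing a suggestion

def predict_delete_probability(token, prev_token):
    # Stub standing in for a learnt model. Here it only scores the classic
    # duplicated-word MT artefact highly; a real model would use features of
    # the MT output, the source sentence and past editing behaviour.
    return 0.9 if token == prev_token else 0.05

def suggest_deletions(mt_tokens):
    # Propose a delete action for every token the model flags confidently.
    suggestions = []
    prev_token = None
    for position, token in enumerate(mt_tokens):
        if predict_delete_probability(token, prev_token) >= DELETE_THRESHOLD:
            suggestions.append((position, token))
        prev_token = token
    return suggestions

print(suggest_deletions("the the contract shall be be terminated".split()))
# [(1, 'the'), (5, 'be')]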
The challenge posed by editing is, as we have seen, not just a technical matter. Having tools that support decisions about which words to delete, or tools that present alternatives to replace words, affects not just the daily life of translators but our own theories and conceptions of the processes involved, as well as our overall perception of the value of tasks performed by professional translators.
6. Conclusions
Throughout this chapter, we have reviewed descriptions and analyses of the editing
task (when translators make small changes to text) that enable a clear consider-
ation of the PE process and support our assertion of PE as a translation task. We
have discussed how the view of PE as a form of revision may have contributed to devaluing this process, mostly in the professional world, but also affecting pedagogical approaches.
As an argument to counter that view, we propose a threshold that, even if it is
based only on a measure of post-editors’ technical work, serves to highlight the
level of writing and translating that is required during PE. As we have seen, the
four editing actions, which are at the centre of our view of editing, appear in very
early studies of the translation process such as those by Nida and Toury, but they
also appear very early in computer science in the form of edit distance metrics.
Although apparently simple, these four actions may be fundamental to supporting
descriptions of the complexity of translating, revising and post-editing, and to fos-
tering new methods of studying the details of production behaviours.
Throughout the chapter, there are several indications that PE is a specialised process which should be carried out by specialised translators. We have shown how editing is part of a creation process which has to be efficient and achieve different levels of quality. During PE, translators need to read more content than in normal revision, and they need to write in a more varied way, most frequently editing, but with the ability to quickly decide to delete an MT suggestion and translate the sentence from scratch. Post-editors need to have more strategies than mere transference; they need to know how to avoid replicating the source content in a structure that is inappropriate in the target language. Knowledge and practice of the four editing actions may help translators become more efficient at editing, but this must be done with the awareness that they can move above the threshold, into translating. Finally, the evolution of translation tools, offering more interactivity and support, requires users who are proficient at using these features. That way, the tools and resources at their disposal become instruments that sustain their professional development.
Through the study of editing, we have offered a view in which PE may be a more rewarding process for translators, a process in which optimisers and satisficers alike can identify their role in a demanding professional environment.
3
POST-EDITING HUMAN
TRANSLATIONS AND REVISING
MACHINE TRANSLATIONS
Impact on efficiency and quality
Joke Daems and Lieve Macken
Revision and post-editing are intuitively comparable: the first is the process of correcting a human-translated text, the second is the process of correcting a machine-translated text. While correcting these texts, revisers and post-editors alike have to be aware of a variety of potential translation solutions, they have to make corrections on different levels (mechanical to conceptual, stylistic to political) and their tasks become even more similar as the degree of expected quality increases (Vasconcellos 1987a).
What is presumed to set them apart, however, is the nature and distribution of
the errors, with machine-translation (MT) output containing more errors and more
predictable errors compared to human translation (Vasconcellos 1987a). Another
presumed difference between revision and post-editing is sociological rather than
mechanical: “it is easier to wield the metaphorical red pen on the output of some
faceless machine than on one’s colleague’s hard work, especially if one is slightly
predisposed to disparage the machine’s capabilities” (Somers 1997). Having to work
together with other translators, or ‘interpersonal aptitude’, is cited as an important
component of translation revision competence (Robert et al. 2017a).
The question is whether the assumptions made by Vasconcellos are still valid
today. MT systems have evolved enormously since the 1980s, and the output is
likely closer to human translation quality than it used to be. In addition, in our
increasingly global and digital society, communication often takes place in a virtual
manner, which could make sociological issues less relevant, as translation agencies
can be asked to revise a translation without knowing the person (or machine) who
prepared it. Given that readers and translators are not necessarily capable of distin-
guishing between human-translated and machine-translated texts (Vasconcellos and
Bostad 1992; He et al. 2010), the question is whether a reviser would be.
This chapter is structured as follows. We first discuss relevant research related to differences between post-editing and revision regarding quality and efficiency
and introduce our research questions and hypotheses. We then outline our experi-
mental set-up, describing the characteristics of the English source texts (for each
of which we used both a human and a machine translation into Dutch) and the
data collection process. This is followed by the analysis section, where we describe
how we operationalized quality and efciency and the steps taken in our statistical
analysis. In the results section, we present the results of the statistical analysis for
each research question and some additional exploration where relevant, without
discussion. In the discussion and conclusion section, we discuss the results and link
back to our research hypotheses.
1. Related research
Revision is listed as a key way to ensure translation quality in the ISO 17100
standard, and it has been integrated into translator training. According to Allman
(2006), it is important for the reliability of a translation to receive special attention
during the revision process. He argues that translations by professional transla-
tors are more likely to contain transfer errors than language errors. McDonough
Dolmaya (2015), on the other hand, found that translations do contain language
and style problems in addition to transfer problems. She also found that half of the
problems are not detected during the revision process and that most attention goes
to the correction of language and style rather than transfer.
This raises the question of whether revision is in fact the most efficient way of obtaining a high-quality translation. Revision can be a time-consuming process, but revisers are often not given the time they need to properly revise a text. According to Parra-Galiano (2016), there must be "a balanced relationship between cost, usefulness and necessity" for revision to be effective, a sentiment that is echoed by Künzli (2007a). Another risk is that the reviser introduces errors. While Robert et al. (2017b) assume that students who have been taught revision would be more tolerant and introduce fewer hyperrevisions (preferential changes), this hypothesis has not been confirmed. Martin (2012) goes so far as to say that revision can do more harm than good and is often not worth the extra time and effort.
A potentially more efficient solution—at least for some text types—is post-editing of MT. The increased speed of post-editing as compared to human translation has frequently been established, and usually the quality of the final post-edited product is comparable to or even better than that of a human translation (Plitt and Masselot 2010; Koponen 2016; Daems et al. 2017a). McElhaney and Vasconcellos (1988) argue that the errors in MT (which in their case was rule-based MT) are more of a local nature (occurring at word or phrase level) compared to errors made by a human translator, that there are no inadvertent repetitions or skipped passages in MT, and that "while the machine may not always find the correct alternate translation for a word or phrase, neither will it make wild guesses" (p. 141). On the other hand, a possible risk of post-editing lies in the recurrent nature of MT errors and translators' distrust of MT output.
Not only can this cause irritation because translators have to correct the same
errors over and over again, but in addition there could be a negative impact on
the translators themselves, as they can become so used to the (incorrect) phras-
ing that they no longer spot the errors (Krings 2001). This fear is countered by
de Almeida and O’Brien (2010), who found that more experienced translators
made more essential changes as well as preferential changes during post-edit-
ing compared to less experienced translators. Interestingly, awareness of the provenance of suggestions may influence the translation process. In a setting where translators worked in a tool without information about the origin of the suggestions, Guerberof Arenas (2008) found that translators processed MT suggestions faster than fuzzy matches from a translation memory and that the quality was better as well. In contrast, Teixeira (2014) found that the presence of origin information in general reduced translation time without a negative effect on the quality. His findings were more nuanced for specific suggestion types: exact matches from a translation memory were processed more slowly and more segments were edited when no metadata was present, whereas for MT suggestions, the presence or absence of metadata had no significant effect on time, editing1 effort or quality.
Since the publication of these studies, neural machine translation (NMT)
has become the mainstream MT technology, mainly due to its ability to pro-
duce far better translations than its predecessor, statistical machine translation
(SMT). Numerous studies covering many language pairs and translation tasks have
demonstrated that NMT outperforms SMT (Bentivogli et al. 2016; Toral and
Sanchez-Cartagena 2017). Since NMT systems are able to take into account the
context of the entire sentence during translation, they can produce translations
that are more fluent. Van Brussel et al. (2018) carried out a detailed error analysis on 665 sentences that were automatically translated from English into Dutch with different translation engines and found that the NMT system produced a higher number of flawless translations. On the other hand, the NMT output contained less-transparent errors, such as omissions (for example, the modal marker 'would like to' was deleted in an otherwise faultless Dutch translation); this kind of error might be quite challenging for post-editors.
In this study, we set out to determine differences between revision and post-editing in a modern setting. We compared the interventions made on human-translated
and NMT-translated texts in two scenarios: when the instructions matched the ori-
gin of the text (translators were asked to revise a human-translated text or post-edit
a machine-translated text), and when the instructions did not match the origin
of the text (translators were asked to post-edit a human-translated text or revise a
machine-translated text). Our research questions were the following:
1 Do participants make more changes in a text when they assume they are post-
editing, or when they assume they are revising?
2 Is the revision of higher quality when participants assume they are post-editing,
or when they assume they are revising?
3 Are the most optimal translations produced when participants assume they are
post-editing, or when they assume they are revising?
(See section 3 for definitions of ‘revision quality’ and ‘optimality’.)
Our hypotheses were as follows:
1 Participants will make more changes during post-editing than during revision,
regardless of the actual origin of a text, as translators are generally more critical
of MT output and could find it harder to criticize the work of a fellow transla-
tor (Somers 1997).
2 Revision quality will be highest for post-editing, as post-edited quality can be
even better than that of human translation (Koponen 2016).
3 (Assumed) post-editing will be more optimal than (assumed) revision as post-
editors are generally trained to avoid changes related to fluency or style and
to focus only on actual errors that need to be corrected to improve the MT
output, whereas revisers are often trained to detect issues of fluency and style
as well. Additionally, in the case where people believe they are post-editing
MT while in reality the text is a human translation, they might be more critical
and therefore notice more errors than they would if they believed the text was
translated by a person.
2. Experimental set-up
Text selection
We selected articles from the Dutch Parallel Corpus (Macken et al. 2011) that
were originally published in the English newspaper The Independent and in the
Dutch newspaper De Morgen, which can be considered a high-quality newspaper.
The selection was made on the basis of text length (minimum 500 words), num-
ber of sentences, topic (including some items that would have to be researched,
but no specialist content) and relative timelessness (not obviously discussing a
specific historical moment). Texts fulfilling these criteria were translated with
the free online NMT systems DeepL2 and Google Translate3 in December 2018.
The quality of the translations was evaluated by the authors and by two language
experts with respectively more than 5 and more than 20 years of experience in
translation technology and translation quality evaluation. The two source texts
for which the MT quality was best, that is, the machine translation had the
lowest number of errors and the lowest number of critical errors (for example,
contradictions or unintelligible passages), were chosen for the experiment. A
few source text sentences that had no corresponding translation in the target
text were deleted to allow for a fair comparison between human translation
and MT. The final source texts were 603 words (38 sentences) and 595 words (37 sentences) long, their corresponding human translations 609 and 634 words respectively. DeepL was chosen for the final MT version, as its quality was slightly
better than that of Google Translate for these texts. The MT versions were 641
and 625 words long respectively.
Identification of problems
To be able to distinguish between necessary and unnecessary changes, all problems in the human and machine-translated versions of both texts were first identified. The approach of Daems et al. (2013) was adopted, with manual annotation of problems using a categorization scheme4 that is based on the dichotomy between adequacy and acceptability. Annotations were performed by the authors, who have more than five years of experience working with this annotation scheme as well as high inter-annotator agreement (see Daems et al. 2013 for a discussion). Cases where the annotators disagreed were discussed before settling on a final annotation.
Apart from annotating the problems, we also assigned a severity weight of 0
to 3 to each problem. A severity weight of 3 was used for critical problems that
have a major impact on the accuracy and/or intelligibility of the translation; for
example, word sense problems such as the English word drug, used in the sense
of medicine, being translated in Dutch as drug (the Dutch translation for the illegal
substance) instead of geneesmiddel (the Dutch translation for medicine) or unin-
telligible translations in the target text such as maakt een veel urgenter geval as a
word-for-word translation of makes a far more urgent case. A severity weight of 2
was assigned to problems that cause a shift in meaning between the source text
and the translation or that affect the intelligibility of the translation; for example, deletion of modality markers (the words potentially or practically are omitted in the translation), a change in modality (no distinction between could and can) or the use of hypernyms (petrochemical being translated as chemical). A severity weight of 1 was used for minor problems, where the text can still be understood without effort and the information contained in the translation is equal to that of the source text, but there is a small error (for example, a spelling mistake such as the wrong plural form enzymes instead of enzymen) or the intelligibility is affected a little (the name of an organization is not translated or explained). A severity weight of 0 was reserved for differences that are not actual problems; for example, explicitations or omission of non-essential information. An overview of the different error types found in each of the texts can be seen in Figure 3.1.
For both texts, the MT output contains more errors than the human translation,
especially adequacy errors (not counting additions and deletions), and more errors
with severity weight 3. Deletions are far more common in human translation than
in MT output. Additions appear only in the human translation of text 2. Note that
additions and deletions were counted as errors only when they removed infor-
mation crucial for the reader or when they added information that could not be
derived from the source text and was therefore incorrect. Cases of deletion that did
not alter the meaning and cases of explicitation necessary for a target reader were
not counted as errors. Grammatical errors are found only in the MT output of text
1. Style and lexicon issues are more common in MT than in human translation.
FIGURE 3.1 Total number of errors in each category by text, text origin (HT = human
translation, MT = machine translation) and severity weight
This supports Allman’s (2006) claim that the most important thing to check during
revision of human translations is reliability.
Participants
To ensure the ecological validity of the experiment, we worked with actual translation agencies. We selected eight different translation agencies in Belgium that offered revision and/or post-editing as a service on their website and contacted them by e-mail. They were asked to have two translators perform the same task independently. Other instructions were minimal: the agencies were asked to ensure that the final text would be of publishable quality for a general Dutch-speaking audience (in Belgium and the Netherlands) and they were asked to submit a Word
document with ‘Track Changes’ enabled. The agencies were not aware that they
were participating in a research project.
Four of the agencies were told that the texts were human translations and
were asked to perform revision; the other four agencies were told the texts were
machine-translated texts and were asked to perform post-editing.
In reality, the texts could occur in one of four sets: (1) both text 1 and text 2
were human translations, (2) both text 1 and text 2 were machine translations, (3)
text 1 was human translation and text 2 was MT, (4) text 1 was MT and text 2 was
human translation. Since two of the agencies had only one translator available, we
reached out to two additional agencies to ensure a balanced design, with each set
and each condition (instruction to revise/instruction to post-edit) occurring an
equal number of times.
TABLE 3.1 Distribution of texts (text 1 and text 2) and tasks per participant
Participant   Text 1   Text 2   Task
1A            HT       HT       revise
2A            MT       MT       revise
3A            HT       MT       revise
4A            MT       HT       revise
1B            HT       HT       revise
2B            MT       MT       revise
3B            HT       MT       revise
4B            MT       HT       revise
5A            HT       HT       post-edit
6A            MT       MT       post-edit
7A            HT       MT       post-edit
8A            MT       HT       post-edit
9A            HT       HT       post-edit
10A           MT       MT       post-edit
7B            HT       MT       post-edit
8B            MT       HT       post-edit
The distribution of tasks, eliminating potential task or task order effects, can be seen in Table 3.1. The number in the participant code indicates the translation agency, the letter the participant; for example, participants 1A and 1B both worked for the same translation agency.
Prices charged by the translation agencies were somewhat higher for revision
than for post-editing. We calculated the price per text per participant (including
VAT). Prices ranged from 36.24 euros to 75.2 euros for post-editing (with a mean
of 53.85 euros and a median of 56.87 euros) and from 49.16 euros to 72.6 euros for
revision (with a mean of 62.58 euros and a median of 64.28 euros).
3. Analysis
As explained in section 2, we obtained 32 Dutch translations in total: 16 different versions for each of the two source texts, of which 8 were based on the human-translated text, and 8 on the translation generated by DeepL. Half of the human translations and machine translations were revised; the other half were post-edited. To analyse the changes made to the original translations by the revisers/post-editors, we combined automatic metrics with manual annotations.
An automatic metric that is often used in the field of MT is (H)TER (Snover et al. 2006), henceforth referred to as TER (Translation Error Rate), which quantifies the amount of editing (insertions, deletions and substitutions of single words, and shifts of word sequences).
TABLE 3.2 Baseline TER and error scores for each text and origin, on text and sentence level. μ = mean (sum of all values divided by the number of items), M = median (for an uneven number of items, the middle value when sorting all items in ascending order; for an even number of items, the average of the two middle values)
Text   Text origin   TER baseline (text level)   TER baseline (sentence level)   Error score baseline (text level)   Error score baseline (sentence level)
and:
edit efficiency = MAX(100 − actualTER, 1) / MAX(100 − baselineTER, 1)
Intervention optimality is the weighted harmonic mean of revision quality and edit efficiency, weighting revision quality higher than edit efficiency7 with β = 2.5, as quality was deemed to be 2.5 times as important in the SDL study.
The revision quality formula is adapted from the second formula suggested by
Robert (2012): taking the number of necessary changes, subtracting the number of
overrevisions, and dividing the result by the total number of errors. In our adapta-
tion, the errors are weighted according to their corresponding severity. If the total
number of errors equalled zero, revision quality was automatically set either to 1
(high), if the participant did not introduce any errors of their own, or to 0 (low), if
the participant introduced errors of their own, in order to avoid division by zero.
If the outcome of the revision quality formula was negative, for example, when
the number of errors introduced by the participant was greater than the number of
errors solved, revision quality was also set to 0.
Edit efficiency compares the actual TER value with the baseline TER value. It is possible that either or both of these values will equal zero. To avoid division by zero, we reversed the values by subtracting them from 100 (a proxy for max TER). As such, they become measures of how much of the original text was expected to be retained (100 − baseline TER) and how much of the text was actually retained by a participant (100 − actual TER). In cases where this value was less than or equal to zero (for example, if TER was exactly 100 or greater than 100), it was automatically set to 1 (indicating that almost everything was edited). Edit efficiency was then calculated by dividing the reversed actual TER score by the reversed baseline TER score. For example, if a sentence had a baseline TER of 30, and an actual TER of 80, their reversed values would be 70 and 20 respectively. By dividing the reversed actual TER by the reversed baseline TER, we get an edit efficiency of 0.29, indicating that a participant retained only 29% of the text they were supposed to retain according to the baseline.
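Putting the three measures together, the following sketch implements revision quality and edit efficiency as described above; the intervention optimality formula is our assumption, transposing the F-beta form mentioned in note 7 with revision quality in the role of recall.

BETA = 2.5  # quality weighted 2.5 times as heavily as efficiency

def revision_quality(solved_weight, overrevision_weight, total_error_weight):
    # Arguments are severity-weighted sums of error items.
    if total_error_weight == 0:
        # No errors to solve: 1 if no errors were introduced, else 0.
        return 1.0 if overrevision_weight == 0 else 0.0
    quality = (solved_weight - overrevision_weight) / total_error_weight
    return max(quality, 0.0)  # negative outcomes are set to 0

def edit_efficiency(actual_ter, baseline_ter):
    # Reverse the TER values so they measure how much text was retained;
    # values at or below zero are set to 1 (almost everything was edited).
    reversed_actual = max(100 - actual_ter, 1)
    reversed_baseline = max(100 - baseline_ter, 1)
    return reversed_actual / reversed_baseline

def intervention_optimality(quality, efficiency, beta=BETA):
    # Assumed F-beta-style weighted harmonic mean (see lead-in above).
    if quality == 0 and efficiency == 0:
        return 0.0
    return ((1 + beta ** 2) * efficiency * quality
            / (beta ** 2 * efficiency + quality))

# The worked example from the text: baseline TER 30, actual TER 80.
print(round(edit_efficiency(80, 30), 2))  # 0.29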
and text origin (MT versus human translation)—had any predictive effects on the dependent variable. To compare and select models, we calculated Akaike's Information Criterion (AIC) value (Akaike 1974) and selected the model with the lowest AIC value, and we used the step function from the lmerTest package in R to perform backwards elimination of non-significant effects.
We built different models to test our research questions. The predictor variables (text origin and condition) were always tested independently as well as with interaction effects, as we expected the dependent variables to be influenced differently by different combinations of predictor variables (revision for human translation, revision for MT, post-editing for human translation, post-editing for MT). We also included text as a fixed effect, as it had only two levels (text 1 and text 2) and could therefore not be included as a random effect. Including interaction effects for text with either condition or text origin did not improve the model.
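The authors fitted linear mixed-effects models in R with lmerTest; as a simplified illustration of the AIC-based selection step only, this sketch compares ordinary linear models in Python's statsmodels (the column names ter, origin, condition and text are assumptions on our part).

import pandas as pd
import statsmodels.formula.api as smf

CANDIDATE_FORMULAS = [
    "ter ~ origin + condition + text",   # predictors tested independently
    "ter ~ origin * condition + text",   # with an interaction effect
]

def select_model(df: pd.DataFrame):
    # Fit every candidate and keep the one with the lowest AIC value.
    fitted = [(formula, smf.ols(formula, data=df).fit())
              for formula in CANDIDATE_FORMULAS]
    return min(fitted, key=lambda pair: pair[1].aic)

# Usage (one row per participant per text, columns as assumed above):
# df = pd.read_csv("observations.csv")
# best_formula, best_model = select_model(df)
# print(best_formula, round(best_model.aic, 1))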
Returning to the three research questions we asked in section 1:
1 Do participants make more changes in a text when they assume they are post-
editing, or when they assume they are revising a text? To model this, actual
TER was used as the dependent variable, with condition (revision/post-edit-
ing), text origin (MT/human translation) and text as predictors.
2 Is the revision of higher quality when participants assume they are post-editing,
or when they assume they are revising a text? To model this, revision quality was
used as the dependent variable, with condition, text origin and text as predictors.
3 Are the most optimal translations produced when participants assume they
are post-editing, or when they assume they are revising a text? To model this,
intervention optimality was used as the dependent variable, with condition,
text origin and text as predictors.
For the sentence-level analysis, the models that were tested always consisted of the dependent variable (TER, quality or intervention optimality), with condition and text origin plus interaction as potential predictors. We included intercepts for participants and sentences as random effects, as well as by-sentence random slopes for the effect of text origin. A by-participant random slope for the effect of text origin was included in the TER model only, as it led to a singular fit for the other models.
We supplemented the statistical analysis with a more detailed exploration of our data where relevant, using exploratory graphs in Excel, to try to gain a better understanding of our findings.
4. Results
TER
Linear mixed effects analysis showed a significant interaction effect between text origin and condition on actual TER scores (effect size: 13.08, standard error: 4.4, p < 0.01), indicating that while actual TER scores were comparable for human
translation in both conditions (revision and post-editing), the score was significantly higher when revising MT output than when post-editing MT output. There was also a significant effect of text, with more edits taking place in text 2 than in text 1 (effect size = 5.86, standard error: 1.75, p < 0.01). There was no significant main effect of either condition or text origin. The predicted values for the interaction effect can be seen on the plot in Figure 3.2.
The effect plot in Figure 3.2 shows the predicted TER score values depending on text origin and condition. We can see that the predicted mean TER score for HT in both conditions and MT in the post-editing condition is close to 12, with the lines indicating the 95% confidence interval (there is a 95% chance that this interval contains the true mean). The less overlap between confidence interval lines, the more likely that the difference in mean is statistically significant. The significant interaction effect of 13.08 can be seen on the plot as the difference between the predicted TER value for HT in the post-editing condition (used as a baseline) and the predicted TER value for MT in the revision condition.
Figure 3.3 indicates that there is some individual variation across participants regarding TER scores, with participant 4B editing the most and participant 2A editing the least. The lmerTest step function did indeed show a significant contribution of this random factor to the model (p = 0.01).
If the intercept for a participant is close to zero, this indicates that their mean TER score is close to the baseline of 12 shown in Figure 3.2. The more to the right, the higher this individual participant's TER scores generally are; the more to the left, the lower. Here as well, the lines indicate confidence intervals with a 95% chance of capturing the true mean.
FIGURE 3.2 Interaction effect plot for the impact of text origin (HT = human translation, MT = machine translation) and condition (PE = post-editing, REV = revision) on actual TER scores
FIGURE 3.4 Revision of human translation for text 1, performed by two different participants
To better understand these findings, we looked at our data in more detail. The degree of individual variation is clearly visible in Figure 3.4, showing a screenshot of the revision of the human translation for text 1, performed by two different participants from different translation agencies.
FIGURE 3.5 Distribution of actual TER scores for sentences with baseline TER 0 by
text origin (HT = human translation, MT = machine translation) and condition (PE =
post-editing, REV = revision)
We further found that more than half of the sentences requiring no editing
(baseline TER score of 0) were nevertheless edited (actual TER scores of 1–100+)
and that, especially for human translation, around half of the sentences requiring
some editing (baseline TER of 20–60) received no editing (actual TER scores of
0). In particular, the sentences where we expected no editing (baseline TER score
of 0) showed interesting patterns (see Figure 3.5): a higher number of sentences
were lightly edited (TER scores of 10–20) in the post-editing condition than in
the revision condition, regardless of text origin. This changed with a higher degree
of editing (20+), where we found more sentences in the revision condition, but
only for MT origin.
Revision quality
Revision quality was best predicted by the mixed effects model including text origin, condition and text as predictors, with an interaction effect for text origin and condition. There were no main effects of either condition or origin on quality, but a significant interaction effect (p = 0.01) with effect size 0.27 (standard error: 0.098), as can be seen on the effect plot in Figure 3.6: the quality is lower when revising human translation as opposed to post-editing human translation, and higher when revising MT as opposed to post-editing MT. There was also a significant effect of text, with text 2 being of better quality (effect size = 0.15, standard error: 0.04, p < 0.01).
FIGURE 3.6 Interaction effect plot for the impact of text origin (HT = human translation, MT = machine translation) and condition (PE = post-editing, REV = revision) on revision quality
FIGURE 3.8 Percentage of errors that were revised (= necessary) and not revised
(= underrevision) by text, text origin (HT = human translation, MT = machine transla-
tion) and condition (PE = post-editing, REV = revision)
Figure 3.8 shows the percentage of errors that were corrected (= necessary interventions) and those that should have been corrected but were not (= underrevision). It is striking that, overall, less than half of the
error items were corrected. More errors were corrected in the MT output, regard-
less of condition. Interestingly, fewer necessary changes were made when the text
origin matched the instructions (post-editing a machine-translated text or revising
a human-translated text) than when they did not (post-editing a human-translated
text or revising a machine-translated text).
If we compare the number of necessary revisions and underrevisions across conditions and text origins for different error categories (Figure 3.9), there seem to be only minor differences between error categories. Errors in adequacy, the most
common error category, were corrected somewhat more often during post-editing
than during revision when the text was a human translation, and somewhat more
often during revision than during post-editing when the text was MT. Lexical
issues in MT were solved more often during revision than during post-editing.
In addition to the error items in the original texts, we also looked at the number of overrevisions (that is, errors introduced by a participant)8 and found that they mostly occurred with MT, regardless of condition: there were five instances of overrevision for text 1 in the revision condition, and nine instances of overrevision in each of text 1 in the post-editing condition, text 2 in the revision condition and text 2 in the post-editing condition. With human translation, there were two instances of overrevision in text 1 in the post-editing condition, and one instance of overrevision in each of text 1 in the revision condition, text 2 in the revision condition and text 2 in the post-editing condition. This indicates that, while more
errors are corrected in the MT output than in the human translations, participants also introduced more errors of their own while editing MT output.
FIGURE 3.9 Number of underrevisions and necessary revisions by error category for each text, text origin (HT = human translation, MT = machine translation) and condition (PE = post-editing, REV = revision)
Intervention optimality
Intervention optimality was best predicted by the model including condition and text origin, with their interaction, and text as predictors. There was no significant effect of either condition or text origin, but a significant interaction effect, with revision being more optimal when the origin of the text was MT (effect size = 0.27, standard error: 0.096, p = 0.01). There was also a significant effect of text, with intervention optimality being higher for text 2 (effect size = 0.15, standard error: 0.04, p < 0.01). The interaction effect can be seen in Figure 3.10.
Sentence-level models
The final models, that is, the models with the best fit according to AIC values, are presented in Table 3.3, with significant effects marked in bold.
Analysis at sentence level generally confirms the text-level findings: the interaction effect between origin and condition on TER is comparable, and the interaction effect between origin and condition on quality is somewhat smaller but still significant. Though the sentence-level model for intervention optimality shows the same trends as those of the text-level model, none of the predictors was found to significantly influence intervention optimality. Here, the random effects at the sentence and participant levels explain more of the variance than the main effects.
FIGURE 3.10 Interaction effect for the impact of text origin (HT = human translation, MT = machine translation) and condition (PE = post-editing, REV = revision) on intervention optimality
5. Discussion and conclusion
Earlier studies found that professional translators made many preferential changes (de Almeida and O'Brien 2010; Robert 2012). Thus the question arises: how can we adapt revision training to reduce the number of preferential changes (Robert et al. 2017b) in order to increase efficiency?
In addition to the edit rate, we looked at revision quality. For human translation,
the quality of post-editing was found to be higher than that of revision, whereas for
MT, the quality of revision was found to be higher than that of post-editing. This
increase in quality was found to be mainly due to the increased number of changes.
For human translation, this is in line with our second hypothesis: when they believe
they are post-editing, people dare to suggest more changes than when they believe
they are revising a text. A possible explanation for MT could be that participants pay
more attention to adequacy issues when they believe they are revising than when they
believe they are post-editing, and since adequacy issues often carry the greatest severity
weight, this has a direct impact on overall quality. A further factor is that we worked with translation agencies that offered post-editing as a service, which means that they presumably have some experience with post-editing. As a post-editor is usually taught
not to introduce too many changes, this could explain the lower editing score and
quality score for MT in the post-editing condition compared to the revision condi-
tion. An additional factor could be the quality of the text. The MT output contained
more problematic errors than the human translation, so that when a participant revised
the MT text thinking it was a human translation, they may have become more critical
because of the higher number of errors. There is some evidence in writing research
suggesting that people become more critical of certain errors if a text contains more
of them (Broekkamp et al. 1996). In general, over half of the errors went uncorrected in both conditions, in line with findings by Robert (2012) and McDonough Dolmaya (2015), although more errors were corrected in MT.
With respect to our third hypothesis, the analysis for intervention optimality
indicated that, while post-editing was more optimal for human translation, this
was not the case for MT, where revision was found to be more optimal. The main
conclusion is that participants performed better when they had the wrong assump-
tion about the provenance of a text (post-editing human translations and revising
MT), which indicates that assumptions about the nature of a text are likely to
influence the optimality of the revision or post-editing process. This raises questions about the differences between revision and post-editing. Is it necessary to maintain the distinction, or would it be better to offer translators texts to review
without giving them any idea about the provenance of the texts? This question is
up for debate, especially given the contradictory evidence about providing prov-
enance information (Guerberof Arenas 2008; Teixeira 2014). As translation itself is
becoming an increasingly integrated process with the advent of interactive, adaptive
systems (where the distinction between translation memory and MT suggestions
is not always clear-cut), the distinction between revision and post-editing might
become less relevant as well.
The main strength of this study is at the same time its main limitation: work-
ing with actual translation agencies greatly increases the ecological validity of our
findings, but the associated costs lead to a relatively small number of observations (four per text for each text origin and each condition). Conducting the same experiment on a larger scale could further improve the predictive power of the models presented. A limitation of using TER as a measure of editing effort is that it measures the difference between a reference and a final text, but it does not reflect actual effort, as keystroke logging can. We did not use keystroke logging because we wanted to ensure the ecological validity, but it is likely that a comparable study employing keystroke logging could generate additional interesting insights.
Acknowledgements
This study is part of the ArisToCAT project (Assessing The Comprehensibility of
Automatic Translations), a four-year research project (2017–2020) funded by the
Research Foundation—Flanders (FWO)—grant number G.0064.17N.
We would like to thank the anonymous reviewers for their suggestions, which
greatly helped improve the quality of this chapter. In particular the suggestions
related to the statistical analyses were helpful. We would further like to thank
our department’s statistician, Koen Plevoets, for helping us determine how to best
implement the reviewers’ suggestions.
Notes
1 Throughout this chapter, ‘editing’ is used to refer to changes, regardless of whether par-
ticipants thought they were revising or post-editing. Whenever ‘post-editing’ is meant,
the term ‘post-editing’ is used in full.
2 www.deepl.com/translator
3 https://translate.google.com/
4 http://users.ugent.be/~jvdaems/TQA_guidelines_2.0.html
5 www.cs.umd.edu/~snover/tercom/
6 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.
perl
7 This calculation is inspired by the F2-measure, which is the harmonic mean of precision
and recall, weighing recall higher than precision. In information retrieval, precision is a
measure of result relevancy, while recall is a measure of how many truly relevant results
are returned.
8 Preferential changes were not counted as errors; only changes introduced by the partici-
pants that caused errors were counted. For example, the word ‘science’ correctly translated
as ‘wetenschap’ (science) was changed into ‘theologie’ (theology), or sentences that con-
tained modality (‘could’, ‘might’) were translated as facts, changing the original meaning.
PART II
Non-professional revision
and post-editing
4
NON-PROFESSIONAL EDITING
IN THE WORKPLACE
Examples from the Canadian context
Matthieu LeBlanc
servants feel the need to edit French translations that they deem—paradoxically—to
be of excellent quality. I will also look at how my observations compare to those
of Dubois, who conducted a larger-scale study of translation practices within the
government of New Brunswick (Dubois 1999). I will conclude by outlining the
reasons why non-professional revision and editing are worthy of investigation.
of the region’s bilingual workforce. In other words, bilingualism had in some ways
become a valued resource that was to be marketed and capitalized on. This brings
us to our study of the New Brunswick public service.
When one examines New Brunswick’s ofcial Language Policy, most recently
updated in 2015, it becomes clear that the provisions with respect to the language
of work are not at all binding. This can be explained by the fact that, under the
Ofcial Languages Act of New Brunswick, “language of work” is not mentioned. The
policy on language of work states:
The fact that this policy merely encourages members of both official languages to work in the language of their choice without actually giving them the right to do so sets it apart from the federal language policy, whereby federal civil servants have the right to work in English or French in designated regions of the country.
As an officially bilingual province, New Brunswick has defined obligations under the Official Languages Act. More specifically, the Language of Service Policy and Guidelines state that:
In order to meet its obligations under the Act, the provincial government must make use of translation and interpretation services. The New Brunswick Translation Bureau assists the government in fulfilling its obligations to provide bilingual services as outlined in the Act. The Bureau provides services such as written translation (from English into French and from French into English), simultaneous interpretation for conferences, seminars and other gatherings, consecutive interpretation for court proceedings and administrative tribunal hearings, and other linguistic services. Services are provided to the various government departments as well as to the Legislative Assembly.
Services provided by the Translation Bureau are governed by a policy (AD-1502
Translation and Interpretation Services). The objective of this policy is “to establish
the principles for a working relationship between Departments and the Transla-
tion Bureau, in order that public expectations and legal requirements regarding the
availability and quality of communication in the official languages are expediently
met”. More precisely, the policy states:
ideologies that underpin language choices? And what role does the sociolin-
guistic setting play?
For the purposes of my study, I chose to focus on the Moncton offices of a large government department. The department was chosen because it was located in a bilingual region of the province and it employed a fairly significant number of civil servants, 240 to be precise. Of these, 63 are Anglophones while 177 are Francophones. The proportion of Francophones (74%) is extremely high, given that they
represent only a third of the Greater Moncton population. However, this can be
easily explained: since many of the positions are designated as “bilingual” (meaning
that they require a knowledge of both English and French), Francophones are more
likely to be hired, given their higher rate of bilingualism. Most positions require
at least a two-year college diploma or a four-year bachelor’s degree. Furthermore,
the work environment requires employees to make use of their language skills on a
daily basis. These are professionals who are called upon to draft a variety of docu-
ments, such as e-mails, letters, minutes, website/social media content and reports.
They meet very frequently, are required to make presentations and must commu-
nicate with civil servants within and outside the department, as well as with the
department’s “clients”1 and the general public. In other words, language use is at
the core of their work.
Semi-structured interviews
For the purpose of my study, I focused on three subsections of the Moncton office, as it was not possible to gain access to all the subsections. The department's work is very sensitive; it has a social mandate, and most of its clientele receive social assistance. Given the confidential nature of the work, I was not granted permission to take part in meetings or training sessions, or to shadow employees at work. I was, however, able to conduct 21 semi-structured interviews, which provided the bulk of the information gathered, and to carry out non-participant observation.
Of the 21 interviewees, 19 were Francophones. These 19 individuals were
interviewed in French, the 2 Anglophones in English. I have translated into Eng-
lish the excerpts from the French interviews cited in this chapter. The interviews
lasted between 40 and 65 minutes, and all participants were required to sign an
informed consent form.
Participants were recruited using various methods. The department’s general
manager was instrumental in helping me identify the subsections of the office
and the employees that would best suit my needs. In a memo sent by the gen-
eral manager to all employees, information was provided on the research and the
researcher, and employees were invited to contact the researcher if they wished to
be interviewed formally or to discuss the topic informally. I was also allowed to
approach employees directly in their offices (face-to-face interactions) and to ask
interviewees to recommend other participants.
Finally, the questions for the semi-structured interviews dealt with several mat-
ters: the linguistic background of the civil servants, their language practices at home,
the government’s language policy and the languages used at work (for drafting,
during meetings, while interacting with co-workers, etc.). Since the interviews
were semi-structured, many other topics were broached, including translation,
minority language rights and language varieties. The topic of translation came up
in almost every interview.
workers, who are called upon to work with a particular “clientele” (internal term
used within the department). The social workers are the ones who are directly
responsible for child welfare, child protection, children and youth under the care
of the department as well as programs for young offenders and youth at risk and for
public housing, to name a few.
translation strategy that produces a target text that conforms to the conven-
tions established in the target language and the spontaneous form of expres-
sion commonly used by native speakers. The concept of idiomatic transla-
tion, which is closely associated with conventions, rules and social context,
takes into account the constraints of the target text and of current usage as
well as the rules and conventions observed by the majority of speakers.
(Delisle et al. 1999: 144)
This is consistent with what Mossop observed for the Canadian Public Service:
[t]he federal government’s approach has been inspired by the role of English-
to-French translation. It has been pointed out (for example in Juhel 1982,
p. 55ff) that English is overwhelmingly the translated language in Canada, and
French the translating language. Translation in Canada is a form of commu-
nication that goes mainly in one direction only. Given that Quebeckers read
so much translation as opposed to original French writing, it is argued that if
the translations are not idiomatic, then the French language will cease to be
an instrument of cultural identity and ultimately political survival.
(Mossop 1990: 347)
The same arguments are made for Francophone minorities outside of Quebec.
Since French is almost exclusively the translating language, translations are expected
to be idiomatic.
In this excerpt, it is clear that no judgement is being made on the translator’s abil-
ity to translate into idiomatic administrative French. The fact that the interviewee
states “They’re always perfect” and the reference to “proper French” are a testament
to this. That being said, the interviewee thinks that parts of the translation are not
suitable for its intended audience and that it must be edited to better fulfil its function.
In another interview, the discussion focused on the use of the French word
abus in French translations. While the interviewee knew that the preferred term
in French for translating “abuse” was not always abus, she questioned the choices
made by translators in this respect. In the case of “sexual abuse”, for example, trans-
lators often use agression sexuelle, violence sexuelle or atteinte sexuelle in the French
translations.
C13: In those cases, cases like “abuse”, you have to be so clear. You want
people to understand. That's the first thing.
M: Of course.
C13: I regularly change it to abus, for example in abus sexuel. My clients
don’t say violence sexuelle à l’égard des enfants [sexual violence against
children]. They don’t understand that.
M: Ok, because you . . .
C13: Well, because abus sexuel is what everybody says, at least in Canada.
Apparently someone somewhere must have decided that it’s not quite
right or that it sounds too close to English, I’m not sure.
M: Ok.
C13: Strangely enough, it’s [abus sexuel] in the dictionary, so I don’t see what
the real problem is. Even the government’s database [TERMIUM] says
it’s ok, so . . .
M: Right, right.
C13: But translators are afraid to use it I guess [laughter].
M: Maybe because they’ve been told not to use it or something?
C13: No doubt. But that’s sort of irrelevant, isn’t it? When you want some-
one to understand the message, I don’t know. This is an important
matter, as you know.
M: Absolutely.
The question in this last excerpt is not to determine whether the use of abus sex-
uel in French is right or wrong. (To be clear, the use of abus and the expression abus
sexuel have been criticized by language specialists, but their use is so widespread
that they are now considered “correct” by several dictionaries and other resources.)
The real question is to know why questions related to language norms—or correct
usage—seem to take precedence over transparency and communication. The use
of “irrelevant” by the interviewee can be interpreted as an indication of frustration.
According to her, the logical thing would be for the translator to choose words or
expressions that speak to the end users of the translation, regardless of their con-
tested status among language experts.
In a third interview, a civil servant reflects on the conditions under which the translations are produced and the sometimes difficult position that translators find themselves in:
M: No?
C7: Well they’re in a difficult position; they’re in their offices 200 kilometres
away. They translate I don’t know how many things a day for all kinds
of departments. They don’t really know what exactly our clients need.
And we don’t really specify our needs, either. We just send things to
translation and hope to get it back before the deadline.
M: So are you saying that translators are sort of in the dark, because of the
way the whole thing is structured?
C7: Yes.
M: So in some ways they’re not close enough to the people working in the
departments, you for example, so they’re not aware of your specific needs?
C7: Yes, that’s the problem. Half the time they probably don’t even know
who they’re translating for, so it’s not their fault.
M: Ok, I see what you mean.
C7: The government is a big bureaucracy. There is not always time to per-
sonalize things, if you know what I mean.
This last comment clearly speaks to the distance that separates the translators—
in fact, all of the translational activity—from the civil servants in the departments
and thus from the end users.
In a previous study carried out in three translation services and agencies in New
Brunswick (see LeBlanc 2013), translators were unanimous in saying that increased
automation and use of computer-assisted translation tools meant that they were now
further removed from their departmental clients (civil servants) and the end users of
the translations. What is more, many translators had reflected on their inability to take into account the specific needs of the end users given the increased productivity
requirements over the previous 15 years and the streamlining of the translation process.
In all three agencies and services, translators had very little contact, or none at all, with
the clients requesting the translation. In that 2012 study, one translator-reviser com-
mented on the fact that her translations were sometimes edited by the client:
M: So you are aware of that? That your client sometimes modifies your
translations?
T21: Yes, of course. I’m not saying that it happens a lot, but in specific cases
it does. I kind of discovered it by accident. Once the client even sent
us the modified text.
M: Very interesting. So how does that make you feel?
T21: You mean that the client edits my translation?
M: Yes.
T21: I’m not sure. I feel ambivalent about that. It’s tricky for us, you know.
M: Ok.
T21: Take for example a text that’s for a specific type of reader, someone
who is for example less educated or someone who doesn’t understand
technical terms or long, complex sentences [laughter].
M: Ok, ok.
T21: Well, the clients [civil servants] are not always in the wrong when they
change some of the words we use or the wording of the text.
M: How so?
T21: Because there is usually no way for us to know who the reader is,
except in really obvious cases, of course. The client, though, knows
this. And if the end goal is to clearly communicate the message, well
then, the client might be doing the right thing by editing our transla-
tions. By making them more, I don’t know, appropriate.
M: For the reader.
T21: Yes. But what’s tricky, as I was saying, is that if I as a professional translator
were to use some of the wording or words used by the civil servant when
he edits my translations, I’d be reprimanded by my reviser [laughter].
M: You would, you would.
T21: That’s the paradox in some ways, isn’t it? There’s very little latitude, at
least on our end [translators and revisers], when it comes to conven-
tions. It’s as if we always have to stick to what is safe, what is not con-
tentious, you know, to what’s prescribed by language authorities and
what not. I can see why we do that, because translators are in some
ways the guardians of French. But when you think of it, we don’t
always remember that our first goal is to communicate. We sometimes
do a disservice to the reader by not adapting our translations.
M: Very interesting, very interesting.
T21: Yes it is.
This is, in essence, a situation wherein the needs of the end users of the transla-
tions are not being fully met and where the civil servants who commissioned the
translations are required to edit some of the wordings. They reported this being the
case with translations destined for a specific audience, namely beneficiaries of social
assistance, for reasons they made very clear. The main concern for social workers is
that the French texts are sometimes too formal or too standard in their wording or
that the terms used are too technical or unknown to the end user.
This raises many questions regarding not only the process of translation per se,
but also the ideological underpinnings of translation into French in the Canadian
context. First, as the New Brunswick Translation Bureau is a centralized entity and
its translators are far removed from the civil servants working in the departments as
well as the end users, there is very little contact between the translators and their cli-
ents. There is also no practical way for these clients to make it known to translators
during the translation process that end users have specific needs in terms of linguistic
adaptation. Finally, the translators have a vested interest in not using terms that are
objectionable or wordings that are considered stylistically unacceptable, even if these
terms and wordings might speak more to the end user. In translation school, in translation services and agencies and for certification exams, translators are judged on their knowledge of standard French and are oftentimes penalized if they deviate
from the norm or established order. As one translator from my 2012 study pointed
out, this would be a “risky endeavour”, a subversive act in some regards.
Civil servants in the departments, however, are not under pressure to “perform”
linguistically, to use the correct, sanctioned terms or wordings, and are thus in
a safer position to adapt the translations to their intended audience without any
consequences. They are the ones “ordering” the translations and are thus free to do
what they want with them. If they modify the translations, then these translations
become the "official" translations. They are also the ones who are the most aware
of the end users’ needs in the cases considered here.
differences between the varieties of French (Acadia, Quebec, France), but more because of the specific needs of the intended audience, most of whom have lower levels of formal education and literacy. Since education levels of end users come
into play, questions of style and register are brought to the fore.
[t]ranslation studies finds itself today at a stage where its traditional focus on
translator and interpreter training and on the advancement of the status of
translators and interpreters as professionals is no longer sufficient to address the
complexity of real-life situations of translating and interpreting. As increasing
numbers of non-professionals translate and interpret in a wider range of con-
texts and in more diversified forms, their work emerges not only as an alterna-
tive to established professional practice, but also as a distinctive phenomenon,
which the discipline has yet to recognize as a noteworthy area of study.
(2012: 149)
The same can be said for “revision” or “editing”. Most of the focus has been on
the work of professional revisers, as is the case with translators and interpreters. But
as my study shows—and as Dubois’s study highlights—the practice of non-profes-
sional editing is most likely common as well, including within public institutions.
It should for this reason be of interest to Translation Studies scholars.
The questions raised in the case study I presented here are numerous and serve
to show that non-professional editing should not be considered as a marginal activ-
ity. Firstly, we have gained a better understanding of what compels civil servants
to edit translations produced by professional translators. This sheds light on the
process of translation itself and shows that, in this specific instance, professional translators are not informed or aware of the specific requirements of the intended audience of the texts they are called upon to translate, to the point that we are left to wonder if the process of translation—from commission through production to delivery—is so decentralized that translators and commissioners no longer communicate directly with one another. Does the fact that translators are more than ever translating parts of texts, sometimes even stand-alone sentences, have an effect
on the end product (LeBlanc 2013)?
[t]ranslation and interpreting studies would do well to learn from the inter-
lingual activities of non-professionals, instead of trying to control these activ-
ities, or focus exclusively on the quality of output and/or perceived loss of
status by translators and interpreters vis-à-vis non-professionals translating
and interpreting. Otherwise, translation scholars will lose valuable opportu-
nities for enhancing their scholarly knowledge, and translators and interpret-
ers will miss valuable opportunities for professional growth.
(Pérez-González and Susam-Saraeva 2012: 158)
The same can be said for the editing of professional translations by non-profession-
als. What can we, as researchers in the field of translation and revision, learn from
Notes
1 The word client is used in this chapter in two ways. First, client is used by civil servants to
refer to the actual individuals they serve, in this case the beneficiaries of social assistance.
Second, translators and the government translation service use the word client to refer
to either the civil servants who commission the translations or the government depart-
ments. To avoid confusion, whenever possible I will use civil servants to refer to those
who commission translation services and end users to refer to the individuals to whom the
translations are destined, that is, the actual readers of the translations.
2 In this context, a “professional reviser” is a trained professional whose primary responsibil-
ity is to revise translations produced by professional translators. Professional revisers are in
most cases experienced translators who, later in their career, are promoted to the rank of
reviser (or sometimes senior translator/reviser).
5
WHEN THE POST-EDITOR IS NOT A TRANSLATOR
Can machine translation be post-edited by academics to prepare their publications in English?
Carla Parra Escartín and Marie-Josée Goulet
Nowadays, any individual with Internet access can avail themselves of one of the
multiple machine-translation (MT) services available online either for free (e.g.
Google Translate, Bing Translator, DeepL, Reverso)1 or for a fee (e.g. Lilt, DeepL
Pro)2. In fact, MT is currently used for personal and professional purposes by a
myriad of users, from professional translators to individuals who resort to it as an
aid in drafting documents in languages that they do not master or as a means of
communication, be it to interact with others or to understand texts written in a
foreign language.
Despite the increasing popularity and quality of MT, it is a well-known fact that MT output still requires post-editing. In this chapter, we focus on a specific type of MT user, namely non-native speakers of English who wish to publish their research in English but draft their research papers in their first language (L1). Given that English is the undisputed lingua franca in academia (Bennett 2013, 2014a, 2015) but that there seems to be a stronger ability to think by writing in the first language (Breuer 2015), we carried out an experiment aimed at determining whether Spanish-speaking physicians would be able to publish their research in English by first drafting a paper in their L1 (Spanish), and then post-editing a machine-translated version of that paper in their L2 (English). We term this process self-post-editing, as the authors themselves are the ones doing the post-editing of their own texts.
The experiment was designed as a follow-up of a similar one that was carried out in an exploratory phase of our project and was reported elsewhere (O'Brien et al. 2018; Goulet et al. 2017). Here, we focus on the different types of edits performed by five Spanish physicians when carrying out self-post-editing tasks, and we compare them with the subsequent edits made by a professional proofreader who was hired to proofread the self-post-edited texts and had no access to the original Spanish documents. The comparison is made in an attempt to determine the types of edits that non-native speakers of English are able to identify and the types they cannot identify.
We first look at the research related to our work that inspired our experiments (section 1), and at the main motivation for the analysis presented in this chapter (section 2). In section 3, we summarise our research methodology and experimental set-up. Section 4 discusses the results of the experiment, while section 5 summarises our work and hints at new avenues of research for future work.
1. Related research
English is undoubtedly the dominant language of scholarly publications (Flowerdew
2001; Graham et al. 2011; Bennett 2013, 2014a, 2014b, 2015; Breuer 2015). Accord-
ing to Bennett (2013, 2014a, 2014b), non-native-English speakers are discriminated
against by editors and referees of international journals. Flowerdew (2001) suggests
that this disadvantage goes against natural justice and is likely to impoverish the
creation of knowledge. For an author, the decision to publish an article in English
as a Foreign Language (EFL) rather than in their L1 might be influenced by various factors, such as the desire to disseminate their results internationally (Burgess et al. 2014; Martin et al. 2014) or to be recognised by their peers (López-Navarro et al. 2015). For example, in a survey of 1,717 Spanish researchers across different fields,
a strong association was found between publication in English and the desire to be
recognised and rewarded (López-Navarro et al. 2015). The researcher’s discipline
would also be a determining factor in this decision (Fernandez Polo and Cal Varela
2009; Burgess et al. 2014). For example, a study conducted in 2009 at the Uni-
versity of Santiago de Compostela showed that researchers from the experimental
and health sciences were more likely to publish in EFL than researchers from the
humanities and social sciences (Fernandez Polo and Cal Varela 2009).
Scientific writing in EFL is challenging. Hanauer and Englander (2011) addressed
this issue by attempting to quantify the “burden” of writing in EFL. They showed
that writing in EFL was associated with greater anxiety and less satisfaction once
the article was completed, compared to writing in L1. Studies of second language
writing among university students are also relevant. For example, Van Waes and
Leijten (2015) found that students were less productive when writing in a second
language (English, German, Spanish or French) compared to their mother tongue
(Dutch). For her part, Breuer (2015) reported that the freewriting technique had a
less exhilarating effect in the second language (English) than in the mother tongue
(German), which could indicate a weaker ability to think when writing in a second
language, compared to writing in the mother tongue.
Non-native speakers who want to publish in English sometimes have to resort to
the services of professional revisers or translators to make their articles suitable for
publication in international journals. In a survey for one medical journal, Benfield and Feak (2006) showed that the acceptance rate was the same for authors who were not native speakers of English, but that many more revisions were required for the articles submitted by these contributors. However, professional revisers or translators may not possess adequate domain knowledge and may have to enter into discussion with the author in order to clarify certain issues (Willey and Tanimoto 2015). Resorting to literacy brokers, then, costs time and money. Considering these previous studies, it is clear that the use of MT could provide some advantages in the context of scientific writing in EFL.
We carried out an exploratory study on this subject, engaging nine participants from different scientific disciplines and with different L1s. The experiment consisted of drafting abstracts both in the L1 (+MT) and directly in English. The initial findings, described in O'Brien et al. (2018), showed that the differences in terms of quality were not significant. Nor were there significant differences in the median times needed for drafting the abstract in L1 (+MT and self-post-editing) vs. in EFL. This initial data was further analysed in Goulet et al. (2017), where we compared the edits implemented by the proofreader under each condition (self-post-editing or EFL writing). Our findings suggested that there were no substantial differences in the number of edits performed (5% and 6% of the total number of words in EFL and MT respectively). This more in-depth analysis allowed us to
hypothesise that MT does not have a negative impact on the quality of the texts.
2. Motivation
The results of our preliminary study encouraged us to do a follow-up study focus-
ing on the self-post-editing process, one specific field, and a single language pair.
This would allow us to carry out a more in-depth analysis of the suitability of
MT as an academic writing aid. Bearing in mind that the quality of MT systems
between Spanish and English is relatively high and that Spanish L1 writers usually
struggle to write in English, we chose to focus on this language pair. The domain
of medicine was chosen due to anecdotal evidence that Spanish physicians seek to
publish in English, which was confirmed by the study carried out by Fernandez
Polo and Cal Varela (2009).
The work reported in this chapter builds on the work done in Parra Escartín et al. (2017), where we report on an initial analysis of the edits performed by the participants and the proofreader, focusing on whether the edits were of an essential or preferential nature, but not entering into further linguistic analysis. Essential edits are those that are required in order to ensure that the sentence (or part of it) is grammatically correct and/or accurate in comparison to the source text (cf. Parra Escartín et al. 2017). Preferential edits are those that are performed even though the unedited machine-translated sentence would still be grammatically correct, intelligible and accurate in relation to the source text (cf. Parra Escartín et al. 2017).
Our aim was simply to determine whether the physician-participants would be
in a position to submit research papers for publication using a general machine-
translation engine followed by post-editing. Our analysis revealed that both the
physicians and the proofreader performed essential as well as preferential edits. The fact
that essential edits were also performed by the proofreader indicates that the physicians
would still benefit from hiring a linguistic broker before submitting their texts for publication. In other words, the level of quality achieved during the post-editing task by the authors alone would not have been sufficient for publishing their papers in English. Similar findings were observed by Temizöz (2016), who compared professional translators to subject-matter experts performing post-editing tasks and found that the professional translators made fewer errors.
We also noticed that the proofreader seemed to have taken the role of an editor,
not only proofreading the text but also making more preferential edits than essen-
tial edits (9.02% vs. 7.75%). Mellinger and Shreve (2016) and Bundgaard (2017a)
observed a similar phenomenon in professional translation workflows, suggesting
that despite the small number of participants in our study, the results seem to be in
line with what happens in the translation industry.
To gain further insights into the type of MT errors the physicians were able to
identify, we decided to carry out a more detailed analysis of all the edits. In doing
so, we also expected to be able to identify the type of edits that an automated
post-processing system should be capable of identifying and correcting. The main
research questions we seek to answer in this chapter are:
1 Without any training in MT or post-editing, what types of errors are non-native speak-
ers capable of identifying in a machine-translated text?
This question seeks to identify the types of linguistic units (e.g. nouns, verbs,
phrases, and so on) and dimensions (e.g. semantics, syntax, style) involved in
the edits made by the physicians.
2 What types of edits are still required after post-editing to yield a grammatically correct
document?
This question seeks to identify the types of linguistic units and dimensions
affected by the edits performed by the professional proofreader we subse-
quently hired.
articles in English, some Spanish physicians do feel the need to rely on additional
supports (other colleagues with a better level of English, native speakers, profes-
sional translators and proofreaders), provided they have access to them and/or the
budget to pay for professional services. The questionnaire also confirmed that, in
some cases, MT is already being used as a writing aid, although its usage is limited
to short passages of text or individual words, rather than full documents.
At the very end of the questionnaire, respondents were invited to participate
in an experiment using MT as a writing aid. Although initially 31 respondents
showed interest in our experiment, only 5 (3 men and 2 women) were available
for the experiment and completed it. Our participants belong to different medical specialties: neurosurgery (1), internal medicine (1), gynaecology (2) and immunology (1). Four of them were between 20 and 30 years old and were engaged in their residencies (that is, they are at an early stage of their careers). The fifth participant
works as a researcher in a university or research centre and is between 30 and 40
years of age. They are all native speakers of Spanish (a requirement to participate),
and besides English speak other languages (Catalan, French, German, Italian and/
or Portuguese). We asked them to report their self-perceived level of English and
subsequently to take an online English test at the Cambridge English website3 and
report back to us the grade obtained. As can be observed in Table 5.1, except for
one of them (P03), the participants’ test result was the same or better than their self-
perception of English level. All of them had at least a B1 level of English, which
means they can produce simple connected text on topics which are familiar or of
personal interest, among other abilities (CEFR 2019).
Not surprisingly, the participant who works in a university or research insti-
tution (P05) had the most experience publishing research papers. In fact, P05
reported publishing 15 papers, 5 of which had been written in English. The other
participants (except P01, who had not published any papers previously) had pub-
lished papers, although not as many as P05. P03 reported publishing a paper in
English. When asked about their strategies for writing texts in English, all of them
reported the same one: they write in EFL and then carry out a self-revision. P02
was the only one who reported having used Google Translate as a support tool, but
just to confrm the translation of individual words or sentences.
TABLE 5.1 Participants' self-perceived English level and English test result

Participant    Self-perceived level    Test result
P01            B2                      C1–C2
P02            B1                      B1
P03            C1                      B2
P04            B2                      B2–C1
P05            B1                      B1–B2
Experimental set-up
Our experiment consisted of three distinct phases: (1) publication drafting in Span-
ish, (2) MT and self-post-editing, and (3) professional proofreading.
After they had completed the pre-task questionnaire and the English placement
test, we asked the five participants to provide us with drafts of a future publication
in Spanish. Where a full text was not possible, we asked them to provide us with a
section of a publication of at least 750 words, preferably the discussion section or
a section similar to it. All participants complied, with some sharing a draft version
of their papers (P01, P03). Our reasoning for choosing the discussion section was
that it is usually more discursive than other sections (Skelton and Edwards 2000),
and we therefore thought it could be one of the most challenging sections to write
for EFL writers, and a good test set for our experiment. The only instruction we
gave the participants was to try to avoid writing sentences longer than 20 words (or
approximately 1.5 lines in Microsoft Word), whenever possible.
Bearing in mind that our participants are not experts in MT, and with the aim
of mimicking a real scenario where they resort to the tools they know, we chose to
use Google Translate.4 We sent the English translations to the physicians and asked
them to post-edit the translations as well as they could using the Track Changes
functionality of Microsoft Word. As the experiment was carried out remotely, the
Track Changes functionality was used as a way of allowing us to study their edits
without requiring them to install any software and figure out how to send us the additional files with the meta information. No training in post-editing was done: our participants were asked to correct the English text they had received without any further intervention on our side. Once they had sent us the post-edited files, we asked them to fill in a post-task questionnaire about their experience.
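As an aside for readers who want to replicate this kind of set-up: a .docx file is a zip archive, and Track Changes records insertions and deletions as w:ins and w:del elements in word/document.xml, so the edits can be extracted programmatically. The sketch below is a minimal illustration under that assumption, not the tooling actually used in the study; the file name in the usage comment is hypothetical.

```python
# Minimal sketch: extract tracked insertions and deletions from a .docx.
# A .docx is a zip archive; Track Changes marks edits in word/document.xml
# with <w:ins> (inserted runs) and <w:del> (deleted runs, whose text sits
# in <w:delText> elements).
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def tracked_changes(path: str) -> tuple[list[str], list[str]]:
    with zipfile.ZipFile(path) as docx:
        root = ET.fromstring(docx.read("word/document.xml"))
    insertions = ["".join(t.text or "" for t in ins.iter(W + "t"))
                  for ins in root.iter(W + "ins")]
    deletions = ["".join(t.text or "" for t in d.iter(W + "delText"))
                 for d in root.iter(W + "del")]
    return insertions, deletions

# ins, dels = tracked_changes("P01_postedited.docx")  # hypothetical file name
# print(len(ins), "insertions,", len(dels), "deletions")
```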
We then hired a professional medical translator as a proofreader, to edit the texts
and correct any remaining mistakes. The proofreader was provided only with the
English texts post-edited by our participants, with all the changes introduced by
them confirmed. We did not explain to her the origin of the texts, and she did not have access to the original Spanish. As our goal was to measure the extent to which physicians with a limited knowledge of English are capable of correcting MT output, we also instructed her to focus on the surface level and avoid
over-editing. These were our instructions:
The texts are written in English and we are looking for a surface revision,
that is, pay attention to grammar, orthography, punctuation, syntax, and
major stylistic problems. We would like the texts to read well enough to be
submitted to a scientific conference, for example. The texts belong to the
medical domain and are all parts of scientific papers written by doctors.
As with our participants, we asked the proofreader to use the Track Changes
functionality in MS Word to allow us to analyse her edits. Table 5.2 illustrates how
the original text written in Spanish underwent modifications at each stage of the
experiment prior to annotation.
TABLE 5.2 Sample sentence written by P05 and all transformations undergone during the experiment

Step 1: Drafting in L1 (Spanish)
El factor subyacente que parecia explicar dicha asociación era el impacto del tabaco como factor independiente tanto en la ulcerogénesis como en el desarrollo de EPOC.

Step 2: Machine Translation (English)
The underlying factor that seemed to explain this association was the impact of smoking as an independent factor in both ulcerogenesis and COPD development.

Step 3: Self-Post-Editing (English), with access to the original source text in Spanish
The underlying factor element/aspect that seemed to explain this association was the impact of smoking as an independent factor in both ulcerogenesis and COPD development.

Step 4: Monolingual proofreading (English)
The underlying element/aspect that seemed to explain this association was the impact of smoking as an independent factor in both ulcerogenesis and the development of COPD development.
Edit annotation
As previously mentioned, we first did a round of annotation focusing only on whether the edits made by the physicians and the proofreader were essential or preferential (Parra Escartín et al. 2017). In order to answer our new research questions (see section 2), we required a more fine-grained taxonomy that would allow us to determine the type of edits performed and the type of linguistic units affected. Before going forward with the annotations, we revisited the edit and error typologies developed for different purposes that might be applicable for our study. Among the ones we considered, there were typologies for the annotation of errors made by second language learners (Breuer 2015; Niño 2008), as well as for the annotation of human- or machine-translation errors (Costa et al. 2015; Yamada 2015; Mitchell et al. 2014; Daems et al. 2013; de Almeida 2013; Bojar 2011; Temnikova 2010; Mossop 2007b; Vilar et al. 2006; Llitjós et al. 2005). We finally decided to use Laflamme (2009), as we had done in our preliminary study involving participants with different mother tongues and from different disciplines (O'Brien et al. 2018; Goulet et al. 2017), because it was the only typology that would allow us to annotate the types of edits made in both contexts: the text post-edited by the physicians and the text proofread by the proofreader.
The typology proposed by Laflamme (2009) had proven to be useful in a similar experiment, and it would also allow comparisons across experiments. To ensure annotation quality, we created a decision tree (see the Appendix at the end of this chapter). Forty edits were annotated by the two authors of this chapter to verify the annotation quality. We observed that the main sources of discrepancies came from the linguistic dimension, namely in the different "style" categories and in cases where it was not clear if the edit was of a more semantic nature. We flagged most of the cases of disagreement as "to be discussed" in a post-annotation meeting to reach an agreement, or else we kept the original annotation by the author who subsequently annotated the rest of the data. The verification proved that the decision tree was effective, and thus that a single annotation with discussion of potential doubts was possible. One of the authors then annotated all 1126 edits (307 made by the participants and 819 made by the proofreader) and consulted the other author for 31 difficult cases (2.7% of all annotated edits).5
Laflamme's typology (Laflamme 2009) allowed us to annotate the modifications according to three aspects: the type of operation, the type of unit and the linguistic dimension (see Figure 5.1). However, we had to make some minor changes to this typology. First, since it focuses only on lexical changes, that is, those that affect words, we added the "punctuation" dimension, which includes punctuation marks, and also an "unknown dimension" category to account for all those cases where the linguistic dimension is not clear. We also decided to distinguish between stylistic changes due to a preferential choice and those required to elevate the register of a text. In addition, when an edit affected more than one word, we considered the sequence of words as a unit, rather than annotating each word. We used the category "phrase" for these word sequences and additionally included clauses and full sentences as an option. The final typology is presented in Figure 5.1, and some examples of annotated phrases follow.

FIGURE 5.1 Edit dimensions distinguished by Laflamme (2009) and possible annotations in our data
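To make the adapted typology concrete, each annotated edit can be thought of as a triple of operation, unit and dimension. The sketch below is a reconstruction from the chapter's discussion; the value inventories are assumptions, not the authors' actual annotation schema.

```python
# Hypothetical record structure for one annotated edit under the adapted
# Laflamme (2009) typology: operation x linguistic unit x linguistic dimension.
from dataclasses import dataclass

OPERATIONS = {"add", "delete", "move", "replace"}
UNITS = {"noun", "verb", "determiner", "adjective", "adverb", "pronoun",
         "preposition", "conjunction", "abbreviation", "comma", "space",
         "phrase", "clause", "sentence"}   # a multi-word span counts once
DIMENSIONS = {"morphology", "orthography", "punctuation", "semantics",
              "style-preference", "style-register", "syntax", "terminology",
              "typography", "unknown"}

@dataclass
class Edit:
    editor: str      # "participant" or "proofreader"
    operation: str   # one of OPERATIONS
    unit: str        # one of UNITS
    dimension: str   # one of DIMENSIONS

# e.g. Edit("participant", "replace", "noun", "terminology")
```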
4. Data analysis
Once all data had been annotated as explained in Section 3, we compared the results across participants. Although we asked our participants to send us a text of at least 750 words, the overall word count per participant varied, ranging from the 686 words provided by P05 to the 1413 words provided by P01. Additionally, as shown in Table 5.3, the number of words differed after each phase of our experiment.
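The edit rates discussed next are, in essence, the number of edits relative to the length of the text. A toy illustration, assuming rate = edits / words (the word counts for P01 and P05 come from the text above; the edit counts are invented placeholders, not the study's data):

```python
# Toy edit-rate computation: annotated edits divided by word count,
# expressed as a percentage. Edit counts here are hypothetical.
word_counts = {"P01": 1413, "P05": 686}   # from the chapter
self_pe_edits = {"P01": 130, "P05": 50}   # invented for illustration

for p, words in word_counts.items():
    rate = self_pe_edits[p] / words
    print(f"{p}: {rate:.2%} of words edited during self-post-editing")
```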
[Figure: edit rates (%) per participant (P01–P05), self-post-editing vs. proofreading]
P01 and P03 were the participants with the highest edit rates, followed by P04. Participants P02 and P05, for their part, performed very few edits in the self-post-editing stage.6 A potential explanation is that the English level of P02 and P05 was lower than that of P01, P03 and P04. However, given the small number of participants, we can only hypothesise that the English level has a certain impact on their ability to post-edit a machine-translated text. This hypothesis is partially confirmed by P02's comment on the post-task questionnaire that the MT output was better than her own level of English.
If we now compare the participants’ edit rates with the proofreader’s, the
edit rate increases for all participants: the proofreader’s interventions range from
15.20% (P03 and P05) to 22.12% (P04). It also seems that a higher English level
does not necessarily imply a better ability to post-edit, as the two participants
with the highest level of English (P01 and P04) are also the ones with the high-
est edit rate by the proofreader, followed closely by P02, who was one of the
participants with the lowest level of English. As mentioned, however, a previous
analysis of the edits revealed that the proofreader had also introduced a significant number of stylistic edits, despite having been instructed not to: between 6.26% and 11.09% of the edits performed by the proofreader were of a stylistic nature (Parra Escartín et al. 2017). This will be further discussed when we present the
detailed analysis of the edits.
Let us recall that the overall objective of our annotations was to answer our two
research questions: (1) Without any training in MT or post-editing, what types of errors
are non-native speakers capable of identifying in a machine-translated text? and (2) What
types of edits are still required after post-editing to yield a grammatically correct document?
To this end, we will now describe the types of edits made by all participants and
compare them to the edits made by the proofreader. Where necessary, we will dif-
ferentiate between participants.
During the annotation process, we discovered that some edits were not of a
linguistic nature. The participants tended to go beyond the post-editing task and
add additional content directly in English, content that had not been included in
the Spanish text (16% of the edits). In fact, P01, the participant who added the
most content, inserted a completely new paragraph. The proofreader also added
content occasionally (3% of the edits), although in her case it was usually to explain
a technique or procedure better because it was not clear enough from the English
text she had been given. The benefits of engaging a proofreader specialised in the medical field were also apparent, as she added justifying comments in this regard
when she felt that new content had to be added.
One of the challenges of annotating the edits was that some edits were obviously
linked to others, but it was not possible to determine which edit had provoked the
other. In those cases, we annotated only one edit, as we had done on previous occa-
sions. To account for these cases, we included a specific question in our decision tree
(see the Appendix). The following example from one of the proofread texts illustrates
this. As shown in bold, a number of elements in the sentence were replaced by others
and strike-through is used to indicate that the words were deleted.
Edit distribution
As explained earlier, the edit typology proposed by Laflamme (2009) distinguishes three main aspects: (1) the type of operation, (2) the type of linguistic unit affected and (3) the linguistic dimension affected by the edit. The types of operation carried
[Figure: share of edits adding new information, edits provoked by or linked to another edit, edits related to one element only, edits related to a group of elements, and edits provoked for an unknown non-linguistic reason — participants vs. proofreader]
[FIGURE 5.4 Distribution of edits by type of operation (add, delete, move, replace), participants vs. proofreader]
out by our participants and by the proofreader were very similar. As shown in
Figure 5.4, replacements account in both cases for approximately 64% of the edits.
Next come addition (24% for the participants and 20% for the proofreader) and
deletion (10% and 12% respectively), while movement is by far the least used oper-
ation (2% and 5% respectively).
These results are in line with those obtained in another study with different
participants, although in that case we analysed only the edits made by the proof-
reader. In Goulet et al. (2017), the distribution was 56% for replacements, 23% for
additions, 13% for deletions and 7% for movements. Taking into account that the
previous study involved nine participants from diferent disciplines and language
pairs, we can conclude that there seems to be a tendency to have twice as many
replacements as additions.
If we now take a closer look at the individual results, the relative frequency
of additions and deletions was the same in all cases except the proofread text of
P03, where there were more deletions (25%) than additions (18%) (see Figure
5.6). This could indicate that, although there seems to be a general trend, differences may be observed depending on the text being proofread. It is not possible to determine at this stage whether this difference observed for P03 in the second
most frequent type of operation is correlated with the participant’s level of Eng-
lish. P03 had a B2 level of English, which could be considered similar to the level
of P04 (B2–C1) and P05 (B1–B2), who both followed the general tendency of
the group. Furthermore, the distribution varies slightly across participants (see
Figure 5.5), while it seems to be more consistent in the case of the proofreader
(see Figure 5.6).
Figure 5.7 shows the distribution of the overall number of edits by linguistic unit.
Unsurprisingly, the largest number of edits involves a phrase (37% in the case of the
participant edits, and 28% in the case of the proofreader edits). Next by order of
frequency are nouns (10% and 19% respectively), verbs (9.4% and 8.9%), determin-
ers (11% and 7%), adjectives (5.5% and 4.5%) and prepositions (4% and 6%). These
results differ from the ones in our first study, where these same linguistic units were the ones most often affected, although their relative frequencies differed. In Goulet
[FIGURE 5.5 Distribution of operation types per participant (P01–P05)]
[FIGURE 5.6 Distribution of operation types in the proofread texts, per participant (P01–P05)]
FIGURE 5.7 Edit distribution by type of linguistic unit (participants vs. proofreader)
[FIGURE 5.8 Edit distribution by type of linguistic unit, per participant (P01–P05)]
et al. (2017), nouns were the largest group undergoing edits (27%), followed by
determiners (16%), phrases (13%), prepositions (13%) and verbs (11%).
If we look more closely at the individual participants (see Figure 5.8), it is sur-
prising to see that P05 had an equal edit rate (17.6%) for conjunctions, nouns and
verbs, followed by an equal edit rate (5.9%) for commas, prepositions and spaces.
P02 focused on adjectives, determiners and nouns (12.5% of the edits in each case),
followed by commas, conjunctions and prepositions (6.3%). The remainder of the
participants had a more balanced rate of edits across linguistic units. P01 focused
primarily on nouns (12.8%), followed by verbs (10.4%), determiners (8%), adverbs
(6.4%) and abbreviations (5.6%). P03, on the other hand, focused on determiners
(18.2%), nouns (8%), verbs (8%), commas (5.7%) and pronouns (4.5%). Finally,
P04 focused on adjectives (11.5%), determiners (9.8%), verbs (9.8%), nouns (6.6%)
and pronouns (4.9%). While the largest number of edits were performed on nouns,
followed by determiners, verbs, adjectives and prepositions, the only frequently
edited unit across all participants was nouns. Four of the five participants also had
high numbers of edits for determiners (P01, P02, P03 and P04) and verbs (P01,
P03, P04 and P05), while adjectives were ranked high only for P02 and P04.
Prepositions were also highly ranked for only two participants (P02 and P05),
who, interestingly, were the two with the lowest English proficiency. This seems to indicate that individual participants focus on different language units, with an uneven distribution. In order to be able to confirm whether a correlation exists between the linguistic units affected and the English proficiency level, further data
would need to be gathered.
Let us now look at the linguistic dimensions affected by the annotated edits. As seen in Figure 5.9, the linguistic dimension most affected by the participants' edits is clearly style, although the edit rate for stylistic edits introduced by the proofreader is significantly higher (34% as opposed to the 21% for the participants).
FIGURE 5.9 Edit distribution by affected linguistic dimension (participants vs. proofreader)
In second place comes the edit rate for the syntactic dimension: 23.5% for the participants and 22.6% for the proofreader. The third largest group varies. In the case of the physicians, 16.6% of edits are related to terminology, while only 7.1% of the proofreader edits belong to this category. We hypothesise that this could be due to the fact that the physicians, being as they are specialists in their field, know the terminology of their field in English and hence were able to correct the MT output in that regard, while failing to correct other MT issues that were more related to their overall English competence (e.g. fluency issues). In the case of the proofreader, the third most common type of edit was related to typographic and ortho-typographic errors, which seems to indicate that a proper use of tools such as the grammar and spellchecker available in Microsoft Word could have reduced the need for those edits. Finally, 10.4% of the edits performed by the physicians were semantic edits, whereas only 3.5% of the proofreader edits were. This could again be due to the fact that the physicians are specialists in their fields and decided, upon reading the English machine-translated text, that a semantic change was required to properly express what they wanted to say. Temizöz (2016) reports similar results, although her dataset was also rather small (a 482-word document machine-translated from English into Turkish and post-edited by 10 subject-matter experts and 10 professional translators). Her subject-matter experts performed best when correcting terminology and performed like professional translators when fixing mistranslations and accuracy and consistency issues. In her study, professional translators performed better than subject-matter experts only in the language fluency category.
Acknowledgements
This research was carried out while Carla Parra Escartín was working at the
ADAPT Centre in Dublin City University. Her contribution was supported by the
European Union’s Horizon 2020 research and innovation programme under Marie
Skłodowska-Curie grant agreement No 713567, and by Science Foundation Ire-
land at the ADAPT Centre (Grant 13/RC/2106) (www.adaptcentre.ie).
APPENDIX
The annotation decision tree
[FIGURE 5.10 The annotation decision tree: the first branch asks whether the edit is about content (yes) or at the linguistic level (no); subsequent branches ask what kind of language unit or what kind of group is undergoing the edit.]
Notes
1 https://translate.google.com; www.bing.com/translator; www.deepl.com/translator;
www.reverso.net/text_translation.aspx?lang=EN.
2 https://lilt.com; www.deepl.com/pro.html.
3 In order to cross-check their self-assessment with their actual English level, participants
were asked to complete an English-level test of 25 questions and to let us know their
final results. The test can be found here: www.cambridgeenglish.org/test-your-english/
general-english/
4 The experiments were run in the first half of 2017, when the shift to neural MT had
already occurred.
5 In this chapter we focus on the analysis of the edits performed. In our previous work,
we also referred to essential edits not implemented. On average, each participant missed
between 0.6% and 1.32% of those necessary edits (Parra Escartín et al. 2017).
6 The percentages reported here, although similar, differ slightly from the ones reported in
Parra Escartín et al. (2017). This is due to the new annotation process we used, where
some edits previously not accounted for have been taken into account to allow a more
detailed analysis.
PART III
Professional revision in
various contexts
6
REVISION AND QUALITY STANDARDS
Do translation service providers follow
recommendations in practice?
Madeleine Schnierer
on translation projects. Robert (2012) and Ipsen and Dam (2016) looked at transla-
tion revision and the correlation between revision procedure and error detection.
This chapter seeks to address this area of research by, firstly, describing the awareness of quality among the TSPs who participated in a survey and, secondly, investigating whether TSPs follow the recommendations of EN 15038/ISO 17100 in practice. According to EN 15038/ISO 17100, one central quality assurance measure is the obligatory step of translation revision, which is also supposed to lead to a more objective evaluation of the translation due to the fact that this step is carried out by someone other than the translator (cf. ISO 2015a: 10–11). If one wishes to work in line with the standards, this step must be carried out for every translation project. Given that EN 15038 was the first standard to define translation revision as an obligatory step in the translation process, it is quoted several times in this chapter. The empirical investigations supporting the present chapter were also largely based on questions related to EN 15038 because it was still in force when the survey was carried out. ISO 17100, the subsequent standard, primarily differs
from its predecessor in the following ways, as set out in May 2016 in the foreword
to the German edition of DIN EN ISO 17100 (my translation):
a The structure of ISO 17100 differs from that of EN 15038; this new structure
reflects the general workflow of a translation project in that it presents the indi-
vidual steps of the process in a chronological sequence: preparatory processes
and activities, production processes and follow-up processes.
b Special attention has been paid to the expansion of the defined terminology so
that the areas of services, technology, language and content, involved parties
and processes are presented in a thematically separated way.
c The skills required by a translator now also include their specialist expertise.
d A translator can now also prove his or her translation qualification in the form
of an official certificate.
f The skills required by project leaders (e.g. project managers) are defined.
g The use of translation technology and translation tools is explicitly discussed.
h The issue of "requirements for the project management of translation services"
receives a stronger focus.
i The requirement for the targeted processing of client feedback has been
included in the standard.
(ISO 2015b: 2)
As one can see, ISO 17100 may well have been restructured and, in certain
areas, expanded, but the key requirements have remained the same. These include
the requirement for translation revision, and therefore the survey underlying this
chapter and its results can be considered as valid for ISO 17100 as well.
In addition, ISO 17100 includes detailed lists of those linguistic and content-
related aspects which should be considered during the translation and revision
process, and which can therefore be seen as revision parameters. The extent to
which these aspects are actually used in practice will be one of the main subjects
of this chapter.
The chapter will begin by listing the standards relevant to TSPs and describing
certification processes (sections 1 and 2) and the possibility of registration (section
3). Section 4 will deal with translation revision as it is understood by EN 15038
and ISO 17100. Finally, the results of our investigation of TSPs in Austria will be
presented in section 5.
Lists of standards with varying degrees of detail can be found in publications such as
Arevalillo (2005), Budin (2007), Thelen (2013), Ottmann (2017), Schmitz (2017)
and ISO (2020). Table 6.1 contains standards that could be relevant to TSPs, and
Table 6.2 lists standards for quality management systems, which TSPs are also using
with increasing frequency to certify their services. Where necessary, the subtitles
have been translated from German into English.
TABLE 6.2 Standards for quality management systems
1 ISO 9000 Definition of the principles and terms for quality management
systems. An explanation of quality management systems and
the terms used in the series of standards ISO 9000.
2 ISO 9001 Model description of the entire quality management system.
Basis for a comprehensive quality management system.
3 ISO 9004 Guidelines regarding both the effectiveness and efficiency
of the quality management system. Contains instructions
for organising a company along the lines of Total Quality
Management (TQM).
2. Certification
Standards are not laws but rather non-binding recommendations. According to
Austria's standardisation body ASI (Austrian Standards International), a standard is
a qualified recommendation that, although publicly accessible, is not free of charge
and is drawn up by consensus according to an internationally recognised process. It
should offer maximum benefits to everyone and is accepted for general and recur-
ring use by a recognised standardisation organisation (Austrian Standards 2014). An
example of a more detailed description of the design and scope of application of
standards can be found in Schnierer (2019).
By undergoing certification, one is also making a commitment to meet the
requirements of a standard. According to Austrian Standards (2019), the potential
advantages of certification (in terms of increased competitiveness) can be summed
up in three ways:
• Certification creates trust and confirms the quality of products and services.
An independent body confirms that a product, service, management system
or set of employees meets the requirements of the relevant standard and that
compliance is continuously reviewed. This distinguishes the company from its
competitors.
• Certification can also bring a competitive advantage, in particular because it is
often a requirement in tender processes.
• Certification opens up new markets. Certification can help in winning new
clients and creating alternative sales and marketing channels. In an age of glo-
balisation, the introduction of standards and certification under them are cre-
ating global reference values.
Since 2015, freelance translators and translation companies have been able
to obtain ISO 17100 certification. Response from freelancers has been lim-
ited. This is partly due to the fact that many colleagues remain unaware of
the possibility of getting certified, are discouraged by the associated costs or
believe that certification fails to bring measurable benefits. . . . To date, cli-
ents have rarely required ISO 17100 certification.
(Maier and Schwagereit 2016)
• Application
The application is accompanied by documentation from the applicant that
includes general information, descriptions of the company profile, specialisa-
tions and processes (related to ISO 17100) and the average number of employ-
ees working in the relevant locations (over the past 12 months) and of transla-
tors (employed and freelance, over the past 12 months).
• Initial and recertification audit
The initial and recertification audit looks at all requirements specified in ISO
17100. The audit comprises interviews with the CEO/head of the service
provider, translators/freelancers/vendors and project managers, and other
staff responsible for tasks relevant to the certification criteria (e.g. account-
ing/invoicing, human resources, quality management). The conclusions of the
audit are presented and further steps defined.
• Audit findings
In case of deviations from the requirements of the standard, appropriate cor-
rective actions are specified by the lead auditor. The auditor may also make
recommendations regarding issues of quality and opportunities for improve-
ment related to the operations of the service provider. Such recommenda-
tions are documented in the audit report without affecting the issuing of the
certificate.
• Audit report for certification
The information provided by the lead auditor to the body responsible for the
certification decision includes the audit report, comments on deviations and
the corrective actions taken by the client and a recommendation of whether
or not to grant certification.
• Issuing the certificate
The certification body decides whether to issue the certificate, which is valid
for six years.
• Surveillance activities
Surveillance audits are carried out on a two-year cycle to ensure that the
certified service provider continues to meet the requirements of ISO 17100
between recertification audits.
• Recertification
A recertification audit is carried out in order to extend the validity of the
certificate.
A striking feature of the LRQA process is the fact that most contacts between
auditors and the company take place at the management level. The LICS process,
on the other hand, gives employees at every level the opportunity to speak during
the audit phase. A further difference between the two processes relates to the period
of validity of the two certificates. An LRQA certificate is valid for three years, with
surveillance audits beginning after a maximum of 12 months. In the case of the
LICS, the certificate is valid for six years, and surveillance audits are carried out
every two years. The certification processes of both institutions are clearly structured
but appear very complex to applicants. One should also consider the costs of these
processes, about which neither institution provides any information on its website.
3. Registration
Registration is not an entirely uncontroversial means of circumventing certifica-
tion. It is simply an independent declaration in the form of a fee-based registration
that is made public and can be used for marketing purposes. Such registration is
offered by DIN CERTCO GmbH, which belongs to TÜV Rheinland, although
this company has not only been warned about its practice of making statements
that could lead to confusion between the terms registration and certification but also
obliged, in the form of a final judgement, to refrain from doing so (Schneider
2012).
At the time of this research (January 2019), the situation appeared
unchanged because the section Certificates and Registrations on the DIN CERTCO
website continued to list 280 companies that have registration numbers and are
described as certificate holders according to ISO 17100. At first glance no clear dis-
tinction is made between registration and certification, although the list of companies
consists exclusively of registered companies that claim to comply with ISO 17100
without confirmation by an independent or recognised body. This is made clear by
the fact that, in the valid until column, the expiry date of these so-called certificates
is given as unlimited, which would never occur in the case of a proper certification.
At least each entry ends with the following Note (DIN CERTCO 2019):
Of the companies registered in this way, 219 are based in Germany, 54 are active
in other European countries, and 7 are outside Europe, as shown by the list on the
corporate website of TÜV Rheinland in January 2019.
The advantages of registration are obvious: it is a simpler and cheaper process
that involves neither an audit nor ongoing control. Once registration has been
obtained, it has no time limit and generates no further costs.
Throughout this process, the translator shall pay attention to the following:
If the restriction mentioned next had not been made, one could have seen these
quality criteria as parameters for revision.
Regarding revision methods, EN 15038 (CEN 2006a: 11) made the following
recommendation:
The TSP shall ensure that the translation is revised. The reviser . . . shall be
a person other than the translator and have the appropriate competence in
the source and target languages. The reviser shall examine the translation for
its suitability for purpose. This shall include, as required by the project, com-
parison of the source and target texts for terminology consistency.
The restriction “as required by the project” led to confusion—and was much
criticised by such writers as Mossop (2007a), Robert (2008, 2012) and Parra-
Galiano (2016)—because it apparently offered the option of not revising every
translation or at least avoiding the systematic comparison of the source and target
texts.
The recommendations of ISO 17100 are no longer qualified in this way (ISO
2015a: 10):
The TSP shall ensure that the target language content is revised.
The reviser, who shall be a person other than the translator, shall have
the competences mentioned in 3.1.5 in the source and target languages. The
reviser shall examine the target language content against the source language
content for any errors and other issues, and its suitability for purpose.
In the section entitled ‘Revision’ (ISO 2015a: 10), the international standard
details several key aspects of the revision process:
This shall include comparison of the source and target language content for
the aspects listed in 5.3.1. As agreed upon with the project manager, the
reviser shall either correct any errors found in the target language content or
recommend the corrections to be implemented by the translator.
NOTE Corrections can include retranslation.
Any errors or other issues affecting target language content quality should
be corrected and the process repeated until the reviser and TSP are satisfied.
The reviser shall also inform the TSP of any corrective action he/she has
taken.
The translator shall raise any uncertainty as a query with the project manager.
the SPSS software package and then fed into Excel because it has better presen-
tational options.
The questionnaire and interview setting followed the recommendations of
Giroux and Tremblay (2002, 2009) concerning their structure: The online ques-
tionnaire consisted of 10 subject areas with a total of 48 open and closed questions.
The follow-up interviews were structured in line with the 10 subject areas of
the online questionnaire, and almost every area included more detailed, follow-up
questions. The interviews were recorded and subsequently transcribed. The data
analysis was carried out in line with the recommendations of Dörnyei (2007) con-
cerning the method of labelling the interview answers, and the recommendations
of Field (2013) in that we used simple statistical tools and rejected the application
of inferential statistics since we had only 31 participants.
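By way of illustration only: the "simple statistical tools" described above amount to descriptive frequency counting, which the following hypothetical sketch makes concrete. The answer scale mirrors the always/mostly/rarely/never options used elsewhere in the survey, but the counts and variable names are invented, not taken from the study.

from collections import Counter

# Hypothetical answers of 31 respondents to one closed question
# (the scale mirrors the survey's always/mostly/rarely/never options;
# the counts themselves are invented for illustration).
answers = ["always"] * 14 + ["mostly"] * 9 + ["rarely"] * 5 + ["never"] * 3

counts = Counter(answers)
n = len(answers)  # 31 respondents

# Descriptive statistics only: absolute and relative frequencies.
# With n = 31, inferential tests are deliberately avoided, as in the study.
for category in ["always", "mostly", "rarely", "never"]:
    print(f"{category:>6}: {counts[category]:2d} ({counts[category] / n:.1%})")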
Firstly, some overall results—the answers from all 31 participants—will be pre-
sented (Figures 6.1 to 6.3). Then comparisons will be drawn between the answers
from those 19 participants who are the focus of this study because they are either
certified or, according to their own information, are not certified but work in line
with the standards (Figures 6.4 to 6.8 and Tables 6.4 and 6.5).
Company profile
The profile of each company was established from answers to questions about its
annual project volume and the number of translators with whom it works (see Figure 6.1).
The survey participants operate with networks of translators (working directly
or as subcontractors) that range in size from 2 to 500 people. More than half work
with small networks of up to 10 translators and handle a total of 2950 projects per
year. Moving clockwise on the pie chart, seven of the companies have a medium-
sized network of 20–30 translators and carry out a total of 2750 projects, three have
large networks of 40–90 translators and deal with 5400 projects per year, while five
companies with very large networks of 150–500 translators handle an annual total of
12,000 projects. The sample is thus broad—from tiny to huge companies.
No. Translators Projects per year
1 10 200
2 150 2000
3 200 4000
4 300 1500
5 350 1500
6 500 3000
TABLE 6.4 Profiles of certified companies and uncertified companies declaring that they
work in line with EN 15038
1 2 500 5 1
2 2 200 10 10
3 3 400 0 0
4 3 170 4 5
5 4 150 4 3
6 5 50 10 5
7 5 500 1 3
8 10 50 20 0
9 15 500 25 25
10 15 800 25 5
11 20 150 20 20
12 30 4000 200 200
13 30 3000 50 6
14 30 2000 150 10
15 36 1400 90 2
16 40 1000 40 5
17 40 1500 350 70
18 50 1500 300 15
19 50 3000 500 500
Two certified companies and one uncertified company that works in line with
EN 15038 report changes in their workflow. These changes are described as fol-
lows: "Unlike earlier in the company's history, revision has become a standard";
"we work in a more structured way", and "using the standard as a basis has
enabled us to meet demands for transparency". As could be expected, workflows
are affected in only a few of the certified companies and uncertified companies
that work in line with the standard, according to respondents. This is unsurpris-
ing because the workflows recommended in the standards come from practice
or, put another way, the standards recommend the best practices, which these
companies are likely to apply already.
this question again against the background of ISO 17100, where there is no doubt
that every translation has to be revised. If translation agencies do not always carry
out revisions and therefore cannot meet the requirements of ISO 17100 (and, for-
merly, EN 15038), the translation profession will not gain credibility. However, the
small size of our sample means that such generalisations cannot be made.
Revision methods
Figures 6.6 and 6.7 show how revision is carried out by the 19 questioned companies
that responded to the non-compulsory question about methods. They were asked to
choose among the revision methods considered most common by Robert (2008,
2012) as well as Robert and Van Waes (2014): (1) monolingual revision of the final
translation, (2) bilingual revision (comparison between translation and source text),
(3) monolingual revision followed by bilingual revision and (4) bilingual revision fol-
lowed by monolingual revision. The question was asked in such a way that, for each
revision method, participants could choose between always, mostly, rarely and never.
Figure 6.6 shows the choices of certified companies. Most of them (five out of six)
practice the simple bilingual revision method. The answers show that, without excep-
tion, the certified companies always carry out at least a bilingual revision. One even
always carries out the double method (bilingual followed by monolingual revision).
On the other hand, the answers show that certified companies rarely (two out of six)
or never (four out of six) use the monolingual revision method. It is thus reasonable to
assert that certified companies uphold the core requirement of EN 15038/ISO 17100.
As Figure 6.7 shows, the situation among uncertified companies that work in
line with EN 15038 is somewhat less clear. More than two-thirds of them (9 out
of 13) claim that they either always carry out at least a simple bilingual revision (7
out of 13) or—in the case of two companies—the double method of bilingual
followed by monolingual revision. Only one company states that it mostly uses the
monolingual method, which does not meet the requirements of the standards.
FIGURE 6.7 Choice of revision method—uncertified companies that work in line with
EN 15038
It can thus generally be asserted that all the certified companies uphold the
requirement of "comparison of the source and target texts" (CEN 2006a: 11) and
that most of the uncertified companies working in line with EN 15038 meet the
requirement as well. This assertion also applies to ISO 17100, given that it has the
same requirement: "examine the target language content against the source lan-
guage content" (ISO 2015a: 10–11) and that certifications carried out in line with
EN 15038 were carried over to ISO 17100.
TABLE 6.5 Quality criteria named by the respondents (number of mentions)
1 linguistic correctness 13
2 correct reproduction of the content 8
3 meeting client needs 7
4 completeness 5
4 good style 5
5 use of the correct specialist terms 3
6 no additions 1
6 acceptable linguistic register 1
6 correct formulation 1
and thus have to adhere to the quality criteria listed in 5.4.1 of EN 15038 and 5.3.1
of ISO 17100, it is clear that they also adhere to these quality criteria during the
revision process, using them as revision parameters. As mentioned in ISO 17100,
these criteria are also concretely recommended for revision. Furthermore, revi-
sion parameters are an excellent tool for the objective treatment of mistakes (they
receive a label as a result of which they can be classified). Also, when used regularly,
they can be seen to be very time-efficient—it is no longer necessary to individually
comment on each intervention. Table 6.5 lists the quality criteria that were named
by the respondents in answer to an open question.
If one compares these criteria with those set out in EN 15038 and ISO 17100,
one can see that the criteria ‘completeness’ and ‘correct reproduction of the
content’ do not appear in the standard but were mentioned by a considerable num-
ber of those questioned (13 out of 31). Special attention should also be paid to the
criterion ‘meeting client needs’. This criterion is not addressed in the standards
but should not be neglected by TSPs (and it was, indeed, mentioned by almost a
quarter of those questioned).
7
FROM LANGUAGE CHECK TO CREATIVE EDITING
Annamari Korhonen
According to Arnt Lykke Jakobsen (2019: 64), “translation and revision are more in
transition than ever before”. Jakobsen is referring to the transition that is brought
about by new technologies, above all machine translation. This chapter, how-
ever, discusses another way in which revision could take on a bigger role in the
translation industry's workflows: different kinds of revision tasks could be used in
the design of new services when language service providers (LSPs) expand from
translation into a wider selection of multilingual communication services. In such
production environments, revision takes on a purpose beyond translation quality
assurance.
Jakobsen (2019: 69), like many others, groups translators together with writers.
Dam-Jensen and Heine (2013: 90–1; see also Risku et al. 2016) discuss writing,
translation and adaptation as three types of text production and consider similari-
ties and differences between these three tasks. This chapter builds on that line of
thought, seeing translation first and foremost as text production, as creating com-
munications for many different purposes, and it looks at the potential of revision
not only in correcting translators' errors, but also in editing texts further. To help
understand the flexibility, complexity and vast potential of revision, the concept of
a revision continuum is introduced.
The ideas presented here are based on the different ways in which LSPs that
operate in Finland, and that mainly serve corporate and public sector clients, use
translation revision in real-life business contexts. These different ways have been
investigated by means of an online survey of LSP representatives. Specific focus is
firstly on revision task specifications in terms of revision parameters (see Mossop
2014a) and the allowed degree of creativity, secondly on various circumstances
that may require revision to be carried out in a specific manner and thirdly
on who decides the scope of revision. The role of revision in the production
LSPs usually follow a more or less standardised production workflow that con-
sists of various tasks from planning and file preparation to translating, revising and
generating target files (see, for example, Drugan 2013: 105–6; Dunne 2011: 169–
70; Gouadec 2007). Although descriptions of workflow differ in some specifics,
they generally agree that revision is a well-established and necessary part of the
workflow. The translation industry standards EN 15038 (European Committee for
Standardization 2006a) and ISO 17100:2015 also require revision of the target texts
as part of the translation workflow.
Bisiada (2018: 290–1) presents a workflow description that is of particular
interest in that it foregrounds the text modification phases. His model includes
a translation stage (Orientation—Drafting—Revising), which takes place within
a translation company, and an editing1 stage (Stylistic editing—Copyediting—
Structural editing—Content editing), which takes place outside the translation
company (within a publishing company in the case of Bisiada's data). However,
such a straightforward division may not apply in the context of translation services
offered to corporate clients. When an industrial client outsources the translation
of its corporate communications to an LSP, they usually expect to receive finalised
products that are ready for publication online or in printed form. While in most
cases they review the materials before actually publishing them, this review may
not constitute an actual editing process. From the point of view of efficiency and
financial viability, it makes sense to include the editing stage in the translation stage
or, more specifically, in the revision task that is considered part of the translation
stage in Bisiada's workflow model and takes place within the LSP.
The revision continuum could be used in two different ways: firstly, as a theo-
retical model that would help us imagine all the possible ways in which revision
could be carried out, and secondly, as a practical tool that describes the different
revision levels applied by an individual LSP. Building such a theoretical model
and creating such a practical tool are not simple tasks, and are in fact well beyond
the scope of a single survey. In-depth interviews and analysis and classification of
revised materials would probably be necessary. I will return to the potential uses
and benefits of the revision continuum after the analysis of the survey data.
and requirements, general guidelines applied to all revision or review jobs would
be impractical. Instead, any guidelines for revision or review should be tailor-made.
The need to tailor the task description for different purposes of course resonates
well with the idea of a revision continuum.
All these studies discuss revision as quality assurance. Variation in the scope of
revision is not a particular area of focus in any of them, nor are the creative aspects
of revision tasks considered. The present study aims to fill this gap and take revision
research in a new direction to uncover the full potential of this important part of
the workflow.
4. Research design
The survey to be discussed here looked at workflow processes from the LSPs'
viewpoint, examining how they have designed their production workflow and the
related revision policies. The participants were therefore representatives of Finn-
ish LSPs that identify primarily as translation agencies. Based on publicly available
sales information, LSPs that were at least medium-sized as translation businesses,
although not very large from the point of view of Finnish businesses generally,
were selected as recipients of the online questionnaire. To allow the inclusion of an
adequate number of companies, no definite sales limit was established; instead, sales
figures from several years were examined to identify companies with steady annual
sales of several hundred thousand euros or more.
In order to obtain some preliminary quantitative data on the kinds of revision
policies and practices that might be prevalent among LSPs, the link to the ques-
tionnaire was sent to a single representative of each company. They were informed
that their responses would be used for research purposes and published. To ensure
protection of business secrets, the survey recipients cannot be described in greater
detail here.
In the cover letter, it was emphasised that the respondent should be familiar with
the company’s revision processes and services. The respondents were thus expected
to provide answers based on the companies’ established ways of working instead
of the respondents’ own preferences. However, it was not possible to control who
actually responded to the survey. The respondents were not required to enter their
own or the company’s name, because it was assumed that they would then be more
reluctant to respond. This means that the respondent may have been someone with
incomplete or outdated knowledge. Similarly, it cannot be confirmed that an actual
company policy or practice exists regarding all the details addressed by the survey;
some of the matters discussed here may not have been considered by some LSPs
at all. In these cases, the responses would in fact reflect the respondents' individual
preferences.
A link to the online questionnaire was sent by e-mail to 26 Finnish LSPs.
Reminders were sent, and some of the large companies were contacted via per-
sonal connections to ensure a response. A total of 11 LSPs responded to the survey
(response rate 42.3%); these represent a major portion of the Finnish translation
industry in terms of combined sales. The most prominent LSPs were well repre-
sented among the respondents.
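The reported response rate is simply the ratio of responding companies to contacted companies:

$11 / 26 \approx 0.423$, i.e. 42.3%.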
The questionnaire had 29 questions, some with two parts. The questions were posed
in Finnish and divided into four sections: (1) basic background information about the
company, (2) the company’s service range, (3) the revision procedure and (4) creative
translation and editing services offered. Both open and closed questions were used.
Not all the data yielded by the questionnaire are analysed in this chapter; here,
the focus is on the section dealing with the revision procedure, with particular
attention to the scope of revision, its allowed level of creativity and who has the
authority to make decisions about these matters. To learn more about the role of
revision in the workflows for creative translation and editing services, some of the
questions in the fourth part of the questionnaire were also examined. The follow-
ing questions are discussed here:
1 Does the typical translation workflow include a revision task carried out by
someone other than the person who translated the text?
2 Which text features is the reviser expected to pay attention to?
3 What types of stylistic editing is the reviser expected to carry out?
4 In what situations may the reviser make or propose changes that deviate from
the source text content?
5 Has the company defined different revision levels, or may the reviser decide
the scope of revision?
6 Is the reviser provided with a description of the scope and objectives of each
revision task?
Basic information about all these matters was obtained from closed questions,
and the responses to them are presented in section 5. However, some of the open-
ended questions provided a more nuanced picture by revealing contextual factors
behind the practices. Information from these questions has therefore also been
included in the present analysis.
In analysing the responses, the companies were divided into major (five respondents)
and minor operators (six respondents) based on their number of employees, countries
of operation, service range and selection of language pairs. The responses to the back-
ground information section of the survey showed that all the major operators offered
translations in all language pairs, had operations in several countries, and had an extensive
service range including creative translation services. The division into major and minor
operators will be used in the presentation of the survey results in the next section.
format. As the data were limited, it must be kept in mind that any conclusions are
only preliminary, hypotheses for further study at best. Since it is difficult to obtain
a larger sample among Finnish LSPs, any further study will have to rely on in-depth
methods such as interviews.
Revision parameters
Figure 7.2 lists 14 revision parameters from four groups (A–D) and shows the num-
ber of respondent companies that included each parameter in the scope of typical
revision. The respondents were able to select several options—which was also the
case for most of the other questions presented in this chapter.
The options used in the questionnaire roughly follow the revision parameters
identified by Mossop (2014a: 134–5). Some modifications were made to use word-
ings that were more likely to be familiar to the respondents;2 this was somewhat
challenging as the jargon used at LSPs varies considerably from one company to
the next (see Uotila 2017: 45). Two of Mossop's parameters were divided fur-
ther so that more detailed information about the task content could be obtained: the
parameter 'sub-language' was divided into 'stylistic suitability for the text type'
and 'terminology', while 'mechanics' was divided into 'linguistic (grammatical)
correctness' and 'compliance with client's style guide'. It is true that client-spe-
cific style guides often include instructions on appropriate grammar. Still, general
grammatical correctness and compliance with a style guide constitute two different
things to check, which made it logical to separate them in this context. Similarly,
style and terminology, while both aspects of sub-language, are different from each
other in that style can be understood as a feature of all texts, while terminology is
more important in some texts than in others. The differences in how many respon-
dents selected each of these options proved the divisions justified.
The only two parameters that all respondents marked as part of the typical
revision procedure were ‘linguistic correctness’ and ‘terminology’; the same two
parameters were considered most important by Uotila’s (2017: 54) respondents.
None of the large operators expected the reviser to engage in major content edit-
ing. It is rather interesting that three minor operators did, but we can only speculate
whether these companies specialise in creative translation and communication ser-
vices, or whether the respondents perhaps just had a different definition of major
content editing in mind.
Figure 7.4 lists some situations in which LSPs may allow content editing during
revision. Only one respondent indicated that they allow revisers to freely deviate
from source content; on the other hand, one respondent allowed no deviations at
all. It seems to be a fairly common practice that changes to content are allowed
with certain text types of specific clients. In my own experience, which is sup-
ported by the responses to some of the open-ended questions in the survey, this is
usually based on an agreement between the LSP and the client to the effect that
some text types are given a special treatment. Two respondents selected the 'Other,
please specify' option: both described cases where the client has specifically ordered
a creative translation or wanted the text to be edited further. It must be noted that
both of these respondents also selected other options; these were therefore not the
only situations where they allowed deviation from source text content.
The responses to some of the open-ended questions in this survey indicate
that the client’s requests and what had been agreed with the client are the most
important factors in deciding what kinds of changes are allowed during the revision
task. The respondents repeatedly mentioned the wishes of the client and the fact
that service specifications must be mutually agreed upon. In some other transla-
tion contexts, the client’s wishes may not need to be automatically observed, and
Mossop (2014a: 123) indeed does not recommend doing so. When producing a
commercial service in an extremely competitive operating environment, however,
listening to the client is clearly of crucial importance. Dunne (2011: 176) stresses
that a translation can be adequate or inadequate only in relation to the communi-
cative function that it should fulfil (see Nord 1997: 34–7), which is "not a quality
inherent in a target text, but rather is a quality assigned to the target text by an
evaluator from his or her particular point of view”—and that point of view, in the
context of a business-to-business translation service, can only be the client’s.
Other factors that respondents mentioned as having an impact on the scope of
revision include the text type, the target audience and the intended use: the text
must be revised so that it works as intended in the target context. However, the
target audience, or the end user, was mentioned far less frequently than the client,
which clearly implies that in all considerations, the client comes frst. Other fac-
tors to be considered include local legislation, which may require changes to the
text, and layout, which may require omission of some content so that the text can
fit into the designated space. With some text types, strict limits on the number of
characters are imposed.
One respondent foregrounded a further factor that can be best described as a
precondition for all the other revision policy choices: the pricing of the job must
allow enough time to produce the necessary quality level. As creative editing is a
time-consuming activity, it can be carried out only if the price of the project has
been negotiated to allow the use of adequate time. According to the respondent
who raised the price issue, translations of marketing texts must be sold to clients
under service labels that justify the higher price. The label makes it easier for the
client to accept that creative quality takes time to produce. This is a crucial mat-
ter for LSPs, because in the commercial reality within which they now operate,
translation prices are often pushed down to the limits of profitability (see European
Commission 2018a), and it is simply not possible to spend enough time on all
translations to hone them to perfection.
may therefore be a lot of untapped potential in how the workflow's revision phase
could be used to produce different services for clients.
Some of the responses to the fourth section of the questionnaire, which charted
the creative translation and editing services of the LSPs, provided proof that revision
is already being used in that manner by some LSPs. Respondents from companies
that offer both a transcreation3 service and a separate creative editing service were
asked to explain how these two differ in terms of the workflow or the practical
execution of the task. Two respondents answered this question; both stated clearly
that the workflows used when producing these two services are similar. One of the
two also explained that both services are based on a regular translation workflow to
which a more extensive editing phase is added.
Although this shows that dividing the work into phases is clearly considered a
useful practice, using the same production process for different services could also
indicate a need for further service development. Such a need was in fact identified
by several respondents: when asked whether they have established definitions for
their creative translation and editing services, only one respondent stated confi-
dently that service descriptions exist for all services. All the others were more or
less unhappy with their current service definitions or admitted that service design
had not yet been completed. Five of the eleven respondents did say that their com-
panies had increased their service range in recent years, most of them in the area of
marketing and content production services. With the development of new services,
service design is probably an ongoing effort for many LSPs.
parameters are considered more essential than others. This makes it fairly easy to
define a budget revision service that includes only the most important parameters
(linguistic correctness and terminology being the most obvious candidates based
on both the present study and Uotila's 2017 findings), as well as a full service that
would encompass all or most of the revision parameters. The level of creativity to
be allowed—which was discussed earlier as deviation from the source text content
but could also be understood as creative use of language—is another powerful way
of making a difference between types of revision. Further variables would include
choosing between a spot check and complete revision, and between unilingual and
comparative revision, or including both in the workflow as separate steps.
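To make the continuum concrete, a minimal sketch follows, assuming an LSP wanted to encode such service levels as data. The parameter names follow the chapter's discussion, while the class, the service labels and the field values are invented for illustration and are not drawn from the survey.

from dataclasses import dataclass

# Hypothetical encoding of revision service levels along the continuum.
# Parameter names echo the chapter; everything else is invented.
@dataclass
class RevisionService:
    label: str          # name under which the service is sold
    parameters: list    # revision parameters included in the check
    comparative: bool   # compare the target text against the source?
    creativity: str     # "none", "stylistic editing" or "content editing"

budget = RevisionService(
    label="budget revision",
    parameters=["linguistic correctness", "terminology"],
    comparative=False,
    creativity="none",
)

full = RevisionService(
    label="full revision",
    parameters=["linguistic correctness", "terminology",
                "stylistic suitability for the text type",
                "compliance with client's style guide"],
    comparative=True,
    creativity="stylistic editing",
)

creative = RevisionService(
    label="creative editing",
    parameters=full.parameters,
    comparative=True,
    creativity="content editing",  # deviation from source content allowed
)

Pricing and additional workflow steps (such as the final proofreading pass after creative editing that some respondents reported) could then be attached to each label, mirroring the practice of selling the revision step under different service names at different prices.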
Next, let's look at how the revision continuum could help LSPs avoid wasting
resources. LSPs often engage in fierce competitive bidding in which price is the
most important factor. The company that has the best production process, resulting
in adequate quality at the lowest price, wins. Adequate (or fit-for-purpose) revision
can be considered key to adequate quality. In practice, this means that LSPs must
consider when to apply extensive revision and when a less thorough check will do,
and the depth of revision must be reflected in the price.
The need to make the task description and the price meet has not previously
been fully recognised in revision research. Martin (2007: 58), for example, takes it
for granted that revision needs to be kept "within sensible and affordable limits".
The underlying assumption appears to be that the price the client pays for revi-
sion is always the same, and the cost to the LSP of revision must be affordable with
respect to that price, which of course often limits revision to a minimum level.
This results in problems that could be solved by increased variation in the price of
revision. The survey results presented here have shown that LSPs already use the
revision step in the workflow to produce services that are sold under various labels
for which a higher price is charged, for example creative editing or transcreation.
This proves that revision is an important part of the workflow, with potential to
make a difference between regular translation and a high-quality creative com-
munication service; charging different prices for different types of revision is thus
justified. From the clients' point of view, it also makes sense that they receive texts
with the quality level and style that they need in each case, and pay only for the
level that they need.
It could of course be argued that in the case of extensive editing of a transla-
tion, we are no longer talking about revision in the sense usually ascribed to the
term in Translation Studies. Creative editing could be seen as falling outside the
realm of translation revision, and ample justification for that approach can cer-
tainly be provided. One such justification can be found within this very survey:
it seemed to be a fairly common practice that when the translation workflow
includes creative editing, it also includes another revision step such as language
review or proofreading. However, I believe creative editing should be discussed
under the overall concept of revision when it is carried out within the LSP
directly after the translation phase in the workflow, by the same people who also
do other revision work.
The revision continuum is presented here as a hypothesis only, and its further
development and practical application is left to future work. The factors that deter-
mine the placing of tasks on the continuum must be elaborated based on more
thorough empirical research on LSP practices. Different revision tasks can then
be identified and defined in order to create a representation of how revision is
currently being used. On that basis, new, efficient ways to make use of revision in
service production could be revealed. The very shape of the visual representation
could change as a result of more detailed research: a simple continuum between
two extreme task types might not be adequate for dealing with all the different fac-
tors involved. The roles of different actors or agents, such as the project manager
as the one who decides what to include in the workflow (see Stoeller 2011: 296),
as well as the client as an agent that influences all decision-making, are also worth
examining.
7. Conclusion
The survey results presented here make it clear that although LSPs are often seen as
a fairly unified entity, a closer look at their service workflows reveals many differ-
ences between them and in how they serve their clients. It is logical that differences
should exist: LSPs are free enterprises that compete against each other and work
hard to find the best practices that will allow them to get a larger share of the avail-
able business. It is unlikely that clients, whose knowledge of translation is usually
limited, are aware of all the differences in how the services are produced. Clear def-
initions of services, referring to workflow and task content, and using terminology
that can also be understood by people who are not experts in translation, would
be useful to clients and would allow them to make informed purchase decisions.
It must be noted that when revision is expanded to include creative editing,
it no longer equals quality assurance. Revision and quality assurance have always
been strongly linked by both researchers and practitioners. When Drugan (2013:
37) asked her interviewees how they manage translation quality, they responded
by explaining their revision procedures—forgetting at first all the quality manage-
ment measures that take place at other stages of the process. However, if we look
at revision as a task that goes beyond checking and reaches into the production of
creative translation services, we must also accept that quality assurance is only one
possible purpose of revision. A shift in how revision is seen and defined is therefore
necessary: instead of merely checking for errors, it needs to be seen as part of the
text production effort.
As mentioned earlier in this chapter, the translation industry standards EN 15038
and ISO 17100 both take a strict view on revision, requiring that all target lan-
guage content be revised. Considering the flexibility of practices generally adopted
by LSPs, and the need to ensure profitable operations by not wasting resources, it
seems that any widespread adoption of the standards may not take place unless these
requirements are reconsidered. As has been repeatedly found in empirical studies,
for example Rasmussen and Schjoldager (2011: 101) and Schnierer in Chapter 6
of this volume, revision is sometimes not possible for practical reasons. Adoption
of the standard would, therefore, mean having to follow requirements that are not
financially and practically viable in the translation industry.
Research into LSP workflows challenges the way a translator's work is tradition-
ally seen—as an individual, isolated effort where a translation is created as a result of
one person's thought processes. Any up-to-date theory of translation must account
for how translations are created in real production contexts. In the ongoing effort
to bridge the gap between translation theory and practical work (see Chesterman
and Wagner 2014), a move towards recognising the impact of teamwork in every-
day working environments would be a welcome development. Research in areas
such as the sociology of translation has already resulted in great advances in our
understanding of such environments in recent years; more detailed investigations of
translation workflows would contribute to this same goal.
Notes
1 While Mossop (2014a) uses the term ‘editing’ primarily when discussing non-transla-
tions, Bisiada (2018: 290) explicitly states that he uses the term for both translations and
non-translations. There is really no reason why the various editing tasks could not be
performed for a text that has been previously translated; the text is then no longer treated
as a translation. This process is also recognised by Mossop and is in fact included in his
glossary definition of the term ‘editing’ (Mossop 2014a: 224).
2 A good example of the terminological variation is that Finnish LSPs generally do not use
the concept of revision (or the direct Finnish correspondent of the word) when referring
to checking translations (Uotila 2017: 44–5), and that for Danish LSPs, it is only one of
several terms that are used (Rasmussen and Schjoldager 2011: 100).
3 Risku et al. (2017: 54) cite Rike’s (2013: 72f) definition of transcreation as “a concept in
which the advertising text and message are completely rewritten and redesigned in order
to produce a creative and effective target text”. The term is used here in this sense, refer-
ring to a commercial service that meets this definition.
8
EXPLORING A TWO-WAY
STREET
Revisers’ and translators’ attitudes
and expectations about each other in
biomedical translation
Susana Valdez and Sonia Vandepitte
What motivates revisers’ and translators’ decision-making and, hence, their options
is a common interest among researchers within process-oriented DTS and transla-
tor training.1 Revisers’ and translators’ attitudes and expectations are particularly
relevant if we wish to describe, understand and explain the motivations of these
professionals. Their attitudes and expectations have been the subject of some inves-
tigations, in particular those concerned with how professionals do and should
perform various translation activities in different domains.2 However, although
much progress has been made in researching translation revision (for example,
Mossop 2007a, 2014a; Robert 2014a; Robert et al. 2017a, 2018), to the best of
our knowledge, there is no research concerning attitudes and expectations about
revision practices in medical or biomedical translation. It is not known what atti-
tudes and expectations revisers have about professional translators and translation
in biomedical settings. How translators think they should translate and what they
think revisers expect from them are also not fully understood.
At the intersection of Descriptive Translation Studies and social sciences, our
interdisciplinary, empirical and descriptive study addresses the question of whether
revisers' attitudes and expectations about competences and working practices are
similar to or different from those of translators. To do so, we shall look at the
results from a questionnaire circulated among professional revisers and translators
from June 2017 to April 2018. The questionnaire was originally part of a larger
descriptive study about the beliefs, translation behaviours and translation options
of 60 agents3 with different roles and levels of experience, namely novice transla-
tors, experienced translators, revisers and health professionals. Different types of
beliefs were elicited and then compared with translators' behaviour and with revis-
ers' and health professionals' preferences regarding translation options in biomedical
translation (Valdez 2019). The analysis showed that translators and revisers not
only expressed beliefs associated with source and target orientation (which was
the focus of the study), but also beliefs about competences and working practices.
Consequently, a follow-up study was conducted with a larger group of participants
(n = 71), to whom the same questionnaire was administered. The findings dis-
cussed here concern the attitudes and expectations expressed about competences
and working practices in revision and translation.
The next section provides a brief overview of the main guiding concepts
of our research, that is, 'attitudes' and 'expectations' and their connection to
translation norms. Then we contextualize biomedical translation within medi-
cal translation and define what is meant by these terms. The chapter then goes
on to describe and discuss the methods used in our study. In the final sections,
the results are described and discussed, together with the implications and our
conclusions.
More concretely, the beliefs that translators and revisers expressed about compe-
tences and working practices were:
• the agent’s beliefs about what ‘other agents should do’ in a situation, hence-
forth normative attitudes (for example, how revisers think translators should
translate, and how translators think revisers should revise);
• the agent’s beliefs about what other agents ‘do’ in a particular context, which
will be referred to as empirical expectations (for example, how revisers think
translators translate, and how translators think revisers revise);
• the agent’s beliefs about what ‘others believe s/he should do’, called normative
expectations (for example, what revisers believe translators think revisers should
do, and vice versa).
The distinctions between different types of beliefs are often overlooked within
social sciences in general and Translation Studies in particular. Within the former,
Bicchieri clarifies that "important distinctions . . . are often missed in surveys,
because questions about attitudes are often too vague to capture these distinctions”
(2017a: Kindle location 346).
In addition, research has suggested that even though the attitude-behaviour
relationship has motivated a considerable body of literature in the social sciences,
the relationship between attitudes and behaviour is at least arguable. In his much-
cited literature review, Wicker concluded that there is “little evidence to support
the postulated existence of stable, underlying attitudes within the individual which
influence both his verbal expressions and his actions" (Wicker 1969: 75), and he
added that “it is considerably more likely that attitudes will be unrelated or only
slightly related to overt behaviors than that attitudes will be closely related to
actions” (Wicker 1969: 65), a view that the authors of this chapter share.
In other words, what people say they ‘believe’ may or may not coincide with
what they actually ‘do’. In translation, too, “there may . . . be gaps, even contra-
dictions”, Toury (2012: 88) explains, “between explicit arguments and demands,
on the one hand, and actual behaviour, on the other”. This lack of convergence
between what people say they believe and what they do may have multiple causes:
they may lack awareness of their own behaviour, their statements may be deliber-
ately or unintentionally misleading and they may model their behaviour on what
they believe others expect of them, because people are “social animals embedded
in thick networks of relations” (Bicchieri 2017a: Kindle location 311).
What agents believe they should do in a particular situation is largely based on
the shared beliefs, attitudes and expectations within a particular group about what
is considered appropriate and inappropriate behaviour in a specific situation within
a certain target culture, language and system (Valdez 2019: 46). That is precisely
the topic of this study.
Within this perspective, behaviour is conditioned by the belief that most agents
in one’s network conform to the norm and believe they ought to conform to the
norm. These beliefs are assumed to inform the conditional preference to act in a
certain way in a specific situation.
An agent’s interpretation of what should be done, given their community’s
shared beliefs about appropriate and inappropriate courses of action, is actually
already present in Toury's definition of translation norms "as the translation of gen-
eral values or ideas shared by a community—as to what is right and wrong, adequate
and inadequate—into performance instructions appropriate for and applicable to
particular situations” (Toury 2012: 63). Here revision and translation norms can be
interpreted as non-binding orientations of behaviour: revisers and translators always
have a choice. It is their expectations of what they consider appropriate and what
they think the community expects of them that tend to constrain their options and
hence their decision-making.
Revisers' and translators' behaviour is not only influenced by what they believe
most other agents believe they should do, but also by what they (revisers and trans-
lators) think most agents in their community actually do. It is all these beliefs that
inform an agent's preference to act in a certain way in a specific situation and what
the agent believes others should do. As Hermans (1999: 74) formulates it, transla-
tors’ decisions result from “certain demands which they [translators] derive from
their reading of the source text, and certain preferences and expectations which
they know exist in the audience they are addressing”. In this study, it is assumed
that this also applies to revisers.
Since individual choices depend on what agents believe others in their commu-
nity do and what they believe is appropriate and inappropriate behaviour (Bicchieri
2017a: Kindle location 232), revision and translation can be considered interde-
pendent actions. In other words, it is not sufficient only to elicit what revisers
and translators think they should do, since what they believe should be done in a
specific situation may be constrained by what they believe others expect of them
and what they believe others do.
Revisers’ and translators’ statements about attitudes and expectations can thus be
seen as extratextual data and an essential source for the “reconstruction” of transla-
tion norms referred to in the literature as “semi-theoretical or critical formulations,
such as . . . statements made by translators, editors, publishers and other persons
involved in or connected with the activity” (Toury 1995: 65). Such statements
also respond to Chesterman’s (2016: 83) call for more evidence of norm-governed
behaviour:
2. Biomedical translation
Medical translation is generally considered a type of scientific-technical transla-
tion concerning medicine and a range of subject areas related not only to health
(including pharmacology, surgery, psychology) but also to other fields (such as law)
(Karwacka 2015: 271; Montalt 2011: para. 4). The importance of medical trans-
lation in the dissemination of knowledge and new discoveries is unquestionable
(Karwacka 2015: 271). The facilitation of specialized and non-specialized com-
munication (expert-to-expert, and different combinations and variations of expert
to layperson and layperson to expert) through medical interpreting has also been
attracting the attention of translation scholars (for example, Lesch and Saulse 2014;
Li et al. 2017; Major and Napier 2012).
Within the healthcare environment, the medical devices industry has been
playing an increasingly important role in the European economy (European Com-
mission 2018b, under “The importance of the medical devices sector”). On the
one hand, “medical devices are crucial in diagnosing, preventing, monitoring and
treating illness, and overcoming disabilities” (European Commission 2019b). Med-
ical devices are even considered by the World Health Organization as “ever more
indispensable” in healthcare provision (World Health Organization 2018). The
medical devices industry also represents a growing sector of 27,000 companies and
675,000 employees in the European Union and hence “an influencer of expendi-
ture” (European Commission 2018b).
Biomedical translation is defined here as the translation of content from biomedicine, the science and profession responsible for medical devices, from “innovation, research and development, design, selection, management [to their] safe use” (World Health Organization 2017: 20). It includes mainly texts related to medical devices. A medical device is considered “any product intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and
necessary for its proper application, intended by the manufacturer to be used for
human beings” (European Parliament 2007: 23–4). In accordance with European
legislation (Council of the European Union 1993: 30), each medical device is
accompanied by an instructional text. These texts are written or commissioned by
the manufacturer and are thus written by experts to be read by experts (health pro-
fessionals) or laypeople. The aim is to instruct the health professional or layperson
on how to correctly and safely use the device.
3. Methodology
Within the field of biomedical translation, this chapter describes how revisers think
translators translate, how translators think other translators translate and how revisers
revise (empirical expectations); how revisers think translators ‘should’ translate and
how translators think revisers ‘should’ revise (normative attitudes); and what revisers
believe are the essential characteristics of a good translation, what translators think
about other translators’ expectations of their work and what translators think about
revisers’ expectations of translators’ work (normative expectations) (Table 8.1).
Questionnaires were the method selected for data collection since they are seen
as the optimal instrument to elicit beliefs not only in social sciences (for example,
Bicchieri 2017a: Kindle location 1134) but also in Translation Studies (Kuo 2014:
106; Robert and Remael 2016: 586). The well-documented problems associated
with the elicitation of beliefs in general and the use of questionnaires in particular
were taken into account in the data collection and the design of the questionnaires
(for example, see Callegaro 2008 on social desirability bias). This was done mainly
by (1) adopting a self-administered method of data collection, (2) assuring par-
ticipants that their personal information would be treated confidentially, (3) pilot
testing the questionnaires, and (4) acknowledging that the respondents’ answers
may not be truthful (Gile 2006).
Data collection
Links to the online questionnaires5 were sent by e-mail, together with the informed consent form, to the pre-contacted participants recruited (1) through a call for participants posted on dedicated Facebook pages for Portuguese translators and associations; (2) on the basis of a pre-selection of profiles of translators and revisers who self-identify as specialized in medical or biomedical translation on Proz.com and on the websites of Portuguese translation associations (APTRAD and APT); (3) through a request sent to Portuguese universities with the intent of recruiting novice translators who might fit the profile; and (4) from personal acquaintances. Each participant received a questionnaire tailored to their experience and/or profession, namely reviser, novice translator or experienced translator. No financial compensation was offered to the participants.
Each questionnaire was organized into different sections. The revisers’ questionnaire was divided into five sections: (1) professional profile (five questions), (2) assessment of the quality of a translation (two multiple choice questions), (3) reviser’s beliefs about revisers (self-beliefs and beliefs about others) (three open questions and two Likert scale questions), (4) reviser’s beliefs about translators (two open questions and two Likert scale questions), and (5) reviser’s beliefs about the readers of the translation (three open questions and two Likert scale questions). The translators’ questionnaire was divided into four sections: (1) professional profile (five questions), (2) translator’s beliefs about translators (self-beliefs and beliefs about others) (two open questions, two Likert scale questions, one star scale question and one yes/no question), (3) translator’s beliefs about revisers (three open questions and two Likert scale questions), and (4) translator’s beliefs about the readers of the translation (three open questions, two Likert scale questions and one star scale question).
Normative attitudes about competences and working practices were elicited
by asking revisers the open question “In general, how do you think translators
‘should’ translate?” and novice and experienced translators the open question “In
general, what criteria do you think reviewers ‘should’ use to judge the quality
of a translation?” In order to elicit empirical expectations, revisers were asked
the open question “In general, how do you think translators ‘actually’ trans-
late?” while translators (both novice and experienced) were asked “How do other
translators with the same experience as you translate?” and “In general, how do
you think reviewers assess a translation?” Finally, to elicit normative expectations,
revisers were asked the open question “In general, which are the essential char-
acteristics of a good translation?” while translators (both novice and experienced)
were asked “In general, how do other translators with the same experience as you
think you ‘should’ translate?” and “In general, what expectations do you think
reviewers6 ‘have’ of your work?”
The questionnaires were designed using the online SurveyMonkey tool,7 which
allows for the collection of responses and their export for external codification and analysis in the NVivo qualitative analysis software. NVivo 12 Mac allows for the
processing of qualitative unstructured data resulting from the open questions to
which a participant “gives the response in his or her own words” (Ballou 2008:
547), which is especially useful when conducting an exploratory study regarding
an unexplored topic like attitudes and expectations in biomedical translation. The
rich raw data provided by the participants were systematically coded and organized
by emergent themes (following Saldanha and O’Brien 2013: iBook location 564).
Thematic analysis has been defined as “the process of working with raw data to
identify and interpret key ideas or themes” (Matthews and Ross 2010: 373).
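As an illustration of the counting step only (the interpretive coding itself was done manually in NVivo), a tally of coded themes can be sketched in a few lines of Python; the theme labels and coded answers below are hypothetical placeholders, not the study’s data.

from collections import Counter

# Hypothetical output of manual coding: each open answer has been read
# and assigned one or more emergent themes (done in NVivo in the study).
coded_answers = [
    {"terminological norms", "accuracy"},
    {"terminological norms", "language norms"},
    {"accuracy", "readability"},
]

# Tally how often each theme was mentioned across all answers.
theme_counts = Counter(theme for answer in coded_answers for theme in answer)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mention(s)")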
Participants
In total, 71 participants answered the questionnaires, all native speakers of European Portuguese, with experience in biomedical translation and/or revision. There were 23 revisers, 32 novice translators and 16 experienced translators. The different
levels of experience (novice vs. experienced translators) and the distinct professions
of the participants (translators vs. revisers) allowed for a comparison and contrast
of their belief statements.
4. Results
Revisers’ profiles
The 23 revisers (7 men) had experience in the revision of biomedical transla-
tion ranging from 1 to 20 years (average of 7 years). All revisers had a degree
in translation at the BA or post-graduate level and/or a degree in medical
sciences. Nearly all revisers (95.65%) worked with the language pair English to European Portuguese; the exception was one who revised Spanish–Portuguese
translations. Some of the revisers worked in several language pairs besides the
main English–Portuguese pair and also revised from Spanish (43.48%), French
(26.09%) and German (8.70%).
Revisers were asked to select, from a list of text types in the (bio)medical domain,
all the types they had worked with.8 From that list, the most frequently revised were
patient information leaflets (56.52%), user manuals for devices (56.52%) and soft-
ware (43.48%), summaries of product characteristics (52.17%), (material) safety data
sheets (47.83%) and training material (47.83%).
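Since the text-type question was multiple-response (see note 8), each percentage is computed against the number of respondents rather than the number of ticks, which is why the figures sum to more than 100%. A minimal sketch of that calculation, using made-up answers rather than the study’s data:

# Hypothetical multi-select answers: each reviser ticks every text type
# they have worked with, so one respondent can appear in several counts.
responses = [
    {"patient information leaflets", "user manuals"},
    {"user manuals", "safety data sheets"},
    {"patient information leaflets"},
]

n_respondents = len(responses)
all_text_types = set().union(*responses)

# Percentage = respondents who ticked the option / total respondents.
for text_type in sorted(all_text_types):
    ticked = sum(text_type in r for r in responses)
    print(f"{text_type}: {100 * ticked / n_respondents:.2f}%")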
manuals for medical devices (26.67%) and software (20.00%), training material
(26.67%) and labels (20.00%).
FIGURE 8.1 Revisers’ normative attitudes, empirical expectations and normative expectations about translators and translations [bar chart; the themes, from most to least mentioned, are terminological norms, accuracy, language norms (grammar, spelling, syntax), adapting to the target audience, readability, natural sounding and fluency, detail orientation, consistency, conciseness, and reducing nuances of the ST]
A lot of the times, we get literal translations that immediately give away it is
a translation and not the original text. This makes it hard to read and means
that, most of the time, we need to read the text several times to understand.
As well as this, it provides leeway for errors (false friends, etc.).
Revisers also referred to potential causes of this “automatic pilot” translation proce-
dure, namely lack of self-revision and tight deadlines. Although self-revision was identified as a must for translators, its absence was also identified as a root cause of poor quality, which, together with the “automatic pilot” procedure, some revisers attributed to lack of time. As
explained by some revisers, “[s]ometimes ‘shortcuts’ are taken in order to comply with
deadlines, perhaps, resulting in translations of inferior quality” and “[t]hey actually
work for the deadline, which is extremely short and sometimes non-realistic. Consid-
ering the demands of the client in quality and sometimes the load and complexity of
instructions and workflows, this has consequences for the translation quality.”
To elicit empirical expectations, translators were asked “How do other translators with the same experience as you translate?” and “In general, how do you think reviewers assess a translation?”; to elicit normative expectations, they were asked “In general, how do other translators with the same experience as you think you ‘should’ translate?” and “In general, what expectations do you think reviewers ‘have’ of your work?”.
In their responses to these open questions, the majority of novice and experi-
enced translators referred to the high expectations held by other professionals and,
more concretely, by revisers (that is, normative expectations). Some novice and
experienced translators believed that revisers expected perfection. Two of the nov-
ice translators, for instance, clearly stated that they believed revisers did not accept
any type of error. “In a professional environment”, one of the novice translators
wrote, “all jobs are expected to be perfect in terms of achieving the goals com-
panies give you. If you work on your own, then you should be hard on yourself.”
However, the majority expressed the belief that revisers expected and accepted a
translation that shows some small or minor “slips”. Two of the experienced transla-
tors believed revisers expect them to deliver a good translation “that will not take
too long to revise”, as one translator noted, and three other translators believed that
revisers held high expectations: “I think their expectations are high”, “I strive to
deliver excellent quality translations”, “they expect high-quality work”.
Other novice and experienced translators believed that revisers expected a less-
than-perfect product: “that it is good, even if it is not perfect”, as one wrote,
and “if I have translated a text related to a field I do not usually work with, the reviser might need to check/change some of my terminological choices”, another clarified.
Two further broad themes emerged from the answers of the biomedical transla-
tors: they directly or indirectly referred to translators’ competences and, like the
revisers, they referred to the translation process itself.
Regarding the competences, novice and experienced translators alike reported
normative expectations and attitudes. They are expected by other translators and
revisers, and they expected other translators and revisers, to be proficient in ‘information mining’. They referred specifically to the documentation and terminological
process (52 mentions), followed by ‘planning and management’, mainly time man-
agement (16 mentions) and ‘language’ competence (11 mentions)—professionals
are expected to know and comply with writing and linguistic norms (including
grammar, spelling and punctuation) (Figure 8.2). Less frequently, translators also
referred to detail orientation (five times), which is described by the translators
as “being thorough” and “with attention to detail”. Surprisingly, subject-matter
and technological competences were mentioned only three times each, suggest-
ing that these translators believed that they are expected to prioritize ‘information
mining’ and ‘planning and management’ over their knowledge of the subject and
the effective use of software. From an industry perspective, the fact that ‘information mining’ outweighs ‘subject-matter knowledge’ may suggest that knowing how to conduct research and documentation is more desirable than knowledge of a specialist field. Finally, among the least mentioned themes were ‘defining and evaluating translation problems’, ‘knowing how to take responsibility’, ‘conscious process’, ‘risk analysis’, ‘ability to measure one’s own abilities’ and ‘knowing how to learn from feedback’, each mentioned only once or twice (Figure 8.2).
[FIGURE 8.2 Bar chart of competence themes by number of mentions: information mining, plan and management, language, detail orientation, thematic, technological, define and evaluate translation problems, knowing how to take responsibility, conscious process, risk analysis, ability to measure one’s own abilities, and knowing how to learn from feedback]
I think revisers should not act as judges, but as part of the value chain. So,
their purpose should be to create a better product than what they get from
the previous stage. If the product has the adequate level of quality, the reviser
should not change the product received.
When translators were asked how revisers should assess a translation (that is, their normative attitudes), experienced translators indicated that revisers should focus on objective criteria such as grammar and style, which “in fact correct and improve a text and not criteria of ‘changing just for the sake of changing,’ only to justify their own revisers’ salary and, sometimes, even to humiliate the translator”.
Another experienced translator commented:
more often than not I find myself refuting marked errors that are not errors at
all. Either the error severity is not correct, or there’s no error at all. I believe
revisers work in good faith, but sometimes I begin to wonder.
These expectations are not surprising, as the ISO 17100: 2015 standard makes
reference to “competence in research, information acquisition, and processing” as
one of the professional competences of translators, and also indicates that during
the translation process the translator is expected to comply with industry termi-
nology (both the terminology specific to the domain and to the client) and with the reference material provided (including style guides) (ISO 2015a: 6, 10). Even though compliance with reference material was one of the aspects most referred to by both revisers and translators, one of the experienced translators clarified that, in
his experience, “we normally don’t have access to reference material nor contact
with specialists on the client side”. This is an important point for revisers to con-
sider when checking and assessing a translation.
Although reliance on a network of translators and domain experts is not
included in ISO 17100: 2015, it was frequently mentioned by novice and expe-
rienced translators as one of the competences they need. For instance, one of the
novice translators welcomed the opportunity to focus on the increasing impor-
tance of collaborative work:
Though translating is, in some ways, a solitary task, particularly for freelanc-
ers, at the same time, teamwork is important as it ensures the quality of the
provided translation services and allows for additional viewpoints of a single
item or topic. Sharing knowledge means gaining knowledge and expanding
experience. Translators are not infallible machines. Therefore, seeking help
and advice from fellow translators should be encouraged as a means of growth.
Most striking about the data were translators’ strong negative attitudes towards
revisers’ preferential changes. Subjective preferential changes refer to those correc-
tions made by a reviser that are based not on objective parameters of quality but
rather on subjective ones. These changes, also referred to in the literature by the
terms ‘hyper-revision’ and ‘over-revision’ (Mossop 1992: 85), are considered “sug-
gestions for improvement” rather than errors since “nothing is technically wrong”
(Densmer 2014). More often than not, these changes create problems in the qual-
ity control process and particularly in the relationship between the reviser and the
translator. That is probably why the surveyed experienced translators expressed
clear opinions about preferential changes. For them, the work of revisers is subjec-
tive, and it generates a sense of injustice. Their changes introduce insecurity and
doubt about the revision process, such that one of the experienced translators
wrote, “I already know that the reviser is going to change the text a lot, which is
rather unpleasant from an emotional point of view, but tough luck.”
Belief statements such as this suggest a power struggle between revisers and
translators with potential consequences for the translation process. The frictions
between these two groups of professionals indicate that the authorship of the trans-
lation is being put into question. Even though scientific-technical translation is increasingly seen (and accepted) as the product of a collaborative endeavour, as was expressed by some of the participants, the translators’ belief statements may signal that the role of the reviser is being challenged and that translators’ decision-making power over the final version of their translations is diminishing.
To conclude, the potential lack of communication and trust between revisers
and translators can hinder the quality of the translation and ultimately damage the
image of the translator. The findings suggest that translators are questioning what is
expected of them. If translators do not understand the reasons motivating revisers’
corrections, they are not able to follow revisers’ feedback and their competence can
be put into question, as expressed in the revisers’ belief statements. This may lead to
the perception that the quality of translators’ work is below the expected standard.
As a consequence, translators receive negative feedback from revisers, which jeopar-
dizes their professional reputation. Thus, working relationships between revisers and
translators can be contentious even though both groups agree on the quality param-
eters that should govern biomedical translation, in line with ISO 17100: 2015.
Though exploratory in nature, our study has aimed to lay the foundations
for further research. Future lines of research should include studies of scientific-technical translation, which increasingly seems to be considered a collaborative effort demanding that revisers and translators work together. This raises the ques-
tion of how the industry will cope with these challenging power relations. For
instance, even if the Codes of Ethics of Portuguese translators’ associations do not
yet contemplate potential limits to revisers’ work, the codes should be monitored
in order to assess how the industry is dealing with these challenges. Likewise, given
that university training is based mainly on developing individuals’ competences in
translation/revision, starting with an assessment of individual student performance,
there is a pressing need to inquire into how training can adapt to the increasing
Acknowledgements
Special thanks are due to Alexandra Assis Rosa for her comments on an earlier
version of the chapter, as well as to the anonymous peer reviewers and Isabelle S.
Robert for insightful feedback.
Notes
1 For example, Schwieter and Ferreira (2017); Ehrensberger-Dow et al. (2015).
2 For example, Sosoni (2017) reported on translators’ attitudes about translation crowd-
sourcing; Corrius et al. (2016) examined students’ and professionals’ attitudes to gender
in advertising translation; Feinauer and Lesch (2013) discussed the “idealistic” expecta-
tions of healthcare professionals about interpreters.
3 While a variety of definitions of the term “agent” has been put forward, this chapter
adopts the definition suggested by Simeoni, who saw it as a sociological concept for “the
‘subject,’ but socialized. To speak of a translating agent, therefore, suggests that the refer-
ence is a ‘voice,’ . . . inextricably linked to networks of other social agents” (1995: 452).
For an overview on agents and agency in TS, see Buzelin (2011).
4 Concerning the nomenclature of attitudes and expectations, attitudes can be defined
as a relatively stable system of beliefs concerning an object or person which results in
the evaluation of that object or person (Lawson and Garrod 2001: iBook location 91;
Marshall 2003: Kindle location 1156; Abercrombie et al. 2006: 21; Bruce and Yearley
2006: 13; Darity 2008: 200; Fleck 2015: 175). Normative attitudes can be expressed by
statements like “I believe that others should/shouldn’t do X” and should not be confused
with preferences (Bicchieri 2017a: Kindle location 293–5). In turn, expectations are
defined, according to Bicchieri (2017b), as “just beliefs” that can be empirical or norma-
tive about what happens or should happen in a given situation. Empirical expectations
are typically expressed in sentences such as “I believe that most people do X”, “I have
seen that most people do X” and “I am told by a trusted source that most people do X”
(Bicchieri 2017b). Normative expectations are expressed by statements such as “I believe
that most people think we ought to do X”, “I believe that most people think the right
thing to do is X”, “I think that others think I should X” (Bicchieri 2017b).
5 For the questionnaire aimed at revisers, visit www.surveymonkey.com/r/95VVFGJ; for
novice translators, visit www.surveymonkey.com/r/9BJDXBR; for experienced transla-
tors, visit www.surveymonkey.com/r/9PZMNDS.
6 On the questionnaires, the term ‘reviewer’ was used instead of ‘reviser’ to refer to the
same professional, since, according to our research, this was the most common term in
biomedical revision.
7 For more information on this tool, visit www.surveymonkey.com.
8 Since these answers respond to an ‘all that apply question’, a translator could choose more
than one text type and, therefore, the percentages do not add up to 100%. The same
applies to the questions aimed at novice translators and revisers.
9 Concerning the translation and self-revision process of translators, revisers’ normative
expectations are not applicable because revisers’ normative expectations would refer to
the revisers’ own process, that is, revisers’ beliefs about what others believe the reviser should
do in the course of their work. In this section we are concerned with revisers’ beliefs
about the process of translators.
10 A systematic search conducted in September 2018 in the Translation Studies Bibliogra-
phy by keyword and abstract was not able to identify studies of user manuals for medical
devices and software within medical translation.
11 It should be noted that ‘accuracy’ and ‘plan and management’ also emerged as common
themes in the topic analysis, but given that they were not expressed when eliciting all
three types of beliefs, they are not considered, for the purposes of this study, to be beliefs
as strong as ‘terminology’ and ‘language’.
9
ANOTHER LOOK AT REVISION
IN LITERARY TRANSLATION
Ilse Feinauer and Amanda Lourens
is brought under scrutiny, and the relationship between these activities is recon-
sidered. (The distinction between revision and editing is not under consideration
here, and we use the term ‘editor’ or ‘proofreader’ as assigned by the project manag-
ers to describe the revision-related activities.)2
Both the 2017 study and the present study involve professional revisions performed on three works of fiction translated from Afrikaans into English3 for one of the larger book production companies in South Africa that employs freelance translators, revisers and editors. For both studies an empirical analysis of the documented relationships between the agents involved in three different literary
translation projects was conducted.
Scocchera (2013: 144–5) points out that revision seems to be identified through the comparative aspect, as can be seen in the following definitions. Delisle, Lee-Janke and Cormier (in Scocchera 2013: 144) describe revision as follows:
the term revision refers to a comparative check carried out on the TT and its
respective ST in order to identify problems and errors and introduce the nec-
essary corrections or amendments. In the context of professional translation,
revision indicates one particular stage in the chain of production of translated
documents and can be defined as the process aimed at identifying features of
a draft translation that fall short of the required quality standards and at intro-
ducing the necessary amendments and corrections.
(our emphasis)
For Robert et al. (2017a: 4), revision is the reading of a draft translation by a per-
son other than the translator to detect features of the draft translation that fall short
(we would say may fall short) of what is acceptable (according to the translation
and revision brief) and to make appropriate corrections and improvements before the
translation is delivered to the client. However, Mossop (2014a) states that revision is
performed not only by a person other than the translator, but also by the translator.
He draws a clear distinction between two main types of revision, depending on the
agent doing the work.
Scocchera (2013: 143) sheds new light on the concept of revision by introduc-
ing an etymological perspective: Revision has as its root revisere, meaning “to look
again”. This points towards revision as an additional examination of the text, when
the translator or reviser takes a fresh approach and sees “with new eyes” as if seeing
it for the first time. Even though Scocchera (2013: 143) is of the opinion that this etymological definition does not tell us much about the actual nature and scope of the actions performed by revisers, we believe that this etymological perspective might prove to be extremely useful when reconsidering the current definitions of
‘revision’ and ‘editing’.
Scocchera (2013: 144–5) goes further, stating that a ‘comparative examination’
is a typical and critical trait of revision, with the implication that revision takes
place only when a cross-check between the source text and target text is per-
formed. However, she remarks that the extent of the cross-check will vary in
different situations, and allows that revision might at times be a unilingual activity.
The rationale behind this research was to investigate whether the theoretical under-
pinnings of the revision process are enacted by the various real-life agents in our
case study. In other words, we wanted to know whether theory speaks to practice
as far as literary revision is concerned. If this is not the case, could the theoretical
base be enriched by a description of the real-life events taking place during a liter-
ary translation production process?
the prestigious Hertzog Prize for Afrikaans literature in 2004, and it occupies a
prominent position as a serious literary text in the Afrikaans literary polysystem
(Spies 2013: 191–2).
The award-winning youth novel Vaselinetjie/My name is Vaselinetjie tells the
coming-of-age story of an abandoned white baby girl who was raised by a couple of
colour and, in a heart-breaking turn of events, was taken away by child welfare
services and sent to a state orphanage at the age of 11. Vaselinetjie was awarded
the prestigious MER Prize for Afrikaans youth literature as well as the Jan Rabie/
Rapport Prize in 2005. The source text is canonized as a youth novel that has
been prescribed at high school level but is also seen as a ‘crossover’ book that both
teenagers and adults can relate to (Spies 2013: 193–4).
For the volume of short stories published in English as In bushveld and desert: A
game ranger’s life, a number of stories by Christiaan Bakkes, who is well known as a
seasoned traveller and game ranger in Africa, were selected by the publisher to be
translated. Bakkes, not having won any literary prizes, does not enjoy the same sta-
tus in the Afrikaans literary system as, for example, Winterbach. The target readers
of In bushveld and desert are people who enjoy well-written stories about Africa and
nature, and especially international tourists in Southern Africa (Spies 2013: 195–6).
We undertook an investigation of the various revision activities—comparative
as well as unilingual—performed during the revision of the three aforementioned
manuscript translations. ‘Revision’ is used to refer to all the activities pertaining
to the checking of the draft translation until the manuscript is ready for publica-
tion. This choice is motivated by the haphazard use of the terms revision, editing
and proofreading in the archival documents. Only one of the cases, for example,
mentions a reviser, while the other two cases refer only to editors and/or proof-
readers. In the case of In bushveld and desert: a game ranger’s life, the agent doing
the revision refers to herself as an ‘editor’. Although we do not agree with these
naming practices, we refer to the various agents in the terms used by the publish-
ing house: namely author, translator, reviser, editor, compiler6 and proofreader. The
various revision activities were mapped as different discernible stages, structured via the flow of the e-mail correspondence.7 In the production process of To hell with Cronjé, for example, five stages could be identified (see section 3). The only
constant factor across these processes is the translator, who was responsible for all
three translations.
Following the approach adopted by Munday (2012) in his case study of three
literary translation and/or revision processes, this study utilized archival material
in order to study three different sets of agents involved in the translation process.
According to Munday (2012: 104), archival documents have been underutilized
in Translation Studies, even though they hold the possibility of providing detailed
retrospective insight into the decision-making processes involved in translation and
revision. Since then, this kind of study, in which manuscripts, drafts and other
working documents such as e-mail correspondence are investigated, has emerged
as a discipline known as genetic Translation Studies. According to Cordingley and
Montini (2015: 1), this field aims to reveal “the complexity of the creative processes
engaged in [the production of modern literary works]”. Our study can therefore be
classified as a genetic translation study: the productions investigated here are three
literary translations, and we investigated all stages of revision the manuscripts went
through before publication. See also the genetic study by Scocchera (2015), where
her focus was on the active roles of both translators and revisers in the genesis of
literary translations in Italy. Her object of study was the interplay between transla-
tors and revisers in the form of text changes, suggestions and comments on the
various manuscript versions.
Our working documents consisted of all e-mail correspondence among the
agents involved in the translation and revision of three Afrikaans works of fiction.
These three sets of correspondence were substantial8 and included all discussions
between the agents working on the various drafts. The e-mail discussions were the
various agents’ only mode of dealing with translation challenges, including ter-
minological queries and grammar as well as content issues. Our analysis included
the agents who were involved, as well as how the production played out through
the different stages of the process that was shaped by their actions and interactions.
The actual draft manuscripts were not analysed; rather, we performed a discourse
analysis on the correspondence in order to describe the revision processes, includ-
ing the sometimes intricate interplay between acts of self- and other-revision. We
compiled an inventory of the various acts performed by the agents, as reflected
in their documented discourse, and summarized these as the main activities per-
formed in separate stages.
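As a rough illustration of how such an inventory can be structured (the study itself worked qualitatively on the e-mail record; the agents, stages and acts below are invented examples, not the actual data):

from collections import defaultdict

# Hypothetical records distilled from e-mail correspondence:
# (stage, agent, act) triples noted during the discourse analysis.
acts = [
    (1, "translator", "self-revision note"),
    (1, "translator", "query to author"),
    (2, "author", "answer to translator query"),
    (2, "author", "other-revision"),
    (3, "reviser", "other-revision with ST check"),
]

# Group the acts by stage and agent to summarize the main activities.
inventory = defaultdict(list)
for stage, agent, act in acts:
    inventory[(stage, agent)].append(act)

for (stage, agent), agent_acts in sorted(inventory.items()):
    print(f"Stage {stage}, {agent}: {', '.join(agent_acts)}")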
3. Findings
The translator makes translation notes on her own translation, which can be inter-
preted as the footprint of her self-revision of her work. However, these notes are
aimed at an audience (the commissioning editor and the author), so that she does
not perform the action of self-revision in isolation before the draft enters a phase
of other-revision. She rather positions herself within a network of agents on whom
she relies for help, but to whom she also provides the justifcation for some of her
translation choices. For instance, she relies strongly on the author to answer certain
questions, especially those dealing with factual issues. In this phase she shows an
adherence to gatekeeping activities and specifically tries to ensure that an accurate target text is produced, as in the following query:
[p 89 of the original text. Klassifikasie van stollingsgesteentes hang af van oor-
sprong. I have translated this (oorsprong) with ‘origin’, but maybe this is not
correct. Does this perhaps mean it depends on where the rock is found?
Would the author please have a look and correct?] (our translation; word in
parentheses added as explicitation).9
p 89 van die oorspronklike teks. Klassifikasie van stollingsgesteentes hang af
van oorsprong. Ek het dit vertaal met ‘origin’, maar dink dis dalk nie reg nie.
Beteken dit dalk dit hang af waar die klip gevind word? Sal die skrywer asb kyk en
regstel?
Next, the author engages in other-revision when she revises the draft transla-
tion, but once again this does not happen with the author as the only agent at
work in this phase. Instead, she enters into a dialogue with the translator, providing
answers but also asking her own questions. The dialogic nature of her revision is
emphasized by the fact that she asks for advice from the translator, even when she
disagrees with the translator’s choices:
The network of agents is further expanded when the author’s notes to the trans-
lator are also sent to the reviser, who in the third stage of revision busies herself
with an other-revision, during which she often refers to the source text:
p 41 (Niggie). The word ‘furious’ in ‘furious swirling’ is too strong, I think.
The Afrikaans is hewige gekolk. Perhaps just leave out ‘furious’—‘the swirling
and the pitching of the stars’ works nicely.
In the fourth stage, the translator performs a revision of the already twice-
revised version (excluding her initial self-revision). In this stage, she is seen doing a
self-revision once again (the draft being her product), but she is simultaneously
performing an other-revision of both the author’s and editor’s suggestions:
[‘in what way the appalling nature of the day had been affected’—I am afraid
it does not make sense to me. Would you please reformulate?]
‘in what way the appalling nature of the day had been affected’—ek is
bevrees dit maak nie vir my sin nie. Sal jy asb maar herformuleer?
Shifting roles are apparent during this stage: The author becomes a reviser dur-
ing the second stage, but her product is revised in this fourth stage by the translator,
who is now in the role of (other-)reviser.
During the fifth stage, the editor is seen performing an other-revision of the
draft, but with the focus on gatekeeping activities, especially pertaining to the
target language. However, this is not a unilingual check since the source text is
consulted and used as a basis for the editing decisions in this last stage.
p 34 rank as a billy goat: The Groot Woordeboek does translate geil as rank,
fertile, sensual, so strictly speaking rank would be correct here. But rank in
English implies foul smelling, and while we know that Oompie does smell, I
get the impression that what is being suggested here is his sexual appetite—
the next sentence is a reference to his many wives. So wouldn’t randy as a
billy goat be a better description?
As with the first translation, the translator writes accompanying notes to justify or to explain her translation choices, and to ask questions. Once again, her
activities during this stage reveal networking actions rather than a solitary process
of self-revision:
[The huistannies—or simply the tannies. They are also called huismoeders. I
browsed websites of children’s homes and have seen huismoeders are indeed
called house mothers on various websites. Sometimes I have used this, but I
feel one cannot use this all over before it turns irritating. I have also steered
away from Auntie—used this sporadically only. As in the case of Auntie S’laki
as well as Tannie Hilde and in some instances Auntie Meredith. When ‘die tan-
nies’ is mentioned, I have frequently used ‘matrons’. It works for me. Please
see whether you agree. I myself was in a school hostel and we always referred
to the matrons.]
Die huistannies—of net die tannies. Hulle word ook huismoeders
genoem. Ek het op die webwerwe van kinderhuise rondgekyk en gesien
dat die huismoeders wel op verskillende webwerwe house mothers genoem
word. Ek het dit by tye so gebruik, maar dit voel nie vir my mens kan dit
oral gebruik sonder dat dit hinderlik word nie. Ek het ook weggeskram van
Auntie—dit net hier en daar gebruik. Wel in die geval van Auntie S’laki en
ook Tannie Hilde en hier en daar Auntie Meredith. Wanneer daar dus van
‘die tannies’ gepraat word, het ek dit dikwels ‘matrons’ gemaak. Dit werk vir
my. Kyk gerus of julle saamstem. Ek was self in ‘n skoolkoshuis en ons het
altyd van die matrones gepraat.
In a second stage, the editor revises the translation but refers back to the source
text, and points out that a problem that has been spotted during the translation can
actually be traced to a problem in the source text (the editor was involved in the
production of the source text as well):
[Yes, this was one of the problems in the Afrikaans not addressed then.
There was a lot of work needed for the manuscript, and one of the strange
things is that one would reach saturation point and just leave things as they
are since there were numerous more urgent issues screaming to be resolved.
I have told myself this is probably how it would have worked in real life—
the term [Peppie] shows a type of “floating” meaning, depending on the
circumstances where it is used. I have indeed now solved the issue of Hefner
who wanted to get rid of Peppie as head boy. He now wants to get rid of
“that particular Peppie”]
Ja, dit was een van die probleme in die Afrikaans wat destyds nie aang-
espreek is nie. Daar was ongelooflik baie werk aan die manuskrip, en een
van die vreemde dinge is hoe mens naderhand ‘n versadigingspunt bereik en
dinge maar los soos hulle is as daar talle dringender kwessies is wat roep om
opgelos te word. Ek het vir myself gesê dit is waarskynlik hoe dit in die werk-
like lewe sou gewerk het—dat die term [Peppie] ’n soort van ’n “swewende”
betekenis het, afhangende van die omstandighede waarin dit gebruik word.
Ek het wel nou die kwessie opgelos van Hefner wat van ‘n Peppie as hoof-
seun ontslae wou raak. Hy wil nou van “that particular Peppie” ontslae raak.
The editor mostly approves the translator’s choices, although she sometimes indi-
cates that she has changed the translator’s text:
[I have felt there could be a misunderstanding when one now reads about
house mothers and then about matrons. (As if there is a number of house
mothers with one matron at the head of all of them.) Therefore I have made
all matrons. I have kept the Aunties and Tannies unchanged.]
Ek het gevoel daar is ruimte vir misverstand as mens nou van house moth-
ers lees en dan van matrons. (Asof daar dalk ‘n klomp house mothers kan
wees met een matron aan die hoof van hulle almal.) Ek het dit dus deurgaans
matrons gemaak. Die Aunties en die Tannies het ek onveranderd gelaat.
In a third stage, the draft translation is other-revised by the author. She goes beyond being dissatisfied with the translator’s and editor’s changes.
However, the author also expresses her dissatisfaction with some of her own
choices in the Afrikaans source text and indicates that she has engaged in some
rewriting of the original during the revision process:
[GANGSTER TALK: There are hidden benefits to sitting in a rehab centre for drug addicts rewriting your book . . . so I picked up my new drug vocabulary from real addicts to keep it as REAL and CURRENT as possible. Hope you like
it? I think it puts Vaselinetjie once again in the NOW and not in the past any
longer (I have also added mobile phones, Mxit and iPods.)
Beers and cigarettes were upgraded to marijuana, meth and crack.
Gatte refers to the police.]
GANGSTER TALK: Daar is versteekte voordele om in ‘n rehab sentrum
vir dwelmverslaafdes jou boek te sit en oorskryf . . . so het ek dus by egte
druggies gaan kers opsteek vir my nuwe dwelm woordeskat om dit so EG en
HEDENDAAGS as moontlik te maak! Hoop julle hou daarvan? Ek dink dit
plaas Vaselinetjie weereens in die NOU en nie meer in die verlede nie. (Ek
het ook selfone, Mxit en iPods ingebring.)
Biere en sigarette is ge-upgrade na dagga, tik en crack.
Gatte verwys na polisie.
[Regarding A’s changes: Many are good and a distinct improvement, but
sometimes it seems to me as if A was in a totally different “mode” to what she
had been while writing the original Vaselinetjie. Therefore I could not keep
all her changes as is in the text, since some would have harmed the book.
And since after all these years I am still CRAZY about Vaselinetjie, I would
not like to see anything being done to the book that will spoil it.]
Wat A se veranderinge betref: Baie daarvan is goed en ‘n besliste verbeter-
ing, maar soms is dit vir my asof A in ‘n heel ander “modus” is as wat sy was
toe sy die oorspronklike Vaselinetjie geskryf het. Ek kon haar veranderinge
dus nie slaafs in die teks aanbring nie omdat party daarvan die boek skade sou
aangedoen het. En omdat ek na al die jare nog steeds MAL is oor Vaselinetjie,
sou ek nie graag wil sien dat daar enigiets aan die boek gedoen moet word
wat dit bederf nie.
In a fifth stage the author again self-revises her adapted text. Now she bases her revision activities on the editor’s comments, which she addresses one by one—a self-revision that is shaped by an agent other than the self. Self-revision occurs when she insists on certain decisions to rewrite. A form of other-revision occurs in the same comment when she revises the editor’s suggestions, as in this example
where she agrees with the editor:
[Editor: Some of the scenarios and expletives are just too crude—it will sink
the book totally when it comes to considering it for use in the classroom.
Author: Another snag. I agree. Whereas werfetter, poester, and jou nage-
boorte could be quite humorous in Afrikaans, “motherfucker” and “clit” and
“cunt” are simply just terrible. To me even “bitch” is worse than “teef?”
I struggled here. “Motherfucker” and “clit” have been removed. I’ve kept
one “cunt”, but that’s negotiable. I’ve made a few Afrikaans additions and
INSIST THAT THEY STAY THERE. (“Untie that naai,” . . ., tanggeboortes,
etc.) it is almost impossible to find a child in the poorer community who
speaks pure Afrikaans or English. In that sense it saves our attempt in telling
Vas in English. English Vaselinetjie has a chance to be the MOST GENUINE
CAPE SOUTH AFRICAN BOOK up til now!]
Editor: Party van die scenarios en vloekwoorde is net te kru—dit sal die
boek heeltemal kelder wanneer dit kom by gebruik in die klaskamer.
Author: Nog ‘n tamelêtjie. Ek stem saam. Waar werfetter, poester, en jou
nageboorte in Afrikaans nogal humoristies kan wees, is “motherfucker” en “clit”
en “cunt” plainweg net aaklig. Selfs “bitch” is vir my erger as “teef?” Hier het
ek gesukkel. “Motherfucker” en “clit” is verwyder. Ek het een “cunt” oorge-
hou, maar dis onderhandelbaar. Ek het ‘n paar Afrikaanse toevoegings gemaak
en DRING AAN DAT DIT INBLY. (“Untie that naai,” . . ., tanggeboortes,
ens.) Daar is amper nie meer iets soos ‘n kind wat ‘n suiwer Afrikaans of Engels
praat in jou armer gemeenskap nie. In daai sin, red dit ons poging om Vas
in Engels te vertel. Engelse Vaselinetjie staan dus die kans om die MEES EG
KAAPSE SUID-AFRIKAANSE BOEK tot op hede te wees!
Lastly, the translator revises the draft, thereby engaging in self-revision once
again, but she also engages in other-revision. She tries to persuade the other agents
in the network to retain more of the style and tone of the original source text,
instead of accepting all of the author’s changes and rewrites:
[The whole part about the mobile phones seems inserted. I would omit it.
This does not belong here.
I do not agree with all the Americanisms littered throughout the text. I
don’t think it rings true in the South African context—regardless of how deep
the influence is of television, etc. See e.g. outta—p. 123; cussing (p. 57)—I’ve
never heard someone use the word except for in a cowboy movie/book. I
also do not like bootie at all and shall really prefer it to not be used. Can
we please replace this with something else? Dissed (p. 100)—I can maybe
go along with that, but would prefer that it is not used so often. (See also
pp. 79; 115; 194.) And oh, I also do not like homie at all. (pp. 115, 131.) It’s
so American gangster!]
Die hele gedeelte oor die cellphones klink aangelas. Ek sou dit weglaat.
Dit hoort nie hier nie. Ek stem nie saam met die Amerikanismes waarmee
die teks nou besaai is nie. I don’t think it rings true in the South African
context—al is die jonges ook hoe onder die invloed van televisie, ens. Sien bv.
outta—p. 123; cussing (p. 57)—ek het nog nooit gehoor dat iemand die woord
gebruik behalwe in ‘n cowboy-fliek/boek nie. Ek hou ook absoluut niks van
bootie nie en sal regtig verkies dat dit nie gebruik word nie. Kan ons dit met iets
anders vervang, asb? Dissed (p. 100)—ek sal nog daarmee saamgaan, maar sou
verkies dat dit nie so baie gebruik word nie. (Sien ook pp. 79; 115; 194.) En,
ai, ek hou ook niks van homie nie (pp. 115, 131.) Dis so American gangster!
In the first stage, the translator once again engages in self-revision but relies strongly
on the establishment of a dialogue with the author. The translator is acutely aware
of her own lack of knowledge regarding the outdoors and asks for the author’s help
with the meaning and uses of certain words and phrases:
[The leopard: I truly cannot find a translation anywhere for muisneuse. And
the only Afrikaans vingerhoede I know are ‘foxgloves’—and they are not indig-
enous, therefore this must be a different type of vingerhoed. C, please help.]
The leopard: Ek kry sowaar nêrens ‘n vertaling vir muisneuse nie. En die
enigste Afrikaanse vingerhoede wat ek ken, is ‘foxgloves’—en hulle is nie
inheems nie, dus moet dit ‘n ander soort vingerhoed wees hierdie. C, help asb.
Apart from just revising her own translation, the translator frequently comments on aspects, such as content, that lead to some translation problems:
[I have no idea what speklap is. Could also not find it anywhere. I have stuck
to ‘a roll of cloth’. Help will be appreciated. Someone has asked in the mean-
time whether this is not perhaps ‘shammy’ (chamois) in other words seemsleer]
Ek het geen idee wat speklap is nie. Kon dit ook nêrens kry nie. Ek het
maar volstaan met ‘a roll of cloth’. Hulp sal waardeer word. Iemand het intus-
sen vir my gevra of dit nie dalk ‘shammy’ (chamois) is nie, maw seemsleer.
When the author revises the draft translation in the second stage, he engages
in an other-revision, but specifically via a process of answering the translator’s
questions—once again a dialogue instead of a solitary activity:
[Translator: I also do not know so well what ‘boomeilande’ are. I have said
‘wooded islands’. But is it an island in the river with trees on, or is it some-
thing such as ‘a clump of trees’, C?
Author: Also see the story Fed up. Boomeilande—tree islands—wooden
islands—these are clumps of trees on an elevated area in the floodplains
that usually are grassplains. Floodplains are usually dry, only during floods
they are under knee-deep or deeper water. Then the tree islands are true
islands.]
Translator: Ek weet ook nie lekker wat ‘boomeilande’ is nie. Ek het gesê
‘wooded islands’. Maar is dit ‘n eiland in die rivier met bome op, of is dit iets
soos ‘a clump of trees’, C?
Author: Sien ook die storie Fed up. Boomeilande—tree islands-wooden
islands—hierdie is groepies bome op verhewe grond in die vloedvlaktes wat
oor die algemeen gras vlaktes is. Vloedvlaktes is die meeste van die tyd droog,
net tydens vloede is dit onder kniediep of dieper water. Dan is die boomei-
lande ware eilande.
The translator performs an other-revision in the third stage, but her activities
are informed by a strong loyalty to the source text and the author, as is evident
from the following example where she criticizes the editor’s changes to her version
(which stays loyal to the source text):
[In the first paragraph I (and C) said “Next to the Land Rover . . . Horace
McAllistair cleared his throat softly.” S has removed the man’s surname, to
only read
“. . . Horace cleared his throat . . .” C has a tendency to call certain people
by their names and surnames. It’s a type of ‘signature’ thing. I think of this
man as Horace McAllistair—not as Horace.]
In die eerste paragraaf het ek (en C) gesê “Next to the Land Rover . . .
Horace McAllistair cleared his throat softly.” S het die man se van uitgehaal,
sodat dit slegs lees “. . . Horace cleared his throat . . .” C het nogal ‘n neiging
om deurgaans sekere mense op hulle naam en van te noem. Dis weer ’n
soort ‘signature’ ding. Ek dink aan hierdie man as Horace McAllistair—nie
as Horace nie.
In the fourth stage, the translator’s other-revision takes the form of an edit of the
editor’s edit—this time without referring back to the source text, but with a strong
adherence to the author’s suggestions. In this stage, the focus is on gatekeeping
activities in the sense that the target language receives emphasis, although personal
opinion seems to dictate a substantial number of her changes:
[‘In the setting late afternoon sun . . .’ now sounds terrible to me. Can we not
only use ‘In the setting sun’?]
‘In the setting late afternoon sun . . .’ klink darem nou vir my aaklig. Kan
mens nie volstaan met ‘In the setting sun’ nie?
In the fifth stage, the proofreader performs other-revision as a light edit without
any reference to the source text and focuses on the correct and authentic form of
the target language:
This may be nitpicking, and the sense is clear, but ‘Two metres of brute force
was Fighting’—as it stands, the subject is ‘two metres’, not ‘force’, so strictly
speaking it should be ‘were fighting’. Perhaps it could be changed to ‘Two
meter’s worth of brute force’.
‘Larder’ is a very English10 word—I don’t think I have ever heard the word
referring to a South African one—we would call it a pantry.
Lastly, the author engages in other-revision but uses the source text as the norm for his final decisions.
differ from technical networks because, unlike the latter, they are not neces-
sarily stable; they ‘may have no compulsory paths, no strategically positioned
nodes’. In other words, whereas technical networks (e.g., electronic, rail,
etc.) appear as a given structure that can be extended—hence as something
that can be mapped—actor-networks can only reveal themselves when activated.
By highlighting creativity and unpredictability, both concepts, that of actor-
network and that of translation, point to the difficulty of reifying the process
by which (scientific) facts and artefacts are produced, hence the need to ana-
lyze this process from the inside, to observe how actors make their decisions
and interact while still unsure of the outcome.
(our emphasis)
By viewing the revision processes from the inside and by observing the actors’
decisions and interactions, we were able to see that there is no clear-cut line
between self- and other-revision. Our main findings can be summarized as fol-
lows: It does happen that a translator performs an initial self-revision, but this
process may involve a dialogue with other agents in order to solve certain problems.
Further distinctions could then be drawn, based on (1) the presence or absence
of comparison with the source text and (2) the agent at work. A division could
then be made depending on whether comparative activities are involved, so that
two main types of revision are discerned: (1) unilingual revision and (2) source-text-informed revision.
Unilingual revision could include the rewriting of a text in plain language, or
the final reading of a translated manuscript. Source-text-informed revision could include any type of comparative reading, whether the whole translation is compared to the source text or the source text is consulted only when a passage in the translation
is questionable. An editor could also be contracted to do a unilingual check of a
text for publication but may stumble across an unintelligible passage, for which the
solution may lie in finding the source of the text (in another language) in order to
clarify the meaning.
These two main types of revision could then be refined into agent-driven tasks, so
that unilingual revision would include unilingual editor-revision as well as proofreader-
revision. Unilingual editor-revision could, for example, be the more encompassing
type of editing as typically performed by the commissioning editor or compiler
(where content issues may be addressed). It could also be a combination of sty-
listic editing (thus prioritizing the needs of the readers) and copy-editing in the
sense of a focus on adherence to linguistic rules, depending on the specifc task.
Proofreader-revision could then be conceptualized as a solely gatekeeping activ-
ity depending on the task at hand—a last mechanical check for errors in typing,
punctuation and typography, or even a last thorough copy-edit to ensure that no
language errors slip through. In all of these instances, the project manager could
brief the agents that a comparative check is not needed, but include in the brief the
ratio of gatekeeping and language therapy tasks to be performed.
Source-text-informed revision may be conceptualized as translator-revision,
author-revision, reviser-revision (by a second translator) or source-text-informed
editor-revision (editing with reference to the source text). Once again, the project
manager can assign tasks as required by the project. If the author is willing to be
involved in the translation and revision process, author-revision may be more or
less emphasized depending on the author’s envisioned role. There might even
be some rewriting by the author—a process that could be called transcreation,
although this should be carefully overseen by the project manager, so as not to
jeopardize the entire translation process. Translator-revision may refer to the first translator’s initial check of his or her own translation, but it may also refer to this translator’s revision of the author’s inputs. Furthermore, it can be seen as an interactive process involving questions, clarification and justification. Reviser-revision
would be congruent with the traditional notion of other-revision, meaning that
another translator checks the work of the first translator against the source text.
Editor-revision happens when a non-translator revises the manuscript with the
main focus on gatekeeping activities, especially those pertaining to the target
language. This may also include the editor consulting the source text in order to
solve some remaining issues.
For all these activities, the specific styles and applications will vary from project
to project and also from agent to agent, depending on the nature of the text and on
individual working styles and personalities. It is therefore of vital importance that
the project manager take all these factors into account in order to ensure that the
production process runs smoothly. We therefore want to emphasize that our pro-
posed repackaging of terminology is not a universal model, but rather a framework
that can be adapted for individual projects. This framework is a data-based one,
drawing on patterns that emerged from the analysis of real-life data.
With this contribution of a repackaged sociologically driven, empirically based
terminology for literary revision, we hope to have added to the theoretical basis of
language practices.
6. Concluding remarks
A similar investigation should be undertaken for non-literary translation projects, both for publication and non-publication purposes, as well as for other literary translation projects where, for example, not all agents have command of both source and target languages. This would make it possible to see whether our findings have wider application than this case study and whether other revision roles could be ascribed if additional agents were to be involved. This might be of particular significance for translation where English is paired with a South African indigenous language other than Afrikaans. It might also be worthwhile to investigate non-human translation processes to see whether a terminological distinction between revision, editing and post-editing is still necessary, or whether the term 'revision' is sufficient to cover these revision activities as well. The lumping of these activities under the sole term 'revision' may also make it easier to follow a more sociological approach in researching real-life translation projects as "agent-grounded researches . . . from the viewpoint of those who engage in it, in particular (social, cultural or professional) settings" (Buzelin 2011 in Scocchera 2016b).
Notes
1 The archived e-mail correspondence between the agents about their processes (the same
material used for the current study) was analysed in a qualitative way. We suggest that this
chapter be read in conjunction with Feinauer and Lourens (2017).
2 We found that the terms ‘revision’ and ‘editing’ were used in an apparently haphazard
way in the archival material. In our view, these terms are used in practice without all the
agents having agreed on the precise content of a task labelled ‘revision’ or ‘editing’ by
project managers.
3 It is important to note that in South Africa, all agents (including the source text author)
working with Afrikaans and English as a language pair are fully bilingual. This implies
that a comparative reading for accuracy, readability and target language literary style
could be done by all agents involved.
4 Mossop lists 12 revision parameters, categorized into four groups. Tailoring as well as
grammar, spelling, punctuation, house style and correct usage fall into Group C, which
deals with problems of language and style. The other groups deal with content, transfer
and presentation (Mossop 2014a: 134).
5 The authors would like to express their gratitude to Carla-Marié Spies for the use of the
dataset collected by her and published as appendices to her PhD dissertation (Spies 2013).
6 In Bushveld and desert is a compilation from four volumes of Afrikaans short stories,
selected and arranged chronologically by a specific agent (the compiler) tasked by the
project manager.
7 The full set of e-mail correspondence can be requested from the authors.
8 Nearly 30,000 words in total (66 pages, 1.5 line spacing).
9 Our English translation of the original communication between the agents will be given
first, followed by the original Afrikaans as copied from the e-mail correspondence. In
cases where the correspondence took place in English, only the English will be given.
Quoted excerpts from the source texts are italicized.
10 The proofreader means British English.
PART IV
Training
10
REVISION AND POST-EDITING
COMPETENCES IN TRANSLATOR
EDUCATION
Kalle Konttinen, Leena Salmi and Maarit Koponen
The translation industry and institutional translation services obey two competing principles: the maximisation of productivity and the maximisation of quality. For long-term success, a translating organisation needs to balance productivity with adequate quality assurance measures. The two primary ways to safeguard the quality of translation are revision and, when machine translation (MT) is part of the workflow, post-editing. Even with ongoing automation in many aspects of translation service, revision and post-editing rely on human skill and expertise. Revision and post-editing in digital translation environments are part of the same workflow as translation and depend on similar skill sets. Graduates of translator education programmes are thus well placed to perform these tasks. Consequently, revising and post-editing belong to the core objectives of translator education, as reflected, for example, in the inclusion of both skills in the EMT Competence Framework (European Master's in Translation Network 2017).
Revision is typically carried out in "translate-edit-proofread" workflows (Kockaert and Makoushina 2008) that rely on translation service standards like ISO 17100 (2015) and reflect "traditional hierarchical approaches to managing professional translation" (Drugan 2013: 125). Alternatives to such top-down quality assurance scenarios exist in crowdsourcing (see Jiménez-Crespo 2018), where it is to some extent possible to remedy any shortcomings in professional expertise or attitudes among the contributors through alternative quality control mechanisms and workflow practices that replace traditional revision (Zaidan and Callison-Burch 2011). However, standardised translation workflows still prevail as the dominant form of translation service.
Integration of MT has shown promise for improving productivity in translation workflows, but post-editing by humans is still indispensable as the only practical means to ensure targeted quality levels. Technological developments and changing practices in translation production are leading to increased integration of human translation with MT. Through this "reconfiguration of the translation space" (Pym 2013: 492), revision and post-editing come closer to each other both conceptually and in practice. The partial confluence of translation, revision and post-editing tasks in a digital translation environment, together with the need to find efficient and pedagogically effective curricular solutions for teaching these tasks, presents translator education with the challenge of identifying both shared features and differences in the relevant competences.
In this chapter, we look at the commonalities and differences in revision and post-editing competences in order to identify a basis for an efficient and pedagogically effective model for teaching revision and post-editing in translator education programmes. Utilising the commonalities of revision and post-editing, while taking into account their differences, we present objectives, learning content and teaching methods for revision and post-editing training in the translator education programme at the University of Turku. Section 1 discusses previous work on teaching revision and post-editing. Section 2 presents an analysis of the overlap and the differences between post-editing and revision competences. Section 3 describes a translator education curriculum where the initial training in revision takes place in translation courses and the initial training in post-editing in translation technology courses, while the integration of the two activities into the translation workflow takes place in project-based translation courses. Section 4 summarises the key ideas for pedagogical planning.
more pragmatic about the task" (Schjoldager et al. 2008: 804). The module itself consisted of an intensive week with lectures, workshops and an introduction to the tools used, distance-learning assignments, and a final exam that included revision from B-language to A-language (Schjoldager et al. 2008: 808–9).
Another example of advanced training in revision is discussed by Way (2009), who describes a course that integrates professional practices, including revision, in the final year of translator education. The course includes an introduction to the mechanics of the revision and editing process through suggested reading as well as an introduction to using spell checkers. After examining real examples of a revising and editing process at a translation agency, practical revision tasks are carried out, first as other-revision and then as self-revision, using an evaluation sheet as an aid (Way 2009). Revision taught as part of a translation course is also described by Pietrzak (2014), who uses a method she calls "group feedback", where students revise each other's translations of the most problematic passages in the source text, compiled and anonymised by the teacher. According to Pietrzak, in this way the students learn to concentrate on the text under revision without feeling that their errors are exposed, and they also gain experience in quality management. Scocchera (2019) reports on an experimental revision module within a translation course, where students were asked to revise the same translated text at four stages, first without and then with the source text, and subsequently write comments to the imagined translator, before and after theoretical training on revising.
Revision is generally taught into the students' A-language. Robert et al. (2018: 7) state that revision into the foreign language may not be considered best practice; however, "it is common in countries with languages of lesser diffusion". Revision into the B-language may be more problematic than into the A-language, as Mossop (2007a: 19), citing Lorenzo (2002), points out: "the more time the students spent revising, and the more changes they made, the worse the output". Lorenzo (2002) also found that students were better at revising others' translations than their own. Way (2009) also considers self-revision to be more demanding than other-revision and advocates starting with other-revision in training. In contrast, Mossop (2020) suggests that other-revision may differ psychologically from self-revision and finds no basis in current translation pedagogy research for deciding when to introduce self-revision and other-revision.
Training in post-editing has been integrated into translator education curricula either as specific courses or as part of translation courses. An early proposal for post-editing course content was made by O'Brien (2002), who argues that post-editing involves particular skills separate from translation and that specific training is therefore needed. Doherty and Kenny (2014) describe a course design covering hands-on development and implementation of a (statistical) MT system as well as post-editing, through both theoretical lectures and practical labs. Flanagan and Christensen (2014) describe a course where translation students were first introduced to the basics of MT as well as post-editing guidelines and then completed two independent post-editing tasks, one following the guidelines for "good enough" quality and the second for "publishable" quality as defined by TAUS (2010). Koponen
both 'competence' and 'skills' are defined by 'ability', and various authors use them interchangeably. For the purposes of this chapter, we follow the terminology used by each author cited in the following.
Some translation competence models, such as the European Master's in Translation (EMT) Competence Framework, revised in 2017 (European Master's in Translation Network 2017), list revision and post-editing among the skills related to translation, and MT skills among those related to technology. Overall, the role of MT in the framework has changed from the previous EMT competence document, which emphasised "knowing the possibilities and limits of MT" (EMT Expert Group 2009: 7), to the current document, in which "the ability to interact with machine translation in the translation process is now an integral part of professional translation competence" (European Master's in Translation Network 2017: 7). In the "Translation Competence" section, the revised EMT framework lists the ability to "check, review and/or revise [the student's] own work and that of others according to standard or work-specific quality objectives", the ability to "apply post-editing to MT output using the appropriate post-editing levels and techniques according to the quality and productivity objectives", as well as the ability to pre-edit the source text "for the purpose of potentially improving MT output quality" (European Master's in Translation Network 2017: 7). Post-editing and revision skills are thus considered part of translation competence. The specific MT skills listed under the section "Technology" involve the ability to "master the basics of MT and its impact on the translation process" and to "assess the relevance of MT systems in a translation workflow and implement the appropriate MT system where relevant" (European Master's in Translation Network 2017: 9).
For a more detailed view of revision and post-editing competences, we next examine models related explicitly to revision and post-editing. These models include the revision competence model presented by Robert et al. (2017a), the revision competence model by Scocchera (2017a, 2019), the skills or competences needed in revision described by Künzli (2006b) and Mossop (2020), as well as the competences needed for post-editing listed by Rico and Torrejón (2012), and Pym's (2013) list of 10 skills needed in the "Machine Translation age", which combines MT and TM skills.
The revision competence model of Robert et al. (2017a) is a multicompetence model which borrows translation-related parts from two competence models. From the PACTE model (see, for example, Hurtado Albir 2017), it borrows the Bilingual and Extralinguistic subcompetences as well as the Knowledge-about-Translation subcompetence; from Göpferich's (2009) TransComp model, it borrows the Translation routine activation subcompetence and the Tools and Research subcompetence. To these, Robert et al. add subcompetences specific to revision: the Strategic subcompetence, which forms the centre of the model, the Knowledge about Revision, Revision Routine Activation and Interpersonal subcompetences, as well as psycho-physiological components addressing, for example, a "revising frame of mind as opposed to retranslating" (Robert et al. 2017a: 14). The Strategic subcompetence is the most essential, as it involves planning and carrying out the revision task ("selecting the most adequate procedure in view of the task definition, reading for evaluation, applying a detection strategy [and] applying an immediate solution or problem-solving strategy, making only the necessary changes") and evaluating the result (Robert et al. 2017a: 14). Scocchera (2017a, 2019) presents a multicompetence model of revision that comprises a partly similar set of components: analytical-critical competence, operational competence, metalinguistic-descriptive competence, interpersonal competence, instrumental competence, and psycho-physiological competence.
Künzli (2006b) identifies a set of three competence categories to consider in syllabus design for revision: strategic competence, professional and instrumental competence, and interpersonal competence. Strategic competence involves revision-specific elements of defining the task at hand, applying relevant evaluation criteria, and deciding how to address a problem (Künzli 2006b: 11), while professional competence refers to "knowledge related to professional translation practice" and instrumental competence to knowledge of the tools and information sources (Künzli 2006b: 15). Interpersonal competence is the "ability to collaborate with the different actors involved in a translation project: translators, revisers, translation companies, commissioners and/or source-text authors" (Künzli 2006b: 13–14).
A specific and practice-oriented list of abilities necessary for revisers is presented by Mossop (2020: 118–19) as part of his description of the work of a reviser. According to Mossop, these abilities include detecting problems in a translation, quickly deciding on revisions, and making small changes instead of retranslating, which also requires an understanding of different revision procedures (unilingual reading of the translation only, reading only part of the translation, reading with a focus on only specific features) and applying them where appropriate. Some of the abilities listed by Mossop are interpersonal, such as being able to explain the necessary changes to the translator, and the more trainer-oriented ability to identify weaknesses and strengths and provide feedback. According to Mossop, the reviser should also be able to "appreciate other people's approaches" and not judge translations simply for being different from their own. Finally, Mossop lists what he calls personal qualities: diplomacy in conflict situations, leadership in group projects, and cautiousness to avoid making the translation worse.
Many parallels to these revision competence models can be seen in various discussions of post-editing competences. Rico and Torrejón (2012) present three categories of post-editing competences: linguistic skills, core competences, and instrumental competences. They divide core competences into attitudinal or psycho-physiological competence (the ability to cope with specifications and client expectations for text quality and the ability to overcome uncertainty) and the strategic competence required to arrive at informed decisions when choosing among alternatives (Rico and Torrejón 2012: 170). Instrumental competences, on the other hand, include knowledge of MT systems and their capabilities, skills related to terminology, MT dictionary maintenance skills, skills in assessing corpus quality, controlled language pre-editing skills and some programming skills, while linguistic skills cover language and textual skills as well as cultural knowledge (Rico and Torrejón 2012: 170). As with the revision models of Künzli (2006b) and Robert et al. (2017a), some of these competences can be seen to overlap with translation.
The skills presented by Pym (2013) are grouped under three headings: "learning to learn", "learning to trust and mistrust data" and "learning to revise". Learning to revise comprises the detection and correction of errors, substantial stylistic revising, and the ability to revise and review in teams (Pym 2013: 496). According to Pym (2013: 493), the translator's skill set is undergoing "a very simple and quite profound shift" from identifying and generating translation solutions to selecting a solution from those automatically proposed by TM and MT systems integrated in the translator's computer-assisted translation (CAT) tool, and adapting the solution to the context. Robert (2018: 130, 145) presents a similar integrative view, noting that the use of TM matches is, in fact, a form of other-revision just like the revision of human translations. A parallel can thus be drawn between other-revision, modifying TM suggestions and post-editing MT output, particularly as these activities are increasingly integrated in the same software tools.
All the revision and post-editing models discussed thus far include subcompetences they share with translation. Leaving these aside, the subcompetences specifically related to revision and post-editing can be grouped into three larger categories for purposes of identifying commonalities and differences: (1) strategic subcompetences related to the revision or post-editing process, (2) interpersonal, psycho-physiological or attitudinal subcompetences, and (3) instrumental or tools subcompetences related to the use of translation technology. Table 10.1 presents the three groups of subcompetences, divided into those common to revision and post-editing and those specific to each. For revision, operationalisations for a largely similar set of constructs can be found in Rigouts Terryn et al. (2017), Robert et al. (2017b), Robert et al. (2018) and Scocchera (2017a, 2019). In contrast to the models in Robert et al. (2017a), Künzli (2006b) and Rico and Torrejón (2012), the category of strategic subcompetences also encompasses declarative knowledge about revision, post-editing and MT. In syllabus design and pedagogical practice, a strategy and the domain where it is applied are intrinsically linked. Thus, learning how to "arrive at informed decisions when choosing among alternatives" (Rico and Torrejón 2012: 170) relies on "knowledge related to professional translation practice" (Künzli 2006b: 15) and on knowledge about typical errors that need to be attended to when revising or post-editing. Metalinguistic knowledge, a part of metalinguistic-descriptive competence in Scocchera's model, falls here under the category of strategic subcompetences, although it is also an important prerequisite for interpersonal subcompetence when communicating with the translator.
In the first group in Table 10.1, strategic subcompetences, the differences between revision and post-editing are mainly due to the origins of the texts: human translation versus MT. For revision, the defining characteristics involve the professional roles of the translator and the reviser, as well as the typical features of translations produced by human agents. For post-editing, the distinguishing trait is the fact that the "agent" producing the translation is an algorithmic machine, and this origin of the translation is likely to be reflected in the features of the MT output, including typical errors.
In the second group, interpersonal, psycho-physiological or attitudinal subcompetences, an important characteristic feature of revision is communication. While revision often involves communication between human agents, post-editing is usually conceptualised as a process that involves the post-editor and the text, and possibly some form of interactive use of the MT system. Thus, the main characteristic feature of post-editing in this group of subcompetences seems to be the ability to apply a set of principles or rules in choosing a post-editing level that suits the intended purpose. In terms of communication, the use of MT may also involve communication with the developers of the MT system in a workflow where the translator identifies and analyses MT errors and reports them to the developer to improve the quality of the output.
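As a minimal illustration of what such a rule for choosing a post-editing level might look like when made explicit, consider the following sketch. It is not part of any cited framework: the function, its inputs and the two-level split are assumptions loosely modelled on the TAUS (2010) distinction between "good enough" and "publishable" quality mentioned in section 1.

```python
from enum import Enum

class PostEditingLevel(Enum):
    LIGHT = "light"  # TAUS-style "good enough": accurate and comprehensible
    FULL = "full"    # TAUS-style "publishable": also stylistically polished

def choose_pe_level(purpose: str, perishable: bool) -> PostEditingLevel:
    """Toy decision rule: durable texts meant for publication receive full
    post-editing; short-lived, information-only texts receive light post-editing."""
    if purpose == "publication" and not perishable:
        return PostEditingLevel.FULL
    return PostEditingLevel.LIGHT

# A gist translation of an internal support ticket only needs light post-editing.
print(choose_pe_level("internal-information", perishable=True))  # PostEditingLevel.LIGHT
```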
As for the third group, instrumental subcompetences, post-editing requires some knowledge of MT, and in some cases it may even involve technical tasks. Revision is less dependent on knowledge of technology, even when carried out in a technological environment. However, user interfaces specifically designed for revising, and the improved functionalities of corpus tools and grammar-checking and spell-checking tools, are becoming increasingly important technical aspects of revising.
It is interesting to note that in previous studies, revision is often seen as a demanding task more suited to later stages of translator education and to more experienced translators (for example, Way 2009; Mossop 2020), whereas
related activities. Furthermore, separate courses for revision and post-editing may not be the optimal solution if there is a need to economise teaching resources and the students' time due to the constraints of a master's programme that lasts only one or two years. In our view, the similarities and differences in the subcompetences required for revision, post-editing and translation provide a solid basis for distributing the learning content over the curriculum, with a view to achieving both efficiency and effectiveness.
Our proposal for a first rough division of the learning content utilises the differences between revision and post-editing in the categories of instrumental subcompetence and interpersonal, attitudinal or psycho-physiological subcompetence (see Table 10.1). Much of the introductory content for revision in the category of interpersonal, attitudinal or psycho-physiological subcompetence is non-technological and focused on communication between the agents of translation production. Such content fits well into translation courses, where communication with the translator, justifying corrections and giving feedback can be practised along with learning to translate. On the other hand, much of the preliminary content associated with post-editing is technical and focused on interaction with algorithmic machines. Such content can be adequately dealt with in translation technology courses, where the connections between post-editing and translation technology are in focus.
The full picture is more complicated, of course, as many of the text modification skills in the category of strategic subcompetences are shared by revision and post-editing, and some of the skills associated with revision are technological. For most students, translation technology is likely to be an unfamiliar domain, while improving texts is a skill that may be familiar from previous studies, albeit mostly as monolingual editing of one's own texts or those of peers. It is mainly because of the students' expected unfamiliarity with MT as a translator's tool that we find it useful to emphasise the close connection between technology and post-editing by placing post-editing in the translation technology course and not in regular translation courses.
The second part of our proposal puts the revision and post-editing knowledge and skills acquired in the introductory courses to the test under real-life-like situational constraints. One suitable approach takes the form of collaborative project-based translation courses that simulate the business processes of a translating organisation (for a more detailed description, see Konttinen et al. 2017). There, translation, revision and post-editing can be integrated into the technology-supported translation service workflow. Besides opportunities to practise the skills, the collaborative workflows in such courses provide a setting where students gain a comprehensive overview of the various aspects of revision and post-editing as quality assurance activities.
To illustrate the application of such combinations of distributed and integrated curricular solutions, we shall now look at the revision and post-editing training given in the two-year master's-level Multilingual Translation Studies Degree Programme at the University of Turku (120 ECTS credits). The
TABLE 10.2 Outline of the Multilingual Translation Studies Degree Programme² at the University of Turku
challenging texts and in multiple settings, first as revision in class, then in pairs, and later in team projects. Also, the role of the teacher changes gradually from an instructor to a collegial mentor and reviser of the students' translations.
Since each of the five language pairs may offer as many as five optional specialist translation courses, we present their learning outcomes regarding revision in summarised form. The students are expected to learn to function independently in the role of a reviser; identify, categorise, indicate and correct errors and deficiencies in texts; apply style guides and other authoritative sources; prioritise various needs for corrections; and balance the severity of deficiencies against project specifications, client expectations and the prospective readership. Finally, the students are expected to have learned to accommodate situational constraints (for example, time, cost and efficiency) and to communicate and justify any corrections and amendments made. In some of the translation courses, students learn the use of a few computer aids and the functionalities of word-processing software useful for revision, such as spelling, grammar and style checkers, displaying changes, inserting comments and comparing documents, but the more challenging aspects of technology are left for the translation technology courses.
data and their observations, the students then write a brief retrospective reflection on their post-editing process, and general observations are discussed in class.
4. Conclusion
Translation, revision and post-editing competences are fundamental building blocks in a translator's skill set. Continuing advances in MT technology and improvements in the user interfaces of digital translation environments have already brought, and will continue to bring, the three activities ever closer together in actual textual operations. TM systems mix human and machine-originated text, as some of their contents are entirely translated by a human translator, some are machine-translated and post-edited, and some may be raw MT output. The information on the origin of a text segment may not always be visible, thereby blurring the boundaries between human and machine output. Given this trend, we advocate a holistic and integrative approach to teaching the strategic subcompetences for all kinds of textual operations, as described in this chapter.
What is likely to remain unchanged in the face of technological development is the need to make corrections and improvements in translated texts, whether the text is a human translation, a machine translation or a fuzzy match from a translation memory. There will always be dyadic relationships between the agents who carry out the textual operations, and thus a need for interaction, whether the agents are a human translator and a reviser, or an algorithmic machine and a post-editor. For curriculum planning, a crucial difference between training in revision and training in post-editing lies in how to teach students to communicate any requirements and suggestions for changes in human translations, on the one hand, and in MT and TM output, on the other. While communication between human agents is best learned in translation courses, human-machine interaction can be more effectively practised in technology-oriented courses.
A further consideration for curriculum planning is the need to integrate the subcompetences learned in translation courses and translation technology courses into the translation service workflow. For this, we suggest using learning environments that are as close to the real-life challenges of the translation market as possible, but without all the risks of real-life translation. In our view, translation company simulations like the Multilingual Translation Workshops offer a promising way to achieve this goal.
Notes
1 The structure of the Degree Programme in Multilingual Translation Studies at the University of Turku is presented online in the study guide at https://opas.peppi.utu.fi/en/degree-programme/3220. The learning outcomes cited in the text are translations of course descriptions that are available in Finnish.
2 Specific course titles may be subject to change. Titles given in the table are in accordance
with the curriculum for the academic year 2019–2020.
11
IMPROVING REVISION QUALITY
IN TRANSLATOR TRAINING
WITH translationQ
Gys-Walt van Egdom
As evidenced in the introduction to this volume, recent decades have seen a surge of interest in revision. The growth in the number of publications on revision is, for the most part, due to the gradual understanding of revision as an expert subtask of translation ("self-revision") and as a task in its own right ("other-revision") (see Englund Dimitrova 2005; Shih 2006; Allman 2007; Parra-Galiano 2016). Revision is considered an activity that requires a specific set of competences, all of which are to be strategically deployed to obtain a successful outcome (see Robert et al. 2017a, 2017b, 2018; Scocchera 2019). Debate about revision competence is interwoven with debate about translation quality, which has also been quite lively in recent years (see Huertas-Barros and Vine 2019; Huertas-Barros et al. 2018; Moorkens et al. 2018). There are two reasons why revision competence is bound up with quality: (1) thorough revision seems to be a sine qua non for high-quality translation (ISO 17100 2015a; see section 1); (2) a competent reviser is a person who is "good" at his job, who displays expert behaviour (see also Kim 2009).
Further impetus is given to the debate on revision by the ongoing technologization of translation services. One of the consequences of the susceptibility of translation to technologization and automation is that Language Service Providers (LSPs) are probably spending more time on revision and revision-related activities such as post-editing (see Mossop 2007a; Martin 2007; for a discussion of the role of revision skills in post-editing and of the differences between revision and post-editing, see O'Brien 2010).
Due to the gradual recognition of revision as an activity requiring specific competences and the perceived prominence of revision-related practices in the translation workflow, revision also seems to be high on the agenda of translator trainers. In the revised EMT competence framework, it is stated that, upon graduation, students must be able to "check, review and/or revise their own work and that of others according to standard or work-specific quality objectives" (European Master's in Translation Network 2017: 8). In recent decades, scholars have tried to find ways to cultivate revision skills within academic and vocational translation programs. For example, Mossop (1992) spearheaded research into revision competence in the early 1990s by setting out the learning objectives of a revision course. Around the turn of the century, extensive empirical investigation of revision processes became possible with the development of keystroke logging software (such as Translog). Around that time, scholars like Jakobsen (2002) started to compare the revision processes of experts and novices (trainee translators). In part, such profiling of revision behaviour along a cline of expertise has been done with a view to providing measures that can distinguish non-expert from expert behaviour, distilling good practices and thereby creating ways to speed up competence acquisition during training, and ultimately improving revision quality.
The person who must ensure that students' revision skills are honed, the translator trainer, has nevertheless been largely relegated to the fringes of revision research. Generally speaking, the trainer receives attention in studies in which the impact of teaching strategies on learning processes is measured (Nakanishi 2007; Pietrzak 2014; Vandepitte and Hanson 2018).
In this chapter, trainer-to-trainee revision will be observed from the translator trainer's (and, to a lesser extent, from the trainee's) perspective. More specifically, a closer look will be taken at technological solutions in trainer-to-trainee revision practices. Tools are scarce in translator training. Still, it seems that trainers and trainees might benefit greatly from systems that are geared to everyday practices such as trainer-to-trainee revision. Trainer-to-trainee revision is usually a repetitive task that consists of detecting and correcting similar and identical errors time and again, which is not only tedious for trainers but also tricky, because errors should be labelled, weighted and corrected in a consistent manner. One can easily understand how trainers (and trainees) might reap the benefits of a tool that does away with this repetitiveness.
With translationQ (TELEVIC/KU Leuven), a tool has been introduced that is designed to help translator trainers speed up their revision work. An assessment will be made of the extent to which translationQ is likely to contribute to quality in trainer-to-trainee revision. The assessment is undertaken not because translationQ is hailed by its developers as a "game-changing" tool that increases "consistency and objectivity" in revision and evaluation, but mainly because it promises to make lighter work of revision practices.1 This claim seems to mesh well with the recent expansion of the notion of "quality" in Translation Studies, an expansion which includes the virtual and non-virtual working environment and working conditions (D'Hulst and Gambier 2019: 408; Angelelli 2019; Pym 2019b). Given this shift in the understanding of overall quality in Translation Studies, we will critically examine the tool's functionalities and assess whether they can help improve the quality of feedback in trainer-to-trainee revision as well as the ergonomic quality of revision in a didactic setting.
Before looking at the tool through the lenses of revision standards and ergonomic principles, a brief introduction to revision quality will be provided along
• carry out a detailed inspection of the target language content, bearing in mind
the source content and the purpose of the target content;
• correct errors in the target language content and/or recommend the imple-
mentation of corrections;
• inform the LSP of corrective actions;
• repeat the revision process until reviser and LSP are satisfied with the product.
However, the task description in the ISO standard does not eliminate all the vagueness, not even when the relevant "aspects" of the comparative examination are taken into account (accuracy, syntax, spelling and so on; p. 10). As observed by Rasmussen and Schjoldager (2011) and Mossop (2020), little heed is paid to concrete procedures in revision standards. Empirical research findings suggest that when it comes to error detection or output quality, no procedure is more successful than the others (Robert and Van Waes 2014). The only revision procedure that leads to poorer performance is a "monolingual examination" of target language content. Strictly speaking, such monolingual examination cannot be aptly described as "revision" under the ISO standard. However, in a recent empirical study, Ipsen and Dam (2016) have shown that revision quality is better when the translation, rather than the source text, serves as the focal point. What is often seen in research on procedural quality is that the revised translation is ultimately the yardstick for quality. But which procedure yields the best results? In a professional revision setting, it is only logical that this question of results should be central: when looking at a revised translation or even a revision process, the focus is on the successful detection and correction of errors. In translator training, however, feedback about errors ought to be given more weight, as it gives students an opportunity for learning; trainer-to-trainee feedback serves a formative purpose. Ideally, reviser feedback is tailored to students' need for transparency and completeness of information (Van Egdom and Segers 2019: 64).
Omer and Abdulahrim (2017) have put forward a general theory of constructive
feedback and they list the following criteria for high-quality feedback:
• immediate;
• specific;
• helpful;
• non-judgemental;
• accurate;
• relevant;
• tailored;
• solicited;
• frequent;
• balanced;
• confidential;
• understandable;
• based on first-hand data;
• conducive to better results (i.e. “[feedback] suggests plans for improvement”).
These quality criteria have been applied to the context of trainer-to-trainee revision by Van Egdom and Segers. To make the criteria more tangible, the authors provide an example of constructive revision practices (2019: 64):
When a student has committed an error in his translation, the teacher can indicate that an error has been made (by underlining [the error]), provide information about the nature of the error and the severity of the error (by categorizing the error, weighting it and/or providing the student with extra information) and suggest plans for improving the text.
For instance, when a student uses ‘Mr.’ as an abbreviation of ‘mister’ in a
Dutch translation of an English source text, the teacher can underline the
error, mention that the error is the result of interference, that it is a minor
error, and that ‘Mr.’ and ‘mr.’ are false friends, and indicate that the solutions
‘meneer’ or ‘de heer’ would be more suitable. Further feedback can be given
in the form of a reference to an authoritative source that discusses the error
[in this case: www.taaltelefoon.be/dhr-m-mr].
(our translation; emphasis added)
A quality criterion that is of great importance, but also very elusive, is "quality". The importance of the notion of "quality" in translator training cannot be overstated. In trainer-to-trainee revision in the classroom, the reviser usually performs bilingual assessments of several translations that have been derived from the same source text. The "face validity" of trainer-to-trainee revision is at stake when feedback is not consistent throughout a single student's translation or across the translations of all the students. Low face validity is generally a direct consequence of low inter-reviser or intra-reviser reliability. Inter-reviser reliability refers to the degree of agreement between revisers. The translation of the abbreviation 'Mr.' can serve to illustrate this notion. The website 'Taaltelefoon' states that, although use of the abbreviation is not recommended, 'mr.' is sometimes used in Dutch as an abbreviation of 'meneer' and 'mijnheer'. As this authoritative source leaves room for flexibility, it is not unthinkable that one reviser will underline 'mr.' as an error while another reviser will leave the abbreviation as is. When this happens, inter-reviser reliability is low.
Intra-reviser reliability can be defined as the degree of agreement among multiple revisions of the same translation performed by a single reviser. Again, the translation of 'Mr.' is an excellent case in point. When three students commit the same error (e.g. 'Mr.' as an abbreviation of 'meneer'), each error (each instance of 'Mr.') ought to be treated equally. When the instructor-reviser fails to revise identical errors in the same way, intra-reviser reliability is low. Low reviser reliability can have serious consequences: students tend to discuss grades and feedback amongst themselves, and they sometimes become aware of inconsistencies and start questioning the "objectivity" of revision feedback, the fair-mindedness of the instructor and, ultimately, the competence of the instructor (see ISO 17100 2015a: 6; Bowker 2000, 2001; Van Egdom et al. 2018a, 2018b).
To sum up, mistakes that can give rise to low reviser reliability are the following (a schematic sketch of how they might be detected is given after the list):
1 identical errors that are sanctioned in one student version and overlooked in
another version or versions;
2 identical errors that are all sanctioned, but the error made by one student is
classified differently from that of another student;
3 identical errors that are all sanctioned, but the error weighting differs from one
version to another.
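The sketch below illustrates how these three inconsistencies could be detected automatically; the data layout and all names are hypothetical and are not drawn from translationQ:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Revision:
    """One reviser decision on one occurrence of the same error string."""
    version: str                    # which student version
    sanctioned: bool                # was the error marked at all?
    category: Optional[str] = None  # e.g. a DQF-style category label
    weight: Optional[int] = None    # severity/penalty assigned

def reliability_problems(revisions: list[Revision]) -> list[str]:
    """Report which of the three consistency problems occur."""
    problems = []
    marked = [r for r in revisions if r.sanctioned]
    if marked and len(marked) < len(revisions):
        problems.append("1: error overlooked in some versions")
    if len({r.category for r in marked}) > 1:
        problems.append("2: same error classified differently")
    if len({r.weight for r in marked}) > 1:
        problems.append("3: same error weighted differently")
    return problems

# 'Mr.' is marked in two versions, missed in a third, and weighted inconsistently:
print(reliability_problems([
    Revision("A", True, "language", 1),
    Revision("B", True, "language", 2),
    Revision("C", False),
]))
```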
it proved its worth in the context of website design (Scapin et al. 2000), in both traditional and virtual environments (Bach and Scapin 2003; Bastien and Scapin 2001). Eight main criteria are believed to cover "all software aspects which have an influence on the users' task completion". Each criterion has been clearly defined and is accompanied by a clear rationale. The eight criteria are: providing guidance; reducing workload; allowing for explicit control; being adaptable; reducing errors and allowing for error recovery; having consistent coding; having significant coding; and being compatible.
Taken together, these criteria will determine whether the ergonomic design of translationQ truly reaches the quality threshold for user interfaces and whether it is likely to lead to more efficient and sustainable trainer-to-trainee revision practices.
revision quality have served as points of departure in the assessment of this software.
In the remainder of this section, translationQ will be scrutinized more closely.
Introducing translationQ
The plans for an evaluation and revision tool originated at KU Leuven, where Winibert Segers and Hendrik Kockaert were looking for a way to make lighter work of translation evaluation and trainer-to-trainee revision (Kockaert et al. 2016). Their teaching experience had revealed that they were often correcting and revising the same errors in student versions and, what was worse, that it was often difficult to ensure the consistency of corrections and feedback across versions. Working with Televic, they developed a tool that would reduce the repetitiveness of evaluation and revision tasks, making these processes more efficient, and that would also help to ensure consistency across versions and across assignments, thus increasing reliability. In 2017, after a beta test with partners in academia and in the language industry, the cloud-based tool translationQ was officially launched.
The tool currently consists of a revision module. In this module, a reviser uploads a source text (in DOC, PDF or XLIFF format) onto the platform. The tool will automatically start segmenting the text at sentence level, but the reviser can also choose to merge sentences to form bigger text units. Once the text is uploaded, the reviser (in translator training, the instructor) can publish an assignment by formulating a brief (prospective text sender and audience, deadline, etc.), providing reference material, and assigning the task to a number of users (in translator training, students). The users receive a notification e-mail with a direct link to the assignment in the translationQ portal. The translators produce their translation on the platform or upload their translation to the input interface. In this interface, alignment issues are flagged and can be solved by the users. In case of technical problems, the translations can also be sent to the commissioner, who can then upload them onto the platform. Once the versions have been submitted, the reviser can start revising by reading the versions, highlighting errors and providing feedback in a feedback window. When the revision process is completed, the user is notified that feedback has been provided. In the translationQ environment, users can look at the results and compare them to those from other assignments as well as those of peers (the "User report" tab will show the average score of the group). The reviser can also compare the task-relevant data of users and groups in the tool, and analyse error data in a spreadsheet file.
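Sentence-level segmentation and the merging of segments into bigger text units can be pictured with a deliberately naive sketch; translationQ's actual segmentation rules are not documented here, so the regular expression and function names below are illustrative assumptions only:

```python
import re

def segment(text: str) -> list[str]:
    """Naive sentence-level segmentation on terminal punctuation."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def merge(segments: list[str], i: int) -> list[str]:
    """Merge segment i with segment i + 1 into one bigger text unit."""
    return segments[:i] + [segments[i] + " " + segments[i + 1]] + segments[i + 2:]

units = segment("Dr. No smiled. He left.")  # the naive rule wrongly splits after "Dr."
units = merge(units, 0)                     # a reviser repairs this by merging
print(units)  # ['Dr. No smiled.', 'He left.']
```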
Thus far, the revision processes seem very straightforward. The true cleverness of the system lies in the storage and constant updating of error data in an "error memory". While correcting student versions, the reviser can draw on error data that have been produced in other translation assignments as well as in the versions under review. An algorithm searches for, detects and flags identical errors (in corresponding segments of other students' versions) and similar errors (in different segments of the same translation assignment), and suggests corrections and feedback automatically. The same algorithm can also flag potential errors based on
[Table of revision quality indicators and their presence in translationQ; only the header row ('Indicator' / 'tQ') is recoverable.]
attention of the student be drawn to the error and its nature, and that "plans for improvement" be suggested (see also Carless 2006; Omer and Abdulahrim 2017: 46). As can be seen in Figure 11.2, the translation feedback sheet does allow for the immediate correction of errors, but it also encourages the reviser to provide formative feedback. TranslationQ seems to check all the relevant boxes for constructive translation feedback (see Van Egdom and Segers 2019: 62):
• Upon detection, the error is highlighted in the target segment as well as in the
feedback sheet (“item”).
• Information is provided on the nature of the error (“category”, highlighting
of corresponding source-text unit).
• Information is provided on the severity (“score”).
• Suggestions for corrective actions are made (“correction”).
• Information is provided on the cause of the error (“feedback”).
On the translation feedback sheet, the reviser has ample room to refer the student
to authoritative sources that discuss the error (“feedback”).5
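These fields map naturally onto a simple record. The following sketch is a hypothetical reconstruction for illustration (the field names mirror the labels quoted above, but the class is not translationQ's actual data model), populated with the 'Mr.' example discussed earlier:

```python
from dataclasses import dataclass

@dataclass
class FeedbackEntry:
    """One error on the translation feedback sheet (field names as quoted above)."""
    item: str        # the highlighted target-text fragment
    category: str    # error type, e.g. a DQF-style label
    score: int       # severity/penalty decided by the reviser
    correction: str  # suggested corrective action
    feedback: str    # cause of the error, references to authoritative sources

entry = FeedbackEntry(
    item="Mr.",
    category="language",
    score=1,
    correction="'meneer' or 'de heer'",
    feedback="Interference: 'Mr.' and 'mr.' are false friends; "
             "see www.taaltelefoon.be/dhr-m-mr",
)
print(entry.category)  # language
```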
A field on the feedback sheet that is particularly worthy of note is "category". TranslationQ makes revision processes authentic through the use of error categories that are common in the language industry: the Dynamic Quality Framework (DQF) categories (O'Brien et al. 2011).6 The DQF categories are broadly in line with the "aspects" of translation and revision that ought to be taken into account according to ISO 17100 (e.g. accuracy, syntax, spelling) (2015a: 10).
By employing the analytical categories of DQF, the translationQ engineers also wanted to provide an impetus to more "objective" revision practices. As error categories are predefined, all users will be inclined to employ the same metalanguage and to refer to textual problems in the same way.
While it is true that using a common framework is a good way to reduce subjectivity in revision, it is impossible to ensure complete objectivity. In recent years, some criticism has been levelled at analytical categories: despite clear definitions, "it is often a matter of subjective judgement . . . whether an error falls into one category or another" (Saldanha and O'Brien 2014: 101–2; see Van Egdom et al. 2018a).
The translation feedback sheet also asks the reviser to penalize (or reward) the translator for a translation solution in the "score" field. Subjective influences loom large in error weighting: in professional practice, the evaluator/reviser decides whether an error is minor, major or critical (see Van Egdom et al. 2018a, 2018b). In translationQ, the situation is similar: it is up to the reviser to decide on the category and the severity of an error.
Still, the tool seems to provide an adequate solution to problems with reviser reliability.7 A problem recognizable to every trainer is the repetitiveness of trainer-to-trainee revision: it is difficult, if not impossible, to remember all revisions of a single assignment, let alone of earlier assignments, when revising on paper or in a word processor. The error memory of translationQ provides a solution for the reviser's limited memory capacity. As mentioned, the tool detects identical errors and similar errors and copies the information from earlier feedback sheets onto the flagged fragments. The error memory also contains error data from other assignments and can flag identical errors and similar errors from earlier assignments (Figure 11.3). Provided that the reviser is familiar with the system and is able to manage error data well, it is likely that, by working in translationQ, trainer-to-trainee revisions will become more consistent, which can be considered a positive step toward objectivity.
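A minimal sketch of the mechanism, assuming simple exact-string matching (the real algorithm also detects similar, not just identical, errors, and its implementation is not public):

```python
class ErrorMemory:
    """Store confirmed errors and re-suggest their feedback (illustrative only)."""

    def __init__(self):
        # error fragment -> (category, correction, feedback)
        self._entries = {}

    def confirm(self, fragment, category, correction, feedback):
        """Called when the reviser confirms an error on the feedback sheet."""
        self._entries[fragment] = (category, correction, feedback)

    def flag(self, translation):
        """Return stored feedback for every known error found in a new version."""
        return [(fragment, data) for fragment, data in self._entries.items()
                if fragment in translation]

memory = ErrorMemory()
memory.confirm("Mr.", "language", "de heer", "False friend of Dutch 'mr.'")
print(memory.flag("Geachte Mr. Jansen, ..."))
```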
The only relevant quality issue that has remained unaddressed in this discussion is the "revision loop": in the translation industry, revision is not necessarily completed when the reviser has examined the translation a single time; revision ought to be repeated until the LSP and the reviser agree on the quality of the product (ISO 17100, p. 11). At the time of writing, translationQ does not allow for the amendment of a revised version by the student. The reason for this might be that translator trainers seldom ask their students to resubmit their translations.
Criteria                                           tQ
Providing guidance                                 +
Reducing workload                                  ±
Allowing for explicit control                      ±
Being adaptable                                    ±
Reducing errors and allowing for error recovery    ±
Having consistent coding                           +
Having significant coding                          ±
Being compatible                                   +
Guidance
In a technological context, effective and efficient practices are heavily dependent on the so-called "ease of use" and "ease of learning" of a system (see also Venkatesh and Davis 2000). Bastien and Scapin state that "good guidance facilitates learning and use" (1993: 9). If translationQ users know where they are in the accomplishment of a revision task, what actions are required and how additional information can be obtained, this would indicate that the tool provides good guidance.
The steps the reviser must follow in translationQ are logically structured. The project preparation procedure is straightforward: having opened the "Source Texts" tab in the Back Office, the reviser is asked to categorize the new project in the database (for instance, in the folder "Medical Translation EN-NL") and to upload the source material. In the next tab, "Translation Assignment", situated right under the "Source Texts" tab, the reviser can publish the assignment by formulating a translation brief and selecting the pool of translators (individual users or user groups).
The revision procedure is also well structured. When opening the "Revisions" tab, the reviser must select an assignment. Before opening an assignment, the reviser is asked to select the revision memory that is to be used for the assignment (for instance, the memory "Medical Texts EN-NL"). The reviser can now start revising. In the revision mode, all relevant submissions are numbered and listed on the left side of the page. The first submitted translation is immediately shown alongside the source text. The reviser starts making corrections to the first translation by selecting fragments with the click of a mouse. Every time an error is selected, the translation feedback sheet pops up. Having filled in the fields on the sheet, the reviser can confirm ("save") the error and feedback. When the error is confirmed, the algorithm will flag potential errors (see Figure 11.3) and ask the reviser to indicate whether the same criteria can be applied to all the highlighted units.
Although it is possible to switch between different students' versions, the reviser will normally start with "Translator 1" and end with the final submission. What is perhaps less evident when working in the revision interface is the fact that "error feedback" can be edited using a right-click, and that changing the "status" of a submission (from "in progress" to "completed") does not amount to publishing the results. When all submissions have been revised, the reviser must click the "complete revision" button at the top right. The results are then published and accessible in the "Reporting" module.
Unfortunately, the reviser is not automatically directed to the "Reporting" module and must find out how to get to the user and assignment reports. Some guidance is sometimes required in these situations. TranslationQ always displays a question-mark icon at the top right. By clicking on it, the user is directed to a well-organized support page on the translationQ website, where they will often find the information needed to overcome the obstacle (for instance, information on opening "User reports"). When the needed information is not available on the website, the user is invited to send feedback about an FAQ page and to formulate a query (see Figure 11.4). All in all, the tool seems to offer good guidance to the reviser.
Workload
One of the ergonomic criteria that seems most difficult to meet is the workload criterion. Complaints that tools are too complicated, too complex and far from intuitive are often reported in the literature on CAT tools (e.g. O'Brien et al. 2017). The tools have too many functionalities, which are often difficult to find, and they tend to display redundant information on the screen. The perceptual and cognitive load in CAT tools affects task efficiency in a negative way (with users being distracted by "unnecessary" information), increases the probability of errors being made by the user and increases the likelihood that the user will become hesitant to use the tool (see Bastien and Scapin 1993: 18, 24).
The translationQ engineers attempted to design a tool that contains all the informa-
tion needed to execute the task at hand. At the top right, there is always an “app” icon,
which allows the user to navigate across the four main modules (the “Administration”
module, the “Back Ofce” module, the “Portal” module and the “Report” module).
All the key features of the modules are displayed on the left side of the screens. The
number of key features per module is always limited and they are summed up in a suc-
cinct manner (e.g. “Source Texts”, “Translation Assignment”, “Revisions”, “Revision
Memory”), thereby reducing reading efort. Clicking on one of the features does not
seem to display more than the information that is relevant to a subtask. As a result, it
seems unlikely that users will experience serious cognitive strain. However, there may
be cognitive issues with what seems to be the most demanding subtask: the revision
task itself. In the “Revision” tab, the screen is often a bit crowded with information: it
displays (1) subtasks in the Back Office, (2) a list with the names or number of students
who have submitted a translation, (3) the source text, (4) a student translation, (5) sta-
tus information and (6) either text item information, scoring information or general
feedback for the task at hand. Findings in a recent exploratory study suggest that filling
in the translation feedback sheet that pops up during revision is also seen as a tedious
activity (Van Egdom et al. 2019). The tediousness can be indicative of cognitive strain.
Workload has also been demonstrated to decrease when the number of action
steps for a task is limited. In this regard, some ergonomic concerns can be raised: it
does not seem possible to simply “drag a file” to translationQ and publish an assign-
ment in two or three mouse clicks, or to create entire project templates that can be
used and reused (it is, however, possible to “clone” source texts).
Fast interaction is not something the translationQ user can expect (see Van
Egdom et al. 2019). Still, the cognitive strain and the number of action steps for
trainer-to-trainee revision tasks are reduced when working in translationQ, in that
potential errors in students’ versions are flagged and the reviser is allowed to reuse
feedback from earlier versions or assignments.
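By way of illustration, the flagging mechanism can be pictured as a simple lookup: once an error fragment and its feedback are confirmed in one submission, the other submissions are scanned for the same fragment. The following is a minimal sketch in Python, assuming a plain substring match; all names are hypothetical, and translationQ’s actual matching logic is not documented here.

```python
# Minimal sketch of error-memory flagging: once a reviser confirms an
# error fragment in one submission, other submissions containing the
# same fragment are flagged so the feedback can be reused with one click.
# All names are hypothetical; the real matching logic may be fuzzier.

def flag_candidates(fragment, feedback, submissions):
    """Return (student, feedback) pairs for every submission containing the fragment."""
    return [(student, feedback)
            for student, text in submissions.items()
            if fragment in text]

submissions = {
    "Translator 2": "The board has taken a decision on the matter.",
    "Translator 3": "The board has took a decision yesterday.",
}

# The reviser confirms "has took" as an error in Translator 1's version;
# Translator 3's version is then flagged for the same feedback.
for student, fb in flag_candidates("has took", "grammar: verb form", submissions):
    print(f"{student}: {fb}")
```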
Explicit control
Explicit control is the third ergonomic quality criterion listed by Bastien and Scapin
(1993). Users should be allowed to define their input and be able to exert control
over the processing of actions throughout the program. In translationQ, tasks are
generally divided into subtasks, which all require an explicit command on the part
of the user (“Save”, “Submit”). To increase awareness of the consequences of an
explicit command, especially one with irreversible consequences, dual activation is
required (see Bastien and Scapin 1993: 25). In translationQ, two-step activation is
required to start processes like submitting a translation and publishing the revised
translations.
According to ergonomic guidelines, users should also be allowed to pace their
data entry. In the preparation phase as well as in the revision phase, input can always
be saved. Translators can also save their translation anytime. Revisers can navigate
between modules and subtasks throughout the process.
Essential in explicit user control is the possibility of interrupting and cancelling
transactions or processes. TranslationQ allows users to cancel and edit macro-pro-
cesses (e.g. creation of a source folder) and micro-processes (e.g. acceptance of
errors). There are, however, a few commands that are irreversible. Once the reviser
has published the revision results, it is no longer possible to change the status of an
error or to override the task score.
This can pose problems in a didactic context. In many cases, there will be an in-
class discussion of the assignments. It sometimes happens that a student can prove
that a translation solution that has been marked as an error by the instructor-reviser
is in fact a correct solution. In these cases, the reviser should be allowed to change
the status of an error and to override the task score.
Adaptability
The fourth criterion, adaptability, is frequently mentioned in research on transla-
tion ergonomics. In more traditional translation tools, human and organizational
requirements were addressed only in an ad-hoc manner (Lagoudaki 2008; Olo-
han 2011). As mentioned in section 1, more consideration seems to be paid
to human interaction and to tools’ customizability in the designing of modern
translation tools. The adaptability of tools to diferent situations and diferent
users is of critical importance. Some content and processes in translationQ can be
adapted to ft users’ needs. For example, revisers can choose to merge segmented
sentences; this will allow trainee translators and revisers to work at paragraph
level. A recent study on translation ergonomics has shown that segmentation is
often a major source of irritation in professional practice (O’Brien et al. 2017;
for a discussion of the efects of sentence-level segmentation, see Bédard 2000;
Mossop 2006; LeBlanc 2013).
Users can also choose to adapt error categories: the default categories can be
either replaced by institution-specific categories or removed if deemed irrelevant.
However, error categories can be modified only by an institutional administrator.
The consistency and exchangeability of error memories would be compromised if
individual users were allowed to customize error categories themselves.
Flexibility of processes has also been prioritized by Televic: a user can choose
to reuse source texts, select different error memories, revise anonymously, divide
up the work (among multiple revisers), opt for extensive revision (by filling out all
the fields in the feedback sheet) or concise revision (by filling out only mandatory
fields), override scores and so on. With its wide array of optional features, the
processes in translationQ seem to be fitted to the needs of inexperienced users
(who usually prefer a simple project set-up) as well as experienced users. Still,
there are some situations in which the adaptability of the program is limited. For
example, a reviser might prefer revision by segment instead of revision by version:
translationQ does not have a functionality that allows a reviser to look at all
translations of one particular segment at a glance (a sketch of such a view follows
below). Furthermore, the workflow cannot be adapted to all didactic situations:
for example, as it is impossible to form a revision loop, students cannot be asked
to amend their version themselves in translationQ.
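What such a revision-by-segment view would involve is merely a regrouping of data the tool already holds. A hypothetical sketch (data and names invented for illustration; this is not a translationQ feature):

```python
# Hypothetical "revision by segment" view: regroup per-student versions so
# that all translations of one segment can be inspected at a glance.
from collections import defaultdict

versions = [
    ("Student A", 1, "The committee met on Monday."),
    ("Student B", 1, "The committee convened on Monday."),
    ("Student A", 2, "It reached no decision."),
    ("Student B", 2, "No decision was reached."),
]

by_segment = defaultdict(list)
for student, seg_no, target in versions:
    by_segment[seg_no].append((student, target))

for seg_no in sorted(by_segment):
    print(f"Segment {seg_no}:")
    for student, target in by_segment[seg_no]:
        print(f"  {student}: {target}")
```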
Error management
The fifth determinant of ergonomic quality is error management. Errors are inevi-
table in software design and software use. System errors, user errors and error
messages can disrupt entire processes and have a negative influence on user expe-
rience. The principles of error management can be broken down into three
subcriteria: error protection, quality of error messages and error correction. To
prevent errors from occurring, an error is best detected before validation of an
action (Bastien and Scapin 1993: 33). A few measures have been taken by Televic
engineers to prevent errors from occurring. For example, translationQ alerts users
when mandatory fields, which are always marked with an asterisk (*), have not
been filled in (correctly) (see Figure 11.2). It is impossible for the user to proceed
with the confirmation of an error without filling in these fields. Another error
protection measure is the dual activation required for crucial steps in the prepara-
tion and revision process (publishing the assignment and publishing the feedback).
When errors do occur, software engineers should see to it that information about
the error is communicated to the user: an error message should be concise and
meaningful, and it should give a clear idea of the nature of the error and the action
required to correct it (Bastien and Scapin 1993: 34). In translationQ, error messages
usually appear immediately and are fairly informative. For instance, when import-
ing Word files of student translations in batch, the reviser is given a warning when
the number of segments in the source text and a particular translation do not cor-
respond, and is asked to split or merge segments until both texts appear equally long.
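The warning just described is, in essence, a pre-validation of segment alignment. A minimal sketch of such a check, assuming a naive sentence-based segmentation (translationQ’s actual rules, which resemble those of CAT tools, are not documented here):

```python
# Minimal sketch of the error-protection check described above: warn before
# import when source and translation contain different numbers of segments.
import re

def segment(text):
    """Naively split a text into sentence-like segments."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def check_alignment(source, translation):
    src, tgt = segment(source), segment(translation)
    if len(src) != len(tgt):
        return (f"Warning: {len(src)} source segments vs. {len(tgt)} target "
                f"segments; split or merge segments before import.")
    return "OK: segment counts match."

print(check_alignment("One. Two. Three.", "Un. Deux."))
```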
Also essential in error management is correction. Although an error is always
disruptive, it can be less disruptive when the user has to correct only the data
or command entries that caused the error (Bastien and Scapin 1993: 35). There
have been reports of students who have experienced a system error (without error
message) that forced them to recommence the assignment, starting with the first
segment (Van Egdom et al. 2019). In addition, the principles of ergonomic design
dictate that users should be able to correct errors immediately (Bastien and Scapin
1993: 35). In an exploratory study by Van Egdom et al. (2019), two participants
(out of five) complained about bugs in the program: while revising a number of
student versions, a series of error messages appeared simultaneously, all saying that
corrections had not been saved. From an ergonomic point of view, the fact that
the error messages appeared is positive, but they should have appeared immediately
after the system errors occurred. For the revisers, it was difficult to find out which
corrections had been lost due to the system error.
Consistency
An interface is consistent when design choices are maintained everywhere (Bastien
and Scapin 1993: 36). In translationQ, procedures, labels and commands are always
presented in a recognizable format, location and syntax, except in one case. The
label “completed” is used to determine the status of single submissions and is meant
to help the reviser keep track of what has been fully revised (and what not), but it
is also used to confrm the completion of the entire revision process and to initiate
the publication of the results. This means that one term is used to designate two
concepts, which is highly undesirable within a domain (see ISO 1087–1 2000). For
beginners, this might be particularly confusing, especially because revision status
and completion both feature in the same subtask. This issue could be resolved by
replacing the “complete” label that is used to confrm the completion of the entire
revision task: substitutes for this label could be “publish” or “publish all”.
Significance of codes
Such ambiguous labelling affects not only consistency; it can also be seen as a
coding issue. For a code to be significant, there has to be a strong semantic rela-
tionship between the code and the item/action the code refers to (Bastien and
Scapin 1993: 37). Codes ought to be meaningful and distinguishable. In the case
of the label “completed”, the distinguishability of actions is at stake. One also
wonders whether the coding in translationQ is meaningful to all users: “assign-
ment” is sometimes used when the term “project” seems more appropriate; “item”
is employed to refer to what is called a “text unit”; “source phrase” is used to refer
to a “source text unit”. The coding is thus sometimes arbitrary. Still, the software
engineers have taken the significance of codes into consideration when designing
the tool. Little use has been made of abbreviations. When abbreviations are used,
they are based on official abbreviation rules/standards (e.g. ISO 639–1 2002 for
language coding). Furthermore, the engineers have sought to increase authenticity
by introducing DQF error categories. Authentic didactic practices are looked upon
favourably in translator training.
Compatibility
The final criterion listed by Bastien and Scapin is “compatibility”. In the transla-
tion industry, this term calls to mind struggles with PDF files, figures, tables and the
like. At the time of writing, translationQ cannot be used in CAT tools. However,
assignments can be exported and imported as XLIFF files: a student can choose
to do an assignment in SDL Trados and then upload their version onto the trans-
lationQ platform. TranslationQ also accepts Microsoft Word files; the program
segments the Word files (be they source texts or translations) in a way similar to
CAT tools. Error memories can also be exchanged among revisers; as the configu-
ration of an error memory is similar to that of a termbase, it can be exported as a
TBX file (see ISO 30042 2019).
However disruptive incompatible programs and file types may be, Bastien and
Scapin do not give an overly “technical” spin to compatibility: a tool is deemed
compatible when the organization of output, input and dialogue is consistent
with user characteristics and task characteristics (1993: 38). It seems superfluous
to mention that it is impossible to paint a comprehensive picture of user charac-
teristics. For one thing, the characteristics of trainers (revisers) differ markedly
from those of trainees (translators). Profiling of users requires an in-depth study
of user types and personalities. However, some information on users can be
inferred from their core activities (teaching, learning) and the tasks that are to be
executed in the tool.
Trainees, for example, would benefit from a tool that has a professional “look
and feel”, a tool that reflects professional practices. In educational theory, authentic
experiential learning (AEL) is seen as a key to student participation and success-
ful preparation for the market (for a discussion of AEL, see Kiraly 2016; Massey
2016; Buysschaert et al. 2017, 2018). Working in a tool that bears a resemblance
to a professional working environment would enhance authentic practices in the
classroom. In translationQ, the two-column layout is a good example of authentic-
ity: the page layout is somewhat similar to that of CAT tools; the environment is
familiar (see Bastien and Scapin 1993: 38). Another stride in the direction of AEL is
the implementation of professional error categories: in-class use of these categories
can help reduce the gap between training and the profession. Another feature that
is built in to foster AEL is the “dashboard” in the “Report” module. Through the
user reports, a student can glean an idea of task-specific performance, track progress
and compare their performance with that of the group (see Figure 11.5). Finally,
there is the need for constructive feedback. This matter was addressed in section 2;
the only drawback seems to be that students are not allowed to process the feedback
by producing a final version.
Trainers also have a keen interest in using authentic revision processes and con-
structive feedback. Given the workload in translator training, however, they would
also expect a tool that is efficient: general didactic practices, and revision practices
in particular, should require less time and energy when the tool is used. Using an
error memory seems to be a good way of doing away with repetitive revision. Still,
the results of our exploratory user study suggest that revision remains very time-
consuming, because the set-up of an assignment is sometimes difficult and filling in
the feedback fields tends to be a tedious task (Van Egdom et al. 2019). What was
not taken into account in the exploratory study is the fact that certain subtasks are
automated in translationQ: calculating scores for formative assessment, distributing
feedback among students, archiving assignments and translations, and registering
scores.
FIGURE 11.5 User report showing the translation assignments, scores and average scores
of the group
Despite the fact that the revision task itself seems to remain time-consuming,
the tool has great organizational benefits. From an organizational point of view,
the data in the error memory can also be very helpful: the data can be exported
as an Excel file and experimented with in pivot tables. By pivoting the data in the
tables, learning trends can be visualized that would otherwise remain outside the
trainer’s grasp in traditional didactic practices. The tables offer insight into general
error and error-type frequency, error (type) frequency per group, error (type)
frequency per student and score distribution per error type. After careful examination,
the reports and error data might enable a trainer or a team of trainers to address the
needs, abilities and limitations of student groups and individual students, steering
didactic practices in the direction of “tailored” learning (see Hofmann and Miner
2009). In terms of organizational ergonomics, this is highly desirable: not only does
didactics come to inform assessment, but assessment also comes to inform didactics
(see Pym 1992: 283).
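The kind of pivoting described here is straightforward to reproduce outside Excel as well. A brief sketch using pandas on a hypothetical export of the error memory (column names are invented; the actual Excel export may use different fields):

```python
# Sketch of the pivot-table analysis described above, run on a hypothetical
# error-memory export. Column names are invented for illustration.
import pandas as pd

errors = pd.DataFrame({
    "student":    ["A", "A", "B", "B", "C"],
    "group":      ["G1", "G1", "G1", "G1", "G2"],
    "error_type": ["grammar", "terminology", "grammar", "omission", "grammar"],
    "penalty":    [1, 2, 1, 3, 1],
})

# Error-type frequency per group ...
print(pd.pivot_table(errors, index="error_type", columns="group",
                     values="penalty", aggfunc="count", fill_value=0))

# ... and score (penalty) distribution per error type.
print(errors.groupby("error_type")["penalty"].describe())
```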
Still, translationQ has quite a few redeeming ergonomic qualities that may con-
vince translator trainers to use the program. The most important of these have to
do with organizational ergonomics. Although filling in feedback sheets is reported
to require a bit of energy on the part of the reviser, the error memory allows for
the “reuse” of error input across assignments. This means that revision practices
become less repetitive. The tool is also technically compatible with the most fre-
quently used tools in translator training. What is more, translationQ seems highly
compatible with user expectations: students can work in a learning environment
that is “authentic” and that allows them to glean an idea of their learning progress
and of their strengths and weaknesses. Meanwhile, trainers can manage a database
of translation assignments and a translation repository, download error memories,
and look up task-specific and user-specific data that are relevant to the assessment
of students and can inform didactic practices.
Is translationQ likely to improve the quality of trainer-to-trainee revision?
So long as the tool manages to overcome its growing pains (especially the prob-
lems with error messages), the results of our analytical review suggest a promising
future. However, the review has abstracted from the realities in which trainer-to-
trainee revision practices are embedded. Empirical research is warranted in order
to formulate a conclusive statement about the quality of trainer-to-trainee revision
practices in translationQ. Direct observations, keystroke logging data and eyetrack-
ing data may allow us to develop a deeper understanding of revision processes in
translationQ. Information on user experience can also shed light on the qual-
ity of revision practices. Such information can be obtained through interviews,
questionnaires and focus group discussions. What is also lacking in our review is a
baseline with which the quality of translationQ can be compared. In other words,
no research has been published yet on the revision quality and ergonomic quality
of existing revision methods and tools. Revision practices in translationQ should
be pitted against the way revisers currently work. Time and extensive research will
tell whether translationQ can set new qualitative standards in trainer-to-trainee
revision.
Notes
1 www.televic-education.com/en/translationq (accessed 09/08/2019).
2 www.iea.cc/whats/ (accessed 11/11/2018).
3 This threefold distinction is also made at the IAE website: www.iea.cc/whats/ (accessed
11/11/2018).
4 https://matecat.com; https://casmacat.eu; https://lilt.com.
5 No direct reference is made to Omar and Abdulahrim; some of their criteria for con-
structive feedback do not seem to apply in this context (e.g. frequent), and others are
dependent on the concrete input of the reviser (e.g. understandable).
6 In some respects, the pop-up feedback sheet can even be said to resemble the “old” LISA
QA form, which lists roughly the same error categories but also urges the reviser/reviewer
to provide information on the severity of an error, etc. The LISA QA form can be found at
https://slideplayer.com/slide/4705/1/images/54/Use+of+Quality+Assurance+Forms.jpg.
7 For the sake of completeness, it should be noted that reviser reliability can also be com-
promised when the names of students are shown during revision. TranslationQ has
functionalities for double-blind revision to neutralize bias.
8 https://support.televic-education.com/hc/en-us/articles/115004358289-8-Completing-your-revision.
12
THE MT POST-EDITING
SKILL SET
Course descriptions and educators’ thoughts
1. Relevant literature
The conviction that familiarity with translation technology is crucial to a success-
ful professional career is shared by industry stakeholders (Transperfect as reported
by Zaretskaya 2017: 123; SDL in their Corporate Translation Technology Survey,
2017: 12) and translator educators (O’Brien 2002: 100; Bowker 2002; Doherty
et al. 2012; Doherty and Kenny 2014; Kenny and Doherty 2014). More recently,
the improved performance of MT, with its neural network technology, and the
expanding presence of MT in the industry are posing a new challenge to translator
training programmes. As Cid-Leal et al. (2019) point out, there is a shift from com-
puter-assisted human translation to human-assisted machine translation. Colominas
and Oliver (2019) present a survey showing that, at Spanish universities, there is a
significant mismatch between the real use of MT by students and the use under-
stood (or recommended) by educators. This gap between the educators’ beliefs and
the actual practice of the students certainly leads to a misalignment between learn-
ing objectives and outcomes.
In fact, for some time researchers have generally agreed (O’Brien 2002; Şahin
2011) that post-editing is different from conventional human translation and
consequently requires specific skills. Gaspari et al. (2015) observed that training
programmes lacked MT, translation quality assessment, and post-editing (PE) skills
in their syllabi. The proposed skill sets found in the literature (O’Brien 2002: 102–3;
Rico and Torrejón 2012: 169–70; Nitzke et al. 2019: 247–50) can be broadly clas-
sified into two different types depending on the function (either more limited or
more extensive) attributed to the post-editor. One perspective assumes that the
function of the post-editor consists merely in editing and validating the translation
suggestions obtained with an MT system, this being referred to as a ‘downward
migration’ (Kenny 2018: 66) or a ‘limited or reductive role’ (Kenny and Doherty
2014: 290). This definition is applied by a considerable number of stakeholders
in both the industry and the research community (Joscelyne and Brace 2010: 24;
KantanMT 2014; Pym 2013; Absolon 2018). At the other extreme, authors such
as Rico and Torrejón (2012), Sanchez-Gijón (2016), Rico (2017: 80), Blagodarna
(2018: 4), Moorkens (2018b: 4) and Pym (2019a) assume a more extensive job
profile. In addition to editing MT segments, a professional post-editor performs
other functions: linguistic pre-processing, augmenting systems with customized
glossaries and managing MT systems and the overarching workflows. It is clear that
new professional skills are needed, but neither the industry nor the research com-
munity seem to have reached consensus on their specific definition or delimitation.
Furthermore, the question has been raised of when such training should be intro-
duced and to what extent (basic introduction or advanced specialized knowledge)
(Plaza Lara 2019: 261–2; Nitzke 2019: 45).
The lack of agreement on the skills involved is observed, for example, in
ISO 18587 (2017). There the training perspective is added as an Annex to the
standard. The Annex states the potential benefits of MTPE training in a general
way and briefly describes the five topics training may cover (advanced use of
translation memory and MT, advanced terminology work, advanced text-pro-
cessing skills, practice in both light and full PE, and use of Quality Assessment
tools). The need for consensus has already led to the emergence of survey-based
research to learn more about the current profile of the post-editor in general
(Gaspari et al. 2015), and particularly from the perspective of the industry
(Ginovart et al. 2020).
2. Methodology
Participants
We e-mailed more than 200 educators in translation schools drawn from the EMT
list of members, but also from other resources so as to include the non-EMT
schools. For instance, a shared unpublished database of translation and interpret-
ing schools in Europe is available on the Translation Commons’ Learn/Resources
Hub,4 and there is the list of approved schools published by the American Transla-
tors Association (ATA n.d.). Alternatively, the Internet was browsed by country
(European Union 2020) to find the relevant schools. The method used for dis-
semination of the questionnaire was therefore convenience sampling. One possible
limitation of or bias in our preselected list of universities is that some translation
schools were unknown to the authors and were therefore not contacted.
We sent the 53 educators who agreed to participate a consent form for signature
and the link to the online questionnaire. After filling it in, they could agree to take
part in an interview with the lead researcher, and 48 did so.
The questionnaire centred on three core topics:
1 PE training elements;
2 PE skills;
3 PE tasks.
However, a series of short questions also broached general or related matters such
as PE briefs and guidelines, PE feedback and translation technology tools (11 items
in total; see section 3).
To design those parts of the online questionnaire where we present our three core
topics (15 training elements, 11 PE skills and 14 PE tasks), we relied on previous
work by researchers in the field. To name just a few, the list of 15 training elements
was mainly inspired by PE training courses developed by SDL, Tragora (n.d.),
DigiLing (n.d.), ASAP Translations (n.d.), TAUS (Van der Meer 2015) and fellow
researchers (for instance, Guerberof and Moorkens 2019). The lists of 11 PE skills
and 14 PE tasks were compiled in a similar fashion.
3. Quantitative results
A total of 61 questionnaires were submitted, of which 54 were considered valid
for the present study. Seven submissions were excluded because they concerned
courses at the undergraduate level or had another type of audience (such as ‘Train
the trainer’ courses).8
A total of 53 educators engaged in MTPE courses in 17 different countries
responded to the survey. As already mentioned, one educator taught two different
courses, so we had 54 submissions from the following countries: Austria (1), Bel-
gium (4), Croatia (1), Czech Republic (1), Finland (3), France (6), Germany (8),
Greece (1), Ireland (1), Italy (5), Latvia (1), Malta (1), Poland (2), Portugal (1),
Spain (8), Switzerland (2) and the United Kingdom (8).
Weight of PE in the syllabus: Table 12.1 displays the weight of PE in the syllabi of
the courses studied. However, one limitation must be acknowledged: the
possible interpretation of ‘syllabus’ as either a whole course or part of a course. To
deal with this ambiguity, more precise information was gathered when we analysed
the syllabi (see section 4).
Teaching methods and materials: We included a set of multiple-choice questions
(checkbox type, with 9 possible answers, a free-text field and no limit on the num-
ber of answers to be selected). The average respondent selected 5.2 answers. Slide
presentations, hands-on PE, and MT output evaluation are the materials used most
often in PE courses (see Figure 12.1, which shows the absolute number of times
each item was selected and the percentage relative to the total of 277 selections by
all respondents).
FIGURE 12.1 What training support(s) do you use for your MTPE training?
FIGURE 12.2 What two methods would you choose as the most important to be pre-
sented to MTPE students, regarding the evaluation of MT output?
quality should be added to the equation along with the editing distance. The
findings are shown in Figure 12.2.
Quality estimation (QE), as a method with which users can evaluate MT out-
put, is represented by only 6% of the total 101 selections by respondents. It would
seem that QE is still a work in progress, and that no major use of it has been
reported in the industry or within academia so far. A change of paradigm can be
deduced from Figure 12.2: from automatic scoring (22%) to more human-centred
methods such as error categorization (35%) and task-based evaluation (38%).
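Editing distance, mentioned above, is commonly operationalized as a (normalized) Levenshtein distance between the raw MT output and its post-edited version, in the spirit of HTER-style metrics. A minimal word-level sketch:

```python
# Word-level Levenshtein distance between raw MT output and the post-edited
# text, normalized by target length as a rough proxy for post-editing effort.
def edit_distance(a, b):
    a, b = a.split(), b.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (wa != wb)))   # substitution
        prev = curr
    return prev[-1]

raw = "the house white is big"
post = "the white house is big"
print(edit_distance(raw, post) / len(post.split()))  # 0.4
```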
Source or target text first: We asked the respondents if they discuss what should
be read first, the source or the target segment (radio button type of question).
Among these educators, 33% advise reading the source segment first; 18%, the
target first; and 49% do not hold a definite position on this: they give their students
the opportunity to explore both approaches and discuss with them the advantages
and disadvantages of both.
Pricing models: We asked the educators whether they discuss with students how
to apply a rate for MTPE projects. Out of the 7 available options, the average
respondent chose 1.33 types of rate.
In Figure 12.3, it can be seen that 18 respondents do not discuss rates; 22 said
that price per hour is the recommended model (this corresponds to 30% of the
total 72 selections); 17 respondents selected ‘Price per source word (pre-analysis)’;
and only a few checked the post-analysis (5) or the target word option (3). It should
be emphasized that 7 educators used the ‘Other’ free-text option to explain that
they discuss various possible pricing scenarios and that the pros and cons of each
approach are debated. For instance, one respondent said they present the possibil-
ity of having a ‘price per project’. The possibility of having mixed-model pricing
with a fixed rate (source words) and a variable rate (editing distance), as proposed
by Bammel (2019), was not mentioned by anyone.
FIGURE 12.3 Do you discuss with the students how to apply a rate on MTPE projects?
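Such a mixed model reduces to simple arithmetic: a fixed component on source words plus a variable component driven by the measured editing distance. A hedged sketch (the rates and the exact formula are invented for illustration; Bammel 2019 should be consulted for the actual proposal):

```python
# Sketch of mixed-model MTPE pricing: fixed rate on source words plus a
# variable rate scaled by normalized editing distance. Rates are invented.
def mtpe_price(source_words, editing_distance,
               fixed_rate=0.03, variable_rate=0.06):
    """editing_distance: 0.0 = no edits needed, 1.0 = fully rewritten."""
    return source_words * (fixed_rate + editing_distance * variable_rate)

# 1,000 source words of which roughly a quarter had to be edited:
print(f"EUR {mtpe_price(1000, 0.25):.2f}")  # EUR 45.00
```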
PE levels: When asked ‘Which PE levels do you show10 to your PE students?’,
given that it was a multiple-choice question, almost all the respondents chose both
light and full PE. Indeed, the average respondent selected 1.83 answers. Of the total
99 answers selected, ‘light PE’ represented 43%, ‘full PE’ 52% and ‘Other’ 5%. This
corresponds closely to the hands-on full PE task and the hands-on light PE task
seen in Figure 12.1.
Relation between raw output and final quality: To provide a more detailed response
than the light–full PE dichotomy discussed in the previous item, we also asked
which of the three relations of Figure 12.4 the respondents explicitly mention dur-
ing their course. Alternatively, the respondents could choose ‘Other/It depends’.
The average respondent chose two of these options (total answers = 108).
FIGURE 12.4 Which of the following relations between the raw MT output and the final
quality expected do you explicitly mention during the training?
PE risks: To deal with the varied correlations between MT output quality and
the expected quality of the PE discussed in the previous item, several authors
(Mossop 2014b; Nitzke et al. 2019) have highlighted the need for problem-solving
strategies and the trade-off between necessary changes and over-editing. Since this
also seemed to be important to understanding the general situation at translation
schools, we asked the respondents in a radio-button type of question which of the
three PE errors (under-editing, over-editing or pseudo-editing) they believe the
students are more likely to commit. About half (49%) believe it is over-editing.
Some interviewees would justify this at the interviewing stage. According to them,
it could partially be explained by the fact that translation schools continue to stress
the quality factor in their human translation classes as a general principle. However,
40% of the respondents say that under-editing is also quite often a problem. This
percentage might be increasing with the advent of neural MT, as fluency is mis-
leadingly improved (Castilho et al. 2017) and some accuracy errors can go unseen
by translators in training. The remaining respondents either do not have an opin-
ion (7%) or believe their students tend to introduce errors into the MT output, or
perform pseudo-editing (4%).
Ethical and deontological practices: We asked the educators whether they discuss the
implications of using MT without informing the requester of the translation that
it is being used. Only ten of them do not discuss it, and among those who do, one
respondent used the free-text ‘Other’ feld to specify that students are advised not
to use MT without the agreement of the client.
PE brief: Another important aspect that has not been considered in previous
research is how the translation brief may vary with the advent of neural MT, adap-
tive MT, predictive writing, QE, etc. In answering the radio-button question
whether they present a PE brief to the students that is diferent from a translation
brief, 43 (80%) respondents answered ‘Yes’ and 11 (20%) ‘No’. We asked the 43
who answered affirmatively which elements should be present in a PE brief. Out of
the ten available options (including ‘Other’), the average respondent selected 5.76
in this multiple-choice question. In Figure 12.5, we present the absolute number
of times an option was chosen and the percentage this represents of the total 248
responses. The responses ranged from ‘PE level’ (38 responses) to ‘Examples of
scenarios indicating when to discard a segment’ (16 responses).
FIGURE 12.5 Which of the following elements do you present to your students as being
necessary or interesting in a MTPE assignment or brief?
PE guidelines: As shown in the discussion of the previous item, PE guidelines
seem to be an important element in briefs (with 37 selections, guidelines received
the second highest number of votes). Out of seven possibilities (including ‘None’
and ‘Other’), we asked the respondents which PE guidelines they present to
the students (see Figure 12.6). It was a multiple-choice question and the aver-
age respondent selected 1.5 answers (total answers = 79). The most selected was
‘TAUS post-editing guidelines’ (30), followed by ‘Only PE level indication’ (15)
and ‘Other’ (12). In this last free-text field, the educators explained that various
types of guidelines may be presented: inspired by the market/clients or by previous
research, or context-specific guidelines. Nine respondents design PE guidelines
themselves.
FIGURE 12.6 Which PE guidelines do you present to your students? (TAUS post-editing
guidelines: 30, 38%; Only PE level indication (light or full PE): 15, 19%; Other,
e.g. inspired by the market or clients, previous research, or context-specific: 12,
15%; They design them themselves: 9, 12%; ISO: 5, 6%; None: 1, 1%)
Core topics
We will now discuss the core questions of the survey (PE training elements, PE
skills and PE tasks).
PE training elements: We enquired, via a multiple-choice question, which topics
were covered in the course (see Figure 12.7). With 15 options available, the average
respondent chose 9.07 (total answers = 490).
As seen in Figure 12.7, the most popular choice is ‘MT systems’ (51); only three
participants failed to select it. The next most popular choices were ‘PE levels: light
and full post-editing’ (48), ‘Practical PE exercises in the relevant language pair’ (48),
‘MT evaluation: human (scoring, ranking, error categorization)’ (46), and ‘Integra-
tion between CAT and MT system’ (44).
PE skills: We then asked the respondents to rate the listed 11 PE skills accord-
ing to their importance to a professional post-editor on a scale from 1 (slightly
important) to 5 (very important); each skill could be left unrated (not important).
Figure 12.8 shows the average score for each of the 11 skills, with the ‘Capacity to
post-edit up to human quality (full PE)’ included only for reference.
FIGURE 12.8 Please rate the following MTPE skills and competencies according to
their importance
FIGURE 12.9 What workload do you think the following PE-related tasks might carry
in the everyday work of a professional post-editor?
As can be observed from the scores in Figure 12.8, MTPE educators claim that
identifying MT output errors (4.61), decision-making about editing or discarding
MT results (4.48) and applying PE guidelines (4.42) are the three most important
PE skills, considering that the ‘Capacity to PE up to human quality’ (4.68) was
present only to give focus to the question. It may be surprising that the ‘Capacity
to post-edit to a good enough quality (light PE)’ is the fourth least selected capacity,
considering that ‘PE levels’ was the second training element covered in the courses
(Figure 12.7).
PE tasks: We asked the educators’ opinion about the load that PE-related tasks con-
stitute in the everyday work of a professional post-editor. Each of the 14 tasks listed
could be rated as main task (3), secondary (2), occasional (1) or not applicable (0).
In Figure 12.9, the average score for each PE-related task is displayed. MTPE itself,
which has the highest score (2.82), is shown only as a reference.
According to the educators surveyed, the tasks of ‘Quality control & text
checking’ (2.65), ‘Revision of post-edited MT output (bilingual)’ (2.64), and ‘MT
output quality evaluation’ (2.20) are the most practised by professional post-editors.
To conclude the questionnaire, we wished to elicit the thoughts and determine
the needs of the educators regarding PE courses: 20% said that current PE courses
are adequate for meeting needs, 36% are of the opposite opinion, and 44% do not
know. This may suggest that there is some uncertainty among educators about
industry requirements or, at least, about the needs of their trainees. When asked if
they would like to have access to a third-party platform where their students could
practise real MTPE assignments, 76% of the respondents responded positively and
15% negatively, while 9% said their choice would depend on the specifics of the
platform and the assignments provided.
4. Qualitative results
The syllabi
After the educators had been contacted and they had expressed their interest in
taking part in this study by signing the consent form, we requested their syllabus
outline if it was not available at their institution’s website. The 49 syllabi available
at the time enabled us to gain insight into the way PE is currently being taught in
translation master’s programmes in Europe. We were interested in: the course
name; whether the course is compulsory or elective; contact and study hours;
ECTS credits; examination; language pairs; prerequisites for enrolling; and
distance learning.
First, more than one-half the syllabi are for EMT programmes. Second, the writ-
ten outlines contain highly varied levels of information. While some of them are
rich in content (name of instructor, teaching mode, teaching language, training
activities, methodology, competences and subcompetences, learning outcomes and
objectives, evaluation system, calendar), others contain general information only.
Course name: A total of 20 out of the 49 syllabi mention ‘post-editing’ in their title.
The remainder mention ‘computer-assisted translation’, ‘translation tools’ or ‘transla-
tion technology’; others focus on localization, project management, the translation
profession or the relevant language pair of the corresponding revision or editing course.
Compulsory or elective: Slightly more than 25% are elective; the remainder are
compulsory. Some courses are taught in more than one postgraduate programme,
and one possibility is that the same course was compulsory in one programme but
elective in another.
Contact and study hours: According to the syllabus outlines, five modules offer 60
contact hours or more; the other 44 typically range from 12 to 50 hours of contact
time. However, as was shown in Table 12.1, the hours dedicated exclusively to PE
constitute, more often than not, less than half of the syllabus. For two courses, the
study time is more than 300 hours; for the other 47, it ranges considerably, from
8 hours to 160 hours, and in these courses, the study time on PE in particular pre-
sumably varies accordingly.
ECTS credits: The syllabi mostly range from 2 to 10 ECTS; only two syllabi
are worth 1 ECTS, three are worth 14 ECTS, and one is a quarter of the master’s
(22.5 ECTS).
Examination: Thirty-nine syllabi do not involve passing an examination or a test.
For evaluation, other tools such as assignments, an essay or a portfolio are used. Ten
syllabi include an examination but only four include PE in the examination. With
or without an examination, we wondered whether the students’ grades take into
account to any extent the final quality of the post-edited texts they deliver, which
is why we included this question in the interviews (discussed later).
Language pairs: Since PE has traditionally been more linked to courses on com-
puter-assisted translation (CAT) tools, project management or localization, some of
the syllabi are (or try to be), as some interviewees put it, ‘language agnostic’. Two
syllabi enable up to 14 language pairs to be handled. This may depend on the year
and the students a course attracts but, in general, the syllabi and the subsequent
interviews revealed that the educators have groups of students representing anything
from three to eight language pairs. It should be noted, however, that approximately
20 syllabi cover one single language pair, either uni- or bi-directional. The fact that
the population of students enrolled can be international either made it impossible to
evaluate the quality of the post-edited text (if the educator had not mastered the tar-
get language) or led the students to post-edit languages in which they are not native.
Prerequisites for enrolling: Approximately 35 of the courses do not have any formal
prerequisites, especially those that are compulsory, since the fact of being enrolled
in the master’s programme, for example, or having successfully completed the first
year of the master’s, should mean that the students have the basics (of translation,
CAT tools or any field that is needed for the given syllabus) necessary to undertake
the PE-related course. For the remaining courses, there is usually a recommenda-
tion, such as being able to use an MS Office suite (word processing, spreadsheet and
presentation software), being familiar with CAT tools or possessing other infor-
mation and communication technology (ICT) skills. For a couple of courses, the
completion of another, less advanced course is a prerequisite.
Distance learning: Thirty-seven courses require the presence of the student at
the university. This can probably be explained largely by the need for a laboratory
equipped with licensed software and tools. Even if the students could be connected
via a virtual private network (VPN), the educators would probably still need to
give hands-on on-site support. For example, students may have technical issues
with the VPN or the translation technology tools. Also, considering the content
of the class, an answer to one student’s question could be useful for the rest of the
class, and the oral debate about the quality of different translation solutions prob-
ably is (or should be?) a major part of the course.
The interviews11
The 49 interviews, which lasted between 15 and 25 minutes, were held mostly
in English, but also in French, Spanish and Catalan. They took place between
September and November 2019 and provided qualitative insights into a number
of topics, which are reported in turn below.
Age of syllabus: In which year was the course first given? Since when has it
included PE? Even though two courses go back to 2000 and 2005, the majority
are more recent, and the most pioneering syllabi started tackling the matter of PE
between 2012 and 2014. Especially in the past five years, from 2015 to 2020, PE
has shown a clear growth trend: either new courses are being created from scratch,
or PE is gaining more weight in existing courses about CAT tools, MT, project
management or related fields. This probably has to do with the introduction of
standards such as ISO 18587 and the inclusion of PE-related skills in EMT, reflect-
ing the reality of the market.
Tools and software: It is common practice to use more than one CAT tool during
the course. The four most used are Memsource, SDL Trados Studio, MateCat and
MemoQ. Microsoft Excel and Word are used by six of the educators to practise the
PE skill set in their courses. Finally, some mentioned Across, STAR Transit and
Lilt. On the topic of MT providers, the most used is Google Translate, followed by
DeepL. The remaining educators mentioned Microsoft and/or Bing, KantanMT,
Tilde, e-Translation and SDL Language Cloud. The vast majority do not train MT
systems in their courses.
Plans to increase MTPE: At the time they were responding to the online ques-
tionnaire (from May to August 2019), more than half of the participants surveyed
already knew that their course would undergo modifications. Most changes would
be to dedicate more ECTS and hours to PE; some would entail splitting a course
and making one stand-alone course in revision and PE. A couple of participants
said that PE would now be included in the undergraduate programme.
Use of MT in regular translation courses: We wanted to ascertain whether PE prac-
tice and the PE skill set are present throughout the whole programme and not
only in one single course. There seems to be an ongoing effort to increase the
use of CAT and MT tools in regular translation classes. Half of the respondents
either do not know about their colleagues’ use of MT in their classic translation
courses, or know that they do not introduce MT at all. The other half know at least
one traditional translation course where the educator includes some practice with
translation technologies. However, it is more often CAT tools than MTPE. Little
research has focused on the evolution of traditional translation courses to include
MTPE, and the authors are convinced that ‘Train the trainer’ courses would be
helpful in moving in that direction.
Teaching methods: The project-based approach, in which the students perform
‘multi-facetted learning activities in real (and not just realistic) working envi-
ronments’, held promise in the past decade (Kiraly 2012: 84). The interviewees
were asked whether they favoured a task-based or a project-based approach, or
another teaching method. A significant number of educators claim that their
course as a whole is not project-based, but rather task-based. Nonetheless, one
of these tasks is to work on a CAT-tool project to some extent. Even when some
interviewees first said that their syllabus had a project-based approach, in further
discussion about Kiraly’s model, we agreed that it is somewhere in between: one
exercise that has the shape and appearance of a real translation project may last
over two weeks, but this is only one part of the course; before or after this spe-
cific assignment, there are other exercises or activities on MT and/or PE which
are task-based. Approximately ten of the studied syllabi are structured according
to the project-based approach.
Error categorization of the MT output: Almost half of the interviewees do not cur-
rently have a structured exercise or one that involves comparing neural MT errors
to other types of MT errors (rule-based, statistical or hybrid). Even if an exercise
about error categorization is not present in a course, we asked the interviewees if
they had an opinion about the similarities and differences between the NMT out-
puts nowadays compared to other MT systems in the past. All except one agreed
that the error typology has changed since the advent of NMT. One, for instance,
says “we dedicate almost an entire class to the typical errors and advantages of
RBMT, SBMT and NMT”. In general, they also observed the “improvements
regarding target language fluency that had already been reported in the research
community, such as an increased quality of morphology and syntax”, to quote one
educator. Approximately 20 educators went further to state explicitly that the chal-
lenge now lies in the capacity to spot accuracy errors.
Source or target segment first: This seems to be a controversial topic. We asked
the educators who had not chosen one or the other in the online questionnaire
(49%) what would determine a preference for one method over the other, in their
opinion, or whether they observed a tendency among the students. One explained:
I would say I’m a bit ‘old school’ in the sense I tend to focus on the
source text: in my opinion, ultimately, an MTPE task has the same basic
goal of a ‘traditional’ translation task: to convey the meaning of the ST.
Therefore, I tend to start reading the ST and only then the TT, trying to
make the most of the MT output in order to convey the ST meaning. In
my personal experience, reading first the TT can easily influence the way
we understand the ST, thus leading to a more error-prone state of mind
of the post-editor. However, I try not to influence my students, and I try
to make it clear to them that both approaches have merits and flaws. And,
despite the fact that I do not have the ‘scientific’ data to support my theory,
I would say the students that have a stronger background in ‘traditional’
translation tend to focus more on the ST. The less experienced students are
generally more open to try both approaches, and some of them prove the
TT-focus to work nicely as well.
Pre-editing of the source text: Most respondents do not include any assignment
or activity on pre-editing in their course. A few do have practice in controlled
language or pre-editing, but those who said on the survey that they cover it in
their class were mostly referring to a theoretical presentation of the concept
or a mention of the possibility of doing it in the industry, not actual hands-on
experience. This was clarified during the interview. It has to be emphasized
that for the three cases (pre-editing not present, mentioned only, or practised
too), there was the possibility, confirmed by some educators, that pre-edit-
ing is more extensively covered in another syllabus. We also questioned the
participants about their opinion of the usefulness of controlled language and
pre-editing with neural MT outputs, and the general reaction was hesitant and
sceptical. Predominantly, their feeling is that the need for pre-editing may be
less significant with neural than with statistical MT (as has been suggested in
Nitzke et al. 2019), but that it can still be useful or necessary, depending on the
style and genre of the text and mainly to ensure high-quality originals. Some
mentioned how difficult it can be to introduce pre-editing in a real scenario in
the translation industry, when sometimes the LSC does not know who wrote
the source text, or when the producer of the source text cannot anticipate that
it is going to be machine-translated later. However, for those who still consider
that pre-editing will, to some extent, benefit the MT output quality, it is strik-
ing that formatting corrections came up quite often as factors that could have
an impact on the quality of the MT output. Some educators also referred to
the profitability of pre-editing only when there are a certain number of target
languages into which a text must be translated.
Evaluation of the PE text: We asked whether the final delivered quality of the
post-edited text was considered when calculating a student’s grade. One of the
interviewees commented:
The quality of the final texts is not (directly) evaluated, as we’re more focused
on checking if the students are able to identify the usual features of MT, its
typical advantages and errors, and if they are able to properly take advantage
of MT in a MTPE task. However, we do discuss how different approaches to
MT influence the final quality of the text: for instance, we compare versions
of the same texts where a student tried to use as much as possible of the MT
output and other student chose to translate most of the text from scratch,
and we discuss, with the whole class, if we can clearly state that one is bet-
ter than the other, taking into account several factors that can influence that
result (the translator him/herself, the type of text, the technology used, the
intended purpose of the text, etc.).
There are more instances of courses in which the quality of the post-edited text is
not graded than courses where it is. Still, in 15 of the courses, the quality of the
product is taken into account for the purposes of evaluation. In some cases, even
when quality cannot be a consistent variable for grading, a compromise is found:
it is evaluated only for the target languages that the educator can competently cor-
rect; colleagues help with evaluating other target languages, or groups or pairs of
students review their peers’ post-edited texts.
Deontological issues with MTPE: When asked about the ethical question of
whether to inform a customer that MT is being used, some of the educators clearly
regarded not informing as being intrinsically a violation of the code of ethics; a
professional translator should always inform the customer about the tools they use.
More often, the view among the educators was that MT tools should be available
to translators without the prior agreement of the client, as long as they are used as
one more resource for providing a product whose quality is not inferior to the one
that would have been provided with human translation (which nowadays assumes
the use of CAT tools). Adopting a similar position, some interviewees stated that
they show their students how the use of an MT system remains embedded in the
metadata of each translation unit or segment in a CAT environment. Only a couple
of interviewees mentioned the potential dangers of sending confidential data to
the MT providers. Confidentiality is still an important aspect of MTPE, one that
providers such as Google seem to take seriously (Google Cloud n.d.). However,
the community of users still express doubts (Gheorghe 2019), which probably
means that educators should tackle these issues more often in their courses. Like-
wise, we enquired about the possibility of a translator post-editing the MT output
in a language in which the translator is not native. It seems that the ‘mother-
tongue principle’, which has already been progressively abandoned in the industry
(Wagner et al. 2014: 103), is also not so important in MTPE training: more than
half of the respondents have to ‘accept’ more than one language pair in their PE
hands-on practice. This is just one more reason to start researching the best way to
include MTPE in regular translation classes.
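The metadata trace mentioned by the interviewees can be made concrete: many CAT tools record the provenance of each segment in their bilingual files. A hedged sketch of scanning an XLIFF 1.2 file for machine-translated segments (how MT origin is recorded varies by tool; the state-qualifier="mt-suggestion" convention is assumed here):

```python
# Hedged sketch: list the segments of an XLIFF 1.2 file whose metadata
# marks them as MT suggestions. Tools differ in how they record this.
import xml.etree.ElementTree as ET

XLIFF = "urn:oasis:names:tc:xliff:document:1.2"

def mt_segments(path):
    hits = []
    for unit in ET.parse(path).getroot().iter(f"{{{XLIFF}}}trans-unit"):
        target = unit.find(f"{{{XLIFF}}}target")
        if target is not None and target.get("state-qualifier") == "mt-suggestion":
            hits.append(unit.get("id"))
    return hits

# Example: print(mt_segments("assignment.xlf"))  # ids of MT-derived segments
```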
Split training: To conclude, we asked the interviewees about their knowledge of
‘split training’ (Absolon 2019) because it appeared to be a relatively unknown recent pro-
posal. Only one of our interviewees knew about it. The lead researcher explained her
understanding of it: it consists in dividing skills into subskills to a more or less granular
level and in attributing a tailored practical exercise to each subskill. Once introduced
to the concept, their opinions were diverse, ranging from a minority of positive views
through a majority who did not express a view either way, to some interviewees who
expressed their conviction that it would not be useful in their courses.
Finally, some educators introduced new topics that had not been foreseen in the
interview but that are certainly of current interest in the MTPE field. One men-
tioned that, instead of focusing so much on classifying errors, it would be better to
show the students processes from real scenarios—for instance, how to prepare good
feedback after PE, so that engineers of MT systems can continuously improve their
algorithms with sound training data and linguistic insights.
5. Concluding remarks
Recently, the master’s programmes in translation or related studies have been
updated, especially regarding their MTPE content. Some of the educators in our
study have found that the same trend is affecting undergraduate programmes too.
In this chapter, we have discussed the outcomes of a survey-based study mixed
with qualitative data from syllabi outlines and interviews with the relevant edu-
cators. In general, the customization or training of MT engines is excluded from
MTPE courses in European postgraduate programmes. We have also learned that it
is common practice to present more than one tool (CAT environment or MT sys-
tem), which calls for task-based activities rather than a project-based methodology.
According to our interviewees, few colleagues include practice in MTPE in their
‘traditional’ or ‘regular’ translation courses. This lack of intertwining of traditional
translation techniques with the use of technologies may partially explain why 76% of
our respondents say they would benefit from an online platform where their students
could have PE hands-on practice, as this would allow them to further combine these
two skill sets, traditionally separated at translation schools. Only in a few cases is the
final translation quality evaluated, since it is commonly understood that this is a com-
petence to be learned in regular translation courses; the emphasis in MTPE classes is
mostly placed on procedures and processes, the features of software, and maybe the
techniques for efcient keyboard use. The interviews with the educators also high-
lighted their scepticism about the use of controlled language or pre-editing for MT.
Whereas our interviewees often expressed their wish for a more holistic pedagogical approach, this seems difficult to put into practice. Indeed, ICT skills for translators, especially with regard to MTPE, are taught as a completely separate subject.
Acknowledgements
This chapter has been written within the framework of the Government of Cata-
lonia’s Industrial Doctorates Plan. The authors thank the 53 educators who kindly
shared their views and working methods with us. We are especially grateful to
the 48 who made themselves available for interviews with the lead researcher. We
express our gratitude to Isabelle S. Robert and Brian Mossop, two of the editors of
this book, for their interest in our work. Finally, our thanks go to the anonymous
reviewers for their constructive feedback, which enabled us to improve this chapter
significantly.
Notes
1 We understand ‘syllabus’ to mean the ‘summary outline of a course of study’, and ‘course’
as ‘a number of lectures or other matter dealing with a subject’ (following the definitions
of Merriam-Webster’s dictionary). We acknowledge that in certain places in the ques-
tionnaire and during interviews, the meanings might overlap.
2 Following Massey et al. (2019: 212–13), the term ‘educator’ has been used in this chap-
ter, even if in some cases (unfortunately, in our opinion), MTPE is still regarded as an
isolated, purely practical activity.
3 According to ISO 18587, Translation Service Provider (TSP) and Language Service
Provider (LSP) include single freelance professionals. We use ‘LSC’ as a more restricted
term than TSP or LSP, but one that is more comprehensive than ‘translation agency’.
4 Prepared by a Lille 3 student for the EUATC during his Master’s 2 (TSM) internship at
Nancy Matis SPRL. Available at http://xl8.link/List.
5 The fact that she agreed to fill in the questionnaire twice, and to have two distinct inter-
views, shows that she was aware of reporting about two different courses. In the case of
the more subjective questions, that is, those that pertain to the instructor and were not
course-related, we have excluded her answers (53 submissions).
6 https://form.jotformeu.com/82844920241354.
7 www.jotform.com.
8 A short professional course provided by an expert in a specific field to update the knowl-
edge and practices of university educators.
9 Data Quality Framework of the Translation Automation User Society.
10 The word ‘show’ in the questionnaire was chosen on purpose to include, as far as pos-
sible, those courses in which time or other constraints may prevent actual practice, but
the two levels of PE are still defined theoretically.
11 Since the views shared by the educators were sometimes nuanced, vague modifiers such
as ‘majority’ or ‘some’ are at times used when reporting the qualitative results.
BIBLIOGRAPHY
Abdallah, Kristiina (2012) Translators in Production Networks. Reflections on Agency, Quality and Ethics.
Abercrombie, Nicholas et al., eds. (2006) The Penguin Dictionary of Sociology.
Absolon, Jakub (2018) ‘The need for competency-based selection and training of
post-editors’. ASAP-translation.com.
Absolon, Jakub (2019) ‘Human Translator 4.0: Manual of effective use of machine translation for a modern translator’. ASAP-translation.com.
Aikawa, Takako et al. (2012) ‘The impact of crowdsourcing post-editing with the collabora-
tive translation framework’. Advances in Natural Language Processing, 1–10.
Akaike, Hirotugu (1974) ‘A new look at the statistical model identification’. IEEE Transactions on Automatic Control 19:6, 716–23.
Alabau, Vicent et al. (2013) ‘CASMACAT: an open source workbench for advanced com-
puter aided translation’. The Prague Bulletin of Mathematical Linguistics 100:1, 101–12.
Alabau, Vicent et al. (2016) ‘Learning advanced post-editing’. In New Directions in Empirical
Translation Process Research (Carl et al., eds.), 95–110.
Allain, Jean-François (2010) ‘Repenser la révision. Défense et illustration de la relecture
croisée’. Traduire 223, 114–20.
Allen, Jefrey (2003) ‘Post-editing’. In Computers and Translation (Somers, ed.), Benjamins
Translation Library, 35.
Allman, Spencer (2006) ‘Acknowledging and establishing the hierarchy of expertise in
translator-reviser scenarios as an aid to the process of revising translation’. www.birmingham.
ac.uk/documents/college-artslaw/cels/essays/translationstudiesdiss/allmandissertation.
pdfs.
Allman, Spencer (2007) ‘Negotiating translation revision assignments’. In Proceedings of the
Translation Conference, 2007: Translation as Negotiation, 35–47.
Allport, Gordon Willard (1954) ‘The historical background of social psychology’. In The
Handbook of Social Psychology, Vol. I (Lindzey, ed.), 3–56.
Alonso, Elisa and Lucas Nunes Vieira (2017) ‘The Translator’s Amanuensis 2020’. Journal of
Specialised Translation 28, 345–61.
Alves, Fabio et al. (2010) ‘Translation units and grammatical shifts: towards an integration of
product and process-based translation research’. In Translation and Cognition (Shreve and
Angelone, eds.), 109–42.
Alves, Fabio et al. (2016) ‘Analysing the impact of interactive machine translation on post-editing effort’. In New Directions in Empirical Translation Process Research: Exploring the CRITT TPR-DB (Carl et al., eds.), 77–94.
Andújar Moreno, Gemma (2019) ‘El papel de la revisión editorial en la autoría múltiple del texto traducido: la versión española de Beautiful Children, de Charles Bock, como estudio de caso’. Sendebar 30, 35–60.
Angelelli, Claudia (2019) ‘Assessment’. In A History of Modern Translation Knowledge (D’Hulst
and Gambier, eds.), 435–42.
Angelone, Erik (2010) ‘Uncertainty, uncertainty management, and metacognitive problem
solving in the translation task’. In Translation and Cognition (Shreve and Angelone, eds.),
17–40.
Antonini, Rachele et al. (2017) ‘Introducing NPIT studies’. In Non-professional Interpreting
and Translation: State of the Art and Future of an Emerging Field of Research (Antonini et al.,
eds.), 1–26.
Aranberri, Nora et al. (2014) ‘Comparison of post-editing productivity between profes-
sional translators and lay users’. In Proceedings of the Third Workshop on Post-Editing Technol-
ogy and Practice (WPTP-3), 20–33.
Arevalillo Doval, Juan José (2005) ‘The EN-15038 European quality standard for translation
services: What’s behind it?’ www.translationdirectory.com/article472.htm [consulted 18
December 2018].
Arthern, Peter J. (1983) ‘Judging the quality of revision’. Lebende Sprachen 2, 53–7.
Arthern, Peter J. (1991) ‘Quality by numbers: assessing revision and translation’. In Fifth
Conference of the Institute of Translation and Interpreting (Picken, ed.), 85–94.
ASAP Translations (n.d.) ‘English—Slovak MTPE course’. Nitra: ASAP-translation.com.
http://mtposteditors.com/tests/test_EN_SK_01.html.
ATA (n.d.) ‘Schools approved for voting membership applications’. www.atanet.org/certification/eligibility_approved.php#.
Austermühl, Frank (2013) ‘Future (and not-so-future) trends in the teaching of translation technology’. Tradumàtica: tecnologies de la traducció 11, 326–37.
Austrian Standards (2015) ‘Certification Scheme S06 Translation Service Provider according to ISO 17100’. www.iso17100.net/fileadmin/user/bilder/downloads-produkte-und-leistungen/S-07.106-ISO17100_EN.pdf [consulted 29 December 2018].
Austrian Standards (2019) ‘Zertifizierung: was sind die Vorteile einer Zertifizierung?’. www.austrian-standards.at/produkte-leistungen/zertifizierung/ [consulted 18 December 2018].
Austrian Standards Institute (2014) ‘Normung insight’ (K. Grün, orator) Ausbildungsmodul
Austrian Standards. Austrian Standards Meeting Center, Vienna.
Aziz, Wilker et al. (2014) ‘Sub-sentence level analysis of machine translation post-editing effort’. In Post-Editing of Machine Translation: Processes and Applications (O’Brien et al., eds.), 170–99.
Bach, Cédric and Dominique Scapin (2003) ‘Ergonomic criteria adapted to human virtual
environment interaction’. In Proceedings of the 15th French-Speaking Conference on HCI,
24–31.
Bakkes, Christiaan (2004a) Die lang pad van Stoffel Mathysen.
Bakkes, Christiaan (2004b) Stoffel by die afdraaipad.
Bakkes, Christiaan (2006) Stoffel in die wildernis.
Buysschaert, Joost et al. (2017) ‘Professionalising the curriculum and increasing employ-
ability through authentic experiential learning: the cases of INSTB’. Current Trends in
Translation Teaching and Learning E (CTTL-E) 4, 78–111.
Buysschaert, Joost et al. (2018) ‘Embracing digital disruption in translator training: tech-
nology immersion in simulated translation bureaus’. Revista Tradumàtica: tecnologies de la
traducció 17, 125–33.
Buzelin, Hélène (2005) ‘Unexpected allies: how Latour’s network theory could comple-
ment Bourdieusian analyses in translation’. The Translator 11:2, 193–218.
Buzelin, Hélène (2007) ‘Translations in the “making”’. In Constructing a Sociology of Transla-
tion (Wolf and Fukari, eds.), 135–69.
Buzelin, Hélène (2011) ‘Agents of translation’. In Handbook of Translation Studies Online
(Gambier and van Doorslaer, eds.).
Bywood, Lindsay et al. (2017) ‘Embracing the threat: machine translation as a solution for
subtitling’. Perspectives: Studies in Translatology 25:3, 492–508.
Cadwell, Patrick et al. (2016) ‘Human factors in machine translation and post-editing
among institutional translators’. Translation Spaces 5:2, 222–43.
Cadwell, Patrick et al. (2018) ‘Resistance and accommodation: factors for the (non-) adop-
tion of machine translation among professional translators’. Perspectives: Studies in Trans-
latology 26:3, 301–21.
Callegaro, Mario (2008) ‘Social desirability’. In Encyclopedia of Survey Research Methods
(Lavrakas, ed.), 825–6.
Carl, Michael (2012) ‘Translog-II: a program for recording user activity data for empirical
reading and writing research’. In Proceedings of the Eighth International Conference on Lan-
guage Resources and Evaluation (Calzolari et al., eds.), 4108–12.
Carl, Michael and Arnt Lykke Jakobsen (2009) ‘Towards statistical modelling of translators’
activity data’. International Journal of Speech Technology 12:4, 125–38.
Carl, Michael and Moritz Jonas Schaefer (2017) ‘Why translation is difficult: a corpus-based study of non-literality in post-editing and from-scratch translation’. Hermes—Journal of Language and Communication in Business 56, 43–57.
Carl, Michael and M. Cristina Toledo Baez (2019) ‘Machine translation errors and the translation process: a study across different languages’. Journal of Specialised Translation 31, 107–32.
Carl, Michael et al. (2011) ‘On the systematicity of human translation processes’. In Tralogy
2011. Translation Careers and Technologies: Convergence Points for the Future.
Carl, Michael et al. (2014) ‘Post-editing machine translation—a usability test for professional
translation settings’. In Psycholinguistic and Cognitive Inquiries in Translation and Interpreta-
tion Studies (Ferreira and Schwieter, eds.), 145–74.
Carl, Michael et al. (2016a) ‘The CRITT translation process research database’. In New
Directions in Empirical Translation Process Research (Carl et al., eds.), 13–54.
Carl, Michael et al., eds. (2016b) New Directions in Empirical Translation Process Research.
Carless, David (2006) ‘Differing perceptions in the feedback process’. Studies in Higher Education 31, 219–33.
Castilho, Sheila and Sharon O’Brien (2016) ‘Evaluating the impact of light post-editing on
usability’. In Proceedings of the 10th International Conference on Language Resources and Evalu-
ation, LREC 2016, 310–16.
Castilho, Sheila et al. (2017) ‘Is neural machine translation the new state of the art?’. The
Prague Bulletin of Mathematical Linguistics 108:1, 109–20.
Catford, J.C. (1965) A Linguistic Theory of Translation.
CEFR (2019) ‘Common European framework of reference for languages’. www.coe.int/en/
web/common-european-framework-reference-languages/table-1-cefr-3.3-common-
reference-levels-global-scale.
Dam-Jensen, Helle and Carmen Heine (2013) ‘Writing and translation process research:
bridging the gap’. Journal of Writing Research 5:1, 89–101.
Darity, William (2008) International Encyclopedia of the Social Sciences, Volume 1.
Da Silva, Igor A. Lourenço et al. (2017) ‘Translation, post-editing and directionality: a study of effort in the Chinese-Portuguese language pair’. In Translation in Transition: Between Cognition, Computing and Technology (Jakobsen and Mesa-Lao, eds.), 108–34. Benjamins Translation Library 133.
De Almeida, Giselle (2013) ‘Translating the post-editor: an investigation of post-editing
changes and correlations with professional experience across two Romance languages’.
PhD dissertation, Dublin City University. http://doras.dcu.ie/17732/ [consulted
12 November 2018].
De Almeida, Giselle and Sharon O’Brien (2010) ‘Analysing post-editing performance: cor-
relations with years of translation experience’. In Proceedings of the 14th Annual Conference
of the European Association for Machine Translation.
Dede, Volkan (2019) ‘Does a formal post-editing training affect the performance of novice post-editors? An experimental study’. https://doi.org/10.13140/RG.2.2.23578.08643.
Delisle, Jean, Hannelore Lee-Jahnke and Monique C. Cormier (1999) Terminologie de la tra-
duction / Translation terminology / Terminología de la traducción / Terminologie der Übersetzung.
Densmer, Lee (2014) ‘6 Reasons to stop preferential changes from ruining your QA process’.
Moravia’s Global Blog. https://info.moravia.com/blog/bid/351122/6-Reasons-to-Stop-
Preferential-Changes-from-Ruining-Your-QA-Process [consulted 24 November 2018].
Depraetere, Ilse (2010) ‘What counts as useful advice in a university post-editing training
context? Report on a case study’. In EAMT 2010: Proceedings of the 14th Annual Confer-
ence of the European Association for Machine Translation.
D’Hulst, Lieven and Yves Gambier, eds. (2019) A History of Modern Translation Knowledge.
DigiLing (n.d.) ‘Post-editing machine translation’. https://learn.digiling.eu/.
do Carmo, Félix (2017) ‘Post-editing: a theoretical and practical challenge for translation
studies and machine learning’. PhD dissertation, Universidade do Porto. https://repositorio-
aberto.up.pt/handle/10216/107518 [consulted 31 August 2019].
Doherty, Stephen and Dorothy Kenny (2014) ‘The design and evaluation of a statistical
machine translation syllabus for translation students’. The Interpreter and Translator Trainer
8:2, 295–315.
Doherty, Stephen et al. (2012) ‘Taking statistical machine translation to the student transla-
tor’. In Tenth Biennial Conference of the Association for Machine Translation in the Americas.
Dörnyei, Zoltan (2007) Research Methods in Applied Linguistics.
Dorr, Bonnie et al. (2010) ‘Part 5: machine translation evaluation’. In Handbook of Natural
Language Processing and Machine Translation (Olive et al., eds.), 801–94.
Drugan, Joanna (2013) Quality in Professional Translation: Assessment and Improvement.
Dubois, Lise (1999) ‘La traduction officielle au Nouveau-Brunswick: sa place et son rôle’.
PhD dissertation, Université Laval. https://www.collectionscanada.gc.ca/obj/s4/f2/
dsk1/tape9/PQDD_0007/NQ43065.pdf [consulted 7 September 2018].
Dunne, Keiran J. (2011) ‘From vicious to virtuous cycle. Customer-focused translation qual-
ity management using ISO 9001 principles and agile methodologies’. In Translation and
Localization Project Management: The Art of the Possible (Dunne and Dunne, eds.), 153–87.
Ehrensberger-Dow, Maureen and Andrea Hunziker Heeb (2016) ‘Investigating the ergo-
nomics of a technologized translation workplace’. In Reembedding Translation Process
Research (Muñoz Martin, ed.), 69–88.
Ehrensberger-Dow, Maureen and Gary Massey (2014) ‘Cognitive ergonomic issues in pro-
fessional translation’. In The Development of Translation Competence: Theories and Method-
ologies from Psycholinguistics and Cognitive Science (Schwieter and Ferreira, eds.), 58–86.
Göpferich, Susanne (2009) ‘Towards a model of translation competence and its acquisition:
the longitudinal study TransComp’. In Behind the Mind. Methods, Models and Results in
Translation Process Research (Göpferich et al., eds.), 11–37.
Gouadec, Daniel (2007) Translation as a Profession.
Goulet, Marie-Josée et al. (2017) ‘La traduction automatique comme outil d’aide à la rédaction scientifique en anglais langue seconde: résultats d’une étude exploratoire sur la qualité linguistique’. Anglais de Spécialité (ASp) 72, 5–28.
Graham, Mark et al. (2011) Geographies of the World’s Knowledge.
Green, Spence et al. (2014) ‘Predictive translation memory: a mixed-initiative system for
human language translation’. In Proceedings of the 27th Annual ACM Symposium on User
Interface Software and Technology (UIST ’14).
Groves, Declan and Dag Schmidtke (2009) ‘Identification and analysis of post-editing patterns for MT’. In Proceedings of MT Summit, 429–36.
Guerberof Arenas, Ana (2008) ‘Productivity and quality in the post-editing of outputs from
translation memories and machine translation’. Localisation Focus 7:1, 11–21.
Guerberof Arenas, Ana (2013) ‘What do professional translators think about post-editing?’ Journal of Specialised Translation 19, 75–95.
Guerberof Arenas, Ana (2014a) ‘Correlations between productivity and quality when post-
editing in a professional context’. Machine Translation 28, 165–86.
Guerberof Arenas, Ana (2014b) ‘The role of professional experience in post-editing from a
quality and productivity perspective’. In Post-editing of Machine Translation: Processes and
Applications (O’Brien et al., eds.), 51–76.
Guerberof Arenas, Ana (2017) ‘Quality is in the eyes of the reviewer: a report on post-
editing quality evaluation’. In Translation in Transition: Between Cognition, Computing and
Technology (Jakobsen and Mesa-Lao, eds.), 188–206.
Guerberof Arenas, Ana and Joss Moorkens (2019) ‘Machine translation and post- editing
training as part of a master’s programme’. Journal of Specialised Translation 31, 217–38.
Hagemann, Susanne (2019) ‘Directionality in translation and revision teaching: a case study
of an A-B teacher working with B-A students’. Interpreter and Translator Trainer 13:1,
86–101.
Halliday, M.A.K. and J.R. Martin (1993) Writing Science: Literacy and Discursive Power.
Hanauer, David I. and Karen Englander (2011) ‘Quantifying the burden of writing research
articles in a second language: data from Mexican scientists’. Written Communication 28:4,
403–16.
Hansen, Gyde (2006) ‘Retrospection methods in translator training and translation research’.
Journal of Specialised Translation 5, 2–41.
Hansen, Gyde (2008) ‘A classification of errors in translation and revision’. In CIUTI-Forum 2008 Enhancing Translation Quality. Ways, Means, Methods, 313–26.
Hansen, Gyde (2009) ‘The speck in your brother’s eye—the beam in your own. Quality management in translation and revision’. In Efforts and Models in Interpreting and Translation Research: A Tribute to Daniel Gile (Hansen et al., eds.), 255–80.
Harris, Brian (2017) ‘Unprofessional translation’. In Non-professional Interpreting and Transla-
tion: State of the Art and Future of an Emerging Field of Research (Antonini et al., eds.), 29–43.
He, Yifan et al. (2010) ‘Improving the post-editing experience using translation recommen-
dation: a user study’. In Proceedings of the Ninth Conference of the Association for Machine
Translation in the Americas, 247–56.
Herbig, Nico et al. (2019) ‘Multi-modal indicators for estimating perceived cognitive load
in post-editing of machine translation’. Machine Translation 33:1, 91–115.
Hermans, Theo (1999) Translation in Systems: Descriptive and System-Oriented Approaches Explained.
Kolb, Waltraud (2013) ‘Who are they? Decision-making in literary translation’. In Tracks and
Treks in Translation Studies (Way et al., eds.), 207–21.
Konttinen, Kalle et al. (2017) ‘Multilingual translation workshop—developing professionals in a simulated translation market’. MikaEL—Electronic Journal of the KäTu Symposium on Translation and Interpreting Studies 10, 150–64.
Koponen, Maarit (2012) ‘Comparing human perceptions of post-editing effort with post-editing operations’. In 7th Workshop on Statistical Machine Translation, 181–90.
Koponen, Maarit (2013) ‘This translation is not too bad: an analysis of post-editor choices
in a machine translation post-editing task’. In Proceedings of MT Summit XIV Workshop on
Post-Editing Technology and Practice (O’Brien et al., eds.), 1–9.
Koponen, Maarit (2015) ‘How to teach machine translation post-editing? Experiences from
a post-editing course’. In Proceedings of the 4th Workshop on Post-Editing Technology and
Practice (WPTP4), 2–15.
Koponen, Maarit (2016) ‘Is machine translation post-editing worth the effort? A survey of research into post-editing and effort’. Journal of Specialised Translation 25, 131–48.
Koponen, Maarit (2018) ‘Learning to post-edit: an analysis of post-editing quality and pro-
cesses of translation students’. Presentation at International Association for Translation and
Intercultural Studies (IATIS) 6th International Conference, Hong Kong, 5 July 2018.
Koponen, Maarit and Leena Salmi (2015) ‘On the correctness of machine translation: a
machine translation post-editing task’. Journal of Specialised Translation 23, 118–36.
Koponen, Maarit et al. (2012) ‘Post-editing time as a measure of cognitive effort’. In Proceedings of the AMTA 2012 Workshop on Post-Editing Technology and Practice, 11–20.
Koponen, Maarit et al. (2019) ‘A product and process analysis of post-editor corrections on
neural, statistical and rule-based machine translation output’. Machine Translation 33:1,
61–90.
Koskinen, Kaisa (2008) Translating Institutions. An Ethnographic Study of EU Translation.
Krings, Hans P. (2001) Repairing Texts: Empirical Investigations of Machine Translation Post-
Editing Processes (Koby, ed.).
Künzli, Alexander (2005) ‘What principles guide translation revision? A combined product
and process study’. In Translation Norms: What Is ‘Normal’ in the Translation Profession? Pro-
ceedings of the Conference Held on 13th November 2004 in Portsmouth (Kemble, ed.), 31–43.
Künzli, Alexander (2006a) ‘Die Loyalitätsbeziehungen der Übersetzungsrevisorin’. In Übersetzen—Translating—Traduire: Towards a “social turn”? (Wolf, ed.), 89–98.
Künzli, Alexander (2006b) ‘Teaching and learning translation revision: some suggestions
based on evidence from a think-aloud protocol study’. In Current Trends in Translation
Teaching and Learning (Garant, ed.), 9–23.
Künzli, Alexander (2006c) ‘Translation revision—a study of the performance of ten profes-
sional translators revising a technical text’. In Insights into Specialized Translation (Gotti and
Šarčević, eds.), 195–214.
Künzli, Alexander (2007a) ‘The ethical dimension of translation revision. An empirical
study’. Journal of Specialised Translation 8.
Künzli, Alexander (2007b) ‘Translation revision. A study of the performance of ten pro-
fessional translators revising a legal text’. In Doubts and Directions in Translation Studies,
Selected Contributions from the EST Congress, Lisbon 2004 (Gambier et al., eds.), 115–26.
Künzli, Alexander (2009) ‘Qualität in der Übersetzungsrevision—eine empirische Studie’.
In Translation zwischen Text und Welt: Translationswissenschaft als historische Disziplin
zwischen Moderne und Zukunft (Kalverkämper and Schippel, eds.), 291–303.
Künzli, Alexander (2014) ‘Die Übersetzungsrevision—Begriffsklärungen, Forschungsstand, Forschungsdesiderate’. Trans-kom 7:1, 1–29.
Kuo, Szu-Yu (2014) ‘Quality in subtitling: theory and professional reality’. PhD dissertation,
Imperial College London.
Kurz, Christopher (2016) Translatorisches Qualitätsmanagement: Eine Untersuchung der Überset-
zungsdienstleistungsnormen DIN EN ISO 17100 und DIN EN 15038 aus übersetzungsprak-
tischer Sicht. tekom-Hochschulschriften, Band 24. Stuttgart: tekom.
Kuznetsova, Alexandra et al. (2017) ‘lmerTest package: tests in linear mixed effects models’. Journal of Statistical Software 82:13, 1–26.
Lacruz, Isabel (2018) ‘An experimental investigation of stages of processing in post-editing’.
In Innovation and Expansion in Translation Process Research (Lacruz and Jääskeläinen, eds.),
217–40. American Translators Association Scholarly Monograph Series 18.
Lacruz, Isabel, and Gregory M. Shreve (2014) ‘Pauses and cognitive effort in post-editing’. In Post-Editing of Machine Translation: Processes and Applications (O’Brien et al., eds.), 246–72.
Lafeber, Anne (2012) ‘Translation skills and knowledge—preliminary findings of a survey of translators and revisers working at inter-governmental organizations’. Meta 57:1, 108–31.
Lafeber, Anne (2017) ‘The skills required to achieve quality in institutional translation: the
views of EU and UN translators and revisers’. In Institutional Translation for International
Governance: Enhancing Quality in Multilingual Legal Communication (Ramos, ed.), 63–80.
Laflamme, Caroline (2009) ‘Les modifications lexicales apportées par les réviseurs professionnels dans leur tâche de révision: du problème à la solution’. PhD dissertation, Université Laval, Québec, Canada. http://hdl.handle.net/20.500.11794/20833 [consulted 7 November 2018].
Lagoudaki, Elina (2008) ‘Expanding the possibilities of translation memory systems: From
the translator’s wishlist to the developer’s design’. Unpublished PhD dissertation. Impe-
rial College London.
Läubli, Samuel et al. (2013) ‘Assessing post-editing efficiency in a realistic translation environment’. In MT Summit XIV Workshop on Post-Editing Technology and Practice, 83–91.
Lauscher, Susanne (2000) ‘Translation quality assessment: where can theory and practice
meet?’. The Translator 6:2, 149–68.
Lavault-Olléon, Elisabeth (2011) ‘L’ergonomie, nouveau paradigme pour la traductologie’.
ILCEA 14, 1–16.
Lavault-Olléon, Elisabeth (2016) ‘Traducteurs à l’oeuvre: une perspective ergonomique en
traductologie appliquée’. ILCEA 27, 1–9.
Lawson, Tony, and Joan Garrod, eds. (2001) Dictionary of Sociology.
LeBlanc, Matthieu (2013) ‘Translators on Translation Memory (TM). Results of an eth-
nographic study in three translation services and agencies’. The International Journal for
Translation and Interpreting Research 5:2, 1–13.
LeBlanc, Matthieu (2014) ‘Language of work in the federal public service: what is the situation today?’. In Fifty Years of Official Bilingualism: Challenges, Analyses and Testimonies (Clément and Foucher, eds.), 69–76.
Lee, Hyang (2006) ‘Révision: définitions et paramètres’. Meta 51:2, 410–19.
Lemaire, Nathalie (2018) ‘Écrire, traduire, réviser les textes expographiques: vers une assur-
ance qualité à six mains’. Forum. Revue internationale d’interprétation et de traduction/International
Journal of Interpretation and Translation 16:1, 76–102.
Lesch, Harold Michael and Bernice Saulse (2014) ‘Revisiting the interpreting service in
the healthcare sector: a descriptive overview’. Perspectives: Studies in Translatology 22:3,
332–48.
Levenshtein, Vladimir I. (1966) ‘Binary codes capable of correcting deletions, insertions and
reversals’. Soviet Physics Doklady 10:8, 707–10.
Li, Shuangyu et al. (2017) ‘Interaction—a missing piece of the jigsaw in interpreter-medi-
ated medical consultation models’. Patient Education and Counseling 100:9, 1769–71.
Llitjós, Ariadna F. et al. (2005) ‘A framework for interactive and automatic refinement of transfer-based machine translation’. In Proceedings of the EAMT Conference 2005, 87–96.
Lommel, Arle (2018) ‘Where’s my translation jet pack?’. In Proceedings of Translating and the
Computer 40.
Lommel, Arle and Donald Depalma (2016) ‘Europe’s leading role in machine translation—
how Europe is driving the shift to MT’. http://cracker-project.eu/wp-content/uploads/
Europes_Leading_Role_in_MT.pdf [consulted 31 August 2019].
Lommel, Arle et al. (2014) ‘Multidimensional Quality Metrics (MQM): a framework for
declaring and describing translation quality metrics’. Revista Tradumàtica 12, 455–63.
López-Navarro, Irene et al. (2015) ‘Why do I publish research articles in English instead of my own language? Differences in Spanish researchers’ motivations across scientific domains’. Scientometrics 103, 939–76.
Lorenzo, Maria Pilar (2002) ‘Competencia revisora y traducción inversa’. Cadernos de Tradução 2:10, 133–66.
LRQA (Lloyd’s Register Quality Assurance) (2016) ‘Ablauf einer Zertifizierung’. www.lrqa.de/unsere-services/zertifizierung/ablauf-einer-zertifizierung/ [consulted 29 December 2018].
Lüdecke, Daniel (2018) ‘sjPlot: Data Visualization for Statistics in Social Science’. https://
doi.org/10.5281/zenodo.1308157, R package version 2.6.2, https://CRAN.R-project.
org/package=sjPlot.
Macken, Lieve et al. (2011) ‘Dutch Parallel Corpus: a balanced copyright-cleared Parallel
Corpus’. Meta 56:2, 374–90.
Magris, Marella (1999) ‘Il processo della revisione e la qualità del testo finale: alcune riflessioni basate su un manuale di infermieristica’. Rivista internazionale di tecnica della traduzione 4, 133–56.
Maier, Beate and Isabel Schwagereit (2016) ‘Ein Jahr ISO 17100—lohnt sich die Zertifizierung?’. Forum ATICOM 2, 13–19.
Major, George and Jemina Napier (2012) ‘Interpreting and knowledge mediation in the
healthcare setting: what do we really mean by “accuracy”?’ Linguistica Antverpiensia 11,
207–25.
Marashi, Hamid and Mehrnaz Okhowat (2013) ‘The comparative impact of editing texts
translated into Farsi with and without the original English texts’. Perspectives: Studies in
Translatology 21:3, 299–310.
Marin-Lacarta, Maialen and Mireia Vargas-Urpi (2019) ‘Translators revising translators: a
fruitful alliance’. Perspectives: Studies in Translation Theory and Practice 27:3, 404–18.
Marshall, Gordon, ed. (2003) Oxford Dictionary of Sociology.
Martin, Charles (2012) ‘The dark side of translation revision’. Translation Journal 16:1.
Martin, Pedro et al. (2014) ‘Publishing research in English-language journals: attitudes, strategies and difficulties of multilingual scholars of medicine’. Journal of English for Academic Purposes 16, 57–67.
Martin, Tim (2007) ‘Managing risks and resources: a down-to-earth view of revision’. Jour-
nal of Specialised Translation 8, 57–63.
Massardo, Isabella et al. (2016) ‘TAUS MT post-editing guidelines’. Amsterdam. www.taus.
net/think-tank/articles/postedit-articles/taus-post-editing-guidelines.
Massey, Gary (2016) ‘Incorporating ergonomics into the translation curriculum: why, where
and how’. Paper Presented at the 8th EST Congress, Aarhus.
Massey, Gary (2017) ‘Translation competence development and process-oriented pedagogy’.
In The Handbook of Translation and Cognition (Schwieter and Ferreira, eds.), 496–518.
Massey, Gary et al. (2019) ‘Training the translator trainers: an introduction’. Interpreter and
Translator Trainer 13:3, 211–15.
Matthews, Bob and Liz Ross (2010) Research Methods: A Practical Guide for the Social Sciences.
McDonough Dolmaya, Julie (2015) ‘Revision history: translation trends in Wikipedia’.
Translation Studies 8:1, 16–34.
McElhaney, Terrence and Muriel Vasconcellos (1988) ‘The translator and the postediting experience’. In Technology as Translation Strategy (Vasconcellos, ed.), 140–8.
Mellinger, Christopher D. (2017) ‘Translators and machine translation: knowledge and skills
gaps in translator pedagogy’. The Interpreter and Translator Trainer 11:4, 280–93.
Mellinger, Christopher D. (2018) ‘Re-thinking translation quality. Revision in the digital
age’. Target 30:2, 310–31.
Mellinger, Christopher D., and Gregory M. Shreve (2016) ‘Match evaluation and over-
editing in a translation memory environment’. In Reembedding Translation Process Research
(Muñoz Martin, ed.), 131–48.
Mendoza Garcia, Inmaculada and Nuria Ponce Marquez (2013) ‘The relevance of the
reviewer’s role: a methodological proposal for the development of the translation com-
petence’. Skopos 2, 87–110.
Mertin, Elvira (2006) Prozessorientiertes Qualitätsmanagement im Dienstleistungsbereich
Übersetzen.
Mitchell, Linda (2015) ‘The potential and limits of lay post-editing in an online commu-
nity’. In Proceedings of the 18th Annual Conference of the European Association for Machine
Translation, 67–74.
Mitchell, Linda et al. (2014) ‘Quality evaluation in community post-editing’. Machine Trans-
lation 28:3–4, 237–62.
Montalt, Vicent (2011) ‘Medical translation and interpreting’. In Handbook of Translation
Studies Online (Gambier and van Doorslaer, eds.).
Moorkens, Joss (2018a) ‘Eye tracking as a measure of cognitive effort for post-editing of machine translation’. In Eye Tracking and Multidisciplinary Studies on Translation (Walker and Federici, eds.), 55–69. Benjamins Translation Library 143.
Moorkens, Joss (2018b) ‘What to expect from neural machine translation: a practical in-class
translation evaluation exercise’. Interpreter and Translator Trainer 12:4, 375–87.
Moorkens, Joss and Sharon O’Brien (2017) ‘Assessing user interface needs of post-editors
of machine translation’. In Human Issues in Translation Technology: The IATIS Yearbook
(Kenny, ed.), 109–30.
Moorkens, Joss and Ryoko Sasamoto (2017) ‘Productivity and lexical pragmatic features in a contemporary CAT environment: an exploratory study in English to Japanese’. Hermes—Journal of Language and Communication in Business 56, 111–23.
Moorkens, Joss and Andy Way (2016) ‘Comparing translator acceptability of TM and SMT
outputs’. Baltic Journal of Modern Computing 4:2, 141–51.
Moorkens, Joss et al. (2016) ‘Developing and testing Kanjingo: a mobile app for post-editing’.
Tradumàtica: Tecnologies de la traducció 14, 58–66.
Moorkens, Joss et al., eds. (2018) Translation Quality Assessment: From Principles to Practice.
Morin-Hernandez, Katell (2009) ‘La révision comme clé de la gestion de la qualité des tra-
ductions en contexte professionnel’. PhD dissertation, University of Rennes. http://tel.
archives-ouvertes.fr/docs/00/38/32/66/PDF/TheseMorinHernandez.pdf [consulted
29 December 2018].
Mossop, Brian (1982) ‘A procedure for self-revision’. Terminology Update 15:3, 6.
Mossop, Brian (1990) ‘Translating institutions and “idiomatic” translation’. Meta 35:2,
342–55.
Mossop, Brian (1992) ‘Goals of a revision course’. In Teaching Translation and Interpreting:
Training, Talent, and Experience (Dollerup and Loddegaard, eds.), 81–90.
Mossop, Brian (2001) Revising and Editing for Translators.
Mossop, Brian (2006) ‘Has computerization changed translation?’. Meta 51:4, 787–93.
Mossop, Brian (2007a) ‘Empirical studies of revision: what we know and need to know’.
Journal of Specialised Translation 8, 5–20.
Mossop, Brian (2007b) Revising and Editing for Translators, 2nd edition.
Mossop, Brian (2011) ‘Revision’. In Handbook of Translation Studies, Volume 2 (Gambier and
Van Doorslaer, eds.), 135–39.
Mossop, Brian (2014a) Revising and Editing for Translators, 3rd edition.
Mossop, Brian (2014b) Revising and Editing for Translators (Translation Practices Explained). Online resource, 244 pp.
Mossop, Brian (2020) Revising and Editing for Translators, 4th edition.
Munday, Jeremy (2012) Evaluation in Translation: Critical Points of Translator Decision-making.
Nakanishi, Chiharu (2007) ‘The effects of different types of feedback on revision’. The Journal of Asia TEFL 4:4, 213–44.
Navarro, Ignasi (2012) ‘La postedició de continguts en publicacions diàries’. Tradumàtica 10,
185–91.
Neather, Robert (2012) ‘“Non-expert” translators in a professional community’. The Trans-
lator 18:2, 245–68.
Nida, Eugene Albert (1964) Toward a Science of Translating.
Niño, Ana (2008) ‘Evaluating the use of machine translation post-editing in the foreign
language class’. Computer Assisted Language Learning 21:1, 29–49.
Nitzke, Jean (2019) Problem Solving Activities in Post-editing and Translation from Scratch: A
Multi-method Study.
Nitzke, Jean and Katharina Oster (2016) ‘Comparing translation and post-editing: an anno-
tation schema for activity units’. In New Directions in Empirical Translation Process Research
(Carl et al., eds.), 293–308.
Nitzke, Jean et al. (2019) ‘Risk management and post-editing competence’. Journal of Spe-
cialised Translation 31, 239–59.
Nord, Britta (2018) ‘Die Übersetzungsrevision—ein Werkstattbericht’. Trans-kom 11:1, 138–50.
Nord, Christiane (1997) Translating as a Purposeful Activity: Functionalist Approaches Explained.
Nord, Christiane (2005) Text Analysis in Translation: Theory, Methodology, and Didactic Applica-
tion of a Model for Translation-oriented Text Analysis.
Notaristefano, Maristella (2010) ‘La revisione di una traduzione specializzata: interventi e profilo del revisore’. Rivista internazionale di tecnica della traduzione 12, 215–25.
O’Brien, Sharon (2002) ‘Teaching post-editing: a proposal for course content’. In Proceedings
of the 6th EAMT Workshop Teaching Machine Translation, 99–106.
O’Brien, Sharon (2005) ‘Methodologies for measuring the correlations between post-editing effort and machine translatability’. Machine Translation 19:1, 37–58.
O’Brien, Sharon (2008) ‘Processing fuzzy matches in translation memory tools: an eye
tracking analysis’. In Looking at Eyes: Eye-Tracking Studies of Reading and Translation Pro-
cessing (Göpferich et al., eds.), 79–102.
O’Brien, Sharon (2010) ‘Introduction to post-editing: who, what, how and where to next?’.
In The Ninth Conference of the Association for Machine Translation in the Americas.
O’Brien, Sharon (2012) ‘Translation as human computer interaction’. Translation Spaces 1:1,
101–22.
O’Brien, Sharon et al. (2011) ‘Translation quality evaluation framework’. www.taus.net/component/rsfiles/download?path=Reports%252FFree%2BReports%252Ftausdynamicquality.pdf.
O’Brien, Sharon et al. (2014) ‘Kanjingo: a mobile app for post-editing’. In Third Workshop
on Post-Editing Technology and Practice, 125.
O’Brien, Sharon et al. (2017) ‘Irritating CAT tool features that matter to translators’. Hermes:
Journal of Language and Communication in Business 56, 145–62.
O’Brien, Sharon et al. (2018) ‘Machine translation and self-post-editing for academic writ-
ing support: quality explorations’. In Translation Quality Assessment: From Principles to
Practice (Moorkens et al., eds.), 237–62.
O’Brien, Tim and Dennis Guiney (2018) Staff Wellbeing in Higher Education: A Research Study for Education Support Partnership.
O’Curran, Elaine (2014) ‘Machine translation and post-editing for user generated content:
an LSP perspective’. In Proceedings of the 11th Conference of the Association for Machine
Translation in the Americas, Vol. 2: MT Users Track, 50–4.
Olohan, Maeve (2011) ‘Translators and translation technology: the dance of agency’. Trans-
lation Studies 4:3, 342–57.
Omer, Ahmad A. and Mohamed E. Abdulahrim (2017) ‘The criteria of constructive feed-
back: the feedback that counts’. Journal of Health Specialties 5, 45–8.
Ortiz-Boix, Carla and Anna Matamala (2017) ‘Assessing the quality of post-edited wildlife
documentaries’. Perspectives: Studies in Translatology 25:4, 571–93.
Ortiz-Martinez, Daniel et al. (2016) ‘Integrating online and active learning in a computer-
assisted translation workbench’. In New Directions in Empirical Translation Process Research
(Carl et al., eds.), 57–76.
Oster, Katharina (2017) ‘The influence of self-monitoring on the translation of cognates’. In Empirical Modelling of Translation and Interpreting (Hansen-Schirra et al., eds.), 23–39.
Ottmann, Angelika, ed. (2017) Best practices—Übersetzen und Dolmetschen: Ein Nachschlagewerk aus der Praxis für Sprachmittler und Auftraggeber.
Parra Escartin, Carla et al. (2017) ‘Machine translation as an academic writing aid for medi-
cal practitioners’. In Proceedings of the MT Summit XVI, vol 1: Research Track, 254–67.
Parra-Galiano, Silvia (2001) ‘La revisión de traducciones en la didáctica de la traducción: cara y cruz de una misma moneda’. Sendebar 12, 373–86.
Parra-Galiano, Silvia (2006) ‘La revisión y otros procedimientos para el aseguramiento de la calidad de la traducción en el ámbito profesional’. Turjuman 15:2, 11–48.
Parra-Galiano, Silvia (2007a) ‘La revisión como procedimiento para el aseguramiento de la
calidad de la traducción: grados, tipos y modalidades de revisión’. SENEZ 32, 97–122.
Parra-Galiano, Silvia (2007b) ‘Propuesta metodológica para la revisión de traducciones: principios generales y parámetros’. TRANS 11, 197–214.
Parra-Galiano, Silvia (2011) ‘La revisión en la norma europea EN-15038 para “servicios de traducción”’. Entreculturas: revista de traducción y comunicación intercultural 3, 165–87.
Parra-Galiano, Silvia (2015) ‘El conocimiento experto (pericia) en la revisión de traduc-
ciones: clave en la gestión y propuestas de investigación’. In VI Congreso Internacional
sobre Traducción e Interpretación organizado por la Asociación Ibérica de Estudios de Traducción e
Interpretación (AIETI), celebrado en la Universidad de Las Palmas de Gran Canaria, 23–25
de enero de 2013. (Extremera, ed.), 587–603.
Parra-Galiano, Silvia (2016) ‘Translation revision: fundamental methodological aspects and effectiveness of the EN 15038:2006 for translation quality assurance’. In Interchange between Languages and Cultures: The Quest for Quality (Zehnalova et al., eds.), 39–52.
Parra-Galiano, Silvia (2017) ‘Conceptos teóricos fundamentales en la revisión de traducciones y su reflejo en el Manual de revisión de la DGT y en las normas ISO 17100:2015 y EN 15038:2006’. Hermeneus: Revista de la Facultad de Traducción e Interpretación de Soria 19, 270–308.
Pérez-Gonzalez, Luis and Şebnem Susam-Saraeva (2012) ‘Non-professionals translating and interpreting: participatory and engaged perspectives’. The Translator 18:2, 149–65.
Pergnier, Maurice (1990) ‘Comment dénaturer une traduction’. Meta 35:1, 219–25.
Schwartz, Lane (2014) ‘Monolingual post-editing by a domain expert is highly effective for translation triage’. In Proceedings of the Third Workshop on Post-editing Technology and Practice (WPTP-3), 34–44.
Schwieter, John W. and Aline Ferreira, eds. (2017) The Handbook of Translation and Cognition.
Scocchera, Giovanna (2013) ‘What we talk about when we talk about revision: a critical
overview on terminology, professional practices and training, and the case of literary
translation revision in Italy’. Forum: International Journal of Interpretation and Translation
11:2, 141–74.
Scocchera, Giovanna (2014) ‘What kind of training for literary translation revisers?’. inTRA-
linea Special Issue: Challenges in Translation Pedagogy. http://www.intralinea.org/specials/
article/what_kind_of_training_for_literary_translation_revisers [consulted 6 September
2018].
Scocchera, Giovanna (2015) ‘Computer-based collaborative revision as a virtual lab of edito-
rial/literary translation genetics’. Linguistica Antverpiensia, New Series: Themes in Transla-
tion Studies 14, 168–99.
Scocchera, Giovanna (2016a) ‘Dalla cacofonia all’armonia: il ruolo della revisione collabora-
tiva nella traduzione editoriale’. mediAzioni 21.
Scocchera, Giovanna (2016b) ‘The sociology of revision: results of a qualitative survey on
the professional practice of translation revision for publishing purposes and its agents,
and how they relate to what we know (or we think we know) about revision’. Paper read
at the Eighth European Society for Translation Studies (EST) Congress, 15–17 September,
Aarhus, Denmark.
Scocchera, Giovanna (2017a) La revisione della traduzione editoriale dall’inglese all’italiano. Ricerca, professione, formazione.
Scocchera, Giovanna (2017b) ‘Translation revision as rereading: different aspects of the translator’s and reviser’s approach to the revision process’. Studies in Book Culture (Translators and Their Readers) 9:1, 1–20.
Scocchera, Giovanna (2018) ‘Collaborative revision in editorial/literary translation: some
thoughts, facts and recommendations’. In Traduire à plusieurs. Collaborative Translation
(Monti and Schnyder, eds.), 281–94.
Scocchera, Giovanna (2019) ‘The competent reviser: a short-term empirical study on revi-
sion teaching and revision competence acquisition’. The Interpreter and Translator Trainer
14:1, 19–37.
Screen, Benjamin (2017) ‘Machine translation and Welsh: analysing free statistical machine
translation for the professional translation of an under-researched language pair’. Journal
of Specialised Translation 28, 317–44.
Screen, Benjamin (2019) ‘What efect does post-editing have on the translation product
from an end-user’s perspective?’ Journal of Specialised Translation 31, 133–57.
Shih, Claire Yi-yi (2006) ‘Revision from translators’ point of view. An interview study’.
Target 18:2, 295–312.
Shreve, Gregory M. et al. (2014) ‘Efficacy of screen recording in the other-revision of translations: episodic memory and event models’. In MonTI Special Issue—Minding Translation (Muñoz Martin, ed.), 225–45.
Shuttleworth, Mark (2002) ‘Combining MT and TM on a technology-oriented translation
masters: aims and perspectives’. In Proceedings of the 6th EAMT Workshop on Teaching
Machine Translation, 123–9.
Silva, Roberto (2014) ‘Integrating post-editing MT in a professional translation workflow’. In Post-Editing of Machine Translation: Processes and Applications (O’Brien et al., eds.), 24–50.
Simeoni, Daniel (1995) ‘Translating and studying translation: the view from the agent’. Meta
40:3, 445–60.
Simianer, Patrick et al. (2016) ‘A post-editing interface for immediate adaptation in statisti-
cal machine translation.’ In Proceedings of COLING 2016, the 26th International Conference
on Computational Linguistics: System Demonstrations, 16–20.
Siponkoski, Nestori (2013) ‘Translators as Negotiators: a case study on the editing process
related to contemporary Finnish translation of Shakespeare’. New Voices in Translation
Studies 9, 20–37.
Skelton, John R. and Sarah J. L. Edwards (2000) ‘The function of the discussion section in academic medical writing’. BMJ: British Medical Journal 320:7244, 1269–70.
Smith, Thomas J. (2007) ‘The ergonomics of learning: educational design and learning
performance’. Ergonomics 50:10, 1530–46.
Snover, Matthew et al. (2006) ‘A study of translation edit rate with targeted human annotation’. In Proceedings of AMTA 2006, 223–31.
Solum, Kristina (2018) ‘The tacit influence of the copy-editor in literary translation’. Perspectives: Studies in Translation Theory and Practice 26:4, 543–59.
Somers, Harold (1997) ‘A practical approach to using machine translation software’. The
Translator 3:2, 193–212.
Sosoni, Vilelmini (2017) ‘Casting some light on experts’ experience with translation crowd-
sourcing’. Journal of Specialised Translation 28, 362–84.
Specia, Lucia (2011) ‘Exploiting objective annotations for measuring translation post-editing effort’. In 15th Conference of the European Association for Machine Translation, 73–80.
Specia, Lúcia et al. (2018) Quality Estimation for Machine Translation.
Spies, Carla-Marie (2013) ‘Die wisselwerking tussen die agente betrokke by die pub-
likasieproses van literêre vertalings’. Unpublished doctoral dissertation, Stellenbosch
University, Stellenbosch, South Africa.
Statistics Canada. www.statcan.gc.ca/
Stoeller, Willem (2011) ‘Global virtual teams’. In Translation and Localization Project Manage-
ment: The Art of the Possible (Dunne and Dunne, eds.), 289–317.
Tardaguila, Esperanza (2009) ‘Reflexiones sobre la revisión de traducciones’. Mutatis Mutandis 2:2, 367–76.
Tatsumi, Midori (2009) ‘Correlation between automatic evaluation metric scores, post-editing
speed, and some other factors’. In Proceedings of MT Summit XII, 332–3.
TAUS (2010) ‘Machine translation post-editing guidelines’. www.taus.net/academy/best-
practices/postedit-best-practices/machine-translation-post-editing-guidelines [consulted
1 November 2018].
Teixeira, Carlos da Silva Cardoso (2014) ‘The handling of translation metadata in transla-
tion tools’. In Post-Editing of Machine Translation: Processes and Applications (O’Brien et al.,
eds.), 109–25.
Teixeira, Carlos da Silva Cardoso (2015) ‘The impact of metadata on translator performance: how translators work with translation memories and machine translation’. PhD dissertation, Universitat Rovira i Virgili.
Temizöz, Özlem (2016) ‘Postediting machine translation output: Subject-matter experts
versus professional translators’. Perspectives 24:4, 646–65.
Temizöz, Özlem (2017) ‘Translator post-editing and subject-matter expert revision versus
subject-matter expert post-editing and translator revision’. Translator Education and Trans-
lation Studies 4:2, 3–21.
Temnikova, Irina (2010) ‘Cognitive evaluation approach for a controlled language post-editing
experiment’. In Proceedings of the 7th International Conference on Language Resources and
Evaluation, 3485–90.
Thaon, Brenda (1984) ‘The role of a revision course in a translation program’. In La Traduction: l’universitaire et le praticien (Thomas et al., eds.), 297–301.
Thelen, Marcel (2013) ‘Translation quality assessment in translator training’. In Alles hängt
mit allem zusammen. Translatologische Interdependenzen (Ende et al., eds.), 191–202.
Thicke, Lori (2013) ‘The industrial process for quality machine translation’. Journal of Spe-
cialised Translation 19, 8–18.
Toral Ruiz, Antonio (2019) ‘Post-editese: an exacerbated translationese’. In Proceedings of
Machine Translation Summit XVII Volume 1: Research Track, 273–81.
Toral Ruiz, Antonio and Victor M. Sanchez-Cartagena (2017) ‘A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions’. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers.
Toral Ruiz, Antonio et al. (2018) ‘Post-editing effort of a novel with statistical and neural machine translation’. Frontiers in Digital Humanities 5, 1–11.
Torrejón, Enrique and Celia Rico (2002) ‘Controlled translation: a new teaching scenario
tailor-made for the translation industry’. In Proceedings of the 6th EAMT Workshop on
Teaching Machine Translation, 107–16.
Toury, Gideon (1995) Descriptive Translation Studies—and Beyond.
Toury, Gideon (2012) Descriptive Translation Studies—and Beyond.
Tradumàtica Research Group (n.d.) ‘POST-IT: Equipo de investigación’ [POST-IT: Research team]. http://xl8.link/post-it.
Tragora Formación. (n.d.) ‘Curso online (80 h)—Posedición para traductores EN-ES’.
www.tragoraformacion.com/cursos/traduccion/curso-online-posedicion-traductores/
TÜV Rheinland DIN CERTCO (2019) ‘Registrations and certifications’. www.dincertco.tuv.com/search?locale=en&q=ISO+17100 [consulted 3 and 4 January 2019].
Uotila, Anna (2017) ‘Revision and quality assurance in professional translation: a study of revision policies in Finnish translation companies’. Master’s thesis, University of Tampere. http://urn.fi/URN:NBN:fi:uta-201712202991 [consulted 26 October 2018].
Valdez, Susana (2019) ‘Perceived and observed translational norms in biomedical transla-
tion in the contemporary Portuguese translation market: a quantitative and qualita-
tive product- and process-oriented study’. PhD dissertation, University of Lisbon and
Ghent.
Van Brussel, Laura et al. (2018) ‘A fine-grained error analysis of NMT, SMT and RBMT output for English-to-Dutch’. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 3799–804.
Vandepitte, Sonia and Joleen Hanson (2018) ‘The role of expertise in peer feedback analysis:
exploring variables and factors in a translation context’. In Quality Assurance and Assess-
ment Practices in Translation and Interpreting (Huertas-Barros et al., eds.), 315–25.
Van der Meer, Anne Maj (2015) ‘Post-editing course’. Online training course. www.taus.
net/academy/taus-post-editing-course.
van der Meer, Jaap and Achim Ruopp (2014) Machine Translation Market Report.
Van Egdom, Gys-Walt and Mark Pluymaekers (2019) ‘Why go the extra mile? How different degrees of post-editing affect perceptions of texts, senders and products among end users’. Journal of Specialised Translation 31, 158–76.
Van Egdom, Gys-Walt and Winibert Segers (2019) Leren vertalen. Een terminologie van de
vertaaldidactiek.
Van Egdom, Gys-Walt et al. (2018a) ‘How to put the translation test to the test? On prese-
lected items evaluation and perturbation’. In Quality Assurance and Assessment Practices in
Translation and Interpreting (Huertas-Barros et al., eds.), 26–56.
Van Egdom, Gys-Walt et al. (2018b) ‘Revising and evaluating with TranslationQ’. Bayt
Al-Hikma Journal for Translation Studies 2:2, 25–56.
Van Egdom, Gys-Walt et al. (2019) ‘Ergonomic quality in trainer-to-trainee revision pro-
cesses: a pilot study’. Paper Read at EST Conference, Stellenbosch.
Van Egdom, Gys-Walt et al. (forthcoming, 2020) ‘Ergonomics in Translator and Interpreter
Training’, special issue of The Interpreter and Translator Trainer.
Van Rensburg, Alta (2012) ‘Die impak van revisie op vertaalde eksamenvraestelle in ’n
hoëronderwysomgewing’. LitNet Akademies 9:2, 392–412.
Van Rensburg, Alta (2017) ‘Developing assessment instruments: the effect of a reviser’s profile on the quality of the revision product’. Linguistica Antverpiensia, New Series: Themes in Translation Studies 16, 71–88.
Van Waes, Luuk and Mariëlle Leijten (2015) ‘Fluency in writing: a multidimensional perspective on writing fluency applied to L1 and L2’. Computers and Composition 38, 79–95.
Vasconcellos, Muriel (1987a) ‘A comparison of MT post-editing and traditional revision’.
In Proceedings of the 28th Annual Conference of the American Translators Association, 409–15.
Vasconcellos, Muriel (1987b) ‘Post-editing on-screen: machine translation from Spanish
into English’. In Proceedings of the Conference Translating and the Computer 8 (Picken, ed.),
133–46.
Vasconcellos, Muriel and Dale Bostad (1992) ‘Machine translation in a high-volume transla-
tion environment’. In Computers in Translation: A Practical Appraisal (Newton, ed.), 58–77.
Venkatesh, Viswanath and Fred Davis (2000) ‘A theoretical extension of the technology acceptance model: four longitudinal field studies’. Management Science 46, 186–205.
Vieira, Lucas Nunes (2017a) ‘Cognitive effort and different task foci in post-editing of machine translation: a think-aloud study’. Across Languages and Cultures 18:1, 79–105.
Vieira, Lucas Nunes (2017b) ‘From process to product: links between post-editing effort and post-edited quality’. In Translation in Transition: Between Cognition, Computing and Technology (Jakobsen and Mesa-Lao, eds.), 162–86.
Vieira, Lucas Nunes and Elisa Alonso (2018) ‘The use of machine translation in human translation workflows: practices, perceptions and knowledge exchange’. www.iti.org.uk/images/downloads/ITIReport-Lucas.pdf.
Vilar, David et al. (2006) ‘Error analysis of machine translation output’. In Proceedings of the
5th International Conference on Language Resources and Evaluation (LREC’06), Genoa.
Vintar, Špela et al. (2017) ‘Labour market needs survey and the DigiLing model curricu-
lum’. In Project DigiLing: TransEuropean e-Learning Hub for Digital Linguistics. www.digiling.
eu/wp-content/uploads/2017/05/DigiLingReport_IO1.pdf.
Von Meck, Anoeschka (2004) Vaselinetjie.
Von Meck, Anoeschka (2009) My Name Is Vaselinetjie.
Wagner, Emma (1985) ‘Post-editing Systran—a challenge for commission translators’. Ter-
minologie et Traduction 3.
Wagner, Emma et al. (2014) Translating for the European Union Institutions.
Way, Catherine (2009) ‘Bringing professional practices into translation classrooms’. In Pro-
ceedings of the Eighth Portsmouth Translation Conference, ‘The Changing Face of Translation’,
131–42.
Wicker, Allan W. (1969) ‘Attitudes versus actions: the relationship of verbal and overt behav-
ioral responses to attitude objects’. Journal of Social Issues 25:4, 41–78.
Willey, Ian and Kimie Tanimoto (2015) ‘“We’re drifting into strange territory here”: what
think-aloud protocols reveal about convenience editing’. Journal of Second Language Writ-
ing 27, 63–83.
Winterbach, Ingrid (2002) Niggie.
Winterbach, Ingrid (2007) To Hell with Cronjé.
Winterbach, Ingrid (2010) To Hell with Cronjé.
INDEX
Note: page numbers in italics indicate a figure and page numbers in bold indicate a table
on the corresponding page. Page numbers followed by “n” indicate a note.
ability 190–1; to adapt client specifications 201; to delete MT suggestion 49; to detect errors 7; NMT 52; participants 98; thinking 89; translators 81
academic writing 91; see also writing
acceptability 54
accuracy 139; comparative revision 139; consistency and 103; errors 9, 234, 242; semantic 118; translations 54, 171–2
Across (software) 240
actions: editing 44–5; keyboard 42; non-linear 42–3
active learning 47
adaptability/adaptation 59, 210, 219–20
additions 28, 30, 31, 42, 54, 100
adequacy 54, 66
adequate revision 145
adjectives 102
adverbs 102
Afrikaans 166
agent-driven tasks 181
Akaike's Information Criterion (AIC) 60, 64
A-language 189–90, 198, 200; see also language
American Translators Association (ATA) 228
analysis 56–8; of answers 156; case studies based 134; data 91, 97, 97, 97–103, 99–103, 120, 153–4; edit 104; editing efficiency 58–9; empirical 180; errors 3, 52; human translation 57, 57; intervention optimality 58–9; investigation results 129; machine-translation (MT) 57, 57, 190; mixed effects models 59–60; non-professional editing 78–86; post-editing 26–31, 188; quantitative 129; of real-life data 182; revisions 26–31, 58–9, 170, 188; risk 156, 159; sentence-level 60; thematic 154
analytical skills 10
Anglophones 74, 77–8
annotations 54, 58; data 96; decision tree 105; edit 95–7, 96; scale 14
anxiety 90
ArisToCAT project 70
assessment of quality 3–4, 227
attitudes: behaviour 149, 150; concepts of 149; of revisers and translators 148–64; subcompetences 193, 194
Austria 129, 230
Austrian Economic Chambers (WKO) 119
Austrian Standards International (ASI) 113
authentic experiential learning (AEL) 222
author-revision 181
automatic metric 56
automation of process 87
behaviour(s): attitudes 149–50; editing 32, 45–6; expectations 149; over-editing 33; participants 21; precursors of 149; reading 45; revision 9; translators 22, 46; typing 31, 45
Belgium 55, 230
beliefs 149–52; about other agents 150; characterization 149; educators 227; questionnaire 148; reviser 154; translator 154; types of 148–9
bilingualism 15, 74, 126–7, 191
Bing 89, 240
B-language 189–90, 198, 200; see also language
briefs: assignments 211; PE 22, 201, 229; revised translation 8; revisers 4, 142; revision training 7; translations 201, 205, 212, 216
Canada 73–4, 80
CasMaCat project 47
Catalan 239
categorization scheme 54
certificate holders, described 117
certification: advantages 113–14; importance 114; process of LICS 115–16; process of LRQA 114–15; quality standards 113–16; registration and 117
challenges: of annotating edits 98; attitudes 37; editing tools 47–8; educators 242; LSP workflows 147; post-editing 15; post-editors 52; real-life 202; translation 170; translators 46; see also revision and post-editing
changes: in DIN EN 15038 110; essential 22; grammatical 28, 30; linguistic 11; necessary 54, 58; post-editing 12; preferential 22, 31, 52, 57–8; revisers 8; semantic 103; stylistic 33; text content 140; unnecessary 21, 54, 57
civil servants 73–4, 77, 80, 82, 84–5
claims, productivity 41
clients 38, 41, 83; enterprise 39; French as language of work with 78–9; specifications 201; translators 79, 84; see also briefs
cognitive effort 13–14
cognitive ergonomics 208
cognitive load 47–8
Cohen's Kappa 28
coherence 25
collaborative networks 132
commas 27, 31, 102
comments: on deviations 116; editors 172, 175; inadequate for translation between German 40; inserting 199; interviewees 80; justifying 98; self-revision 180; translators 177, 189
community, machine-translation (MT) 43
companies: background 136; choices of, certified 126, 126; classification, by size 120; operators 136; profile 120–1; service range 136; size of, certified 122, 123
comparative examination 167
comparative reading 167
comparative revision 139
compatibility 210, 222–3
competences: core 192; defined 190; discursive 87; instrumental 192; instrumental/tool 192; interpersonal 192; metalinguistic-descriptive 193; post-editing 12, 190–5; professional 192; psycho-physiological 192; revision 190–5; skills and 38; strategic 192
complex editing 45
compliance: certified/uncertified companies with EN 15038 124, 125; with guidelines of EN 15038 124; recommendations for quality standards 123–5
computer-aided translation (CAT) 2–3, 9, 21, 37, 39, 43, 193, 209, 235, 238–41, 243
computer-assisted human translation 227
conflict 46
conjunctions 102
consistency 25, 41, 210, 221; accuracy and 103; of corrections 211; defined 210; errors 25, 220; feedback 224; objectivity and 204; PE 41; translations 118
contact and study hours 237–9
content: editing 133, 139–41; machine-translation (MT) 47; parameters 139
continuum 133–4, 134, 144–6
controlled language 242
copy-editing 133, 181
copy editor 245
core competences 192
core questions 235–8, 236–7
Corporate Translation Technology Survey 226–7
corpus approaches 3
correcting: human-translated text 50; identical errors 204; machine-translation output 2, 94; same errors 211; spelling errors 43; translators' errors 131; typographical mistakes 42; see also changes
Council of the European Communities 10, 11
English-to-Afrikaans translations 6
English-to-French 80
enterprise clients 39
ergonomics: cognitive 208; criteria 216; defined 208; in education 209; instructional 209; learning 209; organizational 208; physical 208; translator trainer 208–10
errors: accuracy 9, 234, 242; adequacy 54, 66; analysis 3, 52; categorization 232, 241–2; consistency 25, 220; correction 46; critical 53; grammatical 32, 54, 85; human translations 25, 55; identification 46; introducing 12, 16; machine-translation (MT) 41, 50–1, 55; management 210, 220–1; nature and distribution 50; ortho-typographic 103; revisers 51; spelling 43; style and 25, 85; syntactic 32; types 25, 40–1, 54, 55; typographic 103; typologies 95, 241; unnatural 25
essential changes/edits 22, 91–2
ethical and deontological practices 234
e-Translation 241; see also translations
European Commission 2, 14–15
European Master's in Translation (EMT) 39, 191, 203
European Qualifications Framework 190
European Union 1
evaluation: human task-based 231–2; of MT output 231, 231–2; of PE text 243
examination 206, 239
Excel 60
expectations: behaviour 149; concepts of 149; described 149; of reader 40; revision 149
experimental set-up: post-editing human translations 53–60; post-editors 94, 95
explicit control 210, 218–19
extralinguistic subcompetences 191
eyetracking/eye-tracking: data 22, 33; keylogging 9; subjective evaluations 12–13; technology 13–14
failure of translators 40
feasibility, monolingual post-editing 15
feedback: consistency 224; group 189; high-quality 206; translation 214
Finland 230
Finnish 11, 134–7
fixed rate 233
formatting 117
France 10, 230
Francophones 74, 77–8, 80, 85, 87
French 22, 74, 77, 90, 239; Acadian 85; Canadian 79; Continental 85; editing translations by non-professionals 80–5; into English 75; English translation of 9; idiomatic administrative 81; as language of work with clients 78–9; Quebec 85; translations 73, 81; varieties 86; vernacular 85
French–Finnish 197
full post-edit/edit(ing) (FPE) 23–4, 23–4, 26, 29, 31
German 4, 22, 90; English into 25, 33; native speakers 24; newspaper 25; translations 5
German–Finnish 197
German-to-Italian translations 5
Germany 114, 117, 121, 129, 230
Global Autonomous Language Exploitation programme 41
Google Translate 23–4, 53–4, 89, 93–4, 240
graduate translators 39
grammar 25, 28, 30, 94, 117; changes 28, 30; errors 32, 54, 85; metaphor 36
Great Britain 230
Greece 230
GREVIS research group 8
group feedback 189
guidance 210, 216–17
high-quality feedback 206
house style 9, 167, 182n4
human-assisted machine translation 227; see also machine-translation (MT)
Human-Computer Interaction 47
human task-based evaluation 231–2
human translations (HT) 25, 38, 46; analysis 57, 57; errors 25, 55; participants 55–6, 56; post-editing 51, 61; predictable errors compared to 50; quality 50; reliability 55; revision 62; text origin 60, 61, 63–7; see also machine translation (MT)
Hungarian 5
hyper-revisions 51, 57, 162
hypothesis 53, 168
identification: errors 46; of problems 54–5, 55
identity crisis 38
idiom: compliance with client's style guide 139; translations 79; word 3
idiomatic administrative French 81
independent variables 59
content 47; errors 41, 50–1, 55, 241–2; online 89; output 2–3, 13, 23, 26–7, 31–2, 41, 43, 46, 50–1, 53–4, 61, 66, 98, 103, 229, 231–2, 234, 237, 241–2; participants 55–6, 56; post-editing of 2, 12–15; process 36, 40; quality 44, 53; statistical 35, 48, 242; suitability 91; technology 52; text 15, 60, 61, 63–7; tools 243; translation memories (TM) and 3; use by translators 38; use in regular translation courses 241; see also post-editing human translations; post-editors
machine translation post-editing (MTPE) 226–45; deontological issues 243–4; literature 226–7; methodology 228–9; participants 228; plans to increase 241; qualitative results 238–44; quantitative results 229–38, 230, 231–7; training 244; see also post-editing (PE)
Malta 230
Mann-Whitney U test 28–9
MateCat 240
matricial norms 42
mechanics 138, 189
MemoQ 240
memory see translation memories (TM)
MemSource 9, 240
metalinguistic-descriptive competence 193
metalinguistic knowledge 193
micro processing working style 8
Microsoft: Excel 240; Office suite 239; Word 94, 240
mixed effects models 59–60, 63
monolingual examination 206
monolingual post-editing 15
monolingual revision 126–7
Moses tokenizer 57
motivation: post-editors 91–2; revisers and translators 148
movement 44–5
Multilingual Translation Studies Degree Programme, University of Turku 196–7, 197
Multilingual Translation Workshop I/II 201
necessary changes 54, 58; see also changes
needs: for interactive editing tools 46–8; technological 37
Netherlands 55
neural machine translation (NMT) 33, 35, 41, 200; error typology 241; interactive systems 48; online 53; output 39, 52
New Brunswick Translation Bureau 75–6, 79–80, 84–5; Official Languages Act 74–5
NMT see neural machine translation (NMT)
non-linear actions 42–3
non-linear writing process 42
non-native speakers 90; of English 89–90; errors 92, 98
non-official languages 74
non-professional editing/revision 73–88; editors 80; findings and analysis 78–86; French translations by 80–5; sociolinguistics 74–6; work environment and methodology 76–8; see also editing/edit
non-professional interpreting and translation (NPIT) 73, 86
non-professional post-editing 12
normative attitudes 150, 150
normative expectations 150, 150
norms: attitudes and expectations 149–52; competences and 160; language 82, 85, 156; linguistic 87, 158, 198; professional 87–8; terminological 156, 160; translations 149; typology of reviser interventions 5
Norwegian: literary translators 5; novels 5
nouns 102
omissions 25, 42, 52
online learning 47
online questionnaire 7, 14, 119–20, 135, 153, 226, 228–9, 241
open-ended questions 141
opinion of educators 237
optimality, intervention 58–9
optimisers 37, 40
organizational ergonomics 208
Organization for Economic Cooperation and Development 11
orthography 25, 94
ortho-typographic errors 103
other-revision 178–81; authors' changes 175; competence 6; feedback 198; one-dimensional and linear 168; PE process 8; quality standards 109; second translator 165; self-revision and 189; by students 7; translation workflow 137
over-edit(ing) 21; as abstract concept 22; behaviour 33; classification of 30–1, 30–1; instructions to avoid 22; preferential changes 22; quantification of 27–30; translation memories (TM) 21
over-revision 58, 162
PACTE model 191
Pan-American Health Organization 2
professional 94; reviewing and 205; symbols 11; text 92
pseudo-editing 234
psycho-physiological competence 192
psycho-physiological subcompetences 193, 194
public service, New Brunswick 79–80
punctuation 23, 28, 30, 31, 94, 96
qualitative results, MTPE 238–44
quality 68; assessment 3–4, 227; certification 113–16; compliance recommendations 123–5; criteria 127–9, 128, 207; estimation 43, 231, 232; human translation 50; management systems 112, 113; MT 44, 53; overview 109–11; post-editing 16, 104; in practice 121; of product 7–9; registration 116–17; revision 7–9, 53, 58–9, 63–7, 64–6, 109–30, 203–25; standards 22, 30–1, 109–30; TM 44; translation 32–3, 37, 51, 53, 111–13; for TSPs 111–12
quantification of over-editing 27–30
quantitative analysis 129
quantitative results, MTPE 229–38, 230, 231–7
Quebec 74, 80
questionnaires 92–3, 120, 136, 148–9; beliefs 148; core 235–8, 236–7; data collection 153; design and data analysis 153–4; end-of-study 7; interview 120, 134; online 7, 14, 119–20, 135, 153, 226, 228–9, 241; open-ended 141; pilot testing 153; post-eyetracking 9; post-task 94, 98; pre-task 94; revisers 153–4; translators 153–4
rapid post-edit(ing) 32
rates: editing 43–4, 102, 102; fixed 233; variable 233
raw output and final quality 233
RBMT 51, 200, 241
reader expectations 40
reading 36, 40, 42, 45
real-life challenges 202
reception: EN 15038 119–20, 122; ISO 17100 119–20
redundancy 41
registration: advantages of 117; certificates and 117; in EN 15038 121; fee-based 116–17; numbers 117; quality standards 116–17
related and general matters 230, 230–5, 231–5
reliability: human translations 55; revisers 207; translation 51
replacement 44
research 1–17; community 227; core questions 235–8, 236–7; design 135–6; hypotheses 53; post-editing human translations 50–3; post-editors 90–1; questions 52–3, 60, 92, 168; skills 10
retranslating 8, 191–2, 194, 198
reviser-revision 181
revisers 148–63; attitudes 149–64, 150; awareness 50; beliefs 149, 150, 154; biomedical translation 152–3; changes 8; data collection 153; errors 51; expectations 149–52, 150, 156–7; function 1; interventions 5; literary 11; loyalties 11; methodology 153–5; motivations of 148; norms 149–52; overview 148–9; participants 154–5; professional 7–8, 79, 86, 90–1; profiles 155; proposed changes 5; qualifications 11; questionnaire 153–4; reliability 207; translators 10, 11, 32
revising 36
revision and post-editing 21–34; analysis and results 26–31; classification of over-editing instances 30–1, 30–1; compared 35–49, 50–70; over-editing 21; quantification of over-editing instances 27–30; research 3; studies 22–6; time and editing effort 26, 26–7; see also post-editing (PE)
revision continuum 133–4, 134; as hypothesis 146; language service providers (LSPs) 145; role and benefits of 144–6; as service design tool 144–5; uses 134
revision quality: construct of 205–10; indicators of 212; with translationQ 210–15; in translator training 203–25
Revision Routine Activation 191
revision(s) 188; adequate 145; aspects of 118–19; attitudes 149; bilingual 126–7; charges 56; comparative 139; compared with PE 40; competences 190–5, 194; content editing 141; course 1, 4, 6, 10, 188, 204; defective 4, 8; defined 1–2, 36, 166, 205; empirical studies 5–11; in EN 15038 117–19; encompassing term 180; expectations 149; history 1; human translations (HT) 62; interventions 8; ISO 17100 117–19; levels 142, 142; in literary translation 165–82; methods