Simulated Training for Lifeboat Launch Skills
By
© Alan Dalton
A Thesis submitted to the
School of Graduate Studies
In partial fulfillment of the requirements for the degree of
Master of Science
School of Human Kinetics and Recreation
Memorial University of Newfoundland
May, 2015
St. John’s, Newfoundland
Abstract
Freefall lifeboats (FFLB) are used worldwide as a means for evacuation and
escape. FFLB launch training is currently restricted to benign weather conditions
due to the inherent risk to personnel safety and asset integrity. Under such
circumstances, the coxswain cannot develop the heuristic techniques necessary for
launching under the more dangerous and unpredictable evacuation and environmental
conditions likely in a real emergency. Simulators can provide enhanced training
opportunities for
these conditions, so long as the simulation technologies and training paradigms
address the contextual, mathematical, and behavioral demands of the physical training.
A high level of fidelity should invoke a level of participant presence suitable
for performance-based learning and training objectives. The purpose of this research
was to determine the effect of post-launch feedback on the rate of skill acquisition of
novice participants performing simulated FFLB launches. Participants in two
independent groups each went through 24 consecutive simulated launches under
varying sea-states and visual conditions. One group was given pictorial feedback
about the quality of each launch. The rate of skill acquisition and time to launch of
this group was compared to a group that had no feedback.
Results show that pictorial feedback did not affect launch success or time to
launch in our FFLB launching trials; wave height had the greatest effect on launch
success; visual clarity had a significant effect on launch time only in the no-feedback
group; and sense of presence was neither affected by the inclusion of feedback nor
correlated with performance measures.
Acknowledgements
I would also like to acknowledge the entire Virtual Environment for Knowledge
Mobilization Team, with special thanks to Jennifer Smith and Patrick Linehan for
their help and support through my project. Thank you to Steven Mallam for his help
in the pilot stages of this project as well as my data collection/analysis team of
Andrew Caines, Trevor MacNeil and Matthew Goldring.
To my mother Leslie and girlfriend Lauren, thank you for all of your love and support
through this degree. I wouldn’t have made it without you both at my side. Thank you
for believing in me.
Thank you to the participants who volunteered their time for this research. Without
your time and interest, this would not have been possible.
I would finally like to acknowledge the following institutions for their financial and
in-kind support throughout my research process: Research and Development
Corporation of Newfoundland and Labrador, Virtual Marine Technology Inc, The
Atlantic Canada Opportunities Agency, Presagis, Petroleum Research Newfoundland
and Labrador, Defense Research and Development Canada, the National Research
Council Institute for Ocean Technology, and the Marine Institute.
Table of Contents
Effect of Simulated Freefall Lifeboat Training on Launch Skill Acquisition ... i
3.3.2: Landing Zones .......................................................................................................................... 35
3.3.3: Presence Questionnaire ....................................................................................................... 37
3.3.4: Performance Measures ......................................................................................................... 37
3.4: Analyses of Performance Measures............................................................................38
Chapter 4 : Results........................................................................................................... 39
4.1: Performance Data .............................................................................................................39
4.2: Presence Questionnaire ..................................................................................................42
4.3: Presence & Performance Score Relations ................................................................43
List of Tables
Table 3-1: Table of Practice Conditions ...................................................................... 34
Table 3-2: Table of Experimental Conditions ............................................................. 35
Table 3-3: Performance Measures collected ................................................................ 37
Table 4-1: Performance measures by group assignment ............................................. 39
Table 4-2: Performance on each individual trial expressed by summed score and
average time to complete. .................................................................................... 39
Table 4-3: Mean landing zone performance and completion times based on trial order
of experience for the 'no feedback' and 'feedback' groups. .................................. 40
Table 4-4: Landing zone performance based on wave height and visual clarity state
expressed as a percentage. ................................................................................... 40
Table 4-5: Landing performance and completion time sorted by wave height. .......... 41
Table 4-6: Landing performance and completion time according to visual clarity state.
.............................................................................................................................. 41
Table 4-7: Landing performance and completion time according to wave direction. . 42
Table 4-8: Presence Questionnaire scale question results. .......................................... 43
Table 4-9: Presence Questionnaire additional feedback responses. ............................ 43
Table 4-10: Correlations between Presence Questionnaire scores and Performance
Measures. ............................................................................................................. 44
List of Figures
Figure 3-1. Simulator and Instructor Station set up. .................................................... 30
Figure 3-2. Location of participant placement in the simulator................................... 31
Figure 3-3. Simulator navigation control panel complete with: steering wheel, inside
and outside light switches, throttle, radio, and emergency stop button. .............. 31
Figure 3-4. Simulator navigation control panel: the ignition switch and radio. .......... 32
Figure 3-5. MOOG Series 6DOF2000E Electric Motion Platform Actuator .............. 32
Figure 3-6. The Instructor’s Station. ............................................................................ 33
Figure 3-7. From Simões Ré, Pelley and Veitch (2003) - Setback and progressive
setback.................................................................................................................. 36
Figure 3-8. The four wave quartiles: Peak of a wave (Q1), the downslope running
from peak to trough (Q2), the trough (Q3), and the upslope running from trough
to peak (Q4). ........................................................................................................ 36
Figure 5-1: Feedback picture of a successful downslope landing of the FFLB into size
8 waves moving south. ......................................................................................... 48
Figure 5-2: Feedback picture of a peak landing of the FFLB into size 8 waves moving
southeast. .............................................................................................................. 49
Figure 5-3: Feedback picture of an upslope landing of the FFLB into size 8 waves
moving southwest. ............................................................................................... 49
Figure 5-4: Feedback picture of a successful downslope landing of the FFLB into size
5 waves from the southwest. ................................................................................ 50
List of Abbreviations
AI……………………………………………..………………….Artificial Intelligence
ANOVA……………………………………..……………………Analysis of Variance
ARPA……………………………………..…………….Automatic Radar Plotting Aid
BST……………………………………..……………………..Basic Survival Training
CAPP ………………………………….Canadian Association of Petroleum Producers
CSA………………………………………...……………….….Canada Shipping Act
CIS…………………………………………..…………………..Canadian Ice Services
CSV…………………………………………..…………...….Comma Separated Value
DNV……………………………………………..…………………Det Norske Veritas
EER…………………………………………...………Escape, Evacuation, and Rescue
FFLB……………………………………………..…………….…..Free-Fall Life Boat
HMD……………………………………….………..…………Head Mounted Display
HSE ………………………………………..…………..…Health and Safety Executive
HUET………………………………..……… Helicopter Underwater Escape Training
IMO…………………………………..…………..International Maritime Organization
ISM…………………………………..………..…….International Safety Management
ISO…………………………………….International Organization for Standardization
LCD……………………………………..……………………...Liquid Crystal Display
LSA ………………………………………..…………………...Life Saving Appliance
MOUs………………………………………….……………… Mobile Offshore Units
MPR………………………………………………...…..Marine Personnel Regulations
MSC …………………………………………………..…….Maritime Safety Committee
OIM…………………………………………………..…Offshore Installation Manager
PAR-Q…………………………………....Physical Activity Readiness Questionnaire
PBS………………………………………..……….……Performance Based Standards
PQ……………………………………………..………….……Presence Questionnaire
RNLI…………………………………………..…..Royal National Lifeboat Institution
SA………………………………………………..……………...Situational Awareness
SOLAS……………………………………………..……………..Safety of Life at Sea
SMEs………………………………………………..………….Subject Matter Experts
SSQ…………………………………………………Simulator Sickness Questionnaire
STCW…………………………....Standards of Training, Certification, and Watchkeeping
SUS…………………………………………………………...……..Slater-Usoh-Steed
TARGETs…………………....…Targeted Acceptable Responses to Generated Events
TEMPSC ………………………….Totally Enclosed Motor Propelled Survival Craft
US……………………………………………………………………….United States
UK…………………………………………………………………...United Kingdom
VE……………………………………………………………….Virtual Environment
VMT……………………………………………….Virtual Marine Technology Inc
List of Appendices
Chapter 1 : Introduction
1.1: Background History
Launching lifeboats during an emergency abandonment scenario is a critical
safety operation that generally will be undertaken in unfavorable conditions. The
success of a marine evacuation is dependent on several factors, including the safety
equipment itself, the people who have to use it, the nature of the hazard that initiates
the emergency response, the prevailing environmental conditions, and the interaction
of all these factors.
The importance of effective training in the overall safety management system
has led to the assessment of methods for training lifeboat operators. Most lifeboat
coxswains on ships and offshore installations learn how to operate survival craft
through formal, live-boat training courses that comply with an accepted minimum
standard, such as the International Maritime Organization’s Convention on Standards
of Training, Certification and Watchkeeping (STCW, IMO 1995). However, there are
practical limits to what can be accomplished in such a training environment,
particularly when the training itself can expose the trainees to risks of accident and
injury. In the case of Free-Fall Lifeboats (FFLBs), it is not practical to use live-boat
training for rough weather launches, as these operations are too dangerous to be
undertaken for novice operators.
Sea trials are a poor method of investigating human performance issues for
many reasons. Crisis situations cannot be safely replicated in live systems, and only a
limited number of people can participate in a sea trial, making the observed results
difficult to generalize to the entire marine community (Patterson, 2002). It is also
impossible to control for external variables, such as swirling winds, changing weather
and lighting conditions, and inconsistent swell heights which make it very difficult for
investigators to identify cause-and-effect relationships relating performance and
success. Having these factors controlled within a simulation environment allows us to
identify which aspects of the launch have the greatest effect on human performance,
and will allow trainers to effectively create training curriculum in these areas.
Researchers in the maritime safety field suggest that simulation training
become a part of the training process for lifeboat coxswains, including traditional and
other emerging methods (Barber, 1996). Patterson, McCarter, MacKinnon, Veitch, and
Simões Ré (2011) highlight that lifesaving craft are used in scenarios that are
generally characterized by rapidly escalating situations and adverse weather
conditions. Simulation technology is being implemented to provide a safe means for
offshore personnel to acquire experience launching survival craft in harsh weather
conditions and under emerging hazard scenarios, such as low visibility and high sea
states. Immersive full mission simulators, complemented by simpler multi-task and
special-task simulation tools, have been developed to provide realistic, effective, and
safe training for lifeboat coxswains. While the risk to the trainee is minimal in a
simulation environment, the range of training experience can be increased beyond
what could ever be safely done otherwise, thereby enabling trainees to improve
situational awareness, develop skills, and practice procedures in order to elicit
appropriate responses in the real world.
It has been proposed that simulation must be presented to a trainee in a
realistic manner in order to be accepted as an appropriate replacement for physical
training (MacKinnon, Evely, & Antle, 2009). Tichon (2007) notes in her paper on
training in virtual reality that “there is an important role for interactive simulators in
replicating critical events and establishing them as a core component to cognitive
skills training programs. Simulated environments provide safe, controllable
environments in which necessary skills can be repeatedly practiced and the ability to
demonstrate the effectiveness or otherwise of these applications rests on the
development of strong performance measures.”(p. 288).
In studying the performance measures of the launching phase of a FFLB
evacuation using a scale model, Simões Ré & Veitch (2007) found the setback due to
the boat’s initial encounter with an oncoming wave, along with the progressive
setback due to subsequent wave encounters were the most important factors in
determining overall launch success. Setback occurs as the FFLB touches down into
the water and the force of the oncoming waves push the FFLB back towards the
installation that it is trying to escape. The amount of setback that is incurred by the
lifeboat is determined mainly by two aspects of an oncoming wave: the phase of the
wave in which the boat touches down, and the steepness of the wave. This research
shows that when launching FFLBs into heavy weather conditions, it is vitally
important to hit proper wave phases to maximize the boat’s chances of successful sail
away. As it would be too dangerous to practice this skill in live drills, offering a
simulation alternative to trainees could be an effective way to achieve competency.
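The wave-phase dependence described above can be illustrated with a toy classifier. Assuming a single sinusoidal wave with phase measured from the crest (a simplification; the simulator's sea model superposes many wave components), the following hypothetical sketch maps a touchdown phase angle onto the four quartiles defined later in Figure 3-8:

```python
import math

def wave_quartile(theta: float) -> str:
    """Classify the wave phase at touchdown into the four quartiles of
    Figure 3-8: peak (Q1), downslope (Q2), trough (Q3), upslope (Q4).
    The surface is idealized as eta = A * cos(theta), so theta = 0
    radians lies exactly at the wave peak."""
    # Shift by an eighth of a cycle so each label spans a pi/2-wide
    # window centred on its nominal phase (0, pi/2, pi, 3*pi/2).
    t = (theta + math.pi / 4) % (2 * math.pi)
    labels = ["Q1: peak", "Q2: downslope", "Q3: trough", "Q4: upslope"]
    return labels[int(t // (math.pi / 2))]
```

In this idealization, `wave_quartile(math.pi)` labels a touchdown at the trough (Q3); a coxswain aiming for a downslope landing would time release so that touchdown falls in Q2. A real sea state is irregular, so local phase would have to be estimated from surface elevation and slope at the impact point rather than from a single sinusoid.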
The existence of training transfer from virtual environments to the real world
is not well documented. In early cases, those trained in the real world performed the
task better than those trained in virtual reality (Kozak, Hancock, Arthur, & Chrysler,
1993). In more recent studies, in which participants executed a simple spatial task (Rose,
Attree, Brooks, Parslow, Penn, & Ambihaipahan, 2000), performed aircraft maintenance
(Barnett, Perrin, Curtin, & Helbing, 1998), or practiced using forestry machinery
(Lapointe & Robert, 2000), no significant difference was found between those trained
using real-world equipment and those trained in VR. This research
indicates that VR may be as effective in training many tasks as real world training.
This could in part be due to the improvements in simulation and VR technology as a
result of computing technology advancements along with continued research into
making reliable and valid VR systems. These innovations may be beneficial for
training coxswains to launch into heavy sea states and low visibility as training in
virtual reality takes the risks and dangers out of the learning process.
This experiment simulated the launch sequence of a free-fall lifeboat in
various sea and visual clarity states using Virtual Marine Technology’s (VMT)
SurvivalQuest system and is particularly concerned with the performance measures
defining where the boat makes first contact with the waves and the amount of time the
participant takes to execute the launch. The purpose of this research is to
examine which aspects of the launch environment have the greatest effect on novice
participants’ performance, and to see if launch performance improves over the course
of their training for extreme weather and high sea state launches. This will contribute
to the growing body of knowledge regarding the need for increased specialized
training for lifeboat coxswains.
1.2: Hypotheses
The Null Hypothesis for this experiment is that there will be no change in
either the number of successful launches, or the time taken to launch between the no
feedback and feedback groups. However, we expect the alternative hypothesis: a
significant increase in successful launches and a significant change in launch time in
the feedback group, as we predict its members will spend more time actively thinking
about the outcomes of their launches. A significant change in launch time between the
feedback and no feedback groups may occur in either direction: feedback may decrease
launch times by giving participants additional confidence about their launches, or
increase launch times as participants begin to evaluate wave segments more hesitantly,
waiting for optimal sail-away opportunities.
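The group comparison implied by these hypotheses can be sketched, for launch success counts, with a standard pooled two-proportion z-test. This is an illustrative analysis choice, not necessarily the one used in the thesis, and the counts shown are hypothetical placeholders rather than study data:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Pooled two-proportion z-test statistic for the difference in launch
    success rates between two independent groups. |z| > 1.96 indicates a
    significant difference at alpha = 0.05 (two-tailed)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts only (NOT the study data): 14/24 successful launches
# in the feedback group versus 12/24 in the no-feedback group.
z = two_proportion_z(14, 24, 12, 24)
```

Launch times, being continuous, would instead be compared with a t-test or ANOVA across groups and trial blocks.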
Chapter 2 : Review of Literature
2.1: Free-Fall Craft
In the early 1960s a departure from the conventional davit and fall launching
system was first explored, and the development of free-fall launch began (Serco,
2007). The first free-fall launching apparatus was installed in 1961, although the
system did not see widespread acceptance and use in the marine and offshore
industries until the 1990s. The concept, which necessitated a complete rethink of the
Totally Enclosed Motor Propelled Survival Craft (TEMPSC) hull form and seating
arrangements, abandoned davits, falls and hooks altogether in favour of releasing the
craft and allowing it to ‘free-fall’ to the water.
Launching of free-fall craft is carried out in one of two ways: sliding down an
inclined plane away from the structure’s side to induce some forward motion before the
craft falls free of the launching ramp, or falling vertically, albeit with a bow-down
attitude. The free-fall TEMPSC differs considerably in hull form from the
davit-launched craft because of the need to minimize hydrodynamic loads when it
enters the water. To ensure the TEMPSC remains intact on entering the water from
launch heights as great as 35 metres, as well as minimizing the decelerations on the
occupants, the bow and forward canopy are designed to be submerged during launch
and the hull’s deadrise is increased to facilitate water entry. Survivor seats are
ergonomically shaped and orientated to minimize shock loadings on survivors. In
many designs the seats face aft with high backs, though in some craft they face
forwards; this contrasts with the benches around the sides and on the centerline of
conventional craft (Serco, 2007).
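To give a sense of the loads motivating this hull design, the water-entry speed from the 35 metre launch height mentioned above can be estimated with elementary free-fall kinematics. This is a back-of-envelope sketch that ignores air resistance, ramp geometry, and any forward ramp velocity:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(drop_height_m: float) -> float:
    """Idealized water-entry speed v = sqrt(2 * g * h) for a craft in
    pure free fall, neglecting drag and forward ramp velocity."""
    return math.sqrt(2 * G * drop_height_m)

# A 35 m drop, the maximum launch height cited above, gives about 26 m/s
# (roughly 94 km/h), which is why increased deadrise, a submerging bow,
# and shock-minimizing seat orientation matter at water entry.
v = impact_speed(35.0)
```

The actual entry speed and deceleration also depend on launch ramp angle and the local wave surface, which is precisely what the simulated launch scenarios vary.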
Before entering the water, the craft may rotate slightly during the free-fall if
the loading is not balanced, but should have sufficient stored forward momentum to
clear the impact point and drift directly away from the vessel or installation. With the
craft’s engine running and propeller engaged shortly before launch, the craft can
maneuver away seamlessly with reduced risk of backwash. However, depending on
the direction and strength of wind and waves there is a possibility the TEMPSC may
broach, set back, lie across the waves, or even capsize if conditions are severe
enough. Although these are very real concerns it appears that little research has been
undertaken into launch performance where the results are in the public domain. Even
if scale or full-scale research is carried out, interpreting the results is difficult, because
the conditions existing at the moment of the craft’s impact with the water, in what is
essentially a random seaway, are difficult to predict. Unfortunately this means
there are no known criteria for the maximum sea state or wave directions where a
good prospect of successful free-fall launch can be assumed (Serco, 2007).
SOLAS, and the requirement for launching full-complement lifeboats has been
removed for participant and asset risk reasons (IMO, 2006b). Responsibility of
performing lifeboat drills now lies with the Vessel Master or Offshore Installation
Manager (OIM), depending on the environmental conditions (Patterson, 2007). This,
along with the drastically decreased confidence of crews in the safety and
practicability of lifeboat drills, has contributed to a culture of fear and unease
surrounding them (Ross, 2006).
In some respects the historical approach to lifeboats/TEMPSC and other
aspects of the evacuation sequence has been through the provision of effective and
reliable EER equipment. This has come at the expense of improving our understanding
of, and making meaningful estimates of, the limits of the equipment’s operability. To
some extent this is understandable, as the number of
incidents where installation evacuations are required is small and hence there is little
real experience to consider. Even where data from such sources are available, it may
be more subjective than that gathered by independent and verifiable means. Also,
carrying out trials in adverse weather, and in doing so potentially exposing those
involved to risk, may be morally difficult to justify. However, this makes the
training and skill evaluation of those involved in FFLB launching almost impossible
to benchmark, even though it could have a profound impact on launch success.
Training for TEMPSC operators is completed in harbours and sheltered ports
under relatively nonthreatening conditions, because conditions more representative of
extreme maritime environments (e.g. wind and waves) may pose unnecessary risk to
trainers, students, and assets (Veitch, Billard, & Patterson, 2008b). Current
regulations do not require operators, or duty holders, to demonstrate the capability of
evacuation system performance as a function of weather conditions.
otherwise be prohibitively dangerous to collect if done with full-scale manned
equipment under controlled conditions (Simões Ré, Veitch, & Spencer, 2010). The
STCW Convention was revised in 1995, and changes were made to a number of
regulations and recommendations, including possible inclusion of simulator-based
training within the curriculum. Prior to 1995, little was published about the utility of
maritime simulators for skill acquisition and trainee assessment. This changed when
the United States and the United Kingdom brought position papers to the international
level for the purpose of information sharing (Drown, 1996). Most recently, the IMO
has introduced the 2012 Manila Amendments to the STCW Convention. These
amendments contain improved guidelines on modern educational methods, such as
distance and web-based learning.
There can be many reasons for advocating the use of simulation for training.
Professor Peter Muirhead (2003) believed that the inexperienced mariner is likely to make
errors of judgment early in any real ship training. The consequences of these errors
could be both costly and catastrophic. In simulators, a mariner can make multiple
errors, and receive extrinsic feedback to assist in improved performance onboard
ships. Rapid repetition of difficult situations allows a review of tactics until a
satisfactory conclusion is reached. Many situations cannot be experienced at sea.
Emergency procedures, as well as maneuvering in difficult conditions or geographical
locations, are readily available only within a simulator, for the safety reasons discussed
previously. More importantly, the first-hand learning environment created by
simulations is critical for enabling trainees to experience emotional arousal during
performance episodes, develop an understanding of the relationships among the
different components of the system, and integrate new information with their existing
knowledge within a naturalistic environment (Keys and Wolfe, 1990; Zantow, Knowlton,
and Sharp, 2005; Cannon-Bowers and Bowers, 2010).
An additional benefit of simulation training is the ability to provide refresher
or recurrent training on board vessels and installations, making it possible for students
to practice the skills they have gained (O’Hara, 1990). Simulator training is able to
assist in the development of behavior patterns that students can draw upon during an
emergency situation (Hytten, 1989) and expose them to important contextual
characteristics relevant to the performance domain (Schiflett, Elliott, Salas, and
Coovert, 2004). Muirhead (1996) defines “skill” in the simulator context as “the
combining of mental and physical dexterity in the face of audio and visual cues to
perform tasks to meet specific objectives” (p.259). The belief is that the skills and
behaviors learned in a simulator will translate into real life situations and
performance. The possibility of maintaining skill development and acquisition
through at-sea training could give trainees an opportunity to have more frequent and
recurrent training. Research suggests that continued skill development past the first
successful demonstration of a skill set can lead to a better grasp of the desired tasks
(Taber, 2010). Saus, Johnson, and Eid (2010) proposed that simulation training could
be used as a means of improving maritime safety. Their research demonstrated that
situational awareness (SA) could be improved through simulator training, especially
in novice operators. Poor SA contributes to stress levels in both low and high
workload situations, which in turn can cause more human errors. They advocate for
the design of training to improve SA, since this could lead to greater prevention of
human error. This supports their idea that simulation training can contribute to an
enriched work environment.
Some experts in maritime education believe that simulation training could
replace in-service training for seafarer certifications (Ali, 2007), with a month of sea
service being replaced by one week of simulator time that would further enhance
physical training (Drown, 1996). Yet Muirhead (1996) believes that simulator
training can never replace the real experience of physical training. He also
reports that many watch keepers and senior maritime officers do not have the chance
to acquire key skills, due to both safety and operational factors, and thinks that
simulators may be able to aid in bridging this training gap.
Simulation training has emerged in a number of different areas as a potentially
safe and effective alternative to traditional training methods. Simulation training can
provide obvious training benefits, as such an environment can be used to assess
learning aspects such as the capacity for developing and measuring situational
awareness (Saus, Johnson & Eid, 2010), visual-spatial ability (Kewman, Seigerman,
Kintner, Shu, Henson, & Reeder, 1985), and time-performance gains (Aggarwal,
Black, Hance, Darzi, & Cheshire, 2006). However, the level of skill transfer to the
real world is the critical component in examining the effectiveness of simulation
training (Seymour, Gallagher, Roman, O’Brien, Bansal, Andersen, & Satava, 2002).
Rose, Attree, Brooks, Parslow, Penn, and Ambihaipahan (2000) examined learning and
performance between virtual and real-time training; the results from this research
demonstrated that those who completed virtual task training were less likely to be
affected by unexpected interruptions than those who completed real task training. In a
separate study, Barnett, Perrin, Curtin, and Helbing (1998) also concluded that
training motor skills in VR and real-world environments yielded statistically similar
results. The near-equality of VR and real world training for specific tasks presents a
great advantage in training dangerous activities.
When properly used and supported by well-trained and experienced
instructors, simulator training should contribute to a reduction in accidents at sea and
improve the capability and efficiency of trainees by providing them with the necessary
experience and self-confidence to carry out their onboard roles, functions, and tasks.
fidelity is adding to the effectiveness of simulation (Ali, 2006). If a virtual
environment is accurately modeled after a real environment and possesses an increased
number of specific contextual cues relative to the training task, the training conducted in
such an immersive environment should yield better retention of task knowledge than a
desktop simulation of the same environment (Jacquet, 2002). Some of the increased
contextual cues in the
virtual environment would include the spatial relationships in the task environment, such
as the location of items in the work area with relation to each other and with oneself
(Jacquet, 2002). Instead of looking at a flat picture of an environment, the individual
immersed in a virtual environment can reach out and experience the spatial dimensions of
the world, and sense the spatial relationships between objects. Because this environment
contains more real-world-like environmental cues, memories, and consequently learning,
from this task environment may be more readily activated when the individual encounters
the real world task environment in the future. Increased participant engagement should
lead to less time for task acquisition, and improved retention of task knowledge (Jacquet,
2002).
Validation is an ongoing process; therefore, when components of a system
are changed they should be re-validated to ensure that the fidelity level is maintained
rather than degraded. For example, new motion models may produce “poor” motion
representations. The accuracy
and fidelity of simulators can change dramatically from facility to facility. These
variances can be caused by the differences in mathematical models used to develop
the simulations, and facility operator modifications to models after installation. A
number of facilities use in-house staff to develop their own models, which creates
problems in reliably comparing data and results from one system to another.
To properly measure competency and proficiency in simulator training, as
prescribed in the STCW Code A, the simulators in question should be appropriately
validated for system performance, student performance (Muirhead, 1996), and
instructor assessment (Barber, 1996; Drown, 1996; Ali, 2007). Muirhead (1996)
suggests that outcomes must be based upon real world shipboard operations through
criterion-based goals (p. 263). Having a trained instructor and assessor is very
important to the delivery and validity of simulator instruction (Barber, 1996; Drown,
1996). Muirhead (1996) also proposes that those in charge of delivering simulation
instruction should have formal simulator training certification. It is for this reason that
institutions such as World Maritime University (Sweden), United States Coast Guard
(U.S.), and Transport Canada (Canada) continue the development of instructor
courses for simulation training (Ali, 2007; Patterson, 2007).
Industry has been the driving force for regulation, specification, and
classification of simulators in the last 10-15 years. Classification societies (such as
Det Norske Veritas) have taken it upon themselves to publish standards for simulators
(DNV, 2011) as one way to fulfill the requirements set out by the STCW code
(Muirhead, 2006; DNV, 2011). Kongsberg, a Norwegian company, has begun a project
from a user-directed perspective that will examine simulation from a human factors
point of view. As reported in Safety at Sea International, the company believes that
aspects of human factors in simulation training are very important when examining
and assessing the effectiveness of the training (Safety at Sea (45), 2011). The
continued development of validation and certification processes for simulators and
simulations should be endorsed. The extent to which accuracy of a simulation needs
to be validated will depend on the proposed use of the simulation or the desired
training outcomes.
2.3.2.1: Regulation I/12 - Use of Simulators.
This regulation provides the legal basis for the performance standards of
marine simulators used for the training and assessment of seafarers and for their
certification in compliance with the STCW Convention.
equipment along with possible errors should form part of the simulation. Simulators
should be able to reproduce the emergency, hazardous and unusual conditions needed
for effective training value. The most important aspect of the performance standards in
the STCW Convention is the requirement that simulators provide the simulator
instructor with control and monitoring facilities, along with recording equipment for
effective debriefing of trainees.
The second part sets out provisions for training and assessment procedures
and discusses standards of conduct for simulator trainers and assessors. The STCW
Convention foresees briefing, planning, familiarization, monitoring, and debriefing as
part of any simulator-based exercise. It also highlights the importance of guidance
and exercise stimuli from the instructor during simulation, and the use of peer
assessment techniques during the debriefing stage.
Simulator exercises are required to be designed and tested by the simulator instructor
to ensure their suitability for the specified training objectives.
are the Seafarers’ Training, Certification and Watchkeeping (STCW) and the Safety
of Life at Sea (SOLAS) Conventions. STCW sets the requirements for initial and
refresher training while SOLAS sets the requirements for on-board drills.
Requirements for workers in the offshore oil and gas industry are contained in
guidelines issued by the International Maritime Organization (IMO) in Assembly
Resolution A.891(21) Recommendations on Training of Personnel on Mobile
Offshore Units (MOUs).
2.4.2: SOLAS and Drills
While the STCW ensures that crewmembers have demonstrated their
competence in the operation of survival craft, SOLAS ensures they develop and
maintain proficiency in operating the craft on their particular vessel. Until recently,
Regulation 19 under SOLAS Chapter III required that at least one lifeboat be lowered
each month with its operating crew (Sect. 3.3.1.5.), and that each lifeboat be lowered,
released and maneuvered by its crew at least once every three months (Sect. 3.3.3.).
Provisions have been made for free-fall lifeboats, which require a full launch only
every six months, or every twelve months where appropriate arrangements are made
for a simulated launch every six months (Sect. 3.3.4.).
In Canada, the provisions for on-board drills are contained in the Boat and
Fire Drill and Means of Exit Regulations. Section 18 of the regulations essentially
repeats the SOLAS provisions by requiring each davit launched lifeboat to be
launched and maneuvered in the water with its assigned crew at least once every three
months (Section 18.(2).e.) and every free-fall lifeboat to be launched and maneuvered
every six months (Section 18.(2).f.).
The need to periodically practice launching lifeboats with crew on board poses
some significant safety hazards. A study published by the United Kingdom’s Marine
Accident Investigation Branch (MAIB), in 2001, noted that sixteen percent (16%) of
the total lives lost over a ten (10) year period were due to practice launches and on-
board inspections (Review of Lifeboat Launch Systems’ Accidents). The MAIB
report has triggered significant changes in the regulatory environment. The IMO has
issued cautions about the risks involved with drills (See MSC.1/Circ.1206: Measures
to Prevent Accidents with Lifeboats), and has implemented revisions to SOLAS
Section 3.3.3 which no longer require that practice launches be conducted with crew
onboard.
The ability to conduct drills using real equipment is constrained with the new
SOLAS provisions. The decision to conduct a practice launch with crew onboard is
left up to the Master who must also take into account the occupational health and
safety implications of a crewed launch. Health and safety considerations, whether
required through onboard International Safety Management (ISM) procedures or
through national legislation (e.g.: Canada Labour Code), counter-balance the
perceived operational or training benefits associated with having the crew onboard for
a practice launch. In fact, mariners view the new requirements as an effective ban on
crewed launches during drills. While it may be feasible to conduct a practice in ideal
conditions (after a thorough risk assessment is done and after a test lowering of the
boat with no crew), it is unthinkable that a practice launch would occur in difficult
conditions such as high seas or low visibility.
2.5: Launching and Set Back
Immediately after being launched into the sea, a lifeboat is prone to being
pushed, or set back, by the advancing waves, particularly before it begins to make
way. The distance that a lifeboat is set back due to its first wave encounter is an
important performance measure. Previous tests have shown that set back distance
increases with weather conditions (Simões Ré & Veitch, 2001). For a given weather
condition, set back depends on the position on the wave that the boat is launched. The
least set back occurs when the boat is launched on a wave crest or downslope; the
most when it is launched on the upslope, or face, of an advancing wave. In
the latter case, maximum set back has been found to be approximately twice the wave
height (Simões Ré & Veitch, 2001).
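The reported relationship can be expressed as a short helper. This is a minimal illustrative sketch only: the function name is ours, and the source reports only the roughly twice-wave-height upper bound for upslope launches, not a general formula.

```python
def max_setback_m(wave_height_m: float) -> float:
    """Rough empirical upper bound on lifeboat setback distance:
    about twice the wave height, reached when launching onto the
    upslope (face) of an advancing wave (Simões Ré & Veitch, 2001).
    Illustrative sketch; not a predictive model."""
    return 2.0 * wave_height_m
```

For example, a 7.5 m wave (the top of the “Size 8” range used later in this study) implies a setback bound on the order of 15 m in the worst launch phase.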
Simulators that trigger appropriate behavioral responses can be used by students to
learn and demonstrate the competencies necessary to perform tasks, when real
equipment is not a feasible training option.
Training simulators do not have to be exact replicas of the real world;
however, they must be realistic enough that skills acquired in a simulator can be
used in the real world. Generally, more complicated training requirements call for
more sophisticated simulators. Implementing simulation into a training program
requires matching realism to training objectives, and striking a balance between
mathematical and cueing realism. For marine training simulators, a commonly used
fidelity scale is: full mission (a high fidelity replica of the real-world intended for
advanced training); multi-task (a medium fidelity replica of the real-world intended
for operational training); limited-task (a partial replica of the real-world intended to
develop basic skills); and, special-task or single-task (specialized simulator to teach
particular skills) (Cross & Olofsson, 2000).
lecture on launch technique. National training standards, such as TP 4957, should
also be reviewed to shift rough weather launch training from lectures to
demonstration in a simulator. Such revisions to the template training standards will
bring the training regimes into alignment with Table A-VI/2-1 of the STCW Code
that envisions practical demonstration for all aspects of lifeboat launching.
Once simulation is accepted as a training method for lifeboat crews,
Administrations will need to approve individual simulation based training programs.
Before approval, the Administration will need to be satisfied that the simulator meets
the functional requirements defined in STCW, especially the two core requirements of
physical and behavioral realism. Administrations can conduct their own assessment,
or in some cases, accept a certification from Det Norske Veritas (DNV) that the
simulator meets the requirements of guidance document 2.14 Certification of
Maritime Simulator Systems. In either event, the accreditation criteria for lifeboat
launch simulators do not currently exist and need to be developed. Memorial
University of Newfoundland is presently conducting research to identify appropriate
accreditation criteria.
SOLAS currently permits ‘simulated’ launches of free-fall lifeboats in lieu of
actual launches with crews. In the case of free-fall lifeboats, the simulation
envisioned is not a numerical simulation but rather a means to trap the boat and
prevent it from launching from the ship. Regulation 19 of SOLAS should be
amended so practice using a suitable numerical simulator would be accepted in lieu of
crewed launches. The simulator could be brought on board or to the ship’s side
during port visits. The use of numerical simulators for practice drills would go well
beyond current training scenarios, eliminate the critical safety issue of crewed
launches, and significantly raise the preparedness of the crew to react to an
emergency.
2.8: Presence
In its most common usage in the Virtual Environment (VE) community, the
term “presence” refers to a person’s sense of physical location, that of “being” in a
particular place. There is no standard recognized definition for presence; most of the
literature, however, proposes something similar to the following: “Presence is the
subjective experience of being in one place or environment, even when one is
physically situated in another place or environment.” Barfield & Hendrix (1995)
define presence as “the participant’s sense of ’being there’ in the virtual
environment”. This concept is confusing as its definition is relative to the
understanding of the words ’sense’ and ’being’. Lombard & Ditton (1997) proposed
to interpret presence as “a perceptual illusion of non-mediation”; presence is what
happens when the participant ’forgets’ that his perceptions are mediated by
technologies. This media-oriented approach allows one to analyze the causes of
presence with objective variables: number and consistency of sensory outputs, visual
display characteristics, aural presentation characteristics, interactivity, obtrusiveness
of medium, and number of people involved. Witmer and Singer (1998), defined
presence as the subjective experience of being in one place or environment, even
when one is physically situated in another.
Defining presence has even become multidimensional in scope for some
researchers, including a physical or perceptual dimension as well as a social
dimension and co-presence. The physical dimension refers to the sense of being
physically located in a mediated space (Lombard & Ditton, 1997). The social
dimension is based on the perceived existence of others and the perceived possibility
of interaction. Youngblut (2003) defines co-presence as “the subjective experience of
being together with others in a computer-generated environment, even when
participants are physically situated in different sites.” There are several things to note
about this definition. First, like presence, co-presence is a subjective construct and the
definition explicitly supports distributed VE applications. Also, in using the term
“others,” we do not restrict ourselves to all the participants being human. Some may
be computer-generated (artificial intelligence – AI) agents.
Social presence in the context of computer-mediated communications is an
active area of research addressing issues in organizational communications, use of
teleconferencing systems, and the role of Internet-based virtual communities. Social
presence is a subjective phenomenon that depends on properties of the medium; the
concept was developed to measure the ‘quality’ of a means of communication or,
more specifically, to support comparisons between media for defined tasks. Most of
the current measures of social presence for VEs are based on this early work, but
some researchers have taken a different view. In this case, social presence goes a step
further than co-presence to address social psychological ideas of personal interaction.
Biocca (1997) believes “social presence occurs when users feel that a form, behavior,
or sensory experience indicates the presence of another individual. The amount of
social presence is the degree to which a user feels access to the intelligence,
intentions, and sensory impressions of another.” This addresses more than a
replication of face-to-face communication, reflecting awareness of another’s
intelligence and intentions, and some sensory impression of the other.
development. By performing factor or cluster analyses, it is possible to identify
underlying dimensions of the measured construct.
A main disadvantage of questionnaires is that they are retrospective and
therefore rely on participants’ memories, which are an incomplete reflection of the
experience and prone to several biases. For example, it seems likely that users’
judgments will be more influenced by events near the end of the experience (recency
effect). Questionnaires are also sensitive to demand characteristics, for example, the
hints and cues in a research situation that may bias the participants’ responses. For
instance, Freeman, Avons, Pearson and IJsselsteijn (1999) have shown that simple
post-test presence ratings are sensitive to the effect of unrelated prior training
sessions.
Questionnaires have been shown to be sensitive enough to find differences in
presence when used to examine: mode of locomotion (Usoh, Arthur, Whitton, Bastos,
Steed, Slater & Brooks, 1999), the effect of additional sensory cues on presence
(Dinh, Walker, Song, Kobayashi, & Hodges, 1999), and narrow versus wide field of
view (Arthur, 1999). However, a study that examined whether questionnaires could show differences
in presence in participants that searched a real office as opposed to those who
searched a virtual office yielded inconclusive results (Usoh, Catena, Arman, & Slater,
2000). This brings into question the usefulness of questionnaires when comparing
across different and/or no media, such as immersive virtual compared with real, and
desktop compared to head-mounted display. However, the ITC-SOPI questionnaire
was designed to address this cross-media problem, though it is not yet widely used
(Lessiter, Freeman, Keogh, & Davidoff, 2000).
that immersion is based solely on the technological aspects of the system producing
the environment. In other words, to be immersive, a system must possess a high-
resolution display with a wide field of view. The system must also isolate the user
from real world sensations such as light and sound to allow the user to become a
participant in the VE, and not simply an observer of it.
Involvement is a psychological state experienced as a consequence of focusing
one’s attention on a coherent set of stimuli or meaningfully related activities and
events. Involvement depends on the degree of significance or meaning that the
individual attaches to the stimuli, activities, or events. In general, as users focus more
attention to the VE stimuli, they become more involved in the VE experience, which
leads to an increased sense of presence in the VE.
Witmer & Singer aimed to develop a measure of presence addressing factors
that influence involvement and immersion. Main categories of such factors were
derived from the work of Sheridan (1992) and Held & Durlach (1992). Thirty-two
items were designed based on these factors and tested for
reliability. The final version of the PQ contained 19 items, rated on a seven point
rating scale with a midpoint anchor (e.g., 1= not compelling, 4 = moderately
compelling, 7 = very compelling).
The first version of the PQ was used in four experiments. In two experiments,
participants performed psychomotor tasks in a simple VE. In the other two
experiments participants learned complex routes through a virtual office.
Cluster analysis revealed three subscales: Involved/Control, Natural (regarding
how well the simulator’s controls match their real world counterparts), and Interface
Quality. PQ scores were then correlated with
measures for constructs associated with presence. PQ scores were significantly
correlated with the Simulator Sickness Questionnaire (SSQ) scores across
experiments (Van Baren & IJsselsteijn, 2004). Significant correlations with
performance of psychomotor tasks and spatial knowledge were found in some
experiments, but not in others. No significant effect of natural interaction (head
tracking) was found. A significant correlation was found with the Immersive
Tendency Questionnaire (Van Baren & IJsselsteijn, 2004).
Usoh, Catena, Arman, and Slater (2000) have argued that presence
questionnaires should be subject to a “reality test” where data obtained in a VE should
be compared to data obtained in the real world. In such a study (n=20, between-
subject design), they tested the PQ. It did not distinguish between real and virtual
experiences.
Youngblut and Perrin (2002) gave an extensive overview of research that has
been conducted with the PQ. The PQ gave consistent results (in two or more studies)
for the factors: display field of view, head tracking, task-related experience, and
gender. An experiment using the PQ and Slater-Usoh-Steed (SUS) questionnaire was
conducted to investigate the relation between presence and task performance.
Participants (n=40, between-subjects design) had to perform an aircraft maintenance
procedure in a virtual world. The amount of practice was varied. An effect of practice
was found only on the PQ Interface Quality subscale. The Involved/Control subscale
correlated negatively with the number of errors. A significant correlation (r=.51) was
found between the PQ and the SUS total scores, and also between all subscales. The
authors concluded that their data supported the argument that the PQ and the SUS
measured the same construct, but there was not enough evidence to draw conclusions
about their validity.
performance, characteristics of the user, such as ability and motivation, will influence
task performance (Heeter, 2001). Task performance measures are only applicable in
media environments where there is a clear task that should be performed.
In past research, task performance measures used when examining presence
include completion time and error rate (Basdogan, Ho, Srinivasan, & Slater, 2000),
number of actions (Slater, Linakis, Usoh, & Kooper, 1996), secondary task
performance (Nichols, Haldane, & Wilson, 2000), and transfer to the real world
(Youngblut & Perrin, 2002).
2.9: Feedback
Simulator-based training can provide a breadth of knowledge that could
otherwise only be gained through years of real world experience, or not experienced
at all within the limits of safe real world training. Realization of this potential,
however, depends upon the ability of the simulator training program to take into
account the special cognitive needs of the trainees, and the ability of the instructor to
provide proper feedback to the trainees.
Feedback to the trainee regarding their standard of performance is very
important for maintaining interest and morale while improving performance
(Stephen, 1985). With regard to the effectiveness of feedback, Stephen noted two
important factors. The first is timing: some errors change the subsequent course of
the exercise and need to be corrected immediately, while other errors may take a
series of trials to properly analyze their influence on performance. In the latter cases,
it becomes more practical for the instructor to delay the feedback so it encompasses
the full scope of the problem behaviour. Delayed feedback also gives trainees time to
think about and analyze their actions and their consequences. The second factor
influencing the effectiveness of performance feedback is redundancy: studies indicate
that repetition of the same feedback may reduce the interest and motivation of
trainees.
While discussing the process of training on simulators, Stephen (1985)
divided feedback provided to trainees into three sub-categories:
1. Intrinsic feedback, where the trainee learns the appropriateness of their
actions through the consequences achieved. This is the simplest form of
feedback and is always present in simulator-based training. It is the duty of
the instructor to ensure the trainee has a perception of high standards against
which to compare their performance.
2. Augmented feedback, which can be provided by giving the participant an
overview of the track they followed and the changes they made. This bird’s-eye
view helps them understand their successive inter-related actions and their
consequences, and can even improve the quality of the intrinsic feedback.
3. Supplemental feedback, the highest form of feedback that can be provided
to the trainee. When the participant is on task, their mind can become pre-occupied
with new information; they can come under stress and be unable to
grasp new ideas or approaches. When the simulation is finished, a debrief of
the exercise is of great value, as the trainee’s mind is then free for
self-criticism and true analysis of the actions taken during the simulator
session.
accurate measurement results. Subjective measures and standards rely primarily on
the examiner’s observation and interpretation of the performance.
Subjective evaluation and assessment of the trainee’s performance is a very
delicate issue and needs special consideration. When using subjective measures and
standards, it is very important to have subject matter experts (SMEs) as assessors to
maintain validity, and to use checklists and grading guidelines to maintain reliability.
It should be noted that objective assessment is the traditional conceptual
approach and always looks attractive and reliable. However, under the new
requirements of competency-based training and assessment, efforts to become more
and more objective reduce validity proportionately. This problem can be minimized
by having a qualified and experienced mariner as the simulator instructor.
Muirhead (2006) describes the importance of monitoring and feedback to the
assessment process. He mentions the work of Hooper, Witt & McDermott (2000), a
pilot study utilizing hypertext and web tools to deliver exercise advice and feedback
in electronic format with future trials looking at embedded online assessments.
Muirhead also examines Smith’s (2000) investigation into the development of
instructorless training, where a trainee can start, undertake and stop a simulator
exercise without any referral to an instructor. Although these are considered
advances, Muirhead argues that such approaches place limitations on how the final
judgment of performance against set criteria is made. Understanding the varying
levels of cognitive, affective and psychomotor skills that comprise the measure of
performance becomes paramount, and observation by an experienced assessor may
be mandatory. In most cases, determining competency to perform tasks or functions
should not rely solely on technology as the yardstick of performance; rather,
technology should be used as another supportive tool in the evaluation process.
scenarios consisting of contextually relevant exercises or tasks created by the
researcher/trainer. These scenarios contain cues for the participant to exhibit
behaviors that have been identified as important for that particular task. In addition to
defining the tasks considered desirable to observe, the researcher/trainer must
determine what an acceptable response to the scenario is a priori by means such as
SME interview, task analysis, or investigation of standard operating procedures.
Acceptable responses are determined in advance so that the observer can have a
checklist at the time of the observation. This significantly adds to the reliability of the
observational rating. Event-based measurement, such as the TARGETs methodology,
provides the opportunity to observe behaviors that have a low frequency of
occurrence in the real world and therefore are difficult to observe in a purely
naturalistic setting (Fowlkes, Dwyer, Oser, & Salas, 1998).
Chapter 3 : Methodology
3.1: Participants
Fifty-four participants (46 male, 8 female) with a mean age of 23.0 ± 2.9 years
were recruited to participate in this study. Participants were recruited through the use
of verbal scripts as presented to undergraduate classes at Memorial University, as well
as written scripts that were e-mailed to possible participants by the research team
(Appendix A). Participants were required to have no previous experience operating
small marine craft, and had to meet the following selection pre-requisites:
1. Not current holders of STCW (Standards of Training, Certification and
Watchkeeping) lifeboat training certification
2. Little sensitivity to motion sickness
3. No health conditions that could be aggravated by increased anxiety
4. Lack of pre-existing heart or lung conditions that impair physical activity
5. Lack of pre-existing muscle or skeletal conditions that limit mobility
6. No fear of enclosed spaces
Those who met the above criteria were then screened and deemed able to
participate by completing the Physical Activity Readiness Questionnaire (PAR-Q;
Appendix B). All participants gave their written informed consent and were given the
opportunity to discuss any concerns with the investigator(s) prior to participating in
the study (Appendix C). Ethical approval for this study was granted by the Human
Investigations Committee of Memorial University.
3.2: Equipment
The dependent variables collected during this study include duration of each
trial, vessel position at the beginning and end of each trial, and vessel rotation in the
air.
Figure 3-3. Simulator navigation control panel complete with: steering wheel, inside
and outside light switches, throttle, radio, and emergency stop button.
Figure 3-4. Simulator navigation control panel: the ignition switch and radio.
Figure 3-6. The Instructor’s Station.
SurvivalQuest software makes it possible to manipulate many variables in a
given trial. Wave heights can range from “0 – calm water” to “8 – moderately high
waves of 5.5m - 7.5m”, wave direction can be expressed by any compass bearing (0°
– 360°), precipitation of rain and snow can range from 0% to 100%, and visibility can
range from 0 ft. to 100,000 ft.
Our study included two virtual wave heights as set by the SurvivalQuest
software: “Size 5” waves (Moderate waves of 2-3m of swell) and “Size 8” waves
(Moderately high waves of 5.5-7.5m of swell). It included 4 wave directions: South
(180°), South-East (135°), South-West (225°) and North (0°). Finally, it contained
three visibility states that were made up of different precipitation and visibility scores:
“Clear” combined 0% precipitation with 100,000 ft. visibility, “Rain Storm”
combined 100% precipitation with 10,000 ft. of visibility, and “Heavy Rain Storm”
combined 100% precipitation with 500 ft. of visibility.
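The full factorial of these settings yields the 24 experimental conditions used in the trials (2 wave heights × 4 directions × 3 visibility states). A minimal sketch of how the condition grid can be enumerated; the names and dictionary layout are ours for illustration and are not the SurvivalQuest configuration format:

```python
from itertools import product

# Experimental settings as described above (illustrative structure only).
WAVE_HEIGHTS = [5, 8]  # "Size 5" (2-3 m) and "Size 8" (5.5-7.5 m)
DIRECTIONS = {"South": 180, "South-East": 135, "South-West": 225, "North": 0}
VISIBILITY = {  # (precipitation %, visibility ft)
    "Clear": (0, 100_000),
    "Rain Storm": (100, 10_000),
    "Heavy Rain Storm": (100, 500),
}

# 2 x 4 x 3 = 24 unique trial conditions.
conditions = [
    {"wave_height": h, "direction_deg": DIRECTIONS[d], "clarity": v}
    for h, d, v in product(WAVE_HEIGHTS, DIRECTIONS, VISIBILITY)
]
print(len(conditions))  # prints 24
```

In the study itself these 24 conditions were presented to each participant in a randomized order.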
Participants were given a total of three practice launches into calm, moderate,
and moderately high sea state conditions, with differing visual clarity states (Table 3.1).
This mimics the STCW 1995 (Section A – VI/2, section 5.4 of the International
Maritime Organization Model Course 1.23: students must complete a minimum of 3
practical launch/recovery exercises). All participants were given immediate feedback
after each of the three launches by the instructor. After completing the practice
launches, the participant began the experimental trials.
Participants were divided into two experimental groups, Group I and Group II.
Both groups completed 24 experimental trial conditions (Table 3-2) in a randomized
order. Group I was not provided any feedback regarding the success of the launch
following each trial. Group II was provided with immediate feedback after each
launch. Feedback was given in the form of a still image of the lifeboat on first contact
with the wave. The picture was displayed on the starboard screen in the cabin of the
lifeboat for fifteen seconds following the completion of the launch.
Table 3-2: Table of Experimental Conditions
Experimental Conditions
Trial Wave Height Wave Direction Visual Clarity
1 5 South Sun
2 5 North Sun
3 5 South-East Sun
4 5 South-West Sun
5 5 South Rain Storm
6 5 North Rain Storm
7 5 South-East Rain Storm
8 5 South-West Rain Storm
9 5 South Heavy Rain Storm
10 5 North Heavy Rain Storm
11 5 South-East Heavy Rain Storm
12 5 South-West Heavy Rain Storm
13 8 South Sun
14 8 North Sun
15 8 South-East Sun
16 8 South-West Sun
17 8 South Rain Storm
18 8 North Rain Storm
19 8 South-East Rain Storm
20 8 South-West Rain Storm
21 8 South Heavy Rain Storm
22 8 North Heavy Rain Storm
23 8 South-East Heavy Rain Storm
24 8 South-West Heavy Rain Storm
those weather conditions (Atlantic Canada Offshore Petroleum Industry Escape,
Evacuation and Rescue, 2010).
Figure 3-7. From Simões Ré, Pelley and Veitch (2003) - Setback and progressive
setback
Figure 3-8. The four wave quartiles: Peak of a wave (Q1), the downslope running
from peak to trough (Q2), the trough (Q3), and the upslope running from trough to
peak (Q4).
Each launch was evaluated by the wave quartile in which the boat landed.
Each participant completed 24 launches in the SurvivalQuest FFLB simulator and
was scored from 1 to 4 points according to the section of wave on which they landed:
Q4, the upslope, was the least successful outcome (1 point), followed by Q3, the
trough (2 points), Q1, the peak (3 points), and Q2, the downslope, the most
successful outcome (4 points). Performance scores for each participant could
therefore range from 24 (all upslope landings) to 96 (all downslope landings). As the
FFLB contacted the water, ten evenly spaced points along the long axis of the boat
collected wave height information. A graphical analysis of these data was used to
re-create the image of the landing position of the free fall lifeboat on the wave. Total
landing scores were calculated by summing each individual’s launch scores from
their 24 launches. Time to launch was recorded by the simulation program and was
used to calculate mean times to launch.
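The scoring scheme can be expressed as a short function. This is a sketch under the scoring rules just described; the names are ours, not the study's analysis code:

```python
# Landing-zone scores as described above: upslope (Q4) worst, downslope (Q2) best.
QUARTILE_SCORES = {"upslope": 1, "trough": 2, "peak": 3, "downslope": 4}

def total_landing_score(landing_zones):
    """Sum the per-launch scores over a participant's 24 launches.
    Possible totals range from 24 (all upslope) to 96 (all downslope)."""
    if len(landing_zones) != 24:
        raise ValueError("expected exactly 24 launches")
    return sum(QUARTILE_SCORES[zone] for zone in landing_zones)
```

For example, a participant landing every launch on the downslope would score 24 × 4 = 96, the maximum total.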
3.3.3: Presence Questionnaire
After the participant’s 24 experimental trials were completed, each was
instructed to complete a modified Witmer and Singer Presence Questionnaire (1998)
that evaluated the quality of the simulated environment. Questions asked the
participant to quantify the quality, responsiveness and involvement of different
aspects of the simulator. Nine questions were graded on a scale from 0% (not at all
responsive / not at all involved / not at all easy to anticipate) to 100% (fully
responsive / fully involved / very easy to anticipate), and three questions were
open-ended short answer questions asking about the strengths and shortfalls of the
simulated experience. The full questionnaire can be found in Appendix D.
Performance Measure | Derived Variables | Description
Position on Wave | Wave quartile analysis | Programmed recording of wave heights under the boat as it hits the water, graphically plotted
Time | Total time to launch | Measured in seconds
Presence Questionnaire | Subjective measure of the simulator's ability to replicate real life | Scoring of different elements of the simulator with descriptive statistics
Chapter 4 : Results
4.1: Performance Data
Analysis of variance between the feedback and no feedback groups showed
no significant difference in mean landing scores (p = 0.137) or mean time to
launch (p = 0.269). These data suggest that the feedback was not sufficient to
improve performance over the course of the trials (see Table 4.1).
Performance scores for each individual trial are listed in Table 4.2.
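As an illustrative sketch of this two-group comparison, the F statistic behind such an analysis of variance can be computed directly. The landing scores below are hypothetical, not the study's data; in practice this analysis would be run in a statistics package:

```python
def one_way_anova_f(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups (equivalent to a
    two-sample t test, with F = t**2). Pure-Python illustration only."""
    n_a, n_b = len(group_a), len(group_b)
    grand_mean = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    # Between-groups sum of squares (1 degree of freedom for two groups).
    ss_between = n_a * (mean_a - grand_mean) ** 2 + n_b * (mean_b - grand_mean) ** 2
    # Within-groups sum of squares (n_a + n_b - 2 degrees of freedom).
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    return ss_between / (ss_within / (n_a + n_b - 2))

# Hypothetical total landing scores (possible range 24-96) for the two groups.
no_feedback = [68, 73, 80, 75, 71, 77, 69, 74, 70]
feedback = [74, 71, 78, 72, 77, 69, 76, 75, 73]
print(round(one_way_anova_f(no_feedback, feedback), 3))
```

A small F (relative to the critical value for the degrees of freedom involved) corresponds to the non-significant p-values reported above.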
Table 4-2: Performance on each individual trial expressed by summed score and
average time to complete.
Performance by Individual Trial
No Feedback Feedback Combined Groups
Trial Wave Height Wave Direction Visual Clarity Mean Score Mean Time Mean Score Mean Time Mean Score Mean Time
1 5 South Sun 68 18 76 21 72 19.5
2 5 North Sun 78 21 73 25 75.5 23
3 5 South-East Sun 71 18 76 17 73.5 17.5
4 5 South-West Sun 71 18 65 22 68 20
5 5 South Rain Storm 71 19 76 22 73.5 20.5
6 5 North Rain Storm 61 22 75 28 68 25
7 5 South-East Rain Storm 58 20 61 21 59.5 20.5
8 5 South-West Rain Storm 70 21 80 22 75 21.5
9 5 South Heavy Rain Storm 77 22 74 28 75.5 25
10 5 North Heavy Rain Storm 65 27 78 29 71.5 28
11 5 South-East Heavy Rain Storm 67 19 70 21 68.5 20
12 5 South-West Heavy Rain Storm 67 20 69 20 68 20
13 8 South Sun 88 19 91 19 89.5 19
14 8 North Sun 69 21 69 24 69 22.5
15 8 South-East Sun 93 20 85 20 89 20
16 8 South-West Sun 82 19 82 20 82 19.5
17 8 South Rain Storm 74 21 94 23 84 22
18 8 North Rain Storm 79 21 58 24 68.5 22.5
19 8 South-East Rain Storm 72 20 77 18 74.5 19
20 8 South-West Rain Storm 85 21 92 24 88.5 22.5
21 8 South Heavy Rain Storm 80 20 94 21 87 20.5
22 8 North Heavy Rain Storm 71 22 57 23 64 22.5
23 8 South-East Heavy Rain Storm 84 20 89 20 86.5 20
24 8 South-West Heavy Rain Storm 75 22 85 20 80 21
Table 4-3: Mean landing zone performance and completion times based on trial order
of experience for the 'no feedback' and 'feedback' groups.
No Feedback
Upslope (1) Trough (2) Peak (3) Downslope (4) Mean Score Time to Complete
1st 6 Trials 8 5 5 9 68 21.8 s
2nd 6 Trials 7 5 4 11 73 20.0 s
3rd 6 Trials 4 6 5 12 80 20.5 s
4th 6 Trials 6 5 7 10 75 19.8 s
Feedback
Upslope (1) Trough (2) Peak (3) Downslope (4) Mean Score Time to Complete
1st 6 Trials 6 4 6 11 77 22.1 s
2nd 6 Trials 7 4 6 10 73 23.1 s
3rd 6 Trials 5 4 6 11 78 22.1 s
4th 6 Trials 6 5 5 10 74 20.8 s
Analyses of the effects of wave height, wave direction, and visual clarity
were conducted. Results are presented as a percentage of total landings in each
wave quartile for each condition (see Table 4.4).
Table 4-4: Landing zone performance based on wave height and visual clarity state
expressed as a percentage.
No Feedback
Sea State Visual Clarity Upslope (1) Trough (2) Peak (3) Downslope (4) Time to Complete
5 Sun 24.04% 20.19% 25.96% 33.65% 19s
5 Rain 34.62% 19.23% 23.08% 26.92% 21s
5 Heavy Rain 27.88% 19.23% 27.88% 28.85% 22s
8 Sun 16.35% 14.42% 18.27% 54.81% 20s
8 Rain 17.31% 25.96% 13.46% 47.12% 20s
8 Heavy Rain 17.31% 23.08% 19.23% 44.23% 21s
Feedback
Sea State Visual Clarity Upslope (1) Trough (2) Peak (3) Downslope (4) Time to Complete
5 Sun 18.27% 22.12% 37.50% 25.96% 21s
5 Rain 18.27% 23.08% 33.65% 28.85% 23s
5 Heavy Rain 24.04% 17.31% 28.85% 33.65% 24s
8 Sun 24.04% 7.69% 13.46% 58.65% 20s
8 Rain 21.15% 13.46% 16.35% 52.88% 22s
8 Heavy Rain 17.31% 18.27% 14.42% 53.85% 21s
mean performance scores were 15% better in size 8 waves when compared to size 5
waves (p = 0.001). There was also a significant increase in the number of
downslope landings between the two wave heights (48.72% for size 8 waves
compared to 29.80% for size 5 waves) (p = .001), and a significant decrease in
upslope and peak landings (p = .001 and p = .047, respectively).
The feedback group's performance scores approached a significant improvement,
an 11% increase, when comparing size 8 to size 5 waves (p = .056). There was
also a significant increase in the number of downslope landings between size 8
and size 5 waves (55.13% for size 8 waves compared to 29.48% for size 5 waves)
(p = .001), and a significant decrease in the number of trough (-7.7%) and peak
landings (-18.9%) (p = .039 and p = .001, respectively) (see Table 4.5).
Table 4-5: Landing performance and completion time sorted by wave height.
Performance By Waveheight
No Feedback Feedback Combined Groups
Score Average Time Average Score Average Time Average Score Average Time Average
Wave 5 68.67 20.42 72.75 23.00 70.71 21.71
Wave 8 79.33* 20.50 81.08** 21.33 80.21* 20.92
Table 4-6: Landing performance and completion time according to visual clarity state.
Performance by Visual Clarity
No Feedback Feedback Combined Groups
Score Average Time Average Score Average Time Average Score Average Time Average
Sun 77.50 19.25 77.13 21.00 77.31 20.13
Rain Storm 71.25 20.63* 76.63 22.75 73.94 21.69
Heavy Rain Storm 73.25 21.50* 77.00 22.75 75.13 22.13
The effect of wave direction was analyzed by comparing mean landing
quartile scores, mean launch performance scores, and mean completion times of
trials grouped by wave direction. Mean completion times were significantly
affected by wave direction in both the no feedback and feedback groups (p =
.028 and p = .002, respectively). North waves took the longest to complete,
with a mean time of 23.92 s across all participants.
Wave direction also significantly affected the number of upslope landings
for the feedback group (p = .010), which had a near-significant effect on the
feedback group's average scores (p = .070). See Table 4.7.
Table 4-7: Landing performance and completion time according to wave direction.
Performance by Wave Direction
No Feedback Feedback Combined Groups
Score Average Time Average Score Average Time Average Score Average Time Average
South 76.3 19.8 84.2 22.3 80.3 21.1
South West 75.0 20.2 78.8 21.3 76.9 20.8
South East 74.2 19.5 76.3 19.5 75.3 19.5
North 70.5 22.3 68.3 25.5 69.4 23.9
Table 4-8: Presence Questionnaire scale question results.
Presence Questionnaire Results, Mean (St. Dev.)
Scale Questions                                                        No Feedback   Feedback
1. How responsive was the simulated environment to actions you
   initiated (or performed)?                                           83.1 (8.2)    81.5 (9.5)
2. How natural did your interactions with the simulated
   environment seem?                                                   76.3 (16.1)   75.8 (14.2)
3. How completely were all of your senses engaged?                     74.3 (12.6)   71.9 (13.7)
4. How much did the visual aspects of the simulated
   environment involve you?                                            83.8 (12.6)   81.6 (8.4)
5. How much did the auditory aspects of the simulated
   environment involve you?                                            64.9 (21.7)   68.2 (20.3)
6. How much did the motion aspects of the simulated
   environment involve you?                                            83.7 (10.9)   87.3 (7.1)
7. Were you able to anticipate what would happen next, in the
   simulated environment, in response to the actions you performed?    73.3 (20.0)   72.8 (16.7)
8. How involved were you in the simulated environment experience?      81.3 (8.2)    80.7 (11.2)
9. How much delay did you experience between your actions
   and expected outcomes?                                              33.8 (23.7)   32.4 (23.4)
Table 4-10: Correlations between Presence Questionnaire scores and Performance
Measures.
Correlations
Measure Time PQ1 PQ2 PQ3 PQ4 PQ5 PQ6 PQ7 PQ8 PQ9 Score
Kendall's tau_b Time Correlation Coefficient 1 -0.101 -0.125 -0.046 -0.129 -0.058 -0.17 -0.16 -0.079 -0.066 0.048
Sig. (2-tailed) . 0.316 0.21 0.645 0.201 0.558 0.094 0.103 0.431 0.504 0.62
N 54 54 54 54 54 54 54 54 54 54 54
PQ1 Correlation Coefficient 1 .510** .269** .327** .302** .420** .283** .460** -0.09 -0.106
Sig. (2-tailed) . 0 0.009 0.002 0.003 0 0.005 0 0.377 0.294
N 54 54 54 54 54 54 54 54 54 54
PQ2 Correlation Coefficient 1 .434** .414** .341** .443** .288** .517** -0.128 -0.098
Sig. (2-tailed) . 0 0 0.001 0 0.004 0 0.201 0.327
N 54 54 54 54 54 54 54 54 54
PQ3 Correlation Coefficient 1 .340** .346** .223* .238* .410** -0.037 -0.034
Sig. (2-tailed) . 0.001 0.001 0.03 0.017 0 0.711 0.734
N 54 54 54 54 54 54 54 54
PQ4 Correlation Coefficient 1 .217* .439** .259* .454** -0.028 -0.19
Sig. (2-tailed) . 0.032 0 0.01 0 0.779 0.058
N 54 54 54 54 54 54 54
PQ5 Correlation Coefficient 1 .301** .223* .304** -0.026 0.03
Sig. (2-tailed) . 0.003 0.024 0.002 0.792 0.758
N 54 54 54 54 54 54
PQ6 Correlation Coefficient 1 0.144 .526** -0.023 0.001
Sig. (2-tailed) . 0.158 0 0.82 0.994
N 54 54 54 54 54
PQ7 Correlation Coefficient 1 .293** -0.046 0.005
Sig. (2-tailed) . 0.004 0.641 0.958
N 54 54 54 54
PQ8 Correlation Coefficient 1 -0.131 0.043
Sig. (2-tailed) . 0.193 0.666
N 54 54 54
PQ9 Correlation Coefficient 1 -0.164
Sig. (2-tailed) . 0.095
N 54 54
Correlations
Measure Time PQ1 PQ2 PQ3 PQ4 PQ5 PQ6 PQ7 PQ8 PQ9 Score
Spearman's rho Time Correlation Coefficient 1 -0.136 -0.167 -0.058 -0.169 -0.075 -0.23 -0.225 -0.092 -0.101 0.076
Sig. (2-tailed) . 0.327 0.226 0.679 0.222 0.592 0.095 0.102 0.507 0.466 0.586
N 54 54 54 54 54 54 54 54 54 54 54
PQ1 Correlation Coefficient 1 .634** .346* .429** .391** .521** .355** .566** -0.125 -0.126
Sig. (2-tailed) . 0 0.01 0.001 0.003 0 0.008 0 0.367 0.364
N 54 54 54 54 54 54 54 54 54 54
PQ2 Correlation Coefficient 1 .556** .539** .420** .542** .381** .658** -0.177 -0.114
Sig. (2-tailed) . 0 0 0.002 0 0.005 0 0.2 0.41
N 54 54 54 54 54 54 54 54 54
PQ3 Correlation Coefficient 1 .443** .442** .307* .312* .517** -0.051 -0.027
Sig. (2-tailed) . 0.001 0.001 0.024 0.022 0 0.714 0.846
N 54 54 54 54 54 54 54 54
PQ4 Correlation Coefficient 1 .275* .527** .339* .558** -0.048 -.270*
Sig. (2-tailed) . 0.044 0 0.012 0 0.731 0.048
N 54 54 54 54 54 54 54
PQ5 Correlation Coefficient 1 .385** .283* .371** -0.03 0.049
Sig. (2-tailed) . 0.004 0.038 0.006 0.829 0.727
N 54 54 54 54 54 54
PQ6 Correlation Coefficient 1 0.196 .646** -0.033 0.01
Sig. (2-tailed) . 0.155 0 0.812 0.94
N 54 54 54 54 54
PQ7 Correlation Coefficient 1 .388** -0.046 0.009
Sig. (2-tailed) . 0.004 0.739 0.95
N 54 54 54 54
PQ8 Correlation Coefficient 1 -0.175 0.077
Sig. (2-tailed) . 0.205 0.582
N 54 54 54
PQ9 Correlation Coefficient 1 -0.219
Sig. (2-tailed) . 0.111
N 54 54
**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
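The rank correlations reported in Table 4-10 can be illustrated with a minimal Spearman's rho calculation. This sketch uses the classic no-ties formula on hypothetical data; the study's analysis also reports Kendall's tau-b and, in a statistics package, would handle tied ranks:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties formula:
    rho = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1)).
    Illustration only; not valid when ranks are tied."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical presence ratings vs. landing scores for six participants.
presence = [83, 76, 91, 68, 74, 88]
scores = [70, 74, 78, 66, 72, 81]
print(round(spearman_rho(presence, scores), 3))
```

Perfectly monotonic data yield rho = 1 (or -1 when reversed); values near zero, as in the Score column of Table 4-10, indicate little monotonic association.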
Chapter 5 : Discussion
5.1: Introduction
The results presented provide benchmarks of completion time and success rate
of simulated FFLB launches into varying sea and weather states by novice operators.
Our experiments are concerned only with the launching phase of the FFLB evacuation
in an emergency response, which in real emergency procedures may also include an
escape before evacuation and rescue after it.
The tests did not account for mechanical issues, such as the reliability of the
equipment or launch failures due to design and operational faults, and the boat
from which the FFLB was launched was modeled as a fixed object in the sea,
unaffected by wave swell.
This study set out to examine whether the addition of pictorial feedback after
launch and landing would lead to better performance in subsequent launches by
novice TEMPSC operators. It was hypothesized that those in the feedback group
would perform more successful landings throughout their trials, as well as show
a significant change in launch time.
The most important findings from this study are:
Wave height had the greatest effect on launch success.
Pictorial feedback did not affect launch success or time to launch of our FFLB
launching trials.
Visual clarity only had a significant effect on launch time in the no feedback
group.
Sense of presence was not affected by the inclusion of feedback
boat may not be as great a concern as properly maneuvering it after launching.
The lower success rate in the size 5 wave launches indicates that when
developing training protocols, some focus must be placed on relatively smaller
waves, which have the potential to set the FFLB back into the installation it
is trying to escape, even though managing the rest of the evacuation may be
simpler.
mathematical wave information from directly underneath the FFLB at ten places
equally spaced along the boat's long axis as it hit the water. This technique's
major shortfall was that it could not properly record successful launches into
barreling waves, as the parameters for success on those waves were different
from all others tested. Developers began work to improve data collection by
also collecting three points from the horizontal plane of the FFLB, but this
improvement did not pass pilot testing while the data collection for this study
was taking place. Another improvement made to the CSV file collection was
pulling a second set of wave height data three-tenths of a second after the
boat hit the water. This allowed further verification of the wave phase of the
boat's landing when choppy waves made assessment difficult.
Pooling of each participant's launching data was completed through the use of
a Microsoft Excel template that calculated slope information from the wave and
graphed it for experimenter evaluation of wave phase. This information could
then be easily sorted to allow further statistical analyses of both launch
scores by situation and launch score improvements through trial progression.
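The wave-phase evaluation described above can be sketched as a simple classifier over the ten wave-height samples. This is an illustration only: the study relied on a graphical Excel template and experimenter judgment, and the sign convention and thresholds below are assumptions:

```python
def classify_landing(heights):
    """Classify the wave phase under the hull from ten wave-height samples
    taken from bow (index 0) to stern. Sketch only: it assumes a positive
    tilt (stern higher than bow) means the surface falls away ahead of the
    boat, i.e. a downslope landing."""
    assert len(heights) == 10, "ten evenly spaced samples along the long axis"
    tilt = heights[-1] - heights[0]      # stern height minus bow height
    midship = sum(heights[3:7]) / 4      # average of the four middle samples
    ends = (heights[0] + heights[-1]) / 2
    bulge = midship - ends               # positive if the hull sits on a crest
    if abs(tilt) >= abs(bulge):
        return "downslope" if tilt > 0 else "upslope"
    return "peak" if bulge > 0 else "trough"

print(classify_landing([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))  # downslope
print(classify_landing([0, 1, 2, 3, 4, 4, 3, 2, 1, 0]))  # peak
```

The second set of samples taken three-tenths of a second later could be run through the same classifier to cross-check the assessment when waves are choppy.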
a given sequence, but the wave sequence randomly created at the start of each
trial would be different (for both the original trial and the feedback
recording), leaving the feedback unsatisfactory.
It was hoped that text feedback would supplement the pictorial feedback with
slope and wave quartile information. However, this too failed pilot testing, as
angled and horizontal waves could not be read with enough reliability to
produce accurate feedback about the participant's landing position.
Figure 5-1: Feedback picture of a successful downslope landing of the FFLB into size
8 waves moving south.
Figure 5-2: Feedback picture of a peak landing of the FFLB into size 8 waves moving
southeast.
Figure 5-3: Feedback picture of an upslope landing of the FFLB into size 8 waves
moving southwest.
Figures 5.1, 5.2, and 5.3 illustrate three examples of the feedback provided
for the larger wave height. The white caps of the waves are easily viewed, and
launching decisions can readily be made according to their position. However,
effective feedback for size 5 waves depended largely on spotting the wave crest
break line, as other aspects of the wave period were often not clear enough to
present information to the participant. In some cases these lines were choppy
or non-existent in the feedback picture, making it hard to tell in real time
whether launches were successful.
Figure 5-4: Feedback picture of a successful downslope landing of the FFLB into size
5 waves from the southwest.
When observing Figure 5.4, the boat appears to be touching down directly
between two wave crests, in a trough. However, analysis of the wave information
provided by the CSV output revealed a downslope landing. While the wave
information points were taken before the boat reached the deepest point of the
trough (as that is where the nose touches down), it would be hard for a novice
FFLB launcher to correctly evaluate this feedback as a successful launch.
metric. For example, being able to launch an FFLB quickly does not give a full
indication of the quality of the launch. While evacuation is a time-sensitive
task, it is unknown how much increased (or decreased) launch time could affect
overall launch and escape performance. The hypothesis that time to launch would
significantly decrease due to feedback was linked with the hypothesis that
performance for the feedback group would also be significantly different,
leading to a cause-and-effect conclusion. In this case, if time to launch had
increased with feedback, we could infer that participants were more selective
of the wave onto which they launched. Conversely, if launch time had decreased
with better performance scores, we could infer that feedback provided positive
reinforcement and improved confidence over sequential launches. As research
moves forward, it will be important to track the trend between launch time and
success to see what changes occur between the two measures. Ideally, as a
coxswain gains experience launching, the number of successful launches would
trend upward while time to launch would trend downward. Further studies wishing
to quantify the effect of experience on launch time should include trial
recurrence (as every launch situation was experienced only once in our
testing), as well as follow-up sessions to evaluate session-to-session changes.
5.8: Presence
Presence scores were generally high on their aspect scales and were not
significantly different between the groups. This indicates that participants'
perceived presence during the trials was not affected by the inclusion of
feedback.
Future studies can explore whether these measures are consistent across other
types of presence questionnaires, and investigate how different questionnaire
answers correlate with physiological measures associated with presence as
participants complete the trials in either the full mission simulator or a
desktop version of the SurvivalQuest system. These tests would further
investigate how presence and immersion could influence learning and retention
of FFLB launching skills.
In a study that compared realistic-mapping controllers (e.g., a steering wheel)
to non-realistic-mapping controllers (joystick, keyboard), Skalski et al.
(2011) found that a controller replicating the real behavior (the steering
wheel) in the virtual environment led to increased levels of presence and
enjoyment. Tamborini and Skalski (2006) argue that this effect is related to
the participant's ability to access mental models of the behavior more quickly
and accurately using realistic natural-mapping controllers.
While escaping the area after launch may be relatively easier in these smaller waves,
the possibility of the FFLB being set back into the installation it is trying to escape is
still a principal concern and was limited by factors of the simulation design.
Future research should further explore the relationship of launching success
rate on different wave sizes. The scale model testing of Simões Ré, Pelley and Veitch
(2003) can be expanded to see if novice operators can launch a scale FFLB onto
proper wave phases for successful sail away. This may bring more insight into
how the visual characteristics of real waves (not just the simulation visuals)
contribute to the success rate of launches. Research should also investigate
whether including the hydrodynamic effects on the installation from which the
FFLB is launching helps participants time the smaller wave launches, through
kinesthetic or visual feedback.
5.12: Evaluating the Simulator
happening on the boat with the visual aspects of the simulation may lead to
improved scores in both domains, as the transfer between what was being
depicted on the screen and what was being experienced in the simulator was one
of the main reasons subjects scored the motion aspects of the simulation so
highly.
Feedback provided through SMEs further validated the need for improved audio
cueing. They noted that, aside from the platform alarm, people inside the FFLB
before launching were generally very quiet to allow the coxswain to concentrate
on the task. Then, after hitting the water, people would begin to make noises
due to impact injuries, strains, and seasickness.
of information could be aided by considering, within TEMPSC design
specifications, both a minimum speed for a given sea-state and standards of
maneuverability in waves. A limited amount of data exists with respect to 'set
back', although there is clearly a need for this to be expanded. Coupled with
this is the need to better understand the hydrodynamic principles of the
problem, possibly through further, more robust mathematical modeling.
Chapter 6 : Conclusions
STCW coxswain training standards for the inclusion of adverse weather
launching. This evaluation is paramount for the safety of those onboard vessels
and installations. Although the effect of simulation training on coxswain
performance is not yet fully understood, this research allows parallels to be
drawn with the long-established simulation training programs of the medical and
aviation fields. Many facets of medicine use simulation to educate students and
to aid experts in maintaining and developing skills. Similarly, the maritime
environment could potentially benefit from simulation training as a viable
alternative or complement to current standard STCW training.
These preliminary findings provide an opportunity for those with an interest
in bringing attention to the usefulness of simulators in training for adverse
weather launching. They establish a basis on which future research can expand.
Training through the use of simulators may offer regulators, institutions, and
companies the prospect of enhancing and supplementing current lifeboat coxswain
training standards.
References
Aggarwal, R., Black, S. A., Hance, J. R., Darzi, A., & Cheshire, N. J. W. (2006).
Virtual reality simulation training can improve inexperienced surgeons'
endovascular skills. Journal of Vascular and Endovascular Surgery, 31, 588.
Atlantic Canada Offshore Petroleum Agency: Standard practice for the training
and qualifications of personnel. (2010). (No. 2010-028). Canadian Association
of Petroleum Producers. Retrieved from http://www.capp.ca/library/publications/
Barfield, W. & Hendrix, C. (1995). The effect of update rate on the sense of presence
within virtual environments. Virtual Reality: Research, development, and
application, I (1).
Barfield, W., & Weghorst, S. (1993). The sense of presence within virtual
environments: A conceptual framework. In G. Salvendy & M. Smith (Eds).
Human-computer interaction: Applications and case studies (pp.699-704).
Amsterdam: Elsevier.
Barnett, B., Perrin, B., Curtin, J., & Helbing, K. (1998). Can computer-based
assessment be used for motor skills learning? A training transfer study.
Proceedings of the Human Factors and Ergonomics Society: Vol. 2 (pp. 1432-
1436). Santa Monica, CA: Human Factors and Ergonomics Society.
Basdogan, C., Ho, C., Srinivasan, M. A., & Slater, M. (2000) An experimental study
on the role of touch in shared virtual environments. ACM Transactions on
Computer Human Interaction, 7(4), 443-460.
Bystrom, K.E., Barfield,W., & Hendrix, C. (1999). A conceptual model of the sense
of Presence in virtual environments. Presence: Teleoperators and Virtual
Environments, 8(2), 241–244.
Cannon-Bowers J.A. & Bowers C.A. (2010). Synthetic learning environments. In J.M.
Spector, M.D. Merrill, J.v. Merrienboer & M.P. Driscoll (Eds.) Handbook of
research on educational communications and technology (3 ed., pp. 306-315).
New York; Lawrence Erlbaum Associates
Chopra, V., Gesink, B. J., DeJong, J., Bovill, J. G., & Brand, R. (1994). Does training
on an anesthesia simulator lead to improvement in performance? British Journal
of Anesthesia, 73, 293.
Dahlström, N., Dekker, S., van Winsen, R., & Nyce, J. (2009). Fidelity and
validity of simulator training. Theoretical Issues in Ergonomics Science,
10(4), 305.
Det Norske Veritas. (2011). Maritime simulator systems. (Standard for
Certification No. 214). Det Norske Veritas.
Dinh, H. Q., Walker, N., Song, C., Kobayashi, A., & Hodges L.F. (1999). Evaluating
the importance of multi-sensory input on memory and the sense of presence in
virtual environments. Proceedings of the IEEE Virtual Reality 1999, 222-228.
Draper, J. V., Kaber, D. B., & Usher, J. M. (1998). Telepresence. Human
Factors, 40(3), 354-375.
Fowlkes, J. E., Dwyer, D. J., Oser, R. L., & Salas, E. (1998). Event-based approach to
training (EBAT).The International Journal of Aviation Psychology, 8(3), 209-
221.
Fowlkes, J. E., Lane, N. E., Salas, E., Franz, T., & Oser, R. (1994). Improving the
measurement of team performance: The TARGETs methodology. Military
Psychology, 6, 47-61.
Freeman, J., Avons, S.E., Pearson, D., & IJsselsteijn, W. (1999). Effects of sensory
information and prior experience on direct subjective ratings of presence.
Presence, 8:1–13.
Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G.,
Satava, R. M. (2005). Virtual reality simulation for the operating room. Annals
of Surgery, 214(2), 364.
Heeter, C. (1992). Being there: The subjective experience of presence. Presence, 1(2),
262.
Hooper, J.B. Witt, N.A.J. & McDermott, A.P. (2000), Automatic Student Feedback
and Navigation Simulation, 11th International Navigation Simulator Lecturers’
Conference (INSLC), Kalmar, Sweden.
Kennedy, R. S., Lane, N. E., Berbaum, K. S., & Lilienthal, M. G. (1993).
Simulator sickness questionnaire: An enhanced method for quantifying simulator
sickness. International Journal of Aviation Psychology, 3(3), 203-220.
Kewman, D. G., Seigerman, C., Kintner, H., Shu, S., Henson, D., & Reeder, C.
(1985). Simulation training of psychomotor skills: Teaching the brain-injured to
drive. Rehabilitation Psychology, 30, 11.
Kozak, J. J., Hancock, P. A., Arthur, E., & Chrysler, S. (1993). Transfer of training
from virtual reality. Ergonomics, 36(7), 777–784.
Lapointe J. & Robert J. (2000). Using VR for efficient training of forestry machine
operators. Education and Information Technologies, 5, 237-250.
Lessiter, J., Freeman, J., Keogh, E., & Davidoff, J.(2000). Development of a new
cross-media presence questionnaire: The ITC-Sense of presence. Paper
presented at the Presence 2000 Workshop, March 27–28, Delft.
Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence.
Journal of Computer-mediated communication, 3(2).
Ma, R., & Kaber, D. B. (2006). Presence, workload and performance effects of
synthetic environment design factors. International Journal of Human-Computer
Studies, 64(6), 541-552.
MacKinnon, S. N., Evely, K., & Antle, D. (2009). Does mariner experience effect
distance judgment of visual targets in virtual marine environments?
Interservice/Industry Training, Simulation, and Education Conference
(I/ITSEC), Orlando, Florida, United States.
Muirhead, P. M. (1996). The revised STCW convention and the new simulation
performance standards: Some implications for simulator designers, operators,
and instructors. In M. S. Chislett (Ed.), Marine simulation and ship
manoeuvrability (pp. 257). Rotterdam, Netherlands: A.A. Balkema.
Muirhead, P.M. (2003), A study of the impact of new technology and teaching
methodologies on global maritime education and training into the 21st Century,
Published Ph.D. Thesis, Curtin University of Technology, Perth, Australia.
Nichols S., Haldane C., & Wilson J.R., (2000). Measurement of presence and its
consequences in virtual environments. Int. J. Human-Computer Studies (52)
471-491
Patterson, A., McCarter, P., MacKinnon, S. N., Veitch, B., & Simões Ré, A. (2011).
(White Paper). Survival craft training using simulators. Virtual Marine
Technologies.
Results of a survey into lifeboat safety. (1994). (Survey) Oil Companies International
Marine Forum. Retrieved from http://www.ocimf.com
Review of lifeboat launching system accidents. (2001). (Safety Study No.
2001/01). Marine Accidents Investigation Branch. Retrieved from
http://www.maib.gov.uk/publications/index.hfm
Robson, J. K. (2007). Overview of TEMPSC performance standards.(Research Report
No.RR 599). London, United Kingdom: Health and Safety Executive. Retrieved
from http://www.hse.gov.uk/RESEARCH/rrpdf/rr599.pdf
Rose, F. D., Attree, E. A., Brooks, B. M., Parslow, D. M., Penn, P. R., &
Ambihaipathan, N. (2000). Training in virtual environments: Transfer to real
world tasks and equivalence to real task training. Ergonomics, 43(4), 494.
Ross, T. W. (2006). Ship’s lifeboats: Analysis of accident cause and effect and its
relationship to seafarers’ hazard perception. (Master's, Dalhousie University).
Retrieved from http://www.msc1206.com/trevor_w_ross_report.pdf
Salas, E., Rosen, M. A., Held, J. D., & Weissmuller, J. J. (2009). Performance
measurement in simulation-based training: A review and best practices.
Simulation & Gaming, 40, 328.
Saus, E. R., Johnson, B. H., & Eid, J. (2011). Perceived learning outcome: The
relationship between experience, realism, and situation awareness during
simulator training. International Maritime Health, 1(4), 258.
Schiflett S.G., Elliott L.R., Salas E., and Coovert M.D. (eds.) (2004), Scaled Worlds:
Development, Validation, and Application, Aldershot: Ashgate Publishing
Limited.
Seymour, N. E., Gallagher, A. G., Roman, S. A., O’Brien, M. K., Bansal, V. K.,
Andersen, D. K., & Satava, R. M. (2002). Virtual reality training improves
operating room performance. Annals of Surgery, 236(4), 458.
Simões Ré, Pelley and Veitch (2003). Evacuation Performance. Offshore Technology
Conference. Houston, Texas USA; 5-8 May 2003.
Simões Ré, A. and Veitch, B. (2007). Comparison of three types of evacuation system.
To appear, Transactions, Society of Naval Architects and Marine Engineers,
Vol.115, 20 p.
Simões Ré, A., Veitch, B., & Spencer, D. (2010). Escape, evacuation and rescue for
arctic and subarctic. (No. EXX027-01). St. John's, Newfoundland and Labrador,
Canada: Oceanic Consulting Corporation.
Simulation training is all set for change. (2011). Safety at Sea, 45, 503.
Skalski, P. (2004) The quest for presence in video game entertainment. Presented as
part of the presence panel at the Central States Communication Association
Annual Conference, Cleveland, OH.
Slater, M., Linakis, V., Usoh, M., & Kooper, R., (1996). Immersion, presence, and
performance in virtual environments: An experiment with tri-dimensional chess.
In M. Green (Ed), ACM Virtual Reality Software and Technology (pp. 163-
172), 1-4 July 1996.
Slater, M., & Wilbur, S. (1997). A framework for immersive virtual environments
(FIVE): Speculations on the role of presence in virtual environments. Presence,
6: 603– 616.
Tamborini, R., & Skalski, P. (2006). The role of presence in the experience of
electronic games. In P. Vorderer & J. Bryant (Eds.), Playing video games:
Motives, responses, and consequences. (pp. 225-240) Mahwah, NJ: Erlbaum.
Usoh, M., Catena, E., Arman, S., & Slater, M. (2000). Using presence questionnaires
in reality. Presence: Teleoperators and Virtual Environments, 9, 497-503.
Van Baren, J., & IJsselsteijn, W. (2004). Measuring Presence: A Guide to Current
Measurement Approaches. Deliverable of the OmniPres project IST-2001-
39237.
Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual
environments: A presence questionnaire. Presence: Teleoperators and Virtual
Environments, 7(3), 225-240.
Veitch, B., Billard, R., & Patterson, A. (2008a). Emergency response training using
simulators. Offshore Technology Conference, Houston, Texas.
Veitch, B., Billard, R., & Patterson, A. (2008b). Evacuation training using immersive
simulators. The International Society of Offshore and Polar Engineers
Conference, Vancouver, British Columbia, Canada.
Welch, R. B. (1999). How can we determine if the sense of presence affects task
performance? Presence, 8(5), 574-577.
Zantow, K., Knowlton, D. S., & Sharp, D. C. (2005). More than fun and games:
Reconsidering the virtues of strategic management simulations. Academy of
Management Learning & Education, 4 (4), 451-458.
Appendix
I am working as part of the Virtual Environments Project through the Major Research
Partnerships at MUN. I am currently running a study titled “The Effect of Simulation
Training Exposure on Skill Acquisition”.. It would be great if you would be able to
help me out and would like to participate in the study.
Participation in studies such as this is a great opportunity for students to
learn more about the research taking place at MUN, the types of research taking
place in the health and safety industry, and the research process in general.
Participants must:
All participants will receive initial training and an overview of the simulator system,
its operation, and the objectives of the experiment (i.e. to successfully launch the
boat). Participants will be briefed on the ideal launch orientation of the boat when it
hits the waves and the potential real-life consequences of failure.
During the session, you will take part in various trials, under different parameters and
conditions, in which you will repeatedly launch a free-fall lifeboat simulator into
varying ocean and surrounding conditions. Following each trial, some of you will
receive feedback on the success of your launch, in an attempt to improve
subsequent trials.
If you choose to participate in this research study, you will be asked to attend one
90–120 minute session in the Fluids Lab (EN 1035), Engineering Building, at MUN.
If you have any questions about the research study and/or participation in it, or are
interested in participating, please respond to this email and we can book your session
time!
Thanks,
Appendix B: Physical Activity Readiness Questionnaire
PAR-Q & YOU
Physical Activity Readiness
Questionnaire - PAR-Q (revised 2002)
(A Questionnaire for People Aged 15 to 69)
Regular physical activity is fun and healthy, and increasingly more people are starting
to become more active every day. Being more active is very safe for most people.
However, some people should check with their doctor before they start becoming
much more physically active.
If you are planning to become much more physically active than you are now, start by
answering the seven questions in the box below. If you are between the ages of 15 and
69, the PAR-Q will tell you if you should check with your doctor before you start. If
you are over 69 years of age, and you are not used to being very active, check with
your doctor.
Common sense is your best guide when you answer these questions. Please read the
questions carefully and answer each one honestly: check YES or NO.
YES NO
___ ___ 1. Has your doctor ever said that you have a heart condition and that you
should only do physical activity recommended by a doctor?
___ ___ 2. Do you feel pain in your chest when you do physical activity?
___ ___ 3. In the past month, have you had chest pain when you were not doing
physical activity?
___ ___ 4. Do you lose your balance because of dizziness or do you ever lose
consciousness?
___ ___ 5. Do you have a bone or joint problem (for example, back, knee or hip)
that could be made worse by a change in your physical activity?
___ ___ 6. Is your doctor currently prescribing drugs (for example, water pills) for
your blood pressure or heart condition?
___ ___ 7. Do you know of any other reason why you should not do physical
activity?
If you answered YES to one or more of these questions:
Talk with your doctor by phone or in person BEFORE you start becoming much more
physically active or BEFORE you have a fitness appraisal. Tell your doctor about the
PAR-Q and which questions you answered YES.
• You may be able to do any activity you want — as long as you start slowly and
build up gradually. Or, you may need to restrict your activities to those which are safe
for you. Talk with your doctor about the kinds of activities you wish to participate in
and follow his/her advice.
• Find out which community programs are safe and helpful for you.
If you answered NO to all questions:
If you answered NO honestly to all PAR-Q questions, you can be reasonably sure that
you can:
• start becoming much more physically active – begin slowly and build up gradually.
This is the safest and easiest way to go.
• take part in a fitness appraisal – this is an excellent way to determine your basic
fitness so that you can plan the best way for you to live actively. It is also highly
recommended that you have your blood pressure evaluated. If your reading is over
144/94, talk with your doctor before you start becoming much more physically active.
PLEASE NOTE: If your health changes so that you then answer YES to any of the
above questions, tell your fitness or health professional. Ask whether you should
change your physical activity plan.
Informed Use of the PAR-Q: The Canadian Society for Exercise Physiology, Health
Canada, and their agents assume no liability for persons who undertake physical
activity, and if in doubt after completing this questionnaire, consult your doctor prior
to physical activity.
Appendix C – Consent Form
Small Craft Simulation Project,
c/o Faculty of Engineering of Memorial University of Newfoundland,
St. John’s, NL A1B 3X5
Consent to Take Part in Research
INVESTIGATOR(S): Dr. Scott MacKinnon, Mr. Steven Mallam, Mr. Alan Dalton,
Dr. Brian Veitch, Ms. Jennifer Smith, Mr. Randy Billard, Cpt. Anthony Patterson
You have been invited to take part in a research study. It is up to you to decide
whether to be in the study or not. Before you decide, you need to understand what the
study is for, what risks you might take and what benefits you might receive. This
consent form explains the study.
If you decide not to take part or to leave the study, this will not affect your student
status [if applicable].
1. Introduction/Background:
Simulators can be used to train for tasks that are too expensive or dangerous to train
in the real world. In addition to the cost and safety factors, it has been suggested that
new behaviors and operations can be trained in a more time-efficient manner using a
combination of virtual environment (VE) and real-world training, rather than real-world
training alone.
The benefits of decreased expense, risk, and time associated with simulator training
mean nothing if the simulator cannot provide an effective environment in which to
train new behaviors and operations. The effectiveness of training in a simulator is
influenced by the degree to which the user believes the virtual environment matches
the real-world environment, a perceptual state referred to as ‘presence’. The realism
of the simulation mediates the level of presence experienced by the operator.
It has been shown that an increased number of cueing systems (e.g. visual, audio,
force feedback, motion) in a simulation environment can increase perceived presence.
Thus, a simulation environment with visual, audio and motion cues should be more
realistic than one with visual cues alone. The aim of this research is to determine if
the addition of motion cues to a lifeboat simulation environment will increase
perceived presence, which will ultimately increase the effectiveness of training in the
simulator.
2. Purpose of study:
The purpose of this study is to investigate if the addition of physical movement from a
motion platform will increase the perceived presence in a lifeboat simulation
environment, which will ultimately increase the effectiveness of training in the
simulator.
If you choose to take part in this experiment you will be asked to complete a PAR-Q
(physical activity readiness questionnaire) form and a pregnancy/vestibular disorder
questionnaire. You will be asked questions about any medical conditions you might
have that may restrict you from participating in this study.
You will be asked to attend one 2-hour session in the Fluids Lab (EN 1035),
Engineering Building, MUN. During your session you will be given an introduction
to the operation procedure of the lifeboat simulator. Once you are comfortable with
the operation procedure, you will be instructed to complete two trials, one with
motion and one without motion. Random selection will be used to determine the
order in which you will complete the trials. The motions will represent a moderate
sea state.
Upon completion of each trial, you will be asked to complete the presence
questionnaire. In addition to the questionnaires, you will be asked to wear a heart rate
monitor with a chest strap and watch for the duration of the session. A five minute
baseline measure of your heart rate will be recorded prior to each trial.
In addition, a video camera may be used to capture your actions and reactions during
each trial, and the recordings will be compared to determine if there were any
differences between trials. The video will be viewed only by the investigators for
analysis.
4. Length of time:
You will be expected to come to the Memorial University Engineering Building for
one (1) session for approximately two (2) hours. Some testing might be performed
over weekends.
5. Risks:
There is a potential for slips, trips or falls resulting in physical bruising or injury.
However, since participants will be seated and secured in a four-point seatbelt at all
times while the motion bed is engaged, the risk of slips, trips or falls will be minimal.
The use of LCD televisions to view computer-generated graphics carries minimal
risk; however, participants may experience minor eye strain.
The audio system may produce excessive sound levels that could cause temporary
hearing impairment. National guidelines for noise exposure will be observed
throughout the design and testing stages.
Discomforts:
Inconveniences:
Interruption of normal daily schedules (i.e. early mornings, late evenings, weekends,
etc.)
6. Benefits:
7. Liability statement:
Signing this form gives us your consent to be in this study. It tells us that you
understand the information about the research study. When you sign this form, you
do not give up your legal rights. Researchers or agencies involved in this research
study still have their legal and professional responsibilities.
8. What about my privacy and confidentiality?
Protecting your privacy is an important part of this study. Every effort to protect your
privacy will be made; however, it cannot be guaranteed. For example, we may be
required by law to allow access to research records.
Access to records
The members of the research team will see study records that identify you by name.
Other people may need to look at the study records that identify you by name. This
might include the research ethics board. You may ask to see the list of these people.
They can look at your records only when one of the research team is present.
Use of records
The research team will collect and use only the information they need for this
research study.
Your name and contact information will be kept secure by the research team in
Newfoundland and Labrador. It will not be shared with others without your
permission. Your name will not appear in any report or article published as a result of
this study.
If you decide to withdraw from the study, the information collected up to that time
will continue to be used by the research team. It may not be removed. This
information will only be used for the purposes of this study.
Information collected and used by the research team will be stored by Dr. Scott
MacKinnon, who is responsible for keeping it secure.
You may ask Dr. MacKinnon to see the information that has been collected about
you.
9. Questions:
If you have any questions about taking part in this study, you can meet with the
investigator who is in charge of the study at this institution. That person is: Dr. Scott
MacKinnon
Or you can talk to someone who is not involved with the study at all, but can advise
you on your rights as a participant in a research study. This person can be reached
through:
Signature Page
Study title: The Effect of Motion Cues on the Perception of Presence in a Lifeboat
Simulation Scenario.
I understand that it is my choice to be in the study and that I may not benefit.
Yes { } No { }
I agree to be videotaped during the data collection. Yes { } No { }
___________________________________ __________________
Signature of participant Date
____________________________________ __________________
Signature of witness (if applicable) Date
I have explained this study to the best of my ability. I invited questions and gave
answers. I believe that the participant fully understands what is involved in being in
the study, any potential risks of the study and that he or she has freely chosen to be in
the study.
Appendix D – Presence Questionnaire
PRESENCE QUESTIONNAIRE
1. How responsive was the simulated environment to actions that you initiated (or
performed)?
0% 50% 100%
2. How natural did your interactions with the simulated environment seem?
0% 50% 100%
4. How much did the visual aspects of the simulated environment involve you?
0% 50% 100%
5. How much did the auditory aspects of the simulated environment involve you?
0% 50% 100%
6. How much did the motion aspects of the simulated environment involve you?
0% 50% 100%
7. Were you able to anticipate what would happen next, in the simulated environment,
in response to the actions that you performed?
0% 50% 100%
9. How much delay did you experience between your actions and expected outcomes?
0% 50% 100%
Additional Feedback
1. The aim of each launch was to time the release of the boat so that you entered the
downslope of the oncoming wave. Do you feel that you improved your performance
as the testing proceeded?
2. Of the complete simulation, which aspect (i.e. motion, visuals or auditory) did you
find the MOST realistic?
3. Of the complete simulation, which aspect (i.e. motion, visuals or auditory) did you
find the LEAST realistic?