
Electroencephalography (EEG)

GLENNA L. READ and ISAIAH J. INNIS


Indiana University, USA

Although keeping us alive is arguably its most important function, the human brain is responsible for a host of functions, including the processing of environmental stimuli. Electroencephalography (EEG) is a noninvasive technique that provides a direct measure of the brain's electrical activity through electrodes placed on the scalp. EEG measures electrical activity in real time with high temporal precision. Through the analysis of event-related brain potentials (ERPs) and frequency bands, EEG allows researchers to investigate the psychological mechanisms underlying perception and behavior. It provides high-resolution, real-time data that can capture rapid, implicit processes not revealed through self-report.

History

In 1929 Hans Berger demonstrated the electrical activity of the human brain through
a series of experiments. By placing electrodes on the scalp and amplifying the signal,
Berger determined that changes in voltage resulting from electrical activity in the brain
could be plotted and measured—thus inventing (and recording) the first electroen-
cephalogram (EEG). Berger posited that oscillations in the EEG might be related to
cognitive activity in humans. Although neurophysiologists were skeptical at first, other
researchers subsequently confirmed this observation (Adrian & Matthews, 1934). In
1964, Walter and colleagues reported the first ERP component, known as the contingent
negative variation (CNV) (Walter, Cooper, Aldridge, McCallum, & Winter, 1964). This
ERP reflected cognitive preparation for a future stimulus and its discovery prompted
further investigation into ERP components. In 1965 researchers discovered the P3 com-
ponent (Sutton, Braren, Zubin, & John, 1965), one of the most studied components to
date. In the late 1960s and 1970s researchers were more interested in discovering new ERP components than in addressing questions of broad scientific importance. As a result, the reputation of ERP research declined. In the late 1980s, however, increased application of ERPs to questions of general scientific interest renewed interest and enthusiasm for the technique (Luck, 2014).

Physiological mechanism

EEG measures fluctuations in electrical activity of the brain over time. This electrical
activity usually ranges from −100 to +100 μV. Recordings appear as positive and
negative deflections that can be analyzed for frequency, amplitude, and latency, all of
which may have psychological meaning. The brain electrical activity measured by EEG
originates from summed postsynaptic potentials of cortical neurons. Postsynaptic
potentials are the voltages that arise after an action potential when neurotransmitters
are released and bind to a postsynaptic cell, altering the passage of ions across a
cell membrane. Postsynaptic potentials can be the result of localized depolarization
(excitatory postsynaptic potentials) or hyperpolarization (inhibitory postsynaptic
potentials) caused by the action potential. Postsynaptic potentials in similarly stimu-
lated cells tend to be synchronous and last for tens to hundreds of milliseconds. The
EEG recording reflects the summation of postsynaptic potentials of groups of cortical
neurons arranged perpendicular to the scalp. Each postsynaptic potential creates an electrical dipole. Cortical neurons are organized in a columnar structure in which the orientation of their electrical fields is aligned, which enables the signals from multiple neurons to be summed. EEG picks up activity only from the pole of the dipole oriented toward the scalp in these synchronously active, perpendicular neurons; the opposite pole of the dipole is not detected. Neurons in some structures (e.g., the amygdala) are not organized in this columnar fashion and therefore cannot be detected using EEG. Dipoles of opposing polarity that are oriented toward each other cancel out, so that neither dipole is detected.

Event-related potentials

An event-related potential (ERP) is an index of brain activity revealed through systematic changes in electrical activity generated primarily by cortical neurons. As
the name implies, ERPs are electrical potentials that are associated with specific events.
ERPs are indicative of a number of cognitive activities that are associated with the
presence of a stimulus or response. For example, ERPs that occur relatively early after
a visual stimulus may fluctuate based on certain physical aspects of the stimulus.
ERPs are useful for measuring rapid and implicit cognitive activity because they
allow researchers to examine brain wave fluctuations instantaneously and with high
temporal precision. Thus, researchers may investigate ERPs that occur before conscious
awareness of a stimulus and track how responses change over time. Further, as ERPs
are rapid and automatic, they are less susceptible to concerns of social desirability than
some behavioral measures.
An EEG waveform has positive and negative voltage deflections that may be referred
to as waves or peaks. If a waveform is reliably elicited by a particular stimulus, is asso-
ciated with a specific cognitive process, and has a circumscribed scalp distribution,
then it is referred to as an ERP component. ERP components are often named based
on the direction of the deflection and the latency in milliseconds. The letter “P” pre-
cedes positive components. The letter “N” precedes negative components. For example,
a positive deflecting component that occurs around 300 milliseconds after stimulus
onset is named P300. Similarly, a negative deflecting component that occurs around
100 milliseconds after stimulus onset is named N100. Often, the timing of the ERP
components is relative, not precise, and may be influenced by a variety of experimen-
tal factors. This means that the third positive deflection may be labeled P300 even if
it occurs after 400 milliseconds. For this reason, some researchers prefer to refer to
components based on their ordinal position. Thus, sometimes the names of ERPs are
shortened such that P300 becomes P3. This entry will refer to ERPs by their ordinal
position. Historically, components were plotted with negative deflections up. However,
contemporary researchers tend to plot components with positive deflections up. When
reading ERP literature it is important to check the plot axes to determine how the researcher has plotted the components.
The amplitudes of ERP components are much smaller than raw EEG in general. Thus,
in order to be seen, these waveforms must be averaged over many trials—from 10 up
to 500 trials depending on the ERP component to be analyzed. By using event codes to
indicate when an event occurs researchers may align the EEG and average waveforms
over many trials. Brain activity unrelated to the event fluctuates from positive to negative and is not consistently timed relative to the event. Through averaging, the
unrelated signal will approximate zero. In contrast, ERP components occur consistently
in a time-locked manner and are not eliminated by averaging the signal; thus, averaging
results in the elimination of unrelated EEG waves while retaining ERPs.
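
The logic of time-locked averaging can be sketched in a few lines of Python with NumPy (a minimal illustration only; the sampling rate, simulated signal, event times, and epoch window below are assumptions, not values tied to any particular system):

```python
import numpy as np

fs = 1000                                      # sampling rate in Hz (assumed for this sketch)
eeg = np.random.randn(60 * fs)                 # placeholder single-channel continuous recording, 60 s
event_samples = np.arange(1000, 58000, 1500)   # hypothetical event-code onsets, in samples

# Epoch window: 200 ms before to 800 ms after each event.
pre, post = int(0.2 * fs), int(0.8 * fs)

# Cut an epoch around every event and stack them into a trials-by-samples matrix.
epochs = np.stack([eeg[s - pre:s + post] for s in event_samples])

# Baseline-correct each epoch with its pre-stimulus mean, then average across trials.
# Activity that is not time-locked to the event averages toward zero, while the ERP remains.
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
erp = epochs.mean(axis=0)
```
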
ERP components can be analyzed for amplitude or latency. Amplitude of an ERP
component is thought to indicate the extent to which operations are engaged. Greater
magnitude components, both positive and negative in direction, may reflect a larger
cognitive response. Latency is thought to reflect the time at which operations have
been completed. Increased latency indicates that operations were completed at a later
time. Inferential statistics may be used to analyze peak amplitude, mean amplitude, or
latencies.
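
Peak and mean amplitude measures can then be read off the averaged waveform. The sketch below reuses the names from the example above but defines its own placeholders so that it runs on its own; the 250–500 ms search window is purely illustrative:

```python
import numpy as np

fs, pre = 1000, 200                        # sampling rate and number of baseline samples (assumed)
erp = np.random.randn(1000) * 0.5          # stands in for an averaged ERP waveform (1 s of data)

# Search for the largest positive peak between 250 and 500 ms after stimulus onset.
win = slice(pre + int(0.25 * fs), pre + int(0.50 * fs))
peak_idx = win.start + int(np.argmax(erp[win]))

peak_amplitude = erp[peak_idx]                   # peak amplitude in the units of the recording
peak_latency_ms = (peak_idx - pre) / fs * 1000   # peak latency relative to stimulus onset, in ms
mean_amplitude = float(erp[win].mean())          # mean amplitude over the measurement window
```
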
Stimulus-locked ERP responses are positive and negative deflections in the waveform
that are time- and phase-locked to specific stimuli. Endogenous ERPs are those that are
influenced by subjects’ perceptions or interpretations of stimuli. In contrast, exogenous
ERPs are those that correspond with the physical nature of the stimulus. According
to Bartholow and Amodio (2009), the functional significance of the ERP depends on
several factors, including the stimulus, the nature of the task, the timing, the location on the
scalp, and the researcher’s theoretical perspective. Additionally, interpretation of the
ERP may differ based on the modality of the stimulus. For example, the same ERP can
indicate different processes when elicited by a visual, versus auditory, stimulus. The
following descriptions are intended as a brief overview of some ERP components.
Early ERPs are thought to indicate automatic processing. Two early ERPs potentially
of interest to media researchers are the P1 and the N1. These two components reflect
attentional processes such that greater amplitude is indicative of more attention to
a stimulus. The P1 is affected by selective attention processes, arousal, and visual
parameters of a stimulus. The N1 follows the P1 and is modulated by spatial attention
and discriminative processing. The N1 has also been associated with the processing of
and attention toward auditory stimuli. A negative deflecting ERP that may follow is the
N170. The N170 is a component that fluctuates in response to the presence or absence
of faces and is indicative of facial processing. The N170 is also enhanced for experts
when viewing the subject of their expertise (e.g., bird experts viewing birds; Tanaka &
Curran, 2001).
Later ERPs often reflect controlled cognitive processing. The P3 is an oft-studied
component that occurs 300–800 ms after stimulus onset. The P3 is associated with the
processing of novelty. The P3 is often examined in response to a task known as the
“oddball paradigm” in which repetitive stimuli are presented and then interrupted by
a deviant stimulus. Amplitude of the P3 increases as the perceived probability of an
event decreases. In an oddball task, the amplitude of the P3 is enhanced in response
to the deviant stimulus compared to the repetitive stimulus. The P3 is also thought
to index “context updating” of the state of the environment (Donchin, 1981). Greater
latency of the P3 reflects more effortful categorization of a stimulus. The negative slow
wave (NSW) is a sustained negative deflection seen during the maintenance period of
working memory tasks. Amplitude of the NSW increases as memory load in these tasks
increases. The NSW also indexes cognitive processes associated with self-regulation and
the ability of a participant to overcome cognitive conflict.
Some ERPs are also affected by perceived emotional content of stimuli. For example,
amplitudes of P1, N1, N170, and P3 components are enhanced for emotional stimuli
compared to neutral stimuli. Two components of interest are the early posterior neg-
ativity and the late positive potential. The early posterior negativity is enhanced for
emotion-evoking stimuli of positive valence. The late positive potential is thought to
reflect “intrinsic task relevance of emotion-related stimuli” (Luck, 2014, p. 107).
Other ERPs are time-locked to a behavioral response and thus may fluctuate based
on the formation and regulation of the response. Two components in particular, the error-related negativity (ERN or Ne) and the error-related positivity (Pe), may be of
interest to media researchers. The ERN is a negative deflection that tends to occur
50–80 ms after an erroneous behavioral response. The ERN is associated with conflict
monitoring in the dorsal anterior cingulate cortex (dACC). The presence of the
ERN does not require that one is consciously aware that an error has been made,
although conscious awareness of the error does enhance the amplitude of the ERN.
The Pe is a positive deflection sometimes seen after the ERN. The peak amplitude
of this component occurs 250–400 ms after one has made an incorrect response. In
contrast to the ERN, the Pe occurs when one is consciously aware of having made
an error. The Pe is associated with the cognitive process of monitoring for conflict
between the preceding behavior and certain external cues regarding regulation of the
response.
Some ERPs occur as a participant prepares for an upcoming stimulus or response.
These anticipatory components are thought to reflect attention and control. They can
also be used to investigate the extent to which a participant is motivated to engage with
the stimulus. Some examples of anticipatory components are the contingent negative
variation (CNV) and the readiness potential. The CNV is an ERP with a negative deflec-
tion that is used as an expectancy measure. For example, the CNV is enhanced if there
is an expectation that another stimulus will follow. Some ERPs precede and accompany
movement—these ERPs are known as readiness potentials. The lateralized readiness
potential (LRP) is a slow negative ERP that occurs before a motor movement is initiated
and reflects preparation for an upcoming motor response. The LRP is most pronounced
over the motor cortex (located in the rear of the frontal lobe of the brain) contralateral
to the moving hand.

Frequency bands

As is apparent from visual inspection, EEG is a complex signal composed of multiple frequencies oscillating simultaneously. In the brain, lower frequencies show greater
power than higher frequencies, with power decreasing as the frequency increases. The
majority of brain activity occurs at frequencies under 100 Hz. The frequency spectrum
of the brain is divided into different bands of activity based around a center frequency.
These bands and their ranges are, in increasing frequency order: delta (<4 Hz), theta
(4–7 Hz), alpha (8–13 Hz), beta (14–30 Hz), and gamma (>35 Hz). Alpha activity has
more power than beta activity, which has more power than gamma activity. Of these,
alpha is perhaps the easiest to see, appearing as large rhythmic waves visible to the
naked eye in an EEG signal. It should be noted that while the center frequency of each
band never changes (e.g., alpha always straddles 10 Hz), the boundaries of the bands
are imprecise. For example, some may report alpha as 8–13 Hz or beta as 15–30 Hz.
These EEG frequency bands are associated with psychological phenomena. Delta
is often associated with drowsiness, sleep, and states of altered consciousness. Theta
appears to serve as a carrier wave for and modulator of the other oscillations and is
associated with the cessation of pleasurable activity. Alpha activity is associated with
attention and inhibitory control in the brain. Alpha activity is most prominent during
relaxation and is inversely related to brain activity, as indicated by PET and fMRI stud-
ies (Cook, O’Hara, Uijtdehaage, Mandelkern, & Leuchter, 1998; Goldman, Stern, Engel,
& Cohen, 2002). Interruption of alpha, known as alpha blocking, occurs during cogni-
tive tasks. During alpha blocking, alpha waves are replaced with higher frequency, lower
amplitude beta waves. Beta activity occurs when one is alert and is related to the regula-
tion of processing states. Finally, gamma activity is associated with object maintenance,
memory, and a variety of cognitive processes.
The EEG signal is collected in the time domain and must be converted to the fre-
quency domain before these bands can be analyzed. This is done using the Fourier
transform. Fourier showed that a time series can be represented as the sum of a series of sine waves, each with a coefficient corresponding to how much of that sine wave is needed to reconstruct the original signal. In reverse, multiplying these sine waves by their associated coefficients and adding them together reproduces the starting waveform. These
principles form the basis of frequency analysis. In EEG research, a Fourier transform
is used to decompose the EEG into a series of frequency coefficients that represent the
amount of power at each frequency needed to reconstruct the original waveform. Prac-
tically, this labels the amount of power at each frequency and therefore allows exam-
ination of EEG frequency bands. Stern, Ray, and Quigley (2001) describe the Fourier
transform as comparing a “template” of frequencies (the sine waves) to an existing EEG
signal to see how closely it matches the template. A power spectrum allows researchers
to visually represent all the frequencies present in the dataset, collapsed over time. More
complex versions of this analysis are used to examine time-locked frequency variations
in the waveform.
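
As a rough sketch of this idea in Python (using SciPy's Welch estimator, a windowed variant of the Fourier approach described above; the simulated signal, sampling rate, and band boundaries are assumptions for illustration):

```python
import numpy as np
from scipy.signal import welch

fs = 1000                                  # sampling rate in Hz (assumed)
eeg = np.random.randn(120 * fs)            # placeholder single-channel recording, 120 s

# Power spectrum of the whole recording, collapsed over time.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Integrate power within conventional band limits (band edges follow the ranges in the
# text and, as noted there, are imprecise).
bands = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (35, 100)}
band_power = {
    name: np.trapz(psd[(freqs >= lo) & (freqs <= hi)],
                   freqs[(freqs >= lo) & (freqs <= hi)])
    for name, (lo, hi) in bands.items()
}
```
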
A related procedure is a coherence analysis, which provides information about how
the signal from two given electrodes co-vary at a certain frequency. In other words, a
coherence analysis describes how an EEG signal at “each of two electrodes is related
to each electrode” (Stern et al., 2001, p. 91). A coherence analysis allows researchers to
investigate the extent to which frequencies at different electrode sites are synchronous.
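
A coherence analysis can be sketched in the same way (again with simulated data; the two channels and the alpha-band summary are assumptions for illustration):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                  # sampling rate in Hz (assumed)
chan_a = np.random.randn(120 * fs)         # placeholder signal from one electrode
chan_b = np.random.randn(120 * fs)         # placeholder signal from a second electrode

# Magnitude-squared coherence between the two channels at each frequency.
freqs, coh = coherence(chan_a, chan_b, fs=fs, nperseg=2 * fs)

# Mean coherence within the alpha band as one summary of alpha-band synchrony.
alpha_coherence = float(coh[(freqs >= 8) & (freqs <= 13)].mean())
```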

Recording EEG

This section provides an overview of some of the considerations for a researcher who
is interested in recording EEG data. The first part of this section discusses choosing an
EEG system and recording and analysis software, setting up an EEG lab, and a standard
study procedure. The second part of this section describes other practical considera-
tions including amplification, sampling rate, filtering, impedance, and referencing.

Choosing an EEG system


The biggest decision facing a researcher choosing an EEG system is the number of elec-
trodes to use. While it is possible to record from only a few electrodes, EEG systems
typically come in 32, 64, 128, or 256 channel configurations. However, some commu-
nication scholars have even used 14-channel systems (Minas, Potter, Dennis, Bartelt, &
Bae, 2014). Larger systems have become more common over time. A greater number
of sensors provides greater spatial resolution. However, this comes at the cost of additional
preparation time for the net and large file sizes for the data. Although analysis options
may be limited with fewer sensors, a smaller configuration may in some cases produce
higher data quality due to the ease of monitoring data during recording. Systems with
fewer sensors (e.g., 14–32) also cost less than higher density systems. Therefore, dense
array EEG systems with 128 channels or more may not be worth the tradeoff for some
researchers. Most systems include a reference and a noise-reducing ground electrode
alongside active sites. The number of electrodes needed depends on specific research
questions to be addressed, but a good recommendation is to use a 64-channel system.
Advanced techniques, such as source localization, require a minimum of 64 sensors.
Choosing at least a 64-channel system allows a wide range of analyses to be performed.
Electrodes are usually 4–8 mm in diameter and are typically mounted in either a
stretch lycra cap or an elastomer net. Silver/silver chloride (Ag/AgCl) is the preferred
material for electrodes as it is nonpolarizing, stable, and has a relatively low noise level.
Tin (Sn) may also be used because, like Ag/AgCl, it is highly conductive and resistant to
polarization. Depending on the system, electrode caps should be replaced about once a
year, but this varies with frequency of use.
A common system of locating electrodes on the scalp, also referred to as a montage, is
the International 10–20 system. In this montage, electrodes are placed at sites a distance
of 10% and 20% away from each other based on the distance between pre-auricular
points and the distance between the nasion (the bridge of the nose) and the inion (the bony bump at the back of the skull). Electrodes are labeled according to the region of the
brain above which they are located (O for occipital, T for temporal, C for central, F for
frontal, P for parietal). Numbers on the electrodes indicate laterality and distance from
the midline (labeled z). Odd numbers indicate that the electrode is on the left and even
numbers indicate that the electrode is on the right. Lower numbers indicate that an electrode is closer to the midline. This system has been modified over time to
account for higher density recordings, and sensor positions are often reported in this
format even if they were recorded with a different montage.
EEG systems have either low-impedance amplifiers or high-impedance amplifiers.
Low-impedance systems tend to record less noise than high-impedance systems but
require low impedances at every electrode. Low impedance is achieved by cleaning
or abrading the scalp and will often involve greater preparation time compared to a
high-impedance system. If low-impedance systems are not given low input impedances
then any advantage in data quality is lost. The need to heavily abrade the skin to lower
impedances may increase infection risk, a factor that should also be considered
when choosing a system. High-impedance amplifiers can tolerate both low and high
impedances and thus should be faster to prepare. In some cases, no abrasion of
the skin is necessary, greatly reducing the chance of infection. The disadvantage of
high-impedance systems is that they may record more noise due to poor common
mode rejection. Therefore, the decision is principally a tradeoff between setup time
and noise reduction. It is worth noting that differences in quality should be minimal,
provided good procedures are followed to collect clean data. However, high-impedance
amplifiers are more susceptible to low-frequency noise caused by skin potentials, an
artifact consisting of large, slow voltage shifts.
Another factor to consider when selecting an EEG system is whether it is gel- or saline-based. This distinction refers to the conductive medium used to bridge the gap
between electrodes and the scalp. Both options can be used to successfully record EEG
but there are some differences between them. Gel-based systems may need more prepa-
ration time, requiring abrasion and exfoliation of the scalp in order to ensure good
contact. They will also leave residue in subjects’ hair. Saline systems should be quicker
to apply, as the cap only needs to be soaked in solution before use. The speed advantage
may be offset by time-consuming adjustments to re-wet sensors that dry out over the
course of the experiment. Theoretically, compared to saline-based systems, gel-based
systems should provide more stable impedances and superior data quality over time; additionally, gel-based systems should stay in place better than saline nets, resulting in fewer movement artifacts. However, no formal comparison exists between the two
methods, meaning that the deciding factor will probably be application time or cost. For
healthy adult subjects, either approach is valid. Children or other special populations
may in particular benefit from the reduced setup time of a saline system.

Software
There are a variety of proprietary and open-source software packages available for EEG
recording and analysis. Some programs are capable of both recording and analysis while
others might be specialized for only one function. When choosing a software package
for data analysis, it is first necessary to define what sort of analyses will be performed
and then select a program that has the corresponding capabilities. Some programs may
be designed with a particular procedure in mind and perform it smartly but are limited
when using another method. Most, if not all, programs will be capable of ERP analysis.
However, other techniques such as time-frequency, connectivity, and source localiza-
tion may not be supported.
In many cases a purchased EEG system will include software developed by the manu-
facturer. For example, EEG systems built by EGI (Electrical Geodesics, Inc.) are bundled
with their Net Station software package. It is generally advisable to use manufacturer-
supplied software for recording to ensure compatibility with the hardware. However, for
pre-processing and data analysis it is recommended to use an open-source toolbox (two
common options are EEGLAB and ERPLAB). While proprietary software may function
perfectly well, it frequently lacks access to the code used for its operations. This leads to
a situation where a researcher cannot precisely determine how the software is treating
the data. In addition, some proprietary packages are limited in the analysis options
presented to the user, constraining what types of analysis can be done. Open-source
toolboxes allow the exploration and modification of code to suit the needs of the user
and therefore provide greater flexibility and transparency. Several of these toolboxes
run in the MATLAB environment, which may entail a nontrivial expense. Regardless
of what software is chosen, a researcher should always save every step of the analysis.
This helps ensure a backup in case of data loss and also allows the user to come back
at a later time and make adjustments to the data processing pipeline without having to
repeat every step.

Setting up a lab
There are several general requirements for setting up an EEG lab. First and most obvi-
ously, an EEG system will be required. Next, there must be a means of sending event
codes from the stimulus presentation software to the EEG recording. Without any event
markers it will be impossible to time-lock the data to stimulus events or even know what
was onscreen at any given moment. Will the subject make responses to the stimuli? If so,
take into account the latency of the response device. For example, a standard computer
keyboard may not offer good timing precision. A specialized response device may be
required in these cases. In either situation the response device should be comfortable
for the subject and be easy to use to prevent confusion during the task.
When setting up an EEG lab there are a number of steps that can be taken to ensure
the best possible data quality. First, possible sources of electrical noise should be
removed. Unnecessary equipment should not be stored in the lab. The EEG amplifier
should be kept physically distant from monitors, response devices, or other equipment.
One of the largest potential sources of noise is actually the stimulus presentation
monitor; maintaining a distance of 70 cm to 1 m between the subject and the screen will likely ameliorate this problem. Although not often necessary, electrical shielding of
the room and equipment may be worth the cost if line noise is strong and persistent.
Auditory noise is also a concern, as it may distract the subject and degrade task
performance. Therefore, avoid locating the lab near any loud facilities or machinery
and, if necessary, consider some type of soundproofing.
Ideally subjects will sit in a sturdy, comfortable chair made of nonconductive mate-
rial. An uncomfortable subject may fidget and introduce muscle artifacts or begin to
lose interest in the task. The room should be kept at a comfortable temperature. A room
that is too warm will cause subjects to sweat, introducing skin potentials into the EEG.
Finally, as the EEG cap should not be removed after the experiment begins, subjects
should be asked to avail themselves of lavatory facilities before starting the session.

Procedure
Each EEG session should follow a well-developed procedure and be carried out by
trained personnel familiar with the system. The procedure will likely consist of: con-
sent, a brief summary of what to expect, preparing and placing the EEG cap, running
the experiment, cleanup, and debriefing.
Above all else it is important to maintain a strictly professional mindset during an
experimental session. This will generate trust in the experimenter and impress upon
the subject the scientific nature of the procedure, hopefully prompting them to take it
seriously, thereby improving task performance. An air of confidence is essential, as it
will reassure the subject and encourage them to engage in the process. Avoid being too
jovial or empathetic, as this could alter the mood of the participant; instead maintain
a posture of professional detachment. To this end, if the experimenter makes minor
mistakes during the session they should not be announced to the subject, but instead
be quietly corrected before continuing. Naturally, any problem that presents a danger
to the subject requires the termination of the experiment.
Cell phones and other electronic devices should be removed prior to starting the
session as these may cause electrical interference or simply serve as distractions. During
the consent process it is important to explain what EEG is to the subject, answering
any questions they have about the technology. EEG may remind them of a medical
procedure and therefore may be intimidating. Showing the subject what an EEG cap
looks like can help soothe any apprehension. Similarly, it is important to refer to the
EEG as a series of “sensors” that rest on the head instead of “electrodes,” as the latter
term may evoke mental images of electrical shock. This is a particularly important
convention to observe when running babies or clinical populations as subjects. Explain
to the subject how the capping procedure will work and try to give them an idea of what
it will feel like to wear the cap.
Follow manufacturer instructions regarding preparation of the EEG net. Saline-based
systems will require the soaking of the cap in a solution for a certain period of time
before use. Gel-based EEG will require that the gel be applied to the cap after placement.
While the net is being prepared, participants should remove any piercings or jewelry
that may catch on the net and cause discomfort to the participant or damage to the
equipment. It is also helpful to ask subjects to part their hair evenly down the middle
to ensure a symmetric, even fit for the cap.
Start cap placement by measuring head circumference to choose the proper cap size.
When placing the cap have the subject sit up straight, hold their chin up, look ahead, and
try to remain still. If participants dip their head during application, the EEG net will not be seated properly. Next, measure to ensure the cap is centered; this is normally done by
ensuring that the vertex sensor Cz is positioned halfway between both the nasion and
the inion and the two pre-auricular points. Some EEG systems, typically those that use
conductive gel, require exfoliation of the scalp underneath each sensor during applica-
tion. This is done to improve impedances by removing skin oils, dead cells, and any hair
that might be in the way. Abrasion should be very gentle; a breach of the skin presents
an unacceptable infection risk and must be avoided completely.
During the experiment, subjects should be asked to try to stay relaxed, so as to avoid
muscle artifacts. Taking breaks after every 10–15 minutes of recording time will help
prevent fatigue from setting in. As blinks are a major source of artifact in the EEG, it
can be helpful to include a short time window at the end of each trial for subjects to
blink, thereby avoiding contamination of the stimulus period. When recording EEG,
the experimenter should not attempt to multitask (reading articles, writing grants, etc.)
but instead monitor the data quality. This allows any issues to be noticed and addressed
during the session without having to resort to statistical artifact removal later on.
While the above suggestions are valid in most cases, additional considerations must
be taken into account when running special subject populations, such as children or
clinical populations. When running children, it is necessary to have at least two exper-
imenters present in order to ensure safety and mind any siblings that may be present. It
may be helpful to have some children’s toys present in such situations. Younger toddlers
may attempt to tear off the cap; do not allow this. Instead have an experimenter remove
the cap if it becomes troublesome to the child. During the capping process of an infant,
one experimenter should place the net while the other keeps the baby’s attention. Solid
cap placement is important here, as the limited attention span of special populations
prevents extensive adjustments; it has to be correct on the first try. Babies and children
should be given a few minutes to adjust to the sensation of the cap and, with a little dis-
traction, they often quickly forget it is there. In addition to normal monitoring of the
EEG, these subjects will need constant monitoring to ensure that there are no problems
during recording. A video camera and monitor setup may be helpful in this regard.
Infection is the primary safety risk for EEG; therefore it is imperative that the net and
any other reusable equipment be disinfected between each use. An EEG session cannot
be run without this step. Always follow manufacturer guidelines regarding disinfection
of the EEG net, which will usually involve soaking it in a specific disinfectant solution.
To help avoid infection, EEG should not be run on anyone with open wounds on the
scalp or excessive nasal drainage.

Impedance
Impedance is a measure of the connection quality between the EEG sensors and the
scalp; it can be thought of as the AC analogue of resistance. Good impedances are essen-
tial to a quality recording and should therefore be kept below an appropriate threshold
for the session. Low-impedance gel-based systems will typically aim for impedances
of 5–10 kΩ while some higher impedance saline systems may use values of 50–70 kΩ.
Regardless of the system selected, steps can be taken to keep impedance low. These
include exfoliation of the skin when applying the cap and using cotton-tip applicators
or a similar tool to gently brush the hair out of the way and ensure the electrode is in
direct contact with the scalp. The subject should remove any facial makeup before place-
ment of the net if sensors sit on the cheek. Additionally, before the day of the session
the subject should be asked to avoid using conditioner, hair gel, or other such products
as these too may negatively affect signal quality.

Referencing
EEG measurements must reflect the difference in activity between two or more elec-
trode sites. EEG recordings can be either bipolar or monopolar. Bipolar recordings
reflect the difference in electrical potential between a pair of electrodes. Bipolar record-
ings are useful for comparing areas on the right and left side of the head and are used
primarily in clinical settings. In monopolar recordings, the potential of each electrode
is compared to a neutral electrode or the average of all electrodes (average reference).
Relatively neutral sites for a reference include, for example, the ear, mastoid, or nose.
Alternatively, electrodes on both ears may be linked together as if they were one elec-
trode centered on the head and used as a reference—known as a linked-ear reference.
Any linked averages should be performed offline through re-referencing. An average
reference is the mathematical average of activity at all EEG electrodes. The average ref-
erence may come the closest to a neutral site because, theoretically, activity in opposing
dipoles may cancel each other out (however, this is not often the case as there are no
electrodes placed underneath the head). Each possible reference has advantages and dis-
advantages. Although selection of a reference site is an important consideration, online
reference choice is somewhat arbitrary because it is possible to re-reference data offline.
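
Offline re-referencing is a simple arithmetic operation on the recorded data. A minimal sketch in Python (the channels-by-samples array and the ear-electrode indices are placeholders, not real channel assignments):

```python
import numpy as np

data = np.random.randn(64, 10_000)               # placeholder: 64 channels by 10 s at 1,000 Hz

# Average reference: subtract the mean across channels at every time point.
avg_referenced = data - data.mean(axis=0, keepdims=True)

# Linked-ear reference: average two hypothetical ear channels and subtract from every channel.
left_ear, right_ear = 10, 42                     # illustrative indices only
linked_ears = data[[left_ear, right_ear]].mean(axis=0)
ear_referenced = data - linked_ears
```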

Amplifier
The EEG signal is extremely small; in order to be viewed it requires amplification. The
amplification factor, called the gain, will likely range from 1,000 to 100,000 depending on
the amplifier model. As described previously, EEG measures voltage, which is a relative
signal that reflects the difference between an active electrode and the reference site.
However, any noise inherent in the reference site will also be picked up. To reduce this, most amplifiers are differential, meaning that the signal they amplify is the difference between the active-to-ground signal and the reference-to-ground signal. To
understand common mode rejection, imagine three electrode sites: one reference, one
common ground, and one active electrode. A differential amplifier measures the sig-
nal between each active electrode and the common ground and the signal between the
reference and the common ground. The difference between these two signals is taken,
resulting in the signal from the common ground being canceled out, leaving only the
difference between the active electrode and the reference. It is this difference that is then
amplified and sent to the recording computer. As a result of this process, noise common
to both inputs (the common-mode signal) is subtracted out from the data, leading to a cleaner recording with significantly reduced noise. The higher the common mode rejection, the
better the amplifier is at reducing noise. An amplifier must be capable of handling all
input values it receives without clipping or loss of precision; for this reason an amplifier with 20 or 24 bits of
resolution is recommended.
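
The arithmetic behind common mode rejection can be illustrated with a toy simulation (all of the signals below are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
brain_signal = 10 * np.sin(np.linspace(0, 20 * np.pi, n))   # "true" activity at the active site
common_noise = rng.normal(0, 50, n)                         # large noise picked up identically everywhere

active = brain_signal + common_noise      # active electrode measured against the common ground
reference = common_noise                  # reference electrode measured against the same ground

# The differential amplifier outputs the difference of the two signals; the shared
# (common-mode) noise cancels and only active-minus-reference remains.
output = active - reference               # recovers brain_signal in this idealized case
```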

Sampling rate
The EEG recording is an analog signal consisting of voltage deflections across time and
therefore must be digitized before it can be stored on a computer. This is accomplished
by taking a series of discrete samples from the continuous data. The rate at which these
samples are taken is the sampling rate of the EEG and effectively defines the temporal
resolution of the data. The essential factor to consider when choosing a sampling rate is
the Nyquist theorem. This theorem states that in order to accurately reconstruct all the
information in the original analog signal, the EEG must be digitized using at least twice
the rate of the maximum frequency in the signal. If the sampling rate chosen is lower
than this, higher frequency activity will be misrepresented as spurious low-frequency activity (aliasing) in the recording. Since
most brain-related activity is below 100 Hz in frequency, a sampling rate of at least 200
Hz should be used to prevent aliasing. In practice, many researchers prefer to sample
more frequently than the Nyquist theorem requires, thereby improving the EEG signal-
to-noise ratio. Once this consideration is satisfied, there are a variety of sampling rates to
choose from. Some common sampling rates include 250 Hz, 256 Hz, 500 Hz, and 1,000
Hz. Other factors to consider when selecting a sampling rate are file size and dimin-
ishing returns. With regard to the former, higher sampling rates will produce larger
datasets that require more space to store and more time to analyze; although modern
64-bit machines should not have great difficulty with this, it may add a nontrivial cost in terms of storage and computer hardware. Additionally, as the sampling rate
increases, diminishing returns begin to reduce the benefit gained with a higher rate.
Sampling rates above 2,000 Hz are unlikely to provide much added advantage.
While any sampling rate between 500 and 2,000 Hz should suffice for the majority of
experiments (Cohen, 2014), 1,000 Hz is perhaps the most convenient and conceptually
simple as each sample point will correspond to 1 ms. It is recommended to select 1,000
Hz or another high value as the sampling rate. This is because, should storage space or
other computer resource limitations become a problem, there is always the option of
downsampling (reducing) a high sampling rate after recording. Many EEG software
packages will include an option to perform this operation, and it is preferable to reduce
a sampling rate to something manageable rather than losing information to aliasing.
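
Offline downsampling can be done with a polyphase resampler, which applies an anti-aliasing low-pass filter before decimating (a sketch with simulated data; the rates are example values):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 1000, 250                  # recorded at 1,000 Hz, reduced to 250 Hz (examples)
eeg = np.random.randn(64, 60 * fs_in)      # placeholder 64-channel, 60 s recording

# resample_poly low-pass filters internally, guarding against aliasing during reduction.
eeg_downsampled = resample_poly(eeg, up=1, down=fs_in // fs_out, axis=1)
```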

Filtering
Filtering is a pre-processing step that is used to remove unwanted frequencies from
the recording. Filtering operates by first transforming the EEG time series into the fre-
quency domain, then applying a function that removes undesired frequencies. Once
these frequencies are removed, the data is transformed back into the time domain. It is
helpful to note that filtering of the EEG waveform does not function in the same man-
ner as a water or air filter. Instead of being like a sieve, EEG filtering is a mathematical
operation applied to a dataset that attenuates the power of frequencies outside a spec-
ified threshold. For example, if the threshold is set at 50 Hz, it might be expected that
all activity greater than 50 Hz would be cut off while activity below this point would
be preserved. In reality, however, filters do not have sharp cutoffs right at threshold, but
instead slope as they approach the threshold value. Thus, in the example given, the filter
might start to mildly suppress frequencies at 48 Hz, with attenuation gradually increasing until it reaches its maximum at 50 Hz. In other words, filtering involves a controlled distortion
of the data. Improperly applied filtering may result in shifted phase or latency values and
yield incorrect results. For this reason caution must be used when applying any filter.
In general, filters are applied to EEG for the purpose of removing artifactual frequen-
cies from the signal or to be able to visualize the data. Artifactual frequencies will likely
consist of high-frequency muscle noise or line noise signals from electronic equipment
and very low-frequency drifts arising from skin potentials during recording. These slow drifts make it difficult to visualize the EEG without filtering.
Filters may be divided into four categories: high-pass, low-pass, band-pass, and
notch. High-pass filters remove frequency activity below their set threshold; the name
comes from the fact that they "pass" the higher frequencies through unaffected. For
example, a 0.1 Hz high-pass filter removes frequencies below 0.1 Hz and retains those
above. Similarly, a low-pass filter at 40 Hz would remove frequencies above threshold
while passing those below. A band-pass filter is simply a combination of high- and
low-pass filters into a single filter instead of applying each separately. In most cases,
EEG will be band-pass filtered. Notch filters attenuate the signal around a specific
frequency, such as 60 Hz, and are sometimes used to remove line noise.
In the majority of cases, filter settings should be focused on suppressing very
low frequency drifts and filtering out high-frequency noise. A band-pass setting of
0.1–100 Hz for healthy adult subjects should be sufficient in most cases and is a good
conservative parameter to avoid unnecessary distortions. Those who wish to examine
very high frequencies may use an upper limit of 200 Hz or even omit the low-pass filter
altogether, with the caveat that this will produce noisier waveforms. Low-pass filters
are less distortionary in nature than high-pass filters. In light of this, some studies
that examine ERPs will set the upper edge of the band-pass filter to 30 or 40 Hz; these
values will produce cleaner looking ERPs by removing any high-frequency noise while
leaving the properties of the ERP unaffected. High-pass values greater than 0.1 Hz
should normally be avoided as they can distort some ERP waveforms and change the results of the experiment.
When recording EEG from subject populations where movement or other artifacts
will be common, more restrictive filter settings may be used. For example, a band-pass
of 0.5–30 Hz would be appropriate when recording data from infants. In such cases
the distortion caused by the high-pass setting may be justified by the exclusion of low-
frequency noise. Line noise, present at 50 or 60 Hz, depending on geographical region,
is another reason filtering may be required. Using a notch filter is not recommended
due to the risk of distorting the data; ideally line noise sources should be eliminated
before recording. If it is still necessary to remove line noise, use a lower low-pass cutoff
(e.g., 40 Hz) or an artifact correction technique such as sliding-window multitapers or ICA before turning to a notch filter.
Normally the EEG amplifier incorporates an online anti-aliasing filter at the hardware
level. This should be used in accordance with manufacturer recommendations. Aside
from this often built-in feature, EEG data should not be filtered online. Filters should be
applied offline to continuous data files that are not segmented by condition (epoched)
to avoid edge artifacts. The filter used should be noncausal to avoid latency shifts in
the data. When learning about filters it may be helpful to filter the same dataset using
multiple settings and then examine the results to see what differences exist. When in
doubt about what settings to use, it is safe to default to a 0.1–100 Hz band-pass.
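
A conservative offline band-pass along these lines might look like the following sketch (SciPy's zero-phase sosfiltfilt matches the noncausal requirement noted above; the filter order, simulated data, and optional notch are assumptions for illustration):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

fs = 1000
continuous = np.random.randn(64, 60 * fs)         # placeholder continuous (unepoched) recording

# 0.1-100 Hz band-pass, designed as second-order sections for numerical stability.
sos = butter(4, [0.1, 100], btype="bandpass", fs=fs, output="sos")

# sosfiltfilt filters forward and backward, giving zero phase shift (noncausal),
# so component latencies are not displaced by the filter.
filtered = sosfiltfilt(sos, continuous, axis=1)

# Optional 60 Hz notch, only if line noise cannot be removed at the source
# (the text cautions that notch filters risk distorting the data).
b_notch, a_notch = iirnotch(w0=60, Q=30, fs=fs)
denotched = filtfilt(b_notch, a_notch, filtered, axis=1)
```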

Pre-processing of data

Artifacts are variations in the EEG signal that originate from nonneural sources. They become a problem when they are misinterpreted as brain activity. Sources of artifacts include line noise
from recording equipment and biological signals including eye blinks and movement,
muscle activity, cardiac activity, movement of the lips and tongue, and skin potentials.
One way to reduce these artifacts is to ask the subject to relax facial muscles and min-
imize head and eye movements during recording. However, instructions to suppress
movement may impose a secondary task whose associated brain activity can interfere with the EEG signal of interest. Because many biological artifacts are distinctive
in signal and appearance (e.g., electromyographic activity is usually greater in ampli-
tude than EEG signals of interest), sophisticated analysis programs include algorithms
to remove these signals from the EEG. A common nonbiological source of noise is line
noise interference, usually at 50 or 60 Hz. This noise may emanate from fluorescent light
fixtures or stimulus devices such as computer monitors. One solution is to turn off these
devices. However, if the room is too dim, the researcher runs the risk of producing an
environment conducive to sleeping. Other solutions include correcting line noise after
data has been collected using tools within the data analysis software.
According to Luck (2014), there are several ways in which artifacts may negatively
affect EEG recording. First, artifacts reduce the signal-to-noise ratio. Thus, one may be
unable to find differences between experimental conditions in the averaged EEG wave-
form. Second, some artifacts occur systematically and may be time-locked to stimulus
onset. For example, participants may tend to blink systematically after image presenta-
tion, resulting in a blink artifact that appears time-locked to the image presentation over
many epochs. This is problematic when the artifact occurs in one condition more than
other conditions and is not eliminated by averaging the waveform. Finally, eye move-
ments and blinks during presentation of visual stimuli could result in differences in
perception of the images. A systematic difference in ocular activity between conditions
could present a confound.
Rejection and correction of artifacts is necessary before analysis to reduce noise from
nonbrain sources. Pre-processing allows for the offline reduction of artifacts in the EEG
recording. A researcher may use either manual or automatic procedures—or a combi-
nation of both—for artifact rejection. There are trade-offs to either approach. Visual
inspection, a manual procedure, is tedious and subject to human error. An automatic
procedure may either fail to reject artifacts or reject data of interest based on the strin-
gency of the parameters. One may prefer a semi-automated approach, in which both
methods are employed. A helpful automatic procedure is independent components
analysis (ICA), a blind source separation technique that decomposes the EEG signal
into individual source signals. ICA yields a series of components, each representing an independent source extracted from the EEG data; summed together, the components reconstitute the original signal. Once sources have been separated from the data they can
be examined and large, consistent artifacts such as blinks or EMG will be isolated into
components that can be subtracted from the dataset. Doing so eliminates the noise sig-
nal from the EEG without affecting the other components; this way artifacts can be
removed without having to cut out channels or epochs.
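
The idea can be illustrated with a generic ICA implementation (here scikit-learn's FastICA on simulated data; in practice a dedicated EEG toolbox would usually be used, and the component index treated as a blink below is purely hypothetical):

```python
import numpy as np
from sklearn.decomposition import FastICA

data = np.random.randn(10_000, 32)            # placeholder continuous EEG, samples by channels

ica = FastICA(n_components=32, random_state=0, max_iter=1000)
sources = ica.fit_transform(data)             # samples by components (estimated sources)

# After inspecting the components, suppose component 3 captures eye blinks (hypothetical).
blink_component = 3
sources[:, blink_component] = 0               # zero out the artifact source

# Project back to channel space: the blink activity is removed without
# deleting any channels or epochs.
cleaned = ica.inverse_transform(sources)
```
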
An example pre-processing sequence for a single subject of an ERP study includes
filtering the continuous data, epoching data, manually identifying and removing bad
channels and epochs, conducting an ICA to identify potentially bad components, man-
ually inspecting and rejecting bad components, re-referencing electrodes using an aver-
age reference, interpolating deleted channels, separating epochs by condition, removing
baseline, and (finally!) looking at the ERPs.

EEG research in media contexts

Although some media researchers have used EEG to investigate phenomena, there are
some challenges to using EEG in a media environment, especially if one wishes to
examine ERPs. For example, investigation of ERPs is more conducive to research with
simple, rather than complex, stimuli (e.g., still images rather than video). The necessity
of averaging over many trials to examine ERPs may also be prohibitive. Participants
may habituate to stimuli or become fatigued if the procedure lasts a long time. However,
EEG should not be discounted as a measure to use in media research. Potter and Bolls
(2012) state that “EEG is currently an example of a psychophysiological measure that is
on the brink of providing breakthrough scientific insight into the dynamic interaction
between the human brain and media content” (pp. 98–99).
Despite the challenges, some media researchers have been able to examine ERPs.
Treleaven-Hassard and colleagues (2010) examined the P3 in response to brand logos.
Logos were presented in blocks of 10 while EEG data were collected. Four of the
logos appeared later in advertisements within television programs. Of these, two were
presented in banners at the top of the screen that participants could interact with while
content continued to play (Impulse ads). The other two logos were also presented in
banners at the top of the screen but participants were directed away from the content
on the screen when they interacted with these banners (Dedicated Advertiser Location
ads). Participants viewed the logos again while EEG data were collected. In response to
logos contained in the Dedicated Advertiser Location ads, latencies of P3 were shorter
post-interaction than pre-interaction. Latencies of P3 did not differ between pre- and
post-interaction exposure for logos within Impulse ads. The authors concluded that
automatic attention was greater for the Dedicated Advertiser Location Ads compared
to the Impulse ads.
Because of the challenges of conducting ERP research in media contexts, much of the
existing research investigates the presence or absence of certain frequency bands. For
example, one study demonstrated that alpha activity during commercials is negatively
correlated with recognition and recall of elements in the commercial (Reeves et al.,
1985). Another study, using alpha power as a measure of cortical arousal, examined
hemispheric asymmetry in the processing of emotion on television. Results indicated that
the right hemisphere had lower alpha power in response to negative scenes while the left
hemisphere had lower alpha power in response to positive scenes (Reeves, Lang, Thor-
son, & Rothschild, 1989). More recently, Minas and colleagues (2014) examined the
presence of alpha activity over different scalp locations to investigate information processing during a virtual team discussion.
Frequency band research is not limited to the investigation of alpha. Kretzschmar
and colleagues (2013) examined theta band activity—associated with memory encod-
ing and retrieval—while adults read text on paper, an e-reader, or a tablet. They found
that theta band activity did not differ between devices for young adults. Older adults,
however, demonstrated reduced theta voltage density and shorter mean fixation dura-
tions while reading text on a tablet, compared to reading text on an e-reader or on paper.
Comprehension of the text for both older and younger adults did not differ between
reading media. The authors concluded that older readers benefitted from the contrast
provided by the backlit text of a tablet.

Some limitations of EEG

EEG has many advantages for the media researcher, many of which are listed above.
However, compared to other neuroimaging techniques there are some limitations as
well. Specifically, although EEG provides high temporal resolution, it lacks spatial res-
olution. Thus, a researcher using EEG will be able to determine the time course of
electrical activity in the brain with high precision, but will not be able to determine
where in the brain that activity originated—an issue known as the inverse problem. In
addition, although the tissue between cortical neurons and the scalp acts as a conductor,
EEG is unlikely to pick up activity from deeper brain structures.
A study that uses EEG techniques requires thoughtful consideration of design.
Because EEG typically requires the use of many trials of stimuli, a media researcher
interested in studying a single message or a limited set of messages may be prohibited
from using this technique. Depending on stimulus length, presenting many trials in multiple conditions can create an experiment that lasts a considerable amount of time. Many trials of audiovisual stimuli that last for 30
seconds or longer (e.g., advertisements) may result in an experiment that is too long
for the comfort of a participant who is required to remain relatively still. Before the
experimental session, obtaining acceptable impedances for many electrodes may take
a substantial amount of time—sometimes 20 minutes or longer. Long experiments
may result in the drying out of gelled or saline-soaked electrodes and will need to be
broken into blocks. In between these blocks researchers will need to check and regain
acceptable impedance levels. EEG is not only time-consuming in the data-collection
phase of an experiment; it is also time-consuming in the pre-processing phase.
Pre-processing of data can take a substantial amount of time to complete—usually
several hours for each participant.

SEE ALSO: Experiment, Laboratory; Measurement of Attitudes; Measurement of Cognitions

References

Adrian, E. D., & Matthews, B. H. C. (1934). The Berger rhythm: Potential changes from the occip-
ital lobes in man. Brain, 57, 355–385. doi:10.1093/brain/57.4.355
Bartholow, B. D., & Amodio, D. M. (2009). Using event-related brain potentials in social psy-
chological research. In E. Harmon-Jones & J. Beer (Eds.), Methods in social neuroscience (pp.
198–232). New York: Guilford Press.
Berger, H. (1929). Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie und Nervenkrankheiten, 87, 527–570.
Cohen, M. X. (2014). Analyzing neural time series data: Theory and practice. Cambridge, MA:
MIT Press.
Cook, I. A., O’Hara, R., Uijtdehaage, S. H. J., Mandelkern, M., & Leuchter, A. F. (1998).
Assessing the accuracy of topographic EEG mapping for determining local brain func-
tion. Electroencephalography and Clinical Neurophysiology, 107, 408–414. doi:10.1016/S0013-
4694(98)00092-3
Donchin, E. (1981). Surprise! … Surprise? Psychophysiology, 18, 493–513. doi:10.1111/j.1469-
8986.1981.tb01815.x
Goldman, R. I., Stern, J. M., Engel, J., & Cohen, M. S. (2002). Simultaneous EEG and fMRI of the
alpha rhythm. NeuroReport, 13, 2487–2492. doi:10.1097/00001756-200212200-00022
Kretzschmar, F., Pleimling, D., Hosemann, J., Füssel, S., Bornkessel-Schlesewsky, I., & Schle-
sewsky, M. (2013). Subjective impressions do not mirror online reading effort: Concur-
rent EEG-eyetracking evidence from the reading of books and digital media. PLoS ONE, 8, e56178.
doi:10.1371/journal.pone.0056178
Luck, S. J. (2014). An introduction to the event-related potential technique. Cambridge, MA: MIT
Press.
Minas, R. K., Potter, R. F., Dennis, A. R., Bartelt, V., & Bae, S. (2014). Putting on the thinking
cap: Using NeuroIS to understand information processing biases in virtual teams. Journal of
Management Information Systems, 30, 49–82. doi:10.2753/MIS0742-1222300403
Potter, R. F., & Bolls, P. D. (2012). Psychophysiological measurement and meaning: Cognitive and
emotional processing of media. New York: Routledge.
Reeves, B., Lang, A., Thorson, E., & Rothschild, M. (1989). Emotional television scenes and hemi-
spheric specialization. Human Communication Research, 15, 493–508. doi:10.1111/j.1468-
2958.1989.tb00196.x
Reeves, B., Thorson, E., Rothschild, M., McDonald, D., Hirsch, J., & Goldstein, R. (1985). Atten-
tion to television: Intrastimulus effects of movement and scene changes on alpha variation over
time. International Journal of Neuroscience, 25, 241–255. doi:10.3109/00207458509149770
Stern, R. M., Ray, W. J., & Quigley, K. S. (2001). Psychophysiological recording (2nd ed.). New York: Oxford University Press.
Sutton, S., Braren, M., Zubin, J., & John, E. R. (1965). Evoked-potential correlates of stimulus uncertainty. Science, 150(3700), 1187–1188. doi:10.1126/science.150.3700.1187
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological
Science, 12, 43–47. doi:10.1111/1467-9280.00308
Treleaven-Hassard, S., Gold, J., Bellman, S., Schweda, A., Ciorciari, J., Critchley, C., & Varan, D.
(2010). Using the P3a to gauge automatic attention to interactive television advertising. Journal
of Economic Psychology, 31, 777–784. doi:10.1016/j.joep.2010.03.007
Walter, W. G., Cooper, R., Aldridge, V. J., McCallum, W. C., & Winter, A. L. (1964). Contingent
negative variation: An electric sign of sensori-motor association and expectancy in the human
brain. Nature, 203, 380–384. doi:10.1038/203380a0

Further reading

Beatty, M. J., Heisel, A. D., Pascual-Ferrá, P., & Berger, C. R. (2015). Electroencephalographic
analysis in communication science: Testing two competing models of message production.
Communication Methods & Measures, 9(1/2), 101–116. doi:10.1080/19312458.2014.999753
Harmon-Jones, E., & Peterson, C. K. (2009). Electroencephalographic methods in social and
personality psychology. In E. Harmon-Jones & J. Beer (Eds.), Methods in social neuroscience
(pp. 170–197). New York: Guilford Press.
Ravaja, N. (2004). Contributions of psychophysiology to media research: Review and recom-
mendations. Media Psychology, 6, 193–235. doi:10.1207/s1532785xmep0602_4
Stern, R. M., Ray, W. J., & Quigley, K. S. (2001). Psychophysiological recording (2nd ed.). New
York: Oxford University Press.

Glenna L. Read is a doctoral candidate at Indiana University. She has a background in experimental psychology and her area of interest can be broadly described as media
psychology. Research interests include the social, cognitive, and neurological processes
involved in the perception of media messages and how these processes influence and
relate to stereotyping and prejudice.

Isaiah J. Innis is an EEG technician at Indiana University. He has a background in eyeblink conditioning and EEG microstates. He is involved in research on a variety of
topics including the structure of language and the perception of biological motion.
