
ON THE VALIDITY OF LIE SCALES

On the Validity of Lie Scales

Abstract

An assumed feature of lie scales is that partialling out their influence yields truer scores on other scales, which should increase reliability when one of the measurements is taken in a socially sensitive context. This was tested in several samples. Controlling for faking substantially decreased the correlations. It was concluded that socially desirable responding increases the reliability of other scales, that lie scales capture aspects of personality that are not limited to self-report situations, and that they measure common method variance. Controlling for social desirability does to some degree have the desired result, but not because lie scales specifically measure lying.

Key words: social desirability, lie scale, common method variance, response bias


Introduction

Within psychology, the use of self-reported data is very common, and in several sub-areas it is totally dominant. This state of affairs is somewhat peculiar, as self-reports have often been shown to be very unreliable (see the review in af Wåhlberg, 2009), and when no other source of data is used, the risk of common method variance always exists (Podsakoff & Organ, 1986; Podsakoff, MacKenzie, Lee & Podsakoff, 2003), meaning that systematic biases in the data create or change associations between variables.

The main source of bias (or at least the most discussed and researched factor) would seem to be socially desirable responding (e.g. Furnham, 1986; Ashley & Holtgraves, 2003), with its sub-components impression management and self-deception (Ramanaiah, Schill & Leung, 1977). This mechanism is usually thought to reflect individual differences in the tendency to distort one's responses to questions according to what is perceived to be socially acceptable. Whether this acceptability refers to the wider social norms of society, or to those of some sub-group to which the respondent belongs, does not seem to be known, although the former seems to be the default assumption.

To measure the socially desirable response set, lie scales were invented (e.g. Crowne & Marlowe, 1960). These scales work according to the logic that only people who are high on socially desirable responding would endorse very improbable and trivial statements such as 'I have never stolen anything, not even a hairpin'. This basic logic has become the standard for lie scales, and a fair number of different scales exist, such as the Unlikely Virtues scale (Ellingson, Smith & Sackett, 2001) and the Balanced Inventory of Desirable Responding (Paulhus, 1984). The actual use for these lie scales besides pure research would seem to be to correct the values of other scales (often personality scales), for example when hiring personnel (Ellingson, Sackett & Hough, 1999; Uziel, 2010).


The use of lie scales would seem to imply that a sound suspicion of self-reported data exists. However, when the validity of lie scales themselves is examined, it turns out that most researchers take it for granted (e.g. Ramanaiah, Schill & Leung, 1977; Kline, Sulsky & Rever-Moriyama, 2000; Fisher & Katz, 2000; see the review by Uziel, 2010).

Some researchers, however, claim that social desirability is not an effect elicited by the questions posed, but a personality trait, i.e. a tendency to behave in a certain way which pervades all facets of life. This 'style' (response tendency) versus 'substance' (personality trait) debate has been going on for decades (McCrae & Costa, 1983; Uziel, 2010). The evidence seems to favor the substance position (Uziel, 2010), but lie scales are, despite this, still used as measures of response style (Uziel, 2010).

It was pointed out by Ellingson, Sackett and Hough (1999) that a habit of accepting lie scale corrected personality scores as valid seems to exist, without any evidence for this assumption being presented. These authors suggested a test in which the lie scale effect was partialled out from dishonest responses, which were then compared to honest responses, to see whether this correction made the variables more similar (i.e. whether a stronger correlation resulted). This was not found in their study. However, the method used by Ellingson et al. was the standard faking method (respondents were instructed to fake), meaning that the failure might be due to a lack of ecological validity.

However, testing the validity of lie scales and/or the correction method under ecologically valid circumstances is not easy, which might explain why this specific test does not seem to have been carried out (see the review by Uziel, 2010).

In the present study, a naturally occurring difference in situationally induced social desirability was used to test whether correcting for lie scale values increased the between-measurements correlations (i.e. reliability coefficients) of questionnaire scales.

Method

General

Data was gathered in three different projects where the effects of online driver education for driving offenders were evaluated (af Wåhlberg, 2010a; 2011). In many police districts in Britain, drivers who have committed an endorsable offence can choose to take, and pay for, an educational course instead of paying a fine and possibly receiving penalty points on their licenses. Usually, these courses follow a workshop model, but in 2008, the first online education was launched.

The presently used data has previously been analysed in several other investigations of methodological problems of questionnaire use, mainly common method variance (af Wåhlberg, 2010b; af Wåhlberg, Dorn & Kline, 2010).

Samples and procedures

Data was available from three offender groups (young drivers, seatbelt and red light

offenders), as well as a random control sample. The young drivers (YDS) and the seatbelt

offenders (SS) were from Thames Valley, while the red light running scheme (RLS)

sample was from the Greater Manchester area.

The drivers who had elected to take the course were directed to a homepage where

they were requested to respond to a questionnaire before they could take the first

educational module. For the YDS and RLS, the course takers were again directed to a

questionnaire when they had finished the course, and six months later they were invited by

e-mail to respond to a third wave. For the SS, there were only two waves, three months


apart, with an e-mail prompt for the second one. The control group was recruited by the

use of an e-mail scheme, where mail was sent to lists of (UK) addresses bought from a

marketing company. A sweepstake with GPS gadgets as prizes was used as incentive. The

responders to this request received another e-mail six months later, requesting them to

respond again.

Questionnaires

Three different questionnaires were constructed, using combinations of scales from well-known inventories, as summarized in Table 1. The control group responded to the same questionnaire as the YDS sample in waves 1 and 3.

The violations scale of the Manchester Driver Behaviour Questionnaire (DBQ-V; for a review, see af Wåhlberg, Dorn & Kline, 2011) is intended to capture intentional dangerous driving acts, like speeding and overtaking under uncertain conditions, and is widely used within traffic safety research. The lapse scale from the same inventory (DBQ-L) was also included; it targets vehicle handling mistakes, like shifting into the wrong gear.

The (Brief) Driving Anger Scale (DAS; Deffenbacher, Oetting & Lynch, 1994; Deffenbacher, Richards, Filetti & Lynch, 2005) is supposed to measure the frequency of anger experienced due to common driving events, like being held up by someone blocking the road. The Aggression scale of the Driver Behaviour Inventory (DBI-A; Gulian, Glendon, Matthews, Davies & Debney, 1988; for a review, see af Wåhlberg, unpublished) is somewhat similar to the DAS, but describes the emotions experienced more in terms of stress and irritation. None of these scales has been validated against objective data.

Two scales that were not driving-specific were also included: the (Short) Sensation Seeking Scale (SSS; Slater, 2003) and a Big Five Conscientiousness scale (Gosling, Rentfrow & Swann, 2003). These are supposed to measure dimensions of personality.


Finally, two lie scales were used: the Driver Impression Management (DIM) scale of the Driver Social Desirability Scale (Lajunen, Corry, Summala & Hartley, 1997) and the Marlowe-Crowne scale (M-C; Hays, Hayashi & Stewart, 1989). The DIM scale has been found to correlate negatively with self-reported collisions, and very weakly positively with recorded crashes (af Wåhlberg, Dorn & Kline, 2010), as could be expected if it were valid (and self-reports of collisions were influenced by social desirability). As for the Marlowe-Crowne scale, no information regarding its criterion validity was found. Both scales are intended to measure the impression management part of social desirability, i.e. conscious faking.

Table 1 about here

Hypothesis and analysis

Given that lie scales measure individual differences in response tendency, it could

be expected that controlling for faking in the self-reports (by the use of a lie scale) would

increase the correlations between scales in different waves for the YDS, RLS and SS

samples, but not the Control sample, due to the socially sensitive situation of the first wave

of the education samples. In other words, it was assumed that the first wave measurements

for YDS, RLS and SS were distorted by social desirability response bias, due to the

situation, and that the lie scales included could capture this distortion. For the control

sample, no such effect was expected, as there was no difference between waves regarding

their situation. That the offending drivers were indeed in a situation where they were prone

to socially desirable responding in the first wave had been established in the evaluation

studies (af Wåhlberg, 2010).


Therefore, each scale from the first wave was correlated with itself as measured in later waves. Thereafter, these correlations were re-computed as partial correlations, controlling for the lie scale values, and the results compared.
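As an illustration, the correction procedure can be sketched in a few lines of Python. The data below are hypothetical; the first-order partial correlation formula itself is standard.

```python
from math import sqrt

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_r(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical illustration: the same scale measured in two waves,
# plus a lie scale score (the control variable).
wave1 = [1.2, 1.5, 2.0, 2.4, 3.1, 3.5]
wave2 = [1.4, 1.6, 2.1, 2.6, 2.9, 3.6]
lie = [4.1, 3.8, 3.5, 3.0, 2.6, 2.2]

raw = pearson_r(wave1, wave2)       # test-retest correlation
adj = partial_r(wave1, wave2, lie)  # the same, controlling for the lie scale
```

Controlling for lie scales from two waves at once, as in the analyses below, corresponds to a second-order partial correlation, which can be obtained by applying the same formula recursively.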

Results

Descriptive data for all samples can be seen in Table 2.

Table 2 about here

First, mean values for each scale, and the differences in these between waves, were calculated. It can be seen in Table 3 that strong differences were found for most of the scales, especially for the offender groups, as reported previously (af Wåhlberg, 2010), with responses growing worse (less socially acceptable) over time. This has been interpreted as an effect of situationally induced social desirability, where the offending drivers were strongly motivated to lie before the course (in wave 1), but less so after the course (waves 2 and 3), as indicated by the lie scales. This conclusion could be drawn because the changes were not limited to the driving-specific scales; strong effects were also noted for the personality scales (Sensation Seeking, Big Five Conscientiousness). As personality usually does not change much over time, this is probably due to changes in a perceived need to respond in a socially desirable manner.
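The effect sizes in Table 3 follow the convention, stated in the table caption, of using the wave 1 standard deviation as denominator. A minimal sketch, reproducing for example the YDS Sensation Seeking row from the table:

```python
def cohens_d(mean_w1, mean_w2, sd_w1):
    """Cohen's d for a between-waves mean difference, with the wave 1
    standard deviation as denominator (the Table 3 convention)."""
    return (mean_w1 - mean_w2) / sd_w1

# YDS Sensation Seeking (SSS), values taken from Table 3:
d_sss = cohens_d(1.37, 1.44, 0.60)  # rounds to -0.12, as in the table
```

The dependent t-tests reported alongside require the raw paired responses and are not reproducible from the table values alone.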

Tables 3-5 about here


Thereafter, the lie scales were correlated with the other scales within each sample and wave (Table 4). It is notable that all of these correlations were at least moderate in size, even for the driving lapses scale, which concerns involuntary but non-dangerous behavior.

Table 5 shows the correlations between waves for all scales used, and the partial correlations between waves, controlling for the lie scales (from both waves). It can be noted that in every single case, applying a social desirability control decreased the strength of the association, sometimes very substantially so.

All in all, but excluding the lie scales, the average test-retest correlation was .599, while the average partial correlation was .540, a reduction of 18.7 percent in the amount of explained variance (6.7 percentage points). Comparing the Control sample with the others, the former had an average correlation of .674 and a partial r of .629, and thus a reduction in explained variance of 12.7 percent. The education samples had values of .578 and .514, a 20.8 percent reduction. The effect was thus substantially larger in the samples where the respondents were under social pressure, despite the fact that the control sample correlation was much larger to start with.
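These percentage figures follow from squaring the correlations; the arithmetic can be verified with a few lines, using the overall averages quoted above:

```python
def variance_reduction(r_raw, r_partial):
    """Drop in explained variance (r squared) when moving from the raw to
    the partial correlation: absolute percentage points, and the drop as a
    percentage of the raw explained variance."""
    raw_var, part_var = r_raw ** 2, r_partial ** 2
    points = (raw_var - part_var) * 100
    percent = (raw_var - part_var) / raw_var * 100
    return points, percent

# Overall averages from the text (all samples, lie scales excluded):
points, percent = variance_reduction(0.599, 0.540)  # about 6.7 points, 18.7%
```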

If the lie scale from only one wave was controlled for, the results were very similar to those described above for controlling both waves. In each case, the first-wave lie scale accounted for at least half of the difference between raw and partial correlations, and in many cases almost all of it. Also, the correlations between scales within and between waves were similarly reduced when the lie scales were controlled for (results not shown).

Discussion

The present results unequivocally went against the prediction of an increase in between-measurements correlations when corrected for lie scale scores. On the contrary, all correlations shrank, some of them quite drastically. This effect was also present in the control sample, where no situationally induced social desirability had been detected, although not as strongly as in the education samples. The results can therefore be regarded as very trustworthy, and the explanation for the unexpected findings must be sought in the mechanism of social desirability, or in how lie scales work.

It should first be asked: what does a so-called lie scale measure? As no ecologically valid test of such a scale seems to exist, this is a very relevant question. It could also be phrased as 'if such scales do not measure a tendency toward socially desirable responding, what do they measure?'.

In agreement with many previous authors (e.g. McCrae & Costa, 1983; Uziel, 2010), it is concluded from the present data that what is measured by lie scales is mainly a substantive trait, at least in the sense that it is stable over time and situations. This would seem to be a personality trait which is always present, in all kinds of social situations. Lie scales should therefore be predictive of (externally measured) behavior (as reported by Schmitt, Oswald, Kim, Gillespie, Ramsay & Yoo, 2003; Biderman, Nguyen, Mullins & Luna, 2008; see the review by Uziel, 2010), especially behavior in socially sensitive situations. However, other conclusions can also be drawn.

It should be remembered that when self-reported and external criteria (supposedly measuring the same variable) are both predicted by self-reports, the rule seems to be that the association is always strongest between the self-reports (Hessing, Elffers & Weigel, 1988; Armitage & Conner, 2001; Elliott, Armitage & Baughan, 2007; af Wåhlberg, Dorn & Kline, 2010; 2011), indicating a response artifact.

As several (theoretically) very different scales were used in the present study, and these still tended to correlate with the lie scales, and with each other, it would seem evident that the lie scales do indeed capture some sort of response bias. This tendency was also stable over time, despite the differences in situations between waves. But if the lie scales are valid to some degree, in that they capture common method variance, even if very limited, the question remains: why does controlling for this influence not create stronger associations between situations?

The first answer to this question would seem to be that there are no conditions under which social desirability is inactive, contrary to what Ellingson et al. apparently assumed when suggesting the test run in the present study. Whenever a questionnaire is presented, (individual differences in) social desirability seems to influence the responses. Therefore, instead of adding error, social desirability adds systematic variance, which creates stability between measurements. Similar arguments can be made for other response bias mechanisms, such as the consistency motif.

Furthermore, as a consequence of this argument, another, unspoken, premise of the hypothesis of Ellingson et al. (1999) can also be questioned. For social desirability to insert error variance in such a way as to distort test-retest reliability values, there must be a fairly stable behavior (or attitude) about which the respondents are able to report correctly. Given the present results, it can instead be claimed that the correlations found between questionnaire waves are to some degree due to a stable response tendency, and not to what is purportedly measured.

Given the present results, it can be stated that lie scales are not very different from many other questionnaire scales, although they are more extremely worded. Like other scales, they seem to pick up several different response biases, such as scale use. This, of course, should be no surprise. Why should lie scales be free from the types of biases that have been noted for other scales? However, such a conclusion is usually not drawn by researchers discussing what lie scales measure (e.g. McCrae & Costa, 1983; Smith & Ellingson, 2002; Uziel, 2010), and this is where the present study departs from previous research in its conclusions.

One interesting feature of the driving lie scale that was used can be pointed out: it had been tested as a predictor of self-reported and recorded accidents, and while the former were negatively correlated with the scale, the latter were unrelated, or possibly slightly positively associated (af Wåhlberg, Dorn & Kline, 2010). In this sense, the scale was therefore validated, and functioned as intended. However, again, this does not mean that it measured social desirability, at least not exclusively.

Concluding that lie scales do not measure response style should not be interpreted as evidence that social desirability bias does not exist in self-reports, only that lie scales are not a good method for detecting it. Does this mean that correcting for social desirability is wrong, as claimed by Uziel and others? That would seem to depend upon what the goal is. If the goal is to cleanse a measurement of bias induced by the measurement format, lie scales could be used in this way, unless there is something in the social dimension which is of interest. However, as stated, any kind of scale can probably be used for this purpose.

In the present study, there was a fair number of non-responders (as can be seen in the differences between calculations in YDS and RLS). However, this is not a problem, as the question at hand is what happens when a questionnaire is used. Whether the results would have been different if non-responders had been included is beside the point, as this is how questionnaires are used; non-responding is simply a part of the method. Furthermore, a test of response bias in the presently used data sets, which have the peculiar feature of a 100% response rate to the first wave, indicated that differences between those who responded to later waves and those who did not were small (paper in preparation).


One limitation of the present study was the online format used, which could have influenced the results. However, for computer versus paper-and-pencil administration, no such effects seem to exist (see the meta-analysis by Richman, Kiesler, Weisband & Drasgow, 1999). Similarly, using the web as an administration tool seems to have no effect (Hancock & Flowers, 2001).

Yet another limitation concerns the populations sampled. Three out of four samples were driving offenders. However, the bulk of these offences were rather ordinary behaviors, like moderate speeding and not using a seatbelt, which do not make these drivers very different from the majority. Furthermore, the same kind of effect was observed in the control sample.

Finally, it can be observed that even though many researchers seem to realize the problematic nature of self-reports and common method variance (Holden, 2008; Chang, van Witteloostuijn & Eden, 2010), some research areas are still dominated by self-report studies (af Wåhlberg, 2009). This state of affairs is not acceptable, and lie scales are not the tool to solve the problem.

References

Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behaviour: A meta-analytic review. British Journal of Social Psychology, 40, 471-499.

Ashley, A., & Holtgraves, T. (2003). Repressors and memory: Effects of self-deception, impression management, and mood. Journal of Research in Personality, 37, 284-296.


Biderman, M. D., Nguyen, N. T., Mullins, B., & Luna, J. (2008). A method factor

predictor of performance ratings. Paper presented at the 23rd Annual Conference

of The Society for Industrial and Organizational Psychology, San Francisco.

Chang, S.-J., van Witteloostuijn, A., & Eden, L. (2010). From the Editors: Common

method variance in international business research. Journal of International

Business Studies, 41, 178-184.

Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of

psychopathology. Journal of Consulting Psychology, 24, 349-354.

Deffenbacher, J. L., Oetting, E. R., & Lynch, R. S. (1994). Development of a driving anger

scale. Psychological Reports, 74, 83-91.

Deffenbacher, J. L., Richards, T. L., Filetti, L. B., & Lynch, R. S. (2005). Angry drivers: A

test of the state-trait theory. Violence and Victims, 20, 455-469.

Ellingson, J. E., Sackett, P. R., & Hough, L. M. (1999). Social desirability corrections in

personality measurement: Issues of applicant comparison and construct validity.

Journal of Applied Psychology, 84, 155-166.

Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the influence of social desirability on personality factor structure. Journal of Applied Psychology, 86, 122-133.


Elliott, M. A., Armitage, C. J., & Baughan, C. J. (2007). Using the theory of planned

behaviour to predict observed driving behaviour. British Journal of Social

Psychology, 46, 90–96.

Fisher, R. J., & Katz, J. E. (2000). Social desirability bias and the validity of self-reported

values. Psychology and Marketing, 17, 105-120.

Furnham, A. (1986). Response bias, social desirability and dissimulation. Personality and

Individual Differences, 7, 385-400.

Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A very brief measure of the Big

Five personality domains. Journal of Research in Personality, 37, 504-528.

Gulian, E., Glendon, A. I., Matthews, G., Davies, D. R., & Debney, L. M. (1988).

Exploration of driver stress using self-reported data. In J. A. Rothengatter & R. A.

de Bruin (Eds.) Road User Behaviour: Theory and Research (pp. 342-347).

Maastricht: van Gorcum.

Hancock, D. R., & Flowers, C. (2001). Comparing social desirability responding on World Wide Web and paper-administered surveys. Educational Technology Research and Development, 49, 5-13.

Hays, R. D., Hayashi, T., & Stewart, A. L. (1989). A five-item measure of socially

desirable response set. Educational and Psychological Measurement, 49, 629-636.


Hessing, D. J., Elffers, H., & Weigel, R. H. (1988). Exploring the limits of self-reports and

Reasoned Action: An investigation of the psychology of tax evasion behavior.

Journal of Personality and Social Psychology, 54, 405-413.

Holden, R. R. (2008). Underestimating the effects of faking on the validity of self-report

personality scales. Personality and Individual Differences, 44, 311-321.

Kline, T. J., Sulsky, L. M., & Rever-Moriyama, S. D. (2000). Common method variance

and specification errors: A practical approach to detection. The Journal of

Psychology, 134, 401-421.

Lajunen, T., Corry, A., Summala, H., & Hartley, L. (1997). Impression management and

self-deception in traffic behaviour inventories. Personality and Individual

Differences, 22, 341-353.

McCrae, R. R., & Costa, P. T. (1983). Social desirability scales: More substance than style.

Journal of Consulting and Clinical Psychology, 51, 882-888.

Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of

Personality and Social Psychology, 46, 598-609.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common

method biases in behavioral research: A critical review of the literature and

recommended remedies. Journal of Applied Psychology, 88, 879-903.


Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research:

Problems and prospects. Journal of Management, 12, 531-544.

Ramanaiah, N. V., Schill, T., & Leung, L. S. (1977). A test of the hypothesis about the

two-dimensional nature of the Marlowe-Crowne social desirability scale. Journal of

Research in Personality, 11, 251-259.

Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study

of social desirability distortion in computer-administered questionnaires, traditional

questionnaires, and interviews. Journal of Applied Psychology, 84, 754-775.

Schmitt, N., Oswald, F. L., Kim, B. H., Gillespie, M. A., Ramsay, L. J., & Yoo, T.-Y.

(2003). Impact of elaboration on socially desirable responding and validity of

biodata measures. Journal of Applied Psychology, 88, 978-988.

Slater, M. D. (2003). Alienation, aggression, and sensation-seeking as predictors of adolescent use of violent film, computer and website content. Journal of Communication, 53, 105-121.

Smith, D. B., & Ellingson, J. E. (2002). Substance versus style: A new look at social

desirability in motivating contexts. Journal of Applied Psychology, 87, 211-219.

Uziel, L. (2010). Rethinking Social Desirability scales: From Impression Management to

interpersonally oriented self-control. Perspectives on Psychological Science, 5,

243-262.


af Wåhlberg, A. E. (2009). Driver Behaviour and Accident Research Methodology;

Unresolved Problems. Farnham: Ashgate.

af Wåhlberg, A. E. (2010a). Re-education of young driving offenders; effects on self-reports of driver behavior. Journal of Safety Research, 41, 331-338.

af Wåhlberg, A. E. (2010b). Social desirability effects in driver behavior inventories.

Journal of Safety Research, 41, 99-106.

af Wåhlberg, A. E. (2011). Re-education of young driving offenders; effects on recorded

offences and self-reported collisions. Transportation Research Part F: Traffic

Psychology and Behaviour, 14, 291-299.

af Wåhlberg, A. E. (unpublished). The Road to Driver Risk Index; A Review. Available at

www.psyk.uu.se/hemsidor/busdriver.

af Wåhlberg, A. E., Dorn, L., & Kline, T. (2010). The effect of social desirability on self-reported and recorded road traffic accidents. Transportation Research Part F, 13, 106-114.

af Wåhlberg, A. E., Dorn, L., & Kline, T. (2011). The Manchester Driver Behaviour

Questionnaire as predictor of road traffic accidents. Theoretical Issues in

Ergonomics Science, 12, 66-86.


Table 1

Sample    Waves   Time period   Inventories               Lie scales
YDS       3       1+6 mo        DAS, SSS, DBI-A, DBQ-V    DIM
Control   2       6 mo          DAS, SSS, DBI-A, DBQ-V    DIM
RLS       3       1+6 mo        Big Five-C, DBQ-L         DIM, M-C
SS        2       3 mo          Big Five-C, SSS, DBQ-L    DIM

Note. DAS = Driving Anger Scale, SSS = Sensation Seeking Scale, DBI-A = Driver Behaviour Inventory (Aggression), DBQ-V = Driver Behaviour Questionnaire (violations scale), DBQ-L = Driver Behaviour Questionnaire (lapse scale), Big Five-C = Big Five Conscientiousness, DIM = Driver Impression Management, M-C = Marlowe-Crowne.


Table 2

Sample    N      Sex (% male)   Age (m/sd)   Waves
YDS       8335   59.6%          21.7/2.2     3
Control   234    53.0%          31.8/13.3    2
RLS       4035   42.3%          38.9/12.9    3
SS        505    83.4%          41.0/14.3    2


Table 3

Sample   Scale        Items   Wave 1 m/sd  alpha   Wave 2 m/sd  alpha   N      t          d
YDS DAS 6 2.05/0.62 .79 2.05/0.74 .85 8377 -0.76 -0.01
YDS SSS 2 1.37/0.60 .77 1.44/0.73 .84 8377 -10.71*** -0.12
YDS DBI-A 5 1.60/0.50 .65 1.76/0.63 .69 8377 -25.30*** -0.32
YDS DBQ-V 7 1.30/0.35 .71 1.37/0.49 .83 8377 -13.68*** -0.20
YDS DIM 7 3.10/0.91 .81 - - 8377 - -
Control DAS 6 2.80/0.78 .81 2.69/0.74 .80 234 2.71** 0.14
Control SSS 2 1.68/0.73 .76 1.64/0.70 .79 234 0.94 0.05
Control DBI-A 5 2.12/0.69 .66 2.08/0.66 .68 234 1.12 0.06
Control DBQ-V 7 1.46/0.48 .79 1.45/0.44 .74 234 0.49 0.02
Control DIM 7 2.91/1.02 .86 2.87/1.00 .86 234 0.89 0.04
RLS Big Five-C 2 4.54/0.56 .40 4.51/0.57 .48 4035 2.85** 0.04
RLS DBQ-L 4 2.16/0.37 .62 2.18/0.40 .69 4035 -4.33*** -0.06
RLS DIM 6 3.42/0.91 .83 3.52/0.89 .83 4035 -11.07*** -0.12
RLS M-C 5 4.10/0.62 .66 4.11/0.65 .73 4035 -1.63 -0.02
SS Big Five-C 2 4.56/0.58 .37 4.51/0.62 .48 505 1.64 0.08
SS SSS 2 1.49/0.75 .85 1.64/0.83 .84 505 -5.09*** -0.21
SS DBQ-L 4 1.37/0.43 .61 1.45/0.50 .70 505 -3.73** -0.18
SS DIM 7 3.66/0.89 .83 3.55/0.94 .85 505 3.10** 0.13

** p<.01, *** p<.001


Table 4

Sample/scale/wave    N       DAS     SSS     DBI-A   DBQ-V   DBQ-L   Big Five-C
YDS/DIM/1 10426 -.381 -.319 -.442 -.529 - -
Control/DIM/1 1461 -.223 -.239 -.351 -.450 - -
RLS/DIM/1 4227 - - - - -.251 .293
RLS/M-C/1 4227 - - - - -.280 .382
SS/DIM/1 8013 - -.338 - - -.265 .257
YDS/DIM/3 1188 -.520 -.338 -.538 -.563 - -
Control/DIM/2 234 -.281 -.286 -.338 -.572 - -
RLS/DIM/2 4036 - - - - -.295 .383
RLS/M-C/2 4036 - - - - -.370 .473
RLS/DIM/3 353 - - - - -.276 .199
RLS/M-C/3 353 - - - - -.400 .443
SS/DIM/2 505 - -.383 - - -.184 .246

Note. All correlations significant at p<.001.


Table 5

Sample   Scale             First versus second wave     Second versus third wave
                           N      r      Partial r      N      r      Partial r
YDS      DAS               8378   .493   .429           1189   .678   .591
YDS      SSS               8378   .521   .478           1189   .715   .679
YDS      DBI-A             8378   .461   .369           1189   .644   .542
YDS      DBQ-V             8378   .464   .340           1189   .634   .507
YDS      DIM (waves 1/3)   -      -      -              1189   .568   -
Control  DAS               234    .662   .649           -      -      -
Control  SSS               234    .580   .545           -      -      -
Control  DBI-A             234    .748   .724           -      -      -
Control  DBQ-V             234    .694   .585           -      -      -
Control  DIM               234    .733   -              -      -      -
RLS      Big Five-C        4036   .589   .506           342    .584   .548
RLS      DBQ-L             4036   .694   .660           342    .588   .544
RLS      DIM               4036   .767   -              342    .705   -
RLS      M-C               4036   .705   -              342    .627   -
SS       Big Five-C        505    .383   .355           -      -      -
SS       SSS               505    .604   .551           -      -      -
SS       DBQ-L             505    .503   .472           -      -      -
SS       DIM               505    .602   -              -      -      -

Note. All correlations significant at p<.001.


Table captions

Table 1: Overview of the samples and the characteristics of each data gathering.

Table 2: Descriptive data for the samples used, for those drivers who responded to two waves of each questionnaire. Shown are the percentages of males and the mean/standard deviation of age.

Table 3: Descriptive statistics for the scales used, for all samples, for waves 1 and 2, for those who responded to both. Five-step scales were used for all items; mean values and standard deviations over items are presented. Dependent t-tests and Cohen's d were calculated between waves (wave 1 standard deviation as denominator). Also presented are the number of items in each scale and the Cronbach alphas for the scales in each wave.

Table 4: The Pearson correlations between the lie scales and other scales, within each

wave.

Table 5: The Pearson correlations between the scales in waves 1 and 2, raw and with the

first wave DIM lie scale held constant. In the YDS, the DIM scale was distributed in the

first and third waves, in the other samples in all waves. In the RLS, both lie scales were

held constant.
