INCENTIVE THEORY AND CHANGES IN REWARD¹
Frank A. Logan
UNIVERSITY OF NEW MEXICO
ALBUQUERQUE, NEW MEXICO
I. Introduction
   A. Incentive Effects in Partial Reinforcement
   B. rg-sg as the Mechanism of Incentive Motivation
   C. Rationale
II. Method
   A. Subjects
   B. Apparatus
   C. Procedure
III. Experimental Designs and Results
   A. Experiment 1. Joint Extinction Following Partial and Continuous Reinforcement
   B. Experiment 2. Reacquisition Following Original Training with Partial and Continuous Reinforcement
   C. Experiments 3 and 4. Reversal of Choice Following Partial and Continuous Reinforcement
   D. The Overtraining Extinction Effect
   E. Experiment 5. Effect of Amount of Reward on Resistance to Extinction of Incentive Motivation
   F. Experiment 6. Incentive Contrast
   G. Experiment 7. Rate of Increase in Incentive Motivation
IV. Discussion
V. Summary
References
I. Introduction
Of the basic phenomena encountered in the experimental analysis of
the learning process, perhaps none has proved to be so intractable to an
even temporarily satisfying theoretical description as that of experi-
mental extinction. The closest was probably Pavlov’s (1927) original
postulation of an internal inhibitory process which, as every introductory
psychology student is supposed to know, was suggested by such other
¹ This research was supported in part by grants from the National Science
Foundation. Louis Gonzelez supervised the laboratory during the time these data
were collected, and ran many of the subjects. In addition, Christopher Bennett,
Alfred Coscina, Douglas Gibson, Albert Gonzales, and Eli Padilla assisted in
collecting the data.
for incentive arise if one pays strict attention to the data in the relevant
situations. As a specific illustration, the interstimulus-interval function
in classical conditioning shows an optimum at somewhere between a
half and one second; the delay of reinforcement gradient does not show
such nonmonotonicity. As another example, Egger and Miller (1963)
have recently shown that the later stimuli in a sequential chain of con-
ditioned stimuli preceding an unconditioned stimulus do not come to
control conditioned responses. As secondary reinforcers, only the early
stimuli are effective. This should mean that rg would not be evoked by
the stimuli arising late in the instrumental behavior chain, although the
common assumption in this regard is that incentive motivation increases
progressively as the goal is approached. The detailed laws of classical
conditioning have not yet been systematically applied to the incentive
construct within a theory and hence the extent to which inconsistencies
would be detected is not fully known. However, special problems would
almost certainly arise from such realizations as that conditioned
responses occur with some probability while incentive motivation must
be present on every trial.
4. An empirical source of disillusionment concerns the attempts to
identify salivation as a component of rg and to obtain supporting
evidence by recording and/or manipulating salivation during instru-
mental conditioning. By and large, the results of these efforts have not
been very encouraging (see, e.g., Lewis & Kent, 1961) although a few
suggestive results have been reported (e.g., Shapiro, 1962). It may be
granted that salivation has been used simply to illustrate the logic of
an rg-sg mechanism; rg as an intervening variable need not be physio-
logically localized and hence the failures with salivation are indeed not
crucial. But to many observers, the salivation illustration is simply too
good: the necessary features (e.g., a learnable component of the goal
response that is not incompatible with the instrumental response) are
present and it ought to work better.
5. Finally, an approach in which incentive motivation is mediated by
a response mechanism is most appropriate for a theory utilizing a
cybernetic-feedback type of analysis, especially if incentive is assumed
to contribute to the general motivational level (but see Trapold, 1962,
for contrary evidence). Indeed, this is the context in which it is typically
employed (e.g., Miller, 1963; Mowrer, 1960; F. D. Sheffield, 1966;
Spence, 1960). In these analyses, the organism is assumed somehow to
be behaving; feedback from his behavior is available and is, through
prior conditioning, associated with the incentive-mediating response
mechanism which, in turn, fosters either continuation of the ongoing
behavior or change, presumably toward a more optimal level. Miller and
Sheffield have, somewhat differently, included response habits on which
to base the choice of responses. Although this type of approach has a
great deal of appeal and most probably indicates the direction of future
theorizing, it does not readily permit incentive motivation to act
selectively at the instant of choice. This difficulty is most obvious at the
moment of initiation of a response, but actually recurs continuously
during a behavior chain.
To emphasize the difficulty in the context of response initiation, one
might postulate that the organism covertly scans the domain of possible
responses, receives internal and external feedback from each of these,
and selects the one with which the most vigorous rg is associated. In its
simplest and most visible form, this type of behavior is seen in the VTE
behavior of rats at a choice point in a maze (Spence, 1960; Tolman,
1932). Even in that context, however, a complete theory must postulate
additional storage and comparative mechanisms that enable the
organism to select the optimal response. For example, a rat rewarded
differentially in both arms of a T-maze might happen to orient toward
the arm containing the smaller reward. The feedback cues available in
that orientation elicit some rg, but the rat must be assumed to have the
capacity to withhold continuation of that response; he must be assumed
to store that level of rg while orienting toward the other arm where the
there-appropriate rg is in turn elicited; following these orientations, the
rg's must be compared so that their relative strengths can determine
choice. These molecular details have not been explicitly incorporated
into a behavior theory of even this simple situation, much less the more
complex, freer situations where, for example, the rat is choosing a speed
of locomotion. It seems improbable that any organism has the time or
resources to make momentary decisions on the basis of implicit monitor-
ing of all possible courses of action.
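To make the storage-and-comparison burden explicit, the covert-scanning account just described can be caricatured as a serial procedure: orient toward each alternative in turn, elicit the rg conditioned to its feedback cues, store its magnitude, and only then compare the stored values to select a response. The following Python fragment is a minimal sketch of that logic, not a model drawn from the text; the response names and rg values are hypothetical.

```python
def scan_and_choose(alternatives, rg_strength):
    """Caricature of the covert-scanning account: orient toward each possible
    response in turn, elicit the rg associated with its feedback cues, store
    that value, and only afterwards compare the stored values to choose.
    `rg_strength` maps a response alternative to the rg magnitude it evokes."""
    stored = {}
    for response in alternatives:                   # serial, VTE-like scan
        stored[response] = rg_strength(response)    # elicit and store rg
    return max(stored, key=stored.get)              # compare; emit the strongest

# Hypothetical T-maze example: the left arm has been paired with the larger reward.
rg_values = {"left_arm": 0.9, "right_arm": 0.4}
print(scan_and_choose(["left_arm", "right_arm"], rg_values.get))  # -> left_arm
```

Even this toy version presupposes separate storage and comparison operations, and the time to carry them out for every alternative, which is precisely the burden the text argues becomes implausible for continuous choices such as speed of locomotion.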
It is, of course, possible that a sufficiently molecular analysis will
ultimately show that this approach is realistically tractable. At the
present time, however, it appears preferable to conceptualize incentive
motivation as specific to different S-R events and immediately given as
a basis for choice before the choice is made. Feedback from prior behavior
is certainly a component of the total stimulus complex in which decisions
occur but incentive motivational differences among the responses to be
made must be available in advance. That is to say, the rg associated with
the feedback from the response in the next instant is important as a
secondary reinforcer and to help guide the ensuing behavior, but this
feedback cannot possibly be present at the actual moment of choice
before the response is made.
C. RATIONALE
On the basis of the above arguments, the view adopted here is that
incentive motivation is a fundamental process like habit (i.e., it is not
II. Method
A. SUBJECTS
The Ss were 232 hooded rats predominantly bred in the colony main-
tained by the Department of Psychology of the University of New
Mexico. Additional rats from one replication of several of the studies
were discarded when it subsequently became apparent that the strain
had become so highly inbred at that time that the data were unusually
insensitive to reward effects. Other rats were excluded because they
showed such strong position/brightness preferences that no variability
in their early choice behavior occurred; still others were deleted at
random to equate N’s for analysis and to ensure appropriate counter-
balancing. The inclusion of these discarded data would, of course, add
noise to the graphic presentation of the results but would not, in fact,
alter any of the conclusions.
Age, sex, and source of the rats were counterbalanced across the
conditions within any study. These are not reported because sub-
analyses failed to indicate any significant interactions with these factors.
During the experimental treatments all Ss were maintained in individual
cages with water freely available. They were maintained on approxi-
mately 12 gm per day of laboratory chow given immediately after each
day’s session.
B. APPARATUS
Two apparatuses were employed, half the Ss in each study normally
running in each apparatus. These consisted of pairs of parallel straight
alleys, differing principally in that one pair was 4 feet and the other
8 feet long. All doors in the short apparatus were fully automated; the
goal box door in the long apparatus was manually operated. In each
apparatus, one alley was black and the other white, confounded with
position. Speeds were recorded from reciprocal-reading clocks that
started with the breaking of photobeams located approximately 3 inches
outside the start door and stopped with the breaking of photobeams
located inside the food cups; these were individually converted into
feet per second for treatment.
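However the clock readings were taken, the conversion mentioned above amounts to dividing the distance between the two photobeams by the elapsed time. A trivial sketch, assuming a hypothetical interbeam distance since the exact measured section is not specified:

```python
def speed_fps(elapsed_seconds, interbeam_distance_ft):
    """Convert the elapsed time between the start photobeam and the
    food-cup photobeam into running speed in feet per second."""
    return interbeam_distance_ft / elapsed_seconds

# Example with an assumed 3.5-foot measured section traversed in 1.4 seconds.
print(speed_fps(1.4, 3.5))  # 2.5 ft/sec
```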
Rats were confined in the goal section of the alley after completing the
response. Food reward consisted of 45-mg Noyes pellets. These were
delivered immediately upon breaking of the food-cup photobeam by
Davis feeders pulsed at a rate of about six per second. Rats were removed
after consuming the reward or after about 10 seconds on nonreward
trials.
C. PROCEDURE
Ss were preadapted to the appropriate apparatus on an adjusting
shaping schedule. First, groups of four rats were permitted freely to
explore the maze for approximately 30 minutes with all doors open and
with food pellets available in the food cups. This exploration was
followed by individual magazine training in the goal sections of each
alley with CRF programmed on the food-cup photobeam. By the time
training proper began, each S was eating readily from each food cup and
had adapted to the sound of the feeding mechanism.
Rats were run in squads of four on a rotating basis, producing a some-
what variable intertrial interval that averaged, after the initial trials, about
3-4 minutes. Six trials per day were run, the first four trials being forced
according to one of the following patterns rotated over blocks of 4 days:
RRLL, RLLR, LRRL, LLRR. The fifth trial was then a free-choice
trial providing the basis for observation of preference, followed by a
final forced trial opposite in direction to that chosen on the fifth trial.
The assigned reward conditions simply prevailed from the beginning
of training proper. When partial reinforcement was scheduled in any
alley, a preassigned sequence was followed that distributed six reinforce-
ments over the 12 trials in that alley during each block of 4 days. The
sequence was designed to ensure that each ordinal trial position received
equal frequency of reinforcement. Specifically, for example, half of the
choice trials would be programmed for reinforcement if that alternative
were chosen.
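The daily trial structure just described (four forced trials in a rotated order, a free-choice fifth trial, and a final forced trial opposite to the chosen side) can be summarized in a short scheduling sketch. The Python fragment below is only an illustration of that bookkeeping, not the laboratory's actual program; the 50% sequence shown is a simple balanced stand-in, since the preassigned sequences themselves are not reproduced in the text.

```python
import random

# Forced-trial orders rotated over blocks of 4 days (L and R denote the two alleys).
FORCED_PATTERNS = ["RRLL", "RLLR", "LRRL", "LLRR"]

def day_schedule(day, chosen_side):
    """Six trials for one day: four forced trials in the rotated order, a fifth
    free-choice trial, and a sixth forced trial opposite to the side chosen."""
    forced = list(FORCED_PATTERNS[day % 4])           # trials 1-4
    correction = "L" if chosen_side == "R" else "R"   # trial 6, opposite of choice
    return forced + [chosen_side, correction]

def partial_block(seed=None):
    """Stand-in 50% schedule for one alley over a 4-day block: 12 trials with
    exactly 6 rewarded. A balanced shuffle is used here for illustration; the
    original preassigned sequences additionally equated reinforcement frequency
    across ordinal trial positions."""
    rng = random.Random(seed)
    rewards = [True] * 6 + [False] * 6
    rng.shuffle(rewards)
    return rewards

# Example: day 3 of a block, assuming the rat chose the right alley on trial 5.
print(day_schedule(day=2, chosen_side="R"))  # ['L', 'R', 'R', 'L', 'R', 'L']
print(partial_block(seed=0))
```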
[Figure 1 (plot; left ordinate: percent choice; right ordinate: speed in ft/sec; abscissa: blocks of 12 trials on each response; condition 100% 9A vs. 50% 9A; curves for choice, speed (100%), speed (50%), C-speed (100%), and C-speed (50%)).]
Fig. 1. Choice behavior during joint extinction following initial training with
partial and continuous reinforcement in two alternative alleys. Graphed are the
last 12 of 42 days of acquisition and 36 days of extinction. Speed data in the two
alleys are also depicted, together with the speeds of control groups that received
only partial or only continuous reinforcement before extinction.
B. EXPERIMENT 2. REACQUISITION FOLLOWING ORIGINAL TRAINING WITH
PARTIAL AND CONTINUOUS REINFORCEMENT
A somewhat different approach to the question of whether or not
extinction has differential effects on incentive motivation following
partial and continuous reinforcement arises in the context of reacquisi-
tion. Since performance during extinction differs radically following
these training conditions, the effects on incentive might differ in a way
that could be detected by the reintroduction of reward. Specifically, for
example, if continuously reinforced rats stop instrumental performance
rapidly because of other-than-incentive factors such as a generalization
decrement in habit, then they might have more residual incentive
motivation than partially reinforced rats who persist longer in making
the instrumental response. Hence, reacquisition might progress at
different rates following extinction after initial training with partial or
continuous reinforcement.
To evaluate this possibility, the control groups shown in Fig. 1 were
run for an additional 60 trials with reward again present. Reacquisition
conditions were run under continuous reinforcement for some rats and
under partial reinforcement for others in order to control for possible
generalization decrements due to change from the original training
conditions. The results are plotted in Fig. 2. It is apparent that all
groups reacquired the running response rapidly, and somewhat more
rapidly with continuous than with partial reinforcement during this
stage of the experiment. But no noticeable differences occurred during
reacquisition as a residual from the original acquisition reinforcement
conditions. Accordingly, in spite of dramatic differences in instrumental
[Figure 2 (plot): reacquisition speeds over blocks of 12 trials, X-3 through 5.]
C. EXPERIMENTS 3 AND 4. REVERSAL OF CHOICE FOLLOWING PARTIAL AND
CONTINUOUS REINFORCEMENT
In an attempt to minimize the possible role of generalization effects,
a between-groups design was employed to compare the rates of loss of
incentive motivation following partial and continuous reward. Two
groups of rats received a large reward in one alley, one group receiving
the reward on every trial (100% 9A) and the other group receiving the
large reward on an irregular half of their trials in that alley (50% 9A).
Both groups received a small reward continuously (100% 1A) in the
other alley. The selection of these amounts of reward was intended to
ensure that both groups would initially prefer the large-reward alterna-
tive, although presumably the incentive basis for this preference would
be somewhat greater with continuous than with partial reinforcement.
After clear evidence of such preference for the larger reward was
apparent (28 days at six trials per day), extinction conditions were
initiated in the large-reward alley for both groups. The small reward was
still given to both groups on every trial in the alternative alley. Accord-
ingly, both groups would be expected ultimately to come to prefer the
particularly rapid for either group and that there were no noticeable
differences in the rate at which reversal took place. Accordingly, even
in this between-groups design, there is no evidence that incentive moti-
vation persisted longer following partial than continuous reinforcement.
Because some possible generalized partial reinforcement effects may
also arise in this differential-reinforcement procedure, especially follow-
ing a relatively small amount of discrimination training (Brown & Logan,
1965), the procedure was repeated with new rats who were given sufficient
overtraining to minimize further any generalization effects. Specifically,
42 days under the original reward conditions were given before the
large-reward alley underwent extinction. This represents 50% overtraining
beyond that given the previous groups in this design; otherwise, the
procedures were identical.
[Figure 4 (plot; ordinate: percent; abscissa: blocks of 4 days, 24 trials): extinction of the 9A alley while 1A continued after 42 days of acquisition; curves for choice and for speeds in the 9A and 1A alleys; legend includes the 50% 9A vs. 100% 1A group.]
The results are shown in Fig. 4. The choice data are depicted by the
wide-lined curves and, although the partial group remained somewhat
above the continuous group during the later portions of the extinction
phase, this difference was absolutely quite small and did not approach
statistical significance. Again, the evidence suggests that incentive
motivation did not persist longer following partial reinforcement than
following continuous reinforcement, at least insofar as this is reflected
D. THE OVERTRAINING EXTINCTION EFFECT
A number of studies (e.g., North & Stimmel, 1960) have recently
appeared suggesting that overtraining of an instrumental response may
lead to reduced resistance to extinction of that response compared with
smaller amounts of training. This phenomenon, then, is another instance
in which differential rates of change in incentive motivation might
possibly be detected. Evidence against such an interpretation can be
seen by comparison of Figs. 3 and 4. It will be recalled that the only
difference between these studies concerned the number of training trials
given prior to extinction of the large-reward alley. Hence, if incentive
motivation decreased more rapidly following overtraining (with either
partial or continuous reward) then the reversal in Fig. 4 should have
been faster than that in Fig. 3. It is clear that no such difference occurred,
reversal of preference being essentially equally gradual in both instances.
Examination of the speed curves, however, does show a more precipitous
decline after overtraining. But there is no evidence that the overtraining
extinction effect represents changes in the rate of extinction of incentive
motivation.
E. EXPERIMENT 5. EFFECT OF AMOUNT OF REWARD ON RESISTANCE TO
EXTINCTION OF INCENTIVE MOTIVATION
Recent evidence (e.g., Hulse, 1958) indicates that, at least after a
substantial amount of training, instrumental extinction proceeds more
rapidly with large as compared with small rewards. Experiment 5 was
designed to determine whether this phenomenon might reflect an effect
on the rate of change in incentive motivation. Furthermore, since the
amount-of-reward effect on extinction appears to be specific to con-
tinuous reinforcement, partial reward conditions were also included.
Two groups of rats received a large reward in one alley and a small
reward in the other alley, one group receiving both of these rewards
continuously and the other group receiving both of these rewards on an
irregular 50% partial reinforcement schedule. After 42 days of acquisi-
tion under these conditions, both alleys were subjected to joint extinc-
tion. Consistent with the logic developed previously, if extinction of
incentive motivation proceeded more rapidly following the large as
compared with the small reward, preference should show a temporary
reversal toward the small-reward alley.
The results are shown in Fig. 5. The heavy-lined curves show the
choice data for the two groups separately, and it is clear that no reversal
occurred. Instead, both curves drifted gradually toward indifference in
response to continued nonreinforcement. Attention to the speed curves
does reveal that the rate of extinction as measured by instrumental
performance was greater following continuous large reward than
[Figure 5 (plot; ordinate: percent choice; abscissa: blocks of 4 days, 24 trials).]
Fig. 5. Choice behavior during joint extinction following initial training with a
large reward in one alternative and a small reward in the other. One group received
both rewards continuously and the other group received both rewards on an
irregular half of their trials in each alley. Graphed are the last 12 of 42 days of
acquisition followed by 36 days of joint extinction. Speeds for each group in each
alternative are also depicted.
the original training may not have been carried fully to asymptote and
that the group shifted to the larger reward may have run faster not
because of any incentive contrast effect but because additional habit
was being acquired during the postshift trials. Subsequent studies have
tended to favor this interpretation; a number of studies are now available
(e.g., Ehrenfreund & Badia, 1962) indicating that the upward shift in
amount of reward does not always produce the overshoot seen in the
earlier data. These latter studies, however, may also be criticized
because the larger reward employed has probably been near or at the
upper limit of potential incentive value. That is to say, a large reward
might appear still larger by contrast with concurrent or preceding
smaller rewards, but if the incentive value of the large reward is already
maximal, any such contrast would not appear in instrumental
performance.
Other studies that might reveal an incentive contrast phenomenon
have employed a concurrent exposure to different amounts of reward.
For example, Bower (1961) and Goldstein and Spence (1963) gave rats
a small reward in an alley of one color and a large reward in another
alley. The performances of these rats were separated for the two alleys
and compared with control rats receiving all of their training with either
the large or small reward. These results have been consistent in showing
no contrast effect with respect to the large reward although some of the
data suggest that performance with a small reward may be inferior if
the rats are concurrently encountering a large reward in a similar
situation. The same possible criticism with respect to the upper limit of
incentive applies and, in addition, there is the question of the extent to
which incentive effects generalize between the situations so as to provide
a basis for contrast. The unresolved issue is the degree to which the
situations would have to be similar for different rewards to show any
effective contrast as opposed to complete discrimination of the rewards
appropriate to the different situations.
The limiting case of similarity, of course, is to give different rewards
in the same situation. This procedure of varied reward has been studied
most systematically by Beier (see Logan, 1960) but the inability to infer
the incentive value of the rewards separately makes any interpretation
based on contrast highly speculative. However, suggestive analyses in
favor of such a phenomenon were reported.
Some data bearing on this issue were presented in Fig. 4. Rats
received a large reward in one alley and a small reward in the other alley
followed by extinction in the large-reward alley while the small reward
continued. If one refers to the running speed in the small-reward alley
when it was initially paired with a large reward and subsequently paired
with no reward, it is apparent that no differences appeared. That is to
say, the rats ran as well toward one pellet reward when they were
receiving nine pellets in the other alley as when they were receiving
nothing in the other alley. This presumably indicates that incentive was
more or less perfectly discriminated and that no contrast effects, upward
or downward, occur in this situation.
[Figure 6 (plot; ordinate: percent choice; abscissa: blocks of 24 trials; groups: 5A vs. 2A then 5A vs. 5A, and 5A vs. 2A then 2A vs. 2A (n = 10); curves for choice and for speeds 5A-then-5A, 2A-then-5A, 2A-then-2A, and 5A-then-2A; amount changed after 42 days of acquisition).]
Fig. 6. Choice behavior after reward was equalized following initial training with
differential reinforcement. Equalization was achieved by increasing the small
reward for one group and by decreasing the large reward for the other group.
Graphed are the last 12 of 42 days of acquisition followed by 36 days with equal
rewards. Speeds for each group in each alternative are also depicted.
thus making the rate comparisons somewhat difficult. Even so, however,
there is no indication that the group shifted from partial to continuous
took longer to reverse than the group shifted from nothing to continuous
large reward. If anything, the latter obtained, presumably reflecting the
initial difference in preference. Hence, these data do not suggest that the
incremental rate parameter for incentive motivation is modified by
prior reinforcement history.
[Figure 7 (plot; abscissa: blocks of 24 trials; curves for choice, speed 3A, speed nil-then-100% 5A, and speed 50% 5A-then-100% 5A).]
Fig. 7. Choice behavior during reversal when a relatively small reward was
initially pitted against either nothing or partial reinforcement with a relatively
large reward and was subsequently pitted against continuous reinforcement.
Graphed are the last 12 of 42 days of acquisition followed by 36 days for reversal.
Speeds for each group in each alternative are also depicted.
IV. Discussion
Changes in incentive motivation are essential aspects of contemporary
theories of learning, but little explicit attention has been paid to the
rate at which such changes take place. Were incentive motivation
mediated by a response mechanism such as rg-sg, then the laws of
classical conditioning would suffice to describe incentive change effects.
However, since reasonable questions can be raised about a mediational
approach to incentive motivation, the present research program
attempted to evaluate the question empirically.
V. Summary
The rate at which incentive motivation changes in response to a
change in the reward conditions was evaluated empirically by use of a
choice procedure with rats as subjects. Two alternatives initially
received different conditions of reinforcement followed by a change in
the conditions for one or both alternatives. The procedures studied were
selected from effects familiar in the literature of learning, including the
effects on extinction of partial reinforcement, amount of reward, and
overtraining. Changes in both amount and probability of reward were
also studied. In no case was choice behavior affected by these changes
in the manner that would be expected if the rate at which incentive
motivation changes had been significantly affected.
During forced trials, running speeds reflected most of the familiar
phenomena of simple instrumental performance measures. Several novel
results were also obtained in the speed data, notably a reduction in the
partial reinforcement effect when extinction is conducted jointly with a
continuously reinforced response, and a generalized frustration effect
reflected in faster running for small reward when combined with partial
as compared with continuous large reward. In general, the speed data
confirmed that the rats were performing appropriately to the conditions
of reinforcement in each alternative.
Reservations about the generality of the choice procedure were noted,
especially the possible generalization effects between the alternatives and
the observable persisting habits of choice. Nevertheless, the consistency
of the results suggests the conclusion that the rate of change in incentive
motivation is an invariant state parameter of the subject and is not
significantly affected by prior reinforcement history.
REFERENCES
Amsel, A. Frustrative nonreward in partial reinforcement and discrimination
learning: Some recent history and a theoretical extension. Psychological Review,
1962, 69, 306-328.
Amsel, A. Partial reinforcement effects on vigor and persistence. In K. W. Spence
and J. T. Spence (Eds.), The psychology of learning and motivation: Advances in
research and theory, Vol. 1. New York: Academic Press, 1967. Pp. 1-65.
Black, R. W. On the combination of drive and incentive motivation. Psychological
Review, 1965, 72, 310-317.
Bower, G. H. Partial and correlated reward in escape learning. Journal of Experi-
mental Psychology, 1960, 59, 126-130.
Bower, G. H. A contrast effect in differential conditioning. Journal of Experimental
Psychology, 1961, 62, 196-199.
Brown, R. T., & Logan, F. A. Generalized partial reinforcement effect. Journal of
Comparative and Physiological Psychology, 1965, 60, 64-69.
Capaldi, E. J. Partial reinforcement: An hypothesis on sequential effects. Psycho-
logical Review, 1966, 73, 459-477.
Cofer, C. N., & Appley, M. H. Motivation: Theory and research. New York: Wiley,
1964.
Crespi, L. P. Amount of reinforcement and the level of performance. Psychological
Review, 1944, 51, 341-357.
Davenport, J. W. Spatial discrimination and reversal learning based upon dif-
ferential percentage of reinforcement. Journal of Comparative and Physiological
Psychology, 1963, 56, 1038-1043.
Egger, M. D., & Miller, N. E. When is a reward reinforcing?: An experimental
study of the information hypothesis. Journal of Comparative and Physiological
Psychology, 1963, 56, 132-137.
Ehrenfreund, D., & Badia, P. Response strength as a function of drive level and
pre- and post-shift incentive magnitude. Journal of Experimental Psychology,
1962, 63, 468-471.
Gleitman, H., Nachmias, J., & Neisser, U. The S-R reinforcement theory of
extinction. Psychological Review, 1954, 61, 23-33.
Goldstein, H., & Spence, K. W. Performance in differential conditioning as a
function of variation in magnitude of reward. Journal of Experimental Psy-
chology, 1963, 65, 86-93.
Guthrie, E. R. Reward and punishment. Psychological Review, 1934, 41, 450-460.
Hull, C. L. Principles of behavior. New York: Appleton, 1943.
Hull, C. L. A behavior system. New Haven: Yale University Press, 1952.
Hulse, S. H., Jr. Amount and percentage of reinforcement and duration of goal
confinement in conditioning and extinction. Journal of Experimental Psychology,
1958, 56, 48-57.
Kaplan, R., Kaplan, S., & Walker, E. L. Individual differences in learning as a
function of shock level. Journal of Experimental Psychology, 1960, 60, 404-407.
Kendler, H. H. Drive interaction: I. Learning as a function of the simultaneous
presence of the hunger and thirst drives. Journal of Experimental Psychology,
1945, 35, 96-109.
Koch, S. Clark L. Hull. In W. K. Estes et al., Modern learning theory. New York:
Appleton, 1954. Pp. 1-176.
Lewis, D. J., & Kent, N. D. Attempted direct activation and deactivation of the
fractional anticipatory goal response. Psychological Reports, 1961, 8, 107-110.
Logan, F. A. Incentive. New Haven: Yale University Press, 1960.
Logan, F. A. Decision making by rats. Journal of Comparative and Physiological
Psychology, 1965, 59, 1-12 and 246-251.
Logan, F. A., & Wagner, A. R. Reward and punishment. Boston: Allyn & Bacon,
1965.
Miller, N. E. Studies of fear as an acquirable drive: I. Fear as motivation and fear-
reduction as reinforcement in the learning of new responses. Journal of Experi-
mental Psychology, 1948, 38, 89-101.
Miller, N. E. Some reflections on the law of effect produce a new alternative to
drive reduction. In M. R. Jones (Ed.), Nebraska symposium on motivation.
Lincoln, Nebr.: University of Nebraska Press, 1963. Pp. 65-112.
Mowrer, O. H. Learning theory and behavior. New York: Wiley, 1960.
Nefzger, M. D. The properties of stimuli associated with shock reduction. Journal
of Experimental Psychology, 1957, 53, 184-188.
North, A. J., & Stimmel, D. T. Extinction of an instrumental response following
a large number of reinforcements. Psychological Reports, 1960, 6, 227-234.
Novin, D., & Miller, N. E. Failure to condition thirst induced by feeding dry food
to hungry rats. Journal of Comparative and Physiological Psychology, 1962, 55,
373-374.