In praise of dead behaviourists
Chris Timms considers the American psychologist Clark Hull and his ‘fractional antedating goal response’.
27 September 2018
If you carried out a word association test and presented participants with the stimulus word ‘behaviourist’, the most frequent response would probably be ‘Skinner’. Occasionally it might be ‘Watson’ or even ‘deluded’. But only rarely would the response be ‘Hull’. The Hullian wing of behaviourism (sometimes called Hull–Spence theory) died, in effect, with Kenneth Spence in 1967, and it is rarely referenced in mainstream psychology today.
Yet it was Hull who came to mind following an event in a public lavatory – an event that, in its ordinariness, neatly illustrates the potency of one of Hull’s least-known explanatory gems. I was taking a pee and reading the eye-level advertising. Suddenly, a young man clattered through the door and hurried to the urinals. He wasted no time with that common male preamble of simulating a mortal struggle with an anaconda. There was just the sound of frantic fumbling, a fleeting pause and then… the rattling sound of water striking metal, hard.
He exhaled, beginning to chuckle with relief: ‘That was close.’ I laughed too. ‘Been there, mate,’ I said. We stood silently for a few moments, then, perhaps feeling a need to explain his un-cool arrival, he added, ‘I was feeling all right until a minute ago.’
The psychologist inside me was burning to speak again. I wanted to explain to this youth why his desire to pee had suddenly lurched from manageable to tsunami as he approached the toilet door. He was, undoubtedly, the victim of a ‘fractional antedating goal response’.
This psychological mechanism has a title that does not slip easily off the tongue. It’s almost completely unknown outside formal psychology, born – and all but stillborn – into the high behaviourism of Clark Hull. And yet, in describing the subtle parameters within which learning occurs, the fractional antedating goal response has much to tell us about everyday life.
The FAGR was Hull’s means of linking classical and operant conditioning with interdependent stimulus–response chains and secondary reinforcers to explain complex maze learning (perhaps not dissimilar to cue-producing responses). It describes the partial and anticipatory responses we make when the opportunity for a desirable consummatory/relieving moment approaches – and its effect is to intensify our feelings of need and purposefulness. For example, in humans, the FAGR manifests itself in that common sensation of feeling increasingly hungry as we advance down the queue at a food counter. The sights, sounds and smells stimulate, channel and reinforce our ‘correct’ approach behaviour.
The FAGR also interacts with Dollard and Miller’s somewhat better-known frustration-aggression hypothesis, providing an explanation for why hitherto friendly pets sometimes begin to squabble when they realise that you are about to feed them. It’s why people can suddenly start fighting when food parcels are about to come off the backs of trucks, and why dropping the lasagne on its way out of the oven feels more annoying than dropping it on the way in. And, yes, sometimes the FAGR can turn a mild urge to pee into a wild urge.
It is easy to see why a psychological mechanism like the FAGR would evolve. Being channelled towards places where ‘good things’ happen to us, having our ‘correct’ intermediate responses supported by secondary reinforcers, and then – finally – being fired up to seize the consummatory moment with immediacy, force and focus, is likely to be highly adaptive, especially in competitive situations.
In human societies the consequences of the FAGR – often agonistic – are reduced by socially learned rules. Very early in life, for example, we are taught to queue politely, and to resist the urge to bite our peers at the sight or scent of food. But although the FAGR is overlaid with much cultural ‘noise’ in humans, it still functions quietly beneath the surface. Sometimes it produces unexpected and undesirable Dawkins-style evolutionary ‘misfires’. It is the FAGR that causes over-excited boyfriends to ejaculate annoyingly even before… And, staying with sex, our ability to be aroused by artificial sexual stimuli (and sometimes prefer them to pursuing the real thing) has roots in the FAGR. To be unambiguous here, without the partial, anticipatory and pleasurable ideations and responses enabled by the FAGR, there would be no porn industry – and probably no romantic fiction featuring handsome surgeons either.
But back at the urinal, I said none of this. Public lavatories are no forum for translating masterpieces of behaviourist gobbledegook into plain English, unless you are willing to be arrested as a pervert. We zipped up and parted without another word.
A preference for the behavioural
Hullian behaviourism is little more than a historical footnote today. But for psychologists who are willing to see past ‘obsolete behaviourists’, it offers surprisingly good explanatory value. The FAGR, for example, can readily explain quirky and involuntary behaviour/affect, which other approaches struggle with. And for psychologists willing to treat psychological theories as potentially complementary rather than adversarial, Hull’s theories are not difficult to reconcile with cognitive psychology and evolutionary psychology. Although Hull denied free will and recoiled from allowing his rats to be ‘buried in thought’, Hullian behaviourism incorporated needs, drives and motivations into its theories, as well as the more obvious hard-wired algorithms of learning theory such as reinforcement, escape/avoidance conditioning, stimulus generalisation, and discrimination learning. If Hull and his successors believed that behaviour and feelings could be best expressed in stimulus–response (S–R) terminology (and latterly S–S–R), they never argued that the processing in between S and R was tabula rasa. Hullian behaviourism is not cognitive neuroscience – but its differences from determinist versions of cognitive neuroscience (that of Edmund Rolls, for example) are chiefly in its preference for behavioural levels of explanation.
After Spence died, interest in ‘classical’ behaviourism steadily faded. Its ambitions were too Newtonian and its mathematics too daunting. A cynic might also argue that, by 1970, there were much easier ways for an ambitious psychologist to get a PhD than by trying to unravel Hull versus Tolman versus Skinner. Cognitive psychology steadily replaced behaviourism as the main focus of research interest in the last quarter of the 20th century. More recently, progress in cognitive psychology has become increasingly connected with developments in cognitive and computational neuroscience. Accordingly, the future of psychology seems likely to be characterised by questionnaires, brain scans and computer analogies rather than by conventional experiments and rat analogies. So what wisdom, if any, might we still take from Hull’s theories, methods and discoveries?
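A flavour of that Newtonian ambition survives in Hull’s best-known formula – sketched here from memory of his later system, so treat the notation as approximate rather than canonical – in which the potential to respond is a multiplicative product of learned and motivational factors:

```latex
% Hull's reaction potential (later system, approximate form):
% habit strength x drive x stimulus-intensity dynamism x incentive motivation
{}_{S}E_{R} \;=\; {}_{S}H_{R} \times D \times V \times K
```

Here \({}_{S}H_{R}\) is habit strength built up by reinforced practice, \(D\) is drive, \(V\) is stimulus-intensity dynamism and \(K\) is incentive motivation. Because the terms multiply, a zero anywhere – no habit, or no drive – means no response at all; quantifying each factor against the behaviour of real rats was precisely the daunting part.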
Hull’s most obvious contribution to knowledge was to identify the universal learning mechanisms that interact with individual circumstances, thereby creating individual differences, behaviour, feelings and behavioural predispositions. But Hull’s work also demonstrated, sometimes unintentionally, that learning is a fiendishly complex process, especially when its mechanisms interact and conflict with one another, or with evolutionary predispositions. It is in these circumstances that non-intuitive, often contradictory and sometimes self-destructive behavioural/affective outcomes appear – resulting in human behaviour as varied as anthropomorphism, the acquisition of the targets of our pity and prejudices, and the acquisition of the triggers of our anxieties. There is perhaps no better illustration of the continuing need to study how learning works in the 21st century than the fact that our over-enriched Western environments are simultaneously causing obesity in many children while enabling the dissemination of cultural messages that cause other children to starve themselves for fear of getting fat.
Psychology has not stood still since Hull’s death in 1952, of course. Latter-day behaviourists such as Albert Bandura have added social learning – and, via Tolman, latent learning – to Hullian learning theory. Hard-wired cognitive processes (empathy, perceptual set, stereotyping and cognitive dissonance, for example) have also been added to the explanatory mix. The ethology of Lorenz and Tinbergen has grown into evolutionary psychology, requiring us to add such notions as ‘selfish genes’ to our explanations for affective and behavioural dispositions (maternal ‘love’ and sacrifice, for example). The resulting fusion, for those happy to suspend debate about free will and inbuilt morality, offers a potent nomothetic account of how affect and behaviour arise. We do not know exactly how all these elements interact – and we certainly cannot quantify their interactions as Hull hoped. But, today, a broad range of psychological variables that contribute to affect and behaviour seems well established.
Unfortunately, this kind of explanatory eclecticism is not the direction in which psychology seems to be heading – and the culture of debate between psychologists remains dismayingly adversarial and blinkered. Simon Baron-Cohen’s quest for the origins of cruelty, for example, pits the concept of evil against empathy deficiency. It neither incorporates the mechanisms of Hullian/social learning theory nor explains why they should be completely ignored. Compounding this problem, many of those who lead public opinion in psychological matters today come from non-psychological backgrounds (psychotherapy, science journalism and biology, in particular). For most of these commentators, subtle interactions between learning processes are simply not on the radar at all – unknown unknowns, as Donald Rumsfeld might have put it.
The result has been a growing fashion for attributing, more or less directly, all kinds of complex feelings and behaviour to simplistic constructs drawn from neuroscience – including ‘psychopathic brain types’, deficient ‘empathy circuits’, overactive amygdalas and ‘reptile brains’. And when our feelings or behaviours happen to be maladaptive, they are commonly attributed to an ever-expanding list of causative mental disorders – again, with neurological correlates. Whether this neurological/medicalised approach will ultimately prove to have explanatory merit or not is moot. Certainly, it is startlingly bold in its cause-and-effect attributions; social anxieties, phobias and OCDs, the most obviously learned of all putative mental disorders, are already being confidently attributed to faulty neurology, and sometimes treated with neurosurgery.
But no one, not even Hull, could dispute that neuroscience and mental disorders are much more entertaining than behaviourism. And as long as figures such as Professor James Fallon are willing to boast that they can identify ‘psychopaths’ merely by glancing at brain scans, and while celebrities are willing to ‘out’ their mental disorders on telly, Horizon’s producers will not be commissioning a special feature on the fractional antedating goal response any time soon.
For ex-deputy prime minister Damian Green, forced to resign because he had porn on his office computer, this is especially unfortunate. Even if he attends counselling sessions for ‘porn addiction disorder’, he is unlikely to ever discover the part in his downfall played by the FAGR – that highly adaptive learning mechanism installed by God or evolution to facilitate reproduction which, rather un-adaptively, offers humans the effortless option of staying at home, filling their caves/computers with erotic material, and masturbating.
Even more unfortunately, in my view, he will never discover which long dead psychologist to thank for the insight.
- Chris Timms is an independent writer
Image: http://www1.appstate.edu/~kms/classes/psy3202/images/ClarkHull2.jpg (FAL), via https://commons.wikimedia.org/w/index.php?curid=15898163