
Lightening minds
An exclusive extract from ‘Forgetting’, by Professor Scott Small, director of the Alzheimer's Disease Research Center at Columbia University.
20 May 2025
In 1995, Jasper Johns – one of the greatest living American artists – and a group of art historians were summoned by the executors of Abstract Expressionist Willem de Kooning's estate on Long Island to evaluate his last series of paintings, completed in the 1980s. Aware of de Kooning's diagnosis of "probable" Alzheimer's disease, the experts were tasked with determining the quality of the paintings – whether they represented the late style of an aging creative genius and thus should be considered a last but legitimate chapter of his oeuvre, or whether they were so compromised by his disease that they should be placed outside the scope of his artistic trajectory and shelved from public viewing so as not to damage his legacy. It was my turn now to boil this down to a simple question: Could Alzheimer's disease impair an artist's ability to be creative?
We neurologists commonly field variants of this question when patients, family members, and sometimes even – most awkwardly – the courts ask how the disease can affect career performance. The answer, never easy to ascertain, depends on the stage of the disease. But it's much easier to come up with an answer for someone in a common profession than for a career artist.
Alzheimer's is a gradually progressive disease. While the disease's ground zero has been pinpointed, its time zero, its precise beginning, has never been clocked. We do know that it takes decades for a patient to develop dementia. By tracking large groups of people over decades, we know that Alzheimer's begins in the "preclinical stage," when neurons in a hippocampal region called the entorhinal cortex begin to malfunction, but only subtly so. The afflicted person might notice occasional memory lapses in newly learned information – trouble recalling the name of someone recently met, for example – but these remain subjective and are not reliably detected by formal memory tests. As the disease progresses over the course of many years, patients enter the "prodromal stage." During this middle stage, the disease is still largely confined to the hippocampus, but it begins its blanket killing of neurons, causing consistent and detectable memory impairment – forgetting a movie seen last night, a dinner party attended last weekend. From that point, it typically takes five to ten years for the disease to enter the "dementia stage," as the disease migrates out of the hippocampus up to higher-order cortical regions – hubs of complex networks that are the seats of other cognitive abilities. Most prominently affected during the early phases of dementia are the cortical regions where information is stored and processed and from which it is retrieved: the cortical hubs of the sensory processing streams described in previous chapters. Now, besides just the hippocampus "speaking," the patient begins experiencing overt pathological forgetting: forgetting events from younger years, forgetting the names of friends, forgetting words, forgetting travel routes, forgetting how to return home.
Unfortunately, the disease does not stop there. Ultimately, after many years, during which time the disease is confined to these cognitive areas, it spreads throughout the cortex, robbing a person of personality and personhood. And then it dives deep below the cortex to the nuclei – button-sized clusters of neurons – in the brain stem that are critical for maintaining consciousness and for basic bodily functions, including sleeping, eating, and breathing. It is this end stage of Alzheimer's disease that strikes terror in patients and family members when the diagnosis is invoked. But for most of the disease's course, it is confined to information-processing areas of the brain.
***
"Just shoot me if I get it!" is the declaration I often hear when chatting about Alzheimer's disease in social situations. I initially thought this was a reaction to knowing about the end stage of the disease and, without getting into the ethical debate over euthanasia, a tenable response. I came to realize that it was more about fear of the prodromal stage and the early phases of dementia, about losing one's cognition. Now that I have viewed Alzheimer's disease up close and throughout its long course, mainly as a physician but also in my own family, I appreciate how wrong this suicidal reaction is. Without minimizing the anguish caused to patients and their families, I must say that none of my patients during the prodromal stage or even the early phases of dementia wish to die. It turns out that we can lose many of our cognitive faculties and still engage with others and enjoy life. For some this might seem obvious, but even in our current age, which places so much importance on information – its processing, storage, and retrieval – my patients have taught me that we tend to overvalue computational dexterity. We seem unaware that many cognitive abilities are not critical for our being – our core personality traits, our ability to socialize with family and friends, our ability to laugh and love, to be moved by beauty. But clearly, cognition is important for most of our careers, certainly for mine, and its gradual demise exacts a price.
While reviewing with Jasper the stages of the disease, a required preamble to deciding whether an Alzheimer's disease patient can practice their profession, I realized that I had shifted into lecture mode. We settled back into a proper dialogue as we entered the more challenging part of the discussion: "staging" de Kooning's disease while he painted his last series, and determining whether the disease might have affected the paintings' quality.
We discussed whether an artist could still create works reflecting his or her genuine creativity if, like the patient H.M. described in chapter 1 of my book, the hippocampi were excised from the artist's brain in later life. The artist's cortical visual processing stream, from beginning to end, would be intact, and so would the associations the visual cortical hub had formed with other sensory modalities and with emotions – as long as this binding process had occurred months before the hippocampectomy. Our conclusion, therefore, was that an artist's creative process could survive when he or she was in the preclinical and prodromal stages of the disease.
The problem was that in de Kooning, the disease had very likely spread into the cortex during the 1980s, while he was creating the paintings in question. He was formally diagnosed with Alzheimer's disease dementia in 1989, and based on our experience as neurologists, patients enter the dementia stage a few years before they are diagnosed with dementia. It is hard to be precise because the anatomical progression of Alzheimer's disease is not like a sequential series of region-by-region surgical excisions. The disease's stages are not discretely demarcated, but bleed into one another. Once the disease gradually migrates into a new region, it festers there for years as it sickens neurons before slowly killing them off. Despite the inherent anatomical fuzziness, Jasper and I were able to conclude that for most of the 1980s, de Kooning's disease had already migrated out of his hippocampi and that his visual processing stream was affected – but only its higher-order cortical regions, at or around the visual central hub. I immediately emphasized what seemed very relevant to our brain-mapping mission: that only during the very end stages does the disease begin to propagate down the sensory processing stream, from the central hubs to the lower-order cortical hubs, where colors and contours are processed. There is little doubt that these lower cortical regions in de Kooning's brain were relatively intact during most of the 1980s.
And so, through our dialogues, Jasper and I finally, as best we could, completed an approximate map of de Kooning's disease. De Kooning painted much of his last series after the disease had already affected his visual cortical processing stream, but only its upper reaches. Even there, however, in the central hub, many neurons were still alive, although they were "sick." The complex perceptual processing that occurs in them, therefore, would have been dimmed but not darkened; the links these neurons form with other information and emotions would have been loosened but not absent. In contrast, the neurons in his lower-order cortical regions would have been robustly healthy throughout the decade. This map of disease might help explain, at least neurologically, why de Kooning's style was so dramatically different at that time than it was before. Gone were the complex and emotionally charged depictions of figures, objects, or landscapes, rendered in lushly dense and variegated brushstrokes. These were replaced instead by sparse ribbons, simple colors and contours.
The question was whether the disease map could inform us of the quality of his later paintings; whether his cognitive function was sufficiently impaired to affect his professional capacity as an artist. Neurologists are asked this all the time, in helping decide whether a patient can no longer practice their profession. Neuropsychological tests provide the objective evidence – equivalent to a mechanic's checklist when examining a car – to document when a run-down hippocampus and prefrontal cortex, and other cortical regions to which they connect, impair the ability to, say, process and remember information, to speak fluently, to calculate and manipulate numbers and other abstract symbols, and to navigate in space and time. This cognitive checklist can be translated practically to help decide when most patients' professions are likely impaired, but not so for an artist.
Outside a neurologist's scope, the determination of when an artist's performance is subpar has to be left to other experts – in this case Jasper and the group of art historians who visited de Kooning's estate in 1995. Collectively, they concluded that except for his last few paintings, most were of sufficiently high artistic quality, were consistent with his artistic trajectory, and belonged as legitimate parts of his oeuvre. This approval formed the basis of, and gave the green light to, an exhibition of these paintings, ultimately opening to nearly universal acclaim at the Museum of Modern Art in 1997.
Illuminated by the de Kooning case study, I articulated a practicing neurologist's conclusion: that artists can, in principle, practice their profession through the preclinical, prodromal, and early dementia stages of the illness. The most interesting conclusion to me as a cognitive scientist was that the creative process, even its genius, can occur when the upper regions of a sensory processing stream are impaired – its rich weave of associative networks in tatters – and when the stream is dominated by lower cortical regions, where sensory processing is simpler and associative networks are sparser and more rudimentary. Jasper's response to this neurologizing was a cocked head and a sly smile.
***
Jasper did offer glimpses into his creative process for painting his Flag (Orton, 1994). Not to me, but in a number of interviews published in the 1960s, where he acknowledged how sleeping was part of the process, and how the inspiration for painting an American flag came to him in a dream. Dreaming is known to be a fertile state for creativity (Ritter & Dijksterhuis, 2014) not just for Jasper and other artists but for scientists as well. I knew, however, that there was no point in my asking Jasper to expand on his quotes, as his coyness on this topic would undoubtedly prevail. But I sensed that he would be interested in new findings on the biological purpose of sleep, which are beginning to explain how dreaming benefits creativity.
The body's need for sleep has remained one of biology's great mysteries. A few minutes a day are all we really need to eat and drink enough for survival, but we are forced to dedicate many hours to sleep, to disconnect from the world around us, with all its lurking dangers. Those of us lucky enough to get the full eight hours our bodies crave, and even those who claim a couple of hours less, end up spending almost a third of our existence exposed and vulnerable to our surroundings. Despite this exposure, alternating wakefulness with sleep is so essential to life that no being with a complex nervous system has managed to escape this daily cycle of existence. Mammals do it (from humans to rodents), vertebrates do it (from fowl to fish), even the lowly invertebrates do it (from flies to worms). Yet, unlike nourishment and hydration, whose necessity for our bodily functions is easy to explain, the need for sleep has remained unknown.
Many hypotheses have been proposed in an attempt to explain why, despite the fact that conscious awareness of our surroundings increases our chances of survival, we are forced to dedicate hours a day to a slumbering oblivion in order to survive. One hypothesis, whose general contours were proposed a quarter of a century ago, has slowly amassed circumstantial support. Only in the past few years, however, with the development of sophisticated technologies, has it been tested and confirmed.
Francis Crick, the scientific luminary who shared the 1962 Nobel Prize in Physiology or Medicine for describing the double-helical structure of DNA, sparking the molecular revolution described in previous chapters, shifted his focus later in his career. Brazenly, he decided to tackle the most intractable questions in the brain sciences – the nature of consciousness and the mystery of sleep. In 1983, he published a theoretical paper that hypothesized about sleep's biological purpose (Crick & Mitchison, 1983). He summarized his elaborate idea in one pithy and startling conclusion: "We dream in order to forget."
Recall that the neuronal correlates of memory are those small protrusions from dendrites, the dendritic spines. The billions of neurons in our cortex each have thousands of dendritic spines, so the number of individual spines is truly astronomical. Their sole purpose is to modify their size, and the number of neurotransmitter receptors contained within them, with experience. Each spine is endowed with the molecular machinery to sprout in response to experience, and each experience triggers vast fields of spine growth.
Imagine spending a day in your life wearing eyeglasses with a built-in mini-camera documenting, frame by frame, the thousands of images you experience. Viewing your daily odyssey as a slide show later that evening, you would recognize many, if not most, of the experiences. Each moment of recognition would reflect the growth of millions of spines distributed across your cortex. Even though many experiences shared overlapping information and therefore overlapping spines, each was at least partially distinct. Your recognition of them is psychological evidence that your brain must have grown, even if just microscopically, throughout the day. Now imagine a whirlwind tour of the world, a weeklong adventure spent jetting to, and spending a chock-full day of sightseeing in, very different environments – a city, a jungle, the mountains, ancient ruins, the desert, a bucolic countryside, and, most boring, a resort island. Each day, you are pumping your brain with thousands of distinctly vibrant memories, each fragment of memory a lawn's worth of spine growth. Leaving aside the spatial problem – that your rigid skull prevents your brain from significantly expanding in size – such spine growth gone wild would cause cognitive havoc. Each spine can only grow so much, and sooner or later your cortical spines would fill to capacity. When this happened, like a saturated digital picture with no contrast across its pixels, memory snapshots of previous experiences would be whited out and left indistinguishable. Running out of spines, the cortex would eventually have no room left for new memories to form. With your cortical sensory processing regions densely populated with overgrown spines, even your perceptions of the outside world would be affected. They would become distorted as neurons in these cortical regions became overly excitable to incoming information, and they might even be deranged by the information overload, which would throw the normal sensory flow out of whack (see Waters et al., 2018).
Crick first proposed in 1983 that sleeping solves this problem by what has come to be called "smart forgetting," an idea that has been modified and refined over the years by his students and other investigators. Based on the principles of neuronal plasticity, sleeping, particularly dreaming, should have a dual and opposing effect on the fields of new spines grown in response to our daily experiences. While we dream, the hippocampus stimulates and replays fragments of our experiences in the cortex, but not the full episode experienced in all of its elaborate intricacies. Dreams are like those "previously seen" recaps in TV series, in which only the most important snippets are reshown, the very few needed to capture and reinforce the gist of the story line. In so doing, the hippocampus persistently stimulates a few privileged cortical spines, stabilizing into a memory those few whose growth reflects the gist of our daily experiences. More sweepingly, however, the vast majority of new spines are left unstimulated while dreaming. Unstable and neglected, vast fields of freshly grown spines should, according to the general hypothesis, wilt back down. After a good night's sleep, we might expect to see some pockets of newly grown spines, now stabilized into a memory. But the net effect, comparing the cortex at the day's end and the morning after, would be spine shrinkage – that is, the net effect of sleeping is forgetting. While it is true that a secondary gain of the large-scale pruning that occurs during sleep would benefit memory – topiary-like, accentuating its details – sleeping's main purpose, according to this hypothesis, is to refresh the cortex. By cleaning and clearing the cortical slate, sleep reopens the cortex to accommodate future memories; by reducing neuronal excitability and effectively deleting extraneous cortical information, it preserves the regulated processing and flow of sensory input.
While this hypothesis makes sense, it is only in the past couple of years that studies have empirically validated its key assumption. In 2017, using powerful new microscopes and other sophisticated techniques, researchers were at last able to investigate spine size across large swaths of cortex. The results were strikingly clear: the net effect of sleep is to cause wholesale spine shrinkage – to cause forgetting. To paraphrase one of Crick's former students, who performed many of the seminal studies documenting sleep-induced forgetting, sleep is "the price we pay" for having a nervous system so eager to learn that it evolved trigger-happy spines, ticklishly responsive to our external worlds (Tononi & Cirelli, 2014). The added elegance of this hypothesis is that it explains why we must disconnect from the external world for so long and on a daily basis. Shrinkage of spines cannot happen immediately. It takes hours for the delicate molecular machinery that governs active forgetting to carefully disassemble a newly grown spine. So, in contrast to hunger, which can be sated by a few ravenous mouthfuls, or thirst, which can be quenched by a couple of long swigs, forgetting can't be rushed. Forgetting takes its slow and deliberate time.
The behavioral consequences reported by people who are forced to go for days without sleep provide more experiential support for the hypothesis. If sleep is important for memory, as earlier hypotheses have contended, then sleeplessness should ultimately cause the kind of memory loss seen throughout the stages of Alzheimer's disease. But this is not the case. What is reported instead are symptoms consistent with neurons that are overly excitable to sensory input, with cortical regions in sensory overload and overflow, all to be expected if sleep's primary purpose is to forget, to shrink spines and erase information. The telltale symptoms of this lack of forgetting, devastatingly experienced by virtually anyone forced to go days without sleep, are distorted and deranged perceptions. Sleeplessness affects every part of the visual processing stream – distorting the way we see colors, contours, and a percept's component parts, and ultimately deranging the combined whole, even fleetingly causing us to hallucinate.
The benefits of sleep on creative insight also come into play when looking at the effects of sleep-induced forgetting. Psychologists have pored over the introspections of individuals who are generally agreed to be highly creative – visual artists, poets, novelists, musicians, physicists, mathematicians, and exceptional biologists (e.g. see Ghiselin, 1985). A unifying thread among these testimonials has emerged. Colloquially, "to create" implies novelty or innovation; "to be creative" suggests a broader generative capacity. But the recurrent theme that epitomizes the creative process is not generating something brand-new out of the blue. Rather, a creative spark occurs when unexpected associations among existing elements are suddenly forged – a sort of cognitive alchemy. Phrases people use to describe creative insight include how elements in one's mind engage in "combinatory play," how they "collided until pairs interlocked … making a stable combination," or how they "drew together beneath the surface through the almost chemical affinities of common elements." My favorite description is from the poet Stephen Spender, who described his creative process as "a dim cloud of an idea which … must be condensed into a shower of words."
Psychologists set out to devise a behavioral task that captures this creative crucible (Mednick, 1962). Consider the following three words: "elephant," "lapse," "vivid." Think of a fourth word that relates to all three. The answer is "memory." How about a word that is associated with another trio: "rat," "blue," "cottage"? If you answered "cheese," you are right. But even if you did not, take a moment to reflect on these two answers. Once you put the words together and come up with or are shown the right answer, its accuracy is obvious, and you experience an aha moment. There is no obvious route a mind must take, no formula for how to cognitively compute the right answer. It just happens. The correct answer is always there, somewhere in your cortex. You know that rats eat cheese; you have eaten, or at least seen, blue cheese or cottage cheese. But if you were asked to free-associate to "rat" alone, "cheese" might not come first to your mind. Unless you are a cheesemonger, "blue" might elicit "sky"; "cottage" might spark "house." Only if you are a pest control expert, a ratcatcher who has experimented with various baits, might the word "cheese" come first to your mind. Similarly, only if you are a memory expert like me might the word "memory" be your response to "elephant," "lapse," and "vivid." On the flip side, the strength of my association with words linked to "memory" can potentially constrain my creativity. I cannot, for example, see one of the sea's most fabulous creatures, the seahorse (hippocampus in Latin), without being locked into an immediate association with "memory."
And this is exactly the point. Creativity requires preexisting associations – requires memory – but they must remain loose and playful. The artists' testimonials teach us that creative abilities are forged by immersion in various elements and the establishment of associations between them, but only when the links are relaxed. All visual artists immerse themselves in visions, poets in words, scientists in facts and ideas. But what sets the great ones apart is that their associations are not set in stone.
Loosening associations, relaxing links, associations that are set in clay not in stone: all are required for creativity, and all sound like forms of forgetting. Is this true? Evidence that forgetting is beneficial for creativity (e.g. see Storm & Patel, 2014) first came from studies in which psychologists used various ways to either strengthen or loosen associations between word pairs, like "blue–sky" or "cottage–house." For example, by repeatedly exposing subjects to word pairs, researchers found that the subjects formed tighter memories of those couplets and, predictably, initially performed worse on the creativity task. The subjects' performance gradually improved over the next few days, however, an improvement that tracked with forgetting's known timeline.
While those findings are interesting, other evidence that links forgetting to creativity comes from sleep studies. These studies clearly show that our creativity, whether measured by the creativity word task or by other measures, significantly benefits from a good night's sleep and in particular from our dreaming. And, when examined, this benefit did not occur because sleeping is somehow restful. Nor did it occur because dreaming happens to sharpen a few memory snippets of what our minds were exposed to throughout our daily peregrinations. Most of the studies were performed before the definitive evidence validated Crick's prediction that we sleep in order to forget much of our quotidian memories. Nevertheless, with the benefit of scientific hindsight, the inescapable conclusion is that we are most creative when associations of what we do remember are kept loose and playful by sleep-induced forgetting.
***
Anyone can learn why we need to eat, how food is digested, how nutrients are delivered to cells, and how cells combust them for energy production. But nothing can teach this need more acutely than experiencing a bout of starvation. The powerful yearning, the overwhelming craving, you feel for sleep after a long and eventful day is, likewise, the best way to fully grasp our need for forgetting. The blissfulness of a good night's sleep is to experience your dendritic spines neatly trimmed, your mind lightened and refreshed to record the day ahead. The mental static and derangements you feel after nights of insomnia, in part, are the result of a brain overloaded with unnecessary information.
As Jasper and I were wrapping up our long discussions, we considered whether forgetting evolved specifically for creativity. While it is undoubtedly true that we and other species have benefited from creative insight, it is more likely that creativity piggybacked on the forgetting process, whose primary reason to evolve was for the cognitive and emotional benefits described in previous chapters. But it is still true that by lightening our minds, forgetting unmoors us from memories that weigh our minds down and prevent flights of fancy and creativity.
- Forgetting: The New Science of Memory is published by August Books.
References
Crick, F., and Mitchison, G. (1983). The Function of Dream Sleep. Nature, 304(5922), pp. 111–114.
Ghiselin, B., ed. (1985). The Creative Process: Reflections on Invention in the Arts and Sciences. Berkeley: University of California Press.
Mednick, S. A. (1962). The Associative Basis of the Creative Process. Psychological Review, 69, pp. 220–232.
Orton, F. (1994). Figuring Jasper Johns. London: Reaktion Books.
Ritter, S. M., and Dijksterhuis, A. (2014). Creativity – The Unconscious Foundations of the Incubation Period. Frontiers in Human Neuroscience, 8, p. 215.
Storm, B. C., and Patel, T. N. (2014). Forgetting as a Consequence and Enabler of Creative Thinking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), pp. 1594–1609.
Tononi, G., and Cirelli, C. (2014). Sleep and the Price of Plasticity: From Synaptic and Cellular Homeostasis to Memory Consolidation and Integration. Neuron, 81(1), pp. 12–34.
Waters, F., et al. (2018). Severe Sleep Deprivation Causes Hallucinations and a Gradual Progression Toward Psychosis with Increasing Time Awake. Frontiers in Psychiatry, 9, p. 303.