What type of learning involves voluntary responses that come to be controlled by their consequences?

Associative learning is defined as learning about the relationship between two separate stimuli, where the stimuli might range from concrete objects and events to abstract concepts, such as time, location, context, or categories.

From: Handbook of Clinical Neurology, 2020

Associative Learning

E. Fantino, S. Stolarz-Fantino, in Encyclopedia of Human Behavior (Second Edition), 2012

Challenge and limitations of the biological constraints position

The focus on biological constraints on associative learning has given rise to two classes of criticism of traditional theories of reinforcement and associative learning. The first criticism is that laboratory research on learning is artificial, and that the principles of the acquisition and maintenance of behavior that result from such research therefore lack generality for natural settings. Technically, principles derived from such studies are said to lack external validity. The second challenge is the claim that traditional theories incorrectly assume that any emitted response and any contingent reinforcer may be associated with equal effectiveness (technically, the principle of equipotentiality, introduced above).

The first challenge does not negate the value of laboratory research on the principles of associative strength and behavior change. The natural environment imposes temporal constraints on access to vital resources. The study of schedules of reinforcement under rigorously defined laboratory conditions may help determine how natural restrictions result in changes in associative strength and in behavior. The focus on biological constraints is important in that it correctly suggests that different patterns of responding may emerge depending upon the nature of the species being studied and on the particular selection of stimuli, responses, and reinforcers. But if we are to comprehend how the effects of response and temporal partitioning are altered by biological constraints on learning, we must also understand how they operate when constraints are absent. Because the biological factors are demonstrably potent, an arbitrary laboratory situation in which the effects of these factors are minimized becomes a definitive way to analyze the effects of temporal factors in isolation. The emerging principles must, at some point, be integrated with the constraints presumed to operate in the wild. To summarize, laboratory studies have value both because they reveal fundamental ways in which behavior change is effected by limiting access to vital resources and because these effects can be assessed most clearly under conditions in which the intrusion of biological constraints is minimized.

The second challenge argues against the equipotentiality position. This challenge is well taken in the sense that most experimenters in the area appeared to harbor the belief that the particular selection of stimulus, response, reinforcing event, and species was relatively unimportant. However, this assumption was never endorsed by the great learning theorists: in fact, Edward Thorndike, B. F. Skinner, and others explicitly stated that the principle was invalid. Indeed, well before the issue of biological constraints on learning became popular, Skinner acknowledged that a thorough appreciation of the determinants of behavior change could not be accomplished simply from the study of representative stimuli, responses, and reinforcers. Recognition of this fact led Skinner to select stimuli, responses, and reinforcers that would likely minimize intrusion by complicating biological factors. He hoped that the resultant behavior would be orderly and that its form would also describe the behavior maintained by other stimuli, responses, reinforcing events, and species. As Barry Schwartz has argued, the central problem is to appreciate which features of a phenomenon are attributable to general associative principles and which are attributable to situation-specific variables.

We conclude by noting that both the complexity and diversity of behavior require that both phylogenetic and ontogenetic principles be part of any complete explanation of behavior. Whereas phylogenetic factors may predominate where insect behavior is concerned, ontogenetic (learned) factors may assume a more central role in the explanation of the more generally flexible behavior of humans. The real challenge is to achieve the correct mix of phylogenetic and ontogenetic factors that best accounts for the behavior of individual members of a particular species in a given circumstance.


URL: https://www.sciencedirect.com/science/article/pii/B9780123750006000367

International Review of Neurobiology

Richard F. Thompson, ... David J. Krupa, in International Review of Neurobiology, 1997

XV. Cerebellar Involvement in Other Forms of Memory

This chapter has focused on the essential role of the cerebellum in classical conditioning of discrete behavioral responses, a basic form of associative learning and memory. This is the clearest and most decisive evidence that exists at present for the localization of a memory trace to a particular brain region (the cerebellum) in mammals. A closely related and increasingly definitive literature supports the view that the cerebellum learns complex, multijoint movements. Thus, Thach et al. (1992) have proposed a model of cerebellar function that suggests that the job of the cerebellum is, among other things, to coordinate elements of movement in its downstream targets and to adjust old movement synergies while learning new ones. These authors suggested that beams of Purkinje cells connected by long parallel fibers could link actions of different body parts represented within each cerebellar nucleus and exert control across nuclei to produce coordinated multijoint movements. The model Thach et al. (1992) proposed suggests that the cerebellum is involved not only in coordinating multijoint tasks but also in learning new tasks through an activity-dependent modification of parallel fiber–Purkinje cell synapses (Thach et al., 1992; Gilbert and Thach, 1977). Evidence in support of this hypothesis includes Purkinje cell recordings, which reveal patterns of activity consistent with a causal involvement in the modification of the coordination between eye position and hand/arm movement (Keating and Thach, 1990), and studies in which focal lesions by microinjection of muscimol severely impair the adaptation of hand/eye coordination without affecting the performance of the task (Keating and Thach, 1991). It is known that the conditioned eyeblink response is a highly coordinated activation of several muscle groups.

There is growing evidence that the cerebellum is critically involved in many other forms of learning, memory and cognition, as documented in this volume. This chapter notes just a few learning examples. Steinmetz and associates (1993) made use of a lever-pressing instrumental response in rats. Animals were trained to press for a food reward and to avoid a shock. Cerebellar lesions abolished the learned lever press avoidance response but did not impair the same lever press response for food. It would seem that the cerebellum is necessary for learning discrete responses to deal with aversive events in both classical and instrumental contingencies. Supple and Leaton (1990a,b) have shown that the cerebellar vermis is necessary for classical conditioning of the heart rate in both restrained and freely moving rats.

In recent years, the hippocampus has become the sine qua non structure for spatial learning and memory in rodents (e.g., Morris et al., 1986; O'Keefe and Nadel, 1978). However, it appears that the cerebellum may also play a critical role in spatial learning and performance, a role that extends well beyond motor coordination. Some years ago, Altman and associates found that rats with cerebellar cell loss following early postnatal X irradiation were severely impaired in maze performance based on spatial cues (Pellegrino and Altman, 1979). Lalonde and associates (1990) have made use of a range of mutant mice with cerebellar defects (staggerer, weaver, pcd, lurcher) and also used cerebellar lesions to explore learning impairments. In general, they find significant impairments in tasks involving spatial learning and memory (for a review of this important work, see Lalonde and Botez, 1990; Lalonde, this volume).

The Morris water maze has become the quintessential task for detecting hippocampal lesion deficits in spatial learning and memory (Morris, 1984). In a most important study, Goodlett et al. (1992) trained pcd mice in the Morris water maze (see earlier discussion for a description of this mutant). Results were striking. The mutants showed massive impairments on this task in distal cue spatial navigation, both in terms of learning and in terms of expressing biases on probe trials. In contrast, they showed normal performance on the proximal cue visual guidance task, thus demonstrating that “the massive spatial navigation deficit was not due simply to motor dysfunction” (Goodlett et al., 1992). The pcd mutants showed clear Purkinje cell loss without significant depletion of hippocampal neurons, but their deficits in spatial learning and memory were actually more severe than those typically seen following hippocampal lesions!

Finally, there is an important and growing body of literature concerned with the role of the cerebellum in timing (Ivry, this volume; Keele and Ivry, 1990). These authors found that the accuracy of timing motor responses correlated across various motor effectors, and with the acuity of judging differences in intervals between tone pairs, suggesting that a localized neural system may underlie timing. Further, the critical timing site (in humans) appears to be the lateral cerebellum, where damage impairs motor and perceptual timing and the perception of visual velocity. These are the same regions critical for classical conditioning.

As Keele and Ivry (1990) note, classical conditioning of discrete adaptive behavioral responses like the eyeblink and limb flexion involves very precise timing, requiring detailed temporal computation. In well-trained animals, the peak of the conditioned eyeblink response occurs within tens of milliseconds of the onset of the US over a wide range of CS–US onset intervals (Smith, 1968; Smith et al., 1969). This adaptive timing of the CR in classical conditioning is one of the most important unsolved mysteries regarding brain substrates of memory. The neural mechanisms that yield this adaptive timing may provide keys to the functions of the cerebellum and to the nature of memory storage processes in the brain.

The evidence reviewed here and in the other chapters of this volume makes it abundantly clear that the cerebellum is not simply a structure subserving motor coordination, but plays critical roles in learning and memory storage, in spatial learning and memory, in timing, in verbal associative memory in humans (see, e.g., Fiez et al., 1992), and, more generally, in cognitive processes (see, e.g., Schmahmann, this volume).


URL: https://www.sciencedirect.com/science/article/pii/S0074774208603517

Mechanisms of Memory

Stephanie A. Carmack, ... Stephan G. Anagnostaras, in Learning and Memory: A Comprehensive Reference (Second Edition), 2017

4.27.4 Conclusion

Associative learning and memory are clearly involved in components of addiction, particularly in relapse. Contexts, cues, and affective states associated with drug use can trigger craving and goal-directed instrumental drug seeking and taking, whether through a positive incentive state or through removal of an aversive state. After chronic or repeated use, drug seeking and craving may be driven by learned associations and/or autonomous, habitual cue-conditioned behavior. Additionally, there is significant overlap between the neurobiology of associative learning and memory and the neurobiology of addiction; they share many molecular substrates and neurocircuits. As a result, current accounts of addiction include aspects of associative learning and memory, and research on the neural substrates of drug conditioning now dominates the literature. However, the transition from recreational to pathological and compulsive drug seeking may involve processes other than associative learning and memory, such as sensitization, allostasis, or loss of inhibitory control. The long-lasting neuroadaptations underlying these components may only partially overlap with those underlying traditional associative learning. Understanding the neurobiology of addiction-related “memories,” whether associative or nonassociative, is necessary for the development of effective treatments for addiction-related behaviors.


URL: https://www.sciencedirect.com/science/article/pii/B9780128093245211012

Cerebellum: Associative Learning

K.M. Christian, in Encyclopedia of Behavioral Neuroscience, 2010

Introduction

Associative learning is the process through which organisms acquire information about relationships between events or entities in their environment. It is expressed as the modification of existing behaviors, or the development of novel behaviors, that reflects the conscious or unconscious recognition of a contingency. It is the contingent, and contiguous, relationship among stimuli that is a hallmark of associative learning – a meaningful temporal or spatial proximity of A and B and the perceived consequent occurrence of B if A. As such, it is fundamental to our sense of causality and is the basis of much of our understanding of the external world. Associative learning also underlies the majority of our adaptive behavior when the association is recognized to have either positive or negative consequences. Adaptive changes in behavior can be triggered by both aversive and appetitive stimuli and can thus enable the organism to avoid negative outcomes or to increase the probability of obtaining a reward. This type of associative learning depends on the presence of signaled reinforcement.

However, there are many forms of associative learning, and the brain regions that support the acquisition and expression of these learned behaviors are determined by the nature of the information acquired and the response itself. Certainly, sensorimotor systems are required, first, to transduce the information related to the critical stimuli and then, subsequently, to perform the sequence of actions reflective of the memory, but it is the integrative neural trace of the association itself that is the focus of learning-related research. Several structures have emerged as being the critical loci of plasticity underlying specific types of associative memory. For example, emotion-based memory often involves the amygdala, and the amygdalar nuclei are critical for the acquisition of conditioned fear – a process wherein neutral stimuli become associated with a fearful event. Similarly, the medial temporal lobe is involved in some forms of associative learning as well as long-term storage of associative information. Structures within this region show changes in neural activity at time points that precede and/or coincide with the behavioral expression of newly acquired associations between visual stimuli, as well as novel spatial and temporal relationships between task-relevant environmental features. But perhaps the most exhaustive and complete description of the neural instantiation of memory in the mammalian brain is that involving a simple form of associative memory localized to the cerebellum – namely, classical conditioning of discrete reflexes to behaviorally neutral stimuli. That the cerebellum is involved in cognitive processes at all was originally met with some resistance, although it is now widely accepted that it is the essential neural substrate for the acquisition and expression of this associative memory.


URL: https://www.sciencedirect.com/science/article/pii/B9780080453965001317

Motor learning

Şermin Tükel, in Comparative Kinesiology of the Human Body, 2020

Associative learning

Associative learning modifies behavior by relating one stimulus to another, or by relating a stimulus to a particular behavior. In classical conditioning, a person pairs two stimuli, and a reflex response is thereby modified. In operant conditioning, a person pairs his/her own behavior with the consequences of that behavior (Kandel et al., 2000). Classical conditioning is a simple form of associative learning in which the behavioral response is modified by a conditioned stimulus. In the classical example, developed by Ivan Pavlov, dogs produce a reflex salivation response when conditioned with a sound stimulus. In the experiment, dogs associated the sound stimulus with the food (the natural stimulus leading to the salivation response) after sufficient conditioning. The dogs then show the salivation response to the sound in the absence of food. Classical conditioning usually involves 1) emotional responses and 2) skeletal muscle responses. Eye blink conditioning is a form of classical conditioning that has been used to investigate the neural mechanisms underlying learning and memory. A mild air puff is a natural stimulus that elicits an eye blink reflex. In the experiment, the air puff is paired with visual or auditory stimuli, so that, for example, the eye blink reflex is seen when a person hears a sound. Eye blink conditioning experiments have shown the important role of the cerebellum in associative learning, especially in the acquisition and timing of motor actions (Gerwig et al., 2007).

Operant conditioning (also called trial-and-error learning) is another type of associative learning in which a voluntary motor behavior is strengthened or weakened depending on its favorable or unfavorable consequences. When a motor behavior is associated with desirable consequences such as a reward (positive reinforcement), there is a tendency to repeat the behavior. In the opposite situation, when the behavior results in unwanted consequences such as pain or failure (punishment), the likelihood of its occurrence decreases. In rehabilitation science, operant conditioning of spinal reflexes has been investigated as a promising tool for improving locomotion. Simply stated, stimulus-induced muscle responses (reflexes) are used to induce neuroplasticity. In a study by Wolpaw’s group, patients with incomplete spinal cord injury decreased the soleus H-reflex through operant down-conditioning, which was associated with faster and more symmetrical locomotion (Thompson et al., 2013). Similarly, operant up-conditioning has led to an increase in the motor-evoked potential of the tibialis anterior in incomplete spinal cord injury (Thompson et al., 2018).
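The consequence-driven strengthening and weakening described above can also be sketched as a minimal simulation in which the probability of emitting a voluntary response is nudged up after reinforcement and down after punishment. The update rule, function name, reinforcement probability, and parameter values below are illustrative assumptions and are not taken from the chapter or from the cited studies.

```python
import random

def simulate_operant_conditioning(n_trials=200, learning_rate=0.1, p_reinforce=0.8, seed=0):
    """Toy sketch of operant conditioning: a voluntary response becomes more
    likely when it is reinforced and less likely when it is punished.
    All values here are illustrative assumptions, not data from the chapter.
    """
    rng = random.Random(seed)
    p_response = 0.5                      # initial probability of emitting the behavior
    for _ in range(n_trials):
        emitted = rng.random() < p_response
        if emitted:
            reinforced = rng.random() < p_reinforce   # favorable vs. unfavorable consequence
            target = 1.0 if reinforced else 0.0
            # Nudge the response probability toward the consequence-defined target.
            p_response += learning_rate * (target - p_response)
    return p_response

if __name__ == "__main__":
    # With mostly favorable consequences, the response probability rises above its 0.5 start.
    print(f"Final response probability: {simulate_operant_conditioning():.2f}")
```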


URL: https://www.sciencedirect.com/science/article/pii/B9780128121627000254

Non-Elemental Learning in Invertebrates

M. Giurfa, ... R. Menzel, in Encyclopedia of Animal Behavior, 2010

Elemental Forms of Associative Learning in Invertebrates

Associative learning allows animals to extract the logical structure of the world by evaluating the sequential order of events. Two major forms of associative learning are usually recognized: in classical conditioning, animals learn to associate an originally neutral stimulus (conditioned stimulus (CS)) with a biologically relevant stimulus (unconditioned stimulus (US)); in operant conditioning, they learn to associate their own behavior with a reinforcer and relate this connection to the contextual conditions of the environment. In their simplest version, both learning forms rely on the establishment of associative links connecting two (or more) specific and unambiguous events in the animal’s world. For instance, in absolute classical conditioning (A+), a direct link between an event (A) and reinforcement (+) is learned, while in differential classical conditioning (A+ vs. B−), simple, unambiguous links between A and reinforcement and between B and the absence of reinforcement are simultaneously learned.
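This contingency structure can be illustrated with a toy error-correction (delta-rule) simulation of differential conditioning (A+ vs. B−). The update rule, function name, and parameter values are illustrative assumptions for the sketch and are not taken from the excerpt or from any specific model cited in it.

```python
def train_differential_conditioning(n_trial_pairs=50, alpha=0.2):
    """Toy delta-rule sketch of differential classical conditioning (A+ vs. B-).

    Stimulus A is always reinforced, stimulus B never is; each stimulus's
    associative strength is nudged toward the reinforcement it obtains.
    The rule and parameters are illustrative assumptions only.
    """
    strength = {"A": 0.0, "B": 0.0}                     # associative strength of each CS
    trials = [("A", 1.0), ("B", 0.0)] * n_trial_pairs   # interleaved A+ and B- trials
    for cs, reinforcement in trials:
        prediction_error = reinforcement - strength[cs]
        strength[cs] += alpha * prediction_error        # error-correction update
    return strength

if __name__ == "__main__":
    final = train_differential_conditioning()
    # A approaches 1.0 (reinforced), B stays at 0.0 (never reinforced).
    print(f"Associative strength: A = {final['A']:.2f}, B = {final['B']:.2f}")
```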

Multiple cases of these simple learning forms have been described for invertebrates. For instance, in the honeybee Apis mellifera, olfactory conditioning of the proboscis extension response (PER) has been repeatedly used for the study of elemental classical conditioning and its neural substrates. Individually harnessed hungry bees that do not respond to an odor presentation with an extension of their proboscis do so when their antennae are stimulated with sucrose solution (the US). If the odor (the CS) is forward paired with sugar, the bees learn an association between odor and sugar reward so that they exhibit conditioned PER to future presentations of the odor alone (Figure 1). An example of elemental operant conditioning is provided by the aquatic mollusc Lymnaea stagnalis, which can be trained to suppress the opening of its pneumostome, a small respiratory orifice, when the animal surfaces and attempts to breathe. This is achieved by repeated aversive mechanical stimulation of the pneumostome, which leads the mollusc to reduce its attempts to open the pneumostome as training progresses. In both examples, the neural networks mediating associative learning are relatively simple and well studied, thus underlining the advantages of invertebrates as model systems for understanding the neural mechanisms of simple forms of learning.


Figure 1. Olfactory conditioning of the proboscis extension reflex. (a) An individual bee is immobilized in a metal tube so that only the antennae and mouth parts (the proboscis) are free to move. The bee is set in front of an odorant stimulation setup which is controlled by a computer and which sends a constant flow of clean air to the bee. The air flow can be rerouted through cartridges presenting chemicals used for olfactory stimulation (conditioned stimuli or CS). A toothpick soaked in sucrose solution (unconditioned stimulus or US) is delivered to the antennae and the proboscis. In this appetitive classical conditioning, the bee learns to associate odorants (CS) and sucrose solution (US). (b) The proboscis extension reflex of the honeybee. Bees exhibit this reflex when their antennae are touched with sucrose solution (US). After successful conditioning, bees extend the proboscis to the odorant (CS) which predicts the US.


URL: https://www.sciencedirect.com/science/article/pii/B9780080453378000954

Learning Theory and Behaviour

M. Heisenberg, B. Gerber, in Learning and Memory: A Comprehensive Reference, 2008

Associative learning is supposed to come about through changes in neurons, and it is believed that memory-guided behavior relies on these changes (Lechner and Byrne, 1998; Martin et al., 2000; Cooke and Bliss, 2006). Thus, in terms of physiology, past experiences would leave traces in the form of altered properties of neuronal circuits (see Chapters 1.02, 1.04, 1.33, 1.34). Here, we discuss whether it is possible to assign such changes to specific cells in the brain and, in this sense, to localize those memory traces underlying conditioned behavioral modifications. What could be the criteria for an accomplished localization of such a memory trace? As argued elsewhere (Gerber et al., 2004), if a certain set of cells were said to be the site of a memory trace, one should be able to show that:

1. Neuronal plasticity occurs in these cells and is sufficient for memory.
2. The neuronal plasticity in these cells is necessary for memory.
3. Memory cannot be expressed if these cells cannot provide output during test.
4. Memory cannot be established if these cells do not receive input during training.


URL: https://www.sciencedirect.com/science/article/pii/B9780123705099000668

Learning Theory and Behavior

André Fiala, Thomas Riemensperger, in Learning and Memory: A Comprehensive Reference (Second Edition), 2017

1.26.4 The Memory Engram: Sparse Activation of Kenyon Cell Ensembles and Reactivation During Memory Readout

Associative learning relies on neural properties to predict positive or negative future situations from experienced combinations of environmental stimuli and/or the animal's own actions. Therefore, the variety of environmental stimuli and their combinations have to be encoded in the brain so that values can be assigned to them through associative learning. The sparse information coding scheme of the mushroom body could provide a framework for the neuronal representation of a huge variety of sensory stimuli selectively in terms of distributed Kenyon cell ensembles. To test whether an ensemble of active Kenyon cells is sufficient to encode an aversive associative memory, one would have to determine exactly which ensemble encodes a given odor, artificially activate that ensemble in coincidence with a punitive stimulus, and replay the ensemble's activity pattern to analyze whether this causes a learned avoidance behavior. This experiment is not possible with current techniques because specific patterns of Kenyon cells representing certain odor stimuli cannot be genetically determined. An alternative approach has been taken by Vasmer et al. (2014). The thermogenetic cation channel (Drosophila transient receptor potential A1 [dTRPA1], Hamada et al., 2008) was fused with a fluorescent tag and randomly expressed in groups of Kenyon cells. The dTRPA1 channel opens at temperatures above ∼25°C (Hamada et al., 2008), i.e., neurons can be efficiently depolarized by placing the animals at ambient temperatures above 25°C. Vasmer et al. (2014) activated groups of Kenyon cells simultaneously with an electric shock. In a subsequent test, the animals could freely distribute along a temperature gradient and choose a place of preferred temperature. After the simulated associative training procedure, the animals avoided temperatures above 25°C significantly more strongly than control animals did, i.e., the animals avoided through their own behavior a reactivation of the trained ensembles of Kenyon cells. This memory effect could be observed only if the number of Kenyon cells that were thermogenetically activated was sufficiently high, but not too high, i.e., ∼5% of all Kenyon cells (Vasmer et al., 2014), which corresponds to the number of Kenyon cells activated by a given odor. These results demonstrate two points. First, the activity of Kenyon cell ensembles in coincidence with a salient stimulus is sufficient to induce an associative memory. Second, memory retrieval is characterized by a closed feedback loop between the behavioral action and, as a consequence, the activity of the trained Kenyon cell ensembles. In other words, the animals follow the rule “avoid situations in which those ensembles of Kenyon cells that were aversively trained are active.”


URL: https://www.sciencedirect.com/science/article/pii/B9780128093245210304

Neurobiological Models of Aging in the Dog and Other Vertebrate Species

Elizabeth Head, ... Carl W. Cotman, in Functional Neurobiology of Aging, 2001

2. Associative Learning: Simple and Complex Discriminations Have Differential Age Sensitivities

Associative learning tasks involve the establishment of a link between two events as a consequence of repeated pairing of those events. Discrimination learning is one example of an associative learning task in which an animal is required to establish an association between a particular stimulus and delivery of a reward, while another stimulus is not associated with a reward. Dogs readily acquire simple discrimination learning problems, such as a visually based object discrimination task where the two stimuli are different objects (Milgram et al., 1994).

We have also looked at more complex types of discrimination tasks. These tasks include a size discrimination task (where the objects are identical in all respects except for size), a double discrimination task (two pairs of objects are used in alternating trials), concurrent discrimination learning (10 pairs of objects presented once per day), and olfactory discrimination learning (where the objects are associated with different odors) (Head et al., 1996, 1998).

Discrimination learning, in other animal models, is typically insensitive to age, except when the discrimination is difficult (Bartus et al., 1979; Arnsten and Goldman-Rakic, 1985; Moss et al., 1988; Markowska et al., 1989; Rapp, 1990; Baxter and Gallagher, 1996). We have obtained similar findings in dogs. We typically find no difference between old and young dogs on a simple object discrimination task. We do, however, find greater variability among old animals. This is probably because some aged dogs show a global impairment on all tests of cognitive function, and this can also include impairments on simple discrimination problems.

The individual variability in discrimination learning tasks, in general, may be a consequence of existing object preferences. We tested this hypothesis directly by determining object preferences prior to training on a simple object discrimination learning task. Assigning a preferred object as the positive stimulus actually conferred an advantage on aged dogs over young dogs, resulting in a trend toward lower error scores. On the other hand, aged dogs assigned a nonpreferred object performed more poorly than young dogs. Thus, existing object preferences lead to higher or lower error scores on a visual discrimination task, an effect that is particularly pronounced in older dogs (Head et al., 1998).

One possible explanation for this effect is increased perseverative responding in aged dogs, which is likely dependent upon frontal lobe function (Brush et al., 1961; Mishkin et al., 1964). In fact, as will be discussed later in the chapter, visual discrimination learning using a nonpreferred object is indeed sensitive to prefrontal cortex neuropathology (Head et al., 1998). This may also be the reason for olfactory discrimination impairments in aged dogs; although detection thresholds were similar in young and old dogs, an olfactory task is sensitive to orbitoventral frontal lobe dysfunction (Allen, 1939, 1943).


URL: https://www.sciencedirect.com/science/article/pii/B9780123518309500329

Invertebrate Learning and Memory

Stevanus Rio Tedjakumala, Martin Giurfa, in Handbook of Behavioral Neuroscience, 2013

Introduction

Associative learning allows animals to extract the logical structure of the world because it enables them to make predictions about stimuli and their potential outcomes. Honeybees (Apis mellifera) constitute a traditional invertebrate model for the study of associative learning at the behavioral, cellular, and molecular levels.1–5 For almost a century, research on honeybee learning and memory has focused almost exclusively on appetitive learning, exploiting the fact that bees can learn about a variety of sensory stimuli, or learn to perform certain behaviors, if these are rewarded with sucrose solution, the equivalent of nectar collected in flowers.6 Since the discovery of the immense potential of this appetitive behavior by Karl von Frisch,7 researchers interested in bee learning have concentrated on appetitive learning. Indeed, a single Pavlovian protocol, the olfactory conditioning of the proboscis extension reflex (PER),8–10 has been used for approximately 50 years as the sole tool to access the neural and molecular bases of learning and memory in honeybees.1,10 This protocol relies on PER, the appetitive reflex exhibited by a harnessed honeybee to sugar reward (the unconditioned stimulus (US)) delivered to its antennae and mouthparts.9 After pairing a neutral odorant (the conditioned stimulus (CS)) and sucrose, the bee learns the association between odorant and food and therefore extends its proboscis in response to the odorant alone.8–10

In contrast, less was known about the capacity of honeybees to learn about aversive events in their environment. Can bees learn to avoid specific stimuli that have been associated with undesirable consequences? In the fruit fly Drosophila melanogaster, the other insect that has emerged as a powerful model for the study of learning and memory,11–14 aversive learning has been the dominant framework. The typical procedure consists of training groups of flies alternately presented with two different odors, one paired with an electric shock (CS+) and the other not paired with the shock (CS−).15 Retention is measured in a T-maze, in which conditioned flies must choose between the CS+ and the CS− and in which they avoid the CS+ in case of successful learning and retention. Due to obvious differences in behavioral and motivational contexts, and to the impossibility of equating US nature and strength, caution is needed when comparing appetitive and aversive learning in bees and flies, respectively.

To determine if and how bees learn about punishment in their environment, and to fill the gap existing between bee and fruit fly research on learning and memory, we studied punishment learning in honeybees and established a new conditioning protocol based on the sting extension reflex (SER), which is a defensive response to potentially noxious stimuli.16 This unconditioned response can be elicited by means of electric-shock delivery to a harnessed bee.17,18 Because no appetitive responses are involved in this experimental context, true punishment learning could be studied for the first time in harnessed honeybees.


URL: https://www.sciencedirect.com/science/article/pii/B9780124158238000368

What is learning in which a voluntary response is strengthened or weakened by its consequences?

Operant conditioning: learning in which a voluntary response is strengthened or weakened depending on its favorable or unfavorable consequences.

What is voluntary learning based on consequences?

Classical conditioning involves associating an involuntary response with a stimulus, while operant conditioning involves associating a voluntary behavior with a consequence.

What type of learning is voluntary?

Operant behavior is said to be "voluntary". The responses are under the control of the organism and are operants. For example, a child may face a choice between opening a box and petting a puppy.