

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, EDUCATION (oxfordre.com/education). (c) Oxford University Press USA, 2020. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).


Information Processing and Human Memory

Abstract and Keywords

Information processing is a cognitive learning theory that helps explain how individuals acquire, process, store, and retrieve information from memory. The cognitive architecture that facilitates the processing of information consists of three components: memory stores, cognitive processes, and metacognition. The memory stores are sensory memory, a virtually unlimited store that briefly holds stimuli from the environment in an unprocessed form until processing begins; working memory, the conscious component of our information processing system, limited in both capacity and duration, where knowledge is organized and constructed in a form that makes sense to the individual; and long-term memory, a vast and durable store that holds an individual’s lifetime of acquired information.

Information is moved from sensory memory to working memory through two cognitive processes: attention, selectively focusing on particular stimuli, and perception, the process of attaching meaning to stimuli. After information is organized in working memory so that it makes sense to the individual, it is represented in long-term memory through the process of encoding, where it can later be retrieved and connected to new information from the environment. Metacognition is a regulatory mechanism that facilitates the use of strategies, such as chunking, automaticity, and distributed processing, that help accommodate the limitations of working memory, and strategies, such as schema activation, organization, elaboration, and imagery, that promote the efficient encoding of information into long-term memory. Information processing theory has implications for daily living, ranging from tasks as simple as shopping at a supermarket to those as sophisticated as solving complex problems.

Keywords: cognitive load, cognitive processes, encoding strategies, long-term memory, meaningfulness, metacognition, schema, sensory memory, working memory, educational psychology


We’ve all had the experience of knowing some fact or a person’s name but being unable to recall it right then, only to have it pop into our minds sometime later. And we’ve made comments such as, “I’m suffering from memory overload,” or “It’s on the tip of my tongue, but I can’t quite dredge it up.” Why do our minds work like this?

Because it’s such an integral part of our daily living, we rarely think about “memory.” We rack our brains to recall isolated facts, such as the capital of Slovakia (Bratislava), and we use our memories to solve problems, identify relationships, and make decisions. And we’ve almost certainly encountered the issues mentioned previously. Examining these ideas is the purpose of this article.

The article is organized in three major sections. The first examines memory stores—sensory memory (SM), working memory (WM), and long-term memory (LTM), repositories that hold information, in some cases very briefly and in others essentially permanently. The second section discusses cognitive processes—attention, perception, rehearsal, encoding, and retrieval—mechanisms that move information from one memory store to another. The third section examines metacognition—a supervisory system we have for monitoring and regulating both the storage of information and how it’s moved from one store to another.

The relationships among these components are illustrated in Figure 1, which is similar to the model initially proposed by Atkinson and Shiffrin (1968), and it is a central component of information processing theory, a cognitive learning theory that helps explain “the process of acquiring, processing, storing, and retrieving information from memory and provides guidance on how memory can be enhanced” (Tangen & Borders, 2017, p. 100). Since originally proposed, information processing theory and this model have generated a great deal of research and have undergone considerable refinement (Schunk, 2016).


Figure 1. A model of human memory.

Although this model is only a representation, it provides valuable information about how our minds work. As one simple example, in the model we see that working memory is smaller than either sensory memory or long-term memory, which reminds us that its capacity is smaller than that of the other two stores. I will refer to the model, and to why it is constructed the way we see it, as each of the components is discussed.

I begin with memory stores.

Memory Stores

The memory stores—sensory memory, working memory, and long-term memory—are repositories that hold information as we organize it in ways that make sense to us and store it for further use. I examine them in this section.

Sensory Memory

Hold your finger in front of you and rapidly wiggle it. What do you notice? Now, press firmly on your arm with your finger, and then release it. What do you feel?

Think about these questions. In the first case, did you see a faint “shadow” that trailed behind your finger as it moved? And did the sensation of pressure remain for an instant after you stopped pressing on your arm? We can explain these events with the concept of sensory memory, the information store that briefly holds incoming stimuli from the environment in a raw, unprocessed form. For example, the shadow is the image of your finger that has been briefly stored in your visual sensory memory, and the sensation of pressure that remains has been briefly stored in your tactile sensory memory.

Our sensory memories are essential for both learning and functioning in our everyday lives. For instance, if a friend says something as simple as, “I have a dentist appointment at two o’clock on Thursday,” we must briefly retain the first part of the sentence in our auditory sensory memories until we hear the complete sentence, or we won’t be able to make sense of the statement. Sensory memory has nearly unlimited capacity, but if processing doesn’t begin almost immediately, the memory trace quickly fades away (Öğmen & Herzog, 2016). Sensory memory holds information until we attach meaning to it and then transfer it to working memory, the next store.

Working Memory

A principle of cognitive learning theory holds that, to make sense of their experiences, people construct knowledge, and this is where working memory comes into play. Working memory is the conscious component of our memory system, often called a “workbench,” because it is where our thinking occurs and where we construct our knowledge. We are not aware of the contents of either sensory memory or long-term memory until they are pulled into working memory for processing.

A Model of Working Memory

Figure 1 represents working memory as a single unit, and this is how it was initially described. However, additional research suggests that working memory has three components that work together to process information (Baddeley, 2001). They’re outlined in Figure 2.


Figure 2. A model of working memory.

To introduce you to the components of working memory, try to calculate the area of the sketch you see here:

[Sketch: a figure 5 inches wide, with vertical sides of 4 inches and an overall height of 6 inches, formed by a rectangle topped by a triangle.]

In thinking about the problem, you probably did something like the following. To determine the height of the triangular portion of the sketch, you subtracted the 4 from the 6, giving 2 inches. Then you recalled that the formula for the area of a triangle is ½(b)(h) (½ times base times height) and the formula for the area of a rectangle is (l)(w) (length times width). You then calculated the areas to be (½)(5)(2) = 5 square inches for the triangular portion and (5)(4) = 20 sq. in. for the rectangular part, making the total area of the figure 25 sq. in.
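The arithmetic in this solution can be captured in a few lines of code (Python here, purely as an illustration of the steps; the variable names are my own):

```python
# Area of the composite figure: a 5 in. x 4 in. rectangle
# topped by a triangle whose apex brings the total height to 6 in.
width = 5          # base of both the rectangle and the triangle (inches)
rect_height = 4    # height of the rectangular portion (inches)
total_height = 6   # overall height of the figure (inches)

tri_height = total_height - rect_height   # 6 - 4 = 2 inches
tri_area = 0.5 * width * tri_height       # (1/2)(b)(h) = 5 sq. in.
rect_area = width * rect_height           # (l)(w) = 20 sq. in.
total_area = tri_area + rect_area         # 25 sq. in.
print(total_area)  # 25.0
```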

Now, let’s see how the different components of your working memory helped you execute the task. The phonological loop, a short-term storage component for verbal information (Papagno et al., 2017), temporarily held the formula for finding the area of a triangle ½(b)(h) and the formula for finding the area of a rectangle (l)(w), as well as the dimensions of the figure in memory while you made calculations. Impairment in the phonological loop is often associated with reading difficulties because of the role it plays in processing verbal information.

The visuo-spatial sketchpad, a short-term storage system for visual and spatial information, allowed you to visualize the figure as you made decisions about how to solve the problem. You briefly stored the image of the figure in your visuo-spatial sketchpad while you took the formulas from your phonological loop and made the calculations. The visuo-spatial sketchpad and the phonological loop are independent, so each can perform mental work without taxing the resources of the other.

Both of these storage systems are limited in capacity and duration, and they serve the functions that historically have been attributed to short-term memory. Thus, for example, when we hear or read information about “short-term memory loss,” it refers to decrements in these components of working memory.

As you held information in your phonological loop and visuo-spatial sketchpad, your central executive, a supervisory system that controls and directs the flow of information to and from the other components, guided your problem-solving efforts. For instance, your central executive guided your decision to break the figure into a triangle and rectangle, then find the area of each, and finally add the two.

Executive Functioning

The central executive plays a broader role in life than solving a simple problem such as the one discussed previously. It governs executive functioning, “an umbrella term used to describe the cognitive processes responsible for purposeful, goal-directed behavior” (Cantin, Gnaedinger, Gallaway, Hesson-McInnis, & Hund, 2016, p. 66). To see how it affects student learning, as well as our lives outside of school, let’s look at the experience of a fifth grader named Josh.

As Mrs. Gonzalez, Josh’s teacher, is writing the next day’s math homework on the board, Kelsey, who sits in the row directly across from Josh, is shuffling her notebook as she prepares to write down the assignment. Josh turns in response to what he hears, watches Kelsey for a few seconds, and forgets to write down the assignment.

When Josh gets home from school his mother asks him about his homework; he looks in his math notebook and realizes that it isn’t there. In frustration, because this is a pattern with Josh, his mother emails Mrs. Gonzalez to ask about the assignment.

Does this sound familiar? Josh’s behavior illustrates what we sometimes think of as “flaky,” but in fact it’s an issue with his executive functioning. Executive functioning is linked to both academic and social success in school and to success and satisfaction in life after the school years. For example, well-developed executive functioning predicts higher reading and math achievement in school (Blair & McKinnon, 2016), and it’s also associated with social–emotional competence, such as behaving appropriately in social interactions and detecting subtleties such as irony and sarcasm in conversations (Cantin et al., 2016). In adults, executive functioning deficits are associated with anxiety and social interaction problems (Jarrett, 2016), and both students and adults with attention-deficit/hyperactivity disorder (ADHD) often struggle with executive functioning. If Josh doesn’t overcome his executive functioning issues, his achievement is likely to suffer.

On the other hand, well-developed executive functioning is associated with job success and satisfaction, and older adults with effective executive functioning report higher levels of health and well-being (McHugh & Lawlor, 2016).

Effective executive functioning includes three subskills:

  • Effective working memory, the ability to hold information in mind and manipulate it. Josh didn’t retain the fact that he needed to write down the assignment in his working memory.

  • Inhibitory control—sometimes called self-control—the ability to attend to relevant information and avoid being distracted by information that’s irrelevant, as well as inhibiting inappropriate behaviors. Josh impulsively turned toward Kelsey’s shuffling instead of remaining focused on writing down the assignment.

  • Cognitive flexibility, the ability to mentally shift when task demands change. Josh was unable to shift from writing down the assignment to Kelsey’s shuffling and then back again; instead, he forgot to write down the assignment.

We know that practice is essential for all forms of learning, and even simple abilities, such as retaining information in working memory, inhibiting inappropriate behaviors, and moving flexibly from one task to another may—for some people, such as Josh—need to be practiced.

From the discussion to this point, we see that working memory plays a huge role in both learning and functioning effectively in our daily lives. It seems almost ironic that a component so important is also limited.

Limitations of Working Memory

At the beginning of our discussion of human memory, I suggested that you may have made a comment such as “I’m suffering from memory overload” at one time or another. If you have, working memory is the culprit, because its capacity is severely limited, a limitation directly related to the limits of the phonological loop and visuo-spatial sketchpad described in the section “A Model of Working Memory.” Early research suggested that adult working memories can hold about seven items of information at a time, and only for about 10 to 20 seconds. (Children’s working memories are more limited.) Selecting and organizing information also uses working memory space, so we “are probably only able to deal with two or three items of information simultaneously when required to process rather than merely hold information” (Sweller, van Merriënboer, & Paas, 1998, p. 252).

As we saw in Figure 1, working memory is represented as smaller than either sensory memory or long-term memory, a reminder that its capacity is limited. This limited capacity is arguably its most important feature, because working memory is not only responsible for executive functioning, it is also where we make sense of our experiences and construct our knowledge. Think about it: the most important processes in learning—functioning effectively and constructing meaningful knowledge—take place in the component of our memory system that is the most limited! Little wonder that people miss important information, construct misconceptions, and are sometimes confused. It also helps us understand why people misinterpret, or don’t remember, what we say in simple day-to-day conversations.

We can explain these examples using the concept of cognitive load, which is the amount of mental activity imposed on working memory. It’s what you referred to when you said “I’m suffering from memory overload.” Cognitive load depends on two factors. The first is the number of elements we must attend to (Paas, Renkl, & Sweller, 2004). For example, compare remembering this sequence of digits—7 9 5 3—with this one—3 9 2 4 6 7 8. The second sequence imposes a heavier cognitive load than the first simply because it contains more digits.

The complexity of the elements is the second factor (Paas et al., 2004). For example, attempting to create a well-organized written product is a complex cognitive task, that is, it requires a great deal of thought, and if we must also spend working memory space thinking about where to place our fingers on a keyboard, the combined cognitive load becomes too great, and we may write better essays by hand.

What, then, can we do about the limitations of working memory? I address this question next.

Reducing Cognitive Load

To accommodate the limitations of working memory, our goal is to reduce cognitive load. Two strategies can help us reach this goal:

  • Chunking

  • Automaticity

Chunking

Chunking is the process of mentally combining separate items into larger, more meaningful units (Miller, 1956). For example, the string 9 0 4 7 5 0 5 8 0 7 is a phone number; at 10 digits, it exceeds the typical capacity of working memory. As normally written, 904-750-5807, the digits have been “chunked” into three larger units, which reduces cognitive load by taking up less working memory space. This is why phone numbers are chunked; they’re easier to read and remember.

Chunking is common in our daily lives. For instance, a credit card with 16 digits is chunked into four units (of four). Driver’s license numbers, memberships in organizations, and the long license numbers on computer software are all presented as chunks instead of continuous strings of numbers and letters.
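The grouping itself is mechanical, as this small sketch shows (Python, purely illustrative; the function name is my own, and the 3-3-4 grouping mirrors how U.S. phone numbers are normally written):

```python
def chunk_phone(digits: str) -> str:
    """Group a 10-digit string into the familiar 3-3-4 phone-number chunks."""
    if len(digits) != 10:
        raise ValueError("expected exactly 10 digits")
    # Three chunks occupy roughly three working memory "slots" instead of ten.
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

print(chunk_phone("9047505807"))  # 904-750-5807
```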

Automaticity

Have you ever left your house or apartment wondering whether you locked the door? You may even have returned to check, only to find that you had locked it after all. Or do you ever wonder whether you shut off your coffee pot? (Manufacturers understand this, and most coffee pots now shut themselves off after a period of time.) If so, why?

Automaticity is the ability to perform mental operations with little awareness or conscious effort, and it is enormously important for both learning and everyday living. For instance, once keyboarding skills become automatic, that is, once we can type “without thinking about it,” we can then devote all of our limited working memory space to composing quality written work. This helps us understand why we produce better written products using computers if our keyboarding skills are well developed; we use the keyboard automatically.

Automaticity is essential for reducing cognitive load, and research indicates that experts—people who are highly skilled or knowledgeable in a specific domain, such as math, computer science, basketball, or teaching—have as many of their skills developed to automaticity as possible. And in our daily lives we all develop simple routines that reduce cognitive load. For instance, I automatically put my keys and wallet in a cabinet drawer the moment I walk into my house, so I have one less thing to think about.

Automaticity is a double-edged sword, however, and it helps answer the questions we asked at the beginning of this section. If you answered yes to either, it was because you locked the door or completed the other routine task “automatically”—you did it without thinking about it. As another example, because driving is nearly automatic, many people believe they can simultaneously text, or talk on their cell phones, while driving, both of which are very dangerous.

Capitalizing on the Components of Working Memory

Earlier we saw that the phonological loop and the visuo-spatial sketchpad operate independently in working memory, so each can perform mental work without taxing the resources of the other. Presenting information both visually and verbally lets the visual processor supplement the verbal processor and vice versa, which capitalizes on the processing capability of the two components, reduces cognitive load, and helps accommodate working memory’s limitations. For instance, in math, diagrams are helpful in problem solving; pictures of cells with their components are used in biology; videos illustrating the correct technique for shooting a jump shot are used in coaching basketball; and replicas of masterpieces are displayed in art classes. Each capitalizes on the capacity of working memory to distribute cognitive load.

Long-Term Memory

A principle of cognitive learning theory holds that the knowledge people construct depends on what they already know, and this is where long-term memory, our permanent information store, comes into play. What we know is stored in long-term memory, and being able to access this knowledge plays an essential role in later learning. Long-term memory is vast and durable, and some experts believe that information in it remains for a lifetime. Two important types of knowledge—declarative knowledge and procedural knowledge—are stored in long-term memory. We examine them next.

Declarative Knowledge in Long-Term Memory

To this point in the article, I hope you have acquired a considerable amount of knowledge, including concepts such as sensory memory, working memory, and automaticity, and facts, such as that the adult capacity of working memory is approximately seven bits of information. In addition, we all have stored a great many personal experiences. This information exists as declarative knowledge, often described as knowing “what.”

Acquiring declarative knowledge involves integrating different items of information; alternatively, we can acquire declarative knowledge simply by repeating information over and over (although this may not be as effective for learning and retaining large amounts of information). This integrated information exists in the form of schemas (also called schemata), cognitive structures that represent the way information is organized in long-term memory (Schunk, 2016).

Meaningfulness

Not all information is organized equally well in long-term memory, and the more meaningfully it is organized, the more useful it will be for functioning in our daily lives. Meaningfulness describes the extent to which items of information are linked and related to each other. To illustrate, think about three American history topics: (1) the French and Indian War, (2) the Boston Tea Party, and (3) the American Revolutionary War. The French and Indian War was fought between the French and British, American colonists dumped tea in Boston Harbor, and the Revolutionary War led to U.S. independence. They are important separate ideas, but not particularly meaningful.

The three events are, in fact, closely related. The French and Indian War, fought between 1754 and 1763, was very costly for the British, so to raise revenue they imposed onerous taxes on the colonists for goods such as tea. This led to rebellion, one important example of which was dumping tea into Boston Harbor in 1773, ultimately leading to the Revolutionary War, fought between 1775 and 1783. Now the three events make a lot more sense; they are more meaningful.

Meaningfulness is a very powerful factor in learning, for two reasons. First, research indicates that although the number of “chunks” working memory can hold is limited, the size and complexity of the chunks are not (Sweller et al., 1998). When individual items are interconnected in a “Revolutionary War” schema, it behaves like a single chunk, so it uses only one working memory slot and significantly reduces cognitive load. Second, the more meaningful (interconnected) a schema is, the more places exist to which we can connect new information. This suggests that we should always be looking for relationships in the topics we study. In fact, finding connections—making information meaningful—is a very satisfying cognitive pursuit. Moreover, isolated information imposes a heavy cognitive load on our working memories, which helps explain why people seem to retain so little of what they read and study. Connecting ideas helps integrated information behave as chunks, which reduces cognitive load and makes the information more interesting and easier to remember.

Procedural Knowledge in Long-Term Memory

Procedural knowledge, knowledge of how to perform tasks, such as solving problems, composing essays, performing pieces of music, or executing physical skills (such as a back flip in gymnastics), is the second type of knowledge stored in long-term memory. Procedural knowledge depends on declarative knowledge. For instance, consider this problem:

[A problem requiring the addition of fractions with unlike denominators.]
Knowing that we must find a common denominator before we can add the fractions is a form of declarative knowledge. Then, finding the common denominator and adding the fractions requires procedural knowledge. To compose an essay—which requires procedural knowledge—we must understand grammar, punctuation, and spelling rules, all types of declarative knowledge. To serve effectively in tennis, we must understand the fundamentals of the stroke before we’re able to practice it effectively. The same is true for all forms of procedural knowledge.
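The distinction can be made concrete in a short sketch (Python, purely illustrative; since the original problem is shown only as an image, the fractions below are hypothetical stand-ins):

```python
from fractions import Fraction
from math import lcm  # available in Python 3.9+

# Hypothetical example fractions (the article's actual problem is not reproduced here)
a, b = Fraction(1, 4), Fraction(2, 3)

# Declarative knowledge: knowing THAT a common denominator is required.
common = lcm(a.denominator, b.denominator)  # lcm(4, 3) = 12

# Procedural knowledge: knowing HOW to rewrite each fraction over the
# common denominator and then add the numerators.
total = Fraction(
    a.numerator * (common // a.denominator)
    + b.numerator * (common // b.denominator),
    common,
)

print(total)  # 11/12
```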

Our goal in developing procedural knowledge is to reach automaticity, which requires a great deal of time, effort, and practice. It’s the source of the old joke in which a tourist asks: “How do you get to Carnegie Hall?” The native New Yorker replies: “Practice, practice, practice.” Becoming a good writer requires practice; playing a musical instrument well demands a great deal of practice; and to become a skilled athlete we must practice. The same is true for all forms of procedural knowledge, and no shortcuts exist.

Context is important for optimal practice. For example, to become a skilled tennis player, we must practice forehands in the context of match play, and math students are better served by developing their skills in the context of word problems rather than isolated operations. Similarly, to become a good driver, we need to drive a car in different conditions, and athletes must perform their skills in the context of competition.

The characteristics of the memory stores are summarized in Table 1.

Table 1. Characteristics of the Memory Stores

Store            | Capacity            | Retention in Memory               | Form of Information    | Individuals’ Awareness
Sensory Memory   | Virtually unlimited | Very short                        | Raw, unprocessed       | Unaware of contents until they are attended to
Working Memory   | Very limited        | Relatively short unless rehearsed | Actively processed     | Aware; the conscious component of memory
Long-Term Memory | Virtually unlimited | Durable, perhaps permanent        | Meaningfully organized | Unaware of contents until they are retrieved into working memory


Cognitive Processes

How does information move from sensory memory to working memory and from working memory to long-term memory? More importantly, how can we learn to store information most efficiently? To answer these questions, look again at Figure 1, but this time focus on the cognitive processes—attention, perception, rehearsal, encoding, and retrieval—and as you study the following sections, remember that they are responsible for moving information from one store to another.

Attention

Think about a coin. What is on one face of the coin? What’s on the opposite face?

Think about our questions here. We routinely handle many different coins, but most of us don’t know what’s on either side of them, and we may not even know that they have unique faces; we don’t pay attention to this information. Attention is the process of selectively focusing on specific stimuli in the environment, the things we experience through our five senses, while ignoring other stimuli (Radvansky & Ashcraft, 2018). It is impossible to attend to all the stimuli we encounter, represented in Figure 1 as arrows to the left of the model, and the screening function of attention is represented by fewer arrows to its right than to its left. Without this screening function, we would be overwhelmed by an endless stream of stimuli.

Attention has several important characteristics that can influence learning. First, attention is limited. For instance, we miss parts of conversations, even when we are directly involved. Second, we are easily distracted; our attention wanders from one stimulus to another, and we often drift off without realizing it. “Unfortunately, we are not always aware when our thoughts drift off, since the propensity to do so is spontaneous and often occurs without awareness” (Xu & Metcalfe, 2016, p. 681). Third, it is emotion laden; we’re more likely to attend to stimuli that arouse our emotions than those lacking an emotional component (Sutherland, McQuiggan, Ryan, & Mather, 2017). Fourth, attention is a social process. For instance, people display a strong sensitivity to others’ eye gaze, tending to follow the gaze of others (Gregory & Jackson, 2017). Thus, if even a small number of people in a conversation drift off, others are more likely to do the same.

These factors help us understand why people retain so little of what they hear in conversations. A myriad of distractions exist—other people, noises both inside and outside the room, among many others. Any of these can distract people’s attention and cause them to miss parts of a conversation.

A great deal has been written about the impact of distracted driving, such as using a cell phone while behind the wheel. The limited capacity of attention helps us understand why cell phone use while driving is so dangerous. Moreover, the limited resources of attention help us understand why attempts at multitasking are generally unsuccessful. In fact, some researchers suggest that multitasking technically does not exist (Chen & Yan, 2016). Rather, what we commonly call multitasking is “rapid attention switching in which individuals only process one stimulus at a time but rapidly shift back and forth between the stimuli” (Chen & Yan, 2016, p. 35).

Perception

Look at the picture you see here:

[Image: the classic figure-ground illustration that can be seen either as a vase or as two faces in profile.]

Do you see a vase, or do you see two faces looking at each other?

This classic example illustrates the nature of perception, the process people use to find meaning in stimuli. For those of you who “saw” a vase, for instance, this is the meaning you attached to the picture, and the same is true for those of you who “saw” two faces. Technically, we were asking: “Do you ‘perceive’ a vase or two faces?” In our everyday world, the term perception is commonly used to describe the way we interpret objects and events. For instance, some of us “interpreted” the picture as a vase, whereas others “interpreted” it as two faces.

Perception is influenced by our past experiences and expectations. To illustrate, let’s look at a classroom activity with a science teacher, Sophie Martin, who is introducing the topic of airplanes and how they’re able to fly.

Her goal is to help her students understand that the way the wing of an airplane is shaped helps planes fly, so she displays the sketch we see here, intending it to be a sketch of an airplane wing:

[Sketch: a shape with curved lines, intended to represent the cross-section of an airplane wing.]

“Now,” Sophie directs, “look at the figure we have displayed here. What do you see? … Kim?”

“A drawing with curved lines,” Kim responds.

“Good,” Sophie nods. “How about you, Jason? What do you see?”

“A whale,” Jason answers.

This is an actual classroom example that I witnessed as I sat in on a class a few years ago, and it illustrates the importance of perception in information processing. We don’t know about Kim’s and Jason’s past experiences or expectations, but the meanings they attached to Sophie’s sketch were very different, and neither interpreted it as Sophie intended. Moreover, if they retain these interpretations, the understanding they derive from Sophie’s lesson will be hindered.

Differences in perception have important implications for teaching, as we see in Sophie’s example, and the outside world as well. If someone we’re attempting to communicate with misinterprets what we’re saying, the entire thrust of the conversation will be affected.

To illustrate, let’s look at another example, in this case a conversation between two young women interviewing for a sales job with the same supervisor.

“How was your interview?” Emma asks her friend, Kelly.

“Terrible,” Kelly responds. “He grilled me, asking me specifically how I would handle a situation with a disgruntled customer and what I would do in case the person became hostile. He treated me like I didn’t know anything. Madison, a friend of mine who works there, told me about him. … How was yours?”

“Gosh, I thought mine was good,” Emma replies. “He asked me the same questions, but I thought he was just trying to find out how we would think about working with people if he hired us. I had an interview with another firm, and the manager there asked almost the same questions.”

Kelly’s and Emma’s perceptions—the way they interpreted their interviews—were very different, Kelly viewing hers as being “grilled,” but Emma feeling good about hers. Kelly anticipated the interview with negative expectations, influenced by her friend, Madison. On the other hand, Emma’s expectations were influenced by her experience at another firm. The arrows to the right of “perception” in Figure 1 are curved to remind us that people’s perceptions vary.

Accurate perceptions are essential, because people’s perceptions of what they see and hear are what enter working memory. If these perceptions are inaccurate, the information ultimately stored in long-term memory will also be inaccurate. The primary way to determine if someone is accurately perceiving what we are saying is to listen to their responses. If a response seems irrelevant or does not make sense, it is likely that they misperceived what we were saying.

Encoding and Encoding Strategies

Encoding occurs when we initially perceive and learn something (McDermott & Roediger, 2018). After we attend to and perceive information in working memory so that it makes sense to us, encoding allows us to retain it in long-term memory (McCrudden & McNamara, 2018).

There are a number of common strategies we use to encode information (see Figure 3), each discussed in this section:

  • Rehearsal

  • Elaboration

  • Organization

  • Schema activation

  • Imagery


Figure 3. Strategies for meaningful encoding.

Rehearsal
Rehearsal is the process of repeating information over and over without altering it, such as repeating 7 × 9 = 63. Rehearsal serves two important functions. First, it can be used to retain information in working memory until the information is used or forgotten, such as repeating a phone number over and over until we dial it. Usually, once dialed, we forget the number.

If rehearsed enough, however, information can be retained in long-term memory through “brute force.” For example, we have all used rehearsal to remember definitions, such as a verb is a part of speech that shows action or a state of being, and other factual information, such as 8 × 9 = 72, or Abraham Lincoln was the U.S. president during the Civil War.

Knowing factual information, such as important events or math facts, is useful, and using flash cards to remember definitions and other facts is a common form of rehearsal. However, one downside of rehearsal as an encoding strategy is that information may be retained as isolated pieces in memory, unconnected to related ideas. This can be problematic because it makes the information less meaningful and thus harder to access later.

Rehearsal can be made more meaningful, however, by connecting the to-be-remembered factual information to other facts. As an example, we can remember the location of the Atlantic Ocean—factual information—by noting that it is between Africa and the Americas, and all three—Atlantic Ocean, Africa, Americas—begin with “a.” And we can do something similar when we remember that the sum of the digits in the product of any number times 9 is always equal to 9 (e.g., 6 × 9 = 54 [5 + 4 = 9]; 8 × 9 = 72 [7 + 2 = 9]). Research confirms the advantage of making information meaningful in this way compared with simple “brute force” rehearsal for long-term retention (Schunk, 2016).
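The nines pattern just described is easy to verify with a short script (a sketch, not part of the original text; the helper name `digit_sum` is mine):

```python
# Check the pattern noted above: the digits of n x 9 sum to 9
# for the multipliers 1 through 10.
def digit_sum(n: int) -> int:
    """Sum the decimal digits of n."""
    return sum(int(d) for d in str(n))

for n in range(1, 11):
    product = n * 9
    # Each line reports a digit sum of 9, e.g., 6 x 9 = 54 and 5 + 4 = 9.
    print(f"{n} x 9 = {product}, digit sum = {digit_sum(product)}")
```

For larger multipliers the digit sum may need to be applied repeatedly (e.g., 11 × 9 = 99, and 9 + 9 = 18, then 1 + 8 = 9); the repeated sum, known as the digital root, is always 9 for nonzero multiples of 9.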

Mnemonics
Mnemonics are strategies that can be used to make factual information more meaningful, creating associations—links—between knowledge to be acquired and familiar information (Schunk, 2016). Because they create associations, mnemonics are more effective than “brute force” for remembering factual information, such as vocabulary, names, rules, and lists. Mnemonics can be effective in a variety of contexts. For instance, we use acronyms, such as HOMES to remember the names of the Great Lakes (Huron, Ontario, Michigan, Erie, and Superior), and phrases, such as “Every good boy does fine” to remember the names of the lines of the treble clef (E, G, B, D, and F). Children remember the alphabet by singing the ABCs, and we’ve all used the phrase “i before e except after c or when sounding like a, as in neighbor or weigh” and many others.
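The first-letter idea behind acronyms like HOMES can be sketched in a line or two of code (illustrative only):

```python
# Form an acronym from the first letter of each item to be remembered.
great_lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]
acronym = "".join(name[0] for name in great_lakes)
print(acronym)  # HOMES
```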

We turn now to more comprehensive and effective encoding strategies.

Elaboration
You’re at a noisy party, and when you miss some of a conversation, you fill in details, trying to make sense of an incomplete message. You do the same when you read a text or listen to a lecture. You expand on—and sometimes distort—information to make it fit your expectations and current understanding. In each case, you are elaborating on the message or what you already know. Elaboration is an encoding strategy that involves adding new information to our existing knowledge, or creating new connections (links) in existing information (Schunk, 2016). Elaboration is a powerful encoding strategy, and finding or creating examples is one of the most powerful and meaningful forms of elaboration. For instance, at one time or another we’ve probably all said, “Give me an example,” when someone is describing some event or phenomenon that we don’t fully understand.

Our earlier example with the French and Indian War, Boston Tea Party, and Revolutionary War also illustrates elaboration. Instead of simply knowing isolated facts, such as the dates of the two wars, we elaborate on the individual items of information by making connections among them. This helps us understand why elaboration is such a powerful encoding strategy.

Schema Activation

This was a “Final Jeopardy” answer a while back: “This state capital was a compromise between the ‘North Platters’ and the ‘South Platters’.” The correct response was Lincoln, Nebraska.

This example helps us understand schema activation, a form of elaboration that involves activating relevant prior knowledge so that new information can be connected to it. For instance, for people who responded correctly, the terms “North Platter” and “South Platter” activated prior knowledge about the Platte River and the fact that the Platte River runs through Nebraska. From that point, it is a minor jump to Lincoln, the capital of Nebraska.

The most effective way to activate prior knowledge is to ask people what they already know about a topic, or to have them share personal experiences related to it.

Imagery
Look at the following two excerpts:

  1. Traveling by rail from the Midwest to California is a pleasant experience. As you sit in the dining car, you may meet interesting people, it’s relaxing, and you hear the sound of the train rolling over the rails.

  2. At the next table a woman stuck her nose in a novel; a college kid pecked at a laptop. Overlaying all this, a soundtrack: choo-k-chook, choo-k-chook, choo-k-chook, the metronomic rhythm of an Amtrak train rolling down the line to California.

How are these two excerpts different?

Now, let’s think about our question. The first is rather bland, whereas in the second we can virtually “see” in our mind’s eye the woman reading her novel and the student at his computer, and we can almost hear the clicking of rails. It capitalizes on imagery, a form of elaboration that involves the process of forming mental pictures of an idea (Schunk, 2016). It’s a common strategy novelists and other writers use to make their stories vivid and interesting.

Imagery is also widely used by coaches to help athletes develop skills ranging from corner kicks in soccer and jump shots in basketball to performance in Olympic events. Through imagery, coaches encourage their athletes to visualize successfully executing the skill they’re trying to develop.

Imagery is an effective encoding strategy because our long-term memories have one system for verbal information and a separate system for images. Ideas that can be represented both visually and verbally, such as ball, house, or dog, are easier to remember than concepts more difficult to visualize, such as honesty or truth. The fact that we can both read about the components of our memory systems and visualize the model in Figure 1 helps us capitalize on these two systems and reminds us of the importance of supplementing verbal information with visual representations and vice versa.

We can take advantage of imagery in several ways. For instance, we can try to form mental images of the ideas we study, and we can draw our own diagrams about ideas we are learning. Furthermore, imagery is particularly helpful in problem solving (Schunk, 2016). It would have been harder to solve the area-of-the-pentagon problem presented in the discussion of working memory, for example, without the drawing. Seeing the sketch is more effective than simply being asked to find the area of a pentagon that is five inches at the base, four inches at the side, and six inches at the peak.
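The pentagon’s dimensions make the value of the sketch concrete. Assuming the familiar “house” shape (a 5 × 4-inch rectangle topped by a triangle whose peak reaches 6 inches; this shape is my reconstruction of the earlier problem, since the figure itself is not reproduced here), the calculation might look like this:

```python
# Hypothetical reconstruction of the pentagon problem: a rectangle
# (base x side) topped by a triangle rising from the side height to the peak.
base, side, peak = 5, 4, 6             # inches, as described in the text
rectangle = base * side                # 5 x 4 = 20 square inches
triangle = 0.5 * base * (peak - side)  # 0.5 x 5 x 2 = 5.0 square inches
print(rectangle + triangle)            # 25.0 square inches
```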

Organization
Organization is an encoding strategy that involves creating clusters of related items in categories that illustrate relationships. Because well-organized content contains connections among its elements, cognitive load is decreased, and encoding (and later retrieval) is more efficient. Research in reading, memory, and classroom instruction demonstrates the value of organization in promoting learning. Research also suggests that experts learn more efficiently than novices because their knowledge in long-term memory is better organized, allowing them to better access and connect new information to it (McCrudden & McNamara, 2018; Radvansky & Ashcraft, 2018).

We can organize information in several ways:

  • Hierarchies: effective when new information can be subsumed under existing ideas. Figure 2, for instance, is a simple hierarchy I used to organize information about the components of working memory.

  • Charts and matrices: useful for organizing large amounts of information into categories. For instance, Table 1 is my attempt to organize information about the memory stores so it’s meaningful to you.

  • Models: helpful for representing relationships that cannot be observed directly. The model of human memory in Figure 1 is an example.

  • Outlines: useful for representing the organizational structure in a body of written material.

Other ways to organize content include graphs, tables, flowcharts, and maps. When we study we can also use these organizers as aids in our attempts to make the information we’re studying meaningful.

Promoting Encoding: The Generation Effect

Think about driving or riding in a car to some unfamiliar location. If you drive, you are likely to be able to find the location a second time without difficulty. However, if you merely ride along, you are less likely to be able to find it. Why? The answer is the generation effect: we remember information better when we generate it ourselves through careful thought and engagement than when we passively receive it. If you’re driving, you are generating a response to the driving task—you are thinking carefully about finding the location. If you are merely riding, you are simply receiving information rather than thinking about where you need to go and how to get there, and, as a result, you are less likely to be able to find the location a second time.

The way we learn the definition of a new term is another example. If we use the term in a sentence or two, we’re more cognitively active than if we simply repeat the definition, and we’re more likely to remember the meaning of the term.

Retrieval and Forgetting
Although we encode a great deal of information in our lifetimes, we cannot always retrieve it when we need it. Retrieval is the process of accessing information from long-term memory. It is represented by the reverse arrow in Figure 1—the arrow pointing from long-term memory to working memory. We have all had the experience of knowing a name or fact but simply being unable to dredge it up; we can’t retrieve the information. Retrieval failure, commonly known as forgetting, occurs when we are unable to retrieve information that we have previously encoded in memory, and it is a real part of our everyday lives. To understand forgetting, look again at the model first presented in Figure 1. There we see that information lost from sensory memory and working memory is lost permanently; it is literally gone because it was never transferred to long-term memory. Information in long-term memory, however, has been encoded. Why can’t we remember it?

One of the primary factors contributing to retrieval failure is the lack of a sufficient retrieval cue. Quickly name the months of the year. Now do the same, but this time list them alphabetically. Why were you so much slower the second time? Retrieval depends on the cue provided, and that dependence answers the question I just asked. We encode the months of the year chronologically, since that’s the way we’ve learned and experienced them, and remembering them this way is automatic. Attempting to state them alphabetically presents a different retrieval cue and a different retrieval challenge. We can do it, but the process is more laborious. Similarly, you know a person at work, but you cannot remember her name when you see her at a party; her name is encoded in the work context, and you’re trying to retrieve it in the context of the party.
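The months example can be loosely illustrated in code: the chronological sequence is stored directly and retrieved as-is, while the alphabetical ordering must be computed, a rough analogy for why the second cue is more laborious (a sketch using Python’s standard calendar module):

```python
import calendar

# The chronological order -- the way the months were encoded.
months = list(calendar.month_name)[1:]  # drop the empty placeholder entry
print(months[:3])          # ['January', 'February', 'March']

# An alphabetical cue demands a different ordering, which has to be
# derived rather than simply recalled.
print(sorted(months)[:3])  # ['April', 'August', 'December']
```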

Meaningfulness is the key to retrieval. The more meaningful—interconnected—our knowledge is in long-term memory, the easier it is to retrieve, and practice to the point of automaticity, such as we have done with the months of the year, also facilitates retrieval. When math facts are automatic, for example, students easily retrieve them for use in problem solving, leaving more working memory space to focus on solutions to problems.

Another factor contributing to forgetting is interference, which occurs when something learned either before or after detracts from understanding. For example, we learn to form singular possessives by adding an apostrophe and s to the singular noun (e.g., the car’s fender was bent). Later, we learn rules for forming plural possessives, such as “the boys’ lockers were all in a mess” or “the children’s clothes have been washed.” If the rule for forming singular possessives later confuses our understanding of the rules for forming plural possessives, we call it proactive interference (i.e., when something we learned previously disrupts our ability to remember something we are learning right now). If the rules for forming plural possessives confuse our prior understanding, we call it retroactive interference (i.e., when something we are learning now disrupts our ability to remember something we learned previously).

Metacognition: Knowledge and Regulation of Cognition

Have you ever said to yourself something like, “I’m beat today; I’d better drink a cup of coffee before I go to my meeting,” or “I’d better sit close to the speaker so I won’t fall asleep”? What do these comments suggest about your understanding of yourself?

If you have had thoughts like these about your work, you were being metacognitive. Metacognition, commonly described as “thinking about thinking,” is knowledge about and regulation of our cognition (Medina, Castleberry, & Persky, 2017). Meta-attention, such as knowing that drowsiness may affect your ability to pay attention, is one type of metacognition. You knew about the importance of attention for effective work, and you regulated it by drinking coffee or sitting near the speaker. Metamemory, knowledge and regulation of memory strategies, is another form of metacognition.

In addition to the previous examples, many others exist in our everyday lives. I know I am likely to misplace my keys, for instance (knowledge of cognition), so I immediately put them in a desk drawer when I come in from the garage (regulation of cognition). And, many people prepare a list when grocery shopping to help them remember the items they need. In each case, we are aware of our cognition (thinking), and we regulate it with our lists and other strategies.

Evaluating Information Processing and the Model of Human Memory

Information processing and the model of human memory make important contributions to our understanding of the way we gather, organize, and store information. However, as with all theories, they have both strengths and weaknesses. For instance, critics suggest that the model, as presented in Figure 1, oversimplifies the complexities involved in processing information. For example, the model presents attention as a filter between sensory memory and working memory, but some evidence suggests that the central executive in working memory influences both our attention and how we perceive information. Thus, attending and attaching meaning to incoming stimuli are not as simple as the one-way flow of information suggested by the model.

Further, some researchers question whether working memory and long-term memory are as distinct as the model suggests, and some argue that they simply represent different activation states, that is, differences in the extent to which individuals are actively processing information in a single memory. According to this view, when people are processing information, it is in an active state, which is information we typically describe as being in working memory. As attention shifts, other information becomes active, and the previous information becomes inactive. Most of the information in our memories is inactive, information we typically describe as being in long-term memory (Radvansky & Ashcraft, 2018).

The memory model has also been criticized for failing to adequately consider the social context in which learning occurs, as well as cultural and personal factors that influence learning, such as students’ emotions. Critics also argue that it does not adequately account for the extent to which learners construct their own knowledge, a basic principle of cognitive learning theory.

Finally, the model of human memory is assumed to be a logical, sequential information processing system governed by metacognition. It assumes that information is consciously processed; that is, we attend to the information, attach meaning to it through perception, consciously organize and make sense of it in working memory, and then encode and store it in long-term memory. However, an expanding body of evidence indicates that we draw many conclusions and make many decisions without consciously thinking about them. Others go further and argue that much of what we do happens below our level of awareness. Examples in our everyday lives corroborate this view. Retailers, for example, pay big bucks to supermarkets to have their products displayed at eye level, because marketing research indicates that we are subconsciously more likely to buy goods placed this way. Dairy items are usually in the back so we have to weave our way through the store to get a carton of milk, and the more time we spend in the supermarket, the more likely we are to spend extra money.

On the other hand, virtually all cognitive descriptions of learning accept the basic structure of human memory, including a limited-capacity working memory, a long-term memory that stores information in organized form, cognitive processes that move the information from one store to another, and the regulatory mechanisms of metacognition (Bransford, Brown, & Cocking, 2000; Schunk, 2016). These components help explain aspects of learning that other theories cannot.

The theory also provides, through metacognition, a mechanism for increasing the efficiency and effectiveness of our daily living. For instance, metacognition has been emphasized as a tool for dealing with emotional issues, such as stress, negative thinking, and worry in both adults and children, and it has been identified as a positive influence on decision-making, cooperation, and social interaction (Pescetelli, Rees, & Bahrami, 2016). Metacognition can also improve our learning and job efficiency without significant increases in work or effort. When we become more metacognitive, we are attempting to work “smarter,” not harder. And metacognition is relatively independent of general intelligence. For instance, realizing that attention to a task is essential does not depend on ability (González, Fernández, & Paoloni, 2017). This realization can lead us to create personal learning environments free of distractions, such as simply turning off a cell phone and TV while studying or working at home. Metacognition is complex. For instance, research indicates that it is influenced by emotion, with hope and optimism having a positive effect and anxiety having a negative impact (González et al., 2017). Most people are not particularly good at metacognitive monitoring, but it can be significantly improved with effort (Medina et al., 2017).

As we better understand the model of human memory and how we process information, we acquire a powerful tool that can help us learn and operate in the real world more efficiently. For instance, being aware of our working memory’s limited capacity helps us take steps to accommodate its limitations, and being aware of the importance of making connections in the topics we study and read about literally increases our memory capacity. An understanding of this theory can improve the way we learn and live.

Further Reading

Bruning, R. H., Schraw, G. J., & Norby, M. M. (2011). Cognitive psychology and instruction (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Eggen, P., & Kauchak, D. (2020). Using educational psychology in teaching (11th ed.). New York, NY: Pearson.

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

Schacter, D. (2001). The seven deadly sins of memory. Boston, MA: Houghton Mifflin.

Vedantam, S. (2010). The hidden brain: How unconscious minds elect presidents, control markets, wage wars, and save our lives. New York, NY: Spiegel & Grau.



References

Atkinson, R., & Shiffrin, R. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory (Vol. 2). San Diego, CA: Academic Press.

Baddeley, A. D. (2001). Is working memory still working? American Psychologist, 56, 851–864.

Blair, C., & McKinnon, R. D. (2016). Moderating effects of executive functions and the teacher–child relationship on the development of mathematics ability in kindergarten. Learning and Instruction, 41, 85–93.

Bransford, J., Brown, A., & Cocking, R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press.

Cantin, R. H., Gnaedinger, E. K., Gallaway, K. C., Hesson-McInnis, M. S., & Hund, A. M. (2016). Executive functioning predicts reading, mathematics, and theory of mind during the elementary years. Journal of Experimental Child Psychology, 146, 66–78.

Chen, Q., & Yan, Z. (2016). Review: Does multitasking with mobile phones affect learning? Computers in Human Behavior, 54, 34–42.

González, A., Fernández, M.-V., & Paoloni, P.-V. (2017). Hope and anxiety in physics class: Exploring their motivational antecedents and influence on metacognition and performance. Journal of Research in Science Teaching, 54, 558–585.

Gregory, S. E. A., & Jackson, M. C. (2017). Joint attention enhances visual working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 237–249.

Jarrett, M. A. (2016). Attention-deficit/hyperactivity disorder (ADHD) symptoms, anxiety symptoms, and executive functioning in emerging adults. Psychological Assessment, 28, 245–250.

McCrudden, M. T., & McNamara, D. S. (2018). Cognition in education. New York, NY: Routledge.

McDermott, K. B., & Roediger, H. L. (2018). Memory (encoding, storage, retrieval). In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF.

McHugh, J. E., & Lawlor, B. A. (2016). Executive functioning independently predicts self-rated health and improvement in self-rated health over time among community-dwelling older adults. Aging & Mental Health, 20, 415–422.

Medina, M. S., Castleberry, A. N., & Persky, A. M. (2017). Strategies for improving learner metacognition in health professional education. American Journal of Pharmaceutical Education, 81, 1–14.

Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.

Mızrak, E., & Öztekin, I. (2016). Relationship between emotion and forgetting. Emotion, 16, 33–42.

Öğmen, H., & Herzog, M. H. (2016). A new conceptualization of human visual sensory-memory. Frontiers in Psychology, 7, 830.

Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory: Instructional implications of the interaction between information structures and cognitive architecture. Instructional Science, 32(1), 1–8.

Papagno, C., Comi, A., Riva, M., Bizzi, A., Vernice, M., Casarotti, A., … Bello, L. (2017). Mapping the brain network of the phonological loop. Human Brain Mapping, 38, 3011–3024.

Pescetelli, N., Rees, G., & Bahrami, B. (2016). The perceptual and social components of metacognition. Journal of Experimental Psychology: General, 145, 949–965.

Radvansky, G. A., & Ashcraft, M. H. (2018). Cognition (7th ed.). New York, NY: Pearson.

Schunk, D. (2016). Learning theories: An educational perspective (7th ed.). Boston, MA: Pearson.

Sutherland, M. R., McQuiggan, D. A., Ryan, J. D., & Mather, M. (2017). Perceptual salience does not influence emotional arousal’s impairing effects on top-down attention. Emotion, 17, 700–706.

Sweller, J., van Merrienboer, J., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251–296.

Tangen, J. L., & Borders, L. D. (2017). Applying information processing theory to supervision: An initial exploration. Counselor Education and Supervision, 56, 98–111.

Xu, J., & Metcalfe, J. (2016). Studying in the region of proximal learning reduces mind wandering. Memory and Cognition, 44, 681–695.