Long-term memory
Long-term memory is the stage of the Atkinson–Shiffrin memory model in which information is held indefinitely. It is defined in contrast to sensory memory, the initial stage, and short-term or working memory, the second stage, which persists for about 18 to 30 seconds. LTM is divided into two categories: explicit memory and implicit memory. Explicit memory is broken down into episodic and semantic memory, while implicit memory includes procedural memory and emotional conditioning.
Stores
The idea of separate memories for short- and long-term storage originated in the 19th century. One model of memory developed in the 1960s assumed that all memories are formed in one store and transfer to another store after a short period of time. This model is referred to as the "modal model", most famously detailed by Atkinson and Shiffrin. The model states that memory is first stored in sensory memory, which has a large capacity but can maintain information only for milliseconds. A representation of that rapidly decaying memory is moved to short-term memory. Short-term memory does not have the large capacity of sensory memory, but it holds information for seconds or minutes. The final store is long-term memory, which has a very large capacity and is capable of holding information possibly for a lifetime. The exact mechanisms by which this transfer takes place, whether all or only some memories are retained permanently, and even whether a genuine distinction exists between the stores, remain controversial.
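The flow between the three stores described above can be sketched as a toy simulation. The capacities, durations, and function names below are illustrative assumptions for the sketch, not part of any formal specification of the modal model.

```python
# Toy sketch of the modal model's three stores. The capacity of 7 and
# the store names are illustrative assumptions, not model parameters.
from collections import deque

SHORT_TERM_CAPACITY = 7  # items; a common textbook figure

sensory = []                                     # large capacity, decays in milliseconds
short_term = deque(maxlen=SHORT_TERM_CAPACITY)   # oldest item displaced when full
long_term = set()                                # effectively unlimited, durable

def perceive(item):
    """New input enters sensory memory."""
    sensory.append(item)

def attend(item):
    """Attended items move from sensory to short-term memory;
    unattended sensory traces simply decay (here: are dropped)."""
    if item in sensory:
        sensory.remove(item)
        short_term.append(item)  # may displace the oldest item

def rehearse(item):
    """Rehearsal transfers an item from short-term to long-term storage."""
    if item in short_term:
        long_term.add(item)

for word in ["cat", "dog", "fish"]:
    perceive(word)
    attend(word)
rehearse("cat")
```

The `deque` with `maxlen` captures displacement: once the short-term buffer is full, each new item silently pushes out the oldest one, while only rehearsed items reach the durable store.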
Evidence
Anterograde amnesia
One form of evidence cited in favor of the existence of a short-term store comes from anterograde amnesia, the inability to learn new facts and episodes. Patients with this form of amnesia have an intact ability to retain small amounts of information over short time scales but have little ability to form longer-term memories. This is interpreted as showing that the short-term store is spared by the damage or disease while long-term storage is impaired, consistent with the two being separate.
Distraction tasks
Other evidence comes from experimental studies showing that some manipulations impair memory for the 3 to 5 most recently learned words of a list, while recall for words from earlier in the list is unaffected. Other manipulations affect only memory for earlier list words but do not affect memory for the most recent few words. These results suggest that different factors affect short-term recall and long-term recall, and together they indicate that long-term memory and short-term memory can vary independently of each other.
Models
Unitary model
Not all researchers agree that short- and long-term memory are separate systems. The alternative Unitary Model proposes that short-term memory consists of temporary activations of long-term representations. It has been difficult to identify a sharp boundary between short- and long-term memory. Eugen Tarnow, a physics researcher, reported that the recall probability versus latency curve is a straight line from 6 to 600 seconds, with the probability of failure to recall saturating only after 600 seconds. If two different stores were operating in this time domain, a discontinuity in this curve would be expected. Other research has shown that the detailed pattern of recall errors for a list recalled immediately after learning looks remarkably similar to that for a list recalled after 24 hours.

Further evidence for a unified store comes from experiments involving continual distractor tasks. In 1974, Bjork and Whitten, psychology researchers, presented subjects with word pairs to remember; before and after each word pair, subjects performed a simple multiplication task for 12 seconds. After the final word pair, subjects performed the multiplication distractor task for 20 seconds. They reported that both the recency effect and the primacy effect were sustained. These results are incompatible with a separate short-term memory, as the distractor items should have displaced some of the word pairs from the buffer, thereby weakening the associated strength of those items in long-term memory.
Ovid Tzeng reported an instance where the recency effect in free recall did not seem to result from a short-term memory store. Subjects were presented with four study-test periods of 10-word lists, each with a continual distractor task. At the end of each list, participants recalled as many of its words as possible. After recall of the fourth list, participants were asked to recall items from all four lists. Both the initial and final recall showed a recency effect. These results contradict a short-term memory model, under which no recency effect would be expected in either case.
Koppenaal and Glanzer attempted to explain these phenomena as a result of the subjects' adaptation to the distractor task, which allowed them to preserve at least some short-term memory capability. In their experiment, the long-term recency effect disappeared when the distractor after the last item differed from the distractors that preceded and followed the other items. Thapar and Greene challenged this theory. In one of their experiments, participants were given a different distractor task after every study item. According to Koppenaal and Glanzer's theory, no recency effect would be expected, as subjects would not have had time to adapt to the distractor; yet the recency effect remained in place in the experiment.
Another explanation
One proposed explanation for recency in a continual distractor condition, and for its disappearance in an end-only distractor task, is the influence of contextual and distinctive processes. According to this model, recency results from the similarity of the final items' processing context to that of the other items, and from the distinctive position of the final items relative to intermediate items. In the end distractor task, the processing context of the final items is no longer similar to that of the other list items, and retrieval cues for these items are no longer as effective as without the distractor. Therefore, recency recedes or vanishes. When distractor tasks are instead placed before and after each item, recency returns, because all the list items once again have a similar processing context.
Dual-store memory model
According to George Miller, whose 1956 paper popularized the theory of the "magic number seven", short-term memory is limited to a certain number of chunks of information, while long-term memory has a limitless store.
Atkinson–Shiffrin memory model
According to the dual-store memory model proposed in 1968 by Richard C. Atkinson and Richard Shiffrin, memories can reside in the short-term "buffer" for a limited time while they simultaneously strengthen their associations in LTM. When items are first presented, they enter short-term memory for approximately twenty to thirty seconds; because of its limited space, as new items enter, older ones are pushed out. The number of items that can be held in short-term memory averages between four and seven, though with practice and new skills that number can be increased. Each time an item in short-term memory is rehearsed, it is strengthened in long-term memory; similarly, the longer an item stays in short-term memory, the stronger its association becomes in long-term memory.
Baddeley's model of working memory
In 1974, Baddeley and Hitch proposed an alternative theory of short-term memory: Baddeley's model of working memory. According to this theory, short-term memory is divided into different slave systems for different types of input, with an executive control supervising what items enter and exit those systems. The slave systems include the phonological loop, the visuo-spatial sketchpad, and the episodic buffer.
Encoding of information
LTM encodes information semantically for storage, as researched by Baddeley. In vision, information must enter working memory before it can be stored in LTM. This is evidenced by the fact that the speed with which information is stored in LTM is determined by the amount of information that can be fit, at each step, into visual working memory. In other words, the larger the capacity of working memory for certain stimuli, the faster these materials will be learned.

Synaptic consolidation is the process by which items are transferred from short- to long-term memory. Within the first minutes or hours after acquisition, the engram is encoded within synapses, becoming resistant to interference from outside sources.
As LTM is subject to fading in the natural forgetting process, maintenance rehearsal may be needed to preserve long-term memories. Individual retrievals can take place in increasing intervals in accordance with the principle of spaced repetition. This can happen quite naturally through reflection or deliberate recall, often dependent on the perceived importance of the material. Using testing methods as a form of recall can lead to the testing effect, which aids long-term memory through information retrieval and feedback.
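The increasing intervals of spaced repetition can be illustrated with a small sketch. The doubling rule and one-day starting interval below are arbitrary assumptions chosen for illustration, not values drawn from the memory literature.

```python
# Toy spaced-repetition schedule: the gap before each review grows by a
# constant multiplier after every successful recall. The starting
# interval of 1 day and the doubling factor are illustrative choices.
def review_schedule(first_interval=1, multiplier=2, reviews=5):
    """Return the days (counted from initial learning) on which each
    review falls, with the gap between reviews growing each time."""
    days, interval = [], first_interval
    for _ in range(reviews):
        days.append((days[-1] if days else 0) + interval)
        interval *= multiplier
    return days

print(review_schedule())  # [1, 3, 7, 15, 31]
```

Each retrieval is pushed further out than the last, so material judged important is revisited just as it would otherwise begin to fade.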
In LTM, brain cells fire in specific patterns. When someone experiences something in the world, the brain responds by creating a pattern of specific nerves firing in a specific way to represent the experience. This is called distributed representation. Distributed representation can be explained through a scientific calculator. At the top of the calculator is a display in which the numbers typed in appear. This display is made up of many segments that light up to show a specific number: certain segments light up to show the number 4, while others light up to show the number 5. There may be overlap in the segments used, but ultimately these segments can generate a different pattern for each number. The encoding of specific episodic memories can be explained in the same way. When you try to remember an experience, perhaps your friend's birthday party a year ago, your brain activates a certain pattern of neurons. If you try to remember your mother's birthday party, another pattern of neurons fires, but there may be overlap because both are birthday parties. This kind of remembering is retrieval, because it involves recalling the specific distributed representation created during the encoding of the experience.
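The overlap between related patterns described above can be sketched numerically. The memories, unit indices, and the use of Jaccard similarity below are invented for illustration; they are not taken from any neural data.

```python
# Sketch of distributed representation: each memory is a pattern of
# active units drawn from the same pool. The unit indices are invented.
friend_birthday = {2, 5, 9, 14, 17}   # active "neurons" for one memory
mother_birthday = {2, 5, 11, 14, 20}  # related memory: partial overlap
grocery_trip    = {0, 3, 8, 19, 21}   # unrelated memory: no overlap

def overlap(a, b):
    """Fraction of units the two patterns share (Jaccard similarity)."""
    return len(a & b) / len(a | b)

print(overlap(friend_birthday, mother_birthday))  # noticeably above 0
print(overlap(friend_birthday, grocery_trip))     # 0.0
```

The two birthday parties share some active units (a "birthday party" component) while remaining distinguishable patterns, whereas the unrelated memory shares none.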