Hopfield network
A Hopfield network is a form of recurrent neural network, or a spin glass system, that can serve as a content-addressable memory. The Hopfield network, named for John Hopfield, consists of a single layer of neurons, where each neuron is connected to every other neuron except itself. These connections are bidirectional and symmetric, meaning the weight of the connection from neuron i to neuron j is the same as the weight from neuron j to neuron i. Patterns are associatively recalled by fixing certain inputs and letting the network evolve dynamically so as to minimize an energy function, towards local energy minimum states that correspond to stored patterns. Patterns are associatively learned by a Hebbian learning algorithm.
One of the key features of Hopfield networks is their ability to recover complete patterns from partial or noisy inputs, making them robust in the face of incomplete or corrupted data. Their connection to statistical mechanics, recurrent networks, and human cognitive psychology has led to their application in various fields, including physics, psychology, neuroscience, and machine learning theory and practice. Due to their binary-valued neurons, limited scalability, and incompatibility with gradient-based learning, classical Hopfield networks are rarely used in modern machine learning.
History
One origin of associative memory is human cognitive psychology. Frank Rosenblatt studied "closed-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule. Another model of associative memory is one where the output does not loop back to the input. W. K. Taylor proposed such a model trained by Hebbian learning in 1956. Karl Steinbuch, who wanted to understand learning and was inspired by watching his children learn, published the Lernmatrix in 1961; it was translated to English in 1963. Similar research was done with the correlogram of D. J. Willshaw et al. in 1969. Teuvo Kohonen trained an associative memory by gradient descent in 1974.
Another origin of associative memory was statistical mechanics. The Ising model was published in the 1920s as a model of magnetism, but it described only thermal equilibrium, which does not change with time. Roy J. Glauber in 1963 studied the Ising model evolving in time, as a process towards thermal equilibrium, adding in the component of time.
The second component to be added was adaptation to stimulus. This component was added independently by several researchers, including Rosenblatt, Kaoru Nakano, and Shun'ichi Amari. They proposed to modify the weights of an Ising model by a Hebbian learning rule as a model of associative memory. The same idea was published by William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper.
See Carpenter and Cowan for a technical description of some of these early works in associative memory.
The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random couplings. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In his 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.
A major advance in memory storage capacity was developed by Dmitry Krotov and Hopfield in 2016 through a change in network dynamics and energy function. This idea was further extended by Demircigil and collaborators in 2017. The continuous dynamics of large memory capacity models was developed in a series of papers between 2016 and 2020. Large memory storage capacity Hopfield networks are now called Dense Associative Memories or modern Hopfield networks.
In 2024, John J. Hopfield and Geoffrey E. Hinton were awarded the Nobel Prize in Physics for their foundational contributions to machine learning, such as the Hopfield network.
Structure
[Image:Hopfield-net-vector.svg|thumb|A Hopfield net with four units]The units in Hopfield nets are binary threshold units, i.e. the units only take on two different values for their states, and the value is determined by whether or not the unit's input exceeds its threshold. Discrete Hopfield nets describe relationships between binary neurons. At a certain time, the state of the neural net is described by a vector, which records which neurons are firing in a binary word of bits.
The units usually take on values of 1 or −1, and this convention will be used throughout this article; however, other literature might use units that take values of 0 and 1. The interactions $w_{ij}$ between neurons are "learned" via Hebb's law of association, such that, for a certain state $V^s$ and distinct nodes $i, j$:

$$w_{ij} = V_i^s V_j^s,$$

but $w_{ii} = 0$.
Once the network is trained, the weights $w_{ij}$ no longer evolve. If a new state of neurons $V'$ is introduced to the neural network, the net acts on neurons such that
- $V'_i \to 1$ if $\sum_j w_{ij} V'_j \ge \theta_i$
- $V'_i \to -1$ if $\sum_j w_{ij} V'_j < \theta_i$

where $\theta_i$ is the threshold of neuron $i$.
The connections in a Hopfield net typically have the following restrictions:
- $w_{ii} = 0$ for all $i$ (no unit has a connection with itself)
- $w_{ij} = w_{ji}$ for all $i, j$ (connections are symmetric)
The constraint that weights are symmetric guarantees that the energy function decreases monotonically while following the activation rules. A network with asymmetric weights may exhibit some periodic or chaotic behaviour; however, Hopfield found that this behavior is confined to relatively small parts of the phase space and does not impair the network's ability to act as a content-addressable associative memory system.
Hopfield also modeled neural nets for continuous values, in which the electric output of each neuron is not binary but some value between 0 and 1. He found that this type of network was also able to store and reproduce memorized states.
Notice that every pair of units i and j in a Hopfield network has a connection that is described by the connectivity weight $w_{ij}$. In this sense, the Hopfield network can be formally described as a complete undirected graph $G = \langle V, f \rangle$, where $V$ is a set of McCulloch–Pitts neurons and $f : V^2 \to \mathbb{R}$ is a function that links pairs of units to a real value, the connectivity weight.
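The structure described above can be sketched in plain Python (an illustrative sketch, not from the original article; function names are ours, and thresholds are assumed to be zero). The weight matrix is symmetric with a zero diagonal, and recall proceeds by asynchronous threshold updates:

```python
import random

def train_hebbian(patterns):
    """Build a symmetric weight matrix with zero diagonal from +/-1 patterns,
    following the Hebbian prescription w_ij = sum over patterns of p_i * p_j / n."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                       # no self-connections: w_ii = 0
                    w[i][j] += p[i] * p[j] / n   # symmetric: w_ij == w_ji
    return w

def recall(w, state, steps=100, seed=0):
    """Asynchronous updates: pick a random unit and set it by the threshold rule."""
    rng = random.Random(seed)
    s = list(state)
    n = len(s)
    for _ in range(steps):
        i = rng.randrange(n)
        field = sum(w[i][j] * s[j] for j in range(n))
        s[i] = 1 if field >= 0 else -1           # threshold theta_i assumed 0
    return s
```

For example, storing the five-unit pattern `[1, -1, 1, -1, 1]` and presenting a version with one corrupted unit, repeated updates restore the stored pattern.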
Updating
Updating one unit in the Hopfield network is performed using the following rule:

$$s_i \leftarrow \begin{cases} +1 & \text{if } \sum_j w_{ij} s_j \ge \theta_i, \\ -1 & \text{otherwise,} \end{cases}$$

where:
- $w_{ij}$ is the strength of the connection weight from unit j to unit i,
- $s_i$ is the state of unit i,
- $\theta_i$ is the threshold of unit i.
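The single-unit update rule amounts to comparing a weighted sum against a threshold. A minimal sketch (names are ours, not from the article):

```python
def update_unit(w_row, state, theta=0.0):
    """One asynchronous update of unit i under the threshold rule:
    return +1 if the weighted input meets the threshold, else -1.
    w_row is row i of the weight matrix; theta is unit i's threshold."""
    weighted_sum = sum(wij * sj for wij, sj in zip(w_row, state))
    return 1 if weighted_sum >= theta else -1
```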
Neurons "attract or repel each other" in state space
The weight between two units has a powerful impact upon the values of the neurons. Consider the connection weight $w_{ij}$ between two neurons i and j. If $w_{ij} > 0$, the updating rule implies that:
- when $s_j = 1$, the contribution of j to the weighted sum is positive. Thus, $s_i$ is pulled by j towards its value $s_j = 1$;
- when $s_j = -1$, the contribution of j to the weighted sum is negative. Then again, $s_i$ is pushed by j towards its value $s_j = -1$.

Thus, the values of neurons i and j will converge if the weight between them is positive, and will diverge if the weight is negative.
Convergence properties of discrete and continuous Hopfield networks
Bruck, in his paper in 1990, studied discrete Hopfield networks and proved a generalized convergence theorem based on the connection between the network's dynamics and cuts in the associated graph. This generalization covered both asynchronous and synchronous dynamics and presented elementary proofs based on greedy algorithms for max-cut in graphs. A subsequent paper further investigated the behavior of any neuron in both discrete-time and continuous-time Hopfield networks when the corresponding energy function is minimized during an optimization process. Bruck showed that neuron j changes its state if and only if it further decreases the following biased pseudo-cut, which the discrete Hopfield network minimizes for its synaptic weight matrix:

$$J_{\text{pseudo-cut}}(k) = \sum_{i \in C_1(k)} \sum_{j \in C_2(k)} w_{ij} + \sum_{j \in C_1(k)} \theta_j,$$

where $C_1(k)$ and $C_2(k)$ represent the sets of neurons which are −1 and +1, respectively, at time k. For further details, see the recent literature.
The discrete-time Hopfield network always minimizes exactly this biased pseudo-cut. The continuous-time Hopfield network always minimizes an upper bound to a weighted cut of the associated graph, in which the neuron states are passed through a zero-centered sigmoid function.
The complex Hopfield network, on the other hand, generally tends to minimize the so-called shadow-cut of the complex weight matrix of the net.
Energy
Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy", E, of the network:

$$E = -\frac{1}{2} \sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i.$$

This quantity is called "energy" because it either decreases or stays the same when network units are updated. Furthermore, under repeated updating the network will eventually converge to a state which is a local minimum in the energy function. Thus, if a state is a local minimum in the energy function, it is a stable state for the network. Note that this energy function belongs to a general class of models in physics under the name of Ising models; these in turn are a special case of Markov networks, since the associated probability measure, the Gibbs measure, has the Markov property.
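The quadratic energy is easy to evaluate directly, and a single threshold update can be checked to never raise it. A minimal sketch (function name and zero-threshold default are ours):

```python
def energy(w, s, theta=None):
    """E = -1/2 * sum_ij w_ij s_i s_j + sum_i theta_i s_i
    (the quadratic Hopfield energy; thresholds default to zero)."""
    n = len(s)
    theta = theta or [0.0] * n
    e = -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return e + sum(theta[i] * s[i] for i in range(n))
```

For a two-unit net with `w = [[0, 1], [1, 0]]`, the state `[1, -1]` has energy 1.0; flipping unit 1 toward its positive input field gives `[1, 1]` with energy −1.0, illustrating the monotone decrease.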
Hopfield network in optimization
Hopfield and Tank presented the Hopfield network application in solving the classical traveling-salesman problem in 1985. Since then, the Hopfield network has been widely used for optimization. The idea of using the Hopfield network in optimization problems is straightforward: if a constrained or unconstrained cost function can be written in the form of the Hopfield energy function E, then there exists a Hopfield network whose equilibrium points represent solutions to the corresponding optimization problem. Minimizing the Hopfield energy function both minimizes the objective function and satisfies the constraints, since the constraints are "embedded" into the synaptic weights of the network. Although including the optimization constraints in the synaptic weights in the best possible way is a challenging task, many difficult constrained optimization problems in different disciplines have been converted to the Hopfield energy function: associative memory systems, analog-to-digital conversion, the job-shop scheduling problem, quadratic assignment and other related NP-complete problems, the channel allocation problem in wireless networks, the mobile ad-hoc network routing problem, image restoration, system identification, combinatorial optimization, etc., just to name a few. However, while it is possible to convert hard optimization problems to Hopfield energy functions, this does not guarantee convergence to a solution.

Initialization and running
Initialization of the Hopfield networks is done by setting the values of the units to the desired start pattern. Repeated updates are then performed until the network converges to an attractor pattern. Convergence is generally assured, as Hopfield proved that the attractors of this nonlinear dynamical system are stable, not periodic or chaotic as in some other systems. Therefore, in the context of Hopfield networks, an attractor pattern is a final stable state, a pattern that cannot change any value within it under updating.

Training
Training a Hopfield net involves lowering the energy of states that the net should "remember". This allows the net to serve as a content-addressable memory system, that is to say, the network will converge to a "remembered" state if it is given only part of the state. The net can be used to recover from a distorted input the trained state that is most similar to that input. This is called associative memory because it recovers memories on the basis of similarity. For example, if we train a Hopfield net with five units so that a particular state is an energy minimum, and we then present the network with a nearby state differing in one unit, the network will converge to the stored state. Thus, the network is properly trained when the energies of the states which the network should remember are local minima. Note that, in contrast to Perceptron training, the thresholds of the neurons are never updated.

Learning rules
There are various different learning rules that can be used to store information in the memory of the Hopfield network. It is desirable for a learning rule to have both of the following two properties:
- Local: A learning rule is local if each weight is updated using information available to neurons on either side of the connection that is associated with that particular weight.
- Incremental: New patterns can be learned without using information from the old patterns that have also been used for training. That is, when a new pattern is used for training, the new values for the weights only depend on the old values and on the new pattern.

These properties are desirable, since a learning rule satisfying them is more biologically plausible. For example, since the human brain is always learning new concepts, one can reason that human learning is incremental. A learning system that was not incremental would generally be trained only once, with a huge batch of training data.
Hebbian learning rule for Hopfield networks
Hebbian theory was introduced by Donald Hebb in 1949 in order to explain "associative learning", in which simultaneous activation of neuron cells leads to pronounced increases in synaptic strength between those cells. It is often summarized as "Neurons that fire together wire together. Neurons that fire out of sync fail to link." The Hebbian rule is both local and incremental. For Hopfield networks, it is implemented in the following manner when learning $n$ binary patterns:

$$w_{ij} = \frac{1}{n} \sum_{\mu=1}^{n} \epsilon_i^{\mu} \epsilon_j^{\mu},$$

where $\epsilon_i^{\mu}$ represents bit i from pattern $\mu$.
If the bits corresponding to neurons i and j are equal in pattern $\mu$, then the product $\epsilon_i^{\mu} \epsilon_j^{\mu}$ will be positive. This would, in turn, have a positive effect on the weight $w_{ij}$, and the values of i and j will tend to become equal. The opposite happens if the bits corresponding to neurons i and j are different.
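Because the rule is incremental, each new pattern can be folded into an existing weight matrix without revisiting old patterns. A sketch of one such incremental step (the function name is ours):

```python
def hebbian_step(w, pattern):
    """Incrementally add one +/-1 pattern to an existing weight matrix:
    w_ij += eps_i * eps_j / n for i != j. Local (uses only the two
    endpoint bits) and incremental (old patterns are not needed)."""
    n = len(pattern)
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] += pattern[i] * pattern[j] / n
    return w
```

Starting from zero weights and adding `[1, 1, -1, -1]` yields, for instance, `w[0][1] = 0.25` (equal bits) and `w[0][2] = -0.25` (opposite bits), with the diagonal kept at zero.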
Storkey learning rule
This rule was introduced by Amos Storkey in 1997 and is both local and incremental. Storkey also showed that a Hopfield network trained using this rule has a greater capacity than a corresponding network trained using the Hebbian rule. The weight matrix of an attractor neural network is said to follow the Storkey learning rule if it obeys:

$$w_{ij}^{\nu} = w_{ij}^{\nu-1} + \frac{1}{n}\epsilon_i^{\nu}\epsilon_j^{\nu} - \frac{1}{n}\epsilon_i^{\nu}h_{ji}^{\nu} - \frac{1}{n}\epsilon_j^{\nu}h_{ij}^{\nu},$$

where $h_{ij}^{\nu} = \sum_{k=1,\, k \neq i,j}^{n} w_{ik}^{\nu-1}\epsilon_k^{\nu}$ is a form of local field at neuron i.
This learning rule is local, since the synapses take into account only neurons at their sides. The rule makes use of more information from the patterns and weights than the generalized Hebbian rule, due to the effect of the local field.
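A sketch of one Storkey update in Python, assuming the local-field form $h_{ij} = \sum_{k \neq i,j} w_{ik}\,\epsilon_k$ described above (the function name is ours, and other variants of the rule exist in the literature):

```python
def storkey_step(w, p):
    """One Storkey learning step for a +/-1 pattern p:
    w_ij <- w_ij + (p_i p_j - p_i h_ji - p_j h_ij) / n,
    where h_ij = sum over k != i, j of w_ik * p_k (local field)."""
    n = len(p)
    h = [[sum(w[i][k] * p[k] for k in range(n) if k not in (i, j))
          for j in range(n)] for i in range(n)]
    return [[w[i][j] + (p[i] * p[j] - p[i] * h[j][i] - p[j] * h[i][j]) / n
             for j in range(n)] for i in range(n)]
```

Starting from zero weights, the local fields vanish and the step reduces to the Hebbian increment; the extra field terms only come into play once earlier patterns have shaped the weights.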
Spurious patterns
Patterns that the network uses for training (called retrieval states) become attractors of the system. Repeated updates would eventually lead to convergence to one of the retrieval states. However, sometimes the network will converge to spurious patterns, different from the training patterns. In fact, the number of spurious patterns can be exponential in the number of stored patterns, even if the stored patterns are orthogonal. The energy in these spurious patterns is also a local minimum. For each stored pattern x, the negation −x is also a spurious pattern.

A spurious state can also be a linear combination of an odd number of retrieval states. For example, when using 3 patterns $\mu_1, \mu_2, \mu_3$, one can get the following spurious state:

$$\epsilon_i^{\text{mix}} = \pm\operatorname{sgn}\!\left(\pm\epsilon_i^{\mu_1} \pm \epsilon_i^{\mu_2} \pm \epsilon_i^{\mu_3}\right).$$

Spurious patterns formed from an even number of retrieval states do not arise in this way, since the mixed sum can be zero, leaving the sign undefined.
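The odd mixture state is simply a component-wise majority vote over the stored patterns, which can be sketched directly (names are ours):

```python
def sign(x):
    """Sign with the convention sign(0) = +1, matching the update rule."""
    return 1 if x >= 0 else -1

def mixture(p1, p2, p3):
    """Component-wise majority vote of three stored +/-1 patterns -- the
    kind of odd linear combination that can appear as a spurious attractor.
    With three +/-1 values the sum is never zero, so the sign is well defined;
    with an even number of patterns it could be zero, which is why such
    mixtures do not arise."""
    return [sign(a + b + c) for a, b, c in zip(p1, p2, p3)]
```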
Capacity
The network capacity of the Hopfield network model is determined by the number of neurons and connections within a given network. Therefore, the number of memories that can be stored depends on the neurons and connections. Furthermore, it was shown that the recall accuracy between vectors and nodes was 0.138 (approximately 138 vectors can be recalled from storage for every 1000 nodes). Therefore, it is evident that many mistakes will occur if one tries to store a large number of vectors. When the Hopfield model does not recall the right pattern, it is possible that an intrusion has taken place, since semantically related items tend to confuse the individual, and recollection of the wrong pattern occurs. Therefore, the Hopfield network model is shown to confuse one stored item with that of another upon retrieval. Perfect recall and higher capacity, > 0.14, can be loaded into the network by the Storkey learning method; ETAM experiments are also reported in the literature. Later models inspired by the Hopfield network were devised to raise the storage limit and reduce the retrieval error rate, with some being capable of one-shot learning.

The storage capacity using Hebb's rule can be given as

$$C \cong \frac{n}{2 \ln n},$$

where n is the number of neurons in the net. The storage capacity using Storkey's rule can be given as

$$C \cong \frac{n}{\sqrt{2 \ln n}},$$

where n is the number of neurons in the net.
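These two capacity estimates are easy to compare numerically; a minimal sketch (function names are ours):

```python
import math

def hebb_capacity(n):
    """Approximate pattern capacity under Hebbian learning: n / (2 ln n)."""
    return n / (2 * math.log(n))

def storkey_capacity(n):
    """Approximate pattern capacity under the Storkey rule: n / sqrt(2 ln n)."""
    return n / math.sqrt(2 * math.log(n))
```

For a net of 1000 neurons these give roughly 72 patterns under Hebbian learning versus roughly 269 under the Storkey rule, illustrating the capacity advantage claimed above.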
Human memory
The Hopfield network is a model for human associative learning and recall. It accounts for associative memory through the incorporation of memory vectors. A memory vector can be partially cued, which sparks the retrieval of the most similar vector in the network. However, as discussed below, this process can also produce intrusions. In associative memory for the Hopfield network, there are two types of operations: auto-association and hetero-association. The first is when a vector is associated with itself, and the latter is when two different vectors are associated in storage. Furthermore, both types of operations can be stored within a single memory matrix, but only if that representation matrix is not one or the other of the operations alone, but rather the combination of the two.

Hopfield's network model utilizes the same learning rule as Hebb's (1949) learning rule, which characterised learning as a result of the strengthening of the weights in cases of neuronal activity.
Rizzuto and Kahana were able to show that the neural network model can account for repetition on recall accuracy by incorporating a probabilistic-learning algorithm. During the retrieval process, no learning occurs. As a result, the weights of the network remain fixed, showing that the model is able to switch from a learning stage to a recall stage. By adding contextual drift they were able to show the rapid forgetting that occurs in a Hopfield model during a cued-recall task. The entire network contributes to the change in the activation of any single node.
McCulloch and Pitts' dynamical rule, which describes the behavior of neurons, shows how the activations of multiple neurons map onto a new neuron's firing rate, and how the weights strengthen the synaptic connections to the newly activated neuron. Hopfield used the McCulloch–Pitts dynamical rule to show how retrieval is possible in the Hopfield network; however, he applied it repetitiously, and used a nonlinear activation function instead of a linear one. This created the Hopfield dynamical rule, and with it Hopfield was able to show that, with the nonlinear activation function, the dynamical rule will always modify the values of the state vector in the direction of one of the stored patterns.
Dense associative memory or modern Hopfield network
Hopfield networks are recurrent neural networks with dynamical trajectories converging to fixed point attractor states and described by an energy function. The state of each model neuron $i$ is defined by a time-dependent variable $V_i$, which can be chosen to be either discrete or continuous. A complete model describes the mathematics of how the future state of activity of each neuron depends on the known present or previous activity of all the neurons.

In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons. An energy function quadratic in the $V_i$ was defined, and the dynamics consisted of changing the activity of each single neuron $i$ only if doing so would lower the total energy of the system. This same idea was extended to the case of $V_i$ being a continuous variable representing the output of neuron $i$, with $V_i$ a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the $V_i$, and a second term which depends on the gain function (the neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features. In contrast, by increasing the number of parameters in the model so that there are not just pair-wise but also higher-order interactions between the neurons, one can increase the memory storage capacity.
Dense Associative Memories are generalizations of the classical Hopfield Networks that break the linear scaling relationship between the number of input features and the number of stored memories. This is achieved by introducing stronger non-linearities leading to super-linear memory storage capacity as a function of the number of feature neurons, in effect increasing the order of interactions between the neurons. The network still requires a sufficient number of hidden neurons.
The key theoretical idea behind dense associative memory networks is to use an energy function and an update rule that are more sharply peaked around the stored memories in the space of neurons' configurations compared to the classical model, as demonstrated when the higher-order interactions and the resulting energy landscapes are explicitly modelled.
Discrete variables
A simple example of the modern Hopfield network can be written in terms of binary variables $\sigma_i \in \{-1, +1\}$ that represent the active and inactive state of the model neuron:

$$E = -\sum_{\mu} F\Big(\sum_{i} \xi_{\mu i}\, \sigma_i\Big).$$

In this formula the weights $\xi_{\mu i}$ represent the matrix of memory vectors (index $\mu$ enumerates different memories, and index $i$ enumerates the contents of each memory), and the function $F(x)$ is a rapidly growing non-linear function. The update rule for individual neurons can be written in the following form:

$$\sigma_i \leftarrow \operatorname{sgn}\!\big[E(\sigma_i = -1) - E(\sigma_i = +1)\big],$$

which states that in order to calculate the updated state of the $i$-th neuron the network compares two energies: the energy of the network with the $i$-th neuron in the ON state and the energy of the network with the $i$-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the $i$-th neuron selects the state that has the lowest of the two energies. In the limiting case when the non-linear energy function is quadratic, these equations reduce to the familiar energy function and the update rule for the classical binary Hopfield network.
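The energy comparison underlying the update can be sketched directly, here with a cubic interaction function $F(x) = x^3$ as an illustrative choice (function names and the particular $F$ are ours; rectified polynomials are also used in the literature):

```python
def dense_energy(memories, state, power=3):
    """E = -sum over memories of F(<xi_mu, sigma>) with F(x) = x**power,
    a sketch of the dense associative memory energy."""
    return -sum(sum(m * s for m, s in zip(mem, state)) ** power
                for mem in memories)

def dense_update(memories, state, i, power=3):
    """Set unit i to whichever of +/-1 yields the lower total energy,
    holding the remaining units fixed."""
    on = state[:i] + [1] + state[i + 1:]
    off = state[:i] + [-1] + state[i + 1:]
    e_on = dense_energy(memories, on, power)
    e_off = dense_energy(memories, off, power)
    return on if e_on <= e_off else off
```

With a single stored memory `[1, -1, 1, -1]` and a state corrupted in one unit, one update of that unit restores the memory, because the sharper (cubic) energy strongly favors the larger overlap.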
The memory storage capacity of these networks can be calculated for random binary patterns. For the power energy function $F(x) = x^n$, the maximal number of memories that can be stored and retrieved from this network without errors is given by

$$K^{\max} \approx \frac{1}{2(2n-3)!!}\, \frac{N^{n-1}}{\ln N}.$$

For an exponential energy function $F(x) = e^x$, the memory storage capacity is exponential in the number of feature neurons:

$$K^{\max} \approx 2^{N/2}.$$
Continuous variables
Modern Hopfield networks or dense associative memories can be best understood in continuous variables and continuous time. Consider the network architecture, shown in Fig. 1, and the equations for the evolution of the neurons' states:

$$\tau_f \frac{dx_i}{dt} = \sum_{\mu} \xi_{i\mu} f_\mu - x_i + I_i, \qquad \tau_h \frac{dh_\mu}{dt} = \sum_{i} \xi_{\mu i} g_i - h_\mu,$$

where the currents of the feature neurons are denoted by $x_i$, and the currents of the memory (hidden) neurons are denoted by $h_\mu$. There are no synaptic connections among the feature neurons or the memory neurons. A matrix $\xi_{\mu i}$ denotes the strength of synapses from a feature neuron $i$ to the memory neuron $\mu$. The synapses are assumed to be symmetric, so that the same value characterizes a different physical synapse from the memory neuron $\mu$ to the feature neuron $i$. The outputs of the memory neurons and the feature neurons are denoted by $f_\mu$ and $g_i$, which are non-linear functions of the corresponding currents. In general these outputs can depend on the currents of all the neurons in that layer. It is convenient to define these activation functions as derivatives of the Lagrangian functions for the two groups of neurons:

$$f_\mu = \frac{\partial L_h}{\partial h_\mu}, \qquad g_i = \frac{\partial L_v}{\partial x_i}.$$

This way the specific form of the equations for the neurons' states is completely defined once the Lagrangian functions are specified. Finally, the time constants for the two groups of neurons are denoted by $\tau_f$ and $\tau_h$, and $I_i$ is the input current to the network that can be driven by the presented data.

General systems of non-linear differential equations can have many complicated behaviors that depend on the choice of the non-linearities and the initial conditions. For Hopfield networks, however, this is not the case: the dynamical trajectories always converge to a fixed point attractor state. This property is achieved because these equations are specifically engineered so that they have an underlying energy function:

$$E(t) = \Big[\sum_i x_i g_i - L_v\Big] + \Big[\sum_\mu h_\mu f_\mu - L_h\Big] - \sum_{\mu, i} f_\mu\, \xi_{\mu i}\, g_i.$$

The terms grouped into square brackets represent a Legendre transform of the Lagrangian function with respect to the states of the neurons.
If the Hessian matrices of the Lagrangian functions are positive semi-definite, the energy function is guaranteed to decrease on the dynamical trajectory ($dE/dt \le 0$). This property makes it possible to prove that the system of dynamical equations describing the temporal evolution of neurons' activities will eventually reach a fixed point attractor state.
In certain situations one can assume that the dynamics of hidden neurons equilibrates at a much faster time scale compared to the feature neurons, $\tau_h \ll \tau_f$. In this case the steady-state solution of the second equation in the system can be used to express the currents of the hidden units through the outputs of the feature neurons. This makes it possible to reduce the general theory to an effective theory for feature neurons only. The resulting effective update rules and the energies for various common choices of the Lagrangian functions are shown in Fig. 2. In the case of the log-sum-exponential Lagrangian function, the update rule for the states of the feature neurons is the attention mechanism commonly used in many modern AI systems.
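Under the log-sum-exponential Lagrangian, one effective update replaces the state with a softmax-weighted average of the stored memory vectors, which is the same computation as an attention head. A minimal sketch (names are ours, and `beta` is an assumed inverse-temperature parameter):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def modern_hopfield_update(memories, query, beta=1.0):
    """One effective update for the feature neurons: score the query against
    every stored memory vector, softmax the scaled scores, and return the
    softmax-weighted average of the memories (the attention computation)."""
    sims = [sum(m * q for m, q in zip(mem, query)) for mem in memories]
    weights = softmax([beta * s for s in sims])
    dim = len(query)
    return [sum(w * mem[d] for w, mem in zip(weights, memories))
            for d in range(dim)]
```

With memories at the two unit basis vectors and a query close to the first, a single update with a moderately large `beta` moves the state almost entirely onto the nearest memory.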