
Hopfield Networks: Fundamentals and Applications of The Neural Network That Stores Memories
Ebook · 218 pages · 2 hours · Artificial Intelligence


About this ebook

What Is a Hopfield Network


John Hopfield popularized the Hopfield network in 1982. It is a type of recurrent artificial neural network and a spin glass system. The Hopfield network was first described by Shun'ichi Amari in 1972 and by Little in 1974, and it builds on the work of Ernst Ising and Wilhelm Lenz on the Ising model. Hopfield networks are content-addressable ("associative") memory systems with either continuous variables or binary threshold nodes. Hopfield networks also serve as a model for understanding human memory.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Hopfield Network


Chapter 2: Unsupervised Learning


Chapter 3: Ising Model


Chapter 4: Hebbian Theory


Chapter 5: Boltzmann Machine


Chapter 6: Backpropagation


Chapter 7: Multilayer Perceptron


Chapter 8: Quantum Neural Network


Chapter 9: Autoencoder


Chapter 10: Modern Hopfield Network


(II) Answers to the public's top questions about Hopfield networks.


(III) Real-world examples of the use of Hopfield networks in many fields.


Who This Book is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of any kind of Hopfield network.


What Is the Artificial Intelligence Series


The Artificial Intelligence eBook series provides comprehensive coverage of over 200 topics. Each ebook covers a specific artificial intelligence topic in depth and is written by experts in the field. The series aims to give readers a thorough understanding of the concepts, techniques, history, and applications of artificial intelligence. Topics covered include machine learning, deep learning, neural networks, computer vision, natural language processing, robotics, ethics, and more. The ebooks are written for professionals, students, and anyone interested in learning about the latest developments in this rapidly advancing field.
The Artificial Intelligence eBook series provides an in-depth yet accessible exploration, from the fundamental concepts to the state-of-the-art research. With over 200 volumes, readers gain a thorough grounding in all aspects of Artificial Intelligence. The ebooks are designed to build knowledge systematically, with later volumes building on the foundations laid by earlier ones. This comprehensive series is an indispensable resource for anyone seeking to develop expertise in artificial intelligence.

Language: English
Publisher: One Billion Knowledgeable
Release date: Jun 20, 2023


    Book preview

    Hopfield Networks - Fouad Sabry

    Chapter 1: Hopfield network

    A Hopfield network is a form of recurrent artificial neural network, described by Little in 1974 and made famous by John Hopfield in 1982. It is also known as an Amari–Hopfield network, an Ising model of a neural network, or an Ising–Lenz–Little model.

    In 1972, Shun'ichi Amari was the first to propose using a recurrent neural network as a model of learning and memory. Hopfield networks with large memory and storage capacity are now called Dense Associative Memories, also known as modern Hopfield networks.

    Hopfield networks use binary threshold units, i.e., the units can take on only two distinct values for their states, and the value is determined by whether or not the unit's input exceeds its threshold U_i.

    Discrete Hopfield nets describe relationships between binary (firing or not-firing) neurons 1, 2, \ldots, i, j, \ldots, N.

    At any given time, the state of the neural net is described by a vector V, which records which neurons are firing in a binary word of N bits.

    The interactions w_{ij} between neurons have units that usually take on values of 1 or −1, and this convention is adhered to throughout this chapter.

    However, other conventions may use units with values of 0 and 1 instead.

    These interactions are learned via Hebb's law of association, such that, for a certain state V^s and distinct nodes i, j,

    w_{ij} = V_i^s V_j^s

    but w_{ii} = 0.

    (Note that the Hebbian learning rule takes the form

    w_{ij} = (2V_i^s - 1)(2V_j^s - 1)

    when the units assume values in \{0, 1\}.)
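    As a concrete illustration of the two forms of the rule above, here is a minimal Python/NumPy sketch; the array names are illustrative and not taken from the book:

```python
import numpy as np

# A single stored state with units in {-1, +1}.
V = np.array([1, -1, 1, -1])

# Hebbian weights for +/-1 units: w_ij = V_i * V_j, with w_ii = 0.
W = np.outer(V, V).astype(float)
np.fill_diagonal(W, 0.0)

# The same state expressed with units in {0, 1}.
V01 = (V + 1) // 2

# Equivalent rule for 0/1 units: w_ij = (2*V_i - 1)(2*V_j - 1).
W01 = np.outer(2 * V01 - 1, 2 * V01 - 1).astype(float)
np.fill_diagonal(W01, 0.0)

assert np.array_equal(W, W01)  # both conventions give the same weights
```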

    After the network has been trained, the weights w_{ij} no longer evolve.

    If a new state of neurons V^{s'} is presented to the neural network, the net acts on the neurons such that

    V_i^{s'} \rightarrow 1 if \sum_j w_{ij} V_j^{s'} > U_i

    V_i^{s'} \rightarrow -1 if \sum_j w_{ij} V_j^{s'} < U_i

    where U_i is the threshold value of the i-th neuron (often taken to be 0).

    Because of this, Hopfield networks are able to recall states previously stored in the interaction matrix: if a new state V^{s'} is subjected to the interaction matrix, each neuron will change until it matches the original state V^s (see the Updates section below).

    The following are the usual constraints placed on the connections in a Hopfield net:

    w_{ii} = 0, \forall i (no unit has a connection with itself)

    w_{ij} = w_{ji}, \forall i, j (connections are symmetric)

    The requirement that the weights be symmetric guarantees that the energy function decreases monotonically while the activation rules are followed. A network with asymmetric weights may display periodic or chaotic behavior; however, Hopfield found that this behavior is confined to relatively small parts of the phase space and does not impair the network's ability to act as a content-addressable associative memory system.

    Additionally, Hopfield modeled neural networks with continuous values, in which the electric output of each neuron is not binary but rather some value between 0 and 1. He found that this kind of network was also able to store and reproduce previously learned states.

    Notice that every pair of units i and j in a Hopfield network has a connection that is described by the connectivity weight w_{ij}.

    To put it another way, the Hopfield network can be formally described as a complete undirected graph G = \langle V, f \rangle, where V is a set of McCulloch–Pitts neurons and f : V^2 \rightarrow \mathbb{R} is a function that maps pairs of units to a real value, the connection weight.

    Updating one unit (a node in the graph simulating an artificial neuron) in the Hopfield network is performed using the following rule:

    s_i \leftarrow \begin{cases} +1 & \text{if } \sum_j w_{ij} s_j \geq \theta_i, \\ -1 & \text{otherwise,} \end{cases}

    where:

    w_{ij} is the strength of the connection weight from unit j to unit i.

    s_{i} is the state of unit i.

    \theta _{i} is the threshold of unit i.

    There are two distinct methods available for implementing updates in the Hopfield network:

    Asynchronous: only one unit is updated at a time. This unit may be chosen at random, or a pre-defined order may be imposed from the very beginning.

    Synchronous: all units are updated at the same moment. This requires a central clock in the system to maintain synchronization. Some consider this method less realistic, since there is no observable global clock that influences analogous biological or physical systems of interest.
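    The two update schemes can be sketched in Python/NumPy as follows; this is an illustrative implementation of the threshold rule above, not code from the book, and the function names and example weights are assumptions:

```python
import numpy as np

def async_update(W, s, theta, rng):
    """Asynchronous update: pick one unit at random and apply the threshold rule."""
    i = rng.integers(len(s))
    s = s.copy()
    s[i] = 1 if W[i] @ s >= theta[i] else -1
    return s

def sync_update(W, s, theta):
    """Synchronous update: all units are updated at the same moment."""
    return np.where(W @ s >= theta, 1, -1)

# Example: a 4-unit network with symmetric weights, zero diagonal, zero thresholds.
rng = np.random.default_rng(0)
W = np.array([[ 0.,  1., -1.,  1.],
              [ 1.,  0., -1.,  1.],
              [-1., -1.,  0., -1.],
              [ 1.,  1., -1.,  0.]])
theta = np.zeros(4)
s = np.array([1, -1, 1, -1])
for _ in range(10):                 # repeated asynchronous updates
    s = async_update(W, s, theta, rng)
```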

    The weight between two units has a significant effect on the values of the neurons.

    Consider the connection weight w_{ij} between two neurons i and j.

    If w_{ij} > 0, the updating rule implies that:

    when s_j = 1, the contribution of j to the weighted sum is positive;

    thus, s_i is pulled by j towards its value s_i = 1;

    when s_j = -1, the contribution of j to the weighted sum is negative;

    then, s_i is pushed by j towards its value s_i = -1.

    Therefore, the values of neurons i and j will converge if the weight between them is positive. Similarly, they will diverge if the weight is negative.

    When proving the convergence of the discrete Hopfield network in his 1990 study, Bruck shed light on the behavior of a neuron in the discrete Hopfield network, showing that the network minimizes the following pseudo-cut of its synaptic weight matrix:

    J_{\text{pseudo-cut}}(k) = \sum_{i \in C_1(k)} \sum_{j \in C_2(k)} w_{ij} + \sum_{j \in C_1(k)} \theta_j

    where C_1(k) and C_2(k) represent the sets of neurons that are −1 and +1, respectively, at time k.

    For additional details, see the most recent article. The discrete-time Hopfield network always minimizes exactly the following pseudo-cut:

    U(k) = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij} (s_i(k) - s_j(k))^2 + 2 \sum_{j=1}^{N} \theta_j s_j(k)

    The continuous-time Hopfield network always minimizes an upper bound to the following weighted cut:

    V(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij} (f(s_i(t)) - f(s_j(t)))^2 + 2 \sum_{j=1}^{N} \theta_j f(s_j(t))

    where f(\cdot) is a zero-centered sigmoid function.

    The complex Hopfield network, on the other hand, generally tends to minimize the so-called shadow-cut of the net's complex weight matrix.

    Hopfield networks associate a scalar value with each state of the network, referred to as the network's energy and denoted by E:

    E = -\frac{1}{2} \sum_{i,j} w_{ij} s_i s_j - \sum_i \theta_i s_i

    This quantity is called energy because it either decreases or stays the same whenever network units are updated. Furthermore, under repeated updating, the network eventually converges to a state that is a local minimum of the energy function (which is considered to be a Lyapunov function). A state is therefore stable for the network if it corresponds to a local minimum of the energy function. Note that this energy function belongs to a general class of models in physics known as Ising models; these, in turn, are a special case of Markov networks, since the associated probability measure, the Gibbs measure, has the Markov property.
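    As a small illustrative sketch (not from the book), the energy of a state can be computed directly from this formula:

```python
import numpy as np

def energy(W, s, theta):
    """E = -1/2 * sum_ij w_ij s_i s_j - sum_i theta_i s_i."""
    return -0.5 * s @ W @ s - theta @ s

# Repeated asynchronous updates never increase this value,
# which is why the network settles into a local energy minimum.
```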

    In 1985, Hopfield and Tank presented an application of the Hopfield network to solving the classic traveling-salesman problem.

    To run a Hopfield network, you first set the values of the units to the desired start pattern. The network is then updated iteratively until it converges on an attractor pattern. Hopfield proved that the attractors of this nonlinear dynamical system are stable, as opposed to periodic or chaotic as in certain other systems, so convergence is generally assured. In the context of Hopfield networks, an attractor pattern is therefore a final stable state, a pattern that cannot change any value within it under updating.

    To train a Hopfield net, you lower the energy of the states that the net should remember. This enables the net to function as a content-addressable memory system: the network will eventually arrive at a remembered state even if it is only given a portion of that state. The net can thus recover from a distorted input to the trained state that is most similar to that input. Because it recalls states on the basis of similarity, this kind of memory is referred to as associative memory. For instance, if we train a Hopfield net with five units so that the state (1, 1, 1, 1, 1) is an energy minimum, and we then give the network a slightly distorted version of that state, it will converge back to (1, 1, 1, 1, 1), because that state is an energy minimum. The network is therefore considered to be properly trained when the energy of the states it should remember is at a local minimum. Note that, in contrast to training with perceptrons, the thresholds of the neurons are never modified.
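    The five-unit example above can be reproduced with a short, illustrative NumPy sketch; the particular distorted input and the variable names are assumptions, not taken from the book:

```python
import numpy as np

# Training: make the state (1, 1, 1, 1, 1) an energy minimum via the Hebbian rule.
pattern = np.ones(5)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)
theta = np.zeros(5)

# Recall: start from a distorted version of the stored state.
s = np.array([-1., 1., 1., -1., 1.])

# Repeated asynchronous updates until no unit changes any more.
changed = True
while changed:
    changed = False
    for i in range(len(s)):
        new = 1.0 if W[i] @ s >= theta[i] else -1.0
        if new != s[i]:
            s[i] = new
            changed = True

print(s)  # converges back to [1. 1. 1. 1. 1.]
```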

    A wide variety of learning rules can be used to store information in the memory of a Hopfield network. It is desirable for a learning rule to have both of the following two properties:

    Local: a learning rule is local if each weight is updated using only information available to the neurons on either side of the connection associated with that particular weight.

    Incremental: new patterns can be learned without using information from previously learned patterns that were used for training in the past. That is, when a new pattern is used for training, the new weight values depend only on the old values and on the new pattern.

    These properties are desirable because a learning rule satisfying them is more likely to exist in the natural world. For instance, one may infer that human learning is incremental, given that the human brain is always acquiring new concepts. A non-incremental learning system, by contrast, would generally be trained only once, with a large batch of training data.

    In 1949, Donald Hebb presented the Hebbian hypothesis in an effort to explain associative learning, a phenomenon in which the simultaneous stimulation of neurons leads to significant increases in the synaptic strength between those cells. It is often summarized as: neurons that fire together wire together; neurons that fire out of sync fail to link.

    The Hebbian rule is both local and incremental.

    For Hopfield networks, it is implemented in the following manner when learning n binary patterns:

    w_{ij} = \frac{1}{n} \sum_{\mu=1}^{n} \epsilon_i^{\mu} \epsilon_j^{\mu}

    where \epsilon_i^{\mu} represents bit i from pattern \mu.

    If the bits corresponding to neurons i and j are equal in pattern \mu, then the product \epsilon_i^{\mu} \epsilon_j^{\mu} will be positive.

    This would, in turn, have a positive effect on the weight w_{ij} and the values of i and j will tend to become equal.

    If the bits belonging to neurons i and j are different, the reverse will occur.
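    A minimal sketch of this rule, assuming the patterns are stored as rows of a NumPy array (the names are illustrative, not from the book):

```python
import numpy as np

def hebbian_weights(patterns):
    """w_ij = (1/n) * sum over patterns of eps_i * eps_j, with zero diagonal."""
    n = len(patterns)                                  # number of patterns
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

# Example: two 6-unit patterns with +/-1 entries.
patterns = np.array([[1, -1,  1, -1,  1, -1],
                     [1,  1, -1, -1,  1,  1]], dtype=float)
W = hebbian_weights(patterns)
```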

    The Storkey learning rule, which is also local and incremental, was introduced by Amos Storkey in 1997. Storkey also showed that a Hopfield network trained with this rule has a higher capacity than a corresponding network trained with the Hebbian rule. If the weight matrix of an attractor neural network obeys the Storkey learning rule, then the network
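    The rule itself is not stated in this excerpt. For orientation only, the sketch below shows one common formulation of the Storkey (1997) rule, in which each new pattern updates the weights through a local-field correction term; this is a reconstruction from the literature rather than the book's own definition, and the names are illustrative:

```python
import numpy as np

def storkey_update(W, eps):
    """Incorporate one +/-1 pattern `eps` into weight matrix W (Storkey rule sketch).

    Assumed form: w_ij <- w_ij + (eps_i*eps_j - eps_i*h_ji - h_ij*eps_j) / N,
    where h_ij = sum over k != i, j of w_ik * eps_k is a local field.
    """
    N = len(eps)
    h = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # Local field at unit i, excluding the contributions of units i and j.
            h[i, j] = W[i] @ eps - W[i, i] * eps[i] - W[i, j] * eps[j]
    W_new = W.copy()
    for i in range(N):
        for j in range(N):
            if i != j:
                W_new[i, j] += (eps[i] * eps[j]
                                - eps[i] * h[j, i]
                                - h[i, j] * eps[j]) / N
    return W_new
```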
