BACHELOR OF TECHNOLOGY
In
Submitted by
M. Satya Sree N. Vijaya Bhaskar
(22341A04C1) (22341A04C3)
M. Sudheer M. Vykuntham
(22341A04A8) (22341A04A3)
CERTIFICATE
This is to certify that the mini project entitled Design And Implementation Of CMOS Based
Vaikuntham (22341A04A3) has been carried out in partial fulfilment of the requirements for the
Rajam, affiliated to JNTU-GV, Vizianagaram, and is a record of bona fide work carried out by them
under my guidance & supervision. The results embodied in this report have not been submitted to
any other university or institute for the award of any degree.
ACKNOWLEDGEMENT

We would like to sincerely thank our Associate Dean Academics, Dr. M. V. Nageswara Rao,
for extending his support and providing all the necessary facilities that led to the successful
completion of our project work.
We would like to take this opportunity to thank our beloved Principal, Dr. C. L. V. R. S. V
Prasad, for his great support in completing our mini project and for giving us the opportunity
to carry out this project work.
We would like to take this opportunity to thank our beloved Director Education, Dr. J. Girish,
for providing all the necessary facilities and great support in completing the mini project
work.
We would like to thank all the faculty members and the non-teaching staff of the Department
of Electronics and Communication Engineering for their direct and indirect support in
completing this mini project work.
Finally, we would like to thank all of our friends and family members for their continuous help
and encouragement.
ABSTRACT
This project presents the design and implementation of CMOS-based circuits that model
biological synaptic transmission mechanisms. The circuits, fabricated using 90-nm CMOS
technology, replicate key aspects of synaptic function, including synaptic vesicle fusion and
neurotransmitter release, essential for efficient information transfer in biological and
neuromorphic systems. The Pre-synapse circuit is designed to generate a current response based
on two primary inputs: a spike train and a threshold value. This mechanism effectively simulates
the vesicle fusion process, where presynaptic neurons release neurotransmitters upon reaching an
activation threshold. The circuit's ability to process these inputs allows for dynamic and
adjustable synaptic behaviour, closely mimicking the operation of biological neurons. The
Post-synapse circuit models the reception and processing of incoming synaptic currents, ensuring
that the postsynaptic response is appropriately triggered based on the presynaptic activity.
Together, these circuits provide a realistic and scalable approach to synapse emulation, bridging
the gap between neuroscience and neuromorphic engineering. While the primary focus of these
designs is on achieving biological accuracy, they also pave the way for the development of
efficient neuromorphic hardware: these circuits offer a robust foundation for scalable,
low-power artificial neural networks (ANNs) and spiking neural networks (SNNs). The proposed
architecture contributes to advancing neuromorphic computing.
TABLE OF CONTENTS
ACKNOWLEDGEMENT iii
ABSTRACT iv
1. INTRODUCTION 1-7
2. LITERATURE SURVEY 8-10
3. METHODOLOGY 11-25
4. RESULTS AND DISCUSSIONS 26-31
5. CONCLUSIONS AND FUTURE SCOPE 32-34
REFERENCES
PUBLICATIONS/ PARTICIPATION CERTIFICATES
1. INTRODUCTION
To overcome AI’s cognitive limitations, we must explore how neurons process and transmit
information in the human brain. Unlike traditional AI, which relies on static mathematical models
and weight-based learning, neurons enable the brain to perform dynamic learning, self-repair, and
energy-efficient computation. The fundamental difference lies in the brain’s event-driven
architecture, where information is processed only when required, rather than through continuous
computations as in modern AI systems.
Neurons, the fundamental building blocks of the brain, are responsible for receiving, processing,
and transmitting electrical signals through synaptic connections. Each neuron consists of dendrites,
which receive incoming signals, a cell body (soma) that processes information, an axon that
transmits signals, and synapses where communication between neurons takes place via
neurotransmitters. This structure allows neurons to operate in a spike-based manner, meaning they
fire discrete action potentials instead of continuously processing data. This spiking mechanism is
highly energy-efficient and forms the foundation of human cognition, enabling learning,
adaptation, and fault tolerance.
AI’s current limitations in cognitive flexibility and adaptability stem from its reliance on static
architectures and weight updates through backpropagation. In contrast, neurons exhibit synaptic
plasticity, meaning their connection strengths change dynamically based on learning experiences.
If AI were to adopt neuron-inspired models, it could achieve event-driven computation, where
processing happens only when needed, significantly reducing energy consumption. Additionally,
neuron-based AI could develop self-adaptive learning, allowing systems to evolve and generalize
knowledge more effectively. Another crucial advantage of neurons is their fault tolerance—if some
neurons are damaged, the brain reroutes signals through alternative pathways, ensuring continued
functionality. Traditional AI, on the other hand, lacks this resilience and often requires retraining
from scratch when faced with significant data loss or architectural failures.
A promising approach to bridging the gap between neurons and AI is the development of
neuromorphic CMOS circuits, which mimic the function and structure of biological neurons.
These circuits process information using spikes, similar to how neurons communicate through
action potentials. By incorporating self-repairing mechanisms, neuromorphic circuits enable AI
systems to recover from faults, making them more robust and reliable. Furthermore, these circuits
optimize power efficiency, bringing AI closer to the brain’s ultra-low energy consumption.
Integrating neuromorphic circuits with AI models could revolutionize artificial intelligence,
making it more human-like in its ability to learn, adapt, and self-correct dynamically.
The integration of biological neuron-inspired mechanisms into AI has the potential to redefine
artificial intelligence by introducing adaptive, self-repairing, and fault-tolerant capabilities.
Current AI systems rely on artificial neural networks (ANNs) that, despite their name, are far from
replicating the true functionality of biological neurons. By incorporating neuron-like processing,
AI can transition from static algorithm-based decision-making to dynamic, real-time cognitive
adaptation, bringing it closer to human intelligence.
One of the most significant advantages of this integration is the implementation of spike-based
computing models, also known as spiking neural networks (SNNs). Unlike traditional deep
learning networks, which process information through matrix multiplications and
backpropagation, SNNs mimic biological neurons by transmitting discrete spikes only when
needed. This event-driven processing significantly reduces power consumption and makes AI
systems more efficient, scalable, and capable of real-time learning. Additionally, spike-based
architectures can handle temporal data more effectively, allowing AI to process time-dependent
patterns just as the human brain does when recognizing speech, motion, and emotions.
Another critical aspect of neuron-AI integration is fault tolerance. In biological neural networks,
when neurons are damaged or lost, the brain can rewire itself through synaptic plasticity and
neurogenesis, maintaining its overall functionality. Traditional AI models, however, fail
catastrophically when a portion of their structure is damaged or if they encounter unknown
conditions. By incorporating the self-repairing mechanisms of neurons, AI could develop the
ability to recover from failures autonomously, much like the human brain after an injury. This
would eliminate the need for constant retraining, making AI more robust in real-world
applications.
The fusion of AI with neuron-inspired principles could also have profound implications in fields
such as neuromorphic computing, neuroprosthetics, and autonomous robotics. Neuromorphic
processors, designed to mimic biological synapses, could make AI-driven devices more energy-
efficient and capable of real-time processing. In neuroprosthetics, such circuits could enable
seamless brain-machine interfaces, helping restore lost cognitive and motor functions in patients
with neurological disorders.
With these advancements, AI could move beyond its current limitations, evolving into systems
that not only learn but also self-repair, adapt, and function in a more human-like manner. The next
section will explore the working of neurons in detail, focusing on their structure and information-
processing mechanisms, which serve as the foundation for these capabilities.
To understand how neuron-inspired AI can revolutionize artificial intelligence, we must first
examine how biological neurons process and transmit information. Neurons are the fundamental
computational units of the brain, responsible for processing vast amounts of sensory and cognitive
data in real time. Their ability to adapt, self-repair, and establish dynamic interconnections makes
them far superior to the rigid, algorithm-driven processing of traditional AI systems.
A neuron consists of three main parts: the dendrites, the cell body (soma), and the axon. Dendrites
receive incoming signals from other neurons and pass them to the soma, where the information is
processed. If the cumulative input surpasses a certain threshold, the axon transmits an electrical
signal, known as an action potential, to the next neuron through the synapse. This spike-based
transmission is what enables complex thought, decision-making, and learning in biological brains.
Unlike conventional AI, which relies on continuous mathematical functions to process data,
neurons use discrete event-based processing, meaning they transmit signals only when necessary.
This makes biological computation highly energy-efficient compared to artificial neural networks
(ANNs), which require constant updates to weights and biases.
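This thresholded, event-driven behaviour can be illustrated with a minimal leaky integrate-and-fire sketch. All constants below are illustrative choices, not values taken from any circuit in this report:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: inputs are integrated with a
# leak, and a discrete spike is emitted only when the accumulated potential
# crosses a threshold. Constants are illustrative.

def lif_simulate(inputs, threshold=1.0, leak=0.9, v_reset=0.0):
    """Return a 0/1 spike train for the given input sequence."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration of incoming signal
        if v >= threshold:        # cumulative input surpasses the threshold
            spikes.append(1)      # fire a discrete action potential
            v = v_reset           # membrane potential resets after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs accumulate silently; a strong input pushes the neuron over
# threshold and produces a single spike.
print(lif_simulate([0.2, 0.2, 0.2, 0.9, 0.2]))
```

The neuron does no work between spikes, which is exactly the source of the energy efficiency discussed above.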
At the core of neuronal communication is the synapse, the junction where neurons exchange
information using chemical neurotransmitters. When an action potential reaches the axon terminal,
neurotransmitters are released into the synaptic cleft, triggering a response in the adjacent neuron.
This mechanism forms the basis of learning and memory formation in biological systems.
A crucial feature of synapses is their ability to strengthen or weaken over time, a process known
as synaptic plasticity. This adaptation, driven by the Hebbian learning principle ("neurons that fire
together, wire together"), allows the brain to reconfigure itself dynamically in response to
experiences. Traditional AI struggles with adaptability because it lacks this intrinsic ability to
modify its structure based on feedback. However, by integrating synaptic plasticity into AI models,
systems can learn more flexibly and generalize knowledge better.
Recent advancements in neuromorphic engineering have led to the development of Spiking Neural
Networks (SNNs), which mimic the brain’s spike-based communication. Unlike conventional
deep learning models, which rely on matrix operations, SNNs process information asynchronously
through discrete spikes. This event-driven approach reduces power consumption, making AI
systems more efficient and scalable.
Neuromorphic CMOS circuits, inspired by biological neurons, play a crucial role in bridging AI
with human-like cognition. These circuits incorporate memristors, crossbar arrays, and analog
computation, mimicking synaptic plasticity to enable AI systems to dynamically adjust their
internal parameters. When combined with AI, such neuromorphic processors could lead to
adaptive, self-repairing intelligent systems with real-time learning capabilities.
One of the most remarkable features of neurons is their fault tolerance. When neurons are damaged
due to injury or aging, the brain compensates by rerouting signals through alternative pathways.
This self-repairing capability ensures that cognitive functions remain intact even in the presence
of failures. Traditional AI models, on the other hand, suffer from catastrophic failures when parts
of their network are damaged or lose data. By incorporating biologically inspired self-repair
mechanisms, AI could become more resilient, autonomous, and capable of recovering from faults
without retraining.
The brain’s ability to self-repair and adapt to damage is one of the most crucial features that
differentiates it from artificial intelligence. While traditional AI systems require explicit
reprogramming and retraining when faults occur, the brain can dynamically reorganize and restore
functionality through an intricate repair mechanism. A key component of this process is the
tripartite synapse, which consists of three elements:
The Presynapse (Pre-neuron terminal): The sending end of the synapse, where
neurotransmitters are released.
The Postsynapse (Post-neuron terminal): The receiving end of the synapse, which processes
signals.
The Astrocyte: A specialized glial cell that plays a critical role in maintaining synaptic stability,
modulating neurotransmission, and facilitating self-repair. Traditionally, AI models are designed
using artificial neurons that lack dynamic self-repair mechanisms. However, in the human brain,
astrocytes actively monitor and regulate synaptic function, ensuring the proper transmission of
neural signals even in the presence of damage. This is achieved through the following mechanisms.
Synaptic Homeostasis: Astrocytes regulate the concentration of neurotransmitters in the synapse,
preventing excessive excitation or inhibition that could lead to neuronal malfunction.
Fault Tolerance through Synaptic Rewiring: When a neuron or synapse is damaged, astrocytes
facilitate synaptogenesis, the process of forming new synaptic connections, allowing the brain to
compensate for lost pathways.
Energy Support: Neurons rely on astrocytes for metabolic support, providing essential nutrients
and maintaining ionic balance for optimal function.
By integrating the self-repairing behavior of the tripartite synapse into AI, we can develop fault-
tolerant neuromorphic systems that mimic the brain’s adaptability and resilience. A neuromorphic
CMOS circuit designed with astrocyte-like behavior could actively monitor and regulate AI
processing units, ensuring stable operation even in the presence of failures. Promising design
directions include:
Synaptic Plasticity-Based AI: AI architectures that modify their connection strengths in response
to input patterns, improving learning efficiency and adaptability.
Fault Detection and Compensation Circuits: Neuromorphic processors equipped with built-in
redundancy mechanisms to automatically reconfigure pathways in case of failures.
Incorporating the self-repairing mechanism of the tripartite synapse into AI models would allow
them to:
Recover from failures autonomously, eliminating the need for manual intervention or retraining.
Adapt dynamically to changing environments, similar to how humans learn from experience.
Develop real-time fault detection, ensuring stable AI operations even in unpredictable scenarios.
Artificial intelligence has made remarkable progress in recent years, yet it still lacks the robustness
and adaptability of the human brain. One of the critical differences between human intelligence
and AI is the ability to function despite failures. Traditional AI systems rely on fixed architectures
and predefined algorithms, making them highly susceptible to errors when a component
malfunctions. In contrast, the human brain exhibits fault tolerance, allowing it to remain functional
even when neurons are damaged or lost. This resilience is largely due to the self-repairing
mechanisms of the tripartite synapse, which ensures continuous and adaptive neural processing.
The tripartite synapse, composed of the presynapse, postsynapse, and astrocyte, plays a crucial
role in maintaining stable and efficient communication within the brain. When synaptic activity is
disrupted due to injury or degeneration, astrocytes intervene to restore balance. They regulate
neurotransmitter levels, facilitate the formation of new synaptic connections, and help redistribute
signals to compensate for lost pathways. These biological fault-tolerance mechanisms allow the
brain to recover from damage, adapt to new conditions, and ensure consistent cognitive
performance. In contrast, current AI models lack such resilience. If a neural network experiences
damage—such as the loss of nodes or corrupted data—it often requires external retraining, leading
to inefficiencies and computational overhead.
By integrating the fault tolerance of the tripartite synapse into AI, we can develop systems that are
not only efficient but also self-healing and adaptable. For instance, incorporating astrocyte-like
mechanisms in AI can enable real-time error detection and correction. Just as astrocytes monitor
and modulate synaptic activity, AI systems can be designed to identify processing faults and
reroute information through alternative pathways. This approach would significantly enhance the
reliability and longevity of AI models, ensuring they continue functioning without constant human
intervention. Additionally, by mimicking synaptic plasticity, AI networks could adjust connection
strengths dynamically, improving learning efficiency and memory retention. This would help
overcome the problem of catastrophic forgetting, a major limitation in conventional deep learning
systems, where newly acquired information overwrites previous knowledge.
Another advantage of implementing biological fault tolerance in AI is the potential for energy-
efficient cognitive processing. The human brain operates on approximately 20 watts of power, yet
it outperforms the most advanced supercomputers in adaptability and real-time decision-making.
2. LITERATURE SURVEY
2.1 Literature Survey 1:
A confocal laser-scanning microscope is used to analyze the effects of dendritic spine shape
alterations on synaptic function. By employing double-labeling techniques, neurons labeled with
green fluorescent protein and presynaptic terminals labeled with synaptophysin are imaged in
separate channels. A parametric model is applied to noisy confocal images to detect and
reconstruct spine morphology. Using a distance map, presynaptic boutons associated with spines
are identified and quantified in a flexible, distance-dependent manner. This approach integrates
functional and morphological features, allowing for advanced statistical analysis of learning
processes in the brain. By correlating spine morphology changes with synaptic functionality,
researchers can better understand cellular mechanisms underlying neural plasticity.
2.2 Literature Survey 2:
The paper "A Neuromorphic CMOS Circuit with Self-Repairing Capability" discusses the design
and implementation of a neuromorphic circuit that mimics the brain’s ability to self-repair and
recover from faults. Using complementary metal-oxide-semiconductor (CMOS) technology, the
circuit incorporates self-diagnostic and adaptive mechanisms that enable it to detect and fix errors
autonomously. This self-repairing feature is critical for enhancing the circuit's robustness,
reliability, and longevity, especially in environments where human intervention is limited. The
work holds potential for applications in neural networks, artificial intelligence, and fault-tolerant
systems.
2.3 Literature Survey 3:
The paper "An Analog Astrocyte-Neuron Interaction Circuit for Neuromorphic Applications"
presents a bio-inspired circuit model that simulates the interaction between astrocytes (glial cells)
and neurons. The authors design an analog circuit that mimics the neuromodulatory role astrocytes
play in brain function, affecting neural plasticity and signal processing. This model enhances
neuromorphic computing systems by incorporating more biologically accurate interactions, going
beyond traditional neuron-only models. The approach has potential applications in brain-like
computing architectures, neural prosthetics, and advanced AI systems, offering a more realistic
simulation of brain dynamics.
2.4 Literature Survey 4:
Nano-scale resistive memories are driving dense integration of electronic synapses, crucial for
large-scale neuromorphic systems. This work introduces a compact CMOS spiking neuron that
supports in-situ learning and computation while managing a substantial number of resistive
synapses. The proposed leaky integrate-and-fire neuron circuit features dual-mode operation for
current integration and synaptic driving, all with a single operational amplifier. This innovative
neuron circuit contributes to scalable, energy-efficient neuromorphic computing, underscoring the
potential of CMOS technology in creating compact, brain-inspired processing units for future
applications.
2.5 Literature Survey 5:
This paper presents a silicon neuron circuit designed to replicate the firing behavior of various
biological neuron classes. Implemented using 0.35µm CMOS technology, the circuit efficiently
models key neuronal firing patterns, including regular spiking (RS), fast spiking (FS), chattering
(CH), and intrinsic bursting (IB). These diverse firing behaviors are achieved through simple
adjustments of two biasing voltages, demonstrating the circuit’s flexibility in mimicking different
neuronal responses. Simulation results confirm that the proposed design can accurately reproduce
the accommodation and firing frequency of distinct neuron types, making it a versatile tool for
neuromorphic computing. With a compact structure utilizing only 14 MOSFETs, the circuit
enables the integration of a large number of neurons within a small silicon footprint, supporting
the development of massively parallel analogue neuromorphic networks. By closely emulating
cortical circuits, this work contributes to the advancement of energy-efficient, large-scale
neuromorphic systems, paving the way for more biologically inspired artificial intelligence
applications.
3. METHODOLOGY
Integrating self-repairing, fault-tolerant properties into AI will bridge the gap between artificial
and human intelligence. By developing systems capable of real-time self-healing, dynamic
learning, and energy-efficient computation, AI can become more flexible, autonomous, and
reliable. This advancement will be particularly beneficial for neuromorphic computing,
neuroprosthetics, and AI-driven robotics, where stability and adaptability are crucial. Future
research will focus on merging astrocyte-inspired processing units with deep learning architectures
to create AI that not only processes data but thinks, learns, and adapts like the human brain. The
biological self-repair mechanism of neurons, particularly in response to synaptic damage, involves
intricate processes mediated by astrocytes. These glial cells are not just passive support structures
but play an active role in regulating synaptic function, restoring balance, and ensuring the proper
transmission of neural signals. The regulation of synaptic activity occurs at the tripartite synapse,
which consists of three key elements: the presynaptic neuron, the postsynaptic neuron (PSN), and
the astrocyte. Each of these components contributes to the complex feedback loops that maintain
synaptic homeostasis and enable repair mechanisms.
When a presynaptic neuron fires, it releases neurotransmitters, primarily glutamate, into the
synaptic cleft. This neurotransmitter binds to receptors on the postsynaptic neuron, including
AMPA and NMDA receptors, leading to an influx of Na⁺ and Ca²⁺ ions. If the depolarization
exceeds a certain threshold, an action potential is triggered in the postsynaptic neuron, propagating
the signal further. However, the strength and persistence of synaptic transmission need precise
control, as excessive or insufficient activity can disrupt neural network function. A critical
molecule in this regulatory system is 2-arachidonoyl glycerol (2-AG), an endocannabinoid
synthesized from diacylglycerol (DAG) by the enzyme diacylglycerol lipase (DAGL). The
production of 2-AG occurs in response to elevated intracellular Ca²⁺ levels, linking it to neuronal
activity.
Once synthesized, 2-AG follows two distinct pathways to modulate synaptic activity: the direct
and indirect routes. In the direct pathway, 2-AG binds to cannabinoid receptor type 1 (CB1) on the
presynaptic neuron, which is a G-protein coupled receptor (GPCR). This interaction leads to the
inhibition of adenylyl cyclase, reducing the production of cyclic AMP (cAMP). The subsequent
decrease in cAMP levels reduces the activation of protein kinase A (PKA), which normally
phosphorylates and enhances the activity of voltage-gated Ca²⁺ channels. As a result, Ca²⁺ influx
into the presynaptic terminal is suppressed, leading to a lower probability of glutamate release.
This mechanism, known as depolarization-induced suppression of excitation (DSE), acts as a
negative feedback loop to prevent excessive synaptic excitation.
In the indirect pathway, 2-AG is taken up by astrocytes, where it initiates a different signaling
cascade. Inside the astrocyte, 2-AG binds to its specific receptor, activating phospholipase C
(PLC), which hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP₂) to generate inositol
trisphosphate (IP₃). IP₃ then interacts with its receptors on the endoplasmic reticulum, leading to
the release of Ca²⁺ into the cytoplasm. The resulting calcium oscillations serve as an internal signal
that triggers the release of gliotransmitters, including glutamate, from the astrocyte back into the
synapse. This process, termed endocannabinoid-mediated synaptic potentiation (e-SP), enhances
excitatory transmission by increasing the presynaptic neuron’s activity. The balance between DSE
and e-SP fine-tunes synaptic strength, ensuring that neural circuits remain adaptive and responsive
to external stimuli.
In the event of synaptic damage, the self-repair mechanism is activated by altering the balance
between these pathways. Damage to synapses can occur due to oxidative stress, excitotoxicity, or
trauma, leading to a decline in synaptic transmission probability (PR). Affected synapses exhibit
reduced neurotransmitter release, impairing signal propagation within the neural network. This
drop in PR is detected by surrounding neurons and astrocytes, triggering a compensatory response.
The first step in this response involves the suppression of DSE. Since damaged synapses release
less glutamate, there is a corresponding decrease in 2-AG synthesis, weakening the inhibition of
presynaptic Ca²⁺ channels. This reduction in DSE allows the indirect pathway to become more
dominant, leading to increased astrocytic glutamate release. The enhanced e-SP mechanism
compensates for the lost synaptic activity by shifting functional load to neighboring synapses,
thereby restoring overall neural network function.
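The compensation loop described in the last two paragraphs can be caricatured in a few lines of Python. This is a behavioural toy under assumed dynamics: the update rule, gain, and activity target are invented for illustration and are not the report's circuit equations.

```python
# Toy sketch of the self-repair dynamic: when damaged synapses stop
# transmitting, the drop in overall activity (a proxy for reduced 2-AG and
# weakened DSE) lets the astrocytic e-SP pathway raise the release
# probability (PR) of the surviving synapses until network activity recovers.

def compensate(prs, steps=50, target=0.5, gain=0.3):
    """Boost healthy synapses' PR until mean network activity nears target."""
    prs = list(prs)
    for _ in range(steps):
        mean_pr = sum(prs) / len(prs)        # network activity ~ 2-AG level
        e_sp = max(0.0, target - mean_pr)    # indirect pathway dominates when low
        if e_sp == 0.0:
            break
        for i, p in enumerate(prs):
            if p > 0.0:                      # damaged synapses (PR = 0) stay silent
                prs[i] = min(1.0, p + gain * e_sp)
    return prs

# Four synapses at PR = 0.5; the first two are damaged (PR driven to 0).
# The healthy pair is potentiated to carry the lost functional load.
print(compensate([0.0, 0.0, 0.5, 0.5]))
```

The functional load shifts to the surviving synapses, mirroring the e-SP-driven redistribution described above.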
Astrocytes further contribute to synaptic repair by promoting the synthesis of neurotrophic factors
such as brain-derived neurotrophic factor (BDNF) and glial cell line-derived neurotrophic factor
(GDNF). These molecules activate receptor tyrosine kinases (RTKs) on neurons, triggering
downstream signaling cascades such as the MAPK/ERK and PI3K/Akt pathways. These pathways
promote synaptic remodeling by enhancing dendritic spine formation and strengthening synaptic
connections. Additionally, astrocytes regulate extracellular glutamate levels through the
glutamate-glutamine cycle. They take up excess glutamate via excitatory amino acid transporters
(EAATs) and convert it into glutamine through the enzyme glutamine synthetase. The glutamine
is then transported back to neurons, where it is converted into glutamate, replenishing the
neurotransmitter pool. This recycling mechanism is crucial for sustaining synaptic transmission
and supporting neural plasticity.
Neuromorphic CMOS circuits aim to replicate these biological processes through artificial
synaptic networks. Inspired by the Li-Rinzel astrocyte model, the proposed neuromorphic circuit
integrates 24 synapses and operates using minimal power consumption while dynamically
adjusting synaptic weights. The circuit employs analog components to simulate synaptic plasticity,
using variable resistors to mimic changes in PR. The direct and indirect pathways are implemented
using current-controlled transistors, where the input signal modulates the current flow to either
suppress or enhance transmission. The self-repair function is achieved through feedback loops that
detect weakened synaptic activity and redistribute computational load to functional pathways. By
emulating astrocyte-mediated regulation, the circuit ensures fault tolerance and resilience,
enabling stable neural computation even in the presence of defects.
To mimic biological synaptic behavior in electronic circuits, the network is designed using a
tripartite synapse model, consisting of two presynaptic neurons, two postsynaptic neurons, an
astrocyte, and a total of 24 synapses (Fig. 1 shows the complete circuit for the coupled
tripartite synapse with the self-repairing capability [1]). Each presynaptic neuron generates a
spike train, which
serves as an input for the connected synapses. These synapses operate probabilistically, meaning
that each synapse generates a random value, which is then compared with its transmission
probability (PR). If the generated random value is lower than PR, a specific current is allowed to
flow into the postsynaptic neuron, simulating the behavior of biological synapses.
In electronic terms, the synapse circuit is constructed using a differential amplifier with a current
mirror load. This circuit configuration enables the comparison between the presynaptic voltage
input (Vpre-syn) and the PR threshold. When Vpre-syn is greater than PR, more current flows
through a specific transistor, which results in no current reaching the postsynaptic neuron.
Conversely, if Vpre-syn is less than PR, another transistor is activated, allowing a current of
approximately 80 nA to pass into the postsynaptic neuron. This probabilistic current flow
mechanism effectively replicates the randomness of synaptic transmission observed in biological
neurons.
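A behavioural sketch of this probabilistic gating, with a software random draw standing in for the circuit's comparison of Vpre-syn against PR. The ~80 nA figure comes from the text above; everything else is illustrative:

```python
import random

I_SYN = 80e-9  # synaptic current delivered on a successful transmission (~80 nA)

def synapse_current(pr, rng=random):
    """Return the current passed to the postsynaptic neuron for one spike."""
    v_pre = rng.random()                  # random value generated for this spike
    return I_SYN if v_pre < pr else 0.0   # transmit only when the draw is below PR

# With PR = 0.7, roughly 70% of presynaptic spikes deliver current.
rng = random.Random(0)                    # seeded for reproducibility
hits = sum(synapse_current(0.7, rng) > 0 for _ in range(10000))
print(hits)
```

Raising PR admits more current events per unit time, which is exactly how the design modulates postsynaptic activity, as discussed below.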
The postsynaptic neuron follows the Izhikevich neuron model, which is widely used for replicating
real cortical neuron firing patterns. It receives inputs from multiple synapses, summing up the
synaptic currents generated by each one. The circuit ensures that only the synapses where Vpre-syn
is smaller than PR contribute to the overall current, reinforcing a biologically inspired
threshold-based activation mechanism. This allows the postsynaptic neuron to effectively integrate
multiple synaptic signals, just as in biological neural networks.
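The Izhikevich model is compact enough to sketch directly. The regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8) and the model equations are the standard published ones; the Euler step size and input current are illustrative choices, not values from this design:

```python
# Izhikevich neuron (regular-spiking parameters), forward-Euler integration.
# dv/dt = 0.04 v^2 + 5v + 140 - u + I;  du/dt = a (b v - u);
# on v >= 30 mV: v <- c, u <- u + d (the spike reset).

def izhikevich(I, steps=2000, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Count spikes fired under a constant input current I over steps*dt ms."""
    v, u = -65.0, b * -65.0       # start at the resting state
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:             # spike: reset membrane and recovery variable
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich(10.0))  # tonic firing under sustained input
print(izhikevich(0.0))   # silent at rest
```

Summing PR-gated synaptic currents into I is what couples the synapse array described above to this neuron.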
An important factor in this design is the PR value, which plays a crucial role in determining neuron
activity levels. If PR is increased, more synapses become active, leading to a higher total current
flowing into the postsynaptic neuron, thereby increasing its firing activity. On the other hand,
lowering PR decreases the number of active synapses, reducing the overall current and, in turn,
decreasing the activity level of the postsynaptic neuron. This parameter can be adjusted
dynamically to regulate neuronal behavior, making the circuit more adaptable and flexible.
Moving further into the design, the circuit implements a neuromodulatory mechanism using a
2-arachidonoylglycerol (2-AG) circuit. The output spikes from the postsynaptic neuron are first
inverted to align with the required logic levels for the subsequent processing stages. A PMOS
transistor, combined with an RC network, is then used to integrate these spikes over time,
effectively generating a 2-AG signal. The frequency of neuron spikes directly influences the 2-AG
voltage, meaning that higher neural activity results in a faster buildup of the 2-AG signal. This
behavior closely mimics the role of neuromodulators in biological synapses, where
neurotransmitter release is influenced by the firing activity of neurons.
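The spike-to-2-AG conversion can be sketched behaviorally as a leaky (RC-style) integrator that receives a small charge packet per spike; the time constant and charge increment below are assumed values, not extracted from the circuit:

```python
# Behavioral sketch of the PMOS + RC stage that builds the 2-AG signal
# from (inverted) output spikes. tau and dq are assumed, illustrative
# values rather than circuit-extracted parameters.
def two_ag_trace(spike_times, t_end=100.0, dt=0.1, tau=20.0, dq=0.5):
    v, trace = 0.0, []
    spike_steps = set(round(t / dt) for t in spike_times)
    for step in range(int(t_end / dt)):
        v -= dt * v / tau          # RC leak between spikes
        if step in spike_steps:
            v += dq                # each spike deposits a charge packet
        trace.append(v)
    return trace

# Higher spike frequency -> faster build-up of the 2-AG voltage.
fast = two_ag_trace([t for t in range(0, 100, 5)])
slow = two_ag_trace([t for t in range(0, 100, 20)])
print(max(fast) > max(slow))
```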
To process multiple 2-AG signals, a summation circuit is introduced, which first converts the
voltage signals from individual neurons into proportional current signals. This is achieved using
nMOS transistors operating in the subthreshold region, leveraging the exponential relationship
between gate voltage and drain current for efficient low-power current generation. These currents
are then summed using a current mirror configuration, ensuring accurate and linear integration.
Finally, the combined current is reconverted into a voltage signal through a diode-connected
transistor, generating a stable output that represents the collective 2-AG signal from all
contributing neurons.
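The three stages of the summation circuit (exponential V-to-I conversion in subthreshold, mirror-based current summing, and logarithmic reconversion through a diode-connected device) can be modeled behaviorally. The device parameters I0 and the slope factor n below are assumed values; only the thermal voltage is a physical constant:

```python
import math

# Behavioral model of the 2-AG summation stage. I0 and N are assumed
# device parameters; VT is the room-temperature thermal voltage.
I0 = 1e-12      # subthreshold pre-exponential current (A), assumed
N = 1.5         # subthreshold slope factor, assumed
VT = 0.026      # thermal voltage (V)

def v_to_i(vgs):
    """Subthreshold drain current: I = I0 * exp(Vgs / (n * VT))."""
    return I0 * math.exp(vgs / (N * VT))

def summed_output(voltages):
    """Mirror-summed current, then diode-connected V reconstruction:
    Vout = n * VT * ln(I_total / I0)."""
    i_total = sum(v_to_i(v) for v in voltages)
    return N * VT * math.log(i_total / I0)

# The output voltage rises with the collective 2-AG input level.
print(summed_output([0.3, 0.3]) > summed_output([0.3]))
```

Because the reconversion is logarithmic, doubling the summed current adds a fixed n·VT·ln 2 to the output rather than doubling it, which is characteristic of this translinear style of low-power design.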
The astrocyte circuit, which is built with 14 transistors, takes this 2-AG summation voltage as its
input and produces two key signals: inositol triphosphate (IP3) and calcium ions (Ca²⁺). These
signals mimic the biological role of astrocytes in synaptic regulation. In the biological system, 2-
AG interacts with cannabinoid receptors on both presynaptic terminals and astrocytes. Within
presynaptic terminals, this interaction leads to depolarization-induced suppression of excitation
(DSE), reducing neurotransmitter release. Meanwhile, in astrocytes, the activation of cannabinoid
receptors results in increased intracellular calcium levels, which in turn trigger the release of
gliotransmitters such as glutamate.
This entire circuit structure closely mirrors biological processes, effectively emulating the fault-
tolerant nature of the brain. By combining probabilistic synaptic activation, neuron integration,
neuromodulatory signaling, and astrocyte-mediated adaptation, the electronic model provides a
realistic representation of self-repairing mechanisms observed in real neural networks. The ability
of this circuit to dynamically adjust synaptic activity, enhance neural connectivity, and regulate
neurotransmitter release contributes to its robustness and adaptability. This design not only
advances the field of neuromorphic engineering but also opens possibilities for the development
of more efficient and resilient artificial neural networks.
In the human brain, neurons communicate through synapses, which serve as connection points
between the presynaptic and postsynaptic neurons. When an action potential, or electrical spike,
reaches the end of a presynaptic neuron, it triggers the release of neurotransmitters into the synaptic
cleft. These neurotransmitters diffuse across the small gap and bind to receptors on the
postsynaptic neuron. This binding changes the membrane potential of the postsynaptic neuron,
either exciting or inhibiting it based on the type of neurotransmitter. If the cumulative excitatory
signals surpass a certain threshold, the postsynaptic neuron generates an action potential,
continuing the transmission of information.
Neuromorphic CMOS circuits aim to replicate this behavior using transistors, capacitors, and
current sources. The presynapse circuit in a neuromorphic system receives an input spike train,
which consists of electrical pulses that represent neural activity. Another input, typically a
threshold voltage, is used to determine whether the signal should be transmitted. This threshold
mimics the biological neuron’s ability to decide whether to fire an action potential. The presynapse
processes these inputs and, if the conditions are met, generates an output current that represents
the release of neurotransmitters in the biological counterpart.
Once the signal leaves the presynapse, it must travel to the postsynapse. In biological systems,
neurotransmitters diffuse passively across the synaptic cleft, but in CMOS circuits, the signal is
actively transmitted using electrical pathways. The postsynaptic circuit receives the current output
from the presynapse and processes it. This processing can involve various circuit components,
such as a low-pass filter, which smooths out variations in the signal, or an integrator circuit, which
accumulates charge over time to simulate synaptic plasticity. The final response of the postsynapse
determines whether it will generate an output signal, similar to a biological neuron firing an action
potential.
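The low-pass smoothing stage mentioned above corresponds to a first-order filter, which can be sketched as exponential smoothing; the time constant and sampling step are assumed values:

```python
# First-order low-pass (exponential smoothing) of a presynaptic pulse
# train, modeling the smoothing stage described above. tau and dt are
# assumed, illustrative values.
def low_pass(samples, dt=0.001, tau=0.01):
    y, out = 0.0, []
    for x in samples:
        y += (dt / tau) * (x - y)   # discretized dy/dt = (x - y) / tau
        out.append(y)
    return out

# A square pulse comes out with softened edges: the output rises toward
# 1 during the pulse and decays back toward 0 afterwards.
pulse = [1.0] * 50 + [0.0] * 50
smoothed = low_pass(pulse)
print(round(smoothed[49], 3), round(smoothed[99], 3))
```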
One key aspect of this data transmission process is synaptic weighting, which allows the system
to model learning. In biological neurons, synaptic strength is adjusted through mechanisms like
long-term potentiation (LTP) and long-term depression (LTD), which strengthen or weaken the
connection between neurons based on activity. In CMOS-based circuits, these adjustments can be
implemented using variable resistors, current sources, or floating-gate transistors that change their
conductance over time, mimicking biological learning.
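As one concrete (hypothetical) realization of such activity-dependent weighting, a pair-based STDP rule can be sketched; the amplitudes, time constant, and weight bounds below are assumed, not taken from the report:

```python
import math

# Toy pair-based STDP rule illustrating LTP/LTD weight adjustment:
# pre-before-post potentiates, post-before-pre depresses. Amplitudes,
# time constant, and weight bounds are assumed values.
def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.06, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:                      # pre fired first -> LTP
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                    # post fired first -> LTD
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), 1.0)    # clip weight to [0, 1]

w = 0.5
w_ltp = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing
w_ltd = stdp_update(w, t_pre=15.0, t_post=10.0)   # anti-causal pairing
print(w_ltp > 0.5 > w_ltd)
```

In a floating-gate or memristive implementation, the clipped weight would correspond to a bounded device conductance.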
The efficiency of data transmission between the presynapse and postsynapse is crucial for the
overall functionality of neuromorphic systems. If the signal does not reach the postsynapse with
sufficient strength, it may not trigger a response, leading to information loss. Conversely, if the
signal is too strong, it may result in excessive activation, similar to overexcitation in biological
neurons. Designing CMOS circuits that accurately replicate the timing and amplitude of biological
synaptic transmission is an ongoing challenge in neuromorphic engineering.
By modeling the transmission process using CMOS circuits, researchers can create artificial neural
networks that behave similarly to biological systems, enabling applications in brain-inspired
computing, robotics, and adaptive learning systems.
The process begins in the pre-synapse, where neurotransmitters are stored in small vesicles. When
an action potential (electrical signal) arrives at the pre-synaptic terminal, it depolarizes the
membrane, leading to the opening of voltage-gated calcium (Ca²⁺) channels. The influx of Ca²⁺
ions triggers a cascade of events where synaptic vesicles dock and fuse with the pre-synaptic
membrane. This fusion process, mediated by proteins such as synaptotagmin and SNARE
complexes, results in the exocytosis of neurotransmitters into the synaptic cleft, the gap between
the two neurons.
Once released, neurotransmitters diffuse across the synaptic cleft and bind to specific receptors on
the post-synaptic membrane. These receptors are often ligand-gated ion channels, which, upon
activation, allow the flow of ions such as Na⁺, K⁺, or Cl⁻, altering the electrical state of the post-
synaptic neuron. If the incoming signal is excitatory (e.g., glutamate), it depolarizes the post-
synaptic membrane, increasing the likelihood of generating a new action potential. Conversely, if
the signal is inhibitory (e.g., GABA), it hyperpolarizes the membrane, reducing neuronal activity.
The process is carefully regulated to maintain efficiency and prevent excessive signaling.
Neurotransmitters are cleared from the synaptic cleft through enzymatic breakdown (e.g.,
acetylcholinesterase for acetylcholine) or reuptake into the pre-synapse for recycling.
Additionally, synaptic plasticity mechanisms, such as long-term potentiation (LTP) and long-term
depression (LTD), enable neurons to strengthen or weaken connections based on activity levels,
which is crucial for learning and memory.
In self-repairing neuromorphic systems, these biological mechanisms are mimicked using spike-
based circuits, charge transfer mechanisms, and adaptive learning algorithms. The pre-synaptic
circuit mimics vesicle fusion using voltage pulses and controlled charge storage, while the post-
synaptic circuit replicates receptor activation through tunable conductance changes. Additionally,
self-repairing neurons integrate fault detection and correction mechanisms, ensuring continuous
learning, energy efficiency, and resilience in AI hardware. By replicating the self-repair and
adaptability of biological synapses, neuromorphic computing can enable more robust, efficient,
and autonomous AI systems for real-world applications.
Pre-Synapse block:
In this section, a CMOS circuit is proposed that resembles a biological pre-synapse. The circuit
comprises a differential amplifier with a current-mirror load. The differential amplifier amplifies
the difference between its two input voltages, while the current mirror acts as an active load,
providing a constant current source and improving the amplifier's gain, bandwidth, and stability.
The result is a circuit that can amplify small voltage differences with high stability. The two
FETs in the middle form the differential pair, which processes the two input voltages: the
presynaptic voltage Vpre-syn and the PR threshold. When the presynaptic voltage falls below the
PR threshold, the amplifier's output voltage drops, turning on the PMOS transistor and producing
a synaptic current of approximately 80 nA proportional to this difference.
In this model, each synapse generates a random value to compare with its transmission probability
(PR). If the random value is less than PR, a specified current flows into the post-synaptic neuron
(PSN). The output of
each presynaptic neuron is a spike train that acts as a voltage input to each synapse. This spike
train is represented by the variable Vpre-syn. Each synapse generates a random value that is
compared to the PR value (the transmission probability of the synapse). The PR value determines
how likely the synapse is to fire, based on whether the random value is larger or smaller than the
PR value. The synapse circuit uses a differential amplifier with a current mirror load. A differential
amplifier compares two input voltages, and its output is based on the difference between them.
The Vpre-syn signal (the input from the presynaptic neuron) is compared with the PR value.
If Vpre-syn > PR: More current flows through transistor MS1. This increases the output voltage and
ensures that transistor MS5 remains OFF. No current flows into the PSN circuit.
If Vpre-syn < PR: More current flows through transistor MS2, decreasing the
output voltage. This reduction turns ON transistor MS5, allowing a synaptic current (~80 nA) to
flow into the PSN circuit.
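Behaviorally, the MS1/MS2/MS5 decision described above reduces to a simple comparison; this sketch models only the input-output behavior, not the transistor-level dynamics:

```python
# Behavioral model of the synapse decision: when V_pre-syn exceeds PR,
# MS5 stays off and no current reaches the PSN; when V_pre-syn drops
# below PR, MS5 turns on and ~80 nA flows (value from the text).
I_SYN = 80e-9  # nominal synaptic current (A)

def synapse_current(v_presyn, pr):
    """Return the current injected into the post-synaptic neuron."""
    return I_SYN if v_presyn < pr else 0.0

print(synapse_current(0.3, 0.5), synapse_current(0.7, 0.5))
```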
Biological resemblance: The circuit's MOSFETs, capacitors, and current sources work together
to mimic the processes of spike-based transmission, vesicle fusion, and neurotransmitter release
in a biological neuron.
Input Signal (Action Potential Equivalent): The circuit receives an input voltage pulse,
simulating an incoming action potential in a biological neuron. This input triggers voltage changes
across different transistors, allowing charge accumulation.
Charge Accumulation (Vesicle Loading Equivalent): A capacitor in the circuit stores charge,
analogous to neurotransmitter vesicle loading in a biological pre-synapse. This capacitor mimics
how synaptic vesicles are prepared for release.
Voltage Threshold Mechanism (Calcium Influx Equivalent): A specific voltage threshold must
be reached for the circuit to respond, mimicking the Ca²⁺ channel activation in a real synapse. Once
the threshold is exceeded, a MOSFET switch turns on, allowing current flow.
Signal Transmission to Post-Synapse: The current output of the pre-synapse circuit serves as an
input to the post-synapse circuit, where further processing occurs. The strength of the output
depends on adaptive weight control mechanisms, similar to synaptic plasticity in biological
neurons.
The correspondence between the biological pre-synapse and the circuit can be summarized as:
- Action potential arrives at the axon terminal → input voltage pulse simulates neural firing.
- Voltage-gated Ca²⁺ channels open, allowing Ca²⁺ influx → threshold voltage controls the MOSFET switch, triggering charge transfer.
- Vesicles fuse with the membrane, releasing neurotransmitters → capacitor discharges, sending current spikes.
Post-synapse block:
The circuit is structured into three primary functional blocks: the membrane potential circuit, the
slow variable circuit, and the comparator circuit. Each of these blocks plays a crucial role in
simulating the behaviour of a neuron, particularly in generating action potentials (spikes) based on
synaptic input currents. These components work together to integrate incoming signals, regulate
spike generation, and control neuronal adaptation mechanisms, making this circuit an effective
model for neuromorphic computing.
Once the membrane potential reaches the threshold voltage, a comparator circuit triggers a
response to reset it, allowing for the next cycle of integration and spiking.
Comparator Circuit:
The comparator circuit serves as the control mechanism that determines when a spike is generated
and how the neuron resets. It continuously monitors the membrane potential (V) and compares it
against the threshold voltage Vth. It also relies on a bias voltage for proper operation. Once
the membrane potential reaches or exceeds Vth, the comparator generates two key pulses: a reset
pulse and an adaptation pulse. The reset pulse resets the membrane potential circuit by activating
M5, which discharges the membrane capacitor and brings the membrane potential back to a lower
reset value, allowing the neuron to begin a new cycle of charge integration. Simultaneously, the
adaptation pulse acts on the slow variable circuit, briefly activating M8 to
increase the slow variable potential, thereby adapting the neuron’s firing rate over time. By
generating these pulses in response to threshold crossings, the comparator ensures that the neuron
fires in a biologically inspired manner while maintaining stability and adaptability.
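The interaction of the three blocks can be sketched as an adaptive integrate-and-fire loop: the membrane integrates the input minus the slow variable, the comparator fires and resets at threshold, and each spike increments the slow variable, which then decays. Every constant here (threshold, time constant, increments) is an illustrative assumption:

```python
# Sketch of the three-block loop. All constants are assumed values,
# chosen only to exhibit spike-frequency adaptation.
def adaptive_if(i_in, t_end=500, dt=1.0, v_th=1.0, v_reset=0.0,
                tau_u=100.0, du=0.05):
    v, u, spike_times = 0.0, 0.0, []
    for step in range(int(t_end / dt)):
        v += dt * (i_in - u)          # membrane integration
        u -= dt * u / tau_u           # slow variable decays
        if v >= v_th:                 # comparator: threshold crossing
            spike_times.append(step * dt)
            v = v_reset               # reset pulse discharges the membrane
            u += du                   # adaptation pulse raises the slow variable
    return spike_times

spikes = adaptive_if(0.05)
# Adaptation: later inter-spike intervals are longer than the first.
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(isis[0], isis[-1])
```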
Overall Functionality and Biological Resemblance: This circuit closely mimics the
operation of a biological neuron by incorporating integrate-and-fire behaviour, adaptive threshold
dynamics, and event-driven signal processing. The membrane potential circuit replicates the
integration of synaptic currents and action potential generation seen in biological neurons. The
slow variable circuit models spike-frequency adaptation, a process observed in biological neurons
where prolonged activity leads to a temporary reduction in excitability. Finally, the comparator
circuit acts as a threshold detection mechanism, akin to how real neurons determine when to fire
based on accumulated input. The CMOS post-synapse circuit mimics biological neurons by
receiving, integrating, and processing input signals to generate spikes. It consists of three key
blocks: the membrane potential circuit, which accumulates input currents on a membrane capacitor
and triggers a spike when the threshold Vth is reached; the comparator circuit, which detects the
threshold crossing and resets the membrane potential via M5; and the slow variable circuit, which
adapts the neuron's response by adjusting the slow variable potential through feedback from
M8, simulating synaptic plasticity. This design allows the circuit to dynamically regulate spike
frequency, improving efficiency and adaptability. By replicating biological post-synaptic
behavior, this CMOS neuron enables low-power, real-time learning, making it ideal for
neuromorphic computing applications. This combination of circuits enables neuromorphic systems
to efficiently mimic brain-like computations, making them highly suitable for low-power, event-
driven AI applications.
This circuit closely follows the biological post-synaptic function by mimicking how neurons
receive, integrate, and adapt to input signals:
Synaptic Input Processing: The circuit integrates excitatory and inhibitory currents just
as biological neurons integrate ion flow through receptors.
Threshold-Based Spike Firing: The comparator circuit replicates the action potential
mechanism, ensuring that only significant inputs lead to a response.
Repolarization and Reset: The reset pulse restores the membrane potential, just as the
neuron's refractory period restores resting potential.
Table: Feature-by-feature comparison of the biological post-synapse and the CMOS post-synapse circuit.
4. RESULTS AND DISCUSSION
This project is executed in 90nm CMOS technology. The pre-synaptic circuit consists of
multiple transistor-based synapses that receive input spike signals (Vpre-syn) and generate an
output synaptic current. These synapses are connected through switching transistors that regulate signal
transmission to the post-synaptic neuron. The signal is then modulated by the depolarization-induced
suppression of excitation (DSE) mechanism, which regulates synaptic strength. This structure ensures efficient
communication between pre- and post-synaptic neurons while incorporating adaptability and
fault tolerance, mimicking biological synaptic behavior.
The presynaptic output in a neuromorphic system or a biological synapse refers to the signal
generated by the neuron before it transmits information to the next neuron. In electronic or
neuromorphic circuits, this output is typically a current or voltage signal that encodes the neural
activity of the presynaptic neuron. The first waveform from the simulation represents the
presynaptic current, which remains relatively stable at around 80.16 nA.
Fig. 5: Output 2 of the pre-synapse.
The waveform shown in the image represents the presynaptic current or voltage in a neuromorphic
circuit, which mimics biological synaptic activity. This signal is crucial in determining how the
synapse interacts with the postsynaptic neuron.
In the plot, the presynaptic output exhibits a periodic pulsing pattern, which suggests that the
neuron is spiking at regular intervals. The waveform alternates between a high and a low value,
indicating that the presynaptic terminal is actively transmitting signals in a rhythmic manner. This
behaviour is commonly observed in spiking neural networks (SNNs) and bio-inspired circuits
where presynaptic neurons generate action potentials that lead to neurotransmitter release in
biological systems.
The negative and positive variations in current indicate a bidirectional flow of charge, which could
be the result of capacitive effects in the circuit or a design that includes both excitatory and
inhibitory components. The sharp transitions in the waveform suggest that the circuit might be
implementing a digital spiking mechanism where the synapse fires discrete pulses instead of a
continuous signal.
This output is essential for driving the postsynaptic response, influencing whether the next neuron
in the circuit gets activated. If this waveform represents a synaptic current, its amplitude and
frequency play a role in determining the synaptic strength and the learning rules applied in the
neuromorphic system. Such periodic activity is often linked to oscillatory behaviours in neural
circuits, which are key for processes like sensory encoding, decision-making, and memory
formation.
Neurons communicate by generating electrical signals, which can be categorized into two primary
firing patterns: spiking and bursting. These patterns define how neurons process and transmit
information, influencing learning, memory, and behavior. By analyzing the postsynaptic output
waveform, we can determine how a neuron responds to incoming stimuli and how these firing
patterns contribute to neural function.
Spiking refers to the generation of individual action potentials, or electrical pulses, when a neuron's
membrane potential crosses a specific threshold. Each spike represents a brief depolarization and
repolarization event, during which the neuron transmits information along its axon to communicate
with other neurons. In the context of the postsynaptic output, spiking activity occurs when the
neuron responds to presynaptic inputs by producing discrete, regularly spaced action potentials.
The frequency and timing of spikes are crucial for neural coding. Some neurons fire at a constant
rate (regular spiking), while others may exhibit variable spiking patterns depending on the strength
and timing of presynaptic inputs. Spiking neurons are commonly found in sensory processing
regions of the brain, where precise timing is essential for encoding external stimuli. In
neuromorphic computing, spiking behavior is often replicated using spiking neural networks
(SNNs) to mimic the brain’s information-processing capabilities.
Bursting is a more complex firing pattern in which a neuron generates multiple spikes in rapid
succession, followed by a period of inactivity or reduced firing. Unlike regular spiking, bursting
involves groups of action potentials occurring close together in time, forming high-frequency
clusters known as "bursts." This pattern is particularly important in areas of the brain responsible
for motor control, rhythmic activity, and certain cognitive functions.
In the postsynaptic output waveform, the presence of bursts suggests that the neuron is integrating
strong or repetitive presynaptic inputs. Bursting can enhance signal reliability and improve
synaptic plasticity, making it a key mechanism for long-term learning and adaptation. It is
influenced by factors such as calcium ion dynamics, synaptic feedback loops, and membrane
properties that regulate the neuron’s excitability. In artificial neural models, bursting is used to
improve information retention and dynamic learning.
While both spiking and bursting involve action potential generation, they differ in how they encode
information. Spiking is typically associated with precise and linear information transmission,
where each action potential carries an independent signal. Bursting, on the other hand, allows for
more complex coding by transmitting multiple signals in a short time frame, making it more
resistant to noise and signal loss.
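One common way to distinguish these two regimes in a recorded spike train is to group spikes by inter-spike interval; the 10 ms threshold below is an assumed analysis choice, not a value from the report:

```python
# Group spikes whose inter-spike interval falls below a threshold into
# bursts. max_isi is an assumed analysis parameter.
def find_bursts(spike_times, max_isi=10.0):
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            current.append(t)      # still inside the same burst
        else:
            if len(current) >= 2:  # a burst needs at least two spikes
                bursts.append(current)
            current = [t]
    if len(current) >= 2:
        bursts.append(current)
    return bursts

# Three 4-spike clusters (5 ms ISI) separated by ~85 ms of silence.
train = [t0 + k * 5.0 for t0 in (0.0, 100.0, 200.0) for k in range(4)]
print(len(find_bursts(train)))
```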
The transition between spiking and bursting is often regulated by presynaptic activity and network
dynamics. When a neuron receives weak or infrequent input, it may fire single spikes. However,
when the input is strong or sustained, the neuron can switch to a bursting mode, allowing for
enhanced communication and adaptation to changing stimuli.
The output waveform of the postsynaptic neuron provides valuable insights into how the neuron
processes incoming signals from the presynaptic input. By analyzing the waveform’s structure, we
can infer the firing behavior of the neuron, which includes spiking and bursting patterns. These
firing characteristics play a crucial role in neural communication, synaptic plasticity, and
information processing in both biological and artificial neural networks.
The provided waveform consists of regularly occurring spikes that appear in clusters, suggesting
a bursting pattern rather than isolated action potentials. The amplitude of the waveform remains
within a defined range, indicating that the neuron follows a structured response to incoming
stimuli. The periodic nature of the spikes suggests that the postsynaptic neuron is consistently
activated by presynaptic inputs. Furthermore, the sharp peaks and valleys in the waveform reflect
the rapid depolarization and repolarization phases typical of neuronal firing.
Spiking refers to the generation of discrete action potentials when the neuron’s membrane potential
reaches a critical threshold. Each spike represents a moment when the neuron transmits
information through its axon to other neurons. The presence of evenly spaced spikes suggests that
the neuron is operating in a regular firing mode, meaning that it responds to each presynaptic signal
in a predictable manner. The generation of spikes depends on multiple factors, including synaptic
strength, neurotransmitter release, and ion channel activity. In artificial neuromorphic systems,
spiking activity is often modeled using computational frameworks like the Leaky Integrate-and-
Fire (LIF) model or Hodgkin-Huxley equations, which replicate the electrical behavior of
biological neurons.
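A minimal version of the Leaky Integrate-and-Fire (LIF) model named above can be sketched as follows; the membrane constants are illustrative, not fitted to the circuit:

```python
# Minimal LIF neuron: the membrane leaks toward the input drive and
# fires/resets at threshold. All constants are assumed values.
def lif(i_in, t_end=200.0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        v += dt * (-v + i_in) / tau     # leaky integration toward i_in
        if v >= v_th:                   # fire and reset at threshold
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Sub-threshold drive (i_in < v_th) never fires; stronger drive fires
# periodically.
print(len(lif(0.9)), len(lif(2.0)))
```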
In addition to single spikes, the output also exhibits bursting activity, where multiple spikes occur
in rapid succession before a period of reduced activity. Bursting is a more complex neural behavior
that enhances signal reliability and information encoding. This pattern is often observed in neurons
involved in sensory processing, motor control, and rhythm generation. The occurrence of bursts
suggests that the postsynaptic neuron receives stronger or more frequent synaptic input, leading to
a sustained firing response. Bursting can be modulated by various factors, including calcium ion
dynamics, synaptic plasticity, and inhibitory feedback mechanisms. In artificial neural systems,
bursting behavior is often used to mimic biological information processing and improve the
efficiency of neuromorphic computing.
The postsynaptic response is directly influenced by the activity of the presynaptic neuron. When
the presynaptic neuron fires an action potential, it releases neurotransmitters (in biological
systems) or generates an electrical pulse (in neuromorphic circuits), which then influences the
postsynaptic membrane potential. If the presynaptic input is weak or infrequent, the postsynaptic
neuron may exhibit isolated spikes. However, when the presynaptic neuron fires at a higher rate,
the postsynaptic neuron may transition into a bursting mode, as observed in the waveform. This
shift from regular spiking to bursting suggests that the neuron is dynamically adjusting its output
in response to varying synaptic input strength.
The postsynaptic output waveform reveals critical aspects of neural signal processing, highlighting
both spiking and bursting dynamics. The presence of periodic spikes suggests that the neuron is
engaged in regular firing activity, while the occurrence of burst-like patterns indicates a higher
level of synaptic integration. This behavior is essential for enhancing signal transmission and
improving neural network efficiency. By understanding these firing patterns, researchers and
engineers can design better neuromorphic systems that replicate biological learning and memory
mechanisms.
5. CONCLUSION
The study of synaptic transmission through CMOS-based circuits provides critical insights into
the mechanisms of neural communication and their application in neuromorphic computing. By
designing and implementing circuits that model neurotransmission between the pre-synapse and
post-synapse, we can replicate key biological processes such as synaptic vesicle fusion,
neurotransmitter release, and postsynaptic signal integration. These CMOS circuits, fabricated
using 90-nm technology, effectively emulate the firing patterns observed in biological neurons,
demonstrating both spiking and bursting behaviours that are essential for efficient neural
processing.
Spiking patterns represent discrete action potentials that encode information in both biological and
artificial neural networks. Regular spiking indicates a steady transmission of signals, whereas
bursting behaviour—characterized by clusters of rapid spikes followed by inactivity—enhances
signal reliability and efficiency. The transition between spiking and bursting is influenced by
synaptic input strength, ion channel dynamics, and neuro-modulatory effects, factors that have
been successfully modelled in these CMOS-based circuits.
By bridging neuroscience and hardware design, these circuits lay the foundation for scalable, low-
power neuromorphic systems capable of mimicking brain-like computation. This advancement
holds promise for applications in brain-machine interfaces, intelligent learning systems, and real-
time signal processing. The ability to accurately reproduce synaptic dynamics opens new pathways
for creating more efficient and adaptable artificial intelligence systems, ultimately bringing us
closer to replicating the computational efficiency of the human brain in hardware.
6. FUTURE SCOPE
The future of CMOS-based neuromorphic circuits is promising, with applications spanning AI,
neuroscience, medical research, and next-generation computing. By further optimizing their
scalability, adaptability, and power efficiency, these circuits could play a pivotal role in the
evolution of neuromorphic hardware, helping us bridge the gap between biological intelligence
and artificial computation. The development of CMOS-based neuromorphic circuits for synaptic
transmission modeling has the potential to transform multiple fields, including artificial
intelligence, neuroscience, biomedical engineering, and real-time computing. As this technology
advances, several promising areas of research and application emerge:
Scalable Neuromorphic Architectures:
The ability of these CMOS circuits to replicate spiking and bursting behaviors in a biologically
accurate manner makes them well-suited for massively parallel neuromorphic architectures. Future
advancements could focus on scaling these circuits to develop large-scale neuromorphic chips
capable of real-time processing, bringing us closer to achieving brain-like computation in
hardware.
Brain-Machine Interfaces and Neuroprosthetics:
The ability to model synaptic transmission using CMOS circuits makes them ideal for brain-
machine interfaces (BMIs) and neuroprosthetic devices. Future work could explore their
application in restoring lost sensory or motor functions, allowing seamless communication
between artificial circuits and biological neurons. This could lead to breakthroughs in assistive
technologies for patients with neurological disorders.
Adaptive and Self-Learning Systems:
Currently, these circuits primarily focus on mimicking fixed neural behaviors. Future
developments could incorporate plasticity mechanisms, such as Spike-Timing-Dependent
Plasticity (STDP) and learning algorithms, enabling on-chip learning and adaptive AI models that
continuously evolve based on experience. This would be a significant step toward intelligent
neuromorphic systems capable of real-time learning.
Biomedical and Neuroscience Research:
Neuromorphic circuits designed to replicate synaptic transmission and neural firing patterns can
serve as valuable tools for studying neurodegenerative diseases, brain disorders, and neural
network dysfunctions. Future research could focus on simulating conditions such as epilepsy,
Parkinson’s disease, and Alzheimer’s disease, leading to better diagnostic and therapeutic
solutions.
REFERENCES
[1] E. Rahiminejad, F. Azad, A. Parvizi-Fard, M. Amiri and B. Linares-Barranco, "A Neuromorphic
CMOS Circuit with Self-Repairing Capability," in IEEE Transactions on Neural Networks and
Learning Systems, vol. 33, no. 5, May 2022, DOI: 10.1109/TNNLS.2020.3045019.
[2] J. H. B. Wijekoon and P. Dudek, "Spiking and Bursting Firing Patterns of a Compact VLSI
Cortical Neuron Circuit," in Proc. International Joint Conference on Neural Networks (IJCNN),
2007, DOI: 10.1109/IJCNN.2007.4371151.
[3] S. Mitra, S. Fusi and G. Indiveri, "Real-Time Classification of Complex Patterns Using
Spike-Based Learning in Neuromorphic VLSI," in IEEE Transactions on Biomedical Circuits and
Systems, vol. 3, no. 1, pp. 32-42, Feb. 2009, DOI: 10.1109/TBCAS.2008.2005781.
[4] A. Herzog et al., "Geometrical Modeling and Visualization of Pre- and Post-Synaptic
Structures in Double-Labeled Confocal Images," in International Conference on Medical
Information Visualisation - BioMedical Visualisation, London, England, UK, 2006, pp. 34-
38, DOI: 10.1109/MEDIVIS.2006.12.
[5] X. Wu, V. Saxena, K. Zhu and S. Balagopal, "A CMOS Spiking Neuron for Brain-Inspired
Neural Networks with Resistive Synapses and In Situ Learning," in IEEE Transactions on
Circuits and Systems II: Express Briefs, vol. 62, no. 11, pp. 1088-1092, Nov. 2015,
DOI:10.1109/TCSII.2015.2456372.
[6] S. Mitra, S. Fusi and G. Indiveri, "Real-Time Classification of Complex Patterns Using Spike-
Based Learning in Neuromorphic VLSI," in IEEE Transactions on Biomedical Circuits and
Systems, vol. 3, no. 1, pp. 32-42, Feb. 2009, DOI:10.1109/TBCAS.2008.2005781.
[7] S. Nazari, M. Amiri, K. Faez and M. M. Van Hulle, "Information Transmitted from
Bioinspired Neuron–Astrocyte Network Improves Cortical Spiking Network’s Pattern
Recognition Performance," in IEEE Transactions on Neural Networks and Learning Systems,
vol. 31, no. 2, pp. 464-474, Feb. 2020, DOI:10.1109/TNNLS.2019.2905003.
CERTIFICATES