
Available online at www.sciencedirect.com

ScienceDirect
Procedia Computer Science 194 (2021) 122–131
www.elsevier.com/locate/procedia

18th International Learning & Technology Conference 2021

Fault Detection based on Deep Learning for Digital VLSI Circuits

Lamya Gaber a, Aziza I. Hussein b, Mohammed Moness a

a Computers and Systems Eng. Dept., Minia University, Minia 61111, Egypt.
b Electrical and Computer Eng. Dept., Effat University, Jeddah 22332, Saudi Arabia.
Abstract

With the growing complexity of digital VLSI circuits, fault detection and correction have become the most crucial phases of IC design. Many CAD tools and formal approaches have been used for debugging and localizing different kinds of design bugs. However, the search space explosion problem remains the main problem for IC designers. Recently, artificial intelligence and machine learning models have been expanded into feature extraction and reduction models. In this paper, we introduce a new fault detection model based on deep learning for extracting features and detecting faults in large-sized digital circuits. The main goal of the proposed model is to avoid the search space explosion by using a stacked sparse autoencoder (SSAE), a specific type of artificial neural network. The model consists of three phases: test pattern generation using the ATALANTA software, feature reduction using the SSAE, and classification for fault detection. Test vectors are used in the SSAE as training data for the unsupervised learning phase. The performance of feature extraction is tested by changing the architecture of the SSAE network and the sparsity constraint. The proposed algorithm has been implemented on eight combinational digital circuits from ISCAS'85. From the experimental results, the maximum fault coverage delivered by the ATALANTA tool is around 99.2% on ISCAS'85. In addition, the maximum validation accuracy of the proposed SSAE model is around 99.7% in the feature reduction phase.

© 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the 18th International Learning & Technology Conference 2021

Keywords: Design Debugging; Sparse Autoencoder; ML; Deep Learning.

1. Introduction

By the growing advances in digital technologies, verification, fault detection and correction procedures in digital systems are becoming more and more crucial. In large complex systems, fault detection is a challenging task due to the different types of design bugs and the large size of circuits. Therefore, extensive recent research has been proposed for improving the performance of fault detection with artificial intelligence (AI), Boolean satisfiability problems and model checking. To match the behavioral specification and avoid the time and cost consumed in design, many advances have focused on developing debugging and correction algorithms.
* Corresponding have
author. Tel.: been focused on developing debugging and correction algorithms.
+20-100-906-7271.
E-mail address: [email protected] ; [email protected].

* Corresponding author. Tel.: +20-100-906-7271.


1877-0509 © 2021
E-mail address: The Authors. Published; [email protected].
[email protected] by ELSEVIER B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the 18th International Learning & Technology Conference 2021
1877-0509 © 2021 The Authors. Published by ELSEVIER B.V.
This is an open
1877-0509 access
© 2021 Thearticle underPublished
Authors. the CC BY-NC-ND
by Elsevier license
B.V. (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review underaccess
This is an open responsibility
article of the scientific
under committee of license
the CC BY-NC-ND the 18th(https://creativecommons.org/licenses/by-nc-nd/4.0)
International Learning & Technology Conference 2021
Peer-review under responsibility of the scientific committee of the 18th International Learning & Technology Conference 2021
10.1016/j.procs.2021.10.065
2 Lamya Gaber. et al. / Procedia Computer Science 00 (2021) 000–000
Lamya Gaber et al. / Procedia Computer Science 194 (2021) 122–131 123

In the IC design stages, the debugging and correction phases contribute on average 70% of the total design time [1]. With the growth of complexity, debugging with auto-correction contributes on average 60% of the verification process in digital design for observing failures in a given design.
Therefore, researchers have focused on finding efficient ways to quickly verify digital systems and detect their faults, reaching a good trade-off between accuracy, speed and human interpretation. In this paper we discuss the most common ways of detecting faults in digital systems.
There are three main categories of methods for detecting faults, based mainly on machine learning algorithms, SAT-based algorithms and model checking algorithms, respectively. Also, there are different types of faults which may occur in digital circuits, such as extra or missing inverters, gate replacements and stuck-at faults. We attempt to discuss the following:

1- How machine learning algorithms can be used for detecting stuck-at faults.
2- How SAT-based algorithms can be used for detecting gate replacement faults.

In machine learning (ML) models, the main core is the data set, which differs from one application to another. ML models use these data sets to learn from a given set of data, which can take many forms such as integers, strings, images, videos, audio, etc. In digital VLSI circuits, the data sets are the outputs of test pattern generators, which produce specific values assigned to the inputs of the digital circuit together with their corresponding outputs. Therefore, the accuracy of the data set has a great impact on the performance of fault detection algorithms. In recent decades, many advanced ATPG tools have emerged, with expensive licenses. One of the most powerful free ATPG tools is the ATALANTA software [2], which was created at Virginia Tech. It is suitable for combinational circuits only and has many advantages such as efficiency and flexibility with respect to test sets, fault dictionaries and fault coverage. It can also perform random test pattern generation (RTPG).
Recently, the use of artificial intelligence methods for feature extraction and fault detection has expanded. Most methods extract features from the inputs and actual outputs of every circuit and then compare them with the target outputs. In [3], the authors utilized a support vector machine (SVM) for fault detection, but the main disadvantages of this method are its sample size, local minimum and non-linearity problems. In [4, 5], the multi-class relevance vector machine and random forests are used for fault diagnosis. However, good results from these methods are only achieved after a long time. In [6], a hierarchical artificial neural network (HANN) was proposed for extracting faults by dividing the fault pattern space into small sub-spaces using the fuzzy C-means clustering algorithm. Also, in [7], a global two-layer back-propagation network was used for fault classification.
In addition, big data can be handled by one of the newest methods in the field of automatic feature extraction and fault detection, known as deep learning. Traditional methods have been improved by using high-level features extracted from big data instead of the original data, with high accuracy. In [8], the fault detection and correction processes are modelled by an artificial neural network and a cryptographic service which performs privacy defense over dispersed cyber-physical systems.
On the other hand, SAT and MAX-SAT are utilized to detect and correct some kinds of errors, such as gate replacements. In SAT-based algorithms, the whole process is converted into a Boolean satisfiability problem. Then, advanced SAT solvers (such as the MiniSAT solver [9] or a DPLL solver [10]) can be utilized for extracting specific information and attributes (such as minimal correction or minimal unsatisfiable subsets, MCSes and MUSes) from the SAT instances. These extracted subsets are mapped to the corresponding faulty gates [11]. In addition, many SAT-based ATPG algorithms [12] have been used for generating test patterns that can be used in the subsequent processes of fault detection and correction [13-15].
Various SAT-based approaches have been proposed to perform fault detection with high performance. In [16], MUS generation was proposed by initially computing all minimal correction subsets (MCSes) of the original formula. The extraction of MUSes is then performed on the MCSes instead of the original formula, using the relation between MCSes and MUSes. Therefore, SAT solver calls can be avoided when generating every single MUS. The main disadvantage of this method is that the causes of single faults (MUSes) cannot be recognized without generating all minimal correction subsets. Therefore, the performance of the fault detection algorithm decreases, especially with small faults and large-sized circuits. In [17], a new algorithm was proposed for finding the minimal correction subsets based on

some proofs proposed in [18], which extracts minimal correction subsets that are directly mapped to failures in circuits. In [19], a recursive method of computing MUSes is performed directly from the input formula using critical constraints. These types of algorithms depend on how fast a single MUS can be extracted using a shrinking algorithm. In [11], the authors improved the shrinking method by removing non-relevant parts of the original formula, which has a high impact on increasing speed with the same accuracy for single gate-replacement faults. This approach used the serial test pattern generator proposed in [12] for extracting test vectors with correct responses in order to form a SAT instance for the debugging phase. These SAT-based methods have been utilized in subsequent fault correction based on satisfiability [13, 20].
In this paper, we propose an efficient method based on deep learning for feature extraction and detection of stuck-at faults in combinational digital circuits. Our proposed model attempts to reduce big data using stacked sparse autoencoders, extracting the most important features with high reconstruction accuracy. The method consists of three main phases: test pattern generation, feature reduction and classification. The rest of the paper is organized as follows: section 2 gives background on deep learning and autoencoders. Section 3 explains the proposed model and section 4 demonstrates the performance evaluation. Section 5 presents the conclusion.

2. Background

2.1. Deep Learning

Deep learning is one of the newest efficient approaches in the field of automatic feature extraction and fault detection. By using big data and extracting the most important features with deep learning, the performance of fault detection is increased compared with traditional techniques [21, 22].
Artificial neural networks (ANNs) mainly consist of multiple layers; every layer has a number of neurons which adapt complex functions through a progression of nonlinear transformations [23]. These systems of neurons can be connected to perform many applications, such as complex classifications. Therefore, various works [6] have focused on performing fault diagnosis as one of the ANN applications.
There are multiple types of ANN, such as the Feedforward Neural Network (FNN), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN) and Modular Neural Network (MNN). The following table compares these four types of neural networks.

Table 1. Comparison between different types of neural networks

Feedforward NN
- Definition: The simplest form of ANN. Input data passes in one direction, from the input nodes to the output nodes. It may or may not have hidden layers. There is usually no backpropagation; a classifying activation function is used, and the network has a front-propagated wave.
- Applications: Classification where the target classes are complicated.
- Visual description: Fig. 1. Example of FNN [24].

Recurrent NN
- Definition: Saves the output and feeds it back to the input to help in predicting the correct output. Each neuron acts like a memory cell. The learning rate is used for finding the correct prediction.
- Applications: Text-to-speech (TTS) conversion models.
- Visual description: Fig. 2. Example of RNN [24].

Convolutional NN
- Definition: Similar to a feedforward neural network. The input features are passed batch-wise.
- Applications: Suitable for image processing and computer vision.
- Visual description: Fig. 3. Example of CNN [24].

Modular NN
- Definition: Consists of multiple networks working independently. The inputs of the neural networks are not similar, as they perform different sub-tasks (no interaction). Large processes can be broken down into smaller models in order to decrease complexity. Processing time depends on the number of neurons.
- Applications: Function approximation, character recognition and patient-independent ECG recognition systems.
- Visual description: Fig. 4. Example of MNN [24].

2.2. Autoencoders

One of the powerful unsupervised machine learning algorithms is an autoencoder (AE). It is simply used for
implementing dimensionally feature reduction from higher dimension to a lower dimension. This reduction process
attempts to preserve the most important features of data and removing the redundant ones or non-essential parts. A
single layered autoencoder with a linear activation function is similar to Principle Component Analysis (PCA).
However, PCA is essentially a linear transformation whereas autoencoders can model more complex non-linear
functions. Figure 5 visualizes the difference between PCA and autoencoder for feature reduction. In simple words,
PCA are trying to find a lower dimensional hyperplane whereas autoencoders attempt to learn nonlinear manifolds.

Fig. 5. A visual difference between PCA and Autoencoder [25]

Autoencoders have a symmetrical structure. It consists of three parts: encoder, code and decoder as shown in figure
7.

1- The encoder down-samples the input into a smaller number of bits known as the latent space (the code). The point of maximum compression is called the bottleneck, and the compressed data is called the encoding of the input.
2- The decoder attempts to reconstruct the input from its encoding. The dimensionality reduction is performed by using non-linear hidden layers [26]. The best encodings of the input are achieved when the decoder is able to reconstruct the input exactly as it was passed to the encoder. At this point, the encoded data can be used instead of the original data in a supervised algorithm such as a classifier.

Fig. 6. Autoencoder process [27].
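As an illustration of the encoder/code/decoder structure described above, the following is a minimal sketch of a linear autoencoder trained by gradient descent. The 2-D synthetic data, the tiny 2→1→2 network, the learning rate and the epoch count are all assumptions for illustration; the paper's actual model is the larger stacked sparse autoencoder on test-pattern data.

```python
import random

# Minimal linear autoencoder sketch: 2 inputs -> 1 code unit -> 2 outputs.
# Illustrative assumption: synthetic 2-D points lying on a line, so one
# code unit is enough for a near-perfect reconstruction.
random.seed(0)
data = [(t, 2.0 * t) for t in [0.1 * k for k in range(10)]]

# Encoder weights w and decoder weights v (no biases, linear activations).
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
v = [random.uniform(-0.5, 0.5) for _ in range(2)]
lr = 0.05

def forward(x):
    code = w[0] * x[0] + w[1] * x[1]        # encoder: 2 -> 1 (the "code")
    recon = (v[0] * code, v[1] * code)      # decoder: 1 -> 2 (reconstruction)
    return code, recon

def loss():
    # Mean squared reconstruction error over the data set.
    total = 0.0
    for x in data:
        _, r = forward(x)
        total += (x[0] - r[0]) ** 2 + (x[1] - r[1]) ** 2
    return total / len(data)

initial = loss()
for _ in range(200):                        # plain per-sample gradient descent
    for x in data:
        code, recon = forward(x)
        err = [recon[i] - x[i] for i in range(2)]
        grad_code = sum(2 * err[i] * v[i] for i in range(2))  # dL/dcode
        for i in range(2):
            v[i] -= lr * 2 * err[i] * code                    # dL/dv_i
            w[i] -= lr * grad_code * x[i]                     # dL/dw_i
final = loss()
```

After training, `final` should be much smaller than `initial`: the single code unit has learned the direction the data lies on, which is the sense in which the decoder "reconstructs the input from the encoding".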


An autoencoder network can be trained to find the best reconstructed output (x̂) by minimizing the reconstruction error between the original data and the reconstructed output. Therefore, autoencoders attempt to balance between: 1- being sensitive enough to the inputs to build an accurate reconstructed output, and 2- being insensitive enough to the inputs to avoid memorizing and overfitting the training data. Therefore, the loss function of the model can be defined as in eq. 1, with two parts: one term satisfies the sensitivity to the input x, and the other, called the regularizer, avoids memorization/overfitting.

L(x, x̂) + regularizer                                          (1)

There are many types of autoencoders, such as convolutional autoencoders, denoising autoencoders, variational autoencoders, sparse autoencoders and deep autoencoders (stacked sparse autoencoders). We will focus on the last two types.

2.3. Sparse autoencoders

Sparse autoencoders avoid reducing the number of nodes in the hidden layers; instead, only a small number of neurons in the hidden layers are activated, as shown in figure 7. This is considered an alternative technique for introducing a bottleneck without reduction. It is done by imposing a sparsity constraint on the hidden units and constructing the loss function so that activations within the hidden layers are penalized. Informally, if the output of a neuron is close to 1, the neuron is "active" or "firing", and if the output value is close to zero, the neuron is "inactive" (in the case of a sigmoid activation function).

Fig. 7. Sparse autoencoder [25].



There are two ways of imposing a sparsity constraint in the loss function, called L1 regularization and KL-divergence. For each training batch, both methods measure the hidden-layer activations and add a term to the loss function that penalizes excessive activations.
In L1 regularization, the term added to the loss function penalizes the absolute value of the activation vector a in the hidden layer (h) for training observation i, scaled by a tuning parameter λ, as follows:

L(x, x̂) + λ Σi |ai(h)|                                         (2)

In KL-divergence, the sparsity constraint is defined through the average activation of a neuron over a collection of m samples, as follows:

ρ̂j = (1/m) Σi=1..m [aj(2)(x(i))]                               (3)

where j denotes a neuron in the hidden layer (h = 2), m denotes the number of training observations and x(i) denotes an individual input or observation. The sparse autoencoder attempts to make this average activation equal to a sparsity parameter ρ, which is typically close to zero. This can be achieved by adding an extra penalty term to the optimization objective, using different methods [25].
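The computation behind eq. 3 and the KL penalty can be sketched as follows. The activation values below are made-up numbers for illustration; the KL form used is the standard Bernoulli KL(ρ ‖ ρ̂j) summed over hidden units, which is one of the "different methods" of penalizing the objective.

```python
import math

def mean_activations(activations):
    """Eq. 3: rho_hat_j = mean over m samples of hidden unit j's activation.
    `activations` is a list of m rows, each row being a(2)(x(i))."""
    m = len(activations)
    n = len(activations[0])
    return [sum(row[j] for row in activations) / m for j in range(n)]

def kl_sparsity_penalty(rho, rho_hat):
    """Sum over hidden units of KL(rho || rho_hat_j) for Bernoulli variables.
    Large when a unit's average activation drifts away from the target rho."""
    return sum(rho * math.log(rho / r)
               + (1 - rho) * math.log((1 - rho) / (1 - r))
               for r in rho_hat)

# Hypothetical activations for m = 2 samples and 2 hidden units:
acts = [[0.05, 0.9],
        [0.15, 0.8]]
rho_hat = mean_activations(acts)          # first unit averages to the target 0.1
penalty = kl_sparsity_penalty(0.1, rho_hat)
```

Here the first unit matches the target sparsity ρ = 0.1, so it contributes essentially nothing, while the second unit (average activation 0.85) dominates the penalty and would be pushed toward inactivity during training.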

2.4. Stacked Sparse Autoencoders

A stacked sparse autoencoder (SSAE), or deep autoencoder, attempts to improve the performance of typical autoencoders for complex architectures. This type of autoencoder is formed of several stacked autoencoder layers. Every encoding computed by an autoencoder is the input of the next autoencoder, as shown in figure 8. Every autoencoder is trained separately in an unsupervised stage by minimizing its reconstruction error. When all layers are pre-trained, the network is passed to a supervised stage (stage 3 in figure 8).

Fig. 8. Deep Autoencoder

2.5. Fault Detection using Deep Learning

Since deep neural networks have no need for an accurate mathematical model of the data or a formal specification, the fault detection process of digital VLSI circuits [28] can be performed by utilizing an SSAE with test vectors generated by ATALANTA, one of the powerful ATPG tools. First, the test vectors are the inputs of the SSAE, which compresses the data successively through its stacked sparse autoencoders. This phase is considered unsupervised learning, as there are no labels for the data; rather, the targets are the same as the inputs (the inputs of the encoder equal the outputs of the decoder).
After the pre-training stages, the input and encoding data from every SAE are used along with a final layer called the softmax layer. The softmax function can be used for classification in machine learning. It is also called multi-class logistic regression. It can be used for multi-class classification when the classes are mutually exclusive. The softmax function converts score values into a normalized probability distribution, which gives useful intuition to a user. The softmax formula is described in the following equation:
σ(z)i = e^(zi) / Σj=1..k e^(zj)                                 (4)

where z is the input vector of the softmax, the zi values are the (real-valued) outputs of the neural network and k is the number of classes in the multi-class classifier [28].
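Eq. 4 can be implemented in a few lines; the max-shift below is a standard numerical-stability trick (it does not change the result, since it cancels in the ratio). The input scores are made-up values for illustration.

```python
import math

def softmax(z):
    """Eq. 4: probabilities proportional to e^(z_i), summing to 1."""
    m = max(z)                              # shift for numerical stability
    exps = [math.exp(zi - m) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]

# Three hypothetical class scores from the network's final layer:
probs = softmax([2.0, 1.0, 0.1])
```

The class with the highest score receives the highest probability, and the whole vector sums to 1, which is what lets the classifier read the outputs as a distribution over fault classes.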

3. The Proposed Fault Detection Approach

In this section, the proposed fault detection model based on deep learning is explained in detail. Fig. 9 shows the main phases of our algorithm: test pattern generation, feature reduction and classification.

• Phase 1:
The first step is generating test patterns for the digital circuit, which are used in the subsequent steps. A large amount of training data yields high accuracy and performance. Therefore, the test patterns for digital circuits have to be unique and numerous in order to help the neural network learn the different features of digital circuits. The input of the ATPG tool is the digital circuit in bench format, and the outputs are a number of test patterns with the correct outputs for each fault and the fault mask for every test pattern. In this phase, the ATALANTA tool is used as the test pattern generator, which depends on the FAN algorithm [2].

• Phase 2:
After generating and augmenting the data, the neural network takes these data and reduces their dimensions. As most digital circuits are large, which impacts the performance of traditional fault detection algorithms, a stacked sparse autoencoder can be used as an unsupervised learning algorithm to perform feature dimension reduction. Therefore, the target of this phase is to find the minimum number of features that can be used in the next steps instead of the original features, with the same accuracy. In our application, the features are the test patterns and the corresponding fault-free responses of the digital circuits. These patterns are used as training data for the stacked sparse autoencoder.

• Phase 3:
The last step detects the faults for every test pattern. After compressing the test patterns and finding the compact data which can be used for reconstruction, the softmax classifier is used as the last layer in our machine learning model. The number of classes in this layer is the number of faults recognized in the first phase. As previously stated, when there are multiple classes, the ML model takes "n" real numbers corresponding to the input position of each class. Equation 4 then converts the real values to probabilities of the input being in each of the n classes.

Fig. 9. The proposed Fault Detection Model
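The three phases above can be wired together as the following skeleton. Every component here is a stand-in: the ATALANTA output is mocked with two made-up (test vector, fault id) pairs, the trained SSAE encoder is replaced by a trivial truncation stub, and the classifier weights are hypothetical, so this shows only the data flow of Fig. 9, not the real model.

```python
def generate_test_patterns():
    """Phase 1 stand-in: ATALANTA would emit test vectors with fault labels.
    These two patterns and fault ids are invented for illustration."""
    return [([0, 1, 1, 0, 1], 0),     # pattern labelled with fault class 0
            ([1, 1, 0, 0, 0], 1)]     # pattern labelled with fault class 1

def reduce_features(vector, n_code=3):
    """Phase 2 stand-in: a trained SSAE encoder would map the vector to a
    low-dimensional code; here the first n_code components are simply kept."""
    return vector[:n_code]

def classify(code, weights):
    """Phase 3: score each fault class and return the argmax. The softmax of
    eq. 4 is omitted because it preserves the ordering of the scores."""
    scores = [sum(wi * ci for wi, ci in zip(row, code)) for row in weights]
    return scores.index(max(scores))

# Hypothetical per-class weight rows for the 3-dimensional code.
weights = [[0.0, 0.0, 1.0],
           [1.0, 0.0, 0.0]]
predictions = [classify(reduce_features(v), weights)
               for v, _ in generate_test_patterns()]
```

With these toy weights the pipeline maps each mocked test vector back to its own fault class, mirroring how the real model would attach a fault label to each compressed test pattern.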



4. Experimental Results

In this section, we illustrate the experimental results of the proposed algorithm for detecting stuck-at-0 and stuck-at-1 faults using a stacked sparse autoencoder. The proposed algorithm is implemented on eight combinational circuits from the ISCAS'85 benchmark [29]. The model was executed on an Intel Core i7-10750 working at 2.60 GHz with 16 GB of system memory.
Every digital circuit from ISCAS'85 has been passed to the ATALANTA software. The stuck-at-0 and stuck-at-1 faults are recognized by ATALANTA, and a number of test patterns are generated for each fault. We have selected 20/50 test vectors for each fault in every digital circuit. Table 2 illustrates the number of inputs and outputs, faults, test vectors and the percentage of fault coverage for every digital circuit.

Table 2. Number of inputs and outputs, faults, test patterns and fault coverage of 8 combinational circuits

Circuit   #Inputs and outputs   #Faults   #Test Vectors   Fault Coverage %
c17       7                     22        54              100%
c432      43                    524       8950            98.8%
c499      73                    758       13066           95.5%
c880      86                    942       15357           100%
c1908     58                    1879      34216           99.2%
c3540     72                    3428      55752           95.9%
c5315     301                   5350      88978           98.7%
c6288     64                    7744      114101          80.8%

The parameters of the proposed SSAE are illustrated in table 3. The numbers of hidden layers and hidden neurons are changed experimentally to find the best feature reduction. Also, the amount of the sparsity constraint ρ is changed experimentally to find the best reconstruction performance of the SSAE, with 100 epochs and a batch size of 32. Table 3 also compares simple autoencoders and stacked sparse autoencoders in terms of validation accuracy. From this comparison, we conclude that the SSAE gives a maximum reconstruction accuracy of around 99.7% on "c5315", which has 301 inputs and outputs (compressed to 20 by three stacked sparse autoencoders with sparsity around 10e-9).

Table 3. Number of hidden neurons and validation accuracy of feature extraction with sparsity constraints

Circuit   #Hidden neurons per layer   Validation accuracy (simple AE)   Validation accuracy (SSAE)   Sparsity constraint of SSAE
c17       5,3,2                       63.9%                             71.4%                        10e-3
c432      30,20,10                    95.4%                             97.5%                        10e-6
c499      50,30,20                    98.2%                             98.5%                        10e-9
c880      50,30,20                    98.3%                             98.6%                        10e-6
c1908     50,30,20                    97.8%                             98.8%                        10e-6
c3540     50,30,20                    97.3%                             98.2%                        10e-6
c5315     100,50,20                   98.9%                             99.7%                        10e-9
c6288     50,30,20                    97.9%                             99.3%                        10e-6

Figure 10 illustrates the effect of ρ on the accuracy of the SSAE in the reconstruction phase. We changed the sparsity constraint between three values (ρ1 = 0.49, ρ2 = 0.067, ρ3 = 0.024) with 5 different combinational circuits. The SSAE model is implemented using three SAEs with the same architecture as stated in table 3. From the results, we can conclude that the best value for the sparsity constraint is about 0.024 (ρ3 in fig. 10) for the eight combinational circuits (ALU circuits and SEC circuits) in the ISCAS'85 benchmark.

[Bar chart: validation accuracy (%) of the SSAE for sparsity values ρ1, ρ2, ρ3 on circuits c17, c432, c499, c880, c1908, c3540, c5315 and c6288.]

Fig. 10. The effect of sparsity constraints on the accuracy of SSAE.

5. Conclusion

In this paper, a new approach to the fault detection process based on an artificial neural network is proposed for detecting stuck-at faults occurring in the 27-channel interrupt controller and ALU circuits from the ISCAS'85 benchmark. The proposed algorithm attempts to avoid the search space explosion problem by compressing features of digital circuits (test patterns in our case). In our algorithm, the ATALANTA software is utilized for generating test vectors for each fault. Then, three sparse autoencoders are stacked together with different numbers of hidden neurons, chosen for the best reconstruction accuracy, especially for debugging large-sized digital circuits. The proposed SSAE is connected to a softmax classifier as the last layer for performing the supervised learning phase, using the fault mask for every test pattern generated by the ATALANTA tool. The proposed approach has been implemented on eight combinational circuits from ISCAS'85. The maximum fault coverage delivered is around 99.3% using the ATALANTA tool. Also, the SSAE network delivers around 99.7% maximum validation accuracy in the feature reduction of test patterns for the eight combinational circuits.

References

[1] P. Rashinkar, P. Paterson, and L. Singh, System-on-a-chip verification: methodology and techniques:
Springer Science & Business Media, 2007.
[2] P. Fišer. (September 2005). Atalanta-M. Available: https://ddd.fit.cvut.cz/prj/Atalanta-M/
[3] Q. Hu, R.-j. Wang, and Y.-j. Zhan, "Fault diagnosis technology based on SVM in power electronics
circuit," 2008.
[4] T. Wang, H. Xu, J. Han, E. Elbouchikhi, and M. E. H. Benbouzid, "Cascaded H-bridge multilevel inverter
system fault diagnosis using a PCA and multiclass relevance vector machine approach," IEEE Transactions
on Power Electronics, vol. 30, pp. 7006-7018, 2015.
[5] J.-D. Cai and R.-W. Yan, "Fault diagnosis of power electronic circuit based on random forests algorithm," in
2009 Fifth International Conference on Natural Computation, 2009, pp. 214-217.
[6] R. Eslamloueyan, "Designing a hierarchical neural network based on fuzzy clustering for fault diagnosis of
the Tennessee–Eastman process," Applied soft computing, vol. 11, pp. 1407-1415, 2011.
[7] Z. Zhang and J. Zhao, "A deep belief network based fault diagnosis model for complex chemical processes,"
Computers & chemical engineering, vol. 107, pp. 395-407, 2017.
[8] H. Xiao, M. Cao, and R. Peng, "Artificial neural network based software fault detection and correction
prediction models considering testing effort," Applied Soft Computing, vol. 94, p. 106491, 2020.
[9] N. Eén and N. Sörensson. (2016). The MiniSat Page. Available: http://minisat.se
[10] A. Dal Palù, A. Dovier, A. Formisano, and E. Pontelli, "Cud@ sat: Sat solving on gpus," Journal of
Experimental & Theoretical Artificial Intelligence, vol. 27, pp. 293-316, 2015.
[11] L. Gaber, A. I. Hussein, H. Mahmoud, M. M. Mabrook, and M. Moness, "Computation of minimal
unsatisfiable subformulas for SAT-based digital circuit error diagnosis," Journal of Ambient Intelligence and
Humanized Computing, pp. 1-19, 2020.
[12] M. Osama, L. Gaber, A. I. Hussein, and H. Mahmoud, "An Efficient SAT-Based Test Generation
Algorithm with GPU Accelerator," Journal of Electronic Testing, vol. 34, pp. 511-527, 2018.
[13] L. Gaber, A. I. Hussein, and M. Moness, "Improved Automatic Correction for Digital VLSI Circuits," in
2019 31st International Conference on Microelectronics (ICM), 2019, pp. 18-22.
[14] L. Gaber, A. I. Hussein, and M. Moness, "Incremental Automatic Correction for Digital VLSI
Circuits," presented at the 11th International Conference on VLSI (VLSI 2020), 2020.
[15] L. Gaber, A. I. Hussein, and M. Moness, "Fast Auto-Correction Algorithm for Digital VLSI
Circuits," presented at the 17th International Learning & Technology Conference 2020, 2020.
[16] M. H. Liffiton, A. Previti, A. Malik, and J. Marques-Silva, "Fast, flexible MUS enumeration," Constraints,
vol. 21, pp. 223-250, 2016.
[17] L. G. Ali, A. I. Hussein, and H. M. Ali, "An efficient computation of minimal correction subformulas for
SAT-based ATPG of digital circuits," in Computer Engineering and Systems (ICCES), 2017 12th
International Conference on, 2017, pp. 383-389.
[18] L. G. Ali, A. I. Hussein, and H. M. Ali, "Parallelization of unit propagation algorithm for SAT-based
ATPG of digital circuits," in Microelectronics (ICM), 2016 28th International Conference on, 2016, pp. 184-
188.
[19] J. Bendík and I. Černá, "MUST: Minimal Unsatisfiable Subsets Enumeration Tool," in International
Conference on Tools and Algorithms for the Construction and Analysis of Systems, 2020, pp. 135-152.
[20] B. Alizadeh and S. R. Sharafinejad, "Incremental SAT-Based Accurate Auto-Correction of Sequential
Circuits Through Automatic Test Pattern Generation," IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems, vol. 38, pp. 245-252, 2018.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural
networks," Communications of the ACM, vol. 60, pp. 84-90, 2017.
[22] G. E. Dahl, D. Yu, L. Deng, and A. Acero, "Context-dependent pre-trained deep neural networks for large-
vocabulary speech recognition," IEEE Transactions on audio, speech, and language processing, vol. 20, pp.
30-42, 2011.
[23] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition,"
arXiv preprint arXiv:1409.1556, 2014.
[24] K. MALADKAR. (2018). 6 Types of Artificial Neural Networks Currently Being Used in Machine Learning.
Available: https://analyticsindiamag.com/6-types-of-artificial-neural-networks-currently-being- used-in-
todays-technology/
[25] J. Jordan. (2018). Introduction to autoencoders. Available: https://www.jeremyjordan.me/autoencoders/
[26] Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, "Deep learning for visual understanding: A
review," Neurocomputing, vol. 187, pp. 27-48, 2016.
[27] O. Aouedi, K. Piamrat, and D. Bagadthey, "A Semi-supervised Stacked Autoencoder Approach for
Network Traffic Classification," 2020.
[28] L. Malihi and R. Malihi, "Single stuck-at-faults detection using test generation vector and deep stacked-
sparse-autoencoder," SN Applied Sciences, vol. 2, pp. 1-10, 2020.
[29] D. Bryan, "The ISCAS'85 benchmark circuits and netlist format," North Carolina State University, vol. 25,
1985.
