18CSE357T – BIOMETRICS
Unit 1 : Session 2 : SLO 1
DESIGN CYCLE OF BIOMETRIC SYSTEM
NATURE OF THE APPLICATION
• Cooperative versus non-cooperative users
• Overt versus covert deployment
• Habituated versus non-habituated users
• Attended versus unattended operation
• Controlled versus uncontrolled operation
• Open versus closed system
CHOICE OF BIOMETRIC TRAIT
• Universality
• Uniqueness
• Permanence
• Measurability
• Performance
• Acceptability
• Circumvention
DATA COLLECTION
This data is required both for designing the feature extraction and matcher modules and for evaluating the designed biometric system. However, before venturing into data collection, we first need to design appropriate sensors to acquire the chosen biometric trait(s). Factors such as size, cost, ruggedness, and the ability to capture good quality biometric samples are among the key issues in biometric sensor design.
• If the database is too easy (i.e., it includes only good quality biometric samples with small intra-user variations), the resulting recognition error rates will be close to zero and it will be very difficult to distinguish between the competing feature extraction and matching algorithms (see the sketch after these bullets).
• On the other hand, if the database is too challenging (i.e., it includes
only poor quality biometric samples with large intra-user variations),
the recognition challenge may be beyond the capabilities of existing
technologies.
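To make this trade-off concrete, here is a minimal Python sketch (the score lists, threshold, and function name are illustrative assumptions, not from the source) that computes the false match rate (FMR) and false non-match rate (FNMR) at a single decision threshold. On an "easy" database the genuine and impostor scores barely overlap, so essentially every algorithm achieves near-zero error rates and the comparison tells us little.

# Hedged sketch: error rates at one decision threshold, given similarity scores.
# genuine_scores come from comparing samples of the same user;
# impostor_scores come from comparing samples of different users.
def error_rates(genuine_scores, impostor_scores, threshold):
    # False non-match: a genuine comparison that falls below the threshold.
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False match: an impostor comparison that reaches the threshold.
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

# "Easy" database: well-separated scores, so both rates are ~0 for any algorithm.
print(error_rates([0.90, 0.92, 0.88, 0.95], [0.10, 0.15, 0.20, 0.05], threshold=0.5))
# "Hard" database: heavily overlapping scores, so errors reflect the data difficulty.
print(error_rates([0.55, 0.40, 0.70, 0.45], [0.50, 0.35, 0.60, 0.42], threshold=0.5))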
Choice of features and matching algorithm
• Design of a feature extractor and matcher requires not only a database of biometric samples, but also some prior knowledge about the biometric trait under consideration.
• For instance, prior knowledge about the “uniqueness” of minutia
points facilitated the development of minutiae-based fingerprint
recognition systems.
• Similarly, the fact that a minutiae pattern is typically represented as an unordered set of points drives the development of a suitable matching algorithm to match the minutiae sets (a minimal version is sketched below).
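To make the point about unordered sets concrete, here is a minimal Python sketch of one common matching idea: greedily pair each minutia with its nearest unused counterpart and accept the pair only if it lies within a distance tolerance. The function name, the tolerance value, and the assumption that the two fingerprints are already aligned are illustrative simplifications; practical matchers also use minutia orientation, type, and an explicit alignment step.

import math

# Each minutia is reduced to an (x, y) location for this sketch.
def match_minutiae(set_a, set_b, dist_tol=15.0):
    # Because the sets are unordered, pair each minutia in set_a with its
    # nearest unused counterpart in set_b, keeping the pair only if the
    # distance is within the tolerance (alignment is assumed already done).
    unused = list(set_b)
    matched = 0
    for (xa, ya) in set_a:
        best, best_d = None, dist_tol
        for j, (xb, yb) in enumerate(unused):
            d = math.hypot(xa - xb, ya - yb)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matched += 1
            unused.pop(best)
    # Normalize by the smaller set size to get a similarity score in [0, 1].
    return matched / max(1, min(len(set_a), len(set_b)))

print(match_minutiae([(10, 12), (40, 55), (80, 30)],
                     [(12, 10), (42, 57), (200, 200)]))  # about 0.67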
Evaluation
• Technology evaluation: Technology evaluation compares competing algorithms from a single technology on a standardized database. Since the database is fixed, the technology evaluation results are repeatable. The Fingerprint Verification Competitions (FVC), the Fingerprint Vendor Technology Evaluation (FpVTE), the Face Recognition Vendor Tests (FRVT), the Face Recognition Technology (FERET) program, and the NIST Speaker Recognition Evaluations (SRE) are examples of biometric technology evaluations (a minimal error-rate comparison is sketched after this list).
• Scenario evaluation: In scenario evaluation, the testing of the prototype biometric systems
is carried out in an environment that closely resembles the real-world application. Since
each system will acquire its own biometric data, care must be taken to ensure uniformity in
the environmental conditions and sample population across the different prototype systems.
• Operational evaluation: Operational evaluation is used to ascertain the performance of a
complete biometric system in a specific real-world application environment on a specific
target population.
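As a minimal illustration of how a technology evaluation can rank competing matchers on one fixed database, the Python sketch below approximates each matcher's equal error rate (EER) by sweeping the decision threshold over the observed scores. The score lists and the function name are illustrative assumptions, not results from any of the evaluations named above.

# Hedged sketch: approximate equal error rate (EER) for one matcher.
def approx_eer(genuine_scores, impostor_scores):
    # Sweep the threshold over every observed score and keep the point where
    # the false match rate and false non-match rate (nearly) cross.
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine_scores) | set(impostor_scores)):
        fnmr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        fmr = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if abs(fmr - fnmr) < best_gap:
            best_gap, eer = abs(fmr - fnmr), (fmr + fnmr) / 2
    return eer

# Because the database (and hence each matcher's score lists) is fixed, the
# ranking of matcher A against matcher B is repeatable across runs.
genuine_a, impostor_a = [0.90, 0.85, 0.70, 0.95], [0.20, 0.30, 0.10, 0.40]
genuine_b, impostor_b = [0.80, 0.45, 0.60, 0.90], [0.50, 0.30, 0.55, 0.20]
print("EER of matcher A:", approx_eer(genuine_a, impostor_a))
print("EER of matcher B:", approx_eer(genuine_b, impostor_b))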