UGC-sponsored Two-Week Online Refresher Course on
“Biometrics Security in the Generative AI Era”
Day-8 Assignment (Dt: 13-08-2025)
1. List any five applications of BioHashing and explain any one with sample case study.
Ans:
Five Applications of BioHashing
1. Secure User Authentication
Protects system logins by converting biometric features into non-reversible hash codes.
2. Banking and Financial Transactions
Ensures secure access to ATM services, online banking, and mobile payment systems.
3. Access Control in High-Security Areas
Restricts entry to sensitive zones like data centers, research labs, or defense facilities.
4. E-Voting Systems
Prevents impersonation and ensures that only eligible voters can cast votes.
5. Digital Rights Management (DRM)
Protects access to copyrighted digital content using biohash-based keys.
Case Study: BioHashing in Banking Transactions
Scenario:
A multinational bank wanted to replace password/PIN-based ATM authentication with a more
secure, user-friendly system. They deployed BioHashing using fingerprint data.
Implementation Steps:
1. Biometric Capture:
The ATM captures the customer’s fingerprint.
2. Feature Extraction:
Distinct fingerprint features (e.g., minutiae points) are extracted.
3. Token Generation:
A user-specific random token is generated during enrollment (stored on a secure chip
card).
4. BioHash Creation:
Fingerprint features are projected using the token into a new feature space, producing a
biohash code (binary string).
5. Storage:
Only the biohash (not the fingerprint image) is stored in the bank’s secure database.
6. Verification:
o During ATM use, the fingerprint is scanned again.
o The same token transforms it into a biohash.
o The new biohash is compared with the stored one.
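The enrollment-and-verification flow above can be sketched in a few lines of Python. This is an illustrative toy, not the bank's actual system: the feature vector, token seed, and 8-bit hash length are made-up values, and the orthonormalization step used in full BioHashing is omitted for brevity.

```python
import random

def biohash(features, token_seed, n_bits=8):
    # Token-seeded random projection followed by binarisation.
    # (Sketch: real BioHashing orthonormalises the projection
    # vectors, e.g. via Gram-Schmidt; omitted here for brevity.)
    rng = random.Random(token_seed)
    bits = []
    for _ in range(n_bits):
        row = [rng.gauss(0, 1) for _ in features]
        dot = sum(r * f for r, f in zip(row, features))
        bits.append(1 if dot > 0 else 0)   # threshold tau = 0
    return bits

# Enrollment: fingerprint feature vector + secret token -> stored biohash
enrolled = biohash([0.8, 0.1, 0.5, 0.9, 0.3, 0.7], token_seed=42)

# Verification: a fresh capture of the same finger (slight sensor noise)
fresh = biohash([0.79, 0.12, 0.5, 0.88, 0.31, 0.7], token_seed=42)
hamming = sum(a != b for a, b in zip(enrolled, fresh))
print(hamming)   # typically small for the same finger and token
```

Only the bit string is stored; revoking the token (choosing a new seed) yields an entirely new biohash from the same finger.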
Outcome & Advantages:
No raw biometrics stored → prevents misuse if database is hacked.
Revocability → if a token is compromised, a new one can be issued, and a new biohash
generated.
Fast Matching → biohash codes allow quick 1:1 comparison.
2. Difference between Cancelable Biometrics, Biohashing and Biometric encryption.
Ans:
1. Cancelable Biometrics
Definition:
A method where the original biometric template is deliberately and repeatably distorted using
a transformation function before storage.
Key Points:
Uses a mathematical transformation (e.g., rotation, permutation, random projection).
If compromised, the transformed template can be changed (revoked) without changing
the actual biometric.
The transformation function (key) must be kept secret.
Example:
Applying a fixed random permutation to fingerprint minutiae coordinates before storage.
If stolen, apply a new permutation to regenerate a new template.
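A minimal sketch of this permutation-based scheme, assuming minutiae are reduced to (x, y) pairs and the secret key doubles as the shuffle seed (both illustrative choices):

```python
import random

def cancelable_template(minutiae, key):
    # Key-seeded fixed permutation of minutiae coordinates.
    # Revocation = pick a new key and regenerate the template.
    rng = random.Random(key)
    order = list(range(len(minutiae)))
    rng.shuffle(order)
    return [minutiae[i] for i in order]

minutiae = [(12, 40), (55, 23), (30, 78), (61, 9)]   # toy (x, y) points
t1 = cancelable_template(minutiae, key=2024)

# Template compromised? Revoke by issuing a new key:
t2 = cancelable_template(minutiae, key=9999)
print(t1, t2)
```

The same key always reproduces the same template, which is what makes matching in the transformed domain possible.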
2. BioHashing
Definition:
A token-based transformation of biometric features into a compact binary code (biohash) using
a user-specific random key/token.
Key Points:
Requires both biometric data and a secret token for authentication.
Biohash is non-invertible (cannot reconstruct biometric from it).
Revocable by issuing a new token.
More secure than pure cancelable biometrics because it adds two-factor security
(biometric + token).
Example:
Fingerprint features projected using a random orthonormal matrix generated from a token →
binary biohash code stored in database.
3. Biometric Encryption (BE)
Definition:
A method where a cryptographic key is securely bound to a biometric template so that it can
be retrieved only with the correct biometric input.
Key Points:
Biometric acts as a key to unlock encrypted data.
The key itself is never stored explicitly.
Protects both the biometric and the cryptographic key.
Often implemented using Fuzzy Commitment or Fuzzy Vault schemes.
Example:
In a Fuzzy Vault scheme, a cryptographic key is hidden within a "vault" created using biometric
features (e.g., fingerprint minutiae). Only a close-enough biometric sample can retrieve the key.
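The bind-and-release idea can be illustrated with the simpler Fuzzy Commitment scheme, using a repetition code as a stand-in for the stronger error-correcting codes real systems use; the key length, block size, and flipped-bit positions are arbitrary toy values:

```python
import random

def encode(key_bits, rep=5):
    # Repetition code: each key bit repeated `rep` times
    return [b for bit in key_bits for b in [bit] * rep]

def decode(codeword, rep=5):
    # Majority vote within each block of `rep` bits
    return [1 if sum(codeword[i:i + rep]) > rep // 2 else 0
            for i in range(0, len(codeword), rep)]

def commit(key_bits, bio_bits, rep=5):
    # Bind key to biometric: store codeword XOR biometric
    c = encode(key_bits, rep)
    return [ci ^ bi for ci, bi in zip(c, bio_bits)]

def retrieve(commitment, bio_bits, rep=5):
    # XOR with a fresh sample, then error-correct to recover the key
    c = [wi ^ bi for wi, bi in zip(commitment, bio_bits)]
    return decode(c, rep)

random.seed(1)
key = [1, 0, 1, 1]
bio = [random.randint(0, 1) for _ in range(20)]
w = commit(key, bio)

# A fresh, slightly noisy sample of the same biometric (2 bits flipped)
noisy = bio[:]
noisy[3] ^= 1
noisy[12] ^= 1
print(retrieve(w, noisy) == key)  # True: errors within correction capacity
```

Neither the key nor the biometric is stored in the clear; the commitment alone reveals neither.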
3. Explain the core concept of differential privacy, including its mathematical foundation
and why it offers stronger privacy guarantees than methods like k-anonymity.
Ans:
1. Core Concept
Differential Privacy is a mathematical framework for ensuring that the output of a computation
does not reveal too much information about any single individual in the dataset — even if an
attacker has other auxiliary knowledge.
Intuition:
If I remove your data from a dataset, the output of any analysis should look almost the same as
if your data were included. This means no one can confidently tell whether you were in the
dataset or not.
2. Mathematical Foundation
A randomized algorithm M (a mechanism) satisfies ε-Differential Privacy if, for all
datasets D1 and D2 differing in only one record, and for all possible sets of outputs S:

Pr[M(D1) ∈ S] ≤ e^ε · Pr[M(D2) ∈ S]

Interpretation:
The probability of getting a given output is almost the same whether or not any single individual
is included. The "closeness" is controlled by ε (the privacy budget).
3. How It’s Achieved
DP is typically implemented by adding random noise (often Laplace or Gaussian) to query
results. For a query f with sensitivity Δf (the most f can change when one record is added or
removed), the Laplace mechanism releases:

M(D) = f(D) + Lap(Δf / ε)

This ensures the result doesn't depend too strongly on any one person's presence.
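A sketch of the Laplace mechanism for a counting query (which has sensitivity 1); the dataset, predicate, and ε value here are invented for illustration:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale)
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(dataset, predicate, epsilon, rng):
    # A counting query has sensitivity 1: adding or removing one
    # person changes the count by at most 1, so scale = 1/epsilon.
    true_count = sum(1 for x in dataset if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 38, 47, 31]
noisy = dp_count(ages, lambda a: a > 30, epsilon=0.5, rng=rng)
print(round(noisy, 2))  # true count is 6; output is the count plus noise
```

Smaller ε means larger noise and stronger privacy; repeated queries consume the privacy budget.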
4. Why It’s Stronger than k-Anonymity
Guarantee
k-Anonymity: each individual is indistinguishable from at least k-1 others in terms of quasi-identifiers.
Differential Privacy: the output probability changes only minimally if any one person's data changes.
Resistance to Auxiliary Information
k-Anonymity: weak; attackers can re-identify individuals using other data sources (linkage attacks).
Differential Privacy: strong; privacy holds even if the attacker knows almost everything else.
Mathematical Bound
k-Anonymity: no formal bound on information leakage.
Differential Privacy: formal, quantifiable bound via ε.
Randomization
k-Anonymity: not required; deterministic grouping.
Differential Privacy: required; uses randomized algorithms.
Example of weakness in k-anonymity:
If all 10 people in a group have the same medical diagnosis, k-anonymity says you’re “safe,” but
in reality, the attacker still learns that diagnosis.
Differential Privacy avoids this because noise would hide exact values.
4. What is homomorphic encryption, and how does it differ from traditional encryption
methods?
Ans:
1. What is Homomorphic Encryption?
Homomorphic Encryption (HE) is a type of encryption that allows you to perform
computations directly on encrypted data without first decrypting it.
The result of the computation — when decrypted — is the same as if you had performed the
operation on the original, unencrypted data.
2. How It Differs from Traditional Encryption
Operation on Data
Traditional Encryption: must decrypt data first, operate, then re-encrypt.
Homomorphic Encryption: can operate directly on ciphertext without decryption.
Data Exposure Risk
Traditional Encryption: higher; data is in plaintext during computation.
Homomorphic Encryption: lower; data stays encrypted the entire time.
Performance
Traditional Encryption: fast, minimal overhead.
Homomorphic Encryption: slower; computations on encrypted data are mathematically heavier.
Use Cases
Traditional Encryption: secure storage, secure transmission.
Homomorphic Encryption: secure computation on sensitive data by untrusted servers (e.g., cloud).
3. Types of Homomorphic Encryption
1. Partially Homomorphic Encryption (PHE)
Supports only one type of operation (e.g., addition or multiplication).
Example: RSA is multiplicatively homomorphic.
2. Somewhat Homomorphic Encryption (SHE)
Supports a limited number of both operations before accumulated noise makes the ciphertext undecryptable.
3. Fully Homomorphic Encryption (FHE)
Supports unlimited addition and multiplication operations on ciphertext.
Invented by Craig Gentry in 2009 — a breakthrough in cryptography.
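RSA's multiplicative homomorphism (the PHE example above) can be demonstrated with textbook parameters; the primes 61 and 53 form the classic toy example and are far too small for real security:

```python
# Toy RSA with tiny primes -- for illustration only, not secure.
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 11
c_product = (enc(m1) * enc(m2)) % n   # multiply ciphertexts only
print(dec(c_product))                 # 77 == (7 * 11) % n
```

The server multiplied two ciphertexts without ever seeing 7 or 11, yet decryption yields their product.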
4. Why It’s Powerful
Privacy-preserving cloud computing → servers can process your encrypted data
without ever seeing it.
Secure medical data analysis → researchers can run analytics on encrypted patient
records.
Financial services → risk models can run on encrypted client data without violating
confidentiality.
5. Discuss the role of fuzzy logic in handling noisy and variable biometric data. Provide
examples from fingerprint, iris, and multimodal biometric systems.
Ans:
1. Why Fuzzy Logic is Needed in Biometrics
Biometric data (fingerprints, iris scans, face images, etc.) is never perfectly consistent because
of:
Noise → sensor imperfections, dirt, lighting changes
Variability → natural changes in physiology or capture conditions
Partial data → incomplete fingerprints, occluded irises
Traditional matching (exact threshold-based) can fail when data is slightly distorted.
Fuzzy logic solves this by:
Allowing degrees of similarity instead of strict “match / no match”
Handling uncertainty and approximate reasoning
Mapping raw similarity scores into linguistic terms like High Match, Medium Match,
Low Match
2. Core Concept of Fuzzy Logic in Biometrics
Fuzzy logic works with membership functions that map an input (e.g., similarity score) to a
degree between 0 and 1, representing the strength of membership in a “match” category.
Example membership function for fingerprint similarity:
High Match: membership close to 1 for scores > 0.85
Medium Match: membership ~0.5 for scores around 0.65
Low Match: membership close to 0 for scores < 0.4
The system uses IF–THEN rules to make a decision:
IF similarity is High THEN accept
IF similarity is Medium AND quality is Good THEN accept
ELSE reject
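These membership functions and rules can be sketched as follows; the triangular shapes and breakpoint values are illustrative choices, not a standard:

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_decision(score, quality):
    # Fuzzify the raw similarity score (assumed in [0, 1])
    high   = tri(score, 0.70, 0.90, 1.01)
    medium = tri(score, 0.45, 0.65, 0.85)
    good_q = tri(quality, 0.50, 0.80, 1.01)

    # Rule strengths: AND = min, OR = max (Mamdani-style)
    r1 = high                     # IF similarity is High THEN accept
    r2 = min(medium, good_q)      # IF Medium AND Good quality THEN accept
    accept_strength = max(r1, r2)
    return "accept" if accept_strength >= 0.5 else "reject"

print(fuzzy_decision(0.66, 0.85))  # medium score, good quality -> accept
print(fuzzy_decision(0.66, 0.30))  # same score, poor quality  -> reject
```

Note how the same similarity score leads to different decisions depending on capture quality, which a single hard threshold cannot do.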
3. Examples in Specific Biometric Systems
A. Fingerprint Recognition
Problem: Fingerprint scans can have partial prints, smudges, or dry/wet fingers → minutiae
matching score varies.
Fuzzy Approach:
Define membership functions for match score (low, medium, high) and image quality
(poor, average, good).
Rule: IF match score is medium AND quality is good → treat as match.
Benefit: Prevents false rejections when the print is slightly degraded.
B. Iris Recognition
Problem: Iris scans can be affected by pupil dilation, eyelashes, reflections.
Fuzzy Approach:
Input: Hamming distance between iris codes.
Membership functions: Very Close, Close, Far.
Rule: IF Hamming distance is very close → accept; IF close AND quality is high →
accept.
Benefit: Maintains high accuracy even with slight iris pattern distortions.
C. Multimodal Biometric Systems
Problem: Multiple biometrics (e.g., fingerprint + face) may have conflicting scores.
Fuzzy Approach:
Inputs: Normalized match scores from each modality.
Fuzzy rules combine them:
o IF fingerprint is high AND face is medium → accept.
o IF both are medium → request re-scan.
Benefit: Makes balanced decisions when one biometric is noisy but the other is
reliable.
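A simplified sketch of such a fusion rule base, using crisp linguistic labels instead of full fuzzy inference for brevity; the score boundaries are invented:

```python
def grade(score):
    # Map a normalized score in [0, 1] to a linguistic label
    if score >= 0.75:
        return "high"
    if score >= 0.50:
        return "medium"
    return "low"

def fuse(fp_score, face_score):
    fp, face = grade(fp_score), grade(face_score)
    # Simplified rule base for fingerprint + face fusion
    if fp == "high" and face in ("high", "medium"):
        return "accept"
    if fp == "medium" and face == "medium":
        return "rescan"
    return "reject"

print(fuse(0.82, 0.61))  # fingerprint high, face medium -> accept
print(fuse(0.60, 0.55))  # both medium -> rescan
```

A full fuzzy system would keep the membership degrees and aggregate rule strengths rather than collapsing each score to one label.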
4. Advantages of Fuzzy Logic in Biometrics
Handles uncertainty and noise gracefully.
Reduces false rejections due to minor data variations.
Allows human-like reasoning in decision-making.
Works well for multimodal fusion when combining multiple imperfect sources.
6. Explain the step-by-step working of a biometric authentication algorithm that uses fuzzy
logic. Highlight how membership functions and fuzzy rules improve decision-making
compared to traditional binary thresholding.
Ans:
1. Step-by-Step Working of a Fuzzy Biometric Authentication Algorithm
Step 1: Capture and Preprocessing
The biometric sample (fingerprint, iris, face) is acquired, denoised, and normalized. A capture-quality score is also estimated.
Step 2: Feature Extraction and Matching
Distinctive features (e.g., minutiae points, iris code) are extracted and compared with the enrolled template, producing a raw similarity score.
Step 3: Fuzzification
Membership functions map the crisp similarity score (and the quality score) to degrees between 0 and 1 in linguistic categories, for example:
High Match: membership close to 1 for scores > 0.85
Medium Match: membership ~0.5 for scores around 0.65
Low Match: membership close to 0 for scores < 0.4
Step 4: Fuzzy Rule Evaluation
IF–THEN rules combine the fuzzified inputs:
IF similarity is High THEN accept
IF similarity is Medium AND quality is Good THEN accept
ELSE reject
AND is typically the minimum of the memberships; competing rules are aggregated with the maximum.
Step 5: Defuzzification and Decision
The aggregated rule strengths are converted back into a single crisp output (e.g., by centroid or maximum rule strength), giving the final accept/reject decision.
2. How Membership Functions and Fuzzy Rules Improve on Binary Thresholding
Binary thresholding: a single hard cutoff (e.g., accept if score ≥ 0.70) treats 0.69 and 0.71 completely differently, so minor noise, dry skin, or occlusion causes false rejections, and context such as capture quality is ignored.
Fuzzy decision-making:
Membership functions provide a gradual transition between "match" and "no match," so borderline scores are weighed proportionately instead of cut off abruptly.
Fuzzy rules let supporting evidence (good image quality, a second modality) tip a borderline case, mimicking human reasoning.
Result: fewer false rejections under noise and variability, while clearly impostor scores are still firmly rejected.
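The contrast with binary thresholding can be made concrete with a small sketch; the membership-function shapes and cutoffs are illustrative assumptions:

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def binary_decision(score, threshold=0.70):
    # Traditional crisp thresholding: one hard cutoff
    return "accept" if score >= threshold else "reject"

def fuzzy_decision(score, quality):
    high   = tri(score, 0.60, 0.85, 1.01)      # fuzzification
    medium = tri(score, 0.40, 0.62, 0.84)
    good_q = tri(quality, 0.50, 0.80, 1.01)
    # Rules: IF High -> accept; IF Medium AND Good quality -> accept
    accept = max(high, min(medium, good_q))
    return "accept" if accept >= 0.5 else "reject"

# Borderline genuine user: score just under the hard cutoff,
# but the capture quality is high.
print(binary_decision(0.68))        # reject (a false rejection)
print(fuzzy_decision(0.68, 0.90))   # accept (quality tips the decision)
```

The hard threshold rejects the borderline genuine user outright, while the fuzzy rules use the quality evidence to recover the correct decision.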