Criminal Final

The document discusses building a criminal face recognition system using face biometrics. It describes traditional security approaches like human guards, locks/keys and their problems. The proposed system has two phases: enrollment where user features are extracted and stored, and authentication where extracted features are matched to verify identity. The goal is to build a face biometric database to recognize criminals.

Uploaded by Anuj Gupta
© Attribution Non-Commercial (BY-NC)

Face Biometric Criminal Database

ABSTRACT
The aim of the project is to build a criminal recognition system based on face recognition, a technique known as a face biometric application. The biometric security system has so far been the most successful at providing a sense of security. It rests on the fact that almost every part of the human body is unique in its own way, and with the help of electronics we can exploit that uniqueness to set up a security system in which a person is granted access only if he biometrically proves that he has the right to access the asset. Among the many biometric parameters, the ones that have gained popularity include:

Fingerprints
Hand geometry
Iris/retina scan
Face study
Signature reading
Voice recognition


Gone are the days when an engineer or a scientist had to stick to his respective field of work. Today, with so many technological advances on our side, we see a number of technical fields converging to produce works that sweep the world off its feet. Such is the combination of life science/biology and electronics, which has given us the field called biometrics and further blurred the demarcation between different branches of science. With cutthroat competition in every field, people are ready to pay any amount for a product that promises even a little sense of security, and the biometric security system has so far been the most successful at providing it.

Chapter - 1

INTRODUCTION



Traditional means of authentication such as passwords and Personal Identification Numbers (PINs) have dominated the computing world until recently and are likely to remain essential for years to come. However, stronger authentication technologies capable of providing higher degrees of certainty are in demand for better security. Biometrics is one such strong authentication technology. Four factors: reduced cost, reduced size, increased accuracy and increased ease of use have made biometrics an increasingly feasible solution for securing access to computers and networks. Millions of people around the world use biometric technology in a variety of applications such as attendance logging, voter registration and international travel. Biometrics can be used for security, for convenience, for fraud reduction and as an empowering technology.

Traditional Security Approaches:


Human guards.
Police.
Locks and keys.
Numeric keypads.
Magnetic cards / PINs.
Usernames / passwords.
Surveillance cameras.

Problems with the Traditional Security Approaches:


Human guards and police are expensive, prone to error and corruption.
Keys, PINs and passwords can be stolen, lost, or cracked.
A PIN identifies a card and a password identifies a username, not the user.

Chapter - 2

AN OVERVIEW OF BIOMETRICS



Biometrics refers to the automatic identification of a person based on his/her physiological or behavioral characteristics. This method of identification is preferred over traditional methods involving passwords and PINs for various reasons: (i) the person to be identified is required to be physically present at the point of identification; (ii) identification based on biometric techniques obviates the need to remember a password or carry a token.

With the increased use of computers as vehicles of information technology, it is necessary to restrict access to sensitive/personal data. By replacing PINs, biometric techniques can potentially prevent unauthorized access to or fraudulent use of ATMs, cellular phones, smart cards, desktop PCs, workstations, and computer networks. PINs and passwords may be forgotten, and token-based methods of identification like passports and driver's licenses may be forged, stolen, or lost. Thus biometric-based systems of identification are receiving considerable interest. Various types of biometric systems are being used for real-time identification; the most popular are based on face, iris and fingerprint matching. However, there are other biometric systems that utilize retinal scans, speech, signatures and hand geometry.

A biometric system is essentially a pattern recognition system which makes a personal identification by determining the authenticity of a specific physiological or behavioral characteristic possessed by the user. An important issue in designing a practical system is to determine how an individual is identified. Depending on the context, a biometric system can be either a verification (authentication) system or an identification system.

2.1 Verification vs. Identification


There are two different ways to resolve a person's identity: verification and identification. Verification ("Am I who I claim to be?") involves confirming or denying a person's claimed identity. In identification, one has to establish a person's identity ("Who am I?"). Each of these approaches has its own complexities and is probably best solved by a particular biometric system.
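To make the 1:1 versus 1:N distinction concrete, here is a minimal sketch of verification and identification over a toy template database. The names, feature vectors, distance measure and threshold are illustrative assumptions, not part of the report's system.

```python
import numpy as np

# Toy facial templates, assumed already extracted; names and values
# are illustrative only.
database = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return float(np.linalg.norm(a - b))

def verify(claimed_id, probe, threshold=0.5):
    """1:1 matching: confirm or deny a claimed identity."""
    return distance(database[claimed_id], probe) <= threshold

def identify(probe, threshold=0.5):
    """1:N matching: find the closest enrolled identity, if close enough."""
    best_id = min(database, key=lambda uid: distance(database[uid], probe))
    return best_id if distance(database[best_id], probe) <= threshold else None

probe = np.array([0.85, 0.15, 0.3])
print(verify("alice", probe))   # True: the probe is close to alice's template
print(identify(probe))          # alice
```

Verification only compares against one stored template, while identification must search the whole database, which is why it is the harder problem.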

Fig 2.1.Ben Gurion Airport - Hand Geometry

Fig 2.2 Face Pass - Face Verification


Fig 2.3 Heathrow Airport - Iris

Fig 2. 4 Grocery store payment - Fingerprint

2.2 History of Biometrics



Chinese Precursor:
Possibly the first known example of biometrics in practice was a form of fingerprinting used in China in the 14th century, as reported by the explorer João de Barros. He wrote that Chinese merchants were stamping children's palm prints and footprints on paper with ink to distinguish the young children from one another. This is one of the earliest known cases of biometrics in use, and fingerprinting is still being used today.

European Origins:
Elsewhere in the world, up until the late 1800s, identification relied largely on "photographic memory." In the 1890s, an anthropologist and police desk clerk in Paris named Alphonse Bertillon sought to fix the problem of identifying convicted criminals and turned biometrics into a distinct field of study. He developed a method of multiple body measurements which was named after him (Bertillonage). His system was used by police authorities throughout the world, until it quickly faded when it was discovered that some people shared the same measurements, so that on the measurements alone two people could be treated as one. After the failure of Bertillonage, the police started using fingerprinting, developed by Edward Richard Henry of Scotland Yard, essentially reverting to the same methods used by the Chinese for years.

Modern Times:



In the past three decades biometrics has moved from a single method (fingerprinting) to more than ten discrete methods. Companies involved with the new methods number in the hundreds and continue to improve them as the available technology advances. Prices for the required hardware continue to fall, making systems feasible for low- and mid-level budgets. As the industry grows, however, so does public concern over privacy issues. Laws and regulations continue to be drafted and standards are beginning to develop. While no other biometric has yet reached the breadth of use of fingerprinting, some are beginning to be used in both legal and business areas.

2.3 Operation & Performance


In a typical IT biometric system, a person registers with the system when one or more of his physical and behavioral characteristics are obtained. This information is then processed by a numerical algorithm and entered into a database. The algorithm creates a digital representation of the obtained biometric. If the user is new to the system, he or she enrolls, which means that the digital template of the biometric is entered into the database. Each subsequent attempt to use the system, or authenticate, requires the biometric of the user to be captured again and processed into a digital template. That template is then compared to those existing in the database to determine a match. The process of converting the acquired biometric into a digital template for comparison is completed each time the user attempts to authenticate to the system.

The comparison process involves the use of a Hamming distance, a measurement of how similar two bit strings are. Two identical bit strings have a Hamming distance of zero, while two totally dissimilar ones have a Hamming distance of one. Thus, the Hamming distance measures the percentage of dissimilar bits out of the number of comparisons made. Ideally, when a user logs in, nearly all of his features match; when someone else who does not match tries to log in, the system will not allow the new person to log in. Current technologies have widely varying equal error rates, from as low as 60% to as high as 99.9%.
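The normalized Hamming distance described above can be sketched in a few lines; the bit strings here are illustrative.

```python
def hamming_distance(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length bit strings differ:
    0.0 for identical strings, 1.0 for totally dissimilar ones."""
    if len(a) != len(b):
        raise ValueError("bit strings must have equal length")
    differing = sum(x != y for x, y in zip(a, b))
    return differing / len(a)

print(hamming_distance("101101", "101101"))  # 0.0 (identical)
print(hamming_distance("101101", "010010"))  # 1.0 (totally dissimilar)
print(hamming_distance("101101", "101001"))  # 1 of 6 bits differ
```

A real system would compare far longer templates and accept the user when the distance falls below a tuned threshold.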

Fig 2.5 Sensitivity vs. Errors Graph

Performance of a biometric measure is usually referred to in terms of the false accept rate (FAR), the false non-match or reject rate (FRR), and the failure to enroll rate (FTE or FER). The FAR measures the percentage of invalid users who are incorrectly accepted as genuine users, while the FRR measures the percentage of valid users who are rejected as impostors. In real-world biometric systems the FAR and FRR can typically be traded off against each other by changing some parameter. One of the most common measures of real-world biometric systems is the rate at which the accept and reject errors are equal: the equal error rate (EER), also known as the cross-over error rate (CER). The lower the EER or CER, the more accurate the system is considered to be.
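The FAR/FRR trade-off and the EER can be demonstrated with a small threshold sweep. The similarity scores below are made up for illustration; a real matcher would produce them.

```python
import numpy as np

# Illustrative similarity scores (higher = more similar); real systems
# would obtain these from a matcher, not hard-code them.
genuine  = np.array([0.91, 0.85, 0.65, 0.88, 0.70, 0.95])  # same-person pairs
impostor = np.array([0.30, 0.45, 0.75, 0.25, 0.60, 0.40])  # different-person pairs

def far_frr(threshold):
    far = float(np.mean(impostor >= threshold))  # impostors wrongly accepted
    frr = float(np.mean(genuine < threshold))    # genuine users wrongly rejected
    return far, frr

# Sweep the decision threshold and take the point where FAR and FRR are
# closest: an approximation of the equal error rate (EER).
thresholds = np.linspace(0.0, 1.0, 101)
rates = [far_frr(t) for t in thresholds]
eer_idx = int(np.argmin([abs(far - frr) for far, frr in rates]))
print(thresholds[eer_idx], rates[eer_idx])
```

Raising the threshold lowers FAR and raises FRR, which is exactly the trade-off the text describes.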



Claimed error rates sometimes involve idiosyncratic or subjective elements. For example, one biometrics vendor set the acceptance threshold high to minimize false accepts; in the trial, three attempts were allowed, so a false reject was counted only if all three attempts failed. At the same time, when measuring the performance of behavioral biometrics (e.g. writing, speech, etc.), opinions may differ on what constitutes a false reject. If a signature verification system is trained with an initial and a surname, can a false reject be legitimately claimed when it then rejects a signature incorporating a full first name? Despite these misgivings, biometric systems have the potential to identify individuals with a very high degree of certainty. Forensic DNA evidence enjoys a particularly high degree of public trust at present (ca. 2004), and substantial claims are being made for iris recognition technology, which has the capacity to discriminate between individuals with identical DNA, such as monozygotic twins.

Chapter - 3

FACIAL RECOGNITION



A face recognition system identifies an individual by analyzing the unique shape, pattern and positioning of facial features. There are essentially two methods of processing the data: video and thermal imaging. Standard video techniques are based on the facial image captured by a video camera. Thermal imaging techniques analyze the heat-generated pattern of blood vessels underneath the skin. The attraction of this biometric system is that it is able to operate 'hands-free', limiting the amount of man-machine interaction.

Facial biometrics uses various features of the face to recognize or verify a user. Facial recognition is a 1-to-many mapping, whereas facial verification is a simpler 1-to-1 mapping. There are four primary facial recognition techniques: eigenfaces, feature analysis, neural networks and automatic face processing. The neural network approach, for example, takes the template face and runs it through a neural network that tries to identify it against its database (or verifies it against the template the user is enrolled to). If a false acceptance or rejection occurs, the neural network modifies its weights to improve recognition later. Another interesting approach is MIT's eigenfaces. A database holds a large number of template faces (typically between 60 and 120). When your face is read in, it is mapped onto the template faces so that the program can mix the template faces together to artificially reproduce your face. This reconstruction is then used later for recognition and verification purposes.
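The eigenfaces idea, representing any face as a weighted mix of template faces, is essentially principal component analysis. Here is a minimal sketch using NumPy's SVD; random vectors stand in for real flattened grayscale images, and the sizes (60 faces, 256 pixels, 20 components) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a training set: 60 "faces", each flattened to a
# 256-pixel vector (a real system would use actual face images).
faces = rng.normal(size=(60, 256))

# 1. Centre the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. The "eigenfaces" are the top right-singular vectors of the
#    centred data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]            # keep the 20 strongest components

# 3. Any face is then represented by its projection weights, i.e. as a
#    mix of the eigenfaces, as the text describes.
def project(face):
    return eigenfaces @ (face - mean_face)

def reconstruct(weights):
    return mean_face + weights @ eigenfaces

probe = faces[0]
weights = project(probe)
approx = reconstruct(weights)
print(weights.shape)            # (20,)
```

Recognition then compares the compact weight vectors (e.g. by nearest neighbour) rather than raw images.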

3.1 Organization of the System


The facial recognition system mainly consists of two phases: the Enrollment phase and the Authentication/Verification phase. The organization of the system is shown diagrammatically in the figures below.



Enrollment: capture and processing of user biometric data for use by the system in subsequent authentication operations. During the first phase, user enrollment, as shown in Fig. 3.1, features are extracted from the input face given by the user by a process called feature extraction, and are modeled as a template. Modeling is the process of enrolling a user in the verification system by constructing a model of his/her face based on the features extracted from his/her facial sample. After the features are extracted and all the signal processing is done, the system checks the quality of the templates extracted from the face; if features of sufficient quality are extracted, the templates are stored in the database, otherwise the system needs to acquire a new facial image. The collection of all such enrolled models is called the facial database.



[Flowchart: Face Data Collection → Transmission → Signal Processing, Feature Extraction, Representation → Quality Sufficient? If No: require new acquisition of face; if Yes: Generate Template → Database]

Fig. 3.1: Schematic flow of Enrollment Phase

Authentication/Verification: capture and processing of user biometric data in order to render an authentication decision based on the outcome of matching the stored template to the current one. In the second phase, the verification phase, as shown in Fig. 3.2, features are extracted from the face of a user and these current features are compared with the claimed features stored in the database by a process called feature matching. Based on this comparison the final decision is made about the user's identity.

[Flowchart: Facial Data Collection → Transmission → Signal Processing, Feature Extraction → Quality Sufficient? If No: additional image preprocessing, adaptive extraction/representation, or require new acquisition of face; if Yes: Generate Template → Template Match against Database → Decision Confidence? → Yes/No]

Fig. 3.2: Schematic flow of Verification Phase



Both phases include feature extraction, which extracts user-dependent characteristics from the face. The main purpose of this process is to reduce the amount of test data while retaining user-discriminative information. The complete system, combining both the enrollment and verification phases, is shown in the schematic flow diagram below.

[Flowchart, Enrollment Phase: Acquire and Digitize Facial Data → Extract High-Quality Biometric Features/Representation → Formulate Biometric Feature/Representation Template → Template Depository (Database). Authentication/Verification Phase: Acquire and Digitize Biometric Data → Extract High-Quality Biometric Features/Representation → Formulate Biometric Feature/Representation Template → Template Matcher (against Database) → Decision → Output]

Fig. 3.3: Schematic Flow of Complete System
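The complete enroll-then-verify flow of Fig. 3.3 can be sketched as a minimal pipeline. The feature extractor, quality check, similarity measure and threshold below are all stand-in assumptions, not the report's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
template_db = {}  # the "facial database" of enrolled models

def extract_features(image):
    # Placeholder for real signal processing / feature extraction:
    # here we just flatten and normalise the image.
    v = np.asarray(image, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def quality_sufficient(features):
    # Stand-in quality check; a real system would assess sharpness,
    # pose, illumination, etc.
    return float(np.linalg.norm(features)) > 0.5

def enroll(user_id, image):
    """Enrollment phase: extract features, check quality, store template."""
    features = extract_features(image)
    if not quality_sufficient(features):
        return False          # require a new acquisition of the face
    template_db[user_id] = features
    return True

def verify(user_id, image, threshold=0.9):
    """Verification phase: match current features to the stored template."""
    if user_id not in template_db:
        return False
    features = extract_features(image)
    similarity = float(template_db[user_id] @ features)  # cosine similarity
    return similarity >= threshold

face = rng.normal(size=(16, 16))
enroll("user42", face)
print(verify("user42", face + rng.normal(scale=0.01, size=(16, 16))))  # True
print(verify("user42", rng.normal(size=(16, 16))))  # False: different face
```

The quality gate before storage mirrors the "Quality Sufficient?" decision box in the figures.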


3.2 Introduction
3.2.1 Fundamental Issues in Face Recognition:
Robust face recognition requires the ability to recognize identity despite the many variations in appearance that a face can have in a scene. The face is a 3D object which is illuminated from a variety of light sources and surrounded by arbitrary background data (including other faces). Therefore, the appearance a face has when projected onto a 2D image can vary tremendously. If we wish to develop a system capable of performing non-contrived recognition, we need to find and recognize faces despite these variations. In fact, 3D pose, illumination and foreground-background segmentation have been pertinent issues in the field of computer vision as a whole.

Additionally, our detection and recognition scheme must be capable of tolerating variations in the faces themselves. The human face is not a unique rigid object. There are billions of different faces and each of them can assume a variety of deformations. Inter-personal variations can be due to race, identity, or genetics, while intra-personal variations can be due to deformations, expression, aging, facial hair, cosmetics and facial paraphernalia.

Furthermore, the output of the detection and recognition system has to be accurate. A recognition system has to associate an identity or name with each face it comes across by matching it to a large database of individuals. Simultaneously, the system must be robust to typical image-acquisition problems such as noise, video-camera distortion and image resolution. Thus, we are dealing with a multi-dimensional detection and recognition problem. One final constraint is the need to maintain the usability of the system on contemporary computational devices (on the order of 100 MIPS). In other words, the processing involved should be efficient with respect to run-time and storage space.

3.2.2 Current Vision Systems for Face Recognition:

Research in intensity-image face recognition generally falls into two categories: holistic (global) methods and feature-based methods.

Feature-based methods rely on the identification of certain fiducial points on the face such as the eyes, the nose and the mouth. The location of those points can be determined and used to compute geometrical relationships between the points, as well as to analyze the surrounding region locally. Thus, independent processing of the eyes, the nose, and other fiducial points is performed and then combined to produce recognition of the face. Since detection of feature points precedes the analysis, such a system is robust to position variations in the image.

Holistic methods treat the image data simultaneously without attempting to localize individual points. The face is recognized as one entity without explicitly isolating different regions of the face. Holistic techniques utilize statistical analysis, neural networks and transformations. They also usually require large samples of training data. The advantage of holistic methods is that they utilize the face as a whole and do not destroy any information by exclusively processing only certain fiducial points. Thus, they generally provide more accurate recognition results. However, such techniques are sensitive to variations in position, scale and so on, which restricts their use to standard, frontal mug-shot images.

3.3 Face Detection and Localization




The detection of faces and facial features in an arbitrary, uncontrived image is a critical precursor to recognition. A robust scheme is needed to detect the face and determine its precise placement so that the relevant data can be extracted from an input image. This is necessary to properly prepare the image's 2D intensity description of the face for input to a recognition system. The detection scheme must operate flexibly and reliably regardless of lighting conditions, background clutter in the image, multiple faces in the image, and variations in face position, scale, pose and expression.

The geometrical information about each face in the image that we gather at this stage will be used to apply geometrical transformations that map the data in the image into an invariant form. By isolating each face, transforming it into a standard frontal mug-shot pose and correcting lighting effects, we limit the variance in its intensity-image description to the true physical shape and texture of the face itself. The set of input images in the figure below illustrates some of the variations in the intensity image that detection must be capable of overcoming to properly localize the face. These variations need appropriate compensation to isolate only the relevant data necessary for recognition. Furthermore, note that these variations can occur in any combination and are not mutually exclusive.


We propose a hierarchical detection method which can quickly and reliably converge to a localization of the face amidst a wide range of external visual stimuli and variation. It is necessary to precede expensive computations with simple and efficient ones in this hierarchy to maximize efficiency. The results of the initial, diffuse and large search space computations narrow the search space for the more localized, higher precision operations that will follow. In other words, the results of preliminary detections guide the use of subsequent operations in a feed-forward manner to restrict their application to only significant parts of the image. This reduces the probability of error since the subsequent detection steps will not be distracted by irrelevant image data. Furthermore, more robust operations precede more sensitive ones in our hierarchy since the sensitive operations in the hierarchy need to have adequate initialization from previous stages to prevent failure.



Figure 3.5: The hierarchical search sequence for faces and facial features

Figure 3.5 displays the sequence of search steps for the face detection. We begin by searching for possible face or head-like blobs in the image. The detected blob candidates are examined to obtain an approximation of their contours. If these exhibit a face-like contour, their interior is scanned for the presence of eyes. Each of the possible pairs of eyes detected in the face are examined one at a time to see if they are in an appropriate position with respect to the facial contour. If they are, then we search for a mouth isolated by the facial contour and the position of the detected eyes. Once a mouth has been detected, the region to be searched for a nose is better isolated and we determine the nose position. Lastly, these facial coordinates are used to more accurately locate the iris within the eye region, if they are visible. The final result is a set of geometrical coordinates that specify the position, scale and pose of all possible faces in the image.
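The coarse-to-fine search order described above can be sketched as a cascade in which cheap, robust checks run first and gate the more expensive, sensitive ones. The stage functions and scores below are stand-ins for real detectors, chosen only to illustrate the control flow.

```python
# Each stage is a (name, check) pair, ordered cheapest/most robust
# first; a candidate region is rejected as soon as any stage fails, so
# later, costlier stages only run on regions that survived earlier ones.
def run_cascade(candidate, stages):
    passed = []
    for name, check in stages:
        if not check(candidate):
            return passed, name        # rejected at this stage
        passed.append(name)
    return passed, None                # all stages passed

# Illustrative candidate: a dict of precomputed measurements.
stages = [
    ("head-like blob",    lambda c: c["blob_score"] > 0.5),
    ("face-like contour", lambda c: c["contour_score"] > 0.5),
    ("eye pair",          lambda c: c["eyes_found"]),
    ("mouth",             lambda c: c["mouth_found"]),
    ("nose",              lambda c: c["nose_found"]),
    ("iris",              lambda c: c["iris_found"]),
]

candidate = {"blob_score": 0.9, "contour_score": 0.8, "eyes_found": True,
             "mouth_found": True, "nose_found": False, "iris_found": True}
passed, rejected_at = run_cascade(candidate, stages)
print(passed)        # ['head-like blob', 'face-like contour', 'eye pair', 'mouth']
print(rejected_at)   # nose
```

Failing early on diffuse, large-search-space checks is what keeps the later, high-precision operations from being distracted by irrelevant image data.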


Face Localization
The human face is a highly correlated object due to the lack of variation in skin complexion. Even though facial features (i.e. mouths and eyes) differ in color from the skin tone, the hairless skin regions dominate the facial surface area, allowing it to stand out against most backgrounds. Thus we expect a boundary around the face to be present. The foreshortening of the 3D face structure under most lighting conditions also accentuates the contour of the face. We propose that the face be considered as a blob, or a region with a certain amount of contour enclosure. Furthermore, the scalp and the hair also usually trigger edge contours that extend the face blob. Consequently, we can expect both the face and head structures to behave as blobs or symmetric enclosures about a center point.

Chapter - 4

SYSTEM REQUIREMENTS
4.1 Software Requirement
C#.Net
VB
SQL Server 2000
Nokia PC Suite

4.2 Hardware Requirement


P4 2.0 GHz processor
Mobile with camera
Data cable
256 MB RAM or above


Chapter - 5

ADVANTAGES & DISADVANTAGES


5.1 Advantages
1. It provides automated security for the attendance system.
2. It provides a cheap implementation of biometrics.
3. The system is quite efficient and the false acceptance rate is very low.
4. The biometric-based system can further be used for automated salary generation.
5. Time-based security allows the system to take attendance only during a specific interval.

5.2 Disadvantages
1. The system requires an image acquisition device such as a mobile phone or web cam.
2. Detection is slower.


Chapter - 6

System Design
[ER diagram: entity Employee (ENo, EName, Designation, Phone) related 1:N to entity Attendance (ENo, Date)]

Block Diagram

[Block diagram: Images → Training → Trained Database → Face Recognition; Employee Image → Registration → Employee Database; Face Recognition → Attendance → Attendance Database]

Working Principle



The images of the faces of all the employees are taken with a digital device such as a web cam or a mobile phone and are stored in a directory. The employee information is registered through a .Net GUI application. This application copies the image of the employee into the appropriate training directory. Two instances of each employee are taken and placed in the database. Once the registration procedure is over, the VB-based training application is called. This application picks up both images of an employee and computes the training set in terms of the eigenvectors. Furthermore, while inserting the attendance, a third instance of the employee is taken and given for testing. If the system recognizes his face, the attendance system is automatically invoked. If the system time is between 10:00 AM and 10:30 AM, the attendance of the employee is registered. Finally, the administrator can view the total number of attendances of each employee for any specific month.
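The time-based security rule above, registering attendance only inside the 10:00-10:30 AM window, can be sketched as follows. The function and log structure are illustrative, not the report's actual .Net code.

```python
from datetime import time, datetime

ATTENDANCE_START = time(10, 0)
ATTENDANCE_END = time(10, 30)

def register_attendance(employee_id, now, log):
    """Record attendance only if the face was recognised inside the
    allowed window (10:00-10:30 AM, as in the text)."""
    if ATTENDANCE_START <= now.time() <= ATTENDANCE_END:
        log.append((employee_id, now))
        return True
    return False

log = []
print(register_attendance(7, datetime(2024, 1, 8, 10, 15), log))  # True
print(register_attendance(7, datetime(2024, 1, 8, 11, 0), log))   # False
print(len(log))                                                   # 1
```

Recognition would run first; only a successful match inside the window results in a logged attendance record.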

Chapter 7


SOFTWARE DETAILS
7.1 Introduction to C#
C# is Microsoft's latest object-oriented programming language, developed for the .NET platform, and .NET is Microsoft's latest platform technology for creating web services. C# is a C++-based language and was developed to provide portability for distributed applications over networks and the internet. Application development on the .NET platform can be done in multiple languages, including C#, C++ and Visual Basic. Programs developed in all these languages are compiled to Microsoft's Intermediate Language (IL) and executed within the Common Language Runtime (CLR).

.NET is not a programming language; it is a virtual machine technology (similar to Java virtual machine technology) with a framework that provides the capability to run a variety of web applications. The .NET framework class library provides a set of classes with essential functionality for applications built within the .NET environment: web functionality, XML support, database support, threading and distributed computing support. All .NET code is translated to Microsoft Intermediate Language and run with the CLR, which is similar to the Java virtual machine (JVM). The IL code is language independent and similar to Java byte code. A single .NET application may consist of several different languages. Two very important features of the CLR are language interoperability and language independence.

What is C# ?
C# is a modern, object-oriented language that enables programmers to quickly build a wide range of applications for the new Microsoft .NET platform, which provides tools and services that fully exploit both computing and communication. Because of its elegant object-oriented design, C# is a great choice for architecting a wide range of components, from high-level business objects to system-level applications. Using simple C# language constructs, these components can be converted into XML web services, allowing them to be invoked across the internet from any language running on any operating system. More than anything else, C# is designed to bring rapid development to the C++ programmer without sacrificing the power and control that have been a hallmark of C and C++. Because of this heritage, C# has a high degree of fidelity with C and C++, and developers familiar with those languages can quickly become productive in C#.

Why C#?
C# is a new language with the power of C++ and the slickness of Visual Basic. It cleans up many of the syntactic peculiarities of C++ without diluting much of its flavor (thereby enabling C++ developers to transition to it with little difficulty), and its superiority over VB6 in facilitating powerful OO implementation is without question. C#, with its clean OO syntax and large class library (in conjunction with .NET and the base class libraries), could be the most productive mainstream language, and it is an ECMA standard language that offers the potential of being available across many platforms. For the serious developer wanting Microsoft's most productive and mainstream .NET language, C# is the choice.

Properties
Properties will be a familiar concept to Delphi and Visual Basic users. The motivation is for the language to formalize the concept of getter/setter methods, which is an extensively used pattern, particularly in RAD (Rapid Application Development) tools.

This is typical code you would write in Java or C++:

    foo.setSize(foo.getSize() + 1);
    label.getFont().setBold(true);

The same code would be written like this in C#:

    foo.size++;
    label.font.bold = true;

The code is immediately more readable by those who are using foo and label. There is similar simplicity when implementing properties.

Java/C++:

    public int getSize() {
        return size;
    }
    public void setSize(int value) {
        size = value;
    }

C#:

    public int Size {
        get { return size; }
        set { size = value; }
    }

Particularly for read/write properties, C# provides a cleaner way of handling this concept. The relationship between a get and a set method is inherent in C#, while it has to be maintained by convention in Java or C++. There are many benefits to this approach. It encourages programmers to think in terms of properties: whether a property is more natural as read/write versus read-only, or whether it really should be a property at all. If you wish to change the name of your property, you only have one place to look (getters and setters are often several lines away from each other). Comments only have to be made once, and won't get out of sync with each
other. It is feasible that an IDE could help out here (and in fact I suggest they do), but an essential principle in programming is to try to make abstractions model our problem space well. A language which supports properties will reap the benefits of that better abstraction.

Indexers
C# provides indexers, which allow objects to be treated like arrays, except that, like properties, each element is exposed with a get and/or set method:

    public class Skyscraper
    {
        Story[] stories;

        public Story this[int index]
        {
            get { return stories[index]; }
            set
            {
                if (value != null)
                {
                    stories[index] = value;
                }
            }
        }
    }

    Skyscraper empireState = new Skyscraper();
    empireState[102] = new Story("The top one");

Delegates
A delegate can be thought of as a type-safe, object-oriented function pointer which is able to hold multiple methods rather than just one. Delegates handle problems which would be solved with function pointers in C++ and with interfaces in
Java. The delegate improves on the function-pointer approach by being type safe and by being able to hold multiple methods. It improves on the interface approach by allowing the invocation of a method without the need for inner-class adapters or extra code to handle multiple method invocations. The most important use of delegates is for event handling.
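A minimal sketch of the multicast behaviour described above; the Notify delegate and the method names are illustrative inventions, not code from this project:

```csharp
using System;

class DelegateDemo
{
    // A delegate type declares a method signature; instances are
    // type-safe "function pointers" that may hold several methods.
    delegate void Notify(string message);

    static string trace = "";

    static void ToConsole(string m) { trace += "console:" + m + ";"; }
    static void ToLog(string m)     { trace += "log:" + m + ";"; }

    static void Main()
    {
        Notify notify = new Notify(ToConsole);
        notify += new Notify(ToLog);   // multicast: now holds two methods

        notify("start");               // one call invokes both, in order
        Console.WriteLine(trace);      // console:start;log:start;
    }
}
```

Note that `+=` appends a method to the invocation list; this is the mechanism events build on in the next section.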

Events
C# provides direct support for events. Although event handling has been a fundamental part of programming since programming began, there has been surprisingly little effort made by most languages to formalize this concept. If you look at how today's mainstream frameworks handle events, we have examples like Delphi's function pointers (called closures), Java's inner-class adapters, and of course the Windows API's message system. C# uses delegates along with the event keyword to provide a very clean solution to event handling. I thought the best way to illustrate this was to give an example showing the whole process of declaring, firing and handling an event.
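The walk-through promised above might look like the following minimal sketch; the Button class, its Click event and the handler names are illustrative, not taken from the project code:

```csharp
using System;

// Declaring: the publisher defines a delegate type and an event on it.
class Button
{
    public delegate void ClickHandler(object sender, EventArgs e);
    public event ClickHandler Click;   // only Button itself may fire this

    // Firing: raise the event if at least one handler is attached.
    public void Press()
    {
        if (Click != null)
            Click(this, EventArgs.Empty);
    }
}

class EventDemo
{
    public static bool clicked = false;

    static void Main()
    {
        Button button = new Button();

        // Handling: subscribers attach handlers with +=.
        button.Click += new Button.ClickHandler(OnClick);
        button.Press();
    }

    static void OnClick(object sender, EventArgs e)
    {
        clicked = true;
        Console.WriteLine("Button was clicked");
    }
}
```

The event keyword restricts outsiders to attaching and detaching handlers; only the declaring class can invoke the delegate.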

Pointer arithmetic
Pointer arithmetic can be performed in C# within methods marked with the unsafe modifier. When pointers point to garbage-collected objects, the compiler enforces the use of the fixed keyword to pin the object. This is because garbage collectors rely on moving objects around to reclaim memory, but if this happens while you are dealing with raw pointers, you will end up pointing to garbage. The word unsafe was chosen because it discourages developers from using pointers unless they really need to.
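A minimal sketch of the fixed/unsafe mechanism described above; the array contents are arbitrary, and the file must be compiled with the /unsafe compiler switch:

```csharp
using System;

class UnsafeDemo
{
    // 'fixed' pins the array so the garbage collector cannot move it
    // while raw pointers into it are alive.
    static unsafe int Sum(int[] a)
    {
        int total = 0;
        fixed (int* p = a)
        {
            // Pointer arithmetic: *(p + i) reads a[i] directly.
            for (int i = 0; i < a.Length; i++)
                total += *(p + i);
        }
        return total;   // the array is unpinned when the fixed block ends
    }

    static void Main()
    {
        Console.WriteLine(Sum(new int[] { 10, 20, 30 }));   // 60
    }
}
```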



Rectangular Arrays
C# allows both jagged and rectangular arrays to be created. Jagged arrays are pretty much the same as Java arrays, while rectangular arrays allow a more efficient and accurate representation for certain problems. An example of such an array would be:

    int[,,] array = new int[3, 4, 5];
    array[1, 1, 1] = 5;

Using jagged arrays:

    int[][][] array = new int[3][][];
    array[1] = new int[4][];
    array[1][1] = new int[5];
    array[1][1][1] = 5;

In combination with structs, C# can provide a level of efficiency making it a good choice for areas such as graphics and mathematics.

A sample C# program
Hello world (file Hello.cs):

    using System;

    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello world");
        }
    }

Note that the program uses the namespace System, the entry point must be called Main, output goes to the console, and the file name and class name need not be identical.

Compilation (in the console window):

    csc Hello.cs

Execution:
    Hello

C# advantages
XML documentation generated from source-code comments. (This is coming in VB.NET with Whidbey, the code name for the next version of Visual Studio and .NET, and there are tools which will do it with existing VB.NET code already.)
Operator overloading; again, coming to VB.NET in Whidbey.
Language support for unsigned types (you can use them from VB.NET, but they aren't in the language itself). Again, support for these is coming to VB.NET in Whidbey.
The using statement, which makes unmanaged resource disposal simple.
Explicit interface implementation in a base class can be re-implemented separately in a derived class. Arguably this makes the class harder to understand, in the same way that member hiding normally does.
Unsafe code. This allows pointer arithmetic etc., and can improve performance in some situations. However, it is not to be used lightly, as a lot of the normal safety of C# is lost (as the name implies). Note that unsafe code is still managed code, i.e. it is compiled to IL, JITted and run within the CLR.
Boxing and unboxing add performance overhead, since they involve dynamic memory allocation and a runtime check.
Generics can make C# more efficient, type safe and maintainable.
Iterators help to create smaller and more efficient code.
Attributes can express how fields should be serialised into XML; this means you can easily turn a class into XML and then easily reconstruct it again.
Eliminates costly programming errors.

Reduces ongoing development costs with built-in support for versioning.

C# disadvantages
C# is programmable only on the .NET framework, as the .NET framework has not been designed for any operating system other than Windows; it is not a platform-independent language.
Even though the programming style resembles that of C++, a large number of classes and their objects must be remembered.
A huge set of overloaded functions is available, which demands that the user know each type at the application level.
The use of pointers is restricted in C#.
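The using statement listed among the advantages above can be illustrated with a short sketch; the file name and message are arbitrary choices:

```csharp
using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        // 'using' guarantees Dispose() is called on the resource even if
        // an exception is thrown -- equivalent to a try/finally block.
        using (StreamWriter writer = new StreamWriter("demo.txt"))
        {
            writer.Write("resource is released automatically");
        }   // writer.Dispose() runs here, flushing and closing the file

        Console.WriteLine(File.ReadAllText("demo.txt"));
    }
}
```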

C# applications
Builder design pattern: the builder pattern allows a client object to construct a complex object by specifying only its type and content. The client is shielded from the details of the object's construction.
Remoting in C#.
C# component-based development.
Reflection in C#: the ability to find out information about objects at run time is called reflection. With reflection we can find out an object's class, details of an object's methods, and even create objects dynamically at run time.
Creating web-based code components: there are times when you need to explain the core structure and logic of your program while intending to keep it encapsulated and hidden at the same time.
Database access components: C# is said to be a component-oriented language, and it can be used to create a simple database access component.
Run-time code generation.

Creating online documents.
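The reflection application mentioned in the list above can be sketched briefly; the inspected type and method here are arbitrary choices for illustration:

```csharp
using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        string s = "hello";

        // Discover the object's class at run time.
        Type t = s.GetType();
        Console.WriteLine(t.FullName);      // System.String

        // Inspect one of its methods (the parameterless ToUpper overload).
        MethodInfo m = t.GetMethod("ToUpper", new Type[0]);
        Console.WriteLine(m.Name);          // ToUpper

        // Invoke it dynamically, without a compile-time call.
        object result = m.Invoke(s, null);
        Console.WriteLine(result);          // HELLO
    }
}
```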

7.2 About SQL


Structured Query Language (SQL)
To work with data in a database, you must use a set of commands and statements (a language) defined by the DBMS software. Several different languages can be used with relational databases; the most common is SQL. Standards for SQL have been defined by both the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO).

SQL Server Features


Microsoft SQL Server supports a set of features that result in the following benefits:

Ease of installation, deployment, and use
SQL Server includes a set of administrative and development tools that improve your ability to install, deploy, manage and use SQL Server across several sites.

Scalability
The same database engine can be used across platforms ranging from laptop computers running Microsoft Windows 95/98 to large, multiprocessor servers running Microsoft Windows NT, Enterprise Edition.



Data Warehousing
SQL Server includes tools for extracting and analyzing summary data for online analytical processing (OLAP). SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.

System integration with other server software
SQL Server integrates with e-mail, the Internet and Windows.
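As an illustration of how a C# program talks to SQL Server, here is a minimal ADO.NET sketch. The server name, database name and the Criminals table are assumptions for illustration only, not part of the project; running it requires a reachable SQL Server instance:

```csharp
using System;
using System.Data.SqlClient;

class SqlDemo
{
    static void Main()
    {
        // Illustrative connection string -- adjust server, database and
        // credentials for your own installation.
        string conn = "Server=(local);Database=FaceDB;Integrated Security=true";

        using (SqlConnection connection = new SqlConnection(conn))
        {
            connection.Open();

            // A scalar query returns a single value from the first row.
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Criminals", connection);
            int rows = (int)cmd.ExecuteScalar();
            Console.WriteLine("Records on file: " + rows);
        }   // the connection is closed here, even on error
    }
}
```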

Chapter - 8

Source Code
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;

namespace MyAttendence
{
    /// <summary>
    /// Summary description for MainForm.
    /// </summary>
    public class MainForm : System.Windows.Forms.Form
    {
        private System.Windows.Forms.GroupBox groupBox1;
        private System.Windows.Forms.Label label1;



        private System.Windows.Forms.Label label2;
        private System.Windows.Forms.Button button1;
        private System.Windows.Forms.TextBox txtAdmin;
        private System.Windows.Forms.TextBox txtPw;
        private System.Windows.Forms.GroupBox groupBox2;
        public System.Windows.Forms.TextBox txtEno;
        private System.Windows.Forms.Label label3;
        private System.Windows.Forms.Button btnAttendence;
        private System.Windows.Forms.Timer timer1;
        private System.Windows.Forms.Label label4;
        private System.ComponentModel.IContainer components;

        public MainForm()
        {
            // Required for Windows Form Designer support
            InitializeComponent();

            // TODO: Add any constructor code after InitializeComponent call
        }

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            this.components = new System.ComponentModel.Container();
            this.groupBox1 = new System.Windows.Forms.GroupBox();
            this.button1 = new System.Windows.Forms.Button();
            this.label2 = new System.Windows.Forms.Label();
            this.txtPw = new System.Windows.Forms.TextBox();



            this.label1 = new System.Windows.Forms.Label();
            this.txtAdmin = new System.Windows.Forms.TextBox();
            this.groupBox2 = new System.Windows.Forms.GroupBox();
            this.btnAttendence = new System.Windows.Forms.Button();
            this.label3 = new System.Windows.Forms.Label();
            this.txtEno = new System.Windows.Forms.TextBox();
            this.timer1 = new System.Windows.Forms.Timer(this.components);
            this.label4 = new System.Windows.Forms.Label();
            this.groupBox1.SuspendLayout();
            this.groupBox2.SuspendLayout();
            this.SuspendLayout();
            //
            // groupBox1
            //
            this.groupBox1.BackColor = System.Drawing.Color.LightSkyBlue;
            this.groupBox1.Controls.Add(this.button1);
            this.groupBox1.Controls.Add(this.label2);
            this.groupBox1.Controls.Add(this.txtPw);
            this.groupBox1.Controls.Add(this.label1);
            this.groupBox1.Controls.Add(this.txtAdmin);
            this.groupBox1.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));
            this.groupBox1.Location = new System.Drawing.Point(8, 24);
            this.groupBox1.Name = "groupBox1";
            this.groupBox1.Size = new System.Drawing.Size(392, 128);
            this.groupBox1.TabIndex = 0;
            this.groupBox1.TabStop = false;
            this.groupBox1.Text = "Admin";
            //
            // button1
            //
            this.button1.Location = new System.Drawing.Point(24, 96);
            this.button1.Name = "button1";
            this.button1.Size = new System.Drawing.Size(296, 23);
            this.button1.TabIndex = 4;
            this.button1.Text = "Go To Admin";
            this.button1.Click += new System.EventHandler(this.button1_Click);
            //
            // label2
            //
            this.label2.Location = new System.Drawing.Point(16, 64);
            this.label2.Name = "label2";
            this.label2.TabIndex = 3;
            this.label2.Text = "Password";
            //
            // txtPw
            //
            this.txtPw.Location = new System.Drawing.Point(136, 64);



            this.txtPw.Name = "txtPw";
            this.txtPw.PasswordChar = '*';
            this.txtPw.TabIndex = 2;
            this.txtPw.Text = "";
            //
            // label1
            //
            this.label1.Location = new System.Drawing.Point(16, 24);
            this.label1.Name = "label1";
            this.label1.TabIndex = 1;
            this.label1.Text = "Admin Username";
            this.label1.Click += new System.EventHandler(this.label1_Click);
            //
            // txtAdmin
            //
            this.txtAdmin.Location = new System.Drawing.Point(136, 24);
            this.txtAdmin.Name = "txtAdmin";
            this.txtAdmin.TabIndex = 0;
            this.txtAdmin.Text = "";
            //
            // groupBox2
            //
            this.groupBox2.BackColor = System.Drawing.Color.LightSkyBlue;
            this.groupBox2.Controls.Add(this.btnAttendence);
            this.groupBox2.Controls.Add(this.label3);
            this.groupBox2.Controls.Add(this.txtEno);
            this.groupBox2.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));
            this.groupBox2.Location = new System.Drawing.Point(8, 168);
            this.groupBox2.Name = "groupBox2";
            this.groupBox2.Size = new System.Drawing.Size(392, 104);
            this.groupBox2.TabIndex = 1;
            this.groupBox2.TabStop = false;
            this.groupBox2.Text = "Attendence";
            //
            // btnAttendence
            //
            this.btnAttendence.Location = new System.Drawing.Point(40, 64);
            this.btnAttendence.Name = "btnAttendence";
            this.btnAttendence.Size = new System.Drawing.Size(312, 23);
            this.btnAttendence.TabIndex = 2;
            this.btnAttendence.Text = "Put Your Attendence";
            this.btnAttendence.Click += new System.EventHandler(this.btnAttendence_Click);
            //
            // label3
            //
            this.label3.Location = new System.Drawing.Point(24, 24);
            this.label3.Name = "label3";
            this.label3.Size = new System.Drawing.Size(112, 16);



            this.label3.TabIndex = 1;
            this.label3.Text = "Employee Number";
            //
            // txtEno
            //
            this.txtEno.Location = new System.Drawing.Point(152, 24);
            this.txtEno.Name = "txtEno";
            this.txtEno.Size = new System.Drawing.Size(136, 20);
            this.txtEno.TabIndex = 0;
            this.txtEno.Text = "";
            //
            // timer1
            //
            this.timer1.Enabled = true;
            this.timer1.Interval = 1000;
            this.timer1.Tick += new System.EventHandler(this.timer1_Tick);
            //
            // label4
            //
            this.label4.BackColor = System.Drawing.Color.AliceBlue;
            this.label4.Font = new System.Drawing.Font("Arial", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));
            this.label4.Location = new System.Drawing.Point(16, 288);
            this.label4.Name = "label4";
            this.label4.Size = new System.Drawing.Size(56, 16);
            this.label4.TabIndex = 2;
            this.label4.Text = "label4";
            this.label4.Click += new System.EventHandler(this.label4_Click);
            //
            // MainForm
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.BackColor = System.Drawing.Color.CornflowerBlue;
            this.ClientSize = new System.Drawing.Size(496, 334);
            this.Controls.Add(this.label4);
            this.Controls.Add(this.groupBox2);
            this.Controls.Add(this.groupBox1);
            this.Name = "MainForm";
            this.Text = "Face Recognition Based Attendence System";
            this.groupBox1.ResumeLayout(false);
            this.groupBox2.ResumeLayout(false);
            this.ResumeLayout(false);
        }
        #endregion


Chapter - 9

Results
REGISTRATION OF EMPLOYEES


Training

Verification


Chapter - 10

FUTURE SCOPE OF THE PROJECT


An Enhancement of the Current Technique
The face detection and recognition technique that we have proposed over here can be considered as the preliminary feature extraction method. Therefore we would discuss the mathematical approach to improve the system. We propose a hybrid system that combines the robust detection of feature points with a holistic and precise linear transform analysis of the face data. The detection of feature points uses a robust model capable of detecting individual features despite a wide range of translations, scale changes, 3D-pose changes and background clutter. This allows us to locate faces in an arbitrary, uncontrived image. Since we wish to utilize linear transform techniques, however, a consistent, normalized frontal mug-shot view of the face is needed. Thus, we propose synthesizing the required mug-shot view from the one detected in the original image. This is performed by inverting the 3D projection of the original face in the image and re-mapping it into frontal view via a deformable 3D model. Then, we perform illumination correction and segmentation to obtain an ideal mug-shot view of the individual in question. At this stage, we can safely apply holistic linear transform techniques (namely, the Karhunen-Loeve decomposition) and use the vector-representation of the face to recognize its identity. Figure 8.1 depicts the interaction between the two stages: feature detection and holistic recognition. Although feature localization is robust, when used alone it is too insensitive for recognition. Although holistic face recognition is precise, its use of linear transformation is not robust to large non-linear face variations. Thus, combining the two approaches provides a superior overall system.



Figure 8.1: 3D Normalization as a Bridge Between Feature Detection and Face Recognition

Perceptual Contrast, Symmetry and Scale


For a computer based face recognition system to succeed, it must detect faces and the features that compose them despite variations each face has in the current scene. Faces can be anywhere in an image, at a variety of sizes, at a variety of poses and at a variety of illuminations. Although humans quickly detect the presence and location of faces and facial features from a photograph, automatic machine detection of such objects involves a complex set of operations and tests. It is uncertain exactly how humans detect faces in an image, however we can attempt to imitate the perceptual mechanisms humans seem to employ. We begin by defining and discussing the significance of contrast, symmetry and scale in human vision. This will serve as a basis for the biologically motivated computational tools that we will be using. We then discuss our technique for the computational extraction of contrast information. The

implementation of multi-scale analysis structure is then defined. Finally, two computational tools for obtaining information on symmetry are introduced: the symmetry transform and the selective symmetry detector.

Biological and Psychological Motivation


Some psychological research has proposed that certain parts of an image attract our attention more than others. Through visuo-motor experiments, research has demonstrated that fixation time and attentional resources are generally allocated to portions of a given scene which are visually interesting. This "degree of perceptual significance" of regions in a given image allows a human observer to almost automatically discriminate between insignificant regions in a scene and interesting ones which warrant further investigation. The ability to rapidly evaluate the level of interest in parts of a scene could benefit a face recognition system in a similar way: by reducing its search space for possible human faces. Instead of exhaustively examining each region in an image for a face-like structure, the system would only focus computational resources upon perceptually significant objects. It has been shown that three of the important factors in evaluating perceptual significance are contrast, symmetry and scale. Another property in estimating perceptual importance is symmetry. The precise definition of symmetry in the context of attentional mechanisms is different from the intuitive concept of symmetry. Symmetry, here, represents the symmetric enclosure or the approximate encirclement of a region by contours. The appropriate arrangement of edges which face each other to surround a region attracts the human eye to that region. Furthermore, the concept of enclosure is different from the mathematical sense of perfect closure, since humans will still perceive a sense of enclosure despite gaps in the boundaries that surround the region.



Low Level Filtering for Perceptually Interesting Objects
A computational technique is needed which can combine the effects of contrast, symmetry and scale to find the set of interesting regions in an image. An example of the possible output of such an algorithm would be a collection of points defining circular regions of a certain radius (or scale) which exhibit perceptual importance. A mask or filter is needed which can be quickly applied locally (topographically) over the whole image at multiple scales. The output of the mask would be a perceptual significance map which measures the level of contrast and symmetric enclosure of the image region overlapped by the filter. To detect large perceptually significant objects first, this mask would be applied at large scales (i.e., with a relatively large mask) and then at progressively smaller ones. Such a filter would provide us with an efficient attentional mechanism for quickly fixating further face-recognition computational resources only on interesting regions.

Edge Detection
The extraction of edges or contours from a two dimensional array of pixels (a gray-scale image) is a critical step in many image processing techniques. A variety of computations are available which determine the magnitude of contrast changes and their orientation. Extensive literature exists documenting the available operators and the post-processing methods to modify their output. A trade-off exists, though, between efficiency and quality of the edge detection. Fast and simple edge detection can be performed by filters such as the popular Sobel operator which requires the mere convolution of a small kernel (3x3 pixels) over the image. Alternatively, more computationally intensive contour detection techniques are available such as the Deriche or Canny method. These detectors require that a set of parameters be varied to detect the desired scale and curvature of edges in the image. It is necessary to compare the simple Sobel

detector and the complex Deriche-type detectors before selecting the edge detection scheme of preference.

The Sobel Operator


The 3x3 Sobel operator acts locally on the image and only detects edges at small scales. If an object with a jagged boundary is present, as shown in Figure 8.2(a), the Sobel operator will find the edges at each spike and twist of the perimeter, as in Figure 8.2(b). The operator is sensitive to high-frequency noise in the image and will generate only local edge data instead of recovering the global structure of a boundary. Furthermore, smooth transitions in contrast that occur over too large a spatial scale to fit in the 3x3 window of the Sobel operator will not be detected.

Figure 8.2: Sobel edge detection. (a) The original intensity image. (b) The Sobel edge map.
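The 3x3 convolution described above can be sketched directly in C#; the step-edge test image in Main is an invented illustration, not project data:

```csharp
using System;

class Sobel
{
    // Gradient magnitude of a gray-scale image via the 3x3 Sobel kernels.
    public static double[,] Magnitude(byte[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        double[,] mag = new double[h, w];

        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
            {
                // Horizontal and vertical derivative estimates.
                int gx = -img[y-1, x-1] + img[y-1, x+1]
                         - 2 * img[y, x-1] + 2 * img[y, x+1]
                         - img[y+1, x-1] + img[y+1, x+1];
                int gy = -img[y-1, x-1] - 2 * img[y-1, x] - img[y-1, x+1]
                         + img[y+1, x-1] + 2 * img[y+1, x] + img[y+1, x+1];
                mag[y, x] = Math.Sqrt(gx * gx + gy * gy);
            }
        return mag;
    }

    static void Main()
    {
        // A vertical step edge: left half dark (0), right half bright (255).
        byte[,] img = new byte[5, 6];
        for (int y = 0; y < 5; y++)
            for (int x = 3; x < 6; x++)
                img[y, x] = 255;

        double[,] m = Magnitude(img);
        Console.WriteLine(m[2, 2]);   // 1020: strong response on the edge
        Console.WriteLine(m[2, 1]);   // 0: no response in the flat region
    }
}
```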

Deriche Edge Detection


The Deriche output, on the other hand, can be adjusted with the scale parameter to filter out high-frequency noise and pixelization from the image by linking adjacent edges into long, smooth, continuous contours. This allows the
edge map to reflect the dominant structures in the image. The effect of a small and a large value of the scale parameter is shown in Figure 8.3(a) and Figure 8.3(b). Furthermore, the computation is not limited to a small window and can find edges which change gradually. Thus, the outline of the trees as separate whole objects is found instead of the outline of the leaves.

Figure 8.3: Deriche edge detection at multiple scales. (a) Deriche edge map at a small scale. (b) Deriche edge map at a large scale.

Edge Data Enhancement


Edge detection can be followed by further processing to enhance the output or to modify it. The techniques described below will be used on the output of the Sobel edge detection operation. The Sobel operation begins with an intensity image (Figure 8.4(a)) and produces a gradient magnitude map (Figure 8.4(b)) and an edge phase map (Figure 8.4(c)).

Figure 8.4: Typical output from Sobel edge detection. (a) Input intensity image. (b) Sobel gradient magnitude map. (c) Sobel phase map.


Thresholding
If a binary (i.e., black and white) version of the gradient map is desired, it can be obtained by thresholding. All gradient values whose magnitudes are less than a threshold will be set to 0, and all gradient magnitudes greater than the threshold will be set to the maximum edge value (255). Thus, the maximum image contrast is compressed to 1 bit. One strategy for selecting the threshold in question is to first search for the strongest edge magnitude in the image region of interest. Once the peak edge magnitude is found, the threshold is computed as a percentage of this peak (e.g., 10%). A thresholded version of Figure 8.4(a) is shown in Figure 8.5(a).
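The peak-percentage strategy described above can be sketched as follows; the 2 x 2 gradient map in Main is an invented illustration:

```csharp
using System;

class ThresholdDemo
{
    // Binarize a gradient map at a given fraction of its peak magnitude.
    public static byte[,] Threshold(double[,] grad, double fraction)
    {
        int h = grad.GetLength(0), w = grad.GetLength(1);

        // Find the strongest edge magnitude in the region of interest.
        double peak = 0;
        foreach (double g in grad)
            if (g > peak) peak = g;

        double t = fraction * peak;          // e.g. fraction = 0.10
        byte[,] binary = new byte[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                binary[y, x] = (byte)(grad[y, x] >= t ? 255 : 0);
        return binary;
    }

    static void Main()
    {
        double[,] grad = { { 5, 100 }, { 9, 50 } };
        byte[,] b = Threshold(grad, 0.10);   // peak = 100, threshold = 10
        Console.WriteLine(b[0, 1]);          // 255 (100 >= 10)
        Console.WriteLine(b[0, 0]);          // 0   (5 < 10)
        Console.WriteLine(b[1, 0]);          // 0   (9 < 10)
    }
}
```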

Non-Maximal Suppression
Non-maximal suppression nullifies the gradient value at a pixel if its gradient magnitude is non-maximal vis-a-vis neighbouring gradient magnitudes along the perpendicular to the pixel's gradient orientation. The result is a thinned edge map, as shown in Figure 8.5(b), where only the dominant edges are present.

Square Root of Edge Magnitude
It may be necessary to adjust the edge magnitudes returned by the Sobel operation to change the significance of the contrast in a computation. In other words, we may want to re-map the contrast levels to emphasize the weak edges in an image. In some subsequent processing steps, the magnitude of the edge at each point is replaced by the square root of the magnitude to attenuate the effect of contrast.

Figure 8.5: Post-processing Sobel edge detection. (a) Thresholded gradient magnitude map with threshold at 10% of peak edge magnitude. (b) Gradient map after non-maximal suppression.

Multi-Scale Analysis
We note at this point the significance of scale and the scalability of certain types of edge detection. Since structures and objects in an input image can have different sizes and resolutions, most spatial operators (for edge extraction or further processing) will require scalability. As demonstrated previously, the scalability of the Deriche edge detection operation makes it more flexible than the highly local Sobel operator. The objects in an input image and the image itself can have different sizes and resolutions so a scalable operator is necessary to search for edges at different scales. Kelly and Levine have approached edge detection by applying a Deriche-like operator over

53

Face Biometric Criminal Databse


many scales. By changing the operator's size, several edge maps (one for each scale) are obtained and subsequently processed in parallel. Elder proposes yet another technique wherein a single edge map is produced by computing the optimal scale of edge detection for each position of the input image and then performing scale-adaptive edge detection. In other words, the scale of the operator is varied appropriately at each position in the image.

Given an image I(i,j) of dimensions M x N, a scaled version Is(i,j) of the image at a scale factor of s, with dimensions [M/s] x [N/s], can be obtained by either subsampling the image or subaveraging it. The following describes the operations required for scaling by an integer factor s (although non-integer scaling is possible as well).

Subsampling
Subsampling involves selecting a sample from each neighbourhood of pixels of size s x s. Each sample is used as a pixel value in the scaled image. Subsampling assumes that the sample appropriately represents its neighbourhood and that the image is not dominated by high-frequency data. This process only requires [M/s] x [N/s] computations.

Subaveraging
Subaveraging is similar to subsampling except that the sample taken from each neighbourhood of pixels has an intensity that is the average intensity in the neighbourhood. Thus, the original image undergoes low-pass filtering before being sampled, so that the scaled image approximates the original well even if it contains high-frequency data. To avoid possible errors when scaling high-frequency components in the image, we choose to implement subaveraging instead of subsampling. This process requires M x N computations.
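The subaveraging operation can be sketched directly; the 2 x 4 test image in Main is an invented illustration:

```csharp
using System;

class PyramidDemo
{
    // Scale an image down by an integer factor s by averaging each s x s
    // neighbourhood -- low-pass filtering before sampling.
    public static int[,] Subaverage(int[,] img, int s)
    {
        int h = img.GetLength(0) / s, w = img.GetLength(1) / s;
        int[,] scaled = new int[h, w];

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int sum = 0;
                for (int dy = 0; dy < s; dy++)
                    for (int dx = 0; dx < s; dx++)
                        sum += img[y * s + dy, x * s + dx];
                scaled[y, x] = sum / (s * s);   // mean of the neighbourhood
            }
        return scaled;
    }

    static void Main()
    {
        int[,] img = { { 0, 4, 8, 8 },
                       { 4, 8, 8, 8 } };
        int[,] half = Subaverage(img, 2);   // yields a 1 x 2 image
        Console.WriteLine(half[0, 0]);      // (0+4+4+8)/4 = 4
        Console.WriteLine(half[0, 1]);      // (8+8+8+8)/4 = 8
    }
}
```

Subsampling would instead pick one pixel per neighbourhood, which is cheaper but aliases high-frequency content, as the text notes.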



Face Blob Localization
We begin with an arbitrary image of a natural uncontrived scene containing people. We then generate an intensity image pyramid as in Figure and a corresponding edge map pyramid as shown in Figure. We use the edge map pyramid to apply the symmetry transform at various scales. This allows us to detect blobs of arbitrary size in the image. The blob detector uses only the 6 annular sampling regions described in Table. We can afford to limit the number of annular sampling regions to six at this stage since the subaveraging involved in the pyramid obviates the need for more scale invariance in the operator. We apply the general symmetry transform to each of the edge maps and mark the centers of the detected blobs on the intensity pyramid. The general transform (not the dark or bright symmetry transform) was utilized since heads and faces do not consistently appear either brighter or darker than the background of a scene. This multi-scale interest detection operation provides us with the blob detection pyramid displayed in Figure 8.7.

Table 8.1: The annular sampling regions to be used for face detection

Annular Region Number    Range of Radii
1                        0.75 < r < 2.25 pixels
2                        1.75 < r < 3.25 pixels
3                        2.75 < r < 4.25 pixels
4                        3.75 < r < 5.25 pixels
5                        4.75 < r < 7.25 pixels
6                        6.75 < r < 9.25 pixels



Figure 8.7: The multi-scalar interest map pyramid

We threshold the output of the interest map so that only attentional peaks exhibiting a certain minimal level of interest will appear in the output. The threshold on the interest map is very tolerant and allows many extremely weak blobs to register; thus, the precise selection of an interest map threshold is not critical. Furthermore, we only consider the five (5) strongest peaks of interest, or the five most significant blobs, for each scale in the multi-scalar analysis. This is to prevent the system from spending too much time at each scale investigating blobs. We expect the face to be somewhat dominant in the image, so that it will be one of the strongest five blobs in the image (at the scale it resides in). If we expect many faces or other blobs in the image at the same scale, this value can be increased beyond 5; this would be advantageous, for example, when analyzing group photos. Both a threshold on interest value and the limitation on the number of peaks are required, since we do not wish to process more than 5 blobs per scale, for efficiency, and we require the blobs to exhibit a minimal level of significance to warrant any investigation whatsoever. Furthermore, we
stop applying the interest operator for scales smaller than 4x. The interest operator is limited in size to r=9 pixels and consequently, the blobs detected at scales lower than 4x would be too small and would have insufficient resolution for subsequent facial feature localization and recognition. For example, the blobs detected at scale 3x would be less than 54 x 54 pixel objects and the representation of a face at such a resolution would prevent accurate facial feature detection.

Face Contour Estimation


After having applied the symmetry transform over the whole image at multiple scales, we have a set of loci for the perceptually significant blobs. Consequently, we have restricted the search space for faces and can afford to utilize the more computationally expensive selective symmetry detector. The selective symmetry operator is applied over a 5 x 5 neighbourhood around all loci generated by the multi-scalar interest maps. This is done to refine the coarse initial localization of interest peaks. We also boost resolution at each scale by a factor of 2 to use large templates with the selective symmetry operator. This permits the detector to utilize more image data in the computation. The facial contour is not necessarily of high contrast against the background, and this is especially true of the chin area. The chin and the neck are composed of the same skin tone, and thus the contrast generated between the two along this contour is due only to the foreshortening of the chin and the shading below it. Thus, a reduced sensitivity to contrast allows the selective symmetry detector to detect the strong sense of symmetric enclosure the chin contour brings to the facial structure despite its rather weak edge content.

Figure 8.8: The search space for the selective symmetry detector
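The 5 x 5 refinement neighbourhood around each coarse peak can be sketched as a simple candidate generator; the step size is an assumption.

```python
def refinement_candidates(peak, step=1):
    """The 5 x 5 search neighbourhood around a coarse interest peak at which
    the selective symmetry detector is re-evaluated (step is in pixels of the
    resolution-boosted image -- an assumption)."""
    y, x = peak
    return [(y + dy * step, x + dx * step)
            for dy in range(-2, 3) for dx in range(-2, 3)]
```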


Figure 8.8 depicts the use of the local maxima in the interest map to define the search space for the selective symmetry detector. The interest map peaks at scales 28, 20, 14 and 10 are shown on the left side of the figure. Note the 5 x 5 window of dots forming a neighbourhood around these peaks in the images on the right. These images on the right side are higher resolution versions of the ones on the left (double the resolution) and will be operated upon by the selective symmetry detector at each of the 25 white points within them. This process is performed first at large scales (at scale 28 in the example), which agrees with the notion that large scales are usually more perceptually significant. Subsequently, we use the selective symmetry operator to compute the structure of the blob more exactly.

We also need to guarantee a certain level of overlap between templates. For example, observe Figure 8.9, which displays 3 templates of a head with the following orientations: along the vertical, at +60 degrees from the vertical and at -60 degrees from the vertical. If a face is encountered at -30 degrees from the vertical, we will probably not detect it. What is needed is a certain amount of overlap between one template and the next so that intermediate face contours will be detected. Thus, we must finely sample the orientation, aspect ratio and size ranges in our template creation process to ensure overlap. We seek roughly 50% area overlap between neighbouring templates. Furthermore, when we proposed the search space for the selective symmetry operator as a 25-point neighbourhood, we sampled the search space appropriately to ensure proper coverage as well. In other words, we do not have gaps in the spatial domain. The thick annular operator overlaps the search area well since the (x, y) points at which we apply the selective symmetry operation are densely arranged.
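The fine sampling of template parameters could be sketched as a Cartesian product; the step sizes below are illustrative guesses chosen for overlap, not the values used by the source.

```python
import itertools

# Hypothetical sampling steps chosen so neighbouring head templates overlap by
# roughly 50% in area (the values here are illustrative, not from the source).
ORIENTATIONS = range(-60, 61, 20)     # degrees from the vertical
ASPECT_RATIOS = [1.2, 1.4, 1.6]       # height / width of the head ellipse
SIZES = [28, 20, 14, 10]              # the scales used by the interest maps

# One template per (orientation, aspect ratio, size) combination.
TEMPLATES = list(itertools.product(ORIENTATIONS, ASPECT_RATIOS, SIZES))
```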

Figure 8.9: Insufficient operator overlap problems

For each blob, we exhaustively attempt each template match, and the strongest template is our final estimate for the facial contour. It must generate a certain minimal value of SE to be a legitimate facial contour. We select a threshold on the value of SE at 25%. Recall that the value of SE is expressed as a percentage of the peak value that can trigger the template in question. If the best template at the given peak is weaker than 25%, it will be rejected, indicating that the interest map peak was generated by another structure which does not fit the shape of the face templates. Thus, certain points in the interest map will be rejected as non-faces at this stage if they fail to trigger the face templates adequately. The threshold value of 25% on the facial contour detection is a very tolerant one: all faces tested generated values of SE significantly above 25%, while other non-facial yet symmetric structures are discarded. The estimates for the facial contours resulting from the local peaks in the interest maps are displayed as darkened annular regions superimposed upon the input intensity images, as shown in Figure 8.10.
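The 25% SE acceptance rule can be sketched as follows; the function name and the representation of scores as a template-to-percentage mapping are ours.

```python
def best_facial_contour(se_scores, reject_below=25.0):
    """Pick the strongest template match for a blob; reject the blob as a
    non-face when even the best SE score (expressed as a percentage of the
    template's peak trigger value) falls below the tolerant 25% threshold."""
    if not se_scores:
        return None
    template, score = max(se_scores.items(), key=lambda kv: kv[1])
    return (template, score) if score >= reject_below else None
```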

Figure 8.10: The collection of detected possible facial contours

There is successful and precise detection of both face contours in cases (d) and (e) despite the variation in scale, focus, pose and illumination. Unfortunately, non-face structures also triggered the face contour detector, as seen in cases (a), (b) and (c). The larger contours are triggered in part by the high contrast in the clothing of the individuals. Furthermore, the close proximity of the heads of the two individuals causes the selective symmetry detector to utilize contours from both faces in the image simultaneously. However, had a single face been the dominant object in the image, the contour detection would have triggered fewer false alarms. Once again, false alarms are permissible at this stage since further testing and elimination will subsequently fine-tune the output. It is critical, though, that there are no misses at this stage, since we only propagate the data that generated adequate facial contours to the subsequent testing stages in our hierarchy.

The figures above do not show the search space (25 white points) or the facial contour estimate for the two weakest peaks in the interest map at the 10x scale. This is because these points failed to generate values greater than 25% for any of the face/head templates, which is understandable since they are triggered by the clothing of the individuals in the scene, not by faces. Thus, this selective symmetry detector stage not only refines the localization of the face's center, it detects facial contour structure and also filters out non-face-like blobs from the output of the interest map. The final output is a collection of possible facial contours whose interiors must now be processed to determine the existence of appropriate facial features.


Eye Localization
Having found a set of possible facial contours in the image, we proceed with the detection of the eyes within the face. When we refer to the eyes in this section, we are referring not only to the iris but rather to the collection of contours forming the pupil, iris, eyelids, eyelashes, eyebrows and the shading around the eye orbit. This general eye region is a larger and more dominant structure as a whole than its individual subcomponents. Therefore, it is more stable and easier to detect as a whole. Although the process of including the surrounding region improves robustness, it reduces accuracy since the contours of the eyebrows and eye orbit shading may have a center that does not coincide with the pupil's center. Some high quality, deformable model methods for detecting the iris and eyelids have been proposed by Yuille and others. However, they can be computationally expensive and are not as robust as the large operators acting on the whole eye region. For example, if an individual in the image is squinting or if the image quality is poor, the iris will not be clearly visible and such high precision methods which search exclusively for an iris or eyelids might fail.

Figure 8.11: Generating eye search region from the facial contour


Figure 8.12 shows the eye bands or eye spatial search space as brightened strips superimposed upon the original intensity images.

Figure 8.12: Isolating the eye search regions

Detecting Eye Regions


The search space for the eye detection is now defined, so we proceed to define the nature of the eye detection operation. In light of the highly symmetric, blob-like nature of the eye, we elect to use the symmetry transform to detect it as a peak in the interest map. Reisfeld proposes the use of a similar fixed-size symmetry operator on the image. Similarly, we employ our more efficient symmetry transform (which also has a fixed size with its prespecified annular sampling regions). Observe Table 8.2 for the parameters of the annular regions for the symmetry transform at this stage. The usage of 8 different annular sampling regions with a wide range of radii is necessary due to the variety of sizes of the contours in the eye region: large contours from the eyebrows as well as small contours from the pupil are to be considered.

Table 8.2: The annular sampling regions to be used for eye detection


Annular Region Number    Range of Radii
1                        0.75 < r < 2.25 pixels
2                        1.75 < r < 3.25 pixels
3                        2.75 < r < 4.25 pixels
4                        3.75 < r < 5.25 pixels
5                        4.75 < r < 7.25 pixels
6                        6.75 < r < 9.25 pixels
7                        8.75 < r < 11.25 pixels
8                        10.75 < r < 14.25 pixels

Figure 8.13: Eye operator size versus face size

Furthermore, the eye region, eyebrow, and eyelashes are surrounded by skin, and the iris is surrounded by the bright white sclera. Thus, we expect the eye region objects to be darker with respect to their immediate background and can restrict the computation of symmetry to dark symmetry only. Figure 8.13 displays the resulting peaks in the interest maps once the dark symmetry transform has been computed. Usually, the strongest peaks correspond to the eye regions. Thus we can limit the number of peaks to be processed further to the 5 strongest interest peaks.
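The restriction to dark symmetry can be sketched as a sign test on a pair of gradients: both must point away from the midpoint between them, meaning the enclosed region is darker than its surround. This is a simplification of the full symmetry measure; the function and test are ours.

```python
import numpy as np

def is_dark_symmetric_pair(p_i, g_i, p_j, g_j):
    """A gradient pair contributes *dark* symmetry when both gradients point
    away from the pair's midpoint, i.e. the region between the two edge points
    is darker than its surround (gradients point from dark to bright)."""
    mid = (np.asarray(p_i, float) + np.asarray(p_j, float)) / 2.0
    away_i = np.dot(np.asarray(g_i, float), np.asarray(p_i, float) - mid) > 0
    away_j = np.dot(np.asarray(g_j, float), np.asarray(p_j, float) - mid) > 0
    return bool(away_i and away_j)
```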

Figure 8.14: Isolating the eye search regions

The set of interest peaks (approximately 5) representing the possible eyes has been acquired. However, of these 5 peaks, which ones are the true eyes of the individual? It is possible to merely select the top two peaks in the eye band. Since eyes are such strong interest points, this is satisfactory in the majority of cases. However, it is sometimes possible that the top interest points are generated by other structures; for example, a loop of hair from the head could fall into the eye band and generate a strong interest peak. Thus, we maintain the collection of possible eyes, accepting these false alarms for now. Further testing will isolate the true eyes from this set more reliably than the mere selection of the top two peaks.

Figure 8.15: Strong symmetry responses from structures other than eyes

We need to consider each pair of eyes in the set of detected peaks in the eye band. If 5 peaks are present, the total number of possible pairs is (5 choose 2) = 10. However, we proceed by testing the strongest pairs first, in sequence, until we find a pair that passes all tests. We can then stop testing any weaker pairs, since we have converged to the two true eyes. Usually, the top two peaks will be the true eyes, so we quickly converge without exhaustively testing all 10 pairs of possible eyes.
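The pair-testing order can be sketched as follows; representing each peak as a (y, x, strength) tuple is our assumption.

```python
from itertools import combinations

def candidate_eye_pairs(peaks):
    """Enumerate the (5 choose 2) = 10 possible eye pairs from the (up to)
    five interest peaks, strongest combined response first, so that testing
    can stop at the first pair passing every geometric test."""
    pairs = list(combinations(peaks, 2))            # peaks are (y, x, strength)
    pairs.sort(key=lambda p: p[0][2] + p[1][2], reverse=True)
    return pairs
```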

Geometrical Tests
We test a pair of symmetry peaks to see if their position on the face is geometrically valid. If these peaks are not horizontally aligned or have insufficient intra-ocular distance, they could not be eyes and are to be discarded.

Horizontal Alignment of Eyes


The first test computes the line formed by the pair of symmetry peaks. This line should be roughly perpendicular to the axis of the face as detected by the face contour estimation. Symmetry peaks that form a line which is not perpendicular to the face's axis to within ±30 degrees could not be eyes and are discarded.
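A sketch of the alignment test, with angles in degrees and (y, x) peak coordinates; the coordinate and angle conventions are our assumptions.

```python
import math

def horizontally_aligned(eye_a, eye_b, face_axis_deg, tolerance_deg=30.0):
    """Reject a peak pair whose connecting line deviates more than ±30 degrees
    from perpendicularity with the estimated face axis (face_axis_deg = 0 for
    an upright face, whose eye line should then be near-horizontal)."""
    dy, dx = eye_b[0] - eye_a[0], eye_b[1] - eye_a[1]
    line_deg = math.degrees(math.atan2(dy, dx))       # 0 deg = horizontal line
    # fold the angular difference into [-90, 90) so eye ordering does not matter
    deviation = abs((line_deg - face_axis_deg + 90.0) % 180.0 - 90.0)
    return deviation <= tolerance_deg
```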

Sufficient Intra-Ocular Distance


A pair of interest peaks within the eye band must have a certain intra-ocular distance separating them; if they are too close together, they cannot be eyes. Since the dimensions of the face contour are already known, we can estimate a minimum threshold distance between the eyes. However, the intra-ocular distance varies as the facial pose changes. For example, the out-of-plane rotation induced when the subject is not looking straight into the camera causes a reduction of the intra-ocular distance. Additionally, as the person rotates to the left or the right, the eyes do not remain centered within the facial contour in a 2D sense; the eyes travel to either side of the face as it is rotated severely. Thus, a threshold on the intra-ocular distance should be a function of the position of the center point between the two eyes relative to the face. In a near-profile shot, as depicted in Figure 8.16, the center point between the two eyes is near the left side of the facial contour (note that 'left' and 'right' are defined with respect to the image viewer, not the photographed subject). The axis of the facial contour is marked with a vertical line, while the midpoint between the two eyes is marked with a cross. We propose to compute the threshold on the intra-ocular distance as follows. We compute the midpoint between the two symmetry peaks under test, as shown in Figure 8.17. The eyes are shown within the eye-band (of width b). The horizontal distances from the midpoint to the sides of the eye-band are c_m and b - c_m, where c_m < b - c_m. A variable threshold on d_intra-ocular is then computed using the equation below. The constant k_intra-ocular is typically set to 0.2, a very conservative setting which can be tweaked if desired.

d_intra-ocular > k_intra-ocular x c_m
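The variable threshold of the equation above might be coded as follows; the argument names are ours.

```python
def passes_intraocular_test(d_intraocular, midpoint_x, band_left, band_right,
                            k_intraocular=0.2):
    """Variable minimum eye separation: c_m is the smaller horizontal distance
    from the eye midpoint to the sides of the eye band, so near-profile poses
    (midpoint close to one side) tolerate a smaller intra-ocular distance."""
    c_m = min(midpoint_x - band_left, band_right - midpoint_x)
    return d_intraocular > k_intraocular * c_m
```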


Figure 8.16: Eye midpoint not centered in facial contour

Figure 8.17: Minimum intra-ocular distance

Rotation Transformation for Mouth and Nose Detection
Once a pair of interest peaks has a legal geometrical arrangement, we wish to test for the presence of a mouth and a nose. If these structures are not present, then the detected pair of interest peaks was not a pair of eyes, and we proceed by considering the next possible pair. However, before we test for the presence of a mouth or nose, we rotate the face such that the two candidate eyes under evaluation lie on the horizontal. This simplifies the subsequent steps of detecting a mouth or a nose. We also rotate a mask representing the facial contour in the same fashion, to keep track of the interior of the facial contour. The rotated images of the face and the mask are displayed in Figure.
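The eye-alignment rotation can be sketched with a 2D rotation matrix; the (y, x) coordinate convention and helper are our assumptions.

```python
import numpy as np

def eye_alignment_rotation(eye_left, eye_right):
    """Angle (radians) of the candidate eye pair and the 2x2 matrix that
    rotates (x, y) vectors by the opposite angle, bringing the pair onto the
    horizontal; the same transform would be applied to the contour mask."""
    dy = eye_right[0] - eye_left[0]
    dx = eye_right[1] - eye_left[1]
    angle = np.arctan2(dy, dx)
    c, s = np.cos(-angle), np.sin(-angle)     # rotate by -angle to undo the tilt
    return angle, np.array([[c, -s], [s, c]])

# Example: a tilted eye pair, rotated about the left eye.
angle, R = eye_alignment_rotation((30, 20), (20, 40))
rotated = R @ np.array([40 - 20, 20 - 30])    # (dx, dy) of the right eye
```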

Mouth Localization
After having found a pair of possible eyes which satisfies the geometrical constraints imposed by the face, it is necessary to test for the presence of the mouth. This will be used not only to check the validity of the eyes but, more importantly, to localize the face further so that a more precise definition of its coordinates is obtained. We choose to locate the mouth after having located the eyes because it has a non-skin-tone texture and stands out more clearly than the skin-covered nose. Furthermore, its position is more stable than that of the nose, since it lies consistently between the two eyes despite rotations of the face. Thus, the next most reliable step in the hierarchy is mouth detection.

The result of the symmetry computation is an image at each value of r (r = 1 to r = 6), with each point containing 8 magnitudes, one per symmetry orientation. Thus, a symmetry magnitude is computed for each point p, for each r from r = 1 to r = 6, and for each of the 8 symmetry orientations. The resulting symmetry points are output as in Figure 8.17.

Figure 8.17: Horizontal Projection of Symmetry Points

The 6 scales (r=1 to r=6) form our axial symmetry scale-space. The scale or r represents the vertical thickness of the horizontal symmetries detected in the image. A thin, closed mouth usually would generate a line of symmetry points at r=1. An open mouth, on the other hand, will generate a cloud of points at a larger r within its center. An open mouth's extremities taper off (since it is closed on both ends) regardless of its size. Thus, the mouth's extremities will appear as clouds at small r.
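A toy illustration of reading this scale-space: representing the symmetry output as per-scale point counts is our simplification of the full point clouds.

```python
def classify_mouth(symmetry_points):
    """Toy reading of the axial symmetry scale-space: a thin, closed mouth
    yields symmetry points mostly at r = 1, while an open mouth produces a
    cloud of points at a larger r within its centre (with small-r clouds at
    its tapering corners). `symmetry_points` maps r -> point count."""
    dominant_r = max(symmetry_points, key=symmetry_points.get)
    return "closed" if dominant_r == 1 else "open"
```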


Nose Localization
The nose is a very useful feature since it accurately gives us an estimate for the pose of the individual. This is due to the significant displacement the nose undergoes in a 2D sense as facial pose changes. The nose position relative to the eyes tells us quite precisely whether the subject is looking to the left, looking to the right, or is in frontal view. The mouth and the facial contour, on the other hand, are not as reliable for estimating pose. Furthermore, the nose is mostly rigid, so its locus does not change with facial expression. In most images, the nose is one of the brightest regions of the face: it protrudes from the face and is thus better illuminated than other regions. Simultaneously, the nostrils and the bottom surface of the nose are significantly darker than the rest of it. Even when black nostrils are not visible, a dark contour around the bottom of the nose is present due to the shading under the nose and the steep foreshortening at the bottom of the nose tip. Thus, we can model the nose as a region of brightness with a dark boundary on the bottom. We are interested in detecting this change of intensity from brightness to darkness as we travel from the eyes to the mouth. From the gradient and phase maps derived by Sobel edge detection, we can compute the projection of the gradient magnitude of each edge along the vertical; thus, we consider only vertical contrast changes. More specifically, we consider contrast changes that occur from bright to dark as we move downwards along the vertical. Figure 8.19(a) contains the original gradient map and Figure 8.19(b) shows the effect of projecting the edges along the upward vertical. The projection of an edge i with magnitude m_i and phase θ_i (where θ_i = 0 corresponds to a vertical edge whose normal is along the horizontal) generates the projected magnitude value m_i sin(θ_i).
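A minimal sketch of the bright-to-dark vertical projection, substituting `np.gradient` for the Sobel operator of the source as a simplification.

```python
import numpy as np

def downward_dark_projection(image):
    """Keep only the vertical component of the intensity gradient where the
    image goes from bright to dark as y grows (the shading under the nose);
    dark-to-bright transitions and horizontal contrast are discarded."""
    gy, gx = np.gradient(image.astype(float))   # gy > 0 where brightness grows downward
    return np.maximum(-gy, 0.0)                 # bright-above / dark-below edges only
```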


Figure 8.19: Nose edge data. (a) Sobel gradient map. (b) Gradient map projected along vertical.

Vertical Signatures
We now consider the use of vertical signatures of the projected gradient map to isolate the nose. This technique is reminiscent of Kanade's signature analysis.


Figure 8.19: Nose height. (a) Search space. (b) Gradient map projected along upwards vertical with corresponding nose signature. Note that this is not quite a conventional signature since a triangular summation region is used.

We define the spatial search space using the previously localized eyes and mouth. This restricts the signature analysis so that no facial contours or external edges affect it. Figure 8.19(a) shows the region where signature analysis will be performed. The edges contained by this triangle are summed into bins corresponding to their y value. These bins form the vertical signature of the nose, and the bin with the strongest edge content corresponds to the vertical position of the nose's bottom. Figure 8.19(b) also shows the projected gradient map and the signature that was computed in the search space; the nose's bottom position corresponds to the peak value of the signature.

However, we are interested in the nose tip, not the nose bottom. The nose tip characterizes 3D pose more clearly since it strongly protrudes from the ellipsoidal structure of the head. Assume the nose bottom was detected at position noseBottom_y at the peak signature value of noseBottom_value. We search a window of limited height above the nose bottom for a weaker signature value. The nose tip is defined as the closest point in the window with a signature value below 40% of noseBottom_value. This simple adjustment is depicted in Figure 8.20, and the positions of the nose bottom and the nose tip are shown as horizontal lines in Figure 8.21. Note that the effect of this computation is quite minor and the nose tip is only 2 pixels above the nose bottom. Although the definition of the nose tip and the use of the 40% threshold are somewhat arbitrary, we merely wish to move out of the region corresponding to the nose bottom (nostrils and shading) by a marginal amount, so that the position detected has a 3D height. In other words, we wish to move upwards a small distance so that we localize a point somewhere on the nose, taking advantage of its 3D protrusion on the face (which specifies pose more exactly than non-protruding features). Furthermore, the small upwards adjustment from nose bottom to nose tip does not have to be exact, as long as we land somewhere on the nose and not on the junction between the nose bottom and the face (which is not a 3D protrusion). Usually, the nose tip is brighter than the rest of the nose and the nose bottom is darker; however, the transition from nose tip to nose bottom, or bright to dark, is somewhat gradual. By moving upwards in search of a 40% signature value (rather than the maximum), we are searching for the beginning of this transition and moving closer to the true nose position in the process.
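The nose-bottom/nose-tip search can be sketched as follows; the window height is an assumption, while the 40% fraction comes from the text.

```python
import numpy as np

def nose_tip_from_signature(signature, window=6, fraction=0.4):
    """Nose bottom = bin with the strongest projected edge content; nose tip
    = closest bin above it (within a hypothetical `window`) whose signature
    drops below 40% of the peak, i.e. the start of the dark-to-bright
    transition up the nose."""
    nose_bottom_y = int(np.argmax(signature))
    peak = signature[nose_bottom_y]
    for y in range(nose_bottom_y - 1, max(-1, nose_bottom_y - window - 1), -1):
        if signature[y] < fraction * peak:
            return nose_bottom_y, y
    return nose_bottom_y, nose_bottom_y   # no clear drop found: keep the bottom

# Example signature: edge content peaks at bin 5 (the nose bottom).
sig = np.array([0.0, 0.5, 1.0, 4.0, 9.0, 10.0, 2.0])
bottom, tip = nose_tip_from_signature(sig)
```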

Figure 8.20: Finding the nose tip from the nose bottom using the edge signature.

Figure 8.21: The nose-bottom-line and the nose-tip-line.

Thus, we have roughly determined the height of the nose tip with respect to the eyes. However, we are uncertain of the exact horizontal position of the nose. The required localization is difficult to perform using simple signature analysis, mainly because noses have the same tone as the rest of the skin-covered face and hence have low perceptual significance. For now, the nose-localization module merely outputs a height value at the nose-tip position, so we do not have a single locus for the nose but, rather, a line of possible loci. This nose-line lies between the two eyes at a fixed perpendicular distance below them. Thus, the output of the nose detection defines a nose-line (as opposed to a nose locus) along which the nose is situated, as depicted in Figure 8.21. The nose-line crosses the nose tip and is parallel to the line formed by the two eyes. Additionally, its length is equal to the intra-ocular distance; in other words, the nose-line starts below the left eye and ends below the right eye.

Improving and Filtering Localization


The localization procedure thus yields an output similar to the one found in Figure 8.22. Note that the horizontal (i.e., parallel to the intra-ocular axis) position of the nose is uncertain. This is represented by a white line across the vertical position of the nose, from the left eye to the right eye. The nose tip is a point on this line segment, and its exact locus is found by the techniques described later. Furthermore, it is possible that the face detection computation will be triggered by a non-face which happens to have a blob-like structure with eye blobs and a mouth-limb. These 'false alarms' by the face detector will be rejected using techniques described later.

Figure 8.22: A typical output of the face detection stage

Thus, we have given a procedure for localizing the face and facial features in an image. We begin by finding the face-blob and then estimating a facial contour. From there, we define the eye-band, a region where the eyes might be present. Eye-blob detection is then performed and the blobs are tested geometrically to see if an adequate pair can be found. If two blobs are geometrically 'eye-like', we search for a mouth between them and then, finally, a nose-line.

Face Normalization and Recognition


The position of a rigid object can be specified by 6 parameters: 3 rotations and 3 translations. The rigid motion of a face or any object is specified by these 6 parameters. Rigid motion of the face accounts for a great amount of variance in its appearance in a 2D image array. Furthermore, the lighting changes caused by light sources at arbitrary positions and intensities also account for a significant amount of variance. Simultaneously, the non-rigid deformations of the face (from muscular actuation and identity variation) cause more subtle variations in the 2D image. An individual's identity, however, is captured by these small variations alone and is not specified by the variance due to the large rigid body motion and illumination of the face. Thus, it is necessary to compensate or normalize a face for position and illumination so that the variance due to these is minimized. Consequently, the small variations in the image due to identity, muscle actuation and so on will become the dominant source of intensity variance in an image and can thus be analyzed for recognition purposes.

Recall the output of the face detection and localization stage. The eyes, the nose and the mouth were identified using direct image processing techniques. Assume for now that the nose's horizontal position was also determined and an exact locus for the nose tip is available. The detection of the loci of these feature points (eyes, nose and mouth) gives an estimate of the pose of an individual's face.

Chapter - 11

Conclusion
The project is an effort to implement a biometric method and to protect the system through face detection. The technique does not take 3D faces into account; therefore, faces cannot be detected from a web camera. The enhancements discussed here may be used to extend the project to 3D face recognition, especially against an uneven background. The project is a simplistic approach and requires more sophisticated filtering to enhance it into an industrial project.

Chapter - 12

BIBLIOGRAPHY
C# and the .NET Platform by Andrew Troelsen
Inside C# by Tom Archer
Digital Image Processing Using MATLAB by Gonzalez
Technical papers for pap smear test: http://en.wikipedia.org/w/index.php?title=Liquid-based_cytology
Medical image edge detection (Fan et al., 2001; Rughooputh & Rughooputh, 1999; Anoraganingrum, 1999)
