The World of Biometrics
Facial Recognition

Humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently begun to show the same ability. In the mid-1960s, scientists began work on using computers to recognize human faces. Since then, facial recognition software has come a long way.

Software such as FaceIt® can pick someone's face out of a crowd, extract it from the rest of the scene and compare it to a database of stored images. For this software to work, it has to know how to differentiate between a basic face and the rest of the background. Facial recognition software is based on the ability to first recognize a face and then measure its various features.

Every face has numerous distinguishable landmarks, the different peaks and valleys that make up facial features. FaceIt defines these landmarks as nodal points. Each human face has approximately 80 nodal points. Some of the nodal points measured by the software are:

* Distance between the eyes
* Width of the nose
* Depth of the eye sockets
* Shape of the cheekbones
* Length of the jaw line

These nodal points are measured to create a numerical code, called a faceprint, that represents the face in the database.
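
As a rough illustration of what a faceprint might look like in code (this is not FaceIt's actual algorithm), the sketch below models a faceprint as a fixed-length vector of nodal-point measurements and compares two faceprints with a simple Euclidean distance; all field names and numbers are hypothetical.

```python
# Illustrative sketch only; not FaceIt's proprietary algorithm.
# A "faceprint" is modeled as a fixed-length vector of nodal-point
# measurements (all field names and numbers are hypothetical).
from dataclasses import dataclass
from math import sqrt


@dataclass
class Faceprint:
    eye_distance: float      # distance between the eyes
    nose_width: float        # width of the nose
    socket_depth: float      # depth of the eye sockets
    cheekbone_shape: float   # single number standing in for cheekbone shape
    jawline_length: float    # length of the jaw line

    def as_vector(self) -> list[float]:
        return [self.eye_distance, self.nose_width, self.socket_depth,
                self.cheekbone_shape, self.jawline_length]


def faceprint_distance(a: Faceprint, b: Faceprint) -> float:
    """Euclidean distance between two faceprints; smaller means more similar."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a.as_vector(), b.as_vector())))


probe = Faceprint(62.0, 34.5, 11.2, 4.8, 120.3)
stored = Faceprint(61.7, 34.9, 11.0, 4.9, 119.8)
print(f"faceprint distance: {faceprint_distance(probe, stored):.2f}")
```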

In the past, facial recognition software relied on a 2D image to compare with or identify another 2D image in the database. To be effective and accurate, the captured image needed to show a face looking almost directly at the camera, with little variance in lighting or facial expression from the image in the database. This created quite a problem.

In most instances the images were not taken in a controlled environment. Even small changes in lighting or orientation could reduce the effectiveness of the system, so the captured image could not be matched to any face in the database, leading to a high rate of failure.


3D Imaging

A newly emerging trend, claimed to achieve previously unseen accuracy, is three-dimensional facial recognition. This technique uses 3D sensors to capture a real-time image of the shape of a person's facial surface. That information is then used to identify distinctive features where rigid tissue and bone are most apparent, such as the curves of the eye sockets, nose and chin. These areas are unique to each person and do not change over time.

Because it relies on depth and an axis of measurement that are not affected by lighting, 3D facial recognition can even be used in darkness. It can also recognize a subject from a range of viewing angles, with the potential to handle up to 90 degrees (a face in profile).

Using 3D software, the system goes through a series of steps to verify the identity of an individual: detection, alignment, measurement, representation, matching, and verification or identification.

Detection
Acquiring an image can be accomplished by digitally scanning an existing photograph (2D) or by using a video image to acquire a live picture of a subject (3D).
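
The article does not say how FaceIt performs detection; as one common, generic way to carry out this step on a 2D photograph, the sketch below uses OpenCV's bundled Haar-cascade face detector. The image filename is a placeholder.

```python
# Detection step, sketched with OpenCV's stock Haar-cascade face detector.
# This is a generic example; the article does not specify FaceIt's detector.
import cv2

# Frontal-face cascade that ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("subject.jpg")   # placeholder filename for a 2D photograph
if image is None:
    raise SystemExit("subject.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) rectangle per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face found at x={x}, y={y}, width={w}, height={h}")
```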

Alignment
Once it detects a face, the system determines the head's position, size and pose. As stated earlier, the subject has the potential to be recognized up to 90 degrees, while with 2D, the head must be turned at least 35 degrees toward the camera.
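
As a simplified illustration of the alignment step, the sketch below estimates head yaw from the horizontal position of the nose tip relative to the eyes and checks it against the 35-degree limit quoted for 2D systems. The landmark coordinates and the yaw formula are rough assumptions, not FaceIt's actual pose model.

```python
# Rough alignment check: estimate head yaw from eye and nose landmarks.
# The formula is a crude geometric approximation used only to illustrate
# the alignment step; all landmark values are hypothetical.
from math import asin, degrees

MAX_2D_YAW_DEG = 35.0   # 2D limit mentioned in the text
MAX_3D_YAW_DEG = 90.0   # 3D limit mentioned in the text (full profile)


def estimate_yaw(left_eye_x: float, right_eye_x: float, nose_x: float) -> float:
    """Approximate yaw: a centered nose tip gives 0 degrees; a nose tip
    level with an eye gives roughly 90 degrees."""
    half_span = (right_eye_x - left_eye_x) / 2.0
    midpoint = (left_eye_x + right_eye_x) / 2.0
    offset = (nose_x - midpoint) / half_span
    offset = max(-1.0, min(1.0, offset))     # clamp to the valid asin range
    return degrees(asin(offset))


yaw = estimate_yaw(left_eye_x=100.0, right_eye_x=160.0, nose_x=142.0)
print(f"estimated yaw: {yaw:.1f} degrees")
print("within the 2D limit:", abs(yaw) <= MAX_2D_YAW_DEG)
print("within the 3D limit:", abs(yaw) <= MAX_3D_YAW_DEG)
```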

Measurement
The system then measures the curves of the face on a sub-millimeter scale and creates a template.


Representation
The system translates the template into a unique code. This coding gives each template a set of numbers to represent the features on a subject's face.
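
The article does not describe the actual encoding FaceIt uses; the sketch below shows one simple way a measurement template could be translated into a compact numeric code, by quantizing each curve measurement into an integer bucket. The bucket width and sample template are assumptions.

```python
# Representation step, sketched as simple quantization: each measured curve
# is mapped to an integer bucket so the whole template becomes a short code.
# The bucket width and the sample template are illustrative assumptions.

BUCKET_WIDTH_MM = 0.5   # hypothetical quantization step


def encode_template(measurements: list[float]) -> tuple[int, ...]:
    """Turn a list of sub-millimeter curve measurements into a numeric code."""
    return tuple(round(m / BUCKET_WIDTH_MM) for m in measurements)


template = [11.23, 34.51, 61.98, 120.27]   # hypothetical curve measurements
print("face code:", encode_template(template))   # (22, 69, 124, 241)
```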

Matching
If the image is 3D and the database contains 3D images, matching takes place without any changes being made to the image. However, a challenge remains for databases that still contain only 2D images: 3D capture produces a live, moving, variable subject that must be compared to a flat, stable image. New technology is addressing this challenge. When a 3D image is taken, different points (usually three) are identified. For example, the outside corner of the eye, the inside corner of the eye and the tip of the nose might be pulled out and measured. Once those measurements are in place, an algorithm (a step-by-step procedure) is applied to the image to convert it to a 2D image. After conversion, the software compares the image with the 2D images in the database to find a potential match.
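
A minimal sketch of the 3D-to-2D conversion idea described above, assuming a plain orthographic projection (dropping the depth coordinate) and a comparison of pairwise landmark distances, follows; the real conversion algorithm is not described in the article, and all coordinates are hypothetical.

```python
# Matching step, sketched: flatten three 3D landmarks to 2D and compare their
# pairwise distances with a stored 2D record. The orthographic projection and
# all coordinates are illustrative assumptions.
from math import dist

# Hypothetical 3D landmarks (x, y, z): outer eye corner, inner eye corner, nose tip.
landmarks_3d = [(0.0, 0.0, 2.1), (28.0, 1.0, 3.4), (14.0, -30.0, 18.0)]

# Hypothetical 2D landmarks taken from a database photograph.
landmarks_2d_db = [(0.0, 0.0), (27.6, 1.1), (14.2, -29.5)]


def project_to_2d(points_3d):
    """Orthographic projection: simply drop the depth coordinate."""
    return [(x, y) for x, y, _z in points_3d]


def pairwise_distances(points):
    return [dist(points[i], points[j])
            for i in range(len(points)) for j in range(i + 1, len(points))]


probe = pairwise_distances(project_to_2d(landmarks_3d))
stored = pairwise_distances(landmarks_2d_db)
error = sum(abs(a - b) for a, b in zip(probe, stored))
print(f"total distance error: {error:.2f}  (smaller means a closer match)")
```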

Verification or Identification
In verification, an image is matched to only one image in the database (1:1). For example, an image taken of a subject may be matched to an image in the Department of Motor Vehicles database to verify the subject is who he says he is. If identification is the goal, then the image is compared to all images in the database resulting in a score for each potential match (1:N). In this instance, you may take an image and compare it to a database of mug shots to identify who the subject is.
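
The difference between 1:1 verification and 1:N identification can be sketched directly from this description; the gallery, faceprints, similarity measure and threshold below are all placeholders.

```python
# Verification (1:1) versus identification (1:N), sketched with a plain
# Euclidean distance as a placeholder similarity measure. The gallery,
# faceprints and threshold are all hypothetical.
from math import dist

DATABASE = {                     # hypothetical gallery of enrolled faceprints
    "alice": [62.0, 34.5, 11.2],
    "bob":   [58.3, 36.1, 10.4],
    "carol": [64.9, 33.0, 12.0],
}
MATCH_THRESHOLD = 1.5            # assumed acceptance threshold


def verify(probe, claimed_id):
    """1:1: compare the probe against the single claimed identity."""
    return dist(probe, DATABASE[claimed_id]) <= MATCH_THRESHOLD


def identify(probe):
    """1:N: score the probe against every enrolled identity, best first."""
    return sorted((dist(probe, fp), name) for name, fp in DATABASE.items())


probe = [62.2, 34.4, 11.0]
print("verification as 'alice':", verify(probe, "alice"))
print("ranked identification:", identify(probe))
```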


Surface Texture Analysis

An image may not always be verified or identified by facial recognition alone. Software such as FaceIt® Argus adds skin biometrics, the uniqueness of skin texture, to yield even more accurate results.

The surface texture analysis (STA) algorithm operates on the top percentage of results as determined by the local feature analysis. STA creates a skinprint and performs either a 1:1 or 1:N match depending on whether you're looking for verification or identification.


The process, called Surface Texture Analysis, works much the same way facial recognition does. A picture is taken of a patch of skin, called a skinprint. That patch is then broken up into smaller blocks. Using algorithms to turn the patch into a mathematical, measurable space, the system will then distinguish any lines, pores and the actual skin texture. It can identify differences between identical twins, which is not yet possible using facial recognition software alone. According to Identix, by combining facial recognition with surface texture analysis, accurate identification can increase by 20 to 25 percent.
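
As a rough sketch of how a skin patch might be broken into blocks and reduced to measurable numbers (the article does not give Identix's actual algorithm), the code below splits a grayscale patch into small blocks and records each block's mean and variance as a simple texture descriptor.

```python
# Surface texture analysis, sketched: split a grayscale skin patch into
# blocks and summarize each block with its mean and variance. This is a
# generic texture descriptor, not Identix's proprietary STA algorithm.
import random

random.seed(0)
PATCH_SIZE = 16          # hypothetical patch: 16 x 16 pixel intensities
BLOCK_SIZE = 4           # each block is 4 x 4 pixels

# Stand-in for a captured skin patch; a real system would use camera pixels.
patch = [[random.randint(0, 255) for _ in range(PATCH_SIZE)]
         for _ in range(PATCH_SIZE)]


def block_features(patch, block_size):
    """Return (mean, variance) per block, row by row; this list is the skinprint."""
    features = []
    for top in range(0, len(patch), block_size):
        for left in range(0, len(patch[0]), block_size):
            pixels = [patch[r][c]
                      for r in range(top, top + block_size)
                      for c in range(left, left + block_size)]
            mean = sum(pixels) / len(pixels)
            variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
            features.append((round(mean, 1), round(variance, 1)))
    return features


skinprint = block_features(patch, BLOCK_SIZE)
print(f"{len(skinprint)} blocks; first block (mean, variance): {skinprint[0]}")
```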

FaceIt currently uses three different templates to confirm or identify the subject: vector, local feature analysis and surface texture analysis.

* The vector template is very small and is used for rapid searching over the entire database, primarily for one-to-many (1:N) matching.
* The local feature analysis (LFA) template performs a secondary search of ordered matches following the vector template.
* The surface texture analysis (STA) template is the largest of the three. It performs a final pass after the LFA search, relying on the skin features in the image, which contain the most detailed information (a rough sketch of this coarse-to-fine chain follows below).
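
The three templates act as a coarse-to-fine cascade. The sketch below mimics that flow with placeholder scoring functions: a fast vector search over the whole gallery, a local feature analysis re-ranking of the survivors, and a final surface texture analysis pass. The scorers and cutoffs are illustrative assumptions, not FaceIt internals.

```python
# Coarse-to-fine cascade, sketched: a fast vector search over the whole
# gallery, LFA re-ranking of the survivors, then a final STA pass. The three
# scorers below are placeholders (lower score means a closer match).
GALLERY = [f"subject_{i:03d}" for i in range(1000)]   # hypothetical enrolled IDs


def vector_score(candidate: str) -> float:
    """Fast, coarse placeholder score."""
    return (hash(candidate + ":vector") % 1000) / 1000.0


def lfa_score(candidate: str) -> float:
    """Slower, more detailed placeholder score (local feature analysis)."""
    return (hash(candidate + ":lfa") % 1000) / 1000.0


def sta_score(candidate: str) -> float:
    """Most detailed placeholder score (surface texture analysis)."""
    return (hash(candidate + ":sta") % 1000) / 1000.0


def cascade_identify(gallery, keep_vector=100, keep_lfa=10):
    # Stage 1: rapid vector-template search over the entire gallery.
    stage1 = sorted(gallery, key=vector_score)[:keep_vector]
    # Stage 2: local feature analysis re-ranks the ordered matches from stage 1.
    stage2 = sorted(stage1, key=lfa_score)[:keep_lfa]
    # Stage 3: surface texture analysis makes the final, most detailed pass.
    return min(stage2, key=sta_score)


print("best candidate:", cascade_identify(GALLERY))
```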

Uses

Facial recognition is one of three biometric identification technologies internationally standardized by the ICAO for use in future passports (the other two are iris recognition and fingerprint recognition).

The London Borough of Newham, in the UK, previously trialled a facial recognition system built into their borough-wide CCTV system.

The German Federal Police use a facial recognition system to allow voluntary subscribers to pass fully automated border controls at Frankfurt Rhein-Main international airport. Recognition systems are also used by casinos to catch card counters and other blacklisted individuals.

The Australian Customs Service has an automated border processing system called SmartGate that uses facial recognition. The system compares the face of the individual with the image in the e-passport microchip, certifying that the holder of the passport is the rightful owner.

The Pennsylvania Justice Network searches crime scene photographs and CCTV footage against a mugshot database of previous arrests. A number of cold cases have been resolved since the system became operational in 2005. Other law enforcement agencies in the United States and abroad use arrest mugshot databases in their forensic investigative work.

The U.S. Department of State operates one of the largest facial recognition systems in the world, containing over 75 million photographs, which is actively used for visa processing.

Spaceship Earth at Epcot uses facial recognition for the touch-screen portion of the ride.

 
 