The June 2008 issue of Scientific American contains the article "Digital Image Forensics," by Hany Farid, Professor and Associate Chair of Computer Science at Dartmouth College. Professor Farid says he is "often asked to authenticate images for media outlets, law-enforcement agencies, the courts and private citizens."
The article describes various analytical techniques used to detect faked or altered digital imagery, including:
- Inferring light-source direction from object brightness
"Composite images made of pieces from different photographs can display subtle differences in the lighting conditions under which each person or object was originally photographed. [...] To infer the light-source direction, you must know the local orientation of the surface. At most places on an object in an image, it is difficult to determine the orientation. The one exception is along a surface contour, where the orientation is perpendicular to the contour."
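The contour idea can be sketched numerically. Under a Lambertian model, brightness ≈ n · L + ambient, so given the outward normals along a contour and the brightness there, the 2-D light direction falls out of a linear least-squares fit. This is a toy illustration of that reasoning, not Farid's actual algorithm; the sample angles and light vector are made up.

```python
import numpy as np

# Hypothetical brightness samples along an object contour, each with the
# outward surface normal (perpendicular to the contour at that point).
angles = np.deg2rad([0, 30, 60, 90, 120])
normals = np.column_stack([np.cos(angles), np.sin(angles)])

true_light = np.array([0.8, 0.6])   # assumed ground-truth light direction
ambient = 0.1
brightness = normals @ true_light + ambient

# Solve [n_x, n_y, 1] [Lx, Ly, a]^T = I for the light vector by least squares.
A = np.column_stack([normals, np.ones(len(angles))])
sol, *_ = np.linalg.lstsq(A, brightness, rcond=None)
light_dir = sol[:2] / np.linalg.norm(sol[:2])
print(light_dir)  # inferred light-source direction
```

In a composite, running this fit separately on each person's contour and comparing the recovered directions is what exposes the inconsistency.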
- Inferring light-source direction from eye highlights
"Surrounding lights reflect in eyes to form small white dots called specular highlights. [Sometimes the highlights are] so inconsistent that visual inspection is enough to infer that the photograph has been doctored. Many cases, however, require a mathematical analysis. [...] Our algorithm calculates the orientation of a person's eyes from the shape of the irises in the image. With this information and the position of the specular highlights, the program estimates the direction to the light."
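A heavily simplified version of the geometry: model the cornea as a sphere, so the specular highlight sits where the surface normal bisects the viewing direction and the light direction. Reflecting the view ray about that normal then points back at the light. This sketch is my own simplification (it ignores the eye-orientation estimation from iris shape that Farid's program performs), and the highlight coordinates are invented.

```python
import numpy as np

r = 1.0
view = np.array([0.0, 0.0, 1.0])       # camera looks down the +z axis

# Highlight observed at offset (hx, hy) from the eye's center.
hx, hy = 0.3, 0.1
hz = np.sqrt(r**2 - hx**2 - hy**2)     # lift onto the corneal sphere
normal = np.array([hx, hy, hz]) / r

# Reflect the view direction about the surface normal: L = 2(N.V)N - V.
light = 2 * np.dot(normal, view) * normal - view
light /= np.linalg.norm(light)
print(light)  # unit vector pointing toward the light source
```

Comparing the vectors recovered from each subject's eyes is what reveals a pasted-in face lit from a different direction.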
- Analyzing iris shapes
When a subject is gazing away from the camera, each iris has a slightly different elliptical shape in the image. Together, these shapes can be used to infer the principal point of the camera. If the principal points inferred from different subjects in the same image are inconsistent, the image has been altered. Farid says "[...] we can reliably estimate large camera differences, such as when a person is moved from one side of the image to the middle. It is harder to tell if the person was moved much less than that."
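The geometric intuition behind the iris analysis: a circular iris viewed off-axis projects to an ellipse, and (in a simplified model) the ratio of minor to major axis approximates the cosine of the angle between the gaze and the camera's optical axis. The snippet below shows only that one step with made-up pixel measurements, not the full principal-point estimation described in the article.

```python
import numpy as np

# Hypothetical measured ellipse axes of an imaged iris, in pixels.
major, minor = 12.0, 9.0
angle = np.degrees(np.arccos(minor / major))
print(round(angle, 1))  # viewing angle in degrees -> 41.4
```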
- Block signature matching
A signature value is computed for every small block in an image. Signature matching followed by region growing is then used to identify cloned image regions. The example given in the article identifies cloned regions in a scene showing a large crowd.
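A toy version of the block-matching step, to make the idea concrete. Here the "signature" is simply the block's raw bytes; real detectors use robust signatures (for example, quantized DCT coefficients) so that recompression doesn't break the match, and then grow matched blocks into regions. The image and the cloned patch are synthetic.

```python
import numpy as np
from collections import defaultdict

def find_cloned_blocks(img, block=8):
    """Group identical block signatures to flag possibly cloned regions."""
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            sig = img[y:y + block, x:x + block].tobytes()
            seen[sig].append((y, x))
    # Keep only signatures that occur at more than one location.
    return {sig: locs for sig, locs in seen.items() if len(locs) > 1}

# Demo: paste a copy of one region elsewhere in a random image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[20:28, 20:28] = img[2:10, 2:10]     # simulated cloning
matches = find_cloned_blocks(img)
print(len(matches) >= 1)  # True: the cloned block pair is found
```

For a random image, two 8x8 blocks virtually never match by chance, so any repeated signature is strong evidence of cloning.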
- Knowledge of camera sensor patterns
Digital camera sensor patterns, such as the Bayer color filter array, impose certain pixel-to-pixel correlations that are expected following the demosaicking step. "If an image does not have the proper pixel correlations for the camera allegedly used to take the picture, the image has been retouched in some fashion. [...] A drawback of this technique is that it can be applied usefully only to an allegedly original digital image; a scan of a printout, for instance, would have new correlations imposed courtesy of the scanner."
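A simplified demonstration of why demosaicking leaves detectable correlations. With bilinear demosaicking, each interpolated green pixel is the average of its four measured neighbors, so the residual against that average is essentially zero across an untouched image and jumps wherever pixels were repainted. The checkerboard layout, the interpolation, and the retouch location below are all simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
green = rng.uniform(50, 200, size=(16, 16))

# Interpolated ("missing") green sites form a checkerboard in a Bayer CFA.
mask = (np.add.outer(np.arange(16), np.arange(16)) % 2 == 1)

def neighbor_avg(g):
    a = np.zeros_like(g)
    a[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1] +
                     g[1:-1, :-2] + g[1:-1, 2:]) / 4
    return a

green[mask] = neighbor_avg(green)[mask]  # simulate bilinear demosaicking
green[7, 8] += 40.0                      # simulate local retouching

# Interpolated interior pixels should equal their neighbor average.
resid = np.abs(green - neighbor_avg(green))
interior = mask.copy()
interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
suspect = resid > 1e-6
print(suspect[7, 8], suspect[interior].sum())  # the retouched site stands out
```

This also illustrates the drawback quoted above: printing and rescanning the image would destroy these correlations and impose the scanner's own.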