Advances in Computers, Volume 77, 1st Edition
Edited by Marvin Zelkowitz
Year: 2009
Academic Press is an imprint of Elsevier
32 Jamestown Road, London, NW1 7BY, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written
permission of the publisher. Permissions may be sought directly from Elsevier's Science & Technology
Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email:
[email protected]. Alternatively, you can submit your request online by visiting the Elsevier web
site at http://elsevier.com/locate/permissions and selecting "Obtaining permission to use Elsevier material."
Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a
matter of products liability, negligence, or otherwise, or from any use or operation of any methods,
products, instructions, or ideas contained in the material herein.
ISBN: 978-0-12-374812-6
ISSN: 0065-2458
Prof. Robert Aalberts is the Lied Professor of Legal Studies at the University of
Nevada, Las Vegas. He received his Juris Doctor from Loyola University and an
M.A. from the University of Missouri-Columbia. Prior to his academic career, he
was an attorney for the Gulf Oil Company. His primary research interests include
real estate law, cyber law, and employment law. He has also published over 105
articles in legal and business periodicals. He is currently the Editor-in-Chief of the
Real Estate Law Journal, where he has served for the past 16 years. He is also
coauthor of the textbooks Law and Business: The Regulatory Environment (1994),
published by the McGraw-Hill Book Company, and Real Estate Law, 7th edition
(2009), published by Southwestern/Cengage Learning.
Prof. Eric Allender received a B.A. from the University of Iowa in 1979, majoring
in Computer Science and Theatre, and a Ph.D. from Georgia Tech in 1985. He has
been at Rutgers University since then, serving as department chair from 2006 to
2009. He is a Fellow of the ACM and serves on the editorial boards of the ACM
Transactions on Computation Theory, Computational Complexity, and The Chicago
Journal of Theoretical Computer Science. He has chaired the Conference Commit-
tee for the annual IEEE Conference on Computational Complexity, and he serves on
the Scientific Board for the Electronic Colloquium on Computational Complexity
(ECCC).
Prof. Hany Farid received his undergraduate degree in Computer Science and
Applied Mathematics from the University of Rochester in 1989. He received his
Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following
a 2-year postdoctoral position in Brain and Cognitive Sciences at MIT, he joined the
Dartmouth faculty in 1999. He is the David T. McLaughlin Distinguished Professor
of Computer Science and Associate Chair of Computer Science. He is also affiliated
with the Institute for Security Technology Studies at Dartmouth. He is the recipient
of an NSF CAREER award, a Sloan Fellowship, and a Guggenheim Fellowship. He
can be reached at [email protected], and more information about his work can
be found at www.cs.dartmouth.edu/farid.
Dr. Mikael Lindvall is a Senior Scientist and the Director of the Software Archi-
tecture and Embedded Systems division at Fraunhofer Center for Experimental
Software Engineering, Maryland. He is interested in best practices and
Prof. Jörn Loviscach is a Professor for Computer Graphics, Animation, and Simula-
tion at Hochschule Bremen (University of Applied Sciences) in Bremen, Germany.
He is interested in 2D and 3D graphics algorithms and systems, human–computer
interaction, audio and music computing, in particular concerning applications that
require signal processing and/or the development of specialized electronics. A regular
contributor to conferences such as SIGGRAPH, Eurographics, and the AES Conven-
tion, he has published numerous chapters in book series such as Game Programming
Gems and ShaderX Programming. Before becoming a professor in 2000, he was
Deputy Editor-in-Chief of the popular German computer magazine "c't," the editorial
staff of which he joined soon after earning his doctorate in physics.
Dr. Jürgen Münch is Division Manager for Software and Systems Quality Man-
agement at the Fraunhofer Institute for Experimental Software Engineering (IESE)
in Kaiserslautern, Germany. Before that, he was Department Head for Processes and
Measurement at Fraunhofer IESE and an executive board member of the temporary
research institute SFB 501, which focused on software product lines. He received his
Ph.D. degree (Dr. rer. nat.) in Computer Science from the University of Kaiserslau-
tern, Germany, at the chair of Prof. Dr. Dieter Rombach. His research interests in
software engineering include (1) modeling and measurement of software processes
and resulting products, (2) software quality assurance and control, (3) technology
evaluation through experimental means and simulation, (4) software product lines,
and (5) technology transfer methods. He has significant project management expe-
rience and has headed various large research and industrial software engineering
projects, including the definition of international quality and process standards.
His main industrial consulting activities are in the areas of process management,
goal-oriented measurement, quality management, and quantitative modeling. He
has been teaching and training in both university and industry environments. He has
coauthored more than 80 international publications, and has been co-organizer,
program cochair, or member of the program committee of numerous high-standard
software engineering conferences and workshops. He is a member of ACM, IEEE,
the IEEE Computer Society, and the German Computer Society (GI).
His primary research interests are in the investment area. He has published numer-
ous articles in both finance and other business periodicals, including the American
Business Law Journal, Journal of Finance, Journal of Banking and Finance, CACM,
Financial Review, and Financial Practice and Education. His research on portfolio
diversification has been cited by the Wall Street Journal and the Investor’s Business
Daily. He has served as an ad hoc reviewer for various academic financial period-
icals, including Financial Management, Financial Review, and Financial Practice
and Education. He also offered his expertise to the business community, including
seminars to a utilities company on the uses of options and futures to hedge energy
costs.
This is volume 77, the last volume in the 50th year of publication of the Advances in
Computers. Since 1960, annual volumes have been produced containing chapters
authored by some of the leading experts in the field of computers. For 50 years, these
volumes have offered ideas and developments that are changing our society. This volume
presents eight different topics covering many different aspects of computer science.
I hope you find them of interest. The first three chapters provide insights into the
different ways individuals can interact with electronic devices. First we look at
digital photography, then at display devices, and in Chapter 3 at the game
interfaces that have been developed.
   Today, the once ubiquitous film camera has all but disappeared from view,
replaced by ever cheaper and larger digital memory chips. While digital photography
allows a huge number of pictures to be taken essentially for free, it carries a cost in the
security of the pictures. Digital images and associated software allow the photographer (or
almost anyone else, for that matter) to manipulate the bits of the image and hence
change the picture. How do we discover such tampering, and how do digital forensics
work to uncover fakery? Hany Farid in ‘‘Photo Fakery and Forensics’’ in Chapter 1
of this volume discusses methods for detecting inconsistencies in lighting and pixel
correlations to detect forgeries.
   Not only have cameras changed, but so too have all other visual devices
connected to the computer. Jason Leigh, Andrew Johnson, and Luc Renambot in
Chapter 2’s ‘‘Advances in Computer Displays’’ discuss a wide variety of visual
display technology—from the old-fashioned cathode ray tube (CRT) to more
modern plasma displays, stereoscopic displays, and wall displays. They discuss
what the environment of the future—whether at work or at home—is likely to
contain.
   In Chapter 3, Jörn Loviscach in ‘‘Playing with All Senses: Human–Computer
Interface Devices for Games’’ discusses mechanisms for interacting with games on a
computer. After quickly passing through the usual mouse, keyboard, and joystick, he
discusses pen and touch input devices, sensors, and cameras. Inertial sensors allow
the user to move the device and the computer to interpret that motion, such as in
Nintendo’s successful Wii machine. Incorporating all of these into the next
sent—only the electronic bits that describe the document are sent. So who owns this
document? This is only one simple example. Can our 1000-year-old system based
upon English common law adapt to this new technology? In the final chapter, ‘‘The
Common Law and Its Impact on the Internet,’’ Robert Aalberts, David Hames, Percy
Poon, and Paul D. Thistle discuss how the legal system is adapting to this new
cyberworld.
   I hope that you find these chapters of use to you in your work. If you have any
topics you would like to see in these volumes, let me know. If you would like
to write a chapter for a forthcoming volume, also let me know. I can be reached
at [email protected].
Marvin Zelkowitz
College Park, Maryland
Photo Fakery and Forensics
        HANY FARID
        Department of Computer Science, Dartmouth College,
        Hanover, New Hampshire 03755, USA
        Abstract
        Photographs can no longer be trusted. From the tabloid magazines to the fashion
        industry, mainstream media outlets, political campaigns, and the photo hoaxes
        that land in our email inboxes, doctored photographs are appearing with a
        growing frequency and sophistication. I will briefly describe the impact of all
        of this photographic tampering and recent technological advances that have the
        potential to return some trust to photographs. Specifically, I will describe a
        representative sample of image forensics techniques for detecting inconsistencies
        in lighting, pixel correlations, and compression artifacts.
1. Photo Fakery
   1.1. Media
   1.2. Science
   1.3. Law
   1.4. Politics
   1.5. National Security
2. Photo Forensics
   2.1. Lighting Direction (2D)
   2.2. Lighting Direction (3D)
   2.3. Lighting Environment
   2.4. Color Filter Array
   2.5. JPEG Ghosts
3. Discussion
   Acknowledgments
   References
1. Photo Fakery
   History is riddled with the remnants of photographic fakery. Stalin, Mao, Hitler,
Mussolini, Castro, and Brezhnev each had photographs manipulated in an attempt to
alter history. Cumbersome and time-consuming darkroom techniques were required
to alter history on behalf of Stalin and others. Today, powerful and low-cost digital
technology has made it far easier for nearly anyone to alter digital images. And the
resulting fakes are often very difficult to detect. This photographic fakery is having a
significant impact in many different areas.
1.1 Media
   For the past decade, Adnan Hajj has produced striking war photographs from the
ongoing struggle in the Middle East. On 7 August 2006, the Reuters news agency
published one of Hajj’s photographs showing the remnants of an Israeli bombing of a
Lebanese town. In the week that followed, hundreds of bloggers and nearly every major
news organization reported that the photograph had been doctored with the addition of
more smoke. The general consensus was one of outrage and anger—Hajj was accused
of doctoring the image to exaggerate the impact of the Israeli shelling. An embarrassed
Reuters quickly retracted the photograph and removed from its archives nearly 1000
photographs contributed by Hajj. The case of Hajj is, of course, by no means unique. In
2003, Brian Walski, a veteran photographer of numerous wars, doctored a photograph
that appeared on the cover of the Los Angeles Times. After discovering the fake, the
outraged editors of the LA Times fired Walski. The news magazines Time and News-
week have each been rocked by scandal after it was revealed that photographs appearing
on their covers had been doctored. And, in the past few years, countless news organiza-
tions around the world have been shaken by similar experiences.
1.2 Science
   Those in the media are not alone in succumbing to the temptation to manipulate
photographs. In 2004, Professor Hwang Woo-Suk and colleagues published what
appeared to be groundbreaking advances in stem cell research. This paper appeared
in one of the most prestigious scientific journals, Science. Evidence slowly emerged
that these results were manipulated and/or fabricated. After months of controversy,
Hwang retracted the Science paper and resigned his position at the University. An
independent panel investigating the accusations of fraud found, in part, that at least
nine of the 11 customized stem cell colonies that Hwang had claimed to have made
were fakes. Much of the evidence for those nine colonies, the panel said, involved
doctored photographs of two other, authentic, colonies. While this case garnered
international coverage and outrage, it is by no means unique. In an increasingly
competitive field, scientists are succumbing to the temptation to exaggerate or
fabricate their results. Mike Rossner, the managing editor of the Journal of Cell
Biology, estimates that as many as 20% of accepted manuscripts to his journal
contain at least one figure that has to be remade because of inappropriate image
manipulation [1].
1.3 Law
   The child pornography charges against its Police Chief shocked the small town of
Wapakoneta, OH. At his trial, the defendant’s lawyer argued that if the State could not
prove that the seized images were real, then the defendant was within his rights in
possessing the images. In 1996, the Child Pornography Prevention Act (CPPA)
extended the existing federal criminal laws against child pornography to include certain
types of ‘‘virtual porn.’’ In 2002, the United States Supreme Court found that portions
of the CPPA, being overly broad and restrictive, violated First Amendment rights. The
Court ruled that ‘‘virtual’’ or ‘‘computer-generated’’ images depicting a fictitious
minor are constitutionally protected. The burden of proof in this case, and countless
others, shifted to the State who had to prove that the images were real and not computer
generated. Given the sophistication of computer-generated images, several state and
federal rulings have further found that juries should not be asked to make the determi-
nation between real and virtual. And at least one federal judge questioned the ability of
even expert witnesses to make this determination. This example highlights the general
complexities that exist at the intersection of digital technology and the law.
1.4 Politics
   "Fonda Speaks to Vietnam Veterans at Anti-War Rally" read the headline with
an accompanying photograph purportedly showing Senator John Kerry sharing a
stage with the then controversial Jane Fonda. The faux article was also a fake—a
composite of two separate and unrelated photographs. And just days after being
selected as a running mate to U.S. presidential hopeful John McCain, doctored
images of a bikini clad and gun-toting Sarah Palin were widely distributed on
the Internet. The pairing of one’s political enemies with controversial figures is
certainly not new. It is believed that a doctored photograph contributed to Senator
Millard Tydings’ electoral defeat in 1950. The photo of Tydings conversing with
Earl Browder, a leader of the American Communist party, was meant to suggest that
Tydings had communist sympathies. Recent political ads have seen a startling
number of doctored photographs casting candidates in a flattering or damaging light.
2. Photo Forensics

¹ Portions of this chapter have appeared in [5–8].
   FIG. 1. Schematic diagram of the imaging geometry for 3D surface normals (left) and 2D surface
normals (right). In the 2D case, the z-component of the surface normal N is zero.
\[
E(\vec{L}, A) \;=\; \left\| \, M \begin{pmatrix} L_x \\ L_y \\ L_z \\ A \end{pmatrix} - \begin{pmatrix} I(x_1, y_1) \\ I(x_2, y_2) \\ \vdots \\ I(x_p, y_p) \end{pmatrix} \right\|^2 \;=\; \left\| M \vec{v} - \vec{b} \, \right\|^2 \tag{2}
\]
   In [12], the authors suggest a clever solution for estimating two components of the
light source direction (Lx and Ly) from only a single image. While their approach
clearly provides less information regarding the light source direction, it does make
the problem tractable from a single image. The authors note that at the occluding
boundary of a surface, the z-component of the surface normal is zero, Nz = 0.
In addition, the x- and y-components of the surface normal, Nx and Ny, can be
estimated directly from the image (Fig. 1, right).
   With this assumption, the error function of Equation 2 takes the form
\[
E(\vec{L}, A) \;=\; \left\| \, M \begin{pmatrix} L_x \\ L_y \\ A \end{pmatrix} - \begin{pmatrix} I(x_1, y_1) \\ I(x_2, y_2) \\ \vdots \\ I(x_p, y_p) \end{pmatrix} \right\|^2 \;=\; \left\| M \vec{v} - \vec{b} \, \right\|^2 \tag{5}
\]
where
\[
M \;=\; \begin{pmatrix} N_x(x_1, y_1) & N_y(x_1, y_1) & 1 \\ N_x(x_2, y_2) & N_y(x_2, y_2) & 1 \\ \vdots & \vdots & \vdots \\ N_x(x_p, y_p) & N_y(x_p, y_p) & 1 \end{pmatrix}. \tag{6}
\]
   This error function is minimized, as before, using standard least squares to yield
the same solution as in Equation 4, but with the matrix M taking the form given in
Equation 6. In this case, the solution requires knowledge of 2D surface normals from
at least three distinct points (p ≥ 3) on a surface with the same reflectance.
   The intensity, I(xi, yi), at a boundary point, (xi, yi), cannot be directly measured
from the image as the surface is occluded. The authors in [12] note, however, that the
intensity can be extrapolated by considering the intensity profile along a ray coincident
to the 2D surface normal. They also found that simply using the intensity close
to the border of the surface is often sufficient.
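
To make the estimation concrete, here is a minimal sketch of the least-squares solve in Equations 5 and 6 (Python with NumPy; the function name and the toy data are ours for illustration, not from [12]):

```python
import numpy as np

def estimate_light_2d(normals, intensities):
    """Least-squares solve of Equations 5 and 6.

    normals:     p x 2 array of 2D surface normals (Nx, Ny) at occluding
                 boundary points, p >= 3
    intensities: length-p array of (extrapolated) intensities I(xi, yi)
    Returns the 2D light direction (Lx, Ly) and the ambient term A.
    """
    p = normals.shape[0]
    M = np.hstack([normals, np.ones((p, 1))])      # Equation 6
    v, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    return v[:2], v[2]

# Toy usage: boundary normals on a circular arc, lit from direction (1.0, 0.5)
theta = np.linspace(0.2, 1.2, 10)
N = np.column_stack([np.cos(theta), np.sin(theta)])
I = N @ np.array([1.0, 0.5]) + 0.1                 # Lambertian + ambient
L, A = estimate_light_2d(N, I)
print(L / np.linalg.norm(L))                       # recovered direction
```

Because the estimate is only determined up to the reflectance scale factor, only the direction of (Lx, Ly) is meaningful, which is why the recovered vector is normalized before use.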
   We extend this basic formulation in two ways. First, we estimate the two-dimensional
light source direction from local patches along an object’s boundary (as opposed to
along extended boundaries as in [12]). This is done to relax the assumption that the
reflectance along the entire surface is constant. Then, a regularization (smoothness)
term is introduced to better condition the final estimate of light source direction.
   The constant reflectance assumption is relaxed by assuming that the reflectance
for a local surface patch (as opposed to the entire surface) is constant. This requires
us to estimate individual light source directions, L^i, for each patch along a surface.
Under the infinite light source assumption, the orientation of these estimates should
not vary, but their magnitude may (recall that the estimate of the light source is only
within a scale factor, which depends on the reflectance value R, Equation 1).
   Consider a surface partitioned into n patches, and, for notational simplicity, assume
that each patch contains p points. The new error function to be minimized is constructed
by packing together, for each patch, the 2D version of the constraint of Equation 1:
\[
E_1(\vec{L}^1, \ldots, \vec{L}^n, A) \;=\; \left\| \, M \begin{pmatrix} L_x^1 \\ L_y^1 \\ \vdots \\ L_x^n \\ L_y^n \\ A \end{pmatrix} - \begin{pmatrix} I(x_1^1, y_1^1) \\ \vdots \\ I(x_p^1, y_p^1) \\ \vdots \\ I(x_1^n, y_1^n) \\ \vdots \\ I(x_p^n, y_p^n) \end{pmatrix} \right\|^2 \;=\; \left\| M \vec{v} - \vec{b} \, \right\|^2 \tag{7}
\]
where
\[
M \;=\; \begin{pmatrix}
N_x(x_1^1, y_1^1) & N_y(x_1^1, y_1^1) & \cdots & 0 & 0 & 1 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
N_x(x_p^1, y_p^1) & N_y(x_p^1, y_p^1) & \cdots & 0 & 0 & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & N_x(x_1^n, y_1^n) & N_y(x_1^n, y_1^n) & 1 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & N_x(x_p^n, y_p^n) & N_y(x_p^n, y_p^n) & 1
\end{pmatrix}. \tag{8}
\]
   The above quadratic error function is minimized, as before, using least squares
with the solution taking on the same form as in Equation 4. In this case, the
solution provides n estimates of the 2D light directions, L^1, ..., L^n, and an ambient
term A. Note that while individual light source directions are estimated for each
surface patch, a single ambient term is assumed.
   While the local estimation of light source directions allows for the relaxation of
the constant reflectance assumption, it could potentially yield less stable results.
Note that under the assumption of an infinite point light source, the orientation of the
n light directions should be equal. With the additional assumption that the change in
reflectance from patch to patch is relatively small (i.e., the change in the magnitude
of neighboring L^i is small), we can condition the individual estimates with the
following regularization term:
\[
E_2(\vec{L}^1, \ldots, \vec{L}^n) \;=\; \sum_{i=2}^{n} \left\| \vec{L}^i - \vec{L}^{i-1} \right\|^2. \tag{9}
\]
   This additional error term penalizes neighboring estimates that are different from
one another. The quadratic error function E_1(·) (Equation 7) is conditioned by
combining it with the regularization term E_2(·), scaled by a factor λ, to yield the final
error function:
\[
E(\vec{L}^1, \ldots, \vec{L}^n, A) \;=\; E_1(\vec{L}^1, \ldots, \vec{L}^n, A) + \lambda \, E_2(\vec{L}^1, \ldots, \vec{L}^n). \tag{10}
\]
   This combined error function can still be minimized using least-squares minimization.
The error function E_2(·) is first written in a more compact and convenient form as
\[
E_2(\vec{v}) \;=\; \left\| C \vec{v} \right\|^2, \tag{11}
\]
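
Although the definition of the matrix C is not shown in the excerpt above, the structure of Equations 10 and 11 is enough to sketch the solve: minimizing ||Mv − b||² + λ||Cv||² is an ordinary least-squares problem on the stacked system [M; √λ C]v = [b; 0]. A rough illustration (Python/NumPy; the variable names and the finite-difference form of C are our own assumptions):

```python
import numpy as np

def estimate_patch_lights(patch_normals, patch_intensities, lam=1.0):
    """Regularized solve of Equation 10 for n patches of p points each.

    patch_normals:     list of n arrays, each p x 2, of 2D normals
    patch_intensities: list of n length-p intensity arrays
    Returns an n x 2 array of light directions and the ambient term A.
    """
    n, p = len(patch_normals), patch_normals[0].shape[0]
    # Block-diagonal M of Equation 8: two unknowns per patch, plus A.
    M = np.zeros((n * p, 2 * n + 1))
    for i, Ni in enumerate(patch_normals):
        M[i * p:(i + 1) * p, 2 * i:2 * i + 2] = Ni
    M[:, -1] = 1.0
    b = np.concatenate(patch_intensities)
    # One realization of C (Equation 11): rows encode L^i - L^(i-1).
    C = np.zeros((2 * (n - 1), 2 * n + 1))
    for i in range(n - 1):
        C[2 * i:2 * i + 2, 2 * i:2 * i + 2] = -np.eye(2)
        C[2 * i:2 * i + 2, 2 * i + 2:2 * i + 4] = np.eye(2)
    # Stack M with sqrt(lam) * C and solve in one least-squares pass.
    A = np.vstack([M, np.sqrt(lam) * C])
    rhs = np.concatenate([b, np.zeros(2 * (n - 1))])
    v, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return v[:-1].reshape(n, 2), v[-1]
```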
   Shown in Fig. 2 are eight images of objects illuminated by the sun on a clear day.
To determine the accuracy of our approach, a calibration target, consisting of a flat
surface with a rod extending from the center, was placed in each scene. The target
was approximately parallel to the image plane, so that the shadow cast by the rod
indicated the direction of the sun. Errors in our estimated light source direction are
given relative to this orientation. The average estimation error is 4.8° with a
minimum and maximum error of 0.6° and 10.9°. The image returning the largest
error is the parking meters. There are probably at least three reasons for this larger
error, and for errors in general. The first is that the metallic surface violates the
Lambertian assumption. The second is that the paint on the meter is worn in several
spots causing the reflectance to vary, at times, significantly from patch to patch. And
the third is that we did not calibrate the camera so as to remove luminance
nonlinearities (e.g., gamma correction) in the image.
   The creation of a digital forgery often involves combining objects/people from
separate images. In so doing, it is difficult to exactly match the lighting effects due to
directional lighting (e.g., the sun on a clear day). At least one reason for this is that
such a manipulation may require the creation or removal of shadows and lighting
gradients. And while large inconsistencies in lighting direction may be fairly
obvious, there is evidence from the human psychophysics literature that we are
surprisingly insensitive to differences in lighting across an image [13, 14]. To the
extent that the direction of the light source can be estimated for different objects/
people in an image, inconsistencies in lighting can be used as evidence of digital
tampering.
   FIG. 2. Shown are eight images with the extracted occluding boundaries (black), individual light
source estimates (white), and the final average light source direction (large arrow). In each image, the cast
shadow on the calibration target indicates the direction to the illuminating sun, and has been darkened to
enhance visibility.
   FIG. 3. The formation of a specular highlight on an eye (small white dot on the iris). The position of
the highlight is determined by the surface normal N and the relative directions to the light source L and
viewer V.
view direction V = R. For an imperfect reflector, a specular highlight can be seen
for viewing directions V near R, with the strongest highlight seen when V = R.
   An algebraic relationship between the vectors L, N, and V is first derived. We
then show how the 3D vectors N and V can be estimated from a single image, from
which the direction to the light source L is determined.
   The law of reflection states that a light ray reflects off of a surface at an angle of
reflection θr equal to the angle of incidence θi, where these angles are measured with
respect to the surface normal N (Fig. 3).
   Assuming unit length vectors, the direction of the reflected ray R can be
described in terms of the light direction L and the surface normal N:
\[
\vec{R} \;=\; \vec{L} + 2\left(\cos(\theta_i)\,\vec{N} - \vec{L}\right) \;=\; 2\cos(\theta_i)\,\vec{N} - \vec{L}. \tag{16}
\]
By assuming a perfect reflector (V = R), the above constraint yields
\[
\vec{L} \;=\; 2\cos(\theta_i)\,\vec{N} - \vec{V} \;=\; 2\left(\vec{V}^T\vec{N}\right)\vec{N} - \vec{V}. \tag{17}
\]
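
In code, Equation 17 is a one-liner. A minimal sketch (Python/NumPy; the function name is ours):

```python
import numpy as np

def light_from_highlight(N, V):
    """Equation 17: light direction from the unit surface normal N and
    unit view direction V at a specular highlight."""
    return 2 * (V @ N) * N - V
```

As a sanity check, with N = (0, 0, 1) and V = (sin t, 0, cos t), the estimate is the mirror direction (−sin t, 0, cos t), as expected.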
   The light direction L can, therefore, be estimated from the surface normal N and
view direction V at a specular highlight. Note that the light direction is specified with
respect to the eye, and not the camera. In practice, all of these vectors will be placed in a
common coordinate system, allowing us to compare light directions across the image.
   To estimate the surface normal N and view direction V in a common coordinate
system, we first need to estimate the projective transform that describes the trans-
formation from world to image coordinates. With only a single image, this
focal length, camera center, skew, and aspect ratio. For simplicity, we will assume that
the camera center is the image center and that the skew is 0 and the aspect ratio is 1,
leaving only the focal length f. The extrinsic parameters consist of a rotation matrix R
and translation vector t that define the transformation between the world and camera
coordinate systems. Since the world points lie on a single plane, the projective transform
can be decomposed in terms of the intrinsic and extrinsic parameters as
\[
H \;=\; \lambda K \begin{pmatrix} \vec{r}_1 & \vec{r}_2 & \vec{t} \end{pmatrix}, \tag{21}
\]
where λ is a scale factor and r1 and r2 are the first two columns of the rotation
matrix R (its third column is r3 = r1 × r2, where × denotes cross product). If the
focal length is unknown, it can be directly estimated as described in [6].
   Recall that the minimization of Equation 20 yields both the transform H and the
circle parameters for the limbus. The unit vector from the center of the limbus to
the origin of the camera coordinate system is the view direction, v. Let
Xc = (C1 C2 1) denote the estimated center of a limbus in world coordinates. In
the camera coordinate system, this point is given by
\[
\vec{x}_c \;=\; \hat{H} \vec{X}_c. \tag{25}
\]
   The view direction, as a unit vector, in the camera coordinate system is then given by
\[
\vec{v} \;=\; -\frac{\vec{x}_c}{\left\| \vec{x}_c \right\|}, \tag{26}
\]
   FIG. 4. (A) A side view of a 3D model of the human eye. The larger sphere represents the sclera and
the smaller sphere represents the cornea. The limbus is defined by the intersection of the two spheres.
(B) The surface normal at a point S in the plane of the limbus depends on the view direction V.
where the negative sign reverses the vector so that it points from the eye to
the camera.
   The 3D surface normal N at a specular highlight is estimated from a 3D model of
the human eye [18]. The model consists of a pair of spheres as illustrated in Fig. 4A.
The larger sphere, with radius r1 = 11.5 mm, represents the sclera and the smaller
sphere, with radius r2 = 7.8 mm, represents the cornea. The centers of the spheres
are displaced by a distance d = 4.7 mm. The limbus, a circle with radius p = 5.8 mm,
is defined by the intersection of the two spheres. The distance between the center of
the smaller sphere and the plane containing the limbus is q = 5.25 mm. These
measurements vary slightly among adults, and the radii of the spheres are approxi-
mately 0.1 mm smaller for female eyes [18, 19].
   Consider a specular highlight in world coordinates at location S = (Sx Sy),
measured with respect to the center of the limbus. The surface normal at S depends
on the view direction V. Fig. 4B is a schematic showing this relationship for two
different positions of the camera. The surface normal N is determined by intersect-
ing the ray leaving S, along the direction V, with the edge of the sphere. This
intersection can be computed by solving a quadratic system for k, the distance
between S and the edge of the sphere:
\[
(S_x + kV_x)^2 + (S_y + kV_y)^2 + (q + kV_z)^2 \;=\; r_2^2,
\]
\[
k^2 + 2\left(S_x V_x + S_y V_y + q V_z\right)k + \left(S_x^2 + S_y^2 + q^2 - r_2^2\right) \;=\; 0, \tag{27}
\]
where q and r2 are specified by the 3D model of the eye. The view direction
V = (Vx Vy Vz) in the world coordinate system is given by
\[
\vec{V} \;=\; R^{-1}\vec{v}, \tag{28}
\]
where v is the view direction in camera coordinates and R is the estimated rotation
between the world and camera coordinate systems. The surface normal N in the
world coordinate system is then given by
\[
\vec{N} \;=\; \begin{pmatrix} S_x + kV_x \\ S_y + kV_y \\ q + kV_z \end{pmatrix}, \tag{29}
\]
and in camera coordinates: n = RN.
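
A minimal sketch of Equations 27 and 29 (Python/NumPy; the function name is ours, r2 and q are the eye-model constants given above, and we take the positive root of the quadratic, i.e., the intersection along V toward the viewer):

```python
import numpy as np

R2, Q = 7.8, 5.25   # cornea radius r2 and limbus offset q, in mm [18]

def cornea_normal(S, V):
    """Solve Equation 27 for k, then form the unit surface normal of
    Equation 29. S = (Sx, Sy) in the limbus plane; V = (Vx, Vy, Vz) is
    the unit view direction in world coordinates."""
    Sx, Sy = S
    Vx, Vy, Vz = V
    b = 2 * (Sx * Vx + Sy * Vy + Q * Vz)
    c = Sx**2 + Sy**2 + Q**2 - R2**2
    k = (-b + np.sqrt(b**2 - 4 * c)) / 2     # positive root of Eq. 27
    N = np.array([Sx + k * Vx, Sy + k * Vy, Q + k * Vz])
    return N / np.linalg.norm(N)             # unit-length normal
```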
   Consider a specular highlight xs specified in image coordinates and the estimated
projective transform H from world to image coordinates. The inverse transform H⁻¹
maps the coordinates of the specular highlight into world coordinates:
\[
\vec{X}_s \;=\; H^{-1}\vec{x}_s. \tag{30}
\]
   The center C and radius r of the limbus in the world coordinate system determine
the coordinates of the specular highlight, S, with respect to the model:
\[
\vec{S} \;=\; \frac{p}{r}\left(\vec{X}_s - \vec{C}\right), \tag{31}
\]
where p is specified by the 3D model of the eye. The position of the specular
highlight S is then used to determine the surface normal N. Combined with the
estimate of the view direction V, the light source direction L can be estimated from
Equation 17. To compare light source estimates in the image, the light source
estimate is converted to camera coordinates: l = RL.
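
Chaining these pieces gives a rough per-highlight pipeline (a sketch under the same assumptions; cornea_normal and light_from_highlight are the sketches above, and P is the limbus radius p of the eye model):

```python
import numpy as np

P = 5.8   # limbus radius p, in mm, from the eye model [18]

def light_direction_camera(xs, H, R, C, r, v):
    """Chain Equations 30, 31, 28, 27/29, and 17 for one highlight.

    xs: specular highlight in homogeneous image coordinates (3-vector)
    H:  world-to-image projective transform; R: world-to-camera rotation
    C:  limbus center in world coordinates (C1, C2, 1); r: limbus radius
    v:  unit view direction in camera coordinates (Equation 26)
    Returns the light direction in camera coordinates.
    """
    Xs = np.linalg.solve(H, xs)          # Equation 30 (apply H^-1)
    Xs = Xs / Xs[2]                      # dehomogenize
    S = (P / r) * (Xs[:2] - C[:2])       # Equation 31
    V = np.linalg.solve(R, v)            # Equation 28 (apply R^-1)
    N = cornea_normal(S, V)              # Equations 27 and 29
    L = light_from_highlight(N, V)       # Equation 17
    return R @ L                         # camera coordinates
```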
   To test the efficacy of this light estimation, synthetic images of eyes were
rendered using the pbrt environment [20]. The shape of the eyes conformed to the
3D model described above and the eyes were placed in 1 of 12 different locations.
For each location, the eyes were rotated by a unique amount relative to the camera.
The eyes were illuminated with two light sources: a fixed light directly in line with
the camera and a second light placed in one of four different positions. The 12
locations and 4 light directions gave rise to 48 images (Fig. 5). Each image was
rendered at a resolution of 1200 × 1600 pixels, with the cornea occupying less than
0.1% of the entire image. Shown in Fig. 5 are several examples of the rendered eyes,
along with a schematic of the imaging geometry.
   The limbus and position of the specular highlight(s) were automatically extracted
from the rendered image. For each highlight, the projective transform H, the view
direction v, and surface normal n were estimated, from which the direction to the
light source l was determined. The angular error between the estimated l and actual
   FIG. 5. Synthetically generated eyes. Each of the upper panels corresponds to different positions and
orientations of the eyes and locations of the light sources. The ellipse fit to each limbus is shown with a
dashed line, and the small dots denote the positions of the specular highlights. Shown below is a schematic
of the imaging geometry: the position of the lights, camera, and a subset of the eye positions.
l′ light directions is computed as φ = cos⁻¹(lᵀ l′), where the vectors are
normalized to be unit length. With a known focal length, the average angular error
in estimating the light source direction was 2.8° with a standard deviation of 1.3° and
a maximum error of 6.8°. With an unknown focal length, the average error was 2.8°
with a standard deviation of 1.3° and a maximum error of 6.3°.
   To further test the efficacy of our technique, we photographed a subject
under controlled lighting. A camera and two lights were arranged along a wall,
and the subject was positioned 250 cm in front of the camera and at the same
elevation. The first light L1 was positioned 130 cm to the left of and 60 cm above
the camera. The second light L2 was positioned 260 cm to the right and 80 cm
above the camera. The subject was placed in five different locations and orientations
relative to the camera and lights (Fig. 6). A 6-megapixel Nikon D100 camera with a
35 mm lens was set to capture in the highest quality JPEG format.
   For each image, an ellipse was manually fit to the limbus of each eye. In these
images, the limbus did not form a sharp boundary—the boundary spanned roughly 3
pixels. As such, we fit the ellipses to the better defined inner outline [21] (Fig. 6).
   FIG. 6. A subject at different locations and orientations relative to the camera and two light sources.
Shown to the right are magnified views of the eyes. The ellipse fit to each limbus is shown with a dashed
line and the small dots denote the positions of the specular highlights. See also Table I.
The radius of each limbus was approximately 9 pixels, and the cornea occupied
0.004% of the entire image. Each specular highlight was localized by specifying a
bounding rectangular area around each highlight and computing the centroid of the
selection. The weighting function for the centroid computation was chosen to be the
squared (normalized) pixel intensity. The location to the light source(s) was esti-
mated for each pair of eyes assuming a known and unknown focal length. The
angular errors for each image are given in Table I. Note that in some cases an
estimate for one of the light sources was not possible when the highlight was not
visible on the cornea. With a known focal length, the average angular error was 8.6°,
and with an unknown focal length, the average angular error was 10.5°.
   When creating a composite of two or more people, it is often difficult to match the
lighting conditions under which each person was originally photographed. Specular
highlights that appear on the eye are a powerful cue as to the shape, color, and
location of the light source(s). Inconsistencies in these properties of the light can be
used as evidence of tampering. We can measure the 3D direction to a light source
from the position of the highlight on the eye. While we have not specifically focused
on it, the shape and color of a highlight are relatively easy to quantify and measure
and should also prove helpful in exposing digital forgeries. Since specular highlights
tend to be relatively small on the eye, it is possible to manipulate them to conceal
traces of tampering. To do so, the shape, color, and location of the highlight would
have to be constructed so as to be globally consistent with the lighting in other parts
of the image. Inconsistencies in this lighting may be detectable using the technique
described in the previous section.
                                            Table I
     ANGULAR ERRORS (°) IN ESTIMATING THE LIGHT DIRECTION FOR THE IMAGES SHOWN IN FIG. 6
                 On the left are the errors for a known focal length, and on the right
               are the errors for an unknown focal length. A "–" indicates that the
               specular highlight for that light was not visible on the cornea.
   FIG. 7. Schematic showing the surface normal N, the direction V, and a point X on the sphere.
   The irradiance, E(N), can be parametrized by the unit length surface normal N
and written as a convolution of the reflectance function of the surface, R(V, N),
with the lighting environment L(V):
\[
E(\vec{N}) \;=\; \int_{\Omega} L(\vec{V})\, R(\vec{V}, \vec{N})\, d\Omega, \tag{32}
\]
where Ω represents the surface of the sphere and dΩ is an area differential on the
sphere. For a Lambertian surface, the reflectance function is a clamped cosine:
\[
R(\vec{V}, \vec{N}) \;=\; \max\left(\vec{V}\cdot\vec{N},\, 0\right), \tag{33}
\]
which is either the cosine of the angle between the vectors V and N, or zero when
the angle is greater than 90°. This reflectance function effectively limits the integra-
tion in Equation 32 to the hemisphere about the surface normal N (Fig. 7). In
addition, while we have assumed no cast shadows, Equation 33 explicitly models
attached shadows, that is, shadows due to surface normals facing away from the
direction V.
   The convolution in Equation 32 can be simplified by expressing both the lighting
environment and the reflectance function in terms of spherical harmonics. Spherical
harmonics form an orthonormal basis for piecewise continuous functions on the
sphere and are analogous to the Fourier basis on the line or plane. The first three
orders of spherical harmonics are shown in Fig. 8 and defined as
\[
\begin{aligned}
Y_{0,0}(\vec{N}) &= \frac{1}{\sqrt{4\pi}}, &
Y_{1,-1}(\vec{N}) &= \sqrt{\frac{3}{4\pi}}\, y, &
Y_{1,0}(\vec{N}) &= \sqrt{\frac{3}{4\pi}}\, z, \\
Y_{1,1}(\vec{N}) &= \sqrt{\frac{3}{4\pi}}\, x, &
Y_{2,-2}(\vec{N}) &= 3\sqrt{\frac{5}{12\pi}}\, xy, &
Y_{2,-1}(\vec{N}) &= 3\sqrt{\frac{5}{12\pi}}\, yz, \\
Y_{2,0}(\vec{N}) &= \frac{1}{2}\sqrt{\frac{5}{4\pi}}\left(3z^2 - 1\right), &
Y_{2,1}(\vec{N}) &= 3\sqrt{\frac{5}{12\pi}}\, xz, &
Y_{2,2}(\vec{N}) &= \frac{3}{2}\sqrt{\frac{5}{12\pi}}\left(x^2 - y^2\right),
\end{aligned}
\]
where N = (x y z) in Cartesian coordinates.
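
For reference, these nine basis functions translate directly into code. A short sketch (Python/NumPy; the function name and column ordering are ours):

```python
import numpy as np

def sh_basis(N):
    """Evaluate the nine spherical harmonics above at unit normals.
    N: p x 3 array with rows (x, y, z). Returns a p x 9 array, columns
    ordered Y00, Y1-1, Y10, Y11, Y2-2, Y2-1, Y20, Y21, Y22."""
    x, y, z = N[:, 0], N[:, 1], N[:, 2]
    c1 = np.sqrt(3 / (4 * np.pi))
    c2 = 3 * np.sqrt(5 / (12 * np.pi))
    return np.column_stack([
        np.full_like(x, 1 / np.sqrt(4 * np.pi)),
        c1 * y, c1 * z, c1 * x,
        c2 * x * y, c2 * y * z,
        0.5 * np.sqrt(5 / (4 * np.pi)) * (3 * z**2 - 1),
        c2 * x * z,
        1.5 * np.sqrt(5 / (12 * np.pi)) * (x**2 - y**2),
    ])
```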
   The lighting environment expanded in terms of these spherical harmonics is
\[
L(\vec{V}) \;=\; \sum_{n=0}^{\infty} \sum_{m=-n}^{n} l_{n,m}\, Y_{n,m}(\vec{V}), \tag{34}
\]
where Y_{n,m}(·) is the mth spherical harmonic of order n, and l_{n,m} is the corresponding
coefficient of the lighting environment. Similarly, the reflectance function for
Lambertian surfaces, R(V, N), can be expanded in terms of spherical harmonics,
   FIG. 8. The first three orders of spherical harmonics as functions on the sphere. Shown from top to
bottom are the order zero spherical harmonic, Y_{0,0}(·); the three order one spherical harmonics, Y_{1,m}(·); and
the five order two spherical harmonics, Y_{2,m}(·).
and due to its symmetry about the surface normal, only harmonics with m = 0 appear
in the expansion
\[
R(\vec{V}, \vec{N}) \;=\; \sum_{n=0}^{\infty} r_n\, Y_{n,0}\!\left(\begin{pmatrix} 0 & 0 & \vec{V}^T\vec{N} \end{pmatrix}^T\right). \tag{35}
\]
   Note that for m = 0, the spherical harmonic Y_{n,0}(·) depends only on the
z-component of its argument.
   Convolutions of functions on the sphere become products when represented in
terms of spherical harmonics [22, 23]. As a result, the irradiance (Equation 32) takes
the form
\[
E(\vec{N}) \;=\; \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \hat{r}_n\, l_{n,m}\, Y_{n,m}(\vec{N}), \tag{36}
\]
where
\[
\hat{r}_n \;=\; \sqrt{\frac{4\pi}{2n+1}}\; r_n. \tag{37}
\]
   The key observation in [22, 23] was that the coefficients \(\hat{r}_n\) for a Lambertian
reflectance function decay rapidly, and thus the infinite sum in Equation 36 can be
well approximated by the first nine terms:
\[
E(\vec{N}) \;\approx\; \sum_{n=0}^{2} \sum_{m=-n}^{n} \hat{r}_n\, l_{n,m}\, Y_{n,m}(\vec{N}). \tag{38}
\]
    Since the constants r^n are known for a Lambertian reflectance function, the
irradiance of a convex Lambertian surface under arbitrary distant lighting can be
well modeled by the nine lighting environment coefficients ln; m up to order 2.
    Irradiance describes the total amount of light reaching a point on a surface. For a
Lambertian surface, the reflected light, or radiosity, is proportional to the irradiance
by a reflectance term r. In addition, Lambertian surfaces emit light uniformly in all
directions, so the amount of light received by a viewer (i.e., camera) is independent
of the view direction.
    A camera maps its received light to intensity through a camera response function
f ðÞ. Assuming the reflectance term r is constant across the surface, the measured
                     !
intensity at a point x in the image is given by [24]
                                !       !  !
                              I x ¼ f rtE N x              ;                        ð39Þ
                                 ! !                                   !
where EðÞ is the irradiance, N x is the surface normal at point x , and t is the
exposure time. For simplicity, we assume a linear camera response, and thus the
intensity is related to the irradiance by an unknown multiplicative factor, which is
assumed to have unit value—this assumption implies that the lighting coefficients
can only be estimated to within an unknown scale factor. Under these assumptions,
the relationship between image intensity and irradiance is simply
$$I(\vec{x}) = E(\vec{N}(\vec{x})). \qquad (40)$$
  Since, under our assumptions, the intensity is equal to irradiance, Equation 40 can
be written in terms of spherical harmonics by expanding Equation 38:
$$\begin{aligned} I(\vec{x}) ={}& l_{0,0}\, \pi\, Y_{0,0}(\vec{N}) + l_{1,-1} \frac{2\pi}{3} Y_{1,-1}(\vec{N}) + l_{1,0} \frac{2\pi}{3} Y_{1,0}(\vec{N}) + l_{1,1} \frac{2\pi}{3} Y_{1,1}(\vec{N}) \\ &+ l_{2,-2} \frac{\pi}{4} Y_{2,-2}(\vec{N}) + l_{2,-1} \frac{\pi}{4} Y_{2,-1}(\vec{N}) + l_{2,0} \frac{\pi}{4} Y_{2,0}(\vec{N}) \\ &+ l_{2,1} \frac{\pi}{4} Y_{2,1}(\vec{N}) + l_{2,2} \frac{\pi}{4} Y_{2,2}(\vec{N}). \qquad (41) \end{aligned}$$
Note that this expression is linear in the nine lighting environment coefficients, $l_{0,0}$ to $l_{2,2}$. As such, given 3D surface normals at $p \ge 9$ points on the surface of an object, the lighting environment coefficients can be estimated as the least-squares solution to the following system of linear equations:
$$\begin{pmatrix} \pi Y_{0,0}(\vec{N}(\vec{x}_1)) & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_1)) & \cdots & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_1)) \\ \pi Y_{0,0}(\vec{N}(\vec{x}_2)) & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_2)) & \cdots & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_2)) \\ \vdots & \vdots & & \vdots \\ \pi Y_{0,0}(\vec{N}(\vec{x}_p)) & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_p)) & \cdots & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_p)) \end{pmatrix} \begin{pmatrix} l_{0,0} \\ l_{1,-1} \\ \vdots \\ l_{2,2} \end{pmatrix} = \begin{pmatrix} I(\vec{x}_1) \\ I(\vec{x}_2) \\ \vdots \\ I(\vec{x}_p) \end{pmatrix}, \qquad (42)$$
$$M \vec{v} = \vec{b},$$
where M is the matrix containing the sampled spherical harmonics, $\vec{v}$ is the vector of unknown lighting environment coefficients, and $\vec{b}$ is the vector of intensities at p points. The least-squares solution to this system is
$$\vec{v} = \left( M^T M \right)^{-1} M^T \vec{b}. \qquad (43)$$
This solution requires 3D surface normals from at least nine points on the surface of an object. Without multiple images or known geometry, however, this requirement may be difficult to satisfy from an arbitrary image.
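As a sketch of this estimation step (continuing the NumPy example above and reusing its sh_basis and R_HAT; np.linalg.lstsq is used in place of the explicit normal equations of Equation 43, which it solves more stably):

def estimate_lighting_3d(N, I):
    # N: (p, 3) unit surface normals with p >= 9;  I: (p,) intensities.
    # Build the matrix M of Equation 42 and solve M v = b in the
    # least-squares sense (Equation 43).
    M = sh_basis(N) * R_HAT
    v, *_ = np.linalg.lstsq(M, I, rcond=None)
    return v    # (9,) estimated lighting environment coefficients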
   As in [5, 12], we observe that under an assumption of orthographic projection, the
z-component of the surface normal is zero along the occluding contour of an object.
Therefore, the intensity profile along an occluding contour simplifies to
$$I(\vec{x}) = A + l_{1,-1} \frac{2\pi}{3} Y_{1,-1}(\vec{N}) + l_{1,1} \frac{2\pi}{3} Y_{1,1}(\vec{N}) + l_{2,-2} \frac{\pi}{4} Y_{2,-2}(\vec{N}) + l_{2,2} \frac{\pi}{4} Y_{2,2}(\vec{N}), \qquad (44)$$
where
$$A = l_{0,0} \frac{\pi}{2\sqrt{\pi}} - l_{2,0} \frac{\pi}{16} \sqrt{\frac{5}{\pi}}. \qquad (45)$$
Note that the functions $Y_{i,j}(\cdot)$ depend only on the x- and y-components of the surface normal $\vec{N}$. That is, the five lighting coefficients can be estimated from only 2D surface normals, which are relatively simple to estimate from a single image.2
2 The 2D surface normal is the gradient vector of an implicit curve fit to the edge of an object.
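As a rough illustration of how such normals might be obtained (a simplification of the footnote's implicit-curve fit, not the chapter's method; this version merely rotates finite-difference tangents by 90 degrees, and the sign of the normals depends on the winding direction of the contour):

def contour_normals_2d(contour):
    # contour: (p, 2) ordered (x, y) points along a closed occluding contour.
    # Central differences give tangents; rotating a tangent by 90 degrees
    # gives a (possibly inward-pointing) normal, which is then unit-normalized.
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)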
In addition, Equation 44 is still linear in its now five lighting environment coefficients, which can be estimated as the least-squares solution to
$$\begin{pmatrix} 1 & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_1)) & \frac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_1)) & \frac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_1)) & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_1)) \\ 1 & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_2)) & \frac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_2)) & \frac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_2)) & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_2)) \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \frac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_p)) & \frac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_p)) & \frac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_p)) & \frac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_p)) \end{pmatrix} \begin{pmatrix} A \\ l_{1,-1} \\ l_{1,1} \\ l_{2,-2} \\ l_{2,2} \end{pmatrix} = \begin{pmatrix} I(\vec{x}_1) \\ I(\vec{x}_2) \\ \vdots \\ I(\vec{x}_p) \end{pmatrix}, \qquad (46)$$
$$M \vec{v} = \vec{b}, \qquad (47)$$
which has the same least-squares solution as before:
$$\vec{v} = \left( M^T M \right)^{-1} M^T \vec{b}. \qquad (48)$$
Note that this solution only provides five of the nine lighting environment coefficients. We will show, however, that this subset of coefficients is still sufficiently descriptive for forensic analysis.
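A sketch of the five-coefficient estimate (again our own NumPy illustration; the column constants are the order-one and order-two spherical harmonics of Equation 46 restricted to normals with zero z-component):

def estimate_lighting_2d(N2, I):
    # N2: (p, 2) unit 2D normals along the occluding contour; I: (p,) intensities.
    # Build the matrix M of Equation 46 and solve for
    # v = (A, l_{1,-1}, l_{1,1}, l_{2,-2}, l_{2,2}) as in Equation 48.
    x, y = N2[:, 0], N2[:, 1]
    M = np.stack([
        np.ones_like(x),                         # constant term A
        (2 * np.pi / 3) * 0.488603 * y,          # l_{1,-1} column
        (2 * np.pi / 3) * 0.488603 * x,          # l_{1,1} column
        (np.pi / 4) * 1.092548 * x * y,          # l_{2,-2} column
        (np.pi / 4) * 0.546274 * (x**2 - y**2),  # l_{2,2} column
    ], axis=1)
    v, *_ = np.linalg.lstsq(M, I, rcond=None)
    return M, v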
When analyzing the occluding contours of objects in real images, it is often the case that the range of surface normals is limited, leading to an ill-conditioned matrix M. This limitation can arise from many sources, including occlusion or object geometry. As a result, small amounts of noise in either the surface normals or the measured intensities can cause large variations in the estimate of the lighting environment vector $\vec{v}$. To better condition the estimate, an error function $E(\vec{v})$ is defined that combines the least-squares error of the original linear system with a regularization term:
$$E(\vec{v}) = \left\| M \vec{v} - \vec{b} \right\|^2 + \lambda \left\| C \vec{v} \right\|^2, \qquad (49)$$
where $\lambda$ is a scalar and the matrix C is diagonal with (1 2 2 3 3) on the diagonal. The matrix C is designed to dampen the effects of higher order harmonics and is motivated by the observation that the average power of spherical harmonic coefficients for natural lighting environments decreases with increasing harmonic order [25].
For the full lighting model when 3D surface normals are available (Equation 42), the matrix C has (1 2 2 2 3 3 3 3 3) on the diagonal.
The error function to be minimized (Equation 49) is a least-squares problem with a Tikhonov regularization [26]. The analytic minimum is found by differentiating with respect to $\vec{v}$:
$$\frac{dE(\vec{v})}{d\vec{v}} = 2 M^T M \vec{v} - 2 M^T \vec{b} + 2 \lambda C^T C \vec{v} = 2 \left( M^T M + \lambda C^T C \right) \vec{v} - 2 M^T \vec{b}, \qquad (50)$$
setting the result equal to zero, and solving for $\vec{v}$:
$$\vec{v} = \left( M^T M + \lambda C^T C \right)^{-1} M^T \vec{b}. \qquad (51)$$
In practice, we have found that the conditioned estimate in Equation 51 is appropriate if less than 180° of surface normals are available along the occluding contour. If more than 180° of surface normals are available, the least-squares estimate (Equation 48) can be used, though both estimates will give similar results for small values of $\lambda$.
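In code, the conditioned estimate is a one-line change from the plain least-squares solve (a sketch under the same assumptions as before; M and b are the matrix and intensity vector of Equation 46, and lam plays the role of λ):

def estimate_lighting_2d_regularized(M, b, lam=0.01):
    # Tikhonov-regularized solution of Equation 51 for the five-coefficient
    # model; C dampens the higher-order (order two) harmonics most strongly.
    C = np.diag([1.0, 2.0, 2.0, 3.0, 3.0])
    return np.linalg.solve(M.T @ M + lam * (C.T @ C), M.T @ b)

For the nine-coefficient model the same code applies with C = np.diag([1, 2, 2, 2, 3, 3, 3, 3, 3]).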
The estimated coefficient vector $\vec{v}$ (Equation 51) is a low-order approximation of the lighting environment. For forensic purposes, we would like to differentiate between lighting environments based on these coefficients. Intuitively, coefficients
from objects in different lighting environments should be distinguishable, while
coefficients from objects in the same lighting environment should be similar. In
addition, measurable differences in sets of coefficients should be mostly due to
differences in the lighting environment and not to other factors such as object color
or image exposure. Taking these issues into consideration, we propose an error
measure between two estimated lighting environments. Let $\vec{v}_1$ and $\vec{v}_2$ be two vectors of lighting environment coefficients. From these coefficients, the irradiance profile along a circle (2D) or a sphere (3D) is synthesized, from which the error is computed. The irradiance profiles corresponding to $\vec{v}_1$ and $\vec{v}_2$ are given by
$$\vec{x}_1 = M \vec{v}_1, \qquad (52)$$
$$\vec{x}_2 = M \vec{v}_2, \qquad (53)$$
where the matrix M is of the form in Equation 42 (for 3D normals) or Equation 46
(for 2D normals). After subtracting the mean, the correlation between these zero-
mean profiles is
$$\mathrm{corr}(\vec{x}_1, \vec{x}_2) = \frac{\vec{x}_1^T \vec{x}_2}{\left\| \vec{x}_1 \right\| \left\| \vec{x}_2 \right\|}. \qquad (54)$$
In practice, this correlation can be computed directly from the lighting environment coefficients:
$$\mathrm{corr}(\vec{v}_1, \vec{v}_2) = \frac{\vec{v}_1^T Q \vec{v}_2}{\sqrt{\vec{v}_1^T Q \vec{v}_1} \sqrt{\vec{v}_2^T Q \vec{v}_2}}, \qquad (55)$$
where the matrix Q for both the 2D and 3D cases is derived in [7]. By design, this correlation is invariant to both additive and multiplicative factors on the irradiance profiles $\vec{x}_1$ and $\vec{x}_2$. Recall that our coefficient vectors $\vec{v}_1$ and $\vec{v}_2$ are estimated to within an unknown multiplicative factor. In addition, different exposure times under a nonlinear camera response function can introduce an additive bias. The correlation is, therefore, invariant to these factors and produces values in the interval [−1, 1].
The final error is then given by
                           ! !  1                ! ! 
                         D v1 ; v2 ¼       1  corr v1 ; v2 ;                     ð56Þ
                                        2
with values in the range [0, 1].
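The profile-based form of this measure is straightforward to implement (a NumPy sketch; M is a design matrix of the form in Equation 42 or 46 sampled over a sphere or circle of normals, rather than the chapter's closed-form matrix Q):

def lighting_error(v1, v2, M):
    # Synthesize irradiance profiles (Equations 52-53), remove their means,
    # correlate (Equation 54), and map to the error D in [0, 1] (Equation 56).
    x1, x2 = M @ v1, M @ v2
    x1 = x1 - x1.mean()
    x2 = x2 - x2.mean()
    corr = (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    return 0.5 * (1.0 - corr)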
To test our ability to discriminate between lighting environments, we photographed a diffuse sphere in 28 different locations with a 6.3-megapixel Nikon D-100 digital camera set to capture in high-quality JPEG mode. The focal length was set to 70 mm, the f-stop was fixed at f/8, and the shutter speed was varied to capture two or three exposures per location. In total, there were 68 images. For each image, the Adobe Photoshop “Quick Selection Tool” was used to locate the occluding contour of the sphere, from which both 2D and 3D surface normals could be estimated. The 3D surface normals were used to estimate the full set of nine lighting environment coefficients, and the 2D surface normals along the occluding contour were used to estimate five coefficients. For both cases, the regularization term λ in Equation 51 was set to 0.01. For each pair of images, the error (Equation 56) between the estimated coefficients was computed. In total, there were 2278 image pairs: 52 pairs were different exposures from the same location and 2226 pairs were captured in different locations. The errors for all pairs for both models (3D and 2D) are shown in Fig. 9. In both plots, the 52 image pairs from the same location are plotted first (“+”), sorted by error. The 2226 pairs from different locations are plotted next (“·”). Note that the axes are scaled logarithmically. For the 3D case, the minimum error between an image pair from different locations is 0.0027 and the maximum error between an image pair from the same location is 0.0023. Therefore, the two sets of data, same location versus different location, are separated by a threshold of 0.0025. For the 2D case, 13 image pairs (0.6%) fell below 0.0025. These image pairs correspond to lighting environments that are indistinguishable based on the five-coefficient model.
FIG. 9. Errors between image pairs corresponding to the same (“+”) and different (“·”) locations using the full nine-parameter model with 3D surface normals (top) and using the five-parameter model with 2D surface normals (bottom). Both the horizontal and vertical axes are scaled logarithmically.
3 http://www.flickr.com
FIG. 10. Superimposed on each image are the contours from which the surface normals and intensity values are extracted to form the matrix M and the corresponding vector $\vec{b}$ (Equation 47).
   FIG. 11. Shown on the left are three forgeries: the ducks, swans, and football coach were each added
into their respective images. Shown on the right are the analyzed regions superimposed in white, and
spheres rendered from the estimated lighting coefficients.
an illuminating light source (Section 2.1). The work described here generalizes this
approach by allowing us to estimate more complex models of lighting and in fact can
be adapted to estimate the direction to a single light source. Specifically, by
considering only the two first-order spherical harmonics, $Y_{1,-1}(\cdot)$ and $Y_{1,1}(\cdot)$, the direction to a light source can be estimated as $\tan^{-1}\!\left( l_{1,-1} / l_{1,1} \right)$.
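In code this reduces to a single two-argument arctangent (our illustration; the indexing assumes the five-coefficient ordering (A, l_{1,-1}, l_{1,1}, l_{2,-2}, l_{2,2}) used above, and arctan2 resolves the quadrant that a plain arctangent would leave ambiguous):

def light_direction(v):
    # Image-plane light direction, in radians, from the first-order terms.
    return np.arctan2(v[1], v[2])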
   When creating a composite of two or more people, it is often difficult to exactly
match the lighting, even if the lighting seems perceptually consistent. The reason for
this is that complex lighting environments (multiple light sources, diffuse lighting,
  As an example of the progress being made toward speeding up
computers, speakers at the recent Winter General Meeting of the
American Institute of Electrical Engineers described a coming
generation of “gigacycle” computers now on the drawing boards.
Present electronic machines operate at speeds in the megacycle
range, with 50 million cycles per second representing the most
advanced state of the art. Giga means billion; thus the new round of
computers will be some thousand times as fast as those now
operating.
  Among the firms who plan such ultraspeed computers are RCA,
IBM, and Sperry Rand Corporation. To achieve such a great increase
in speed requires faster electronic switches. Transistors have been
improved, and more exotic devices such as tunnel diodes, thin-film
cryotrons, magnetic thin-films, parametrons, and traveling-wave
tubes are now coming into use. Much of the development work is
being supported by the U.S. Bureau of Ships. Operational gigacycle
computers are expected within two years!
   Not just the brickmaker, but the architect too has been busy in the
job of optimizing the computer. The science of bionics and the study
of symbolic logic lead to better ways of doing things. The computer
itself comes up with improvements for its next generation, making
one part do the work of five, and eliminating the need for whole
sections of circuitry. Most computers have a fixed “clock”; that is,
they operate at a certain cyclic rate. Now appearing on the scene
are “asynchronous” computers which don’t stand around waiting
when one job is done, as their predecessors did.
   One advanced notion is the “growing” of complex electronic
circuitry, in which a completed amplifier, or array of amplifiers, is
pulled from the crystal furnace much the way material for transistors
is now grown. Pooh-poohed at first as ridiculous, the notion has
been tried experimentally. Since a computer is basically a multiplicity
of simple units, the idea is not far off at that. It is conceivable that
crystal structure can be exploited to produce millions of molecules of
the proper material properly aligned for the desired electronic action.
  With this shrinking come the benefits of small size, low power
consumption, low cost, and perhaps lower maintenance. The
computer will be cheap enough for applications not now
economically feasible. As this happens, what will the computer do
for us tomorrow?
   A figure of 7 per cent is estimated for the amount of paperwork
the computer has taken over in the business world. Computer men
are eyeing a market some five times that amount. It does not take a
vivid imagination to decide that such a percentage is perhaps
conservative in the extreme. Computer sales themselves promise to
show a fourfold increase in the five-year period from 1960 to 1965,
and in the past predictions have been exceeded many times.
  As population grows and business expands in physical size and
complexity, it is obvious that the computer and its data-processing
ability will be called upon more and more. There is another factor,
that of the internationalizing of business. Despite temporary
setbacks of war, protective tariffs, insular tendencies, and the like, in
the long run we will live in one integrated world shrunk by data links
that can get information from here to there and back again so fast it
will be like conversing with someone across the room. Already
planners are talking worldwide computerized systems.
  As a mathematical whiz, the computer will relieve us of our money
worries. Coupled with the credit card, perhaps issued to us at birth,
a central computer will permit us to make purchases anywhere in
the world and to credit our account with wages and other income. If
we try to overdraw, it may even flash a warning light as fast as we
put the card in the slot! This project interests General Dynamics
researchers.
   Of more importance than merely doing bookkeeping is the impact
the computer will have on the planning and running of businesses.
Although it is found in surveys that every person thinks computer
application reaches to the level just below his in the management
structure, pure logic should ultimately win out over man’s emotional
frailties at all levels. Operations research, implemented by the
computer, will make for more efficient businesses. Decisions will
increasingly be made not by vice-presidents but by digital
computers. At first we will have to gather the necessary information
for these electronic oracles, but in time they will take over this
function themselves.
   Business is tied closely to education, and we have had a hint of
the place the computer will make for itself in education. The effect on our motivation to learn, when there is little need for much learning, will be interesting. But then, is modern man a weaker being because he
kills a tiger with a high-powered rifle instead of club or bare hands—
or has no need to kill the tiger in the first place?
   After having proved itself as a patent searcher, the computer is
sure to excel as inventor. It will invade the artistic field; computers
have already produced pleasing patterns of light. Music has felt the
effect of the computer; the trend will continue. Some day not far off
the hi-fi enthusiast will turn on his set and hear original compositions
one after the other, turned out by the computer in as regular or
random form as the hearer chooses to set the controls. Each
composition will bring the thrill of a new, fresh experience, unless
we choose to go back in the computer’s memory for the old music.
  The computer will do far more in the home than dream up random
music for listening pleasure. The recorded telephone answerer will
give way to one that can speak for us, making appointments and so
on, and remembering to bring us up to date when we get home. A
small computer to plug in the wall may do other things like selecting
menus and making food purchases for next week, planning our
vacations, and helping the youngsters with their homework. It is
even suggested that the computer may provide us with child-
guidance help, plus psychological counsel for ourselves and medical
diagnoses for the entire family. The entire house might be
computerized, able to run itself without human help—even after
people are gone, as in the grimly prophetic story by Ray Bradbury in
which a neat self-controlled home is shown as the curtains part in
the morning. A mechanical sweeper runs about gathering up dust,
the air conditioning, lighting, and entertainment are automatic, all
oblivious to the fact that one side of the house is blackened from the
blast of a bomb.
   Perhaps guarding against that eventuality is the most important
job the computer can do. Applications of computing power to
government have been given; and hints made of the sure path from
simple tasks like the census and income tax, Peace Corps work, and
so on to decision-making for the president. Just as logic is put to
work in optimizing business, it can be used to plan and run a taut
ship of state. At first such an electronic cabinet member will be given
all available information, which it will evaluate so as to be ready to
make suggestions on policy or emergency action. There is more reason for it to go beyond this status and become an active agent than there is against. Government has already become so complex
that perhaps a human brain, or a collection of them, cannot be
depended on to make the best possible decision. As communications
and transportation are speeded up, the problem is compounded.
Where once a commander-in-chief could weigh the situation for days
before he had to commit himself and his country to a final choice, he
may now be called upon to make such a far-reaching decision in
minutes—perhaps minutes from the time he is awakened from a
sound sleep. The strongest opposition to this delegation of power is
man’s own vanity. No machine can govern, even if it can think, the
politician exclaims. The soldier once felt the same way; but
operations research has given him more confidence in the machine,
and SAGE and NORAD prove to him that survival depends on the
speed and accuracy of the electronic computer.
  Incurable romanticism is found even among our scientific
community. The National Bureau of Standards describes a computer
called ADAM, for Absolutely Divine Automatic Machine. But the
scientists also know that ADAM, or man, needs help. Rather than
consider the machine a tool, or even an extension of man’s mind,
some are now concerned with a kind of marriage of man and
machine in which each plays a significant part. Dr. Simon Ramo,
executive vice president of Thompson Ramo Wooldridge, Inc., has
termed this mating of the minds “intellectronics.” The key to this
combination of man’s intellect and that of electronics is closer
rapport between the team members.
Department of Defense
  Computer use in defense is typified in this BIRDIE system of the United States
                                       Army.
  The man-machine concept has grown into a science called, for the
present at least, “synnoetics,” a coinage from the Greek words syn and noe, meaning “together” and “perceive.”
as the treating of the properties of composite systems, consisting of
configurations of persons, mechanisms, plant or animal organisms,
and automata, whose main attribute is that their ability to invent, to
create, and to reason—their mental power—is greater than the
mental power of their components.
  We get a not-too-fanciful look into the future in a paper by Dr.
Louis Fein presented in the summer 1961 issue of American
Scientist, titled “Computer-related Sciences (Synnoetics) at a
University in 1975.” Dr. Fein is an authority on computers, as builder
of RAYDAC in 1952, and as founder and president of the Computer
Control Company. The paper ostensibly is being given to alumni
some years hence by the university president. Dr. Fein tells us that
students in the Department of Synnoetics study the formal
languages used in communication between the elements of a
synnoetic system, operations research, game theory, information
storage, organization and retrieval, and automatic programming.
One important study is that of error, called Hamartiology, from the
Greek word meaning “to miss the mark.”
   The speaker tells us that this field was variously called cybernetics,
information science, and finally computer-related science before
being formally changed to the present synnoetics. A list of the
courses available to undergraduates includes:
  Von Neumann Machines and Turing Machines
  Elements of Automatic Programming
  Theory, Design, and Construction of Compilers
  Algorithms: Theory, Design, and Applications
  Foundations of the Science of Models
  The Theory, Design, and Application of Non-Numeric Models
  Heuristics
  Self-Programming Computers
  Advice Giving—Man to Machine and Machine to Man
  Simulation: Principles and Techniques
  Pattern Recognition and Learning by Automata
  The Grammar, Syntax, and Use of Formal Languages for
   Communication Between Machine and Machine and Between
   Man and Man
  Man-Automaton Systems: Their Organization, Use, and Control
  Problem-Solving: an Analysis of the Relationship Between the
    Problem-Solver, the Problem, and the Means for Solution
  Measurements of the Fundamental Characteristics of the Elements
   of Synnoetic Systems
                          Botany Department
                  Machine-Guided Taxonomy in Botany
                            Business School
                     Synnoetic “Business Executives”
                           Engineering School
                Theory of Error and Equipment Reliability
                 Design of Analog and Digital Computers
                         Humanities Department
               Theory of Creative Processes in the Fine Arts
                            Law School
          Patent and Precedence Searches with Computers
    The Effect of Automata on the Legislative and Judicial Process
                      Mathematics Department
       The Theory of Graphs and the Organization of Automata
                         Medical School
  Computer-Aided Medical Diagnosis and Prescription for Treatment
                           Philosophy
  The Relationships between Models and the Phenomena That Are
                            Modeled
                         Psychology Department
         Studies in Intuition and Intellect of Synnoetic Systems
                  Simulation in the Behavioral Sciences
                           Sociology Department
                        Synnoetics in Modern Society
 The speaker proudly refers to the achievement of the faculty
mediator and a computer in settling the “famous” strike of 1970.
   He simply got both sides first to agree that each would benefit by concentrating
attention—not on arguing and finally settling the issues one at a time—but on
arguing and finally settling on a program for an automaton. This program would
evaluate the thousands of alternative settlements and would recommend a small
class of settlements each of which was nearly optimum for both sides. The
automaton took only 30 minutes to produce the new contract last year. It would
have taken one year to do this manually, and even then it would have been done
less exhaustively. Agreeing on the program took one week. Of course, you have
already heard that in many areas where people are bargaining or trying to make
optimum decisions such as in the World Nations Organization, in the World Court,
and in local, federal, and world legislative bodies, there is now serious
consideration being given to convincing opposing factions to try to agree on a
program and having once agreed on it, the contract or legislation or judgment or
decision produced with the program would be accepted as optimum for both sides.
Automata may also be provided to judges and juries to advise them of the effects
of such factors as weight of evidence on verdicts in civil cases.
   Dr. Fein makes an excellent case for the usefulness of the science
of synnoetics; the main point of challenge to his paper might be that
its date is too conservatively distant. Of interest to us here is the
idea of man and machine working in harmony for the good of both.
   Another paper, “The Coming Technological Society,” presented by
Dr. Simon Ramo at the University of California at Los Angeles, May 1,
1961, also discusses the possible results of man-machine
cooperation during the remainder of the twentieth century. He lists
more than a dozen specific and important applications for
intellectronics in the decades immediately ahead of us. Law,
medicine, engineering, libraries, money, and banking are among
these. Pointing out that man is as unsuited for “putting little marks
on pieces of paper” as he was for building pyramids with his own
muscles, he suggests that our thumbprints and electronic scanners
will take care of all accounting. Tongue in cheek, he does say that
there will continue to be risks associated with life; for instance, a
transistor burning out in Kansas City may accidentally wipe out
someone’s fortune in Philadelphia.
  The making of reservations is onerous busywork man should not
have to waste his valuable time on, and the control of moving things
too is better left to the machine for the different reason that man’s
unaided brain cannot cope with complex and high-speed traffic
arteries, be they in space or on Los Angeles freeways. Business and
military management will continue to be aided by the electronic
machine.
  But beyond all these benefits are those more important ones to
our brains, our society, and culture. Teaching machines, says Dr.
Ramo, can make education ten times more effective, thus increasing
our intellect. And this improved intellect, multiplied by the electronic
machine into intellectronic brainpower, is the secret of success in the
world ahead. Instead of an automated, robotlike regimented world
that some predict, Ramo sees greater democracy resulting. Using
the thumbprint again, and the speed of electronics, government of
our country will be truly by the people as they make their feelings
known daily if necessary.
   Intellectronic legislation will extend beyond a single country’s
boundaries in international cooperation. It will smash the language
and communication barriers. It will permit and implement not only
global prediction of weather, but global control as well. Because of
the rapid handling of vast amounts of information, man can form
more accurate and more logical concepts that will lead to better
relations throughout the world. Summing up, Dr. Ramo points out
that intellectronics benefits not only the technical man but social
man as well:
  The real bottleneck to progress, to a safe, orderly, and happy transition to the
coming technological age, lies in the severe disparity between scientific and
sociological advance. Having discussed technology, with emphasis on the future
extension of man’s intellect, we should ask: Will intellectronics aid in removing the
imbalance? Will technology, properly used, make possible a correction of the very
imbalance which causes technology to be in the lead? I believe that the
challenging intellectual task of accelerating social progress is for the human mind
and not his less intellectual partner. But perhaps there is hope. If the machines do
more of the routine, everyday, intellectual tasks and insure the success of the
material operation of the world, man’s work will be elevated to the higher mental
domains. He will have the time, the intellectual stature, and hence the inclination
to solve the world’s social problems. We must believe he has the capability.
CALCULO computer, 75
Calculus Ratiocinator, 109
Calendars as computers, 24
California Institute of Technology, 169
Cancer Society, American, 193
Candide, 30
Capek, Karel, 43, 121, 215
Caplin, Mortimer, 150
Carroll, Lewis, 38, 118
CDC 1604 computer, 165
Celanese Corp. of America, 207
Celestial simulator, 85
Census, 41
Census Bureau, U. S., 149
Chain circuit, 127
Characteristica Universalis, 109
Charactron tube, 66
Checkers (game), 8, 143
Checking, computer, 60
Checkout computer, 183
Chemical Corp., 249
Chess, 8, 9, 16, 35, 99, 142, 156
Circuit
  chain, 127
  delay-line, 63
  flip-flop, 63, 115
  molecular, 9, 253
  printed, 62
  reverberation, 128
Clapp, Verner, 248
Clarke, Arthur C., 265
CLASS teaching machine system, 226-228
Clock, 20, 24, 56, 85
COBOL language, 234
Code, computer
  binary-coded decimal, 103, 106
  binary-octal, 106
  economy, 106
  excess-3, 105, 114
  “Gray,” 106
  reflected binary, 106
  self-checking, 105
Color computer, 4
Commercial Art, 175
Commission on Professional and Hospital Activity, 194
Communication, use of computers, 179
Computer
  ADAM, 258
  addition, 106
  airborne, 90, 154, 158, 162
  analog, 21, 45, 72, 74, 80, 125, 203
     direct, 76, 79
     direct-current, 76
   discrete, 80
   indirect, 76, 79
   mechanical differential analyzer, 76
   scaling, 76
Antikythera, 25
Apollo, 182
   space vehicle, 169
applications, digital, 92
ASCC, 155
asynchronous, 255
Athena, 52
ballistic, 83
Bendix G-15, 183, 188
BINAC, 7, 47
BRAINIAC, 88, 117
CALCULO, 75
CLASS, 226-228
code, binary-coded decimal, 103, 106
color, 4
definition, 129
dictionary, 49, 50
difference engine, 5, 35
digital, 18, 45, 73, 84, 125, 203
division, 107
do-it-yourself, 75, 88, 117, 147
electrical-analog, 75
electronic, 1, 46, 122, 151
ENIAC, 7, 40, 46, 85, 215
ERMA, 173
family tree, 86
FINDER system, 161
flow chart, 58, 59
GE 210, 172
GE 225, 245
general-purpose, 54, 81, 191
gigacycle, 254
“Hand,” 132, 214, 215
household, 15, 257
hybrid, 80, 84, 92
ILLIAC, 197
input, 51, 54, 125
JOHNNIAC, 11, 47, 129, 140, 142
language, 233
LARC, 47, 162, 191
LGP-30, 198
limitations, 89
MANIAC, 47, 156, 165
Memex, 13
mill, 38, 51, 60
MIPS, 159
MOBIDIC, 157
MUSE, 48
music, 11, 92, 196, 257
on-line, 81, 205
on-stream, 83, 207
output, 51, 65, 125
parts, 50, 52, 53
problem-solving, 140, 143
Psychological Matrix Rotation, 78, 94
Q-5, 77
RAMAC, 150, 151, 198, 199
Range Keeper Mark I, 42
RAYDAC, 260
RCA 501, 151
“real-time,” 78, 168, 202, 205
RECOMP, 47
revolution, 251
Sabre, 183
SAGE, 3, 12, 37, 53, 158, 159, 226, 259
sequential, 126
“Shoebox,” 242
“software,” 54
  spaceborne, 167
  special-purpose, 79
  SSEC, 155, 156
  Stone Age, 21
  store, 36, 62
  STRETCH, 47, 48
  subtraction, 106
  testing, 117
  UNIVAC, 47, 149, 151, 171, 221
  VIDIAC, character-generator, 242
  Zuse L23, 199
Computer Control Co., 260
Conjunctive operation, 37, 51, 110
Consciousness, 144, 145, 267
Continuous analog computer, 80
Continuous digital computer, 80
Continuous quantity, 73
Control, computer, 51, 56
Control Data Corp., 194
Conversion
  analog-to-digital, 74
  digital-to-analog, 74
Converters, 94
Cook, William W., 29
Copland, Aaron, 11, 196
Cornell Medical College, 123
Cornell University, 133
Corrigan Communications, 231
Council on Library Resources, 248
Counting
  Australian, 20
  birds, 18
  boards, 20
  digital, 84
  machines, 20
  man, 19
  modulo-, 97, 101
Credit card, 13, 256
Cryogenics, 70
  components, 63
Cryotron, 9, 88, 141, 254, 255
Cybertron, 135, 139
Cyborg, 265
Daedalus, 18
Darwin, Charles, 32, 137, 252
Data
  link, 14, 185, 256
  logger, 205
  processing, 22, 171, 264
  recording media, 57
Daystrom, Inc., 211
Dead Sea Scrolls, 235
Decimal system, 19
Decision-making, 91
Defense, use of computer, 259
Delay-line circuit, 63
DeMorgan, Augustus, 38, 110, 115
Department of Commerce, U. S., 149, 221
Department of Defense, U. S., 148, 234
Design, use of computer, 14, 172, 186, 268
Desk calculator, 51
Diagnostic use of computer, 194
Diamond Ordnance Fuze Laboratory, U. S. Army, 69
Dictionary, computer, 49, 50
DIDAK teaching machine, 224
Difference engine, 5, 35
Digiflex trainer, 225
Digital computer, 18, 45, 73, 84, 125, 203
Digital differential analyzer, 94
Digitronics, 236
Discrete quantity, 73
Disjunctive operation, 110
Division, computer, 107
Dodgson, Charles L., 38
Do-it-yourself computer, 75, 88, 117, 147
Douglas Aircraft Co., 65
Dow Chemical Corp., 208
Du Pont Corp., 208
Dunsany, Lord, 108
Eccles-Jordan circuit, 47
Eckert, J. Presper, 47, 85