Advances in Computers, Vol. 77 (1st Edition)
Edited by Marvin Zelkowitz
ISBN: 9780123748126, 0123748127
Year: 2009
Language: English
Academic Press is an imprint of Elsevier
32 Jamestown Road, London, NW1 7BY, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA

First edition 2009

Copyright © 2009 Elsevier Inc. All rights reserved

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written
permission of the publisher. Permissions may be sought directly from Elsevier’s Science & Technology
Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email:
[email protected]. Alternatively, you can submit your request online by visiting the Elsevier web
site at http://elsevier.com/locate/permissions and selecting Obtaining permission to use Elsevier material.
Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a
matter of products liability, negligence or otherwise, or from any use or operation of any methods,
products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data


A catalog record for this book is available from the Library of Congress
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-374812-6
ISSN: 0065-2458

For information on all Academic Press publications visit our web site at elsevierdirect.com

Printed and bound in USA


09 10 11 12 10 9 8 7 6 5 4 3 2 1
Contributors

Prof. Robert Aalberts is the Lied Professor of Legal Studies at the University of
Nevada, Las Vegas. He received his Juris Doctor from Loyola University and an
M.A. from the University of Missouri-Columbia. Prior to his academic career, he
was an attorney for the Gulf Oil Company. His primary research interests include
real estate law, cyber law, and employment law. He has also published over 105
articles in legal and business periodicals. He is currently the Editor-in-Chief of the
Real Estate Law Journal, where he has served for the past 16 years. He is also
coauthor of the textbooks Law and Business: The Regulatory Environment
(McGraw-Hill, 1994) and Real Estate Law, 7th edition (Southwestern/Cengage
Learning, 2009).

Christopher Ackermann is a Scientist at the Fraunhofer Center for Experimental
Software Engineering, Maryland, and is pursuing a Ph.D. at the University of
Maryland, College Park. He received his Bachelor’s degree from the University of
Applied Sciences, Mannheim, Germany, in 2006 and earned his Master’s degree
from the University of Maryland in 2008. He has been active in the fields of software
architectures, software testing and verification, model-based development, empirical
software engineering, software visualization, and change impact analysis. His current
research interests include software architecture analysis, testing and verification, and
model-based development.

Prof. Eric Allender received a B.A. from the University of Iowa in 1979, majoring
in Computer Science and Theatre, and a Ph.D. from Georgia Tech in 1985. He has
been at Rutgers University since then, serving as department chair from 2006 to
2009. He is a Fellow of the ACM and serves on the editorial boards of the ACM
Transactions on Computation Theory, Computational Complexity, and The Chicago
Journal of Theoretical Computer Science. He has chaired the Conference Committee
for the annual IEEE Conference on Computational Complexity, and he serves on
the Scientific Board for the Electronic Colloquium on Computational Complexity
(ECCC).


Prof. Hany Farid received his undergraduate degree in Computer Science and
Applied Mathematics from the University of Rochester in 1989. He received his
Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following
a 2-year postdoctoral position in Brain and Cognitive Sciences at MIT, he joined the
Dartmouth faculty in 1999. He is the David T. McLaughlin Distinguished Professor
of Computer Science and Associate Chair of Computer Science. He is also affiliated
with the Institute for Security Technology Studies at Dartmouth. He is the recipient
of an NSF CAREER award, a Sloan Fellowship, and a Guggenheim Fellowship. He
can be reached at [email protected], and more information about his work can
be found at www.cs.dartmouth.edu/farid.

Prof. David Hames is an Associate Professor of Management at the University of
Nevada, Las Vegas. He earned his B.A. from Albion College, his M.L.I.R. from
Michigan State University, and his Ph.D. from the University of North Carolina at
Chapel Hill. His research, which focuses on employment law, human resource man-
agement, and labor–management relations, has been published in journals such as
Group and Organization Management, Human Resource Management Review,
Leadership and Organization Development Journal, Employee Responsibilities &
Rights Journal, Labor Law Journal, and Risk Management & Insurance Review.

Prof. Andrew Johnson is an Associate Professor in the Department of Computer
Science and a member of the Electronic Visualization Laboratory at the University of
Illinois at Chicago. His research focuses on the development and effective use of
advanced visualization displays, including virtual reality displays, autostereo
displays, and high-resolution walls and tables, for scientific discovery and for formal
and informal education.

Prof. Jason Leigh is an Associate Professor of Computer Science and Director of
the Electronic Visualization Laboratory (EVL) at the University of Illinois at
Chicago. He is a cofounder of VRCO, the GeoWall Consortium, and the Global
Lambda Visualization Facility. He currently leads the visualization and collabora-
tion research on the National Science Foundation’s OptIPuter project, and has led
EVL’s Tele-Immersion research since 1995. His main interest is in developing
ultra-high-resolution display and collaboration technologies supporting a wide
range of applications, from the remote exploration of large-scale data to education
and interactive entertainment.

Dr. Mikael Lindvall is a Senior Scientist and the Director of the Software Archi-
tecture and Embedded Systems division at the Fraunhofer Center for Experimental
Software Engineering, Maryland. He is interested in best practices and
methodologies for software engineering in general, and specializes in software
architecture evaluation and software evolution. He received his Ph.D. in Computer
Science from Linköping University, Sweden, in 1997. His Ph.D. work focused on
the evolution of object-oriented systems and was based on a commercial development
project at Ericsson Radio in Sweden.

Prof. Jörn Loviscach is a Professor of Computer Graphics, Animation, and Simula-
tion at Hochschule Bremen (University of Applied Sciences) in Bremen, Germany.
He is interested in 2D and 3D graphics algorithms and systems, human–computer
interaction, and audio and music computing, in particular applications that
require signal processing and/or the development of specialized electronics. A regular
contributor to conferences such as SIGGRAPH, Eurographics, and the AES Conven-
tion, he has published numerous chapters in book series such as Game Programming
Gems and ShaderX. Before becoming a professor in 2000, he was
Deputy Editor-in-Chief of the popular German computer magazine ‘‘c’t,’’ whose
editorial staff he joined soon after earning his doctoral degree in physics.

Dr. Jürgen Münch is Division Manager for Software and Systems Quality Man-
agement at the Fraunhofer Institute for Experimental Software Engineering (IESE)
in Kaiserslautern, Germany. Before that, he was Department Head for Processes and
Measurement at Fraunhofer IESE and an executive board member of the temporary
research institute SFB 501, which focused on software product lines. He received his
Ph.D. degree (Dr. rer. nat.) in Computer Science from the University of Kaiserslautern,
Germany, at the chair of Prof. Dr. Dieter Rombach. His research interests in
software engineering include (1) modeling and measurement of software processes
and resulting products, (2) software quality assurance and control, (3) technology
evaluation through experimental means and simulation, (4) software product lines,
and (5) technology transfer methods. He has significant project management expe-
rience and has headed various large research and industrial software engineering
projects, including the definition of international quality and process standards.
His main industrial consulting activities are in the areas of process management,
goal-oriented measurement, quality management, and quantitative modeling. He
has been teaching and training in both university and industry environments. He has
coauthored more than 80 international publications, and has been co-organizer,
program cochair, or member of the program committee of numerous high-standard
software engineering conferences and workshops. He is a member of ACM, IEEE,
the IEEE Computer Society, and the German Computer Society (GI).

Prof. Percy Poon is an Associate Professor of Finance at the University of Nevada,
Las Vegas. He received his Ph.D. in Finance from Louisiana State University.

His primary research interests are in the investment area. He has published numer-
ous articles in both finance and other business periodicals, including the American
Business Law Journal, Journal of Finance, Journal of Banking and Finance, CACM,
Financial Review, and Financial Practice and Education. His research on portfolio
diversification has been cited by the Wall Street Journal and the Investor’s Business
Daily. He has served as an ad hoc reviewer for various academic financial periodicals,
including Financial Management, Financial Review, and Financial Practice
and Education. He has also offered his expertise to the business community, including
seminars for a utility company on the use of options and futures to hedge energy
costs.

Prof. Luc Renambot received a Ph.D. from the University of Rennes 1 (France) in
2000, conducting research on parallel rendering algorithms for illumination simula-
tion. He then held a postdoctoral position at the Free University of Amsterdam
until 2002, working on bringing education and scientific visualization to virtual
reality environments. In 2003, he joined EVL/UIC, first as a postdoc and now as a
Research Assistant Professor; his research topics include high-resolution
displays, computer graphics, parallel computing, and high-speed networking.

Prof. Günther Ruhe holds an Industrial Research Chair in Software Engineering at
the University of Calgary. His main results and publications are in software engineering
decision support, software release planning, software project management, mea-
surement, simulation, and empirical research. From 1996 until 2001, he was Deputy
Director of the Fraunhofer Institute for Experimental Software Engineering. He is
the author of two books, several book chapters, and more than 120 publications.
Dr. Ruhe is a member of the ACM, the IEEE Computer Society, and the German
Computer Society (GI).

Dr. M. Omolade Saliu is a Decision Support Architect at Online Business Systems
in Calgary, Canada. He is currently involved in developing decision support solu-
tions for performance management. He received his Ph.D. in Computer Science
from the University of Calgary, Canada, in 2007. His Ph.D. research was sponsored
by the Natural Sciences and Engineering Research Council of Canada (NSERC) and
the Alberta Informatics Circle of Research Excellence (iCORE), and focused on
decision support for planning the releases of evolving software systems. His
research interests include software architecture evaluation, software release
planning, and decision support.

Prof. Paul D. Thistle is Professor of Finance at the University of Nevada, Las
Vegas. He earned his B.B.A. from the University of Portland and his M.S. and Ph.D.
in Economics from Texas A&M University. He has taught at the University of
Arizona, the University of Alabama, and Western Michigan University, and was a
Huebner Postdoctoral Fellow at the Wharton School. While his primary research
interest is in insurance and risk management, he has published extensively in
economics, finance, insurance, real estate, and management information systems.
His research has been supported by the Nevada Insurance Education Foundation.

Dr. Laurence Tratt is a Senior Lecturer at Bournemouth University and a software
consultant. He is an Associate Editor-in-Chief of IEEE Software. He received
his Ph.D. from King’s College London. He is the chief designer of the
Converge programming language. His research interests include programming
languages, domain-specific languages, and software modeling.

Dr. Adam Trendowicz received degrees in Computer Science (B.Sc.) and
Software Engineering (M.Sc.) from the Poznan University of Technology, Poland,
in 2000. He is currently a researcher in the Processes and Measurement department
at the Fraunhofer Institute for Experimental Software Engineering (IESE),
Kaiserslautern, Germany. Before that, he worked as a software engineering consul-
tant at Q-Labs GmbH, Germany. His research and industrial activities include
software cost modeling, measurement, data analysis, and process improvement.
Preface

This is volume 77, the last volume in the 50th year of publication of the Advances in
Computers. Since 1960, annual volumes have been produced containing chapters
authored by some of the leading experts in the field of computers. For 50 years, these
volumes have offered ideas and developments that are changing our society. This
volume presents eight chapters covering many different aspects of computer science.
I hope you find them of interest. The first three chapters provide insights into the
different ways individuals can interact with electronic devices: first digital
photography, then display devices, and in Chapter 3 the game interfaces that have
been developed.
Today, the once-ubiquitous film camera has all but disappeared, replaced by
digital cameras with ever cheaper and larger memory chips. While these allow a
huge number of pictures to be taken essentially for free, there is a cost in the
trustworthiness of the pictures. Digital images and associated software allow the
photographer (or almost anyone else, for that matter) to manipulate the bits of an
image and hence change the picture. How do we discover such tampering, and how
do digital forensics work to uncover fakery? Hany Farid, in ‘‘Photo Fakery and
Forensics’’ in Chapter 1 of this volume, discusses methods for detecting
inconsistencies in lighting and pixel correlations to detect forgeries.
Not only have cameras changed, but so too have all other visual devices
connected to the computer. Jason Leigh, Andrew Johnson, and Luc Renambot in
Chapter 2’s ‘‘Advances in Computer Displays’’ discuss a wide variety of visual
display technology—from the old-fashioned cathode ray tube (CRT) to more
modern plasma displays, stereoscopic displays, and wall displays. They discuss
what the environment of the future—whether at work or at home—is likely to
contain.
In Chapter 3, Jörn Loviscach in ‘‘Playing with All Senses: Human–Computer
Interface Devices for Games’’ discusses mechanisms for interacting with games on a
computer. After quickly passing through the usual mouse, keyboard, and joystick, he
discusses pen and touch input devices, sensors, and cameras. Inertial sensors allow
the user to move the device and the computer to interpret that motion, as in
Nintendo’s successful Wii console. Incorporating all of these into the next
generation of games allows the user to experience a multimedia approach to a
game—a far cry from the early Pong, which offered a simple paddle for hitting an
image of a ball back across the screen.
One of the long-standing unsolved questions in computer science theory is the
resolution of the NP-completeness problem—simply stated as ‘‘Does P equal NP?’’
I have been interested in this question since I was a graduate student in the late
1960s, when the problem was first posed. To date it has not been solved, but I was
very interested to see what has been learned over the past 40 years.
In Chapter 4, Eric Allender in ‘‘A Status Report on the P Versus NP Question’’
discusses what the question is and what has happened in this 40-year period.
Chapter 5 by Laurence Tratt is entitled ‘‘Dynamically Typed Languages.’’
Historically, most programming languages were compiled into executable machine
code by a compiler. For efficiency, languages such as FORTRAN, Algol, Pascal,
and C were statically typed; that is, the data type (e.g., integer) was specified
so the compiler could generate efficient code for it. Today, there is more interest in
dynamically typed languages, where types are checked while the program executes
and the distinction between compilation and execution is getting blurred. Languages
like Perl and Python, as well as the older LISP, are examples of this. In this chapter,
Dr. Tratt discusses the advantages of using such dynamically typed languages.
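To make the static/dynamic distinction concrete, here is a minimal illustrative sketch in Python, one of the dynamically typed languages mentioned above (the function and variable names are invented for illustration, not drawn from the chapter). The same function works on any value whose type supports the operation, and a name may be rebound to values of different types, with checks happening only at run time:

```python
# In a dynamically typed language, a name has no fixed type: the type
# travels with the value, and operations are checked at run time.
def double(x):
    return x + x  # works for any value supporting '+'

print(double(21))      # integers: arithmetic addition -> 42
print(double("ab"))    # strings: concatenation -> 'abab'
print(double([1, 2]))  # lists: concatenation -> [1, 2, 1, 2]

# A statically typed language such as C would reject this reuse at
# compile time; Python raises an error only if an unsupported type
# actually reaches the operation during execution.
value = 21            # 'value' currently holds an int...
value = "twenty-one"  # ...and may later be rebound to a str
print(type(value).__name__)
```

A static compiler can exploit the declared types to emit efficient machine code; the dynamic version pays a run-time cost for this flexibility, which is part of the trade-off Dr. Tratt discusses.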
In Chapter 6, Adam Trendowicz and Jürgen Münch’s ‘‘Factors Influencing
Software Development Productivity—State-of-the-Art and Industrial Experiences’’
look at the continuing evolution of software engineering development practices.
Of particular interest is how one measures productivity, since scheduling
appropriate resources is critical for completing a project with the least organizational
impact. Overestimate productivity and you must later overspend to put more people
on the project; underestimate it and you have employees with little to do.
In this chapter, the authors look at the factors that have been reported in the
literature as the most important ones affecting productivity.
In Chapter 7 ‘‘Evaluating the Modifiability of Software Architectural Designs’’ by
M. Omolade Saliu, Günther Ruhe, Mikael Lindvall, and Christopher Ackermann,
the authors present an architectural design evaluation technique called EBEAM.
Since software undergoes change as it evolves and since this becomes a dominant
cost factor over time, it is important to understand how modifiable the software is as
it develops. EBEAM is described and its use on an experimental system shows its
value.
As the Internet becomes more invasive in our lives, its impact, in terms of costs
and money transferred, is well into the multibillions of dollars (or euros or pounds)
per year. Although our legal system has been developing for hundreds of years, this
new technology is radically different from older systems. When you send a document
by email (e.g., the contents of a CD containing music), no object actually is
sent—only the electronic bits that describe the document. So who owns this
document? This is only one simple example. Can our 1000-year-old system based
upon English common law adapt to this new technology? In the final chapter, ‘‘The
Common Law and Its Impact on the Internet,’’ Robert Aalberts, David Hames, Percy
Poon, and Paul D. Thistle discuss how the legal system is adapting to this new
cyberworld.
I hope that you find these chapters of use to you in your work. If you have any
topics you would like to see in these volumes, let me know. If you would like
to write a chapter for a forthcoming volume, also let me know. I can be reached
at [email protected].

Marvin Zelkowitz
College Park, Maryland
Photo Fakery and Forensics

HANY FARID
Department of Computer Science, Dartmouth College,
Hanover, New Hampshire 03755, USA

Abstract
Photographs can no longer be trusted. From the tabloid magazines to the fashion
industry, mainstream media outlets, political campaigns, and the photo hoaxes
that land in our email inboxes, doctored photographs are appearing with a
growing frequency and sophistication. I will briefly describe the impact of all
of this photographic tampering and recent technological advances that have the
potential to return some trust to photographs. Specifically, I will describe a
representative sample of image forensics techniques for detecting inconsistencies
in lighting, pixel correlations, and compression artifacts.

1. Photo Fakery
   1.1. Media
   1.2. Science
   1.3. Law
   1.4. Politics
   1.5. National Security
2. Photo Forensics
   2.1. Lighting Direction (2D)
   2.2. Lighting Direction (3D)
   2.3. Lighting Environment
   2.4. Color Filter Array
   2.5. JPEG Ghosts
3. Discussion
Acknowledgments
References

ADVANCES IN COMPUTERS, VOL. 77. Copyright © 2009 Elsevier Inc. All rights reserved.
ISSN: 0065-2458/DOI: 10.1016/S0065-2458(09)01201-7

1. Photo Fakery

History is riddled with the remnants of photographic fakery. Stalin, Mao, Hitler,
Mussolini, Castro, and Brezhnev each had photographs manipulated in an attempt to
alter history. Cumbersome and time-consuming darkroom techniques were required
to alter history on behalf of Stalin and others. Today, powerful and low-cost digital
technology has made it far easier for nearly anyone to alter digital images. And the
resulting fakes are often very difficult to detect. This photographic fakery is having a
significant impact in many different areas.

1.1 Media
For the past decade, Adnan Hajj has produced striking war photographs from the
ongoing struggle in the Middle East. On 7 August 2006, the Reuters news agency
published one of Hajj’s photographs showing the remnants of an Israeli bombing of a
Lebanese town. In the week that followed, hundreds of bloggers and nearly every major
news organization reported that the photograph had been doctored with the addition of
more smoke. The general consensus was one of outrage and anger—Hajj was accused
of doctoring the image to exaggerate the impact of the Israeli shelling. An embarrassed
Reuters quickly retracted the photograph and removed from its archives nearly 1000
photographs contributed by Hajj. The case of Hajj is, of course, by no means unique. In
2003, Brian Walski, a veteran photographer of numerous wars, doctored a photograph
that appeared on the cover of the Los Angeles Times. After discovering the fake, the
outraged editors of the LA Times fired Walski. The news magazines Time and
Newsweek have each been rocked by scandal after it was revealed that photographs
appearing on their covers had been doctored. And, in the past few years, countless
news organizations around the world have been shaken by similar experiences.

1.2 Science
Those in the media are not alone in succumbing to the temptation to manipulate
photographs. In 2004, Professor Hwang Woo-Suk and colleagues published what
appeared to be groundbreaking advances in stem cell research. This paper appeared
in one of the most prestigious scientific journals, Science. Evidence slowly emerged
that these results were manipulated and/or fabricated. After months of controversy,
Hwang retracted the Science paper and resigned his position at the University. An
independent panel investigating the accusations of fraud found, in part, that at least
nine of the 11 customized stem cell colonies that Hwang had claimed to have made
were fakes. Much of the evidence for those nine colonies, the panel said, involved
PHOTO FAKERY AND FORENSICS 3

doctored photographs of two other, authentic, colonies. While this case garnered
international coverage and outrage, it is by no means unique. In an increasingly
competitive field, scientists are succumbing to the temptation to exaggerate or
fabricate their results. Mike Rossner, the managing editor of the Journal of Cell
Biology, estimates that as many as 20% of accepted manuscripts to his journal
contain at least one figure that has to be remade because of inappropriate image
manipulation [1].

1.3 Law
The child pornography charges against its Police Chief shocked the small town of
Wapakoneta, OH. At his trial, the defendant’s lawyer argued that if the State could not
prove that the seized images were real, then the defendant was within his rights in
possessing the images. In 1996, the Child Pornography Prevention Act (CPPA)
extended the existing federal criminal laws against child pornography to include certain
types of ‘‘virtual porn.’’ In 2002, the United States Supreme Court found that portions
of the CPPA, being overly broad and restrictive, violated First Amendment rights. The
Court ruled that ‘‘virtual’’ or ‘‘computer-generated’’ images depicting a fictitious
minor are constitutionally protected. The burden of proof in this case, and countless
others, shifted to the State who had to prove that the images were real and not computer
generated. Given the sophistication of computer-generated images, several state and
federal rulings have further found that juries should not be asked to make the
determination between real and virtual. And at least one federal judge questioned the ability of
even expert witnesses to make this determination. This example highlights the general
complexities that exist at the intersection of digital technology and the law.

1.4 Politics
‘‘Fonda Speaks to Vietnam Veterans at Anti-War Rally’’ read the headline with
an accompanying photograph purportedly showing Senator John Kerry sharing a
stage with the then-controversial Jane Fonda. The article was a fake, and so was the
photograph—a composite of two separate and unrelated photographs. And just days after being
selected as a running mate to U.S. presidential hopeful John McCain, doctored
images of a bikini-clad, gun-toting Sarah Palin were widely distributed on
the Internet.
certainly not new. It is believed that a doctored photograph contributed to Senator
Millard Tydings’ electoral defeat in 1950. The photo of Tydings conversing with
Earl Browder, a leader of the American Communist party, was meant to suggest that
Tydings had communist sympathies. Recent political ads have seen a startling
number of doctored photographs casting candidates in a flattering or damaging light.
4 H. FARID

1.5 National Security


With tensions mounting between the United States and Iran, the Iranian Govern-
ment announced the successful testing of ballistic missiles. As evidence, the gov-
ernment released a photograph showing the simultaneous launch of four missiles.
Shortly after its worldwide publication, it was revealed that the image had been
doctored. In reality, only three missiles had launched, while the fourth missile,
which failed to launch, was digitally inserted. This example highlighted the
importance of image authentication and showed the potential implications of photo
tampering on a geopolitical stage.
While historically they may have been the exception, doctored photographs today
are increasingly impacting nearly every aspect of our society. While the technology
to distort and manipulate digital media is developing at breakneck speeds, the
technology to detect such alterations is lagging behind. To this end, I will describe
some recent innovations for detecting photo tampering that have the potential to
return some trust to photographs.

2. Photo Forensics

Digital watermarking has been proposed as a means by which an image can be
authenticated (see, e.g., [2, 3] for general surveys). The drawback of this approach is
that a watermark must be inserted at the time of recording, which would limit this
approach to specially equipped digital cameras. This method also relies on the
assumption that the digital watermark cannot be easily removed and reinserted—it
is not yet clear whether this is a reasonable assumption (e.g., [4]). In contrast to these
approaches, we have proposed techniques for detecting tampering in digital images
that work in the absence of any digital watermark or signature.
Given the variety of images and forms of tampering, the forensic analysis of
images benefits from a variety of tools that can detect various forms of tampering.
Over the past 8 years my students, colleagues, and I have developed a suite of
computational and mathematical techniques for detecting tampering in digital
images. Our approach in developing each forensic tool is to first understand how a
specific form of tampering disturbs certain statistical or geometric properties of an
image, and then to develop a computational technique to detect these perturbations.
Within this framework, I describe five such techniques.1

1 Portions of this chapter have appeared in [5–8].
PHOTO FAKERY AND FORENSICS 5

Specifically, I will describe three techniques for detecting inconsistencies in
lighting, the first two of which estimate the direction to a light source, and the
third of which estimates a more complex lighting environment consisting of multiple
light sources. The fourth technique exploits pixel correlations that are introduced
into an image as a result of the specific design of digital camera sensors. And the
final technique leverages the artifacts introduced by the JPEG compression algo-
rithm. These techniques were chosen as a representative sample of a larger body of
image forensic techniques.

2.1 Lighting Direction (2D)


Consider the creation of a forgery showing two movie stars, rumored to be
romantically involved, walking down a sunset beach. Such an image might be
created by splicing together individual images of each movie star. In so doing, it
is often difficult to exactly match the lighting effects due to directional lighting (e.g.,
the sun on a clear day). Differences in lighting can, therefore, be a telltale sign of
digital tampering. To the extent that the direction of the light source can be estimated
for different objects/people in an image, inconsistencies in the lighting direction can
be used as evidence of digital tampering.
The standard approaches for estimating light source direction begin by making
some simplifying assumptions: (1) the surface of interest is Lambertian (the surface
reflects light isotropically), (2) the surface has a constant reflectance value, (3) the
surface is illuminated by a point light source infinitely far away, and (4) the angle
between the surface normal and the light direction is in the range 0°–90°. Under these
assumptions, the image intensity can be expressed as

    I(x, y) = R \, (\vec{N}(x, y) \cdot \vec{L}) + A,    (1)
!
where R is the constant
!
reflectance value, L is a 3-vector pointing in the direction of
the light source, N ðx; yÞ is a 3-vector representing the surface normal at the point
(x, y), and A is a constant ambient light term [9] (Fig. 1, left). If we are only
interested in the direction of the light source, then the reflectance term, !
R,
can be considered to have unit value, understanding that the estimation of L will
only be within an unknown scale factor. The resulting linear
!
equation provides a single
constraint in four unknowns, the three components of L and the ambient term A.
With at least four points with the same reflectance, R, and distinct surface
normals, \vec{N}, the light source direction and ambient term can be solved for using
standard least-squares estimation. To begin, a quadratic error function, embodying
the imaging model of Equation 1, is given by

FIG. 1. Schematic diagram of the imaging geometry for 3D surface normals (left) and 2D surface
normals (right). In the 2D case, the z-component of the surface normal \vec{N} is zero.

    E(\vec{L}, A) = \left\| M \begin{pmatrix} L_x \\ L_y \\ L_z \\ A \end{pmatrix} - \begin{pmatrix} I(x_1, y_1) \\ I(x_2, y_2) \\ \vdots \\ I(x_p, y_p) \end{pmatrix} \right\|^2 = \| M \vec{v} - \vec{b} \|^2,    (2)

where \| \cdot \| denotes vector norm, L_x, L_y, and L_z denote the components of the light
source direction \vec{L}, and
    M = \begin{pmatrix} N_x(x_1, y_1) & N_y(x_1, y_1) & N_z(x_1, y_1) & 1 \\ N_x(x_2, y_2) & N_y(x_2, y_2) & N_z(x_2, y_2) & 1 \\ \vdots & \vdots & \vdots & \vdots \\ N_x(x_p, y_p) & N_y(x_p, y_p) & N_z(x_p, y_p) & 1 \end{pmatrix},    (3)
where N_x(x_i, y_i), N_y(x_i, y_i), and N_z(x_i, y_i) denote the components of the surface
normal \vec{N} at image coordinate (x_i, y_i). The quadratic error function above is
minimized by differentiating with respect to the unknown \vec{v}, setting the result equal to
zero, and solving for \vec{v} to yield the least-squares estimate:

    \vec{v} = (M^T M)^{-1} M^T \vec{b}.    (4)
Note that this solution requires knowledge of 3D surface normals from at least
four distinct points (p ≥ 4) on a surface with the same reflectance. With only a
single image and no objects of known geometry in the scene, it is unlikely that this
will be possible. Most approaches to overcome this problem rely on acquiring
multiple images [10] or placing an object of known geometry in the scene (e.g.,
a sphere) [11]. For forensic applications, these solutions are not practical.

In [12], the authors suggest a clever solution for estimating two components of the
light source direction (L_x and L_y) from only a single image. While their approach
clearly provides less information regarding the light source direction, it does make
the problem tractable from a single image. The authors note that at the occluding
boundary of a surface, the z-component of the surface normal is zero, N_z = 0.
In addition, the x- and y-components of the surface normal, N_x and N_y, can be
estimated directly from the image (Fig. 1, right).
With this assumption, the error function of Equation 2 takes the form

    E(\vec{L}, A) = \left\| M \begin{pmatrix} L_x \\ L_y \\ A \end{pmatrix} - \begin{pmatrix} I(x_1, y_1) \\ I(x_2, y_2) \\ \vdots \\ I(x_p, y_p) \end{pmatrix} \right\|^2 = \| M \vec{v} - \vec{b} \|^2,    (5)
where

    M = \begin{pmatrix} N_x(x_1, y_1) & N_y(x_1, y_1) & 1 \\ N_x(x_2, y_2) & N_y(x_2, y_2) & 1 \\ \vdots & \vdots & \vdots \\ N_x(x_p, y_p) & N_y(x_p, y_p) & 1 \end{pmatrix}.    (6)
This error function is minimized, as before, using standard least squares to yield
the same solution as in Equation 4, but with the matrix M taking the form given in
Equation 6. In this case, the solution requires knowledge of 2D surface normals from
at least three distinct points (p ≥ 3) on a surface with the same reflectance.
The intensity, Iðxi ; yi Þ, at a boundary point, ðxi ; yi Þ, cannot be directly measured
from the image as the surface is occluded. The authors in [12] note, however, that the
intensity can be extrapolated by considering the intensity profile along a ray coinci-
dent to the 2D surface normal. They also found that simply using the intensity close
to the border of the surface is often sufficient.
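The least-squares estimator of Equations 4 and 6 reduces to a few lines of linear algebra. The following sketch (a minimal NumPy illustration, not the implementation used in the chapter; the boundary normals and intensities here are synthetic) recovers the 2D light direction and ambient term from simulated occluding-boundary data:

```python
import numpy as np

def estimate_light_2d(normals, intensities):
    """Estimate the 2D light direction (Lx, Ly) and ambient term A.

    normals:     p x 2 array of 2D surface normals along an occluding boundary
    intensities: length-p vector of intensities at (or near) those points
    Solves v = (M^T M)^{-1} M^T b with M = [Nx Ny 1] (Equations 4 and 6).
    """
    p = normals.shape[0]
    M = np.column_stack([normals, np.ones(p)])   # the p x 3 matrix of Eq. (6)
    v, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    return v[:2], v[2]                           # (Lx, Ly) and ambient A

# Synthetic check: intensities generated from a known light direction
# using the 2D Lambertian model I = N . L + A (Equation 1 with R = 1).
true_L, true_A = np.array([0.8, 0.6]), 0.2
angles = np.linspace(0.1, 1.4, 12)               # normals along a boundary arc
N = np.column_stack([np.cos(angles), np.sin(angles)])
I = N @ true_L + true_A
L_est, A_est = estimate_light_2d(N, I)
```

On noiseless data the estimate is exact up to numerical precision; in practice the estimate is only determined up to the unknown reflectance scale, so only its orientation is compared across objects.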
We extend this basic formulation in two ways. First, we estimate the two-dimensional
light source direction from local patches along an object's boundary (as opposed to
along extended boundaries as in [12]). This is done to relax the assumption that the
reflectance along the entire surface is constant. Then, a regularization (smoothness)
term is introduced to better condition the final estimate of light source direction.
The constant reflectance assumption is relaxed by assuming that the reflectance
for a local surface patch (as opposed to the entire surface) is constant. This requires
us to estimate individual light source directions, \vec{L}^i, for each patch along a surface.
Under the infinite light source assumption, the orientation of these estimates should
not vary, but their magnitude may (recall that the estimate of the light source is only
within a scale factor, which depends on the reflectance value R, Equation 1).

Consider a surface partitioned into n patches, and, for notational simplicity, assume
that each patch contains p points. The new error function to be minimized is constructed
by packing together, for each patch, the 2D version of the constraint of Equation 1:
    E_1(\vec{L}^1, \ldots, \vec{L}^n, A) = \left\| M \begin{pmatrix} L_x^1 \\ L_y^1 \\ \vdots \\ L_x^n \\ L_y^n \\ A \end{pmatrix} - \begin{pmatrix} I(x_1^1, y_1^1) \\ \vdots \\ I(x_p^1, y_p^1) \\ \vdots \\ I(x_1^n, y_1^n) \\ \vdots \\ I(x_p^n, y_p^n) \end{pmatrix} \right\|^2 = \| M \vec{v} - \vec{b} \|^2,    (7)
where

    M = \begin{pmatrix} N_x(x_1^1, y_1^1) & N_y(x_1^1, y_1^1) & \cdots & 0 & 0 & 1 \\ \vdots & \vdots & & \vdots & \vdots & \vdots \\ N_x(x_p^1, y_p^1) & N_y(x_p^1, y_p^1) & \cdots & 0 & 0 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & N_x(x_1^n, y_1^n) & N_y(x_1^n, y_1^n) & 1 \\ \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & N_x(x_p^n, y_p^n) & N_y(x_p^n, y_p^n) & 1 \end{pmatrix}.    (8)

The above quadratic error function is minimized, as before, using least squares
with the solution taking on the same form as in Equation 4. In this case, the
solution provides n estimates of the 2D light directions, \vec{L}^1, \ldots, \vec{L}^n, and an ambient
term A. Note that while individual light source directions are estimated for each
surface patch, a single ambient term is assumed.
While the local estimation of light source directions allows for the relaxation of
the constant reflectance assumption, it could potentially yield less stable results.
Note that under the assumption of an infinite point light source, the orientation of the
n light directions should be equal. With the additional assumption that the change in
reflectance from patch to patch is relatively small (i.e., the change in the magnitude
of neighboring \vec{L}^i is small), we can condition the individual estimates with the
following regularization term:

    E_2(\vec{L}^1, \ldots, \vec{L}^n) = \sum_{i=2}^{n} \| \vec{L}^i - \vec{L}^{i-1} \|^2.    (9)

This additional error term penalizes neighboring estimates that are different from
one another. The quadratic error function E_1(\cdot) (Equation 7) is conditioned by
combining it with the regularization term E_2(\cdot), scaled by a factor \lambda, to yield the final
error function:

    E(\vec{L}^1, \ldots, \vec{L}^n, A) = E_1(\vec{L}^1, \ldots, \vec{L}^n, A) + \lambda E_2(\vec{L}^1, \ldots, \vec{L}^n).    (10)

This combined error function can still be minimized using least-squares minimization.
The error function E_2(\cdot) is first written in a more compact and convenient form as

    E_2(\vec{v}) = \| C \vec{v} \|^2,    (11)
where the (2n - 2) \times (2n + 1) matrix C is given by

    C = \begin{pmatrix} 1 & 0 & -1 & 0 & \cdots & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & \cdots & 0 & 0 & 0 & 0 & 0 \\ \vdots & & & & \ddots & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 & 0 & -1 & 0 \end{pmatrix},    (12)

with \vec{v} = (L_x^1 \; L_y^1 \; L_x^2 \; L_y^2 \; \ldots \; L_x^n \; L_y^n \; A)^T. The error function of Equation 10 then takes
the form
    E(\vec{v}) = \| M \vec{v} - \vec{b} \|^2 + \lambda \| C \vec{v} \|^2.    (13)

Differentiating this error function yields


    E'(\vec{v}) = 2 M^T M \vec{v} - 2 M^T \vec{b} + 2 \lambda C^T C \vec{v} = 2 (M^T M + \lambda C^T C) \vec{v} - 2 M^T \vec{b}.    (14)
Setting this result equal to zero and solving for \vec{v} yields the least-squares estimate:

    \vec{v} = (M^T M + \lambda C^T C)^{+} M^T \vec{b},    (15)

where {}^{+} denotes pseudoinverse. The final light direction estimate is computed by
averaging the n resulting light direction estimates, \vec{L}^1, \ldots, \vec{L}^n.
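The regularized solution of Equation 15 can be sketched directly (a hypothetical NumPy illustration with synthetic patch data, not the authors' code; the matrix C follows Equation 12, and its sign convention is immaterial since only \|C\vec{v}\|^2 enters the error):

```python
import numpy as np

def neighbor_difference_matrix(n):
    """The (2n - 2) x (2n + 1) matrix C of Equation 12: each pair of rows
    takes the difference between neighboring patch estimates L^(i+1) - L^i."""
    C = np.zeros((2 * (n - 1), 2 * n + 1))
    for i in range(n - 1):
        C[2 * i,     2 * i]     = -1.0; C[2 * i,     2 * i + 2] = 1.0  # x parts
        C[2 * i + 1, 2 * i + 1] = -1.0; C[2 * i + 1, 2 * i + 3] = 1.0  # y parts
    return C

def regularized_estimate(M, b, lam):
    """Equation 15: v = (M^T M + lam C^T C)^+ M^T b, followed by
    averaging the n per-patch light directions."""
    n = (M.shape[1] - 1) // 2                  # number of boundary patches
    C = neighbor_difference_matrix(n)
    v = np.linalg.pinv(M.T @ M + lam * (C.T @ C)) @ (M.T @ b)
    patches = v[:-1].reshape(n, 2)             # n individual (Lx, Ly) estimates
    return patches.mean(axis=0), v[-1]         # averaged direction and ambient A

# Synthetic check: three patches of five points, all lit from one direction.
rng = np.random.default_rng(1)
true_L, true_A, n, p = np.array([0.6, 0.8]), 0.1, 3, 5
M = np.zeros((n * p, 2 * n + 1)); M[:, -1] = 1.0       # block form of Eq. (8)
b = np.zeros(n * p)
for i in range(n):
    ang = rng.uniform(0.2, 1.2, p)
    Np = np.column_stack([np.cos(ang), np.sin(ang)])   # patch surface normals
    M[i * p:(i + 1) * p, 2 * i:2 * i + 2] = Np
    b[i * p:(i + 1) * p] = Np @ true_L + true_A
L_est, A_est = regularized_estimate(M, b, lam=0.1)
```

Because every patch here shares the same true light, the exact solution drives both the data term and the regularizer to zero, so the recovered direction matches the truth.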
The light direction estimation requires the localization of an occluding boundary.
These boundaries are extracted by manually selecting points in the image along an
occluding boundary. This rough estimate of the position of the boundary is used to
define its spatial extent. The boundary is then partitioned into approximately eight
small patches. Three points near the occluding boundary are manually selected for
each patch, and fit with a quadratic curve. The surface normals along each patch are
then estimated analytically from the resulting quadratic fit.

Shown in Fig. 2 are eight images of objects illuminated by the sun on a clear day.
To determine the accuracy of our approach, a calibration target, consisting of a flat
surface with a rod extending from the center, was placed in each scene. The target
was approximately parallel to the image plane, so that the shadow cast by the rod
indicated the direction of the sun. Errors in our estimated light source direction are
given relative to this orientation. The average estimation error is 4.8° with a
minimum and maximum error of 0.6° and 10.9°. The image returning the largest
error is the parking meters. There are probably at least three reasons for this larger
error, and for errors in general. The first is that the metallic surface violates the
Lambertian assumption. The second is that the paint on the meter is worn in several
spots causing the reflectance to vary, at times, significantly from patch to patch. And
the third is that we did not calibrate the camera so as to remove luminance
nonlinearities (e.g., gamma correction) in the image.
The creation of a digital forgery often involves combining objects/people from
separate images. In so doing, it is difficult to exactly match the lighting effects due to
directional lighting (e.g., the sun on a clear day). At least one reason for this is that
such a manipulation may require the creation or removal of shadows and lighting
gradients. And while large inconsistencies in lighting direction may be fairly
obvious, there is evidence from the human psychophysics literature that we are
surprisingly insensitive to differences in lighting across an image [13, 14]. To the
extent that the direction of the light source can be estimated for different objects/
people in an image, inconsistencies in lighting can be used as evidence of digital
tampering.

2.2 Lighting Direction (3D)


In the previous section, we described how to estimate the light source direction in
2D. While this approach has the benefit of being applicable to arbitrary objects, it
has the drawback that it can only determine the direction to the light source within
1 degree of ambiguity. Next we describe how the full 3D light source direction can
be estimated by leveraging a 3D model of the human eye. Specifically, we describe
how to estimate the 3D direction to a light source from specular highlights on the
eyes.
The position of a specular highlight is determined by the relative positions of the
light source, the reflective surface, and the viewer (or camera). In Fig. 3, for example,
is a diagram showing the creation of a specular highlight on an eye. In this diagram,
the three vectors \vec{L}, \vec{N}, and \vec{R} correspond to the direction to the light, the surface
normal at the point at which the highlight is formed, and the direction in which
the highlight will be seen. For a perfect reflector, the highlight is seen only when the

FIG. 2. Shown are eight images with the extracted occluding boundaries (black), individual light
source estimates (white), and the final average light source direction (large arrow). In each image, the cast
shadow on the calibration target indicates the direction to the illuminating sun, and has been darkened to
enhance visibility.

FIG. 3. The formation of a specular highlight on an eye (small white dot on the iris). The position of
the highlight is determined by the surface normal \vec{N} and the relative directions to the light source \vec{L}
and viewer \vec{V}.

view direction \vec{V} = \vec{R}. For an imperfect reflector, a specular highlight can be seen
for viewing directions \vec{V} near \vec{R}, with the strongest highlight seen when \vec{V} = \vec{R}.
An algebraic relationship between the vectors \vec{L}, \vec{N}, and \vec{V} is first derived. We
then show how the 3D vectors \vec{N} and \vec{V} can be estimated from a single image, from
which the direction to the light source \vec{L} is determined.
The law of reflection states that a light ray reflects off of a surface at an angle of
reflection \theta_r equal to the angle of incidence \theta_i, where these angles are measured with
respect to the surface normal \vec{N} (Fig. 3).
Assuming unit-length vectors, the direction of the reflected ray \vec{R} can be
described in terms of the light direction \vec{L} and the surface normal \vec{N}:

    \vec{R} = \vec{L} + 2 (\cos(\theta_i) \vec{N} - \vec{L}) = 2 \cos(\theta_i) \vec{N} - \vec{L}.    (16)

By assuming a perfect reflector (\vec{V} = \vec{R}), the above constraint yields

    \vec{L} = 2 \cos(\theta_i) \vec{N} - \vec{V} = 2 (\vec{V}^T \vec{N}) \vec{N} - \vec{V}.    (17)
The light direction \vec{L} can, therefore, be estimated from the surface normal \vec{N} and
view direction \vec{V} at a specular highlight. Note that the light direction is specified with
respect to the eye, and not the camera. In practice, all of these vectors will be placed in a
common coordinate system, allowing us to compare light directions across the image.
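Equation 17 itself is a one-line computation once the normal and view direction are known. A small illustrative sketch (the unit vectors here are made up for the example):

```python
import numpy as np

def light_from_highlight(N, V):
    """Equation 17: L = 2 (V . N) N - V, where N is the unit surface normal
    at the specular highlight and V is the unit view direction."""
    return 2.0 * np.dot(V, N) * N - V

# Example: a surface normal along +z viewed from 45 degrees off-axis.
N = np.array([0.0, 0.0, 1.0])
V = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
L = light_from_highlight(N, V)
# By construction, L and V make equal angles with N (law of reflection).
```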
To estimate the surface normal \vec{N} and view direction \vec{V} in a common coordinate
system, we first need to estimate the projective transform that describes the
transformation from world to image coordinates. With only a single image, this

calibration is generally an underconstrained problem. In our case, however, the
known geometry of the eye can be exploited to estimate this required transform.
Throughout, uppercase symbols will denote world coordinates and lowercase will
denote camera/image coordinates.
The limbus, the boundary between the sclera (white part of the eye) and the iris
(colored part of the eye), can be well modeled as a circle [15]. The image of the
limbus, however, will be an ellipse except when the eye is directly facing the
camera. Intuitively, the distortion of the ellipse away from a circle will be related
to the pose and position of the eye relative to the camera. We therefore seek the
transform that aligns the image of the limbus to a circle.
In general, a projective transform that maps 3D world coordinates to 2D image
coordinates can be represented, in homogeneous coordinates, as a 3 × 4 matrix. We
assume that points on a limbus are coplanar, and define the world coordinate system
such that the limbus lies in the Z = 0 plane. With this assumption, the projective
transformation reduces to a 3 × 3 planar projective transform [16], where the world
points \vec{X} and image points \vec{x} are represented by 2D homogeneous vectors.
Points on the limbus in our world coordinate system satisfy the following implicit
equation of a circle:

    f(\vec{X}; \vec{a}) = (X_1 - C_1)^2 + (X_2 - C_2)^2 - r^2 = 0,    (18)

where \vec{a} = (C_1 \; C_2 \; r)^T denotes the circle center and radius.
Consider a collection of points, \vec{X}_i, i = 1, \ldots, m, each of which satisfies Equation 18.
Under an ideal pinhole camera model, the world point \vec{X}_i maps to the image point \vec{x}_i
as follows:

    \vec{x}_i = H \vec{X}_i,    (19)

where H is a 3 × 3 projective transform matrix.
The estimation of H can be formulated in an orthogonal distance fitting framework.
Let E(\cdot) be an error function on the parameter vector \vec{a} and the unknown
projective transform H:

    E(\vec{a}, H) = \sum_{i=1}^{m} \min_{\hat{X}} \| \vec{x}_i - H \hat{X} \|^2,    (20)

where \hat{X} is on the circle parametrized by \vec{a}. The error embodies the sum of the
squared errors between the data, \vec{x}_i, and the closest point on the model, H\hat{X}. This error
function is minimized using nonlinear least squares via the Levenberg–Marquardt
iteration [17].
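To give a flavor of the orthogonal-distance framework, the sketch below fits only the circle parameters \vec{a} = (C_1, C_2, r) to 2D points with Gauss-Newton, a deliberate simplification of Equation 20 that omits the projective transform H (the data and initial guess are invented for the example):

```python
import numpy as np

def fit_circle_odf(pts, a0, iters=100):
    """Orthogonal-distance circle fit: minimize the sum of squared distances
    from each point to the nearest point on the circle (cf. Equation 20,
    without the projective transform). Gauss-Newton on a = (C1, C2, r)."""
    a = np.array(a0, dtype=float)
    for _ in range(iters):
        d = pts - a[:2]                        # vectors from center to data
        dist = np.linalg.norm(d, axis=1)
        res = dist - a[2]                      # signed orthogonal distances
        # Jacobian of the residuals with respect to (C1, C2, r).
        J = np.column_stack([-d[:, 0] / dist, -d[:, 1] / dist,
                             -np.ones(len(pts))])
        a -= np.linalg.lstsq(J, res, rcond=None)[0]
    return a

# Noiseless points on a circle of center (3, -1) and radius 2.
t = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
pts = np.column_stack([3.0 + 2.0 * np.cos(t), -1.0 + 2.0 * np.sin(t)])
a_est = fit_circle_odf(pts, a0=(2.0, 0.0, 1.0))
```

A full implementation would instead parametrize points on the circle, map them through H, and minimize with Levenberg-Marquardt as in [17].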
Once estimated, the projective transform H can be decomposed in terms of intrinsic
and extrinsic camera parameters [16]. The intrinsic parameters consist of the camera

focal length, camera center, skew, and aspect ratio. For simplicity, we will assume that
the camera center is the image center and that the skew is 0 and the aspect ratio is 1,
leaving only the focal length f. The extrinsic parameters consist of a rotation matrix R
and translation vector \vec{t} that define the transformation between the world and camera
coordinate systems. Since the world points lie on a single plane, the projective transform
can be decomposed in terms of the intrinsic and extrinsic parameters as

    H = \lambda K \begin{pmatrix} \vec{r}_1 & \vec{r}_2 & \vec{t} \end{pmatrix},    (21)

where the 3 × 3 intrinsic matrix K is

    K = \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix},    (22)
where \lambda is a scale factor, the column vectors \vec{r}_1 and \vec{r}_2 are the first two columns
of the rotation matrix R, and \vec{t} is the translation vector.
With a known focal length f, and hence a known matrix K, the world-to-camera
coordinate transform \hat{H} can be estimated directly:
 
    \frac{1}{\lambda} K^{-1} H = \begin{pmatrix} \vec{r}_1 & \vec{r}_2 & \vec{t} \end{pmatrix} = \hat{H},    (23)

where the scale factor \lambda is chosen so that \vec{r}_1 and \vec{r}_2 are unit vectors. The complete
rotation matrix is given by
    R = \begin{pmatrix} \vec{r}_1 & \vec{r}_2 & \vec{r}_1 \times \vec{r}_2 \end{pmatrix},    (24)

where \times denotes cross product. If the focal length is unknown, it can be directly
estimated as described in [6].
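The decomposition of Equations 23 and 24 can be sketched as follows (a minimal NumPy illustration under the stated assumptions on K; the focal length, rotation, and translation here are made up for a round-trip check):

```python
import numpy as np

def decompose_homography(H, f):
    """Recover extrinsics from a planar projective transform (Eqs. 22-24),
    assuming zero skew, unit aspect ratio, and a centered principal point."""
    K = np.diag([f, f, 1.0])
    A = np.linalg.inv(K) @ H                  # equals lambda * (r1 r2 t), Eq. (23)
    lam = np.linalg.norm(A[:, 0])             # scale so r1 and r2 are unit length
    r1, r2, t = A[:, 0] / lam, A[:, 1] / lam, A[:, 2] / lam
    R = np.column_stack([r1, r2, np.cross(r1, r2)])   # Eq. (24)
    return R, t

# Round-trip check with a synthetic rotation about the optical axis.
f, theta = 800.0, 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 5.0])
H = 2.5 * np.diag([f, f, 1.0]) @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R_est, t_est = decompose_homography(H, f)
```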
Recall that the minimization of Equation 20 yields both the transform H and the
circle parameters \vec{a} for the limbus. The unit vector from the center of the limbus to
the origin of the camera coordinate system is the view direction, \vec{v}. Let
\vec{X}_c = (C_1 \; C_2 \; 1)^T denote the estimated center of a limbus in world coordinates. In
the camera coordinate system, this point is given by

    \vec{x}_c = \hat{H} \vec{X}_c.    (25)

The view direction, as a unit vector, in the camera coordinate system is then given by

    \vec{v} = -\frac{\vec{x}_c}{\| \vec{x}_c \|},    (26)

FIG. 4. (A) A side view of a 3D model of the human eye. The larger sphere represents the sclera and
the smaller sphere represents the cornea. The limbus is defined by the intersection of the two spheres.
(B) The surface normal at a point \vec{S} in the plane of the limbus depends on the view direction \vec{V}.

where the negative sign reverses the vector so that it points from the eye to
the camera.
The 3D surface normal \vec{N} at a specular highlight is estimated from a 3D model of
the human eye [18]. The model consists of a pair of spheres, as illustrated in Fig. 4A.
The larger sphere, with radius r_1 = 11.5 mm, represents the sclera and the smaller
sphere, with radius r_2 = 7.8 mm, represents the cornea. The centers of the spheres
are displaced by a distance d = 4.7 mm. The limbus, a circle with radius p = 5.8 mm,
is defined by the intersection of the two spheres. The distance between the center of
the smaller sphere and the plane containing the limbus is q = 5.25 mm. These
measurements vary slightly among adults, and the radii of the spheres are
approximately 0.1 mm smaller for female eyes [18, 19].
Consider a specular highlight in world coordinates at location \vec{S} = (S_x \; S_y),
measured with respect to the center of the limbus. The surface normal at \vec{S} depends
on the view direction \vec{V}. Fig. 4B is a schematic showing this relationship for two
different positions of the camera. The surface normal \vec{N} is determined by intersecting
the ray leaving \vec{S}, along the direction \vec{V}, with the edge of the sphere. This
intersection can be computed by solving a quadratic system for k, the distance
between \vec{S} and the edge of the sphere:

    (S_x + k V_x)^2 + (S_y + k V_y)^2 + (q + k V_z)^2 = r_2^2,
    k^2 + 2 (S_x V_x + S_y V_y + q V_z) k + (S_x^2 + S_y^2 + q^2 - r_2^2) = 0,    (27)

where q and r_2 are specified by the 3D model of the eye. The view direction
\vec{V} = (V_x \; V_y \; V_z)^T in the world coordinate system is given by

    \vec{V} = R^{-1} \vec{v},    (28)

where \vec{v} is the view direction in camera coordinates and R is the estimated rotation
between the world and camera coordinate systems. The surface normal \vec{N} in the
world coordinate system is then given by

    \vec{N} = \begin{pmatrix} S_x + k V_x \\ S_y + k V_y \\ q + k V_z \end{pmatrix},    (29)

and in camera coordinates: \vec{n} = R \vec{N}.
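Equations 27-29 amount to solving a quadratic and normalizing. A hypothetical sketch using the model constants from the text (the positive root is taken so the intersection lies on the corneal surface toward the camera):

```python
import numpy as np

# Eye-model constants from the text (in mm).
R2, Q = 7.8, 5.25   # corneal radius; corneal center to limbus plane

def cornea_normal(S, V):
    """Unit surface normal at a specular highlight (Equations 27-29).
    S = (Sx, Sy): highlight position in the limbus plane, relative to its center.
    V = (Vx, Vy, Vz): unit view direction in world coordinates."""
    Sx, Sy = S
    Vx, Vy, Vz = V
    b = 2.0 * (Sx * Vx + Sy * Vy + Q * Vz)     # linear coefficient of Eq. (27)
    c = Sx**2 + Sy**2 + Q**2 - R2**2           # constant coefficient of Eq. (27)
    k = (-b + np.sqrt(b**2 - 4.0 * c)) / 2.0   # root toward the camera
    N = np.array([Sx + k * Vx, Sy + k * Vy, Q + k * Vz])   # Eq. (29)
    return N / np.linalg.norm(N)

# At the center of the limbus viewed head-on, the ray meets the corneal
# apex, so the normal coincides with the view direction.
N = cornea_normal((0.0, 0.0), (0.0, 0.0, 1.0))
```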
Consider a specular highlight \vec{x}_s specified in image coordinates and the estimated
projective transform H from world to image coordinates. The inverse transform H^{-1}
maps the coordinates of the specular highlight into world coordinates:

    \vec{X}_s = H^{-1} \vec{x}_s.    (30)

The center \vec{C} and radius r of the limbus in the world coordinate system determine
the coordinates of the specular highlight, \vec{S}, with respect to the model:

    \vec{S} = \frac{p}{r} (\vec{X}_s - \vec{C}),    (31)

where p is specified by the 3D model of the eye. The position of the specular
highlight \vec{S} is then used to determine the surface normal \vec{N}. Combined with the
estimate of the view direction \vec{V}, the light source direction \vec{L} can be estimated from
Equation 17. To compare light source estimates in the image, the light source
estimate is converted to camera coordinates: \vec{l} = R \vec{L}.
To test the efficacy of this light estimation, synthetic images of eyes were
rendered using the pbrt environment [20]. The shape of the eyes conformed to the
3D model described above and the eyes were placed in 1 of 12 different locations.
For each location, the eyes were rotated by a unique amount relative to the camera.
The eyes were illuminated with two light sources: a fixed light directly in line with
the camera and a second light placed in one of four different positions. The 12
locations and 4 light directions gave rise to 48 images (Fig. 5). Each image was
rendered at a resolution of 1200 × 1600 pixels, with the cornea occupying less than
0.1% of the entire image. Shown in Fig. 5 are several examples of the rendered eyes,
along with a schematic of the imaging geometry.
The limbus and position of the specular highlight(s) were automatically extracted
from the rendered image. For each highlight, the projective transform H, the view
direction \vec{v}, and surface normal \vec{n} were estimated, from which the direction to the
light source \vec{l} was determined. The angular error between the estimated \vec{l} and actual

FIG. 5. Synthetically generated eyes. Each of the upper panels corresponds to different positions and
orientations of the eyes and locations of the light sources. The ellipse fit to each limbus is shown with a
dashed line, and the small dots denote the positions of the specular highlights. Shown below is a schematic
of the imaging geometry: the position of the lights, camera, and a subset of the eye positions.

\vec{l}_0 light directions is computed as \phi = \cos^{-1}(\vec{l}^T \vec{l}_0), where the vectors are
normalized to be unit length. With a known focal length, the average angular error
in estimating the light source direction was 2.8° with a standard deviation of 1.3° and
a maximum error of 6.8°. With an unknown focal length, the average error was 2.8°
with a standard deviation of 1.3° and a maximum error of 6.3°.
To further test the efficacy of our technique, we photographed a subject
under controlled lighting. A camera and two lights were arranged along a wall,
and the subject was positioned 250 cm in front of the camera and at the same
elevation. The first light L1 was positioned 130 cm to the left of and 60 cm above
the camera. The second light L2 was positioned 260 cm to the right and 80 cm
above the camera. The subject was placed in five different locations and orientations
relative to the camera and lights (Fig. 6). A 6-megapixel Nikon D100 camera with a
35 mm lens was set to capture in the highest quality JPEG format.
For each image, an ellipse was manually fit to the limbus of each eye. In these
images, the limbus did not form a sharp boundary—the boundary spanned roughly 3
pixels. As such, we fit the ellipses to the better defined inner outline [21] (Fig. 6).

FIG. 6. A subject at different locations and orientations relative to the camera and two light sources.
Shown to the right are magnified views of the eyes. The ellipse fit to each limbus is shown with a dashed
line and the small dots denote the positions of the specular highlights. See also Table I.

The radius of each limbus was approximately 9 pixels, and the cornea occupied
0.004% of the entire image. Each specular highlight was localized by specifying a
bounding rectangular area around each highlight and computing the centroid of the
selection. The weighting function for the centroid computation was chosen to be the
squared (normalized) pixel intensity. The location to the light source(s) was esti-
mated for each pair of eyes assuming a known and unknown focal length. The
angular errors for each image are given in Table I. Note that in some cases an
estimate for one of the light sources was not possible when the highlight was not
visible on the cornea. With a known focal length, the average angular error was 8.6°,
and with an unknown focal length, the average angular error was 10.5°.
When creating a composite of two or more people, it is often difficult to match the
lighting conditions under which each person was originally photographed. Specular
highlights that appear on the eye are a powerful cue as to the shape, color, and
location of the light source(s). Inconsistencies in these properties of the light can be
used as evidence of tampering. We can measure the 3D direction to a light source
from the position of the highlight on the eye. While we have not specifically focused
on it, the shape and color of a highlight are relatively easy to quantify and measure
and should also prove helpful in exposing digital forgeries. Since specular highlights
tend to be relatively small on the eye, it is possible to manipulate them to conceal
traces of tampering. To do so, the shape, color, and location of the highlight would
have to be constructed so as to be globally consistent with the lighting in other parts
of the image. Inconsistencies in this lighting may be detectable using the technique
described in the previous section.

Table I
ANGULAR ERRORS (°) IN ESTIMATING THE LIGHT DIRECTION FOR THE IMAGES SHOWN IN FIG. 6

             Known focal length           Unknown focal length
        Left eye      Right eye       Left eye      Right eye
Image   L1     L2     L1     L2       L1     L2     L1     L2
1       5.8    7.6    3.8    1.6      5.8    7.7    3.9    1.7
2       –      8.7    –      0.8      –      10.4   –      18.1
3       9.3    –      11.0   –        17.6   –      10.1   –
4       12.5   16.4   7.5    7.3      10.4   13.6   7.4    5.6
5       14.0   –      13.8   –        17.4   –      16.5   –

On the left are the errors for a known focal length, and on the right
are the errors for an unknown focal length. A "–" indicates that the
specular highlight for that light was not visible on the cornea.

2.3 Lighting Environment


In the previous two sections, we have shown how to estimate the direction to a
light source, and how inconsistencies in the illuminant direction can be used to
detect tampering. This approach is appropriate when the lighting is dominated by a
single light source, but is less appropriate in more complex lighting environments
containing multiple light sources or nondirectional lighting. Here, we describe how
to quantify such complex lighting environments and how to use inconsistencies in
lighting to detect tampering.
The lighting of a scene can be complex—any number of lights can be placed in
any number of positions, creating different lighting environments. To model such
complex lighting, we assume that the lighting is distant and that surfaces in the scene
are convex and Lambertian. To use this model in a forensic setting, we also assume
that the surface reflectance is constant and that the camera response is linear.
Under the assumption of distant lighting, an arbitrary lighting environment can be
expressed as a nonnegative function on the sphere, L(\vec{V}), where \vec{V} is a unit vector
in Cartesian coordinates and the value of L(\vec{V}) is the intensity of the incident light
along direction \vec{V} (Fig. 7). If the object being illuminated is convex, the irradiance
(light received) at any point on the surface is due to only the lighting environment;
that is, there are no cast shadows or interreflections [22]. As a result, the irradiance,

FIG. 7. The irradiance (light received) at a point \vec{x} is determined by integrating the amount of
incoming light from all directions \vec{V} in the hemisphere about the surface normal \vec{N}.

$E(\vec{N})$, can be parametrized by the unit-length surface normal $\vec{N}$ and written as a convolution of the reflectance function of the surface, $R(\vec{V}, \vec{N})$, with the lighting environment $L(\vec{V})$:

$$E(\vec{N}) = \int_{\Omega} L(\vec{V}) \, R(\vec{V}, \vec{N}) \, d\Omega, \qquad (32)$$

where O represents the surface of the sphere and dO is an area differential on the
sphere. For a Lambertian surface, the reflectance function is a clamped cosine:
$$R(\vec{V}, \vec{N}) = \max(\vec{V} \cdot \vec{N},\; 0), \qquad (33)$$

which is either the cosine of the angle between the vectors $\vec{V}$ and $\vec{N}$, or zero when
the angle is greater than 90°. This reflectance function effectively limits the integration in Equation 32 to the hemisphere about the surface normal $\vec{N}$ (Fig. 7). In addition, while we have assumed no cast shadows, Equation 33 explicitly models attached shadows, that is, shadows due to surface normals facing away from the direction $\vec{V}$.
The convolution in Equation 32 can be simplified by expressing both the lighting
environment and the reflectance function in terms of spherical harmonics. Spherical
harmonics form an orthonormal basis for piecewise continuous functions on the
sphere and are analogous to the Fourier basis on the line or plane. The first three
orders of spherical harmonics are shown in Fig. 8 and defined as
$$Y_{0,0}(\vec{N}) = \frac{1}{\sqrt{4\pi}}, \qquad Y_{1,-1}(\vec{N}) = \sqrt{\frac{3}{4\pi}}\, y, \qquad Y_{1,0}(\vec{N}) = \sqrt{\frac{3}{4\pi}}\, z,$$

$$Y_{1,1}(\vec{N}) = \sqrt{\frac{3}{4\pi}}\, x, \qquad Y_{2,-2}(\vec{N}) = 3\sqrt{\frac{5}{12\pi}}\, xy, \qquad Y_{2,-1}(\vec{N}) = 3\sqrt{\frac{5}{12\pi}}\, yz,$$

$$Y_{2,0}(\vec{N}) = \frac{1}{2}\sqrt{\frac{5}{4\pi}}\,\left(3z^2 - 1\right), \qquad Y_{2,1}(\vec{N}) = 3\sqrt{\frac{5}{12\pi}}\, xz, \qquad Y_{2,2}(\vec{N}) = \frac{3}{2}\sqrt{\frac{5}{12\pi}}\,\left(x^2 - y^2\right),$$

where $\vec{N} = (x \;\; y \;\; z)$ in Cartesian coordinates.
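As a concrete check of these definitions, the nine harmonics are straightforward to evaluate numerically. The sketch below (Python with NumPy; the function name is ours, not from the chapter) returns the nine basis values for a unit-length normal:

```python
import numpy as np

def sh_basis(n):
    """Evaluate the nine spherical harmonics Y_{0,0} ... Y_{2,2}
    at a unit-length normal n = (x, y, z), in the order
    (0,0), (1,-1), (1,0), (1,1), (2,-2), (2,-1), (2,0), (2,1), (2,2)."""
    x, y, z = n
    c1 = np.sqrt(3 / (4 * np.pi))
    c2 = np.sqrt(5 / (12 * np.pi))
    return np.array([
        1.0 / np.sqrt(4 * np.pi),                         # Y_{0,0}
        c1 * y,                                           # Y_{1,-1}
        c1 * z,                                           # Y_{1,0}
        c1 * x,                                           # Y_{1,1}
        3 * c2 * x * y,                                   # Y_{2,-2}
        3 * c2 * y * z,                                   # Y_{2,-1}
        0.5 * np.sqrt(5 / (4 * np.pi)) * (3 * z**2 - 1),  # Y_{2,0}
        3 * c2 * x * z,                                   # Y_{2,1}
        1.5 * c2 * (x**2 - y**2),                         # Y_{2,2}
    ])
```

At the pole $(0, 0, 1)$, for example, only $Y_{0,0}$, $Y_{1,0}$, and $Y_{2,0}$ are nonzero, as expected from the definitions above.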
The lighting environment expanded in terms of these spherical harmonics is
$$L(\vec{V}) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} l_{n,m} \, Y_{n,m}(\vec{V}), \qquad (34)$$

where $Y_{n,m}(\cdot)$ is the $m$th spherical harmonic of order $n$, and $l_{n,m}$ is the corresponding coefficient of the lighting environment. Similarly, the reflectance function for Lambertian surfaces, $R(\vec{V}, \vec{N})$, can be expanded in terms of spherical harmonics,

FIG. 8. The first three orders of spherical harmonics as functions on the sphere. Shown from top to bottom are the order-zero spherical harmonic, $Y_{0,0}(\cdot)$; the three order-one spherical harmonics, $Y_{1,m}(\cdot)$; and the five order-two spherical harmonics, $Y_{2,m}(\cdot)$.

and due to its symmetry about the surface normal, only harmonics with $m = 0$ appear in the expansion:

$$R(\vec{V}, \vec{N}) = \sum_{n=0}^{\infty} r_n \, Y_{n,0}\!\left( \left(0 \;\; 0 \;\; \vec{V} \cdot \vec{N}\right)^T \right). \qquad (35)$$

Note that for $m = 0$, the spherical harmonic $Y_{n,0}(\cdot)$ depends only on the $z$-component of its argument.
Convolutions of functions on the sphere become products when represented in
terms of spherical harmonics [22, 23]. As a result, the irradiance (Equation 32) takes
the form
$$E(\vec{N}) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \hat{r}_n \, l_{n,m} \, Y_{n,m}(\vec{N}), \qquad (36)$$

where

$$\hat{r}_n = \sqrt{\frac{4\pi}{2n+1}} \, r_n. \qquad (37)$$
The key observation in [22, 23] was that the coefficients $\hat{r}_n$ for a Lambertian reflectance function decay rapidly, and thus the infinite sum in Equation 36 can be well approximated by the first nine terms:

$$E(\vec{N}) \approx \sum_{n=0}^{2} \sum_{m=-n}^{n} \hat{r}_n \, l_{n,m} \, Y_{n,m}(\vec{N}). \qquad (38)$$

Since the constants $\hat{r}_n$ are known for a Lambertian reflectance function, the irradiance of a convex Lambertian surface under arbitrary distant lighting can be well modeled by the nine lighting environment coefficients $l_{n,m}$ up to order 2.
Irradiance describes the total amount of light reaching a point on a surface. For a Lambertian surface, the reflected light, or radiosity, is proportional to the irradiance by a reflectance term $\rho$. In addition, Lambertian surfaces emit light uniformly in all directions, so the amount of light received by a viewer (i.e., camera) is independent of the view direction.
A camera maps its received light to intensity through a camera response function $f(\cdot)$. Assuming the reflectance term $\rho$ is constant across the surface, the measured intensity at a point $\vec{x}$ in the image is given by [24]

$$I(\vec{x}) = f\!\left( \rho \, t \, E(\vec{N}(\vec{x})) \right), \qquad (39)$$
where $E(\cdot)$ is the irradiance, $\vec{N}(\vec{x})$ is the surface normal at point $\vec{x}$, and $t$ is the
exposure time. For simplicity, we assume a linear camera response, and thus the
intensity is related to the irradiance by an unknown multiplicative factor, which is
assumed to have unit value—this assumption implies that the lighting coefficients
can only be estimated to within an unknown scale factor. Under these assumptions,
the relationship between image intensity and irradiance is simply
$$I(\vec{x}) = E(\vec{N}(\vec{x})). \qquad (40)$$

Since, under our assumptions, the intensity is equal to irradiance, Equation 40 can
be written in terms of spherical harmonics by expanding Equation 38:
$$\begin{aligned}
I(\vec{x}) = {} & l_{0,0} \, \pi \, Y_{0,0}(\vec{N}) + l_{1,-1} \frac{2\pi}{3} Y_{1,-1}(\vec{N}) + l_{1,0} \frac{2\pi}{3} Y_{1,0}(\vec{N}) + l_{1,1} \frac{2\pi}{3} Y_{1,1}(\vec{N}) \\
& + l_{2,-2} \frac{\pi}{4} Y_{2,-2}(\vec{N}) + l_{2,-1} \frac{\pi}{4} Y_{2,-1}(\vec{N}) + l_{2,0} \frac{\pi}{4} Y_{2,0}(\vec{N}) \\
& + l_{2,1} \frac{\pi}{4} Y_{2,1}(\vec{N}) + l_{2,2} \frac{\pi}{4} Y_{2,2}(\vec{N}).
\end{aligned} \qquad (41)$$
Note that this expression is linear in the nine lighting environment coefficients, $l_{0,0}$ to $l_{2,2}$. As such, given 3D surface normals at $p \geq 9$ points on the surface of an object, the lighting environment coefficients can be estimated as the least-squares solution to the following system of linear equations:

$$\begin{pmatrix}
\pi Y_{0,0}(\vec{N}(\vec{x}_1)) & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_1)) & \cdots & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_1)) \\
\pi Y_{0,0}(\vec{N}(\vec{x}_2)) & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_2)) & \cdots & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_2)) \\
\vdots & \vdots & & \vdots \\
\pi Y_{0,0}(\vec{N}(\vec{x}_p)) & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_p)) & \cdots & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_p))
\end{pmatrix}
\begin{pmatrix} l_{0,0} \\ l_{1,-1} \\ \vdots \\ l_{2,2} \end{pmatrix}
=
\begin{pmatrix} I(\vec{x}_1) \\ I(\vec{x}_2) \\ \vdots \\ I(\vec{x}_p) \end{pmatrix}, \qquad (42)$$

$$M \vec{v} = \vec{b},$$

where $M$ is the matrix containing the sampled spherical harmonics, $\vec{v}$ is the vector of unknown lighting environment coefficients, and $\vec{b}$ is the vector of intensities at $p$ points. The least-squares solution to this system is

$$\vec{v} = \left( M^T M \right)^{-1} M^T \vec{b}. \qquad (43)$$
This solution requires 3D surface normals from at least nine points on the surface of an object. Without multiple images or known geometry, however, this requirement may be difficult to satisfy from an arbitrary image.
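Under our assumptions, Equations 42 and 43 amount to an ordinary least-squares fit. The following sketch (Python with NumPy; all names are ours, not from the chapter, and the per-order weights come from Equation 41) illustrates the estimation from 3D normals:

```python
import numpy as np

# Per-harmonic convolution weights from Equation 41:
# pi for order 0, 2*pi/3 for order 1, pi/4 for order 2.
WEIGHTS = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

def sh9(n):
    """The nine spherical harmonics Y_{0,0} ... Y_{2,2} at unit normal n."""
    x, y, z = n
    c1 = np.sqrt(3 / (4 * np.pi))
    c2 = np.sqrt(5 / (12 * np.pi))
    return np.array([1 / np.sqrt(4 * np.pi), c1 * y, c1 * z, c1 * x,
                     3 * c2 * x * y, 3 * c2 * y * z,
                     0.5 * np.sqrt(5 / (4 * np.pi)) * (3 * z**2 - 1),
                     3 * c2 * x * z, 1.5 * c2 * (x**2 - y**2)])

def estimate_lighting_3d(normals, intensities):
    """Least-squares estimate (Equation 43) of the nine lighting
    coefficients l_{0,0} ... l_{2,2} from p >= 9 3D surface normals
    and the corresponding image intensities."""
    M = np.array([WEIGHTS * sh9(n) for n in normals])  # Equation 42
    v, *_ = np.linalg.lstsq(M, np.asarray(intensities), rcond=None)
    return v
```

Given intensities synthesized from a known coefficient vector, the fit recovers it exactly; real image data is solved the same way, only with noise.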
As in [5, 12], we observe that under an assumption of orthographic projection, the
z-component of the surface normal is zero along the occluding contour of an object.
Therefore, the intensity profile along an occluding contour simplifies to
$$I(\vec{x}) = A + l_{1,-1} \frac{2\pi}{3} Y_{1,-1}(\vec{N}) + l_{1,1} \frac{2\pi}{3} Y_{1,1}(\vec{N}) + l_{2,-2} \frac{\pi}{4} Y_{2,-2}(\vec{N}) + l_{2,2} \frac{\pi}{4} Y_{2,2}(\vec{N}), \qquad (44)$$
4
where

$$A = \frac{\pi}{2\sqrt{\pi}} \, l_{0,0} - \frac{\pi}{16} \sqrt{\frac{5}{\pi}} \, l_{2,0}. \qquad (45)$$
Note that the functions $Y_{i,j}(\cdot)$ depend only on the $x$- and $y$-components of the surface normal $\vec{N}$. That is, the five lighting coefficients can be estimated from only 2D surface normals, which are relatively simple to estimate from a single image.²

² The 2D surface normal is the gradient vector of an implicit curve fit to the edge of an object.

In addition, Equation 44 is still linear in its now five lighting environment coefficients, which can be estimated as the least-squares solution to

$$\begin{pmatrix}
1 & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_1)) & \dfrac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_1)) & \dfrac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_1)) & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_1)) \\
1 & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_2)) & \dfrac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_2)) & \dfrac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_2)) & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_2)) \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
1 & \dfrac{2\pi}{3} Y_{1,-1}(\vec{N}(\vec{x}_p)) & \dfrac{2\pi}{3} Y_{1,1}(\vec{N}(\vec{x}_p)) & \dfrac{\pi}{4} Y_{2,-2}(\vec{N}(\vec{x}_p)) & \dfrac{\pi}{4} Y_{2,2}(\vec{N}(\vec{x}_p))
\end{pmatrix}
\begin{pmatrix} A \\ l_{1,-1} \\ l_{1,1} \\ l_{2,-2} \\ l_{2,2} \end{pmatrix}
=
\begin{pmatrix} I(\vec{x}_1) \\ I(\vec{x}_2) \\ \vdots \\ I(\vec{x}_p) \end{pmatrix}, \qquad (46)$$

$$M \vec{v} = \vec{b}, \qquad (47)$$

which has the same least-squares solution as before:

$$\vec{v} = \left( M^T M \right)^{-1} M^T \vec{b}. \qquad (48)$$
Note that this solution only provides five of the nine lighting environment coefficients. We will show, however, that this subset of coefficients is still sufficiently descriptive for forensic analysis.
When analyzing the occluding contours of objects in real images, it is often the case
that the range of surface normals is limited, leading to an ill-conditioned matrix M.
This limitation can arise from many sources, including occlusion or object geometry.
As a result, small amounts of noise in either the surface normals or the measured
intensities can cause large variations in the estimate of the lighting environment vector $\vec{v}$. To better condition the estimate, an error function $E(\vec{v})$ is defined that combines the least-squares error of the original linear system with a regularization term:

$$E(\vec{v}) = \left\| M \vec{v} - \vec{b} \right\|^2 + \lambda \left\| C \vec{v} \right\|^2, \qquad (49)$$

where $\lambda$ is a scalar and the matrix $C$ is diagonal with $(1\; 2\; 2\; 3\; 3)$ on the diagonal. The
matrix C is designed to dampen the effects of higher order harmonics and is motivated
by the observation that the average power of spherical harmonic coefficients
for natural lighting environments decreases with increasing harmonic order [25].

For the full lighting model when 3D surface normals are available (Equation 49), the matrix $C$ has $(1\; 2\; 2\; 2\; 3\; 3\; 3\; 3\; 3)$ on the diagonal.
The error function to be minimized (Equation 49) is a least-squares problem with a Tikhonov regularization [26]. The analytic minimum is found by differentiating with respect to $\vec{v}$:

$$\frac{dE(\vec{v})}{d\vec{v}} = 2 M^T M \vec{v} - 2 M^T \vec{b} + 2 \lambda C^T C \vec{v} = 2 \left( M^T M + \lambda C^T C \right) \vec{v} - 2 M^T \vec{b}, \qquad (50)$$

setting the result equal to zero, and solving for $\vec{v}$:

$$\vec{v} = \left( M^T M + \lambda C^T C \right)^{-1} M^T \vec{b}. \qquad (51)$$
In practice, we have found that the conditioned estimate in Equation 51 is appropriate if less than 180° of surface normals are available along the occluding contour. If more than 180° of surface normals are available, the least-squares estimate (Equation 48) can be used, though both estimates will give similar results for small values of $\lambda$.
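The conditioned estimate of Equation 51 is a standard Tikhonov solve. A minimal sketch for the five-coefficient contour model (Python with NumPy; the function names and the default value of λ are illustrative, not from the chapter):

```python
import numpy as np

def design_matrix_2d(normals_2d):
    """Design matrix of Equation 46, one row per 2D contour normal (x, y);
    along the occluding contour z = 0 under orthographic projection."""
    c1 = np.sqrt(3 / (4 * np.pi))
    c2 = np.sqrt(5 / (12 * np.pi))
    rows = []
    for x, y in normals_2d:
        rows.append([1.0,
                     (2 * np.pi / 3) * c1 * y,                 # Y_{1,-1}
                     (2 * np.pi / 3) * c1 * x,                 # Y_{1,1}
                     (np.pi / 4) * 3 * c2 * x * y,             # Y_{2,-2}
                     (np.pi / 4) * 1.5 * c2 * (x**2 - y**2)])  # Y_{2,2}
    return np.array(rows)

def estimate_lighting_2d(normals_2d, intensities, lam=0.01):
    """Conditioned estimate (Equation 51) of (A, l_{1,-1}, l_{1,1},
    l_{2,-2}, l_{2,2}) from 2D normals along an occluding contour."""
    M = design_matrix_2d(normals_2d)
    b = np.asarray(intensities)
    C = np.diag([1.0, 2.0, 2.0, 3.0, 3.0])  # dampens higher-order terms
    return np.linalg.solve(M.T @ M + lam * (C.T @ C), M.T @ b)
```

With a negligible λ and noise-free intensities this reduces to the plain least-squares estimate; increasing λ trades fidelity for stability when the contour spans a narrow range of normals.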
The estimated coefficient vector $\vec{v}$ (Equation 51) is a low-order approximation of the lighting environment. For forensic purposes, we would like to differentiate between lighting environments based on these coefficients.
between lighting environments based on these coefficients. Intuitively, coefficients
from objects in different lighting environments should be distinguishable, while
coefficients from objects in the same lighting environment should be similar. In
addition, measurable differences in sets of coefficients should be mostly due to
differences in the lighting environment and not to other factors such as object color
or image exposure. Taking these issues into consideration, we propose an error
! !
measure between two estimated lighting environments. Let v1 and v2 be two vectors
of lighting environment coefficients. From these coefficients, the irradiance profile
along a circle (2D) or a sphere (3D) is synthesized, from which the error is
! !
computed. The irradiance profiles corresponding to v1 and v2 are given by
! !
x1 ¼ M v1 ; ð52Þ
! !
x2 ¼ M v2 ; ð53Þ
where the matrix $M$ is of the form in Equation 42 (for 3D normals) or Equation 46 (for 2D normals). After subtracting the mean, the correlation between these zero-mean profiles is

$$\mathrm{corr}(\vec{x}_1, \vec{x}_2) = \frac{\vec{x}_1^T \vec{x}_2}{\|\vec{x}_1\| \, \|\vec{x}_2\|}. \qquad (54)$$

In practice, this correlation can be computed directly from the lighting environment coefficients:

$$\mathrm{corr}(\vec{v}_1, \vec{v}_2) = \frac{\vec{v}_1^T Q \vec{v}_2}{\sqrt{\vec{v}_1^T Q \vec{v}_1} \, \sqrt{\vec{v}_2^T Q \vec{v}_2}}, \qquad (55)$$

where the matrix $Q$ for both the 2D and 3D cases is derived in [7]. By design, this correlation is invariant to both additive and multiplicative factors on the irradiance profiles $\vec{x}_1$ and $\vec{x}_2$. Recall that our coefficient vectors $\vec{v}_1$ and $\vec{v}_2$ are estimated to within an unknown multiplicative factor. In addition, different exposure times under a nonlinear camera response function can introduce an additive bias. The correlation is, therefore, invariant to these factors and produces values in the interval $[-1, 1]$.
The final error is then given by
$$D(\vec{v}_1, \vec{v}_2) = \frac{1}{2} \left( 1 - \mathrm{corr}(\vec{v}_1, \vec{v}_2) \right), \qquad (56)$$

with values in the range [0, 1].
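The error of Equations 52 through 56 can be computed from two coefficient vectors and any common design matrix $M$ sampled densely around the circle or sphere. A sketch (Python with NumPy; the function name is ours, not from the chapter):

```python
import numpy as np

def lighting_error(v1, v2, M):
    """Error D(v1, v2) of Equation 56 between two lighting estimates.
    M is a design matrix of the form in Equation 42 (3D normals) or
    Equation 46 (2D normals), sampled densely over a sphere or circle."""
    x1, x2 = M @ v1, M @ v2          # irradiance profiles (Eqs. 52, 53)
    x1 = x1 - x1.mean()              # subtract the mean (Equation 54)
    x2 = x2 - x2.mean()
    corr = (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    return 0.5 * (1.0 - corr)        # Equation 56: values in [0, 1]
```

By construction the error is 0 for coefficient vectors that differ by a positive scale factor, and 1 for profiles that are perfectly anticorrelated.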
To test our ability to discriminate between lighting environments, we photographed a diffuse sphere in 28 different locations with a 6.3-megapixel Nikon D-100 digital camera set to capture in high-quality JPEG mode. The focal length was set to 70 mm, the f-stop was fixed at f/8, and the shutter speed was varied to capture two or three exposures per location. In total, there were 68 images. For each
image, the Adobe Photoshop “Quick Selection Tool” was used to locate the occluding contour of the sphere from which both 2D and 3D surface normals could be estimated. The 3D surface normals were used to estimate the full set of nine lighting environment coefficients, and the 2D surface normals along the occluding contour were used to estimate five coefficients. For both cases, the regularization term $\lambda$ in Equation 51 was set to 0.01. For each pair of images, the error (Equation
56) between the estimated coefficients was computed. In total, there were 2278
image pairs: 52 pairs were different exposures from the same location and 2226 pairs
were captured in different locations. The errors for all pairs for both models (3D and
2D) are shown in Fig. 9. In both plots, the 52 image pairs from the same location are
plotted first (“+”), sorted by error. The 2226 pairs from different locations are plotted next (“×”). Note that the axes are scaled logarithmically. For the 3D case,
the minimum error between an image pair from different locations is 0.0027 and the
maximum error between an image pair from the same location is 0.0023. Therefore,
the two sets of data, same location versus different location, are separated by a
threshold of 0.0025. For the 2D case, 13 image pairs (0.6%) fell below 0.0025. These
image pairs correspond to lighting environments that are indistinguishable based on
the five-coefficient model.

FIG. 9. Errors between image pairs corresponding to the same (“+”) and different (“×”) locations using the full nine-parameter model with 3D surface normals (top) and using the five-parameter model with 2D surface normals (bottom). Both the horizontal and vertical axes are scaled logarithmically.

To be useful in a forensic setting, lighting estimates from objects in the same


lighting environment should be robust to differences in color and material type, as
well as to geometric differences, since arbitrary objects may not have the full range
of surface normals available. To test our algorithm under these conditions, we
downloaded 20 images of multiple objects in natural lighting environments from
Flickr³ (Fig. 10). In each image, occluding contours of two to four objects were

³ http://www.flickr.com
FIG. 10. Superimposed on each image are the contours from which the surface normals and intensity values are extracted to form the matrix $M$ and the corresponding vector $\vec{b}$ (Equation 47).

specified using a semiautomated approach. A coarse contour was defined by painting along the edge of the object using Adobe Photoshop. Each stroke was then
automatically divided into quadratic segments, or regions, which were fit to nearby
points with large gradients. The analyzed regions for all images are shown in Fig. 10.
Analytic surface normals and intensities along the occluding contour were measured
from the regions. With the 2D surface normals and intensities, the five lighting
environment coefficients were estimated (Equation 51). The regularization term $\lambda$ in Equation 51 was increased to 0.1, which is larger than in the simulation due to an increased sensitivity to noise.
Across all 20 images, there were 49 pairs of objects from the same image and
1329 pairs of objects from different images. For each pair of objects, the error
between the estimated coefficients was computed. For objects in the same image, the
average error was 0.009 with a standard deviation of 0.007 and a maximum error of
0.027. For comparison, between objects in different images the average error was
0.295 with a standard deviation of 0.273. There were, however, 196 pairs of objects
(15%) from different images that fell below 0.027. The lighting environments in
these images (e.g., the two police images, the trees and skiers images, etc.) were
indistinguishable using the five coefficient model. For objects from the same image,
the pair with the maximum error of 0.027 is the basketball and basketball player. The
sweaty skin of the basketball player is somewhat shiny, a violation of the Lambertian assumption. In addition, the shoulders and arms of the basketball player provide
only a limited extent of surface normals, making the linear system somewhat ill
conditioned. In contrast, the objects from the same image with the minimum error of
0.0001 are the left and right pumpkins on the bench. Both pumpkins provide a large
extent of surface normals, over 200°, and the surfaces are fairly diffuse. Since the
surfaces fit the assumptions and the linear systems are well conditioned, the error
between the estimated coefficients is small.
We created three forgeries by mixing and matching several of the images in
Fig. 10. These forgeries are shown in Fig. 11. Regions along the occluding contour
of two to four objects in each image were selected for analysis. These regions are
superimposed on the images in the right column of Fig. 11. Surface normals and
intensities along these occluding contours were extracted, from which the five lighting environment coefficients were estimated (Equation 51) with the regularization term $\lambda = 0.1$. Shown in each panel is a sphere rendered with the estimated coefficients. These spheres qualitatively show discrepancies in the lighting.
For all pairs of objects originally in the same lighting environment, the average error
is 0.005 with maximum error of 0.01. For pairs of objects from different lighting
environments, the average error is 0.15 with a minimum error of 0.03.
The ability to estimate complex lighting environments was motivated by our
earlier work in which we showed how to detect inconsistencies in the direction to

FIG. 11. Shown on the left are three forgeries: the ducks, swans, and football coach were each added
into their respective images. Shown on the right are the analyzed regions superimposed in white, and
spheres rendered from the estimated lighting coefficients.

an illuminating light source (Section 2.1). The work described here generalizes this approach by allowing us to estimate more complex models of lighting, and in fact it can be adapted to estimate the direction to a single light source. Specifically, by considering only the two first-order spherical harmonics, $Y_{1,-1}(\cdot)$ and $Y_{1,1}(\cdot)$, the direction to a light source can be estimated as $\tan^{-1}\!\left( l_{1,-1} / l_{1,1} \right)$.
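As a sketch of this specialization (Python with NumPy; the function name is ours, not from the chapter), using atan2 rather than a bare arctangent keeps the correct quadrant:

```python
import numpy as np

def light_azimuth(l_1m1, l_1p1):
    """Angle (radians) to a single dominant light source from the two
    first-order coefficients l_{1,-1} (the y term) and l_{1,1} (the x
    term); equivalent to tan^{-1}(l_{1,-1} / l_{1,1}) with the
    quadrant resolved by the signs of the two coefficients."""
    return np.arctan2(l_1m1, l_1p1)
```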
When creating a composite of two or more people, it is often difficult to exactly
match the lighting, even if the lighting seems perceptually consistent. The reason for
this is that complex lighting environments (multiple light sources, diffuse lighting,
As an example of the progress being made toward speeding up
computers, speakers at the recent Winter General Meeting of the
American Institute of Electrical Engineers described a coming
generation of “gigacycle” computers now on the drawing boards.
Present electronic machines operate at speeds in the megacycle
range, with 50 million cycles per second representing the most
advanced state of the art. Giga means billion; thus the new round of
computers will be some thousand times as fast as those now
operating.
Among the firms who plan such ultraspeed computers are RCA,
IBM, and Sperry Rand Corporation. To achieve such a great increase
in speed requires faster electronic switches. Transistors have been
improved, and more exotic devices such as tunnel diodes, thin-film
cryotrons, magnetic thin-films, parametrons, and traveling-wave
tubes are now coming into use. Much of the development work is
being supported by the U.S. Bureau of Ships. Operational gigacycle
computers are expected within two years!
Not just the brickmaker, but the architect too has been busy in the
job of optimizing the computer. The science of bionics and the study
of symbolic logic lead to better ways of doing things. The computer
itself comes up with improvements for its next generation, making
one part do the work of five, and eliminating the need for whole
sections of circuitry. Most computers have a fixed “clock”; that is,
they operate at a certain cyclic rate. Now appearing on the scene
are “asynchronous” computers which don’t stand around waiting
when one job is done, as their predecessors did.
One advanced notion is the “growing” of complex electronic
circuitry, in which a completed amplifier, or array of amplifiers, is
pulled from the crystal furnace much the way material for transistors
is now grown. Pooh-poohed at first as ridiculous, the notion has
been tried experimentally. Since a computer is basically a multiplicity
of simple units, the idea is not far off at that. It is conceivable that
crystal structure can be exploited to produce millions of molecules of
the proper material properly aligned for the desired electronic action.
With this shrinking come the benefits of small size, low power
consumption, low cost, and perhaps lower maintenance. The
computer will be cheap enough for applications not now
economically feasible. As this happens, what will the computer do
for us tomorrow?
A figure of 7 per cent is estimated for the amount of paperwork
the computer has taken over in the business world. Computer men
are eyeing a market some five times that amount. It does not take a
vivid imagination to decide that such a percentage is perhaps
conservative in the extreme. Computer sales themselves promise to
show a fourfold increase in the five-year period from 1960 to 1965,
and in the past predictions have been exceeded many times.
As population grows and business expands in physical size and
complexity, it is obvious that the computer and its data-processing
ability will be called upon more and more. There is another factor,
that of the internationalizing of business. Despite temporary
setbacks of war, protective tariffs, insular tendencies, and the like, in
the long run we will live in one integrated world shrunk by data links
that can get information from here to there and back again so fast it
will be like conversing with someone across the room. Already
planners are talking worldwide computerized systems.
As a mathematical whiz, the computer will relieve us of our money
worries. Coupled with the credit card, perhaps issued to us at birth,
a central computer will permit us to make purchases anywhere in
the world and to credit our account with wages and other income. If
we try to overdraw, it may even flash a warning light as fast as we
put the card in the slot! This project interests General Dynamics
researchers.
Of more importance than merely doing bookkeeping is the impact
the computer will have on the planning and running of businesses.
Although it is found in surveys that every person thinks computer
application reaches to the level just below his in the management
structure, pure logic should ultimately win out over man’s emotional
frailties at all levels. Operations research, implemented by the
computer, will make for more efficient businesses. Decisions will
increasingly be made not by vice-presidents but by digital
computers. At first we will have to gather the necessary information
for these electronic oracles, but in time they will take over this
function themselves.
Business is tied closely to education, and we have had a hint of
the place the computer will make for itself in education. The effect
on our motivation to learn of the little need for much learning will be
interesting. But then, is modern man a weaker being because he
kills a tiger with a high-powered rifle instead of club or bare hands—
or has no need to kill the tiger in the first place?
After having proved itself as a patent searcher, the computer is
sure to excel as inventor. It will invade the artistic field; computers
have already produced pleasing patterns of light. Music has felt the
effect of the computer; the trend will continue. Some day not far off
the hi-fi enthusiast will turn on his set and hear original compositions
one after the other, turned out by the computer in as regular or
random form as the hearer chooses to set the controls. Each
composition will bring the thrill of a new, fresh experience, unless
we choose to go back in the computer’s memory for the old music.
The computer will do far more in the home than dream up random
music for listening pleasure. The recorded telephone answerer will
give way to one that can speak for us, making appointments and so
on, and remembering to bring us up to date when we get home. A
small computer to plug in the wall may do other things like selecting
menus and making food purchases for next week, planning our
vacations, and helping the youngsters with their homework. It is
even suggested that the computer may provide us with child-
guidance help, plus psychological counsel for ourselves and medical
diagnoses for the entire family. The entire house might be
computerized, able to run itself without human help—even after
people are gone, as in the grimly prophetic story by Ray Bradbury in
which a neat self-controlled home is shown as the curtains part in
the morning. A mechanical sweeper runs about gathering up dust,
the air conditioning, lighting, and entertainment are automatic, all
oblivious to the fact that one side of the house is blackened from the
blast of a bomb.
Perhaps guarding against that eventuality is the most important
job the computer can do. Applications of computing power to
government have been given; and hints made of the sure path from
simple tasks like the census and income tax, Peace Corps work, and
so on to decision-making for the president. Just as logic is put to
work in optimizing business, it can be used to plan and run a taut
ship of state. At first such an electronic cabinet member will be given
all available information, which it will evaluate so as to be ready to
make suggestions on policy or emergency action. There is more
reason for it going beyond this status to become an active agent,
than there is against. Government has already become so complex
that perhaps a human brain, or a collection of them, cannot be
depended on to make the best possible decision. As communications
and transportation are speeded up, the problem is compounded.
Where once a commander-in-chief could weigh the situation for days
before he had to commit himself and his country to a final choice, he
may now be called upon to make such a far-reaching decision in
minutes—perhaps minutes from the time he is awakened from a
sound sleep. The strongest opposition to this delegation of power is
man’s own vanity. No machine can govern, even if it can think, the
politician exclaims. The soldier once felt the same way; but
operations research has given him more confidence in the machine,
and SAGE and NORAD prove to him that survival depends on the
speed and accuracy of the electronic computer.
Incurable romanticism is found even among our scientific
community. The National Bureau of Standards describes a computer
called ADAM, for Absolutely Divine Automatic Machine. But the
scientists also know that ADAM, or man, needs help. Rather than
consider the machine a tool, or even an extension of man’s mind,
some are now concerned with a kind of marriage of man and
machine in which each plays a significant part. Dr. Simon Ramo,
executive vice president of Thompson Ramo Wooldridge, Inc., has
termed this mating of the minds “intellectronics.” The key to this
combination of man’s intellect and that of electronics is closer
rapport between the team members.
Department of Defense: Computer use in defense is typified in this BIRDIE system of the United States Army.

The man-machine concept has grown into a science called, for the
present at least, “synnoetics,” a coinage from the Greek words syn
and noe, meaning “together” and “perceive.” This science is defined
as the treating of the properties of composite systems, consisting of
configurations of persons, mechanisms, plant or animal organisms,
and automata, whose main attribute is that their ability to invent, to
create, and to reason—their mental power—is greater than the
mental power of their components.
We get a not-too-fanciful look into the future in a paper by Dr.
Louis Fein presented in the summer 1961 issue of American
Scientist, titled “Computer-related Sciences (Synnoetics) at a
University in 1975.” Dr. Fein is an authority on computers, as builder
of RAYDAC in 1952, and as founder and president of the Computer
Control Company. The paper ostensibly is being given to alumni
some years hence by the university president. Dr. Fein tells us that
students in the Department of Synnoetics study the formal
languages used in communication between the elements of a
synnoetic system, operations research, game theory, information
storage, organization and retrieval, and automatic programming.
One important study is that of error, called Hamartiology, from the
Greek word meaning “to miss the mark.”
The speaker tells us that this field was variously called cybernetics,
information science, and finally computer-related science before
being formally changed to the present synnoetics. A list of the
courses available to undergraduates includes:
Von Neumann Machines and Turing Machines
Elements of Automatic Programming
Theory, Design, and Construction of Compilers
Algorithms: Theory, Design, and Applications
Foundations of the Science of Models
The Theory, Design, and Application of Non-Numeric Models
Heuristics
Self-Programming Computers
Advice Giving—Man to Machine and Machine to Man
Simulation: Principles and Techniques
Pattern Recognition and Learning by Automata
The Grammar, Syntax, and Use of Formal Languages for
Communication Between Machine and Machine and Between
Man and Man
Man-Automaton Systems: Their Organization, Use, and Control
Problem-Solving: an Analysis of the Relationship Between the
Problem-Solver, the Problem, and the Means for Solution
Measurements of the Fundamental Characteristics of the Elements
of Synnoetic Systems

Of course, synnoetics spills over into the other schools, as shown


in the following typical courses taught:

Botany Department
Machine-Guided Taxonomy in Botany

Business School
Synnoetic “Business Executives”

Engineering School
Theory of Error and Equipment Reliability
Design of Analog and Digital Computers

Humanities Department
Theory of Creative Processes in the Fine Arts
Law School
Patent and Precedence Searches with Computers
The Effect of Automata on the Legislative and Judicial Process

Mathematics Department
The Theory of Graphs and the Organization of Automata

Medical School
Computer-Aided Medical Diagnosis and Prescription for Treatment

Philosophy
The Relationships between Models and the Phenomena That Are
Modeled

Psychology Department
Studies in Intuition and Intellect of Synnoetic Systems
Simulation in the Behavioral Sciences

Sociology Department
Synnoetics in Modern Society
The speaker proudly refers to the achievement of the faculty
mediator and a computer in settling the “famous” strike of 1970.
He simply got both sides first to agree that each would benefit by concentrating
attention—not on arguing and finally settling the issues one at a time—but on
arguing and finally settling on a program for an automaton. This program would
evaluate the thousands of alternative settlements and would recommend a small
class of settlements each of which was nearly optimum for both sides. The
automaton took only 30 minutes to produce the new contract last year. It would
have taken one year to do this manually, and even then it would have been done
less exhaustively. Agreeing on the program took one week. Of course, you have
already heard that in many areas where people are bargaining or trying to make
optimum decisions, such as in the World Nations Organization, in the World Court,
and in local, federal, and world legislative bodies, serious consideration is now
being given to convincing opposing factions to agree on a program; once they have
agreed, the contract or legislation or judgment or decision produced with the
program would be accepted as optimum for both sides.
Automata may also be provided to judges and juries to advise them of the effects
of such factors as weight of evidence on verdicts in civil cases.
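In modern terms, an automaton that recommends "a small class of settlements each of which was nearly optimum for both sides" is computing what is now called a Pareto front: the settlements that no alternative beats for both parties at once. A minimal sketch, with all settlement names and scores invented for illustration:

```python
# Hypothetical sketch of the settlement automaton described above.
# Each candidate settlement carries one score per side; the program
# keeps only the undominated candidates, i.e. those for which no
# alternative is at least as good for both sides and strictly better
# for one. Names and numbers here are illustrative assumptions.

def pareto_optimal(settlements):
    """Return the settlements no alternative dominates."""
    front = []
    for name, a, b in settlements:
        dominated = any(
            (a2 >= a and b2 >= b) and (a2 > a or b2 > b)
            for _, a2, b2 in settlements
        )
        if not dominated:
            front.append((name, a, b))
    return front

candidates = [
    ("wage+2%, 40h week", 7, 6),
    ("wage+4%, 42h week", 9, 3),
    ("wage+1%, 38h week", 4, 8),
    ("wage+3%, 44h week", 6, 5),  # dominated by the first candidate
]

for name, labor, management in pareto_optimal(candidates):
    print(name, labor, management)
```

However the alternatives are scored, the point matches the passage: the program returns a short list of nearly optimum choices for both sides rather than a single winner.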
Dr. Fein makes an excellent case for the usefulness of the science
of synnoetics; the main point of challenge to his paper might be that
its date is too conservatively distant. Of interest to us here is the
idea of man and machine working in harmony for the good of both.
Another paper, “The Coming Technological Society,” presented by
Dr. Simon Ramo at the University of California at Los Angeles, May 1,
1961, also discusses the possible results of man-machine
cooperation during the remainder of the twentieth century. He lists
more than a dozen specific and important applications for
intellectronics in the decades immediately ahead of us. Law,
medicine, engineering, libraries, money, and banking are among
these. Pointing out that man is as unsuited for “putting little marks
on pieces of paper” as he was for building pyramids with his own
muscles, he suggests that our thumbprints and electronic scanners
will take care of all accounting. Tongue in cheek, he does say that
there will continue to be risks associated with life; for instance, a
transistor burning out in Kansas City may accidentally wipe out
someone’s fortune in Philadelphia.
The making of reservations is onerous busywork man should not
have to waste his valuable time on; and the control of moving things,
too, is better left to the machine, for the different reason that man’s
unaided brain cannot cope with complex and high-speed traffic
arteries, be they in space or on Los Angeles freeways. Business and
military management will continue to be aided by the electronic
machine.
But beyond all these benefits are those more important ones to
our brains, our society, and culture. Teaching machines, says Dr.
Ramo, can make education ten times more effective, thus increasing
our intellect. And this improved intellect, multiplied by the electronic
machine into intellectronic brainpower, is the secret of success in the
world ahead. Instead of an automated, robotlike regimented world
that some predict, Ramo sees greater democracy resulting. Using
the thumbprint again, and the speed of electronics, government of
our country will be truly by the people as they make their feelings
known daily if necessary.
Intellectronic legislation will extend beyond a single country’s
boundaries in international cooperation. It will smash the language
and communication barriers. It will permit and implement not only
global prediction of weather, but global control as well. Because of
the rapid handling of vast amounts of information, man can form
more accurate and more logical concepts that will lead to better
relations throughout the world. Summing up, Dr. Ramo points out
that intellectronics benefits not only the technical man but social
man as well:
The real bottleneck to progress, to a safe, orderly, and happy transition to the
coming technological age, lies in the severe disparity between scientific and
sociological advance. Having discussed technology, with emphasis on the future
extension of man’s intellect, we should ask: Will intellectronics aid in removing the
imbalance? Will technology, properly used, make possible a correction of the very
imbalance which causes technology to be in the lead? I believe that the
challenging intellectual task of accelerating social progress is for the human mind
and not his less intellectual partner. But perhaps there is hope. If the machines do
more of the routine, everyday, intellectual tasks and insure the success of the
material operation of the world, man’s work will be elevated to the higher mental
domains. He will have the time, the intellectual stature, and hence the inclination
to solve the world’s social problems. We must believe he has the capability.
Thompson Ramo Wooldridge, Inc.

Information in many forms can be displayed with “polymorphic” data-processing systems.

Antedating synnoetics and intellectronics is another idea of such a
relationship. In his book The World, The Flesh and the Devil, J. D.
Bernal considers man’s replacement of various of his body’s parts
with mechanical substitutes until the only organic remains would be
his brain. This is a sort of wrong-end-to synnoetics, but in 1929
when the book was published there was already plenty of raw
material for such a notion: wooden legs, hooks or claws for hands,
and metal plates in place of bone; and the artificial heart was
already under development. More recently we have seen the
artificial kidney used, along with other organs. We have also added
electronic gear to our organic components, for example the
“pacemaker” implanted in many laggard hearts to keep them beating
in proper cadence, plastic plumbing, and the like. There is a word for
this sort of part-organic, part-mechanical man: the name “cyborg”
for cybernetic organism was proposed by two New York doctors.
Their technical definition of cyborg is “an exogenously extended
organizational complex functioning as a homeostatic system.” There
is of course strong precedent in nature for the idea of such a
beneficial combination: symbiosis, the co-existence or close union of
two dissimilar organisms. The shark and his buddy, the pilot fish, are
examples; as are man and the many parasites to which he is host.
The idea of man being part of machine harks back to youthful
rides in soapbox racers, and later experiences driving cars or flying
aircraft. The pilot who flew “by the seat of his pants” in the early
days easily felt himself part of the machine. As planes—and cars—
grew bigger and more complex, this “one-manship” became more
remote and harder to identify. The jet transport pilot may well have
the feeling of handling a train when he applies force to his controls
and must wait for it to be amplified through a servo system and
finally act on the air stream. In the space age the man-machine
combination not only survives but also flourishes. Arthur C. Clarke
writes in a science-fiction story of a legless space man who serves
well and happily in the weightlessness of his orbiting satellite station.
We have two stages of development, then, not necessarily
sequential: man working with the machine and man as part of the
machine. Several writers have suggested a third stage in which the
machine gradually supplants the weaker human being much as other
forms eased out the dinosaur of old. W. Olaf Stapledon’s book,
Last and First Men, describes immortal and literally giant brains. Many
writers believe that these “brains” will not be man’s, but those of the
machine, since frail humanity cannot survive in its increasingly
hostile environment.
Arthur C. Clarke is most articulate in describing what he calls the
evolutionary cycle from man to machine. As the discovery of tools by
pre-man created man, so man’s invention of thinking machines set
about the workings that will make him extinct. Clarke theorizes that
this breakthrough by man may well be his last, and that his
machines will “think” him off the face of the earth!
Hughes Aircraft Company

Withstanding underwater pressures, at depths too great for human divers, a Mobot vehicle demonstrates in this artist’s concept how it can perform salvage and rescue operations at the bottom of the ocean.

As we move into a technology that embraces communication at a
distance of millions of miles, survival under death-dealing radiation,
and travel at fantastic speeds, man’s natural equipment falters and
he must rely on the machine both as muscle and brain. Intelligence
arose from life but does not necessarily need life, in the sense we
think of it, to continue. Thus the extension of man’s intellect by
electronics as hailed by Dr. Ramo will lead ultimately to our
extinction.
Clarke feels that the man-machine partnership we have entered,
while mutually benevolent, is doomed to instability and that man
with his human shortcomings will fall by the wayside, perhaps in
space, which may well be the machine’s true medium. What will
remain will be the intelligent machine, reduced as time goes on to
“pure” intelligence free to roam where it will and do what it wants, a
matterless state of affairs that even Clarke modestly disclaims the
imagination to speculate upon.
Before writing man off as a lost cause, we should investigate a
strong argument against such a take-over by the machine. Man
stands apart from other creatures in his consciousness of himself. He
alone seems to have the ability to ponder his fate, to reflect, and to
write books about his thoughts and dreams. Lesser animals
apparently take what comes, do what they have to do, and get
through this life with a minimum of changing their environment and
themselves. Thus far the machines man has built do not seem to be
conscious of themselves. While “rational beings,” perhaps, they do
not have the “ability to laugh” or otherwise show conscious
awareness of their fate. A term applied to primitive mechanical
beings is “plugsuckers.” They learn to seek out a wall socket or other
form of energy and nourish themselves much as animals must do.
Just where man himself switched from plugsucking and began to
rewire his own world is a fuzzy demarcation, but he seems to have
accomplished this.
Consciousness is subjective in the extreme, and thus far only in
fiction have computers paused to reflect and consider what they
have done and its effect on them. However, the machine-builder, if
not yet the machine itself, is aware of this consciousness problem.
The Hoffman Electronics Corporation recently published an
advertisement in the form of a science-fiction story by A. E. Van
Vogt. The hero is a defense vehicle, patrolling the Pacific more
effectively because it thinks it is king of the Philippine Deep. Its
name is Itself, and it has a built-in alter ego. Hoffman admits it has
not produced a real Itself—yet, but points out calmly that the
company’s business is the conversion of scientific fiction to scientific
fact.
It has been suggested that mechanical consciousness may evolve
when the computer begins to reproduce itself, a startling conception
blessed in theory by logicians and mathematicians, as well as
philosophers. A crude self-replicating model has been built by
scientists—a toy train that reproduces itself by coupling together the
proper cars to copy the parent train, a whimsical reflection of
Samuel Butler’s baby engines playing about the roundhouse door.
Self-reproducing machines may depend on a basic “cell”
containing a blueprint of what it should look like when complete,
which simply hunts around for the proper parts and assembles itself.
In the process it may even make an improvement or two. Having
finished, it will make a carbon copy of its blueprint and start another
“baby” machine on the way. Writers on this subject—some under the
guise of science-fiction—wonder at what point the machines will
begin to wonder about how they came to be. Will they produce
philosophic or religious literature, or will this step in evolution prove
that consciousness was a bad mutation, like seven fingers or three
heads, and drop it from the list of instructions?
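The blueprint-carrying “cell” sketched above can be caricatured in a few lines of code: a machine reads its own blueprint, draws the matching parts from a common pool, and hands the finished child a fresh copy of that blueprint. Everything here, the class, the part names, the pool, is an invented illustration, not a model of any machine the text describes:

```python
# A toy sketch of blueprint-driven self-reproduction, on the assumption
# that "assembly" is just drawing listed parts from a shared pool.
from collections import Counter

class Machine:
    def __init__(self, blueprint):
        self.blueprint = list(blueprint)  # the plan the machine will copy

    def replicate(self, parts_pool):
        """Assemble a child from the pool, if every part is available."""
        needed = Counter(self.blueprint)
        if any(parts_pool[p] < n for p, n in needed.items()):
            return None                   # not enough raw material
        for part, n in needed.items():
            parts_pool[part] -= n         # consume the parts
        return Machine(self.blueprint)    # child carries a blueprint copy

pool = Counter({"frame": 2, "motor": 2, "sensor": 4})
parent = Machine(["frame", "motor", "sensor", "sensor"])
child = parent.replicate(pool)
grandchild = child.replicate(pool)        # the child can do the same
print(child.blueprint == parent.blueprint)  # True: a faithful copy
```

The sketch also hints at the passage’s open question: the copying step is where a “mutation,” an improvement or a dropped instruction, could be introduced before the blueprint is handed on.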
Clarke admits that the take-over by the machines is centuries off;
meantime we can enjoy a golden age of intellectronic partnership
with the machine. Linus Pauling, pointing out that knowledge of
molecular structure has taken away the mystery of life, hopes that a
“molecular theory of thinking” will be developed and so improve man
that he may remake his thoughts and his world. Mathematician John
Williams believes that existing human intelligence can preserve its
distinction only by withdrawing from competition with the machine
and defining human intelligence rigorously enough to exclude that of
the machines. He suggests using the computer not just for a
molecular theory of thinking, but also in the science of genetics to
design our children!
Whatever lies ahead, it seems obvious that one of the most
important things the computer can help us think about is the
computer itself. It is a big part of our future.
Index

Abacus, 5, 21, 22, 60, 85, 129, 178, 181
Abstracting computer, 245, 248
Accuracy
analog computer, 82
digital computer, 87
Ackerman, 110
ADAM computer, 258
Adaptive principle, 205
Adders, 107, 108, 115
Adding machine, 129
Addition, computer, 106
Address, computer, 63
Advertising, use of computer, 180
AID, 183, 184
AIEE, 254
Aiken, 46
Air Force, 6, 132, 133, 151, 160, 182, 225
Airborne computer, 90, 154, 158, 162
AiResearch Mfg. Co., 69
Airline reservations, computer, 58, 183, 184
Algebra, Boolean, 8, 110, 119
Alpha rhythm, 126
Alphanumeric code, 104
American Premium Systems, Inc., 175
Analog computer, 21, 45, 72, 74, 80, 125, 203
direct, 76, 79
direct-current, 76
discrete, 80
indirect, 76, 79
mechanical differential analyzer, 76
scaling, 76
Analytical engine, 36, 37
AND gate, 112, 113, 117, 119
Antikythera computer, 25
Apollo computer, 182
space vehicle, 169
Applications, digital computer, 92
A priori concept, 126, 135
APT computer, 209
Aquinas, St. Thomas, 235
Arabic numbers, 23
Archytas, 25
Arithmetic unit, computer, 51, 60
Aristotle, 26
Aristotelian logic, 109
Arizona Journal, 179
Army, U. S., 21, 78, 146, 259
Ars Magna, 28, 29
ARTOC, 157
Artron, 136
Ashby, W. Ross, 51, 124, 128, 251
ASCC computer, 155
Associated Press computer system, 177
Asynchronous computer, 255
Athena computer, 52
Atlas missile, 4, 168
Atlas-Centaur missile, 169
Atomic Energy Commission, U. S., 149
Automatic
control, 80, 203
pilot, 203
Automation, 26, 80, 173, 181, 201, 202, 203, 211, 217
Automaton, 26
Auto-parking, use of computer, 178
Autonetics, 207
AUTOPROMPT computer, 210
AUTOTAG, 156
AutoTutor teaching machine, 213, 225

B-29, 45, 77, 82
Babbage, 5, 35, 37, 41, 51
Babylonian arithmetic, 23
Ballistic computer, 83
Banking, 1, 172, 173
Bar Association, American, 152, 249
Battelle Memorial Institute, 195
Batten, Barton, Durstine, & Osborn, 180
Bell Telephone Laboratories, 4, 147, 241
Bendix Corp., 182, 190, 218
Bendix G-15 computer, 183, 188
Bernal, J. D., 264
Bernstein, Alex, 141
Bettelheim, Bruno, 144
BIAX memory units, 10
Bierce, Ambrose, 43, 121
BINAC computer, 7, 47
Binary, 98
digit, 55, 104
notation, 101, 103
pure, 102, 104
system, 85, 97, 99
variables, 114
Bionics, 7, 132, 135, 255
BIRDIE, 259
Birds, counting, 18
Bit, 55, 104
“Black box” concept, 50, 115
BLADES system, 191
Block diagram, 58
BMEWS, 159
Boeing Airplane Co., 186
Boltzmann equation, 158
Bomarc missile, 186
Book of Contemplation, 27
Book of Knowledge, 6, 226
Boole, George, 38, 110
Boolean algebra, 38, 110, 119
Bradbury, Ray, 153, 257
Brain, 121
Brain
computer, 128, 129, 130
human, 87, 125, 128
BRAINIAC computer, 88, 117
Britton, Lionel, 121
Buffer
computer, 55
lexical, 238
Buildings, automation of, 217
Burack, Benjamin, 44
Bureau of Mines, U. S., 189
Bureau of Ships, U. S., 255
Burke, Edmund, 32
Burkhart, William, 45
Bush, Vannevar, 13, 45, 76
Business, computer in, 171
Business management, use of computer, 12, 143
Butler, Samuel, 32, 33, 121, 252, 268

CALCULO computer, 75
Calculus Ratiocinator, 109
Calendars as computers, 24
California Institute of Technology, 169
Cancer Society, American, 193
Candide, 30
Capek, Karel, 43, 121, 215
Caplin, Mortimer, 150
Carroll, Lewis, 38, 118
CDC 1604 computer, 165
Celanese Corp. of America, 207
Celestial simulator, 85
Census, 41
Census Bureau, U. S., 149
Chain circuit, 127
Characteristica Universalis, 109
Charactron tube, 66
Checkers (game), 8, 143
Checking, computer, 60
Checkout computer, 183
Chemical Corp., 249
Chess, 8, 9, 16, 35, 99, 142, 156
Circuit
chain, 127
delay-line, 63
flip-flop, 63, 115
molecular, 9, 253
printed, 62
reverberation, 128
Clapp, Verner, 248
Clarke, Arthur C., 265
CLASS teaching machine system, 226-228
Clock, 20, 24, 56, 85
COBOL language, 234
Code, computer
binary-coded decimal, 103, 106
binary-octal, 106
economy, 106
excess-3, 105, 114
“Gray,” 106
reflected binary, 106
self-checking, 105
Color computer, 4
Commercial Art, 175
Commission on Professional and Hospital Activity, 194
Communication, use of computers, 179
Computer
ADAM, 258
addition, 106
airborne, 90, 154, 158, 162
analog, 21, 45, 72, 74, 80, 125, 203
direct, 76, 79
direct-current, 76
discrete, 80
indirect, 76, 79
mechanical differential analyzer, 76
scaling, 76
Antikythera, 25
Apollo, 182
space vehicle, 169
applications, digital, 92
ASCC, 155
asynchronous, 255
Athena, 52
ballistic, 83
Bendix G-15, 183, 188
BINAC, 7, 47
BRAINIAC, 88, 117
CALCULO, 75
CLASS, 226-228
code, binary-coded decimal, 103, 106
color, 4
definition, 129
dictionary, 49, 50
difference engine, 5, 35
digital, 18, 45, 73, 84, 125, 203
division, 107
do-it-yourself, 75, 88, 117, 147
electrical-analog, 75
electronic, 1, 46, 122, 151
ENIAC, 7, 40, 46, 85, 215
ERMA, 173
family tree, 86
FINDER system, 161
flow chart, 58, 59
GE 210, 172
GE 225, 245
general-purpose, 54, 81, 191
gigacycle, 254
“Hand,” 132, 214, 215
household, 15, 257
hybrid, 80, 84, 92
ILLIAC, 197
input, 51, 54, 125
JOHNNIAC, 11, 47, 129, 140, 142
language, 233
LARC, 47, 162, 191
LGP-30, 198
limitations, 89
MANIAC, 47, 156, 165
Memex, 13
mill, 38, 51, 60
MIPS, 159
MOBIDIC, 157
MUSE, 48
music, 11, 92, 196, 257
on-line, 81, 205
on-stream, 83, 207
output, 51, 65, 125
parts, 50, 52, 53
problem-solving, 140, 143
Psychological Matrix Rotation, 78, 94
Q-5, 77
RAMAC, 150, 151, 198, 199
Range Keeper Mark I, 42
RAYDAC, 260
RCA 501, 151
“real-time,” 78, 168, 202, 205
RECOMP, 47
revolution, 251
Sabre, 183
SAGE, 3, 12, 37, 53, 158, 159, 226, 259
sequential, 126
“Shoebox,” 242
“software,” 54
spaceborne, 167
special-purpose, 79
SSEC, 155, 156
Stone Age, 21
store, 36, 62
STRETCH, 47, 48
subtraction, 106
testing, 117
UNIVAC, 47, 149, 151, 171, 221
VIDIAC, character-generator, 242
Zuse L23, 199
Computer Control Co., 260
Conjunctive operation, 37, 51, 110
Consciousness, 144, 145, 267
Continuous analog computer, 80
Continuous digital computer, 80
Continuous quantity, 73
Control, computer, 51, 56
Control Data Corp., 194
Conversion
analog-to-digital, 74
digital-to-analog, 74
Converters, 94
Cook, William W., 29
Copland, Aaron, 11, 196
Cornell Medical College, 123
Cornell University, 133
Corrigan Communications, 231
Council on Library Resources, 248
Counting
Australian, 20
birds, 18
boards, 20
digital, 84
machines, 20
man, 19
modulo-, 97, 101
Credit card, 13, 256
Cryogenics, 70
components, 63
Cryotron, 9, 88, 141, 254, 255
Cybertron, 135, 139
Cyborg, 265

Daedalus, 18
Darwin, Charles, 32, 137, 252
Data
link, 14, 185, 256
logger, 205
processing, 22, 171, 264
recording media, 57
Daystrom, Inc., 211
Dead Sea Scrolls, 235
Decimal system, 19
Decision-making, 91
Defense, use of computer, 259
Delay-line circuit, 63
DeMorgan, Augustus, 38, 110, 115
Department of Commerce, U. S., 149, 221
Department of Defense, U. S., 148, 234
Design, use of computer, 14, 172, 186, 268
Desk calculator, 51
Diagnostic use of computer, 194
Diamond Ordnance Fuze Laboratory, U. S. Army, 69
Dictionary, computer, 49, 50
DIDAK teaching machine, 224
Difference engine, 5, 35
Digiflex trainer, 225
Digital computer, 18, 45, 73, 84, 125, 203
Digital differential analyzer, 94
Digitronics, 236
Discrete quantity, 73
Disjunctive operation, 110
Division, computer, 107
Dodgson, Charles L., 38
Do-it-yourself computer, 75, 88, 117, 147
Douglas Aircraft Co., 65
Dow Chemical Corp., 208
Du Pont Corp., 208
Dunsany, Lord, 108

Eccles-Jordan circuit, 47
Eckert, J. Presper, 47, 85