
Chapter 9

Viewing a Local Illumination Model

Types of Projection

There are different projection types that can
be used with the camera.

The most common types are the
perspective and orthographic projections.

The CAMERA_TYPE should be the first
item in a camera statement. If none is
specified, the perspective camera is the
default.
Perspective projection

The perspective keyword specifies the default
perspective camera which simulates the classic pinhole
camera.

It depicts three-dimensional objects on a planar (two-dimensional)
surface (e.g. paper) so as to approximate actual visual perception.

The (horizontal) viewing angle is either determined by
the ratio between the length of the direction vector and
the length of the right vector or by the optional keyword
angle.

The viewing angle has to be larger than 0 degrees and
smaller than 180 degrees.
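
As a rough sketch of that relation (assuming POV-Ray's convention that tan(angle/2) = 0.5 * |right| / |direction|; the vector lengths below are only illustrative values):

// Horizontal viewing angle recovered from the camera vector lengths.
double directionLength = 1.0;    // |direction| (hypothetical)
double rightLength     = 1.33;   // |right| for a 4:3 image (hypothetical)
double angleDegrees =
    Math.toDegrees(2.0 * Math.atan2(0.5 * rightLength, directionLength));
// With these values the viewing angle comes out at roughly 67 degrees.
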
Orthographic projection

Uses parallel camera rays to create an image of the scene.

The area of view is determined by the lengths of the right
and up vectors.

If, in a perspective camera, you replace the perspective
keyword by orthographic and leave all other parameters
the same, you'll get an orthographic view with the same
image area, i.e. the size of the image is the same.

The same can be achieved by adding the angle keyword
to an orthographic camera. A value for the angle is
optional.

Parallel lines remain parallel and there is no perception of depth.
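
For comparison with the OpenGL code used later in Chapter 10, a minimal JOGL sketch of the two projection set-ups (assuming a GL2 object named gl; the view-volume numbers are illustrative):

gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();

// Perspective view volume: rays converge on the eye, distant objects shrink.
gl.glFrustum(-1.33, 1.33, -1.0, 1.0, 2.0, 100.0);

// Orthographic view volume: parallel rays, no foreshortening.
// Swapping in glOrtho with the same window keeps the same image rectangle at the near plane:
// gl.glOrtho(-1.33, 1.33, -1.0, 1.0, 2.0, 100.0);

gl.glMatrixMode(GL2.GL_MODELVIEW);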

The visible parts of the scene change when
switching from perspective to orthographic view.

As long as all objects of interest are near the
look_at point, they will still be visible if the
orthographic camera is used.

Objects farther away may get out of view while
nearer objects will stay in view.

If objects are too close to the camera location
they may disappear.

Too close here means behind the orthographic
camera's projection plane (the plane that goes
through the look_at point).
Fisheye projection


This is a spherical projection. The viewing
angle is specified by the angle keyword.

An angle of 180 degrees creates the
"standard" fisheye while an angle of 360
degrees creates a super-fisheye ("I-see-
everything-view").

If you use this projection you should get a
circular image.
Spherical projection

Using this projection the scene is projected onto a
sphere.

The first value after angle sets the horizontal
viewing angle of the camera.

With the optional second value, the vertical viewing
angle is set: both in degrees. If the vertical angle is
not specified, it defaults to half the horizontal angle.

It uses rectangular coordinates instead of polar coordinates.

It allows effects such as "environment mapping",
often used for simulating reflections in scanline
renderers.
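
One way the rectangular mapping could look, as a minimal sketch (the pixel coordinates x, y, the image width/height and the two angles hAngle/vAngle in degrees are assumed given; all names are illustrative):

// Map a pixel to a pair of spherical viewing angles
// (rectangular, not polar, coordinates).
double u = (x + 0.5) / width  - 0.5;            // -0.5 .. 0.5 across the image
double v = 0.5 - (y + 0.5) / height;            // -0.5 .. 0.5, up is positive
double longitude = Math.toRadians(u * hAngle);  // horizontal ray angle
double latitude  = Math.toRadians(v * vAngle);  // vertical ray angle
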
Lighting and Shading
General Principles

What the human eye (or virtual camera) sees is a result of light
coming off an object or other light source and striking receptors in the
eye.

In order to understand and model this process, it is necessary to
understand different light sources and the ways that different materials
reflect those light sources.

The techniques described here are heuristics which produce
appropriate results, but they do not work the way reality works,
because that would take too long to compute, at least for interactive
graphics.

Instead of specifying a single colour for a polygon, we specify the
properties of the material that the polygon is supposed to be made of
(i.e. how the material responds to different kinds of light), and the
properties of the light or lights shining onto that material.
Illumination Models
An illumination model, also called a lighting
model, is used to calculate the intensity of
light that we should see at a given point on
the surface of an object.

Light intensity refers to the strength or
amount of light produced by a specific
lamp source.

It is the measure of the wavelength-
weighted power emitted by a light source.

Equation for computing illumination

Usually includes: the light source intensity, the surface material's
reflection coefficients, and the geometry of the surface relative to the
light and the viewer.

Light Bounces at Surfaces

Light strikes A

Some reflected

Some absorbed

Some reflected light from
A strikes B

Some reflected

Some absorbed

Some of this reflected light strikes A and so on

The infinite reflection, scattering and absorption of
light is described by the rendering equation.
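
In its usual form the rendering equation reads

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where L_o is the outgoing radiance at surface point x in direction \omega_o, L_e the emitted radiance, f_r the surface reflectance (BRDF), L_i the incoming radiance from direction \omega_i, and n the surface normal.
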
Global Illumination (Lighting)
Model

Global illumination: model interaction of light
from all surfaces in scene (track multiple
bounces)

Local Illumination (Lighting)
Model

One bounce!

Real time rendering

Simple! Only considers:
– Light sources
– Viewer position
– Surface Material properties

Does not consider light reflected from other surfaces.


Certain things cannot easily be rendered
with this model:

Brushed metal

Marble (subsurface scattering)

Colour bleeding (surfaces are coloured by reflection
of coloured light from nearby surfaces)

Light-Material Interaction

Light strikes object, some absorbed, some
reflected

Fraction reflected determines object color
and brightness

Example: A surface looks red under white
light because red component of light is
reflected, other wavelengths absorbed

Reflected light depends on surface
smoothness and orientation.
Light Sources

General light sources are difficult to model
because we must compute the effect of light
coming from all points on the light source

Basic Light Sources

We generally use simpler light sources

Abstractions that are easier to model

Phong Illumination Model

Simple lighting model that can be computed
quickly

3 components:

Diffuse

Specular

Ambient

Compute each component separately

Vertex Illumination = ambient + diffuse + specular

Materials reflect each component differently

Material reflection coefficients control reflection.

Compute lighting (components) at each
vertex (P)

Uses 4 vectors from the vertex:

To light source (l)

To viewer (v)

Normal (n)

Mirror direction (r)

Mirror Direction

Angle of reflection = angle of incidence
(both measured from the surface normal)

Normal is determined by surface
orientation

The three vectors must be coplanar
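
A minimal sketch of this computation, using r = 2(n.l)n - l with unit vectors n (the normal) and l (towards the light), stored here as plain double[3] arrays for illustration:

// Mirror direction r from the normal n and the light direction l.
double nDotL = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
double[] r = {
    2.0 * nDotL * n[0] - l[0],
    2.0 * nDotL * n[1] - l[1],
    2.0 * nDotL * n[2] - l[2]
};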

Surface Roughness

Smooth surfaces: more reflected light
concentrated in mirror direction

Rough surfaces: reflects light in all
directions

Diffuse (Lambertian) reflection

Gives matte, non-shiny illumination and shading.

When light hits an object with a rough surface,
it is reflected in all directions

Amount of light hitting the surface depends on
the angle between the normal vector and the
incident vector of the incoming light.

The larger the angle (up to 90 degrees), the
larger the area the incident light is spread over
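
A minimal sketch of the resulting diffuse term, I_d = kd * lightIntensity * max(0, n.l), where kd is the material's diffuse reflection coefficient and n, l are unit vectors (all names illustrative):

// Lambertian diffuse contribution for one light and one colour channel.
double nDotL = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
double diffuse = kd * lightIntensity * Math.max(0.0, nDotL);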

Example of diffuse

Specular reflection

Direct reflections of the light source off
a shiny surface.

Smooth surfaces

Specular light contribution

Incoming light is reflected out within a narrow range of directions

Specular reflection is brightest in the mirror direction

Drops off away from the mirror direction

Depends on viewer position relative to
mirror direction

Modeling Specular Reflections
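
A minimal sketch of the specular term, I_s = ks * lightIntensity * max(0, r.v)^alpha, where ks is the material's specular coefficient, r the mirror direction, v the unit vector towards the viewer and alpha the shininess coefficient described on the next slide (all names illustrative):

// Phong specular contribution for one light and one colour channel.
double rDotV = r[0]*v[0] + r[1]*v[1] + r[2]*v[2];
double specular = ks * lightIntensity * Math.pow(Math.max(0.0, rDotV), alpha);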

The Shininess Coefficient, α

α controls falloff sharpness

High α = sharper falloff = small, bright
highlight

Low α = slow falloff = large, dull highlight

α between 100 and 200 = metals

α between 5 and 10 = plastic look

Specular light: Effect of ‘α’

Ambient lighting

Light reflected or scattered from other
objects in the scene

Environmental light

Background illumination

No direction!

Independent of light position, object
orientation, observer’s position or
orientation

Combined lighting models

Combining ambient, diffuse and specular
highlights gives the Phong Illumination
model.
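
A compact sketch of the combined model for one light and one colour channel (illustrative only: ka, kd, ks are the material's reflection coefficients, alpha its shininess, and vectors are plain double[3] arrays):

// I = ka*Ia + Il*( kd*max(0, n.l) + ks*max(0, r.v)^alpha )
static double phong(double[] n, double[] l, double[] v,
                    double ka, double kd, double ks, double alpha,
                    double ambientIntensity, double lightIntensity) {
    double nDotL = dot(n, l);
    double[] r = { 2*nDotL*n[0] - l[0],     // mirror direction
                   2*nDotL*n[1] - l[1],
                   2*nDotL*n[2] - l[2] };
    double diffuse  = kd * Math.max(0.0, nDotL);
    double specular = ks * Math.pow(Math.max(0.0, dot(r, v)), alpha);
    // (A fuller version would also zero the specular term when nDotL <= 0.)
    return ka * ambientIntensity + lightIntensity * (diffuse + specular);
}

static double dot(double[] a, double[] b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}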

Shading

We know how to colour single points on the
surface, but how do we colour the whole
object?

Shading is performed during rasterisation

Flat shading

Gouraud shading

Phong shading

Shading models

Flat shading (one lighting calculation per
polygon)

Gouraud shading (one lighting calculation
per vertex)

Phong shading (one calculation per pixel)

Flat shading

Colour is computed once for each
polygon

All pixels in a polygon are set to the same
colour

Works for objects made of flat faces

Gouraud shading

Colour is computed once per vertex using
the local illumination model

The vertex colours are then interpolated over the
polygon's surface

Phong shading

Lighting computation is performed at each
pixel

Normal vectors are interpolated over the
polygon
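
A minimal sketch of that per-pixel step (t is an interpolation weight between two vertex normals n0 and n1; a real rasteriser interpolates across the whole triangle, and all names here are illustrative):

// Interpolate the normal, then renormalise it before lighting the pixel.
double[] n = { (1 - t) * n0[0] + t * n1[0],
               (1 - t) * n0[1] + t * n1[1],
               (1 - t) * n0[2] + t * n1[2] };
double len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
n[0] /= len;  n[1] /= len;  n[2] /= len;
// The renormalised n is then fed into the illumination model for this pixel.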

Chapter 10
Application Modeling

3D Modeling

3D modeling comes before rendering when making a video or
animated model.

3D modeling involves using computer software to create
geometric representations of objects.

Designers use specialized software to make these virtual
models, including Rhino, SketchUp, 3ds Max and Blender.

3D models are pertinent to industries such as engineering,
architecture, manufacturing, cinema, gaming, science and
education because they can describe the geometry of an object in
precise detail.

Functions of 3D modeling include:

Visualizes schematic structure before construction begins

Enables more efficient project planning

Displays possible interference between building systems

Eliminates major system conflicts before installations

Graphics APIs can be divided into
immediate and retained mode depending
on how they operate:

Immediate mode APIs and

Retained mode APIs

Immediate Mode

Primitives are sent to the pipeline and
displayed right away

More calls to OpenGL commands

No memory of graphical entities

Primitive data lost after drawing

gl.glBegin(GL2.GL_TRIANGLES);   // vertex data is re-sent every time it is drawn
gl.glVertex3d( 0,  2, -4);
gl.glVertex3d(-2, -2, -4);
gl.glVertex3d( 2, -2, -4);
gl.glEnd();

glNewList() … glEndList() for display lists
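
For example, a minimal sketch of the same triangle recorded in a display list (the listID name is illustrative):

// Record once; the commands, not the application data, are retained.
int listID = gl.glGenLists(1);
gl.glNewList(listID, GL2.GL_COMPILE);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3d( 0,  2, -4);
gl.glVertex3d(-2, -2, -4);
gl.glVertex3d( 2, -2, -4);
gl.glEnd();
gl.glEndList();

// Replay later, each time the scene is drawn.
gl.glCallList(listID);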

Retained Mode

Store data in the graphics card’s memory
instead of retransmitting every time

OpenGL can store data in vertex buffers (VBOs) and
display lists on the GPU


VBOs are allocated by glGenBuffers
which creates int IDs for each buffer
created.

For example, generating two buffer IDs:
int[] bufferIDs = new int[2];       // room for two buffer IDs
gl.glGenBuffers(2, bufferIDs, 0);   // ask OpenGL to generate them
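
Continuing the example, a minimal sketch of filling the first buffer and drawing from it (fixed-function JOGL style; the vertex data and names are illustrative, and Buffers comes from com.jogamp.common.nio):

float[] verts = { 0, 2, -4,   -2, -2, -4,   2, -2, -4 };
FloatBuffer vertBuf = Buffers.newDirectFloatBuffer(verts);

// Upload once; GL_STATIC_DRAW hints the data will not change (see next slide).
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, bufferIDs[0]);
gl.glBufferData(GL.GL_ARRAY_BUFFER, verts.length * 4, vertBuf, GL2.GL_STATIC_DRAW);

// Draw from the retained buffer each frame.
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0);   // positions come from the bound VBO
gl.glDrawArrays(GL.GL_TRIANGLES, 0, 3);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);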

VBO (Vertex Buffer Object) Usage Hints

GL2.GL_STATIC_DRAW: data is expected to
be used many times without modification.

Optimal to store on graphics card.

GL2.GL_STREAM_DRAW: data used only a
few times. Not so important to store on graphics
card

GL2.GL_DYNAMIC_DRAW: data will be
changed many times
