
COS340-A
October/November 2010 / January/February 2011

COMPUTER SCIENCE: COMPUTER GRAPHICS

Duration: 2 hours
Total: 70 marks

Examiners:
First: Mr L Aron and Mr C Dongmo
External: Dr P Marais

MEMORANDUM

QUESTION 1 [10]
a) What is meant by the term “double buffering” and for what purpose is it used? [3]

Double buffering is the use of two colour buffers, a front buffer and a back buffer. The front buffer is
displayed while the application renders the next frame into the back buffer. When rendering into the back
buffer is complete, a buffer swap is performed and the former back buffer becomes the new front buffer.
It is used in animation so that the viewer never sees a partially drawn frame (no flicker or tearing).
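As an illustration (added, not part of the original memo), a minimal GLUT program that uses double buffering:

```c
#include <GL/glut.h>

/* Display callback: draw the whole frame into the back buffer,
   then swap it with the front buffer in one step. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* ... render the scene into the (invisible) back buffer ... */

    glutSwapBuffers();   /* the back buffer becomes the new front buffer */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  /* request two colour buffers */
    glutCreateWindow("double buffering");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```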

b) What is the difference between a frame buffer and a z-buffer? [2]

The frame buffer is the part of memory that stores the information needed to define the picture displayed
on the screen (the colour of every pixel).

The z-buffer stores, for every pixel, the depth of the nearest fragment rendered so far; it is used for
hidden-surface removal.

c) Give brief definitions of the following terms in the context of computer graphics:

(i) Rasterization [1]


(ii) Fragment [2]
(iii) Interpolation [2]

(i) Rasterisation is the conversion of geometric primitives (points, lines and polygons) into fragments.

(ii) A fragment is a potential pixel: it carries colour values, a location, and possibly a depth value.

(iii) Interpolation is a way of determining the value of some parameter at any point between two endpoints
at which the parameter values are known (e.g. the colour or the normal at any point between two vertices).
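A small added example of linear interpolation applied to a colour (the lerp helper is illustrative):

```c
#include <stdio.h>

/* Linearly interpolate between a and b: t = 0 gives a, t = 1 gives b. */
double lerp(double a, double b, double t)
{
    return (1.0 - t) * a + t * b;
}

int main(void)
{
    /* Interpolate an RGB colour halfway between red and blue. */
    double red[3]  = {1.0, 0.0, 0.0};
    double blue[3] = {0.0, 0.0, 1.0};
    for (int i = 0; i < 3; ++i)
        printf("%g ", lerp(red[i], blue[i], 0.5));   /* prints 0.5 0 0.5 */
    printf("\n");
    return 0;
}
```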

[TURN OVER]
4 COS340A
October/November 2010

QUESTION 2 [12]
a) Transformations are often carried out using a homogeneous co-ordinate representation. Why is this
representation used? [2]

In the ordinary 3D coordinate representation it is difficult to distinguish between points and vectors, and
translations cannot be written as matrix multiplications.

Homogeneous coordinates avoid these difficulties by using a four-dimensional representation for both points
and vectors in three dimensions: a point has w = 1 and a vector has w = 0, and all affine transformations
(including translation) become 4 × 4 matrix products.
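As an added illustration, with the homogeneous translation matrix T(d_x, d_y, d_z): translation moves a point (w = 1) but leaves a vector (w = 0) unchanged:

T = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad
T \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x + d_x \\ y + d_y \\ z + d_z \\ 1 \end{pmatrix}, \qquad
T \begin{pmatrix} x \\ y \\ z \\ 0 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \\ 0 \end{pmatrix}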

b) Rotation transformations are not commutative. Demonstrate this by computing

i) the transformation matrix for a rotation by 90º about x followed by a rotation by 90º about
y and [3]

ii) the transformation matrix for a 90º rotation about y followed by a 90º rotation about x.
[2]

iii) Apply the two composite transformation matrices to the point (1, 1, 1) to demonstrate that
rotations are not commutative. [3]

Taking both rotations to be counter-clockwise about the positive axis (the usual right-handed convention),
the homogeneous matrices for rotation about x and rotation about y are

R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad
R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

The computed products are given after part (c) below.

c) Give two other combinations of transformations that do not commute. [2]

Rotation and translation


Rotation and non-uniform scaling
Rotation and shear

[TURN OVER]
5 COS340A
October/November 2010

i) A rotation by 90° about x followed by a rotation by 90° about y. With column vectors the matrix applied
first stands on the right, so the composite matrix is R_y R_x. With θ = 90° (cos θ = 0, sin θ = 1):

R_y R_x = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

ii) A rotation by 90° about y followed by a rotation by 90° about x; the composite matrix is R_x R_y:

R_x R_y = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

iii) Applying the two composite matrices to the homogeneous point a = (1, 1, 1, 1)^T:

R_y R_x \, a = (1, -1, -1, 1)^T, so rotating about x and then about y maps (1, 1, 1) to (1, -1, -1).

R_x R_y \, a = (1, 1, 1, 1)^T, so rotating about y and then about x maps (1, 1, 1) back to (1, 1, 1).

The two composite matrices, and the two images of the point, differ, so the rotations do not commute.
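A small added numerical check of part (iii) (not part of the memo); it multiplies the same matrices in both orders:

```c
#include <stdio.h>

/* out = m * v for a 4x4 homogeneous matrix and a column vector. */
static void mul(double m[4][4], const double v[4], double out[4])
{
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.0;
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * v[j];
    }
}

int main(void)
{
    /* 90-degree counter-clockwise rotations about x and y (cos = 0, sin = 1). */
    double Rx[4][4] = {{1,0,0,0}, {0,0,-1,0}, {0,1,0,0}, {0,0,0,1}};
    double Ry[4][4] = {{0,0,1,0}, {0,1,0,0}, {-1,0,0,0}, {0,0,0,1}};
    double a[4] = {1, 1, 1, 1};
    double tmp[4], xThenY[4], yThenX[4];

    mul(Rx, a, tmp);  mul(Ry, tmp, xThenY);   /* rotate about x, then about y */
    mul(Ry, a, tmp);  mul(Rx, tmp, yThenX);   /* rotate about y, then about x */

    printf("x then y: (%g, %g, %g)\n", xThenY[0], xThenY[1], xThenY[2]);  /* (1, -1, -1) */
    printf("y then x: (%g, %g, %g)\n", yThenX[0], yThenX[1], yThenX[2]);  /* (1, 1, 1)   */
    return 0;
}
```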


QUESTION 3 [12]
a) What is the difference between parallel and perspective projections? Describe an application where
each type of projection would be preferable. [5]

In a perspective projection, lines (called projectors) are drawn from points on the objects to a single
point called the centre of projection (COP). The projection of the objects is where these lines intersect
the projection plane. Because distant objects project smaller, perspective projection is preferable where
realism matters, e.g. in animation.

In a parallel (e.g. orthographic) projection, the projectors do not converge to a point but are parallel to
one another in a particular direction, the so-called direction of projection; the COP can be thought of as
lying at an infinite distance. As with a perspective projection, the projection of the objects is where the
projectors intersect the projection plane. Because parallel lines and relative sizes are preserved, parallel
projection is preferable for architectural and engineering working drawings.
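As an OpenGL illustration (added, not part of the memo; the viewing-volume values are arbitrary examples):

```c
#include <GL/glu.h>

/* Set up an orthographic (parallel) projection: a box-shaped view volume. */
void useParallelProjection(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -2.0, 2.0, 0.1, 100.0);  /* left, right, bottom, top, near, far */
    glMatrixMode(GL_MODELVIEW);
}

/* Set up a perspective projection: a frustum; distant objects appear smaller. */
void usePerspectiveProjection(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 0.1, 100.0);      /* vertical field of view, aspect, near, far */
    glMatrixMode(GL_MODELVIEW);
}
```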

b) A synthetic camera co-ordinate reference frame is given by a view reference point (VRP), a view
plane normal (VPN) and a view up vector (VUP).

i) Using a diagram show how these quantities describe the location and orientation of the
synthetic camera. [5]

[Diagram: the camera frame, showing the VRP with the VPN and VUP vectors attached to it; the u and v axes
need not be shown.]


By default the camera is positioned at the origin, pointing in the negative z direction. In the
synthetic-camera frame the camera is instead centred at the point called the VRP. The orientation of the
camera is specified by the VPN and VUP: the VPN is the normal of the projection plane (the back of the
camera). The orientation of that plane alone does not specify the up direction of the camera, hence the
VUP, which gives the camera's up direction and so fixes the camera completely.

ii) The OpenGL call gluLookAt takes an eye point, an at point and an up point. Express VRP,
VPN and VUP in terms of these three points. [2]

VRP = eye point
VPN = at point − eye point
VUP = up point
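For illustration (added, not part of the memo; the eye/at/up values are arbitrary examples), the correspondence in code:

```c
#include <GL/glu.h>

void placeCamera(void)
{
    /* eye point, at point and up vector for gluLookAt (example values) */
    double eyeX = 2.0, eyeY = 3.0, eyeZ = 5.0;
    double atX  = 0.0, atY  = 0.0, atZ  = 0.0;
    double upX  = 0.0, upY  = 1.0, upZ  = 0.0;

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ, atX, atY, atZ, upX, upY, upZ);

    /* In synthetic-camera terms:
       VRP = (eyeX, eyeY, eyeZ)
       VPN = (atX - eyeX, atY - eyeY, atZ - eyeZ)   (the direction the camera looks along)
       VUP = (upX, upY, upZ)                                                               */
}
```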


QUESTION 4 [12]
a) Hidden-surface-removal algorithms can be divided into two broad classes: object-space algorithms
and image-space algorithms.

i) Differentiate between the object space approach and image space approach to hidden
surface removal? [4]

Object-space algorithms work on the objects themselves: they attempt to order the surfaces in the scene
such that, if the surfaces are rendered in that order, the correct image is created.

Image-space algorithms work as part of the projection process, at the resolution of the image: for each
projector (i.e. for each pixel) they determine which object point along that projector is closest to the
viewer and therefore visible.

ii) Name one algorithm for each approach? [2]

Object space: depth sort / painter's algorithm

Image space: z-buffer algorithm
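As an added illustration of the image-space (z-buffer) approach (the buffer sizes and helper names here are hypothetical):

```c
#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static float    zbuffer[HEIGHT][WIDTH];     /* depth of the nearest fragment so far  */
static unsigned framebuffer[HEIGHT][WIDTH]; /* colour of the nearest fragment so far */

/* Before rendering a frame, every depth entry is reset to "infinitely far". */
void clearZBuffer(void)
{
    for (int y = 0; y < HEIGHT; ++y)
        for (int x = 0; x < WIDTH; ++x)
            zbuffer[y][x] = FLT_MAX;
}

/* Write one fragment, keeping it only if it is closer than what is already stored. */
void writeFragment(int x, int y, float depth, unsigned colour)
{
    if (depth < zbuffer[y][x]) {    /* smaller depth = closer to the viewer */
        zbuffer[y][x] = depth;
        framebuffer[y][x] = colour;
    }
}
```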

b) Using diagrams describe briefly the Liang-Barsky clipping algorithm. [6]

Two marks for diagram


Suppose we have a line segment defined by two endpoints p(x1, y1) and q(x2, y2). The parametric equation
of the segment gives x- and y-values for every point in terms of a parameter α that ranges from 0 to 1:

x(α) = (1 − α) x1 + α x2
y(α) = (1 − α) y1 + α y2

There are four values of α at which the (extended) line crosses the four extended sides of the clipping
window:

αB (bottom), αL (left), αT (top), αR (right)

We can order these values and, from the ordering, determine where clipping needs to take place.
If, for example, αL > αR (for a segment running from left to right), the line must be rejected as it falls
outside the window. To use this strategy efficiently we avoid computing intersections until they are
needed; many lines can be rejected before all four intersection values are known.
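A minimal sketch of the idea in code (added here for illustration, not part of the memo; it follows the usual Liang-Barsky formulation with entry/exit parameters a0 and a1):

```c
#include <stdbool.h>

/* One clipping test against a single window edge.  p and q come from the
   parametric form of the segment; *a0 and *a1 are the current entry and exit
   parameters, updated in place.  Returns false once the segment can be rejected. */
static bool clipTest(double p, double q, double *a0, double *a1)
{
    if (p == 0.0)                 /* segment parallel to this edge        */
        return q >= 0.0;          /* reject if it lies on the outside     */
    double a = q / p;             /* parameter where the line crosses it  */
    if (p < 0.0) {                /* entering the window                  */
        if (a > *a1) return false;
        if (a > *a0) *a0 = a;
    } else {                      /* leaving the window                   */
        if (a < *a0) return false;
        if (a < *a1) *a1 = a;
    }
    return true;
}

/* Liang-Barsky: clip the segment (x1,y1)-(x2,y2) to the window
   [xmin,xmax] x [ymin,ymax]; returns false if nothing is visible. */
bool liangBarsky(double xmin, double ymin, double xmax, double ymax,
                 double x1, double y1, double x2, double y2,
                 double *cx1, double *cy1, double *cx2, double *cy2)
{
    double dx = x2 - x1, dy = y2 - y1;
    double a0 = 0.0, a1 = 1.0;    /* parameter range of the visible part */

    if (clipTest(-dx, x1 - xmin, &a0, &a1) &&   /* left   */
        clipTest( dx, xmax - x1, &a0, &a1) &&   /* right  */
        clipTest(-dy, y1 - ymin, &a0, &a1) &&   /* bottom */
        clipTest( dy, ymax - y1, &a0, &a1)) {   /* top    */
        *cx1 = x1 + a0 * dx;  *cy1 = y1 + a0 * dy;
        *cx2 = x1 + a1 * dx;  *cy2 = y1 + a1 * dy;
        return true;
    }
    return false;
}
```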

QUESTION 5 [12]
a) In a simple computer graphics lighting model we assume the ambient component Iamb = ka La .

i) What lighting situation does the ambient component approximate? [2]


ii) What does ka represent? [1]
iii) Is ka a property of the light or the surface? [1]

i) It approximates a situation in which lights have been designed or positioned to provide uniform
lighting across an area, so that light arrives equally from all directions.

ii) ka is the ambient reflection coefficient: the fraction of the incident ambient light La that the
surface reflects.

iii) It is a property of the surface (a material property), not of the light.
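In OpenGL's fixed-function pipeline this corresponds roughly to the following (an added sketch; the La and ka values are arbitrary examples):

```c
#include <GL/gl.h>

/* La is a property of the light source, ka a property of the surface material. */
void setupAmbient(void)
{
    GLfloat La[] = {0.4f, 0.4f, 0.4f, 1.0f};  /* ambient intensity of the light */
    GLfloat ka[] = {0.2f, 0.2f, 0.2f, 1.0f};  /* ambient reflection coefficient */

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_AMBIENT, La);     /* La: set on the light           */
    glMaterialfv(GL_FRONT, GL_AMBIENT, ka);   /* ka: set on the surface         */
    /* The shaded ambient contribution is Iamb = ka * La per colour channel. */
}
```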


b) Discuss the difference between a global and a local lighting model [4]

With the graphics pipeline employed by many graphics systems, each object is processed/rendered (incl.
shading) independently. Consequently, shadows and reflections from other objects cannot be taken into
account. This is called a local lighting model. A global lighting model does take shadows and
reflections from other objects into account when shading objects. This is usually done by means of ray-
tracing and radiosity techniques.

c) Why do Phong-shaded images appear smoother than smooth- (Gouraud-) or flat-shaded images? [4]

In flat shading a polygon is filled with a single colour or shade across its surface: a single normal is
used for the whole polygon, and this determines the colour. In smooth (Gouraud) shading a colour is
calculated per vertex using the vertex normals, and these colours are then interpolated across the polygon.
In Phong shading the normals at the vertices are interpolated across the surface of the polygon, and the
lighting model is applied again at every point within the polygon. Because the normal gives the local
surface orientation, interpolating the normals across a polygon makes the surface appear curved rather than
flat, hence the smoother appearance of Phong-shaded images.
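An added sketch of the difference at one interpolated point between two vertices (shade() is a hypothetical stand-in for the full lighting model, assumed to be defined elsewhere):

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Hypothetical helper: evaluates the lighting model for a given surface normal. */
extern vec3 shade(vec3 normal);

static vec3 lerp3(vec3 a, vec3 b, double t)
{
    vec3 r = { (1 - t) * a.x + t * b.x,
               (1 - t) * a.y + t * b.y,
               (1 - t) * a.z + t * b.z };
    return r;
}

static vec3 normalize3(vec3 v)
{
    double len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Gouraud: apply the lighting model at the two vertices only,
   then interpolate the resulting colours across the polygon. */
vec3 gouraudColour(vec3 nA, vec3 nB, double t)
{
    return lerp3(shade(nA), shade(nB), t);
}

/* Phong: interpolate the vertex normals across the polygon and
   apply the lighting model again at every interior point. */
vec3 phongColour(vec3 nA, vec3 nB, double t)
{
    return shade(normalize3(lerp3(nA, nB, t)));
}
```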

QUESTION 6 [12]

a) Describe environment maps, and explain the difference between the use of cube maps and
spherical maps to implement them. [4]

One way to create a fairly realistic rendering of an object with a highly reflective surface is to map a
texture (called an environment map) to its surface; the texture contains an image of the other objects
around the reflective object as they would be seen reflected in its surface.

A spherical map is an environment map in the form of a sphere, which is then converted to a flat image (by
some projection) to be applied to the surface.

A cube map is in the form of a cube, i.e. six projections of the other objects in the scene. The renderer then
picks the appropriate part of one of these projections to map to each polygon on the surface of the
reflective object (as a texture).
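In OpenGL the two variants are selected roughly as follows (an added sketch, assuming OpenGL 1.3 or the corresponding ARB extension for cube maps, and that the texture images have already been loaded into sphereTex and cubeTex):

```c
#include <GL/gl.h>

/* Sphere map: one 2D texture; texture coordinates are generated per vertex. */
void useSphereMap(GLuint sphereTex)
{
    glBindTexture(GL_TEXTURE_2D, sphereTex);
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_2D);
}

/* Cube map: six 2D images, looked up with a reflection direction. */
void useCubeMap(GLuint cubeTex)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_CUBE_MAP);
}
```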


b) Describe briefly how texture mapping is implemented in OpenGL. [4]

Two-dimensional texture mapping starts with an array of texels, which has the same layout as a
two-dimensional pixel rectangle. We specify that this array is to be used as a two-dimensional texture
(glTexImage2D), and we enable texture mapping (glEnable(GL_TEXTURE_2D)). The second part of setting up
texture mapping is to specify how the texture is mapped onto a geometric object. OpenGL uses two texture
coordinates, s and t, both of which range over the interval (0.0, 1.0) across the texture image; any values
of s and t in the unit interval correspond to a unique texel in the texel array. OpenGL leaves the mapping
of texture coordinates to vertices to the application by keeping the current values of s and t as part of
the OpenGL state. These values are assigned by the function glTexCoord2f(s, t); the renderer uses the
current texture coordinates when processing a vertex. If we want to change the texture coordinate assigned
to a vertex, we must set the texture coordinate before we specify the vertex.
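A minimal added example of these steps (the 64 × 64 texel array and the quad geometry are arbitrary choices):

```c
#include <GL/gl.h>

/* A 64x64 RGB texel array, assumed to be filled in elsewhere. */
static GLubyte image[64][64][3];

void setupTexture(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Hand the texel array to OpenGL as a 2D texture. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glEnable(GL_TEXTURE_2D);      /* enable texture mapping */
}

/* Assign (s, t) texture coordinates to the vertices of a quad;
   each glTexCoord2f call takes effect for the vertex specified after it. */
void texturedQuad(void)
{
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}
```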

c) Briefly explain what antialiasing is and then explain how the alpha channel and the accumulation
buffer can be used to achieve this. [4]

(Angel pp. 346-348, 351, 408-409.) In the conversion from the continuous (analog) values of object or
world coordinates and colours to the discrete values of screen coordinates and colours, rasterised
(rendered) line segments and polygon edges often become jagged, or pixels that contrast too strongly with
their neighbourhood are displayed. This can be reduced by antialiasing: antialiasing blends and smooths
points, lines or polygons to get rid of sharp contrasts and other unwanted patterns.

Alpha values (Angel p. 346, Section 7.9.4): "One of the major uses of the alpha channel is for
antialiasing." When rendering a line, instead of colouring an entire pixel with the colour of the line
whenever the line passes through it, the fraction of the pixel covered by the line is stored in the pixel's
alpha value. This value is then used to weight the intensity of the colour (specified by the RGB values)
when it is blended into the frame buffer, which avoids the sharp contrasts and steps of aliasing.

Accumulation buffer (Angel Section 7.9.4 and p. 351, Section 7.10.1): "One of the most important uses of
the accumulation buffer is for antialiasing." Rather than antialiasing individual lines and polygons, we
can antialias an entire scene using the accumulation buffer. The idea is that if we regenerate the same
scene several times with all the objects, or the viewer, shifted slightly (by less than one pixel) each
time, then each image contains different aliasing artifacts. If we average the resulting images together
in the accumulation buffer, the aliasing effects are smoothed out.
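An added sketch of scene antialiasing with the accumulation buffer (assumes the window was created with an accumulation buffer, e.g. GLUT_ACCUM, and that drawScene is a hypothetical routine that renders the scene with a given sub-pixel shift):

```c
#include <GL/glut.h>

/* Hypothetical helper: renders the scene with the viewer (or the projection)
   shifted by a sub-pixel amount (dx, dy), in fractions of a pixel. */
extern void drawScene(double dx, double dy);

#define SAMPLES 4

void displayAntialiased(void)
{
    /* Example sub-pixel jitter offsets. */
    static const double jitter[SAMPLES][2] = {
        {0.25, 0.25}, {0.75, 0.25}, {0.25, 0.75}, {0.75, 0.75}
    };

    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < SAMPLES; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene(jitter[i][0], jitter[i][1]);
        glAccum(GL_ACCUM, 1.0f / SAMPLES);  /* add this image, scaled, to the accumulation buffer */
    }
    glAccum(GL_RETURN, 1.0f);               /* copy the averaged image back to the colour buffer  */
    glutSwapBuffers();
}
```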
