Tut1 2 3
1.1 INTRODUCTION
Computer graphics are graphics created using computers and, more generally, the representation
and manipulation of image data by computer hardware and software. The development of
computer graphics, often referred to simply as CG, has made computers easier to interact with and
better for understanding and interpreting many types of data. Developments in computer
graphics have had a profound impact on many types of media and have revolutionized the
animation and video game industries. 2D computer graphics are digital images generated mostly
from two-dimensional models, such as 2D geometric models, text (vector arrays), and 2D data.
3D computer graphics, in contrast to 2D computer graphics, use a three-dimensional
representation of geometric data that is stored in the computer for the purposes of performing
calculations and rendering images.
1.2 OPENGL
OpenGL is the most extensively documented 3D graphics API (Application Program Interface)
to date. Information regarding OpenGL is all over the Web and in print; it is impossible to
exhaustively list all sources of OpenGL information. OpenGL programs are typically written in
C and C++. One can also program OpenGL from Delphi (a Pascal-like language), Basic, Fortran,
Ada, and other languages. To compile and link OpenGL programs, one will need the OpenGL
header files. To run OpenGL programs, one may need shared or dynamically loaded OpenGL
libraries, or a vendor-specific OpenGL Installable Client Driver (ICD).
1.3 GLUT
The OpenGL Utility Toolkit (GLUT) is a library of utilities for OpenGL programs, which
primarily performs system-level I/O with the host operating system. Functions performed
include window definition, window control, and monitoring of keyboard and mouse input.
Routines for drawing a number of geometric primitives (in both solid and wireframe mode) are
also provided, including cubes, spheres, and cylinders. GLUT even has some limited support for
creating pop-up menus. The two aims of GLUT are to allow the creation of fairly portable code
between operating systems (GLUT is cross-platform) and to make learning OpenGL easier. All
GLUT functions start with the glut prefix (for example, glutPostRedisplay marks the current
window as needing to be redrawn).
1.4 KEY STAGES IN THE OPENGL RENDERING PIPELINE:
Display Lists
All data, whether it describes geometry or pixels, can be saved in a display list for current or later
use. (The alternative to retaining data in a display list is processing the data immediately, also
known as immediate mode.) When a display list is executed, the retained data is sent from the
display list just as if it were sent by the application in immediate mode.
Evaluators
All geometric primitives are eventually described by vertices. Parametric curves and surfaces
may be initially described by control points and polynomial functions called basis functions.
Evaluators provide a method to derive the vertices used to represent the surface from the control
points. The method is a polynomial mapping, which can produce surface normals, texture
coordinates, colors, and spatial coordinate values from the control points.
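To make the idea of a polynomial mapping concrete, here is a small illustrative sketch (the
Point2 type and bezier3 function are ours, not part of OpenGL): it evaluates a point on a cubic
Bézier curve from four control points using the cubic Bernstein basis functions, the same kind of
mapping an OpenGL evaluator performs internally.

```c
/* Illustrative sketch: evaluate a point on a cubic Bezier curve at
 * parameter t in [0, 1] from four 2D control points, using the
 * Bernstein basis functions. */
typedef struct { double x, y; } Point2;

Point2 bezier3(const Point2 p[4], double t)
{
    double u = 1.0 - t;
    double b0 = u * u * u;           /* B0(t) = (1-t)^3    */
    double b1 = 3.0 * u * u * t;     /* B1(t) = 3(1-t)^2 t */
    double b2 = 3.0 * u * t * t;     /* B2(t) = 3(1-t) t^2 */
    double b3 = t * t * t;           /* B3(t) = t^3        */
    Point2 r;
    r.x = b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x;
    r.y = b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y;
    return r;
}
```

At t = 0 the result is the first control point and at t = 1 the last, which is why control points
pin down the ends of the curve.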
Per-Vertex Operations
For vertex data, next is the "per-vertex operations" stage, which converts the vertices into
primitives. Some vertex data (for example, spatial coordinates) are transformed by 4 x 4 floating-
point matrices. Spatial coordinates are projected from a position in the 3D world to a position on
your screen. If advanced features are enabled, this stage is even busier. If texturing is used,
texture coordinates may be generated and transformed here. If lighting is enabled, the lighting
calculations are performed using the transformed vertex, surface normal, light source position,
material properties, and other lighting information to produce a color value.
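As an illustration of this stage (a hedged sketch, not OpenGL's internal code), the following
function multiplies a homogeneous vertex (x, y, z, w) by a 4 x 4 matrix stored column-major,
which is the layout OpenGL uses for its modelview and projection matrices:

```c
/* Sketch of the per-vertex transform: out = m * in, where m is a 4x4
 * matrix stored column-major (element at column c, row r is m[c*4+r],
 * as in OpenGL), and in/out are homogeneous (x, y, z, w) vertices. */
void transform_vertex(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = m[0*4+row]*in[0] + m[1*4+row]*in[1]
                 + m[2*4+row]*in[2] + m[3*4+row]*in[3];
    }
}
```

With the identity matrix the vertex passes through unchanged; a translation matrix places its
offsets in elements 12-14 of the column-major array.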
Primitive Assembly
Clipping, a major part of primitive assembly, is the elimination of portions of geometry which
fall outside a half-space, defined by a plane. Point clipping simply passes or rejects vertices; line
or polygon clipping can add additional vertices depending upon how the line or polygon is
clipped. In some cases, this is followed by perspective division, which makes distant geometric
objects appear smaller than closer objects. Then viewport and depth (z coordinate) operations are
applied. If culling is enabled and the primitive is a polygon, it then may be rejected by a culling
test. Depending upon the polygon mode, a polygon may be drawn as points or lines. The results
of this stage are complete geometric primitives, which are the transformed and clipped vertices
with related color, depth, and sometimes texture-coordinate values and guidelines for the
rasterization step.
Pixel Operations
While geometric data takes one path through the OpenGL rendering pipeline, pixel data takes a
different route. Pixels from an array in system memory are first unpacked from one of a variety
of formats into the proper number of components. Next the data is scaled, biased, and processed
by a pixel map. The results are clamped and then either written into texture memory or sent to
the rasterization step. If pixel data is read from the frame buffer, pixel-transfer operations (scale,
bias, mapping, and clamping) are performed. Then these results are packed into an appropriate
format and returned to an array in system memory.
There are special pixel copy operations to copy data in the framebuffer to other parts of the
framebuffer or to the texture memory. A single pass is made through the pixel transfer operations
before the data is written to the texture memory or back to the framebuffer.
Texture Assembly
An OpenGL application may wish to apply texture images onto geometric objects to make them
look more realistic. If several texture images are used, it’s wise to put them into texture objects
so that you can easily switch among them. Some OpenGL implementations may have special
resources to accelerate texture performance. There may be specialized, high-performance texture
memory. If this memory is available, the texture objects may be prioritized to control the use of
this limited and valuable resource.
Rasterization
Rasterization is the conversion of both geometric and pixel data into fragments. Each fragment
square corresponds to a pixel in the framebuffer. Line and polygon stipples, line width, point
size, shading model, and coverage calculations to support antialiasing are taken into
consideration as vertices are connected into lines or the interior pixels are calculated for a filled
polygon. Color and depth values are assigned for each fragment square.
Fragment Operations
Before values are actually stored into the framebuffer, a series of operations are performed that
may alter or even throw out fragments. All these operations can be enabled or disabled. The first
operation which may be encountered is texturing, where a texel (texture element) is generated
from texture memory for each fragment and applied to the fragment. Then fog calculations may
be applied, followed by the scissor test, the alpha test, the stencil test, and the depth-buffer test
(the depth buffer is for hidden-surface removal). Failing an enabled test may end the continued
processing of a fragment’s square. Then, blending, dithering, logical operations, and masking by
a bitmask may be performed. Finally, the thoroughly processed fragment is drawn into the
appropriate buffer, where it has finally advanced to be a pixel and achieved its final resting place.
Download Code::Blocks from the following link and follow the instructions given below:
https://drive.google.com/file/d/1UpSJc-NWnIXyOcUVIBUx91mq5A5LopWM/view
On the left-hand side you will see the workspace. Click on the + symbol of the Sources folder,
then double-click on main.cpp.
#include <GL/glu.h>
#include <GL/glut.h>

/* Display callback: drawing code will go here. */
void Draw()
{
}

int main(int c, char *v[])
{
    glutInit(&c, v);                       /* initialize the GLUT library */
    glutInitWindowSize(600, 500);          /* window width x height in pixels */
    glutCreateWindow("Basic of OpenGL");   /* create a window with this title */
    glutDisplayFunc(Draw);                 /* register the display callback */
    glutMainLoop();                        /* enter the GLUT event loop */
    return 0;
}
#include <GL/glu.h>
#include <GL/glut.h>

void Draw()
{
}

int main(int c, char *v[])
{
    glutInit(&c, v);
    glutInitWindowPosition(100, 50);       /* top-left position of the window */
    glutInitWindowSize(600, 500);
    glutCreateWindow("Basic of OpenGL");
    glutDisplayFunc(Draw);
    glutMainLoop();
    return 0;
}
• Mode of display: 2D/3D
• Function syntax:
• glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
#include <GL/glu.h>
#include <GL/glut.h>

void Draw()
{
}

int main(int c, char *v[])
{
    glutInit(&c, v);
    glutInitWindowPosition(100, 50);
    glutInitWindowSize(600, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);   /* RGB color, single buffering */
    glutCreateWindow("Basic of OpenGL");
    glutDisplayFunc(Draw);
    glutMainLoop();
    return 0;
}
Build & Run.
You will not see any effect yet, but this call will be helpful when you convert a 2D image to a
3D one.
• The glClearColor function specifies the red, green, blue, and alpha values used
by glClear to clear the color buffers. Values specified by glClearColor are clamped to
the range [0,1].
#include <GL/glu.h>
#include <GL/glut.h>

void Draw()
{
}

int main(int c, char *v[])
{
    glutInit(&c, v);
    glutInitWindowPosition(100, 50);
    glutInitWindowSize(600, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutCreateWindow("Basic of OpenGL");
    glClearColor(0, 0, 0, 1);              /* clear color: opaque black */
    glutDisplayFunc(Draw);
    glutMainLoop();
    return 0;
}
• When we create a window using OpenGL, the window is divided into X and Y coordinates
ranging from -1 to +1. The diagram below shows the default OpenGL coordinate system, with
the four example points (-0.5, 0.5), (0.5, 0.5), (0.5, -0.5), and (-0.5, -0.5) marked:

                     +1 (Y)
      (-0.5, 0.5)           (0.5, 0.5)
  -1 ----------------X---------------- +1
      (-0.5, -0.5)          (0.5, -0.5)
                     -1

The minimum X value is -1 (left edge) and the maximum X value is +1 (right edge); the
minimum Y value is -1 (bottom edge) and the maximum Y value is +1 (top edge).
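If you ever need to convert a pixel position in the window into this default coordinate range, a
small helper might look like the following (this function is hypothetical, not part of OpenGL or
GLUT; it assumes pixel (0, 0) is the top-left corner of a w x h window):

```c
/* Hypothetical helper: map a pixel position in a w x h window to the
 * default OpenGL [-1, +1] coordinate range. Assumes pixel (0, 0) is
 * the top-left corner of the window. */
void pixel_to_gl(int px, int py, int w, int h, float *gx, float *gy)
{
    *gx = 2.0f * px / w - 1.0f;   /* left edge -> -1, right edge  -> +1 */
    *gy = 1.0f - 2.0f * py / h;   /* top edge  -> +1, bottom edge -> -1 */
}
```

For the 600 x 500 window created above, the pixel (300, 250) at the center of the window maps
to the OpenGL origin (0, 0).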
• Let us draw a square/rectangle.
• To draw a square, we will need four coordinates/points. In OpenGL these points are called
vertices (singular: vertex).
• Go to the Draw() function in the program.
• The first function to call in Draw() clears the background.
• Syntax: glClear(GL_COLOR_BUFFER_BIT);
• Define the vertices (we will use these points to draw basic primitives):
glVertex2f(-0.5,0.5);
glVertex2f(0.5,0.5);
glVertex2f(0.5,-0.5);
glVertex2f(-0.5,-0.5);
• Put all the points between glBegin(mode); and glEnd();, where mode names the primitive to
draw:
glBegin(mode);
glVertex2f(-0.5,0.5);
glVertex2f(0.5,0.5);
glVertex2f(0.5,-0.5);
glVertex2f(-0.5,-0.5);
glEnd();
Now you can use these points to draw any primitive.
Eg.
glBegin(GL_POINTS);
glVertex2f(-0.5,0.5);
glVertex2f(0.5,0.5);
glVertex2f(0.5,-0.5);
glVertex2f(-0.5,-0.5);
glEnd();
#include <GL/glu.h>
#include <GL/glut.h>

void Draw()
{
    glClear(GL_COLOR_BUFFER_BIT);          /* clear the background */
    glBegin(GL_POINTS);                    /* draw the four vertices as points */
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glVertex2f(-0.5, -0.5);
    glEnd();
}

int main(int c, char *v[])
{
    glutInit(&c, v);
    glutInitWindowPosition(100, 50);
    glutInitWindowSize(600, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutCreateWindow("Basic of OpenGL");
    glClearColor(0, 0, 0, 1);
    glColor3f(1, 0, 0);                    /* drawing color: red */
    glutDisplayFunc(Draw);
    glutMainLoop();
    return 0;
}
Build & Run.
You will not be able to see anything on the screen, because whatever we draw between
glBegin() and glEnd() has to be flushed to the display; we do this with glFlush().
#include <GL/glu.h>
#include <GL/glut.h>

void Draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glPointSize(5);                        /* draw 5-pixel points */
    glBegin(GL_POINTS);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glVertex2f(-0.5, -0.5);
    glEnd();
    glFlush();                             /* flush the drawing to the display */
}

int main(int c, char *v[])
{
    glutInit(&c, v);
    glutInitWindowPosition(100, 50);
    glutInitWindowSize(600, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutCreateWindow("Basic of OpenGL");
    glClearColor(0, 0, 0, 1);
    glColor3f(1, 0, 0);
    glutDisplayFunc(Draw);
    glutMainLoop();
    return 0;
}
OpenGL Primitives:
1. What is Computer Graphics?
Answer: Computer graphics are graphics created using computers and, more generally, the
representation and manipulation of image data by a computer.
2. What is OpenGL?
Answer: OpenGL is the most extensively documented 3D graphics API (Application Program
Interface) to date. It is used to create graphics.
3. What is GLUT?
Answer: The OpenGL Utility Toolkit (GLUT) is a library of utilities for OpenGL programs,
which primarily performs system-level I/O with the host operating system.
4. What are the applications of Computer Graphics?
Answer: The gaming industry, the animation industry, and medical image processing. Together
these industries form a multi-billion-dollar market, and jobs in this arena will continue to
increase in the future.
Characteristics:
i. This is also called a Random Scan Display.
ii. It draws continuous and smooth lines.
iii. It only draws lines and characters, and is more costly.
Point:
A point is the fundamental element of picture representation. It is nothing but a position in a
plane, defined as either a pair or a triplet of numbers depending on whether the data are two- or
three-dimensional. Thus (x1, y1) or (x1, y1, z1) would represent a point in two- or
three-dimensional space.
Line:
A line is the path between two end points. Any point (x, y) on the line must satisfy the line
equation:
y = m * x + b, where
– m is the slope of the line.
– b is a constant that represents the intercept on the y-axis.
If we have two end points, (x0, y0) and (x1, y1), then the slope of the line (m) between those
points can be calculated as:
m = Δy / Δx = (y1 – y0) / (x1 – x0)
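The slope formula can be written as a one-line C function (illustrative; the name slope is ours,
and it assumes a non-vertical line, i.e. x1 != x0):

```c
/* Slope of the line through (x0, y0) and (x1, y1).
 * Assumes x1 != x0; a vertical line has an undefined slope. */
double slope(double x0, double y0, double x1, double y1)
{
    return (y1 - y0) / (x1 - x0);
}
```

For example, the line from (0, 0) to (4, 2) rises 2 over a run of 4, giving a slope of 0.5.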
Drawing a line with the help of the line equation is very time consuming because it requires a
lot of calculation. A cathode ray tube (CRT) raster display is considered a matrix of discrete
finite-area cells (pixels), so it is not possible to draw a straight line directly from one point to
another. The process of determining which pixels provide the best approximation to the desired
line is called rasterization.
To draw a line, we can use two algorithms:
i) DDA (Digital Differential Analyzer)/Vector Generation Algorithm.
ii) Bresenham’s Line Algorithm.
1. DDA Line Drawing Algorithm:
The Digital Differential Analyzer (DDA) algorithm is the simplest line generation algorithm
and does not require special skills to implement. It is a faster method for calculating pixel
positions than the direct use of the equation y = mx + b: it eliminates the multiplication in the
equation by making use of raster characteristics, so that appropriate increments are applied in
the x or y direction to find the pixel positions along the line path.
Disadvantages of DDA Algorithm
1. Floating-point arithmetic in the DDA algorithm is still time-consuming.
2. The algorithm is orientation dependent; hence end-point accuracy is poor.
Algorithm for DDA Line:
1. Start.
2. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.
3. Calculate the differences between the two end points:
dx = x2 – x1
dy = y2 – y1
4. Calculate the slope m = dy / dx.
5. If |dx| >= |dy| and x1 <= x2 then // gentle slope, left to right
Increment x by 1
Increment y by m
Else if |dx| >= |dy| and x1 > x2 then // gentle slope, right to left
Increment x by -1
Increment y by -m
Else if |dx| < |dy| and y1 <= y2 then // steep slope, bottom to top
Increment x by 1/m
Increment y by 1
Else if |dx| < |dy| and y1 > y2 then // steep slope, top to bottom
Increment x by -1/m
Increment y by -1
6. Plot the point (x, y) after each increment.
7. Repeat steps 5 and 6 until the other end point is reached.
8. Call closegraph().
9. Stop.
2.4 Questions:-
Q 1: State the advantages of the DDA line drawing algorithm.
Q 2: What is the long form of DDA?
Q 3: Which line is clearer: one drawn with DDA or one generated directly from the line equation?
2.5 Conclusion:-
In this way, we have studied how to draw a line using the DDA algorithm.