Opengl - Is Double Buffering Needed Any More - Stack Overflow

Double buffering helps avoid visual artifacts by rendering to an offscreen buffer before swapping with the visible buffer. Compositing window managers also provide some level of double buffering, but X11 still lacks an ideal synchronization mechanism between the application and compositor.

Is double buffering needed any more

Asked 12 years, 9 months ago Modified 11 years, 11 months ago Viewed 16k times

As today's cards seem to keep a list of render commands and flush only on a call to glFlush or
glFinish, is double buffering really needed any more? An OpenGL game I am developing on Linux (ATI
Mobility Radeon card) with SDL/OpenGL actually flickers less when SDL_GL_SwapBuffers() is replaced
by glFinish() and with SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0) in the init code. Is this a
particular case of my card, or are such things likely on all cards?

EDIT: I've discovered that the cause of this is KWin. It appears that, as datenwolf said, compositing
without sync was the cause. When I switched off KWin compositing, the game works fine without ANY
source code patches.

opengl sdl-opengl

Sudarshan S, asked Jul 1, 2011 at 5:28, edited Jul 1, 2011 at 9:51

2 Answers

Double buffering and glFinish are two very different things.

glFinish blocks the program until all drawing operations are completed.

Double buffering is used to hide the rendering process from the user. Without double buffering, each and
every drawing operation would become visible immediately (assuming the display refresh frequency were
infinitely high). In practice you will get display artifacts: parts of the scene visible in one state
while the rest is not yet visible or is in some other state, an incomplete picture, etc. Double
buffering avoids this by first rendering into a back buffer, and only after the rendering has finished
swapping it with the front buffer, which gets sent to the display device.

Nowadays compositing window management is becoming prevalent: Windows has Aero, Mac OS X has Quartz
Extreme, and on Linux at least Unity and the GNOME 3 shell use compositing if available. The point is:
compositing technically creates double buffering. Windows draw to offscreen buffers, from which the
final screen is composited. So if you're running on a machine with compositing, double buffering
performed in your program is somewhat redundant, and all it would take is some kind of synchronization
mechanism to tell the compositor when the next frame is ready. Mac OS X has this. X11 still lacks a
proper synchronization scheme; see this post on the mailing list:
http://lists.freedesktop.org/archives/xorg/2004-May/000607.html

TL;DR: Double buffering and glFinish are different things, and you need double buffering (of some
sort) to make things look good.

datenwolf, answered Jul 1, 2011 at 6:36, edited Jul 1, 2011 at 8:27
The post mentioned is over 6 years old. Does X11 still lack a sync scheme? And what about Windows?
– Sudarshan S Jul 1, 2011 at 9:20

In addition to datenwolf's explanation, you should note that you will usually never want to call either glFlush or
glFinish, except maybe in some very, very rare special cases. glFinish does nothing that
(wgl|glx)SwapBuffers does not already do (presuming that vsync is enabled), and glFlush only flushes the
queued commands and signals the server to begin processing them, which does nothing in the best case (but is a
useless call and context switch), and results in worse performance in the worst case (because of sub-optimal
scheduling of GPU resources). – Damon Jul 1, 2011 at 9:46

Ideally, you will want to throw as many commands at the GL as you can, with dependencies spread as far apart as you
can (i.e. if you use a texture, first send the commands to define the texture image, set the texture state, etc., then
send some commands that do something else, and only then draw something that uses this texture). This ensures
that a) the commands in your command stream are less likely to block because of dependencies and b) the driver
can schedule some other commands to utilize the GPU (OpenCL or another program?) if the commands in your
queue would stall. – Damon Jul 1, 2011 at 9:49

With that, your program should always keep running at maximum speed without any delays (never sleep or such!),
and the vertical sync will block it when appropriate and doesn't hurt. Thus, your program does not burn 100%
CPU but runs at optimal speed. – Damon Jul 1, 2011 at 9:50

@Sudarshan S: Unfortunately no, no real advance has been made there, which is a pity. It is really necessary,
though, but one has to admit that the topic is highly nontrivial. ATM this goes by using the XDamage extension to
tell the compositor the image has been finished. But then you're still left with the task of how to blank until the next
VSync. If you just glXSwapBuffers, you'll introduce a one-frame lag, because glXSwapBuffers will also block your
program. – datenwolf Jul 1, 2011 at 10:07

I would expect that it has more to do with what you're rendering or your hardware than anything that
could be generalized to something not on your machine. So no: don't try to do this.

Oh, and don't forget multisampling. Many implementations only multisample the back buffer; the front
buffer is not multisampled. Doing a swap will downsample from the multisampled buffer.

Nicol Bolas, answered Jul 1, 2011 at 5:42, edited May 23, 2017 at 12:00 by Community Bot
