
Breaking Down Barriers:

An Intro to GPU Synchronization


Matt Pettineo
Lead Engine Programmer
Ready At Dawn Studios
Who am I?
● Ready At Dawn for 9 years
● Lead Engine Programmer for 5
● I like GPUs and APIs!
● Lots of blogging, Twitter, and GitHub
● You may know me as MJP!
What is this talk about?
● GPU Synchronization!
● What is it?
● Why do you need it?
● How does it work?
● How does it affect performance?
Barriers in D3D12/Vulkan
● New concept!
● Annoying
● D3D11 didn’t need them!
● Difficult
● People keep talking about them
● Affects performance
● But why? And how?
CPU Thread Barriers
● Thread sync point
● “Wait until all threads get here”
● Spin wait
● OS primitives
● Barrier is a toll plaza
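
To make the toll-plaza idea concrete, here's a minimal sketch of a thread barrier using C++20's std::barrier. The names (NumThreads, Worker) and the squaring work are stand-ins, not anything from the talk:

    #include <barrier>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int NumThreads = 4;
    int results[NumThreads];
    std::barrier syncPoint(NumThreads);

    void Worker(int i)
    {
        results[i] = i * i;           // Task A: each thread produces its result
        syncPoint.arrive_and_wait();  // "wait until all threads get here"
        int sum = 0;                  // Task B: now safe to read everyone's results
        for (int r : results)
            sum += r;
        std::printf("thread %d sees sum %d\n", i, sum);
    }

    int main()
    {
        std::vector<std::jthread> threads;
        for (int i = 0; i < NumThreads; ++i)
            threads.emplace_back(Worker, i);
    }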
CPU Memory Barriers
● Ensure correct order of reads/writes
● Ex: write finishes before barrier, read happens after
● Affects CPU memory ops
● and compiler ordering!
● Barrier is a doggie gate
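
A minimal sketch of the doggie gate, assuming a simple producer/consumer pair: the release store keeps the payload write from moving past the flag, and the acquire load keeps the read from moving ahead of it.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    int payload = 0;
    std::atomic<bool> ready{false};

    void Producer()
    {
        payload = 42;                                  // write finishes...
        ready.store(true, std::memory_order_release);  // ...before the barrier
    }

    void Consumer()
    {
        while (!ready.load(std::memory_order_acquire)) // read happens after
        {
        }
        std::printf("payload = %d\n", payload);        // guaranteed to print 42
    }

    int main()
    {
        std::thread p(Producer);
        std::thread c(Consumer);
        p.join();
        c.join();
    }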
What’s The Common Thread?
● Dependencies!
● Task A produces something
● Task B consumes something
● Task B depends on Task A
● Results need to be visible to dependent tasks!
Single-Threaded Dependencies
● int a = GetOffset(); int b = myArray[a];
● The compiler + CPU have your back!
● Automatic dependency analysis
● No need for manual barriers
● Expected ordering on a single core
● Easy mode
Multi-Threaded Dependencies
● Dependencies no longer visible!
● Arbitrary numbers of threads
● Free-for all memory access
● CPU mechanisms break down
● Per-core store buffers and caches
● Everyone has failed you
● You’re on your own
Task Dependencies
[Diagram: CPU with two cores. Core 0 runs "Get Bread" while Core 1 runs "Spread Peanut Butter" at the same time. Tasks overlap!]
Task Dependencies
[Diagram: same two cores, but a barrier on Core 1 holds "Spread Peanut Butter" until "Get Bread" finishes. No overlap!]
GPU Parallelism
● GPU is not a serial machine!
● Looks are deceiving
● HW and drivers help you out
GPUs are Thread Monsters!
● Lots of overlapping when possible
● No dependencies
● Re-ordering for render target writes (ROPs)
● Overlap improves performance!
● More on this later
GPU Thread Barriers
● Dependencies between draw/dispatch/copy
● Wait for batch of threads to finish
● Same as CPU task scheduler
● Often called “flush”, “drain”, “WaitForIdle”
GPU Cache Barriers
● Lots of caches!
● Not always coherent!
● Different from CPU’s
● Flush and/or invalidate to ensure visibility
● Batch your barriers!
GPU Compression Barriers
● HW-enabled lossless compression
● Delta Color Compression (DCC)
● Saves bandwidth
● (may) Decompress for read
● Decompress for UAV write
D3D12 Barriers
● Higher level - “resource state” abstraction
● Texture is in an SRV read state
● Buffer is in a UAV write state
● Mostly describes resource visibility
● Implicit dependencies from state transition
● Layout/compression also implied
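
For example, a transition barrier in D3D12 might look like this sketch (cmdList and texture are assumed to exist already; error handling omitted):

    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;          // was a render target...
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;   // ...now an SRV read state
    cmdList->ResourceBarrier(1, &barrier);  // layout changes/decompression are implied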
Vulkan Barriers
● More explicit (verbose) than D3D12
● Specifies
● Producing/consuming GPU stage
● Read/write state
● Texture layout
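
The equivalent Vulkan barrier spells all of that out explicitly. A sketch (cmdBuf and image assumed) for compute-written data consumed by a fragment shader:

    VkImageMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;            // producer's writes
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;             // consumer's reads
    barrier.oldLayout = VK_IMAGE_LAYOUT_GENERAL;                   // explicit texture layout change
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = image;
    barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    vkCmdPipelineBarrier(cmdBuf,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,    // producing GPU stage
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,   // consuming GPU stage
        0, 0, nullptr, 0, nullptr, 1, &barrier);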
D3D12/Vulkan Barriers
● Both abstract away GPU specifics
● Both let you over-sync/flush/decompress
● RGP will show you!
● PIX can warn you!
What about D3D11?
● Driver-tracked dependencies! (Incompatible with D3D12/Vulkan!)
● Like a run-time compiler
● Easy mode
● Lots of CPU work!
● Hard to do multithreaded
● Requires CPU-visible resource binding
Let’s Make a GPU!
[Diagram: the command processor ("the brains") reads commands from a command buffer and feeds a thread queue, which dispatches threads to the shader cores ("the muscle"); both talk to memory, and a counter tracks the current cycle.]
Introducing: The MJP-3000


MJP-3000 Limitations
● Compute only
● Only 16 shader cores
● No SIMD
● No thread cycling
● No caches
Simple Dispatch Example
● Dispatch 32 threads
● Each thread writes 1 element to memory
Simple Dispatch Example
[Timeline, 0 cy: DISPATCH(A, 32) enqueues 32 threads; the shader cores pick up the first 16 from the thread queue.]
[Timeline, 100 cy: the first 16 threads finish writing their data to memory, and the remaining 16 start executing.]
[Timeline, 200 cy: all threads are done writing to memory.]
Thread Barrier Example
● Dispatch B is dependent on Dispatch A
● We can’t have any overlap!
● New command: FLUSH
● Command processor waits for thread queue and shader cores to become empty
Thread Barrier Example
[Timeline, 0 cy: the command stream is DISPATCH(A, 24), FLUSH, DISPATCH(B, 24); A's first 16 threads start.]
[Timeline, 100 cy: A's last 8 threads run while the FLUSH waits for the queue to empty. No overlap, but half the cores are idle!]
[Timeline, 200 cy: A is done, the FLUSH retires, and B's first 16 threads start.]
[Timeline, 300 cy: B's last 8 threads run.]
[Timeline, 400 cy: all threads are done.]
Thread Barrier Example
● FLUSH prevented overlap 
● …but cores were 50% idle for 200 cycles
● 75% overall utilization 
● Took 400 cycles instead of 300 cycles
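
The arithmetic behind those numbers, for reference: useful work = 48 threads × 100 cy = 4800 thread-cycles, capacity = 16 cores × 400 cy = 6400 core-cycles, and 4800 / 6400 = 75%. With perfect packing the same work fits in 4800 / 16 = 300 cycles.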
The Cost of a Barrier
● Barrier cost is relative to the drop in utilization!
● Gain from removing a barrier is relative to % of idle shader cores
● Larger dispatches => better utilization
● Longer running threads => high flush cost
● Amdahl’s Law
D3D12/Vulkan Barriers are Flushes!
● Expect a thread flush for a transition/pipeline barrier between draws/dispatches
● Same for a D3D12_RESOURCE_UAV_BARRIER
● Try to group non-dependent draws/dispatches between barriers (see the sketch below)
● May not be true for future GPUs!
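
A hedged sketch of that grouping in D3D12 (bufferA/bufferB, the group counts, and all pipeline setup are assumed): record the independent dispatches back-to-back, then pay for one batched barrier call instead of flushing after each one.

    cmdList->Dispatch(groupsA, 1, 1);  // writes bufferA via UAV
    cmdList->Dispatch(groupsB, 1, 1);  // writes bufferB via UAV, independent of A

    // One batched call, one sync point, instead of a flush after each dispatch:
    D3D12_RESOURCE_BARRIER barriers[2] = {};
    barriers[0].Type = D3D12_RESOURCE_BARRIER_TYPE_UAV;
    barriers[0].UAV.pResource = bufferA;
    barriers[1].Type = D3D12_RESOURCE_BARRIER_TYPE_UAV;
    barriers[1].UAV.pResource = bufferB;
    cmdList->ResourceBarrier(2, barriers);

    cmdList->Dispatch(groupsC, 1, 1);  // consumes bufferA and bufferB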
Overlapping Dispatches Example
● Dispatch B still dependent on Dispatch A
● Dispatch C dependent on neither
● Let’s try to recover some perf from idle cores
Overlapping Dispatches Example
[Timeline, 0 cy: the stream is DISPATCH(A, 24), DISPATCH(C, 8), FLUSH, DISPATCH(B, 24); A's first 16 threads start.]
[Timeline, 100 cy: A's last 8 threads plus C's 8 threads keep all the cores busy while the FLUSH waits!]
[Timeline, 200 cy: A and C are done; B's first 16 threads start.]
[Timeline, 300 cy: B's last 8 threads run.]
[Timeline, 400 cy: all threads are done.]
Overlapping Dispatches Example
● Same latency for Dispatch A + Dispatch B
● But we got Dispatch C for free!
● Overall throughput increased
● Saved 100 cycles vs. sequential execution
● 75%->87.5% utilization!
Insights From Overlapping
● What if we think of the GPU as a CPU?
● Each command is an instruction
● Overlapping == Instruction Level Parallelism
● Explicit parallelism, not implicit
● Similar to VLIW (Very Long Instruction Word)
Bad Overlap Example
[Timeline, 0 cy: same stream, DISPATCH(A, 24), DISPATCH(C, 8), FLUSH, DISPATCH(B, 24), but this time C's threads are long-running.]
[Timeline, 200 cy: uh oh. A is done, but C's 8 threads are still running on half the cores, and the global FLUSH keeps B waiting.]
[Timeline, 500 cy: C finally finishes after roughly 400 cycles per thread; B's first 16 threads start.]
[Timeline, 600 cy: B's last 8 threads run.]
[Timeline, 700 cy: all threads are done.]
What Happened?
● 400 cycles with 50% idle cores
● 71.4% utilization
● 1 CP -> 1 queue -> global flush/sync
● B wanted to sync on A, but also synced on C
● Re-arranging could help a bit
● But wouldn’t fix the issue
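
For reference (reading C's thread lifetime of roughly 400 cycles off the timeline): useful work = (24 × 100) + (24 × 100) + (8 × 400) = 8000 thread-cycles, capacity = 16 cores × 700 cy = 11200 core-cycles, and 8000 / 11200 ≈ 71.4%.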
Why Not Two Command Processors?
Upgrading To The MJP-4000
[Diagram: a second front-end, with its own command processor and thread queue, consumes an independent command stream alongside the first; both queues feed the same 16 shader cores.]
Introducing The MJP-4000
● Two front-ends
● Two command processors for syncing
● Two thread queues
● Two independent command streams
● Still 16 shader cores
● Max throughput same as MJP-3000
● First-come, first-served for thread queues
Dual Command Stream Example
● Dispatch A -> 68 threads, 100 cycles
● Dispatch B -> 8 threads, 400 cycles
● B depends on A
● Dispatch C -> 80 threads, 100 cycles
● Dispatch D -> 80 threads, 100 cycles
● D depends on C
Dual Command Stream Example
[Timeline, 0 cy: the first command stream is submitted: DISPATCH(A, 68), FLUSH, DISPATCH(B, 8). A's first 16 threads start.]
[Timeline, 50 cy: the second command stream is submitted: DISPATCH(C, 80), FLUSH, DISPATCH(D, 80). All cores are busy, so C's threads stay in the queue.]
[Timeline, 100 cy: cores free up, and the two queues split the available cores.]
[Timeline, 600 cy: Dispatch A has only 4 threads left, but Dispatch C keeps the remaining cores busy!]
[Timeline, 700 cy: A's FLUSH retires and B's 8 long-running threads start.]
[Timeline, 800 cy: Dispatch B can only saturate half the cores, but Dispatch C can fill the rest!]
[Timeline, 1000 cy: C's FLUSH retires, and Dispatch D continues to keep the remaining 8 cores busy.]
[Timeline, 1200 to 1600 cy: B and D drain to completion.]
Did Two Front-Ends Help?
● It sure did!
● ~98% utilization!
● No additional cores
● Lower total execution time for A+B+C+D
● Higher latency for A+B or C+D submitted individually
Even Better For Real GPUs!
● Threads stalled on memory access
● Real GPUs will cycle threads on cores
● Idle time from cache flushes
● Tasks with limited shader core usage
● Depth-only rasterization
● On-Chip Tessellation/GS
● DMA
Thinking in CPU Terms
● Multiple front-ends ≈ SMT
● Simultaneous Multithreading (Hyperthreading)
● Interleave two instruction streams that share execution resources
● Similar goal: reduce idle time from stalls
Real-World Example: Bloom + DOF
[Diagram: after the Main Pass, two independent command streams run before Tone Mapping. Bloom: Downscale, Downscale, Blur H, Blur V, Upscale, Upscale. DOF: Setup, Downscale, Bokeh Gather, Flood Fill.]
Real-World Example: Bloom + DOF
[Diagram: Command Processor 0 executes the Main Pass, the Bloom chain, and Tone Mapping with queue-local barriers between passes, while Command Processor 1 executes the DOF chain; cross-queue barriers tie the two streams together at the split and the join.]
Submitting Commands in D3D12
● App records + submits command list(s)
● With fences for synchronization
● OS schedules commands to run on an engine
● Engine = driver exposed HW queue
● Direct, compute, copy, and video
● HW command processor executes commands
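
As a sketch of how fences express a cross-queue dependency (the queues, fence, fenceValue, and recorded command lists are assumed; error handling omitted):

    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue);  // GPU sets the fence when the work completes

    directQueue->Wait(fence, fenceValue);       // direct queue holds here until the fence hits
    directQueue->ExecuteCommandLists(1, &gfxList);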
Bloom + DOF in D3D12
[Diagram: the same split as before. Command Processor 0 runs the Bloom half with queue-local barriers, Command Processor 1 runs the DOF half, and cross-queue barriers connect them.]
Bloom + DOF in D3D12
[Diagram: in API terms, GfxCmdListA (Main Pass), GfxCmdListB (Bloom chain), and GfxCmdListC (Tone Mapping) go to the Direct Queue; ComputeCmdList (DOF chain) goes to the Compute Queue; barriers separate dependent passes within each list, and fences synchronize the two queues at the split and the join.]
D3D12 Multi-Queue Submission
● Submissions to multiple command queues may execute concurrently
● Depends on the OS scheduler
● Depends on the GPU
● Depends on the driver
● Depends on the queue/command list type
● Similar to threads on a CPU
D3D12 Virtualizes Queues
● D3D12 command queues ≠ hardware queues
● Hardware may have many queues, or only 1!
● The OS/scheduler will figure it out for you
● Flattening of parallel submissions
● Dependencies visible to scheduler via fences
● Check GPUView/PIX/RGP/Nsight to see what’s going on!
Vulkan Queues Are Different!
● They’re not virtualized!
● …or at least not in the same way
● Query at runtime for “queue families”
● Vk queue family ≈ D3D12 engine
● Explicit bind to exposed queue
● Still not guaranteed to be a HW queue
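
A sketch of that runtime query (physicalDevice assumed; add <vector> and the Vulkan headers):

    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, nullptr);
    std::vector<VkQueueFamilyProperties> families(familyCount);
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families.data());

    uint32_t computeFamily = UINT32_MAX;
    for (uint32_t i = 0; i < familyCount; ++i)
    {
        bool hasGraphics = (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        bool hasCompute = (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
        // A compute-capable family without graphics is a natural fit for async compute
        if (hasCompute && !hasGraphics)
            computeFamily = i;
    }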
Using Async Compute
● Fills in idle shader cores
● Just like our MJP-4000 example!
● Identify independent command streams
● …and submit them on separate queues
● Works best when lots of cores are idle
● Depth-only rendering
● Lots of barriers
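
In D3D12 terms, that means a second command queue. A hedged sketch (device, fence, fenceValue, and a recorded computeList are assumed; error handling omitted):

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Kick the independent stream alongside the direct queue's work:
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue);  // so dependent work can sync on it later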
Recap
GPU Barriers Ensure Data Visibility
● Probably involves GPU thread sync
● Maybe involves cache flushes
● Maybe involves data transformation
● Decompression
● API barriers describe visibility + dependencies
● Think about your dependencies! (or visualize them!)
GPUs Aren’t That Different
● Command processor = task scheduler
● Shader cores = worker cores
● Multi-core CPUs have similar problems!
● Parallel operations
● Coherency issues
Barriers = Idle Cores
● Keep the thread monster fed!
● Waits/stalls decrease utilization
● Careful barrier use => higher utilization
● Watch out for long-running threads!
● Batch your barriers!
● Flushing cache once >>> flushing multiple times
Using Multiple Queues
● Parallel submissions may increase utilization
● Not guaranteed! – check your tools!
● Won’t magically increase the core count
● Look for independent command streams
● Don’t go crazy with D3D12 fences
That’s It!
● Thanks to…
● Ste Tovey
● Rys Sommefeldt
● Nick Thibieroz
● Andrei Tatarinov
● Everyone at Ready At Dawn
Contact Info
[email protected]
[email protected]
● @mynameismjp
● https://mynameismjp.wordpress.com/
● https://github.com/TheRealMJP/GDC2019_Public
● Includes pptx and PDF with full speaker notes
