Some rendering techniques, such as path tracing and accumulation motion blur, combine information from several intermediate sub-frames to create a final "converged" frame. Each intermediate sub-frame corresponds to a slightly different point in time, which makes it possible to compute physically based accumulation motion blur that properly accounts for object rotation, deformation, and changes in materials or lighting.
The High Definition Render Pipeline (HDRP) provides a scripting API that lets you control the creation of sub-frames and the convergence of multi-frame rendering effects. With this API you can set the number of intermediate sub-frames (samples) and the point in time each sub-frame corresponds to. You can also control the sub-frame weights through a shutter profile, which describes how fast the physical camera's shutter opens and closes.
This API is particularly useful when recording path-traced animations. Normally, while you edit a scene, path tracing restarts its convergence whenever the scene changes, which gives artists an interactive editing workflow; during recording, however, convergence must be preserved. The image below shows a rotating GameObject, with path tracing and accumulation motion blur, recorded with the multi-frame recording API.
(1). API Overview
HDRP's recording API consists of three calls:
- BeginRecording: Call this when you want to start multi-frame rendering.
- PrepareNewSubFrame: Call this before rendering a new sub-frame.
- EndRecording: Call this when you want to stop multi-frame rendering.
BeginRecording takes the following parameters:
Parameter | Description |
---|---|
Samples | The number of sub-frames to accumulate. This overrides the number of path tracing samples set in the Volume. |
ShutterInterval | The amount of time the shutter stays open between two subsequent frames: a value of 0 corresponds to an instant shutter (no motion blur), while a value of 1 means there is no time gap between two subsequent frames. |
ShutterProfile | An animation curve that specifies the shutter position during the shutter interval. Alternatively, you can provide the time the shutter is fully open and the time it begins closing. |
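Roughly speaking, ShutterInterval scales the per-frame time step, so the simulated exposure of each converged frame is about ShutterInterval × Time.captureDeltaTime (see the note on Time.captureDeltaTime below). The following is only a minimal sketch of a BeginRecording call with concrete values; the class name and the numbers are illustrative, not taken from this page's examples:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class BeginRecordingSketch : MonoBehaviour
{
    void StartConvergedRecording()
    {
        // Record at 30 fps: each converged frame advances scene time by 1/30 s.
        Time.captureDeltaTime = 1.0f / 30;

        var renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        if (renderPipeline == null)
            return;

        // 64 sub-frames per converged frame. With ShutterInterval = 0.5 the shutter is
        // open for half of the frame interval (about 1/60 s of simulated exposure),
        // and it is fully open from 25% to 75% of the shutter interval.
        renderPipeline.BeginRecording(64, 0.5f, 0.25f, 0.75f);
    }
}
```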
Before calling the accumulation API, you should also set Time.captureDeltaTime to the desired value. The script example below demonstrates how to use the API:
(2). Script API Example
The following script demonstrates how to use the multi-frame rendering API to record a converged animation sequence with path tracing or accumulation motion blur. Attach the script to the Camera in your scene; you can then start and stop recording with the "Start Recording" and "Stop Recording" actions in the component's context menu:
```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class FrameManager : MonoBehaviour
{
    // The number of samples (sub-frames) used for accumulation.
    public int samples = 128;

    [Range(0.0f, 1.0f)]
    public float shutterInterval = 1.0f;

    // The point in the shutter interval when the shutter is fully open.
    [Range(0.0f, 1.0f)]
    public float shutterFullyOpen = 0.25f;

    // The point in the shutter interval when the shutter begins closing.
    [Range(0.0f, 1.0f)]
    public float shutterBeginsClosing = 0.75f;

    // The target frame rate when recording sub-frames.
    [Min(1)]
    public int captureFrameRate = 30;

    bool m_Recording = false;
    int m_Iteration = 0;
    int m_RecordedFrames = 0;
    float m_OriginalDeltaTime = 0;

    [ContextMenu("Start Recording")]
    void BeginMultiframeRendering()
    {
        // Set the desired capture delta time before using the accumulation API.
        m_OriginalDeltaTime = Time.captureDeltaTime;
        Time.captureDeltaTime = 1.0f / captureFrameRate;

        RenderPipelineManager.beginFrameRendering += PrepareSubFrameCallBack;

        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline.BeginRecording(samples, shutterInterval, shutterFullyOpen, shutterBeginsClosing);

        m_Recording = true;
        m_Iteration = 0;
        m_RecordedFrames = 0;
    }

    [ContextMenu("Stop Recording")]
    void StopMultiframeRendering()
    {
        RenderPipelineManager.beginFrameRendering -= PrepareSubFrameCallBack;

        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline?.EndRecording();

        m_Recording = false;
        Time.captureDeltaTime = m_OriginalDeltaTime;
    }

    void PrepareSubFrameCallBack(ScriptableRenderContext cntx, Camera[] cams)
    {
        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        if (renderPipeline != null && m_Recording)
        {
            renderPipeline.PrepareNewSubFrame();
            m_Iteration++;
        }
    }

    void OnDestroy()
    {
        if (m_Recording)
        {
            StopMultiframeRendering();
        }
    }

    void OnValidate()
    {
        // Make sure the shutter begins closing no earlier than the moment it is fully open.
        shutterBeginsClosing = Mathf.Max(shutterFullyOpen, shutterBeginsClosing);
    }

    void Update()
    {
        // Save a screenshot to disk while recording, once per converged frame.
        if (m_Recording && m_Iteration % samples == 0)
        {
            ScreenCapture.CaptureScreenshot($"frame_{m_RecordedFrames++}.png");
        }
    }
}
```
(3). Shutter Profiles
BeginRecording lets you specify how fast the camera's shutter opens and closes, known as the shutter profile. The images below show the motion blur produced by different shutter profiles for a blue sphere moving from left to right (the sphere moves at the same speed in every image; only the shutter profile changes):
- The horizontal axis corresponds to time and the vertical axis to how open the shutter is.
- The first three profiles can be achieved by setting the open/close parameters, for example (0,1), (1,1), and (0.25, 0.75); the last one requires an animation curve (see the sketch after this list).
- A profile that opens slowly produces a motion-trail effect, while a profile that opens and closes smoothly produces smoother-looking animation.
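For the curve-based case, here is a minimal sketch, assuming the BeginRecording overload that accepts an AnimationCurve for the ShutterProfile parameter described above (the class name, curve keys, and sample count are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class CurveShutterSketch : MonoBehaviour
{
    // A smooth open-then-close profile: the shutter ramps up during the first half
    // of the shutter interval and ramps back down during the second half.
    public AnimationCurve shutterProfile = new AnimationCurve(
        new Keyframe(0.0f, 0.0f),
        new Keyframe(0.5f, 1.0f),
        new Keyframe(1.0f, 0.0f));

    [ContextMenu("Start Recording With Curve")]
    void BeginRecordingWithCurve()
    {
        Time.captureDeltaTime = 1.0f / 30;

        var renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        // Assumes the BeginRecording overload that takes an AnimationCurve shutter profile.
        renderPipeline?.BeginRecording(128, 1.0f, shutterProfile);
    }
}
```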
(4). High-Quality Anti-aliasing with Accumulation
You can also use the accumulation API to perform high-quality anti-aliasing, similar to super-sampling, with a lower GPU memory footprint than rendering at a higher resolution. The idea is to jitter the projection matrix for every sub-frame. The following script demonstrates this technique:
```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;
using System.Collections.Generic;

public class SuperSampling : MonoBehaviour
{
    // The number of accumulated samples in the horizontal and vertical directions.
    public int samples = 8;

    public bool saveToDisk = true;

    bool m_Recording = false;
    int m_Iteration = 0;
    int m_RecordedFrames = 0;
    float m_OriginalDeltaTime = 0;
    List<Matrix4x4> m_OriginalProjectionMatrix = new List<Matrix4x4>();

    [ContextMenu("Start Accumulation")]
    void BeginAccumulation()
    {
        // Set the desired capture delta time before using the accumulation API.
        m_OriginalDeltaTime = Time.captureDeltaTime;
        Time.captureDeltaTime = 1.0f / 30;

        RenderPipelineManager.beginContextRendering += PrepareSubFrameCallBack;
        RenderPipelineManager.endContextRendering += EndSubFrameCallBack;

        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline.BeginRecording(samples * samples, 1, 0.0f, 1.0f);

        m_Recording = true;
        m_Iteration = 0;
        m_RecordedFrames = 0;
    }

    [ContextMenu("Stop Accumulation")]
    void StopAccumulation()
    {
        RenderPipelineManager.beginContextRendering -= PrepareSubFrameCallBack;
        RenderPipelineManager.endContextRendering -= EndSubFrameCallBack;

        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        renderPipeline?.EndRecording();

        m_Recording = false;
        Time.captureDeltaTime = m_OriginalDeltaTime;
    }

    Matrix4x4 GetJitteredProjectionMatrix(Camera camera)
    {
        // Compute a stratified sub-pixel offset for the current sub-frame.
        int totalSamples = samples * samples;
        int subframe = m_Iteration % totalSamples;
        int stratumX = subframe % samples;
        int stratumY = subframe / samples;
        float jitterX = stratumX * (1.0f / samples) - 0.5f;
        float jitterY = stratumY * (1.0f / samples) - 0.5f;

        // Offset the frustum planes by a sub-pixel amount.
        var planes = camera.projectionMatrix.decomposeProjection;
        float vertFov = Mathf.Abs(planes.top) + Mathf.Abs(planes.bottom);
        float horizFov = Mathf.Abs(planes.left) + Mathf.Abs(planes.right);
        var planeJitter = new Vector2(jitterX * horizFov / camera.pixelWidth,
            jitterY * vertFov / camera.pixelHeight);
        planes.left += planeJitter.x;
        planes.right += planeJitter.x;
        planes.top += planeJitter.y;
        planes.bottom += planeJitter.y;

        return Matrix4x4.Frustum(planes);
    }

    void PrepareSubFrameCallBack(ScriptableRenderContext cntx, List<Camera> cameras)
    {
        HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
        if (renderPipeline != null && m_Recording)
        {
            renderPipeline.PrepareNewSubFrame();
            m_Iteration++;
        }

        // Jitter the projection matrix of each camera, keeping the original so it can be restored.
        m_OriginalProjectionMatrix.Clear();
        foreach (var camera in cameras)
        {
            m_OriginalProjectionMatrix.Add(camera.projectionMatrix);
            camera.projectionMatrix = GetJitteredProjectionMatrix(camera);
        }
    }

    void EndSubFrameCallBack(ScriptableRenderContext cntx, List<Camera> cameras)
    {
        // Restore the original projection matrices.
        for (int i = 0; i < cameras.Count; ++i)
        {
            cameras[i].projectionMatrix = m_OriginalProjectionMatrix[i];
        }
    }

    void OnDestroy()
    {
        if (m_Recording)
        {
            StopAccumulation();
        }
    }

    void OnValidate()
    {
        // Make sure there is at least one sample.
        samples = Mathf.Max(1, samples);
    }

    void Update()
    {
        // Save a screenshot to disk while recording, once per converged frame.
        if (saveToDisk && m_Recording && m_Iteration % (samples * samples) == 0)
        {
            ScreenCapture.CaptureScreenshot($"frame_{m_RecordedFrames++}.png");
        }
    }
}
```
(5). Limitations
The multi-frame rendering API internally modifies the scene's Time.timeScale, which means that:
- You cannot set different accumulation motion blur parameters for each camera.
- Projects that already modify this parameter on a per-frame basis are not compatible with this feature.