# ComfyUI Error Report

## Error Details
- **Node ID:** 78
- **Node Type:** Canny
- **Exception Type:** torch.OutOfMemoryError
- **Exception Message:** Allocation on device
This error means you ran out of memory on your GPU.

TIPS: If the workflow worked before, you may have accidentally set the batch_size to a large number.
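The batch-size tip reflects how peak VRAM scales: a node that receives all frames at once allocates every intermediate tensor for the whole batch. A minimal sketch of the general idea (the `map_frames` helper is hypothetical, not a ComfyUI API) is to map an operation frame-by-frame so peak usage is bounded by a single frame:

```python
import torch

# Hypothetical helper: apply `fn` to one frame at a time instead of the whole
# batch, so peak memory is that of a single frame's intermediates.
def map_frames(fn, batch: torch.Tensor) -> torch.Tensor:
    # batch: (B, C, H, W); fn maps (1, C, H, W) -> (1, C, H, W)
    outputs = [fn(frame.unsqueeze(0)) for frame in batch]
    return torch.cat(outputs, dim=0)
```

This trades throughput for memory; the equivalent fix inside a workflow is simply lowering batch_size or resolution.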
## Stack Trace
```
File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
File "C:\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_canny.py", line 19, in detect_edge
    output = canny(image.to(comfy.model_management.get_torch_device()).movedim(-1, 1), low_threshold, high_threshold)
File "C:\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\kornia\filters\canny.py", line 112, in canny
    nms_magnitude: Tensor = F.conv2d(magnitude, nms_kernels, padding=nms_kernels.shape[-1] // 2)
```
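The allocation fails inside the Canny node's GPU call. As a rough illustration of a workaround pattern (the `run_with_cpu_fallback` helper is hypothetical, not ComfyUI's actual code), an op can be retried on the CPU when the GPU allocation fails:

```python
import torch

# Hypothetical helper (not part of ComfyUI): retry a tensor operation on the
# CPU when the GPU raises an out-of-memory error, as the Canny node did here.
def run_with_cpu_fallback(op, x: torch.Tensor) -> torch.Tensor:
    try:
        return op(x)
    except torch.OutOfMemoryError:
        torch.cuda.empty_cache()  # release cached blocks from the failed attempt
        return op(x.cpu())        # slower, but bounded by system RAM, not VRAM
```

In practice the simpler fixes are lowering the resolution or batch size of the images feeding the Canny node.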
## System Information
- **ComfyUI Version:** 0.3.49
- **Arguments:** ComfyUI\main.py --windows-standalone-build --use-sage-attention
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC
v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.7.1+cu128
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync


- **Type:** cuda
- **VRAM Total:** 12878086144
- **VRAM Free:** 11446255616
- **Torch VRAM Total:** 100663296
- **Torch VRAM Free:** 63963136

## Logs
```
2025-08-16T00:07:39.769103 -
2025-08-16T00:07:42.981112 - FETCH ComfyRegistry Data: 50/94
2025-08-16T00:07:46.191140 - FETCH ComfyRegistry Data: 55/94
2025-08-16T00:07:49.407615 - FETCH ComfyRegistry Data: 60/94
2025-08-16T00:07:52.616054 - FETCH ComfyRegistry Data: 65/94
2025-08-16T00:07:55.811347 - FETCH ComfyRegistry Data: 70/94
2025-08-16T00:07:59.055664 - FETCH ComfyRegistry Data: 75/94
2025-08-16T00:08:02.256919 - FETCH ComfyRegistry Data: 80/94
2025-08-16T00:08:05.501013 - FETCH ComfyRegistry Data: 85/94
2025-08-16T00:08:08.813961 - FETCH ComfyRegistry Data: 90/94
2025-08-16T00:08:11.828202 - FETCH ComfyRegistry Data [DONE]
2025-08-16T00:08:11.929523 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-08-16T00:08:11.960406 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-08-16T00:08:12.208648 - [DONE]
2025-08-16T00:08:12.256738 - [ComfyUI-Manager] All startup tasks have been completed.
2025-08-16T00:08:54.181848 - got prompt
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 193:
2025-08-16T00:08:54.196010 - * LoaderGGUF 129:
2025-08-16T00:08:54.196010 -   - Value not in list: gguf_name: 'Wan2.1-VACE-14B-Q4_K_M.gguf' not in ['Wan2.1_14B_VACE-Q4_K_S.gguf', 'wan2.2_i2v_high_noise_14B_Q4_K_M.gguf', 'wan2.2_i2v_low_noise_14B_Q4_K_M.gguf']
2025-08-16T00:08:54.196010 - * LoadImage 73:
2025-08-16T00:08:54.196010 -   - Custom validation failed for node: image - Invalid image file: 20250430_1655_Freckled Focus_simple_compose_01jt3k243dfxa8kzagna8wz6yc.png
2025-08-16T00:08:54.196010 - * VHS_LoadVideo 114:
2025-08-16T00:08:54.196010 -   - Custom validation failed for node: video - Invalid video file: 46948-449623669_tiny.mp4
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 149:
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 174:
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 181:
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 112:
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - Failed to validate prompt for output 202:
2025-08-16T00:08:54.196010 - Output will be ignored
2025-08-16T00:08:54.196010 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-08-16T00:11:39.951150 - got prompt
2025-08-16T00:11:40.028810 - Using xformers attention in VAE
2025-08-16T00:11:40.030131 - Using xformers attention in VAE
2025-08-16T00:11:40.210264 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:11:40.236748 - VAE load device: cuda:0, offload device: cpu, dtype:
torch.bfloat16
2025-08-16T00:11:44.030921 - [MultiGPU text_encoder_device_patched] Returning
device: cuda:0 (current_text_encoder_device=cuda:0)
2025-08-16T00:11:44.575034 - CLIP/text encoder model load device: cuda:0, offload
device: cpu, current: cpu, dtype: torch.float16
2025-08-16T00:11:45.593214 - clip missing:
['encoder.block.0.layer.0.SelfAttention.q.scale_weight',
'encoder.block.0.layer.0.SelfAttention.k.scale_weight',
'encoder.block.0.layer.0.SelfAttention.v.scale_weight',
'encoder.block.0.layer.0.SelfAttention.o.scale_weight',
'encoder.block.0.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.0.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.0.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.1.layer.0.SelfAttention.q.scale_weight',
'encoder.block.1.layer.0.SelfAttention.k.scale_weight',
'encoder.block.1.layer.0.SelfAttention.v.scale_weight',
'encoder.block.1.layer.0.SelfAttention.o.scale_weight',
'encoder.block.1.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.1.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.1.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.2.layer.0.SelfAttention.q.scale_weight',
'encoder.block.2.layer.0.SelfAttention.k.scale_weight',
'encoder.block.2.layer.0.SelfAttention.v.scale_weight',
'encoder.block.2.layer.0.SelfAttention.o.scale_weight',
'encoder.block.2.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.2.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.2.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.3.layer.0.SelfAttention.q.scale_weight',
'encoder.block.3.layer.0.SelfAttention.k.scale_weight',
'encoder.block.3.layer.0.SelfAttention.v.scale_weight',
'encoder.block.3.layer.0.SelfAttention.o.scale_weight',
'encoder.block.3.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.3.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.3.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.4.layer.0.SelfAttention.q.scale_weight',
'encoder.block.4.layer.0.SelfAttention.k.scale_weight',
'encoder.block.4.layer.0.SelfAttention.v.scale_weight',
'encoder.block.4.layer.0.SelfAttention.o.scale_weight',
'encoder.block.4.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.4.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.4.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.5.layer.0.SelfAttention.q.scale_weight',
'encoder.block.5.layer.0.SelfAttention.k.scale_weight',
'encoder.block.5.layer.0.SelfAttention.v.scale_weight',
'encoder.block.5.layer.0.SelfAttention.o.scale_weight',
'encoder.block.5.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.5.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.5.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.6.layer.0.SelfAttention.q.scale_weight',
'encoder.block.6.layer.0.SelfAttention.k.scale_weight',
'encoder.block.6.layer.0.SelfAttention.v.scale_weight',
'encoder.block.6.layer.0.SelfAttention.o.scale_weight',
'encoder.block.6.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.6.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.6.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.7.layer.0.SelfAttention.q.scale_weight',
'encoder.block.7.layer.0.SelfAttention.k.scale_weight',
'encoder.block.7.layer.0.SelfAttention.v.scale_weight',
'encoder.block.7.layer.0.SelfAttention.o.scale_weight',
'encoder.block.7.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.7.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.7.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.8.layer.0.SelfAttention.q.scale_weight',
'encoder.block.8.layer.0.SelfAttention.k.scale_weight',
'encoder.block.8.layer.0.SelfAttention.v.scale_weight',
'encoder.block.8.layer.0.SelfAttention.o.scale_weight',
'encoder.block.8.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.8.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.8.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.9.layer.0.SelfAttention.q.scale_weight',
'encoder.block.9.layer.0.SelfAttention.k.scale_weight',
'encoder.block.9.layer.0.SelfAttention.v.scale_weight',
'encoder.block.9.layer.0.SelfAttention.o.scale_weight',
'encoder.block.9.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.9.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.9.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.10.layer.0.SelfAttention.q.scale_weight',
'encoder.block.10.layer.0.SelfAttention.k.scale_weight',
'encoder.block.10.layer.0.SelfAttention.v.scale_weight',
'encoder.block.10.layer.0.SelfAttention.o.scale_weight',
'encoder.block.10.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.10.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.10.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.11.layer.0.SelfAttention.q.scale_weight',
'encoder.block.11.layer.0.SelfAttention.k.scale_weight',
'encoder.block.11.layer.0.SelfAttention.v.scale_weight',
'encoder.block.11.layer.0.SelfAttention.o.scale_weight',
'encoder.block.11.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.11.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.11.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.12.layer.0.SelfAttention.q.scale_weight',
'encoder.block.12.layer.0.SelfAttention.k.scale_weight',
'encoder.block.12.layer.0.SelfAttention.v.scale_weight',
'encoder.block.12.layer.0.SelfAttention.o.scale_weight',
'encoder.block.12.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.12.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.12.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.13.layer.0.SelfAttention.q.scale_weight',
'encoder.block.13.layer.0.SelfAttention.k.scale_weight',
'encoder.block.13.layer.0.SelfAttention.v.scale_weight',
'encoder.block.13.layer.0.SelfAttention.o.scale_weight',
'encoder.block.13.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.13.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.13.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.14.layer.0.SelfAttention.q.scale_weight',
'encoder.block.14.layer.0.SelfAttention.k.scale_weight',
'encoder.block.14.layer.0.SelfAttention.v.scale_weight',
'encoder.block.14.layer.0.SelfAttention.o.scale_weight',
'encoder.block.14.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.14.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.14.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.15.layer.0.SelfAttention.q.scale_weight',
'encoder.block.15.layer.0.SelfAttention.k.scale_weight',
'encoder.block.15.layer.0.SelfAttention.v.scale_weight',
'encoder.block.15.layer.0.SelfAttention.o.scale_weight',
'encoder.block.15.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.15.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.15.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.16.layer.0.SelfAttention.q.scale_weight',
'encoder.block.16.layer.0.SelfAttention.k.scale_weight',
'encoder.block.16.layer.0.SelfAttention.v.scale_weight',
'encoder.block.16.layer.0.SelfAttention.o.scale_weight',
'encoder.block.16.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.16.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.16.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.17.layer.0.SelfAttention.q.scale_weight',
'encoder.block.17.layer.0.SelfAttention.k.scale_weight',
'encoder.block.17.layer.0.SelfAttention.v.scale_weight',
'encoder.block.17.layer.0.SelfAttention.o.scale_weight',
'encoder.block.17.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.17.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.17.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.18.layer.0.SelfAttention.q.scale_weight',
'encoder.block.18.layer.0.SelfAttention.k.scale_weight',
'encoder.block.18.layer.0.SelfAttention.v.scale_weight',
'encoder.block.18.layer.0.SelfAttention.o.scale_weight',
'encoder.block.18.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.18.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.18.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.19.layer.0.SelfAttention.q.scale_weight',
'encoder.block.19.layer.0.SelfAttention.k.scale_weight',
'encoder.block.19.layer.0.SelfAttention.v.scale_weight',
'encoder.block.19.layer.0.SelfAttention.o.scale_weight',
'encoder.block.19.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.19.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.19.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.20.layer.0.SelfAttention.q.scale_weight',
'encoder.block.20.layer.0.SelfAttention.k.scale_weight',
'encoder.block.20.layer.0.SelfAttention.v.scale_weight',
'encoder.block.20.layer.0.SelfAttention.o.scale_weight',
'encoder.block.20.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.20.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.20.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.21.layer.0.SelfAttention.q.scale_weight',
'encoder.block.21.layer.0.SelfAttention.k.scale_weight',
'encoder.block.21.layer.0.SelfAttention.v.scale_weight',
'encoder.block.21.layer.0.SelfAttention.o.scale_weight',
'encoder.block.21.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.21.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.21.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.22.layer.0.SelfAttention.q.scale_weight',
'encoder.block.22.layer.0.SelfAttention.k.scale_weight',
'encoder.block.22.layer.0.SelfAttention.v.scale_weight',
'encoder.block.22.layer.0.SelfAttention.o.scale_weight',
'encoder.block.22.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.22.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.22.layer.1.DenseReluDense.wo.scale_weight',
'encoder.block.23.layer.0.SelfAttention.q.scale_weight',
'encoder.block.23.layer.0.SelfAttention.k.scale_weight',
'encoder.block.23.layer.0.SelfAttention.v.scale_weight',
'encoder.block.23.layer.0.SelfAttention.o.scale_weight',
'encoder.block.23.layer.1.DenseReluDense.wi_0.scale_weight',
'encoder.block.23.layer.1.DenseReluDense.wi_1.scale_weight',
'encoder.block.23.layer.1.DenseReluDense.wo.scale_weight']
2025-08-16T00:11:45.657438 - C:\ComfyUI-Easy-Install\ComfyUI\custom_nodes\gguf\pig.py:341: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\torch\csrc\utils\tensor_numpy.cpp:209.)
  torch_tensor = torch.from_numpy(tensor.data)
2025-08-16T00:11:45.664949 - gguf qtypes: F32 (836), Q4_K (437), Q5_K (52), F16 (6)
2025-08-16T00:11:45.675362 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:11:45.696699 - model weight dtype torch.float16, manual cast: None
2025-08-16T00:11:45.697699 - model_type FLOW
2025-08-16T00:11:45.824572 - [rgthree-comfy][Power Lora Loader] Lora "Wan21_CausVid_14B_T2V_lora_rank32.safetensors" not found, skipping.
2025-08-16T00:11:45.824572 - Requested to load WanTEModel
2025-08-16T00:11:50.900875 - loaded completely 9633.8 6419.4765625 True
2025-08-16T00:11:52.227806 - Requested to load WanVAE
2025-08-16T00:11:52.326192 - loaded completely 268.92187118530273
242.02829551696777 True
2025-08-16T00:11:59.647418 - Requested to load WAN21_Vace
2025-08-16T00:12:06.648973 - loaded partially 6383.687973297119 6383.68359375 0
2025-08-16T00:12:06.652974 - Attempting to release mmap (491)
2025-08-16T00:13:28.920306 - 100%|██████████| 6/6 [01:14<00:00, 12.46s/it]
2025-08-16T00:13:28.926751 - Requested to load WanVAE
2025-08-16T00:13:29.526417 - loaded completely 848.546875 242.02829551696777 True
2025-08-16T00:13:37.482158 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:13:42.198802 - Comfy-VFI: Clearing cache...
2025-08-16T00:13:42.228482 - Done cache clearing
2025-08-16T00:13:46.317238 - Comfy-VFI: Clearing cache...
2025-08-16T00:13:46.318238 - Done cache clearing
2025-08-16T00:13:50.931171 - Comfy-VFI: Clearing cache...
2025-08-16T00:13:50.935169 - Done cache clearing
2025-08-16T00:13:55.234318 - Comfy-VFI: Clearing cache...
2025-08-16T00:13:55.244773 - Done cache clearing
2025-08-16T00:13:55.589890 - Comfy-VFI done! 82 frames generated at resolution: torch.Size([3, 480, 720])
2025-08-16T00:13:55.590901 - Comfy-VFI: Final clearing cache...
2025-08-16T00:13:55.590901 - Done cache clearing
2025-08-16T00:13:55.911029 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:13:57.909931 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:14:23.464116 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:14:23.478814 - Prompt executed in 163.50 seconds
2025-08-16T00:15:12.890296 - got prompt
2025-08-16T00:15:18.576343 - Requested to load WanVAE
2025-08-16T00:15:18.915194 - loaded completely 6359.410327911377 242.02829551696777
True
2025-08-16T00:15:24.216091 - Requested to load WAN21_Vace
2025-08-16T00:15:28.459694 - loaded partially 6333.176426208496 6333.176025390625 0
2025-08-16T00:17:01.452873 - 100%|██████████| 6/6 [01:32<00:00, 15.50s/it]
2025-08-16T00:17:01.455022 - Requested to load WanVAE
2025-08-16T00:17:01.915964 - loaded completely 785.707202911377 242.02829551696777
True
2025-08-16T00:17:12.640059 - Comfy-VFI: Clearing cache...
2025-08-16T00:17:12.667200 - Done cache clearing
2025-08-16T00:17:16.493420 - Comfy-VFI: Clearing cache...
2025-08-16T00:17:16.494416 - Done cache clearing
2025-08-16T00:17:21.009131 - Comfy-VFI: Clearing cache...
2025-08-16T00:17:21.015642 - Done cache clearing
2025-08-16T00:17:25.025951 - Comfy-VFI: Clearing cache...
2025-08-16T00:17:25.027073 - Done cache clearing
2025-08-16T00:17:25.347590 - Comfy-VFI done! 82 frames generated at resolution: torch.Size([3, 480, 720])
2025-08-16T00:17:25.348617 - Comfy-VFI: Final clearing cache...
2025-08-16T00:17:25.348617 - Done cache clearing
2025-08-16T00:17:25.675248 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:17:27.569115 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:17:38.003957 - got prompt
2025-08-16T00:17:44.930682 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:17:44.934206 - Prompt executed in 152.01 seconds
2025-08-16T00:17:45.753316 - Requested to load WanVAE
2025-08-16T00:17:45.974431 - loaded completely 6359.410327911377 242.02829551696777
True
2025-08-16T00:17:51.425321 - Requested to load WAN21_Vace
2025-08-16T00:17:53.785173 - loaded partially 6333.176426208496 6333.176025390625 0
2025-08-16T00:19:10.778659 - 100%|██████████| 6/6 [01:16<00:00, 12.83s/it]
2025-08-16T00:19:10.786049 - Requested to load WanVAE
2025-08-16T00:19:11.277399 - loaded completely 813.871265411377 242.02829551696777
True
2025-08-16T00:19:21.880332 - Comfy-VFI: Clearing cache...
2025-08-16T00:19:21.907563 - Done cache clearing
2025-08-16T00:19:25.716632 - Comfy-VFI: Clearing cache...
2025-08-16T00:19:25.716632 - Done cache clearing
2025-08-16T00:19:29.529637 - Comfy-VFI: Clearing cache...
2025-08-16T00:19:29.530691 - Done cache clearing
2025-08-16T00:19:33.530394 - Comfy-VFI: Clearing cache...
2025-08-16T00:19:33.531394 - Done cache clearing
2025-08-16T00:19:33.849553 - Comfy-VFI done! 82 frames generated at resolution: torch.Size([3, 480, 720])
2025-08-16T00:19:33.850770 - Comfy-VFI: Final clearing cache...
2025-08-16T00:19:33.850770 - Done cache clearing
2025-08-16T00:19:34.168263 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:19:36.068052 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:19:52.002869 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:19:52.004875 - Prompt executed in 126.75 seconds
2025-08-16T00:23:16.308989 - got prompt
2025-08-16T00:23:17.194644 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:23:19.075068 - [rgthree-comfy][Power Lora Loader] Lora "Wan21_CausVid_14B_T2V_lora_rank32.safetensors" not found, skipping.
2025-08-16T00:23:19.104790 - Requested to load WanTEModel
2025-08-16T00:23:45.062514 - loaded completely 9496.8 6419.4765625 True
2025-08-16T00:23:45.857880 - Requested to load WanVAE
2025-08-16T00:23:47.900445 - loaded completely 4383.3828125 242.02829551696777 True
2025-08-16T00:24:02.926463 - Requested to load WAN21_Vace
2025-08-16T00:24:02.997546 - 0 models unloaded.
2025-08-16T00:24:03.145918 - loaded partially 128.0 127.998291015625 0
2025-08-16T00:24:03.151075 - Attempting to release mmap (726)
2025-08-16T00:25:26.581127 -  33%|███       | 2/6 [01:12<02:24, 36.07s/it]
2025-08-16T00:25:37.079659 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:26:04.309875 -  50%|█████     | 3/6 [01:50<01:50, 36.83s/it]
2025-08-16T00:26:06.490379 - got prompt
2025-08-16T00:26:42.499639 -  67%|███████   | 4/6 [02:28<01:14, 37.37s/it]
2025-08-16T00:26:55.882511 - got prompt
2025-08-16T00:27:57.889779 - 100%|██████████| 6/6 [03:43<00:00, 37.27s/it]
2025-08-16T00:27:57.893245 - Requested to load WanVAE
2025-08-16T00:27:57.988235 - loaded completely 3266.614990234375 242.02829551696777
True
2025-08-16T00:28:11.034156 - Prompt executed in 294.69 seconds
2025-08-16T00:28:11.792264 - Prompt executed in 0.01 seconds
2025-08-16T00:28:18.520767 - Requested to load WanTEModel
2025-08-16T00:28:21.674696 - loaded completely 9126.773413467406 6419.4765625 True
2025-08-16T00:28:28.585164 - Requested to load WAN21_Vace
2025-08-16T00:28:31.802076 - loaded partially 6334.687973297119 6334.679931640625 0
2025-08-16T00:28:31.806589 - Attempting to release mmap (487)
2025-08-16T00:29:50.236391 - 100%|██████████| 6/6 [01:17<00:00, 12.84s/it]
2025-08-16T00:29:50.240319 - Requested to load WanVAE
2025-08-16T00:29:50.761273 - loaded completely 913.95703125 242.02829551696777 True
2025-08-16T00:29:58.845994 - Comfy-VFI: Clearing cache...
2025-08-16T00:29:58.873704 - Done cache clearing
2025-08-16T00:30:02.853168 - Comfy-VFI: Clearing cache...
2025-08-16T00:30:02.855286 - Done cache clearing
2025-08-16T00:30:07.129880 - Comfy-VFI: Clearing cache...
2025-08-16T00:30:07.132387 - Done cache clearing
2025-08-16T00:30:11.191634 - Comfy-VFI: Clearing cache...
2025-08-16T00:30:11.192688 - Done cache clearing
2025-08-16T00:30:11.505137 - Comfy-VFI done! 82 frames generated at resolution: torch.Size([3, 480, 720])
2025-08-16T00:30:11.506375 - Comfy-VFI: Final clearing cache...
2025-08-16T00:30:11.506375 - Done cache clearing
2025-08-16T00:30:11.850120 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:30:13.973887 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:30:41.205932 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:30:41.214510 - Prompt executed in 149.41 seconds
2025-08-16T00:35:16.487988 - got prompt
2025-08-16T00:35:16.543249 - Prompt executed in 0.03 seconds
2025-08-16T00:35:35.646165 - got prompt
2025-08-16T00:35:36.612988 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:35:38.588121 - Requested to load WanTEModel
2025-08-16T00:35:46.858365 - loaded completely 9496.8 6419.4765625 True
2025-08-16T00:35:47.888421 - Requested to load WanVAE
2025-08-16T00:35:49.622719 - loaded completely 4383.3828125 242.02829551696777 True
2025-08-16T00:36:05.054576 - Requested to load WAN21_Vace
2025-08-16T00:36:05.136492 - 0 models unloaded.
2025-08-16T00:36:06.060417 - loaded partially 128.0 127.998291015625 0
2025-08-16T00:36:06.064467 - Attempting to release mmap (726)
2025-08-16T00:40:11.699676 - 100%|██████████| 6/6 [03:50<00:00, 38.50s/it]
2025-08-16T00:40:11.703986 - Requested to load WanVAE
2025-08-16T00:40:11.816114 - loaded completely 3266.614990234375 242.02829551696777
True
2025-08-16T00:40:25.194750 - Prompt executed in 289.52 seconds
2025-08-16T00:40:27.387805 - got prompt
2025-08-16T00:40:27.402339 - Failed to validate prompt for output 112:
2025-08-16T00:40:27.402339 - * ClipLoaderGGUF 117:
2025-08-16T00:40:27.402339 -   - Value not in list: clip_name: 'umt5-xxl-encoder-Q8_0.gguf' not in ['clip_g_hidream.safetensors', 'clip_l.safetensors', 'clip_l_hidream.safetensors', 'llama_3.1_8b_instruct_fp8_scaled.safetensors', 'qwen_2.5_vl_7b_fp8_scaled.safetensors', 'qwen_2.5_vl_fp16.safetensors', 't5xxl_fp16.safetensors', 't5xxl_fp8_e4m3fn.safetensors', 't5xxl_fp8_e4m3fn_scaled.safetensors', 'umt5_xxl_fp8_e4m3fn_scaled.safetensors']
2025-08-16T00:40:27.402339 - Output will be ignored
2025-08-16T00:40:45.335286 - [MultiGPU get_torch_device_patched] Returning device:
cuda:0 (current_device=cuda:0)
2025-08-16T00:40:48.599290 - !!! Exception during processing !!! Allocation on device
2025-08-16T00:40:48.830739 - Traceback (most recent call last):
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_canny.py", line 19, in detect_edge
    output = canny(image.to(comfy.model_management.get_torch_device()).movedim(-1, 1), low_threshold, high_threshold)
  File "C:\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\kornia\filters\canny.py", line 102, in canny
    magnitude: Tensor = torch.sqrt(gx * gx + gy * gy + eps)
torch.OutOfMemoryError: Allocation on device

2025-08-16T00:40:48.831745 - Got an OOM, unloading all loaded models.


2025-08-16T00:40:48.832751 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:40:48.949853 - Prompt executed in 21.54 seconds
2025-08-16T00:40:49.468331 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:41:52.531409 - got prompt
2025-08-16T00:41:52.547827 - Failed to validate prompt for output 112:
2025-08-16T00:41:52.547827 - * ClipLoaderGGUF 117:
2025-08-16T00:41:52.547827 - - Value not in list: clip_name: 'umt5-xxl-encoder-Q8_0.gguf' not in ['clip_g_hidream.safetensors', 'clip_l.safetensors', 'clip_l_hidream.safetensors', 'llama_3.1_8b_instruct_fp8_scaled.safetensors', 'qwen_2.5_vl_7b_fp8_scaled.safetensors', 'qwen_2.5_vl_fp16.safetensors', 't5xxl_fp16.safetensors', 't5xxl_fp8_e4m3fn.safetensors', 't5xxl_fp8_e4m3fn_scaled.safetensors', 'umt5_xxl_fp8_e4m3fn_scaled.safetensors']
2025-08-16T00:41:52.547827 - Output will be ignored
2025-08-16T00:41:52.578256 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:41:53.872147 - !!! Exception during processing !!! Allocation on device
2025-08-16T00:41:53.873149 - Traceback (most recent call last):
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_canny.py", line 19, in detect_edge
    output = canny(image.to(comfy.model_management.get_torch_device()).movedim(-1, 1), low_threshold, high_threshold)
  File "C:\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\kornia\filters\canny.py", line 112, in canny
    nms_magnitude: Tensor = F.conv2d(magnitude, nms_kernels, padding=nms_kernels.shape[-1] // 2)
torch.OutOfMemoryError: Allocation on device

2025-08-16T00:41:53.873149 - Got an OOM, unloading all loaded models.


2025-08-16T00:41:53.873149 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:41:53.884099 - Prompt executed in 1.33 seconds
2025-08-16T00:43:43.665763 - got prompt
2025-08-16T00:43:43.680773 - Failed to validate prompt for output 112:
2025-08-16T00:43:43.680773 - * ClipLoaderGGUF 117:
2025-08-16T00:43:43.680773 - - Value not in list: clip_name: 'umt5-xxl-encoder-Q8_0.gguf' not in ['clip_g_hidream.safetensors', 'clip_l.safetensors', 'clip_l_hidream.safetensors', 'llama_3.1_8b_instruct_fp8_scaled.safetensors', 'qwen_2.5_vl_7b_fp8_scaled.safetensors', 'qwen_2.5_vl_fp16.safetensors', 't5xxl_fp16.safetensors', 't5xxl_fp8_e4m3fn.safetensors', 't5xxl_fp8_e4m3fn_scaled.safetensors', 'umt5_xxl_fp8_e4m3fn_scaled.safetensors']
2025-08-16T00:43:43.680773 - Output will be ignored
2025-08-16T00:43:43.711814 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:43:44.842162 - !!! Exception during processing !!! Allocation on device
2025-08-16T00:43:44.842162 - Traceback (most recent call last):
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_canny.py", line 19, in detect_edge
    output = canny(image.to(comfy.model_management.get_torch_device()).movedim(-1, 1), low_threshold, high_threshold)
  File "C:\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\kornia\filters\canny.py", line 112, in canny
    nms_magnitude: Tensor = F.conv2d(magnitude, nms_kernels, padding=nms_kernels.shape[-1] // 2)
torch.OutOfMemoryError: Allocation on device

2025-08-16T00:43:44.842162 - Got an OOM, unloading all loaded models.


2025-08-16T00:43:44.842162 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:43:44.855106 - Prompt executed in 1.17 seconds
2025-08-16T00:46:24.937578 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:46:31.829271 - got prompt
2025-08-16T00:46:31.859942 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:46:34.917942 - !!! Exception during processing !!! Allocation on device
2025-08-16T00:46:34.918948 - Traceback (most recent call last):
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "C:\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_canny.py", line 19, in detect_edge
    output = canny(image.to(comfy.model_management.get_torch_device()).movedim(-1, 1), low_threshold, high_threshold)
  File "C:\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\kornia\filters\canny.py", line 112, in canny
    nms_magnitude: Tensor = F.conv2d(magnitude, nms_kernels, padding=nms_kernels.shape[-1] // 2)
torch.OutOfMemoryError: Allocation on device

2025-08-16T00:46:34.918948 - Got an OOM, unloading all loaded models.


2025-08-16T00:46:34.918948 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)
2025-08-16T00:46:34.921280 - Prompt executed in 3.08 seconds
2025-08-16T00:46:35.223802 - [MultiGPU get_torch_device_patched] Returning device: cuda:0 (current_device=cuda:0)

```
## Attached Workflow
Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```

## Additional Context
(Please add any additional context or steps to reproduce the error here)
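The OOM is raised inside kornia's `canny` while it builds per-pixel intermediates (gradients, magnitude, NMS maps), so VRAM use scales with batch size times resolution. The sketch below is a rough, hypothetical back-of-envelope estimate of those intermediates; the buffer count, batch size, and resolution are assumptions for illustration, not values taken from this workflow.

```python
# Rough VRAM estimate for the single-channel float32 intermediates that a
# Canny pass keeps alive at once (e.g. blurred image, gx, gy, magnitude,
# angle, NMS output). n_buffers=6 is an assumed count, not measured.

def canny_intermediates_bytes(batch: int, height: int, width: int,
                              n_buffers: int = 6) -> int:
    """Approximate bytes for n_buffers single-channel float32 tensors."""
    bytes_per_tensor = batch * height * width * 4  # float32 = 4 bytes
    return n_buffers * bytes_per_tensor

# Hypothetical example: an 81-frame batch at 1280x720
gib = canny_intermediates_bytes(81, 720, 1280) / 2**30
print(f"~{gib:.1f} GiB of intermediates")
```

Even under these conservative assumptions a video-sized batch reaches GiB-scale allocations before the model itself loads, which is consistent with the node failing only seconds into execution; running Canny on a downscaled or smaller-batch input would shrink this linearly.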
