[PyTorch] thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed

Error message: C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:250: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed

Likely cause: a target label is outside the valid class range. `CrossEntropyLoss` requires every target `t` to satisfy `0 <= t < n_classes`, where `n_classes` is the size of the model's output dimension. For example, if the model outputs 16 classes (valid labels 0–15) but the label tensor contains the value 19, the CUDA kernel that indexes the logits with that label fires this assertion. Note that the check is on the labels, not on the predicted logits.
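A cheap sanity check on the labels before calling the loss catches this on the Python side with a readable message instead of an opaque device-side assert. This is a minimal sketch; `n_classes` and the example label values are hypothetical:

```python
import torch
import torch.nn as nn

def labels_in_range(labels, n_classes):
    """True iff every target is a valid class index in [0, n_classes)."""
    return all(0 <= int(t) < n_classes for t in labels)

n_classes = 16                           # hypothetical: valid labels are 0-15
good = torch.tensor([0, 3, 15])
bad = torch.tensor([0, 3, 19])           # 19 >= n_classes -> device-side assert on GPU

print(labels_in_range(good, n_classes))  # True
print(labels_in_range(bad, n_classes))   # False

# With in-range labels the loss computes normally
logits = torch.randn(3, n_classes)
loss = nn.CrossEntropyLoss()(logits, good)
```

Running this check once over the whole dataset (e.g. on `labels.min()` / `labels.max()`) before training starts is usually enough to locate the bad sample.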

Full console output (the same assertion is printed once per failing CUDA thread; the repeats are elided here):

D:\py\Anaconda3\envs\sdsd_torch\python.exe C:\Users\suxin.com\PycharmProjects\pythonProject\model.py
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:240: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
(the same assertion repeats for the remaining threads up to [31,0,0])
Traceback (most recent call last):
  File "C:\Users\suxin.com\PycharmProjects\pythonProject\model.py", line 114, in <module>
    train_model(model, criterion, optimizer, train_loader, num_epochs=10)
  File "C:\Users\suxin.com\PycharmProjects\pythonProject\model.py", line 102, in train_model
    loss.backward()
  File "D:\py\Anaconda3\envs\sdsd_torch\lib\site-packages\torch\_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "D:\py\Anaconda3\envs\sdsd_torch\lib\site-packages\torch\autograd\__init__.py", line 193, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "D:\py\Anaconda3\envs\sdsd_torch\lib\site-packages\torch\autograd\__init__.py", line 89, in _make_grads
    new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Process finished with exit code 1
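As the error text itself notes, CUDA errors are reported asynchronously, so the Python traceback (here pointing at `loss.backward()`) may not be where the bad kernel was actually launched. Setting `CUDA_LAUNCH_BLOCKING=1` before CUDA initializes forces synchronous kernel launches, so the exception surfaces at the real call site. A minimal sketch:

```python
import os

# Must be set before torch initializes CUDA (ideally before `import torch`)
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # noqa: E402

# ... build the model and run training as usual; the device-side assert
# will now be raised at the exact op that launched the failing kernel.
```

Another common trick is to run one batch on the CPU: the CPU implementation of `CrossEntropyLoss` raises a readable `IndexError` for an out-of-range target instead of a device-side assert.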