1. The Problem
Articles online about programming with VSCode + WSL + CUDA are scattered and riddled with problems. This post covers an issue I hit after everything else was already installed, when I tried to run code through the Code Runner extension in VSCode.
I wrote a test file and clicked the little run triangle in the top-right corner, and it failed with:
/bin/sh: 1: nvcc: not found
I then found that building from within VSCode didn't work either. Here is my tasks.json (the file that configures compilation):
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "nvcc",
            "args": [
                "-g", "${file}",
                "-o", "${fileDirname}/${fileBasenameNoExtension}.out",
                // include paths for header files
                "-I", "/usr/local/cuda/include",
                "-I", "/usr/local/cuda-12.6/samples/common/inc",
                // library search paths
                "-L", "/usr/local/cuda/lib64",
                "-L", "/usr/local/cuda-12.6/samples/common/lib",
                "-l", "cudart",
                "-l", "cublas",
                "-l", "cudnn",
                "-l", "curand",
                "-D_MWAITXINTRIN_H_INCLUDED"
            ]
        }
    ]
}
Running that build task failed too; the terminal printed the same error: nvcc not found.
2. Analysis
So nvcc can't be found, even though my environment is set up correctly: in a WSL terminal, nvcc prints its version information just fine.
My guess is that this is a VSCode issue: it can't see the nvcc inside WSL, though I'm not sure of the exact cause (if anyone knows, please explain in the comments). The workaround is simple, though: if the build can't find the nvcc compiler on its own, just hand it the full path at compile time. (Note: this fix assumes your CUDA environment itself is fine!)
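A plausible explanation (my assumption, not something I've verified): VSCode's shell tasks and Code Runner spawn /bin/sh, which does not read ~/.bashrc, so a PATH export placed there never reaches them. You can at least confirm the toolchain side from a WSL terminal:

# Inside an interactive WSL terminal the compiler resolves fine:
which nvcc        # should print something like /usr/local/cuda-12.6/bin/nvcc
nvcc --version    # should print the CUDA compiler version string
# If these work here but not in VSCode, the build shell simply
# isn't seeing the same PATH.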
3. The Solution
Change the compiler path in tasks.json:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            // give the full path to the compiler here; change it to your own nvcc path
            "command": "/usr/local/cuda-12.6/bin/nvcc",
            "args": [
                "-g", "${file}",
                "-o", "${fileDirname}/${fileBasenameNoExtension}.out",
                // include paths for header files
                "-I", "/usr/local/cuda/include",
                "-I", "/usr/local/cuda-12.6/samples/common/inc",
                // library search paths
                "-L", "/usr/local/cuda/lib64",
                "-L", "/usr/local/cuda-12.6/samples/common/lib",
                "-l", "cudart",
                "-l", "cublas",
                "-l", "cudnn",
                "-l", "curand",
                "-D_MWAITXINTRIN_H_INCLUDED"
            ]
        }
    ]
}
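For a source file named add.cu (a hypothetical name), the task above expands to roughly this command line:

/usr/local/cuda-12.6/bin/nvcc -g add.cu -o add.out \
    -I /usr/local/cuda/include -I /usr/local/cuda-12.6/samples/common/inc \
    -L /usr/local/cuda/lib64 -L /usr/local/cuda-12.6/samples/common/lib \
    -l cudart -l cublas -l cudnn -l curand -D_MWAITXINTRIN_H_INCLUDED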
Next, change Code Runner's configuration: open the extension's settings, find the Code-runner: Executor Map By File Extension setting, scroll down to the .cu entry, and replace nvcc with its absolute path. In settings.json form, the edited entry looks roughly like the snippet below.
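This is a sketch: the command is Code Runner's stock .cu executor line as far as I recall, with only nvcc swapped for an absolute path, and the cuda-12.6 location is an assumption, so substitute your own:

{
    "code-runner.executorMapByFileExtension": {
        // stock .cu command, with nvcc replaced by an absolute path (adjust to your install)
        ".cu": "cd $dir && /usr/local/cuda-12.6/bin/nvcc $fileName -o $fileNameWithoutExt && $dir$fileNameWithoutExt"
    }
}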
With that, both ways of running the code succeed.
Appendix: Test Code
#include <iostream>
#include <math.h>

// Kernel function to add the elements of two arrays
__global__ void add(int n, float *x, float *y)
{
    int index = threadIdx.x;
    int stride = blockDim.x;
    for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
}

int main(void)
{
    int N = 1 << 20;
    float *x, *y;

    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));

    // initialize x and y arrays on the host
    for (int i = 0; i < N; i++)
    {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    // Run kernel on 1M elements on the GPU
    add<<<1, 256>>>(N, x, y);

    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();

    // Check for errors (all values should be 3.0f)
    float maxError = 0.0f;
    for (int i = 0; i < N; i++)
        maxError = fmax(maxError, fabs(y[i] - 3.0f));
    std::cout << "Max error: " << maxError << std::endl;
    std::cout << "Hello, World6666!" << std::endl;

    // Free memory
    cudaFree(x);
    cudaFree(y);
    return 0;
}
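If the file is saved as add.cu (a hypothetical name), compiling and running it by hand should give a max error of 0:

/usr/local/cuda-12.6/bin/nvcc -g add.cu -o add.out
./add.out
# Expected output:
# Max error: 0
# Hello, World6666!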
The config file was adapted from a CSDN post; I can't find the original author any more, so I can't cite them properly. Sorry!