Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

I just want to run ggml-model-q4_0.bin on my Windows 7 machine.

Environment and Context

Docker Toolbox 1.13.1
docker client: 1.13.1, os/arch: windows 7/amd64
docker server: 19.03.12, os/arch: ubuntu 22.04/amd64
CPU: Intel Core i7-6700; supported instruction sets: MMX, SSE, SSE2, ......, AVX, AVX2, FMA3, TSX

Failure Information (for bugs)

The following error occurs:

ERROR: /app/.devops/tools.sh: line 40:     6 Illegal instruction        ./quantize $arg2
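An "Illegal instruction" (SIGILL) crash usually means the binary was built with a SIMD extension that the CPU running it does not expose. Docker Toolbox runs containers inside a VirtualBox VM, which can hide host features such as AVX/AVX2 from the guest even when the physical CPU supports them. As a rough sketch (assuming a Linux shell is available inside the container), the flags the guest actually sees can be listed like this:

```shell
# Show SIMD-related CPU feature flags visible inside the container.
# If a binary is compiled with an extension (e.g. AVX2) that is missing
# from this list, it will die with "Illegal instruction" when run here,
# regardless of what the physical host CPU supports.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' \
  | grep -E '^(sse|ssse3|avx|avx2|fma|f16c)' | sort -u
```

Comparing that output against the extensions listed in the host CPU's spec sheet shows whether the VM is masking anything.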
Steps to Reproduce

I already obtained ggml-model-f16.bin successfully after executing:

--convert "/models/7B/" 1

but when I executed:

--quantize "/models/7B/ggml-model-f16.bin" "/models/7B/ggml-model-q4_0.bin" 2

in the same environment, the error occurred. I checked my quantize binary:

quantize: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux) ...... for GNU/Linux 3.2.0, not stripped

But the architecture of the Docker image ghcr.io/ggerganov/llama.cpp:full is amd64:

docker image inspect ghcr.io/ggerganov/llama.cpp:full | grep Architecture
"Architecture": "amd64"

I would like to know: is the cause the difference between x86-64 and amd64? Must I recompile quantize for amd64, or is there another solution?
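For what it's worth, "x86-64" (the name `file` prints) and "amd64" (the name Docker image manifests and Debian-based tools use) are two labels for the same 64-bit architecture, so the two reports above agree rather than conflict. A quick sketch to see both spellings on a Linux box:

```shell
# "x86_64" / "x86-64" / "amd64" all name the same 64-bit x86 architecture.
# uname uses the kernel's spelling; Debian-derived tooling says "amd64".
uname -m                                  # e.g. x86_64 on this architecture
dpkg --print-architecture 2>/dev/null \
  || echo "dpkg not available"            # prints amd64 on Debian/Ubuntu
```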