Name and Version
Latest version
Operating systems
Windows
GGML backends
CPU
Hardware
CPU
Models
No response
Problem description & steps to reproduce
I locally trained a model combining HuggingFace's pretrained SigLiP2 and Qwen3 to build my own VLM. The config.json file shows "architecture": "VLM".
When I run convert_hf_to_gguf.py on it, the conversion fails with: "ERROR: hf-to-gguf: Model VLM is not supported".
How can I solve this? Thank you very much.
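For context on why this error appears: convert_hf_to_gguf.py dispatches on the architecture string it reads from config.json, and a custom name like "VLM" has no registered converter. Below is a minimal sketch of that registry pattern; the class and function names here are illustrative assumptions, not the converter's actual internals.

```python
# Sketch of an architecture registry, the pattern behind the
# "Model X is not supported" error (names are hypothetical).
MODEL_REGISTRY: dict[str, type] = {}

def register(arch: str):
    """Decorator that maps an HF architecture string to a converter class."""
    def wrap(cls: type) -> type:
        MODEL_REGISTRY[arch] = cls
        return cls
    return wrap

@register("Qwen3ForCausalLM")
class Qwen3Converter:
    """Placeholder converter for a supported architecture."""

def pick_converter(config: dict) -> type:
    """Look up the converter for config['architectures'][0]."""
    arch = config["architectures"][0]
    try:
        return MODEL_REGISTRY[arch]
    except KeyError:
        raise ValueError(f"Model {arch} is not supported") from None
```

Under this reading, the fix is either to convert the two sub-models separately under their upstream architecture names, or to add a converter class registered for the custom architecture string.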
First Bad Commit
No response
Relevant log output
Logs