Eval bug: Transformers does not recognize this architecture #21942

@Xiao-hb

Description

Name and Version

Latest version

Operating systems

Windows

GGML backends

CPU

Hardware

CPU

Models

No response

Problem description & steps to reproduce

Based on HuggingFace's pretrained SigLiP2 and Qwen3, I locally trained a combination of the two to build my own VLM. The model's config.json lists "architecture": "VLM".

Running convert_hf_to_gguf.py on this model fails with: "ERROR: hf-to-gguf: Model VLM is not supported"

How can I solve this? Thank you very much.
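For context, convert_hf_to_gguf.py decides which converter to use based on the architecture name declared in the model's config.json, so a custom name like "VLM" will not match any registered converter class. A minimal sketch of that kind of lookup (the supported-name set and the check_architecture helper below are illustrative, not the script's actual registry or API):

```python
import json

# Illustrative subset of architecture names a converter might register;
# the real mapping in convert_hf_to_gguf.py is built from its converter classes.
SUPPORTED_ARCHITECTURES = {"Qwen3ForCausalLM", "LlamaForCausalLM"}

def check_architecture(config_text: str) -> str:
    """Return the first supported architecture name from a config.json string,
    or raise an error similar to the converter's when none matches."""
    config = json.loads(config_text)
    # HF configs typically declare a list under the "architectures" key.
    archs = config.get("architectures", [])
    for arch in archs:
        if arch in SUPPORTED_ARCHITECTURES:
            return arch
    raise ValueError(f"Model {archs} is not supported")

# A custom architecture name, as in this report, would fail the lookup:
custom_config = json.dumps({"architectures": ["VLM"]})
```

This is why renaming the architecture alone is not enough: the converter also needs tensor-mapping logic for that architecture, so a custom combination of vision and language models generally requires either restructuring the checkpoint into a supported layout or adding converter support.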

First Bad Commit

No response

Relevant log output