
arg : clean up handling --mmproj with -hf #13082


Merged
merged 5 commits into ggml-org:master on Apr 24, 2025

Conversation

ngxson (Collaborator) commented Apr 23, 2025

The current code is a bit hacky, so the CI on #12898 is currently broken.

The idea is:

  • If -hf is specified, we ask the HF backend where the file is.
    The backend will respond with auto_detected.ggufFile and optionally auto_detected.mmprojFile
  • If auto_detected.mmprojFile is returned by the HF backend:
    • If the current mmproj file is not set, we use it
    • Otherwise, discard the value given by the backend and use either mmproj.path or mmproj.url (see the sketch below)

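A minimal sketch of that resolution order, for illustration only. The type and field names here (hf_auto_detected, common_file_ref, resolve_mmproj) are assumptions and may not match the actual code in common/arg.cpp; only the ggufFile/mmprojFile fields come from the description above.

```cpp
// Sketch of the resolution order described above (illustrative names only).
#include <string>

struct common_file_ref {
    std::string path; // set explicitly via --mmproj
    std::string url;  // set explicitly via --mmproj-url
    bool empty() const { return path.empty() && url.empty(); }
};

struct hf_auto_detected {
    std::string ggufFile;   // always present when -hf resolves
    std::string mmprojFile; // optional, only for multimodal repos
};

static void resolve_mmproj(const hf_auto_detected & detected, common_file_ref & mmproj) {
    if (detected.mmprojFile.empty()) {
        return; // HF backend did not suggest an mmproj file
    }
    if (mmproj.empty()) {
        // nothing was set explicitly -> take the backend suggestion
        mmproj.url = detected.mmprojFile;
    }
    // otherwise: keep the user-provided mmproj.path / mmproj.url and
    // discard the value suggested by the HF backend
}
```
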
ngxson requested a review from ggerganov on April 23, 2025, 22:03
ngxson (Collaborator, Author) commented Apr 24, 2025

One more case that needs to be handled: if the example does not use mmproj (like llama-cli), we should skip downloading the file. Fixing that now. A sketch of the guard follows.
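A possible shape for that check, assuming a hypothetical uses_mmproj flag declared per example; the real mechanism in common/arg.cpp may look different.

```cpp
// Sketch only: guard the mmproj download behind the example's capabilities.
#include <string>

static bool should_download_mmproj(bool uses_mmproj,
                                   const std::string & detected_mmproj,
                                   const std::string & user_mmproj) {
    if (!uses_mmproj) {
        return false; // example (like llama-cli) does not use mmproj -> skip download
    }
    if (detected_mmproj.empty()) {
        return false; // HF backend did not return an mmproj file
    }
    return user_mmproj.empty(); // only use the suggestion if nothing was set explicitly
}
```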

ngxson merged commit 80982e8 into ggml-org:master on Apr 24, 2025
47 checks passed
pockers21 pushed a commit to pockers21/llama.cpp that referenced this pull request Apr 28, 2025
* arg : clean up handling --mmproj with -hf

* rm change about no_mmproj

* Revert "rm change about no_mmproj"

This reverts commit 2cac8e0.

* handle no_mmproj explicitly

* skip download mmproj on examples not using it