Wikis search results · repo:ggml-org/llama.vscode language:CSS
33 results
Generate a commit message Required servers Chat server How to use it In the source control panel just click on the star button (near the commit button). This generates a commit message, based on the current ...
- Last updated on May 11
Rules What are rules Rules are additional user instructions that are added to the system prompt when an agent request is sent to the AI model. They are stored in a file, created by the user in a plain ...
- Last updated on Sep 18
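The Rules snippet above describes plain-text user instructions that get appended to the system prompt. A hypothetical rules file might look like the fragment below; the exact filename and location are defined on the Rules wiki page, not here:

```
Always answer in English.
Prefer descriptive variable names in generated code.
Do not add code comments unless explicitly asked.
```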
Setup llama.cpp server for Linux Download the release files for your OS from llama.cpp releases (or build from source). Add the bin folder to PATH so that it is globally available. The configurations ...
- Last updated on Aug 11
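The Linux steps above (download a release, put its bin folder on PATH) can be sketched as shell commands. The extraction directory here is an assumption; adjust it to wherever you unpacked the release:

```shell
# Assumed location where the downloaded llama.cpp release was extracted.
LLAMA_DIR="$HOME/llama.cpp"

# Make llama-server, llama-cli, etc. available in the current shell session.
export PATH="$LLAMA_DIR/bin:$PATH"

# Persist for future sessions (bash shown; use your shell's equivalent).
echo "export PATH=\"$LLAMA_DIR/bin:\$PATH\"" >> "$HOME/.bashrc"
```

After this, VS Code (and llama-vscode) can find the llama.cpp binaries without an absolute path.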
Setup llama.cpp servers for Mac Show llama-vscode menu (Ctrl+Shift+M) and select "Install/upgrade llama.cpp" (if not yet done). After that, add/select the models you want to use. The instructions ...
- Last updated on Aug 11
Setup llama.cpp servers for Windows Show llama-vscode menu (Ctrl+Shift+M) and select "Install/upgrade llama.cpp" (if not yet done). After that, add/select the models you want to use. The instructions ...
- Last updated on Aug 11
Model selection What is model selection At any given time, only one model can be selected (having no model selected is also possible). If a model is selected, llama-vscode assumes this model is available at the ...
- Last updated on Aug 14
Use as local AI runner (as LM Studio, Ollama, etc.) Overview llama-vscode can be used as a local AI runner (like LM Studio, Ollama, etc.). Models are searched on Hugging Face. After a model is selected, ...
- Last updated on Aug 11
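As a sketch of what serving a Hugging Face model locally with llama.cpp looks like: the repo name and port below are hypothetical examples, and `-hf` is llama.cpp's flag for pulling a GGUF model directly from Hugging Face on first use:

```shell
# Hypothetical model repo; llama-vscode would pick one via its model search.
MODEL_REPO="ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF"

# Build the llama-server invocation: -hf downloads the GGUF from Hugging Face
# if not cached, then serves it on the given port for local clients.
CMD="llama-server -hf $MODEL_REPO --port 8012"
echo "$CMD"
```

Any OpenAI-compatible client on the machine can then talk to the server on that port.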
How to use llama-vscode Overview llama-vscode is an extension for code completion, chat with AI, and agentic coding, focused on local model usage with llama.cpp. How to use it Install llama.cpp Show llama-vscode ...
- Last updated on Aug 15
MCP Support Required servers Tools server Overview llama-vscode can use the tools from the MCP servers which are installed in VS Code (part of VS Code's Extensions view). How to use it Install ...
- Last updated on Aug 15
Delete models Overview llama-vscode automatically downloads models (LLMs) from Hugging Face (if not already downloaded) when a local model (or env) is selected. The downloaded models are GGUF files. Once downloaded, ...
- Last updated on Aug 16