Pull requests: ggml-org/llama.vim

llama.vim : reduce max predict time to 0.5s
#12 by ggerganov, merged Nov 17, 2024

llama.vim : better request throttling mechanism
#19 by ggerganov, merged Jan 4, 2025

llama.vim : disable temp sampler and set n_predict = 0
#25 by ggerganov, merged Jan 22, 2025

llama.vim : speculative fim [help wanted]
#31 by VJHack, draft, closed Mar 10, 2025

Explicit llama.cpp server launch arguments
#33 by makuche, closed Jan 25, 2025

readme : add full name of FIM
#45 by ShikChen, merged Feb 13, 2025

core : improve suggestions starting with empty lines
#51 by ggerganov, merged Mar 10, 2025

core : decouple rendering from server requests
#52 by ggerganov, merged Mar 10, 2025

core : add speculative fim
#53 by ggerganov, merged Mar 10, 2025

Add ollama backend
#59 by akashjss, closed Mar 30, 2025

readme : add windows install command
#68 by ggerganov, merged May 22, 2025

core : do not evict chunks similar to current context
#79 by ggerganov, merged Aug 20, 2025

Update llama.vim
#84 by vitaly-rudenko, closed Oct 9, 2025