A simple HTML file that provides an interface to local llama.cpp, Gemini, or OpenAI LLMs.

Usage: Enter an API key to use OpenAI or Gemini. To run local models, you need llama.cpp from https://github.com/ggml-org/llama.cpp/releases — extract the archive and run llama-server:
llama-server -m model.gguf
where model.gguf is the path to the GGUF model file (for example, one downloaded from Hugging Face) you want to run.
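llama-server also accepts tuning options beyond `-m`. A sketch of a fuller invocation, with illustrative values (the exact defaults may vary by llama.cpp version, and `-ngl` only has an effect in GPU-enabled builds):

```shell
# Serve a local GGUF model over HTTP with some common options:
#   --port  port the built-in HTTP server listens on (default 8080)
#   -c      context window size in tokens
#   -ngl    number of model layers to offload to the GPU
llama-server -m model.gguf --port 8080 -c 4096 -ngl 99
```

Once the server is running, the interface can talk to it at http://localhost:8080, since llama-server exposes an OpenAI-compatible API.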