Recently, while chatting with colleagues, we wondered whether we could deploy an LLM on our own intranet, which led to this experiment with Ollama + Open WebUI.
On Linux, installation takes a single command:
curl -fsSL https://ollama.com/install.sh | sh
ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
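As a quick check that the install worked, a model can be pulled from the registry and run interactively in the terminal. The model name below is only an example; any model from the Ollama library works the same way.

# download a model from the registry (llama3 is just an example)
ollama pull llama3
# start an interactive chat session with the model
ollama run llama3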
Ollama is also available as an official Docker image [3]. With the NVIDIA Container Toolkit set up, the GPU-enabled container can be started with:
docker run -d --gpus=all -p 11434:11434 --name ollama ollama/ollama
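Once the container is up, models are run through docker exec against it, for example (again using llama3 as a placeholder model):

docker exec -it ollama ollama run llama3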
Besides Llama 3, Phi 3, Mistral, and Gemma 2, the Ollama library [2] hosts many other models that can be pulled and run the same way.
For the frontend, Open WebUI runs as its own container. We used the CUDA build and pointed OLLAMA_BASE_URL at the Ollama endpoint (https://example.com here stands in for your actual Ollama address):
docker run -d -p 3001:8080 --gpus all --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=https://example.com --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
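To confirm the two services are wired together, the Ollama REST API can be queried directly and the WebUI opened in a browser. This assumes Ollama is reachable on localhost:11434; the model name and prompt below are illustrative.

# ask the Ollama API for a completion (assumes the llama3 model has been pulled)
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'
# Open WebUI itself is then available at http://localhost:3001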
[1] https://ollama.com/
[2] https://ollama.com/library
[3] https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image