Self-host Open WebUI on your own VPS
Open WebUI is the most popular self-hosted ChatGPT-style interface (124K+ GitHub stars). Plug in OpenAI, Anthropic, OpenRouter, Ollama, or any OpenAI-compatible endpoint and get a familiar chat UI with multi-user support, RAG over your documents, and full conversation history. Hosting it on your own VPS means your chat history, uploads, and embeddings live on hardware you control: prompts go only to the model provider you choose, or stay fully local with Ollama. Servury is built for it: anonymous signup, crypto payments, full root, no logs.
Open WebUI runs comfortably on a 4 GB plan. Starting at $15.59/mo.
Why self-host Open WebUI
Your conversations, uploads, and vector indexes stay on a server you control rather than a third-party service. You bring your own API keys and pay per usage instead of a flat subscription, you can switch models per conversation, and you can run fully local models via Ollama for anything sensitive. On Servury, anonymous signup and a no-logs policy keep the whole stack private end to end.
Quick start: Open WebUI on Servury via Docker
Tested on Ubuntu 24.04. Pick a 4 GB+ plan, deploy, SSH in.
# 1. Install Docker
curl -fsSL https://get.docker.com | sh
# 2. Run Open WebUI with persistent storage
docker volume create open-webui
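# The named volume keeps chats, uploads, and the vector index across upgrades;
# WEBUI_AUTH=true requires login (the first account created becomes the admin);
# host port 3000 maps to the app's internal port 8080.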
docker run -d --name open-webui --restart unless-stopped \
-p 3000:8080 \
-v open-webui:/app/backend/data \
-e WEBUI_AUTH=true \
ghcr.io/open-webui/open-webui:main
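# (Optional) confirm the container came up cleanly before opening the UI
docker ps --filter name=open-webui
docker logs --tail 20 open-webui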
# 3. Open http://YOUR_SERVER_IP:3000 and create your admin account
# 4. Add your LLM provider in Settings (OpenAI, Anthropic, OpenRouter, Ollama URL...)
# 5. Recommended: put Caddy or nginx in front for HTTPS
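For the HTTPS step, one minimal sketch runs Caddy in Docker on the host network. Here chat.example.com is a placeholder for a domain whose A record points at your VPS; Caddy obtains and renews the TLS certificate automatically:
cat > Caddyfile <<'EOF'
chat.example.com {
    reverse_proxy localhost:3000
}
EOF
docker run -d --name caddy --restart unless-stopped \
  --network host \
  -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
  -v caddy_data:/data \
  caddy:latest
With host networking, Caddy reaches Open WebUI on localhost:3000 and binds ports 80/443 directly, so make sure your firewall allows both.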
Recommended specs
VPS-100: just you, a cloud API (OpenAI/Anthropic) as the backend, light RAG.
VPS-150: a small team, cloud LLM backends, a growing RAG corpus.
VDS or VPS-250: Open WebUI plus Ollama on the same box for fully offline small models.
Frequently asked questions
What is Open WebUI?
Open WebUI is the most-starred self-hosted ChatGPT-style interface (124K+ stars on GitHub). It gives you a familiar chat UI with multi-user accounts, conversation history, file uploads, RAG, and the ability to plug in any LLM provider (OpenAI, Anthropic, Ollama, OpenRouter, Groq, etc.).
How is this different from ChatGPT Plus?
Three things: (1) you bring your own API keys so you only pay per usage instead of $20/month flat, (2) prompts and uploads never leave your VPS, (3) you can switch between models per conversation (Claude for writing, GPT-4 for code, a local model for sensitive stuff).
How much VPS do I need?
2 GB of RAM is enough if you use cloud LLM APIs (OpenAI/Anthropic). 4 GB is more comfortable and recommended for any real RAG workload. If you want to run Ollama on the same VPS for fully local inference, plan on 16 GB or more; see the sketch below.
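A minimal sketch of that same-box setup, assuming the quick-start container from above is already running (llama3.2 stands in for whatever small model you pull; the exact admin menu path may vary by version):
docker network create llm
docker network connect llm open-webui
docker run -d --name ollama --restart unless-stopped \
  --network llm \
  -v ollama:/root/.ollama \
  ollama/ollama
docker exec ollama ollama pull llama3.2
# In Open WebUI, set the Ollama API URL to http://ollama:11434
# (Admin Panel > Settings > Connections).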
Can I use my own API keys?
Yes. Open WebUI supports OpenAI, Anthropic, Azure OpenAI, OpenRouter, Mistral, Groq, and any OpenAI-compatible endpoint. Keys are stored in the local DB on your VPS, not sent anywhere.
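You can also bake a key in at deploy time instead of entering it in the UI. A sketch using Open WebUI's OPENAI_API_BASE_URL and OPENAI_API_KEY variables, with OpenRouter as one example of a compatible endpoint (sk-or-... is a placeholder for your real key):
docker run -d --name open-webui --restart unless-stopped \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OPENAI_API_BASE_URL=https://openrouter.ai/api/v1 \
  -e OPENAI_API_KEY=sk-or-... \
  ghcr.io/open-webui/open-webui:main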
Does it support file uploads and document chat?
Yes. Drop in PDFs, Word docs, text files, or web URLs. Open WebUI handles chunking, embedding, and retrieval automatically. The vector index lives on your VPS.
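Since everything, vector index included, lives in the open-webui volume, backups are a one-liner; a sketch that writes a dated tarball to the current directory:
docker run --rm -v open-webui:/data -v "$PWD":/backup alpine \
  tar czf /backup/open-webui-$(date +%F).tar.gz -C /data .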
Can multiple people use the same instance?
Yes. Open WebUI has built-in multi-user support with admin/user roles, per-user conversation history, and group permissions. Good for small teams.
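One hardening step for team instances: once everyone has registered, recreate the container with Open WebUI's ENABLE_SIGNUP variable set to false so strangers who find the URL can't create accounts. The named volume preserves all existing users and chats:
docker rm -f open-webui
docker run -d --name open-webui --restart unless-stopped \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e WEBUI_AUTH=true \
  -e ENABLE_SIGNUP=false \
  ghcr.io/open-webui/open-webui:main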
Will my data stay private?
Yes, if you self-host. All conversations, uploads, and embeddings are stored locally on your VPS (prompts you send to a cloud model still go to that provider, of course). Servury runs zero application-level logging on customer servers, so even we can't see what you're doing.
Where are servers located?
Montreal (owned hardware), New York, London, Paris, Frankfurt, Netherlands, and Singapore. Pick whichever is closest to where you'll use the chat interface.