# Self-Hosting Guide

Docker, Kubernetes, environment variables
## Prerequisites
- Node.js 18+ and npm
- An OpenAI-compatible LLM endpoint (local or cloud)
- Git
- Optional: Docker and Kubernetes for production deployments
## Quick Start (Development)

The fastest way to get OSF running locally:
**Terminal**

```bash
# Clone the repositories
git clone https://github.com/TobiasLante/openshopfloor.git
git clone https://github.com/TobiasLante/openshopfloor-gateway.git

# Frontend
cd openshopfloor
npm install
npm run dev
# → http://localhost:3000

# Gateway (in another terminal)
cd openshopfloor-gateway
npm install
cp .env.example .env
# Edit .env with your settings
npm run dev
# → http://localhost:3001
```

> **Warning:** You need to configure at least the LLM endpoint and JWT secret in the `.env` file before the gateway will work correctly.

## Environment Variables
Key environment variables for the gateway (`.env`):
| Variable | Description | Example |
|---|---|---|
| JWT_SECRET | Secret for JWT signing | a-long-random-string |
| LLM_URL | OpenAI-compatible chat endpoint | http://localhost:5001/v1 |
| LLM_SPECIALIST_URL | Specialist model endpoint (optional, falls back to LLM_URL) | http://localhost:5002/v1 |
| MCP_ERP_URL | Factory Simulator MCP URL (ERP tools) | http://factory-sim:8020 |
| MCP_UNS_URL | UNS MQTT MCP server URL | http://mqtt-uns:8025 |
| MCP_KG_URL | Knowledge Graph MCP server URL | http://kg-server:8035 |
| PORT | Gateway port | 3001 |
| FRONTEND_URL | Frontend URL (for CORS) | http://localhost:3000 |
| GITHUB_CLIENT_ID | GitHub OAuth app client ID (optional) | — |
| GITHUB_CLIENT_SECRET | GitHub OAuth app secret (optional) | — |
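Putting the table together, a minimal gateway `.env` could look like the sketch below. Every value is illustrative (taken from the example column above); replace `JWT_SECRET` with your own long random string, e.g. generated with `openssl rand -hex 32`:

```shell
# Illustrative gateway .env - adjust every value to your environment
JWT_SECRET=replace-with-a-long-random-string  # e.g. openssl rand -hex 32
LLM_URL=http://localhost:5001/v1
MCP_ERP_URL=http://factory-sim:8020
MCP_UNS_URL=http://mqtt-uns:8025
MCP_KG_URL=http://kg-server:8035
PORT=3001
FRONTEND_URL=http://localhost:3000
```

`LLM_SPECIALIST_URL` and the GitHub OAuth variables are optional and can be omitted entirely.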
## Docker

**docker-compose.yml**

```yaml
version: "3.8"
services:
  frontend:
    build: ./openshopfloor
    ports:
      - "3000:3000"
  gateway:
    build: ./openshopfloor-gateway
    ports:
      - "3001:3001"
    env_file: .env
    depends_on:
      - factory-sim
      - mqtt-uns
  factory-sim:
    image: ghcr.io/zeroguess/factory-sim:latest
    ports:
      - "8020:8020"
  mqtt-uns:
    image: ghcr.io/zeroguess/mqtt-uns:latest
    ports:
      - "8025:8025"
  kg-server:
    image: ghcr.io/zeroguess/kg-server:latest
    ports:
      - "8035:8035"
```

> **Info:** You'll also need an LLM server. Use any OpenAI-compatible endpoint (vLLM, text-generation-inference, Ollama, or a cloud API).
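For local development, one option is running Ollama alongside the stack. The snippet below is a sketch, not part of the official compose file — the service name, image tag, and volume are assumptions:

```yaml
# Sketch: extra service to merge under the existing top-level "services:" key
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

volumes:
  ollama-data:
```

With this in place, set `LLM_URL=http://ollama:11434/v1` in the gateway's `.env` — Ollama exposes an OpenAI-compatible API under `/v1`.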
## Kubernetes

For production deployments, OSF runs on Kubernetes. The hosted instance uses the following setup:

- **Gateway** — Deployment in namespace `osf`, with liveness probes (failureThreshold=20, periodSeconds=30)
- **MCP Servers** — Deployments in namespace `demo`
- **Memory** — the gateway needs at least a 2Gi memory limit (Node-RED + flow engine + V8 sandboxes)
- **Container Registry** — use any registry accessible from your cluster
> **Warning:** The gateway embeds Node-RED, which can consume significant memory. Set memory limits to at least 2Gi to avoid OOM kills during flow execution.
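A minimal sketch of what the gateway Deployment's resource and probe settings might look like — only the namespace, the 2Gi limit, and the probe thresholds come from the setup described above; the image, port, and probe path are assumptions to adapt to your build:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  namespace: osf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: your-registry/openshopfloor-gateway:latest  # assumption
          ports:
            - containerPort: 3001
          resources:
            limits:
              memory: "2Gi"  # Node-RED + flow engine + V8 sandboxes
          livenessProbe:
            httpGet:
              path: /health  # assumed health endpoint
              port: 3001
            failureThreshold: 20
            periodSeconds: 30
```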
## LLM Setup

OSF needs an OpenAI-compatible chat completion endpoint. Options:
- vLLM — Best performance for local GPU deployment
- Ollama — Easiest setup for local development
- text-generation-inference — HuggingFace's inference server
- OpenAI API — Use any cloud provider with compatible API
Recommended models: `qwen2.5-14b` or larger for good tool-calling performance. Smaller models may struggle with complex MCP tool selection.
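A quick way to sanity-check your endpoint before pointing the gateway at it is a hand-rolled chat completion request. This is a sketch — the model name is an assumption, and the URL is the `LLM_URL` example from the table above:

```shell
# Build a minimal OpenAI-style chat completion request body
BODY='{"model":"qwen2.5-14b","messages":[{"role":"user","content":"ping"}]}'
echo "$BODY"

# Send it to your endpoint once the server is up (uncomment to run):
# curl -s http://localhost:5001/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

A working server should return a JSON response with a `choices` array; an error here means the gateway's LLM calls will fail too.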