
Self-Hosting Guide

Docker, Kubernetes, environment variables

Prerequisites

  • Node.js 18+ and npm
  • An OpenAI-compatible LLM endpoint (local or cloud)
  • Git
  • Optional: Docker and Kubernetes for production deployments

Quick Start (Development)

The fastest way to get OSF running locally:

# Clone the repositories
git clone https://github.com/TobiasLante/openshopfloor.git
git clone https://github.com/TobiasLante/openshopfloor-gateway.git

# Frontend
cd openshopfloor
npm install
npm run dev
# → http://localhost:3000

# Gateway (in another terminal)
cd openshopfloor-gateway
npm install
cp .env.example .env
# Edit .env with your settings
npm run dev
# → http://localhost:3001
Warning
You must configure at least LLM_URL and JWT_SECRET in the .env file before the gateway will work correctly.
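As a starting point, a minimal .env for local development might look like this (all values are placeholders; the Environment Variables section below lists the full set):

```shell
# Minimal .env sketch for local development. Values are placeholders.
JWT_SECRET=replace-with-a-long-random-string
LLM_URL=http://localhost:5001/v1
PORT=3001
FRONTEND_URL=http://localhost:3000
```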

Environment Variables

Key environment variables for the gateway (.env):

Variable               Description                                                   Example
JWT_SECRET             Secret for JWT signing                                        a-long-random-string
LLM_URL                OpenAI-compatible chat endpoint                               http://localhost:5001/v1
LLM_SPECIALIST_URL     Specialist model endpoint (optional; falls back to LLM_URL)   http://localhost:5002/v1
MCP_ERP_URL            Factory Simulator MCP URL (ERP tools)                         http://factory-sim:8020
MCP_UNS_URL            UNS MQTT MCP server URL                                       http://mqtt-uns:8025
MCP_KG_URL             Knowledge Graph MCP server URL                                http://kg-server:8035
PORT                   Gateway port                                                  3001
FRONTEND_URL           Frontend URL (for CORS)                                       http://localhost:3000
GITHUB_CLIENT_ID       GitHub OAuth app client ID (optional)
GITHUB_CLIENT_SECRET   GitHub OAuth app secret (optional)
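JWT_SECRET should be a long, unpredictable string. One common way to generate one, assuming openssl is installed:

```shell
# Generate a 64-character random hex string suitable for JWT_SECRET
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=$JWT_SECRET"
```

Paste the printed line into your .env file.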

Docker

docker-compose.yml
version: "3.8"
services:
  frontend:
    build: ./openshopfloor
    ports:
      - "3000:3000"

  gateway:
    build: ./openshopfloor-gateway
    ports:
      - "3001:3001"
    env_file: .env
    depends_on:
      - factory-sim
      - mqtt-uns
      - kg-server

  factory-sim:
    image: ghcr.io/zeroguess/factory-sim:latest
    ports:
      - "8020:8020"

  mqtt-uns:
    image: ghcr.io/zeroguess/mqtt-uns:latest
    ports:
      - "8025:8025"

  kg-server:
    image: ghcr.io/zeroguess/kg-server:latest
    ports:
      - "8035:8035"
Info
You'll also need an LLM server. Use any OpenAI-compatible endpoint (vLLM, text-generation-inference, Ollama, or a cloud API).
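The compose file above has no LLM service. One option is to run Ollama alongside the stack, since it exposes an OpenAI-compatible API under /v1 on port 11434; the service and volume names in this sketch are made up:

```yaml
  # Hypothetical addition to the compose file above: Ollama as the LLM server
  llm:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models

volumes:
  ollama-data:
```

From inside the compose network, the gateway would then use LLM_URL=http://llm:11434/v1.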

Kubernetes

OSF can run on Kubernetes for production deployments. The hosted instance uses the following setup:

  • Gateway — Deployment in namespace osf, with liveness probes (failureThreshold=20, periodSeconds=30)
  • MCP Servers — Deployments in namespace demo
  • Memory — Gateway needs at least 2Gi memory limit (Node-RED + flow engine + V8 sandboxes)
  • Container Registry — Use any registry accessible from your cluster
Warning
The gateway embeds Node-RED, which can consume significant memory. Set memory limits to at least 2Gi to avoid OOM kills during flow execution.
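The memory and probe settings above can be sketched as a Deployment manifest. This is a sketch, not the hosted instance's actual manifest: the image, labels, Secret name, and /health probe path are assumptions; the namespace, probe thresholds, and 2Gi limit come from the notes above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  namespace: osf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: registry.example.com/openshopfloor-gateway:latest  # placeholder
          ports:
            - containerPort: 3001
          envFrom:
            - secretRef:
                name: gateway-env        # placeholder Secret holding the .env values
          resources:
            requests:
              memory: "1Gi"              # assumed request
            limits:
              memory: "2Gi"              # at least 2Gi (Node-RED + V8 sandboxes)
          livenessProbe:
            httpGet:
              path: /health              # assumed health endpoint
              port: 3001
            failureThreshold: 20
            periodSeconds: 30
```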

LLM Setup

OSF needs an OpenAI-compatible chat completion endpoint. Options:

  • vLLM — Best performance for local GPU deployment
  • Ollama — Easiest setup for local development
  • text-generation-inference — Hugging Face's inference server
  • OpenAI API — Use any cloud provider with compatible API

Recommended models: qwen2.5-14b or larger for good tool-calling performance. Smaller models may struggle with complex MCP tool selection.
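Before pointing OSF at an endpoint, it can be worth a quick smoke test. This sketch sends a single chat completion; the URL and model name are examples, so substitute your own values:

```shell
# Send a one-off request to an OpenAI-compatible chat endpoint.
# LLM_URL and the model name are examples; adjust to your deployment.
LLM_URL="${LLM_URL:-http://localhost:5001/v1}"
PAYLOAD='{"model":"qwen2.5-14b","messages":[{"role":"user","content":"Say hello"}]}'

curl -s "$LLM_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "endpoint not reachable at $LLM_URL"
```

A JSON response with a choices array indicates the endpoint is compatible.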
