Node-RED Visual Flows
Build multi-step AI workflows by connecting agents, MCP tools, LLM prompts, and human approvals in a visual drag-and-drop editor — plus all native Node-RED nodes.
What are Flows?
Flows let you chain multiple AI operations together in a visual graph editor powered by Node-RED. Each node in your flow performs one action — calling an MCP tool, running TypeScript code, prompting an LLM, making a decision, or waiting for human input. Data flows from one node to the next automatically.
OpenShopFloor uses a hybrid execution model: custom OSF nodes run through our optimized engine with SSE streaming, while all native Node-RED nodes (switch, change, function, http-request, etc.) are also fully supported. You get the best of both worlds.
Getting Started
- Open the Editor — Go to Flows and click "Open Editor". The Node-RED editor opens in a full-screen view.
- Add Nodes — Drag nodes from the left palette onto the canvas. Look for the "OpenShopFloor" category for custom OSF nodes, or use any native Node-RED node.
- Connect Nodes — Draw wires between node outputs and inputs to define the execution order. Nodes can have multiple inputs and outputs.
- Configure — Double-click a node to set its parameters (which tool to call, template text, LLM settings, etc.).
- Save — Click the Save button in the top bar. Give your flow a name and description.
- Run — Go back to the Flows listing and click Run on your saved flow. Results stream in real time.
OSF Custom Nodes
These are the custom OpenShopFloor nodes, purpose-built for manufacturing AI workflows.
osf-ts — Run custom TypeScript code in a secure V8 sandbox. Full access to MCP tools, the LLM, and storage via the SDK. Supports multi-output: configure 1-5 output ports and return an array to route data to different paths.
Config: Write TypeScript code. Set the number of outputs (1-5). Return a single value or an array for multi-output.
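As a sketch of the multi-output contract (port semantics are inferred from the description above, and SDK access is omitted), an osf-ts script with three outputs might route machines by OEE:

```typescript
// Hypothetical osf-ts node body configured with 3 output ports.
// Returning an array routes element i to output port i.
// Real scripts also get an SDK object for MCP/LLM/storage, not shown here.

interface Machine { id: string; oee: number }

function routeByOee(machines: Machine[]): Machine[][] {
  const healthy  = machines.filter((m) => m.oee >= 0.85);
  const watch    = machines.filter((m) => m.oee >= 0.6 && m.oee < 0.85);
  const critical = machines.filter((m) => m.oee < 0.6);
  // Port 0: healthy, port 1: watch, port 2: critical
  return [healthy, watch, critical];
}

const [ok, watch, crit] = routeByOee([
  { id: "M1", oee: 0.91 },
  { id: "M2", oee: 0.72 },
  { id: "M3", oee: 0.41 },
]);
```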
osf-context — Aggregates data from multiple upstream nodes into a single JSON object. Each input is keyed by the source node's label. Connect multiple nodes into one context node to collect data before sending it to an LLM.
Config: Optionally override the key names for each input source.
osf-prompt-tpl — Template engine for building LLM prompts. Use ${context} and ${input} placeholders to inject data from upstream nodes. Great for crafting structured prompts with dynamic data.
Config: Write your template text with ${context} and ${input} variables.
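A minimal sketch of how ${context} and ${input} substitution could behave (the real engine's escaping and error handling may differ):

```typescript
// Toy renderer for osf-prompt-tpl-style placeholders.
// Supports only ${context} and ${input}; this is illustrative, not the
// actual template engine.
function renderTemplate(tpl: string, vars: { context: string; input: string }): string {
  return tpl
    .split("${context}").join(vars.context)
    .split("${input}").join(vars.input);
}

const prompt = renderTemplate(
  "Given this machine data:\n${context}\n\nAnswer the question: ${input}",
  { context: '{"oee": 0.72}', input: "Why did OEE drop?" }
);
```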
osf-llm — Send messages to an LLM with per-node configuration. Accepts two inputs: context (from osf-context, sent as the system message) and prompt (from osf-prompt-tpl, sent as the user message). Supports JSON mode for structured output.
Config: Set LLM URL, model, temperature, and toggle JSON mode. Connect osf-context and osf-prompt-tpl as inputs.
Make HTTP requests to external APIs. Supports GET, POST, PUT, DELETE with custom headers, authentication, and JSON mode. URL supports template variables.
Config: Set method, URL, headers, auth token, timeout, and JSON mode.
Call another saved flow as a sub-routine. The sub-flow receives the current node's input and returns its final output. Includes recursion protection (max depth 5).
Config: Enter the Flow ID to call and set max recursion depth.
osf-output-parser — Validate and parse JSON output against a schema. If validation fails, it automatically retries by asking the LLM to fix the output. Ensures structured, reliable data from LLM responses.
Config: Define a JSON schema. Set max retry attempts (1-5).
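The validate-then-retry loop can be sketched as follows (the schema shape, field names, and the `askLlmToFix` callback are all illustrative stand-ins, not the node's real API):

```typescript
// Sketch of schema validation with an LLM retry loop, in the spirit of
// osf-output-parser. A real implementation would use full JSON Schema.
type Schema = Record<string, "string" | "number" | "boolean">;

function validate(raw: string, schema: Schema): { ok: boolean; value?: any; error?: string } {
  let value: any;
  try { value = JSON.parse(raw); } catch { return { ok: false, error: "invalid JSON" }; }
  for (const [key, type] of Object.entries(schema)) {
    if (typeof value[key] !== type) return { ok: false, error: `field ${key} is not a ${type}` };
  }
  return { ok: true, value };
}

function parseWithRetry(raw: string, schema: Schema, maxRetries: number,
                        askLlmToFix: (bad: string, err: string) => string): any {
  let attempt = raw;
  for (let i = 0; i <= maxRetries; i++) {
    const res = validate(attempt, schema);
    if (res.ok) return res.value;
    if (i < maxRetries) attempt = askLlmToFix(attempt, res.error!);
  }
  throw new Error("output did not match schema after retries");
}

// Simulated fixer: the real node would re-prompt the LLM with the error.
const fixed = parseWithRetry(
  '{"severity": "high"}',
  { severity: "string", count: "number" },
  1,
  () => '{"severity": "high", "count": 3}'
);
```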
osf-decision — Conditional branching. Route the flow down different paths based on the previous node's output using configurable conditions.
Config: Define conditions for each output port (e.g., 'has-errors', 'always', 'never').
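A toy evaluator for conditions like these (only the three names from the example above are handled; the real node's condition set and matching rules may differ):

```typescript
// Sketch: evaluate an osf-decision-style condition against the previous
// node's output. Condition names follow the examples in this guide.
function evaluateCondition(cond: string, input: any): boolean {
  switch (cond) {
    case "always": return true;
    case "never":  return false;
    case "has-errors":
      return Array.isArray(input?.errors) && input.errors.length > 0;
    default:
      throw new Error(`unknown condition: ${cond}`);
  }
}

// One boolean per output port: ports whose condition fires receive the data.
const fired = ["has-errors", "always", "never"]
  .map((c) => evaluateCondition(c, { errors: ["tolerance exceeded"] }));
```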
Run an existing AI agent (built-in or community). The agent performs its full analysis with MCP tool calls and returns the result.
Config: Select the agent by name from the dropdown.
Send a prompt to the LLM. Supports template variables using {{input}} to inject data from the previous node.
Config: Write your prompt template. Use {{input}} to reference the output of the previous node.
human-input — Pause the flow and wait for human approval or input. The flow resumes when the user responds.
Config: Set the prompt text and optional choices.
MCP Tool Nodes
Call a tool from the ERP MCP server. Access production orders, customer data, delivery schedules, and more.
Config: Select the tool and configure its parameters.
Call a tool from the Manufacturing MCP server. Access machine data, OEE metrics, production status.
Config: Select the tool and configure its parameters.
Call a tool from the Quality Management (QMS) MCP server. Access defect reports, quality metrics, audit data.
Config: Select the tool and configure its parameters.
Call a tool from the Tool Management (TMS) MCP server. Access tool inventory, usage history, maintenance schedules.
Config: Select the tool and configure its parameters.
Call a tool from the OEE MCP server. Access machine OEE, production history, scrap rates, energy data.
Config: Select the tool and configure its parameters.
Call a tool from the UNS (MQTT) MCP server. Live machine data, topic search, alerts, cross-machine comparisons.
Config: Select the tool and configure its parameters.
Call a tool from the Knowledge Graph MCP server. Impact analysis, paths, semantic search, Cypher queries, delivery snapshots.
Config: Select the tool and configure its parameters.
Native Node-RED Nodes
All standard Node-RED nodes are available in the editor and fully supported by the execution engine. Here are the most commonly used ones for AI workflows:
Route messages based on property values. Supports equality, comparison, range, regex, and more. Each rule maps to an output port.
Config: Set the property to evaluate and define rules for each output.
Set, change, delete, or move properties on the message object. Useful for transforming data between nodes.
Config: Add rules: set a value, change (find/replace), delete, or move a property.
Mustache-style template rendering. Use {{payload.field}} syntax to build strings from message data.
Config: Write a Mustache template. Access message properties with double braces.
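The double-brace lookup can be sketched with a hand-rolled path resolver (the actual node uses the full Mustache engine, which also supports sections and escaping):

```typescript
// Minimal Mustache-like interpolation for {{payload.field}} paths.
// Illustrative only; not the real template node implementation.
function renderMustache(tpl: string, msg: Record<string, any>): string {
  return tpl.replace(/\{\{([\w.]+)\}\}/g, (_match, path: string) =>
    String(path.split(".").reduce((obj: any, key: string) => obj?.[key], msg))
  );
}

const line = renderMustache(
  "Machine {{payload.id}} OEE is {{payload.oee}}",
  { payload: { id: "M7", oee: 0.72 } }
);
```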
Run custom JavaScript code. Receives a msg object and can modify or create new messages. Supports multiple outputs.
Config: Write JavaScript. Return msg to pass it on, or return an array for multiple outputs.
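A minimal function-node body following standard Node-RED conventions (here wrapped in a named function so it runs standalone; in the editor the body receives `msg` implicitly):

```typescript
// Sketch of a Node-RED function node configured with 2 outputs.
// Returning [msgA, msgB] sends msgA to output 1 and msgB to output 2;
// null suppresses that output.
type Msg = { payload: any };

function functionNodeBody(msg: Msg): (Msg | null)[] {
  // Route messages carrying an alert to output 1, everything else to output 2.
  if (msg.payload && msg.payload.alert) {
    return [msg, null];
  }
  return [null, msg];
}

const routed = functionNodeBody({ payload: { alert: true, machine: "M7" } });
```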
Make HTTP requests to external services. Supports all methods, custom headers, authentication, and response parsing.
Config: Set URL, method, headers, and payload. Response is available in msg.payload.
Split arrays or objects into individual messages, or join multiple messages back together. Useful for parallel processing.
Config: Split: choose array, string, or object mode. Join: set count or timeout.
Delay message delivery by a configurable amount of time (max 60 seconds in the OSF engine).
Config: Set delay duration in seconds.
Log message data to the Node-RED debug sidebar and the OSF run output. Passes the message through unchanged.
Config: Choose what to display: full message or specific property.
Multi-Output Nodes
Some nodes support multiple output ports, letting you route data to different downstream paths:
- osf-ts — Configure 1-5 outputs. Return an array where each element goes to the corresponding port.
- osf-decision — Each condition maps to a separate output port.
- switch — Each rule creates an output port for conditional routing.
- function — Return an array of messages for multiple outputs.
- split — Splits a single message into multiple messages.
Multi-Input Nodes
Nodes like osf-context and osf-llm accept multiple inputs. The engine waits for all upstream nodes to complete before executing a multi-input node.
- osf-context — Merges all upstream outputs into a single JSON object, keyed by source node label.
- osf-llm — Expects two inputs: context (system message from osf-context) and prompt (user message from osf-prompt-tpl).
- join — Collects multiple messages and combines them into an array or object.
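The osf-context merge behavior described above can be sketched like this (the exact key normalization and collision handling are assumptions):

```typescript
// Toy merge of upstream outputs into one context object, keyed by each
// source node's label, as osf-context is described to do.
interface Upstream { label: string; output: unknown }

function mergeContext(inputs: Upstream[]): Record<string, unknown> {
  const ctx: Record<string, unknown> = {};
  for (const { label, output } of inputs) {
    ctx[label] = output;
  }
  return ctx;
}

const ctx = mergeContext([
  { label: "OEE Data", output: { oee: 0.72 } },
  { label: "Defects",  output: [{ id: 1, type: "scratch" }] },
]);
```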
How Execution Works
- The flow engine builds a directed acyclic graph (DAG) from your nodes and wires.
- Entry nodes (no incoming connections) execute first.
- Each node receives the output of its predecessor as context. Multi-input nodes receive all upstream outputs.
- Execution proceeds in topological order — all dependencies must complete before a node runs.
- Multi-output nodes route data to specific downstream paths based on port index.
- If a human-input node is reached, the flow pauses until the user responds.
- Results are streamed in real time via SSE (Server-Sent Events).
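The topological-ordering step can be sketched with Kahn's algorithm (illustrative only; the real engine also handles streaming, multi-port routing, and pauses):

```typescript
// Kahn's algorithm: a node runs only after all its dependencies complete.
function topoOrder(nodes: string[], wires: [string, string][]): string[] {
  const indegree = new Map(nodes.map((n) => [n, 0]));
  const next = new Map<string, string[]>(nodes.map((n) => [n, []]));
  for (const [from, to] of wires) {
    next.get(from)!.push(to);
    indegree.set(to, indegree.get(to)! + 1);
  }
  // Entry nodes (no incoming wires) execute first.
  const queue = nodes.filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (queue.length) {
    const n = queue.shift()!;
    order.push(n);
    for (const m of next.get(n)!) {
      indegree.set(m, indegree.get(m)! - 1);
      if (indegree.get(m) === 0) queue.push(m);
    }
  }
  if (order.length !== nodes.length) throw new Error("cycle detected: not a DAG");
  return order;
}

const order = topoOrder(
  ["oee-mcp", "osf-context", "osf-prompt-tpl", "osf-llm"],
  [["oee-mcp", "osf-context"], ["osf-context", "osf-llm"], ["osf-prompt-tpl", "osf-llm"]]
);
```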
Example: OEE Analysis Pipeline
A modular flow that collects data from multiple sources, builds context, and generates an analysis.
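A plausible wiring for this pipeline, built only from nodes described on this page (the exact layout and node choices are assumptions):

```
oee-mcp ------+
erp-mcp ------+--> osf-context ----> osf-llm --> osf-output-parser --> debug
                                       ^
osf-prompt-tpl ------------------------+
```

The MCP nodes feed osf-context (system message), osf-prompt-tpl supplies the user message, and osf-output-parser guarantees structured JSON before the result is logged.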
Example: Quality Alert Flow
A flow that monitors defects and escalates to a human when quality drops.
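One plausible wiring, using only node types described on this page (the layout, condition name, and node choices are assumptions):

```
QMS MCP tool node --> osf-decision --(defect threshold exceeded)--> human-input --> osf-llm --> debug
                          |
                          +--(otherwise)--> debug
```

The decision node checks the defect data; only the escalation path pauses for human approval before the LLM drafts a report.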
Tips & Best Practices
- Start simple — begin with 2-3 nodes and add complexity gradually.
- Use osf-context for multi-source data — collect data from multiple MCP/TS nodes before sending to the LLM.
- Use osf-prompt-tpl for structured prompts — separate your prompt template from data collection.
- Add human checkpoints — use human-input nodes before destructive or high-impact actions.
- Validate LLM output — use osf-output-parser after osf-llm to ensure structured JSON responses.
- Use debug nodes — attach debug nodes to inspect data at any point in the flow.
- Name your nodes — double-click and set a descriptive name. osf-context uses these names as JSON keys.
- Native nodes for data transforms — use switch, change, and template nodes for routing and transforming data without writing code.
- One tab per flow — each Node-RED tab becomes a separate saveable flow.
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Ctrl + A | Select all nodes |
| Ctrl + C / V | Copy / paste nodes |
| Delete | Delete selected nodes |
| Ctrl + Z | Undo |
| Double-click | Edit node properties |
| Ctrl + Space | Quick-add node |