
Troubleshooting & FAQ

Common errors, debugging tips, FAQ

Common Issues

"Session expired" or constant logouts

Auth tokens (JWTs) expire after 24 hours. If you're being logged out more often than that, your browser may be blocking cookies or your system clock may be skewed. Try:

  • Clear browser storage and log in again
  • Check that your system clock is accurate
  • Ensure third-party cookies are not blocked for this domain

Chat not responding / SSE stream stuck

The AI chat uses Server-Sent Events (SSE) for streaming. If the stream seems stuck:

  • Wait 30 seconds — LLM calls can take time
  • Check if you're behind a proxy that buffers SSE (corporate proxies sometimes do this)
  • Refresh the page and try again
  • If using a VPN, try disconnecting — some VPNs interfere with long-lived connections
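When diagnosing a buffering proxy, it helps to know the wire format the browser expects: SSE events are blocks of `data:` lines separated by a blank line, and a proxy that holds the stream until the blank line arrives will make chat appear frozen. This is a minimal parser for that standard framing, not OSF's actual client:

```python
def parse_sse(raw: str) -> list[str]:
    """Split a raw Server-Sent Events stream into event payloads.

    Events are delimited by blank lines; each `data:` line contributes
    one line of the payload (multiple data: lines join with newlines).
    """
    events: list[str] = []
    data_lines: list[str] = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # stream ended without a trailing blank line
        events.append("\n".join(data_lines))
    return events
```

If you capture the raw response (e.g., with your browser's network tab) and see complete `data:` blocks arriving but nothing rendering, the issue is client-side; if bytes arrive only in large bursts, suspect the proxy.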

Flow editor not loading

The Node-RED editor runs in an iframe. If it shows a blank screen:

  • Check browser console for CSP or X-Frame-Options errors
  • Disable browser extensions that block iframes (e.g., uBlock Origin in strict mode)
  • Try a different browser or incognito mode
  • Ensure cookies are enabled — the editor auth uses the osf_editor_token cookie

Agent run fails with "timeout"

Code agents have a configurable timeout (default 60s, max 300s). If your agent times out:

  • Increase the timeout in osf-agent.yaml
  • Reduce MCP calls — each tool call adds latency
  • Avoid calling the LLM multiple times in sequence when possible
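The manifest schema isn't reproduced here, so treat the field names below as placeholders. A hypothetical osf-agent.yaml raising the timeout might look like:

```yaml
# Hypothetical sketch -- check the agent manifest reference
# for the actual field names in your version.
name: my-agent
entry: agent.py
timeout: 120   # seconds; default 60, maximum 300
```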

Flow execution shows "zombie" run

A flow run that stays in "running" state indefinitely is usually caused by a server restart during execution. These runs are automatically marked as failed after a timeout. You can safely ignore them and start a new run.

GitHub agent sync fails

Check the following:

  • The repository must be public
  • The repo must contain a valid osf-agent.yaml at the root
  • The entry file path must match what's in the manifest
  • Your GitHub connection may have expired — try reconnecting in Settings

MCP tool returns empty or error result

MCP tool calls can fail if the tool name is wrong or required parameters are missing:

  • Check the Tool Reference for the exact tool name and required parameters
  • Tool names are case-sensitive and use underscores (e.g., factory_get_latest_oee)
  • Some tools return empty results if no data matches the query — this is expected
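How OSF's gateway transports tool calls isn't detailed here, but MCP itself is JSON-RPC 2.0, and a malformed `tools/call` envelope is a common cause of empty or error results. This sketch builds the standard request shape, using the factory_get_latest_oee tool named above; its argument schema is an assumption, so check the Tool Reference for the real parameters:

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by MCP.

    Tool names must match the Tool Reference exactly: they are
    case-sensitive and use underscores, not hyphens or camelCase.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })
```

A typo like `Factory_Get_Latest_OEE` or `factory-get-latest-oee` will be rejected as an unknown tool, while a correct name with a missing required argument typically produces a parameter-validation error instead.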

FAQ

Is the factory data real?

No. OSF uses a continuously running simulation that generates realistic manufacturing data. The data patterns are modeled after real factory scenarios, but all data is synthetic.

Can I use my own LLM?

Not in the hosted version, which runs its own locally hosted LLMs. If you self-host, you can configure any OpenAI-compatible LLM endpoint via environment variables.

What LLM models does OSF use?

The hosted instance uses qwen2.5-14b for specialist tasks and a larger model for moderation and synthesis. Both run on local GPUs.

Can multiple users use it at the same time?

Yes. Each user has their own conversations, agents, and flows. MCP tool calls are shared across the same factory simulation. LLM requests are queued when the GPU is busy.

Is there rate limiting?

Yes. Agent runs are limited to 3 per minute. Chat messages and MCP calls have higher limits. These limits protect the shared LLM server from overload.
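If you're scripting against the platform, you can pace requests client-side instead of hitting the limit and retrying. The 3-runs-per-minute figure comes from the answer above; the sliding-window helper below is just an illustrative sketch, not part of the OSF API:

```python
from collections import deque

def seconds_until_allowed(past_calls: deque, now: float,
                          limit: int = 3, window: float = 60.0) -> float:
    """Seconds to wait so the next call stays under `limit` calls per
    `window` seconds. `past_calls` holds timestamps of prior calls and
    is pruned in place as old entries age out of the window."""
    while past_calls and now - past_calls[0] >= window:
        past_calls.popleft()
    if len(past_calls) < limit:
        return 0.0
    # The oldest in-window call must age out before another is allowed.
    return window - (now - past_calls[0])
```

Sleeping for the returned duration before each agent run keeps a script under the limit without guesswork.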

Is OSF open source?

Yes. Both the frontend and gateway are open source. You can self-host the entire platform. See the Self-Hosting Guide.

Getting Help

If you're stuck, here are your options:
