Controlling Arduino Hardware from n8n — Including from an AI Agent
The Arduino UNO Q is a small Linux board with a co-processor microcontroller — think a Raspberry Pi and an Arduino Uno welded together, with a software bridge between them. It runs Docker. It runs n8n. And it has an on-board MCU that can read sensors, drive GPIO, and talk to I²C devices.
The obvious question, once you have n8n running on the Q, is: can n8n workflows talk to the MCU directly? Read a temperature sensor, flip a relay, react when a button is pressed?
The less obvious question — but the one that made this project actually interesting — is: can an LLM talk to the MCU as a tool? Can you drop a "read temperature" node onto an AI Agent's tool port and let the model decide when to sample it, chain the result with a database lookup, and then decide whether to turn on a fan?
The answer to both is yes. Here is how we got there, what we built, and what we learned along the way.
The hardware: Arduino UNO Q and arduino-router
The UNO Q runs a Go service called arduino-router as a systemd daemon. The router is a MessagePack-RPC hub: it sits between the Linux environment and the MCU's serial port, and routes method calls in both directions. Any process on the Linux side — a container, a script, an App Lab app — can connect to its Unix socket at /var/run/arduino-router.sock and speak standard MessagePack-RPC.
The socket is world-writable (0666), so containers don't need privilege escalation. You bind-mount it with -v /var/run/arduino-router.sock:/var/run/arduino-router.sock and you're in.
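For a compose-based setup, the bind mount looks like the fragment below. The image tag and the environment variable shown are illustrative defaults — adapt them to your deployment (the tool-usage flag is discussed later in this article):

```yaml
services:
  n8n:
    image: n8nio/n8n          # illustrative; pin a specific version in practice
    volumes:
      # Bind-mount the router socket into the container — no extra privileges needed
      - /var/run/arduino-router.sock:/var/run/arduino-router.sock
    environment:
      # Required for community tool nodes to appear on an Agent's Tool port
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```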
The Arduino team provides a Python client (arduino_app_bricks.Bridge) as the reference implementation. It works fine if Python is your runtime. We wanted Node.js.

The naïve approach we didn't take
The first thing you might reach for is a Python sidecar: spin a container that wraps Bridge.call() in a small HTTP/TCP server, then have n8n call that over the network. This is exactly what UNOQ_DoubleBridge does for Node-RED — a Python TCP proxy that relays RPC calls.
We rejected this. One extra container to ship, configure, and monitor. Two network hops instead of one. And — critically — async events from the MCU (a button press, a heartbeat, an interrupt flag) become much harder: you'd need the Python sidecar to forward them over a second protocol, probably polling or a persistent HTTP connection. That's complexity with no upside.
Arduino staff confirmed on the forum that the right path is to implement the MessagePack-RPC client in Node.js directly: "you need to implement an interface to the arduino-router in node.js the same way the bridge.py script does." So we did.
Package 1 — @raasimpact/arduino-uno-q-bridge
The first package is a pure Node.js client for arduino-router. No n8n dependency. The only external dependency is @msgpack/msgpack. License: MIT.
The MessagePack-RPC protocol is simple: requests are [0, msgid, method, params], responses are [1, msgid, error, result], and notifications are [2, method, params]. What's not obvious is that the router sends raw msgpack values back-to-back with no length prefix — so you need a streaming decoder that reads one value at a time from the socket buffer.
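As a sketch (the constants and helper names here are illustrative, not the package's API), the three message shapes look like this before msgpack encoding:

```javascript
// Illustrative helpers for the three MessagePack-RPC message shapes.
// Each array is msgpack-encoded before being written to the socket.
const REQUEST = 0, RESPONSE = 1, NOTIFY = 2;

const request  = (msgid, method, params) => [REQUEST, msgid, method, params];
const response = (msgid, error, result) => [RESPONSE, msgid, error, result];
const notify   = (method, params)       => [NOTIFY, method, params];

console.log(request(1, 'read_temperature', []));
// [ 0, 1, 'read_temperature', [] ]
```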
The public API covers what you actually need:
```javascript
import { Bridge } from '@raasimpact/arduino-uno-q-bridge';

const bridge = await Bridge.connect({ socket: '/var/run/arduino-router.sock' });

// Outbound — router forwards to whoever registered this method on the MCU
const answer = await bridge.call('read_temperature', []);

// Inbound — register ourselves as the handler of a method
await bridge.provide('ask_llm', async (params, msgid) => {
  // MCU called us; return a computed response
  return { answer: await queryModel(params[0]) };
});

// Inbound notifications (fire-and-forget from MCU)
bridge.onNotify('button_pressed', (params) => console.log('pressed', params));
```
Under the hood: a monotonic msgid counter with in-flight requests in a Map, a 5-second default timeout per call, exponential-backoff reconnect (configurable), and automatic re-registration of all provide and onNotify subscriptions on reconnect. A typed error hierarchy (BridgeError, TimeoutError, ConnectionError, MethodNotAvailableError) makes error handling in workflows explicit.
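A stripped-down sketch of that request bookkeeping — the class and names here are illustrative, not the package's internals, and `send` is a hypothetical transport callback standing in for the socket write:

```javascript
// Minimal sketch: monotonic msgid, in-flight requests in a Map,
// and a per-call timeout, as described above.
class RpcCalls {
  constructor(send, timeoutMs = 5000) {
    this.send = send;
    this.timeoutMs = timeoutMs;
    this.nextId = 0;
    this.inflight = new Map();
  }

  call(method, params) {
    const msgid = this.nextId++;
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.inflight.delete(msgid);
        reject(new Error(`timeout waiting for ${method}`));
      }, this.timeoutMs);
      this.inflight.set(msgid, { resolve, reject, timer });
      this.send([0, msgid, method, params]); // [type, msgid, method, params]
    });
  }

  // Called for each decoded [1, msgid, error, result] frame from the socket
  handleResponse([, msgid, error, result]) {
    const pending = this.inflight.get(msgid);
    if (!pending) return; // already timed out, or not ours
    clearTimeout(pending.timer);
    this.inflight.delete(msgid);
    error ? pending.reject(new Error(String(error))) : pending.resolve(result);
  }
}
```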
We validated the full stack against a real UNO Q via SSH-tunneled socket before writing a line of n8n code: raw msgpack smoke test, then the full integration suite with the test sketch flashed on the board. Tests cover $/version, RPC round-trips, NOTIFY delivery, array-typed params, and async MCU events (heartbeat arriving within 7 seconds, interrupt-driven gpio_event).
Package 2 — n8n-nodes-uno-q: four nodes
The n8n community package exposes four nodes. Three are straightforward; the fourth is where things get interesting.
Arduino UNO Q Call is an action node: you give it a method name and a parameters array, it calls the MCU via the bridge, and emits the response. The MCU equivalent of an HTTP Request node.
Arduino UNO Q Trigger listens for inbound calls or notifications from the MCU and fires a workflow. It has two sub-modes: Notification (fire-and-forget, multiple triggers can share a method) and Request (the router holds the RPC connection open until the workflow responds). Multiple active workflows can all listen for the same notification — the bridge deduplicates the $/register call internally and fans out to all handlers.
Arduino UNO Q Respond is the companion to Trigger's Request mode: it closes the pending RPC response with a workflow-computed value. Same pattern as n8n's "Respond to Webhook" node, just over the router socket instead of HTTP. This makes workflows like "MCU asks for a config value → n8n queries a database → MCU gets the answer" possible without polling.
Arduino UNO Q Method is where the project gets interesting.
The interesting part: an LLM that can touch hardware
n8n's Tools AI Agent can call any node connected to its Tool port as a tool during inference. The model sees the tool's name and description, decides whether to call it, fills in the parameters, and acts on the result — all autonomously, in a loop, until it has an answer.
The Arduino UNO Q Method node is exactly this: a node with usableAsTool: true that exposes one MCU method to the agent. You configure the method name, a plain-English description the LLM reads, and the parameter schema. Drop it on the Agent's Tool port. Now the LLM can decide to call read_temperature, evaluate the result, and decide whether to call set_fan_speed — expressed in natural language, resolved by the model choosing the right tools.
One node per MCU method is a deliberate design choice, not a lazy one. It gives you human-in-the-loop granularity at the connector level (n8n lets you gate approval on individual tool connections), canvas visibility (reviewers see exactly what the agent can do), and error isolation. It also matches the idiom of every maintained n8n community tool package.
What we learned the hard way: community packages and tool nodes
n8n has two ways a node can appear as a tool to an AI Agent. The stock @n8n/nodes-langchain packages (ToolCode, McpClientTool, ToolHttpRequest) use supplyData with an AiTool output — a sub-node with no execute method. Community packages cannot use this pattern.
We discovered this the hard way. An earlier implementation copied the ToolCode pattern, built cleanly, and then threw "has a 'supplyData' method but no 'execute' method" at runtime. The fix is usableAsTool: true on a regular Main→Main node with a standard execute() method. n8n auto-wraps it into a LangChain DynamicStructuredTool when connected to an Agent's Tool port. This is confirmed by n8n PR #26007 and by every community tool package on npm that actually works.
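Schematically — this is an illustration of the pattern, not the package's actual source — the node description is just a regular Main→Main node with the flag set:

```javascript
// Schematic n8n node description: a normal node with an execute() method,
// made tool-capable by usableAsTool. Field values are illustrative.
const nodeDescription = {
  displayName: 'Arduino UNO Q Method',
  name: 'arduinoUnoQMethod',
  group: ['transform'],
  usableAsTool: true,   // n8n auto-wraps it as a LangChain tool
  inputs: ['main'],     // no AiTool output, no supplyData
  outputs: ['main'],
  properties: [
    { displayName: 'Method', name: 'method', type: 'string', default: '' },
    { displayName: 'Tool Description', name: 'toolDescription', type: 'string', default: '' },
  ],
};
```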
You also need N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true on the n8n process — it's not on by default.
The singleton design problem
arduino-router rejects a second $/register for the same method name. In n8n, multiple active workflows might try to register button_pressed simultaneously. And creating a fresh socket per workflow activation is wasteful.
The solution is a process-wide BridgeManager singleton with reference counting: the first trigger to use a socket path creates the Bridge; subsequent ones get the same instance. When the last trigger deactivates and the refcount hits zero, the socket closes and the router drops all registrations automatically.
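In sketch form — the class name is from the article, but the shape and the `connect` factory are illustrative stand-ins for `Bridge.connect()`:

```javascript
// Reference-counted singleton per socket path, as described above.
class BridgeManager {
  constructor(connect) {
    this.connect = connect;       // hypothetical factory: path -> bridge
    this.entries = new Map();     // socketPath -> { bridge, refs }
  }

  async acquire(socketPath) {
    let entry = this.entries.get(socketPath);
    if (!entry) {
      entry = { bridge: await this.connect(socketPath), refs: 0 };
      this.entries.set(socketPath, entry);
    }
    entry.refs++;
    return entry.bridge;
  }

  release(socketPath) {
    const entry = this.entries.get(socketPath);
    if (!entry) return;
    if (--entry.refs === 0) {
      entry.bridge.close();       // last trigger gone: close the socket;
      this.entries.delete(socketPath); // router drops all registrations
    }
  }
}
```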
There is one non-obvious trap here. n8n's "Listen for test event" calls a trigger's closeFunction before running the downstream workflow with the captured event. If the Trigger is in deferred-response mode and the closeFunction tries to drain in-flight handlers synchronously, you get a deadlock: the workflow can't reach the Respond node until closeFunction returns, and closeFunction is waiting for a Promise that only resolves when Respond runs. The fix is to fire-and-forget the drain — return synchronously from closeFunction, let the handler finish in the background, then close the socket. Simple once you see it; not obvious until the 30-second MCU timeout fires and you're reading the error logs at midnight.
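The shape of the fix can be sketched like this — `pendingHandlers` (an array of promises) and `closeSocket` are hypothetical stand-ins for the real internals:

```javascript
// Sketch of the fire-and-forget drain: closeFunction returns synchronously
// and lets in-flight handlers finish before the socket closes.
function makeCloseFunction(pendingHandlers, closeSocket) {
  return function closeFunction() {
    // Do NOT await here: the workflow can only reach the Respond node
    // after closeFunction returns, so awaiting would deadlock.
    Promise.allSettled(pendingHandlers).then(() => closeSocket());
  };
}
```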
One important caveat: the singleton is per-process. n8n's queue mode with separate worker processes breaks the assumption. This is documented as a v1 limitation.
A note on safety — especially in Europe
Letting an LLM decide when to fire a relay or override a thermostat is a different category of risk from letting it write a draft email. The EU AI Act classifies AI systems based on the risk of their outputs to health, safety, and fundamental rights. A system that can actuate physical hardware — even indirectly, through n8n workflows — warrants explicit consideration of where human oversight sits.
n8n's human-in-the-loop feature, configurable at the agent↔tool connector level, maps directly to this requirement. For any Method node whose MCU method changes physical state — set_motor_speed, open_valve, disable_alarm — the default posture should be to require approval before the call executes. Read-only methods (read_temperature, get_led_state) can reasonably run without a gate.
This isn't just a compliance concern. The practical risk is an LLM misreading context and calling an actuator method at the wrong time. Strict parameter validation in the node, narrow and action-specific tool descriptions (so the model picks the right tool), and a conservative human-review default for state-changing methods are all mitigations worth wiring in from the start.
What's next
Both packages are published on npm at v0.1.0. @raasimpact/arduino-uno-q-bridge is available as a standalone Node.js client for anyone writing code on the UNO Q, independent of n8n. n8n-nodes-uno-q is listed in n8n's community nodes directory — setup is three steps: grab the compose file, run docker compose up -d, and install the package from Settings → Community Nodes in the n8n UI.
The source code is available on GitHub.
The broader pattern here — wrapping hardware RPC methods as LLM tools — is not UNO Q-specific. Any device with a structured RPC interface, a clear method registry, and predictable response types is a candidate. The UNO Q is a particularly clean example because the router already handles reconnection, method routing, and async events. But the same approach applies to any embedded system you can reach over a socket.
Building AI workflows that interact with real hardware?
We work with companies integrating AI agents into operational environments — from sensor networks to ERP-connected automation. If you're evaluating what a proof of concept looks like for your setup, let's talk.
Start the conversation →