
Node-Red support for the Azure IoT Hub Node.js SDKs

Introduction

Hi! If you're reading this, you're probably looking for information on how to use Azure IoT Hub from Node-Red. The good news is that it's possible and should be fairly easy; the bad news is that there are no official nodes to do it yet. The SDK team's resources are limited and we're focused on shipping high-quality APIs for our SDKs, which leaves us little, if any, time to support third-party tools. That said, we've been thinking about Node-Red for a while, and here are our thoughts on how we think it should happen, if it were to happen. We hope this helps, and we want to emphasize that we welcome contributions to our repositories through pull requests, so if you agree with this approach and want to give it a try, we'll help as best we can!

What is Node-Red

See: https://nodered.org/

Node-Red on devices

Because Node-Red requires a Node.js runtime, targeted devices are usually of the Raspberry Pi/BeagleBone class. The hosted GUI makes it easy for a developer to "reconfigure" the device, while the "daemon" nature of Node-Red makes it easy to run automatically at boot.

Node-Red on servers

Server-side applications can also be run using Node-Red, and again, the separation between the UI and the flow-execution engine makes it easy to update or reconfigure an application without having to redeploy a whole environment or VM.

Existing samples (not for production)

Two samples related to Azure IoT Hub are currently available for Node-Red:

Other related things:

Azure IoT Hub features and corresponding Node-Red "nodes"

A list of best practices for creating nodes can be found in the Node-Red docs:

http://nodered.org/docs/creating-nodes/

One of the most important statements that this document makes is:

nodes shall sit at the beginning, in the middle, or at the end of a flow, but not all at once.

A single "client" node with multiple inputs and outputs would not work well here because it would lead to weird, looped flows, for example to receive a device method, act on it, and then send a response. The idea of creating multiple client nodes in the same flow (one for sending and one for receiving, for example) doesn't work well either, since IoT Hub devices can only use a single connection and one client node would disconnect the other. In other words, a connection, and therefore a Client object, should be shared between a "sending" node and a "receiving" node. Luckily for us, Node-Red provides this capability with "configuration nodes".
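
To make this concrete, here is a minimal sketch of what such a configuration node could look like, using the Node-Red node API and the azure-iot-device package. The node type name ("azure-iothub-device-client") and its properties are made up for illustration; nothing like this exists in the SDK today.

```javascript
// azure-iothub-device-client.js -- a minimal sketch, not production code.
// The node type name and the "connectionString" credential are illustrative assumptions.
module.exports = function (RED) {
  'use strict';
  const Client = require('azure-iot-device').Client;
  const Mqtt = require('azure-iot-device-mqtt').Mqtt;

  // Configuration node: holds the single Client instance (and therefore the single
  // connection) so that "sending" and "receiving" nodes in the same flow can share it.
  function DeviceClientNode(config) {
    RED.nodes.createNode(this, config);
    // One Client per configuration node; the connection is established lazily by the SDK.
    this.client = Client.fromConnectionString(this.credentials.connectionString, Mqtt);

    this.on('close', (done) => {
      // Tear the connection down when the flow is redeployed or stopped.
      this.client.close(() => done());
    });
  }

  RED.nodes.registerType('azure-iothub-device-client', DeviceClientNode, {
    credentials: {
      connectionString: { type: 'password' }
    }
  });
};
```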

If we decide to make the client a configuration node, we then have a couple of different approaches:

  • 1 configuration node (Client), 1 "Source" node and 1 "Sink" node, providing different "ports" for different features (messages, methods, etc.). I do not find this approach compelling because:

    • source/sink nodes would have to subscribe to everything even if the customer doesn't require it
    • configuration of some features (such as receiving desired properties updates) could be difficult
  • 1 configuration node (Client) and 1 node for each feature with a single input or output port.

    • This makes configuration and subscription much easier, but it complicates a flow that uses multiple features quite a bit. Node-Red provides the ability to nest flows, though, so this issue could be alleviated easily. (A sketch of what such a single-feature node could look like follows this list.)
    • Features that receive data from an Azure IoT hub (e.g. receiving a C2D message) will probably end up as nodes placed at the start of a flow. On the other hand, features that send data to an Azure IoT hub (e.g. sending a D2C message) are more likely to sit at the end of a flow. Finally, features that require "querying" IoT Hub (e.g. obtaining device twins) are probably going to be nodes that sit in the middle of a larger flow.
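
Following the second approach, a single-feature node (here, sending a D2C message) would reference the configuration node sketched above and expose a single input port. Again, this is only a sketch; the type names and the "client" config property are assumptions.

```javascript
// azure-iothub-send-telemetry.js -- illustrative sketch of a single-feature node
// with one input port, reusing the Client held by the configuration node above.
module.exports = function (RED) {
  'use strict';
  const Message = require('azure-iot-device').Message;

  function SendTelemetryNode(config) {
    RED.nodes.createNode(this, config);
    // Look up the shared configuration node by the id stored in this node's config.
    const deviceClientNode = RED.nodes.getNode(config.client);

    this.on('input', (msg) => {
      const message = new Message(JSON.stringify(msg.payload));
      // sendEvent connects the shared client if it isn't connected yet.
      deviceClientNode.client.sendEvent(message, (err) => {
        if (err) {
          this.error('Failed to send telemetry: ' + err.message, msg);
        }
      });
    });
  }

  RED.nodes.registerType('azure-iothub-send-telemetry', SendTelemetryNode);
};
```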

List of potential nodes

  • Device Client Nodes [P0]

    • Device Client [configuration] [P0]
    • Authentication
      • Connection String [P0]
      • X509 Certificate [P1]
      • Shared Access Signature [P2] (Do we need a node to generate SAS Tokens?)
    • Transport (should be easy thanks to the SDK client architecture)
      • AMQP [P0]
      • AMQP/WS [P0]
      • MQTT [P0]
      • MQTT/WS [P0]
      • HTTP [P0]
      • HTTP transport Configuration [P2]
    • Features
      • Send D2C Message [P0]
        • Custom Payload [P0]
        • Custom Properties [P1]
      • Receive C2D Message [P0] (sketched after this list)
      • Receive Device Method Request [P0]
      • Send Device Method Response [P0]
      • Upload File to Blob [P2]
      • Receive Twin Desired Property Update [P1]
        • Receive all updates [P1]
        • Receive specific property update (and allow multiple nodes?) [P2]
      • Send Twin Reported Property Update [P1]
  • IoT Hub Service Client [P0]

    • Service Client [configuration] [P0]
    • Authentication
      • Connection String [P0]
      • Shared Access Signature [P2]
    • Send C2D Message [P0]
      • Custom Payload [P0]
      • Custom Properties [P1]
    • Send Twin Desired Property Update [P1]
    • Listen to File Upload Notifications [P2]
    • Listen to C2D Message Feedback [P1]
  • IoT Hub Device Registry [P1]

    • Registry Client [configuration] [P1]
    • Register a Device Identity [P1]
    • Register Multiple Device Identities [P2]
    • Delete a Device Identity [P1]
    • Delete Multiple Device Identities [P2]
    • Update a Device's Authentication Parameters [P2]
    • Update Multiple Device Identities [P2]
    • Enable a Device Identity [P2]
    • Disable a Device Identity [P2]
    • Get a Device Twin [P2]
  • IoT Hub Job Client [P2]

    • Job Client [configuration] [P2]
    • Create a new Device Twin Job [P2]
    • Create a new Device Methods Job [P2]
    • Query for a Job Status [P2]
    • Delete an Existing Job [P2]
  • Event Hubs Client [P1]

    • Listen to incoming D2C messages [P1]
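
As an illustration of a node that would sit at the start of a flow (the "Receive C2D Message" item above), here is a rough sketch of what such a node could look like. It has no input port and emits a Node-Red message whenever a C2D message arrives; type and property names are, again, assumptions.

```javascript
// azure-iothub-receive-c2d.js -- illustrative sketch of a flow-start node.
module.exports = function (RED) {
  'use strict';

  function ReceiveC2DNode(config) {
    RED.nodes.createNode(this, config);
    const deviceClientNode = RED.nodes.getNode(config.client);
    const node = this;

    deviceClientNode.client.open((err) => {
      if (err) {
        node.error('Could not connect to IoT Hub: ' + err.message);
        return;
      }
      // Subscribing to 'message' tells the device client to start
      // listening for cloud-to-device messages.
      deviceClientNode.client.on('message', (msg) => {
        node.send({ payload: msg.getData().toString(), messageId: msg.messageId });
        // Settle the message so IoT Hub does not redeliver it.
        deviceClientNode.client.complete(msg, () => {});
      });
    });
  }

  RED.nodes.registerType('azure-iothub-receive-c2d', ReceiveC2DNode);
};
```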

Source-Control, NPM Packages and Release

Most Node-Red modules seem to have their own package, but in our case, because of the large number of features, that approach probably wouldn't scale, not to mention the additional GitHub repositories we might need. Luckily, the Node-Red API design makes it possible to ship multiple nodes within the same package (see the package.json sketch after the list below). Since it's unlikely that service-side operations and device-side operations would live in the same flow, it makes sense to divide all these nodes into 2 packages:

  • An Azure IoT "device" package with all features and transports of the Device SDK
  • An Azure IoT "service" package with all the features of the Service SDK, as well as the Event Hubs SDK.
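
For reference, here is roughly what the package.json of the "device" package could look like: Node-Red discovers the nodes through the "node-red"/"nodes" section, which is how several nodes can ship in a single package. The package name, file layout and version numbers below are placeholders, not an existing package.

```json
{
  "name": "node-red-contrib-azure-iot-device",
  "version": "0.0.1",
  "description": "Node-Red nodes for the Azure IoT Hub device SDK (sketch)",
  "dependencies": {
    "azure-iot-device": "^1.3.0",
    "azure-iot-device-amqp": "^1.3.0",
    "azure-iot-device-mqtt": "^1.3.0",
    "azure-iot-device-http": "^1.3.0"
  },
  "node-red": {
    "nodes": {
      "azure-iothub-device-client": "lib/azure-iothub-device-client.js",
      "azure-iothub-send-telemetry": "lib/azure-iothub-send-telemetry.js",
      "azure-iothub-receive-c2d": "lib/azure-iothub-receive-c2d.js"
    }
  }
}
```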

Note: We could separate the Event Hubs SDK nodes and the Azure IoT Hub Service SDK nodes, but it might be counterproductive: experience so far has shown that customers are confused by this separation, which only reflects our internal organization. At this point, we plan on providing some basic functionality, strictly for the purpose of using the Azure IoT Hub features. If the need arises for separate Event Hubs packages based on Event Hubs design patterns (Event Processor Host, etc.), we can reconsider.

There is an open question whether these 2 packages should live in one repository, each in their own, or with the SDK. More separation makes CI more complicated (a change that touches both the SDK and the node code requires 2 check-ins) but release simpler (since we can release the SDKs and the Node-Red packages separately). Right now, the current device client package lives within the SDK repo and is released at the same time, and I'm leaning towards keeping it that way for now.

Continuous Integration and Testing

CI and proper testing are non-negotiable conditions for shipping with the Azure IoT SDK team. To that end, we need to come up with a way of testing all the nodes as we build them.

Since each node is going to be extremely simple and basically devoid of any logic, there's very little value in writing unit tests (although in some cases, such as the configuration nodes, it might make sense). Additionally, mocking the Node-Red APIs (inputs, outputs, etc.) would require a lot of plumbing code for little benefit. Finally, most of the SDK is already well tested with a combination of unit, integration and end-to-end tests.

Integration and end-to-end testing, however, make a lot of sense for the Node-Red modules because they'd allow us to simulate how our customers are going to use the nodes. It would be fairly easy to design a few flows that can be started independently and would validate the whole chain, for example sending and receiving a C2D message.
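
As a sketch of what such a flow-level test could look like, the example below loads a flow containing the hypothetical C2D receive node with node-red-node-test-helper, then uses the azure-iothub service SDK to send the message the flow is expected to receive. Module paths, node type names and environment variable names are all assumptions.

```javascript
// e2e/c2d-flow.test.js -- rough sketch of an end-to-end flow test (mocha).
const helper = require('node-red-node-test-helper');
const ServiceClient = require('azure-iothub').Client;

// Hypothetical node modules from the sketches above.
const deviceClientNode = require('../lib/azure-iothub-device-client.js');
const receiveC2DNode = require('../lib/azure-iothub-receive-c2d.js');

helper.init(require.resolve('node-red'));

describe('Send and Receive a C2D message', function () {
  this.timeout(30000);

  beforeEach((done) => helper.startServer(done));
  afterEach((done) => { helper.unload(); helper.stopServer(done); });

  it('delivers a C2D message to the flow', (done) => {
    // A minimal flow: config node -> receive node -> test "helper" node.
    const flow = [
      { id: 'cfg', type: 'azure-iothub-device-client' },
      { id: 'recv', type: 'azure-iothub-receive-c2d', client: 'cfg', wires: [['out']] },
      { id: 'out', type: 'helper' }
    ];
    const credentials = { cfg: { connectionString: process.env.IOTHUB_DEVICE_CONNECTION_STRING } };

    helper.load([deviceClientNode, receiveC2DNode], flow, credentials, () => {
      // The 'helper' node captures whatever the receive node emits.
      helper.getNode('out').on('input', (msg) => {
        if (msg.payload === 'ping') done();
      });

      // Use the service SDK to send the C2D message the flow should receive.
      const serviceClient = ServiceClient.fromConnectionString(process.env.IOTHUB_CONNECTION_STRING);
      serviceClient.open((err) => {
        if (err) return done(err);
        serviceClient.send(process.env.IOTHUB_DEVICE_ID, 'ping', (sendErr) => {
          if (sendErr) done(sendErr);
        });
      });
    });
  });
});
```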

These flows should run at the gate, and because it requires some investment to automate deploying, linking and running the flows, I'd suggest this runs in a separate Jenkins build.