This code pattern shows how to build and deploy machine learning apps that can run offline and directly on a device (in this case a Raspberry Pi). Using Node-RED with TensorFlow.js, you can incorporate machine learning into your devices in an easy, low-code way.
In most cases, enabling your IoT device with AI capabilities involves sending the data from the device to a server. The machine learning calculations happen on the server, and the results are then sent back to the device for appropriate action. However, when data security or network connectivity is a concern, this approach can be neither ideal nor feasible.
By combining Node-RED with TensorFlow.js, you can more easily add machine learning functionality to your devices.
When you have completed this code pattern, you will understand how to:
- Create a Node-RED node that includes a TensorFlow.js model.
- Build and deploy a Node-RED application that uses a TensorFlow.js node.
The flow of the code pattern is as follows:
- Use (or download) a machine learning model in TensorFlow.js format.
- Create a Node-RED node for the TensorFlow.js model and wire the TensorFlow.js node into a Node-RED application.
- Deploy the Node-RED application locally.
- Access the Node-RED application from a browser and trigger inferencing on images captured from a webcam.
- Alternatively, you can deploy the Node-RED application to a Raspberry Pi device.
- The device runs the Node-RED application and performs inferencing on images from a camera.
- The device outputs to a connected speaker or takes some other action depending on the inference results.
Go to the README file for detailed instructions on how to:
- Clone the repo.
- Install Node-RED.
- Install the TensorFlow.js node.
- Import the Node-RED flow.
- Deploy the Node-RED flow.
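For context on the import step: a Node-RED flow is just a JSON array of wired nodes, which is what you paste or upload in the editor's Import dialog. A minimal sketch of the shape is below; the `tfjs-classify` node type is a hypothetical custom node name, and the actual flow shipped in the repo will contain more nodes and properties.

```json
[
  { "id": "trigger", "type": "inject", "name": "capture image", "wires": [["classify"]] },
  { "id": "classify", "type": "tfjs-classify", "name": "run model", "wires": [["out"]] },
  { "id": "out", "type": "debug", "name": "results", "wires": [] }
]
```

On import, Node-RED fills in default properties for each node, and the `wires` arrays determine how messages pass from one node to the next.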
Paul Van Eck