AI at the Edge with Meadow

Bryan here, and I’m super excited to announce first-class support in Meadow for running AI workloads at the edge. 

AI @ the edge is a critically important, and rapidly growing, part of many IoT workflows. Gartner has predicted that by next year, 75% of enterprise-generated data will be created and processed at the edge. And Intel CEO Pat Gelsinger has stated that “by 2026, 50% of edge computing deployments will involve machine learning and AI, compared to just 5% today.”

IoT devices routinely not only provide sensory input for model training in the cloud but also, in turn, execute those models locally for efficient, low-latency outcomes:

  • Sensory Input – ML training pipelines and modern LLMs are gobbling up data at an unprecedented pace. The edge provides a wealth of real-world data that can only be gathered from field-deployed devices. Digital twins would be impossible without the edge sensory input that feeds them, and that data, not just from individual devices but in aggregate, can unlock insights that massively increase overall efficiency and productivity.
  • Local Model Execution – While it’s generally not practical to train models at the edge, these devices have outsized capabilities to execute models locally. For instance, real-time defect detection via an attached camera can run easily on a microcontroller (MCU), and object detection in multi-channel video can be done rather efficiently on a GPU-accelerated Jetson NX. This seminal post by Pete Warden (Useful Sensors) explains why non-LLM AI/ML models run so efficiently on low-powered devices.

Benefits of Running @ the Edge

Running AI at the edge rather than in the cloud also offers a number of important advantages:

  • Lower Latency – Because models execute locally, results are available immediately, with no round-trip to the server.
  • Reduced Bandwidth Cost – Once training has occurred, it’s orders of magnitude more efficient to send the output of a model than the raw data that fed it. For instance, if you’re creating a smart outlet that tracks appliance usage, all the cloud cares about is which appliances are running and whether there are anomalies. It doesn’t need the raw waveform data of actual electrical usage, which may not even be feasible to send over a low-bandwidth IoT connection (see the sketch after this list).
  • Reduced TCO – By offloading ML processing to the edge and running models in situ, server loads (and costs) can be reduced, and devices can put spare processing cycles to work as part of their normal operating lifecycle.
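
To make the bandwidth point concrete, here’s a back-of-the-envelope sketch in C#. The sampling rate, sample width, and event payload below are illustrative assumptions for the smart-outlet example, not measurements from a real device:

```csharp
// Illustrative bandwidth arithmetic for the smart-outlet example.
// All figures are assumptions chosen for easy math, not measurements.

const int SampleRateHz = 1_000;   // assume 1 kHz current sampling
const int BytesPerSample = 2;     // assume 16-bit samples

// Raw waveform: ~2 KB/s, ~173 MB/day -- likely more than a
// constrained IoT uplink can carry.
long rawBytesPerDay = (long)SampleRateHz * BytesPerSample * 60 * 60 * 24;

// Model output: a small, occasional event instead of a firehose.
// A payload like this is on the order of 100 bytes per state change.
var eventPayload = """{"appliance":"dishwasher","state":"on","anomaly":false}""";

Console.WriteLine($"Raw waveform:  {rawBytesPerDay / 1_000_000.0:F1} MB/day");
Console.WriteLine($"Event payload: {eventPayload.Length} bytes per state change");
```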

AI on Meadow Devices

Our goal is to provide a productive, powerful, unified, multi-platform AI experience at the edge for .NET developers, whether on an MCU-based device like a Meadow F7, a single-board computer (SBC) such as a Raspberry Pi or a GPU-accelerated Jetson Nano, or a desktop machine.

As with most embedded development today, AI workloads are somewhat easier on SBCs but more difficult on MCUs, where specialized model execution engines/runtimes have to be built into the OS and applications, models have to be manually managed (often compiled directly into the executing assembly rather than loaded dynamically), and model training requires specialized tooling and skills.

We started on our AI platform story almost two years ago, and the work has been progressing along the following tracks:

  • TensorFlow on MCUs Runtime – The first step was to provide a model execution engine/runtime baked into Meadow.OS as a first-class feature. We’ve now completed this work: TensorFlow Lite is available as a managed OS service. Technically, this was a huge lift. We had to add the ability to dynamically load arbitrary libraries into NuttX so that .NET code could call into them.

    This work also enables dynamic model loading, which makes it possible to download and execute models on the fly, without the rebuild-and-redeploy cycle that today’s MCU platforms typically require (see the first sketch after this list).
  • Hardware and Protocol Support on Single-Board Computers (SBCs) – Meadow provides first-class integrations for SBCs such as the AI workhorse Jetson Nano, the Raspberry Pi, the BeagleBone Black, and others, so all of the same drivers and industrial protocol support (MODBUS, M-Bus, etc.) can be used in Meadow apps running on them (see the second sketch after this list). Additionally, Meadow supports crash reporting, OTA updates, command and control, and other Meadow.Cloud functionality on those SBCs, so they can be managed in the field alongside Meadow MCU devices.
  • Edge Impulse Integration – Our partner Edge Impulse provides not only pre-trained, optimized models for embedded scenarios but also a powerful workflow for importing training data, labeling it, and then tuning and outputting a useful model. We are well along on this work and look forward to launching the integration.
  • Useful Sensors Drivers – We’ve also been creating Meadow.Foundation drivers for Useful Sensors’ purpose-built, commodity AI sensors.
  • Documentation + Samples – Finally, no feature is launched until it’s not just tested but also documented, with samples.
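
To give a feel for what the dynamically loaded runtime enables, here’s a minimal sketch of loading and invoking a model in a Meadow MCU app. The TensorFlowRuntime, LoadModel, and Invoke names are hypothetical placeholders for illustration, not the shipping API surface; see the AI guides for the real one:

```csharp
using System.IO;
using System.Threading.Tasks;
using Meadow;
using Meadow.Devices;

public class MeadowApp : App<F7FeatherV2>
{
    public override async Task Run()
    {
        // Fetch the model at runtime (e.g., downloaded via Meadow.Cloud)
        // rather than compiling it into the application assembly.
        byte[] modelData = File.ReadAllBytes("/meadow0/defect-detector.tflite");

        // Hypothetical API: hand the model to the OS-hosted TensorFlow Lite
        // runtime, then run inference on a window of sensor samples.
        var model = TensorFlowRuntime.LoadModel(modelData);
        float[] input = await ReadSensorWindowAsync();
        float[] output = model.Invoke(input);

        Resolver.Log.Info($"Defect probability: {output[0]:F2}");
    }

    // Placeholder for whatever capture pipeline the application uses.
    private Task<float[]> ReadSensorWindowAsync() =>
        Task.FromResult(new float[64]);
}
```

Because the model file is just data, swapping in a retrained model is a file download via Meadow.Cloud rather than a firmware rebuild and redeploy.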
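
And here’s a rough sketch of the same programming model on an SBC. It assumes a Meadow.Linux RaspberryPi device class and the Meadow.Foundation Sht31d I2C temperature/humidity driver; treat the exact class names as approximations and check the Meadow docs for the current API:

```csharp
using System.Threading.Tasks;
using Meadow;
using Meadow.Foundation.Sensors.Atmospheric;

public class MeadowApp : App<RaspberryPi>
{
    public override async Task Run()
    {
        // The same driver stack as on an MCU: create an I2C bus and put a
        // Meadow.Foundation sensor driver on top of it.
        var i2cBus = Device.CreateI2cBus();
        var sensor = new Sht31d(i2cBus);

        while (true)
        {
            var reading = await sensor.Read();
            Resolver.Log.Info($"Temperature: {reading.Temperature?.Celsius:F1} °C");
            await Task.Delay(5_000);
        }
    }
}
```

The payoff is that sensor code written for a Meadow F7 moves to a Pi or a Jetson with little more than a change of device class, and Meadow.Cloud manages both the same way.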

For more information, check out the AI guides here.