In the same way that "embedded" is relative, I appreciate the author's recognition that "edge" is relative. For some, AI at the edge means on-prem server farms. For others it means a mini-PC, or maybe an SBC. Here it's a microcontroller. Further still is AI inside the sensors a microcontroller would talk to, though that's probably just another microcontroller.
There's "micro" and "micro". The microcontroller operating a simple coffee machine, or a simple washing-machine is probably 8 or 16 bits. This is what I would call "bare metal", as they don't run an OS, only off-the-shelf frameworks at best.
For "bigger" devices, it's usually a Cortex inside a system-on-chip or system-on-module, 32 bits single core and a few Mb of RAM for low-end (enough to run regular Linux distro instead of uClinux for instance), 64 bits multicore for high-end devices that deal with audio/video. That kind of business is often resource-hungry in every way.
I work with that kind of stuff, and to me these "microcontrollers" are just monsters that I hesitate to call "micro" when some of my coworkers work on much smaller chips with only a few KB of RAM available.
I do wish sometimes they used the bigger micro, though. We have some power supplies that technically have an Ethernet interface, but when using it, even for SCPI over TCP (forget about the virtual front panel, which takes a minute to update), it lags so badly that the output-enable button needs a few tries to toggle. I should practice yanking the positive wire for emergencies.
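For anyone who hasn't fought one of these: SCPI over TCP is just newline-terminated text commands on a raw socket, conventionally port 5025. Here's a minimal sketch of toggling the output from a PC; the address and the exact `OUTP` syntax are assumptions, so check the instrument's programming manual:

```python
import socket

PSU_ADDR = ("192.168.1.50", 5025)  # hypothetical instrument IP; 5025 is the usual raw-SCPI port

def scpi(sock, cmd):
    """Send one SCPI command; read a reply only for queries (ending in '?')."""
    sock.sendall((cmd + "\n").encode("ascii"))
    if cmd.rstrip().endswith("?"):
        return sock.recv(4096).decode("ascii").strip()
    return None

with socket.create_connection(PSU_ADDR, timeout=5) as s:
    print(scpi(s, "*IDN?"))   # identify the instrument
    scpi(s, "OUTP OFF")       # common output-enable syntax; varies by vendor
    print(scpi(s, "OUTP?"))   # query output state, typically "0" or "1"
```

Even a sluggish controller should turn that around in milliseconds; when it takes seconds, the firmware is the bottleneck, not the protocol.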
The TFLite Micro library has many advantages, and the first of these is the TensorFlow framework itself: you can train the model easily and then implement the same or a similar architecture on ESP32s without much effort. Another advantage is its optimization; you can easily intervene in various memory optimizations, and even though it is not a large one, it does have a community.
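The usual workflow behind that claim: train a Keras model on the desktop, convert it with the TFLite converter (full-integer quantization is what makes it fit an ESP32-class chip), then embed the resulting flatbuffer in the firmware. A minimal sketch of the desktop half, with a placeholder model and random data standing in for a real training set:

```python
import numpy as np
import tensorflow as tf

# Placeholder model: any small Keras network, trained the usual way.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=...)  # train as usual

# The converter needs sample inputs to calibrate int8 quantization.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]  # stand-in for real samples

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

On the device side the .tflite file is typically turned into a C array (e.g. with `xxd -i`) and run by the TFLite Micro interpreter; the "memory optimizations" mentioned above mostly come down to sizing the tensor arena and registering only the ops your model actually uses.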
Apart from that, the author implemented the model the traditional way in C, but it can be more convenient to drive TFLite Micro on ESP32s from the Berry scripting language.
However, since I have never used ONNX in this kind of project, I can't speak to its advantages, so comparisons are difficult from my perspective. But as I said, TFLite Micro offers benefits like easy integration, good optimization, and, as the name implies, TensorFlow.
For "bigger" devices, it's usually a Cortex inside a system-on-chip or system-on-module, 32 bits single core and a few Mb of RAM for low-end (enough to run regular Linux distro instead of uClinux for instance), 64 bits multicore for high-end devices that deal with audio/video. That kind of business is often resource-hungry in every way.
I work with that kind of stuff, and to me these "microcontrollers" are just monsters that I hesitate to call "micro" when some of my coworkers work on much smaller chips with only a few K of RAM available.
Wouldn't it be advantageous if we used ONNX for everything? https://onnx.ai/
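In principle that's ONNX's pitch: one interchange format that many runtimes can execute, so the training framework stops mattering. A minimal sketch of the desktop side with onnxruntime ("model.onnx" and its input shape are placeholders; Keras models can be exported via tf2onnx, PyTorch models via torch.onnx.export):

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder for any exported model.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 32).astype(np.float32)  # stand-in input matching the model's shape

outputs = sess.run(None, {input_name: x})
print(outputs[0])
```

The catch on a microcontroller is the runtime itself: the full onnxruntime doesn't fit an ESP32-class chip, so you'd be looking at a code generator or a stripped-down port rather than ONNX "everywhere".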