Authors are from STMicroelectronics, Politecnico di Torino, Freie Universität Berlin, and Inria. They examined writing firmware for an IoT sensor platform. From the abstract:
> Two teams concurrently developing the same functionality (one in C, one in Rust) are analyzed over a period of several months. A comparative analysis of their approaches, results, and iterative efforts is provided. The analysis and measurements on hardware indicate no strong reason to prefer C over Rust for microcontroller firmware on the basis of memory footprint or execution speed. Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context. It is concluded that Rust is a sound choice today for firmware development in this domain.
> Rust is evolving far too fast to be used in code which needs to run for years to decades down the line.
Code doesn’t stop running on existing hardware when the language changes in a future compiler. You can still use the same old toolchain.
I’ve done a lot of embedded development in a past life. Keeping old tool chains around for each old platform was standard.
I would much rather go through the easy process of switching to an older Rust tool chain to build something than all of the games we played to keep entire VMs archived with a snapshot of a vendor tool chain that worked to build something.
I remember a coworker having to fight with an old platform's build not working because our user/group IDs were bigger than 2^16. I can't remember which utility was causing the problem, I'd have to guess tar. This is when we learned to play the archive a VM game.
I can't imagine there's much overlap between "we will need to update this firmware for the next decade" and "let's bet the farm on the documentation being perfect and all the downloads still being available."
Rust uses "Editions" (e.g., 2015, 2018, 2021, 2024) to introduce breaking changes without splitting the ecosystem. Every edition remains supported by newer compiler versions _indefinitely_. The only churn is on projects targeting "nightlies", but there's no reason you can't target a stable release for projects that need that stability.
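For illustration (crate name hypothetical), the edition is a per-crate setting in Cargo.toml, so a project never changes edition unless its own manifest does:

```toml
[package]
name = "sensor-firmware"   # hypothetical crate name
version = "0.1.0"
# Editions are opt-in and per-crate: this stays 2021-edition code on every
# future stable compiler, even if dependencies move to the 2024 edition.
edition = "2021"
```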
Do you recall which libraries? Use of nightly fell off a cliff after 2018. Looking at the bottom of https://lib.rs/stats#rustc-usage, ~8% of all crates.io requests came from a nightly newer than the one corresponding to 1.86. That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed. The prevalence of nightly is also niche-specific. If you're in embedded, it is likely you need some nightly-only features that haven't been stabilized, but if you have an OS, chances are that you don't.
The code won't magically stop running because the Rust community continued evolving the language. The old toolchains will be available if there's a compatibility change.
I'm curious what the concern is with the Rust editions mechanics in place. Each crate gets to define the language edition it is compiled with. Even if dependencies upgrade to later editions, they can still be linked against by crates on an older edition.
As for the broader crate ecosystem, if crates you depend on drop support for APIs you depend on, that could cause you to get stuck on older unsupported releases. Though that is no different a problem than in any other language.
I only tried Rust for small hobby projects, but I did experience weird code rot: you just leave the code there and after a while it does not compile. Might have something to do with how Cargo manages dependencies.
Do you remember more specifics? I've seen four cases:
- a project with no Cargo.lock, where there have been breaking changes in a dependency that wasn't specified tightly enough in Cargo.toml; fixing this requires some finessing of dependencies, but it is possible to get the project building without any code changes
- a project with a proper dependency tree specified, but where a std change caused inference to break specific older versions of a crate in your tree (time 0.3.35 comes to mind); this requires similar changes to the above
- a project relies on UB in stable code that should always have been disallowed and has since been fixed; this is tricky: for a dependency, an updated version will likely exist; in your own project you'd have to either change your code or use the older toolchain, knowing that the code might not be doing what you want it to do (this happened a handful of times pre-1.20)
- an older project, with the proper dependency versions specified, being built on a newer platform; I saw this with someone trying to build a project untouched since 2018 on an ARM Mac: the toolchain for that target didn't exist back then, and the macOS-specific lib they were using didn't know about it either. Newer versions of the library do, of course, but that required updating to a set of libs that would be compatible too.
All of these cases are quite rare. You could encounter all of them at the same time, and that would be annoying, enough to have someone doing it for fun say "fuck it" and drop it. You can also get hit by lightning.
But between Cargo.lock, which should allow your project to build on newer toolchains, and access to all prior toolchains, your project should continue to build forever on the same platform.
I'd add pinning a rust toolchain version (using rust-toolchain.toml or similar) in addition to Cargo.lock
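For anyone unfamiliar, a sketch of what that looks like (version and target are illustrative); rustup picks the file up automatically from the project root:

```toml
# rust-toolchain.toml
[toolchain]
channel = "1.75.0"                  # pin an exact stable release, not "stable"
components = ["rustfmt", "clippy"]
targets = ["thumbv7em-none-eabihf"] # e.g. a Cortex-M4F firmware target
```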
Rustc does have fairly frequent (every ~18 months or so) minor breaking changes between versions. These are often related to type inference, usually only affect a very small number of crates, and are usually mitigated by publishing patch versions of those crates that don't run into the issue. But if you have the patch version locked with a lockfile then that won't help you, and there is increased likelihood of the build failing, so it's best to lock down the rustc version too.
Luckily pinning the rustc version is very easy to do.
---
On regular projects this kind of issue can usually also be fixed by upgrading to the latest rustc and running `cargo update`. But conservative embedded projects may have legitimate reasons for not wanting to upgrade rustc to the latest version, and parts of the ecosystem's disregard for MSRVs means that running `cargo update` on an older rustc has a high chance of causing build breakage due to MSRV issues.
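Declaring the MSRV in the manifest helps somewhat (field name is real, versions illustrative): Cargo rejects dependencies that require a newer rustc, and the MSRV-aware resolver in recent Cargo (1.84+) can prefer compatible versions during `cargo update`:

```toml
[package]
name = "legacy-firmware"   # hypothetical
version = "0.1.0"
edition = "2021"
# Minimum supported Rust version; with a recent Cargo, dependency
# resolution can take this into account instead of picking versions
# that no longer build on the pinned toolchain.
rust-version = "1.75"
```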
I've had issues compiling Python 3.12 on Arch Linux when the Python 3.12 -> 3.13 transition happened, and a few important packages broke. So I had to compile an older version of gcc and build Python 3.12.
So, it can happen in any programming language, and to any large project.
Rust allows me to handle this easily with a rust-toolchain.toml file, so this concern is kinda overblown imo
> Might have something to do with how Cargo manages dependencies
Build against the lockfile to use the same versions.
Unless they were pulled from upstream, they won’t suddenly stop building against the same compiler version. Rustup makes it easy to switch compiler versions to get back to the same one you used, too.
Even if a crate is yanked, if you have the version in a lock file it will still download and build. (This was done precisely after seeing the left-pad incident.)
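Concretely (a sketch of the relevant flags), Cargo can be told to fail instead of silently re-resolving:

```shell
# Build strictly against the checked-in Cargo.lock; errors out if the
# lockfile is missing or would need to change.
cargo build --locked

# Same flag for pre-fetching sources, e.g. before archiving for offline builds:
cargo fetch --locked
```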
This is not a Rust issue but an inherent issue with dependencies in all languages. External dependencies rot.
For Rust code for serious industrial use cases or firmwares, it's always best to minimize dependencies as much as possible to avoid this. Making local copies of dependencies is also a thing for certain use cases.
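Cargo supports this directly: running `cargo vendor` copies every dependency's source into the repository and prints a config snippet along these lines, after which builds never touch the network:

```toml
# .cargo/config.toml (as printed by `cargo vendor`, which copies all
# dependency sources into ./vendor)
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```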
There is a difference between C and Rust culture. Embedded C projects rarely have external dependencies, and in the rare cases when there are dependencies (e.g. most projects use vendor SDKs nowadays), they are pinned and there is an expectation of API compatibility anyway.
Rust on the contrary incentivises using dependencies, and especially embedded software is hard to write without using external packages (e.g. cortex-m-rt, bytemuck and many others)
Code in all languages bitrots. Even if your dependencies are "done", the language unchanging, and the toolchain mature, a vendor can introduce a new platform and all of a sudden your code won't compile anymore: IBM introduces a new RISC server platform, macOS changes the definition of time_t, or Windows blocks direct win32 DLL access (I know, a stretch) that your older libraries didn't know about.
I'm curious why I've seen this sentiment repeated in so many places, I learned Rust once 5 years ago and I haven't had to learn any new idioms and there have been no backwards incompatible changes to it that required migrating any of my code.
I think people don't like the JavaScript treadmill. People want to think about using tools and getting proficient with them rather than relearning tools. I'm not saying Rust is like that, but I do feel that way about Python and JavaScript. Those are dynamic languages, but it is what all this editions stuff evokes. It's an "if it were stable, it wouldn't be changing" sort of thing.
> using tools and getting proficient with them rather than relearning tools
This attitude works in carpentry, but not in software. You need to get proficient, but your tools will keep evolving, like everything else in the software world.
We have Rust code in a living code base that is more than 5 years old and it's required maybe one touch in the last 5 years to fix some issues due to stricter rules. It was simple enough it could have been automated.
We passed on Rust for Ada/SPARK 2014 to write to bare metal on Cortex-M processors for real-time, high-integrity, and verifiable mission-critical software. Rust is making strides to be a future competitor, but it's new to the formal verification tooling and lacks any real-world legacy in our domain. Ada's latest spec is 2022. Other than AdaCore's verified Rust compiler, Rust still does not have a stable language specification like C/C++, Lisp, or Ada/SPARK 2014. I have no doubt that it will start rising to tick all the boxes that Ada/SPARK do right now with their decades of legacy in high-integrity, mission-critical applications. The mandate to use memory-safe software put into effect this past Jan 1 2026 puts some wind in Rust's sails, but it's about more than memory safety in this domain. Plus, I do not enjoy Rust, but Cargo is nice. We're looking at Lean for further assistance in verifying our work. I think there was and is a lot of Rust evangelism that will also carry it forward and boost Rust's popularity even more.
Rust is not really memory safe if you combine it with external libraries. Too many "unsafe" keywords, and a lack of tooling for code analysis and verification.
> It is concluded that Rust is a sound choice today for firmware development in this domain.
This conclusion was reached with a single experiment.
> Two teams concurrently developing the same functionality — one in C, one in Rust — are analyzed over a period of several months.
> Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context.
> The authors thank Davide Aliprandi and Davide Sergi of the STAIoTCraft team, and the wider Ariel OS team.
So one team had Ariel OS developer support, and it's unclear what support the other team had. Seems fair.
In Figure 12, they simply stop optimizing the code once the desired rate is reached. Just at the end of the project, the Rust firmware gets a performance boost of over a third, most likely from their OS developers.
Additionally, there is a claim that "Ariel OS is shown to provide an efficient and portable system runtime", but no real tests for portability are conducted. Worse still:
> Where C-based projects require a separate project setup and manual code copying per target, Rust on Ariel OS consolidates everything within a single project [..]
This claim is just not true. This sounds like somebody who is not that familiar with C.
> In Figure 12, they simply stop optimizing the code once desired rate is reached.
Yes. The goal was to handle the maximum data rate of the used sensor, and stop there. Time was limited on both ends.
> Just at the end of the project the Rust firmware gets over a third performance boost, most likely from their OS developers.
The ST intern found those boosts all by himself. They compared the exact MCU & peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor tuned & qualified values), and enabled the MCU's instruction cache, which somehow is not default in Embassy's HAL. We were quite impressed actually, the last days before the deadline were quite productive, optimization wise.
> Yes. The goal was to handle the maximum data rate of the used sensor, and stop there. Time was limited on both ends.
I understand, and I understand that there were limits to what could be done with the resources there were. What irks me is the strength of the claim made without enough evidence to make it.
> The ST intern found those boosts all by himself. They compared the exact MCU & peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor tuned & qualified values), and enabled the MCU's instruction cache, which somehow is not default in Embassy's HAL. We were quite impressed actually, the last days before the deadline were quite productive, optimization wise.
Fair enough, hats off to the intern. This kind of thing is common in MCUs, even on low-end CPUs weird defaults can be selected. But the involvement and influence of the OS developers remains unclear.
Again, there's just not enough data to make such strong claims. I think the paper could easily make recommendations: it could say that, at least in some cases (as evidenced), Rust could be a reasonable choice, and it could make an argument for further work.
> This conclusion was reached with a single experiment.
No shit. This is the conclusion reached at the conclusion of this experiment. This part of your comment can be removed with no loss of clarity, I think.
I think you miss my point. I don't think that this conclusion can be reached with the (singular) experiments performed because there is a lack of data to draw it.
If I ran an experiment where I gave a cancer patient bread, and then they recovered from cancer, I couldn't then say: "It is concluded that <bread> is a sound choice today for <cancer treatment> in this domain.". You would rightfully jump up and down and demand further experiments to increase the confidence of the result before drawing the conclusion.
It could have been concluded instead that there is a case for further experiments to be conducted, or that Rust could be approaching a maturity where it could be considered for some firmware projects. But as it stands, the conclusion is far too strong given the experiments performed.
You mean with the "two teams" that were tasked to develop the C / Rust versions?
Yeah, of course. Then again, they were one-person teams, where the C "team" had years of experience in STM32 / embedded C / STM32 Cube development and churned out that handwritten state machine in just days. The Rust "team" was a pre-masters intern with only minimal embedded Rust experience. They ran into all the pitfalls of (async) embedded Rust, but corrected towards the end.
That does not seem like even close to a fair comparison and makes me wonder how valid the conclusion is. Effectively this is two times n=1; if you use 'teams' when you actually mean 'individuals', then that's not really proper reporting.
I do applaud you for having the same work done twice, but it would have been far more meaningful to have two actual teams of seasoned developers do this sort of thing side by side. The biggest item on the checklist would be the number of undiscovered UB or UB-related bugs in the C codebase, comparing that with the Rust codebase on 'defect escape rate' or some other meaningful metric.
I think there’s another hidden issue: testing how new devs use the language vs. how seasoned devs do. I expect someone with a few months of experience would prefer Rust (fewer footguns) but someone with more experience would prefer C (the sharper knife). The flavour of the thing changes as we age.
The problem with C - and I'm saying this as a life-long C programmer and not exactly a fan of Rust - is that C is indeed very sharp but it will cut other people just as easily even though they are far downstream of the original programmer, as well as the users of those programs. And it is extremely hard to not accidentally fall for one of the many pitfalls of C.
I've got my own set of restrictions for when I'm coding in C based on many nights spent poring over various pieces of code and trying to find a way to do it better and safer without outright switching languages. I do believe it is possible. But at the end of all that you have essentially redefined the language in a way that probably no other C programmer would like or agree with, and it would still require very good discipline.
So having languages with fewer footguns is good, as long as the lack of one kind of footgun isn't replaced by other kinds of footguns. It is one of the reasons I'm interested in the FIL-C project (https://fil-c.org/).
Yeah, a common stupid requirement. Perhaps a selling point for any solution would be to deploy a common serialization/de-serialization package that can be used on both the cloud and end point side.
Why? In IoT stuff, it's very useful if you can talk to your devices via standard internet protocols, otherwise you have to introduce some pointless 'gateway' node for that.
I mean sometimes efficiency matters a lot, but a lot of other times, interoperability is more important.
Text-based IO with microcontrollers over a tty was quite a standard thing even decades ago.
1. So Ariel OS is based on Embassy; IIUC I2S and CAN have some support upstream. That can be used already, although not through Ariel's usually fully portable APIs.
2. Well, ST has released official Rust drivers for a bunch of their sensors. They're built on embedded-hal(-async), so can directly be used with Ariel OS. There is probably more.
I'm a big fan of Rust on embedded (and think embassy in particular is awesome, haven't tried this Ariel OS.)
I would say, however, that there are still toolchain issues here. There are all kinds of MCUs that simply don't/won't have a viable compiler toolchain that would support Rust.
e.g. I recently came from a job where they built their own camera board around an older platform because it offered a compelling bundle of features (mainly USB peripheral support and a MIPI interface). We were stuck with C/C++ as the toolchain there; there was no reasonable way to make this work with Rust, as it was a much older ARM ISA.
> Code doesn’t stop running on existing hardware when the language changes in a future compiler. You can still use the same old toolchain.
Unless you find out the compiler was buggy and was producing faulty binaries, but the new compiler can no longer compile the old code.
Now I’ve not extensively used Rust, but almost every time I did, it ended up needing nightly to use some library or other.
Where's the problem exactly?
> As for the broader crate ecosystem, if crates you depend on drop support for APIs you depend on, that could cause you to get stuck on older unsupported releases. Though that is no different of a problem than any other language.
That statement deserves support.
> Rust on the contrary incentivises using dependencies, and especially embedded software is hard to write without using external packages [..]
imo it's just so much easier
> I'm curious why I've seen this sentiment repeated in so many places, I learned Rust once 5 years ago and I haven't had to learn any new idioms [..]
- a lot of code now uses mix of witness types and const generics
- with new borrow checker release they will do new iterators 2.0
Seems like coding on 5 year old Rust is like C++ 98.
-> paper is not final. And IIUC ST will be releasing the code at some point.