I realise that this question might be unanswerable without reference to specifics, but a general vibe would still be useful. I'm working on a crate (a software modem) primarily for use on my PC, but I would like to make it as accessible as possible to people targeting embedded micros.
I've ensured it is no_std, of course. By chance it has been fairly easy to avoid using alloc, so I've stuck with that for now. I currently use f32, since this benchmarks well on my machine, but it would be possible to make it integer-only. I've heard these are limitations embedded devs sometimes face; however, I have no practical experience to tell whether this is actually helpful to anybody or whether I'm just making my own life more difficult.
Embedded developers: do you ever actively seek no-malloc or no-floating-point crates? Do these properties make you happy? Or is this unimportant?
Depends what one means by "embedded". Embedded systems range from tiny 8-bit microcontrollers with a few K of program and data space all the way up to machines running full Linux.
At the low end, dynamically allocating memory is out of the question and there is no support for floating point.
Somewhere in between there are devices with floating point, and memory is large enough that allocation starts to become practical, even if that only means grabbing some memory at start-up and never releasing it.
Then there is performance to consider: at the low end we might be running at only a couple of million instructions per second, or lower.
I have no idea what performance your modem demands, but it may simply not be practical to run it on a slow machine with no hardware float support. It would not be so terrible to set a minimum hardware requirement of so many MIPS plus hardware float to support your code.
But yeah, in general avoiding allocation and floating point will make more embedded devs happy. Consider it a challenge!
As an old project leader of mine said to our team on an embedded project:
If you think you need floating point to solve the problem then you don't understand the problem.
In an embedded, resource-constrained project, I'd try to optimally use the available resources.
If the problem at hand can be solved using floating-point operations, and the target system has a floating-point unit (FPU), then the optimal approach might indeed be using floats. If the target has no FPU, then emulating floats in software is probably a bad choice, and solving the problem with integer math could be more efficient. But it's hard to make very general claims without looking at the details and benchmarking.
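For concreteness, here's a minimal sketch of what the integer-math route can look like, using a Q15 fixed-point format. The format choice and helper names are purely illustrative, not something from the crate in question:

```rust
/// Q15 fixed point: a signed 16-bit raw value representing raw / 32768.
/// Illustrative only; saturation on overflow is omitted for brevity.
type Q15 = i16;

/// Multiply two Q15 values: widen to i32, round, and shift back down.
fn q15_mul(a: Q15, b: Q15) -> Q15 {
    (((a as i32 * b as i32) + (1 << 14)) >> 15) as Q15
}

/// Convert an f32 in [-1.0, 1.0) to Q15, e.g. for offline table generation.
fn q15_from_f32(x: f32) -> Q15 {
    (x * 32768.0) as Q15
}
```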
If you're making a general-purpose library targeting diverse embedded environments [1], then you could consider using conditional compilation with cargo features. You could have an optional float feature: when disabled (the default), your library would use integer math; when enabled, it would use floating-point math. It's then up to the user of your library to enable that feature based on their target architecture.
Same goes for an optional alloc feature.
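As a rough sketch of what that could look like (the feature names `float` and `alloc` here are just examples, not a prescribed convention):

```toml
# Cargo.toml: opt-in features, both off by default
[features]
default = []
float = []
alloc = []
```

```rust
// In the library: pick the sample type (and any alloc-dependent code) per feature.
#[cfg(feature = "float")]
pub type Sample = f32;

#[cfg(not(feature = "float"))]
pub type Sample = i16; // e.g. a Q15 fixed-point sample

#[cfg(feature = "alloc")]
extern crate alloc; // enables Vec, Box, etc. on no_std targets that provide an allocator
```

The downstream user then builds with `--features float` only when their target has an FPU, and leaves it off otherwise.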
[1] ... and if there's a real benefit in using floats when possible.
Thank you both, very useful insights, and it's good to know that these are properties someone could conceivably want.
It's likely that the performance required for this modem would exceed what low-end processors can do, but I'll see where I land after optimisations. Maybe I'll pull out the FP anyway.
Worth noting is that some targets have f32 hardware support but no f64, or are limited in which operations have direct hardware support (x86-32 has hardware sin/cos/sqrt, for example, though I'm not sure those exist in SSE/AVX; it has been a long time since I had to do x86 assembly).
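As an illustration of keeping everything in single precision so such targets stay on their hardware FPU, here's a sketch that uses the `libm` crate for no_std-friendly f32 math. The `goertzel_power` helper is hypothetical, not part of the poster's modem crate:

```rust
/// Power of one frequency bin via the Goertzel algorithm, kept entirely in f32
/// to avoid accidental promotion to f64 on targets with single-precision FPUs.
fn goertzel_power(samples: &[f32], bin: f32) -> f32 {
    let w = 2.0_f32 * core::f32::consts::PI * bin / samples.len() as f32;
    let coeff = 2.0 * libm::cosf(w); // libm provides f32 math functions without std

    let (mut s1, mut s2) = (0.0_f32, 0.0_f32);
    for &x in samples {
        let s = x + coeff * s1 - s2;
        s2 = s1;
        s1 = s;
    }
    s1 * s1 + s2 * s2 - coeff * s1 * s2
}
```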