When do you "outgrow" embedded concerns?

This is a bit of a philosophical question, but I think the people here would provide some good insights/musings.

The question is about when an embedded system becomes powerful enough that the common wisdom around embedded do's and don'ts seems to apply less. They say you should prioritize speed, avoid all heap allocations, keep memory usage small, etc. In my situation at work, we are using the Arm Cortex-A53 with 4 GB of RAM. This thing is powerful enough to run a full desktop operating system (I tried it). This is an upgrade from our previous platform, on which we ran bare-metal C++.

In moving from C++/bare-metal/weak to Linux/Rust/powerful, can we eschew some of the constraints embedded developers typically apply to themselves? I know it's obviously use-case specific, but I ask because on these forums and elsewhere I don't see much talk about this kind of situation. I hear a lot about "constrained" environments, but for us the time deadlines are on the order of multiple seconds, we have orders of magnitude more RAM than we need, and 64 GB of fast storage.

Is this even really embedded anymore? It feels "wrong" to heap-allocate willy-nilly and pull in anything and everything, as if I were writing a desktop application in Rust.

I know the real answer is to try it, benchmark it, test it, and restrict ourselves only if we hit problems, but I wanted to get some community insight. I feel like the embedded landscape is changing as cheap, low-power, but fast SoCs become widespread.

Thanks!

Why do you think so? The line is around 128 MB; that's when "normal" desktop memory allocators start working adequately.

Linux tried to support smaller devices for a long time, but it looks like people go with custom solutions there instead, and support for such systems is now planned for removal.

But once you've hit around 128 MB (give or take), you can afford to start using "normal" desktop tools; there is really no barrier separating you from supercomputers.

Look at smartphones: the iPhone had 128 MB and needed to heavily restrict UI multitasking, while Android had 192 MB and supported multitasking from the beginning.

Both used a normal OS core, slightly stripped down. And all the other, specialized smartphone OSes were killed off within a few years; those specialized OSes just weren't attractive anymore.

Only you can decide whether to call it embedded or not. But you no longer need to use resource-constrained solutions.

You may still decide to use them for one reason or another, but there is no need.

For what it's worth, nope, not from my perspective. I have much smaller AWS EC2 instances running Ubuntu and some sort of online tool. As long as the application isn't leaking memory it will run until AWS closes their doors.

Thanks for the insight. These were our thoughts as well; it just feels affirming to hear someone else say it.

So memory fragmentation just isn't a concern in most cases, even for long-running programs?

In my view, it's the function of the actual device itself that dictates whether it is "embedded" or not, not really the hardware capabilities (though they do come into the decision).

If it needs to function reliably without monitoring, I'll probably think of it as "embedded" because I'll apply that kind of rigor.


But what about when reliability is simply a byproduct of the hardware? If your application only uses 10 MB of RAM and you have 4 GB, you're never going to fragment/OOM, so you are reliable. If your timings aren't strict, then you're scot-free.

Given a reasonable amount of excess memory, no. For example, if your application never uses more than 1.2 GiB of memory and the largest block it ever allocates is 1 KiB, then it's essentially impossible to fragment the heap on your 4 GiB machine in a way that would strangle your program.

Or, given very patterned allocations, no. For example, if your application only allocates and frees 74-byte structs, it's impossible for memory to become fragmented. There are only two possibilities for the next allocation: either the heap has to be expanded because nothing is currently free, or a previously freed 74-byte block can be put back into use. Neither path leads to fragmentation.
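A minimal Rust sketch of that reasoning, using a hypothetical `FixedPool` type (not anything from a real codebase): every freed 74-byte block goes on a free list, and the next allocation either reuses one of those blocks or grows the heap, so no unusable hole is ever left behind.

```rust
// Hypothetical fixed-size pool illustrating the "only two possibilities"
// argument: reuse a freed block, or grow the heap. Same-size blocks can
// never strand a hole between live allocations.
pub struct FixedPool {
    free: Vec<Box<[u8; 74]>>, // previously freed 74-byte blocks
}

impl FixedPool {
    pub fn new() -> Self {
        FixedPool { free: Vec::new() }
    }

    /// Case 1: reuse a freed block. Case 2: expand the heap.
    pub fn alloc(&mut self) -> Box<[u8; 74]> {
        self.free.pop().unwrap_or_else(|| Box::new([0u8; 74]))
    }

    /// A freed block is always a perfect fit for a later `alloc`.
    pub fn dealloc(&mut self, block: Box<[u8; 74]>) {
        self.free.push(block);
    }
}

fn main() {
    let mut pool = FixedPool::new();
    let a = pool.alloc();
    let addr = a.as_ptr() as usize;
    pool.dealloc(a);
    // The freed block is reused verbatim: no new heap growth, no hole.
    let b = pool.alloc();
    assert_eq!(b.as_ptr() as usize, addr);
}
```

A real fixed-size allocator would manage raw memory rather than `Box`es, but the accounting is the same: the free list can only shrink back into the heap or be recycled whole.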

All the C and C++ toolchains I've used have a way to read the bounds of the heap (lowest and highest addresses); the difference gives a rough estimate of the heap size. If Rust has the same, regularly checking it would give you confidence about whether your application is stable.
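Rust doesn't expose heap bounds portably, but since the platform in question runs Linux, one option is to poll the kernel's own accounting instead. A sketch, assuming a Linux target; `/proc/self/statm` holds whitespace-separated counters in pages, as documented in proc(5):

```rust
use std::fs;

// Sketch, assuming Linux: field 0 of /proc/self/statm is total program
// size and field 1 is the resident set, both in pages. Sampling this
// periodically and watching for unbounded growth gives similar
// confidence to reading the heap bounds directly.
fn resident_pages() -> Option<u64> {
    let statm = fs::read_to_string("/proc/self/statm").ok()?;
    statm.split_whitespace().nth(1)?.parse().ok()
}

fn main() {
    match resident_pages() {
        Some(pages) => println!("resident set: {pages} pages"),
        None => println!("/proc not available on this system"),
    }
}
```

Multiply by the page size (usually 4 KiB) to get bytes; a value that keeps climbing over days of steady-state operation is the warning sign.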

A word of caution: In my experience Rust applications use significantly more stack space than C / C++ applications. You will need to ensure your stack is reasonably sized.
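If stack depth becomes a concern, the standard library already lets you size worker-thread stacks explicitly via `std::thread::Builder::stack_size` (spawned threads without an explicit size also honor the `RUST_MIN_STACK` environment variable). A sketch; the 16 MiB figure and the recursion depth are arbitrary examples, not recommendations:

```rust
use std::thread;

// Sketch: give a worker thread an explicit 16 MiB stack. The deep
// (non-tail) recursion below is just a stand-in for stack-hungry code.
fn run_on_big_stack() -> u64 {
    fn depth(n: u64) -> u64 {
        if n == 0 { 0 } else { 1 + depth(n - 1) }
    }

    thread::Builder::new()
        .stack_size(16 * 1024 * 1024) // 16 MiB, an arbitrary example
        .spawn(|| depth(50_000))
        .expect("failed to spawn thread")
        .join()
        .expect("worker panicked")
}

fn main() {
    println!("recursion depth reached: {}", run_on_big_stack());
}
```

Note that the main thread's stack is set by the OS (e.g. `ulimit -s`), not by Rust, so this only helps for threads you spawn yourself.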

That's what these "advanced" memory allocators (the ones that perform poorly with less than 32 MB of RAM) are supposed to fight.

I suspect that's the reason "small-scale" allocators are not popular: yes, they can be effective with 32 MB or even 2 MB, but they do lead to memory fragmentation, and over months or years that becomes a problem.

That's why people pick either "full-blown" Linux or a specialized solution, not "small-scale" Linux.

Whether that would be enough to run for years, I have no idea; months are fine in practice.