Limit handler to CPU/Memory bounds?

  1. I have a setup where a client sends the server (written in Rust) a mathematical expression, and the server, after sanitizing the input, evaluates it.

  2. This makes it possible for a client to mount a denial-of-service attack by sending, say, 2^1000000000, where the underlying bigint will almost certainly consume all available memory.

  3. So my question here is: is there a way in Rust, when executing a function, to limit the total memory allocations of that function (and its children / recursive calls) to, say, 10 MB?

  4. If not, is there a different language that provides such abstractions?



I don’t know of anything like this at the function level; I think your sanity checks would need to be performed at each intermediate operation.

At the process level you can call setrlimit (via the libc crate), on Linux at least.
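As a sketch of that process-level approach: without pulling in any extra crates, one way to apply a limit only to the untrusted work is to run it in a child process wrapped in a shell that sets ulimit first. The wrapper command, the 64 MiB cap, and the echoed placeholder command here are all illustrative, and this is Unix-specific:

```rust
use std::process::{Command, Output};

/// Run `cmd` under `sh` with the child's virtual address space capped
/// (in KiB) via the shell's `ulimit -v`. If the child allocates past the
/// cap, its allocations fail and it dies instead of taking the server
/// down with it. Unix-only sketch; the numbers are illustrative.
fn run_with_mem_limit(cmd: &str, limit_kib: u64) -> std::io::Result<Output> {
    Command::new("sh")
        .arg("-c")
        .arg(format!("ulimit -v {} && {}", limit_kib, cmd))
        .output()
}

fn main() {
    // A well-behaved child runs fine under a 64 MiB cap.
    let ok = run_with_mem_limit("echo within-limit", 65536).unwrap();
    println!("{}", String::from_utf8_lossy(&ok.stdout).trim());
}
```

In a real server you would replace the echo with the expression-evaluator binary and inspect the child's exit status to distinguish "answer" from "killed by resource limit".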



So the potential Rust solutions so far are:

  1. for each execution, create a new process and use the OS to limit its resources

  2. for each step of the execution, estimate how large the intermediate result will be (perhaps even keep a global counter to track how much resource each ‘execution’ has consumed)



I suppose you will eventually be able to use a custom allocator where you could track memory usage, assuming everything you use (like bigint) can be set to allocate that way.
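For what it's worth, a counting wrapper around the system allocator can be written with #[global_allocator]; a minimal sketch follows. It only tracks usage, because deciding what to do on overflow is the hard part (allocators can't unwind), and the numbers are illustrative:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Wraps the system allocator and tracks total live heap bytes.
struct TrackingAlloc;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for TrackingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: TrackingAlloc = TrackingAlloc;

fn main() {
    let before = ALLOCATED.load(Ordering::Relaxed);
    let v = vec![0u8; 1_000_000]; // any bigint limbs would be counted too
    let after = ALLOCATED.load(Ordering::Relaxed);
    assert!(after - before >= 1_000_000);
    drop(v);
    println!("tracked {} bytes for the vec", after - before);
}
```

This counts all allocations process-wide; attributing them to one "execution" would still need the per-step bookkeeping discussed above.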

Limiting CPU use seems harder.



Sounds like one process per client call is the easiest solution.

This does make sense, as the server is effectively building a ‘multi-user’ system.



So, off topic for the Rust forums, but in response to your last question: the Racket language provides a facility for this, and I’m personally not aware of any others.



I’d recommend solving this as part of your algorithm instead of looking for a mechanism to kill a function call/thread/process which uses too much memory. For example, rustc has a recursion limit when expanding macros or resolving traits so it doesn’t consume all of a computer’s resources.
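In the same spirit as rustc's recursion limit, an evaluator can thread an explicit depth through its recursive calls and refuse inputs that nest too deeply. A toy sketch, where the Expr shape and the limit of 64 are made up for illustration:

```rust
#[derive(Debug)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

#[derive(Debug, PartialEq)]
enum EvalError {
    RecursionLimitReached,
}

const RECURSION_LIMIT: usize = 64; // illustrative

/// Evaluate `expr`, bailing out once the call depth passes the limit
/// instead of letting a hostile input blow the stack.
fn eval(expr: &Expr, depth: usize) -> Result<i64, EvalError> {
    if depth > RECURSION_LIMIT {
        return Err(EvalError::RecursionLimitReached);
    }
    match expr {
        Expr::Num(n) => Ok(*n),
        Expr::Add(a, b) => Ok(eval(a, depth + 1)? + eval(b, depth + 1)?),
    }
}

fn main() {
    let shallow = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    assert_eq!(eval(&shallow, 0), Ok(3));

    // Build a degenerate, deeply nested expression to trip the limit.
    let mut deep = Expr::Num(0);
    for _ in 0..1000 {
        deep = Expr::Add(Box::new(deep), Box::new(Expr::Num(1)));
    }
    assert_eq!(eval(&deep, 0), Err(EvalError::RecursionLimitReached));
    println!("depth limit enforced");
}
```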

For the specific problem of limiting memory size, you could use the heapsize crate to get an estimate of memory consumption, then bail with an Err(MemoryUsageOverThreshold) whenever you use too much memory.
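Whatever crate does the estimating, the pattern is: estimate, compare, bail. A self-contained sketch with a hand-rolled estimate for a bigint-like value stored as u64 limbs; the estimate function and the 10 MB threshold are assumptions for illustration, not the heapsize crate's API:

```rust
const MEMORY_THRESHOLD_BYTES: usize = 10 * 1024 * 1024; // illustrative 10 MB

#[derive(Debug, PartialEq)]
struct MemoryUsageOverThreshold;

/// Rough heap-size estimate for a bigint-like value stored as u64 limbs.
/// A hand-rolled stand-in for what a crate like heapsize automates.
fn estimated_heap_size(limbs: &[u64]) -> usize {
    limbs.len() * std::mem::size_of::<u64>()
}

/// Check an intermediate result after each operation and bail if it has
/// grown past the threshold.
fn check(limbs: &[u64]) -> Result<(), MemoryUsageOverThreshold> {
    if estimated_heap_size(limbs) > MEMORY_THRESHOLD_BYTES {
        Err(MemoryUsageOverThreshold)
    } else {
        Ok(())
    }
}

fn main() {
    let small = vec![0u64; 10];
    assert_eq!(check(&small), Ok(()));

    let huge = vec![0u64; 2 * 1024 * 1024]; // 16 MB of limbs
    assert_eq!(check(&huge), Err(MemoryUsageOverThreshold));
    println!("threshold check works");
}
```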



I tried a similar thing a couple of months ago. I wanted the execution of a function to stop if it exceeded time or memory limits. Time can be tracked from another thread, and if the time limit is exceeded, the thread running the function can be killed using pthread_cancel (which people advise against for reasons I’ve forgotten).
To track the function’s memory usage, one can write a wrapper around the system allocator. Apparently, the only way to signal that something is wrong from within an allocator is to panic (via an abort hook, or by returning std::ptr::null() if I remember correctly). Unfortunately, panics inside an allocator always abort instead of unwinding, so the entire program stops (issue).

You can find the code here:

This was only an experiment. Note that I was way out of my comfort zone when I wrote this, so don’t be surprised if you find complete nonsense :wink: To actually stop the execution when memory limits are exceeded, lines 40-46 in src/ have to be uncommented.
From the experience I’ve had I’d also advise against this and I would try to solve this in another way if possible.



For big and risky things I launch a full OS process. That’s partly necessary due to a deficiency in Rust’s OOM handling — it always aborts the whole process without even trying to panic, and for servers that’s just unacceptable. Once you’re working with OS processes, all kinds of sandboxes, rlimits, cgroups, OOM killers, etc. become available to you.

For a task within a process, I don’t know of a better way than just being careful and having a timeout. If you can split your computation into chunks, you can periodically check whether it’s over its time limit:

while do_a_bit_of_work() {
    if Instant::now() > deadline { break; }
}