How to avoid OOM when reading untrusted sources

Consider a format that stores a binary field as [len in 4 bytes little endian][bytes]. Given a reader: std::io::Read (e.g. a file).

What is the correct way of reading this format without triggering OOM (e.g. from untrusted sources)?

This is what I have so far:

use std::convert::TryInto;
use std::io::{Error, ErrorKind, Read};

fn read_a<R: Read>(reader: &mut R) -> Result<Vec<u8>, std::io::Error> {
    // Read the 4-byte little-endian length prefix.
    let mut len = [0u8; 4];
    reader.read_exact(&mut len)?;
    let len: usize = u32::from_le_bytes(len)
        .try_into()
        .map_err(|_| Error::from(ErrorKind::OutOfMemory))?;
    // Fallibly reserve space instead of aborting on allocation failure.
    let mut value = vec![];
    value
        .try_reserve(len)
        .map_err(|_| Error::from(ErrorKind::OutOfMemory))?;
    // `take` caps how much we read even if `len` is a lie.
    reader.take(len as u64).read_to_end(&mut value)?;
    Ok(value)
}

is it sufficient?


What about just comparing len against a "sane" threshold? For example, if I'm requesting an HTML file and the other end sends back more than 100MB of data, you can be pretty sure something funny is going on.
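As a sketch of that idea (the function name `read_bounded` and the 100 MiB cap are illustrative, not from any spec):

```rust
use std::io::{Error, ErrorKind, Read};

// Illustrative "sane" threshold; pick whatever fits your format.
const MAX_LEN: u32 = 100 * 1024 * 1024;

fn read_bounded<R: Read>(reader: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len = [0u8; 4];
    reader.read_exact(&mut len)?;
    let len = u32::from_le_bytes(len);
    if len > MAX_LEN {
        // Reject before allocating anything based on the untrusted length.
        return Err(Error::new(ErrorKind::InvalidData, "declared length exceeds limit"));
    }
    let mut value = Vec::with_capacity(len as usize);
    reader.take(len as u64).read_to_end(&mut value)?;
    Ok(value)
}
```

Since `len` is checked first, `with_capacity` can never allocate more than the cap.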

As written, your code will prevent an OOM due to the Vec being too big, but keep in mind that it doesn't stop later allocations from triggering an OOM (e.g. imagine having some tree of objects you derive from the bytes).


Reserving space in the vector is a performance optimization. It doesn't affect correctness if you reserve too little space. I would cap the reservation at some reasonable size to prevent overly large pre-allocations.

use std::cmp::min;

// Some reasonable cap on the up-front allocation, e.g.:
const MAX_RESERVATION: usize = 16 * 1024;

value.reserve(min(len, MAX_RESERVATION));
reader.take(len as u64).read_to_end(&mut value)?;

If the length claims to be 2GB it's likely garbage. The reader will probably run out of data long before the 2GB mark. But if there is in fact 2GB data, no problem. The only cost is a couple of extra reallocations.

I used this exact strategy in some file parsing code I wrote to fix a panic uncovered by fuzz testing.


You can't really rely on try_reserve failing even when you don't have the memory, because of memory overcommit.


In case you aren't familiar with Linux's memory overcommit, here's one explanation:

If you don't support Linux, then I suppose it isn't relevant.


The background of this post is that someone reached out to me privately to disclose a vulnerability in one of the libraries I maintain: a specially crafted file can trigger a panic when the file is read, and I simply do not know how to fix it. Panics over untrusted data are often considered vulnerabilities (e.g. DoS).

This emerges when deserializing formats such as flatbuffers, protobufs, thrift, etc. Essentially, I can craft a flatbuffers file that always triggers a panic and/or OOM in the Rust flatbuffers implementation, and a thrift file that always triggers a panic and/or OOM in the Rust thrift implementation.

I am trying to understand the correct mechanism to address these issues, if one fundamentally exists.

Your suggestion is to use a configurable threshold whereby the whole file must be within a limit. That would work for this case. However, we still hit a similar issue when reading compressed data from the file (e.g. parts of a Parquet file are compressed): there is no guarantee that the data won't decompress to a size that triggers an OOM (in other words, with Read we can't compute the file size; only with Seek and stream_len). So I can still craft a file that declares the length of the uncompressed data to be u32::MAX, so that it crashes whoever tries to read it. The consumer has no mechanism to know the maximum uncompressed size (only the compressed size).

In other words, I can't seem to get away from the panic!. On the other hand, this must have been solved already; web servers do not OOM that often. Is this what they do? Do they just set a maximum possible size for a compressed payload?

I agree. I was trying to avoid vec![0; len], which I understood to be worse because

  • we alloc_zeroed
  • (untested) we "touch" the memory and thus likely trigger an OOM (even if Read does not have len bytes)

But maybe the second point does not apply and the kernel will still only OOM when we try to write to the region?
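A quick stdlib-only check confirms the upside of the take approach: read_to_end grows the buffer only as bytes actually arrive, so a huge declared length alone does not cause a huge allocation (the helper name `demo_take` is just for illustration):

```rust
use std::io::Read;

/// The "attacker" claims u32::MAX bytes, but the reader holds only 3.
/// `read_to_end` allocates only for the bytes actually delivered.
fn demo_take() -> Vec<u8> {
    let data = [1u8, 2, 3];
    let mut out = Vec::new();
    (&data[..])
        .take(u32::MAX as u64) // huge limit, harmless on its own
        .read_to_end(&mut out)
        .unwrap();
    out
}
```

Contrast this with vec![0; len], which commits the full len up front before a single byte is read.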

Yes, putting a maximum size on the payload is a standard security measure against denial of service attacks.

Thanks for bringing up this discussion, as I've been encountering this bug in numerous Rust crates through fuzz testing. One example is from websockets-rs, where the maintainer graciously released a bugfix even though the crate is marked deprecated.

The root cause of the issue is CWE-789, which suggests pre-defined thresholds as (seemingly) the only mitigation. I used to think fallible allocations such as Vec::try_reserve would be sufficient, but a malicious user could easily probe what the memory limit is and then put heavy pressure on the remaining allocations, even ones of fixed, smaller size. Basically, if this is the solution for one OOM fix, it needs to be applied to every allocation too, which is neither ergonomic nor maintainable.
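For reference, the fallible-allocation approach looks like this (a sketch; as argued above, it is not sufficient on its own, and `alloc_guarded` is a hypothetical name):

```rust
use std::collections::TryReserveError;

/// Fallibly pre-allocate `len` bytes: returns Err instead of
/// aborting the process when the allocation cannot be satisfied.
fn alloc_guarded(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut v: Vec<u8> = Vec::new();
    v.try_reserve(len)?;
    Ok(v)
}
```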

Can you explain how you got from needing to guard a len field to needing to guard every allocation? I don't understand the leap.

Say your process will OOM if it allocates 100 more bytes. You guard one allocation, and the malicious user has figured out that a 90-byte allocation will succeed. Next, some string message needs to allocate 20 bytes for formatting logic. Now you get an OOM in an unexpected place.

This vulnerability is known as the zip bomb.

To be safe against it, your library must provide a mechanism for its caller to impose a size limit on the uncompressed data — “stop if you would allocate more than this much for the output”. In some cases you might be able to delegate this to the caller implicitly — for example, if your library provides a Read implementation instead of allocating, then the caller can stop reading whenever it likes. On the other hand, it may be ergonomic to provide an overall limit, if there are many small allocations.
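One way to impose such a limit, assuming the decompressor exposes a Read (as e.g. flate2's decoders do), is to read through take and treat hitting the cap as an error. A sketch, with a plain reader standing in for the decoder (`read_limited` is a hypothetical helper):

```rust
use std::io::{Error, ErrorKind, Read};

/// Read at most `limit` bytes from `reader` (which could be a streaming
/// decompressor); error out if the source would produce more than `limit`.
fn read_limited<R: Read>(reader: &mut R, limit: u64) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Take one extra byte so we can distinguish "exactly limit" from "over".
    reader.take(limit + 1).read_to_end(&mut buf)?;
    if buf.len() as u64 > limit {
        return Err(Error::new(ErrorKind::InvalidData, "output exceeds limit"));
    }
    Ok(buf)
}
```

The caller chooses the limit, which is exactly the mechanism described above: the library never allocates more than the caller said it may.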

When the file format doesn't involve compression, then putting a limit on the size of the input data is theoretically sufficient. But, watch out for those constant factors — e.g. if a certain structure of input file ends up creating many small allocations. For example, a JSON [{},{},{},{},...] deserialized to serde_json::Value would allocate many maps which are probably much larger than two bytes each.

Another option, applicable to some situations, is to ensure the processing of the data is streaming. That is, the service/library can handle arbitrarily large files, by processing them as delivered, so that the cost of large data (whether uncompressed or compressed) becomes a cost in processing time, not in maximum memory usage. Of course, you still need to consider whether an attacker can send many separate zip bombs and thus tie up large amounts of CPU — but you could imagine having some kind of quota mechanism to say “No more right now; we're still processing your previous files”. (Whether that makes sense to do is very application-specific.)

Streaming processing puts requirements on the file format being processed — it has to be provided in the order it can be processed.
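For instance, a streaming consumer with a fixed-size buffer keeps peak memory constant regardless of input size (a checksum is used here only as a stand-in for real per-chunk processing):

```rust
use std::io::Read;

/// Processes arbitrarily large input with a fixed 8 KiB buffer:
/// memory use is O(1) in the input size.
fn streaming_checksum<R: Read>(mut reader: R) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let mut sum: u64 = 0;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        for &b in &buf[..n] {
            sum = sum.wrapping_add(b as u64);
        }
    }
    Ok(sum)
}
```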

