Is there any Rust crate that does constant stack space serialization / deserialization of T <-> Vec<u8>?

Is there any crate that provides procedural macros for doing T <-> Vec<u8> but is guaranteed to only take constant stack space?

EDIT: modified title, in case it (incorrectly) implied using serde was required

For simple structs, you can use zerocopy.

What precisely do you mean by "convert a T into a Vec<u8>"? I'm guessing you are after a general serialization mechanism and we aren't allowed to use a Vec as a "manual" stack?

For the simple cases of a fixed-size transmute-friendly type you can probably use crates like zerocopy, but I imagine this is niche enough that it'll be hard to find a general purpose serialize/deserialize crate that doesn't use tools like recursion or alloca.

In particular, the general strategy used by proc macros of "implement some trait by recursively calling its method on each field" breaks down because recursion will generally use a dynamic amount of stack space.
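To make that concrete, here is a hand-written sketch of what a derive-style serializer conceptually expands to (the trait and type names are illustrative, not from any real crate). Each level of nesting costs one more stack frame:

```rust
trait Serialize {
    fn serialize(&self, out: &mut Vec<u8>);
}

impl Serialize for u8 {
    fn serialize(&self, out: &mut Vec<u8>) {
        out.push(*self);
    }
}

struct Inner {
    a: u8,
}

struct Outer {
    inner: Inner,
    b: u8,
}

// What a hypothetical #[derive(Serialize)] would conceptually generate:
impl Serialize for Inner {
    fn serialize(&self, out: &mut Vec<u8>) {
        self.a.serialize(out);
    }
}

impl Serialize for Outer {
    fn serialize(&self, out: &mut Vec<u8>) {
        self.inner.serialize(out); // one extra stack frame per nesting level
        self.b.serialize(out);
    }
}

fn main() {
    let mut out = Vec::new();
    Outer { inner: Inner { a: 1 }, b: 2 }.serialize(&mut out);
    assert_eq!(out, vec![1, 2]);
}
```

For statically-sized structs the nesting depth is fixed at compile time, so this is technically bounded; the real problem starts with recursive types, where depth depends on runtime data.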

Your proc macro could theoretically do a recursive-to-iterative transformation when working with an obvious recursive type (e.g. struct List<T> { item: T, next: Option<Box<List<T>>> }), but that's going to break down when you have mutually recursive types (e.g. struct First(Option<Box<Second>>) and struct Second(First)) because there is no way for the proc macro to "see" the recursion.
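For the single-type case, that recursive-to-iterative transformation might look something like this sketch (the `serialize_list` helper is illustrative):

```rust
struct List<T> {
    item: T,
    next: Option<Box<List<T>>>,
}

fn serialize_list(list: &List<u8>, out: &mut Vec<u8>) {
    // Walk the `next` chain with a loop instead of recursion, so stack
    // usage stays constant regardless of how long the list is.
    let mut node = Some(list);
    while let Some(n) = node {
        out.push(n.item);
        node = n.next.as_deref();
    }
}

fn main() {
    let list = List {
        item: 1,
        next: Some(Box::new(List { item: 2, next: None })),
    };
    let mut out = Vec::new();
    serialize_list(&list, &mut out);
    assert_eq!(out, vec![1, 2]);
}
```

This works because the macro can see the `Box<List<T>>` cycle within a single type definition; with mutual recursion across types, no single macro invocation has that view.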

I know safety-critical embedded systems tend to have an "all algorithms must have bounded stack usage" requirement, so maybe look into how they solve this sort of problem?


You might try to use checked const generics to enforce this:


```rust
#![feature(generic_const_exprs)] // nightly-only

trait Serialize<const QUOTA: usize> {
    fn push_bytes(&self, _: &mut Vec<u8>) {}
}

impl<const QUOTA: usize> Serialize<QUOTA> for u8 {}

struct MyStruct;

impl<const QUOTA: usize> Serialize<QUOTA> for MyStruct where u8: Serialize<{ QUOTA - 1 }> {}

fn main() {
    // Fails due to usize underflow
    <MyStruct as Serialize<0>>::push_bytes(&MyStruct, &mut vec![]);
}
```

I don't believe there have been any Rust facilities to enforce a bounded stack size created since your last thread asking about it.


My constraint comes from wasm32. Using a heap allocated Vec for a manual 'stack' is fine.

At the cost of some runtime performance, I think if the proc macro used a Vec<Box<dyn CanWriteInConstStackT>> and used the Vec to do a "dfs walk" of the struct in O(1) stack space, it might work.

This comment falsely implies that I am repeating an old question.

These are two different questions. In particular, the const-generics idea outlined above, despite being useless for the general case of statically bounding Rust stack space, might work for the problem in this post.

Please be more careful in the future.


If that's the case, what about doing something like this?

```rust
use std::borrow::Cow;

trait Serialize {
    fn serialize<'a>(&'a self, commands: &mut Vec<Command<'a>>);
}

enum Command<'a> {
    Serialize(&'a dyn Serialize),
    Write(Cow<'a, str>),
}

impl<'a> Command<'a> {
    fn write(text: impl Into<Cow<'a, str>>) -> Self {
        Command::Write(text.into())
    }
}

struct First(Second);

impl Serialize for First {
    fn serialize<'a>(&'a self, commands: &mut Vec<Command<'a>>) {
        // Pushed in reverse order because commands are popped LIFO.
        commands.push(Command::write(")"));
        commands.push(Command::Serialize(&self.0));
        commands.push(Command::write("First("));
    }
}

struct Second(Option<Box<First>>);

impl Serialize for Second {
    fn serialize<'a>(&'a self, commands: &mut Vec<Command<'a>>) {
        if let Some(child) = self.0.as_ref() {
            commands.push(Command::write(")"));
            commands.push(Command::Serialize(&**child));
            commands.push(Command::write("Second("));
        } else {
            commands.push(Command::write("Second"));
        }
    }
}

fn main() {
    let item = First(Second(Some(Box::new(First(Second(None))))));

    let mut buffer = String::new();
    let mut commands = vec![Command::Serialize(&item)];

    while let Some(command) = commands.pop() {
        match command {
            Command::Serialize(s) => s.serialize(&mut commands),
            Command::Write(s) => buffer.push_str(&s),
        }
    }

    println!("{}", buffer); // First(Second(First(Second)))
}
```

Something like that would be quite amenable to a proc macro, too.

To deserialize you could probably pull the same trick: create a Default value and use a similar VecDeque<&mut dyn Deserialize>, where the deserialize() method just updates an internal field or does *self = ....
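A rough sketch of what that deserialization side might look like, assuming Default-initialized values and a work queue instead of recursion (the trait and all type names here are illustrative):

```rust
use std::collections::VecDeque;

trait Deserialize {
    // Consume bytes for this node's own fields, and enqueue any
    // children that still need filling in rather than recursing.
    fn deserialize<'a>(
        &'a mut self,
        input: &mut dyn Iterator<Item = u8>,
        queue: &mut VecDeque<&'a mut dyn Deserialize>,
    );
}

#[derive(Default, Debug, PartialEq)]
struct Point {
    x: u8,
    y: u8,
}

#[derive(Default, Debug, PartialEq)]
struct Pair {
    a: Point,
    b: Point,
}

impl Deserialize for Point {
    fn deserialize<'a>(
        &'a mut self,
        input: &mut dyn Iterator<Item = u8>,
        _queue: &mut VecDeque<&'a mut dyn Deserialize>,
    ) {
        self.x = input.next().unwrap();
        self.y = input.next().unwrap();
    }
}

impl Deserialize for Pair {
    fn deserialize<'a>(
        &'a mut self,
        _input: &mut dyn Iterator<Item = u8>,
        queue: &mut VecDeque<&'a mut dyn Deserialize>,
    ) {
        // Defer the children; the driver loop fills them in later.
        queue.push_back(&mut self.a);
        queue.push_back(&mut self.b);
    }
}

fn main() {
    let mut input = [1u8, 2, 3, 4].into_iter();
    let mut value = Pair::default();

    let mut queue: VecDeque<&mut dyn Deserialize> = VecDeque::new();
    queue.push_back(&mut value);
    while let Some(node) = queue.pop_front() {
        node.deserialize(&mut input, &mut queue);
    }

    assert_eq!(value, Pair { a: Point { x: 1, y: 2 }, b: Point { x: 3, y: 4 } });
}
```

Note this walks the value breadth-first, so for formats where byte order must match a depth-first layout the queue discipline would need more care.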

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.