Is the tailcall crate truly zero cost as advertised?

According to the docs, tailcall is zero cost. If this is the case, why is the become keyword taking so long to implement? Why not just adopt the trampoline implementation of tailcall, since it is zero cost apparently. What are the downsides?

I asked bing chat this question, and I got the following response.

As for why the language designers did not make the tailcall crate functionality part of the language, it is likely because the become keyword is a more general solution that can be used for more than just tail recursion. The become keyword is intended to be a more flexible and powerful feature that can be used to transform the current function into another function, which can be useful in a variety of contexts. However, it is important to note that the tailcall crate is still a viable option for performing tail recursion in Rust, especially for projects that do not require the full power of the become keyword.

I am wary of accepting AI generated responses so I want to ask the human experts here if they agree or have anything to add?


This crate only allows self-tailrecursion by turning it into a loop, while become will support tailcalling other functions too, which can't be turned into a loop like self-tailrecursion.
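To illustrate the difference, here is a sketch in plain Rust (using neither the crate nor `become`): the classic mutually recursive `is_even`/`is_odd` pair cannot be collapsed into a loop over a single function, so the tailcall crate's transformation does not apply, while `become` would guarantee each call reuses the caller's stack frame.

```rust
// Mutual (indirect) recursion: each function tail-calls the other.
// Plain Rust today gives no guarantee these calls reuse the stack
// frame, so a large enough `n` can overflow the stack. The proposed
// `become` keyword would guarantee frame reuse; the tailcall crate
// cannot help here, because the pair is not self-recursion.
fn is_even(n: u64) -> bool {
    if n == 0 { true } else { is_odd(n - 1) }
}

fn is_odd(n: u64) -> bool {
    if n == 0 { false } else { is_even(n - 1) }
}

fn main() {
    assert!(is_even(10));
    assert!(is_odd(7));
}
```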


It took me a while to make sense of the OP. Then it slowly dawned on me that it meant "become" was proposed to become an actual keyword in Rust.

Please, there must be a better choice of keyword than "become". It's so awful.

But... without it, we'll never get:

become more_powerful_than_you_can_possibly_imagine();

But seriously, Rust was the first time I saw the keyword, and I love it. It directly suggests what's happening: the top stack frame is "becoming" a different function.


Great. Sadly it does not immediately suggest anything to me. I would never have guessed it was anything to do with tail call optimisation.

I might need a couple of stack/memory layout diagrams to fix that idea in my mind.

My first thought is that it is yet another way for a structured high level language to sneak a goto in by a different name :slight_smile:

Edit: Turns out "go" is offered as a synonym for "become" in the Merriam-Webster Thesaurus. So my statement above is not even a joke.

It would be cool to use "go" instead of "become", just to annoy the Go people :slight_smile:


Efficient tail calls really are jumps to the top of the function without creating a new stack frame. But they don't circumvent structured control flow at all, like a goto, the behavior is just simple recursion. The behavior is the same as if a new stack frame were created, just without the danger of stack overflow. They allow safe recursion in that sense.
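As a sketch of what that means in practice: a self-tail-call can be rewritten mechanically as a jump back to the top of the function, i.e. a loop. This is illustrative plain Rust, not the crate's actual generated code:

```rust
// Tail-recursive gcd: the recursive call is the last thing the
// function does, so no work remains in the current frame.
fn gcd_rec(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd_rec(b, a % b) }
}

// The mechanical loop form: rebind the parameters and jump back to
// the top. This is what "becoming" the next call amounts to for
// self-recursion, and roughly what the tailcall crate generates.
fn gcd_loop(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let r = a % b;
        a = b;
        b = r;
    }
    a
}

fn main() {
    assert_eq!(gcd_rec(48, 18), 6);
    assert_eq!(gcd_loop(48, 18), 6);
}
```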


So do we agree that the tailcall crate is truly zero cost? The other part of the question (the reason it has not already been adopted) seems to be confirmed.

tailcall had better be a net benefit, otherwise it has no reason to exist.

The best you can do by hand is write a loop, so it makes sense that a generated loop would be about the same. But the only way to confirm that is with benchmarks. There are some mentioned in the crate description:


There are a few benchmarks available; currently the benchmarks demonstrate that for single-function tail-recursion, performance is the same as using a loop. Mutual recursion runs, but suffers penalties.

It is not zero cost for mutual recursion, but if you stick to simple recursion (a function calling itself directly) it seems that it is.

EDIT: Another thing worth mentioning is that iterators are highly optimized in Rust and apparently are sometimes faster than loops. Plus they are very convenient and concise. So I would prefer using an iterator when it would work instead of recursion.
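To make the comparison concrete, here is an illustrative example (my own, not from the crate's benchmarks) of the same computation as an explicit loop and as an iterator chain; both typically compile to equivalent machine code:

```rust
// Explicit loop: mutable accumulator, manual iteration.
fn sum_of_squares_loop(n: u64) -> u64 {
    let mut total = 0;
    for i in 1..=n {
        total += i * i;
    }
    total
}

// Iterator chain: same computation, more concise, and the
// optimizer generally produces code at least as good.
fn sum_of_squares_iter(n: u64) -> u64 {
    (1..=n).map(|i| i * i).sum()
}

fn main() {
    assert_eq!(sum_of_squares_loop(10), 385);
    assert_eq!(sum_of_squares_iter(10), 385);
}
```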


Ah yes. I was thinking in terms of: If you really want to write a recursive function, then getting rid of stack use and replacing the function call with a jump must reduce the size of the code, reduce the stack usage and boost performance. Likely reduce cache pressure as well. Therefore tail call/become had better be better than zero cost else they are of no value at all.

My question is: why do we need this? Because:

  1. As far as I know any recursive algorithm can be written as a loop.
  2. It's neat to write recursive parsers and such, but is the call overhead really an issue there?

So what is the use case/justification for complecting the language with this?

If we must have it, can we think of something better than "become"? Rust has adopted nice short keywords like fn, struct, const. I feel become blows that out of the water in an ugly way.

Perhaps just be?

It's not the call overhead that is an issue, it's the danger of stack overflow.


Yeah, I forgot to say: How often is stack overflow really a problem?

I worry about it on memory constrained embedded systems, where recursion is generally outlawed, but have never had an issue with it otherwise.

Can a recursive descent parser actually make use of tail call optimisation? Surely you need to stack up a lot of stuff as you go.

The problem is you don't ever really know for sure, or it is very difficult to know for sure. Because of this I don't use recursion in Rust; I don't want to spend energy figuring out whether it is safe or not. I think it would be valuable to be able to use recursion without the danger. It is not critical because loops and iterators work very well in Rust, but it is still an ergonomic improvement.


Anyone have an example of when it is an ergonomic improvement?

The canonical Fibonacci example, as shown in the tailcall crate's documentation, is more simply written and easier to understand as a simple loop.
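For comparison, here is a sketch in plain Rust (not the crate's own example code): the accumulator-passing tail-recursive form and the loop form have nearly the same shape, which is why the loop wins on simplicity here.

```rust
// Tail-recursive Fibonacci with accumulators; the recursive call is
// in tail position, so it could in principle reuse the stack frame.
fn fib_rec(n: u64, a: u64, b: u64) -> u64 {
    if n == 0 { a } else { fib_rec(n - 1, b, a + b) }
}

// The equivalent loop, which most Rust programmers would write today:
// same two accumulators, no recursion needed.
fn fib_loop(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    assert_eq!(fib_rec(10, 0, 1), 55);
    assert_eq!(fib_loop(10), 55);
}
```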

One example is finite state machines with enough complexity in the state transitions that it becomes difficult to encode the transitions as a data-driven algorithm.

This Lua example of a game communicates the idea pretty well, along with the final note:

For this simple game, you may find that a data-driven program, where you describe the rooms and movements with tables, is a better design. However, if the game has several special situations in each room, then this state-machine design is quite appropriate.

The calls to room1(), room2(), etc, are only possible as state transitions that avoid stack overflow because of the tail call optimization.

This is indirect or mutual recursion, which is not handled well by the tailcall crate but would be handled by the become proposal, as @bjorn3 said.

I think this technique is somewhat difficult to use in Lua (unless I'm missing something) because you have to be sure that the function calls are in tail call position to know that stack overflow will not occur. Whereas with the become proposal it is explicit and guaranteed to avoid stack overflow (or it won't compile).
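A hedged sketch of how one might encode such a function-per-state machine in today's Rust without become: each handler returns the next state and a driver loop dispatches (a trampoline). The game itself is hypothetical and reduced to a visit counter so it terminates; the point is that stack depth stays constant without guaranteed tail calls.

```rust
// Hypothetical two-room "game" reduced to a visit counter.
// Each state handler does its work and returns the next state; the
// driver loop stands in for the guaranteed tail calls that `become`
// (or Lua's tail calls) would provide, keeping stack depth constant.
#[derive(Debug, PartialEq)]
enum State {
    Room1(u32),
    Room2(u32),
    Done(u32),
}

fn room1(visits: u32) -> State {
    if visits >= 3 { State::Done(visits) } else { State::Room2(visits + 1) }
}

fn room2(visits: u32) -> State {
    State::Room1(visits)
}

// The trampoline: bounces between states without growing the stack.
fn run(mut state: State) -> u32 {
    loop {
        state = match state {
            State::Room1(v) => room1(v),
            State::Room2(v) => room2(v),
            State::Done(v) => return v,
        };
    }
}

fn main() {
    assert_eq!(run(State::Room1(0)), 3);
}
```

The cost of this pattern, compared with direct tail calls, is the extra enum and the central dispatch match, which is exactly the kind of boilerplate become would remove.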


See this article about parsing protobuf using musttail.


For me, if it is a simple case of mapping some function to a collection, then I would not reach for recursion. But if I had a more complicated stopping condition, when I don't know in advance how many times I want to go round, and/or I want to build up some kind of state, then usually I prefer recursion. I find it easier to see at a glance what is going on and to reason about behavior.
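As an illustration of that preference (my own example, not from the thread): counting steps in the Collatz sequence has a data-dependent stopping condition and state threaded through the computation, which reads naturally as recursion.

```rust
// Steps to reach 1 in the Collatz sequence, written recursively: the
// stopping condition and the accumulated state (steps so far) are
// explicit in the signature. Note that without guaranteed tail calls,
// a very long sequence could still overflow the stack.
fn collatz_steps(n: u64, steps: u64) -> u64 {
    if n == 1 {
        steps
    } else if n % 2 == 0 {
        collatz_steps(n / 2, steps + 1)
    } else {
        collatz_steps(3 * n + 1, steps + 1)
    }
}

fn main() {
    // 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: eight steps.
    assert_eq!(collatz_steps(6, 0), 8);
}
```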


Enough of a problem for crates like stacker to exist. I have certainly seen it used in parsing libraries.


Any time the input is not bounded, basically.

The thing with Lambda: The Ultimate Goto and the function call technique described therein is that it makes hard guarantees about the amount of stack space used by function calls from tail position, in the general case. That is a fundamental change to the semantics and operational behaviour of function calls (in tail position, at least); it's not merely an optimization or an ergonomic improvement, but rather a fact about the language that programmers may rely on. In languages that don't promise this, programmers must assume that function calls always allocate at least some resources for the activation record, and plan for it. (Whether "the program will crash on large recursive inputs" constitutes a plan depends on the programmer's goals.)

The main reason I'm aware of that this technique is not used more often is that it also eliminates information from the running program that developers may want to rely on. For example, it can make debugging harder: after a function call from tail position, there's no stack frame for the caller to include in a stack trace, and stepping into a function call in a debugger may not guarantee that you can step back to the call site again afterwards, either. It also increases the conceptual footprint of the language, because programmers need to be aware of what "tail position" is and which calls are or are not in it; it's not always obvious.


This is a good summary of the situation. There are some programming paradigms where tail calls make the model much clearer, such as large state machines. However, as you mention, the tradeoff of making debugging more difficult means this is probably not worth doing for a language where this type of programming is not dominant. In a language like OCaml, where recursion is the primary mechanism for iteration, tail call elimination is essential. I think Rust makes a good compromise here by not performing the optimization, but considering a become operator that would allow it to be done explicitly when needed.