Stack memory with move semantics

Reading through the 'book', section 4.1 starts talking about move semantics and how they interact with memory on the stack vs. memory on the heap. Using the String type as an example, I noticed there is some minor overhead involved in how strings work, mostly concerning length and capacity.

So if I were to create a new string, I would have pushed a new String onto the stack with the following members: a pointer to the heap buffer (ptr), a length (len), and a capacity.

What this implies to me is that ptr, len, and capacity all live on the stack. So if I were to move that string into a new variable (a shallow copy), would the memory for ptr, len, and capacity still exist on the stack in the original variable and now also in the new variable? Basically, would I lose the ability to use that memory space in the original variable until it falls out of scope?
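The layout in question can be sketched with a small illustrative program (not from the thread itself):

```rust
use std::mem;

fn main() {
    let s = String::from("hello");

    // The String struct itself lives on the stack: ptr + len + capacity,
    // i.e. three usize-sized fields (24 bytes on a 64-bit target).
    assert_eq!(mem::size_of::<String>(), 3 * mem::size_of::<usize>());

    // len and capacity are read straight from those stack fields;
    // the character data itself lives on the heap behind ptr.
    assert_eq!(s.len(), 5);
    assert!(s.capacity() >= 5);
}
```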

So if I were to pass this string into a function with the intent of moving it (not borrowing) and never using it again, would it still exist in the original function but not be accessible?
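The scenario in question could look like this (`consume` is a made-up helper for illustration):

```rust
// Hypothetical helper that takes ownership of (moves) the String.
fn consume(s: String) -> usize {
    s.len()
    // `s` (and its heap buffer) is dropped here, when consume returns.
}

fn main() {
    let msg = String::from("hello world");
    let n = consume(msg); // `msg` is moved into the function
    assert_eq!(n, 11);

    // `msg` can no longer be used: its 24-byte stack slot in main still
    // exists until main returns, but the binding is statically unusable.
    // println!("{}", msg); // error[E0382]: borrow of moved value: `msg`
}
```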

I'm just interested because I haven't seen anyone ask this, but I don't want to blow up my available stack unknowingly because I didn't understand what is going on behind the scenes.

Let's not consider optimizations at the moment.

Rust currently allocates stack space at the beginning of a function call and deallocates it at the end of the call. So no (ignoring optimizations), you won't be able to reuse stack memory manually. To reuse the variable's memory, simply reuse the variable. That said, you most likely won't blow up the stack from 24 bytes of data (the size of a String).

When you move a value from one variable to another, you completely lose access to the old variable binding until you assign a new value to it. You will never get the value back, but the memory is still allocated on the stack and won't be freed until the function call ends. There are only two exceptions to this rule. First, if your type implements Copy, then you can always reuse the old variable binding. Note: moving a Copy type generates the same code before optimizations as moving a non-Copy type. The second exception is &mut T, which is special-cased to reborrow instead of move. This is to make the following code work:

fn use_it(_s: &mut String) {} // any function taking &mut String

let mut x = "Hello World".to_string();
let mut_x = &mut x;

use_it(mut_x);
use_it(mut_x);


desugars to

use_it(&mut *mut_x);
use_it(&mut *mut_x);
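The Copy exception mentioned above can be illustrated with a small sketch (illustrative, not from the thread):

```rust
fn main() {
    // i32 implements Copy, so "moving" it just copies the bits and the
    // old binding stays usable:
    let a: i32 = 7;
    let b = a;
    assert_eq!(a + b, 14);

    // String does not implement Copy; after `let d = c;` the binding
    // `c` is statically unusable (uncommenting fails to compile):
    let c = String::from("hi");
    let d = c;
    // assert_eq!(c, d); // error[E0382]: borrow of moved value: `c`
    assert_eq!(d, "hi");
}
```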

Another thing to note about debug mode: the memory allocated for a function depends on the number of variables you have, and is the sum of the sizes of the types of all variables. When you move a value between variables, the value is literally copied from one stack location to another. (You can see this on the playground if you ask it to show you the asm.)
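One consequence worth spelling out: the move copies only the stack-resident (ptr, len, capacity) triple; the heap buffer is untouched. A quick sketch to check that (illustrative, not from the thread):

```rust
fn main() {
    let a = String::from("moved");
    let heap_before = a.as_ptr(); // pointer to the heap buffer

    let b = a; // move: only the stack triple is copied

    // The heap buffer is neither copied nor reallocated by a move:
    assert_eq!(heap_before, b.as_ptr());
}
```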

If you want to see the asm of your project, you can use this cargo command

cargo rustc -- --emit asm
cargo rustc --release -- --emit asm

and the asm will be in target/debug/deps/crate_name-*.s and target/release/deps/crate_name-*.s respectively (where crate_name is the name of your crate)


Edit because of oversights mentioned in @Riateche's reply

You can assign a new value to a variable after moving out the previous value:

let mut a = some_string;
let b = a;
a = some_other_string;

Unoptimized codegen is pretty bad in terms of stack frame usage, register allocation, and pretty much anything else efficiency-related (unsurprisingly). If you take code like the following:

fn foo() {
    let mut s = "hello".to_string();
    s = "world".to_string();
    s = "again".to_string();
    s = "and again".to_string();
}

and vary the number of reassignments, you'll see the compiler (in Debug builds) varying the call frame size naively. For example, the above reserves 120 bytes of stack space:

	subq	$120, %rsp

If you remove the last assignment, you get:

	subq	$88, %rsp

You can observe a similar thing even with Copy types:

fn foo() {
    let mut x = [1; 8];
    x = [2; 8];
    x = [3; 8];
    // vary how many statements like that, and stack size grows
    // accordingly
}
What's interesting though is if you use scalar values or tuples, the stack slots are reused even in debug mode.


Thanks for the information everyone. Pretty much exactly what I anticipated, but I didn't want to just guess. It looks like I need to remain conscious of how I'm using types after binding them more than I initially thought.

Your answer is particularly interesting to me, and not what I would have guessed. I have to assume the additional memory use is necessary for debugging purposes, though.

One of the hard things for me to get my head around at the moment is how Rust seems to disambiguate a pointer from the struct itself. I worked with C/C++ for a little while, and those languages leave pointers more malleable, IMO. I'm going to have to do a little more homework to fully understand what you laid out, even in its simplicity. I mention pointers specifically because I feel the use of the & operator and the * operator leaves a little confusion, given how borrowing, variable ownership, dereferencing, and such things work.

I admit I really need to work on learning assembly. Just not quite there yet.

Debug (unoptimized) builds are generally like that for, primarily, the following reasons:

  • faster compilation time (optimization passes don’t run)
  • better debuggability, yeah

So stack usage will be higher because the compiler mostly translates your source code verbatim - that includes naive stack slot usage and a lack of function inlining, which also contributes to more stack use.

In general, Rust debug builds are very slow at runtime, produce bloated binaries, etc. - all the “zero cost abstractions” stuff is essentially missing 🙂.

What do you mean exactly?

It's not a big deal, vitalyd, and it's off topic from my original question. It seems that the two operators I mentioned, & and *, are exactly like C/C++, but the book seems to be trying to separate, in mindset, how they should be viewed. Am I wrong about that?

Granted I really haven't read anything about lifetimes yet. You don't need to answer my question. Just reading through the book and figuring things out.

It's nice to get good responses, though. I really appreciate it.

References (or borrows) in Rust can be looked at as “managed” (or checked, verified, what have you) pointers. By managed, I mean they come with some rules, and compiler enforces them. The rules are there to avoid memory unsafety, at compile time. At the codegen level, they’re no different than pointers/references in C++. When people ascribe more semantics to Rust’s references, as compared to C++, they’re talking about those (checked) rules they come with. Lifetime parameters are a way to relate different references in an abstract manner that the compiler can then use to verify (they’re like generic type parameters in a lot of ways).
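The lifetime-parameter idea can be sketched with a made-up function, `longer` (an illustrative example, not from the thread):

```rust
// The lifetime parameter 'a relates the inputs and the output: it tells
// the compiler the returned reference lives no longer than either borrow.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("a longer string");
    let b = String::from("short");

    // At the codegen level these are plain pointers; the compiler just
    // verifies the borrowing rules at compile time.
    let result = longer(&a, &b);
    assert_eq!(result, "a longer string");
}
```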

Go through the book - it ought to help paint a fuller picture. And don’t be shy about posting questions to this forum.


Rust references are not the same as C/C++ pointers. I find it useful to think of references as shared-read-locks (&) and unsharable-write-locks (&mut). If you do want to think of them as pointers, think of them as virtual pointers that the compiler will often optimize away (except in debug mode).
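The lock analogy can be seen in a small sketch (illustrative, not from the thread):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Any number of shared "read locks" (&) may coexist:
    let r1 = &v;
    let r2 = &v;
    assert_eq!(r1.len() + r2.len(), 6);

    // Once the shared borrows are done, a single exclusive "write lock"
    // (&mut) is allowed at a time:
    let w = &mut v;
    w.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);

    // Using r1 again here, across the &mut borrow, would be rejected at
    // compile time - the borrow checker enforcing the lock discipline.
}
```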


One of the things that C++ people need to unlearn is that "references pretend to be values". Generally that's not true in Rust - references are separate types, and you need the * and &.
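A minimal sketch of the point (`square` is a made-up function for illustration):

```rust
// A function expecting an owned i32, not a reference.
fn square(n: i32) -> i32 {
    n * n
}

fn main() {
    let x = 5;
    let r = &x; // r has type &i32 - a distinct type from i32

    // square(r); // error[E0308]: expected `i32`, found `&i32`
    assert_eq!(square(*r), 25); // explicit dereference required
    assert_eq!(*r, 5);          // comparing the pointed-to value also uses *
}
```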


This was probably more apparent in early versions of Rust, when there weren't so many predefined coercions, thus requiring more explicit uses of * and &.
[Disclaimer] That was before I began to study Rust.
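Deref coercion is one such predefined coercion; a small sketch (`shout` is a hypothetical function):

```rust
// A function that borrows a string slice.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned = String::from("hello");

    // Today, deref coercion turns &String into &str automatically:
    assert_eq!(shout(&owned), "HELLO");

    // Without that coercion you would have to be explicit:
    assert_eq!(shout(owned.as_str()), "HELLO");
    assert_eq!(shout(&owned[..]), "HELLO");
}
```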


@vitalyd, could you please explain a bit what is going on here? I see only one local variable declared in both cases, so I am quite surprised to see the stack space allocation depend on the number of assignments.

What would happen if assignments were executed in a loop?

Please don't revive year-old threads; you can link to old threads from a new thread if you need to.

What's happening is that all the temporaries (each .to_string() result) are allocated as individual stack variables. So we have 5 Strings, not 1.

Thanks for the explanation.

Can I read these forum rules somewhere, in particular regarding reviving old threads? I only found FAQ - Keep It Tidy and the "Code of conduct". I thought that as long as a comment is on topic, I could post it.

It's not really a rule, but when you revive old threads you also notify everyone who responded to the thread (like me in this case). And since it's cheap to create a new thread and avoid that, it's polite to do so. Also, creating a new thread helps others, because each thread ends up more focused. This means people searching the history for help will be more likely to find it.

This is also why threads are automatically closed after three months. Unfortunately this hasn't been applied to very old threads that existed before that timer was introduced.