Why aren't some basic traits automatically derived?

Traits such as Clone and Copy seem like they should always be derived, or at least derived by default. Is there a use case where this isn't true?

I have some structs that could implement Copy, but I've chosen not to derive it in case I later decide to include a field that isn't Copy. I even have a couple that don't derive Clone for the same reason.

4 Likes
  • It may not be what you want. For example, Vec contains a pointer, a length and a capacity, all of which are Copy, but Vec shouldn't be Copy and its Clone implementation has to contain some extra logic (see the sketch after this list).
  • It may silently break compatibility if you change the internal representation of your type in a way that makes it no longer Copy/Clone (e.g. you add an Rc<T> to something that was previously Copy).
  • It adds more code to check and compile, which in turn increases compile times.
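
To make the first point concrete, here is a minimal sketch with a made-up one-element owning type (a tiny stand-in for Vec): the only field is Copy, but the struct as a whole must not be, and Clone has to do real work.

struct MyBox {
    ptr: *mut i32, // the raw pointer itself is Copy
}

impl MyBox {
    fn new(v: i32) -> Self {
        MyBox { ptr: Box::into_raw(Box::new(v)) }
    }
}

impl Clone for MyBox {
    fn clone(&self) -> Self {
        // The "extra logic": duplicate the allocation, not the pointer.
        MyBox::new(unsafe { *self.ptr })
    }
}

impl Drop for MyBox {
    fn drop(&mut self) {
        // If MyBox were bitwise-Copy, two copies would both reach this
        // point and free the same allocation twice. (This is also why
        // the compiler rejects Copy on any type that impls Drop.)
        unsafe { drop(Box::from_raw(self.ptr)) };
    }
}

fn main() {
    let a = MyBox::new(1);
    let b = a.clone(); // separate allocation, so dropping both is fine
    assert_ne!(a.ptr, b.ptr);
}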
14 Likes

Mostly rephrasing the same points...

If Copy were an auto-trait, adding a private field that makes the type non-Copy would become a (SemVer) breaking change (see the sketch at the end of this post).

If Clone were an auto-trait, the programmer loses control over what exactly it means (consider Rc), and additionally loses the ability to not supply that capability (e.g. for future-proofing, like the Copy example, or for other reasons).

Copy in particular would be horrible from a breaking change perspective because of how easy it is to implicitly copy Copy types. I.e. the dependency on Copy is likely to spread quickly and deeply, making it impossible to rectify without a major revision even if you don't really care about SemVer. I also wouldn't want my GiantBlob([u8; 100_000_000]) to be Copy; implicitly copying that would be a giant performance hazard.
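
A sketch of that hazard, with a made-up Meters type (the derive stands in for the imagined automatic impl):

// v1.0 of a hypothetical `units` crate:
#[derive(Clone, Copy)] // stands in for the automatic impl
pub struct Meters(pub u32);

// Downstream code, silently relying on the implicit copy:
pub fn takes(m: Meters) -> u32 {
    m.0
}

fn main() {
    let m = Meters(5);
    takes(m);
    takes(m); // compiles only while Meters is Copy
}

// v1.1: adding a private, non-Copy field would silently remove the
// automatic impl and break every caller like `main` above:
// pub struct Meters(pub u32, String);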

3 Likes

There's also the fact that derived Copy and Clone use direct bounds on generic type parameters, rather than figuring out structural constraints like auto-traits do. For example, &T can be copied regardless of T, but if you #[derive(Copy)] on a struct containing such a reference, it will require T: Copy.

https://github.com/rust-lang/rust/issues/26925
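
A minimal sketch of the difference (all type names here are made up):

// The derive adds a `T: Copy` bound, even though `&T` is Copy for any T:
#[derive(Clone, Copy)]
struct Derived<'a, T>(&'a T);

// A manual impl can state the structural truth instead:
struct Manual<'a, T>(&'a T);

impl<T> Clone for Manual<'_, T> {
    fn clone(&self) -> Self {
        *self
    }
}

impl<T> Copy for Manual<'_, T> {}

struct NotCopy; // implements neither Clone nor Copy

fn main() {
    let x = NotCopy;

    let m = Manual(&x);
    let (_a, _b) = (m, m); // fine: the reference is always Copy

    let d = Derived(&x);
    let _c = d; // this move is the *last* allowed use of `d`...
    // let _d = d; // ...because `Derived<'_, NotCopy>` is not Copy:
    //             // the derive demanded `NotCopy: Copy`
}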

3 Likes

Also there is a nice pattern that can be used with non-Copy zero-sized types:

pub struct FunctionAFinished(std::marker::PhantomData<()>);

pub fn a() -> FunctionAFinished {
    /* ...do the actual work of `a`... */
    FunctionAFinished(std::marker::PhantomData)
}

/// Users of this library are forced to call `a` before `b`
pub fn b(_: FunctionAFinished) { /* ... */ }
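
Used like this (a sketch; the unforgeability relies on the PhantomData field being private to the defining crate):

fn main() {
    let proof = a(); // the only way to mint a FunctionAFinished
    b(proof);

    // From outside the defining crate this would not compile,
    // because the field is private:
    // b(FunctionAFinished(std::marker::PhantomData));
}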

It's not much, but I like it.

1 Like

Why put () in a PhantomData? It's already zero-sized...

Historical interest: It was once the case (before Rust 1.0) that Copy was automatically implemented (it was basically an auto trait, like Send and Sync are now, except that auto traits didn't exist yet). If you wanted to suppress that default behavior, you had to include a std::marker::NoCopy field. I can't find any original discussions because all the good links have rotted away,¹ but the reasons for changing it at the time were pretty much what people have said in this thread.

¹ RFC #19 would be the place to start for someone interested in learning more.

1 Like

To me that looks like an orthogonal problem, since the derives could have been implemented to use the auto-trait behavior:

#![feature(trivial_bounds)]

#[derive(smarter::Copy, smarter::Clone)]
struct MyPtr<T>(*const T);
// would expand to:

impl<T> Copy for MyPtr<T>
where
    *const T: Copy,
{}
// etc.
  • without trivial_bounds:
    #[derive(Clone, Copy)]
    struct __MyPtrGenericFields<_0>(_0);
    type MyPtr<T> = __MyPtrGenericFields<*const T>;
    

Maybe it's not directly related, but it raises similar concerns: whether structural fields should influence trait implementations, and what that means for the public API.

While we are at it: C++ manages to call copy constructors automatically upon pass-by-value, so why can't Rust do it too? Why do we have to bother with the oh-so-inconvenient move semantics and explicit clone()s? Why aren't the developers of Rust smart enough to recognize that this would be so much more convenient?


As it turns out, the answer is the same, every single time. If something nontrivial happens, be it at runtime (such as allocating memory) or at compile time (such as awarding a type some sort of capability by means of impling a trait), that nontrivial action should be visible to the programmer. If it's dangerous, it should also be noisy and scary; if it's safe but nontrivial, it should still be visible.

A good programming language is not one that lets you write the least possible amount of code. A good programming language is one that shows the intent to the human reader. One that actively guides his/her mental model towards the actual, real, honest-to-God thing that the compiler sees and the code it generates.

People are spectacularly bad at non-local reasoning. The only thing they are worse at is keeping huge amounts of small, invisible details in their head. The tools that we could use as crutches, such as full-fledged on-the-fly type checkers, are immature at best and come with their own learning curve (ever tried setting up RLS for an IDE?), therefore most of us will fall back to simply (rip)grepping for raw text. But grepping for an empty string is not particularly productive, to put it mildly.

Consequently, it is usually a good thing when the programmer needs to spell out his/her intent, because it will be easier on future readers' eyes and ripgreps. Of course, there are legitimate exceptions to this rule. When programmers tend to forget certain things, it's better to automate them away. (I meant the things being forgotten, not the programmers.) For example, people regularly forget to free dynamically-allocated memory and to close file handles. Or they just do it at the wrong time. Hence, Rust has implicit destruction, a.k.a. RAII, which is a completely different corner of language design, but the goal is the same: make the code correct and safe.
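
A minimal sketch of that kind of automation (TempFile is a made-up type): the cleanup is tied to scope, so it cannot be forgotten or run at the wrong time.

use std::fs::{self, File};
use std::path::PathBuf;

struct TempFile {
    path: PathBuf,
}

impl TempFile {
    fn create(path: PathBuf) -> std::io::Result<TempFile> {
        File::create(&path)?;
        Ok(TempFile { path })
    }
}

impl Drop for TempFile {
    fn drop(&mut self) {
        // Runs automatically on every exit path, including early returns
        // and panics; errors are deliberately ignored in this sketch.
        let _ = fs::remove_file(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let _tmp = TempFile::create(PathBuf::from("scratch.tmp"))?;
    // ...use the file...
    Ok(())
    // `_tmp` is dropped here and the file is removed, with no explicit
    // cleanup call to forget.
}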

Finally, as a concrete example, there are other "basic" (read: commonly-used std) traits that could be automatically derived but shouldn't. For example, Debug can also be inferred trivially from the structure of a type. However, what if I am writing a cryptography library and I don't want to expose secret keys, passwords, and other credentials? Should I remember to opt out of Debug every single time I add a new type? That would be insane. One day I would forget to opt out of just one teensy type, and someone would go ahead and debug-log passwords of millions of people on their server in plain text. Of course, this has totally never ever happened before, so I'm pulling a slippery slope argument here, and I should be ashamed of myself for how low I think of programmers.
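
And when a type must not leak, the opt-in model lets you write the redaction by hand; a sketch with a made-up SecretKey type:

use std::fmt;

struct SecretKey([u8; 32]);

// Deliberately *not* derived: the manual impl redacts the contents, so a
// stray `{:?}` in a log statement cannot print the key bytes.
impl fmt::Debug for SecretKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("SecretKey(<redacted>)")
    }
}

fn main() {
    let key = SecretKey([0u8; 32]);
    println!("{:?}", key); // prints: SecretKey(<redacted>)
}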


TL;DR: if we have learnt anything from the past couple of decades of programming language design, it is that there is no such thing as a good surprise in programming. So, let's design languages that eliminate the surprise factor instead of fueling it even further.

12 Likes

Huh, yeah, I don't even know why I thought it was necessary to use PhantomData; I guess struct FunctionAFinished(()); works as well :sweat_smile:
