Note: I am very new to testing in Rust.
I want to test a FromStr impl I did on a custom type that consists of many nested enums. Here is the test I have so far:
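A sketch of such a test (the type definitions and parser below are assumptions for illustration, not the actual code):

```rust
use std::str::FromStr;

// Assumed stand-ins for the real nested enums in the question.
enum MoveCursor {
    Up(u32),
    Down(u32),
}

enum Operation {
    MoveCursor(MoveCursor),
}

impl FromStr for Operation {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let (op, arg) = s.split_once(' ').ok_or_else(|| format!("invalid input: {s}"))?;
        let n: u32 = arg.parse().map_err(|_| format!("invalid step size: {arg}"))?;
        match op {
            "up" => Ok(Operation::MoveCursor(MoveCursor::Up(n))),
            "down" => Ok(Operation::MoveCursor(MoveCursor::Down(n))),
            _ => Err(format!("unknown operation: {op}")),
        }
    }
}

#[test]
fn test_move_cursor_step_size() -> Result<(), String> {
    match "up 10".parse()? {
        Operation::MoveCursor(MoveCursor::Up(step_size)) => assert_eq!(step_size, 10),
        _ => assert!(false), // the part I don't like
    }
    Ok(())
}
```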
I don't really like the assert!(false). Ideally, I would write assert!(Operation::MoveCursor(MoveCursor::Up(10)) == "up 10".parse()?);, but that doesn't compile. Any suggestions on how to do what I want in a more idiomatic way (if there is one)? I think I could implement the PartialEq trait for my custom types, but that seems unnecessary just for testing.
Edit: sorry, I messed up the code originally, fixed now.
Edit: Solution: I ended up with this refactored solution:
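Roughly along these lines (a sketch with assumed type names and a stubbed parser, using the matches!-based approach discussed in the replies):

```rust
use std::str::FromStr;

// Assumed minimal stand-ins for the real types; note: no derives needed.
enum MoveCursor {
    Up(u32),
}

enum Operation {
    MoveCursor(MoveCursor),
}

impl FromStr for Operation {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.split_once(' ') {
            Some(("up", n)) => Ok(Operation::MoveCursor(MoveCursor::Up(
                n.parse().map_err(|_| format!("invalid step size: {n}"))?,
            ))),
            _ => Err(format!("unknown operation: {s}")),
        }
    }
}

#[test]
fn test_move_cursor_step_size() -> Result<(), String> {
    // `matches!` requires neither Debug nor PartialEq on the matched type.
    assert!(matches!(
        "up 10".parse::<Operation>()?,
        Operation::MoveCursor(MoveCursor::Up(10))
    ));
    Ok(())
}
```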
Should work. Maybe parse will complain that it can't infer the type; then you'd have to provide a type hint. Here's the link to assert_eq! in the standard library and the testing section from the Rust by Example book.
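For example, a sketch (the type names and parser here are assumptions, not the actual code; note that both types need Debug and PartialEq for assert_eq! to compile):

```rust
use std::str::FromStr;

// Assumed stand-ins for the real types; Debug is used to print the values
// on failure, PartialEq to compare them.
#[derive(Debug, PartialEq)]
enum MoveCursor {
    Up(u32),
}

#[derive(Debug, PartialEq)]
enum Operation {
    MoveCursor(MoveCursor),
}

impl FromStr for Operation {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.split_once(' ') {
            Some(("up", n)) => Ok(Operation::MoveCursor(MoveCursor::Up(
                n.parse().map_err(|_| format!("invalid step size: {n}"))?,
            ))),
            _ => Err(format!("unknown operation: {s}")),
        }
    }
}

#[test]
fn test_move_cursor_step_size() -> Result<(), String> {
    // Turbofish as the type hint, in case inference needs help:
    assert_eq!(
        "up 10".parse::<Operation>()?,
        Operation::MoveCursor(MoveCursor::Up(10))
    );
    Ok(())
}
```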
I tried that, but the compiler complains that Operation doesn't implement the Debug and PartialEq traits. Is it worth implementing these traits just for the test? I don't need them otherwise.
Wow, matches! is really cool! The only downside I see is that the error message will not be as pretty as when using something with assert_eq!(...), but that's the trade-off I would have to make if I don't want to derive traits (of course I could use #[cfg_attr(test, ...)], but that is also more work because of the nested nature of Operation). Thanks!
It's a single line of #[derive(...)] and doesn't add any runtime cost. So why not?
The API guidelines list traits that every type should implement where it makes sense. It's less important if you're writing a binary, but deriving them eagerly may reduce future hassle.
I thought I read somewhere that adding the Debug trait specifically can add a lot to the compiled binary size. Maybe someone else can confirm whether that's true; I'm still fairly new to Rust. That's one downside, but I could of course use cfg_attr if I wanted the trait.
All trait impls add to the compiled binary size. However, storage is cheap these days. It's certainly a good default trade-off to impl Debug for whatever you can, or else you will annoy your users (and yourself). Being able to print the debug representation of a value is basically not optional.
I think your reasoning applies more to writing a Rust library. There, I totally see the point of adding useful traits like Debug. However, I am writing a CLI tool, so other users will probably never need the Debug trait for some of my internal types, and I want to keep the binary size down if possible.
Right, each Debug impl may add hundreds of bytes to the binary size. Does it matter for your use case? Check your existing binary size and decide for yourself.
But for most cases, that just doesn't matter. Is there a specific reason why you can't afford a few more tens of kilobytes? If you are writing "a CLI tool", then you probably aren't running on an embedded or otherwise resource-constrained system (I may be wrong). Thus, shaving the Debug impls off a binary that's already several megabytes seems like premature optimization at best. Granted, the users may never need/see it, but

1. that isn't entirely true, because you may still want to include relevant values in a user-facing panic and/or stack trace, to make it easier for yourself to track down the reasons for a reported crash, and

2. as mentioned above, omitting such common impls will make your own life harder. For one, you won't know what exactly the wrong value was in the example above. How are you going to find out? This really isn't something worth "optimizing" at the expense of a massive debugging handicap.
If you're compiling a binary in release mode then unused code (like a Debug impl that's never called) should be optimized away. Deriving impls that you don't need will increase build time, but shouldn't affect the binary size.
Yeah, I totally understand your point. My CLI isn't intended to run on embedded devices. What I'm trying to say is that if nothing in my application requires the Debug trait besides the tests (I have other ways of propagating errors/panics to the user), then I think using #[cfg_attr(test, derive(Debug))] can be useful (for build times and binary size).
So my thought process is: if I don't need it, why use it?
The big caveat: I'm still quite new to Rust, so maybe my code isn't very idiomatic and should actually be using the Debug trait more. So, if you're curious, here's a piece of code where I have decided to use #[cfg_attr(test, derive(Debug))] over #[derive(Debug)]: watchbind.
Then, of course, I would be totally happy to derive for the added functionality.
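That gating could look like this (a sketch with assumed type names; the derives only exist when compiling in test mode):

```rust
// Debug and PartialEq are derived only for test builds (cargo test);
// regular builds never generate or compile these impls.
#[cfg_attr(test, derive(Debug, PartialEq))]
enum MoveCursor {
    Up(u32),
}

#[cfg_attr(test, derive(Debug, PartialEq))]
enum Operation {
    MoveCursor(MoveCursor),
}

// In test mode the derives are available, so Debug formatting works here:
#[test]
fn debug_impl_available_in_tests() {
    let op = Operation::MoveCursor(MoveCursor::Up(10));
    assert_eq!(format!("{op:?}"), "MoveCursor(Up(10))");
}
```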
Ok, I didn't know that, that's good to know. I'm working from quite an old laptop already, so marginally faster build times are a useful advantage as well.
I'd like to add that assert! just checks the condition and panics if it's false (as do the other related macros). So though the matches! approach is better here, you could have written something like
#[test]
fn test_move_cursor_step_size() -> Result<()> {
    if let Operation::MoveCursor(MoveCursor::Up(10)) = "up 10".parse()? {
        // nothing needed here; `assert!(true)` wouldn't do anything anyway
    } else {
        panic!("Parsed incorrectly"); // ...or any other message you'd like
    }
    Ok(())
}
as well, where a failure would look something like
running 1 test
test test_move_cursor_step_size ... FAILED
failures:
---- test_move_cursor_step_size stdout ----
thread 'test_move_cursor_step_size' panicked at 'Parsed incorrectly', src/lib.rs:7:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
test_move_cursor_step_size
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s