Tests are designed to check that functions operate correctly, aren't they? If a test succeeds, we know the function works correctly, so why would we ever need to write tests that fail?
Well, for example, the `tokio::sync::Semaphore` type has a test that `Semaphore::new(MAX_PERMITS)` succeeds, and another test that `Semaphore::new(MAX_PERMITS + 1)` panics. This helps verify that the max-permits limit is implemented correctly.
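A minimal sketch of what that pair of tests could look like, assuming tokio as a dependency (`Semaphore::MAX_PERMITS` is tokio's public constant for the upper limit, and `Semaphore::new` is documented to panic above it):

```rust
use tokio::sync::Semaphore;

#[test]
fn new_with_max_permits_succeeds() {
    // The largest allowed permit count must be accepted.
    let _sem = Semaphore::new(Semaphore::MAX_PERMITS);
}

#[test]
#[should_panic]
fn new_above_max_permits_panics() {
    // One past the limit is a caller bug, so the constructor panics.
    let _sem = Semaphore::new(Semaphore::MAX_PERMITS + 1);
}
```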
Sometimes, the correct thing to do is to fail. Sometimes (specifically, when the error is non-recoverable), the correct way to fail is to panic.
For example, out-of-bounds array indexing should panic instead of causing undefined behavior (UB). If you don't test that it panics when it should, how can you verify that out-of-bounds indexing doesn't cause UB? An example of such a test follows.
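A `#[should_panic]` test can pin that behavior down for slice indexing; the `expected` string here is a substring of the standard library's actual panic message:

```rust
#[test]
#[should_panic(expected = "index out of bounds")]
fn oob_indexing_panics() {
    let v = vec![1, 2, 3];
    // Reading past the end must panic rather than touch invalid memory.
    let _ = v[3];
}
```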
Also, you might be misunderstanding the definition of "a failing test". A `#[should_panic]` test succeeds when the test function panics. You are not writing a failing test – rather, you are writing a test where panicking means a successful test case.
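A tiny illustration of that inversion: the panic is exactly what makes this test pass, and removing the `panic!` line would make the test runner report it as failed:

```rust
#[test]
#[should_panic(expected = "boom")]
fn panicking_counts_as_success() {
    // This test PASSES precisely because it panics with the expected message.
    panic!("boom");
}
```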