Newbie Here -- Puzzled by Cargo Check Not Flagging Obvious Errors

I'm probably being really dumb here as I am such a newbie to Rust that I'm still only on Chapter 5 of "the book" but here goes anyway.

I was just reading an article picking holes in the 'safety' claims made for V and idly wondered what form Rust's complaints about similar errors would take. So I tried a couple of the examples in that post and, to my surprise, cargo check didn't see anything wrong with them...


fn main() {
    let mut x: i32 = 2147483647;
    x += 1;
    println!("Overflow? --> {}", x);
}

cargo check doesn't complain...

    Checking tester v0.1.0 (/Users/stuzbot/claraithe/rust/tester)
    Finished dev [unoptimized + debuginfo] target(s) in 0.14s

but, as expected, cargo run does...

~/claraithe/rust/tester ᐅ cargo run
   Compiling tester v0.1.0 (/Users/stuzbot/claraithe/rust/tester)
error: this arithmetic operation will overflow
 --> src/
4 | x += 1;
  | ^^^^^^ attempt to compute `i32::MAX + 1_i32`, which would overflow


fn main() {
    let x = 42;
    let y = 0;
    let z = x / y;
    println!("Z ==> {}", z);
}

Again, cargo check doesn't complain...

~/claraithe/rust/tester ᐅ cargo check
    Checking tester v0.1.0 (/Users/stuzbot/claraithe/rust/tester)
    Finished dev [unoptimized + debuginfo] target(s) in 0.72s

But, as expected, cargo run does:

~/claraithe/rust/tester ᐅ cargo run
   Compiling tester v0.1.0 (/Users/stuzbot/claraithe/rust/tester)
error: this operation will panic at runtime
 --> src/
6 | let z = x/y;
  |         ^^^ attempt to divide `42_i32` by zero

No doubt I'm missing the distinctions between cargo check and cargo run, but the impression I got from what I've read of the Rust Programming Book so far was that cargo check is generally used to run a quick check on your code while working on it, to make sure it compiles OK, without the overhead of actually building each time:

Cargo also provides a command called cargo check. This command quickly checks your code to make sure it compiles but doesn’t produce an executable:

So, why is cargo check not flagging up these two very obvious errors? Shouldn't they be spotted at compile time? I don't see how either could only be caught at runtime.

EDIT: substituted 'cargo check' for 'cargo run' in last paragraph.


From the cargo book:

[cargo check] will essentially compile the packages without performing the final step of code generation, which is faster than running cargo build. [...] Some diagnostics and errors are only emitted during code generation, so they inherently won't be reported with cargo check.

The checks in your examples are being caught at compile (code generation) time. You can see this by building without running using cargo build.

I don't see how either could only be caught at runtime.

It's not always possible to detect when these will happen at compile time, so you can certainly make new examples which do compile. This isn't the fault of the language though; it is mathematically impossible to catch all cases. Catching obvious cases is just a convenience for the programmer (a lint).

Division by 0 at runtime panics. Overflow at runtime is either a panic or is defined to wrap. With the default cargo profiles, debug builds will panic and release builds will wrap.
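If you want overflow behavior that is the same in every build profile, the standard integer types provide explicit variants. A minimal sketch (the method names are the real ones from `std`, e.g. `wrapping_add`):

```rust
fn main() {
    // wrapping_add always wraps, in debug and release builds alike
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

    // overflowing_add returns the wrapped value plus an overflow flag
    let (wrapped, overflowed) = i32::MAX.overflowing_add(1);
    assert_eq!(wrapped, i32::MIN);
    assert!(overflowed);

    // saturating_add clamps at the numeric bounds instead of wrapping
    assert_eq!(i32::MAX.saturating_add(1), i32::MAX);

    println!("explicit overflow handling behaves the same in every profile");
}
```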


This happens to be a trivial example that rustc could detect at compile time. It doesn't, but it could. Most, however, can't be detected like this. In general, they don't arise from taking two literals and adding or dividing them, but rather from things like summing up a bunch of numbers from a table or from user input. It's not clear that rustc catching only some of the mistakes in a class of errors is useful, and the general case is halting-equivalent (i.e., it provably can't be done correctly, but it can be approximated fairly well).


Note that in Rust, the "safety guarantee" is not something hand-wavy and vague. It has a very specific interpretation, which is "no UB/memory management errors in safe code". You picked the single least interesting kind of bug – arithmetic error – which does not constitute memory unsafety, and which is very hard to protect against in a way that doesn't absurdly obstruct the readability of the code.

If you don't want panics from erroneous arithmetic operations, use the checked_xyz() methods of integers, e.g. checked_add() and the like.
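For instance, a small sketch of the checked methods mentioned above, which return `Option` instead of panicking:

```rust
fn main() {
    let x: i32 = i32::MAX;

    // checked_add returns None instead of panicking on overflow
    match x.checked_add(1) {
        Some(sum) => println!("sum = {}", sum),
        None => println!("addition would overflow; handled gracefully"),
    }

    // checked_div returns None on division by zero
    let y = 0;
    assert_eq!(42i32.checked_div(y), None);
    assert_eq!(40i32.checked_add(2), Some(42));
}
```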

By the way, let variables are considered runtime values. It is not the compiler's job to try and detect errors related to runtime values at compile time, because it is not, in general, possible. For such additional, best-effort, not-quite-the-compiler's-job checks, use the official linter, Clippy. It catches the div-by-0, for example.


Thanks for the replies. Though I can't say as I understand the finer points you make yet.

At the minute, as I work my way through the book, I've been typing out the example code and then changing "stuff" to see what else works, or what errors I get when things break. So, based on the passage from the book re 'cargo check' I'd quoted above plus the oft-heard Rust mantra "If it compiles it will run" I'd been assuming that anything I did where 'cargo check' didn't complain was 'OK'. Looks like I actually need to do a 'cargo build' to be really certain.

You should interpret this as "Rust code that compiles is likely to have significantly fewer bugs than in many other languages". It does not mean "Rust that compiles never has bugs under any circumstances".


Regarding the specific examples you have posted, I don't understand how you are getting compilation errors for them. For me, they fail at runtime:

$ cat src/ 
fn main() {
    let mut x: i32 = 2147483647;
    x += 1;
    println!("Overflow? --> {}", x);
}
$ cargo run 
   Compiling addtest v0.1.0 (/tmp/addtest)
    Finished dev [unoptimized + debuginfo] target(s) in 0.19s
     Running `target/debug/addtest`
thread 'main' panicked at 'attempt to add with overflow', src/
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
$ rustc --version
rustc 1.67.1 (d5a82bbd2 2023-02-07)

Oh yes. I know it's not a 100% guarantee that code which compiles will be bug free. There will always be edge cases or unexpected input that the compiler can't foresee.

I was just surprised that, given how conscientious 'cargo check' is at picking up obscure errors the programmer might not have noticed, it let pass two of the most obvious "schoolboy errors" for any wet-behind-the-ears coder to make: dividing by zero and overflowing an integer. Especially when the errors were written directly into the code and not dependent on any kind of runtime input.

No. It's the opposite. I'm NOT getting compile time errors, but thought I should be, given how blatantly wrong the code was.

You make it sound like these types of errors are easy to detect, but they most definitely are not. The compiler will only catch them in the simplest of simplest of cases.

Is your case part of "the simplest of simplest of cases"? Perhaps, but where exactly the boundary between "simplest of simplest of cases" and "simple cases" lies is a complicated question. Your examples are actually rather complicated because you have to look at several expressions at once to catch them. The compiler generally does not try to execute your code at compile time.


These “schoolboy” errors involve integer arithmetic, and arithmetic is famously hard, so it’s not surprising that compile-time analysis of anything involving integer arithmetic will be quite limited. Those other “obscure” errors you are alluding to, on the other hand, are likely actually way simpler, conceptually, than arithmetic [1]; though “simpler” is meant in an “easier for a computer to reason about” sense, which is always the case when there exist efficient and clear rules for checking stuff. But – unlike humans – computers can easily chew through 10s, 100s, or 1000s of applications of simple, straightforward rules, so the things that are detected can still appear “hard” from a human POV.

  1. Of course, for a more detailed assessment, we’d need to start by naming examples of what kind of “obscure errors” we are even talking about. ↩︎


Again, forgive my naïveté. But, if the compiler can catch things like trying to change an immutable variable, or all the myriad complexities of incorrect borrowing/ownership, why is something as blatantly obvious as setting a variable to zero and then, on the very next line, dividing by that variable "not easy to detect"? Ditto with setting a u32 to its maximum value and then, on the very next line, adding 1 to it.

I'm not trying to criticise Rust here, or pick holes in how things work. As I said, I'm a complete newbie. I'm just trying to get my head round why the compiler doesn't spot these patently obvious errors, while it is able to flag up so many really hard-to-spot and obscure ones.

Thanks. I'll have a read....

[Five minutes later: head explodes]

Generally, almost all errors that the compiler will catch are some variant of "these two things have different types but you're trying to use them as-if they have the same type". Anything else is very likely to be a best-effort lint that is not guaranteed to trigger.

To give some examples:

  1. If you see lifetimes as a special kind of type, then the borrow-checker is just a type-checker.
  2. Thread-safety is checked using the Send/Sync traits, which are part of the type system, so this is also just a type-checker.
  3. Immutability vs mutability is also just types. [1] Mutable and immutable references are different types.
  4. And obviously, if you mix up the types of two different things, then that's a type mismatch too.

  1. Okay, this is not completely true. Whether or not a variable is marked mut is separate from its type, but it's a very simple check: If you make a mutable operation on a variable directly, you get a compilation error unless the variable is defined with mut. It only becomes part of the type system once you create a reference to it. ↩︎
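A small sketch of points 3 and 4 above; the lines that would fail are left commented out, with the (real) rustc error codes they would produce noted alongside:

```rust
fn main() {
    let x = 5;
    // x += 1; // error[E0384]: cannot assign twice to immutable variable `x`
    let _ = x;

    let mut y = 5;
    y += 1; // fine: `y` is declared `mut`

    // `&i32` and `&mut i32` are distinct types, so passing the wrong
    // one is an ordinary type mismatch:
    fn bump(n: &mut i32) {
        *n += 1;
    }
    bump(&mut y);
    // bump(&y); // error[E0308]: mismatched types: expected `&mut i32`, found `&i32`

    println!("y = {}", y); // y is now 7
}
```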


Another take: All these things like “mutability of a variable” or “ownership and lifetimes” are things that are explicitly designed to be “easy” (read: “possible”) to analyze by a compiler at compile time.

On the other hand, the behavior of arithmetic is not designed for that purpose. The link to the incompleteness theorem, by the way, was not me actually expecting someone to understand its connection to the problem of analyzing code, especially from just a Wikipedia article.[1]

Note that there are other things, even relating to mutability or lifetimes, that the compiler will not be able to understand, though typically the result in that case is that you get “too many” compilation errors, not too few.

E.g. the underlying logic of how you use your variables is not something the compiler understands. It’s going to complain about an immutable variable being mutated in the same way, no matter whether you genuinely (accidentally) tried to mutate a variable that wasn’t supposed to be mutated, or whether you just forgot to explicitly tell the compiler that your variable was supposed to be mutable in the first place. In the second instance, the compiler arguably didn’t catch a bug at all, it just complained that you didn’t give it enough information.

Similarly, with lifetimes you get borrow check errors or errors regarding lifetimes, no matter whether you had some actually bad bug (use-after-free, or data race, or whatnot), or you just gave some function the wrong signature, or maybe your code was correct but too complicated to fit the borrowing model of (safe) Rust in the first place.

  1. But one possible take-away of the result is, in my own words: arithmetic is surprisingly complicated. The following would be a way to phrase the result itself: mathematical statements about nothing more than simple integer arithmetic can become so complicated/hard that it’s literally impossible to prove or disprove them. A consequence of this (as far as I remember, you can also show this result more directly) is that computers cannot automatically answer questions of the form “is this property of integers true or false”. The term “mathematical statement” or “property” in the previous two sentences does involve a specific form of statements, in “first-order logic”, and thus the incompleteness theorem might as well be considered more a theorem about first-order logic than about arithmetic, but arithmetic still plays a role. The question of how complicated arithmetic in computer programs can become might then also be more of a question about computer programs than arithmetic, and more applicable results that also play an important role in demonstrating the fundamental limits of compilers and static analysis would be the (undecidability of the) halting problem. ↩︎


In very limited cases, you actually can use the type system to catch arithmetic mistakes for you. For example, the compiler provides a type called NonZeroU32 which contains an integer that is non-zero.

This type provides a division operator, and if you read the documentation:

impl Div<NonZeroU32> for u32
fn div(self, other: NonZeroU32) -> u32

This operation rounds towards zero, truncating any fractional part of the exact result, and cannot panic.


Since this type can never contain the value zero, division using this type can never fail.

How would this apply to your example? Well, the NonZeroU32::new method will fail if you give it a zero:

use std::num::NonZeroU32;

fn main() {
    let x = 42;
    let y = NonZeroU32::new(0).unwrap();
    let z = x / y;
    println!("Z ==> {}", z);
}
It now fails on the let y line instead of at the division.

This still won't catch the bug at compile time, but the compiler is now able to make a promise that it couldn't previously: the division will never, under any circumstances, fail. The thing that can fail is elsewhere. If the creation of the NonZeroU32 and the division are far away from each other in the program, then this kind of thing can make it easier to catch mistakes.

All that said, catching arithmetic mistakes with the type system is very limited, and it is not something people usually do.
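To make the above concrete without the `unwrap()`: since `NonZeroU32::new` returns an `Option`, the caller is forced to decide what a zero divisor means before any division can happen. A small sketch:

```rust
use std::num::NonZeroU32;

fn main() {
    let x: u32 = 42;

    // NonZeroU32::new returns Option<NonZeroU32>: Some for non-zero
    // input, None for zero. Matching on it handles the zero case up
    // front, and the division itself can then never fail.
    for raw in [7u32, 0] {
        match NonZeroU32::new(raw) {
            Some(divisor) => println!("{} / {} = {}", x, raw, x / divisor),
            None => println!("refusing to divide {} by zero", x),
        }
    }
}
```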


For an actual compile, the compiler will do constant propagation, in which case the result becomes obvious at compile time (by constant propagation, I mean propagation of known values, not necessarily const). I suppose cargo check doesn't do constant propagation. Would it be useful to tell cargo check to do constant propagation?

Maybe the compiler doesn't complain about integer overflow because sometimes integer overflow is intentional? For example, it makes implementing certain random number generators or hash functions easier. On the other hand, if one wants integer overflow, the correct way is to use wrapping_add etc.
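For example, here is a toy FNV-1a-style hash; the two constants are the standard 64-bit FNV-1a parameters, but this is an illustrative sketch, not a vetted hash implementation:

```rust
// Toy FNV-1a-style hash over a byte slice.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325; // FNV-1a 64-bit offset basis
    for &b in bytes {
        hash ^= b as u64;
        // wrapping_mul makes the intentional overflow explicit, so the
        // hash behaves identically in debug and release builds
        hash = hash.wrapping_mul(0x100000001b3); // FNV-1a 64-bit prime
    }
    hash
}

fn main() {
    println!("hash of \"hello\" = {:#x}", fnv1a(b"hello"));
}
```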

But it might become known after all the linting is already finished, when it's impossible to meaningfully return an error to the user.

It does - it's essentially the command for "compile, but not codegen". That is, there are errors (like monomorphisation or linking) which aren't reported there (only by cargo build), but all the ordinary lints are reported normally.

Since cargo check doesn't run all the build-time checks, you can use cargo build for greater confidence.

Yes. I've started doing that now too, for a "Belt'n'Braces" approach.

Maybe the authors could reword that passage in "The Book" where Cargo is introduced, as (for this newbie, anyway) I found it a bit misleading: it suggested to me that cargo check would catch anything cargo build or cargo run would, but was quicker and more convenient to use during incremental development...

