When you write
let x = 42, the compiler infers the type of that integer from the way in which it is subsequently used. If the code expects a u8, it will be a u8. If the code expects a u32, it will be a u32. If multiple code paths have diverging expectations, the compiler will be unhappy and report an error.
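A small sketch of this behavior (takes_u8 is a made-up helper used only to constrain the literal's type):

```rust
fn takes_u8(x: u8) -> u8 {
    x
}

fn main() {
    // The literal's type is inferred from how `x` is used below:
    let x = 42; // inferred as u8 because of the call that follows
    assert_eq!(takes_u8(x), 42u8);

    // With no constraint at all, an integer literal defaults to i32:
    let y = 42;
    assert_eq!(std::mem::size_of_val(&y), 4);

    // Diverging expectations across code paths are a compile error:
    // let z = 42;
    // let a: u8 = z;
    // let b: u32 = z; // error[E0308]: mismatched types
}
```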
This is appropriate for variables that are declared locally and never leave their host function. But for public, globally scoped data, it would make the type of a value sensitive to the implementation of the module, or of its users, which is a recipe for maintenance problems.
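This is in fact the position Rust takes: globally scoped items must spell out their type explicitly, and inference is not applied to them. A minimal illustration (the item names are arbitrary):

```rust
// `static` and `const` items require an explicit type annotation;
// the compiler will not infer one from usage.
static LIMIT: u32 = 1000;
const SCALE: f64 = 4.2;

// static BAD = 1000; // error: missing type for `static` item

fn main() {
    assert_eq!(LIMIT, 1000);
    assert_eq!(SCALE, 4.2);
}
```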
You can compare this approach with the one taken by C or Java, where every literal has a well-defined type (IIRC int for integers and double for floating-point values, overridable with a suffix). That design has different tradeoffs: it arguably works better in this case, but less well in others, causing unexpected integer truncation or unnecessary double-precision computation, for example.
A possible middle ground would have been for Rust not to require type annotations on global items when the type of the value is unambiguous, as in "0i32" or "4.2f64". I can only guess that this option was decided against because it would have added one more language rule for what is ultimately a small reduction in verbosity.
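Such suffixes already pin a literal's type in local code, regardless of how the value is used afterwards, as this small sketch shows:

```rust
fn main() {
    // A suffix fixes the literal's type on the spot;
    // no surrounding context is needed or consulted.
    let a = 0i32;
    let b = 4.2f64;

    assert_eq!(std::mem::size_of_val(&a), 4); // i32 is 4 bytes
    assert_eq!(std::mem::size_of_val(&b), 8); // f64 is 8 bytes
}
```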