Why does char not have a char::MIN constant analogous to char::MAX? It seems inconsistent, and I don't see any particular reason to leave it undefined.
Given that all unsigned integer types have MIN (equal to zero, of course), it's likely just an oversight: char may not have been considered part of the same "family" as the integer types. (And arguably it isn't; its interface is entirely different!)
Discussed elsewhere recently was that things are (ideally) added to the standard library based on a positive use case for them. Adding something simply because it could be added isn't a good enough reason.
On the one hand, this is such a tiny thing that it might be easy to justify. On the other hand, as jdahlstrom says, char is not an integer type (e.g. you cannot do 'a' + 'b'), so it needs to be justified on its own terms.
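To illustrate the point: char supports no arithmetic at all, so even something as simple as "the next character" requires an explicit round-trip through u32. A minimal sketch (the unwrap here assumes the incremented code point is a valid scalar value):

```rust
fn main() {
    // 'a' + 1 does not compile for char; convert to u32, add, convert back.
    let next = char::from_u32('a' as u32 + 1).unwrap();
    assert_eq!(next, 'b');
    println!("{next}");
}
```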
I think the real question is: why is there a char::MAX at all?
The constants T::MIN and T::MAX imply that every value between them is valid, but this is not the case for char.
If the use case is checking whether an integer can be converted to a char, then instead of the usual x >= MIN && x <= MAX, an implementation must use char::from_u32, which does the correct checks (in particular, it rejects the surrogate range even though those values lie below char::MAX).