Why is there a char::MAX but no char::MIN?

Why does char not have a char::MIN constant analogous to char::MAX? It seems inconsistent, and I don't see any particular reason to leave it undefined.

From the spec:

“A Unicode scalar value is any Unicode code point in the range U+0000 to U+D7FF inclusive or U+E000 to U+10FFFF inclusive.”

So char::MIN seems well defined: it would be U+0000, i.e. 0 as char.
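A minimal sketch of what the missing constant would look like, using the hypothetical name CHAR_MIN (char::MAX already exists in std, CHAR_MIN does not):

```rust
fn main() {
    // char::MAX is already provided by the standard library.
    assert_eq!(char::MAX, '\u{10FFFF}');

    // A hypothetical CHAR_MIN would simply be U+0000 (the NUL character).
    const CHAR_MIN: char = '\u{0}';
    assert_eq!(CHAR_MIN as u32, 0);
    assert!(CHAR_MIN <= char::MAX);
}
```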

I'm guessing it's just that nobody has needed char::MIN, because everyone knows char is unsigned and that NUL (U+0000) is a valid char.

[joke] it's to avoid getting sued by the toilet paper company [/joke]

Given that all unsigned integer types have MIN (equal to zero, of course), it’s likely just an oversight: char may not have been considered part of the same “family” as the integer types. (And arguably it isn’t; its interface is entirely different!)

As was discussed elsewhere recently, things are (ideally) added to the standard library based on a positive use case for them. Adding something simply because it could be added isn't a good enough reason.

On the one hand, this is such a tiny thing that it might be easy to justify. On the other hand, as jdahlstrom says, char is not an integer type (e.g. you cannot write 'a' + 'b'), so it needs to be justified on its own terms. A small sketch of that point is below.
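The commented-out line is the kind of integer arithmetic that does not compile for char:

```rust
fn main() {
    // Does not compile: char does not implement Add, unlike the integer types.
    // let c = 'a' + 'b';

    // Arithmetic on chars requires going through u32 and back via the checked
    // constructor char::from_u32.
    let next = char::from_u32('a' as u32 + 1);
    assert_eq!(next, Some('b'));
}
```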

I think that the question should be “Why is there a char::MAX?”

The constants T::MIN and T::MAX imply that every value between them is valid, but this is not the case for char: the surrogate code points U+D800 through U+DFFF lie in that range and are not valid chars.

If the use case is checking whether an integer can be converted to a char, then instead of the usual x >= MIN && x <= MAX, an implementation must use char::from_u32, which does the correct checks.
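For example, here is a sketch contrasting a naive range check (written in terms of the existing char::MAX) with char::from_u32:

```rust
fn main() {
    // A naive MIN/MAX-style range check accepts every code point up to char::MAX...
    let naive_ok = |x: u32| x <= char::MAX as u32;
    assert!(naive_ok(0xD800)); // ...including 0xD800, which is not a valid char.

    // char::from_u32 also rejects the surrogate range U+D800..=U+DFFF.
    assert_eq!(char::from_u32(0xD800), None);
    assert_eq!(char::from_u32(0x41), Some('A'));
    assert_eq!(char::from_u32(0x10FFFF), Some(char::MAX));
}
```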
