I like that, but you are talking to a retired math teacher. Scientific notation is defined as having only one digit to the left of the decimal point. You're right, though. I would never have let my students get away with it, but if we loosen the strict definition of scientific notation, it would work fine for integers as we're coding.

It seems to me that if you allow scientific notation for integers then you can write things like `3e-10`, which is a number that cannot exist as an integer. So the compiler would have to check for all these invalid values. Or, more simply, disallow scientific notation for integers.
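That check comes down to looking at the effective base-10 exponent of the literal. A minimal sketch in Rust (the function and its pre-parsed inputs are hypothetical, not anything rustc actually does):

```rust
// Hypothetical sketch of how a compiler could decide whether an
// E-notation literal denotes an integer value.
// `frac_digits` is the number of digits after the decimal point in the
// mantissa; `exp` is the written base-10 exponent. For simplicity this
// ignores trailing zeros in the fraction, so it is conservative.
fn e_literal_is_integer(frac_digits: u32, exp: i32) -> bool {
    // The value is mantissa_digits * 10^(exp - frac_digits); that is an
    // integer exactly when the effective exponent is non-negative.
    exp - (frac_digits as i32) >= 0
}

fn main() {
    assert!(e_literal_is_integer(0, 9)); // 1e9   = 1_000_000_000
    assert!(e_literal_is_integer(1, 9)); // 1.5e9 = 1_500_000_000
    assert!(!e_literal_is_integer(0, -10)); // 3e-10 has no integer value
    println!("ok");
}
```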

Mind you, the same applies to floats, where `3e-10` cannot be represented exactly. But hey.

Well yes but that's not a difficult check to implement.

True enough.

I'm not keen on the idea. Floats and integers are very different things, so I feel mixing float notation up with integers does not sit well.

The literals for floats and for integers would still be disjoint (the `_u32` suffix makes it clear it's an integer).

Personally, I think that going further and allowing the same literals for floats and integers (and other types too, such as bignums) would actually be a good idea, but unfortunately that would cause problems with existing type inference. You'd then be able to write both of the following (neither is currently allowed):

```
let a: f64 = 1;
let b: u32 = 1.2e6;
```

Same literals are already allowed for different types of integers and for different types of floats, so I don't see why not.
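For example, all of these compile today, because an unsuffixed literal adapts to any expected type within its family:

```rust
fn main() {
    // The same literal `1` serves every integer type...
    let a: u8 = 1;
    let b: i64 = 1;
    // ...and the same literal `1.0` serves both float types.
    let c: f32 = 1.0;
    let d: f64 = 1.0;
    assert_eq!(a as i64, b);
    assert_eq!(c as f64, d);
}
```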

In particular, I have often wanted to say something like "1.5 billion" as an integer.
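Today that has to be spelled out digit by digit, or routed through a float cast; a quick illustration of the status quo:

```rust
fn main() {
    // What you can write today: spell out the zeros.
    let n: u64 = 1_500_000_000;
    // A float literal plus a cast also works, though it goes through f64.
    assert_eq!(1.5e9 as u64, n);
    // What the proposal would allow directly (currently a type error):
    // let n: u64 = 1.5e9;
}
```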

I'm not sure I'm following you, @scottmcm. By definition, scientific notation is written as a product. `0x1e9` fits that definition, while `1e9` doesn't. What are you saying?

That's hexadecimal notation. The prefix `0x` is often used in programming languages to write in hexadecimal.
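To make the distinction concrete (both notations exist in Rust, so the values are easy to check):

```rust
fn main() {
    // `0x1e9` is a hexadecimal integer literal: the `e` is the digit 14,
    // so 0x1e9 = 1*256 + 14*16 + 9 = 489.
    assert_eq!(0x1e9, 489);
    // `1e9` is E notation for a float: 1 * 10^9.
    assert_eq!(1e9, 1_000_000_000.0);
}
```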

In all your posts you're confusing how scientific notation works in programming languages. `1e9` doesn't mean 1⁹, it means 1 × 10⁹. `10e9` doesn't mean 10⁹, it means 10 × 10⁹ = 10¹⁰.
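This is easy to verify in Rust, where these are ordinary `f64` literals:

```rust
fn main() {
    // 1e9 means 1 * 10^9, not 1^9.
    assert_eq!(1e9, 1.0 * 10f64.powi(9));
    // 10e9 means 10 * 10^9 = 10^10, not 10^9.
    assert_eq!(10e9, 1e10);
    assert_ne!(10e9, 1e9);
}
```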

Yes, you're correct. I have been confusing those two concepts and should have known better. Thanks.

@preacherdad What we know today as Scientific Notation is usually attributed to Descartes and it does not use the caret. The number you refer to is expressed as 1 x 10⁹.

The caret was only introduced in the late 20th century in some programming languages. However, what became known as "E notation" is considerably more common, and I believe it well predates the use of the caret: Scientific notation - Wikipedia

Scientific notation with integers is a good question though.

**Moderator note**: The definition of a "billion" in different languages and dialects is off-topic.