Imagine a formatting option g that causes automatic choice between normal and scientific notation. Actually, I would like it to be the default, but putting it behind an option is a good first step.
In other words, println!("{:g}", x), where x is f64, would be like println!("{}", x) for some numbers and like println!("{:e}", x) for others.
For example, we would like 1e100 to be formatted as "1e100" (instead of a one followed by a hundred zeros) and 12 as "12" (instead of "1.2e1"). Note that the number of significant digits is the same either way.
Actually, debug formatting (println!("{:?}", x)) does pick between normal and scientific notation, but I don't like how it chooses, and I think there should be an option outside of debug mode, because people will look for regular formatting options and not find it. Also, if this ever becomes the default, we could auto-derive Display and have it.
Use normal notation if it fits in seven characters. If it does not, use scientific notation when it is shorter than normal notation.
Why? I think seven is about the size up to which it is easily readable.
Also, a width of seven characters works well with tab-separated files, which are read by humans and machines.
It probably should have an argument to specify the target length, instead of locking in users into an arbitrary choice, e.g. 7. It can be optional though, and 7 seems fine for a default.
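As a rough illustration, the selection rule above could be sketched like this (the function name `format_g` and the exact tie-breaking are my own assumptions, not a worked-out proposal):

```rust
// Hypothetical sketch of the proposed `g`-style selection, with a
// configurable target width (7 as the suggested default).
fn format_g(x: f64, target_width: usize) -> String {
    let normal = format!("{x}");
    // Normal notation wins outright if it fits in the target width.
    if normal.len() <= target_width {
        return normal;
    }
    // Otherwise fall back to scientific notation when it is shorter.
    let scientific = format!("{x:e}");
    if scientific.len() < normal.len() {
        scientific
    } else {
        normal
    }
}

fn main() {
    println!("{}", format_g(12.0, 7)); // "12"
    println!("{}", format_g(1e100, 7)); // "1e100"
}
```

Note this reuses Rust's existing Display and LowerExp output verbatim, so it inherits their choice of significant digits.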
JavaScript has a decent method of automatically selecting how to convert floating-point numbers to strings (though we'd want to make it output "-0" for -0.0 for compatibility): ECMAScript® 2025 Language Specification
Blatant self-promotion: you can use the GPoint crate which defers to the underlying libc's printf("%g" ...) implementation. Lightweight, and very useful if you want to compare data to something generated by C code.
I think the format string notation is bad because it tries to fit the same set of formatting parameters onto all types which makes no sense, and because it forces all types to re-implement things like padding and centering that are really type independent (unless you reinterpret it in weird ways).
Where scientific is implemented for floating point types (and returns a wrapper type that implements Display), and width is a blanket trait implemented for all Display types.
So then you can have whatever complicated logic as another method:
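A minimal sketch of what that API shape could look like; the names `scientific` and `width` come from the post above, but every type and signature here is hypothetical, not an existing std or crate API:

```rust
use std::fmt;

// Wrapper returned by `scientific()`; formats the float via `{:e}`.
struct Scientific(f64);

impl fmt::Display for Scientific {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:e}", self.0)
    }
}

trait FloatExt {
    fn scientific(self) -> Scientific;
}

impl FloatExt for f64 {
    fn scientific(self) -> Scientific {
        Scientific(self)
    }
}

// `width` as a blanket combinator over any Display type
// (right-aligned padding, independent of the wrapped type).
struct Width<T>(T, usize);

trait WidthExt: fmt::Display + Sized {
    fn width(self, w: usize) -> Width<Self> {
        Width(self, w)
    }
}

impl<T: fmt::Display> WidthExt for T {}

impl<T: fmt::Display> fmt::Display for Width<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:>w$}", self.0.to_string(), w = self.1)
    }
}

fn main() {
    // Pad the scientific form of 1e100 ("1e100") to width 10.
    println!("{}", 1e100_f64.scientific().width(10));
}
```

Any more complicated selection logic would then be just another method returning another wrapper.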
I generally like your approach, although I think width does have a type-dependent use, exactly for scientific notation: the type must know where to truncate the decimals and start rendering the exponent.
But scientific notation doesn't automatically round to worse precision based on width, it always prints to full precision. You could have a different logic, but then it should be called something else:
println!("pi = {pi}", pi = pi.scientific().round_to_fit_width(7));
I'm a bit wary of the discussion drifting from "let's just add a g option" to "let's totally revamp formatting".
The first question would be: what is the return type of these methods? If it is one universal format wrapper, then you would have a fixed set of flags like now. Or are we talking about each type having its own format wrapper? Or something in between?
This. It would be up to each type what kinds of and how many special formats it implements. Of course some of these wrappers can share code via generics.
If anyone would like to try using format wrappers a lot, check out my small library manyfmt. It provides a generic wrapper type and extension trait so that you don't have to write the boilerplate of wrapping the original value, just the options you want in each case, and the Refmt extension trait for creating wrappers will apply to any set of types you can express in an impl, not just types you control. It’s designed to be the closest reasonable approximation to “what if the standard library actually had this feature?”[1].
```rust
use std::fmt;
use manyfmt::{Fmt, Refmt};

struct SwitchNotationAt(u8);

impl Fmt<SwitchNotationAt> for f64 {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>, options: &SwitchNotationAt) -> fmt::Result {
        todo!("your printing logic here")
    }
}

println!("{}", some_number.refmt(&SwitchNotationAt(5)));
```
though it doesn't let you write options in-line with the format string, which would be big if possible, but that would need a proc macro and raises lots of questions about namespacing and syntax ↩︎
- -0.0 gets formatted as 0 (but I'm proposing -0 for Rust for consistency)
- any NaN gets formatted as NaN
- any negative non-NaN number gets formatted as - followed by the positive version
- infinity gets formatted as Infinity
- for non-zero finite positive numbers:
  - temporarily write the number in the form 0.<digits>e<exponent>, where <digits> must start and end with a non-zero digit and be as short as possible while preserving the value exactly when converting from that string back to the number type (f64/f32/etc.). If multiple such strings exist, pick the mathematically nearest one, picking the even <digits> for ties (JavaScript doesn't require this sentence, merely recommends it for accuracy).
  - if <exponent> >= -5 && <exponent> <= 21, use non-scientific notation: integer values have no ., values less than 1 start with 0.
  - otherwise use scientific notation: if <digits> from step 1 has length 1, use format <digits>e<+/-><exponent>; otherwise use format <digit>.<digits>e<+/-><exponent>. (JavaScript requires the + in positive exponents; I'm open to omitting it.)
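The steps above could be sketched in Rust by leaning on `{:e}` for the shortest-round-trip digits of step 1; note that `{:e}` normalizes as d.ddd rather than 0.ddd, so its exponent is one less than the <exponent> in the description, and this sketch follows the proposal's -0 variant rather than JavaScript's 0:

```rust
// Sketch of the ECMAScript-style selection rule described above,
// reusing Rust's `{:e}` (shortest round-trip digits) for step 1.
fn js_like(x: f64) -> String {
    if x.is_nan() {
        return "NaN".to_owned();
    }
    if x.is_sign_negative() {
        // Covers -0.0 and -Infinity too; keeps the sign on -0 as proposed.
        return format!("-{}", js_like(-x));
    }
    if x == 0.0 {
        return "0".to_owned();
    }
    if x == f64::INFINITY {
        return "Infinity".to_owned();
    }
    // `{:e}` gives d[.ddd]e<exp> with shortest round-trip digits.
    let sci = format!("{x:e}");
    let (mantissa, exp) = sci.split_once('e').unwrap();
    let exp: i32 = exp.parse().unwrap();
    let k = exp + 1; // <exponent> in the 0.<digits> normalization
    if (-5..=21).contains(&k) {
        // Rust's Display never uses scientific notation, so it already
        // produces the plain decimal form for this range.
        format!("{x}")
    } else if exp >= 0 {
        // JavaScript writes a `+` on positive exponents; `{:e}` omits it.
        format!("{mantissa}e+{exp}")
    } else {
        sci
    }
}

fn main() {
    println!("{}", js_like(1e100)); // "1e+100"
}
```

This is only an approximation for illustration: it trusts Rust's Display/LowerExp to produce the same shortest digit strings the spec's step 1 describes.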
That's not actually true anymore: JavaScript added BigInt, though you're right that JavaScript uses f64 (which it calls Number) in most places where a lot of other languages would use an integer...