The easiest way I found is to `to_string` the float, `find` the `.`, and `parse` the fractional slice, but it feels off to have to parse and unwrap when the input is already a float.
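For reference, that to_string/find/parse approach might look like this (a minimal sketch; `fraction_digits` is a hypothetical helper name, not an existing API):

```rust
// Sketch of the to_string / find('.') / parse approach.
// Returns (number of fraction digits, fraction as an integer), or None
// when Display prints no fractional part (e.g. 5.0 prints as "5").
fn fraction_digits(x: f64) -> Option<(usize, u64)> {
    let s = x.to_string();
    let pos = s.find('.')?;
    let frac = &s[pos + 1..];
    Some((frac.len(), frac.parse().ok()?))
}

fn main() {
    println!("{:?}", fraction_digits(1.23)); // Some((2, 23))
    println!("{:?}", fraction_digits(5.0));  // None: 5.0 prints as "5"
}
```

It works because `Display` for `f64` prints the shortest decimal that round-trips, but it does allocate a `String` for every call.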
I looked into what the `Display` impl does, and it uses Grisu3, which seems to be a pretty nice algorithm, but it is also quite sophisticated and aimed at building a whole string out of a float, while I only need an integer, which I'd hope is simpler.
I imagined there would be something like `f64::get_parts()` that returns the integer and fraction parts as integers, but I can't seem to find it, and all algorithmic approaches with multiplications/divisions in a loop run into rounding errors where `0.023_f64` ends up being 0.22999999997 and then 229999997 instead of 23.
Ultimately what you want is somewhat ambiguous due to the nature of floats, but how about repeatedly multiplying by ten until the difference between the rounded value and the actual value is less than, say, 0.00001?
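That suggestion might be sketched like this (hypothetical helper name; the epsilon is arbitrary):

```rust
// Sketch of the multiply-by-ten-until-close idea: multiply the fractional
// part by 10 until it is within `eps` of an integer, then round.
fn fraction_as_int(x: f64, eps: f64) -> u64 {
    let mut v = x.fract();
    while (v - v.round()).abs() > eps {
        v *= 10.0;
    }
    v.round() as u64
}

fn main() {
    println!("{}", fraction_as_int(5.25, 1e-5));  // 25
    println!("{}", fraction_as_int(0.023, 1e-5)); // 23
}
```

Termination is guaranteed because any f64 above 2^52 is an integer, but the result depends on the choice of epsilon.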
I attempted that and it breaks when the rounding gives 0.0009999997.
Parsing the string output is probably not too bad here. Float printing functions are complicated and do quite a bit of magic to pretend that 0.1 exists.
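To illustrate: 0.1 has no exact f64 representation, and `Display` just picks the shortest decimal string that round-trips to the same bits:

```rust
fn main() {
    // Display prints the shortest decimal that round-trips: "0.1".
    println!("{}", 0.1_f64); // 0.1
    // Asking for more digits reveals the value actually stored.
    println!("{:.20}", 0.1_f64); // 0.10000000000000000555
}
```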
It doesn't have to allocate: instead of `to_string` you could `write!` to some buffer (e.g. one from the `arrayvec` crate).
For the record, you can write to mutable slices, which can be on the stack too.
use std::io::Write;

fn main() {
    let mut array = [0u8; 10];
    // `io::Write` is implemented for `&mut [u8]`; each write advances the slice.
    let mut slice = &mut array[..];
    write!(slice, "{}", 5.01).unwrap();
    // Bytes written = buffer length minus what is left of the slice.
    let remaining_len = slice.len();
    let written = array.len() - remaining_len;
    let digits = &array[0..written];
    println!("{}", std::str::from_utf8(digits).unwrap());
}
I note that the table there shows 1.0 and 1.00 as being different (screenshot below). If that's the case, it feels like f64 is fundamentally the wrong datatype for the input. Why not accept a &str and parse it?
I do accept `&str` as well, via the `FromStr` trait. I also accept integers and floats, in which case the precision comes from the fraction portion of the number and from options like `minimum_fraction_digits`. (See ECMA402 Intl.NumberFormat - JavaScript | MDN for what inspired this code.)
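With string input the distinction is easy to keep; a minimal sketch (hypothetical helper, not the actual crate API):

```rust
// Sketch: count visible fraction digits from the source string, so that
// "1.0" and "1.00" produce different operands (as CLDR plural rules require).
fn visible_fraction(s: &str) -> (usize, u64) {
    match s.find('.') {
        Some(pos) => {
            let frac = &s[pos + 1..];
            (frac.len(), frac.parse().unwrap_or(0))
        }
        None => (0, 0),
    }
}

fn main() {
    println!("{:?}", visible_fraction("1.0"));  // (1, 0)
    println!("{:?}", visible_fraction("1.00")); // (2, 0)
}
```

Once the number passes through an f64, that trailing-zero information is gone, which is the crux of the thread.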
Thank you for all the feedback. I realize that the issue is hairy and the solution from @alice is potentially unsafe.
I tried @kornel's approach with writing to an array. Here's the result:
impl From<$ty> for PluralOperands {
    fn from(input: $ty) -> Self {
        let abs = input.abs();

        // Format the number into a stack buffer instead of allocating a String.
        let mut array = [0u8; 10];
        let mut slice = &mut array[..];
        write!(slice, "{}", abs).unwrap();
        let remaining_len = slice.len();
        let written = array.len() - remaining_len;
        let digits = &array[0..written];

        // Everything after the '.' is the visible fraction.
        let (len, fraction) = if let Some(pos) = digits.iter().position(|b| b == &b'.') {
            let s = std::str::from_utf8(&digits[pos + 1..]).unwrap();
            (
                digits.len() - pos - 1,
                usize::from_str(s).unwrap(),
            )
        } else {
            (0, 0)
        };

        PluralOperands {
            n: abs as f64,
            i: abs as usize,
            v: len,
            w: len,
            f: fraction,
            t: fraction,
        }
    }
}
To add on to this, I actually looked into this problem a while ago for a proc macro. My conclusion was that parsing the string manually was in fact the best option.