Using bigint, we can define bigfloat: a pair (bignum, n) representing bignum * 10^n.
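A minimal sketch of this representation in Python, whose built-in `int` is already an arbitrary-precision integer; the class name `BigFloat` and the operator methods are illustrative, not a fixed API. Note that * and + on this representation are exact, with no rounding at all:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BigFloat:
    m: int  # arbitrary-precision mantissa (Python int is a bigint)
    n: int  # decimal exponent: the value denoted is m * 10**n

    def __mul__(self, other: "BigFloat") -> "BigFloat":
        # (a * 10^i) * (b * 10^j) = (a*b) * 10^(i+j)  -- exact
        return BigFloat(self.m * other.m, self.n + other.n)

    def __add__(self, other: "BigFloat") -> "BigFloat":
        # Rescale to the smaller exponent, then add mantissas -- exact
        if self.n >= other.n:
            return BigFloat(self.m * 10 ** (self.n - other.n) + other.m, other.n)
        return other + self

# 1.5 * 2.5 = 3.75, represented as 375 * 10^-2
assert BigFloat(15, -1) * BigFloat(25, -1) == BigFloat(375, -2)
```

Division, log, exp, etc. are where this stops being exact, which is what makes the evaluation question below non-trivial.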
Now, suppose we have an expression tree whose leaves are bigfloats and whose internal nodes are +, -, *, /, pow, root, log, exp, … and I want to say:
evaluate this tree to K significant digits, i.e. get an output of the form:
a_0 . a_1 a_2 … a_K * 10^M for some M and digits a_0, a_1, …, a_K
- Question: what algorithm do we use for this? I don’t think converting each bigfloat to an f64 provides the precision guarantee we want.
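A quick demonstration of the concern, in Python (whose `float` is an IEEE-754 f64): an f64 carries only 53 mantissa bits (roughly 15–16 decimal digits), so a 20-digit bigfloat mantissa is silently rounded on conversion, and any K beyond that limit is unachievable this way:

```python
# 20 significant decimal digits -- more than an f64 can hold
m = 12345678901234567890

# The round trip through f64 does not preserve the value
assert float(m) != m

# The nearest representable f64 (spacing is 2**11 = 2048 at this
# magnitude), so the conversion is off by several hundred
assert int(float(m)) == 12345678901234567168
```

So even before any tree evaluation, the leaves alone can lose digits; whatever algorithm we pick has to track error bounds through every operation rather than fix a hardware precision up front.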