Floating Rationals Proposal

All fixed-length formats will be imprecise to some extent. You can't solve that with rationals, with posits, with whatever.

If you want BigRational, use it. If you want fixed-length, just use floats, because that way there are well-known algorithms for dealing with the imprecision.

(An iterator of continued fraction terms does sound cool, though.)
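For a rational input, such an iterator is just the Euclidean algorithm in disguise. A minimal sketch (the `CfTerms` type and everything in it is my own illustration, not anything proposed here):

```rust
// Illustrative only: yields the continued-fraction terms of num/den
// by running Euclidean steps until the remainder hits zero.
struct CfTerms {
    num: i64,
    den: i64,
}

impl Iterator for CfTerms {
    type Item = i64;

    fn next(&mut self) -> Option<i64> {
        if self.den == 0 {
            return None; // the expansion of a rational is finite
        }
        // One Euclidean step: emit floor(num/den), continue with den and the remainder.
        let a = self.num.div_euclid(self.den);
        let r = self.num - a * self.den;
        self.num = self.den;
        self.den = r;
        Some(a)
    }
}

fn main() {
    // 415/93 = [4; 2, 6, 7]
    let terms: Vec<i64> = CfTerms { num: 415, den: 93 }.collect();
    assert_eq!(terms, vec![4, 2, 6, 7]);
}
```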


Well, you could apply all operations to lazy continued fraction generators, so your final result would itself be a lazy continued fraction generator, but it would be neither easy nor fast.

This sort of approach (lazy infinite sequences to represent real numbers) typically suffers from indeterminacy too. Subtracting two different representations of √2 and trying to compute the inverse of the result will fall into an infinite loop.
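Here is a minimal sketch of that indeterminacy, using a lazy stream of terms (digits, say) rather than a full continued-fraction generator for brevity; the function and its budget parameter are invented for illustration. Finitely many inspected terms can prove a number nonzero, but they can never prove it zero, so a sign test or inversion on an all-zero stream waits forever.

```rust
// A lazily generated real can be *proven* nonzero by finding a nonzero term,
// but no finite prefix of zeros proves it *is* zero. Subtracting two lazy
// representations of sqrt(2) yields exactly such an all-zero stream, so
// computing its sign or its inverse waits forever for evidence that never comes.
fn definitely_nonzero(terms: &mut impl Iterator<Item = i64>, budget: usize) -> Option<bool> {
    for _ in 0..budget {
        if let Some(t) = terms.next() {
            if t != 0 {
                return Some(true); // evidence found: the number is nonzero
            }
        }
    }
    None // undecided; without the artificial budget this would be the infinite loop
}

fn main() {
    // "sqrt(2) - sqrt(2)" as a lazy stream: zeros forever.
    let mut zeros = std::iter::repeat(0i64);
    assert_eq!(definitely_nonzero(&mut zeros, 1_000_000), None);
}
```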


Rationals can be useful when working with exact, bigint-backed numbers, but they are not very useful for fixed-size numbers.

When you start accumulating floating-point numbers (adding lots of them), you accumulate inaccuracy. When you start accumulating rational numbers, you overflow the denominator instead: num1/den1 + num2/den2 has a denominator that can be as large as den1 * den2, so the denominator grows multiplicatively and overflows very quickly.
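To make that growth concrete, here is a sketch (the particular series and the u64 width are arbitrary choices of mine) that adds unit fractions without any reduction; the denominator, being the running product of the inputs' denominators, bursts a 64-bit integer after only a handful of additions.

```rust
// Sum 1/3 + 1/5 + 1/7 + ... as an unreduced u64 fraction num/den.
// num/den + 1/d = (num*d + den) / (den*d), so den is a running product.
fn main() {
    let (mut num, mut den): (u64, u64) = (0, 1);
    for (i, d) in (3u64..).step_by(2).enumerate() {
        let next_num = num.checked_mul(d).and_then(|n| n.checked_add(den));
        let next_den = den.checked_mul(d);
        match (next_num, next_den) {
            (Some(n), Some(dd)) => {
                num = n;
                den = dd;
            }
            _ => {
                println!("u64 denominator overflowed after {} additions, at term 1/{}", i, d);
                break;
            }
        }
    }
}
```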

Also, the performance cost is not just a few CPU cycles but many, many CPU cycles. Even a simple comparison needs a couple of multiplications: for example, num1/den1 < num2/den2 is computed as num1*den2 < num2*den1. One rational addition needs a GCD computation plus a few multiplications and divisions: num1/den1 + num2/den2 is (num1*den2/gcd + num2*den1/gcd) / (den1*den2/gcd), where gcd is the greatest common divisor of den1 and den2; that is one GCD, two divisions (one quotient can be reused), three multiplications and one addition. If you skip the GCD, the denominator overflows even faster, and then even an equality check becomes expensive because you'll have multiple representations of the same number.
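In code, those counts look roughly like this sketch, assuming i64 numerators and positive denominators (the helper names are mine, and the comparison widens to i128 to sidestep overflow of the cross products):

```rust
// Two multiplications per comparison: num1/den1 < num2/den2 iff num1*den2 < num2*den1
// (denominators assumed positive; i128 so the products cannot overflow).
fn lt(num1: i64, den1: i64, num2: i64, den2: i64) -> bool {
    (num1 as i128) * (den2 as i128) < (num2 as i128) * (den1 as i128)
}

fn gcd(mut a: i64, mut b: i64) -> i64 {
    while b != 0 {
        let r = a % b;
        a = b;
        b = r;
    }
    a
}

// One GCD, two divisions, three multiplications and one addition per addition:
// num1/den1 + num2/den2 = (num1*(den2/g) + num2*(den1/g)) / (den1*(den2/g)),
// with g = gcd(den1, den2). The result may still need a further reduction
// against its numerator to be in lowest terms.
fn add(num1: i64, den1: i64, num2: i64, den2: i64) -> (i64, i64) {
    let g = gcd(den1, den2); // one GCD
    let d1g = den1 / g;      // division 1
    let d2g = den2 / g;      // division 2 (this quotient is reused below)
    (
        num1 * d2g + num2 * d1g, // two multiplications, one addition
        den1 * d2g,              // third multiplication
    )
}

fn main() {
    assert!(lt(1, 3, 1, 2));              // 1/3 < 1/2
    assert_eq!(add(1, 6, 1, 4), (5, 12)); // 1/6 + 1/4 = 5/12
}
```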


But I have abandoned the pretense of exactness, and with it the exact equality check, just as with real floats, so I can get away without the GCD. The two multiplications per comparison are one price that remains, as are the slower additions (three multiplications and one ordinary addition), the two multiplications per multiplication or division, and the renormalisation on denominator overflow (one division and some shifting). All of these are still much faster than repeated GCD computations and broadly fit my objective of "a few extra CPU cycles". Against this there are benefits too, such as free reciprocals and greater accuracy.
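Here is how I read that trade-off, as a sketch (the FloatRat type, the shift-only renormalisation, and its threshold are my own guesses at the idea, not the author's actual design): additions skip the GCD, reciprocals just swap the two fields, and when the denominator outgrows the working range both fields are shifted right, discarding low bits much as a float discards mantissa bits.

```rust
// Invented illustration of a "floating rational": unreduced i64 fractions
// that give up exactness and renormalise by shifting instead of running a GCD.
#[derive(Clone, Copy, Debug)]
struct FloatRat {
    num: i64,
    den: i64, // kept strictly positive
}

impl FloatRat {
    /// Two multiplications and one addition for the numerator, one
    /// multiplication for the denominator; no GCD anywhere.
    /// (The numerator side is left unguarded against overflow in this sketch.)
    fn add(self, other: FloatRat) -> FloatRat {
        FloatRat {
            num: self.num * other.den + other.num * self.den,
            den: self.den * other.den,
        }
        .renormalize()
    }

    /// The "free" reciprocal: swap numerator and denominator
    /// (assumes num != 0; the sign is moved back onto the numerator).
    fn recip(self) -> FloatRat {
        let s = self.num.signum();
        FloatRat { num: s * self.den, den: s * self.num }
    }

    /// Lossy renormalisation once the denominator leaves the working range:
    /// shift both fields right, as a stand-in for the "one division and some
    /// shifting" described above. Low bits are discarded, like float rounding.
    fn renormalize(mut self) -> FloatRat {
        const LIMIT: i64 = 1 << 31; // keeps den*den within i64 on the next add
        while self.den >= LIMIT {
            self.num >>= 1;
            self.den >>= 1;
        }
        self
    }
}

fn main() {
    let third = FloatRat { num: 1, den: 3 };
    let fifth = FloatRat { num: 1, den: 5 };
    let sum = third.add(fifth); // 8/15, unreduced and still exact here
    println!("{:?} {:?}", sum, sum.recip());
}
```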

However, I will admit that this would not be the right choice for situations where all you want to do is add lots of numbers.
