Poor BigInt performance

I've been playing with BigInt quite a bit. Its performance is pretty poor (taking about 100x longer than Mathematica, for example). I know that Ramp (unstable) and Rust-gmp are available on crates.io. I'd like to hear other people's thoughts on this:

Will BigInt be improved?
Do you use Ramp or Rust-gmp or something else?

Thanks,
-Joe

Improvements are welcome. For instance, koverstreet has done a lot of work in PR#133, which we'll probably merge soon. It may be hard to compete with the fastest implementations using just pure Rust, without nightly asm features, but performance is greatly improved in that PR.

People who need maximum performance will probably use GMP through bindings. So I think it's better for the regular Rust bigints to be flexible, handy, widely portable, quick and easy to install and use, and to have a small-int optimization (so bigint values that fit in 32 or 64 bits require no heap allocation and are very fast), and so on.
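
To make that concrete, here's a minimal sketch of what such a small-int-optimized bigint could look like. `SmartInt` is a made-up name for illustration, not a type in num; the import assumes the num crate discussed in this thread:

```rust
use num::BigInt; // assuming the num crate, as used elsewhere in this thread
use std::ops::Add;

// Values that fit in an i64 stay inline; anything bigger spills to the heap.
enum SmartInt {
    Small(i64),       // no heap allocation
    Big(Box<BigInt>), // heap-allocated fallback for genuinely big values
}

impl Add for SmartInt {
    type Output = SmartInt;
    fn add(self, other: SmartInt) -> SmartInt {
        use SmartInt::*;
        match (self, other) {
            // Fast path: both operands are inline and the sum doesn't overflow.
            (Small(a), Small(b)) => match a.checked_add(b) {
                Some(sum) => Small(sum),
                // On overflow, promote to a heap bigint instead of panicking.
                None => Big(Box::new(BigInt::from(a) + BigInt::from(b))),
            },
            // Mixed and big-big cases fall back to BigInt arithmetic.
            (Small(a), Big(b)) | (Big(b), Small(a)) => Big(Box::new(*b + a)),
            (Big(a), Big(b)) => Big(Box::new(*a + *b)),
        }
    }
}
```

Small values never touch the allocator this way; only genuinely big values pay for a heap allocation.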

num 0.1.29 is now published with all of the performance improvements discussed above.

A small-int optimization sounds interesting, but I imagine it would complicate a lot of functions. Still, if someone wants to try it, I'll happily review it...

Very impressive! My heavily-BigInt-using test went from 3.85s to 0.11s. That's a 35x improvement! Thanks.

I've run a few benchmarks, and I'm seeing a significant performance improvement. (It's approaching the performance of CPython's numbers, so for casual purposes it now seems fast enough. But I don't know whether there are corner cases where bigint is still much slower.)
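
For reference, here's the kind of micro-benchmark I mean; a minimal sketch of my own (not the exact benchmark referred to above), assuming the num crate's BigUint:

```rust
use num::BigUint; // assuming the num crate
use std::time::Instant;

// 10_000! as a BigUint: a simple multiply-heavy bigint workload.
fn factorial(n: u32) -> BigUint {
    (1..=n).fold(BigUint::from(1u32), |acc, i| acc * i)
}

fn main() {
    let start = Instant::now();
    let result = factorial(10_000);
    println!(
        "10000! has {} decimal digits, computed in {:?}",
        result.to_str_radix(10).len(),
        start.elapsed()
    );
}
```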

The small-int optimization (allowing it to store a usize or isize without heap allocation) is handy for such a library because it makes it more flexible. You can use such small-int-optimized bigints as a replacement for regular ints when the performance of the code is not maximally important, but you need the code to work correctly (and not panic or crash on moderate overflows). Several languages work like this; their integers are not limited to fixnums, like Python and even Common Lisp (https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node17.html). If bigint gains the small-int optimization, then it should also gain constants like Common Lisp's most-positive-fixnum and most-negative-fixnum.
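
Continuing the hypothetical SmartInt sketch from earlier, the Common Lisp-style bounds could look like this (the constant names are borrowed from Common Lisp, not from any existing Rust crate):

```rust
impl SmartInt {
    /// Largest value that stays inline, with no heap allocation
    /// (the analogue of Common Lisp's most-positive-fixnum).
    const MOST_POSITIVE_FIXNUM: i64 = i64::MAX;
    /// Smallest value that stays inline
    /// (the analogue of Common Lisp's most-negative-fixnum).
    const MOST_NEGATIVE_FIXNUM: i64 = i64::MIN;

    /// True if the value currently fits inline (a "fixnum" in Lisp terms).
    fn is_fixnum(&self) -> bool {
        matches!(self, SmartInt::Small(_))
    }
}
```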

So you can think of such bigints with a small-int optimization as a fourth overflow behaviour, alongside the panicking, wrapping, and saturating ones.
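
A quick illustration of how promotion would sit alongside the existing behaviours, using the standard library's integer methods (the SmartInt in the last comment is the hypothetical type sketched earlier):

```rust
fn main() {
    let x = i64::MAX;
    // x + 1 panics in debug builds (and wraps in release builds).
    assert_eq!(x.wrapping_add(1), i64::MIN);   // wrapping: silently wraps around
    assert_eq!(x.saturating_add(1), i64::MAX); // saturating: clamps at the bound
    assert_eq!(x.checked_add(1), None);        // checked: reports the overflow
    // A promoting bigint would instead keep the exact mathematical result:
    // SmartInt::Small(x) + SmartInt::Small(1) == 9_223_372_036_854_775_808.
}
```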

I like the Whiley language (http://whiley.org/), a language focused on provably correct code. It uses multi-precision integers because they act more like the ideal integers of mathematics, allowing simpler proofs of the code's correctness. Rust could someday prove similar properties about its code with the help of something like Z3 (https://github.com/Z3Prover/z3) when the code uses bigints (which should be efficient enough for normal code that doesn't need huge integers).

Perhaps there's currently no way to create library-defined small-int-optimized bigints that are as polysemous as the built-in integral numbers (a bare literal like 5 can become any built-in integer type, but never a library type). Solving this problem would help such bigints become more flexible and better drop-in replacements for the built-in integral numbers. I don't know how this can be done...
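
For what it's worth, the closest a library type can get today is explicit conversions and operator overloads; a sketch, again using the hypothetical SmartInt:

```rust
impl From<i64> for SmartInt {
    fn from(n: i64) -> SmartInt {
        SmartInt::Small(n)
    }
}

// Mixed-type addition, so code like `x + 37` works with a plain literal.
impl std::ops::Add<i64> for SmartInt {
    type Output = SmartInt;
    fn add(self, rhs: i64) -> SmartInt {
        self + SmartInt::Small(rhs) // reuse the promoting Add from earlier
    }
}

// Usage: an explicit From where a built-in type would accept a bare literal.
// let x = SmartInt::from(5) + 37; // `let x: SmartInt = 5;` still won't compile
```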

(Besides being polysemous, I'd someday like Rust's built-in integral numbers (and library numbers) to have another property, inspired by the D language, but that's better left for a separate future post.)