A new big integer library: ibig

I have been working on a big integer library: ibig.

With some more work I'm hoping to reach (or beat!) the performance of GMP (written in C).

I wrote some benchmarks to compare: bigint-benchmark.

Check it out!

Wow, those benchmark results do look impressive, kudos!

I had to do some profiling after seeing the num-bigint disparity, and I found that the vast majority of its time was simply converting the final answer to a string. With a small adjustment to take that out of the benchmark timing, it's much closer to ramp's result -- still slower, but not by orders of magnitude. So I need to spend some time on the formatting, at least.
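
For reference, the adjustment amounts to timing the arithmetic and the decimal formatting separately. A minimal sketch (illustrative only; `compute_answer` here is a stand-in, not the benchmark's real workload):

```rust
use std::time::Instant;

use num_bigint::BigUint;

// Stand-in for the benchmark's real workload.
fn compute_answer() -> BigUint {
    BigUint::from(3u32).pow(1_000_000)
}

fn main() {
    let t0 = Instant::now();
    let answer = compute_answer();
    let compute_time = t0.elapsed();

    // Time the binary-to-decimal conversion separately from the arithmetic.
    let t1 = Instant::now();
    let text = answer.to_str_radix(10);
    let format_time = t1.elapsed();

    println!(
        "compute: {:?}, format: {:?}, {} digits",
        compute_time,
        format_time,
        text.len()
    );
}
```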

If you have any tips on low-hanging fruit you think is hurting num-bigint, I'd be happy to hear it, but of course it's fine if you'd rather focus on your own crate. :slight_smile:

edit: Almost all of the remaining time is in the final division `p * T::from(10u32).pow(...) / q`, which is 6643975 bits / 3322049 bits. I remember that you made some simple tweaks to num-bigint's division, but I suppose you must be using a better algorithm for numbers of this size.

Yep, in ibig I'm using sub-quadratic algorithms for all arithmetic operations and for to/from radix conversions.
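
Roughly, the radix conversion side works by splitting on a power of ten. This isn't ibig's actual code, just a sketch of the divide-and-conquer idea written against num-bigint's types (the function name and the base-case threshold are made up):

```rust
use num_bigint::BigUint;
use num_integer::Integer;

/// Sketch of sub-quadratic binary-to-decimal conversion: split `n` around a
/// power of ten with about half the digits and recurse on both halves, so the
/// expensive divisions happen on balanced operands. Every non-leading piece
/// is zero-padded to its full width. (A real implementation would precompute
/// the powers of ten and use a much larger base case.)
fn to_decimal(n: &BigUint, digits: usize, out: &mut String, pad: bool) {
    if digits <= 19 {
        let s = n.to_str_radix(10);
        if pad {
            for _ in s.len()..digits {
                out.push('0');
            }
        }
        out.push_str(&s);
        return;
    }
    let half = digits / 2;
    let split = BigUint::from(10u32).pow(half as u32);
    let (hi, lo) = n.div_rem(&split);
    to_decimal(&hi, digits - half, out, pad);
    to_decimal(&lo, half, out, true);
}
```

(The top-level call passes `pad = false` and an estimate of the digit count.) This only pays off because the divisions themselves are sub-quadratic; with schoolbook division the split buys you nothing.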

I was going to ask about that. A while ago we had a little competition to calculate the first Fibonacci number with one million digits, in various languages, without cheating by using a library like GMP.

I found that the big integer capabilities of Python were pretty good. But it took an order of magnitude longer to print the result out in decimal than to actually calculate it.

Our competition rules included all of the run time, so Python was a serious loser.

The winners of that race were a C entry and my C++ effort. We did the big number arithmetic all in base 10, which made producing the final output very fast.
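
To give a feel for the trick (this is not the actual competition code; the type and the 10^9 limb size are just for illustration), storing the number in decimal-friendly limbs makes the arithmetic a touch slower per limb but the final printing almost free:

```rust
/// Toy unsigned bignum stored least-significant-first in base 10^9 limbs.
/// With decimal-friendly limbs, the arithmetic carries in powers of ten and
/// printing is plain zero-padded concatenation, not a base conversion.
struct Dec {
    limbs: Vec<u32>, // each limb in 0..1_000_000_000
}

const BASE: u64 = 1_000_000_000;

impl Dec {
    fn from_u32(x: u32) -> Self {
        debug_assert!((x as u64) < BASE);
        Dec { limbs: vec![x] }
    }

    /// Schoolbook addition with carries in base 10^9 (all that the iterative
    /// Fibonacci recurrence needs).
    fn add(&self, other: &Dec) -> Dec {
        let len = self.limbs.len().max(other.limbs.len());
        let mut limbs = Vec::with_capacity(len + 1);
        let mut carry = 0u64;
        for i in 0..len {
            let a = *self.limbs.get(i).unwrap_or(&0) as u64;
            let b = *other.limbs.get(i).unwrap_or(&0) as u64;
            let sum = a + b + carry;
            limbs.push((sum % BASE) as u32);
            carry = sum / BASE;
        }
        if carry > 0 {
            limbs.push(carry as u32);
        }
        Dec { limbs }
    }

    /// Printing is linear in the number of digits: the top limb as-is, every
    /// other limb padded to exactly 9 digits.
    fn to_decimal(&self) -> String {
        let mut s = self.limbs.last().copied().unwrap_or(0).to_string();
        for limb in self.limbs.iter().rev().skip(1) {
            s.push_str(&format!("{:09}", limb));
        }
        s
    }
}
```

Iterating a, b = b, a + b with a representation like this until the digit count hits one million means the run time is essentially all addition; the printout at the end costs next to nothing.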

It would be great to have a Rust entry to the competition. Hint, hint.

I hope you have success, but it is a difficult task to reach GMP performance: it has an experienced team working on it, and it uses assembly routines that are tuned to the different target architectures. And while there is always room for improvement, there isn't much low-hanging fruit: they already use the best algorithms in most cases, and in multiplication, for example, GMP picks one of several implemented algorithms according to the operand sizes. So if you want to reach GMP performance, it's going to be a huge amount of work, not just "some more work"! :slight_smile:

You do realize there's a pull request open for a Rust entry, right?

Yep. Hoping I can find some time to look it over soon. Life has been hectic around here recently.

Thanks.