Notes on Type-level integral numbers

Type-level integers are often used as template arguments in the D language, for various purposes, and they would be a good improvement for Rust (despite the complexity increase they bring). In D you can also use many other things as template arguments, but integers are the most useful (after type arguments, of course).

Two of the ways to use them in D are:

  • To improve the performance of some code (sometimes using them together with compile-time tuples);
  • To increase static safety (often for the lengths of fixed-length arrays).

Below I explain something about both usage patterns, using D code for the explanations.


One performance improvement comes from replacing one or more of the run-time arguments of a function with template arguments, as you can see in the first D program here:

If you take a look at the function signatures:

auto findNearest(size_t k, F)(KdTree!(k, F) t, in Point!(k, F) p) pure nothrow @nogc {

static KdNode!(k, F)* nk2(size_t split)(Point!(k, F)[] exset) pure {

Here split is a compile-time argument that in a previous version of the code was a run-time one. The n-dimensional data is stored inside arrays in the Go language version:

// point is a k-dimensional point.
type point []float64

But the D code uses a fixed size array:

struct Point(size_t k, F) if (isFloatingPoint!F) {
    F[k] data;

The recursive call for the other coordinates is done at compile-time:

enum nextSplit = (split + 1) % d.length; // cycle coordinates
return new KdNode!(k, F)(d, split,
                         nk2!nextSplit(exset[0 .. m]),
                         nk2!nextSplit(exset[m + 1 .. $]));

Similar code, where one or more run-time function arguments are replaced by integer template arguments, can be used when the recursion is bounded. You can also mix the two approaches: unroll the recursive calls statically up to a point, then switch to a new function that takes regular run-time arguments instead of template arguments, dispatching between the two with a "static if". This sometimes produces larger binaries, but it has also sometimes given huge speedups in my D code.

I have written a 15-puzzle solver in D that uses this coding pattern to partially unroll the recursive search, generating a few thousand functions in the binary (which I think the D compiler compiled in less than a minute). It was much faster than a regular search with run-time arguments, and much shorter to write than unrolling it by hand.

To use type-level integers well you perhaps need the "static if" of D, and a few other things.


Another common way to use type-level integers is to increase the compile-time safety of the code (and again this also gives more static information to the compiler).

Here you can see an example, matrix multiplication with fixed-size arrays:

The first part is:

alias TMMul_helper(M1, M2) = Unqual!(ForeachType!(ForeachType!M1))
                             [M2.init[0].length][M1.length];
 
void matrixMul(T, T2, size_t k, size_t m, size_t n)
              (in ref T[m][k] A, in ref T[n][m] B,
               ref T2[n][k] result) pure nothrow @safe @nogc {

Unqual!() is a template that strips away most qualifiers from a type.

ForeachType!() gives the type of the items of the given array type.

M2.init[0].length gives the length of the rows of the M2 2D fixed-size array.

TMMul_helper helps you compute the type of the result:

TMMul_helper!(typeof(a), typeof(b)) result = void;

Most of the memory used by the matrixMul() templated function is allocated outside.

The type-level integers of matrixMul() make sure the input/output arrays are correctly sized.

One problem with matrixMul() is that the D compiler generates a new instance of matrixMul() for each possible type and each possible size of input arrays. This bloats the binary.

So in Rust I'd like an optional way to make the type-level integers "ghosts": a generic function like matrixMul() then becomes parametrized only on the type of the items (and all the arrays become slices with run-time length).

(In C++/D you can fake that idea and reduce bloat using a wrapper function with full compile-time typing that calls a private function with just run-time arguments. But I'd like to avoid writing the wrapper, and to avoid having many compiled wrappers inside the binary. On a related note, Bjarne Stroustrup wrote a paper proposing an annotation for C++, to be used inside template classes, that specifies that some member functions are templated only on a subset of the template arguments, to reduce template bloat. This is different from just letting the compiler spot where it can avoid bloat, because that annotation is a contract between programmer and compiler.)


As I understand it, you are proposing that the library writer decide which parameter is "ghost" and which is not, right?

This is in line with #[inline] (!); however, I have sometimes wondered if the client should not be in charge of deciding, on a per call-site basis, whether to ask for inlining, etc. For example, you'd ask for it in a hot tight loop (after profiling), but not in colder areas of the program where speed does not matter as much.

Allowing the client code to decide whether to inline or not sounds potentially useful. But the programmer needs a lot of information to make that decision; it's not easy.

@leonardo Do you have some references about recursion unrolling using type-level integers?

I don't have references; I've just used it a few times in my D code. You can probably do similar things in C++, but there you don't have the "static if" of D, so you have to split your functions and use template "pattern matching" on the integral numbers.

In D this technique is quite easy to apply, and you don't need references to use it. First write the normal functions of your program correctly and spot the most performance-critical ones; then, to try to speed up your code, look for integral arguments that are tightly bounded (because you usually don't want to generate thousands of different instantiations of your functions). Then you replace the run-time arguments with template ones:

uint foo(in uint x, in uint y) {...}
==>
uint foo(uint y)(in uint x) {...}

You probably also have to replace some "if" statements that work on y with "static if" ones, and replace some const/immutable expressions inside foo with enum ones (which are computed at compile time, so they can be used as template arguments).

Generally I do this near the leaves of the computation. I don't have a theory for it; I usually replace run-time arguments with template ones in critical functions, where the range of possible inputs is limited, and where the template arguments allow me to replace some expressions with compile-time computations. So it's a bit of an art, and you usually need several experiments, but it's easy to do.

To do something similar in Rust you probably need type-level integral numbers, some way to enable/disable sections of a function (like trait tests positioned somewhere inside the function body, to replace the static if), and some way to ask for compile-time evaluation of expressions inside parts of functions. All this is probably doable in Rust, but it will take some work and some additions.

I'm not sure how (or if it's possible) to do what you're asking, but you might be interested in the library that I've written that provides type-level integers for rust:

https://crates.io/crates/typenum/

I am not sure where the term "ghost" comes from, but in PL research these are called phantom types. If you search on that term you will find plenty of papers on the subject. The HList paper I worked on in 2004 does lots of clever stuff with Haskell type classes and phantom types to facilitate type-level programming. A good starting point is the paper "Faking It: Simulating Dependent Types in Haskell". I think Faking It is better than real dependent types for languages like Rust, because it preserves the distinction between static and dynamic (run-time) for the programmer.

It turns out you can use type classes (traits) as a logic language (think Prolog) at compile time to facilitate meta-programming. Because Rust has backtracking when satisfying constraint bounds, the trait system corresponds exactly to the positive fragment of pure Prolog (the part without side effects and negation).

Combine logic programming with phantom types and you can write a lot of useful libraries that enforce their own invariants, like your example of matrices that only multiply by correctly sized matrices, where the matrix size is static. Type-level integers are possible, as in the above library, by having an encoding like Peano numbering, plus a set of logic programs implemented using traits for arithmetic on them.

It's much better to wait for proper compiler support of integral arguments for generics, like in C++/D. Type tetris is not for production code.

I do not see anything wrong with defining numbers in terms of type-level objects; you have to start somewhere, and this is in fact how proof systems like Coq define numbers. So these type-level things really are numbers, and they can be defined in a library you never need to look at. What you want is really an optimisation to make compiling with numbers faster, plus some syntactic sugar for type-level number literals. Plenty of production Haskell libraries use these techniques; it is just a matter of getting used to them. If the type system were some ad-hoc thing with inconsistent rules I would agree with you, then it is type tetris; but where the type system is properly a logic and follows consistent rules, meta-programming the type system is as reliable and useful as programming at the value level.

I disagree with most of your post @keean. When you think about this topic it's better to think about D user-level code (or C++) instead of Haskell and Coq libraries and demonstrations. You can start from regular Rust code.

Production-level code in C++/D can use large amounts of type-level integrals (this sometimes includes chars too; in D you can also have strings and arrays as template arguments). Think of your medium-sized D program as made of tens of libraries that sometimes use integer template arguments.

So you want them handled with very low compilation memory and time (unlike Haskell compilations), and you want to use them with a syntax and semantics as close as possible to those of run-time numbers (so there is as little as possible to learn, and almost-newbies can use them), keeping their usage as transparent as possible (no leaky abstractions caused by defining them in libraries, and fewer bugs).

I agree it would be easier with some syntactic sugar for type-level integers, and a faster compiler is always good. You don't need to understand how the numbers are implemented to use them, because in most functions they would be replaced by generic parameters anyway; for example matrix_multiply<X, Y, Z>(m : Mat<X, Y>, n : Mat<Y, Z>) -> Mat<X, Z> where X : TypeLevelInt, Y : TypeLevelInt, Z : TypeLevelInt would be the type of multiplication.

A clean implementation is often a good implementation. Implementing basic language features (like type-level integral numbers) with messy/twiddly library code is going to cause big troubles if you want to use them in large real-world system code.

Yep, and in the real world this means avoiding as many abstractions as possible when you implement/use them in the compiler/language. Real-world "performance" usually doesn't come from "heavy optimizations" or high-level implementations of basic language features. If you want usable type-level integers in Rust, look at D, not at Haskell and the GHC compiler.

I agree you don't need to know how the compiler implements them to use them, but having them natively implemented in the compiler allows them to be used like the other numbers in Rust, with a similar syntax, and you avoid the inevitable abstraction leaks that come from implementing them in a slow library. I hope the Rust designers will not follow your ideas on this topic :slight_smile:
