Hi again ^^
I've started a Perlin-family noise project with a friend. In it, I have a function that takes an N-D f32 vector and outputs a pseudorandom, unit-length, N-D f32 vector. The ideas behind that function are that:
1. Its output is always a unit vector.
2. Its output is deterministic (i.e. f(x) = f(x') whenever x = x', regardless of the state of the program).
3. Its output's expected value is the zero vector (0n). This means that the distribution of outputs is balanced and doesn't favor any direction in particular.
I can test (1) and (2) using things like quickcheck and proptest. However, I have no clue how I would test (3). This is the model I have in mind:
- For each dimension n = 1..20 or so:
  - Take a large m (> 10000).
  - Generate m random n-D f32 vectors.
  - Obtain f(x) for each of them.
  - Sum the results and divide by m.
  - The result should be "close" to 0n, within a tolerance δ that is highly likely given m. (If I remember my statistics right, this is the central limit theorem at work: the standard error of the sample mean shrinks like σ/√m, so δ can be a few multiples of that.)
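The steps above can be sketched roughly like this. Everything here is an assumption standing in for the real project: `pseudo_unit` is a hypothetical placeholder for the actual gradient function, and a small inline splitmix64 generator replaces the `rand` crate so the sketch stays self-contained:

```rust
/// splitmix64: tiny deterministic PRNG step (stand-in for `rand`).
fn splitmix64(state: &mut u64) -> u64 {
    *state = state.wrapping_add(0x9E37_79B9_7F4A_7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

/// Uniform f32 in [0, 1) drawn from the PRNG.
fn uniform(state: &mut u64) -> f32 {
    (splitmix64(state) >> 40) as f32 / (1u64 << 24) as f32
}

/// Hypothetical placeholder for the project's function: hash `x`
/// deterministically into a unit vector (not uniform on the sphere,
/// just symmetric around the origin, which is enough for this sketch).
fn pseudo_unit(x: &[f32]) -> Vec<f32> {
    let mut state = 0x1234_5678_9ABC_DEF0u64;
    for &c in x {
        state ^= c.to_bits() as u64;
        splitmix64(&mut state);
    }
    let mut v: Vec<f32> =
        (0..x.len()).map(|_| uniform(&mut state) * 2.0 - 1.0).collect();
    let norm: f32 = v.iter().map(|c| c * c).sum::<f32>().sqrt();
    v.iter_mut().for_each(|c| *c /= norm);
    v
}

/// Run the mean test for dimension `n` with `m` samples and return the
/// largest |component| of the sample mean.
fn max_mean_component(n: usize, m: usize) -> f64 {
    let mut state = 0xDEAD_BEEFu64 ^ n as u64;
    let mut mean = vec![0.0f64; n];
    for _ in 0..m {
        let x: Vec<f32> = (0..n).map(|_| uniform(&mut state) * 100.0).collect();
        for (acc, c) in mean.iter_mut().zip(pseudo_unit(&x)) {
            *acc += c as f64 / m as f64;
        }
    }
    mean.iter().fold(0.0, |a, c| a.max(c.abs()))
}

fn main() {
    let m = 20_000;
    // Each component of a unit vector lies in [-1, 1], so the sample
    // mean's standard error per component is at most 1/sqrt(m);
    // use a 5-sigma tolerance.
    let tol = 5.0 / (m as f64).sqrt();
    for n in 1..=8 {
        let worst = max_mean_component(n, m);
        println!("n = {n}: max |mean component| = {worst:.5} (tol {tol:.5})");
        assert!(worst < tol);
    }
}
```

Note the 5/√m tolerance is a judgment call: tight enough to catch a directional bias, loose enough that a correct implementation fails only with negligible probability.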
How "close" the result is can then be turned into a statement of how confident the test is that the expected value really is zero.
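One way to make that confidence concrete, sketched under the assumption that each component of a unit vector is bounded in [-1, 1], is Hoeffding's inequality: for m i.i.d. samples, P(|mean| ≥ t) ≤ 2·exp(-m·t²/2) per component. A tiny helper (the name `hoeffding_p` is mine, not from any crate):

```rust
/// Upper bound on the probability that one component of the sample
/// mean deviates from 0 by at least `t`, given `m` i.i.d. samples
/// each bounded in [-1, 1] (Hoeffding's inequality).
fn hoeffding_p(m: usize, t: f64) -> f64 {
    2.0 * (-(m as f64) * t * t / 2.0).exp()
}

fn main() {
    // With m = 20_000, a 0.05 deviation in any component is
    // astronomically unlikely under the zero-mean hypothesis.
    println!("p <= {:.3e}", hoeffding_p(20_000, 0.05));
}
```

So rather than eyeballing "close", the test can reject only when the observed deviation has a tiny probability under the zero-mean hypothesis.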
Is there a crate I could use for this purpose?