I have made a small demo program to show how it works. This should provide enough entropy for a decent poker or blackjack game.

I am probably not the first one with this idea. I would like to know what you think.
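For context, here is a sketch of what such a demo could look like (my reconstruction, not the original poster's code, which isn't shown): with the default hasher, each `HashSet` is seeded randomly per process, so its iteration order can be folded into "random" bits.

```rust
use std::collections::HashSet;

// Sketch: derive bits from a HashSet's arbitrary, seed-dependent iteration
// order. The order is stable for a given set, but varies between processes.
fn bits_from_iteration_order(set: &HashSet<u8>) -> u64 {
    set.iter().take(8).fold(0u64, |acc, &b| (acc << 8) | u64::from(b))
}

fn main() {
    let set: HashSet<u8> = (0..=255).collect();
    println!("{:016x}", bits_from_iteration_order(&set));
}
```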

There are a few "cursed" ways of getting randomness. For example, in programs running with ASLR you can get random bits from the address of your `main` function.
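For fun, a minimal sketch of that trick (my own illustration): take a function's load address, which ASLR randomizes between runs of a position-independent executable. As the next paragraph says, don't actually rely on this.

```rust
// "Cursed" entropy source: under ASLR, the address a function is loaded at
// differs between runs. The low bits (page offset) are constant; only the
// higher, randomized bits carry any entropy.
fn probe() -> usize {
    // Cast the function item to a function pointer, then to a raw address.
    (probe as fn() -> usize) as usize
}

fn main() {
    println!("probe() is at {:#x}", probe());
}
```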

But I do not recommend relying on any of these. Rust could change or optimize the implementation of `HashSet` and weaken the randomness. It may already be biased. Just use the `rand` crate.


This section of The Rand Book helped me with the terminology in this area: "random" does not always mean random in the way we usually think of it. So when you use a randomness API, you want to make sure you and the documentation are on the same page about what kind of randomness you're getting. When it comes to cards, people expect a truly non-deterministic order (at least from the participants' perspective), so you need an API that actually promises that kind of unpredictability and is built by people who understand the underlying computer science. Otherwise someone could get shot.

As @kornel said, the real problem is that this API does not specify randomness as its goal, so if you rely on it and it changes to become fully ordered without any implicit randomization, it's still doing what it promises and you're screwed (or dead, if you run your code in the wrong saloon) because you chose to misuse that API.

The rand crate is good stuff. It was in development before Rust was released.


Thank you, that was very helpful. It was too good to be true. I suspected something like this, but still being a beginner I had to be sure.

I'll stick to the rand crate from now on.


I'd also mention that card-deck shuffling in particular needs extra caution. An RNG with 32 or 64 bits of internal state *cannot* produce all possible shuffles; you need 225 bits of internal state merely to be able to reach all 52! orderings, plus another ~32 for debiasing.

Most PRNGs, and even most OS-provided RNGs, do not carry 256 bits of internal state, and you need 225 bits of *entropy* (which is a burdensome requirement).
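A quick check of where the 225-bit figure comes from: a deck has 52! possible orderings, so indexing them all takes log2(52!) bits, and log2(52!) is just the sum of log2(k) for k from 2 to 52.

```rust
// Compute log2(n!) by summing log2(k), avoiding overflow from computing n!
fn log2_factorial(n: u32) -> f64 {
    (2..=n).map(|k| f64::from(k).log2()).sum()
}

fn main() {
    println!("log2(52!) ~= {:.2} bits", log2_factorial(52)); // about 225.58
}
```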

The just-released Linux kernel 5.17 moves to an RNG with 256 bits of state for its entropy pool. On all versions before that, you need an extra source of entropy in order to have fair shuffles across process restarts.


It's obviously not cryptographically strong, and I wouldn't use it if I were writing code for a casino or whatever, but I've copy-pasted this function and some related utilities into several hobby game projects.

```rust
type Xs = [core::num::Wrapping<u32>; 4];

fn xorshift(xs: &mut Xs) -> u32 {
    let mut t = xs[3];
    xs[3] = xs[2];
    xs[2] = xs[1];
    xs[1] = xs[0];
    t ^= t << 11;
    t ^= t >> 8;
    // `xs[0]` still holds the old first word here, matching the C version's `s`.
    xs[0] = t ^ xs[0] ^ (xs[0] >> 19);
    xs[0].0
}
```

128 bits of state, seeded with a timestamp, has been sufficient for at least my purposes.

This is based on the C version of `xorshift128` from the Xorshift Wikipedia page.
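To make that concrete, here is a sketch (mine, not from the post) that reproduces the function above, seeds it from the system clock, and shuffles a deck with Fisher-Yates. The seeding scheme and constants are arbitrary assumptions; the only hard requirement is avoiding the all-zero state, which xorshift can never leave.

```rust
use core::num::Wrapping;
use std::time::{SystemTime, UNIX_EPOCH};

type Xs = [Wrapping<u32>; 4];

// Same xorshift128 step as above.
fn xorshift(xs: &mut Xs) -> u32 {
    let mut t = xs[3];
    xs[3] = xs[2];
    xs[2] = xs[1];
    xs[1] = xs[0];
    t ^= t << 11;
    t ^= t >> 8;
    xs[0] = t ^ xs[0] ^ (xs[0] >> 19);
    xs[0].0
}

// Seed from the wall clock. The constants are arbitrary; the guard keeps the
// state nonzero, since all-zero is a fixed point of xorshift.
fn seed_from_time() -> Xs {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_nanos() as u64)
        .unwrap_or(0x9E37_79B9);
    let mut xs = [
        Wrapping(nanos as u32),
        Wrapping((nanos >> 32) as u32),
        Wrapping(nanos as u32 ^ 0xDEAD_BEEF),
        Wrapping((nanos >> 32) as u32 ^ 0x2545_F491),
    ];
    if xs.iter().all(|w| w.0 == 0) {
        xs[0] = Wrapping(1);
    }
    xs
}

// Fisher-Yates shuffle; `%` introduces a tiny modulo bias, fine for hobby use.
fn shuffle<T>(xs: &mut Xs, items: &mut [T]) {
    for i in (1..items.len()).rev() {
        let j = (xorshift(xs) as usize) % (i + 1);
        items.swap(i, j);
    }
}

fn main() {
    let mut xs = seed_from_time();
    let mut deck: Vec<u8> = (0..52).collect();
    shuffle(&mut xs, &mut deck);
    println!("{deck:?}");
}
```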


Depending on the game, it may not be necessary to produce all possible card orderings. Blackjack, for instance, only uses a few cards per hand. As long as the selection of those is unbiased, it doesn’t matter what happens to the rest of the deck.

Even games that deal out the entire deck often don’t care about the exact order. In bridge, for example, you just need to randomly distribute the cards into 4 sets of 13.
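To illustrate the bridge case with a small sketch (my own, not from the post): once the deck itself is in random order, dealing is just a round-robin split into four 13-card hands.

```rust
// Deal an already-shuffled 52-card deck into 4 bridge hands of 13 cards.
// How the deck got shuffled is out of scope here (use a real RNG for that).
fn deal_bridge(deck: &[u8; 52]) -> [Vec<u8>; 4] {
    let mut hands: [Vec<u8>; 4] = Default::default();
    for (i, &card) in deck.iter().enumerate() {
        hands[i % 4].push(card); // round-robin, like dealing at a table
    }
    hands
}

fn main() {
    let deck: [u8; 52] = core::array::from_fn(|i| i as u8);
    let hands = deal_bridge(&deck);
    for (n, hand) in hands.iter().enumerate() {
        println!("hand {n}: {} cards", hand.len());
    }
}
```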


Wow, thank you for all these answers; that's more than I could hope for. You've given me a lot to think about. Rust is all about safety and security, so hijacking HashSet's randomness is out.

I have a lot to learn.

This is not the right way to think about the number of bits of entropy seeding a random number generator.

By this logic, 256 bits wouldn't be enough to play two hands of a card game, because shuffling two decks requires about 451 bits of entropy. If you play 10 hands of a card game, you would need about 2,256 bits of entropy.

However, this would only be true if you wanted "true randomness" rather than a cryptographically strong pseudo-random generator. With a CSPRNG the requirement is **not** that every sequence is possible, but rather that the output is computationally indistinguishable from random.

I would say CSPRNGs are good enough for card games (and for almost every other application).

Thus the number of bits of entropy required is **not** related to the number of possibilities in your game, but rather to the cryptographic strength you want to achieve.

Also, I am pretty sure the Linux RNG has been using a 256-bit strong CSPRNG at least since kernel 4.8, when it started using ChaCha20, and it was always seeded with 256 bits of entropy. What changed in kernel 5.17 is just how that entropy is collected, not how many bits.

I don't quite remember how the argument goes, but I think that once all shuffles are possible, we can start claiming that the shuffles are IID (independent and identically distributed), and a player of the game needs to do about 2^(31/2) ≈ 2^15 work to prove me wrong (that the shuffles are correlated with each other) and start predicting the next game.

... Okay that's not exactly a *great* safety margin.

It takes about 2^128 work to break 128-bit security (i.e. to distinguish random shuffles using a 128-bit RNG from true IID shuffles), and 2^256 work to break 256-bit security, regardless of how many cards you're shuffling how many times.