What is meant here is that although the algorithm can be as huge and complex as you like, and built from whatever components you want, you have to commit to one specific algorithm. In other words, once you pick a specific algorithm, there is some Rust file that it will not be able to verify. On the other hand, there is no single Rust file that every algorithm fails to verify.
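The asymmetry above is the classic diagonalization move. Here is a toy Rust sketch (not the actual theorem, which quantifies over all Turing machines): model a "verifier" as any total function from program indices to a halt/no-halt verdict, and build a program table whose i-th entry does the opposite of what the verifier claims about index i. The counterexample exists for every verifier, but *which* program it is depends on the verifier you picked.

```rust
// Toy model of diagonalization. A "verifier" is any total function claiming,
// for each program index, whether that program halts (true) or not (false).
fn diagonal_counterexample(verifier: &dyn Fn(usize) -> bool, i: usize) -> bool {
    // The constructed program halts exactly when the verifier says it doesn't.
    !verifier(i)
}

fn main() {
    // An arbitrary verifier (here: "even-indexed programs halt").
    let verifier = |i: usize| i % 2 == 0;
    // At every index, the diagonal program disagrees with the verifier's claim.
    for i in 0..10 {
        assert_ne!(verifier(i), diagonal_counterexample(&verifier, i));
    }
    println!("the verifier is wrong about the diagonal program at every index");
}
```

Note that `diagonal_counterexample` takes the verifier as an argument: there is no fixed program that fools all verifiers at once, only a recipe that produces a fooling program from any given verifier.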

One place where this comes into play is randomness. These theorems assume that the algorithm is fully deterministic, and unless you hard-code some pseudo-randomness seed into the algorithm, you have not picked a "specific algorithm" in the sense used here. (If the seed has a bounded size, you can simply try every seed; after all, you are allowed to combine as many algorithms as you want, as long as there are finitely many.)
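The "try every seed" trick can be sketched as follows. This is a hedged illustration, not a real verifier: `verify_with_seed` stands in for some seeded verification procedure (its body here is an arbitrary placeholder), and the point is only that looping over all seeds of a bounded size yields one deterministic algorithm, to which the theorem then applies.

```rust
// Hypothetical seeded verifier: a stand-in body that "succeeds" only for
// one particular seed derived from the input. Any real seeded verifier
// would go here instead.
fn verify_with_seed(program: &str, seed: u16) -> bool {
    (program.len() as u16).wrapping_mul(31) == seed
}

// Derandomization by exhaustion: run one deterministic copy per seed and
// accept if any copy accepts. Since there are finitely many 16-bit seeds,
// this combined procedure is itself a single deterministic algorithm.
fn verify_any_seed(program: &str) -> bool {
    (0..=u16::MAX).any(|seed| verify_with_seed(program, seed))
}

fn main() {
    // Some seed always matches in this toy stand-in, so the combined
    // verifier accepts.
    assert!(verify_any_seed("fn main() {}"));
    println!("some seed succeeded, so the combined verifier accepts");
}
```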

As for humans: if you want to apply these theorems to us, you run into the "single algorithm" issue again. You can't just say "humans", because that is not a specific, fully deterministic algorithm, and the theorem only applies once you have chosen one. If we assume that the universe is deterministic (and that humans eventually go extinct), we can make an argument where our single algorithm loops through every nanosecond of the universe's history and, for each instant, loops through every person alive at that moment and sees what they would do. However, this runs into trouble with several non-obvious assumptions: the physics and continuity of time, the bounded lifespan of humanity, the fact that we still have to verify the answers humans give somehow (since many of them will give the wrong answer), and the question of what to do if every human gives up trying to solve the problem.
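The "simulate everyone" construction above can be written down as a shape, with the physically impossible parts stubbed out. Every name here is hypothetical, and the stubs deliberately return nothing; the sketch only shows where the problematic assumptions live in the loop structure.

```rust
// Hypothetical model of the universe; all methods are unimplementable stubs.
struct Universe;

impl Universe {
    // Stub: who is alive at a given instant (indices into some census).
    fn people_alive_at(&self, _instant: u64) -> Vec<u64> {
        vec![]
    }
    // Stub: what this person would answer for this input at this instant,
    // or None if they give up.
    fn answer_of(&self, _person: u64, _instant: u64, _input: &str) -> Option<bool> {
        None
    }
}

// Stub: we still need some way to check a claimed answer, since many
// humans will answer incorrectly.
fn check_answer(_input: &str, _claimed: bool) -> bool {
    false
}

// The single deterministic algorithm: loop over every instant, then over
// every person alive at that instant, and return the first verified answer.
fn ask_humanity(u: &Universe, input: &str, last_instant: u64) -> Option<bool> {
    for instant in 0..=last_instant {
        for person in u.people_alive_at(instant) {
            if let Some(claimed) = u.answer_of(person, instant, input) {
                if check_answer(input, claimed) {
                    return Some(claimed);
                }
            }
        }
    }
    None // every human gave up, or no answer could be verified
}

fn main() {
    let verdict = ask_humanity(&Universe, "does this program halt?", 1_000);
    assert_eq!(verdict, None); // with empty stubs, nobody ever answers
    println!("no verified answer from any human at any instant");
}
```

Each stub corresponds to one of the objections in the text: `people_alive_at` needs discrete time and a bounded history, `answer_of` needs determinism, and `check_answer` is the unsolved requirement that we can verify human answers at all.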

The above is not an argument that humans are better than algorithms; rather, it is an argument that the halting problem does not guarantee that humans are as bad as computers are.