Understanding Rust's Auto-Vectorization and Methods for Speeding It Up

No, not necessarily. In Python you get substantial speedups from using NumPy because Python is interpreted, whereas NumPy is written in C (and presumably compiled with an optimizing C compiler). Rust is already compiled, so a simple loop is comparable to a loop in C. (The one exception: your example uses indexing instead of iterators, which may incur bounds checking, but that can be eliminated by using zip instead.)
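For illustration (a sketch of what the two styles might look like, since the original snippet isn't reproduced here), compare an indexed loop against the zip-based equivalent:

```rust
// Indexed version: each `xs[i]` / `ys[i]` access may incur a bounds
// check, which can prevent the compiler from auto-vectorizing the loop.
pub fn calc_norm2_indexed(xs: &[f64], ys: &[f64]) -> Vec<f64> {
    let mut out = Vec::with_capacity(xs.len());
    for i in 0..xs.len() {
        out.push((xs[i] * xs[i] + ys[i] * ys[i]).sqrt());
    }
    out
}

// Iterator version: `zip` traverses both slices in lockstep, so the
// compiler can prove every access is in bounds and elide the checks.
pub fn calc_norm2_zipped(xs: &[f64], ys: &[f64]) -> Vec<f64> {
    xs.iter().zip(ys).map(|(&x, &y)| (x * x + y * y).sqrt()).collect()
}

fn main() {
    let xs = [3.0, 0.0];
    let ys = [4.0, 1.0];
    assert_eq!(calc_norm2_indexed(&xs, &ys), vec![5.0, 1.0]);
    assert_eq!(calc_norm2_zipped(&xs, &ys), vec![5.0, 1.0]);
    println!("ok");
}
```

Both compute the same result; the difference only shows up in the generated assembly and in how readily the optimizer vectorizes the loop.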

However, if you want convenient and type-safe manipulation of general arrays, you should look into the ndarray crate, because you should probably not re-write such functionality all the time, even if Rust lets you do it as fast as C or NumPy. That'd risk introducing bugs related to edge cases that the ndarray crate has probably fixed long ago.
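One concrete example of such an edge case (a sketch, not taken from the original post): zip silently stops at the end of the shorter input, so a hand-rolled version without an explicit length check quietly drops elements instead of reporting the mismatch.

```rust
// No length assertion: `zip` silently truncates to the shorter slice.
fn norm2_unchecked(xs: &[f64], ys: &[f64]) -> Vec<f64> {
    xs.iter().zip(ys).map(|(&x, &y)| (x * x + y * y).sqrt()).collect()
}

fn main() {
    let xs = [3.0, 6.0, 9.0];
    let ys = [4.0]; // mismatched length -- almost certainly a caller bug
    let result = norm2_unchecked(&xs, &ys);
    // Only one element comes back; the mismatch is swallowed silently.
    assert_eq!(result, vec![5.0]);
    println!("{} element(s)", result.len());
}
```

This is exactly why an up-front length assertion (or a library like ndarray that defines shape-mismatch behavior) is worth having.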

What you posted should auto-vectorize as-is. Are you compiling and running it in optimized (i.e., release) mode? If so, it's possible that the aforementioned bounds checking hinders vectorization. Instead of indexing, just zip the iterators and collect:

pub fn calc_norm2(xs: &[f64], ys: &[f64]) -> Vec<f64> {
    assert_eq!(xs.len(), ys.len(), "Can't calculate the 2d norm if the number of x and y components doesn't match");
    xs.iter().zip(ys).map(|(&x, &y)| f64::sqrt(x*x + y*y)).collect()
}

Generally, try to follow the idioms of the language. For example, I removed the explicit indexing above. I also changed the input from &Vec<f64> to &[f64], because the latter lets you call the function even when you don't already have a vector, potentially avoiding an unnecessary allocation. And don't reach for unsafe as an optimization until you have measured that the safe code is in fact a real bottleneck.
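To illustrate why &[f64] is the more flexible signature (hypothetical call sites, using the calc_norm2 from above): it accepts both stack arrays and borrowed Vecs without any conversion or allocation.

```rust
pub fn calc_norm2(xs: &[f64], ys: &[f64]) -> Vec<f64> {
    assert_eq!(xs.len(), ys.len(), "x and y component counts must match");
    xs.iter().zip(ys).map(|(&x, &y)| (x * x + y * y).sqrt()).collect()
}

fn main() {
    // A &Vec<f64> parameter would reject these stack arrays outright;
    // &[f64] accepts them via automatic coercion.
    let from_arrays = calc_norm2(&[3.0, 0.0], &[4.0, 2.0]);
    assert_eq!(from_arrays, vec![5.0, 2.0]);

    // Vecs also coerce to slices, so existing callers keep working.
    let xs: Vec<f64> = vec![6.0];
    let ys: Vec<f64> = vec![8.0];
    assert_eq!(calc_norm2(&xs, &ys), vec![10.0]);
    println!("ok");
}
```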