Long answer: Simple iterators and equivalent simple loops like this can usually be (and are) auto-vectorized by the compiler, so this is generally as fast as it gets algorithmically. (A general min/max reduction over an arbitrary, unordered array can't be asymptotically faster than O(n), because you must always look at every element.)
However, the unsafety is entirely unwarranted here. For one, your code is unsound: calling unwrap_unchecked() on the min/max of an empty slice, which is None, is immediate undefined behavior.
If you know that your numbers are never NaN, then consider encoding this invariant in the type system with a floating-point wrapper type that implements Ord. That removes the inner unwrap, and lets you remove the outer unwrap as well, or at least convert it to the non-unsafe method.
If you find that you are calling this in a hot loop, then you are probably missing out on an appropriate data structure (e.g. a heap, a B-tree, or a simple sorted array) for which min/max retrieval is trivial, or at least algorithmically faster than a naïve linear search.
^ This! I was going to write something with the same conclusion:
If you only search for the maximum value in a slice once, then creating that data took long enough that minor differences between the (already plenty fast) straightforward, safe implementation and a potential slightly improved one are negligible. If you search for the maximum value many times, presumably with small modifications to the slice each time, then there will most likely be better data structures to choose, offering asymptotic improvements instead of mere constant factors.