Another reason is that you can pass integers to functions that accept any type implementing `Add`; in that case the MIR of that function contains `Add::add` calls instead of `+` operators. And finally, it ensures that no special cases are needed before MIR building to make, for example, `1 + 2` type-check.
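As a sketch of that first point (the function name `sum3` is illustrative, not from any particular codebase), a generic function bounded by `Add` compiles to `Add::add` calls, and integers satisfy the bound through the stdlib's trait implementations:

```rust
use std::ops::Add;

// The body is compiled once per monomorphization; in its MIR,
// `a + b + c` becomes nested calls to `<T as Add>::add`.
fn sum3<T: Add<Output = T>>(a: T, b: T, c: T) -> T {
    a + b + c
}

fn main() {
    // u8 can be passed here only because `impl Add for u8` exists.
    assert_eq!(sum3(1u8, 2, 3), 6);
}
```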
In effect, when the compiler encounters an expression of the form `expr1 + expr2`, it applies the following rules:

1. If the types of `expr1` and `expr2` match one of the compiler's intrinsic implementations of `+`, the compiler emits that implementation.
2. Otherwise, it looks for an implementation of `Add` whose operand types are compatible with the types of `expr1` and `expr2` and whose output matches the expected result type; if it finds exactly one, it calls `T::add` through that trait.
3. Otherwise, if it doesn't find a single matching `Add` implementation, it emits an error and aborts compilation.
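To illustrate the second rule (the `Meters` type here is made up for the example), a user-defined type has no intrinsic `+`, so the compiler resolves the operator through its `Add` implementation:

```rust
use std::ops::Add;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        // The inner `+` on f64 is resolved by rule 1 (intrinsic).
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    // No intrinsic matches `Meters + Meters`, so rule 2 applies
    // and this desugars to `Add::add(Meters(1.5), Meters(2.5))`.
    let total = Meters(1.5) + Meters(2.5);
    assert_eq!(total, Meters(4.0));
}
```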
The `+` itself is the instruction to use an intrinsic, if an intrinsic is available. By giving intrinsics a higher "priority" than calls to `Add::add` implementations, the compiler avoids generating a recursive loop when it compiles code for `+`.
However, `+` does not inherently imply `Add`: if the `Add` trait were removed from the stdlib, the `+` operator would still work for types with intrinsic implementations. The macro you quoted generates `Add` implementations for those types so that they can be used in contexts where an `Add` bound is required. Without that macro, you could not pass a `u8` to a function whose type parameter is bounded by `Add`.
The same basic process applies to `[]` as to `+`. Slices (`[T]` for any type `T`) have an intrinsic implementation of the `[]` indexing operator, which is resolved before any trait implementation would be. The dereference-and-borrow maneuvering there converts the `&[T]` into a `[T]` within the scope of the expression (without moving it), then borrows a single element out of that slice using the slice's intrinsic `[]`, returning that reference.
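That dereference-and-borrow pattern can be written out explicitly as a sketch (the variable names are illustrative):

```rust
fn main() {
    let v: Vec<i32> = vec![10, 20, 30];
    let slice_ref: &[i32] = &v; // a &[T]

    // `*slice_ref` dereferences to the unsized place [i32] without
    // moving it; the intrinsic `[]` selects one element of that place,
    // and `&` borrows just that element.
    let elem: &i32 = &(*slice_ref)[1];
    assert_eq!(*elem, 20);

    // In practice `&slice_ref[1]` does the same thing; the explicit
    // deref above only spells out what the compiler inserts.
    assert_eq!(&slice_ref[1], elem);
}
```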