How does Rust prevent an infinite loop for operators?

The outer + calls the Add impl, and the Add impl uses + again, and so on, in an infinite loop.
How does Rust distinguish them?

macro_rules! add_impl {
    ($($t:ty)*) => ($(
        #[stable(feature = "rust1", since = "1.0.0")]
        impl Add for $t {
            type Output = $t;

            #[inline]
            #[rustc_inherit_overflow_checks]
            fn add(self, other: $t) -> $t { self + other }
        }

        forward_ref_binop! { impl Add, add for $t, $t }
    )*)
}

And I am confused about the index op too.

    #[inline]
    fn index(self, slice: &[T]) -> &T {
        // N.B., use intrinsic indexing
        &(*slice)[self]
    }

What type is that for? That context is not present in your post. If it's for integers, it's implemented directly in the compiler.

I don't think so. It seems they are implemented in the std lib.

add_impl! { usize u8 u16 u32 u64 u128 isize i8 i16 i32 i64 i128 f32 f64 }

Yeah, that's just calling the macro. I believe what you're asking about is the self + other bit and where the recursion ends, yes?

Yes. That's my question.

It's implemented in the compiler itself, presumably via LLVM intrinsics (which is a fancy way of saying "I'll handle it")

Where's the code telling the compiler to use intrinsics?

I'd have to dig for that, as it's a language construct. Honestly I'm about to head to bed, so hopefully someone else can do that digging for you :slight_smile:

During the construction of MIR, operators like + are lowered to calls to the associated trait method, like Add::add, except for specific builtin types like integers. For those it lowers them to BinOp MIR statements. The LLVM codegen backend then codegens the right LLVM instruction. The lowering happens at rust/as_rvalue.rs at d03fe84169d50a4b96cdef7b2f862217ab634055 · rust-lang/rust · GitHub.
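As a quick way to see this for yourself, here's a minimal sketch (the function name is made up for illustration). Compiling it with `rustc --emit=mir` dumps the MIR, where the integer `+` shows up as a builtin binary operation rather than a call to Add::add:

    // Compile with `rustc --emit=mir add_lowering.rs` and inspect the generated .mir file.
    fn concrete_add(a: i32, b: i32) -> i32 {
        // For builtin integers, this `+` is lowered to a builtin binary-op
        // rvalue in MIR (a BinOp Add), not to a call to Add::add.
        a + b
    }

    fn main() {
        println!("{}", concrete_add(1, 2));
    }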

But why does add_impl! get called for numeric types in std?

They implement the Add trait for consistency, even though they don't strictly have to: using + would work for them even without that impl.

Another reason is that you can actually pass integers to functions accepting any type implementing Add (see the sketch below). In that case the MIR of that function contains Add::add calls instead of + operators. And finally, it ensures that before MIR building there don't need to be any special cases to make, for example, 1 + 2 typecheck.
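A minimal sketch of that point (double and NoAdd are made-up names for illustration): the call with a u8 compiles only because the standard library provides impl Add for u8 via the add_impl! macro quoted above, while a type without an Add impl is rejected at the bound:

    use std::ops::Add;

    fn double<T: Add<Output = T> + Copy>(x: T) -> T {
        // Inside a generic body, `+` is lowered to a <T as Add>::add call,
        // so T must actually implement the trait.
        x + x
    }

    #[allow(dead_code)]
    struct NoAdd;

    fn main() {
        // Compiles because the standard library provides `impl Add for u8`.
        println!("{}", double(3u8));

        // Would not compile: NoAdd has no Add impl, so the bound is unsatisfied.
        // let _ = double(NoAdd);
    }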

In effect, when the compiler encounters an expression of the form expr1 + expr2, it applies these rules:

  • If the types of expr1 and expr2 match one of the compiler's intrinsic implementations of +, the compiler emits that implementation.
  • Otherwise, it looks for an Add implementation whose operand types are compatible with expr1 and expr2 (and whose Output matches the expected result type) and, if it finds exactly one, calls that impl's add method through the trait.
  • Otherwise, if it doesn't find a single matching Add implementation, it emits an error and aborts compilation.

The + itself is the instruction to use an intrinsic, if an intrinsic is available. By giving intrinsics a higher "priority" than calling Add::add implementations, the compiler avoids generating a recursive loop trying to generate code for +.

However, + does not inherently imply Add: if the Add trait were removed from the stdlib, the + operator would still work for the types with intrinsic implementations. The macro you quoted generates Add implementations for those types so that they can be used in contexts where an Add bound is required. Without that macro, you would not be able to pass a u8 to a function whose type parameter is bounded by Add.
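For illustration, here's a made-up wrapper type that shows both halves of that rule set: Meters has no intrinsic +, so the operator resolves to its Add impl, while the + on the wrapped f64 inside that impl hits the compiler's builtin implementation, which is why the definition doesn't recurse:

    use std::ops::Add;

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Meters(f64);

    impl Add for Meters {
        type Output = Meters;

        fn add(self, other: Meters) -> Meters {
            // There is no intrinsic `+` for Meters, so `Meters + Meters` calls this method.
            // The `+` on the inner f64 values uses the compiler's builtin implementation,
            // so there is no infinite recursion.
            Meters(self.0 + other.0)
        }
    }

    fn main() {
        assert_eq!(Meters(1.5) + Meters(2.0), Meters(3.5));
    }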

The same basic process applies to [] as to +. Slices ([T] for any type T) have an intrinsic implementation of the [] indexing operator, which is resolved before any trait would be. The dereference-and-borrow maneuvering there converts the &[T] into a [T] within the scope of the expression (without moving it), then borrows a single element out of that slice using the slice's intrinsic [], returning that reference.
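As a sketch of the same idea for indexing (the EveryOther wrapper is invented for illustration): a user-defined Index impl can itself use [] on its inner data, because indexing a slice with a usize bottoms out in the intrinsic rather than looping back through Index::index:

    use std::ops::Index;

    /// A wrapper that exposes only every other element of its Vec.
    struct EveryOther(Vec<u32>);

    impl Index<usize> for EveryOther {
        type Output = u32;

        fn index(&self, i: usize) -> &u32 {
            // The `[]` here goes through Vec's Index impl down to the slice, where a
            // usize index is handled by the intrinsic slice indexing, so evaluating it
            // never loops back into EveryOther's own Index::index.
            &self.0[2 * i]
        }
    }

    fn main() {
        let v = EveryOther(vec![10, 11, 12, 13]);
        assert_eq!(v[0], 10);
        assert_eq!(v[1], 12);
    }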
