I don’t think it’s useful to frame this as “which is faster”, because these two synchronization primitives do not do the same thing, so you will use them in different ways. That matters, because the performance of a synchronization primitive depends heavily on how you use it: the number of synchronization operations you perform and the amount of memory traffic they generate both affect performance a lot.
Channels are more specialized than mutexes. So if implemented well, they should be faster than or equivalent to a naive mutex-based implementation of what they do, namely sending data across threads with a mechanism to await its reception. If this communication pattern suits your needs well (for example, when implementing a synchronous web server or a logging system), then channels are the better choice: they’re less code to write, and someone already did the performance optimization work for you.
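As a minimal sketch of that pattern, here is the standard-library `std::sync::mpsc` channel moving values from a worker thread to the main thread, with the receiver awaiting each message (the values and thread roles are made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The sender half moves into the worker thread; the receiver
    // half stays here and blocks until data arrives.
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..3 {
            // send() transfers ownership of the value to the receiver.
            tx.send(i * 10).expect("receiver hung up");
        }
        // tx is dropped here, which closes the channel.
    });

    // rx.iter() awaits each message and ends when the channel closes.
    let received: Vec<i32> = rx.iter().collect();
    worker.join().unwrap();

    assert_eq!(received, vec![0, 10, 20]);
    println!("received: {:?}", received);
}
```

Note that both the blocking receive and the wakeup logic come for free; that is the “someone already did the optimization work” part.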
Sometimes, however, the pattern of communication used by channels is not what you want. Imagine, for example, that you have a complex data structure shared between two threads, only modify small chunks of that structure at a time, and do not need to notify the other thread when you do so. This happens frequently with caches. In this scenario, mutexes are the better choice. You could emulate this pattern with channels by sending diffs of the shared data across threads, but it would be much less efficient.
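A minimal sketch of that cache scenario, assuming a hypothetical `HashMap`-based cache shared via `Arc<Mutex<…>>` (the keys and update logic are invented for illustration): each thread briefly locks, mutates its own small chunk, and unlocks, with no notification sent to the other thread.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Hypothetical shared cache; neither thread awaits the other.
    let cache: Arc<Mutex<HashMap<String, u64>>> =
        Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..2)
        .map(|t| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                for i in 0..100u64 {
                    // Lock, mutate one small entry, unlock immediately
                    // when the guard goes out of scope.
                    let mut guard = cache.lock().unwrap();
                    *guard
                        .entry(format!("thread{t}-key{}", i % 4))
                        .or_insert(0) += i;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // 4 distinct keys per thread, 2 threads.
    let guard = cache.lock().unwrap();
    assert_eq!(guard.len(), 8);
    println!("cache entries: {}", guard.len());
}
```

The key property is that each critical section is tiny: only the touched entry is ever “communicated”, not a diff of the whole structure.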
And sometimes, even mutexes are too specialized/high-level for your needs, and an atomics-based synchronization protocol will be faster and more appropriate for you. This is the case if, for example, you simply want thread A to tell thread B when it’s done doing something, and thread B does not need to await this event. In this case, an AtomicBool will be most efficient.
As you can see, which synchronization mechanism is best heavily depends on what it is that you are trying to do!