A &str is a slice of bytes (u8) that uses the variable-length UTF-8 encoding (i.e. each character may take up 1, 2, 3, or 4 bytes).
A &[char] is a slice of chars, where each char is a fixed-size 4-byte Unicode "scalar value" (essentially a code point).
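To make that concrete, here's a small sketch of the difference (the exact numbers assume the é is the single precomposed code point U+00E9):

```rust
fn main() {
    // &str: UTF-8 encoded bytes ("é" takes 2 bytes, the ASCII letters 1 each)
    let s: &str = "h\u{e9}llo"; // "héllo"

    // &[char]: every element is a fixed-size 4-byte char
    let chars: Vec<char> = s.chars().collect();

    println!("{}", s.len());     // 6  - length in *bytes*
    println!("{}", chars.len()); // 5  - number of chars (code points)
    println!("{}", std::mem::size_of_val(chars.as_slice())); // 20 - 5 chars * 4 bytes
}
```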
Despite what most common programming languages would have you believe, a string isn't just an array of characters. To properly explain the distinction would require going down a very deep rabbit hole and I'll probably butcher it completely. So instead, I'll refer you to this excellent explanation from Tom Scott:
You can't even assume that a single code point corresponds to a single glyph (very loosely - the "letter" you would draw), because things like accents and skin tone modifiers are separate code points that get added after another code point to modify it.
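For example, here's an é built out of a plain e plus a combining accent - one glyph on screen, but two code points and three bytes (if you want to iterate over glyph-like "grapheme clusters" you generally reach for a crate like unicode-segmentation):

```rust
fn main() {
    // 'e' followed by U+0301 COMBINING ACUTE ACCENT - two code points, one glyph
    let s = "e\u{301}";

    println!("{}", s);                 // renders as: é
    println!("{}", s.chars().count()); // 2 - two code points
    println!("{}", s.len());           // 3 - 1 byte for 'e' + 2 bytes for the accent
}
```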
Even though they have identical binary representations, there are a couple of reasons why &str and &[u8] are two different types:
A bunch of bytes and a string are two different logical concepts, and making them different types means you can use the type system to ensure they don't get accidentally mixed up
You can give the str type methods specific to a string (e.g. lines() and trim()) and implement the Display trait
The str type uses unsafe code internally, so if users had direct mutable access to the underlying bytes they could accidentally break str's invariant that it always holds valid UTF-8 and cause UB (e.g. you modify the last byte to look like the start of a multi-byte character and now some unsafe code will read past the end of the string)
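As a rough sketch of that last point: the safe way to go from &[u8] to &str runs a UTF-8 validity check up front, so a truncated multi-byte character gets rejected instead of becoming someone else's UB later on:

```rust
fn main() {
    // "hi" followed by 0xE2, which looks like the start of a 3-byte UTF-8 sequence
    let bytes: &[u8] = &[b'h', b'i', 0xE2];

    // The safe conversion from &[u8] to &str validates the bytes first,
    // so the truncated multi-byte character is caught here rather than
    // tripping up str's unsafe internals down the line.
    match std::str::from_utf8(bytes) {
        Ok(s) => println!("valid UTF-8: {}", s),
        Err(e) => println!("not valid UTF-8: {}", e),
    }
}
```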
The same logic explains why std::path::Path is a separate type from str, even though most languages are happy to treat strings and file paths as the same thing.
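A tiny sketch of what that buys you (paths aren't even guaranteed to be valid UTF-8 on every platform, which is part of why Path wraps OsStr rather than str):

```rust
use std::path::Path;

fn main() {
    let p = Path::new("/tmp/report.txt");

    // Path gets path-specific methods, just like str gets string-specific ones
    println!("{:?}", p.extension()); // Some("txt")
    println!("{:?}", p.file_stem()); // Some("report")

    // And the type system stops you from silently treating it as plain text:
    // converting back to &str is fallible.
    println!("{:?}", p.to_str());    // Some("/tmp/report.txt")
}
```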