No, the memory representation is different in general: a &str contains UTF-8 encoded bytes, but a char is a Unicode scalar value. In practice that means a char is stored as a 32-bit integer whose valid values are the scalar values (at most U+10FFFF, and never a surrogate).
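A quick sketch of the difference (not the OP's code, just an illustration; 'é' is U+00E9):

```rust
fn main() {
    // A char is always 4 bytes, no matter which character it holds.
    assert_eq!(std::mem::size_of::<char>(), 4);

    // A &str stores UTF-8 bytes: 'a' takes 1 byte there, 'é' takes 2.
    assert_eq!("a".len(), 1);
    assert_eq!("é".len(), 2);

    // Cast to u32, a char is its scalar value, not its UTF-8 bytes.
    assert_eq!('é' as u32, 0xE9);
    assert_eq!("é".as_bytes(), [0xC3, 0xA9]);
}
```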
ASCII characters happen to have a single-byte UTF-8 encoding — they're encoded as the byte with their value. So the above works for ASCII characters, but it breaks for any other character, whose UTF-8 encoding takes multiple bytes.
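For example, encoding into a stack buffer with encode_utf8 (again just an illustrative sketch; 'λ' is U+03BB):

```rust
fn main() {
    let mut buf = [0u8; 4];
    // ASCII: a single UTF-8 byte, equal to the scalar value.
    assert_eq!('a'.encode_utf8(&mut buf).as_bytes(), [97]);
    // Non-ASCII: multiple bytes, none of which is the scalar value.
    assert_eq!('λ'.encode_utf8(&mut buf).as_bytes(), [0xCE, 0xBB]);
    assert_eq!('λ' as u32, 0x3BB);
}
```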
It doesn't even work for ASCII characters on every platform, because of endianness. It working in the OP depends on 'a' being laid out at the byte level as 97 0 0 0 (little-endian) and not 0 0 0 97 (big-endian).
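You can see the dependence without any unsafe code, e.g.:

```rust
fn main() {
    let n = 'a' as u32; // 97
    // The byte layout of the same u32 differs by byte order:
    assert_eq!(n.to_le_bytes(), [97, 0, 0, 0]); // little-endian layout
    assert_eq!(n.to_be_bytes(), [0, 0, 0, 97]); // big-endian layout
    // to_ne_bytes() gives whichever of the two the machine actually uses.
    println!("{:?}", n.to_ne_bytes());
}
```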