Syntax: Setting Endianness in byteorder crate


Hi everyone,

Taking the first example from the byteorder crate docs, there is a syntax I haven’t seen before: rdr.read_u16::<BigEndian>().unwrap()

The bit I’m confused about is ::<BigEndian>, I see what it does - but I wouldn’t know how to do it myself. I’ve looked in the source code to try to figure out what it is and where it’s coming from - but I’m struggling.

Please can someone provide some pointers to some docs?

As a secondary point, I’d be interested in whether I can set this to be LittleEndian or BigEndian at runtime - as I can be dealing with files generated using both techniques.

Appreciate the help :koala:


That syntax is known as the “turbofish” (::<>); basically, it lets you specify a type for a generic method when it can’t be deduced.
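As a quick illustration (standard library only, nothing from byteorder), the same syntax shows up any time the compiler can’t infer a generic parameter on its own:

```rust
fn main() {
    // parse() is generic over its return type; without an annotation the
    // compiler has no way to know which integer type you want.
    let n = "42".parse::<u32>().unwrap();

    // collect() is generic over the container it builds; the turbofish
    // picks Vec<_> and lets the element type be inferred.
    let doubled = (1..=3).map(|x| x * 2).collect::<Vec<_>>();

    assert_eq!(n, 42);
    assert_eq!(doubled, vec![2, 4, 6]);
}
```

An equivalent alternative is annotating the binding instead: let n: u32 = "42".parse().unwrap();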

If you want to write a function which is generic over byte order, you can do something like this:

fn f<R: ReadBytesExt, T: ByteOrder>(rdr: &mut R) -> (u16, u16) {
    (rdr.read_u16::<T>().unwrap(), rdr.read_u16::<T>().unwrap())
}

fn g<R: ReadBytesExt>(rdr: &mut R) -> (u16, u16) {
    f::<_, BigEndian>(rdr)
}

Thanks for responding, Eli. I wonder if you could clear a couple of things up for me:

  • What does the _ in f::<_, BigEndian>(rdr) mean?
  • Can the type of BigEndian/LittleEndian be determined at run-time?

Or would it be better to add:

fn h<R: ReadBytesExt>(rdr: &mut R) -> (u16, u16) {
    f::<_, LittleEndian>(rdr)
}

…and then choose to call g or h at runtime?

The first byte of my file tells me if it was written with little-endian or big-endian.


The _ is just an inferred type; instead of having to write out the type, the compiler figures it out from the surrounding code. f::<R, LittleEndian>(rdr) would have the same meaning.
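A std-only sketch of the same pattern (first here is a stand-in for f, not part of byteorder): the iterator type is inferred from the argument, so only the second parameter needs spelling out:

```rust
// A generic function with two type parameters, analogous to
// f::<R, T> in the byteorder example above.
fn first<I: Iterator<Item = T>, T>(mut it: I) -> Option<T> {
    it.next()
}

fn main() {
    let v = vec![10u32, 20, 30];
    // `_` lets the compiler infer the iterator type from `v.into_iter()`;
    // writing out the concrete iterator type instead of `_` would mean
    // exactly the same thing.
    let x = first::<_, u32>(v.into_iter());
    assert_eq!(x, Some(10));
}
```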

Generic types have to be resolved at compile-time. Adding something like h is essentially the right approach. You’ll end up with something like:

let file_big_endian = rdr.read_u8().unwrap() != 0;
let parsed_file = if file_big_endian {
    parse_file::<_, BigEndian>(rdr)
} else {
    parse_file::<_, LittleEndian>(rdr)
};
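Here is a complete, runnable sketch of that dispatch, using only the standard library: a tiny Endian trait stands in for byteorder’s ByteOrder, and parse_file is a hypothetical name for your own parsing function. The same shape works unchanged with the real crate.

```rust
use std::io::{Cursor, Read};

// Minimal stand-in for byteorder's ByteOrder trait, so this sketch
// compiles without external crates.
trait Endian {
    fn u16_from(bytes: [u8; 2]) -> u16;
}
struct Be;
struct Le;
impl Endian for Be {
    fn u16_from(b: [u8; 2]) -> u16 { u16::from_be_bytes(b) }
}
impl Endian for Le {
    fn u16_from(b: [u8; 2]) -> u16 { u16::from_le_bytes(b) }
}

// Generic over byte order, like f/g/h in the thread.
fn parse_file<R: Read, T: Endian>(rdr: &mut R) -> (u16, u16) {
    let mut b = [0u8; 2];
    rdr.read_exact(&mut b).unwrap();
    let a = T::u16_from(b);
    rdr.read_exact(&mut b).unwrap();
    (a, T::u16_from(b))
}

fn main() {
    // First byte flags the endianness, then two u16 values follow.
    let data = [1u8, 0x12, 0x34, 0x56, 0x78]; // 1 => big-endian
    let mut rdr = Cursor::new(data);
    let mut flag = [0u8; 1];
    rdr.read_exact(&mut flag).unwrap();

    // The runtime branch happens here; each arm is a separate,
    // compile-time-specialized instantiation of parse_file.
    let parsed = if flag[0] != 0 {
        parse_file::<_, Be>(&mut rdr)
    } else {
        parse_file::<_, Le>(&mut rdr)
    };
    assert_eq!(parsed, (0x1234, 0x5678));
}
```

Both instantiations are compiled; the flag byte just picks which one runs, so no runtime type machinery is needed.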