I'm writing some code in Rust/wasm32 interacting with wgsl / (webgpu, webgl2); and am wondering if both wasm32 & webgl/webgpu are guaranteed to be little endian.
WASM is little endian.
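Concretely, the WebAssembly spec fixes memory byte order to little endian, so on wasm32 Rust's native-order byte conversions always agree with the little-endian ones. A minimal sketch (the native-order assertion also holds on common little-endian hosts like x86-64):

```rust
fn main() {
    let x: u32 = 0x0A0B_0C0D;
    // Little endian: least significant byte comes first in memory.
    assert_eq!(x.to_le_bytes(), [0x0D, 0x0C, 0x0B, 0x0A]);
    // On wasm32 the native byte order is defined to be little endian,
    // so to_ne_bytes() always matches to_le_bytes() there.
    assert_eq!(x.to_ne_bytes(), x.to_le_bytes());
    println!("native bytes: {:?}", x.to_ne_bytes());
}
```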
WebGL (and WebGPU?) use host endianness, so no guarantees.
I honestly would expect WebGL and WebGPU to use little endian too where it matters. Neither specification says anything about endianness, but I'd hope browsers on big-endian hosts don't simply break websites that were written with little endian in mind.
A big part of GPU APIs is that you describe what the buffer layout is, as a sort of manually provided reflection/RTTI. I don't have a citation for this, but if you say your buffer contains "Float32x3" (e.g. a vec3<f32>), that should always mean the native endianness of the API. For wasm32, that would be little endian, because wasm32 is little endian.
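To make that concrete, here's a hedged sketch of what a Float32x3 attribute implies on the Rust side: three consecutive 4-byte floats per vertex, serialized little endian (which on wasm32 is also the native order). The pack_vertices helper name is my own, not from any API:

```rust
// Serialize vec3<f32> vertex data into the 12-bytes-per-vertex layout
// that a Float32x3 attribute describes, explicitly little endian.
fn pack_vertices(vertices: &[[f32; 3]]) -> Vec<u8> {
    let mut bytes = Vec::with_capacity(vertices.len() * 12);
    for v in vertices {
        for component in v {
            bytes.extend_from_slice(&component.to_le_bytes());
        }
    }
    bytes
}

fn main() {
    let bytes = pack_vertices(&[[1.0, 0.0, -1.0]]);
    assert_eq!(bytes.len(), 12);
    // 1.0f32 has the bit pattern 0x3F800000; little endian puts 0x00 first.
    assert_eq!(&bytes[0..4], &[0x00, 0x00, 0x80, 0x3F]);
    println!("{:02X?}", bytes);
}
```

On wasm32 the explicit to_le_bytes and a plain memcpy of the f32 slice produce identical bytes, which is exactly why this usually "just works".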
If the OS and/or GPU is big endian, some part of the stack will do the necessary byteswapping and/or endian aware interfacing. Note that shader dispatch already does data type conversions (e.g. between floating and quantized values), so an endianness swap wouldn't be all that special at the raw driver level.
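The byteswap itself is cheap and mechanical; a sketch of the kind of fixup a driver or runtime layer could apply for a big-endian host (flipping each f32's bytes without reinterpreting the value):

```rust
// Flip the byte order of each f32 in place. Swapping twice round-trips,
// since it's a pure permutation of the underlying bits.
fn swap_f32_endianness(buf: &mut [f32]) {
    for x in buf.iter_mut() {
        *x = f32::from_bits(x.to_bits().swap_bytes());
    }
}

fn main() {
    let mut data = [1.0f32, 2.5, -3.0];
    let original = data;
    swap_f32_endianness(&mut data);
    swap_f32_endianness(&mut data); // round-trips back to the original
    assert_eq!(data, original);
}
```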
Keep in mind that the web platform added typed arrays (Float32Array etc.), which use native endianness, and DataView, which lets you control the endianness, in order to support WebGL — specifically so the API doesn't have to handle byte swapping.
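The same two options exist on the Rust side, by analogy (this is my comparison, not anything from a spec): to_ne_bytes is like writing through a Float32Array (host order), while to_le_bytes/to_be_bytes are like DataView.setFloat32 with an explicit littleEndian flag:

```rust
fn main() {
    let x = 1.0f32;
    let native = x.to_ne_bytes(); // Float32Array-style: host byte order
    let little = x.to_le_bytes(); // DataView-style: explicit little endian
    let big = x.to_be_bytes();    // DataView-style: explicit big endian
    assert_eq!(little, [0x00, 0x00, 0x80, 0x3F]);
    assert_eq!(big, [0x3F, 0x80, 0x00, 0x00]);
    // Native order only matches one of them, depending on the host;
    // on wasm32 it is always the little-endian one.
    if cfg!(target_endian = "little") {
        assert_eq!(native, little);
    } else {
        assert_eq!(native, big);
    }
    println!("{:02X?} {:02X?}", little, big);
}
```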
A big endian platform with a browser and hardware rendering would probably have plenty of broken websites that just slap little endian buffers straight into the GPU. I know I've done that.