postMessage(js_sys::SharedArrayBuffer): O(1) or O(n) time?

Consider a js_sys::(Shared)ArrayBuffer of length n. Here is my current understanding:

  1. postMessage on a js_sys::ArrayBuffer without a transfer list would use the structured clone algorithm (The structured clone algorithm - Web APIs | MDN) and is O(n)

  2. postMessage on a js_sys::ArrayBuffer with a transfer list would be O(1)

  3. js_sys::SharedArrayBuffer cannot be transferred

So now my question is: calling postMessage on a SharedArrayBuffer -- is it guaranteed to be O(1), or can it also be O(n)?
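As a side note, points 1-3 can be checked directly in Node.js, whose global MessageChannel follows the same structured clone / transfer semantics as the browser's postMessage. A minimal sketch (assuming Node.js >= 15) showing that a transfer is a move, not a copy:

```javascript
// Transferring an ArrayBuffer detaches it from the sender in O(1),
// instead of deep-copying its n bytes.
const { port1, port2 } = new MessageChannel();

const buf = new ArrayBuffer(1024);
console.log(buf.byteLength);   // 1024 before the transfer

// Pass buf in the transfer list: ownership moves, nothing is copied.
port1.postMessage(buf, [buf]);

console.log(buf.byteLength);   // 0 -- the sender's buffer is now detached

port1.close();
port2.close();
```

A detached buffer reports byteLength 0, which is how you can observe that the sender really gave the memory away rather than cloning it.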

The structured clone algorithm accepts SharedArrayBuffer objects and typed arrays mapped onto SharedArrayBuffer objects. In both cases, the SharedArrayBuffer object is transmitted to the receiver resulting in a new, private SharedArrayBuffer object in the receiving agent (just as for ArrayBuffer). However, the shared data block referenced by the two SharedArrayBuffer objects is the same data block, and a side effect to the block in one agent will eventually become visible in the other agent.

  1. Thanks.

  2. Do you interpret that as O(1) or O(n)? Personally, I'm not sure; here are the two sides of the argument I see:


  1. By the name "Shared"ArrayBuffer, I expect it to be O(1).

The phrase "The structured clone algorithm accepts SharedArrayBuffer objects" makes it sound like it might be O(n). Structured clone is basically a "deep copy"; if we were just doing an O(1) shallow copy, it is not obvious to me why we would need structured clone at all.

The creation of "a new, private SharedArrayBuffer object in the receiving agent" also makes me think it will be O(n).

"the shared data block referenced by the two SharedArrayBuffer objects is the same data block" is the first evidence in that blurb that makes me think it's O(1) -- i.e. the two are sharing some region of memory.

"a side effect to the block in one agent will eventually become visible in the other agent", unfortunately, once again makes me think it's O(n) -- if they were sharing the same memory block, why would it "eventually become visible in the other agent" instead of "immediately become visible in the other agent"?


Sorry if this comes off as pedantic; I'm not sure how to interpret this.

This is a fundamental property of parallel systems. Since each thread has its own timeline of memory accesses, the term "immediately" is not well defined across multiple threads. This is why cache invalidation is such a hard problem.
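For what it's worth, this is also why the Atomics API exists: plain element reads and writes on shared memory come with weak visibility guarantees, while Atomics.store / Atomics.load are synchronizing operations with well-defined ordering. A single-threaded sketch, just to show the API shape:

```javascript
// Atomics gives well-defined visibility/ordering on shared memory,
// which plain indexed reads/writes do not guarantee across agents.
const sab = new SharedArrayBuffer(4);
const ia = new Int32Array(sab);

Atomics.store(ia, 0, 7);          // synchronizing write
console.log(Atomics.load(ia, 0)); // 7 -- synchronizing read
```

In a real multi-worker setup, the "eventually" in the MDN blurb is what you get from plain writes; pairing Atomics operations (or Atomics.wait / Atomics.notify) is how you establish a definite happens-before edge between agents.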

I see. So you are attributing the "eventually" to L1 / L2 / L3 caches. I guess your intuition is on the O(1) rather than the O(n) side of things?

Yeah. Though I can't find an explicit guarantee, I don't see any reason, given the documented behavior, for an implementation to make it O(n).
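One way to convince yourself empirically: structuredClone uses the same serialization as postMessage, and cloning a SharedArrayBuffer yields a new wrapper object whose writes are visible through the original -- i.e. only the O(1) wrapper is cloned, never the n bytes of the block. A sketch, assuming a runtime where SharedArrayBuffer and structuredClone are available (e.g. Node.js >= 17):

```javascript
const sab = new SharedArrayBuffer(8);

// Same algorithm postMessage uses for a SharedArrayBuffer.
const clone = structuredClone(sab);

console.log(sab === clone);            // false -- a new wrapper object...

new Uint8Array(sab)[0] = 42;
console.log(new Uint8Array(clone)[0]); // 42 -- ...over the same data block
```

So the "clone" in "structured clone" refers to the SharedArrayBuffer object itself (a handle of fixed size), which matches the "new, private SharedArrayBuffer object ... same data block" wording in the MDN quote above.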
