RtcPeerConnection and 'condvar wait not supported'

Dear forum,

I've already received help here more than once, but every time I was able to present a failing test. This time that seems difficult, so I'm asking blindly: the code is compiled to wasm and interacts over the network with another peer, and only after a couple of interactions do I see a condvar wait not supported panic.

As the panic happens in wasm (run in node), the stack trace is unusable, and I had to resort to log::debug!() to find the culprit. From what I gather, the problem is somewhere in the following code, but I have no idea why there is a condvar wait in there. The "Offer created" is never printed.

  futures::executor::block_on(async {
    // The config sets IceServers
    let rp_conn = RtcPeerConnection::new_with_configuration(&config)?;
    // Lots of other stuff happening
    log::debug!("Creating offer");
    let offer = JsFuture::from(rp_conn.create_offer())
      .await?;
    log::debug!("Offer created");
    // the `?` above means this async block has to evaluate to a Result
    Ok::<_, JsValue>(())
  });

Does anyone have an idea why that would try to block on a condvar?

I see that panic message in the standard library, under the "unsupported" sys implementation here.

It looks like wasm pulls in that module when compiled without target_feature = "atomics", as seen here.

Thanks a lot - I rewrote most of my code and ran into this problem again. It happens whenever I use futures::executor::block_on.

Previously I tried wasm_bindgen_futures::spawn_local, which works, but as the future needs to be 'static, it needs some more code around it, so I'd prefer the call through futures::executor. But then I get the above error.

I tried to find out how to compile with the atomics feature, but the only reference I could find was Parallel Raytracing - The `wasm-bindgen` Guide, and that seems to work only on nightly. Is there a way to make it work on stable?

Perhaps using LocalPool from futures will avoid using any unimplemented synchronization primitives. I don't know enough about the futures implementation to say for sure, but that would be my guess.
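
Something like this is what I have in mind (an untested sketch, just to illustrate the idea):

  use futures::executor::LocalPool;
  use futures::task::LocalSpawnExt;

  // Drive the future on a single-threaded LocalPool instead of block_on,
  // in the hope that it avoids the Condvar-based parker.
  let mut pool = LocalPool::new();
  pool.spawner()
    .spawn_local(async {
      // ... create the RtcPeerConnection and await create_offer here ...
    })
    .expect("spawn failed");
  pool.run();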

tl;dr It's fundamentally impossible.

In JS on web browsers you can't block on IO operations synchronously. This is intentional API design, as blocking the UI thread would freeze the web page and make it unresponsive. You can't even work around it with a busy loop, as the loop itself blocks the code that completes the IO operation. Wasm on web browsers runs on the very same thread as the JS and shares its limitations.

If I use wasm_bindgen_futures::spawn_local it works - but the quirk is that it takes a 'static future, so I need to make one of these Inner structures and pass it through an Arc<Mutex<>> into the spawned future.
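
Roughly what that ends up looking like in my case - the Inner struct and its field here are just placeholders, not my real types:

  use std::sync::{Arc, Mutex};
  use wasm_bindgen::JsValue;
  use wasm_bindgen_futures::{spawn_local, JsFuture};
  use web_sys::RtcPeerConnection;

  // Placeholder for the shared state the spawned future writes back into.
  struct Inner {
    local_offer: Option<JsValue>,
  }

  fn start_offer(inner: Arc<Mutex<Inner>>, rp_conn: RtcPeerConnection) {
    // The future takes ownership of the Arc (cloned by the caller),
    // which is how it satisfies spawn_local's 'static bound.
    spawn_local(async move {
      match JsFuture::from(rp_conn.create_offer()).await {
        Ok(offer) => inner.lock().unwrap().local_offer = Some(offer),
        Err(e) => log::error!("create_offer failed: {:?}", e),
      }
    });
  }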

From what I can see in my app, the UI stays responsive during these calls. Then again, they are very short (around 200ms), so I can't really tell whether it freezes for those 200ms.

For future reference: I tried LocalPool, but that doesn't work either. It also panics when calling JsFuture::from(...).await.

And I didn't find out how to recompile the wasm with the atomics feature. It would probably require going through nightly, and I'm firmly convinced I don't want to do that...

It can work if all the operations happen synchronously without pausing. So technically it may work in some less useful situations, but in practical code it won't.

For this specific example the only .await point is creating the offer. Recent browser versions tend to have the WebRTC trickle ICE option enabled, which allows collecting and exchanging candidates in the background, so the initial offer can be constructed synchronously without touching any IO.

Hmm - does that mean that, in general, you cannot (in practical code, as you write) have Rust code that compiles to wasm and depends on IO? But frameworks like yew do offer async handling of the code.

Now I'm confused (again).

No, it means you can't have blocking IO on the web. You can still .await IO operations in an async fn. Some libraries may not support this kind of environment, though.
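
In other words, something like this is fine, because nothing ever blocks waiting for the future - the browser's event loop resumes it once the Promise resolves (a sketch, assuming the usual web_sys / wasm_bindgen_futures types):

  use wasm_bindgen::JsValue;
  use wasm_bindgen_futures::{spawn_local, JsFuture};
  use web_sys::RtcPeerConnection;

  // The .await lives inside an async fn; the thread is never blocked.
  async fn make_offer(rp_conn: &RtcPeerConnection) -> Result<JsValue, JsValue> {
    JsFuture::from(rp_conn.create_offer()).await
  }

  // From synchronous code, hand the future to the event loop
  // instead of calling block_on on it.
  fn kick_off(rp_conn: RtcPeerConnection) {
    spawn_local(async move {
      match make_offer(&rp_conn).await {
        Ok(offer) => log::debug!("Offer created: {:?}", offer),
        Err(e) => log::error!("create_offer failed: {:?}", e),
      }
    });
  }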
