I assumed Arc<Mutex<T>> always allows T to be shared between threads

I have a struct that contains a *mut u8, so the struct cannot be safely shared between threads. However, I assumed that wrapping the struct in an Arc and a Mutex would always guarantee that it is safe to share between threads. Why is this not the case here?


struct Foo {
    data: *mut u8,
}

fn bar<T: Send>(foo: T) {}

fn main() {
    let foo = Foo {
        data: std::ptr::null_mut(),
    };
    let foo2 = std::sync::Arc::new(std::sync::Mutex::new(foo));
    bar(foo2);
}

This does not compile:

   = help: within `Foo`, the trait `std::marker::Send` is not implemented for `*mut u8`
   = note: required because it appears within the type `Foo`
   = note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Mutex<Foo>`
   = note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Arc<std::sync::Mutex<Foo>>`

How can I make this kind of structure "thread safe"? I don't actually own the content of the structure, as it is provided by a library.

Thanks in advance for any hints.

You need a Send bound on the inner T, i.e., it must be safe for an owned instance (or an exclusive reference to one) to cross a thread boundary.
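For reference, std only implements Send for Mutex<T> when T: Send (and Arc<T>: Send requires T: Send + Sync), so the wrapper cannot add a Send capability the inner type lacks. A minimal sketch showing this, using a hypothetical assert_send helper:

```rust
use std::sync::{Arc, Mutex};

// Hypothetical helper: only compiles when T is Send.
fn assert_send<T: Send>(_: &T) {}

fn main() {
    // i32 is Send, so Arc<Mutex<i32>> is Send too:
    let ok = Arc::new(Mutex::new(0_i32));
    assert_send(&ok);

    // *mut u8 is !Send, so Arc<Mutex<*mut u8>> is also !Send;
    // uncommenting the next lines reproduces the compile error:
    // let bad = Arc::new(Mutex::new(std::ptr::null_mut::<u8>()));
    // assert_send(&bad);
}
```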

The simplest counterexample would be a shared reference to something that isn't safe to share across threads, such as Cell: Cell : !Sync ⇔ &'_ Cell : !Send.


If this wasn't the case, the following data race would compile:

let x = Cell::new(0);
let at_x = &x;
let arc_mutex = Arc::new(Mutex::new(at_x)); // T = &'_ Cell
::crossbeam::thread::scope(|thread_spawner| {
    thread_spawner.spawn(move |_| {
        loop { at_x.set(at_x.get() + 1) } // unsynchronized mutation
    });
    thread_spawner.spawn(move |_| {
        let at_x = arc_mutex.lock().unwrap();
        loop { at_x.set(at_x.get() + 1) } // mutation behind the lock
    });
}).unwrap();

Well, my assumption was that wrapping T in Arc and Mutex would add the Send bound to the type T. If this is not the case, how can I make T Send when T is defined within a library I cannot change and T contains a *mut u8?

Is there any wrapping type that can be used?

There is: https://docs.rs/send_wrapper/0.4.0/send_wrapper/, but this just converts a compile-time error into a runtime error.

If you’re sure that your type is actually safe to share, you can implement Send or Sync for it directly, as described in the Rustonomicon.


But to really be sure of this for an external dependency, you should audit its code to confirm it is the case, and then pin the dependency in your dependency tree / Cargo.toml:

that-external-crate = "=x.y.z"

so that you opt out of even patch updates, since those would be allowed to make the types non-thread-safe: the library never guaranteed thread safety within the type system.

So, a more future-proof approach, for both yourself and others, is to submit a PR to the repo of the external crate adding the unsafe impl Send for ..., ideally with a lengthy /// Safety doc-comment attached to it explaining why it is sound (e.g., no thread-local storage, and no shared references to !Sync data).

Then, while waiting for the PR to be merged, you can already use your fork via the [patch] section of your Cargo.toml:

[patch.crates-io]
that-external-crate = { git = "url of your fork", branch = "branch of the PR" }

If a type is !Send, that's a good indicator that it isn't sound for ownership of the type to be passed to another thread, and you shouldn't try to force it by wrapping it in Arc<Mutex<T>> and slapping an unsafe impl Send for MyWrapper {} on it.

For example, imagine the chaos you could cause by being able to move an Arc<Rc<T>> to another thread (or, more realistically, a type which may contain an Rc one or two levels down). Each thread could clone the Rc<T>, and now you've got a data race on the Rc's reference count.
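The distinction is visible with std alone: Rc uses non-atomic reference counts and is !Send, while Arc uses atomic counts and may cross threads. A small sketch (not the wrapper scenario above, just the underlying rule):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let rc = Rc::new(0);
    // Does not compile if uncommented: `Rc<i32>` cannot be sent between
    // threads, precisely because its reference count is not atomic.
    // thread::spawn(move || drop(rc));
    drop(rc);

    // Arc's reference count is atomic, so sending a clone is fine:
    let arc = Arc::new(0);
    let arc2 = Arc::clone(&arc);
    thread::spawn(move || drop(arc2)).join().unwrap();
    assert_eq!(*arc, 0);
}
```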

The best course of action is to make an issue to the upstream library asking why something is !Send and if it is deemed sound to implement Send for it.


Well, thanks for all your answers and explanations; I now grasp why an automatic Send for an arbitrary type might be dangerous.

The crate in question is the web-sys crate, and the structure in question is WebGlRenderingContext, which is just a wasm binding to a JavaScript class, so I don't expect anything can be changed there to suit my requirements.

I'm actually trying to use the WebGlRenderingContext in an ECS application, where I'd like to store the WebGlRenderingContext reference as part of the system responsible for rendering/drawing, and the ECS I'm using requires the System to be Send for optional parallel processing.
I might need to look for other approaches to deal with this.

I’m not too familiar with WebGL, but lots of graphics code is designed to only work from one privileged thread. One way to deal with this is to use a crate like generational-arena to keep the actual objects on the graphics thread and store handles in the ECS system.

Hi, thanks for the hint with the generational-arena. I've had a quick look into it, and at first glance it seems reasonable. However, thinking about it in more detail: even if I share only the ArenaIndex within the ECS system, I'd need a reference to the Arena itself as well, right? Or would the arena be treated as a global static in this scenario?

Only the renderer and other graphics code needs the reference to the arena, so it can live on the stack of the graphics thread, or as a thread_local static.
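A sketch of that pattern using only std, where a plain Vec stands in for generational-arena (which would additionally use generational indices to catch stale handles); all names here are illustrative:

```rust
use std::cell::RefCell;

struct Texture {
    name: String,
}

// The arena lives in a thread_local, so only the graphics thread
// can touch the actual Texture objects.
thread_local! {
    static TEXTURES: RefCell<Vec<Texture>> = RefCell::new(Vec::new());
}

// Called on the graphics thread; the returned id is a plain usize,
// which is Send and can be stored anywhere in the ECS.
fn load_texture(name: &str) -> usize {
    TEXTURES.with(|t| {
        let mut t = t.borrow_mut();
        t.push(Texture { name: name.to_string() });
        t.len() - 1
    })
}

// Also graphics-thread-only: resolve a handle back to data.
fn texture_name(id: usize) -> String {
    TEXTURES.with(|t| t.borrow()[id].name.clone())
}

fn main() {
    let id = load_texture("grass");
    assert_eq!(texture_name(id), "grass");
}
```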

So the lifecycle of a texture, for example, would be something like this:

  • An asset loader (running on the graphics thread) generates the texture and stores it in the arena. It puts the id into the ECS along with metadata (like the texture’s name).
  • Game code, running anywhere, copies the texture id onto some Sprite object to tell the renderer what it should look like.
  • The renderer runs on the graphics thread, so it has access to both the ID from the sprite object and the arena. This lets it get the actual texture object it needs to do its job.
  • At a level change, cleanup code on the graphics thread can ask the ECS which textures aren’t needed anymore and deallocate them.
