Reproduce repo:
DennisZhangOiler/rust-deadlock
It would be much more polite if you described your code and the problem.
One observation: your sleep is blocking inside an async function. But this should not lead to a deadlock. Why are you using async at all? There is no await.
If you feel offended, my apologies; to me, code explains more than anything, so I made a repo, but yes, I should be more clear.
The thing is, this code runs smoothly on amd64 (Linux localhost 5.15.153.1-microsoft-standard-WSL2+ #2 SMP Mon Aug 5 11:37:02 CST 2024 x86_64 x86_64 x86_64 GNU/Linux), but if you cross-compile it to aarch64 using `cross`, it randomly gets stuck in a deadlock, which is strange. I cannot see how any deadlock could happen here, and I am wondering whether there is a compiler or glibc issue.
And the reason I need async is that I need to use spawn :)
Do not use blocking operations in async code.
No need to apologize. What I was hinting at is that you are more likely to get useful feedback if you give more information. You have to take into consideration that you have to convince people who lurk here in their free time to spend time on your question.
I do not see an error that could lead to a deadlock in your code (I might be wrong), but you have several complicated dependencies that might misbehave on aarch64. Besides your mutex and Python's GIL, there is a mutex around Rust's stdout, and tokio also has to use some synchronization internally. I would try to get rid of tokio; you should be able to use `std::thread::spawn`. If you need async code, you should read and follow Alice's advice.
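To illustrate the suggested switch away from tokio, here is a minimal std-only sketch using `std::thread::spawn` with a shared `std::sync::Mutex`; the counter, the thread counts, and the `count_with_threads` name are made up for illustration, not taken from the original repo:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n_threads` OS threads that each add `per_thread` to a shared
// counter behind a std Mutex, then join them all and return the total.
fn count_with_threads(n_threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // Lock, mutate, and drop the guard at the end of the statement.
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap(); // propagates panics from worker threads
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    let total = count_with_threads(4, 1000);
    assert_eq!(total, 4000);
    println!("count = {total}");
}
```

Blocking calls like `std::thread::sleep` are fine in this model, because each OS thread blocks only itself.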
I don't think it's due to `std::thread::sleep`; I switched to `tokio::time::sleep` and it still happens. Also, tokio uses green threads, right? So even if `std::thread::sleep` blocks the current thread, it should not block the whole program; the other spawned coroutines should be able to run on other green threads. It's more like a mutex deadlock in the spawned coroutines, which is weird, because a closure executed under the GIL should be atomic. Also, it doesn't explain why the code works on amd64 but fails on aarch64.
Rust doesn't have the concept of a "green thread"; instead its async infrastructure is built on top of `std::future::Future`, which is an object that gets polled to make progress. A core assumption about futures is that polling will execute quickly and set things up so the future is polled again when something interesting happens. If one future blocks, the executor polling it will be blocked and unable to poll anything else.
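The polling model described above can be sketched by driving a future by hand. The `TwoPoll` future and the no-op waker plumbing below are purely illustrative (this is roughly what an executor does for you):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future that needs two polls before it is Ready.
struct TwoPoll {
    polls: u32,
}

impl Future for TwoPoll {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        self.polls += 1;
        if self.polls < 2 {
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending // return quickly instead of blocking
        } else {
            Poll::Ready(self.polls)
        }
    }
}

// Minimal no-op waker, just enough to call poll() by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = TwoPoll { polls: 0 };
    let mut pinned = Pin::new(&mut fut); // TwoPoll is Unpin
    assert_eq!(pinned.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(pinned.as_mut().poll(&mut cx), Poll::Ready(2));
    println!("done");
}
```

The key point is that `poll()` returns `Pending` immediately rather than blocking; if it instead called `std::thread::sleep`, the loop driving it could not poll anything else in the meantime.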
Something that may mask the problem is that tokio starts up a pool of threads, where each thread has its own executor and idle executors try to steal work from busy ones.
I'd highly recommend reading @alice's article because she explains it much better than I could.
"A Tokio task is an asynchronous green thread." (Spawning | Tokio - An asynchronous Rust runtime)
That is unlikely: if the executor polls the future and the future returns not-ready, it will poll other futures. I suppose you mean that while the task is executing, the executor is not able to poll other futures. I already tried her advice and it's not working; I am pretty sure it's a mutex deadlock in the tokio-spawned coroutines.
Pretty much. The whole "if the executor polls the future and the future returns not-ready, it will poll other futures" bit assumes the `poll()` method returns "not ready" and gives other futures a chance to be polled. If your code uses `std::thread::sleep()`, the `poll()` method won't even return until the specified duration has elapsed, which means other futures won't get a chance to be polled.
If the second future is waiting for the first one to complete, but the first future can't make progress until the second future runs, then you get a deadlock.
One of the ways you can get this is by running a blocking operation (e.g. `std::thread::sleep()`). Another way is to hold a `std::sync::Mutex` across a `.await` point, when you should be using an async-aware mutex like `tokio::sync::Mutex` instead.
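A std-only sketch of the guard-scoping idea behind that advice: drop the `std::sync::MutexGuard` before any slow step instead of holding it across one. The `bump` function is a made-up name for illustration; in real async code the slow step would be an `.await`, where an async-aware mutex such as `tokio::sync::Mutex` is the proper fix:

```rust
use std::sync::Mutex;

// Lock, mutate, and let the guard go out of scope *before* any slow
// work, so the lock is never held across that step.
fn bump(m: &Mutex<u32>) {
    {
        let mut g = m.lock().unwrap();
        *g += 1;
    } // guard dropped here; the lock is free again
    // ... slow work (or, in async code, an `.await`) would go here,
    // without the lock held
}

fn main() {
    let m = Mutex::new(0);
    bump(&m);
    bump(&m);
    assert_eq!(*m.lock().unwrap(), 2);
    println!("value = {}", *m.lock().unwrap());
}
```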
Just found out the reason why it deadlocks, thanks anyway.