Slow directory walk

I wrote a small piece of code that counts files in a directory. When I compare it with a similar `time ls ...` invocation, my code (optimized for release) is around 100x slower. I don't expect my code to be anywhere near `ls`, but 100x slower is far from ideal. Is there anything that can easily be improved without sacrificing simplicity?

use async_std::{fs, path};
use crossbeam_queue::SegQueue;
use futures::StreamExt;

// requires async-std's "attributes" feature for the main macro
#[async_std::main]
async fn main() {
    let q = SegQueue::new();

    let path = std::env::args().nth(1).unwrap();
    let path = path::Path::new(&path);
    // seed the queue with the root directory's entries
    q.push(fs::read_dir(path).await.unwrap());

    let mut count = 0;

    while let Some(mut dir) = q.pop() {
        while let Some(entry) = dir.next().await {
            let entry = entry.unwrap();
            let path = entry.path();
            if path.is_dir().await {
                q.push(fs::read_dir(path).await.unwrap());
            } else {
                count += 1;
            }
        }
    }

    println!("{} files", count);
}

Before this I also tried a recursive async fn, but I don't want to go down that road. I also considered spawning a new task for each read_dir, but when I looked at its implementation I realised it's already done internally.

It would probably be faster to do this in non-async code, since async code has to offload any filesystem-based IO to a synchronous thread pool.
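A minimal sketch of what such a synchronous version might look like, using only std::fs and a plain VecDeque in place of SegQueue (the function name and structure here are illustrative, not the poster's actual code):

```rust
use std::collections::VecDeque;
use std::fs;
use std::path::Path;

// Breadth-first walk: count non-directory entries under `root`.
fn count_files(root: &Path) -> std::io::Result<u64> {
    let mut queue = VecDeque::new();
    queue.push_back(root.to_path_buf());
    let mut count = 0;

    while let Some(dir) = queue.pop_front() {
        for entry in fs::read_dir(&dir)? {
            let entry = entry?;
            let path = entry.path();
            if path.is_dir() {
                // visit subdirectories later instead of recursing
                queue.push_back(path);
            } else {
                count += 1;
            }
        }
    }
    Ok(count)
}

fn main() {
    let path = std::env::args().nth(1).unwrap_or_else(|| ".".to_string());
    let count = count_files(Path::new(&path)).unwrap();
    println!("{} files", count);
}
```

No thread pool hand-off is involved here: each read_dir call runs directly on the calling thread, which is why this tends to win for pure filesystem work.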

You're right. When I removed async_std and async/await, my code is now only 5-10x slower than ls, which is reasonably fast for my needs. Thank you for the advice.

Of course, I wasn't expecting that. Is there any rule of thumb for when it is better to use sync vs async? In other words: in what case would moving from std to async_std give me a performance boost? Would it be opening a huge file, or working with thousands of directories? I can't grasp the issue here.

If you are only doing file IO, async/await is never helpful.

If you are doing network IO, I recommend Tokio, although you can use async-std for that as well.
