Weird buffering issue hidden somewhere?

Hi there.

I'm playing around with both Rust and video encoding/transmission.
I currently have a server/client codebase where I can read a video file, encode it in some format (currently x264), and send it via TCP to the client, which then displays it via ffplay.

This is mostly meant as a prototype/debug tool for now, so the ffplay part is being done like this:

impl<F: Frame + FFPlayArgs> FFPlay<F> {
    pub fn new(w: usize, h: usize) -> Self {
        let args = vec!["-fflags".into(), "nobuffer".into(), "-".into()];

        let child = Command::new("ffplay")
            .args(args)
            .stdin(Stdio::piped())
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .spawn()
            .expect("Could not start ffplay");

        // let child = Command::new("cat")
        //     .stdout(std::fs::File::create("log/cat.log").unwrap())
        //     .stdin(Stdio::piped())
        //     .spawn()
        //     .expect("Could not start cat");

        Self {
            child,
            _encoding: PhantomData,
            debug: crate::debug::BufferDebug::new("log/client-ffplay"),
        }
    }

    pub fn write(&mut self, bytes: &[u8]) -> Result<(), std::io::Error> {
        let out = self
            .child
            .stdin
            .as_mut()
            .expect("Could not get ffplay stdin");

        out.write_all(bytes)?;
        out.flush()?;

        Ok(())
    }
}

This is essentially an overengineered way of doing cat video_file | ffplay -fflags nobuffer -.

You may also notice some commented-out code in there, which spawns a cat command instead of ffplay.
The reason I did that is that the ffplay window is not showing up at all. I thought it was some buffering issue (either in the TCP buffer or here), so I debugged all those points by dumping the buffers to files. I also switched to that cat command to write the final buffer to a file instead of piping it to ffplay. I was then able to replay that exact file with cat log/cat.log | ffplay -fflags nobuffer -, and the video shows up properly, meaning all the right bytes reached the cat command.
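
For reference, the debug dumping boils down to a tee-style wrapper like this (the names here are simplified placeholders, not my actual BufferDebug code): every chunk gets appended to a log file on its way to the child's stdin, so I can replay the exact bytes later.

use std::fs::File;
use std::io::Write;
use std::process::{ChildStdin, Command, Stdio};

// Hypothetical tee-style debug wrapper: every chunk goes both to the
// child's stdin and to a log file, so the exact bytes can be replayed
// later with `cat log/client-ffplay | ffplay -fflags nobuffer -`.
struct TeeWriter {
    child_stdin: ChildStdin,
    log: File,
}

impl TeeWriter {
    fn write_chunk(&mut self, bytes: &[u8]) -> std::io::Result<()> {
        self.log.write_all(bytes)?;
        self.child_stdin.write_all(bytes)?;
        self.child_stdin.flush()?;
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    std::fs::create_dir_all("log")?;
    let mut child = Command::new("ffplay")
        .args(&["-fflags", "nobuffer", "-"])
        .stdin(Stdio::piped())
        .spawn()?;

    let mut tee = TeeWriter {
        child_stdin: child.stdin.take().expect("Could not get ffplay stdin"),
        log: File::create("log/client-ffplay")?,
    };

    // In the real client these chunks come off the TCP socket; a local
    // file stands in for that here.
    let data = std::fs::read("input.mp4")?;
    tee.write_chunk(&data)?;
    Ok(())
}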

So I now have no idea why it doesn't work when ffplay is spawned as a std::process::Command. Is there any difference in how that works that makes it not equivalent to the shell pipeline?

I have no idea how to debug this further.

It might be useful to have a (minimized?) test program that reproduces the problem. I tried to reproduce it with the following program, but this plays the video successfully on my computer:

use std::{process::{Command, Stdio}, fs::File, io::{Read, Write}};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let child = Command::new("ffplay")
        .args(&["-fflags", "nobuffer", "-"])
        .stdin(Stdio::piped())
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .spawn()
        .expect("Could not start ffplay");

    let mut out = child
        .stdin
        .as_ref()
        .expect("Could not get ffplay stdin");

    let mut input = File::open("input.mp4")?;
    let mut buf = vec![0; 1024 * 1024];
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 { break; }
        out.write_all(&buf[..n])?;
        out.flush()?;
    }
    Ok(())
}

My hunch would be that you are not closing the child's stdin after writing the data.

Try adding drop(child.stdin.take()) after you've fed it the data?

See this pattern here: Child in std::process - Rust
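
As a minimal sketch of that pattern (adapt the types and names to your own code):

use std::process::{Child, ExitStatus};

// Sketch of the suggested pattern: closing the child's stdin delivers
// EOF, then `wait` reaps the process.
fn finish(child: &mut Child) -> std::io::Result<ExitStatus> {
    // Taking the ChildStdin out of the Option and dropping it closes the
    // write end of the pipe, which is what tells ffplay the stream ended.
    drop(child.stdin.take());
    child.wait()
}

Call it once after the last write; dropping the ChildStdin is what actually closes the pipe.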


Have you tried spawning a Command to ffplay a file directly off the filesystem? (To rule out other problems, e.g., the DISPLAY environment variable being unavailable in whatever context your program runs.)
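
Something as small as this is enough to check that (the path is just a placeholder):

use std::process::Command;

fn main() -> std::io::Result<()> {
    // The point is only to confirm that a spawned ffplay can open its
    // window at all in this environment.
    let status = Command::new("ffplay")
        .args(&["-autoexit", "input.mp4"])
        .status()?;
    println!("ffplay exited with {status}");
    Ok(())
}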

@mbrubeck yes, I also have a version where, instead of server -> TCP -> client -> ffplay, I send the buffer directly to ffplay. That one works perfectly.
It also worked perfectly when I was sending raw RGB frames instead of encoding with x264. I suspect that because each encoded frame is now extremely small, it's not putting enough pressure on one of the buffers, so something isn't flushing the way it did with the much larger raw frames.
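
One more thing I want to rule out on the TCP side is Nagle's algorithm coalescing these tiny writes; roughly this on the sender (the address and frame below are just placeholders for my real code):

use std::io::Write;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Placeholder address; in the real server this is the existing
    // connection to the client.
    let mut stream = TcpStream::connect("127.0.0.1:9000")?;

    // Disable Nagle's algorithm so small encoded frames are sent right
    // away instead of being coalesced by the kernel.
    stream.set_nodelay(true)?;

    let encoded_frame = vec![0u8; 4096]; // stand-in for one x264 frame
    stream.write_all(&encoded_frame)?;
    stream.flush()?;
    Ok(())
}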

I'm now trying to put together a reproducible example like you suggested. I'll post it here once I have one.

Ok, as I was trying to build a reproducible example, I noticed a potential solution.

It turns out this was a combination of two things:

  • I was running ffplay -fflags nobuffer -, but apparently removing -fflags nobuffer is part of the solution (it only works in conjunction with the next point)
  • the program kept running in an infinite loop: after the whole file was read, it kept getting 0-length reads forever. Forcing a break out of the loop once a read returns 0 bytes seems to make something flush, and the whole video shows up at some point (see the sketch after this list)
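
Putting those two changes together, my repro ends up looking roughly like this (the file name is a placeholder):

use std::{fs::File, io::{Read, Write}, process::{Command, Stdio}};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // No `-fflags nobuffer` here, per the first point above.
    let mut child = Command::new("ffplay")
        .args(&["-"])
        .stdin(Stdio::piped())
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .spawn()?;

    let mut input = File::open("input.mp4")?; // placeholder file
    let mut buf = vec![0u8; 1024 * 1024];
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            // A 0-byte read means EOF: break instead of looping forever.
            break;
        }
        child
            .stdin
            .as_mut()
            .expect("Could not get ffplay stdin")
            .write_all(&buf[..n])?;
    }

    // Dropping stdin closes the pipe, which is the EOF that finally gets
    // ffplay to flush and play this tiny file.
    drop(child.stdin.take());
    child.wait()?;
    Ok(())
}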

I should note, for context, that I'm using a small video file for debugging: only a couple of seconds, 112 KB in size. I'm guessing this is too small, so even forcing a flush isn't enough for ffplay to start playing, and only an EOF on its stdin triggers it?

I should note as well that the drop(child.stdin.take()) suggested above didn't work either.

Anyway, if I were to be pedantic, this isn't fully solved, since I still don't know the full story, but it seems that a real-world scenario (real-time live video for longer durations instead of a 100 KB clip) won't run into this problem.

Thank you all

