Writing large inputs to STDIN of a child process fails

I have the following program that attempts to spawn a new process on Linux and write a bunch of stuff into the child process via STDIN.

    use std::io::Write;
    use std::process::{Command, Stdio};

    fn gpg_decrypt(file: Vec<u8>) -> Result<(), ()> {
        let mut proc = Command::new("gpg")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()
            .map_err(|e| {
                eprintln!("error spawning process: {}", e);
            })?;

        proc.stdin
            .as_mut()
            .unwrap()
            .write_all(&file)
            .map_err(|e| {
                eprintln!("error writing to child process: {}", e);
            })?;

        let output = proc.wait_with_output().map_err(|e| {
            eprintln!("error getting output: {}", e);
        })?;

        println!("output: {:?}", output);
        Ok(())
    }

    fn main() {
        let bytes = std::fs::read("foo.tar.gpg").unwrap();
        gpg_decrypt(bytes).unwrap();
        println!("Hello, world!");
    }

The program does not work when a 600K file is sent to STDIN. It also does not work if I attempt to chunk the writes as follows:

        let mut total = 0;
        let stdin = proc.stdin.as_mut().unwrap();
        for c in file.as_slice().chunks(1024) {
            println!("total: {}", total);
            stdin.write_all(c).map_err(|e| {
                eprintln!("error writing to child process: {}", e);
            })?;
            total += c.len();
        }

Running the program under strace tells me that the write hangs after a certain limit.

Is there anything I am doing wrong?

Can you be more specific about how it doesn’t work? In particular, does it emit any kind of error, terminate unexpectedly, or hang forever?

One potential problem is that you’re not reading the child’s output while you’re feeding it input. If the child is unable to write output because its stdout pipe is full, it will probably stop accepting input until the situation is resolved. Your program then blocks trying to write into the child's full input pipe, and you have a deadlock: each process is waiting for the other to consume some data from a pipe buffer before continuing.
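One common way to break that cycle is to move the stdin writes onto a separate thread while the main thread drains the child's stdout with `wait_with_output`. Here is a minimal sketch of that pattern; it uses `cat` in place of `gpg` so it runs anywhere, and the `pipe_through` helper name and the 600 KiB payload are just for illustration:

```rust
use std::io::Write;
use std::process::{Command, Stdio};
use std::thread;

/// Feed `data` to a child's stdin from a separate thread while the caller
/// drains the child's stdout, avoiding the write/write pipe deadlock.
fn pipe_through(cmd: &str, data: Vec<u8>) -> Vec<u8> {
    let mut child = Command::new(cmd)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn child");

    // Move stdin into a writer thread so this thread stays free to read stdout.
    let mut stdin = child.stdin.take().expect("stdin was piped");
    let writer = thread::spawn(move || {
        stdin.write_all(&data).expect("write to child failed");
        // `stdin` is dropped here, closing the pipe so the child sees EOF.
    });

    // wait_with_output reads stdout to EOF concurrently with the writes above.
    let output = child.wait_with_output().expect("failed to wait on child");
    writer.join().expect("writer thread panicked");
    output.stdout
}

fn main() {
    // 600 KiB, comfortably larger than a typical 64 KiB Linux pipe buffer.
    let data = vec![b'x'; 600 * 1024];
    let echoed = pipe_through("cat", data);
    println!("child echoed {} bytes", echoed.len());
}
```

Note that the writer thread drops the child's stdin handle when it finishes, which the child sees as EOF; without that, a filter like `cat` (or `gpg` reading from a pipe) would wait forever for more input.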

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.