Hi, I'm trying to develop an audio streaming service.
The plan is: create an input stream with cpal and continuously send the data to an axum server, which at the same time feeds it into an HTTP response body so an HTML audio element can play it. In theory, at least; I hope I'm on the right track.
Here is my axum code, which can serve playable music from a file.
async fn stream() -> impl IntoResponse {
    println!("Stream");
    let headers = [
        (header::CONTENT_TYPE, "audio/mpeg"),
        (header::CACHE_CONTROL, "no-cache, must-revalidate"),
        (header::PRAGMA, "no-cache"),
    ];
    let file = File::open("audios/audio.mp3").await.unwrap();
    // A capacity of 1 sends one byte per chunk; use a sane buffer size instead.
    let stream = ReaderStream::with_capacity(file, 8 * 1024);
    // No trailing semicolon here, so the tuple is actually returned.
    (headers, Body::from_stream(stream))
}
This is the cpal code. I assume it records from the microphone correctly (I was never able to listen to the result because I couldn't understand how `move` works in `let stream = ...`; my Rust knowledge isn't enough for that, so all I could do was print the data).
pub fn record() {
    let host = cpal::default_host();
    // Use the default *input* device for recording from the microphone,
    // not the output device.
    let device = host.default_input_device().unwrap();
    let mut configs_range = device.supported_input_configs().unwrap();
    let config = configs_range.next().unwrap().with_max_sample_rate().config();
    let stream = device.build_input_stream(
        &config,
        // The sample type must match the chosen config; f32 is the most
        // common input sample format. `data` holds this callback's samples.
        move |data: &[f32], _: &cpal::InputCallbackInfo| {
            println!("{:?}", data);
            // This is where each chunk should be forwarded, e.g. into a channel.
        },
        move |err| {
            eprintln!("stream error: {err}");
        },
        None,
    ).unwrap();
    stream.play().unwrap();
    std::thread::sleep(Duration::from_secs(10));
    stream.pause().unwrap();
}
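Before the samples can go over the network they have to be turned into bytes. A common choice is 16-bit little-endian PCM; the sketch below shows one way to convert the `f32` slice from the cpal callback (the helper name `f32_to_i16le_bytes` is my own, not part of cpal).

```rust
// Convert f32 samples (range -1.0..=1.0) into 16-bit little-endian PCM bytes,
// a raw format most servers and players can work with.
fn f32_to_i16le_bytes(samples: &[f32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(samples.len() * 2);
    for &s in samples {
        // Clamp to the valid range, then scale to the i16 range.
        let v = (s.clamp(-1.0, 1.0) * i16::MAX as f32) as i16;
        out.extend_from_slice(&v.to_le_bytes());
    }
    out
}
```

Inside the input callback you would then call something like `f32_to_i16le_bytes(data)` and hand the resulting bytes to whatever transports them to the server.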
println! produces that kind of output when I speak into the microphone, so I think it works somehow.
I need to transfer the sound stream from cpal to the axum server over the network (from the streamer's computer to the axum server) so it can be served to listeners. How can I do this correctly?
Edit:
I struggled with this topic a lot. In case someone else needs help: I finally did it: