HDF5 partial write

Does the hdf5 crate support partial writes?

If the data to be stored fit comfortably in memory, they can be written into an HDF5 file roughly like this:

    hdf5::File::create(file_name)?
        .create_group(group_name)?
        .new_dataset_builder()
        .with_data(&data)
        .create(dataset_name)?;

But what can be done if the data are too large to fit in memory?

I haven't managed to figure this out from the API documentation alone.

HDF5 supports chunked storage: a dataset can be split into fixed-size blocks called chunks, which are stored and accessed independently. If a dataset is too large to fit into memory at once, you can create it with a chunked layout, write it to the file one slice at a time, and read it back the same way.

The Rust hdf5 crate also supports chunking.
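Here is a minimal sketch of the idea, assuming the hdf5 crate with its ndarray integration enabled. The file name, group name, dataset name, shapes, and chunk size are all made up for illustration; the point is that the dataset is created empty with a chunked layout and then filled slice by slice:

    use hdf5::File;
    use ndarray::{s, Array2};

    fn main() -> hdf5::Result<()> {
        let file = File::create("big.h5")?;
        let group = file.create_group("measurements")?;

        // Create an empty dataset with a fixed overall shape, stored in
        // 10_000-row chunks. No data is written at this point.
        let dataset = group
            .new_dataset::<f64>()
            .shape((1_000_000, 16))
            .chunk((10_000, 16))
            .create("samples")?;

        // Write the data one in-memory slab at a time.
        for i in 0..100 {
            let start = i * 10_000;
            // Stand-in for a block of real data produced incrementally.
            let block = Array2::<f64>::zeros((10_000, 16));
            dataset.write_slice(&block, s![start..start + 10_000, ..])?;
        }
        Ok(())
    }

Reading works the same way: Dataset::read_slice (and the read_slice_1d / read_slice_2d helpers) accept the same s![..] selections, so the whole dataset never has to be in memory at once.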
