Creating 2D vectors


I’ve been extremely selective about what I post here versus what I ask on Mibbit, and I hope this falls within the bounds of an acceptable post. I’m creating this post to get more perspective on what I’m doing, and also because I’ve seen many solutions but little reasoning about why one performs better than another.

I started creating 2D vectors using an example I found here. I wanted my method to be a bit more flexible, allowing adjustable dimensions, iterations, initialization, and so on. I went through many versions (thanks to the folks on Mibbit) and eventually concluded that creating a 1D vector and iterating over it with 2D behavior was the best approach.

Is there anything I’m doing wrong here, or anything I could do better to improve performance? I shouldn’t need to import a crate to do something so simple, so if there’s a way I can improve my method, would you be so kind as to elaborate?

Profiling: 539,967 ns/iter (+/- 40,988)
const IMG_WIDTH: usize = 2052;
const IMG_HEIGHT: usize = 2048;

pub fn create_2d_vector(pixel_2d_vec: &mut Vec<Vec<u16>>, width: &usize, height: &usize) {
    println!("\n\nCreating nested 2D vector of {}_WIDTH by {}_HEIGHT ", width, height);
    for row in pixel_2d_vec.iter_mut() {
        for (y, col) in row.iter_mut().enumerate() {
            *col += y as u16;
        }
    }
}

My apologies, the above was one version I had, but I eventually got to the following version, which is a contiguous layout, I believe.

Profiling: 439,495 ns/iter (+/- 35,042)
const IMG_WIDTH: usize = 2052;
const IMG_HEIGHT: usize = 2048;

pub fn create_1d_vector(pixel_1d_vec: &mut Vec<u16>, width: &usize, height: &usize) {
    println!("\n\nCreating 1D Vec<u16> of {}_WIDTH X {}_HEIGHT ", width, height);
    for row in pixel_1d_vec.chunks_mut(*width) {
        for (x, item) in row.iter_mut().enumerate() {
            *item += x as u16;
        }
    }
}
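For anyone trying this out, here is a minimal, self-contained usage sketch of the flat version (the function is reproduced in simplified form, and small dimensions are used so the result is easy to check by hand):

```rust
// Simplified copy of the flat-buffer version, so this sketch compiles on its own.
pub fn create_1d_vector(pixel_1d_vec: &mut Vec<u16>, width: &usize, _height: &usize) {
    for row in pixel_1d_vec.chunks_mut(*width) {
        for (x, item) in row.iter_mut().enumerate() {
            *item += x as u16;
        }
    }
}

fn main() {
    let (width, height) = (4usize, 3usize);
    // One contiguous allocation holding width * height pixels.
    let mut pixels = vec![0u16; width * height];
    create_1d_vector(&mut pixels, &width, &height);
    // Every row now holds 0, 1, 2, 3.
    assert_eq!(pixels, vec![0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]);
}
```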


Contiguous memory will be the fastest. With Vec<Vec<_>> you have double indirection and rows of varying sizes.
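To make the double-indirection point concrete, here is a small sketch contrasting the two layouts. The names and dimensions are just for illustration:

```rust
fn main() {
    let (width, height) = (4usize, 3usize);

    // Nested layout: a Vec of row Vecs. Each row is its own heap allocation,
    // so reading nested[y][x] follows a row pointer first, then the element.
    let nested: Vec<Vec<u16>> = vec![vec![0u16; width]; height];

    // Flat layout: one allocation, with manual 2D indexing as y * width + x.
    let mut flat = vec![0u16; width * height];
    let (x, y) = (2usize, 1usize);
    flat[y * width + x] = 7;

    assert_eq!(nested[y][x], 0);        // two pointer hops
    assert_eq!(flat[y * width + x], 7); // one load after an index computation
}
```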

For working with regions of 2D images in contiguous memory I’ve created the imgref crate.


@kornel Sorry about that, I forgot to post the other version. Also, the example for your crate doesn’t appear to compile, due to a “resize_img not found in this scope” error.


The code in the README is pseudocode and wasn’t supposed to compile. Noted, I’ll change it to real code in the next version.


Just to add onto what @kornel said, the contiguous version also incurs fewer cache misses if you access the memory (roughly) sequentially.
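As an illustration of that access-pattern point, here is a sketch of sequential (row-major) versus strided (column-major) traversal over the same flat buffer. Both loops compute the same sum; the row-major one touches memory in address order, which is what keeps cache misses low. No timing is claimed here, just the shape of the two traversals:

```rust
fn main() {
    let (width, height) = (4usize, 3usize);
    let pixels: Vec<u16> = (0..(width * height) as u16).collect();

    // Row-major: adjacent iterations touch adjacent addresses.
    let mut row_major = 0u32;
    for y in 0..height {
        for x in 0..width {
            row_major += pixels[y * width + x] as u32;
        }
    }

    // Column-major: each iteration jumps `width` elements ahead,
    // which defeats the hardware prefetcher on large images.
    let mut col_major = 0u32;
    for x in 0..width {
        for y in 0..height {
            col_major += pixels[y * width + x] as u32;
        }
    }

    // Same result either way; only the memory access order differs.
    assert_eq!(row_major, col_major);
}
```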


Is there any support in imgref for pixel formats… Windows DIB?

Are there examples of how to use your crate? There also appears to be a terminology gap: you use a term I’ve not heard before, “stride”. You’re defining “strides” as regions, which in industry we typically refer to as tap-mode configuration or image readout geometry. I’m no expert, which is why I ask. Would you be so kind as to elaborate on this, and provide some examples in your next release as well? Perhaps I could fork the 1.3.3 release and suggest some features to be integrated?


The imgref crate is based on a generic Img<T> type, so, like Vec<T>, it can be used with any pixel type.

It’s fine for planar YUV422 if you make each plane a separate Img. YUV444 is OK. It’s not suitable for images with clever interleaving, so Bayer and interleaved YUV422 are out.
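On the terminology question above: in image-buffer code, “stride” usually means the number of elements (or bytes) from the start of one row to the start of the next, which lets a sub-region of a larger buffer be described without copying it. Here is a minimal stdlib-only sketch of the idea (this is not imgref’s actual API, just an illustration):

```rust
fn main() {
    // A 4x3 parent image; the parent width is the stride of any sub-view.
    let parent_width = 4usize;
    let parent: Vec<u16> = (0..12).collect(); // values 0..=11, row-major

    // Sub-region starting at (x=1, y=1), 2 wide, 2 tall. Its rows are 2
    // elements long, but consecutive rows start `parent_width` apart.
    let (left, top, w, h) = (1usize, 1usize, 2usize, 2usize);
    let mut sub = Vec::new();
    for y in 0..h {
        let start = (top + y) * parent_width + left;
        sub.extend_from_slice(&parent[start..start + w]);
    }

    // Row 1 cols 1..=2 are 5, 6; row 2 cols 1..=2 are 9, 10.
    assert_eq!(sub, vec![5, 6, 9, 10]);
}
```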