/// This is safe iff the program's downstream logic isn't tampered with.
/// The logic ensures that at most one mutable reference exists at a time.
/// Further, since the referent that is used via &'driver mut self lives
/// alongside in memory in a data structure, once drop is called, all items
/// are freed simultaneously. For this reason, we don't have to worry that
/// the referent is outlived by the reference (which would otherwise lead
/// to a dangling reference). The referent is self.stage1 (and higher stages).
fn extend_lifetime(&mut self) -> &'driver mut Self {
    unsafe { std::mem::transmute(self) }
}
/// drives mutable references up one stage
fn drive(&mut self) -> Result<Async<()>, ()> {
    let mut to_stage1 = Vec::new();
    for (idx, packet) in self.extend_lifetime().processed_inbound.iter_mut().enumerate() {
        match packet.stage {
            PacketStage::Stage1 => {
                self.stage1.start_send(packet);
            },
            PacketStage::Stage2 => {
                //to_stage2.push(packet);
            }
            PacketStage::NeedsDelete => {
                self.processed_inbound.remove(idx);
            }
            _ => {
                panic!("Invalid stage!")
            }
        }
    }
    self.stage1.poll_complete()
}
One may ask, "why not change fn drive(&mut self) -> Result<Async<()>, ()> to fn drive(&'driver mut self) -> Result<Async<()>, ()>?" The answer is that drive is called from one of Sink's trait methods, which only has an unbounded &mut self lifetime. As such, I have to call extend_lifetime inside the drive function.
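For context, the Sink methods that end up calling drive look roughly like this in futures 0.1 (paraphrased from memory, with the Poll/StartSend aliases inlined so the snippet stands alone; check the real crate docs), and neither of them hands the implementor anything longer-lived than a plain &mut self:

pub enum Async<T> { Ready(T), NotReady }
pub enum AsyncSink<T> { Ready, NotReady(T) }
pub type Poll<T, E> = Result<Async<T>, E>;
pub type StartSend<T, E> = Result<AsyncSink<T>, E>;

// Approximate shape of the futures 0.1 Sink trait (not the real definition
// verbatim). Both methods receive an unbounded &mut self, so an implementor
// cannot demand &'driver mut self here; any lifetime extension has to happen
// inside the method body, which is why drive calls extend_lifetime itself.
pub trait Sink {
    type SinkItem;
    type SinkError;

    fn start_send(
        &mut self,
        item: Self::SinkItem,
    ) -> StartSend<Self::SinkItem, Self::SinkError>;

    fn poll_complete(&mut self) -> Poll<(), Self::SinkError>;
}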
Is the reasoning in the comment of extend_lifetime sound?
Presumably self.extend_lifetime().processed_inbound.iter_mut().enumerate() returns an iterator with a mutable reference to the processed_inbound field?
Since the iterator stays live for the whole for loop, calling remove creates a duplicate mutable reference to self.processed_inbound: one is held by the iterator and another is taken for the remove call itself.
To see how this is a really bad idea, try running this:
fn extend_lifetime<'a, T>(t: &mut T) -> &'a mut T {
    unsafe { std::mem::transmute(t) }
}

fn main() {
    let mut vec = vec![1, 2, 3, 4, 5, 6, 7, 8];
    for (idx, n) in extend_lifetime(&mut vec).iter_mut().enumerate() {
        println!("Got to {} at index {}.", n, idx);
        if *n % 2 == 0 {
            vec.remove(idx);
        }
    }
}
The output is the following:
Got to 1 at index 0.
Got to 2 at index 1.
Got to 4 at index 2.
Got to 6 at index 3.
Got to 8 at index 4.
Got to 8 at index 5.
thread 'main' panicked at 'assertion failed: index < len', src/liballoc/vec.rs:987:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
We've got a bunch of stuff going on here: skipped items, duplicated items, and a panic, all because remove keeps shifting the remaining elements while the iterator keeps walking the buffer at its original length.
Thank god there's an assert inside Vec::remove; if there weren't, you'd have a buffer overflow vulnerability.
You should never modify a collection while it is being iterated unless it is specifically designed for something like that (in safe code the borrow checker rejects this pattern outright; extend_lifetime is what let it slip through here). A HashMap will have similar issues if you remove an item while iterating it.
I recommend storing the indexes to remove in a vector and actually removing them after the loop.
Right, that would be the next part of the solution (I was going to write that out before, but then realized that even storing the indexes and deleting them in a loop afterwards would still shift the index positions).
Okay, so on to the next portion: the idea that there are duplicated items. I'm assuming it's the &mut Packet that is being cloned here. Might you have a better idea than the one I've proposed?
Are you referring to my duplicated eight? It's less of a clone and more of a “yeah the bytes just kinda happened to be there”. If it were a type with special clone handling, that handling would not have been called here.
As for a better idea, does using retain after the loop not fix the issue?
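Something like this, for instance; a minimal sketch where Packet and PacketStage are stand-ins for whatever your real types look like, and only the two-pass structure is the point:

#[derive(Debug, PartialEq)]
enum PacketStage {
    Stage1,
    Stage2,
    NeedsDelete,
}

#[derive(Debug)]
struct Packet {
    stage: PacketStage,
}

fn main() {
    let mut processed_inbound = vec![
        Packet { stage: PacketStage::Stage1 },
        Packet { stage: PacketStage::NeedsDelete },
        Packet { stage: PacketStage::Stage2 },
    ];

    // Pass 1: inspect/dispatch packets without touching the Vec's length,
    // so the iterator is never invalidated.
    for packet in processed_inbound.iter_mut() {
        match packet.stage {
            PacketStage::Stage1 => { /* stage1.start_send(packet) would go here */ }
            PacketStage::Stage2 => { /* ... */ }
            PacketStage::NeedsDelete => { /* deletion is deferred to pass 2 */ }
        }
    }

    // Pass 2: drop the flagged packets. retain rebuilds the Vec in place,
    // and no iterator is alive while the elements move around.
    processed_inbound.retain(|packet| packet.stage != PacketStage::NeedsDelete);

    assert_eq!(processed_inbound.len(), 2);
    println!("{:?}", processed_inbound);
}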