use std::mem::transmute;
// struct MyDropStruct(i32);
// impl Drop for MyDropStruct {
//     fn drop(&mut self) {
//         println!("drop my struct");
//     }
// }
#[derive(Debug)]
struct TraitObjPtr {
    data: *const (),
    vtable: *const TraitObjVtable,
}
#[allow(unused)]
#[derive(Debug)]
struct TraitObjVtable {
    destructor: fn(*const ()),
    size: usize,
    align: usize,
    func: fn(*const ()),
}
fn main() {
    let vec = vec![1, 2, 3];
    let f: &dyn Fn() = &move || {
        for v in &vec {
            println!("v = {}", v);
        }
        println!("vec ptr = {:p}", &vec as *const Vec<i32>);
        // std::mem::forget(vec);
    };
    unsafe {
        let trait_obj_ptr = transmute::<_, TraitObjPtr>(f);
        let data = trait_obj_ptr.data;
        let func = (*trait_obj_ptr.vtable).func;
        println!("{:#?} {:#?}", trait_obj_ptr, *trait_obj_ptr.vtable);
        func(data);
    }
}
I have recently been playing with trait object internals. Here I coerce a closure to a trait object f: &dyn Fn(). When I call the function pointer taken from the vtable, it looks like a double free. If I replace the captured data (vec) with my own struct (struct MyDropStruct(i32), commented out at the top), it seems to work fine; a sketch of that variant is below.
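For reference, a minimal sketch of that MyDropStruct variant (same TraitObjPtr and TraitObjVtable definitions and the same transmute as above; only the captured value is swapped):

struct MyDropStruct(i32);

impl Drop for MyDropStruct {
    fn drop(&mut self) {
        println!("drop my struct");
    }
}

fn main() {
    let s = MyDropStruct(42);
    let f: &dyn Fn() = &move || {
        println!("s = {}", s.0);
    };
    unsafe {
        let trait_obj_ptr = transmute::<_, TraitObjPtr>(f);
        let func = (*trait_obj_ptr.vtable).func;
        // Unlike the Vec version, this does not crash.
        func(trait_obj_ptr.data);
    }
}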
Of course this vtable order is unstable and cannot be relied on. You can see the current layout fairly well if you look at the assembly for a manually written Fn implementation. (The compiler-generated ones seem to simply use the same function pointer in both the FnMut::call_mut and the Fn::call slots, so it’s hard to tell them apart.)
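For concreteness, here is roughly the layout I mean, written out as a widened version of the question's TraitObjVtable; this is only a guess at the current, unstable layout and purely for illustration:

#[allow(unused)]
struct FullFnVtable {
    drop_in_place: fn(*const ()), // drops the pointed-to closure in place
    size: usize,
    align: usize,
    call_once: fn(*const ()), // FnOnce::call_once{{vtable.shim}}: takes the closure by value, so it also drops it
    call_mut: fn(*const ()),  // FnMut::call_mut: takes &mut self
    call: fn(*const ()),      // Fn::call: takes &self, nothing gets dropped
}

Under that guess, the func field of the question's four-entry TraitObjVtable lines up with the call_once shim rather than with Fn::call, which already hints at where the apparent double free comes from.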
Since impl Fn… for &F where F: Fn… in the standard library is a “manual” implementation, you can look at something simple like
pub fn f() -> &'static dyn Fn() {
    &&||()
}
which produces (Show Assembly in the playground, Release mode)
core::ops::function::impls::<impl core::ops::function::Fn<A> for &F>::call: # @"core::ops::function::impls::<impl core::ops::function::Fn<A> for &F>::call"
# %bb.0:
        retq
                                        # -- End function
core::ops::function::impls::<impl core::ops::function::FnMut<A> for &F>::call_mut: # @"core::ops::function::impls::<impl core::ops::function::FnMut<A> for &F>::call_mut"
# %bb.0:
        retq
                                        # -- End function
core::ops::function::FnOnce::call_once{{vtable.shim}}: # @"core::ops::function::FnOnce::call_once{{vtable.shim}}"
# %bb.0:
        retq
                                        # -- End function
playground::f: # @playground::f
# %bb.0:
        leaq    .L__unnamed_1(%rip), %rax
        leaq    .L__unnamed_2(%rip), %rdx
        retq
                                        # -- End function
.L__unnamed_3:
.L__unnamed_1:
        .quad   .L__unnamed_3
.L__unnamed_2:
        .quad   core::ops::function::FnOnce::call_once{{vtable.shim}}
        .asciz  "\b\000\000\000\000\000\000\000\b\000\000\000\000\000\000"
        .quad   core::ops::function::FnOnce::call_once{{vtable.shim}}
        .quad   core::ops::function::impls::<impl core::ops::function::FnMut<A> for &F>::call_mut
        .quad   core::ops::function::impls::<impl core::ops::function::Fn<A> for &F>::call
and at the bottom you can clearly see the vtable.

By the way, this FnOnce::call_once{{vtable.shim}} is what makes #![feature(unsized_fn_params)] work, which in turn powers the implementation of FnOnce for Box<F> for unsized types F such as Box<dyn FnOnce()>. The shim is called with a pointer to the self argument, and is implemented by (something like) reading the value from behind that pointer, claiming ownership of it. This is why calling that vtable entry will drop the closure.

To be clear: when you call a Box<dyn FnOnce()>, the function in the vtable is responsible for dropping the closure itself, while the generic impl<F: ?Sized, Args> FnOnce<Args> for Box<F> where F: FnOnce<Args> implementation handles freeing the memory of the Box itself.
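To illustrate that division of labor, here is a rough, non-portable sketch that does by hand what calling a Box<dyn FnOnce()> does. The struct definitions and fn-pointer signatures below are my own guesses at the current, unstable layout (matching the assembly above), not anything guaranteed by the language:

use std::alloc::{dealloc, Layout};
use std::mem::transmute;

// Hypothetical mirror of a *mut dyn FnOnce() fat pointer and its vtable.
#[repr(C)] // only to fix my own field order; the real layouts are unspecified
struct RawDynFnOnce {
    data: *mut u8,
    vtable: *const FnOnceVtable,
}

#[allow(unused)]
#[repr(C)]
struct FnOnceVtable {
    drop_in_place: fn(*mut u8),
    size: usize,
    align: usize,
    call_once_shim: fn(*mut u8), // FnOnce::call_once{{vtable.shim}}
}

fn call_boxed_fnonce(b: Box<dyn FnOnce()>) {
    unsafe {
        // Take the Box apart so that its normal drop glue does not run.
        let raw: *mut dyn FnOnce() = Box::into_raw(b);
        let RawDynFnOnce { data, vtable } = transmute::<_, RawDynFnOnce>(raw);

        // Step 1 (the vtable entry's job): the shim reads the closure out
        // from behind `data`, calls it, and drops its captures.
        ((*vtable).call_once_shim)(data);

        // Step 2 (the Box impl's job): free the allocation itself, without
        // running drop glue again; the closure was already consumed above.
        let layout = Layout::from_size_align_unchecked((*vtable).size, (*vtable).align);
        if layout.size() != 0 {
            dealloc(data, layout);
        }
    }
}

fn main() {
    let v = vec![1, 2, 3];
    call_boxed_fnonce(Box::new(move || println!("captured {:?}", v)));
}

This is only meant to make the two responsibilities visible; in real code you would of course just call the Box<dyn FnOnce()> directly.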