Can you provide more context to what you’re going to do with light, assuming you can get the right type?
Hi @vitalyd, let's try:
During bidirectional path tracing you create paths from the camera and from all light sources and try to connect them. First you create an unconnected path from the camera, bounce several times (hitting geometry and creating new rays), up to a given depth; then you do the same from all lights. After that you try to connect those subpaths into complete paths which start at the camera and end at a light source. Your connection strategy depends on a so-called MIS (Multiple Importance Sampling) weight, and for that you have to call a function called pdf_le(...) for each light:
$ rg -trust pdf_le
src/core/light.rs
49: fn pdf_le(&self, ray: &Ray, n_light: &Normal3f, pdf_pos: &mut Float, pdf_dir: &mut Float);
...
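The call itself is not the problem. Once you have a reference to the right light it looks roughly like this (just a sketch, not code from the crate; the function name mis_pdfs and the arguments are placeholders for values the integrator already has at that point):
// sketch only: light, ray, and n_light are placeholders
fn mis_pdfs(light: &Arc<Light + Send + Sync>, ray: &Ray, n_light: &Normal3f) -> (Float, Float) {
    let mut pdf_pos: Float = 0.0;
    let mut pdf_dir: Float = 0.0;
    light.pdf_le(ray, n_light, &mut pdf_pos, &mut pdf_dir);
    // pdf_pos and pdf_dir then feed into the MIS weight for this strategy
    (pdf_pos, pdf_dir)
}
The hard part is getting hold of the right light for a given path vertex, which is what the structs below are about.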
Ignoring a medium for now, the information you gather during the unconnected path creation is either an EndpointInteraction (camera or light) or a SurfaceInteraction (you hit and/or bounced off some geometry):
pub struct Vertex<'a, 'p, 's> {
vertex_type: VertexType,
beta: Spectrum,
ei: Option<EndpointInteraction<'a>>,
si: Option<SurfaceInteraction<'p, 's>>,
delta: bool,
pdf_fwd: Float,
pdf_rev: Float,
}
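In practice only one of the two Option fields is set; whenever some part of the integrator needs data like the vertex position, it takes it from whichever interaction is present. A sketch of such a helper (not the actual crate code, and assuming Point3f is Copy):
impl<'a, 'p, 's> Vertex<'a, 'p, 's> {
    // sketch: return the position stored in whichever interaction is present
    fn p(&self) -> Point3f {
        if let Some(ref ei) = self.ei {
            ei.p
        } else if let Some(ref si) = self.si {
            si.p
        } else {
            panic!("Vertex has neither an EndpointInteraction nor a SurfaceInteraction")
        }
    }
}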
The EndpointInteraction looks like this:
pub struct EndpointInteraction<'a> {
// Interaction Public Data
pub p: Point3f,
pub time: Float,
pub p_error: Vector3f,
pub wo: Vector3f,
pub n: Normal3f,
// EndpointInteraction Public Data
pub camera: Option<&'a Box<Camera + Send + Sync>>,
pub light: Option<&'a Arc<Light + Send + Sync>>,
}
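During the connection step it is easy to see which kind of endpoint we are dealing with, because both Option fields can simply be inspected (a sketch, the function name is mine):
// sketch: classify an endpoint by looking at its two Option fields
fn describe_endpoint(ei: &EndpointInteraction) -> &'static str {
    match (ei.camera.is_some(), ei.light.is_some()) {
        (true, _) => "camera endpoint",
        (_, true) => "light endpoint",
        _ => "endpoint without camera or light",
    }
}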
Whereas the SurfaceInteraction looks like this:
#[derive(Default, Clone)]
pub struct SurfaceInteraction<'p, 's> {
// Interaction Public Data
pub p: Point3f,
pub time: Float,
pub p_error: Vector3f,
pub wo: Vector3f,
pub n: Normal3f,
// TODO: MediumInterface mediumInterface;
// SurfaceInteraction Public Data
pub uv: Point2f,
pub dpdu: Vector3f,
pub dpdv: Vector3f,
pub dndu: Normal3f,
pub dndv: Normal3f,
pub dpdx: Vector3f,
pub dpdy: Vector3f,
pub dudx: Float,
pub dvdx: Float,
pub dudy: Float,
pub dvdy: Float,
pub primitive: Option<&'p GeometricPrimitive>,
pub shading: Shading,
pub bsdf: Option<Arc<Bsdf>>,
pub shape: Option<&'s Shape>,
}
The problem is that geometry can also act as a light source (light-emitting geometry, like a sphere, a disk, or triangles):
pub struct GeometricPrimitive {
pub shape: Arc<Shape + Send + Sync>,
pub material: Option<Arc<Material + Send + Sync>>,
pub area_light: Option<Arc<AreaLight + Send + Sync>>,
// TODO: MediumInterface mediumInterface;
}
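So, to find out whether a surface hit sits on an emitter, you follow the chain from the SurfaceInteraction to its primitive and from there to area_light; roughly like this (a sketch, the helper name is mine):
// sketch: follow SurfaceInteraction -> GeometricPrimitive -> AreaLight;
// returns None for ordinary, non-emitting geometry
fn area_light_of_hit<'p, 's>(
    si: &SurfaceInteraction<'p, 's>,
) -> Option<&'p Arc<AreaLight + Send + Sync>> {
    si.primitive.and_then(|prim| prim.area_light.as_ref())
}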
That's where the Option comes from: an EndpointInteraction stores either camera or light information, and a GeometricPrimitive can either be a shape which reacts to light via a material or act as an area light source. The code in question has to figure out if a light without geometry is involved or if we hit (or start the light path at) a light-emitting geometry.
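Put together, the decision the MIS code has to make for a light path vertex looks roughly like this (a sketch, assuming it lives in the same module as Vertex since the fields are private; all names except the struct fields are mine):
// sketch: decide what kind of light, if any, a path vertex refers to
fn classify_light_vertex<'a, 'p, 's>(vertex: &Vertex<'a, 'p, 's>) -> &'static str {
    if let Some(ref ei) = vertex.ei {
        if ei.light.is_some() {
            // a light without geometry (e.g. a point or distant light);
            // ei.light is already an &Arc<Light + Send + Sync>, so pdf_le
            // can be called on it directly
            return "light endpoint";
        }
    }
    if let Some(ref si) = vertex.si {
        if let Some(prim) = si.primitive {
            if prim.area_light.is_some() {
                // light-emitting geometry: here we only have an
                // Arc<AreaLight + Send + Sync>, and the question is how to
                // get from that to the type pdf_le is defined on
                return "hit on light-emitting geometry";
            }
        }
    }
    "no light involved"
}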
I hope this helps to understand the underlying problem ...