diff --git a/topics/01_cameras/index.md b/topics/01_cameras/index.md
index 274a24520bbd0bc28f4c4b61b2cdf918c54497e3..501092779447a60f94daa7a088a4295d467647e0 100644
--- a/topics/01_cameras/index.md
+++ b/topics/01_cameras/index.md
@@ -161,7 +161,7 @@ The output value is read by **selecting** the pixel. Usually, an entire row is r
 Using a 4-transistor architecture, the exposure time can be controlled, by decoupling the photodiode from the amplifier's gate on command.

-## color
+## Color

 The most common way of capturing color images with a digital sensor is a Bayer filter, interleaving color filters in front of pixels in this pattern:

@@ -234,7 +234,7 @@ The refractive index of the material used in the lens can slightly differ as a f
 These effects can be tackled by combining multiple optical elements. Software correction is also possible, if the lens was properly calibrated beforehand.

-## Prime lens vs zoom lens
+## Zoom lens vs prime lens

 Lenses capable of zooming are very common, but introduce significant complexity. This either leads to an increased price, or compromises in their sharpness or aperture. A prime lens, on the other hand, does not offer zoom capabilities, but often have superior image quality and calibration.

diff --git a/topics/02_passive/index.md b/topics/02_passive/index.md
index 001a461792aa0f32891ed3a78677b4821bf40f65..eee13dac455447aa02f04e8cd376e9a66f25562f 100644
--- a/topics/02_passive/index.md
+++ b/topics/02_passive/index.md
@@ -179,10 +179,3 @@ Depth estimation on Light Field data is an active domain. For now, algorithms ar
 "https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"

 <iframe title="4D light field - depth estimation" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"> </iframe>
-
-
-# Light Stage
-<p>This <a href="http://www.pauldebevec.com/">impressive device</a> was built for capturing the Bidirectional Reflectance Distribution Function (BRDF), which can describe the material’s optical properties in any direction and any illumination conditions. Thanks to the linearity of lighting, we can decompose the total illumination based on its direction. The viewing angle also plays a role for reflective or special materials (e.g. iridescence).</p>
-<p><img src="images/brdf.png" alt=""></p>
-<p>In the most complex case, objects need to be captured from several locations and illuminated from as many directions as possible.</p>
-<p><img src="images/light_stage.png" alt=""></p>
diff --git a/topics/02_passive/images/brdf.png b/topics/03_active/img/brdf.png
similarity index 100%
rename from topics/02_passive/images/brdf.png
rename to topics/03_active/img/brdf.png
diff --git a/topics/03_active/img/kinect.png b/topics/03_active/img/kinect.png
new file mode 100644
index 0000000000000000000000000000000000000000..9113f78afd25d46dd96a06e14ea1890ffebcf796
Binary files /dev/null and b/topics/03_active/img/kinect.png differ
diff --git a/topics/03_active/img/laser_line.png b/topics/03_active/img/laser_line.png
new file mode 100644
index 0000000000000000000000000000000000000000..bde39d4ae796b346c6670da28cba2a6509d7302b
Binary files /dev/null and b/topics/03_active/img/laser_line.png differ
diff --git a/topics/03_active/img/laser_line_result.png b/topics/03_active/img/laser_line_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..afb73c4e9c6f44a89f3359e2952e9057b41725b1
Binary files /dev/null and b/topics/03_active/img/laser_line_result.png differ
diff --git a/topics/02_passive/images/light_stage.png b/topics/03_active/img/light_stage.png
similarity index 100%
rename from topics/02_passive/images/light_stage.png
rename to topics/03_active/img/light_stage.png
diff --git a/topics/03_active/img/structured_light.png b/topics/03_active/img/structured_light.png
new file mode 100644
index 0000000000000000000000000000000000000000..d199fd3a06299c0992ba8fdb32174046508c7878
Binary files /dev/null and b/topics/03_active/img/structured_light.png differ
diff --git a/topics/03_active/index.md b/topics/03_active/index.md
index d056ce34a0ba8a8eaf6b3e1050d117f4dfbe7773..af6f62569a5fc6be3c981ee6df19bbcca495eeb4 100644
--- a/topics/03_active/index.md
+++ b/topics/03_active/index.md
@@ -16,13 +16,47 @@ mathjax: true
 # Structured light

+Structured light methods actively project light patterns onto the scene, creating features that are easily detected by one or more cameras observing it.
+
 ## Laser line

-<!-- Makerbot digitizer -->
+A straightforward way to introduce structured light into the scene is to project a line using a laser:
+
+![](img/laser_line.png)
+
+*source: http://mesh.brown.edu/desktop3dscan/ch4-slit.html*
+
+The illuminated pixels are easily detected in the camera image. For each illuminated pixel, the camera's intrinsic and extrinsic parameters can be used to obtain the Cartesian equation of the line on which the corresponding 3D point must lie. If the Cartesian equation of the laser plane is known through prior calibration, the intersection between that plane and the light ray is straightforward to compute.
+
+After collecting the 3D coordinates of each point on the illuminated curve, the object is moved or rotated, and more points are accumulated. Note that the motion of the object must be precisely known in order to place the 3D points in a common coordinate frame. In other words, the camera's extrinsic parameters must be accurately updated for each object pose.
+
+![](img/laser_line_result.png)
+
+*source: http://mesh.brown.edu/desktop3dscan/ch4-slit.html*
+
+## Encoded pattern
+
+Projectors are a bit like reverse cameras: they use optics to project light rays onto the scene rather than sensing them. The camera equations we previously presented are just as valid for projectors, which can be calibrated in a similar way. This also implies that stereo vision principles can be applied to a projector/camera pair rather than a camera/camera pair.
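+
+A minimal sketch of the ray/plane triangulation described in the laser line section above, assuming everything is expressed in the camera frame (identity extrinsics), `K` is the camera's intrinsic matrix, and the laser plane $n \cdot X + d = 0$ was calibrated beforehand; the function name and numeric values are illustrative only:
+
+```python
+import numpy as np
+
+def triangulate_pixel(u, v, K, plane_n, plane_d):
+    """Intersect the viewing ray of pixel (u, v) with a calibrated plane.
+
+    K       : 3x3 intrinsic matrix
+    plane_n : plane normal in camera coordinates, shape (3,)
+    plane_d : plane offset, such that plane_n . X + plane_d = 0
+    Returns the 3D point in camera coordinates.
+    """
+    # Back-project the pixel into a viewing ray: X = t * ray, t > 0
+    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
+    # Plane equation along the ray: t * (plane_n . ray) + plane_d = 0
+    t = -plane_d / (plane_n @ ray)
+    return t * ray
+
+# Toy example with made-up calibration values
+K = np.array([[800.0, 0.0, 320.0],
+              [0.0, 800.0, 240.0],
+              [0.0, 0.0, 1.0]])
+plane_n = np.array([1.0, 0.0, -0.5])
+plane_d = 0.2
+print(triangulate_pixel(400, 300, K, plane_n, plane_d))
+```
+
+The same intersection applies to the encoded patterns below: once a projector column is identified, it defines one calibrated plane in space.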
+
+In projector-assisted structured light methods, easily detectable features are projected onto the scene, then detected by the camera and triangulated back to 3D coordinates.
+
+![](img/structured_light.png)
-## Pattern projection
+There is a multitude of options for the projected pattern, but an interesting strategy is to project successive binary stripes. After each exposure, an additional bit is detected by the camera, down to the finest stripe width the projector can display. This builds up a binary code in each camera pixel, unambiguously linking it to a position in the projector's pixel coordinate system.
-<!-- Kinect V1 -->
+Using binary encoding reduces the number of required exposures from $w$ to $\lceil\log_2(w)\rceil$, with $w$ being the projector's horizontal resolution. A short decoding sketch is given at the end of this page.
+
+Uniquely identifying a pixel column of the projector is sufficient: triangulation can then be performed with the same method we previously described for laser line systems. Alternatively, a succession of horizontal and vertical binary encodings can be used to uniquely identify each individual pixel of the projector.
+
+## Pseudo-random pattern
+
+Instead of using a projector that can display arbitrary patterns, another approach is to project a pseudo-random pattern through a simple filter placed in front of a light source.
+
+The pattern is carefully chosen to avoid self-similarity, letting the camera uniquely identify and locate each of its features. The features can then be triangulated, assuming the projector's pattern projection geometry was calibrated beforehand.
+
+![](img/kinect.png)
+
+The Microsoft Kinect V1 used this principle. While this is a cost-effective approach, it results in quite poor resolution, as the level of detail is ultimately limited by the feature size present in the pattern.

 <!-- CR Ferret pro: infrared binocular stereo -->

@@ -50,4 +84,12 @@ ToF cameras are popular in gaming and realtime applications as they provide a de
 One common issue for complex scenes is multipath, where the IR light bounces from a second object before returning to the sensor. This typically produces rounded corners with loss of details. Another issue is flying pixel, occuring on edge regions (mixing of foreground and background signals).

-# Light stage
+# Light Stage
+
+This [impressive device](http://www.pauldebevec.com/) was built for capturing the Bidirectional Reflectance Distribution Function (BRDF), which describes a material's optical properties for any viewing direction and any illumination conditions. Thanks to the linearity of lighting, we can decompose the total illumination based on its direction. The viewing angle also plays a role for reflective or special materials (e.g. iridescence).
+
+![](img/brdf.png)
+
+In the most complex case, objects need to be captured from several locations and illuminated from as many directions as possible.
+
+![](img/light_stage.png)
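+
+Because lighting is linear, captures taken with one light direction at a time can later be recombined into arbitrary illumination. A toy sketch of that recombination, with made-up array shapes and an arbitrary number of light directions (not the actual Light Stage configuration):
+
+```python
+import numpy as np
+
+def relight(basis_images, weights):
+    """Recombine per-light-direction captures into a new illumination.
+
+    basis_images : (n_lights, height, width, 3) array, one photo per light direction
+    weights      : (n_lights,) array, intensity of each direction in the target lighting
+    """
+    # Linearity of lighting: the relit image is a weighted sum of the basis images.
+    return np.tensordot(weights, basis_images, axes=1)
+
+# Toy data: 32 light directions, 120x160 RGB images
+basis = np.random.rand(32, 120, 160, 3)
+weights = np.random.rand(32)
+relit = relight(basis, weights)
+```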
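+
+Returning to the binary stripe encoding from the Encoded pattern section, here is a minimal decoding sketch. It assumes one grayscale camera capture per projected bit plane (most significant bit first) and a simple fixed threshold; the function name and threshold are illustrative only:
+
+```python
+import numpy as np
+
+def decode_stripes(captures, threshold=0.5):
+    """Recover the projector column code seen by each camera pixel.
+
+    captures : list of grayscale images in [0, 1], one per bit plane, MSB first.
+    Returns an integer image of projector column indices.
+    """
+    code = np.zeros(captures[0].shape, dtype=np.int64)
+    for image in captures:
+        bit = (image > threshold).astype(np.int64)  # lit stripe -> 1, dark -> 0
+        code = (code << 1) | bit                    # append the newly detected bit
+    return code
+
+# ceil(log2(w)) exposures identify all w projector columns
+w = 1024
+n_bits = int(np.ceil(np.log2(w)))                   # 10 exposures for w = 1024
+captures = [np.random.rand(480, 640) for _ in range(n_bits)]
+columns = decode_stripes(captures)
+```
+
+In practice, each pattern and its inverse are often both projected, so each bit can be decided by comparing the two captures instead of using a fixed threshold.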