Commit 1098fde9 authored by Quentin Bolsee's avatar Quentin Bolsee

active sensing complete

parent 823ce976
@@ -161,7 +161,7 @@ The output value is read by **selecting** the pixel. Usually, an entire row is r
Using a 4-transistor architecture, the exposure time can be controlled by decoupling the photodiode from the amplifier's gate on command.
## color
## Color
The most common way of capturing color images with a digital sensor is a Bayer filter, interleaving color filters in front of pixels in this pattern:
@@ -234,7 +234,7 @@ The refractive index of the material used in the lens can slightly differ as a f
These effects can be tackled by combining multiple optical elements. Software correction is also possible, if the lens was properly calibrated beforehand.
## Prime lens vs zoom lens
## Zoom lens vs prime lens
Lenses capable of zooming are very common, but introduce significant complexity. This either leads to an increased price, or to compromises in sharpness or aperture. A prime lens, on the other hand, does not offer zoom capabilities, but often has superior image quality and calibration.
@@ -179,10 +179,3 @@ Depth estimation on Light Field data is an active domain. For now, algorithms ar
"https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"
<iframe title="4D light field - depth estimation" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"> </iframe>
File moved
topics/03_active/img/kinect.png (255 KiB)
topics/03_active/img/laser_line.png (317 KiB)
topics/03_active/img/laser_line_result.png (164 KiB)
topics/03_active/img/structured_light.png (65.7 KiB)
@@ -16,13 +16,47 @@ mathjax: true
# Structured light
Structured light methods actively project light patterns onto the scene, creating features that are easily detected by one or more cameras observing it.
## Laser line
<!-- Makerbot digitizer -->
A straightforward way to introduce structured light into the scene is to project a line using a laser:
![](img/laser_line.png)
*source: http://mesh.brown.edu/desktop3dscan/ch4-slit.html*
The illuminated pixels are easily detected by the camera. For each illuminated pixel, the camera's intrinsic and extrinsic parameters can be used to obtain the Cartesian equation of a line on which the 3D point must lie. If the Cartesian equation of the laser's plane is known through prior calibration, the intersection between that plane and the light ray is straightforward to compute.
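As a minimal sketch of that intersection (with purely illustrative calibration values), the illuminated pixel is back-projected into a viewing ray using the camera intrinsics, and the ray is intersected with the calibrated laser plane:

```python
import numpy as np

# Hypothetical calibration data, expressed in the camera frame.
K = np.array([[800.0,   0.0, 320.0],     # camera intrinsics (focal lengths, principal point)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.94, 0.342])   # unit normal of the laser plane
plane_d = 0.12                            # plane offset: points X satisfy n.X + d = 0

def triangulate_laser_pixel(u, v):
    """Intersect the viewing ray of pixel (u, v) with the laser plane."""
    # Back-project the pixel into a ray direction through the camera center.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # The 3D point is X = t * ray; substituting into n.X + d = 0 gives t.
    t = -plane_d / (plane_n @ ray)
    return t * ray  # 3D point in camera coordinates

point = triangulate_laser_pixel(350.5, 270.0)
```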
After collecting the 3D coordinates of each point on the illuminated curve, the object is moved or rotated, and more points are accumulated. Note that the rotation of the object must be precisely known in order to place all 3D points in a common reference frame. In other words, the camera's extrinsic parameters must be accurately updated for each object pose.
![](img/laser_line_result.png)
*source: http://mesh.brown.edu/desktop3dscan/ch4-slit.html*
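A sketch of how successive profiles could be merged into one point cloud, assuming the object sits on a turntable whose angle is known at every capture; `R_table` and `t_table` stand for a hypothetical prior calibration of the turntable pose in camera coordinates:

```python
import numpy as np

def profile_to_object_frame(points_cam, angle_deg, R_table, t_table):
    """
    Map one scanned profile (N x 3 points, camera coordinates) into a single
    object-fixed frame, given the turntable angle at capture time.
    R_table, t_table describe the turntable pose in camera coordinates and
    would come from a prior calibration (dummy values below).
    """
    a = np.deg2rad(angle_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    pts_table = (points_cam - t_table) @ R_table  # camera frame -> turntable frame
    return pts_table @ Rz                         # undo the known rotation

# Dummy example: two profiles captured 10 degrees apart.
R_table, t_table = np.eye(3), np.array([0.0, 0.0, 0.5])
profiles = [np.random.rand(50, 3), np.random.rand(50, 3)]
angles = [0.0, 10.0]
cloud = np.vstack([profile_to_object_frame(p, a, R_table, t_table)
                   for p, a in zip(profiles, angles)])
```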
## Encoded pattern
Projectors are a bit like cameras in reverse: they use optics to project light rays onto the scene, rather than sensing them. The camera equations we previously presented are just as valid for projectors, which can be calibrated in a similar way. This also implies that the stereo vision principles can be applied to a projector/camera pair, rather than a camera/camera pair.
In projector-assisted structured light methods, easily detectable features are projected onto the scene, then detected by the camera and triangulated back to 3D coordinates.
![](img/structured_light.png)
## Pattern projection
There is a multitude of options for the projected pattern, but an interesting strategy is to project successive binary stripes. After each exposure, an additional bit is detected by the camera, down to the finest granularity achievable by the projector. This reveals a binary code at each pixel of the camera, unambiguously linking it to a portion of the projector's pixel coordinate system.
<!-- Kinect V1 -->
Using binary encoding reduces the number of required exposures from $w$ to $\lceil\log_2(w)\rceil$, with $w$ being the projector's horizontal resolution (e.g. 10 exposures for $w = 1024$).
Uniquely identifying a pixel column of the projector is sufficient: triangulation can then be performed with the same method we previously described for laser line systems. Alternatively, a succession of horizontal and vertical binary encodings can be used to uniquely identify each individual pixel of the projector.
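A minimal sketch of the binary stripe idea, assuming a projector width `w`, one thresholded camera capture per projected bit, and a simple fixed threshold (all names and values are illustrative):

```python
import numpy as np

w = 1024                              # projector horizontal resolution (assumption)
n_bits = int(np.ceil(np.log2(w)))     # number of exposures needed (10 here)

def stripe_patterns(width, n_bits):
    """One binary stripe row per bit, encoding the column index MSB first.
    Each row would be tiled vertically to form the full projected frame."""
    cols = np.arange(width)
    return [((cols >> (n_bits - 1 - b)) & 1).astype(np.uint8) * 255
            for b in range(n_bits)]

def decode_column(captures, threshold=128):
    """Recover the projector column seen by each camera pixel.
    `captures` is a list of n_bits grayscale camera images, each taken while
    the corresponding stripe pattern was projected."""
    code = np.zeros(captures[0].shape, dtype=np.int32)
    for img in captures:
        code = (code << 1) | (img > threshold)  # append one bit per exposure
    return code  # per-pixel projector column index
```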
## Pseudo-random pattern
Instead of using a projector that can display arbitrary patterns on the scene, another approach is to use a pseudo-random pattern projected through a simple filter placed in front of a light source.
The pattern is carefully chosen to avoid self-similarity, letting the camera uniquely identify and locate each part of it. The features can then be triangulated, assuming the projector's pattern projection geometry was calibrated beforehand.
![](img/kinect.png)
The Microsoft Kinect V1 used this principle. While this is a cost-effective approach, it leads to very poor resolution, as the level of detail is ultimately limited by the feature size of the pattern.
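A sketch of the matching step behind this kind of sensor: a small window of the observed IR image is compared against the stored reference pattern along the same row, and the horizontal shift of the best match (the disparity) feeds the usual triangulation. Window size, search range, and the sign convention of the shift are assumptions here:

```python
import numpy as np

def match_disparity(observed, reference, y, x, win=9, max_disp=64):
    """
    Find the horizontal shift of the reference speckle pattern that best
    explains the window centered at (x, y) in the observed IR image,
    using zero-mean normalized cross-correlation (bounds checks omitted).
    """
    h = win // 2
    patch = observed[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch -= patch.mean()
    best_d, best_score = 0, -np.inf
    for d in range(max_disp):
        cand = reference[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.float64)
        cand -= cand.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(cand)
        score = (patch * cand).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_d, best_score = d, score
    return best_d  # depth is proportional to baseline * focal_length / disparity
```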
<!-- CR Ferret pro: infrared binocular stereo -->
@@ -50,4 +84,12 @@ ToF cameras are popular in gaming and realtime applications as they provide a de
One common issue for complex scenes is multipath, where the IR light bounces off a second object before returning to the sensor. This typically produces rounded corners and a loss of detail. Another issue is flying pixels, occurring in edge regions where foreground and background signals mix.
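A simple flying-pixel filter can discard depth samples that disagree too strongly with their local neighborhood; the sketch below uses a 3×3 median comparison with an arbitrary threshold:

```python
import numpy as np

def remove_flying_pixels(depth, max_jump=0.05):
    """
    Invalidate (set to NaN) pixels whose depth differs from the local 3x3
    median by more than max_jump (same units as the depth map). Such pixels
    typically sit on object edges, mixing foreground and background returns.
    """
    pad = np.pad(depth, 1, mode='edge')
    # Collect the 3x3 neighborhood of every pixel as 9 shifted views.
    stack = np.stack([pad[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
                      for dy in range(3) for dx in range(3)])
    local_median = np.median(stack, axis=0)
    cleaned = depth.astype(np.float64).copy()
    cleaned[np.abs(cleaned - local_median) > max_jump] = np.nan
    return cleaned
```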
# Light stage
# Light Stage
This [impressive device](http://www.pauldebevec.com/) was built for capturing the Bidirectional Reflectance Distribution Function (BRDF), which describes a material's optical properties for any viewing direction and any illumination conditions. Thanks to the linearity of lighting, we can decompose the total illumination based on its direction. The viewing angle also plays a role for reflective or special materials (e.g. iridescence).
![](img/brdf.png)
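Because light transport is linear, images captured one light at a time (OLAT) can be recombined in software to simulate arbitrary lighting. A minimal sketch of that recombination, where the per-light weights would be sampled from a target environment map (all names are illustrative):

```python
import numpy as np

def relight(olat_images, weights):
    """
    Relight a subject from its one-light-at-a-time captures.
    olat_images: array of shape (n_lights, H, W, 3), one image per light
    weights: per-light intensities (n_lights,) sampled from the target
             lighting environment
    """
    weights = np.asarray(weights, dtype=np.float64)
    return np.tensordot(weights, olat_images, axes=1)  # weighted sum over lights
```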
In the most complex case, objects need to be captured from several locations and illuminated from as many directions as possible.
![](img/light_stage.png)