From 67d29f6ff1c0a580d689eadbaaabf397a1b4d14d Mon Sep 17 00:00:00 2001
From: Danny Griffin <dgr@mit.edu>
Date: Tue, 23 Apr 2024 23:08:52 -0400
Subject: [PATCH] links wip

---
 topics/02_passive/index.md | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/topics/02_passive/index.md b/topics/02_passive/index.md
index 59f832c..6d4b448 100644
--- a/topics/02_passive/index.md
+++ b/topics/02_passive/index.md
@@ -105,33 +105,56 @@ iPhone and Android apps for photogrammetry and now LiDAR scanning have multiplie
 
 # Light Field
 
+_plenoptic_: Of or relating to all the light, travelling in every direction, in a given space.
+
 Light fields represent an advanced form of passive sensing, aiming to capture the full plenoptic content of a scene: all light rays emanating from it in every direction. This results in a four-dimensional function, since each ray is selected by its position and angle. If the ideal plenoptic function were known, any novel viewpoint could be synthesized by placing a virtual camera in this space and selecting the relevant light rays.
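+
+As a concrete illustration (using the usual convention, which is assumed here rather than spelled out in these notes), the 4D light field is often written with a two-plane parameterization: a ray is indexed by its intersections $(u, v)$ and $(s, t)$ with two parallel planes, and the light field assigns it a radiance
+
+$$
+L(u, v, s, t) = \text{radiance along the ray through } (u, v) \text{ on one plane and } (s, t) \text{ on the other.}
+$$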
 
+The term goes back to Adelson and Bergen, who introduced the plenoptic function to describe everything that can be seen from every position and direction in a scene.
+
+Why do we want all of the light? Image-Based Rendering (IBR) for view synthesis is a long-standing problem in computer vision and graphics, with applications in robot navigation, film, and AR/VR.
+
+Capturing the full plenoptic function is such an intensive task that researchers seek simulation shortcuts to approximate it:
+
+[Using thousands of virtual cameras](https://openaccess.thecvf.com/content/ACCV2022/papers/Li_Neural_Plenoptic_Sampling_Learning_Light-field_from_Thousands_of_Imaginary_Eyes_ACCV_2022_paper.pdf): this ACCV 2022 paper proposes placing thousands of virtual cameras ("imaginary eyes") in a scene and training a neural network to recover a complete, dense plenoptic function.
+
+
 In practice, we can only sample light rays at discrete locations. There are two popular optical architectures for this:
 
-### Multi-Camera Systems:
-Multi-camera systems: simply shoot the scene from several locations using an array of camera (or a single moving one).
+### Multi-Camera Systems
+Simply shoot the scene from several locations using an array of cameras (or a single moving camera).
 
 
-### Lenslets:
+### Lenslets
 A single CMOS sensor with an array of micro-lenses (lenslets) placed in front of it.
 
-<p>In the lenslet approach, each pixel behind a lenslet provides a unique light ray direction. The collection for all lenses is called a <strong>sub aperture image</strong>, and roughly corresponds to what a shifted camera would capture. The resolution of these images is simply the total number of lenslets, and the number of sub-aperture images available is given by the number of pixels behind a lenslet. For reference, the <a href="https://en.wikipedia.org/wiki/Lytro">Lytro Illum</a> provides 15x15 sub-aperture images of 541x434 pixels each, which is a total of ~53 Megapixels.</p>
+In the lenslet approach, each pixel behind a lenslet provides a unique light ray direction. Collecting the pixel at the same offset under every lenslet forms a <strong>sub-aperture image</strong>, which roughly corresponds to what a shifted camera would capture. The resolution of these images is simply the total number of lenslets, and the number of sub-aperture images available is given by the number of pixels behind each lenslet. For reference, the [Lytro Illum](https://en.wikipedia.org/wiki/Lytro) provides 15x15 sub-aperture images of 541x434 pixels each, for a total of ~53 Megapixels.
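+
+A minimal sketch of this decomposition (the array layout and variable names are illustrative assumptions, not the Lytro SDK), which also checks the pixel count quoted above:
+
+```python
+import numpy as np
+
+# Illustrative sizes matching the Lytro Illum figures quoted above.
+N_U, N_V = 15, 15      # pixels behind each lenslet (number of sub-aperture views)
+N_S, N_T = 434, 541    # number of lenslets (resolution of each sub-aperture image)
+
+# Raw sensor image, assumed already resampled onto a square grid of lenslets,
+# with the N_U x N_V pixels of each lenslet stored as a contiguous block.
+raw = np.zeros((N_S * N_U, N_T * N_V), dtype=np.float32)  # placeholder data
+
+# Rearrange into a 4D light field L[u, v, s, t].
+lf = raw.reshape(N_S, N_U, N_T, N_V).transpose(1, 3, 0, 2)
+
+# One sub-aperture image: the same pixel offset (u, v) under every lenslet.
+central_view = lf[7, 7]          # shape (434, 541)
+
+# Total pixel count quoted for the Lytro Illum.
+print(N_U * N_V * N_S * N_T)     # 52,828,650, i.e. ~53 Megapixels
+```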
 
-<p><img src="images/viewpoint7_png+img+margin.gif" alt="LF sub aperture images"></p>
+<img src="images/viewpoint7_png+img+margin.gif" alt="LF sub aperture images">
 
 The most efficient layout for lenslets is hexagonal packing, as it wastes the least pixel area. Note that some pixels are not fully covered by a lenslet and receive erroneous or darker data, so some sub-aperture images cannot be recovered.
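+
+For intuition (a standard circle-packing fact, not stated in the original notes): discs arranged on a hexagonal grid cover a fraction $\pi / (2\sqrt{3}) \approx 90.7\%$ of the plane, versus $\pi / 4 \approx 78.5\%$ for a square grid, so a hexagonal lenslet layout leaves fewer sensor pixels in the gaps between lenslets.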
 
-<p><img src="images/LF.png" alt="LF preview"></p>
+<img src="images/LF.png" alt="LF preview">
+
+
 
 Light Fields have gotten a lot of traction recently thanks to their high potential in VR applications. One impressive work was shown by Google in a SIGGRAPH 2018 paper:
 
 https://www.youtube.com/embed/4uHo5tIiim8
 
+### Depth Estimation
+
+Forming an image from these cameras requires sampling one pixel from each micro-lens to generate virtual viewpoints. The resulting sub-aperture images offer different perspectives with only subtle shifts, which makes depth estimation challenging due to their minute disparities.
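+
+A minimal sketch of this idea, assuming two horizontally adjacent sub-aperture views and a brute-force block-matching cost over sub-pixel shifts (the search range and scipy-based warping are illustrative choices, not a specific published method):
+
+```python
+import numpy as np
+from scipy.ndimage import convolve, shift as subpixel_shift
+
+def disparity_map(view_left, view_right, max_disp=1.0, steps=21, patch=9):
+    """Sub-pixel block matching between two adjacent sub-aperture views."""
+    h, w = view_left.shape
+    best_cost = np.full((h, w), np.inf)
+    best_disp = np.zeros((h, w))
+    box = np.ones((patch, patch)) / patch**2   # local averaging window
+
+    # Disparities between neighbouring views are tiny (often under a pixel),
+    # so search a fine grid of sub-pixel horizontal shifts.
+    for d in np.linspace(-max_disp, max_disp, steps):
+        warped = subpixel_shift(view_right, (0, d), order=1, mode="nearest")
+        cost = convolve((view_left - warped) ** 2, box, mode="nearest")
+        better = cost < best_cost
+        best_cost[better] = cost[better]
+        best_disp[better] = d
+    return best_disp   # per-pixel disparity; depth follows from the camera geometry
+```
+
+Real light-field methods aggregate such costs over many view pairs and use more robust matching, as in the stereo matching paper linked below.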
 
 
 Depth estimation on Light Field data is an active domain. For now, algorithms are commonly tested on ideal, synthetic light fields such as this [dataset](https://lightfield-analysis.uni-konstanz.de/). Here is one example of a point cloud obtained from a [stereo matching method](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8478503).
 
+"https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"
+
 <iframe title="4D light field - depth estimation" frameborder="0" allowfullscreen="" mozallowfullscreen="true" webkitallowfullscreen="true" allow="fullscreen; autoplay; vr" xr-spatial-tracking="" execution-while-out-of-viewport="" execution-while-not-rendered="" web-share="" src="https://sketchfab.com/models/b9edfdd28c154ecf995da7b8c6590da8/embed"> </iframe>
 
 
-- 
GitLab