diff --git a/topics/01_cameras/img/Price_Prime_Zoom_Lenses.jpg b/topics/01_cameras/img/Price_Prime_Zoom_Lenses.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b8712e5fd77f00fd4f84f447d6fe99c43328511a
Binary files /dev/null and b/topics/01_cameras/img/Price_Prime_Zoom_Lenses.jpg differ
diff --git a/topics/01_cameras/img/bokeh.jpeg b/topics/01_cameras/img/bokeh.jpeg
new file mode 100644
index 0000000000000000000000000000000000000000..d4ea7dcb2b2093632fc0cda6c5b0c354473601ee
Binary files /dev/null and b/topics/01_cameras/img/bokeh.jpeg differ
diff --git a/topics/01_cameras/img/chromatic.png b/topics/01_cameras/img/chromatic.png
new file mode 100644
index 0000000000000000000000000000000000000000..d8217d728123af11b812dee03a4148b0b1de5d71
Binary files /dev/null and b/topics/01_cameras/img/chromatic.png differ
diff --git a/topics/01_cameras/img/focus.png b/topics/01_cameras/img/focus.png
new file mode 100644
index 0000000000000000000000000000000000000000..727c4031b74d8c1b887bca929084ee2a763358c4
Binary files /dev/null and b/topics/01_cameras/img/focus.png differ
diff --git a/topics/01_cameras/img/rolling_shutter.gif b/topics/01_cameras/img/rolling_shutter.gif
new file mode 100644
index 0000000000000000000000000000000000000000..20adeb7e16e7cfe9021817ccba30b4d973e2faff
Binary files /dev/null and b/topics/01_cameras/img/rolling_shutter.gif differ
diff --git a/topics/01_cameras/index.md b/topics/01_cameras/index.md
index 9c25b644235f018ce174cacf0b5f91b8eddcbbfb..274a24520bbd0bc28f4c4b61b2cdf918c54497e3 100644
--- a/topics/01_cameras/index.md
+++ b/topics/01_cameras/index.md
@@ -24,16 +24,16 @@ The term camera is derived from the Latin term *camera obscura*, literally trans
 
 ![](img/pinhole.png)
 
-Using only a small hole (pinhole) blocks off most of the light, but also constraints the geometry of rays, leading to a 1-to-1 relationship between a point on the sensor (or wall!) and a direction. Given a 3D point $(x,y,z)$ in space, the point on the sensor $(u, v)$ is given by:
+Using only a small hole (pinhole) blocks off most of the light, but also constrains the geometry of rays, leading to a 1-to-1 relationship between a point on the sensor (or wall!) and a direction. Given a 3D point $(x,y,z)$ in space, the point on the sensor $(u, v)$ is:
 
 $$\begin{cases}
 u = f \frac{x}{z}\\
 v = f \frac{y}{z}
 \end{cases}$$
 
-in which $f$ is the focal length: the distance from the pinhole to the sensor. Multiple 3D coordinates fall onto the same sensor point; cameras turn the 3D world into a flat, 2D image.
+in which $f$ is the focal length: the distance from the pinhole to the sensor. Zooming in corresponds to increasing the focal length. Conversely, short focal lengths are associated with wide-angle photography.
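+
+To make this concrete, here is a minimal Python sketch of the projection above (the function name and the numbers are illustrative, not from the text):
+
+```python
+def project(x, y, z, f):
+    """Pinhole projection: map a 3D point to sensor coordinates (u, v)."""
+    assert z > 0, "the point must be in front of the camera"
+    return f * x / z, f * y / z
+
+# Two points on the same ray through the pinhole land on the same spot:
+print(project(1.0, 2.0, 10.0, 0.05))  # (0.005, 0.01)
+print(project(2.0, 4.0, 20.0, 0.05))  # (0.005, 0.01) -- depth is lost
+```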
 
-Let's make the sensor coordinate system more general, by introducing an origin $(u_0,v_0)$ and non-isotropy in the $x$ and $y$ focal lengths, which is necessary to describe non-rectilinear sensors. The complete pinhole camera model can be summarized by a single affine matrix multiplication:
+The equation shows that multiple 3D coordinates fall onto the same sensor point; cameras turn the 3D world into a flat, 2D image. Let's make the sensor coordinate system more general by introducing an origin $(u_0,v_0)$ and separate focal lengths along $x$ and $y$, which is necessary to describe sensors with non-square pixels. The complete pinhole camera model can be summarized by a single affine matrix multiplication:
 
 $$
 \begin{bmatrix}
@@ -130,16 +130,17 @@ It is common to choose the $z$ axis to point **toward** the scene, and the $y$ a
 ## Technologies
 
 ![](img/sensor.jpg)
-*source: https://www.automate.org/vision/blogs/ccd-vs-cmos-image-sensors-which-are-better*
 
-We'll focus on the two main families of digital sensors: CCD and CMOS.
+*source: https://www.automate.org/vision/blogs/ccd-vs-cmos-image-sensors-which-are-better*
 
-In both families, the actual light sensing is based on the electron-hole pair generation in MOS devices.
+We'll focus on the two main families of digital sensors: CCD and CMOS. In both families, the actual light sensing is based on electron-hole pair generation in MOS photodiodes. The main difference is how the accumulated charge is converted to a signal, leading to tradeoffs in complexity, signal-to-noise ratio, and readout speed.
 
 ### CCD
 
 ![](img/ccd.png)
 
+*source: https://www.princetoninstruments.com/learn/camera-fundamentals/ccd-the-basics*
+
 In CCD sensors, the generated charges in the photodiodes are accumulated under a potential well, controlled by a voltage on the gate.
 
-Charges can be moved to a neighboring pixel by performing a specific sequence on the gates. By shifting the charges all the way to the edge of the sensor, individual pixel values can be readout sequentially.
+Charges can be moved to a neighboring pixel by applying a specific voltage sequence to the gates. By shifting the charges all the way to the edge of the sensor, individual pixel values can be read out sequentially.
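+
+A toy model of this serial readout (purely illustrative, not a description of real driving electronics):
+
+```python
+def ccd_readout(row):
+    """Read a CCD row by repeatedly shifting charges toward the edge."""
+    row = list(row)
+    values = []
+    for _ in range(len(row)):
+        values.append(row[-1])  # the edge pixel reaches the output node
+        row = [0] + row[:-1]    # all remaining charges shift one pixel over
+    return values
+
+print(ccd_readout([3, 1, 4]))  # [4, 1, 3]: pixels emerge one at a time
+```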
@@ -149,13 +150,23 @@ Advantage of CCD sensors include the simplicity of their design, and the large s
 ### CMOS
 
 ![](img/cmos_pixels.png)
+
 *source: Coath, Rebecca, et al. "Advanced pixel architectures for scientific image sensors." (2009).*
 
+In a CMOS sensor, each pixel is in charge (pun intended) of collecting light and converting it to a signal. The added complexity made CMOS sensors impractical compared to CCDs for a long time, but they have since fully caught up.
+
+The main principle is as follows: the charge accumulated by the photodiode directly controls the gate of an **amplifier**. In other words, the current induced by incoming light charges up the gate capacitance of the amplifier. This charge remains until a **reset** is initiated.
+
+The output value is read by **selecting** the pixel. Usually, an entire row is read out at once.
+
+Using a 4-transistor architecture, the exposure time can be controlled by decoupling the photodiode from the amplifier's gate on command.
+
-## color
+## Color
 
 The most common way of capturing color images with a digital sensor is a Bayer filter, interleaving color filters in front of pixels in this pattern:
 
 ![](img/bayer.png)
+
 *source: https://en.wikipedia.org/wiki/Bayer_filter*
 
 For every red or blue pixel, there are two green ones. This is to mimic the human eye's increased sensitivity to green light.
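+
+Since each pixel records only one of the three channels, the two missing values must be interpolated from neighbors, a step known as demosaicing. The text doesn't prescribe an algorithm; the following is a minimal sketch of bilinear demosaicing for an RGGB layout, assuming `numpy` and `scipy` are available:
+
+```python
+import numpy as np
+from scipy.ndimage import convolve
+
+def demosaic_bilinear(raw):
+    """Bilinearly interpolate an RGGB Bayer mosaic (H, W) into an (H, W, 3) image."""
+    h, w = raw.shape
+    r = np.zeros((h, w)); r[0::2, 0::2] = 1                     # red sample sites
+    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1  # the two greens
+    b = np.zeros((h, w)); b[1::2, 1::2] = 1                     # blue sample sites
+    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
+    channels = []
+    for mask in (r, g, b):
+        # Normalized convolution: average the known samples around each pixel.
+        channels.append(convolve(raw * mask, kernel) / convolve(mask, kernel))
+    return np.stack(channels, axis=-1)
+```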
@@ -185,45 +196,118 @@ The luminance $Y$ can be thought of as a grayscale value. The coefficients in th
 
 # Lens
 
+The pinhole cameras presented earlier capture very little light, requiring long exposure times (sometimes hours!). They also suffer from blurry details and from vignetting toward the borders of the image: the hole's effective size shrinks as the incident angle increases.
+
+To gather more light, a lens can be used. The goal of the lens is to take the light rays emitted by a point in the scene and focus them back into a single point on the sensor:
+
 ![](img/pinhole_lens.png)
 
+The lens equation provides a relationship between the object distance $d_o$ and the image distance behind the lens $d_i$:
+
+$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}$$
+
+where $f$ is the focal length of the lens. Note how $d_i$ tends toward $f$ as $d_o$ tends toward infinity: for very far objects, the correct distance between the lens and the image plane equals the focal length. This brings us back to the pinhole camera model, in which the focal length was simply the distance between the hole and the sensor.
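+
+As a quick numerical check (the function name and values are illustrative):
+
+```python
+def image_distance(f, d_o):
+    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i (units of f)."""
+    return 1.0 / (1.0 / f - 1.0 / d_o)
+
+# A 50 mm lens: distant subjects focus near f, close ones need more extension.
+print(image_distance(50, 10_000))  # ~50.3 mm for a subject at 10 m
+print(image_distance(50, 1_000))   # ~52.6 mm for a subject at 1 m
+```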
+
+As scene objects get closer to the camera, the lens needs to be moved away from the sensor to keep them in focus. This also causes a slight zoom effect ("focus breathing"), familiar to seasoned photographers.
+
+The plane of focus (or focus point) is the part of the scene in perfect focus:
+
+![](img/focus.png)
+
+*source: https://greatbigphotographyworld.com/depth-of-field-how-what-when/*
+
+When a light-emitting point is either in front of or behind this plane, it shows up as a blurry spot on the sensor, called the **circle of confusion**. When this circle of confusion is no larger than a pixel, the scene point is still considered in focus. This defines a region of the scene in which blur is imperceptible, delimited by a plane in front of the plane of focus and one behind it. The distance between those two planes is called the **depth of field**.
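+
+The circle of confusion can be computed from the lens equation with a similar-triangles argument (this derivation is not spelled out in the text; names and values below are illustrative):
+
+```python
+def circle_of_confusion(f, aperture_d, focus_dist, point_dist):
+    """Blur-spot diameter on the sensor for a thin lens (all lengths in mm)."""
+    img = lambda d: 1.0 / (1.0 / f - 1.0 / d)  # thin-lens image distance
+    s_i = img(focus_dist)  # sensor position: the plane of focus is sharp there
+    d_i = img(point_dist)  # where the point would actually come into focus
+    # The light cone of base `aperture_d` converges at d_i; the sensor plane
+    # at s_i cuts it into a spot of this diameter:
+    return aperture_d * abs(d_i - s_i) / d_i
+
+# 50 mm lens with a 28 mm aperture, focused at 3 m: a point at 5 m blurs to
+# ~0.19 mm, far larger than a typical pixel, so it is visibly out of focus.
+print(circle_of_confusion(50, 28, 3000, 5000))
+```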
+
 ## Distortion
 
+As lenses don't bend light rays exactly as the pinhole camera model predicts, they introduce distortion. This is modeled as a shift between the ideal pinhole $(u, v)$ coordinates and the observed ones. Barrel distortion is the most familiar type, often visible in wide-angle photography.
+
 ![](img/lens_distortion.png)
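+
+The text doesn't commit to a particular distortion model; a common choice in calibration tools is a polynomial radial model, sketched here with hypothetical coefficients:
+
+```python
+def radial_distort(u, v, k1, k2, u0=0.0, v0=0.0):
+    """Shift ideal pinhole coordinates according to a polynomial radial model."""
+    x, y = u - u0, v - v0                  # relative to the optical center
+    r2 = x * x + y * y
+    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # k1 < 0 produces barrel distortion
+    return u0 + x * scale, v0 + y * scale
+```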
 
+## Chromatic aberration
+
+The refractive index of the lens material varies slightly with the wavelength of the incoming light, causing a separation of colors:
+
+![](img/chromatic.png)
+
+*source: https://www.studiobinder.com/blog/what-is-chromatic-aberration-effect/*
+
+These effects can be tackled by combining multiple optical elements (e.g., an achromatic doublet). Software correction is also possible if the lens was properly calibrated beforehand.
+
+## Prime lens vs zoom lens
+
+Lenses capable of zooming are very common, but introduce significant complexity. This leads either to a higher price or to compromises in sharpness or aperture. A prime lens, on the other hand, offers no zoom capability, but often has superior image quality and calibration.
+
+![](img/Price_Prime_Zoom_Lenses.jpg)
+
+*source: https://www.slrlounge.com/glossary/prime-lens-definition/*
+
 # Aperture
 
-An aperture gives control over the amount of light entering the lens. It's effectively equivalent to having a smaller lens diameter.
+By using a diaphragm, the amount of light entering the lens can be controlled, effectively emulating a lens of a smaller diameter. The opening left by the diaphragm is called the aperture:
 
 ![](img/aperture.png)
+
 *source: https://www.adorama.com/alc/camera-basics-aperture/*
 
-Aperture values are often expressed as f-numbers, defined as a ratio between the aperture diameter and the focal length of the lens:
+Aperture values are often expressed as f-numbers, defined as the ratio of the focal length of the lens to the aperture diameter:

-$$f_{\rm number} = \frac{d_{\rm aperture}}{f}$$
+$$f_{\rm number} = \frac{f}{d_{\rm aperture}}$$
 
-This quantity is directly related to the light density reaching the sensor.
+This quantity determines the light density reaching the sensor (proportional to $1/f_{\rm number}^2$), and lets a photographer estimate the amount of light captured independently of the focal length.
+
+When scene points are out of focus, their circle of confusion takes the shape of the aperture. This is known as the **bokeh** effect, and is especially visible in scenes containing distinct, bright points. Notice how the shape of the diaphragm blades is visible in this picture:
+
+![](img/bokeh.jpeg)
+
+*source: https://clideo.com/resources/what-is-bokeh-photography-effect*
 
 # Shutter
 
+Light sensors integrate light continuously. To obtain a useful image, the exposure start and end times need to be well defined.
+
 ## Mechanical shutter
 
+A straightforward way of blocking all light coming to the sensor is to hide it behind a curtain. In early photography, this was done manually by sliding a plate in front of the sensor or the lens. For more precise control, mechanical shutters were developed, with carefully controlled timing.
+
+A popular type of mechanical shutter is the **focal plane shutter**, in which two curtains move across the sensor. The first curtain starts the exposure, and the second curtain ends it. The exposure duration is modulated by changing the distance between the two curtains:
+
 ![](img/shutter.gif)
+
 *source: https://www.youtube.com/watch?v=CmjeCchGRQo*
 
+As different parts of the sensor are exposed at different times, they capture different instants. This is known as the rolling shutter effect, and leads to distorted images:
+
+![](img/rolling_shutter.gif)
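+
+A toy simulation of the effect (illustrative, assuming `numpy`): build a single frame whose rows are sampled from successively later frames of a short video:
+
+```python
+import numpy as np
+
+def rolling_shutter(frames):
+    """Simulate a rolling shutter on a (T, H, W) video: row y of the output
+    is read from a progressively later input frame."""
+    t, h, w = frames.shape
+    times = np.linspace(0, t - 1, h).astype(int)  # readout time of each row
+    return frames[times, np.arange(h), :]
+```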
+
+**Leaf shutters** are a less common alternative to focal plane shutters, implemented directly in the lens, near the diaphragm. The carefully designed shape of the leaves ensures a consistent exposure time over the whole sensor area:
+
 ![](img/leaf_shutter.png)
+
 *source: Hasselblad*
 
+Although they add some complexity, leaf shutters don't suffer from rolling shutter artifacts, as all parts of the sensor are exposed simultaneously.
+
 ## Electronic shutter
 
-# Photography basics
+Controlling the exposure duration electronically is becoming the new standard, and eliminates the need for moving parts in front of the sensor. Electronic shutters have been implemented in both CCD and CMOS sensors.
+
+**Global shutter** is the holy grail of exposure strategies: all pixels start and stop exposing simultaneously, fully eliminating rolling shutter artifacts. Very high speed photography benefits from this; for example, the Sony α9 III offers shutter speeds of 1/80,000 of a second.
 
-## The three parameters
+# Photography basics
 
 Photography mainly comes down to setting three parameters on the camera:
 
-- Aperture
-- Shutter speed
-- ISO
+- **Aperture**
+- **Exposure time (shutter speed)**
+- **ISO**
 
-Each parameter can be converted to a $\log_2$ scale. A common name for a unit on that scale is a **stop**. For example, increasing exposure by one stop can be achieved by doubling the shutter speed, doubling the ISO or increasing the aperture by $\sqrt{2}$.
+Each parameter can be converted to a $\log_2$ scale. A common name for a unit on that scale is a **stop**. For example, increasing exposure by one stop can be achieved by doubling the exposure time, doubling the ISO, or increasing the aperture diameter by a factor of $\sqrt{2}$ (i.e., dividing the f-number by $\sqrt{2}$).
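+
+A small sketch of this stop arithmetic (the values are illustrative):
+
+```python
+import math
+
+def stops(light_ratio):
+    """Convert a ratio of captured light to stops (a log2 scale)."""
+    return math.log2(light_ratio)
+
+# Doubling the exposure time or the ISO adds one stop:
+print(stops((1 / 60) / (1 / 125)))  # ~ +1.06 stops
+print(stops(800 / 400))             # exactly +1 stop
+# Captured light scales as 1/f_number^2, so f/2.0 -> f/2.8 costs ~one stop:
+print(stops(2.0**2 / 2.8**2))       # ~ -0.97 stops
+```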
+
+Unless shooting in a studio, the photographer has no control over the amount of light available and has to make choices over the three settings. While arbitrarily increasing all three sounds like an easy way to get enough exposure, there are tradeoffs to consider:
+
+- **Increasing exposure time**: more motion blur
+- **Increasing aperture**: more out-of-focus blur (less depth of field)
+- **Increasing ISO**: more noise
+
+Any modern camera measures the amount of light available and offers automatic tuning of the three settings. Seasoned photographers often opt for full manual control, but a good compromise is to fix two settings and let the camera choose the last one. In **shutter priority** mode (Tv or S), the user chooses the shutter speed, and the aperture is decided by the camera. In **aperture priority** mode (Av or A), the user sets the aperture, and the shutter speed is automatic.