4SM for geeks
Here we disclose in plain language what specific tricks break new ground beyond Lyzenga's proposition.

Lyzenga's proposition
4SM specifics

No need for atmospheric correction
No need for field data

We all know that

Z = A + a*X1 + b*X2 + ... + k*XN

where, for each waveband at wavelength WLi, Xi = log(Lsi - Lswi):
- Lsi is the TOA radiance at the sensor over a shallow bottom
- Lswi is the TOA radiance at the sensor over an optically deep bottom
- the coefficients A, a, b, ..., k have to be estimated through multiple linear regression using existing depth sounding points.

We all know that this amounts to introducing an assumption regarding the spectral signature of the shallow bottom. Lyzenga's proposition works reasonably well over fairly bright bottoms, which are likely to be well represented in the depth sounding dataset used for calibration.

Another way to write this is by removing the path radiance La:

Z = A + a*X1 + b*X2 + ... + k*XN

where, for each waveband at wavelength WLi, Xi = log(Li - Lwi):
- Li is the BOA radiance just below the water surface over a shallow bottom: Li = Lsi - Lai
- Lwi is the BOA radiance just below the water surface over optically deep water: Lwi = Lswi - Lai. This is called the water volume reflectance, or backscatter.

Let Z = 0 for the two-bands case: this can be re-arranged into a straight line X1 = m0 + m1*X2, where the intercept is m0 = -A/a and the slope is m1 = -b/a.

But Lyzenga's proposition fails to acknowledge the role of the water volume reflectance Lw. This is why it is well documented to yield underestimated depths Z over dark bottoms. The reason is that, over dark bottoms, the water volume reflectance comes into play at shorter wavelengths, in the blue range and even in the green range of the solar spectrum, and more so as the bottom darkens even further, whereas it is negligible over bright bottoms. Another reason is that dark bottoms are likely to be under-represented in the depth sounding dataset used for calibration of the coefficients a, b, ..., k through multiple linear regression. This is why some users first segment their image into bright bottoms and dark bottoms, and assess specific coefficients a, b, ..., k for each segment.
This is illustrated at  http://www.watercolumncorrection.com/4sm-presentation.php
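As a sketch of Lyzenga's proposition, the snippet below linearizes synthetic two-band radiances with Xi = log(Lsi - Lswi) and fits the coefficients A, a, b by least squares against depth soundings. All numbers (attenuation coefficients, deep-water radiances, the fixed bright-bottom signature) are made-up illustrations, not values from 4SM or any real scene.

```python
# A minimal sketch of Lyzenga's linear depth model on synthetic data.
# All values below are illustrative assumptions, not real imagery.
import numpy as np

rng = np.random.default_rng(0)

K = np.array([0.07, 0.14])        # assumed diffuse attenuation, blue/green (1/m)
Lsw = np.array([45.0, 28.0])      # assumed deep-water TOA radiances
bottom = np.array([110.0, 95.0])  # one fixed bright-bottom signature
                                  # (Lyzenga's implicit assumption)

z = rng.uniform(0.5, 15.0, 500)   # known sounding depths (m)
# Two-way attenuated bottom signal added to the deep-water radiance.
Ls = Lsw[:, None] + bottom[:, None] * np.exp(-2 * K[:, None] * z[None, :])

# Linearization: Xi = log(Lsi - Lswi) for each waveband.
X = np.log(Ls - Lsw[:, None])

# Multiple linear regression Z = A + a*X1 + b*X2 against the soundings.
G = np.column_stack([np.ones(z.size), X.T])   # design matrix [1, X1, X2]
coef, *_ = np.linalg.lstsq(G, z, rcond=None)
z_hat = G @ coef
print("max |z_hat - z| =", np.abs(z_hat - z).max())
```

With a single fixed bottom signature the regression recovers the depths essentially exactly; over a mix of bright and dark bottoms, as the text explains, no single set of coefficients can do so, which is why depths come out underestimated over dark bottoms.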

In 4SM, we acknowledge some extra evidence that is plainly obvious in most image data.

First select a bunch of pixels in your image to represent non-vegetated land, from bright to dark. Then select a bunch of pixels in your image to represent bright shallow bottoms over the whole depth range. Then display these pixels in a two-dimensional plot, first as natural DNs, then after the X = log(Ls - Lsw) linearization. This is illustrated by the Xblue vs Xgreen plots for the atoll of Clipperton (Ikonos) and for the Negril shores in Jamaica (TM).

Estimating spectral K in m-1
Bright Pixels Line: as pointed out by Lyzenga, the exponential decay of the bright bottom reflected signal is seen to become a straight line after linearization. The slope of the BPL is the ratio Ki/Kj of the diffuse attenuation coefficients Ki and Kj at wavelengths i and j. The ratio Ki/Kj for a select pair of wavebands is then used to derive spectral values of K at all operational visible wavelengths.

Estimating spectral Lw
Soil Line: but why does the Soil Line exhibit a distinctly curved shape? The answer is: because Lwi >> Lwj. Because we all know that Lw ~= 0 over the RED-NIR range, this provides a practical way to estimate spectral values of Lw over the BLUE-GREEN range.

Calibration diagram
The strength of 4SM is that, unlike other empirical ratio methods, this approach keeps reasonable track of the basics of atmospheric and underwater optics through the calibration diagram.
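The Bright Pixels Line step above can be sketched as follows. The band names, attenuation coefficients, and radiances are hypothetical stand-ins, and the synthetic pixels share one bright-bottom signature so that the BPL comes out exactly straight.

```python
# A minimal sketch: the slope of the Bright Pixels Line estimates Ki/Kj.
# All values are illustrative assumptions, not calibrated 4SM numbers.
import numpy as np

rng = np.random.default_rng(1)

K_blue, K_green = 0.07, 0.14        # assumed attenuation coefficients (1/m)
Lsw_blue, Lsw_green = 45.0, 28.0    # assumed deep-water radiances
B_blue, B_green = 110.0, 95.0       # one fixed bright-bottom signature

z = rng.uniform(0.5, 20.0, 400)     # depths spanning the shallow range
Ls_blue = Lsw_blue + B_blue * np.exp(-2 * K_blue * z)
Ls_green = Lsw_green + B_green * np.exp(-2 * K_green * z)

# Linearization turns the exponential decay into the straight BPL.
X_blue = np.log(Ls_blue - Lsw_blue)
X_green = np.log(Ls_green - Lsw_green)

# Fit the Bright Pixels Line: its slope is Kblue/Kgreen.
slope, intercept = np.polyfit(X_green, X_blue, 1)
print("BPL slope =", slope, "expected Kblue/Kgreen =", K_blue / K_green)
```

In real imagery the bright-bottom pixels scatter about the BPL, so the slope is a fit rather than an exact value; the RED-NIR bands, where Lw ~= 0, then anchor the Soil Line estimation of Lw at the shorter wavelengths.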