Optical calibration, bathymetry, water column correction and bottom typing of shallow marine areas, using passive remote sensing imageries
WorldView 2 image at Waimanalo Beach, Oahu, Hawaii islands
3289x3241, 2 m ground resolution, courtesy of Ron Abileah


updated July 13th

  • I found it extremely difficult to estimate proper conditions for deglinting, and therefore for estimating the deep-water radiance Lsw.
    • seems that this is now solved in 4SM for this WV2 image
    • it remains to be seen whether or not other WV2 images exhibit the same weird properties
  • The very high level of system noise in the data seems to originate in the glint signal itself 
    • I have to apply extremely punishing smoothing conditions over water 
    • smart-smoothing does not make sense here;
    • this almost nullifies the very attractive 2 m resolution!
  • In the end, all WV2 wavelengths should be specified at mid-waveband
    • this important finding was derived from the use of the Lidar seatruth

Ron: "Do the patterns of over/under estimated depths
correlate with bottom types?"


  • Yes: the role of the bottom spectral signature cannot be fully corrected for.

  • The effort aims at achieving the best possible correction.

  • Of course, for doing that, the practitioner must make some assumption that pertains to the BOA bottom spectral signature: this can be

    • an exhaustive and possibly universal "library" of existing bottom type signatures collected underwater in advance of water column correction: methods by EOMAP/Heege and Goodman/Fearns,

    • or a "LUT" approach: pick your choice out of a mammoth matrix containing all combinations of K values, bottom substrate spectral features, and bottom depths: methods by Mobley and SAMBUCA,

    • or some kind of assumption that pertains to what I call the Soil Line in 4SM

      • either through "multi-linear" regressions of shallow radiances at a number of known depths in order to derive coefficients and offsets: methods by Paredes and Spero, then Polcyn, then Bierwirth, then Lyzenga, then Stumpf

      • or through the observed spectral Soil Line over bare dryland, from which to derive the "ideal" -or average- spectral bottom signature: 4SM method
      • after all,  dryland in the image, from bright to dark, is at depth=ZERO: let's make it our "null depth spectral reference"
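The Soil Line idea can be sketched in a few lines of Python (synthetic DNs and band names are illustrative assumptions, not the 4SM implementation): regress one band against another over bare dry-land pixels, then rescale so the fitted slope becomes 1, as the normalization described later requires.

```python
import numpy as np

def soil_line(band_i, band_j):
    """Least-squares fit band_i = slope * band_j + offset over bare
    dry-land pixels, bright to dark: the depth = 0 spectral reference."""
    slope, offset = np.polyfit(band_j, band_i, 1)
    return slope, offset

# Synthetic dry-land samples for two bands (hypothetical DNs)
rng = np.random.default_rng(0)
bj = rng.uniform(50.0, 200.0, 500)                  # reference band over bare land
bi = 0.8 * bj + 12.0 + rng.normal(0.0, 2.0, 500)    # correlated second band

slope, offset = soil_line(bi, bj)
# Normalizing band_i with the fitted slope/offset forces the
# Soil Line slope to 1 for this pair of bands
bi_norm = (bi - offset) / slope
```

After the rescaling, a regression of `bi_norm` against `bj` returns a slope of 1 by construction.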

Ron: "How can water pollution explain the depth errors?"

  • Your water is nice and homogeneous.

  • You manage to estimate its "water volume reflectance" Lsw, which prevails over optically deep areas and shall be applied to the whole marine area.

  • Now add some chlorophyll and suspended particles to your clear waters at a particular location: call it a plume.

  • This makes your water look locally a bit "milky". Therefore the water's "water volume reflectance" is increased locally: a few DNs higher than your adopted deep water reflectance Lsw.

    • This increases the marine signal Ls by those few DNs, but you don't know it and can't correct for it.

    • It also reduces the penetration of light in the water.

  • This increases the apparent bottom contrast Ls-Lsw, up to the point where you may believe you have bottom detection although the pixel is far too deep for any bottom detection to be possible. 

  • But you can't account for that unless you resort to Analytical methods, lots of measurements on site, and top-notch airborne hyperspectral data like Ocean PHILLS.

    • Therefore your computed depth shall be underestimated, and can very well be a "ghost".

  • I can account for it in 4SM, provided (this is a lot of work)

    • suspended load does not prevent bottom detection

    • pollution is homogeneous

    • and a mask is specified to contain all polluted areas, so that the masked polygons can be processed using specific calibration parameters: see tmnov_tutorial
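The plume bias can be put into numbers with a single-band inverse model (all DNs below are illustrative assumptions, not values from the Waimanalo calibration):

```python
import math

two_k_green = 0.123   # m^-1, green-band 2K of Jerlov Oceanic Type 1 waters
Lsw = 30.0            # adopted deep-water DN (illustrative)
LB = 120.0            # bottom DN at null depth (illustrative)
z_true = 10.0         # actual bottom depth in meters

def retrieved_depth(Ls, Lsw, LB, two_k):
    """Invert the single-band model Ls = Lsw + (LB - Lsw) * exp(-2K*Z)."""
    return math.log((LB - Lsw) / (Ls - Lsw)) / two_k

# Clear water: forward model and inversion agree
Ls_clear = Lsw + (LB - Lsw) * math.exp(-two_k_green * z_true)

# A plume raises the local water volume reflectance by a few DNs,
# while the operator still subtracts the clear-water Lsw
plume = 5.0
z_biased = retrieved_depth(Ls_clear + plume, Lsw, LB, two_k_green)
# z_biased comes out shallower than z_true: the depth is underestimated
```

With these numbers a 5 DN plume costs well over a meter of depth at 10 m.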

Ron: "Besides the soil line, do you use known depths to calibrate?"

Ron: "The depths with WKB are accurate...
I was wondering if that can be used as input into 4SM?"


  • Sorry, I have to say: NO

  • To the extent that effective wavelengths can be precisely specified, the answer to your question is: NO, definitively NO

  • Suppose the answer was: YES! That's a rich man's gimmick altogether. I'd rather spend that money on a series of WV2 images, near-nadir viewing, calm and clear day, process them separately, and run a final CombinedDepth process in order to produce a final and seamless DTM: 4 or 5 images are enough to get rid of the clouds.

Ron: "Here I think there are only 2, sand and coral"

Ron: "We need to also look in coastal areas with higher Chl concentration"

Ron: "The errors do not appear to coincide with the river outflow"

  • My tentative interpretation is that waters from Waimanalo Stream ooze out and sink and tend to fill all depressions (a stratified waters situation).

  • For this they need to be denser than ocean waters.

  • I was able to measure that waters just at the stream outlet are of Jerlov's Coastal type 1: clear waters indeed, with a hint of dissolved organic matter affecting the blue bands

  • See the illustration above:

    • the 2-3 m depths of coral patches are correct

    • but the depths over the 12 m deep moat that surrounds them are underestimated by up to 7 m

    • please notice that the moat is in fact a gully: the depths inside that gully are distinctly underestimated down to ~20 m

  • This reminds me of CASI at Prince Edward Island, Nova Scotia

Ron: "Why are you using 1 m tide correction?"

  • The SHOALS Lidar digital terrain model is probably referred to the lowest astronomical tide (LAT), like most nautical charts.

  • My computed depths are consistently ~1 meter deeper than SHOALS depths
    • This can be appreciated from results where Red band exhibits healthy bottom detection
    • In which case the depth is estimated with an excellent precision: better than 0.1 m
  • For a best fit in seatruth regressions,
    • I could just as well add a 1 m tide correction to SHOALS depths
    • and leave my estimated depths uncorrected!
  • NOAA tide prediction is ~0.30 m above MLLW

Ron: "I wonder how the results may change with"

  • an adjustment in the spectral library 

    • No need for spectral libraries here: resorting to spectral libraries means

      • collecting reflectance spectra for all possible shallow substrates: this library is likely to be representative only of the illumination and phenologic conditions on site at the time of imaging/fieldwork (a huge team effort, and only for one not concerned with turnover time)

      • working with data that have been converted into physical units of reflectance (fully fledged atmospheric corrections; not my cup of tea)

    • Once spectral K is estimated, Z is increased in the inverse model LB = Lsw + (Ls - Lsw)*exp(2K*Z)

      • until the ratio (LBi + LBj + ... ) / (n*LBk) equals 1
      • or, for example, if bands 1, 2, 3 and 4 are used: LB1 + LB2 + LB3 = 3*LB4 
    • This is because, in 4SM, all radiances are normalized prior to water column correction 

      • so that the slope of the Soil Line is made to equal 1 for all pairs of bands (isn't that cool?)

  • taking account of differences between wet and dry material spectra 

    • Wet material is only darker; but its spectral reflectance features are not altered
      • they are just less contrasted
      • this does not affect band ratio: cool!
    • Because we're working with ratios, the estimated depth does not depend on the relative brightness/darkness of a shallow substrate, but only on its relative spectral reflectance features
  • adjusting water attenuation coefficients 
    • Effective attenuation coefficients are estimated through the calibration process in 4SM : what I "used" is the series of diffuse attenuation coefficients for visible bands derived from Jerlov's table for a water type which has been estimated to be Oceanic Type 1     
    • yes: as clear as it can possibly get, for an observed ratio K2/K3=0.35 with WLblue=477 nm and WLgreen=546 nm
  • what water attenuation coefficients did you use for Oahu?
    • "what did you use" is not the term I would use: no magic wand behind!
    • "what did you find" would be more proper
    • 2Kpurple    0.050 m-1
    • 2Kblue       0.043 m-1
    • 2Kgreen     0.123 m-1
    • 2Kyellow    0.516 m-1
    • 2Kred         0.771 m-1
  • pixel unmixing (some pixels may be a mix of two or more materials)
    • So, you're back to that two-substrates segmentation!
    • Of course, bottom typing shall yield the spectral signature of "bright sands" and of "dark coral reef", and one could play with various mixtures of these.
    • Remember though that we/you paid a good fat price for 2 m ground resolution...
      • Pixel unmixing does make sense with 30 m Landsat mixels,
      • while WV2's very high resolution aims at delivering to coastal users the resolution they insisted they required for shallow water work...
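As a minimal sketch of the depth-increase loop described earlier (normalized radiances, slope-1 Soil Line, stop when the ratio reaches 1), using the spectral 2K values found for Oahu; the synthetic pixel and deep-water values below are illustrative assumptions, not the 4SM code:

```python
import numpy as np

# Spectral 2K values found for Oahu (m^-1): purple, blue, green, yellow
TWO_K = np.array([0.050, 0.043, 0.123, 0.516])

def solve_depth(Ls, Lsw, two_k=TWO_K, dz=0.01, z_max=30.0):
    """Step Z upward, de-attenuating each band with the inverse model
    LB = Lsw + (Ls - Lsw) * exp(2K*Z), until the slope-1 Soil Line
    criterion (LB1 + LB2 + LB3) = 3*LB4 is reached."""
    for z in np.arange(0.0, z_max, dz):
        LB = Lsw + (Ls - Lsw) * np.exp(two_k * z)
        if LB[:3].sum() <= 3.0 * LB[3]:   # high-K band has caught up: ratio = 1
            return z
    return z_max   # never converged: treat the pixel as optically deep
```

Forward-simulating a pixel with a flat normalized bottom at 8 m and feeding it back through `solve_depth` recovers the depth to within the step size.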

"I see you were able to use band 1 here"

  • Well, band 1 is OK in this image
  • this is Ron's deglinted band_1, further deglinted by 4SM, 16U, NO-smoothing
 Rich: "I wonder whether the usefulness of band 1
depends on relation of incidence angles of the satellite and sun?"
"Perhaps it would be worth comparing these values
between scenes where the band has or hasn't been useful?"
 Let's get started:
extract from 11MAR31215109-M2AS-052656310070_01_P002.IMD from DigitalGlobe:
    meanSunAz = 146.5;
    meanSunEl =  70.0;
    meanSatAz = 306.1;
    meanSatEl =  47.1;
    meanInTrackViewAngle =  14.9;
    meanCrossTrackViewAngle = -34.8;
    meanOffNadirViewAngle =  37.5;
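For what it is worth, the IMD angles above give a first-order two-way geometric path stretch through the water column (pure Snell geometry, ignoring the scattering that the field-of-view argument further down relies on; the refractive index is the usual seawater value):

```python
import math

# Angles from the DigitalGlobe .IMD extract above (degrees)
mean_sun_el, mean_sat_el = 70.0, 47.1
sun_zenith = 90.0 - mean_sun_el    # 20.0 deg
view_zenith = 90.0 - mean_sat_el   # 42.9 deg

def in_water(theta_air_deg, n=1.34):
    """Snell refraction of an air-side zenith angle into seawater (n ~ 1.34)."""
    return math.degrees(math.asin(math.sin(math.radians(theta_air_deg)) / n))

# Two-way geometric path stretch: downwelling along the refracted sun path,
# upwelling along the refracted view path
path_factor = (1.0 / math.cos(math.radians(in_water(sun_zenith))) +
               1.0 / math.cos(math.radians(in_water(view_zenith))))
# With sun and sensor reasonably high, the factor stays close to 2
```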
Rich: "Do your results show the errors in deeper waters tend to curve off
to 'over-estimate' for brighter sea bed reflectance
and to 'under-estimate' for darker reflectance?"
  • this is to be expected from investigating the plot of Stumpf's model
    • in this "model",
    • a very-very dark pixel at depth 3.0 m
    • is given the same computed depth
    • as a very-very bright pixel at a depth of 6.0 m
  • this was observed in a Landsat TM image at Caicos Bank, Bahamas
  • this Waimanalo scene does not lend itself to this sort of petty investigation, as it lacks truly dark substrates
  • for this evaluation, we need strongly contrasted bottom brightnesses at various depths
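The expectation above can be checked with a toy version of Stumpf's log-ratio model (the K values, albedos and the scaling constant below are illustrative assumptions, not Stumpf's published tuning):

```python
import math

k_blue, k_green = 0.0215, 0.0615   # illustrative diffuse attenuation (m^-1)
n_const = 1000.0                   # Stumpf's fixed scaling constant

def stumpf_ratio(albedo, z, r_deep=0.01):
    """Log-ratio pseudo-depth for a spectrally flat bottom albedo at depth z
    (toy two-band model: R = r_deep + albedo * exp(-2K*z))."""
    rb = r_deep + albedo * math.exp(-2.0 * k_blue * z)
    rg = r_deep + albedo * math.exp(-2.0 * k_green * z)
    return math.log(n_const * rb) / math.log(n_const * rg)

# At equal depth, a dark bottom yields a larger ratio (reads deeper) and a
# bright bottom a smaller one (reads shallower): dark over-estimated,
# bright under-estimated, as in the plot of Stumpf's model
```

In this toy setting a very dark bottom at 3 m lands between a very bright bottom at 3 m and one at 5 m, reproducing the dark-shallow/bright-deep confusion.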

Rich: "when you apply de-glinting,
is there any possibility the linearity of spectral bands is affected?"

  • I can't see why or how.

  • for sure, linearity of sensor's response is something we depend on as the bottom contrast decreases

    • either because bottom depth increases

    • or because bottom brightness decreases

    • or both

Rich: "why are the brightest sea bottom reflectances only at very shallow depth? 
Shouldn't there be some deeper?"

  • I know, that is somewhat disturbing

  • for one: this is a young volcanic uneven construct that is quickly sinking: shallow bottoms are mostly a rock construct that slopes down quite steeply

  • then corals are seen to be relatively scarce, bright sand production is probably not that abundant (this is not the reef environment we're used to)

    • what we see is pockets/gullies of bright substrates: bright sand collects in the topographic lows and tends to migrate downslope through gullies and deeper pockets

    • these gullies and pockets are just a few meters deeper than their surroundings

  • The question that haunts me is: why do we observe a subtle alteration of the water quality (I called it : 'pollution') over a good half of the scene, which results in

    • underestimated depths

    • darker bottoms

Proof of pollution

Ron: "I see your new slide delineating a possible pollution area:
can pollution in this area be validated independently?"

  • "Is it only the error in bathymetry that is proof of pollution?"
    • NO: one can identify pollution of some sort before any modeling, by just inspecting deglinted Red and Yellow bands separately: see https://www.watercolumncorrection.com/waimanalo-pollution.php#bands
    • See that coral patches are totally obliterated in Red and Yellow bands, although they do rise up to a depth of ~4.5 m below water surface at the time of imaging:
      • at least, they should show up in the Yellow band which has bottom detection in excess of 11 m here
      • all that we see are whirls of "pollution", and a protruding bulge that advances seawards
    • Of course: garbage in the data, garbage out in the results!
  • "Can the IR bands be used as an independent measure of pollution?"
    • Usually YES: plumes of suspended particles show very distinctly in the NIR range
    • Not here, because NIR bands 7 and 8 are wiped flat by the deglinting process
    • NO because the deglinted NIR band 6 is free from any significant sign of pollution
      • this is disturbing, as this would rule out suspended sediment particles
  • See https://www.watercolumncorrection.com/waimanalo-pollution.php#proof

Viewing angle
Ron: "One benefit of the 7 images vs 1 image is the ability to observe the
bottom spectral properties under changing view angle.   Does your
algorithm assume a fixed spectrum or is there a view angle dependence?"


  • good question: please see Jerlov's comment

  • With both sun and sensor high in the sky, 2K~=2a

  • This means No_Need_For_Field_Data!

    • as those bottom-reflected photons which reach the sensor's narrow field of view experienced a near-vertical upward path through water:
      this is because most upwelling photons that have been scattered on the way up were redirected outside of the near-vertical narrow field of view of the sensor.

  • Or at least this is how I tend to explain the results I get from using the image itself to derive spectral 2K in units of m-1

