Optical calibration, bathymetry, water column correction and bottom typing of shallow marine areas, using passive remote sensing imagery
 

Bathymetry and water column correction
A 1500×2100 series of Landsat ETM/TM four-band images 

of Ras Hatibah, Saudi Arabia

Work done in March 2010



 
1 - NO NEED for field data, nor for atmospheric correction
2 - this is demonstrated on this website, using a variety of hyper/multispectral data
 
Requirements are
1 - homogeneous water body and atmosphere
2 - some coverage of optically deep water
3 - some coverage of dry land
 
Problems are
1 - the precision of estimated depths is found wanting, because the noise-equivalent change in radiance of accessible data is too high for shallow water column correction work
2 - radiance data should be preprocessed by the provider at level 1 in order to improve the S/N ratio
3 - exponential decay: the deeper/darker the bottom, the poorer the performance
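The exponential-decay problem in item 3 follows from the standard two-flux shallow-water model, in which the bottom-reflected signal fades as exp(-2Kz). A minimal sketch; the symbols Ls, Lsw, LB and K follow the notation used further down this page, and the numeric values are illustrative only:

```python
import math

def at_sensor_radiance(LB, Lsw, K, z):
    """Two-flux shallow-water model sketch: radiance Ls observed over a
    bottom of brightness LB through a water column of attenuation K at
    depth z. The bottom contrast (Ls - Lsw) decays as exp(-2*K*z)."""
    return Lsw + (LB - Lsw) * math.exp(-2.0 * K * z)

# Illustrative values: deep-water radiance 10, bottom brightness 100, K = 0.1 /m
for z in (0, 5, 10, 20):
    print(z, round(at_sensor_radiance(100.0, 10.0, 0.1, z), 1))
```

Past a few attenuation lengths the contrast sinks below the noise floor, which is exactly why deeper/darker bottoms perform poorly.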
 
So
I keep digging
until suitable data
become available
 
 




Some answers, April 11th 2010

MASK?? 
"I would like to explore working with unmasked data if possible for future projects."
  • ==> I'm not working with any mask, other than a pixelwise segmentation into land or marine pixels, which is stored in channel 5
"Why was one masked more (at around 8m in places) than the other? 
Can unmasked data always be produced?"
  • Where the bottom brightness is interpreted to be very dark, the bottom contrast becomes negligible even at much shallower depths, and therefore no depth estimate may be produced.
  • Again, this is controlled through the -Lm... argument: see 4sm_help_Lm
  • I know! that is FRUSTRATING. But there is nothing that can be done about it.
  • But the following improve the potential dramatically:
    • brighter sea bottom (of course!!)
    • more sun light, 
    • more wavebands, 
    • clearer waters, 
    • deep-cooled sensor, 
    • and last but not least, 16-bit data
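The interplay between these factors and the depth at which a dark bottom "disappears" can be made concrete by inverting the decay model for the deepest depth at which the bottom contrast still exceeds a detection threshold Lm. A sketch with illustrative numbers, not calibrated 4SM values:

```python
import math

def max_detectable_depth(LB, Lsw, Lm, K):
    """Depth at which the bottom-reflected signal drops to the threshold
    Lm, inverted from Ls = Lsw + (LB - Lsw)*exp(-2*K*z)."""
    return math.log((LB - Lsw) / (Lm - Lsw)) / (2.0 * K)

# Same deep-water radiance, threshold and K; only the bottom brightness differs
bright = max_detectable_depth(LB=200.0, Lsw=10.0, Lm=15.0, K=0.1)
dark   = max_detectable_depth(LB=30.0,  Lsw=10.0, Lm=15.0, K=0.1)
print(round(bright, 1), round(dark, 1))  # the dark bottom is lost far shallower
```

This is why a dark bottom may be "masked" at around 8 m while a brighter one is still modeled much deeper in the same image.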
Reduce the detail?
"For the 2003-01-10 results, if you compare the de-glinted channels (ch24, 25, 26) with the original data, I am sure there must be some additional smoothing filter applied.  It looks more than just the de-glinting and seems to reduce the detail on some features and starts to remove some deep narrow ridges or points.  
Can you advise please. "
  • Smart smoothing does not mix marine and non-marine pixels: see 4sm_help_Smooth
  • Your concern is about deep features which return a bottom-reflected signal just above the noise:
    • Sure, even a very smart smoothing is likely to wipe out very faint but real features.
    • More so if one sets the -Lm... threshold argument values in order to prevent odd "results" from populating the otherwise deep waters: see 4sm_help_Lm: this is probably where you seem to think that I have been applying "masks"
    • Then you have to realize that modeling needs sufficient bottom contrast (Ls-Lsw) in at least two bands: this means that as the green band loses bottom contact, the blue band still returns a fairly strong bottom-reflected signal. So if you still want a depth estimate for those pixels with significant bottom contrast in the sole blue band, then I must use the "one-band case": see optical_modeling/sld054
    • Operating the "one-band case" for pixels which have Ls<Lm in the green band needs an assumption on the bottom brightness.
    • This gives interesting results which exhibit the deeper features in earnest, but are very disconnected from the rest of the DTM wherever the assumed brightness does not apply...
  • If wanted, I just have to enable some features of the 4SM command line, in order to allow the blue band to yield results when it finds itself on its own: this is the "one-band case"
    • First is the -M... argument
      • -M/000001/00002/00003/00004               
      • instead of -M/@00001/00002/00003/00004                       
      • sorry, no 4sm_help_M yet
    • With -M/000001/00... specified
      • 4SM shall use specified LBref value to operate the one-band case
      • -B/tclNe5.00/LBref50_20/Bmin0/cLM1.00    
      • instead of -B/tclNe5.00/LBref200_100/Bmin0/cLM1.00    
    • In addition, where water properties or deglinting are lousy, a pixel can yield a very lousy result (usually a very dark bottom and an unacceptably shallow depth)
      • this is controlled by the Bmin variable in the -B/tclNe5.00/LBref200_100/Bmin0/cLM1.00
        • Bmin0: all pixels are accepted, whatever the results...
        • Bmin20: pixels that return B<20 are deemed deep water, or recomputed using the one-band case
      • either the current pixel is mapped "deep water" if its computed bottom brightness is less than specified Bmin: this is how I forced deep waters in some channels closest to the coastline, by using Bmin20 for example
      • or the one-band case is used if -M/000001/00... is specified: this is how I can force depth results to appear where only the blue band exhibits significant bottom contrast
    • see command line ETM_2000-11-01.sh
  • Whether this can satisfy some select end user, I leave it to your appreciation.
  • But it is a risky game, as the computed depths can be very odd, even though they actually exhibit realistic depth variations and many of them are very real.
  • see the series of ETM_2000-11-01... illustrations
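The "one-band case" described above amounts to a single-band inversion of the decay model under an assumed bottom brightness LBref, and it shows plainly why a wrong assumption biases the depth. A sketch; the function name, the NaN-free None convention and the numeric values are mine, not 4SM's:

```python
import math

def one_band_depth(Ls, Lsw, LBref, K, Lm):
    """One-band case sketch: invert Ls = Lsw + (LBref - Lsw)*exp(-2*K*z)
    for z, assuming the bottom brightness LBref. Returns None where the
    signal is below the Lm threshold (deemed deep water)."""
    if Ls <= Lm:
        return None  # no usable bottom contrast in this band
    return math.log((LBref - Lsw) / (Ls - Lsw)) / (2.0 * K)

# If the assumed LBref is wrong, the depth is biased accordingly:
print(one_band_depth(Ls=43.1, Lsw=10.0, LBref=100.0, K=0.1, Lm=12.0))
print(one_band_depth(Ls=43.1, Lsw=10.0, LBref=50.0,  K=0.1, Lm=12.0))
```

The relative depth variations survive the bias, which matches the observation above that the results "exhibit realistic depth variations" yet sit disconnected from the rest of the DTM.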
   
Deglinting
"Also the deglinted image seems to lose more detail than I would expect due to correction of surface waves.  "
 
  • ==> Hardly any surface-wave glint affects those RasHatibah images
  • ==> BUT YES: adjacency effect, variations of atmospheric thickness, and some very small clouds (yes).
  • ==> They all return skydome light to the sensor, and are efficiently removed by the so-called "deglinting"
  • ==> Deglinting transfers noise from the NIR band to the other bands, adding to their own noise. This requires quite some smart smoothing
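4SM's own deglinting is not spelled out on this page, but the widely used NIR-regression approach conveys the idea: regress each visible band against the NIR band over optically deep pixels, then subtract the fitted glint contribution. It also shows why NIR noise is transferred into the visible bands. A synthetic sketch:

```python
import numpy as np

def deglint(band, nir, deep):
    """NIR-regression deglinting sketch: fit band vs NIR over optically
    deep pixels, then subtract the glint term scaled by the NIR excess
    over its deep-water minimum."""
    slope = np.polyfit(nir[deep], band[deep], 1)[0]
    return band - slope * (nir - nir[deep].min())

# Synthetic scene: deep-water radiance 20 plus glint proportional to NIR
nir = np.array([2.0, 4.0, 6.0, 8.0, 3.0, 5.0])
band = 20.0 + 0.8 * nir
deep = np.array([True, True, True, True, False, False])
print(deglint(band, nir, deep))  # flat once the glint is removed
```

Because the NIR band is subtracted pixel by pixel, any noise it carries lands straight in the corrected band, hence the need for smart smoothing afterwards.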
Processed in isolation?
"Is there any possibility of pixels being affected by other local pixels during the deglinting process. 
Or are all the pixels processed in isolation?"
  • The only step where the processing of the current pixel accounts for surrounding pixels is smoothing, although smart-smoothing is designed to reduce the loss of detail and to preserve sharp changes of bottom contrast.
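The key property of that smoothing step, never mixing marine and non-marine pixels, can be sketched as a masked window average. 4SM's actual smart smoothing is more elaborate (it also preserves sharp bottom-contrast changes), so this only illustrates the land/marine separation:

```python
import numpy as np

def masked_smooth(img, marine):
    """3x3 window average that only ever mixes marine pixels together;
    non-marine pixels are left untouched and never leak into averages."""
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            if not marine[i, j]:
                continue  # non-marine pixels are not smoothed
            win = img[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            sel = marine[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            out[i, j] = win[sel].mean()  # marine neighbours only
    return out
```

A marine pixel right at the coastline is thus averaged only with its marine neighbours, so bright land radiances cannot bleed into the shallow-water signal.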
 


 




Combined Depth, 22 April 2010

Since the above answers,
I have worked very hard over the last two weeks
to improve things further
  • 4SM has improved a lot:
    • deglinting process
    • smoothing process
    • combined depth process
    • profiling
  • Land areas and deep areas are cleaner
  • The Averaged Depth result in centimeters should now be more convenient for producing depth contours

Averaged depth, nbav=4

Averaged depth, nbav=3

 

Averaged depth, nbav=2

Averaged depth, nbav=1

 
  • ABOVE: deep features: 
    • the UL result misses quite a few deep features, but has neat deep channels
    • the LR result does not miss any deep feature, but shows shallow channels.  
  • ABOVE: averaged depth: four images are presented
    • UL: where at least 4 depths are averaged
    • UR: where at least 3 depths are averaged
    • LL: where at least 2 depths are averaged
    • LR: where at least 1 depth is averaged 
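The nbav idea above can be sketched as follows: stack the per-image depth estimates, and keep the per-pixel average only where at least nbav images produced a valid estimate. The function name and the NaN convention are mine, not 4SM's:

```python
import numpy as np

def averaged_depth(depths, nbav):
    """depths: (n_images, H, W) stack, NaN where an image gave no
    estimate. Returns the per-pixel mean where at least nbav estimates
    are valid, NaN elsewhere: a larger nbav yields a smoother result
    but misses the deep features seen by only one or two images."""
    valid = np.isfinite(depths)
    count = valid.sum(axis=0)
    total = np.where(valid, depths, 0.0).sum(axis=0)
    out = np.full(depths.shape[1:], np.nan)
    keep = count >= nbav
    out[keep] = total[keep] / count[keep]
    return out
```

This reproduces the UL-to-LR trade-off: nbav=4 rejects the noisy single-image deep pixels, while nbav=1 keeps every deep feature at the cost of noise.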







 
  • LEFT: 
  • the deepest-depth result is now of much less interest, 
    • as the above LR result is much smoother
    • as the above LR result exhibits all deep features
  • apart maybe from the channels
     





Location of the profile









  • Above is a profile through rashatibahCZ07
  • which shows results for all seven selected images
  • and the profile for averaged depth (black)
  • and the profile for deepest depth (white)

I feel this is most rewarding,
although it clearly shows
that the deeper the bottom,
the less reliable the results...
  • This improvement was made possible by interaction with a knowledgeable and demanding partner
  • More improvements are in order.
  • In view of the above profile, I feel that some kind of "smart-smoothing" of the final averaged depth image should be developed so that depth contouring becomes more straightforward: 
    • deeper reaches clearly need that
    • some kind of spline function: do you have that?
    • what do you think?
  • 16-bit data should tell a more comfortable story 
  • Results still need to be multiplied by a final depth-correcting factor to be derived from some seatruth data
  • The above averaged depth images are in the following channels  of  rashatibahCZ07.pix image:
    • channel 16: at least 1 depth is averaged
    • channel 19: at least 2 depths are averaged
    • channel 22: at least 3 depths are averaged
    • channel 25: at least 4 depths are averaged 
As 4SM is now much improved in terms of productivity, you should not hesitate to plan for several images to be processed for each Landsat ETM scene:
 
  • In view of the increasing noise/discrepancies of computed depths as the bottom depth increases,
  • Combined/Averaged Depth is a huge leap forward and should see you through
 
