Considering the dynamic range of HiRISE images:
For a "normal-sized" image covering a single type of terrain, 8 bits (256 shades of gray) are usually enough. But my impression is that for the huge area (gigapixels) covered by each single HiRISE image, which often spans a variety of terrain types typical of the Martian landscape, from very dark (basaltic black sand) to very bright (ice!), the subtle details in the shadows and highlights may easily get lost with a *global* dynamic range of only 8 bits.
From reading the specs at http://hirise.lpl.arizona.edu/ it was my impression that the dynamic range of HiRISE images is compressed
1) already on board (via 14-to-8-bit lookup tables, LUTs), and
2) globally for each image.
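As I picture it, the onboard step amounts to something like the sketch below. (This is only my illustration: the square-root curve is a made-up stand-in; the actual HiRISE LUTs are chosen per image based on lighting, viewing angle, and terrain.)

```python
import numpy as np

# Hypothetical 14-to-8-bit LUT. The square-root stretch is purely
# illustrative; the real per-image LUT shapes are not public here.
dn14 = np.arange(2**14)  # all possible 14-bit DN values
lut = np.round(255 * np.sqrt(dn14 / (2**14 - 1))).astype(np.uint8)

# Applying the LUT on board collapses 16384 input levels to 256,
# which is why it acts as a form of lossy compression:
raw_pixels = np.array([0, 100, 5000, 16383], dtype=np.uint16)
compressed = lut[raw_pixels]  # 8-bit values; fine gradations are merged
```

Whatever the actual curve, the point is the same: once only the 256 output levels are kept, the original 14-bit gradations cannot be told apart again.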
I conclude this from the information I got from the HiBlog:
QUOTE
Hi Bernhard. Look-Up Tables are chosen for each image based on the lighting, viewing angle, and terrain. They map the 14 bpp DN to an 8-bit range, and normally do a *very* good job: the Mars scenes just don’t have a huge dynamic range. That 8 bpp (24 bpp color) is preserved in the RDR and in the IAS viewer.
I should have added, the LUT is done onboard, so it is effectively a form of lossy compression.
On the other hand, there nevertheless seems to be a way for the HiRISE team to increase the *local* dynamic range for sub-images like the following one:
http://hirise.lpl.arizona.edu/PSP_005392_0995
where the sub-image preserves the details in the very bright area of the icy crater wall, whereas in the global image that area is "blown out" to uniformly white pixels.
This is the case even when applying the "local dynamic range stretch" in the IAS Viewer ...
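Here is a minimal sketch (my own guess at the technique, not the HiRISE team's actual processing) of what a local stretch on the released 8-bit data can and cannot do: it redistributes the surviving levels of a crop over the full output range, but pixels that were already saturated to uniform white stay indistinguishable from each other.

```python
import numpy as np

def local_stretch(img, r0, r1, c0, c1):
    """Rescale a crop using its own min/max instead of the global extremes."""
    crop = img[r0:r1, c0:c1].astype(np.float64)
    lo, hi = crop.min(), crop.max()
    if hi == lo:
        return np.zeros(crop.shape, dtype=np.uint8)
    return np.round(255 * (crop - lo) / (hi - lo)).astype(np.uint8)

# A bright region that occupies only DN 200..255 in the 8-bit image:
img = np.full((10, 10), 200, dtype=np.uint8)
img[2:5, 2:5] = 255
stretched = local_stretch(img, 0, 10, 0, 10)  # crop now spans 0..255
```

Since such a stretch cannot recover detail inside an area that is already uniformly 255, I suspect the detailed sub-images must be rendered from the higher-bit data instead.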
So my question would be: if the dynamic range compression is already done on board and globally for each image, then how
was the locally improved dynamic range of the sub-images posted at the HiRISE site obtained?
Could this high-dynamic-range information (i.e. the original 12-14 bits per pixel from the CCDs) somehow be recovered from the raw IMG files (by converting to 16-bit PNG?)?
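If so, I would naively expect the conversion itself to be trivial, something like the sketch below (the parsing of the IMG/PDS label is omitted and the array is a stand-in for the decoded raw DNs; whether those DNs are actually present in the released files is exactly my question):

```python
import numpy as np

# Stand-in for 14-bit DNs that would have to come from the raw IMG file:
dn14 = np.array([0, 1000, 8191, 16383], dtype=np.uint16)

# 14-bit -> 16-bit: shift left by 2 (multiply by 4) to fill the
# full range of a 16-bit grayscale PNG.
dn16 = dn14 << 2

# dn16 could then be written out as a 16-bit grayscale PNG, e.g. with
# Pillow: Image.fromarray(dn16).save("out.png")
```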
Thanks a lot in advance for any answers !
Bernhard