Looking at some of the deep shadows in the recent imagery from Home Plate, it occurred to me that some of the techniques used to extend the dynamic range of photographs are (in principle) applicable to rover photography.
The technique is to take a series of photographs of the same scene, from the same orientation, with steadily increasing exposure times -- typically doubling the exposure time at each step over a run of ten exposures. At one extreme the image is so underexposed that only the brightest specular reflections register; at the other, almost everything is overexposed, but even objects in deep shadow are visible. These images are then post-processed into a single high dynamic range image, encoded in one of the HDR formats now in use, and that image can be examined to show detail in both the brightest and darkest regions. (Obviously, with MER this would have to be done separately for each spectral filter used.)
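To make the merge step concrete, here is a minimal sketch in Python/NumPy of the kind of weighted per-pixel averaging involved. It assumes the bracketed frames are already registered and linearized, and it normalizes to 8-bit values for simplicity (Pancam actually digitizes to 12 bits); the hat-shaped weighting function and the exposure times are my own illustrative choices, not any actual flight algorithm.

    import numpy as np

    def merge_hdr(frames, exposure_times, max_val=255.0):
        """Merge registered, linearized bracketed exposures into one
        floating-point radiance map via a weighted per-pixel average."""
        acc = np.zeros(frames[0].shape, dtype=np.float64)
        wsum = np.zeros_like(acc)
        for img, t in zip(frames, exposure_times):
            x = img.astype(np.float64) / max_val
            # Hat weight: trust mid-range pixels; distrust near-black
            # (noise-dominated) and near-saturated pixels.
            w = 1.0 - np.abs(2.0 * x - 1.0)
            acc += w * x / t   # each frame's estimate of scene radiance
            wsum += w
        return acc / np.maximum(wsum, 1e-9)

    # Ten brackets, doubling the exposure time at each step.
    times = [0.001 * 2 ** i for i in range(10)]

Each pixel in the merged result is an estimate of scene radiance rather than a clipped brightness value, which is what lets both the specular highlights and the deep shadows survive in a single image.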
This kind of thing is not applicable to orbiter missions, where the target and the camera are in constant relative motion, but in principle it could be done for present and planned rover missions (e.g., MER, MSL). The only approach that seems practical with MER would be to change the photo software so that the bracketed images are processed onboard and the encoded high dynamic range image is sent to Earth (a rough sketch of what that encoding might look like follows the questions below). Some questions for the Pancam people:
1) How would the MER cameras react to the required extreme under- and overexposures?
2) I know there have been software rewrites during the mission, but is it possible to change the on-board photo processing software in this way?
3) If it were possible, would the discontinuity in the image series (even if it is an improvement) cause problems for the scientific value of the images?
4) Is there any thought of using high dynamic range images for later surface missions such as MSL?
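As for what "sending the encoded high dynamic range image" might look like, here is a hedged sketch of one simple possibility: packing the floating-point radiance map into 16-bit integers on a log scale, loosely in the spirit of formats like LogLuv. The function names and the choice of 16 bits are illustrative assumptions on my part, not anything from the actual MER downlink chain.

    import numpy as np

    def encode_log16(radiance):
        """Pack a float radiance map into 16-bit integers on a log2 scale,
        so a very large dynamic range survives an integer transfer format."""
        log_r = np.log2(np.maximum(radiance, 1e-12))
        lo, hi = log_r.min(), log_r.max()
        scaled = (log_r - lo) / max(hi - lo, 1e-12)
        return np.round(scaled * 65535).astype(np.uint16), (lo, hi)

    def decode_log16(packed, bounds):
        """Invert encode_log16 on the ground."""
        lo, hi = bounds
        return 2.0 ** (packed.astype(np.float64) / 65535.0 * (hi - lo) + lo)

A log scale spends its quantization steps evenly across stops of brightness, so the shadows are not starved of precision the way they would be in a linear integer format.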
Steve