Full Version: MER Dynamic Range Limitations
Unmanned Spaceflight.com > Mars & Missions > Past and Future > MER > Tech, General and Imagery
SteveM
Looking at some of the deep shadows in the recent imagery from Home Plate, it occurred to me that some of the techniques used to extend the dynamic range of photographs are, in principle, applicable to rover photography.

The technique is to take a series of photographs with the same orientation at steadily increasing exposure times -- generally doubling the exposure from one frame to the next over a range of about ten exposures. At one extreme the image is so underexposed that only the brightest specular reflections are recorded; at the other, almost everything is overexposed, but even objects in deep shadow are visible. These images are then post-processed into a single high dynamic range image, encoded in one of the HDR formats now in use, and that image can be examined to show detail in both the brightest and darkest regions. (Obviously, with MER this would have to be done separately for each spectral filter used.)
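As a rough sketch of the merging step, here is a toy example, assuming a linear sensor response (real merges, such as Debevec-style recovery, first calibrate the camera's response curve). The function name and hat-shaped weighting are illustrative, not any mission pipeline's actual code:

```python
# Toy merge of an exposure bracket into one HDR radiance estimate,
# assuming a linear sensor. Each frame votes for radiance z/t, weighted
# so that clipped (under/overexposed) pixels are ignored.

def merge_bracket(images, exposure_times, max_val=255):
    """images: list of equally sized pixel lists (values 0..max_val);
    exposure_times: one time per frame. Returns relative radiance."""
    n_pix = len(images[0])
    hdr = []
    for i in range(n_pix):
        num = 0.0
        den = 0.0
        for img, t in zip(images, exposure_times):
            z = img[i]
            # Hat weight: trust mid-range pixels, distrust the extremes.
            w = min(z, max_val - z)
            num += w * (z / t)
            den += w
        hdr.append(num / den if den > 0 else 0.0)
    return hdr

# Two frames, the second exposed twice as long: a pixel reading 100 at
# t=1 and 200 at t=2 both imply the same relative radiance, 100.
hdr = merge_bracket([[100, 10], [200, 20]], [1.0, 2.0])
```

The weighting is what lets the dark frames supply the highlights and the bright frames supply the shadows.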

This kind of thing is not applicable to orbiter missions, where both the target and the camera are moving, but in principle it could be done for present and planned rover missions (e.g., MER, MSL). The only way that seems practical to do this with MER would be to change the flight software, process the images onboard, and send the encoded high dynamic range image to Earth. Some questions for the Pancam people:
1) How would the MER cameras react to the required extreme under/over exposure?
2) I know there have been software rewrites during the mission but is it possible to change the on-board photo processing software in this way?
3) If it were, would the discontinuity in the image series (even if it is an improvement) cause problems for the scientific value of the images?
4) Is there any thought of using high dynamic range images for later surface missions such as MSL?

Steve
mcaplinger
QUOTE (Steve @ Feb 19 2006, 04:00 PM) *
1) How would the MER cameras react to the required extreme under/over exposure?
2) I know there have been software rewrites during the mission but is it possible to change the on-board photo processing software in this way?
...
4) Is there any thought of using high dynamic range images for later surface missions such as MSL?


Underexposure isn't much of an issue; at some point in overexposure you start to get "blooming" where excess charge from a photosite starts to leak to adjacent pixels.

The largest problem is that to make this work you'd have to use more bits per pixel, and I suspect that the current number (12 bits for raw images, 8 bits for square-root-encoded) is deeply wired into the software.
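For readers unfamiliar with square-root encoding: one simple form of the companding mentioned above can be sketched as follows (the actual MER lookup table may differ in detail; the constants and function names here are illustrative):

```python
# Sketch of square-root companding: 12-bit raw values are squeezed into
# 8 bits with quantization steps that grow with signal level, roughly
# matching photon shot noise, so little visible information is lost.
import math

MAX12, MAX8 = 4095, 255

def sqrt_encode(raw12):
    """Compress a 12-bit value (0..4095) to 8 bits (0..255)."""
    return round(MAX8 * math.sqrt(raw12 / MAX12))

def sqrt_decode(enc8):
    """Approximately invert the encoding back to 12 bits."""
    return round((enc8 / MAX8) ** 2 * MAX12)

# Dark pixels keep fine steps; bright pixels share coarser ones.
low_step = sqrt_decode(1) - sqrt_decode(0)
high_step = sqrt_decode(255) - sqrt_decode(254)
```

This is why the 8-bit downlinked products lose less shadow detail than a naive truncation to 8 bits would.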

Of course, nothing's stopping them from commanding separate images at different exposures and merging them on the ground. I haven't heard of that being done, but it would be fairly straightforward if one were willing to accept the penalty of downlinking the extra data.

We have no plans to build this into the image acquisition process for MSL -- we've got our hands full there as it is. :)
djellison
Actually - I've found MER imagery to have a much better range than most digital cameras - and whilst I've played with HDR a little in the 3D field, I can't imagine much benefit for MER purposes.

Doug
Pertinax
My understanding of HDR imagery is that it is intended to overcome the limitations of standard 8-bit imagery (well, 24-bit: 8R, 8G, 8B), which has a rather limited dynamic range.

The rover imagery (the *.img files) is 12 bits per pixel, providing far more dynamic range information than 'normal' 8-bit imagery would. This is not to say that you could not use the technique to produce an image of even greater dynamic range (one absolutely could), but as Doug noted, there is little need/benefit to do so -- particularly as we already have 12-bit-per-filter imagery.


-- Pertinax
SteveM
QUOTE (Pertinax @ Feb 21 2006, 09:54 AM) *
The rover imagery (the *.img files) is 12 bits per pixel, providing far more dynamic range information than 'normal' 8-bit imagery would. This is not to say that you could not use the technique to produce an image of even greater dynamic range (one absolutely could), but as Doug noted, there is little need/benefit to do so -- particularly as we already have 12-bit-per-filter imagery.
-- Pertinax


Thanks for all the helpful comments. 12 bits is much better than 8, but still not up to HDR's ability to handle deep shadows and glaring reflections in the same image. I guess my perception of the dynamic range limits came from using only the JPGs, with their 8-bit range.

Using my newbie privilege, can anyone point me to sources for the .img files and to software for reading them?

Steve
slinted
The .img files can be found in the MER Analyst's Notebook; under search, look for RAD files. If you're using Windows, Bjorn's img2png program will make displayable images from the .img files (use the -r flag to apply the scaling factors).

As has already been mentioned, the 12-bit range allows almost every detail on the surface to be captured in a single exposure. On Mars, I think only direct solar or bright-sky imaging would create a greater than 1000:1 range. The occasional specular reflection blooms out the pixels, but that has more to do with how the autoexposure works than with what the CCD could actually handle. Areas in shadow have good visible detail, but it's hard to tell that from looking at the raw JPGs.

The problem of seeing detail in shadow remains even when there is usable data in those areas. The other half of HDR research might end up being much more important for Mars imaging than multi-exposure photography. Since computer displays and TVs have a dynamic range of only about 100:1, how best to compress the photographic range into the display range is a significant challenge. Because HDR images often have a vastly wider range than the display device, research into intelligent tone mapping has gone hand in hand with the methods for capturing HDR images. Retinex, iCAM and several other techniques attempt to preserve visible detail while compressing the range for display or print. Even more modest-range imagery, like Pancam's, could benefit from some smart tone mapping.
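To illustrate what a global tone-mapping operator does, here is a minimal sketch using Reinhard's simple global operator L/(1+L) -- far cruder than Retinex or iCAM, but it shows the basic idea of squeezing an unbounded luminance range into display range:

```python
# Minimal global tone mapping: Reinhard's simple operator maps any
# relative luminance in [0, inf) into the display range [0, 1),
# compressing highlights heavily while leaving shadows nearly linear.

def reinhard(luminances):
    """Map relative luminances to displayable values in [0, 1)."""
    return [L / (1.0 + L) for L in luminances]

# A 1000:1 scene range (0.01 .. 10.0) compresses into display range:
display = reinhard([0.01, 1.0, 10.0])
```

The more sophisticated operators named above do this locally rather than globally, which is what preserves detail inside shadows while still holding highlights.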