I've been looking into this as well, mostly since the beginning of the mission. Here are a couple of references I picked up that mostly outline the same points along the processing chain as the webpage posted regarding Pathfinder:
This one shows some of the processing steps involved in the stereo pair vision used onboard for navigation and hazard avoidance:
http://robotics.jpl.nasa.gov/people/mwm/visnavsw/aero.pdf
A bit about the image correlation:
http://dynamo.ecn.purdue.edu/~gekco/mars/correlator_app.html
The CAHVOR camera model, used to linearize the images (i.e., remove lens distortion and line up the stereo pairs so that you only have to correlate features horizontally):
http://robotics.jpl.nasa.gov/people/mwm/cahvor.html
The software used by MIPL to generate these steps is all VICAR programs ( http://www-mipl.jpl.nasa.gov/external/vicar.html ), whose help files are available online at
http://www-mipl.jpl.nasa.gov/vicar/vicar300/html
As far as I can tell, here are the steps involved:
Raw image ->
MARSCAHV (linearizes images) ->
MARSJPLSTEREO (computes disparity map) ->
MARSXYZ (computes xyz values for each pixel)
Then, using the xyz files, they can derive roughness maps, slope maps, reachability maps, etc...
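The VICAR programs themselves aren't something you can just download and run, but the geometry of the last step is standard stereo triangulation. Here's a minimal sketch (my own code, not MARSXYZ; it assumes an already-linearized pair so disparity is purely horizontal, and uses made-up values for focal length and baseline) of how a disparity map becomes an XYZ image:

```python
import numpy as np

def disparity_to_xyz(disp, f, B, cx, cy):
    """Triangulate a rectified-stereo disparity map into camera-frame
    XYZ, one 3-vector per reference-image pixel.
    f: focal length in pixels, B: stereo baseline in meters,
    (cx, cy): principal point.  Zero disparity marks a hole."""
    rows, cols = disp.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disp > 0, f * B / disp, 0.0)   # depth; 0.0 = hole
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.dstack([x, y, z]).astype(np.float32)  # 3 bands, like an XYZ RDR

# toy example: constant disparity of 4 px, f = 100 px, B = 0.2 m
disp = np.full((5, 5), 4.0)
xyz = disparity_to_xyz(disp, f=100.0, B=0.2, cx=2.0, cy=2.0)
```

With those numbers every valid pixel lands at depth f*B/d = 100*0.2/4 = 5 m, which is an easy sanity check before pointing it at real data.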
As to the data that is already available: there were supposed to be mesh products released with the PDS (some of the PDS directories even list them), but I can't find a search or directory that actually contains them.
What was released, however, might suit your purposes. The three-letter codes for finding them are in parentheses, first for the non-linearized and then for the linearized versions:
5.2.4 XYZ RDR (XYZ, XYL)
An XYZ file contains 3 bands of 32-bit floating point numbers in the Band Sequential order.
Alternatively, X, Y and Z may be stored in separate single-band files as a X Component RDR, Y
Component RDR and Z Component RDR, respectively. The single component RDRs are implicitly the
same as the XYZ file, which is described below. XYZ locations in all coordinate frames for MER are
expressed in meters unless otherwise noted.
The pixels in an XYZ image are coordinates in 3-D space of the corresponding pixel in the reference
image. This reference image is traditionally the left image of a stereo pair, but could be the right image
for special products. The geometry of the XYZ image is the same as the geometry of the reference
image. This means that for any pixel in the reference image the 3-D position of the viewed point can be
obtained from the same pixel location in the XYZ image. The 3-D points can be referenced to any of
the MER coordinate systems (specified by DERIVED_IMAGE_PARAMS Group in the PDS label).
Most XYZ images will contain "holes", or pixels for which no XYZ value exists. These are caused by
many factors such as differences in overlap and correlation failures. Holes are indicated by X, Y, and Z
all having the same specific value. This value is defined by the MISSING_CONSTANT keyword in the
IMAGE object. For the XYZ RDR, this value is (0.0,0.0,0.0), meaning that all three bands must be zero
(if only one or two bands are zero, that does not indicate missing data).
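If you do get hold of the XYZ RDRs, reading them is straightforward once you've parsed the label. A sketch in Python/numpy, assuming the label says 3-band band-sequential big-endian float32 (the actual SAMPLE_TYPE and dimensions come from the PDS label, so check those first):

```python
import numpy as np

def read_xyz_bsq(raw, rows, cols, missing=(0.0, 0.0, 0.0)):
    """Interpret raw bytes as a 3-band band-sequential float32 XYZ image
    and build a hole mask per the MISSING_CONSTANT rule: a pixel is a
    hole only when X, Y and Z all equal the missing value."""
    a = np.frombuffer(raw, dtype=">f4", count=3 * rows * cols)
    xyz = a.reshape(3, rows, cols)          # band-sequential: X, Y, Z planes
    holes = np.all(xyz == np.array(missing).reshape(3, 1, 1), axis=0)
    return xyz, holes

# toy 2x2 image: one valid pixel at (0,0), the rest are (0,0,0) holes
demo = np.zeros((3, 2, 2), dtype=">f4")
demo[:, 0, 0] = [1.0, 2.0, 3.0]
xyz, holes = read_xyz_bsq(demo.tobytes(), 2, 2)
```

Note the mask tests all three bands at once, exactly because a single zero band is still valid data.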
5.2.5 Range RDR (RNG, RNL)
A Range (distance) file contains 1 band of 32-bit floating point numbers.
The pixels in a Range image represent Cartesian distances from a reference point (defined by the
RANGE_ORIGIN_VECTOR keyword in the PDS label) to the XYZ position of each pixel (see XYZ
RDR). This reference point is normally the camera position as defined by the C point of the camera
model. A Range image is derived from an XYZ image and shares the same pixel geometry and XYZ
coordinate system. As with XYZ images, range images can contain holes, defined by
MISSING_CONSTANT. For MER, this value is 0.0.
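Since a Range image is derived directly from XYZ, you can reconstruct one yourself if only the XYZ product is available. A sketch (my own code; the origin would come from RANGE_ORIGIN_VECTOR in a real label, and holes carry through as 0.0):

```python
import numpy as np

def xyz_to_range(xyz, origin, missing=0.0):
    """Cartesian distance from `origin` to each pixel's XYZ point,
    mirroring the Range RDR definition.  xyz: (3, rows, cols) array;
    pixels whose XYZ is all zero are holes and stay `missing`."""
    holes = np.all(xyz == 0.0, axis=0)
    d = np.linalg.norm(xyz - np.asarray(origin).reshape(3, 1, 1), axis=0)
    return np.where(holes, missing, d).astype(np.float32)

# toy image: one valid point at (3,4,0), which is 5 m from the origin
xyz = np.zeros((3, 2, 2), dtype=np.float32)
xyz[:, 0, 0] = [3.0, 4.0, 0.0]
rng = xyz_to_range(xyz, origin=(0.0, 0.0, 0.0))
```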
5.2.7 Surface Normal RDR (UVW, UVL)
A Surface Normal (UVW) file contains 3 bands of 32-bit floating point numbers in the Band Sequential
order. Alternatively, U, V and W may be stored in separate single-band files as a U Component RDR,
V Component RDR and W Component RDR, respectively. The single component RDRs are implicitly
the same as the UVW file, which is described below.
The pixels in a UVW image correspond to the pixels in an XYZ file, with the same image geometry.
However, the pixels are interpreted as a unit vector representing the normal to the surface at the point
represented by the pixel. U contains the X component of the vector, V the Y component,
and W the Z component. The vector is defined to point out of the surface (e.g. upwards for a flat
ground). The unit vector can be referenced to any of the MER coordinate systems (specified by the
DERIVED_IMAGE_PARAMS Group in the PDS label).
Most UVW images will contain "holes", or pixels for which no UVW value exists. These are caused by
many factors such as differences in overlap, correlation failures, and insufficient neighbors to compute
a surface normal. Holes are indicated by U, V, and W all having the same specific value. Unlike XYZ,
(0,0,0) is an invalid value for a UVW file, since they're defined to be unit vectors. Thus there's no issue
with the MISSING_CONSTANT as there is with XYZ, where (0.0,0.0,0.0) is valid.
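The spec doesn't say how MIPL actually computes the normals, but a simple way to get a UVW-style product from an XYZ image is to cross the tangent vectors between neighboring pixels. A sketch (my own scheme, not the MIPL algorithm):

```python
import numpy as np

def xyz_to_uvw(xyz):
    """Estimate per-pixel surface normals from an XYZ image by crossing
    the central-difference tangent vectors along rows and columns.
    Border pixels and degenerate normals are left as (0,0,0) holes,
    which per the spec can never be a valid unit vector."""
    _, rows, cols = xyz.shape
    uvw = np.zeros_like(xyz)
    du = xyz[:, 1:-1, 2:] - xyz[:, 1:-1, :-2]      # tangent along image rows
    dv = xyz[:, 2:, 1:-1] - xyz[:, :-2, 1:-1]      # tangent along image columns
    n = np.cross(du, dv, axis=0)
    length = np.linalg.norm(n, axis=0)
    valid = length > 1e-12
    n = np.where(valid, n / np.where(valid, length, 1.0), 0.0)
    uvw[:, 1:-1, 1:-1] = n
    return uvw

# toy example: a flat z = 0 plane should give normals of (0, 0, 1)
rows, cols = 4, 4
u, v = np.meshgrid(np.arange(cols, dtype=np.float32),
                   np.arange(rows, dtype=np.float32))
flat = np.stack([u, v, np.zeros_like(u)])
uvw = xyz_to_uvw(flat)
```

In a real implementation you'd also want to skip neighbors that are XYZ holes, which is presumably the "insufficient neighbors" failure mode the spec mentions.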
5.2.11 Terrain Map RDR
Terrain models are a high-level product derived from the XYZ files and the corresponding
image files. The terrain models are generated by meshing or triangulating the XYZ data based on the
connectivity implied by the pixel ordering or by a volume based surface extraction. The XYZ files can
be viewed as a collection of point data while the terrain models take this point data and connect it into a
polygonal surface representation. The original image is referenced by the terrain models as a texture
map which is used to modulate the surface color of the mesh. In this way the terrain models can be
viewed as a surface reconstruction of the ground near the instrument with the mesh data capturing the
shape of the surface and the original image, applied as a texture map, capturing the brightness
variations of the surface. Specific terrain model formats such as VST, PFB, DEM and others can be
viewed as analogous to GIF, TIFF or VICAR in image space in that each represents the data somewhat
differently for slightly different purposes.
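The "connectivity implied by the pixel ordering" part is easy to reproduce yourself: every 2x2 block of valid XYZ pixels yields two triangles. A sketch of that triangulation (my own code, emitting plain vertex/index arrays rather than VST or PFB):

```python
import numpy as np

def mesh_from_xyz(xyz):
    """Triangulate an XYZ image using the connectivity implied by the
    pixel grid: each 2x2 block of valid pixels yields two triangles.
    Returns (vertices, triangles); triangle entries index into the
    flattened image, so the reference image maps on directly as a
    texture via pixel coordinates."""
    _, rows, cols = xyz.shape
    valid = ~np.all(xyz == 0.0, axis=0)            # holes carry no geometry
    idx = lambda r, c: r * cols + c                # flat vertex index
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            if valid[r, c] and valid[r, c+1] and valid[r+1, c] and valid[r+1, c+1]:
                tris.append((idx(r, c), idx(r+1, c), idx(r, c+1)))
                tris.append((idx(r, c+1), idx(r+1, c), idx(r+1, c+1)))
    verts = xyz.reshape(3, -1).T                   # one vertex per pixel
    return verts, tris

# toy 2x2 image of valid pixels -> one quad -> two triangles
xyz = np.ones((3, 2, 2), dtype=np.float32)
verts, tris = mesh_from_xyz(xyz)
```

The multiple levels of detail in the VST files would come from decimating this mesh, which is where the real formats get more involved.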
5.2.11.1 VST Terrain Wedge (VIS, VIL)
The ViSTa (VST) format consists of one terrain model for each wedge (stereo image pair), in a JPL-defined binary format suitable for display by SAP. Each file contains meshes at multiple levels of detail.
5.2.11.2 PFB Terrain Mesh (ASD?, ASL?)
The Performer Binary (PFB) format facilitates the representation of a terrain surface as polygons,
optimized for use by the RSVP tool. The number of polygons at any one time may vary according to
site specific features, such as small rocks versus large boulders.
I know you said you weren't looking to do any programming, but that may be the only way to get a solution that doesn't involve waiting for the PDS release. There is a lot of code available implementing stereo vision algorithms; one in particular that might be helpful is the Open Computer Vision Library, available at:
http://sourceforge.net/projects/opencvlibrary/