Range Finding - Parallax Calculations
helvick
Just setting this up as a topic.

Rodolfo (RNeuhaus) has pointed out that he gets hard-to-interpret results when using Joe Knapp's online Parallax Calculator.

I don't know the formulas behind this calculator but have been trying to work out my own.

Taking (for now) that the geometries are as follows:
Pancam, 16deg FOV, 300mm separation between L and R and a 1deg toe in.
Navcam, 45deg FOV 200mm separation between L and R and no toe in.

Parallax formula - Distance to object=half the Camera Separation/Tan angle between the point in the L+R images. (I don't think this is 100% correct for parallel cameras but it should be good to a close first approximation provided things aren't too close).

Sticking with the Navcam, which doesn't have any toe-in, my gut feeling is that distant objects should be affected less by parallax. So take Joe Knapp's sample calculation for the Navcam: left camera 512 pixels, right camera 500 pixels.
Separation is 12 pixels so the parallax angle is 12 * (45deg/1024pixels) = 0.527344 degrees.
The distance should then be 100mm/tan(0.527344 deg).
That gives me 108m.
Joe Knapp's calculator gives me 20.3m.

The Pancam is a bit trickier because of the toe in angle, I need to think a bit more about whether my default assumption that it can be compensated for by simply subtracting 64 pixels from the right image position is valid or not.

Anyway - something is amiss in either my numbers or Joe Knapp's calculator. Anyone out there able to throw some light on this?
RNeuhaus
Good to hear from you. I already wrote a note to the PANCAM Parallax calculator's author last Friday. Up to now, there is no news from him.

About manual measurement, the document (Bell's) says that the PANCAM resolution is designed specifically for up to 100 meters, which is the designated roving range per day. Its resolution is about 2.8 cm per pixel at a range of 100 meters.

This leads me to think that the distance covered by each pixel is not linear: at closer range it would be smaller than 2.8 cm per pixel, the average of 2.8 cm per pixel would be at 50 meters, and at 100 meters it would be smaller than 2.8 cm per pixel. Isn't that right?

On the other hand, I remember that someone was able to calculate the distance from Spirit to the slope obstacles of Haskin Ridge with some kind of Pancam parallax software. Which one is that?

Rodolfo
helvick
Rodolfo,

The author (Joe Knapp) posts here as jmknapp I think. It might be worth sending him a PM.

I'm still getting the impression that you don't understand the fundamental geometry of the setup. If I'm mistaken, apologies, but I think I need to step through this carefully to be certain you understand the basic principles.

Each camera covers a fixed field of view. That means that if you were to draw a triangle looking down on the scene then the angle between the left hand side of the image, the camera and the right hand side of the image is 16deg for the Pancams and 45deg for the Navcam.

So the field of view (from extreme left to extreme right) of any of these images always spans the same angle. This means that as you move further away from the camera the physical dimension covered by an individual pixel gets bigger. The exact physical dimension (D) of a Pancam pixel at a given distance (X) is given by the following:
D = X * tan(8 deg) / 512
See this web page for a useful diagram and an explanation of this formula. We use half the FOV and half the number of pixels of a full image so we can work with a simple right-angled triangle.
So the pixels in a Pancam image, like those of any other camera with a 16deg FOV taking a 1024 pixel image, have the following dimensions (in cm):
Range (m) --- Pixel size (cm)
1 -----------0.027449382
5 -----------0.137246909
10 ----------0.274493818
20 ----------0.548987636
50 ----------1.372469089
70 ----------1.921456724
100 ---------2.744938178
I have to reiterate that these numbers are fundamentally related to the FOV of the camera and they apply equally to any camera anywhere with a 16deg FOV. The numbers for the Navcam are different since it has a wider FOV.
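
If you want to reproduce that table yourself, here's a minimal sketch of the relation (nothing beyond the nominal 16 deg / 1024 pixel geometry above; this is just my own check, not any calculator's code):

CODE
// Sketch of the pixel-size relation D = X * tan(half FOV) / (half image width).
// Assumes an ideal pinhole camera: 16 deg FOV, 1024-pixel image (Pancam-like).
#include <cstdio>
#include <cmath>

int main() {
    const double PI = 3.14159265358979;
    const double halfFovDeg = 8.0;        // half of the 16 deg Pancam FOV
    const double halfWidthPix = 512.0;    // half of the 1024-pixel image
    const double ranges[] = {1, 5, 10, 20, 50, 70, 100};
    for (double x : ranges) {
        double pixelSizeM = x * tan(halfFovDeg * PI / 180.0) / halfWidthPix;
        printf("%6.0f m : %.4f cm per pixel\n", x, pixelSizeM * 100.0);
    }
    return 0;                             // e.g. 100 m -> about 2.74 cm per pixel
}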

The calculation for finding the specific range of an object uses the same basic idea but in reverse (more or less). From a pair of left and right images you can read off the parallax angle of a feature by counting how many pixels it appears to move between the two images, since each pixel always corresponds to a fixed angle (16/1024 == 0.015625deg for Pancam). Since you also know the distance between the two cameras, you are once again dealing with a triangle where you know all the angles and have the length of one side. From that you can work out the missing bits, in particular the range to the object. My back-of-the-envelope drawings of the parallel situation (for the Navcam) lead me to believe that the range should be closely approximated by:
Range=0.1m/Tan(No of Pixels * 0.043945deg) (Navcam)
Range=0.15m/Tan(No of Pixels * 0.015625deg) (Pancam)
I'd welcome some comments on this because I suspect that this is not completely accurate but I'm pretty sure that even if that is the case the error is going to be small provided the range is above a couple of metres.

The Pancam case is complicated by the 1 deg toe-in, but I think that simply adding 64 pixels (1024/16 == no. of pixels in 1 deg for a Pancam image) to the measured value for the right hand image should be sufficient.
mars_armer
QUOTE (helvick @ Dec 3 2005, 06:46 AM)
Taking (for now) that the geometries are as follows:
Pancam, 16deg FOV, 300mm separation between L and R and a 1deg toe in.
Navcam, 45deg FOV 200mm separation between L and R and no toe in.

Parallax formula - Distance to object=half the Camera Separation/Tan angle between the point in the L+R images. (I don't think this is 100% correct for parallel cameras but it should be good to a close first approximation provided things aren't too close).

Sticking with the Navcam which doesn't have any toe in my gut feel for things is that distant objects should be affected less by parallax. So taking Joe Knapps sample calculation for Navcam. Left Camera 512 pixels, right camera 500 pixels.
Separation is 12 pixels so the parallax angle is 12 * (45deg/1024pixels) = 0.527344 degrees.
The distance should then be (100mm/tan(0.527344 deg).
That gives me 108m.
Joe Knapps calculator gives me 20.3m.
*

I think you're off by a factor of two in your formula and a factor of 10 in your calculator. I get

200mm/tan(0.527344 deg) = 21.7 m.
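
In code form that comes out as follows (just a sketch of the small-angle relation, using the full 200mm Navcam baseline and the nominal 45 deg / 1024 pixel figures quoted above):

CODE
// Range from pixel disparity, Navcam: sketch of the approximation discussed above.
// Assumes nominal values: 45 deg FOV over 1024 pixels, 200 mm stereo baseline.
#include <cstdio>
#include <cmath>

int main() {
    const double PI = 3.14159265358979;
    const double baselineM = 0.2;                           // Navcam L-R separation
    const double pixelAngleRad = (45.0 / 1024.0) * PI / 180.0;
    double disparityPix = 12.0;                             // 512 - 500 from the example
    double rangeM = baselineM / tan(disparityPix * pixelAngleRad);
    printf("range = %.1f m\n", rangeM);                     // about 21.7 m
    return 0;
}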
RNeuhaus
Helvick, thanks for the response. However, I still don't fully grasp how the MER Stereo Parallax Calculator works, and below I have some questions to clarify.

About the formula 100mm/tan(0.527344 deg): what is the 100 mm (millimeters)? What distance is it referring to?

Tan(@) = opposite/adjacent. --> adjacent (distance) = tan (@) / opposite

For the NAVCAM, according to the material you showed me about the formula:

The view has an angle: @
Half of the view angle is: @/2
The distance covered by the angle is: l
Half of that distance is: l/2
The adjacent distance is: d (the vertical height in the diagram)

Then the formula: tan(@/2) = (l/2)/d

to know the distance from the viewer: d = (tan(@/2))/(l/2)

Let's suppose a point of interest in the picture, a stone: left Navcam: 580 pixels (GIMP tool measurement) and right Navcam: 590 pixels.

According to the Parallax calculator: object distance: 8.92 m, one-pixel error: 0.037 m
object dimension: 0.2 cm.

According to the formula:
The separation distance is: 590-580= 10 pixels.

@ angle is: (10 pixels*45 degree)/1024 pixels
@ = 0.439453

distance = "opposite side, not known, how to obtain it?" x tan(0.439453). I guess that the opposite side must be always of 100 meters?

Rodolfo

P.S. jamescanvin has talked about measuring distances with the parallax calculator in the Haskin Ridge topic; how did he measure it? See that post.
jamescanvin
Just jumping in as I'm mentioned:

QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM)
About the formula: 100mm/tan(0.527344 deg). What is 100 mm (milimiter)? Is the distance of what are you saying.


As mars_armer pointed out, that should be 200mm, which is the distance between the Navcams. I guess helvick knew that and was using half the separation distance to make a true right-angled triangle (his mistake was then not to divide the angle by two!)

QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM)
Let suppose that a point of interest in the picture: a stone:   Left navcam: 580 pixels (GIMP tool measurement) and the Right navcam: 590 pixels:

According to the Paralalx calculator: object distance: 8.92 m, one-pixel error: 0.037 m
object dimension: 0.2 cm.

According to the formule:
The separation distance is: 590-580= 10 pixels.

@ angle is:  (10 pixels*45 degree)/1024 pixels
@ = 0.439453

distance = "opposite side, not known, how to obtain it?" x tan(0.439453). I guess that the opposite side must be always of 100 meters?


As above: opposite side = distance between cameras = 200mm.
Which gives about 26.0m using 0.2m/tan(0.439453 deg).

Why the discrepancy with your value of 8.92m from the calculator? Well, it looks like you were using the Pancam setting! When I put in those values I get:

object distance: 24.4 m, one-pixel error: 1.234 m

All seems consistent to me. smile.gif

QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM)
P.D.jamescanvin has talked about the measurement of distances  with parallax calculator in the topic: Haskin Ridge, how did he measured it? See at the post
*


Just using the Parallax Calculator like above, nothing more. Measure the pixel positions of the same object in the L & R frames and feed the numbers to the program. smile.gif


James.
Tesheiner
Rodolfo, just an additional reminder:

You will see that the farther the distance to a feature, the bigger the "one-pixel" error. When measuring a range via parallax you must be as *exact* as possible with the pixel positions for a feature in the L and R images.
helvick
QUOTE (jamescanvin @ Dec 6 2005, 05:24 AM)
Just jumping in as I'm mentioned:
As mars_armer pointed out that should be 200mm which is the distance between the navcams. I guess helvick new that and was using half the seperation distance to make a true right angled triangle (his mistake was then not to then divide the angle by two!)
*

Err, yes. Wasn't paying attention to myself there. Apologies for my confusion, I appear to have had enough coffee by the time I made my later post and didn't repeat the error.

I still seem to have some problems with the parallax calculator site on my main machine - it appears to cache results strangely, which is another reason why I was getting confused. Apart from forgetting trig 101, that is. Oh well.

I'm still a bit confused though because the Parallax calculator page doesn't generally agree with my numbers.

I found Joe Knapp's original post regarding the Pancams - it seems that I misunderstood the meaning of the "~1 deg" toe-in. Again not thinking clearly: both cameras have the toe-in angle, so the observed parallax must be adjusted by twice the amount I referred to (~64 pixels based on a 1deg toe-in).
There's more to it than that though:
QUOTE
Based on the left and right images, there's a parallax of 92 pixels at the rock. The distance equation for Opportunity's pancam is:

D = 1071/(130 - N)

D = 1071/(130 - 92) = 28.2 meters

The width of the rock is 54 pixels. At 0.28 mrad/pixel (pancam spec) that's 15 mrad. Width then would be 28.2*0.015 = 0.42 m.

The width of the bounce mark is 223 pixels or 1.8 m.


Interesting.
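
Joe's numbers can be re-run directly; the sketch below just repeats the arithmetic from the quote (the 1071 and 130 constants are taken as given, though 1071 looks like the 0.3 m baseline divided by the 0.28 mrad/pixel IFOV, and 130 pixels presumably accounts for the toe-in - that reading is my guess, not his):

CODE
// Re-running the quoted Opportunity Pancam example, nothing more.
#include <cstdio>

int main() {
    double parallaxPix = 92.0;
    double distanceM = 1071.0 / (130.0 - parallaxPix);         // 28.2 m
    double rockWidthM = distanceM * 54.0 * 0.28e-3;             // 54 px at 0.28 mrad/px
    double bounceWidthM = distanceM * 223.0 * 0.28e-3;          // 223 px
    printf("distance %.1f m, rock %.2f m, bounce mark %.1f m\n",
           distanceM, rockWidthM, bounceWidthM);
    return 0;
}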
jmknapp
My parallax calculator could sure use some improvements. It's based on a simplified geometric model and neglects complications like actual (vs. designed) pointing and toe-in of the cameras, non-linear optical effects, etc. There may be some operational issues too like the "strange caching of results" mentioned above.

More recently I found the NAIF file specifying the camera geometry, which makes some parameters clearer, but some less so, as I don't have any experience with the CAHVOR model the experts use to define the camera geometry.

The latest "frames definition kernel" for MER2 is at:

ftp://naif.jpl.nasa.gov/pub/naif/MER/kernels/fk/mer2_v09.tf

This is a text file that can be opened in an editor. It has both text descriptions of the geometry and also parameter/value pairs. Here is one snippet:

QUOTE
Nominal PMA [primary mast assembly] camera orientations are such that NAVCAM left and right
   boresights are parallel to each other and the PMA head +X axis,
   while the PANCAM left and right boresights "toe"ed in by 1 degree
   toward the PMA head +X axis. In order align PMA head frame with the
   camera frames in this nominal orientation, it has to be rotated by
   +90 degrees about Y and then about X by non-zero "toe"-ins for the
   PANCAM (-1 degree for the left camera and +1 degree for the right
   camera) and by zero "toe"-ins for NAVCAM, and finally by -90 degrees
   about Z (to line up Y axis with the vertical direction.) The
   following sets of keywords should be included into the frame
   definitions to provide this nominal orientation (provided for
   reference only):
  
      TKFRAME_-254121_AXES             = (   2,       1,       3     )
      TKFRAME_-254121_ANGLES           = ( -90.000,   1.000,  90.000 )

      ...etc.


So that makes it fairly clear that both boresights are toed-in, for a total 2 degrees (1 degree left and right) by design. The PANCAM X-axis is the axis looking out from the pancams (e.g., towards the horizon). The Y-axis is the left/right direction and the Z-axis is the vertical. The SPICE kernel developers have taken ASCII-art to a wonderful level, and here is their diagram of the PANCAM frame:

QUOTE
 
                                 UHF    /\
                         HGA            \/ PMA
                          .--.    #     ||
                         /    \   #     ||
                        |      |  #     ||
                         \    /=. #     ||
                          `--' || #     ||                 Rover
                       =======================           (deployed)
                             |    =o=.    |
                             |  .' Yr `.__|o====o
                           .===o=== o------> Xr \\    
                          .-.      .|.    `.-.  ##o###
                         | o |    | | |   | o |
                          `-'      `|'     `-'     IDD
                                    V Zr


Clear now? biggrin.gif

The 1-degree toe-in per camera is by design, but evidently, if I read the frames kernel right, reality is quite a bit different. For the left PANCAM we have:

QUOTE
The actual MER-2_PANCAM_LEFT_F1 frame orientation provided in the frame
   definition below was computed using the CAHVOR(E) camera model file,
   'MER_CAL_176_SN_104_F_1.cahvor'. According to this model the reference frame,
   MER-2_PMA_HEAD, can be transformed into the camera frame,
   MER-2_PANCAM_LEFT_F1, by the following sequence of rotations: first
   by 90.05088205 degrees about Y, then by -0.65914547 degrees about X, and
   finally by -90.31522256 degrees about Z.

   The frame definition below contains the opposite of this
   transformation because Euler angles specified in it define
   rotations from the "destination" frame to the "reference" frame.

   \begindata

      FRAME_MER-2_PANCAM_LEFT_F1       = -254121
      FRAME_-254121_NAME               = 'MER-2_PANCAM_LEFT_F1'
      FRAME_-254121_CLASS              = 4
      FRAME_-254121_CLASS_ID           = -254121
      FRAME_-254121_CENTER             = -254
      TKFRAME_-254121_RELATIVE         = 'MER-2_PMA_HEAD'
      TKFRAME_-254121_SPEC             = 'ANGLES'
      TKFRAME_-254121_UNITS            = 'DEGREES'
      TKFRAME_-254121_AXES             = (    2,        1,        3     )
      TKFRAME_-254121_ANGLES           = (  -90.051,    0.659,   90.315 )

   \begintext


So the toe-in rotation about the X-axis is only 0.58 degree rather than 1.0 degree; moreover there are fractional-degree deviations of the Y and Z axes.

Then there are the CAHVOR complications. The CAHVOR model is specified in the instrument kernels

left PANCAM instrument kernel
right PANCAM instrument kernel

At this point I throw up my hands!
algorimancer
QUOTE (jmknapp @ Dec 6 2005, 06:57 AM)
...
The 1-degree toe-in per camera is by design, but evidently, if I read the frames kernel right, reality is quite a bit different. For the left PANCAM we have:
So the toe-in rotation about the X-axis is only 0.58 degree rather than 1.0 degree, moreover the are fractional-degree deviations of the Y and Z axes.

Then there are the CAHVOR complications. The CAHVOR model is specified in the instrument kernels
...
At this point I throw up my hands!
*


Perhaps I can shed a little light on this. I pulled-up the ftp://naif.jpl.nasa.gov/pub/naif/MER/kernels/fk/mer2_v09.tf file and looked at the pancam reference frame orientations:

PancamLeft

TKFRAME_-254128_AXES = ( 2, 1, 3 )
TKFRAME_-254128_ANGLES = ( -90.051, 0.659, 90.315 )

PancamRight
TKFRAME_-254131_AXES = ( 2, 1, 3 )
TKFRAME_-254131_ANGLES = ( -89.946, -1.376, 90.400 )

I worked-out the combined rotations, rotating (for PancamL) as specified first -90.051 degrees about the Y axis, then 0.659 degrees about the X axis, and 90.315 degrees about the Z axis (in that order), then did the same for PancamR, and worked-out the relative orientation between them (I did all this using quaternions, as they're easier and more accurate than 3X3 matrices). The net result is that the relative orientation between the left and right pancams is

2.039451 degrees about the unit axis vector (-0.045257,-0.998119,0.041349). Not exactly the expected 2 degrees, but darn close.
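
For anyone who wants to repeat the exercise without quaternions, the same relative angle falls out of plain rotation matrices. The sketch below composes the frame-kernel angles in the stated Y, X, Z order and extracts the relative angle; the axis conventions are my assumptions, so treat it as a sanity check rather than a reference implementation - the answer should land near 2.04 degrees:

CODE
// Sanity check: compose the TKFRAME Euler angles for the two Pancams and
// compute the relative rotation angle between the camera frames.
#include <cstdio>
#include <cmath>

struct Mat3 { double m[3][3]; };

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 transpose(const Mat3& a) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];
    return r;
}

// Rotation about axis 0=X, 1=Y, 2=Z by deg degrees (right-handed).
Mat3 rot(int axis, double deg) {
    const double PI = 3.14159265358979;
    double c = cos(deg * PI / 180.0), s = sin(deg * PI / 180.0);
    Mat3 r = {{{1,0,0},{0,1,0},{0,0,1}}};
    int i = (axis + 1) % 3, j = (axis + 2) % 3;
    r.m[i][i] = c; r.m[i][j] = -s; r.m[j][i] = s; r.m[j][j] = c;
    return r;
}

int main() {
    const double PI = 3.14159265358979;
    // Apply Y, then X, then Z rotations (fixed axes), as read from the frames kernel.
    Mat3 left  = mul(rot(2, 90.315), mul(rot(0,  0.659), rot(1, -90.051)));
    Mat3 right = mul(rot(2, 90.400), mul(rot(0, -1.376), rot(1, -89.946)));
    Mat3 rel = mul(right, transpose(left));       // relative rotation between the frames
    double trace = rel.m[0][0] + rel.m[1][1] + rel.m[2][2];
    double angleDeg = acos((trace - 1.0) / 2.0) * 180.0 / PI;
    printf("relative rotation: %.4f deg\n", angleDeg);   // close to 2.04 deg
    return 0;
}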

Incidentally, in the CAHVOR data from the instrument kernels there is a quantity referred to as the CAHVOR_QUAT. From this quaternion I extract a rotation of
181.908954 degrees about axis vector (0.046757,0.000780,0.998906), which is surprisingly close to 2 degrees as well (if you subtract off 180 degrees). However both cameras have EXACTLY the same CAHVOR_QUAT value, so I'm not clear on what this refers to. I pulled-up a paper discussing the CAHVOR camera model but it made no mention of CAHVOR_QUAT, so I don't know what rotation that parameter refers to.

I'm still not sure how to go from this to photogrammetry, or I'd whip-up some software.
jmknapp
Interesting that taking all the rotations into account gives the correct figure of 2 degrees. Kudos for the quaternion math!
algorimancer
I have just completed a new 3D MER RangeFinder:

http://www.clarkandersen.com/RangeFinder.htm

Screenshot:
[screenshot attachment]

The new RangeFinder uses the CAHVOR camera models for the pancams and navcams, takes 2D coordinates from the image pairs, and uses photogrammetry to calculate a 3D distance, error, and even the coordinates of the point, albeit in the camera reference frame.

If there is sufficient interest I may add a batch processing utility to it at some point.
jmknapp
QUOTE (algorimancer @ Mar 1 2006, 08:51 PM) *
I have just completed a new 3D MER RangeFinder:


Very nice & a lot of work I bet! I notice it works out to 100m or so--my calculator really went haywire at the longer ranges.
algorimancer
QUOTE (jmknapp @ Mar 2 2006, 06:21 AM) *
Very nice & a lot of work I bet! I notice it works out to 100m or so--my calculator really went haywire at the longer ranges.


Thank you smile.gif I was (and am) rather surprised myself at the apparent accuracy, which I can only attribute to the good job the JPL folks did of calibrating the cameras. Since I'm using the Spirit calibration, I'm not sure how well it will compare when used on Opportunity, but the camera model parameters seem pretty close. There is also additional calibration data for each filter of the pancams, but no significant difference between them. I'm debating with myself the notion of integrating all the calibration data for all filters of both rovers (relatively easy to do, but not as clean an interface). At the same time, I'd like to release the source code but will have to investigate some copyright issues first; handling the vector ops with the Boost library would be an easy step in that direction. I'm leaning towards providing a command line interface and a class library release at some point, and it should easily generalize to any rover/lander/etceteras with a stereo pair of cameras and corresponding CAHVOR calibration.

A surprising amount of the work involved in creating this application was simply tracking down the details of the CAHVOR camera model; the vast majority of the relevant papers & web pages were more concerned with generating the model parameters than using them. Frustratingly, rather a lot of the needed papers (found via Google) were bad links to the JPL robotics site; seems like they've decided to isolate a lot of material from the web lately sad.gif
djellison
I know JB's out of the office at the moment, but I'm sure he'd help you out if you needed more info - his email addy is easily googlable.

Doug
algorimancer
QUOTE (djellison @ Mar 2 2006, 07:50 AM) *
I know JB's out the office at the moment, but I'm sure he'd help you out if you needed more info - his email addy is easily googlable.

Doug


I was actually on the verge of resorting to emailing questions yesterday, about the time I stumbled upon a paper which provided an algorithm for transforming from an image coordinate to a vector directed out from the camera's principal point (using the CAHVOR model), which was all I needed to get this thing working. Otherwise I'd found lots of references to the inverse transformation. I was also giving Mathematica a workout trying to figure it out myself. At this point the camera model of the application seems to be working great, and the photogrammetry results speak for themselves. It might be nice to spot check it with some calibrated images, but there's not a lot of room for error in the approach I've used. At this point I'm pretty jazzed about adding more capabilities ... I see what people are doing in terms of projecting images using the camera pointing angles, and it seems that photogrammetry capability would be complementary. I'm looking forward to seeing what transpires smile.gif
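
For what it's worth, the linear part of that transformation (the CAHV portion, ignoring the O and R distortion terms) reduces to a pair of cross products. This is only my reading of the published model, not the RangeFinder code, and the calibration vectors below are placeholders, not real values:

CODE
// Image coordinate -> viewing ray for the linear CAHV model (distortion ignored).
// In CAHV, a direction d from the camera centre maps to pixel
//   x = (d.H)/(d.A), y = (d.V)/(d.A),
// so the ray for pixel (x,y) is parallel to (H - x*A) x (V - y*A).
#include <cstdio>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Unit ray direction out of the camera for pixel (px, py).
Vec3 cahvRay(Vec3 A, Vec3 H, Vec3 V, double px, double py) {
    Vec3 d = cross(sub(H, scale(A, px)), sub(V, scale(A, py)));
    if (dot(d, A) < 0) d = scale(d, -1.0);       // point out in front of the camera
    return scale(d, 1.0 / sqrt(dot(d, d)));
}

int main() {
    // Placeholder calibration: boresight along +X, ~0.28 mrad/pixel, centre (511.5, 511.5).
    Vec3 A = {1, 0, 0};
    Vec3 H = {511.5, -3571.4, 0};
    Vec3 V = {511.5, 0, 3571.4};
    Vec3 r = cahvRay(A, H, V, 511.5, 511.5);
    printf("centre pixel ray: (%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);   // ~ (1, 0, 0)
    return 0;
}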
MaxSt
Thanks, algorimancer.

I was using these simple formulas for NAVCAM (results are very close to Joe Knapp's calculator):

CODE
  
//NAVCAM
object_distance = 0.2*1024*14.672/(12.29*(Xl-Xr));
object_dimension  = 0.2*size/(Xl-Xr);
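// The constants appear to be the NAVCAM focal length (14.672 mm) and the CCD width
// (12.29 mm across 1024 pixels), with a 0.2 m stereo baseline; Xl and Xr are the
// feature's column in the left and right frames. Joe Knapp's example disparity of
// 12 pixels gives object_distance of about 20.4 m, close to the calculator's 20.3 m.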


But I noticed some distortions, so I was looking for CAHVOR model data too... So thanks for the links. cool.gif
algorimancer
QUOTE (MaxSt @ Mar 2 2006, 04:14 PM) *
Thanks, algorimancer.

I was using these simple formulas for NAVCAM (results are very close to Joe Knapp's calculator):

CODE
  
//NAVCAM
object_distance = 0.2*1024*14.672/(12.29*(Xl-Xr));
object_dimension  = 0.2*size/(Xl-Xr);


But I notices some distortions, so I was looking for CAHVOR model data too... So thanks for the links. cool.gif


You're welcome smile.gif I tried some similar formulas initially, plus tried to interpolate between the derived vectors listed in the CAHVOR files, but just couldn't make myself be happy with the results. Once I had all the pieces together it was actually relatively easy to get it working, but 3D graphics and motion analysis is part of my day job, so I have a well-stocked bag of tricks. It's been entertaining.
algorimancer
While it's only a day since the initial announcement, I've just implemented a few improvements to my MER RangeFinder application. It's now version 1.1 smile.gif There are two major (visible) changes:

1. Integrated the CAHVOR parameters for Opportunity in addition to Spirit. It turns out to be noticeably more accurate with the correct rover selected.

2. Added a batch processing option, to permit processing files full of rows of pairs of pixel coordinates. It occurs to me that the ImageJ application might be very helpful in quickly acquiring lots of pixel coordinates, perhaps combined with Excel to handle organizing the data files.

I looked into the notion of providing separate CAHVOR calibrations for each pancam filter, but there doesn't seem to be any difference in the parameters between the filters, so I won't bother.

Here's the link and updated screenshot:

http://www.clarkandersen.com/RangeFinder.htm

[screenshot attachment]

Enjoy. Please let me know of any problems or suggestions; I've done a fair amount of testing, but not exhaustive.
MaxSt
Batch Process seems to work fine, that's very useful.

But could you explain "position", please? Where is the center?
Tesheiner
Thanks for this new tool, algorimancer.

QUOTE (algorimancer @ Mar 3 2006, 03:02 AM) *
It occurs to me that the ImageJ application might be very helpful in quickly acquiring lot's of pixel coordinates, perhaps combined with Excel to handle organizing the data files.


Do you mean (semi-)automatically picking pairs of pixel coordinates from both L & R images? That would be great!
I'm not familiar with ImageJ, and its home page gives me no hint about such a capability.
Any help?
djellison
Then once you have a fairly populated array of values, you can generate a mesh......and put the image back on it...and bingo - the full on 3d navigable environment we've been dying for smile.gif

Doug
MaxSt
Actually, that's exactly what I'm playing with right now...

I found this nice program for automatic point selection... It's the same method used in Autostitch, so it's quite good:
http://www.cs.ubc.ca/~lowe/keypoints/

I already got some quick-and-dirty 3d models from NAVCAMs. But the model from one pair of images is just not enough - looks like a small patch. What I really want is to stitch them slices into the whole pizza... but that's kinda hard.
djellison
Alternatively, just write a convertor for the released data which includes meshes smile.gif

Oh yeah- hint hint smile.gif

Doug
algorimancer
QUOTE (MaxSt @ Mar 2 2006, 09:59 PM) *
Batch Process seems to work fine, that's very useful.

But could you explain "position", please? Where is the center?


The position is given in the pancam masthead reference frame (using a pair of center coords, (512.5,512.5) for each camera, is illustrative), and the center/origin is set by the application as the midpoint between the two pancams' principal points (essentially the focal points). This means that you can generate a set of points in a consistent reference frame for one stereo pair of images. If you do this for multiple pairs of images you'll have to apply a rotation to the resulting points (and possibly a translation if the origin doesn't coincide with the masthead's center of rotation). One solution would be to have at least 3 overlapping points between them, figure out the absolute orientation transformation, and apply it as needed. Or perhaps it's adequate to get the pancam orientation corresponding to the images and apply that rotation.

I can provide a capability to transform coordinates sometime in the next couple of days, I think. Meanwhile, there's an open source (free) application out there called Blender, which I haven't used, but apparently is popular among the 3D animation crowd, and may or may not be helpful.

To answer Tesheiner's question, "I'm not familiar with ImageJ, and it's home page gives me no hint about such capability. Any help?", I believe that once you get ImageJ running you'll find a "measure" option under one of the menus. My recollection is that measure allows you to capture pixel coordinates, in addition to measuring distances. On the other hand, the application that MaxSt mentioned above may be more targeted to capturing coords from image pairs and worth a look.

Generating meshes... possibly Blender would be helpful. Other ideas?
jmknapp
QUOTE (algorimancer @ Mar 3 2006, 07:15 AM) *
If you do this for multiple pairs of images you'll have to apply a rotation to the resulting points (and possibly a translation if the origin doesn't coincide with the masthead's center of rotation). One solution would be to have at least 3 overlapping points between them, figure the absolute orientation transformation, and apply it as needed. Or perhaps it's adequate to get the pancam orientation corresponding to the images and apply that rotation.


I think that--at least conceptually--once you have the position of a point in the masthead frame as your app provides, the SPICE kernels released on the NAIF website can be used to transform it to whatever coordinate system is desired. In this case I guess one would want the Mars body-fixed frame IAU_MARS (essentially latitude-longitude-altitude). So if one had a point in the masthead frame MER-1_PMA_HEAD stored in a vector pmaxyz, these would be the SPICE calls:

QUOTE
// load SPICE kernels for MER1
furnsh_c("mer1_surf_roverrl.bsp") ; // rover position
furnsh_c("mer1_struct_ver10.bsp") ; // rover structures position
furnsh_c("mer1_surf_pma.bc") ; // rover PMA pointing

// et is ephemeris time
spkezr_c("MER-1_PMA_HEAD",et,"IAU_MARS","LT+S","MARS",headbf,&lt) ; // get PMA head position headbf, Mars body-fixed
pxform_c("MER-1_PMA_HEAD","IAU_MARS",et,pma2bf) ; // generate pma->body-fixed rotation matrix pma2bf
mxv_c(pma2bf,pmaxyz,marsiauxyz) ; // multiply rot matrix times pma vector to get body-fixed pma vector
vadd_c(headbf,marsiauxyz,pointbf) ; // add to head position to get point position, body-fixed
reclat_c(pointbf,&radius,&longtitude,&latitude) ; // convert to radius, lon, lat


So most of the automagic would be provided by NASA. The SPICE kernels must be a lot of work for them to generate, with rover slippage complicating matters, but I think they may do a lot of the eyeball work you refer to and release updated SPICE kernels daily. To wit, this is from the SPK (position kernels) README:


QUOTE
mer[1,2]_surf_roverrl_YYMMDDHRMN

MER-1/2 rover and site position SPK file generated daily from rover TLM combined with the latest bundle-adjustment position input from Dr. Ron Li. The latest of these files supersedes all previous files.


The latest such file for MER1 is mer1_surf_roverrl.bsp (currently 54 MB), released yesterday, covering the rover position from 25JAN2004 through 02MAR2006. The positions of various fixed points on the rover (such as the PMA head) are given in another file, mer1_struct_ver10.bsp. The file mer1_surf_pma.bc (currently 21MB) gives the pointing info for the PMA, also updated daily.

That said, I haven't actually tried this for MER...
algorimancer
QUOTE (jmknapp @ Mar 3 2006, 08:42 AM) *
...
masthead frame MER-1_PMA_HEAD stored in a vector pmaxyz, these would be the SPICE calls:
...
That said, I haven't actually tried this for MER...


Is SPICE available via the web? I was reading through the user guide a few days ago and it had some interesting options, but I saw no mechanism for remote access. Fundamentally, simply knowing the transformation from the pancams' principal points to the mast head's center of rotation would be of tremendous assistance when combined with camera orientation parameters.
MaxSt
QUOTE (algorimancer @ Mar 3 2006, 07:15 AM) *
The position is oriented in the pancam masthead reference frame (using a pair of center coords is illustrative, (512.5,512,5) for each camera), and the center/origin is set by the application as the midpoint between the two pancam's principle points (essentially the focal points).


That's what I was hoping for... but I'm still a bit confused.

For example, I give coordinates (465,402),(453,402), and I get position (-19.7276,-0.0000,-0.3330).

What does the second number mean? Why is it zero? The original point is not in the center or anything...
algorimancer
QUOTE (MaxSt @ Mar 3 2006, 01:10 PM) *
That's what I was hoping for... but I'm still a bit confused.

For example, I give coordinates (465,402),(453,402), and I get position (-19.7276,-0.0000,-0.3330).

What's the second number means? Why it's zero? The original point is not on center or anything...


Well, it's an (x,y,z) coordinate describing a position in space. In this case the x axis direction is positive in the direction behind the camera, the y-axis is positive to the left, and the z axis is positive towards the ground. This forms a right-handed coordinate system. Personally I wouldn't worry too much about what the numbers in the coordinates mean in and of themselves, but rather focus on how the vertices relate to each other. For example, I see a rock in a pair of spirit navcam images, find the position of a point on its left side is about (-2.133,-0.056,-0.468) and that of a point on its right is about (-2.097,-0.204,-0.418), subtract those and calculate the magnitude of the resulting vector and I find that the rock is about 16 centimeters across (a calculator that handles vectors makes this a lot easier). Better yet, sample a variety of points in a scene, graph the resulting vertices in a 3D grapher, and get a better sense of the topography and how it varies from different perspectives. Eventually it would be nice to be able to do this with meshes and overlay the images on the meshes (Photomodeler can do this, but is beyond my price range).
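
As a trivial worked version of that rock measurement (using exactly the coordinates quoted above, nothing from the application itself):

CODE
// Subtract the two RangeFinder positions and take the magnitude of the difference.
#include <cstdio>
#include <cmath>

int main() {
    double left[3]  = {-2.133, -0.056, -0.468};
    double right[3] = {-2.097, -0.204, -0.418};
    double dx = right[0] - left[0], dy = right[1] - left[1], dz = right[2] - left[2];
    printf("rock width: %.2f m\n", sqrt(dx*dx + dy*dy + dz*dz));   // about 0.16 m
    return 0;
}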

Having said all that and re-reading your question I see that I've gone off on a tangent a bit; sorry about that. Why is the middle coordinate 0? I notice that the x-coordinate of each of your pixels is near the center of the images. Bear in mind that the x-coordinate of the pixel is an indicator of the y-axis in the coordinate system, and it appears that you have a point whose y-coordinate just happens to be aligned with the midpoint of the 2 cameras principal points.

EDITED: After further consideration, it appears that I have muddied the waters a bit when I said that the origin was the midpoint between the cameras' principal points. The cameras' principal points are given by the CAHVOR parameter C, which in one paper is described as "the camera center vector C from the origin of the ground coordinate system ... to the camera perspective center". In other words, neither camera's C is at coordinate (0,0,0), nor is the midpoint between them. For Spirit, the left and right C's are (0.382152,0.149178,-1.246381) and (0.443429,-0.142099,-1.246638), and the midpoint between them (which is what my application measures the distance to) is at (0.412791,0.00354,-1.24651), and the distance between them is 0.297653 [30 centimeters]. This is not very useful as an origin, and I have the impression that the coordinate frame is different from that resulting from the image-to-world transformation, so I'm going to make a modification to the application to make the origin coincide with the midpoint between the cameras' perspective centers. I'm also inclined to modify the frame of the resultant coordinates to a more user-friendly system in which the x-axis is positive to the right, the y-axis is positive towards the sky, and the z-axis is positive towards the rear of the camera. Yes, this implies that the z coordinate will be negative in the viewing direction, but that is necessary to preserve a right-handed coordinate system. None of this will affect the distances measured, but it will be easier to understand in terms of the images and the x-y-z coordinate system we all learned in school.

[Done. Version 1.2 has these updates; the origin is the midpoint between the camera perspective centers, the new x is the old -y, the new y is the old -z, and the new z is the old x. Ranges are unchanged, as is the relative orientation between points.]
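
In code that remapping is just (a sketch of the convention described above, not the application's code):

CODE
// Version 1.2 frame change: new x = -old y, new y = -old z, new z = old x.
void remapToV12Frame(const double oldXyz[3], double newXyz[3]) {
    newXyz[0] = -oldXyz[1];   // x: positive to the right
    newXyz[1] = -oldXyz[2];   // y: positive towards the sky
    newXyz[2] =  oldXyz[0];   // z: positive towards the rear of the camera
}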

If anyone disapproves, we can discuss it. This is a quick and easy change to make to the application.
Tesheiner
QUOTE (algorimancer @ Mar 3 2006, 01:15 PM) *
To answer Tesheiner's question, "I'm not familiar with ImageJ, and it's home page gives me no hint about such capability. Any help?", I believe that once you get ImageJ running you'll find a "measure" option under one of the menus. My recollection is that measure allows you to capture pixel coordinates, in addition to measuring distances. On the other hand, the application that MaxSt mentioned above may be more targeted to capturing coords from image pairs and worth a look.


I had a look at the application mentioned by MaxSt but it doesn't seem to solve my "problem".
I was thinking of something like what is available in PTGui to match the same pixels/features in two images. The idea is to automate, at least a bit, the currently manual process of identifying the (x,y) coordinates of e.g. a rock in both L and R images.
MaxSt
QUOTE (algorimancer @ Mar 3 2006, 03:44 PM) *
so I'm going to make a modification to the application to make the origin coincide with the midpoint between the cameras perspective centers. I'm also inclined to modify to frame of the resultant coordinates to a more user friendly system in which x-axis is positive to the right, the y-axis is positive towards the sky, and the z-axis is positive towards the rear of the camera.


That would be great!

By the way, I forgot to mention that I'm using settings "Spirit" and "Navcam".

QUOTE (algorimancer @ Mar 3 2006, 03:44 PM) *
Eventually it would be nice to be able to do this with meshes and overlay the images on the meshes.


I already got that. Maybe I'll post a couple of my 3d models...
algorimancer
QUOTE (MaxSt @ Mar 3 2006, 04:23 PM) *
That would be great!

By the way, I forgot to mention that I'm using settings "Spirit" and "Navcam".
I already got that. Maybe I'll post a couple of my 3d models...


I would enjoy seeing your 3d models smile.gif

Also, the change is updated (version 1.2), as edited in my prior post. One thing still perplexes me a bit, but I probably just need to think on it a little more.
MaxSt
QUOTE (Tesheiner @ Mar 3 2006, 03:55 PM) *
I had a look to the application mentioned by MaxSt but it seems to not solve my "problem".


That's strange...
It gives me like 3000-4000 points after matching left and right, with only 2-3 false positives.

---

OK, here are a couple of models I promised, in the attachment.

There are many free viewers available for the VRML format. I use an IE plug-in called Cortona.
You'll also need a couple of textures from Spirit's NAVCAM (sol 751):

2N193038341EFFAOA0P0615L0M1.JPG
2N193038393EFFAOA0P0615L0M1.JPG
algorimancer
QUOTE (MaxSt @ Mar 3 2006, 05:12 PM) *
That's strange...
It gives me like 3000-4000 points after matching left and right, with only 2-3 false positives.

---

OK, here is a couple of models I promised, in attachment.

There are many free viewers available for VRML format. I use IE plug-in called Cortona.
You'll also need a couple of textures from Spirit's NAVCAM (sol 751):

2N193038341EFFAOA0P0615L0M1.JPG
2N193038393EFFAOA0P0615L0M1.JPG



Wow, the second one is particularly good (I used Flux Player). You used my application to get the points for these? Those are pretty impressive results (I hadn't expected anything like this so quickly).

That Keypoints application (http://www.cs.ubc.ca/~lowe/keypoints/ for those who missed it) is pretty nice. How did you go from vertices to a triangulated surface? Personally I'd enjoy results in the STL file format, but it's easy to convert, I think.
MaxSt
No, I'm still using my simple formulas, I've been working on this for some time...

I'd like to switch to your method, but I'm still having problems with "position". Can you make it so that for (524,512)-(500,512) it would return x=0, y=0?

There are many utilities for Delaunay triangulations. I found one called "Triangle". Seems to work fine:
http://www.cs.cmu.edu/~quake/triangle.html
CosmicRocker
Holy Moses! I wasn't expecting the synergy here to develop so quickly. It took me a little while to realize how I could drop the texture onto the scene, but it was worth the effort. smile.gif This is very nice work.
djellison
I've been waiting 2 years for files like that.

A little animation is on the way.

The ideal scenario would be the conversion of the PDS-released wedges and textures (I desperately want to combine the complete wedge-sets of multiple Navcam panoramas into a 3D terrain from the landing site up to Bonneville crater and animate the complete traverse that way)

BUT - these are the first MER wedges I've seen - and as a result, they are beautiful smile.gif
http://www.unmannedspaceflight.com/doug_im...it_751_2nav.mov
(720p WMV-HD version on the way, hopefully - that QT is H264, but approx 420 x 270)

Doug
helvick
I started this thread just looking to get a slightly better understanding of the basic geometry of the stereo imaging - I'm amazed at the technical capabilities that you folks have.

Fantastic stuff.
djellison
15MB 720p WMV-HD movie of it
http://www.unmannedspaceflight.com/doug_im...751_3d_720p.wmv

Doug
helvick
QUOTE (djellison @ Mar 4 2006, 11:40 AM) *

Jaw drops to floor. This is really stunning.
djellison
Now extrapolate to the complete traverse from the lander to Bonne smile.gif

All I need is the wedges - and I have no idea how to get them out of the PDS data sets.

Doug
jmknapp
QUOTE (algorimancer @ Mar 3 2006, 11:53 AM) *
Is SPICE available via the web? I was reading through the user guide a few days ago and it had some interesting options, but I saw no mechanism for remote access. Fundamentally, simply knowing the transformation from the pancams' principal points to the mast head's center of rotation would be of tremendous assistance when combined with camera orientation parameters.


It's available at the NAIF website, both C and FORTRAN libraries, & the data files are there as well. It's difficult to use though because so many elements (specifically the SPICE kernel files) have to be in place or it throws an exception and quits. I tried yesterday to code up something to see if it would work & reached a stumbling block. I can get the position of the rover itself on the surface (MER-1_ROVER -> IAU_MARS), but trying to get the pointing of the mast assembly head fails:

QUOTE
Toolkit version: N0058

SPICE(NOFRAMECONNECT) --

There is insufficient information available to transform from -253110
(MER-1_PMA_HEAD) to frame 10014 (IAU_MARS). Frame -253110 could be transformed
to -253110 (MER-1_PMA_HEAD). Frame 10014 could be transformed to 1 (J2000).'


So there's a disconnect in my code. Getting the position of the rover is one thing, but the exact pointing requires knowing the local level of the rover and the instrument pointing. Must not have the right kernels loaded--If I can figure it out I'll post.
algorimancer
QUOTE (djellison @ Mar 4 2006, 06:40 AM) *

What I can see of it is very impressive, but what I see is maybe 5 near-static perspectives, with a hint of movement at each translation (no luck at all with the earlier .mov file; Quicktime and WMP both don't recognize it). Pretty sure I have pretty up-to-date codecs, but apparently I'm lacking something.
algorimancer
QUOTE (MaxSt @ Mar 3 2006, 10:01 PM) *
I'd like to switch to your method, but still having problems with "position". Can you check it so for (524,512)-(500,512) it would return x=0,y=0?


Honestly, I'm not entirely clear as to just why it doesn't return something like that. Bear in mind that, at least for the pancams, the CCDs are not horizontally parallel. Quoting from my post #10 in this thread:

[QUOTE...]
PancamLeft

TKFRAME_-254128_AXES = ( 2, 1, 3 )
TKFRAME_-254128_ANGLES = ( -90.051, 0.659, 90.315 )

PancamRight
TKFRAME_-254131_AXES = ( 2, 1, 3 )
TKFRAME_-254131_ANGLES = ( -89.946, -1.376, 90.400 )

I worked-out the combined rotations, rotating (for PancamL) as specified first -90.051 degrees about the Y axis, then 0.659 degrees about the X axis, and 90.315 degrees about the Z axis (in that order), then did the same for PancamR, and worked-out the relative orientation between them (I did all this using quaternions, as they're easier and more accurate than 3X3 matrices). The net result is that the relative orientation between the left and right pancams is

2.039451 degrees about the unit axis vector (-0.045257,-0.998119,0.041349). Not exactly the expected 2 degrees, but darn close.
[...QUOTE]

My application depends entirely on the provided CAHVOR parameters for each camera, which thankfully were calibrated in the same frame (I don't have to apply that 2 degree toe-in transformation, for instance, nor the 30 cm translation between cameras). It's one thing to set the origin to a handy location like the midpoint between the cameras' principal points, but I'd hesitate to force it into an arbitrary location. There is nothing preventing you from finding a translation vector that achieves that effect and applying it to all the vertices.

I'm intending to review some of the details of my coding today and verify that it is doing what I think it is. The precision of the pixel-vector intersection suggests that it is correct, but I'm wondering a bit about the behavior of the vertical dimension (the zero seems to correspond with the top of the image rather than the center); the other dimensions seem fine. Due to the non-ideal nature of the lenses and orientations of the cameras I'm not surprised that putting in (511.5,511.5) and (511.5,511.5) (the true center of the CCD arrays) does not return exactly x=0 (it's close), but it bothers me a bit that y is not similarly close to 0.
djellison
QUOTE (algorimancer @ Mar 4 2006, 03:40 PM) *
no luck at all with the earlier .mov file,


You need the latest Quicktime, for H264 support.

Doug
algorimancer
Soliciting opinions for additional development ...

It will be relatively simple to provide a command line interface to the MER photogrammetry portion of my application. I'll likely get this out this weekend, perhaps in conjunction with a c++ class library & dll.

The notion of providing transformations to align one stereo image pair's dataset (the produced 3D coordinates) with that of other datasets brings up a number of questions and approaches regarding the desired user interface and operations. I would appreciate it if potential users of the application(s) would consider how they might like to interact with the data, and how I can create software to facilitate that.

Given pancam tilt angles, along with center(s) of rotation in the masthead frame, it would be easy to apply those transformations to the vertices. I'm not sure where this orientation information would come from, nor what format it is in, but I'm envisioning a gui interface where the user pastes in this information and the gui applies it to a file of vertices. This gui could be the same as the current application, or a separate one dedicated to the purpose. Alternately, a command line interface could do the same thing.

Incidentally, when I go about transforming vertices I'm typically doing something like 1) translate to the center of rotation, 2) rotate a given angle about an axis vector, and 3) translate back, and possibly 4) apply some additional translation. The axis vector to rotate about could be one of the standard x, y, z axes (usually in sequence), or (my preference) some arbitrary axis in space. I handle all of this in terms of 3D direction vectors (for position, translation, and axis direction) and a 4D quaternion (for rotation) or 4X4 matrix (combining translation and rotation). The information describing the masthead position/orientation for a particular stereo image pair might be given simply by a 3D center of rotation and a 4D quaternion, or it may be described in terms of a sequence of 3 rotations, each given by an angle, axis vector, and center of rotation, or it may be very simply described as a 4X4 matrix. I'll need to know what format to use prior to implementing the application (currently I'd be guessing).
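
As a concrete sketch of that translate-rotate-translate sequence (Rodrigues' rotation about an arbitrary axis through a centre of rotation; illustrative only, not the application's code):

CODE
// Rotate a point about an arbitrary axis passing through a centre of rotation:
// translate to the centre, rotate (Rodrigues' formula), translate back.
#include <cstdio>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 rotateAboutAxis(Vec3 p, Vec3 centre, Vec3 axisUnit, double angleDeg) {
    const double PI = 3.14159265358979;
    double a = angleDeg * PI / 180.0, c = cos(a), s = sin(a);
    Vec3 v = {p.x - centre.x, p.y - centre.y, p.z - centre.z};          // translate
    double d = axisUnit.x*v.x + axisUnit.y*v.y + axisUnit.z*v.z;        // axis . v
    Vec3 w = {axisUnit.y*v.z - axisUnit.z*v.y,                           // axis x v
              axisUnit.z*v.x - axisUnit.x*v.z,
              axisUnit.x*v.y - axisUnit.y*v.x};
    // v' = v*cos + (axis x v)*sin + axis*(axis.v)*(1 - cos), then translate back
    return {centre.x + v.x*c + w.x*s + axisUnit.x*d*(1 - c),
            centre.y + v.y*c + w.y*s + axisUnit.y*d*(1 - c),
            centre.z + v.z*c + w.z*s + axisUnit.z*d*(1 - c)};
}

int main() {
    Vec3 p = {1, 0, 0}, centre = {0, 0, 0}, zAxis = {0, 0, 1};
    Vec3 q = rotateAboutAxis(p, centre, zAxis, 90.0);
    printf("(%.3f, %.3f, %.3f)\n", q.x, q.y, q.z);   // (0, 1, 0)
    return 0;
}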

Alternately, if the pancam angles are not available (and perhaps even if they are), another option for aligning data sets captured from multiple stereo pairs of images would be to find a triad of 3 vertices in common between two data sets (sets of 3D vertices resulting from my application's photogrammetry of the original stereo image coordinates). It is simple (as in I already have the code) to calculate the transformation between one triad and the other (it's called absolute orientation), and then apply it to the desired data set (vertices). The problem with working with the masthead orientations is that the resulting aligned data sets will only be aligned for that particular position of the rover. If we wish to combine position data from multiple rover positions I see no alternative to using the absolute orientation approach.

My inclination at the moment is to provide a standalone gui application to handle transforming text files containing rows of 3D vertices, much as the current application does with batch files of image coordinates.

Your thoughts & opinions would be appreciated smile.gif Choices need to be made, and I lack the time to do everything possible.
MaxSt
Nice video, Doug. Gives me confidence that automatic stitching should be possible, after I figure out all the angles. Creating a mesh for the combined set of points is going to be tricky. But that application for triangulation seems to be pretty powerful; I just need to figure out what combination of the switches I should use.

algorimancer - since you're asking for opinions on development... You know what I'd like to see - what if your program could actually display both images and let the user select the points? And when you load the list of coordinates for Batch process, you could actually display all of them on top of the images?

That would be extremely helpful. That keypoint application sometimes creates false positives, and finding/eliminating them manually is a tedious process. Visual inspection would help a lot.
algorimancer
QUOTE (MaxSt @ Mar 4 2006, 03:49 PM) *
algorimancer - since you asking for opinions on development... You know what I'd like to see - what if your program could actually display both images and let user select the points? And when you load the list of coordinates for Batch process, you could actually display all of them on top of images?

That would be extremely helpful. That keypoint application sometimes creates false positives, and finding/eliminating them manualy is a tedious process. Visual inspection would help a lot.


I like that idea a lot. However... adding image handling adds rather a lot of complexity to the application; it's not something which I could whip-up over a weekend. It's relatively easy to display the images and overlay the connections; where it gets tricky is that it will pretty much require a zoom/pan option. Sounds simple enough, but that sort of thing can be really complex to get right. I'll cogitate on it for awhile, see what I can think up. I think the source code for the keypoint application is available, so it could be directly integrated with the photogrammetry application.

This weekend I've encountered a development dead end. My compiler has acquired a weird bug that prevents me compiling. I'll have to re-install next week, I think. I have an upgrade to MSVC2005 anyway. I have the command line version of the photogrammetry tool essentially complete, but can't compile it. I've also thought of a way to achieve a good validation of my CAHVOR model, but again am unable to compile it. I'll work on some other projects.
algorimancer
For those of you into that whole command line thing (scripting and programming and so on), I have the command line version of my photogrammetry and rangefinding tool compiled and posted:

http://www.clarkandersen.com/RangeFinder.htm

Screenshot:
[screenshot attachment]

For the rest of us, I'd recommend sticking with the Windows gui version on that same page.

Currently I'm beginning work on an application which will integrate an image viewer with interactive point selection and editing to expedite the photogrammetry. At some time I'm sure we'll want to drape image segments over reconstructed triangle meshes, but just at the moment I'm not sure how to do that. There's also still the notion of aligning one 3D dataset (from one image pair) with another, but there hasn't been any feedback on that issue so I'll assume it isn't an immediate need, but it's still in the works.
Tesheiner
Great tool, algorimancer.

One feature I would really appreciate having in it would be the option to measure an object's size, similar to jmknapp's tool's "Dimension of object (pixels)".

When using your or jmknapp's tools to measure driving distances, you must find reference rocks/features on both pre-drive and post-drive images and calculate the distance to them on both sets of pictures. And the size of those rocks, measured on both pre-drive and post-drive images, is a good way to double-check that they are actually the same ones on both sets of pics.
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2024 Invision Power Services, Inc.