Full Version: Stitching MER Images With POV-Ray
Unmanned Spaceflight.com > Mars & Missions > Past and Future > MER > Tech, General and Imagery
erwan
I used Panorama Tools and Autostitch to make Navcam or Pancam mosaics. Though powerful, these programs assume the pictures to stitch are all shot from a single "optical center". That is not the case for MER Navcam or Pancam images, which are shot from the rotating Pancam Mast Assembly. This issue is very troublesome with Navcam images, given the wide field of view and the parallax defects between images, especially for close objects. For example, two Navcam images picturing elements of the rover itself can very often not be stitched correctly...
A good solution for stitching Navcam images is geometrical: I mean projecting the different images so as to reproduce the Pancam Mast Assembly rotation. For images available on the PDS node websites, we can find the needed PMA parameters near the end of the PDS label (rover-defined parameters, azimuth and elevation in degrees).
We can then render images projected this way with the help of the POV-Ray software - POV-Ray home - and it's not difficult.
We have to create one flat square for each image to stitch, translate the square onto a virtual sphere according to the elevation and azimuth parameters found in the image label, and then texture the square with an image map: the image itself. Do this for all the images to stitch, add a camera at the center of the sphere, render - it works! And we can finely tune every parameter needed to stitch important features such as the horizon line nicely, to orient the resulting mosaic, etc...
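To make the geometry concrete, here is a small numeric sketch of the "virtual sphere" idea (in Python rather than POV-Ray; the 1423.01 translate distance is the one used in the scene file later in the thread, and the axis/sign conventions are simplified, so they won't match POV-Ray's left-handed system exactly):

```python
import math

def plate_center(azimuth_deg, elevation_deg, radius=1423.01):
    """Return the 3D position of an image plate's center.

    Start at (0, 0, -radius), i.e. straight ahead of the camera at the
    origin, then rotate about the x axis by the elevation and about the
    y axis by the azimuth (simplified right-handed conventions).
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    x, y, z = 0.0, 0.0, -radius
    # elevation: rotation about the x axis
    y, z = y * math.cos(el) - z * math.sin(el), y * math.sin(el) + z * math.cos(el)
    # azimuth: rotation about the y axis
    x, z = x * math.cos(az) + z * math.sin(az), -x * math.sin(az) + z * math.cos(az)
    return (x, y, z)
```

Whatever the pointing, the plate stays at the same distance from the camera at the origin - that is exactly the virtual sphere; only the direction changes with azimuth and elevation.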

The sketch below illustrates the text:




And the Navcam mosaic linked below was made this way:

spaceffm
Thank you for the link and the explanation, but even the tutorial is too difficult for me; I am not able to understand the whole text.
But we have you to create those stunning panoramas - the NASA imaging office should hire you.
Tman
QUOTE (spaceffm @ Mar 31 2005, 11:50 AM)
Thank you for the link and the explanation, but even the tutorial is too difficult for me; I am not able to understand the whole text.
But we have you to create those stunning panoramas - the NASA imaging office should hire you.


Which passage exactly don't you understand? Maybe I can help you. Or do you already know this useful page: http://dict.leo.org/?lang=de&lp=ende
Jeff7
Great, now all we need is a quick, easy way of getting the colors right (using GIMP would be nice, as it's also free), and everyone's set to make nice panoramas.
dilo
Merci Erwan, very interesting method!
I'm not very familiar with the "PMA parameters" and the "PDS label"; I will search for them...
Meanwhile, I would appreciate a short example of your POV-Ray code in order to better understand some details.
Thanks again and ciao!
erwan
Arrivederci, Dilo!
Here is the entire text of the POV-Ray file used for the Sol 438 pan; some explanations for non-POV-Ray users will follow in the next days... The images N0 to N6 referenced in the text as TGA files correspond to the 7 Navcam images used, from upper left to lower right. You can paste the text into an empty POV file and rename the N0 to N6 TGA filenames according to the real filenames on your computer (all files must be in a directory POV-Ray recognizes)... I guess image mapping requires TGA or PNG format, so converting the file format is required. Note that for sol 438 the PDS labels are unavailable, but it's a fast process to adjust nice elevation and azimuth values by hand to stitch the images.

Hope you will enjoy, for NAVCAM stitching especially!

//PARAMETERIZED PROJECTION OF NAVCAM IMAGES FOR MOSAICS / ERWANN QUELVENNEC

//**************** GLOBAL SETTINGS ***********************************************************

global_settings {adc_bailout 0.003922 ambient_light <1.0,1.0,1.0> assumed_gamma 1.9 max_intersections 64
max_trace_level 10 charset ascii }


//*************** GLOBAL PARAMETERS *************************************************************

background { color <0.000,0.000,0.000> } //black background around images

#declare FOV = 0.93; //sizes the boxes/image maps, for all images. 0.93 works well for Navcam; change only with care

#declare imageratio = 1280/1024; // Width/Height of rendered image

//**************** TUNING PARAMETERS, ROVER DEFINED *************************************************************

#declare ANGLE_CAMERA = 180; //horizontal field of view (wider or narrower)

#declare ROLL_CAMERA = 0.7; // clockwise : tuning the horizontality of the mosaic/horizon line

#declare AZIMUT_CAMERA = 55.7; //left > right To be adjusted near the mean azimuth of the complete mosaic (at least when finishing)

#declare ELEVATION_CAMERA = 7; // height of the camera look_at point; useful to flatten horizon

#declare AZIM_IMAGE000 = -25.30; #declare ELEV_IMAGE000 = -17.15; #declare HAUT_IMAGE000 = 1015/1024; #declare LARG_IMAGE000 = 1024/1024;

#declare AZIM_IMAGE001 = 7.60; #declare ELEV_IMAGE001 = -13.10; #declare HAUT_IMAGE001 = 1018/1024; #declare LARG_IMAGE001 = 1024/1024;

#declare AZIM_IMAGE002 = 40.00; #declare ELEV_IMAGE002 = -10.50; #declare HAUT_IMAGE002 = 1024/1024; #declare LARG_IMAGE002 = 1024/1024;

#declare AZIM_IMAGE003 = 72.32; #declare ELEV_IMAGE003 = -9.87; #declare HAUT_IMAGE003 = 1024/1024; #declare LARG_IMAGE003 = 1024/1024;

#declare AZIM_IMAGE004 = 104.8; #declare ELEV_IMAGE004 = -11.70; #declare HAUT_IMAGE004 = 1028/1024; #declare LARG_IMAGE004 = 1024/1024;

#declare AZIM_IMAGE005 = 125.7; #declare ELEV_IMAGE005 = 19.69; #declare HAUT_IMAGE005 = 1015/1024; #declare LARG_IMAGE005 = 1200/1024;

#declare AZIM_IMAGE006 = 137.5; #declare ELEV_IMAGE006 = -14.70; #declare HAUT_IMAGE006 = 1022/1024; #declare LARG_IMAGE006 = 1024/1024;


//*************** CAMERA *****************************************************************

/*
//DEACTIVATED ULTRA_WIDE_ANGLE CAMERA
camera {
ultra_wide_angle
location < 0, 0, 0>
sky < 0.0, 0.0, 1.0> up <0.0, 0.0, 1.0> right < imageratio, 0.0, 0.0>
angle ANGLE_CAMERA look_at < 0, 0.1, -3650> rotate ROLL_CAMERA*z rotate ELEVATION_CAMERA*x rotate -AZIMUT_CAMERA*y
}
*/


camera { // spherical lens for sphere field of view (mappable to a sphere)
spherical location <0,0,0> look_at <0,0,-1>
sky <0.0, 0.0, 1.0> up <0.0, 0.0, 1.0>

angle ANGLE_CAMERA // horizontal degrees
ANGLE_CAMERA*1/imageratio // vertical degrees adaptation for rendered image size
rotate ROLL_CAMERA*z
rotate ELEVATION_CAMERA*x
rotate -AZIMUT_CAMERA*y
}



//*************** IMAGES *******************************************************************

#declare image000 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N0.TGA"
map_type 0 once } scale <1024*LARG_IMAGE000, 1024*HAUT_IMAGE000, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image001 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N1.TGA"
map_type 0 once } scale <1024*LARG_IMAGE001, 1024*HAUT_IMAGE001, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image002 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N2.TGA"
map_type 0 once } scale <1024*LARG_IMAGE002, 1024*HAUT_IMAGE002, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image003 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N3.TGA"
map_type 0 once } scale <1024*LARG_IMAGE003, 1024*HAUT_IMAGE003, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image004 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N4.TGA"
map_type 0 once } scale <1024*LARG_IMAGE004, 1024*HAUT_IMAGE004, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image005 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N5.TGA"
map_type 0 once } scale <1024*LARG_IMAGE005, 1024*HAUT_IMAGE005, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}

#declare image006 =
material { texture { pigment { image_map { tga "C:\Program Files\Moray For Windows\PovScn\N6.TGA"
map_type 0 once } scale <1024*LARG_IMAGE006, 1024*HAUT_IMAGE006, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.02 diffuse 0.0 }}}

//******************* OBJECTS **********************************************************************

box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.0001>/FOV
translate -1423.01*z
material {
image000
}
rotate <ELEV_IMAGE000, -AZIM_IMAGE000, 0.0>
}


box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.0001>/FOV
translate -1423.01*z
material {
image001
}
rotate <ELEV_IMAGE001, -AZIM_IMAGE001, 0.0>
}


box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image002
}
rotate <ELEV_IMAGE002, -AZIM_IMAGE002, 0.0>
}


box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image003
}
rotate <ELEV_IMAGE003, -AZIM_IMAGE003, 0.0>
}


box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image004
}
rotate <ELEV_IMAGE004, -AZIM_IMAGE004, 0.0>
}

box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image005
}
rotate <ELEV_IMAGE005, -AZIM_IMAGE005, 0.0>
}

box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image006
}
rotate <ELEV_IMAGE006, -AZIM_IMAGE006, 0.0>
}
Tman
OK Erwann, I'm just trying an attempt with POV-Ray (even though I told myself I would stay with PTGui - never say "never").

I pasted your text (code) into an empty POV file, prepared five Navcam pics (TGA) and let it run.

Now huh.gif : testpanorama

The original size of the entire image (with the black background) is 1280×1024 px.

OK, the frames still don't fit together correctly, and I guess the TUNING PARAMETERS in the code have to be changed - but how do I get less background and more Columbia Hills?

Thank you for any help.

**edited Apr 4 / 03:06 PM
Tman
Text cancelled (comment: was rubbish)
cIclops
QUOTE (erwan @ Mar 31 2005, 08:38 PM)
Here is the entire text reproducing the POV-ray file used for the Sol438 pan:

<snip>  (long text file)


Thanks erwan! I played with POV-Ray and tried to read the docs, but I couldn't see how to use it to stitch images. Your example is excellent for showing how to do it. POV-Ray is a very powerful piece of software, and its readable scene files are a great way to share ways of using it. Hopefully you can show us other examples of what can be done with it, such as viewing the same scene from different camera angles?
dilo
QUOTE (cIclops @ Apr 4 2005, 03:17 PM)
QUOTE (erwan @ Mar 31 2005, 08:38 PM)
Here is the entire text reproducing the POV-ray file used for the Sol438 pan:

<snip>   (long text file)


Thanks erwan! I played with POV-Ray and tried to read the docs, but I couldn't see how to use it to stitch images. Your example is excellent for showing how to do it. POV-Ray is a very powerful piece of software, and its readable scene files are a great way to share ways of using it. Hopefully you can show us other examples of what can be done with it, such as viewing the same scene from different camera angles?



I think you should first use the correct azimuth/elevation info for each image. I'm still not able to retrieve them (pardon-moi, Erwann!), so I tried to guess them for this Viking mosaic:
http://img189.exs.cx/my.php?loc=img189&image=viking8qk.jpg
Clearly, the level of perfection is far from Erwann's results, also in terms of contrast/luminosity balance... we definitely need some help!
Tman
I'm slowly getting there too. But there is much to adjust, and I don't know what exactly.

Testpanorama2
Tman
QUOTE (cIclops @ Apr 4 2005, 03:17 PM)
Thanks erwan! I played with POV-Ray and tried to read the docs, but I couldn't see how to use it to stitch images. Your example is excellent for showing how to do it. POV-Ray is a very powerful piece of software, and its readable scene files are a great way to share ways of using it. Hopefully you can show us other examples of what can be done with it, such as viewing the same scene from different camera angles?


Hi, first the POV file needs the paths of the images to be stitched (I use TGA converted from JPG). For example:
#declare image003 =
material { texture { pigment { image_map { tga "C:\Programme\POV-Ray for Windows v3.6\images\image3.TGA"
map_type 0 once } scale <1024*LARG_IMAGE003, 1024*HAUT_IMAGE003, 1.0>/FOV translate <-1024/2, -1024/2, 0.0>/FOV} finish { ambient 1.00 diffuse 0.0 }}}


And then start with the "RUN" button...
slinted
erwan,
This looks like a great method! Getting Navcams to line up has always been a particularly hard challenge, and it looks like your method handles them well. I'm fairly certain that all the JPL pans we've seen are done in much this manner (project each image onto a simplified 3D model of the scene, then take an 'image' of the projection). In their case, they use the CAHVOR parameters derived for each image as the controls on the projection.
I'm wondering how you handle the radial lens distortion in this method (each image's projection isn't in reality a rectangle, but rather a pillow shape). Is that controlled by the HAUT_IMAGE and LARG_IMAGE parameters (or do those just change the size of the image)? How did you derive those two values for each image?

Also, you mention that the benefit of this method is that you aren't restricted to a single focal center, but I'm not sure where that comes into play in the parameters. It seems like you are still just translating each projection a fixed z distance from a single center point, not from slightly different points based on the Pancam motion, so I'm not sure where that aspect of the method comes into play.

The artificial sky adds a great extra level of detail to these images. It brings them much more into the perception of actually being there. Keep up the good work!
erwan
Thanks for your comments, folks. Here are some explanations for those interested, as best I can:

- The example POV file I linked was fitted for the sol 438 pan, OK.
- In this text/code, you will find two kinds of parameters:
- First, global parameters: the color of the background, and the field of view of the camera used. I guess 0.93 fits a FOV of 45°, i.e. the Navcam FOV; you may change the FOV parameter, but not as a first try, as it works fine... This parameter in fact defines the size of the boxes the images are wrapped onto, but the boxes stay on a fixed sphere. The last global parameter is there to give the right image ratio (width/height) matching the size you choose to render with POV.
- Second, tuning parameters: the first four define the camera view: horizontal angle of view, clockwise rotation, azimuth (horizontal direction of view) and elevation (vertical direction of view). Playing with these four parameters, you will find the right way to render an image centered on the pan you want to stitch (for example, set the camera angle to 45° times the number of horizontally stitched Navcam images, and the azimuth roughly to the azimuth of the middle image...). Then you will find parameters for each image: azimuth and elevation define the (horizontal, then vertical) projection of the box for each image; then haut (height) and larg (width) finely tune, in "pixel values" (out of 1024), the vertical and horizontal size of the wrapping. This relates to slinted's comment: in fact it's not the pillow effect that is the most annoying, it's the parallax created by the Pancam Mast rotation for foreground objects. Hence the need to vary the height of the images from left to right...
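The rule of thumb above for the camera view can be written out as simple arithmetic; this is only illustrative (the function name and per-frame value are assumptions based on the 45° Navcam FOV mentioned here, not code from the thread):

```python
def camera_settings(image_azimuths_deg, fov_per_image_deg=45.0):
    """Starting values for the mosaic camera, per the rule of thumb above:
    horizontal angle ~ 45 deg per horizontally stitched Navcam frame,
    camera azimuth near the middle of the covered azimuth range."""
    n = len(image_azimuths_deg)
    angle_camera = fov_per_image_deg * n
    azimut_camera = sum(image_azimuths_deg) / n  # mean azimuth of the frames
    return angle_camera, azimut_camera

# the five lower-tier image azimuths from the sol 438 scene file
angle, azim = camera_settings([-25.30, 7.60, 40.00, 72.32, 104.8])
```

With the sol 438 azimuths this suggests ANGLE_CAMERA = 225 and AZIMUT_CAMERA near 40; the posted file uses 180 and 55.7, so these are only starting points to refine by eye.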

Finally, here are the original filenames of images N0 to N6; I guess the best way for a first try is to reproduce the pan with these images and the POV code:

N0: 2N165162680EFFA900P0725L0M1.JPG (sol 437)
N1: 2N165249434EFFA900P1625L0M1.JPG (sol 438)
N2: 2N165249484EFFA900P1625L0M1.JPG (sol 438)
N3: 2N165249534EFFA900P1625L0M1.JPG (sol 438)
N4: 2N165249585EFFA900P1625L0M1.JPG (sol 438)
N5: 2N165342743EFFA912P0884L0M1.JPG (sol 439)
N6: 2N165249671EFFA900P1625L0M1.JPG (sol 438)

Notice N5 is a sol 439 image; Spirit moved between sol 438 and sol 439, which is why I needed to adjust the width of that image to 1200 pixels...
If you convert these images to TGA format, rename them N0 to N6 accordingly and place them in a POV directory, I hope you will render the sol 438 pan correctly with the original POV-Ray code provided. Then you may play with the parameters and, I hope, will come to appreciate this process: it's totally controlled by the user! Moreover, I agree with you, slinted: I'm rather sure the JPL stitching method is geometric, and thus closer to the POV-Ray method than to image-pattern-recognition methods (as used by stitching software)...
erwan
Tman and Dilo: I'm happy to see you getting the hang of the POV process! Probably the only important tip you need now is to tune the image ELEVATION values correctly, at first for example with the two center images: if too high, the images are closer together at the top than at the bottom; if too low, they are closer at the bottom than at the top. Finally, Dilo, I'm sure you will find the best camera elevation / camera roll to render a flat horizon! For contrast/luminosity blending, I'm adding a BW radial/linear mask on each image before stitching.
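Erwan's symptom ("too high → closer on top") is plain spherical geometry: at a fixed azimuth separation, points get closer together as their elevation moves away from zero. A small sketch; the 36°/18° plate dimensions below are made-up illustration values, not MER numbers:

```python
import math

def edge_gaps(plate_elev_deg, dazim_deg=36.0, half_height_deg=18.0, radius=1.0):
    """Chord distance between corresponding corners of two neighbouring
    plates, measured along their top edge and along their bottom edge.

    Two points at the same elevation e, separated by dazim in azimuth on
    a sphere of the given radius, are 2*r*cos(e)*sin(dazim/2) apart.
    """
    def gap(e_deg):
        e = math.radians(e_deg)
        return 2.0 * radius * math.cos(e) * math.sin(math.radians(dazim_deg) / 2.0)
    top = gap(plate_elev_deg + half_height_deg)
    bottom = gap(plate_elev_deg - half_height_deg)
    return top, bottom
```

With the elevation set too high, the top gap comes out smaller than the bottom gap (the images pinch together at the top), and symmetrically for an elevation set too low - exactly the tuning cue described above.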
cIclops
QUOTE (erwan @ Apr 5 2005, 05:19 PM)
<snip>

Finally, i give the original filename of image N0 to N6: i guess the best way for a first try is to reproduce the pan with these images and the POV code:


Cool, it works for me too! ... I played with the parameters a little and found that
JPG files can be used directly; just replace the tga tag in the image_map section with a jpeg tag, like this:

image_map { jpeg "image_filename.jpg" map_type 0 once }

POV-ray 3.6
erwan
Thanks for the tip, cIclops - far easier and faster!
dilo
QUOTE (erwan @ Apr 5 2005, 05:31 PM)
Tman and Dilo: I'm happy to see you getting the hang of the POV process! Probably the only important tip you need now is to tune the image ELEVATION values correctly, at first for example with the two center images: if too high, the images are closer together at the top than at the bottom; if too low, they are closer at the bottom than at the top. Finally, Dilo, I'm sure you will find the best camera elevation / camera roll to render a flat horizon! For contrast/luminosity blending, I'm adding a BW radial/linear mask on each image before stitching.


Hi Erwan, I'm sorry, but even following your indications I am still not able to match the images perfectly, as is clear from this example (replicating the famous "Sol330 HeatShield" mosaic):

The match is good between the first two images, but seems impossible with the third one, even modulating both the azimuth and elevation values... Could this be due to the absence of parallax correction? In fact, looking at your POV code, I didn't find any hint of a correction for the rotating Pancam Mast Assembly, as already noticed by slinted.
Moreover, I have the impression that this trial-and-error method is time-consuming, especially now that I have no experience using it. It would be very useful to have "a priori" information on the direction of each image and then make only some fine adjustments! In a previous post, you mentioned the "PMA parameters" inside the "PDS label", which contain both azimuth and elevation; however, it seems to me that this info is under the "MER Analyst's Notebook", where data sets are released only every 3 months! Am I missing other sources containing this info?
Thanks for your patience, bye.
Marco.
Tman
My try should have worked with the 7 raw pics that Erwann quoted, shouldn't it? They are stitched completely raw.

Testpanorama3

Additionally I've used a higher resolution, therefore I had to change some settings:


//*************** GLOBAL PARAMETERS *************************************************************

background { color <0.000,0.000,0.000> } //black background around images

#declare FOV = 0.93; //sizes the boxes/image maps, for all images. 0.93 works well for Navcam; change only with care

#declare imageratio = 2560/2048; // Width/Height of rendered image

//**************** TUNING PARAMETERS, ROVER DEFINED *************************************************************

#declare ANGLE_CAMERA = 225; //horizontal field of view (wider or narrower)

#declare ROLL_CAMERA = 2; // clockwise : tuning the horizontality of the mosaic/horizon line

#declare AZIMUT_CAMERA = 57; //left > right To be adjusted near the mean azimuth of the complete mosaic (at least when finishing)

#declare ELEVATION_CAMERA = 7; // height of the camera look_at point; useful to flatten horizon

------
The remaining code is the same as in post 6: http://www.unmannedspaceflight.com/index.p...findpost&p=7756
Tman
Marco, have you also tried an attempt with the 7 pics below that Erwann quoted? BTW I've searched and found the pics in your "HeatShield image"; they are from sol 325 (according to the JPL classification). I will also try an attempt with them.

N0: 2N165162680EFFA900P0725L0M1.JPG (sol 437)
N1: 2N165249434EFFA900P1625L0M1.JPG (sol 438)
N2: 2N165249484EFFA900P1625L0M1.JPG (sol 438)
N3: 2N165249534EFFA900P1625L0M1.JPG (sol 438)
N4: 2N165249585EFFA900P1625L0M1.JPG (sol 438)
N5: 2N165342743EFFA912P0884L0M1.JPG (sol 439)
N6: 2N165249671EFFA900P1625L0M1.JPG (sol 438)
Tman
Sorry, but I don't get a useful result either.

Out of frustration I took my beloved PTGui and created a pan from sol 325 with it. Beforehand I tried to adjust the three Navcam pics to each other with Photoshop (in brightness and vignetting).

Well Erwann, try this with POV:

Navcam Oppy sol325 JPGimage 600KB
dilo
Hi Tman, I didn't try it before because I was interested in making something new...
but now I made an attempt and, obviously, the results are good because the pointing info is accurate!
Consider that you can obtain even better results if you try to "equalize" the luminosity of each image by adjusting the "ambient" parameter in the IMAGES section (it should be possible to achieve even better uniformity by also changing the images' contrast or gamma...).
I also made a rough correction of the darkening near the image edges using an illumination effect (four light points at the image edges, using Paint Shop Pro...); probably Erwan is using a more sophisticated system, as mentioned in a previous post. Finally, I "painted" the sky in order to have constant luminosity (again, Erwan uses a more realistic shading effect...). The result is almost perfect, as you can see.
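A radial blending mask of the kind erwan mentions can also be generated programmatically instead of with light points; this is only a sketch of the idea (pure Python, hypothetical falloff parameter), not the actual masks used in the thread:

```python
import math

def radial_mask(h, w, falloff=0.5):
    """Grayscale weight mask: 1.0 at the image center, fading linearly
    with radius to (1 - falloff) at the corners. Multiplying an image by
    such a mask before blending downweights its vignetted edges."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    def weight(y, x):
        # normalized radius: 0 at the center, 1 at the corners
        r = math.hypot((y - cy) / cy, (x - cx) / cx) / math.sqrt(2.0)
        return 1.0 - falloff * r
    return [[weight(y, x) for x in range(w)] for y in range(h)]
```

In practice one would build such a mask at the image's full resolution and apply it in an image editor (or with an imaging library) before feeding the frames to POV-Ray.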

However, for me it is very hard (and sometimes impossible) to stitch other images...
Anyway, if you want to try the Sol 325 panorama, these are the start images:
1N157047764EFF40A3P0685L0M1
1N157047817EFF40A3P0685L0M1
1N157047868EFF40A3P0685L0M1
Good luck!
Marco.
Tman
Hi Marco, (due to your question in this thread) I've done it the same way as the pan above from Oppy sol 325.

BTW have you heard anything from Erwann - what is he doing? Great things with POV that will amaze us?
I would like to see POV used to stitch the same pans JPL has already done - Viking for example (or Voyager, which will surely still come).
dilo
QUOTE (Tman @ Apr 10 2005, 05:29 PM)
Hi Marco, (due to your question in this thread) I've done it the same way as the pan above from Oppy sol 325.

BTW have you heard anything from Erwann - what is he doing? Great things with POV that will amaze us?
I would like to see POV used to stitch the same pans JPL has already done - Viking for example (or Voyager, which will surely still come).


Hi Tman, so you used PTGui...
Unfortunately I didn't hear from Erwan; meanwhile I discovered some posts from him in the POV-Ray forum, where he describes the stitching technique:
http://news.povray.org/povray.binaries.ima....povray.org%3E/
I'm still hoping he will also return to this forum, because we need him!
(Meanwhile, yesterday I printed the Spirit panorama colorized by you, and the result is even more impressive than on the screen!)
Ciao!
Jeff7
'Tis an old thread, but you've got another padawan here. Erwan, thank you for that code.
I was just working on a two-image panorama of my dorm room, which didn't turn out too well, but good enough. Figuring out POV-Ray took a while, especially since I didn't realize I was looking at a programming language, complete with variable declarations and everything. I think I'm finally getting the hang of it... maybe. I never was too good at programming, though. Very nice program, especially for the amount of money I didn't have to pay for it. Maybe I'll have to put my PC to work on some huge panoramas - put its 1.25GB of RAM to good use.
That is, if I figure out how to use POV-Ray for more than 2 images.
dilo
Welcome to the Pov-Jedi academy, Jeff...
Unfortunately, Erwann left this site a long time ago and I wasn't able to find him! If you need some help, please ask... I can't wait to see your first works!
Marco
Jeff7
The next thing I need to figure out is how to get the pointing data from the rover images, but I haven't even started on that yet. When there's time... full-time college right now, and Calculus has a hit out on me. It nearly killed me in high school, and now it's back, just as mean. I almost understand limits now. Don't even talk about integrals. I think they involve sigma, but that's about all I remember.


Question on POV-Ray now - how do I rotate an image? I can shift it left, right, up, down, and adjust the horizontal stretch of the top or bottom, but I can't figure out how to simply rotate it a little bit. The "rotate" option doesn't seem to do that. All I can get it to do are the functions I just listed.


For instance, with rotate <1, 2, 3>:
1 seems to control up and down movement
2 seems to control left and right
3 - positive numbers stretch the top of the image horizontally, negative numbers stretch the bottom more

But they don't actually appear to change the literal rotation of the object.

There is also the translate option. What does that do? The help file merely lists command arguments for most commands, but it doesn't say what they actually do, at least not in any detail.


**Another update.
Found a nice POV-Ray tutorial. I get the rotate commands now, and how they adjust objects on all 3 axes. I am so not used to dealing with 3 dimensions in a computer, at least like this. Sure, I've played first-person shooters and the Homeworld games, but I've never had to manipulate objects using variables.
I'm also glad that Erwann posted that code. I probably wouldn't have known where to begin. I really do not like programming.

**Yet another update.
Using this image and this image, both from Spirit's Navcam, sol 600, I created a semi-adequate mosaic.

Took a lot of tweaking. I had to rotate the image on the right 30 degrees along the z axis to try to get some things to line up, and then the elevation of the image needed to be adjusted by slightly more than that to compensate. I imagine that having the pointing data would have made this a lot easier.
Even so, I couldn't get the close-up features to match. More z-rotation might have done it, but I've spent enough time on this for now. More than I should have, actually.




**Yet another update
3-image panorama! I did some serious messing around with the z-axis rotation on the two left images, though, so I have no idea whether that mountainous horizon is even close to accurate. I just adjusted until the rocks at the top and bottom of the images matched fairly closely.
dilo
Great steady progress, Jeff!
Your last stitch seems very good; I compared it to the MMB panorama and there are no major differences.

As you have already discovered, the rotate command allows you to turn any object around one or all of the 3 axes (depending on the command syntax). Obviously, you cannot rotate images unless they are projected as textures onto a 3D object (a plane or a box).
Keep in mind that the effect of rotations is not commutative, so the result depends on the order in which you apply the rotations around each axis.
To stretch images, you should use the "scale" command (with a syntax similar to rotate).
Let me know if you have other questions.
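The point about rotation order can be checked numerically. A minimal demonstration in Python (the rotation matrices here use the standard right-handed convention, which differs from POV-Ray's left-handed one, but the non-commutativity is the same):

```python
import math

def rot_x(deg):
    """Rotation matrix about the x axis (right-handed convention)."""
    a = math.radians(deg)
    return [[1.0, 0.0, 0.0],
            [0.0, math.cos(a), -math.sin(a)],
            [0.0, math.sin(a),  math.cos(a)]]

def rot_y(deg):
    """Rotation matrix about the y axis (right-handed convention)."""
    a = math.radians(deg)
    return [[ math.cos(a), 0.0, math.sin(a)],
            [0.0, 1.0, 0.0],
            [-math.sin(a), 0.0, math.cos(a)]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

v = [0.0, 0.0, -1.0]                              # a point straight ahead
x_then_y = apply(rot_y(60), apply(rot_x(30), v))  # rotate about x, then y
y_then_x = apply(rot_x(30), apply(rot_y(60), v))  # rotate about y, then x
# the two orders land the point in different places
```

This is why swapping the order of the elevation and azimuth rotations in the scene file changes where an image ends up.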
Jeff7
I've got the scale command down - it was helpful for matching up a few images.
I thought, too, that I could just use GIMP to rotate an image slightly, then re-render the scene in POV-Ray.
mhoward
QUOTE (dilo @ Sep 13 2005, 06:29 AM)
Great steady progress, Jeff!
Your last stitch seems very good; I compared it to the MMB panorama and there are no major differences.


Speaking of MMB, it has a feature where you can dump MER raw images onto a virtual panorama, select them and move them around, adjusting the azimuth and elevation of each image until they line up. It might help you to quickly determine rover-relative camera pointing info for the images where that info isn't available. I do this just about every day that new images come down, and I don't have a lot of time, so I've spent some effort on making the process as speedy as possible. There's partial (out of date) documentation here (scroll down to the bottom) if you're interested (I'll try to update this page later; you have to select "Pan Edit Mode" under File -> Advanced before you can do anything now).
Jeff7
Yeah, I'm kind of thinking I'll reserve POV-Ray for panoramas of rooms or landscapes, and leave the Mars panoramas either to MMB's wondrous workings, or to the others here who already know how to make them.

My trouble now - rotate again. Rotating along the x and z axes seems to do the same thing.
Example:

rotate <ELEV_IMAGE003, -AZIM_IMAGE003, 20>

seems to do very nearly the same thing as

rotate <ELEV_IMAGE003+20, -AZIM_IMAGE003, 0>


Both instructions merely move the image toward the top of the rendering area. I would expect the first command to rotate the image 20 degrees - just as the rotate command in a photo editor would. My understanding:
The X axis runs left to right.
The Y axis runs up and down.
The Z axis runs toward and away from the user facing the monitor.

Thus rotating along the z axis should simply make the image appear to rotate.
Right?

The tutorial I linked to earlier uses a cone as a demo object - a symmetrical object, so rotating it about its axis of symmetry nets no visible change in the demonstration.
dilo
Jeff, your axis identifications are absolutely correct.
Now, keep in mind that, in Erwan's code, the observer (camera) sits at the origin, where the axes cross. Then you translate the textured box in the -z direction before rotating it (below is the complete sequence for a single image):
CODE
box {
<-1, -1, -1>, <1, 1, 1>
scale <1024/2, 1024/2, 0.00001>/FOV
translate -1423.01*z
material {
image003
}
rotate <ELEV_IMAGE003, -AZIM_IMAGE003, 0.0>
}

The "translate" command is essential to see the image (otherwise the observer would lie inside the box!), but after it, any rotation around the other two axes will displace the image (box) in space, not just rotate it!
In other words, when (after the translation) you give this command:
rotate <ELEV_IMAGE003, -AZIM_IMAGE003, 20>
your box will first be rotated around the X and Y axes, which are now outside it, and, as you can imagine, this rotation displaces the box from its original position. In fact, as you observed, the ELEV_IMAGE003 angle determines a shift toward the top, and AZIM_IMAGE003 shifts it in the horizontal direction...
This is absolutely normal: those two angles are the elevation and azimuth of the image centers, referred to the observer. Erwan didn't use the "translate" command again to shift the images simply because we need the images always perpendicular to the line of sight, in order not to distort them with perspective effects.

You report that a rotation of <ELEV_IMAGE003, -AZIM_IMAGE003, 20> gives the same result as a rotation of <ELEV_IMAGE003+20, -AZIM_IMAGE003, 0>.
This is purely incidental: if you change the initial values of ELEV_IMAGE003 and AZIM_IMAGE003, you will see a different result (the coincidence disappears!).
So, again, remember that consecutive rotations aren't commutative and that a rotation after a translation introduces a spatial shift!
You should use the angles just as their names suggest: elevation controls the height above/below the horizon (vertical shift), and azimuth shifts the image around horizontally. The last angle (tilt) is the rotation around the line of sight, used to correct the horizon inclination inside the image; in fact, it is the real image rotation you are looking for! However, Erwan set it to 0 because, in general, images belonging to the same panorama are already aligned with each other (they have the same elevation angle in the rover coordinate system).
So it is easier to apply a single final correction to the orientation of the whole stitch instead of matching one image at a time... In fact, the two parameters "AZIMUT_CAMERA" and "ROLL_CAMERA" make this final correction! So I recommend you play with them and not touch the third (z) angle in the image orientations.
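The difference between the three angles can also be seen numerically. In this sketch (simplified right-handed axes, not exact POV-Ray conventions), a z rotation leaves the translated plate's center where it is and only spins the plate, while an x rotation displaces the center - the spatial shift just described:

```python
import math

RADIUS = 1423.01                      # the translate distance from the scene

def rot_z(deg, p):
    """Rotate a point about the z axis (the line of sight here)."""
    a = math.radians(deg)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a), z)

def rot_x(deg, p):
    """Rotate a point about the x axis (an elevation change here)."""
    a = math.radians(deg)
    x, y, z = p
    return (x, y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

center = (0.0, 0.0, -RADIUS)          # plate center after "translate -RADIUS*z"
corner = (512.0, 512.0, -RADIUS)      # one corner of a 1024x1024 plate

tilt_center = rot_z(20, center)       # tilt: center unmoved, image spins in place
tilt_corner = rot_z(20, corner)       # ...while the corners swing around it
lift_center = rot_x(20, center)       # elevation: whole plate displaced upward
```

The plate center lies on the z axis, so a z rotation cannot move it: only the pixels around it turn, which is the in-image rotation. Any x or y rotation moves the center itself.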

I hope I was clear enough without boring you! Ask if you have any other doubts.
Bye.
Invision Power Board © 2001-2024 Invision Power Services, Inc.