Full Version: Perseverance Imagery
htspace
QUOTE (Adam Hurcewicz @ Mar 16 2021, 08:39 PM) *
I use FitsWork to debayer images. It's great!

Yeah, I get it, it's great!
MarkL
QUOTE (phase4 @ Mar 14 2021, 09:17 PM) *
Not from the website directly. For the arm animation I made earlier I sorted the JSON entries on the spacecraft clock time which is given as "extended/sclk". (and can also be found as part of the image filename)


Do you have any insight into the MET portion of the filename? When did MET start (i.e. when was it zero)?

At the moment it's around 0669527872.

I would like to rename some of the raw images so they sort in the order they were taken, and where pairs were taken, keep those together. I think the frames of a pair would have been taken at about the same MET anyway, so sorting by MET should work to keep stereo pairs together.

MET seems to have been wrong on some images, or at least, the MET is the time the image was uploaded by the spacecraft rather than the time it was taken by the camera. Especially the EDL images. Or have they fixed that now?

Thanks!

fredk
The sclk (mission time) filename (and json) field appears to start on Jan 1st, 2000 - see here. And yeah, that field is still incorrect on at least some EDL camera images.
mcaplinger
QUOTE (fredk @ Mar 22 2021, 03:35 PM) *
The sclk (mission time) filename (and json) field appears to start on Jan 1st, 2000...

0 is 2000-001T11:58:55.816 ("epoch of J2000" in Coordinated Universal Time.)

This then drifts as a function of rover clock rate and is tracked by the NAIF file that Fred linked to, so you need to use the NAIF function scs2e or something like that to get a usable time. https://naif.jpl.nasa.gov/pub/naif/toolkit_...C/req/sclk.html
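For anyone who wants to try that conversion, here's a rough SpiceyPy sketch (assumptions: the Mars 2020 spacecraft NAIF ID of -168, and that you've downloaded the mission's leapseconds and SCLK kernels from the NAIF site above; the kernel filenames below are placeholders):
CODE
import spiceypy as spice

# Placeholder kernel names -- use whatever LSK and Mars 2020 SCLK kernels
# you fetched from the NAIF site linked above.
spice.furnsh("naif0012.tls")       # leapseconds kernel
spice.furnsh("m2020_sclk.tsc")     # Mars 2020 spacecraft clock kernel

M2020 = -168                       # NAIF ID for the Mars 2020 spacecraft (assumed)

# The 10-digit SCLK field from an image filename; prepend the partition
# ("1/") if scs2e complains about the bare string.
sclk = "0669527872"
et = spice.scs2e(M2020, sclk)      # SCLK string -> ephemeris time (TDB seconds past J2000)
print(spice.et2utc(et, "ISOC", 3)) # ephemeris time -> UTC string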
PaulH51
Hope this is the right thread for this question?

The SHERLOC imager is said to be a direct build-to-print version of the MAHLI imager on MSL. With MAHLI images the focus motor count is provided on the raw image server; using that count enables users to approximate the distance and scale of 'close-up' / 'in-focus' MAHLI images.

The SHERLOC images returned so far don't appear to have a motor count on the raw image server (that I can find).

Is the SHERLOC motor count provided on an image JSON feed somewhere? Any pointers would be appreciated, as I'd hope to check my use of the count using targets like the PIXL calibration target, whose exact size is known. Then I can use it for scaling the geological targets in the future. TIA smile.gif
mcaplinger
QUOTE (PaulH51 @ Mar 23 2021, 03:05 AM) *
Is the SHERLOC motor count provided on an image JSON feed somewhere?

Not yet, maybe someday. See post 89.
PaulH51
QUOTE (mcaplinger @ Mar 23 2021, 10:38 PM) *
Not yet, maybe someday. See post 89.

Many thanks
phase4
Hey all, you're welcome to try the Perseverance image viewer I've built for the Marslife website.

Click to view attachment
https://captainvideo.nl/marslife/index.html

Currently the website supports the left-eye images (no Bayer colour handling yet) for the Navcam, Mastcam-Z & Hazcam cameras
and should be in sync with the official NASA raw image releases.
A traverse path will be included as soon as the mission kernels become available.

A new viewmode has been added (toggle with the V key) to present panoramas as a circular (fisheye) projection.
Also new: the visibility of the placeholder overlay can now be toggled with the X key.
Red/orange outline colors stand for Hazcam images, green for Navcam, and blue for Mastcam-Z.

It's still in its early stages, so you can expect bugs and quirks. Have fun anyway!

Rob
PaulH51
QUOTE (phase4 @ Mar 27 2021, 01:17 AM) *
It's still in its early stages, so you can expect bugs and quirks. Have fun anyway!
Rob

Works very well for me on PC & mobile smile.gif
pbanholzer
I understand that Mastcam-Z has video capability at 4 fps for the full sensor area, but subsampled areas can be selected. What frame rate will its video of Ingenuity be - 30 fps?

Thanks.
mcaplinger
QUOTE (pbanholzer @ Mar 28 2021, 01:47 PM) *
I understand that Mastcam-Z has video capability at 4 fps for the full sensor area, but subsampled areas can be selected. What frame rate will its video of Ingenuity be - 30 fps?

From https://mars.nasa.gov/msl/spacecraft/instru...for-scientists/

QUOTE
Each camera is capable of acquiring images at very high frame rates compared to previous missions, including 720p high definition video (1280 × 720 pixels) at ~10 frames per second.

pbanholzer
Thanks again. So that I am clear, the video on Mastcam-Z is the same as on Curiosity's Mastcam: 720p at 10 fps?
mcaplinger
From https://link.springer.com/article/10.1007/s11214-020-00755-x

QUOTE
The Mastcam-Z focal plane array (FPA) and electronics are essentially build-to-print copies of the heritage MSL Mastcam FPA (Malin et al. 2017)... Details of the sensor, electronics, and timing signals for the FPA are identical to those described for MSL Mastcam and are thus only summarized here.

mcaplinger
BTW, just a reminder: I'm not authorized to say anything about the inner workings of these missions. So I have to do this complicated dance where I quote stuff from papers, press kits, and web sites that anybody can see. It seems like the best source of current information at the moment, such as it is, is from twitter https://twitter.com/NASAPersevere #MarsHelicopter #Mars2020
pbanholzer
Mike - Thanks yet again. I worked at Goddard for 25 years so I understand about restrictions on information sharing. And I appreciate your knowledge of the literature. Now that I see the article, I do remember reading about the heritage from MC to MC-Z.
Brian Swift
QUOTE (mcaplinger @ Mar 28 2021, 08:52 PM) *
BTW, just a reminder: I'm not authorized to say anything about the inner workings of these missions. So I have to do this complicated dance ...

Understood, but I’ll post my question list anyway…

1. Is Watson using same decompanding tables as MAHLI? (https://pds-imaging.jpl.nasa.gov/data/mahli/MSLMHL_0014/CALIB)

2. Was there a commanding difference between these two Watson images of MCZ calibration targets?
https://mars.nasa.gov/mars2020/multimedia/r...LC07009_0000LUJ
https://mars.nasa.gov/mars2020/multimedia/r...LC07009_0000LUJ
The images were taken only 13 sec apart, but the dark areas are a little different.

3. The AluWhite98 target seems to be a bit blue deficient relative to the other grey scale targets. Based on "Radiometric Calibration Targets for the Mastcam-Z Camera on the Mars 2020 Rover Mission”, I’d expect it to have a flat response. However the vendor reflectance chart for AluWhite98 shows some response drop off below 500nm, but I’m not sure it’s enough to account for the deficiency.

4. Is there a document for SHERLOC ACI comparable to “Mars 2020 Perseverance SHERLOC WATSON Camera Pre-delivery Characterization and Calibration Report”?
mcaplinger
1. Why would it be different?

2. No idea. I could look it up, but then I couldn't tell you. Did you look at the metadata?

3. I had nothing to do with the calibration target.

4. Not that I know of, we had no responsibility for ACI calibration.
fredk
QUOTE (Brian Swift @ Mar 30 2021, 06:31 AM) *
The images were taken only 13 sec apart, but the dark areas are a little different.

Different exposures? The differences are all over the images, not just the dark areas, as you can see by dividing or subtracting one frame from the other.
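(A quick sketch of that comparison in Python, assuming the two PNGs have been downloaded locally - the filenames are placeholders:)
CODE
import numpy as np
from PIL import Image

a = np.asarray(Image.open("frame_a.png").convert("L"), dtype=float)
b = np.asarray(Image.open("frame_b.png").convert("L"), dtype=float)

diff = a - b                        # subtraction shows additive differences
ratio = a / np.clip(b, 1, None)     # division shows multiplicative (exposure-like) differences
Image.fromarray(np.clip(diff + 128, 0, 255).astype("uint8")).save("diff.png")
Image.fromarray(np.clip(ratio * 128, 0, 255).astype("uint8")).save("ratio.png")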
mcaplinger
QUOTE (fredk @ Mar 30 2021, 09:08 AM) *
Different exposures?

This would hardly be remarkable, since most images are autoexposed. But the JSON metadata doesn't have any exposure time information that I can see.
Andreas Plesch
Mastcam-Z zoom

The zoom capability on the Mastcam-Z is quite remarkable. It took me a while to find what this recent image shows:



Here is the unzoomed version from the initial panorama:

Brian Swift
Thanks for the response Mike.
QUOTE (mcaplinger @ Mar 30 2021, 07:05 AM) *
1. Why would it be different?
IDK, maybe trying a new transfer function that doesn't decompand both 2 and 3 to 3. (Which obviously isn't a big deal)
QUOTE
2. No idea. I could look it up, but then I couldn't tell you. Did you look at the metadata?
As you noted, no exposure info in metadata.
QUOTE
3. I had nothing to do with the calibration target.
The question would be, was the calibration target imaged by Watson during testing, and if so did the AluWhite98 target show less blue (relative to green and red channels) than the other three greyscale targets?
QUOTE
4. Not that I know of, we had no responsibility for ACI calibration.
So far I haven't come across a detailed technical document on SHERLOC (the non-WATSON side). If anyone has a link to one, please reply.
fredk
QUOTE (mcaplinger @ Apr 5 2021, 07:00 AM) *

Thanks, somehow I missed the mention of IR filters in that paper. I had been thinking that maybe a lack of IR filter was causing the wonky engineering cam colours (though in hindsight that might've shifted colours to the red). I guess we're simply seeing the effect of raw colour, with the responses of the RGB filters quite different from those of the eye.
phase4
QUOTE (fredk @ Apr 5 2021, 04:25 PM) *
I had been thinking that maybe a lack of IR filter was causing the wonky engineering cam colours

I assumed that in addition to auto-exposure the new color cams would also use auto-white balancing. But that's an uneducated guess.
fredk
QUOTE (phase4 @ Apr 5 2021, 05:02 PM) *
I assumed that in addition to auto-exposure the new color cams would also use auto-white balancing.

Normally cameras don't apply white balancing to the raw (Bayered) images - that's done later, typically when the raw is converted to a jpeg. So white balancing can't be the cause of the yellowy engineering cam images.
MarkL
How does the MET counter relate to real time on Earth please? Is there a zero point?
Thanks
Mark
mcaplinger
QUOTE (MarkL @ Apr 6 2021, 04:45 PM) *
How does the MET counter relate to real time on Earth please? Is there a zero point?

Isn't this the same question that you asked and was answered in this thread back on post #102 and subsequent?
ChrisC
QUOTE (MarkL @ Apr 6 2021, 07:45 PM) *
How does the MET counter relate to real time on Earth please? Is there a zero point?

Emily L did an epic writeup recently about the photo metadata that probably answers that question:

https://www.patreon.com/posts/orientation-to-48263650

(UMSF needs an "oh snap!" button smile.gif )
MarkL
QUOTE (mcaplinger @ Mar 23 2021, 03:58 AM) *
0 is 2000-001T11:58:55.816 ("epoch of J2000" in Coordinated Universal Time.)

This then drifts as a function of rover clock rate and is tracked by the NAIF file that Fred linked to, so you need to use the NAIF function scs2e or something like that to get a usable time. https://naif.jpl.nasa.gov/pub/naif/toolkit_...C/req/sclk.html


Thanks mate. Sorry, I'd forgotten I already asked. Too lazy! Much appreciated.
MarkL
I did a little playing around with sorting the raw images by different parameters (particularly by MET, which is what interests me, so I can browse images in sequence) on my Mac to get what I wanted, and I will pass on a couple of Mac tips for accomplishing this. macOS makes this process quite straightforward.

1. Obtain a regular expression (RegEx) filename change utility. I used Transnomino which worked very well.
2. Create aliases (ie. shortcuts) of the image files you want to sort (Command-a, Make Alias) and put them in a separate directory.
3. In the alias directory highlight all the aliases and rename them via the utility. Transnomino helpfully adds a context menu option so just right click the selected aliases and Rename with Transnomino.
4. Use the RegEx to rename as follows:

Find:
(\w)(\w\w)_(\d\d\d\d)_(\d{10})_(\d\d\d)(\w\w\w)_(\w)

Replace:
$7_$1_$4\.$5_$1$2_$3_$6_$7

This uses 7 RegEx capture groups (each delimited by parentheses) to rearrange the filename of the alias. It doesn't touch the original filename since the alias refers directly to the original file.

Example:

NLE_0045_0670932186_787ECM_N0031416NCAM00299_14_0LLJ01.png alias

is renamed to:

N_N_0670932186.787_NLE_0045_ECM_N0031416NCAM00299_14_0LLJ01.png alias

The sort order is: image type (N/T), camera subsystem, MET. This allows you to sort by name and keep all the thumbs separate from all the normal images. It sorts by MET within subsystems (Nav, EDL, Forward Hazard, Rear Hazard, etc.).
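If you'd rather script the rename than use a GUI tool, here's a rough Python equivalent of the same regex (a sketch only checked against the example above; run it on copies of the files rather than the originals):
CODE
import re
from pathlib import Path

# Same seven capture groups as the Transnomino pattern above.
pattern = re.compile(r"(\w)(\w\w)_(\d{4})_(\d{10})_(\d{3})(\w{3})_(\w)")

for f in Path(".").glob("*.png"):
    m = pattern.match(f.name)
    if not m:
        continue
    g = m.groups()
    # Image type, camera subsystem, MET.fraction, then the rest of the original name.
    new_name = (f"{g[6]}_{g[0]}_{g[3]}.{g[4]}_{g[0]}{g[1]}_{g[2]}_{g[5]}_{g[6]}"
                + f.name[m.end():])
    f.rename(f.with_name(new_name))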

In the sol 45 and 46 images, there are a couple of cool "snake charmer" sequences of the arm and instrument head in motion in the thumbnails which I found easily after renaming the aliases as above. If you rename the way I suggest, the sequences are:

T_N_0670932316.774_NLF_0045_ECM_T0031416NCAM00298_01_600J01.png alias
T_N_0670932866.237_NLF_0045_ECM_T0031416NCAM00298_01_600J01.png alias

T_N_0671021417.992_NLF_0046_ECM_T0031416NCAM00298_01_600J01.png alias
T_N_0671024345.491_NLF_0046_ECM_T0031416NCAM00298_01_600J01.png alias

It is quite cool to watch the instrument head move around a fixed point in space. I suppose they are checking it out still.

Ryan Kinnett
QUOTE (djellison @ Feb 23 2021, 01:39 PM) *
For the less technically minded among us...Ryan Kinnett has put up a page that grabs a listing with links to the PNG files that you can then use any browser-plugin-batch-downloader with

https://twitter.com/rover_18/status/1364309922167488512

I tried Firefox with 'DownloadThemAll' and it worked perfectly.

Meanwhile THIS GUY has python code to also grab the data
https://twitter.com/kevinmgill/status/1364311336000258048


Hello UMSF!

I'm happy to learn my roverpics page is aiding some of the wonderful creations around here. That is precisely my intent with this and other things I'm working on. Please drop me a line if you have any ideas or suggestions to make that page more useful. One thing I hope to add in the near future is per-image local-level az/el, derived from metadata, to help us identify distant features in images. Longer term, I'm thinking about linking this to my rover3d page to render rover pose and camera view projection for any image.

Cheers
phase4
QUOTE (fredk @ Apr 6 2021, 06:50 PM) *
...white balancing can't be the cause of the yellowy engineering cam images.


So that can't be it, thank you for the explanation.
MarkL
QUOTE (Ryan Kinnett @ Apr 7 2021, 10:23 PM) *
Hello UMSF!

Hi Ryan. I think lots of us use it daily. It's a terrific way to get everything at once. I have not done much with the JSON data or the URLs yet.

One suggestion - filter new images since last visit - otherwise I end up downloading all of them when a few new ones have been added to a particular sol, rather than manually looking through them for the new ones. Perhaps also a list of which sols have new images available, or a log showing when new images have arrived by sol. Not sure of the best way to implement that.

Really appreciate your site.
If anyone has done some Python code to organize or selectively download, that would be great.

I'd also like to get people's impressions of the various stitching software out there, particularly for Mac.
Ryan Kinnett
QUOTE (MarkL @ Apr 7 2021, 06:54 PM) *
Hi Ryan. I think lots of us use it daily. It's a terrific way to get everything at once.

Thanks, I'm very pleased to hear that.

QUOTE (MarkL @ Apr 7 2021, 06:54 PM) *
One suggestion - filter new images since last visit - otherwise I end up downloading all of them when a few new ones have been added to a particular sol, rather than manually looking through them for the new ones. Perhaps also a list of which sols have new images available, or a log showing when new images have arrived by sol. Not sure of the best way to implement that.

That's a great idea, but unfortunately it's beyond my infrastructural capability. What you're describing would require tracking, either via cookies or a user management system, plus a completely different query system. The page is quite dumb. It's really just a graphical wrapper for the JSON interface, completely contained in a single html file with no libraries or back-end server support or anything complicated like that. You could save the index.html file to your desktop and run it from there, if you wanted to.

QUOTE (MarkL @ Apr 7 2021, 06:54 PM) *
If anyone has done some Python code to organize or selectively download, that would be great.

Have you tried Kevin Gill's m2020 raw image query python tool?
MahFL
A new advanced filter option appeared on the raw page, "Movie Frames"...
PaulH51
The black margins on the Mastcam-Z frames are a bit of a pain to remove manually. I do this to permit stitching in MS-ICE without leaving shadows at some of the seams.

Does anyone here know how I could use GIMP to process a batch at one session?

Grateful for any advice smile.gif
James Sorenson
QUOTE (PaulH51 @ Apr 14 2021, 04:21 PM) *
The black margins on the Mastcam-Z frames are a bit of a pain to remove manually. I do this to permit stitching in MS-ICE without leaving shadows at some of the seams.

Does anyone here know how I could use GIMP to process a batch at one session?


I don't know about GIMP because I don't use it, but you can batch do this easily with PIPP, in the Processing Options > Cropping section. You can experiment with the offset and crop width fields. To preview, just click on the Test Options button. smile.gif
Greenish
I just ran across this, and thought folks would find it very useful: full metadata in CSV format for M2020 pics, updated several times daily, in both incremental files and one master file.
https://www.kaggle.com/sakethramanujam/mars...0imagecatalogue
Heck, you can even do stats with it...
Click to view attachment

And regarding cropping a bunch, I can always recommend ImageJ/FIJI: 1. File > Import > Image Sequence (or Stack From List). 2. Select a rectangle. 3. Edit > Crop. Done. Optional 4: record as a macro for later use.
Ryan Kinnett
QUOTE (PaulH51 @ Apr 14 2021, 03:21 PM) *
The black margins on the Mastcam-Z frames are a bit of a pain to remove manually. I do this to permit stitching in MS-ICE without leaving shadows at some of the seams.

Does anyone here know how I could use GIMP to process a batch at one session?

Grateful for any advice smile.gif

Here ya go! Download this batch-crop GIMP script, and save it here:
CODE
%appdata%\GIMP\2.10\scripts

This script operates destructively on all images matching a filename pattern, saving the cropped images over the original image files.
To use this script, put all of the images you want to crop into a specific folder, then navigate in a Windows command prompt to that folder. Then call GIMP from there, like so:
CODE
"c:\Program Files\GIMP 2\bin\gimp-2.10.exe" -i -b "(batch-crop \"*.png\" 1604 1196 25 4)" -b "(gimp-quit 0)"

In this case it will crop all PNGs in the current working directory to 1604 x 1196 (offset 25 px from the left and 4 px down) and save over the original files.
I hope it's useful! I'm also looking into content-aware filling the black schmutz in the Watson images - at least the larger spot, hopefully also the smaller ones if I can figure out how to load a schmutz map reference image.
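(For reference, the same crop can be done without GIMP in a few lines of Python with Pillow - a rough sketch using the same numbers as the command above, writing copies instead of overwriting the originals:)
CODE
from pathlib import Path
from PIL import Image

LEFT, TOP, WIDTH, HEIGHT = 25, 4, 1604, 1196   # same offsets/size as the GIMP call above

for path in Path(".").glob("*.png"):
    with Image.open(path) as im:
        im.crop((LEFT, TOP, LEFT + WIDTH, TOP + HEIGHT)).save(
            path.with_name(path.stem + "_crop.png"))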


Unrelated, I added an interesting feature to roverpics. You can now hover over any thumbnail to see the metadata for that image. I also calculate local-level azimuth for each image and added that to the metadata. If you ever spot an interesting terrain feature and want to know which direction it is (relative to cardinal north) from the rover, this is a quick way to find out, just hover over the image and you'll find it toward the bottom.
Click to view attachment

Thanks to Thomas Appéré for reporting a couple of glitches earlier this week. The page should now return all full-frame and thumbnail still frames and movie frames.

Cheers
htspace
QUOTE (Ryan Kinnett @ Apr 17 2021, 03:53 PM) *
Here ya go! Download this batch-crop GIMP script, and save it here:
CODE
%appdata%\GIMP\2.10\scripts

...then call GIMP from there, like so:
CODE
"c:\Program Files\GIMP 2\bin\gimp-2.10.exe" -i -b "(batch-crop \"*.png\" 1604 1196 25 4)" -b "(gimp-quit 0)"
...

Thank you for sharing, the website is very good!

I don't know how to call GIMP, can you share a screenshot? Thank you!
djellison
QUOTE (PaulH51 @ Apr 14 2021, 03:21 PM) *
Does anyone here know how I could use GIMP to process a batch at one session?


Don't even need GIMP. Irfanview's batch conversion output format options have a crop section.

https://www.youtube.com/watch?v=Z_eEKD8AJz0
Ryan Kinnett
QUOTE (djellison @ Apr 17 2021, 08:04 PM) *
Don't even need GIMP. Irfanview's batch conversion output format options have a crop section.


I prefer ImageMagick and PIPP myself. I made the gimp script partly to get my feet wet with it before trying to leverage the Resynthesizer tool to remove the black smudge in Watson images.

In the same vein (using a cannon to kill a fly), I made a debayer action for Photoshop. It uses bilinear interpolation to demosaic any RGGB mosaic raw image, preserving its original dimensions. The method is based on this tutorial, which is pretty up-front about being more of an existence proof than a practical solution. Indeed, it's awfully inefficient, throwing away 75% of its own calculations, which is necessary to fit into the Photoshop framework. A single full-frame navcam raw image takes about 5 seconds to process with this method. It's clearly not practical for batch processing, but may be useful for single-image operations.
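For anyone who'd rather do the same thing outside Photoshop, here's a rough numpy/scipy sketch of bilinear RGGB demosaicing (it assumes red at pixel (0,0); treat it as illustrative rather than matched to any particular M2020 camera):
CODE
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb_bilinear(bayer):
    """Bilinear demosaic of an RGGB mosaic (red assumed at pixel (0,0))."""
    bayer = bayer.astype(float)
    h, w = bayer.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(float)
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green interpolation kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue interpolation kernel

    r = convolve(bayer * r_mask, k_rb)
    g = convolve(bayer * g_mask, k_g)
    b = convolve(bayer * b_mask, k_rb)
    return np.dstack([r, g, b])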
fredk
QUOTE (Ryan Kinnett @ Apr 21 2021, 03:08 AM) *
It's clearly not practical for batch processing, but may be useful for single-image operations.

See this thread and elsewhere in this forum for lots of suggestions for deBayering. It is so easy now that we have un-jpegged raws with M20!
jvandriel
James,

I use BIC ( Batch Image Cropper).
Google for it.
It works great and is very fast.

Jan van Driel
Brian Swift
QUOTE (PDP8E @ Mar 1 2021, 03:07 PM) *
The cameras used for rover lookdown and lookup are AMS CMV20000

here is the datasheet from AMS

Note - Per (Maki et al. 2020), engineering cameras use the CMV-20000 detector from AMS. The EDL (downlook and uplook) cameras are a mixture of On Semi P1300 and Sony IMX265 detectors. (See Table 4 on "Page 31 of 48")
fredk
QUOTE (Brian Swift @ May 15 2021, 12:19 AM) *
My take on processing the Descent Down-Look Camera raw images.

Left image colors are based on camera response and illumination optimized from calibration target on top of rover, and includes chromatic adaptation from modeled illuminant to D65 standard. In right image, only camera response is modeled and illumination is fixed at D65, so no chromatic adaptation is applied.

This looks impressive, Brian. I'm curious about the general approach. You need to get from the sensor raw colour space into some standard space, using an IDT or "forward" matrix, before finally sRGB or whatever. How do you find that matrix: using the published CFA response curves for the IMX265 sensor, or by fitting the matrix elements to the expected standard-colour-space values for the calibration target patches?

And are you assuming blackbody illumination, at a temperature that you fit for?
Brian Swift
QUOTE (fredk @ May 17 2021, 04:07 PM) *
This looks impressive, Brian. I'm curious about the general approach. You need to get from the sensor raw colour space into some standard space, using an IDT or "forward" matrix, before finally sRGB or whatever. How do you find that matrix: using the published CFA response curves for the IMX265 sensor, or by fitting the matrix elements to the expected standard-colour-space values for the calibration target patches?

And are you assuming blackbody illumination, at a temperature that you fit for?

Thanks Fred. I'm not using the CFA curves (yet). When I started working on this, I couldn't find the curves. Then when I was building up the references list, I discovered a graph of them in a different FLIR doc than what I had been using.

So, I'm fitting the matrix and blackbody temperature. The fit minimizes the RMS of CIE2000 color distance between raw RGB values transformed to XYZ color via the matrix and XYZ color values for calibration target patches derived from target reflectance measurements and a blackbody illuminant.

I don't consider the modeling of the illuminant as a blackbody to be well justified. The calibration target isn't directly illuminated. I did some experiments with more flexible illuminant models, but they produced some extreme results that I suspect were due to over-fitting the limited number of calibration patches. (I'm not using the white and light grey patches because they are over-exposed (clipping) in the raw data.)

The above description applies to the left image in the video, the right image just fits the matrix assuming calibration patches are illuminated by D65.

I've uploaded a PDF of the notebook so anyone interested can view it without a Mathematica viewer. https://github.com/BrianSwift/image-processing
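For anyone who wants to play with the same idea, here's a heavily simplified Python sketch of the matrix fit. It skips the blackbody/illuminant modelling entirely: raw_rgb and ref_xyz are placeholders for the measured patch values and the reference XYZ values (however you derive those), and the CIEDE2000 metric comes from scikit-image rather than Mathematica:
CODE
import numpy as np
from scipy.optimize import least_squares
from skimage.color import xyz2lab, deltaE_ciede2000

# Placeholder data -- replace with measured patch RGBs and reference XYZ values.
raw_rgb = np.random.rand(6, 3)
ref_xyz = np.random.rand(6, 3)

def residuals(m_flat):
    M = m_flat.reshape(3, 3)
    est_xyz = np.clip(raw_rgb @ M.T, 1e-6, None)   # clip to keep the Lab conversion sane
    return deltaE_ciede2000(xyz2lab(est_xyz), xyz2lab(ref_xyz))

fit = least_squares(residuals, x0=np.eye(3).ravel())   # minimizes sum of deltaE^2, i.e. RMS deltaE
M_best = fit.x.reshape(3, 3)
print(M_best)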
fredk
Thanks for the details, Brian, and for the pdf.
QUOTE (Brian Swift @ May 18 2021, 07:48 AM) *
So, I'm fitting the matrix and blackbody temperature. The fit minimizes the RMS of CIE2000 color distance between raw RGB values transformed to XYZ color via the matrix and XYZ color values for calibration target patches derived from target reflectance measurements and a blackbody illuminant.

I don't consider the modeling of the illuminant as a blackbody to be well justified. The calibration target isn't directly illuminated. I did some experiments with more flexible illuminant models, but they produced some extreme results that I suspect were due to over-fitting the limited number of calibration patches.

I have to wonder about a potential degeneracy between blackbody temperature and the matrix parameters. Ie, could you get a similarly good fit by shifting the temperature and compensating with different matrix elements, ie different CFA responses? Does your best-fit temperature sound reasonable? I guess that ambiguity would disappear if you could use the CFA curves to calculate the matrix directly.

Also, the sundial target emerges into the sun a bit later in the sequence - could it be better to use those frames with the D65 model?

I also worry about the small number of patches, so you're only sparsely sampling the gamut and may be quite a bit off on the colour of the ground, eg. Still, they look pretty good by eye. The main difference between your two models is with blue.

I've been wondering about something like this with nav/hazcam, but the stretching of the public images might make that hard (at least to the extent that the blackpoints are shifted).
Andreas Plesch
In an attempt to try to understand the camera model parameters in the metadata json, and therefore then the CAHVORE camera model, I ended up putting together a camera model analyzer:

https://observablehq.com/@andreasplesch/the...re-camera-model (updated for better table formatting)

It is based on

https://agupubs.onlinelibrary.wiley.com/doi...29/2003JE002199

and I think equivalent to the CAHVORE to photogrammetric conversion of https://github.com/bvnayak/CAHVOR_camera_model, which is in Python.

I tested the analyzer with the test data of the Python code repo and get the same results, which is a good sign.
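For reference, the linear (CAHV) part of that conversion is small; here's a rough Python sketch of it (the O, R and fisheye E terms are not handled):
CODE
import numpy as np

def cahv_intrinsics(A, H, V):
    """Recover principal point and scales from the A, H, V vectors (CAHV only)."""
    A, H, V = map(np.asarray, (A, H, V))
    hc = float(np.dot(H, A))                     # principal point column (pixels)
    vc = float(np.dot(V, A))                     # principal point row (pixels)
    hs = float(np.linalg.norm(np.cross(A, H)))   # horizontal scale ~ focal length in pixels
    vs = float(np.linalg.norm(np.cross(A, V)))   # vertical scale ~ focal length in pixels
    return hc, vc, hs, vs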

My goal is to unproject the fisheye projection of the Ingenuity cameras, and https://www-mipl.jpl.nasa.gov/vicar/dev/htm...p/marscahv.html would seem to be the right tool, but with a steep learning curve. In any case, I have some understanding now that the "E" parameter is for fisheye distortion, adjusting the radial distance with a third-degree polynomial.

Looking at the JSON metadata, I noticed that after the 7 triples for the CAHVORE parameters, there are additional numbers like 2;0 or 3;0. Do we know what these extra parameters are for?

Any further hints or feedback much appreciated.
Greenish
Andreas, this is really helpful - I have been poking at the same sources (when I have other real work to do...) and seem to be on the same track, if a bit behind - though I have been slogging in Octave and Excel, not making slick live calculation pages! Anyway, none of your results contradict what I've seen so far, including the RTE's ~2 mm uncorrected focal length (with a 5.9 mm sensor diagonal, corresponding to something like a 14 mm full-frame equivalent focal length, and an FOV over 90 deg). By the way, where did you find the JSON for the heli images? I've only been able to pull the Perseverance ones from the RSS feed.

QUOTE (Andreas Plesch @ Jun 1 2021, 06:21 AM) *
Looking the json metadata, I noticed that after the 7 triples for the CAHVORE parameters, there are additional numbers like 2;0 or 3;0 . Do we know what these extra parameters are for ?

I don't know that they're needed for "normal" CAHV[ORE] processing. In the MSL CAMERA SIS, they specify "MTYPE" and "MPARMS" related to the geometric camera model. I had thought it referred to 1=CAHV / 2=CAHVOR / 3=CAHVORE, but now wonder if perhaps it refers to pointing models and associated parameter(s) used in post-processing ... they seem to align with Linearization Modes 1-3 listed in the VICAR MARSCAHV help file you list above, which is mentioned in the SIS.

Edit: I was half-right; at the end of the file here (linked from bvnayak's code you point to above), it explains the last 2 parameters. The MPARM value is called L in several sources.

* T: CAHVORE Type (MTYPE): MODEL_COMPONENT_8
To distinguish the various CAHVORE model types (e.g., CAHVORE-1,
CAHVORE-2, CAHVORE-3), a parameter "T" is specified. Its value
may be integers 1, 2, or 3 to coincide with CAHVORE-1,
CAHVORE-2, or CAHVORE-3, respectively.

* P: CAHVORE Parameter (MPARM): MODEL_COMPONENT_9
"P" is an additional parameter to CAHVORE that specifies the
Linearity of the camera model for the CAHVORE-3 case. It is an
arbitrary floating-point number. A value of 0.0 results in a
model equivalent to CAHVORE-2, while a value of 1.0 results in a
model equivalent to CAHVORE-1.
Andreas Plesch
Thanks. Another mystery solved. I updated the analyzer to report T and P with a description. I also added theta, the angle between the horizontal and vertical axes, which can be off from 90 degrees according to the PDS geometry_cm document.

Unfortunately, the id query parameter that is provided as the json_link in the raw images JSON does not seem to work for the HELI images.

But one can just use the network tab in the Chrome developer tools while filtering for the heli images and look for the JSON request, which looks something like:

CODE
https://mars.nasa.gov/rss/api/?feed=raw_images&category=mars2020,ingenuity&feedtype=json&ver=1.2&num=100&page=0&&order=sol+desc&&search=|HELI_RTE&&&condition_2=91:sol:gte&


There are so many great online tools. I like Observable notebooks but of course Jupyter notebooks are really nice as well. I think there is an Octave kernel for Jupyter. Python can be pretty friendly.
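For instance, a rough sketch of pulling that same feed with Python's requests library (the JSON field names used at the end are from memory and may have changed, so check the actual response):
CODE
import requests

URL = "https://mars.nasa.gov/rss/api/"
params = {
    "feed": "raw_images",
    "category": "mars2020,ingenuity",
    "feedtype": "json",
    "ver": "1.2",
    "num": 100,
    "page": 0,
    "order": "sol desc",
    "search": "|HELI_RTE",
    "condition_2": "91:sol:gte",
}

data = requests.get(URL, params=params, timeout=30).json()
for img in data.get("images", []):
    # "imageid" / "image_files" are the field names as seen in the feed at the
    # time of writing -- inspect the JSON yourself if they've changed.
    print(img.get("imageid"), img.get("image_files", {}).get("full_res"))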

I hope to try some image processing next. Pixel remapping is quite possible with JS, especially when using the GPU with WebGL (which I hope to avoid at first). I'll see what happens when one recalculates r by subtracting the distortion delta given by the R polynomial, or something like that. Not sure how to use E to unproject the full fisheye.
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2024 Invision Power Services, Inc.