Full Version: Photoshop, Wavelength, Channels, Rgb...
Unmanned Spaceflight.com > Mars & Missions > Past and Future > MER > Tech, General and Imagery
djellison
Anyone got any ideas on how to better represent each image in an RGB sequence so as to reflect the wavelength at which it was taken? i.e. L2 isn't classic R (255,0,0), L5 isn't G (0,255,0) and L7 isn't B (0,0,255) - so to truly use the images we get in the most appropriate way, we need to use different channels - or perhaps a combination of layers multiplied by something, etc.

http://www.mxer.com/sandbox/wavelengthtoRG...lengthToRGB.exe rather excellently converts from wavelength to RGB value as a starter for ten.
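That sort of converter is straightforward to reimplement. Below is a minimal sketch - not the linked program's actual code, and the piecewise breakpoints follow Dan Bruton's commonly cited visible-spectrum approximation - mapping a wavelength in nm to an 8-bit RGB triple:

```python
def wavelength_to_rgb(nm, gamma=0.8):
    """Approximate a visible wavelength (380-780 nm) as an RGB triple.
    Piecewise-linear approximation; outside the visible range returns black."""
    if 380 <= nm < 440:
        r, g, b = -(nm - 440) / 60.0, 0.0, 1.0
    elif 440 <= nm < 490:
        r, g, b = 0.0, (nm - 440) / 50.0, 1.0
    elif 490 <= nm < 510:
        r, g, b = 0.0, 1.0, -(nm - 510) / 20.0
    elif 510 <= nm < 580:
        r, g, b = (nm - 510) / 70.0, 1.0, 0.0
    elif 580 <= nm < 645:
        r, g, b = 1.0, -(nm - 645) / 65.0, 0.0
    elif 645 <= nm <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        return (0, 0, 0)
    # Fade intensity toward the ends of the visible range
    if nm < 420:
        f = 0.3 + 0.7 * (nm - 380) / 40.0
    elif nm > 700:
        f = 0.3 + 0.7 * (780 - nm) / 80.0
    else:
        f = 1.0
    return tuple(int(round(255 * (f * c) ** gamma)) for c in (r, g, b))
```

Feeding in a filter centre near 753 nm (about where L2 sits) comes out as a dim red, which already hints at why L2 is a poor stand-in for the R channel.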

Any ideas?

Doug
M_Welander
I'm not entirely sure this is what you're after, but the way I do it is to use as many images as possible to build a spectral graph at each pixel in the image. The representation of the multi-channel sequence is then that spectral graph. Once you have it, it's not too much trouble to integrate it against the sensitivity curves for the output colors you're after - usually RGB.
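That approach can be sketched in numpy. Everything instrument-specific here is a placeholder - the filter wavelengths and the Gaussian "sensitivity curves" are stand-ins, not real Pancam or CIE data - but the structure (interpolate a per-pixel spectrum, then integrate it against each output curve) is the same:

```python
import numpy as np

def stack_to_rgb(stack, filter_nm, grid=np.arange(400, 701, 5)):
    """stack: (nfilters, H, W) calibrated images, one per filter, sorted by
    ascending centre wavelength in filter_nm. Linearly interpolates a
    per-pixel spectrum onto `grid`, then integrates it against crude
    Gaussian stand-ins for RGB sensitivity curves."""
    nf, h, w = stack.shape
    # Interpolation weights from the filter samples to the grid points
    A = np.empty((grid.size, nf))
    for j in range(nf):
        unit = np.zeros(nf)
        unit[j] = 1.0
        A[:, j] = np.interp(grid, filter_nm, unit)
    spectra = np.einsum('gf,fhw->ghw', A, stack)      # (grid, H, W)
    # Gaussian (centre nm, width nm) stand-ins - NOT real sensitivity data
    sens = {'R': (600, 40), 'G': (550, 40), 'B': (450, 40)}
    rgb = np.empty((h, w, 3))
    for k, (mu, sig) in enumerate(sens.values()):
        resp = np.exp(-0.5 * ((grid - mu) / sig) ** 2)
        rgb[..., k] = np.einsum('g,ghw->hw', resp, spectra) / resp.sum()
    return rgb
```

With real filter responsivities and CIE colour-matching functions in place of the Gaussians, this is essentially the CIE method slinted describes below.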
CosmicRocker
I probably shouldn't even jump in here, because you people probably know way more than I do about this color stuff, but what the heck...

I'm not sure I know what you are trying to do, but if you are trying to make "true color" images, you might check with slinted. He uses that CIE method to combine seven different filters to make a composite image. That seems to work well when one has the information to calibrate each filtered image, but apparently that information has only been released for the first 90 sols, so far.

As slinted has explained it to me, the raw images often have different exposure times, not to mention that the different filters have different quantum efficiencies and would not be equivalent, even if the exposure times were equal.

I originally tried to make simple RGB composites in Photoshop, as well as some reflectance spectra of rocks in an attempt to identify mineralogy. It soon became apparent that the raw data I was using was not calibrated. I tried to force some empirical corrections on the data, but eventually gave up.
djellison
Working off a spectrum is fine if you have L2,3,4,5,6,7 - but when you just have L2,5,7 or L4,5,6, your best bet is always going to be just merging them as channels in Photoshop. The challenge is how to best represent the actual wavelengths of the filters for each of those images - especially L2 and L7. I've got the RAD files properly converted using the nuances of each filter now - I'm getting images very similar to NASA's, just slightly different in colour if I use L2 or L7 instead of L4 and L6 respectively. L4,5,6 comes out PERFECTLY.

[Image: L4,5,6 composite, two frames]

- one of my favorite images of the whole mission - there's two frames there - L4,5,6

However

[Image: L2,5,6 composite]

- this is using L2,5,6 - see the slight hue to the image compared to the NASA PAO release; that's almost certainly due to the difference between L2 and L4. I guess I could just sort out a levels adjustment that compensates by that much - and ditto for L2,5,7 as well - but I'd rather represent each filter more accurately.

Doug
slinted
QUOTE (djellison @ Dec 12 2004, 09:37 PM)
...perhaps a combination of layers multiplied by something etc etc.

I've been working on that as well, trying to expand my color work from the 6-filter case to the 3-filter case... but with limited success. As it stands right now, I've got numbers based on converting an L2 L5 L7 spectrum (linearly interpolated between L2 -> L5 and L5 -> L7) into CIE XYZ, and CIE XYZ to RGB.

These numbers are based on no whitepoint conversion:
Red   = 1.394 * L5 - 0.505 * L7 + 0.314 * L2
Green = 1.026 * L5 - 0.079 * L7 + 0.005 * L2
Blue  = 0.037 * L5 + 0.932 * L7 - 0.010 * L2
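Those three lines drop straight into numpy as a per-pixel channel mix. A hedged sketch, assuming the inputs are calibrated filter images on a common 0-1 scale (which, as noted above, is the hard part):

```python
import numpy as np

def mix_l257(L2, L5, L7):
    """Apply the no-whitepoint channel mix above to three calibrated
    filter images (same shape, scaled 0..1). Out-of-gamut values are
    clipped back into range."""
    R = 1.394 * L5 - 0.505 * L7 + 0.314 * L2
    G = 1.026 * L5 - 0.079 * L7 + 0.005 * L2
    B = 0.037 * L5 + 0.932 * L7 - 0.010 * L2
    return np.clip(np.stack([R, G, B], axis=-1), 0.0, 1.0)
```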


I've done some work to figure out the proper whitepoint by using the sundial images, though that falls apart quickly due to dust accumulation (I'd LOVE to know what whitepoint the JPL images are using)

Here are those same figures after a Bradford whitepoint conversion to D65, based on a whitepoint of X = 97, Y = 100, Z = 75, which is my best calculation/guess as to the whitepoint at Spirit's site.

Red = 0.938 * L5 - 0.530 * L7 + 0.250 * L2
Green = 0.972 * L5 - 0.117 * L7 + 0.009 * L2
Blue = 0.123 * L5 + 1.21 * L7 + 0.003 * L2

And the figures after a Bradford whitepoint conversion to D65, based on a whitepoint of X = 97, Y = 100, Z = 85, which is my best calculation/guess as to the whitepoint at Opportunity's site.

Red = 1.043 * L5 - 0.515 * L7 + 0.266 * L2
Green = 1.002 * L5 - 0.104 * L7 + 0.009 * L2
Blue = 0.067 * L5 + 1.108 * L7 - 0.001 * L2
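For anyone wanting to try a different whitepoint, the Bradford adaptation itself is standard linear algebra. A sketch - the 3x3 matrix holds the standard Bradford coefficients, and the default destination is the usual D65 white (X = 95.047, Y = 100, Z = 108.883):

```python
import numpy as np

# Standard Bradford cone-response matrix
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(src_white, dst_white=(95.047, 100.0, 108.883)):
    """Return the 3x3 matrix that adapts XYZ colors from src_white to
    dst_white (default D65) via the Bradford chromatic adaptation
    transform: scale cone responses by the ratio of the two whites."""
    rho_s = M_BFD @ np.asarray(src_white, float)
    rho_d = M_BFD @ np.asarray(dst_white, float)
    return np.linalg.inv(M_BFD) @ np.diag(rho_d / rho_s) @ M_BFD

# e.g. adapting the estimated Spirit whitepoint above to D65:
M_spirit = bradford_adapt([97.0, 100.0, 75.0])
```

By construction the returned matrix maps the source white exactly onto the destination white, which is an easy sanity check.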

Now for the "and here's why they're wrong" part:

As you said, L2 is a HORRIBLE approximation of red, and barely even contributes to the red channel even with the interpolation. Here's why:

I found/made these up for a recent discussion over on another message board, but they illustrate the point well:

[Image: Pancam filter bandpasses plotted against the human red/green/blue sensitivity curves]

As you can see, L2 isn't even on there (its center is well beyond the red peak of human vision, with light coming through L2 contributing only about 1% of the red that we perceive), although L3, L4, L5 and L6 all fall under the 'red' curve, which is why L5 contributes so much to the red part of the equations above.

Also, linear interpolation is a very bad way to estimate the spectrum in between the filters we have for most of the 3-frame color. As is polynomial... and splines. Basically, any straight mathematical interpolation does a very bad job of matching the real data, since L2 is such a bad approximation. My current project is to try to get a statistical approximation, using the 6-frame color to fit a curve to each individual filter, so there is at least a best guess as to the values in between two filters based on real ground truth. I'll let you know if that works.

And lastly, as I said...in terms of matching to the JPL color, it would be fundamentally easier if we could know what whitepoint is being used for the conversion. They might very well (and probably should) be using a different whitepoint at different times throughout the mission, since whitepoint is based on the illuminant, and as your graphs of the Tau values show, the brightness of the sun/sky varied greatly even over the first 90 sols. My best guess at this point is that the whitepoint at the beginning of the mission was much more 'red' than later on, since the sun brightness increased relative to the dust-laden sky brightness over those first 90 sols. This would mean that the surface illuminant was less dominated by the red sky and more dominated by the direct sunlight as the mission progressed.

I hope you are able to get use out of these. If you wanted to use a whitepoint other than the ones I picked, let me know, and I'll calculate the channel math based on it.
djellison
I look at all of that - and it makes a lot of sense - BUT - I'm just an artist.

I'm sure there must be a way, using Photoshop, to have a template file with layers that replicates this - there must be. I think I might email Tim Parker.

Doug