My process uses the same values and conversions for every image, so I could output close to true color if I wanted to. However, I tend to use a proportional stretching method when converting from the ISIS3 cubes to TIFFs, which in effect white-balances them (though not against any specific white value). I've done rather little to validate the 'true color' output, so I don't really use it.
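For readers unfamiliar with the technique: a per-channel stretch of this kind can be sketched in a few lines. This is only a minimal illustration of the general idea, not the poster's actual pipeline; the percentile cutoffs and the NumPy implementation are my assumptions. Because each channel is stretched independently, each channel's bright tail is pushed toward white, which is why the result looks white-balanced without referencing any particular white value.

```python
import numpy as np

def proportional_stretch(img, lo_pct=0.5, hi_pct=99.5):
    """Stretch each channel of a float HxWx3 image to [0, 1].

    Hypothetical sketch: each channel is rescaled between its own
    low/high percentiles, so the brightest parts of every channel map
    near white. Stretching channels independently is what produces the
    implicit white balance mentioned above.
    """
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [lo_pct, hi_pct])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo), 0.0, 1.0)
    return out
```

A fixed-coefficient calibration (multiplying R/G/B by the same constants for every image) would instead preserve relative color between images, at the cost of the per-image contrast this stretch provides.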
QUOTE (Bjorn Jonsson @ Sep 24 2020, 03:58 PM)
I've been getting color in the northern PJ29 images that I consider a bit suspicious. I'm therefore going to revisit some of the color tests/calibrations and among other things compare the PJ29 color to earlier images. So I'm very curious to know what you are disappointed with. Does your pipeline white-balance the images/color channels differently in different images or do you multiply the R/G/B values with the same fixed values for all of the images?