HST Albedo Map Processing
ZLD
I've almost made this post several times today, but I kept having to go back and redo my work because I really just couldn't believe it.

Going into this, I was absolutely sure I wouldn't get anything from it. It just seemed so incredibly unlikely to work.

Here are the original HST 2002/2003 maps of Pluto to refresh your memory.


Below are the 2002/2003 combined observations of Pluto, run through my experimental image processing to bring out albedo variations.



Here is an animation fading between the above and scalbers' latest high-resolution map (downscaled, of course). (13 MB GIF)

(ctrl + scroll wheel to zoom - maximum possible encouraged)

Finally, here's the single frame from the fade.



I've also taken this process further with a different map, and it appears to keep bringing out small increments of detail with each pass; to what limit, I have no idea. One especially important note about all of this: it's somewhat like finding Waldo without knowing what Waldo looks like. Without knowing what to look for, it would have been increasingly difficult to know how to set the parameters in each iteration to avoid corrupting the details.

----------
Edit
----------
I should also note, just in case it wasn't clear, that the HST combined map was directly processed. Zero data from NH was involved in pulling out the details. The map from scalbers is just for comparison.
Gladstoner
QUOTE (ZLD @ Jul 26 2015, 08:16 PM) *
Here is an animation fading between the above and scalbers latest high resolution map (downscaled of course). (13MB gif)

It may be just an impression, but the brightest area on the Hubble map (and in your image processing) seems to match the CO-ice distribution measured by New Horizons:

Click to view attachment
ZLD
It looks pretty close to me as well.

I started working on it because I couldn't quite get the HST maps to line up with much on the NH maps, so I took a shot in the dark, hoping something would come out that would give me a better alignment.
nprev
Very cool work.

It'd be interesting to see this tried with the Hubble Ceres imagery in order to validate & further refine your technique for other applications, but the contrast levels there aren't nearly as high as those on Pluto so it might not work nearly as well.
ZLD
It definitely works best with high-contrast terrain. I have already experimented a little with Ceres, with less appealing results. I'll give it another go some time, though. I'm terrible at planet mapping, but if someone wanted to use the HST hemispheric views to make a quick map, it would definitely make comparison a lot easier.
fredk
QUOTE (ZLD @ Jul 27 2015, 02:16 AM) *
I really just couldn't believe it.

What can't you believe? There's a very crude large scale agreement between the Hubble imagery and NH, as we already know. Hubble isn't capable of resolving small-scale structure, of course.
ZLD
There is much more agreement between the images I just posted than just large-scale structures. Pick something from the processed HST image and watch it resolve on the NH map; there are a very large number of these. They weren't originally resolved, and technically shouldn't be resolvable, but they are there.

A good one to notice is the crater or arc (difficult to tell) near the center-right on Cthulhu.

Another is a curly feature on the northern rim of Tombaugh Regio. That one is really well defined.
nprev
That probably points to the way your process is tailored more than anything else, though. As Fred pointed out, the information just isn't present in the HST images because of the resolution. By definition, you can't get sub-pixel information from a pixel since it's a uniform unit (actually a single DN, if my weak grasp of the appropriate terminology is correct).

ZLD
To clarify again, all processing was performed on the HST image prior to alignment with the NH map. That is why I provided both the uncorrected and corrected views of the map for comparison. No retouching of any form is used in this. It's a mixture of lots and lots of different layering techniques and deconvolution. And again, full disclosure: I have no idea why it appears to work at all, considering the limitations of all the systems involved.
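For anyone curious about the deconvolution ingredient, a toy sketch of just that one step might look something like the code below (assuming scikit-image and a guessed Gaussian PSF); it is not the full mix of layering techniques.

CODE
# Toy sketch of the deconvolution step only -- not the full process.
# Assumes scikit-image; the Gaussian PSF width is a guess.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def deconvolve_map(path, psf_sigma=2.0, iterations=30):
    img = img_as_float(io.imread(path, as_gray=True))

    # Build a small Gaussian PSF; the true HST PSF is not modelled here.
    size = int(6 * psf_sigma) | 1            # odd kernel width
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * psf_sigma**2))
    psf /= psf.sum()

    # Richardson-Lucy sharpens whatever structure is consistent with the PSF.
    return richardson_lucy(img, psf, iterations)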
Phil Stooke
OK... ZLD, let's be absolutely clear about what you are seeing. Here is an image I just made: it takes a bit of text (left), saves it as a low-quality JPG (middle; it still looks good), and then cranks up the contrast to a ridiculous degree (right). It's now full of artifacts. That's what JPG does; that's how its algorithm works. You can't avoid it.

Click to view attachment
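The effect is easy to reproduce; something along these lines will do it, assuming Pillow and numpy, with any smooth test image standing in for the text sample.

CODE
# Reproduce the demonstration: compress hard, then stretch the contrast hard.
# Assumes Pillow and numpy; "sample.png" is a placeholder test image.
import numpy as np
from PIL import Image

Image.open("sample.png").convert("L").save("lowq.jpg", quality=15)

arr = np.asarray(Image.open("lowq.jpg"), dtype=float)

# An extreme stretch around the midtone makes the 8x8 JPEG block structure
# pop out as spurious "detail".
stretched = np.clip((arr - arr.mean()) * 10 + 128, 0, 255).astype(np.uint8)
Image.fromarray(stretched).save("stretched.png")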

Your processing is very good at bringing out details that are really present in a low-contrast but decently resolved image. But apply it to an image which really does not have any fine detail (like the HST images, which are greatly enlarged to make the map, from maybe 8 pixels across the disk to a hundred or more across the height of the map)... and what you are 'resolving' is this JPG fuzz. There is no way any of what you are seeing in your latest image is real. It is all JPG fuzz.

So keep on working with reasonably resolved images, but don't use enormously enlarged images. You are creating your own spurious detail.

Phil

ZLD
Here are several highlighted features that, in my opinion, are not similar to your example and are also remarkably close to currently known features.

Phil Stooke
Yes... but in the same image there are 100 other points that are completely different. If you lay one complex pattern over another, there will always be a few points that appear to correspond. The biggest, clearest features in the HST image should match LORRI much better than a few faint things do, but they don't.

I'm not saying your work is useless, I'm saying this particular application of it is mistaken.

Phil
wildespace
QUOTE (ZLD @ Jul 27 2015, 06:07 AM) *
I have no idea why it appears to work at all, all limitations of all involved systems considered.

Even considering what Phil said, some artificially created features may match the real ones due to the fractal nature of the universe, i.e. the self-similarity evident at different scales. It's like taking a low-res image of a coastline from space, using these algorithms to generate smaller detail, and then finding that many of those artificial details are similar to the real coastline details in a high-res image.

Just my 2 cents on this, after watching the excellent and eye-opening BBC documentary "The Secret Life of Chaos".
4throck
Left: generated noise
Right: wild processing of the generated noise

Click to view attachment

If you want to find it, a pattern always emerges. In this case, I tried to bring out "linear features" and, behold, they appear. I can make out a diagonal line across the top-left half!
If I started with a more complex image (with larger-scale shadings, for example), the resulting features would also be more elaborate.
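If you want to try this yourself, something along the lines of the sketch below will do it (assuming numpy and scipy; my exact processing was different, this is just the idea).

CODE
# Start from pure noise, bias it toward lines, stretch it hard -- and
# "linear features" appear that were never in the data.
# Assumes numpy and scipy; not the exact processing used above.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(0)
noise = rng.random((128, 128))

# A long diagonal kernel pushes the output toward line-like structure.
kernel = np.eye(9) / 9.0
lines = convolve(gaussian_filter(noise, 1.0), kernel, mode="reflect")

# An aggressive percentile stretch makes accidental alignments stand out.
lo, hi = np.percentile(lines, [5, 95])
result = np.clip((lines - lo) / (hi - lo), 0, 1)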

JPG compression artifacts, aliasing effects from scaling, or even histogram manipulation will all generate noise.
Just as Phil demonstrated.

The purpose of image processing is to remove or hide the noise, not to amplify it!


To compare NH to HST, the first step is to reduce the NH data to the same resolution as HST, and not the other way around.

You get this (NH at bottom):
Click to view attachment

Reasonable match to the top HST map, given that the wavelengths and phase angles are different.
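That first step is just a resize; a minimal sketch, assuming Pillow, with placeholder file names:

CODE
# Reduce the NH map to the HST map's pixel grid before comparing -- never the reverse.
# Assumes Pillow; file names are placeholders.
from PIL import Image

nh = Image.open("nh_map.png")      # hypothetical New Horizons map
hst = Image.open("hst_map.png")    # hypothetical HST map

nh_lowres = nh.resize(hst.size, resample=Image.LANCZOS)
nh_lowres.save("nh_at_hst_resolution.png")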
JRehling
It seems like test data is not scarce: You could print out any known image, take a photograph of it from far away, and run your process on that photo, then see if you recover detail at a higher resolution than the photograph of the printout. (Of course, downsampling a digital image would seem to accomplish the same goal.)

If you want to make a double-blind study of it, someone else could provide the photos, chosen so that you would have no possible way to obtain the originals in any other way (e.g., through Google Image Search).

Roughly speaking, use the same methodology as studies of putative E.S.P.

That should give you a much broader supply of test data than solar system imagery alone.
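In practice the test could be as simple as the sketch below, assuming scikit-image; enhance() is only a placeholder for whatever process is being evaluated.

CODE
# Degrade a known image, run the method under test, score against ground truth.
# Assumes scikit-image; enhance() is a stand-in for the process being evaluated.
import numpy as np
from skimage import io, img_as_float
from skimage.transform import resize
from skimage.metrics import structural_similarity

def enhance(img):
    # Placeholder: substitute the process under test here.
    return resize(img, (img.shape[0] * 8, img.shape[1] * 8))

truth = img_as_float(io.imread("known_scene.png", as_gray=True))   # placeholder file

tiny = resize(truth, (12, 12), anti_aliasing=True)     # roughly HST-on-Pluto scale

recovered = resize(enhance(tiny), truth.shape)         # method under test
baseline = resize(tiny, truth.shape)                   # plain interpolation control

print("method   SSIM:", structural_similarity(recovered, truth, data_range=1.0))
print("baseline SSIM:", structural_similarity(baseline, truth, data_range=1.0))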
ZLD
That's the type of correspondence I was looking for, JRehling. Thank you for this idea. I'd be highly interested in trying this.

I do think the imaging device can play a very large role in whether this works, though. Consumer cameras produce much more noise, which becomes very apparent when doing this, compared with much higher-quality research-grade CCDs. I've tried this to some extent.
alex_k
Hi ZLD,
It's interesting; can your algorithm extract details from this image?
ZLD
I only worked on a small crop due to the strange shape of the image. Sudden contrast changes, as at the edges, play havoc.

Click to view attachment

Click to view attachment

Click to view attachment

I know pretty much nothing about 67P or the Rosetta mission other than the occasional news bits that come out. Is there any context to this / any idea where it actually is?

As a further note, this could be way off. There is a lot of motion and Philae was slightly rotating during capture, making it very possible that some of the surface features were very distorted/mangled.

Edit: Rotating the image 180 degrees gives a better image I think.

Click to view attachment

Edit 2: Totally guessing here, because I'm very unfamiliar with 67P as I said before, but from just a cursory search, I think the dark area is probably the shadow from this pillar feature and I didn't correct for the motion enough.

Click to view attachment
alex_k
Thanks, ZLD, nice processing. It was interesting to see what your method can do with this really very difficult image (non-linear motion blur, etc.) and to compare it with previous attempts:

http://www.unmannedspaceflight.com/index.p...st&p=216419
http://www.unmannedspaceflight.com/index.p...mp;#entry216427
http://blogs.esa.int/rosetta/2014/12/18/up...#comment-283282

Maybe it will help you to tune your algorithm. If you want, I'll give you some more "difficult" samples for testing.
ZLD
Sure thing, Alex. I'd be very interested. This has been an evolving process for months now.
alex_k
QUOTE (ZLD @ Jul 28 2015, 09:32 PM) *
Sure thing, Alex. I'd be very interested. This has been an evolving process for months now.


OK, I'll find appropriate samples. You can see one of my experiments in a neighbouring thread.

About the comet: Philae took this shot about 5 minutes after the first bounce. So the place in the image is somewhere at the top right of this image, near the blue arrow and further to the right.
alex_k
QUOTE (ZLD @ Jul 28 2015, 09:32 PM) *
Sure thing, Alex. I'd be very interested. This has been an evolving process for months now.


Keeping to the topic: this is a very amazing map (not HST):
Click to view attachment

Can your algorithm extract anything correct from it?
ZLD
It looked computer-generated from the outset, so I went with that assumption.

Click to view attachment
Click to view attachment

It appears to be a cube (possibly rounded), a sphere, maybe with a texture but probably just JPEG noise, and something else at the right that I can't discern.
alex_k
QUOTE (ZLD @ Jul 29 2015, 08:05 PM) *
It looked computer generated from the set out so I went with that assumption.


Hmm... Actually it was a map of Mars, reconstructed from a set of single-pixel measurements made in the 1960s. The article is here.
If a more proper image can be obtained, it should be closer to this:
Click to view attachment
ZLD
Well, a wrong assumption will certainly wreck everything that follows. Oops.

Would you care to describe how you reached your test image result? I can't seem to reproduce it myself.
PDP8E
When I saw that 'test image', I fed it to a little stochastic battalion of filters I maintain.
The only instruction for convergence was high-frequency edges exceeding 30%.
Here is what my script 'hallucinated':
Click to view attachment
alex_k
QUOTE (ZLD @ Jul 29 2015, 08:23 PM) *
Would you care to describe how you reached your test image result? I can't seem to reproduce it myself.


My best result was the following:
Click to view attachment

It is uncertain due to strong Fourier extrapolation, and the match with the "ground truth" image is unclear.
But there is some correspondence with a real map of Mars: Tharsis volcanoes, etc.
Click to view attachment
(animated)

Though maybe the features are just exaggerated noise. It would be interesting to know whether the real details can be extracted.
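For reference, the simplest form of this kind of frequency-domain upsampling is zero-padding the FFT spectrum, as in the sketch below (assuming numpy; not exactly what I did). It only interpolates the structure already measured and cannot add frequencies that were never there.

CODE
# Generic frequency-domain upsampling by zero-padding the centred FFT spectrum.
# Assumes numpy; only one simple form of "Fourier extrapolation", shown for reference.
import numpy as np

def fourier_upsample(img, factor=4):
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))

    H, W = h * factor, w * factor
    padded = np.zeros((H, W), dtype=complex)
    padded[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2] = spec

    out = np.fft.ifft2(np.fft.ifftshift(padded)).real
    return out * factor**2      # rescale so mean brightness is preserved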
ZLD
Haha PDP8E.

Also, I finally got around to looking through the paper. If I am understanding correctly, the very orange image is based on a 6x1-pixel image that was then stretched out. That alone would never be recoverable into a true image. Very interesting paper, though.



Click to view attachment

This should be close to what was resampled. I'm not quite sure how they simulated the upper and lower lines. Probably just an average falloff or something.

----------
Edit
----------
I decided to do a quick try to see what could be done with a 6x1 image. Still not much, but it seems to correspond to what's on the map. Color data is derived from the same 6x1-pixel image, just applied differently than it was in the paper.

Click to view attachment

I could see their method being pretty useful in observing exoplanets if the resolution can hit a few pixels across.
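For anyone wondering what that stretching looks like in practice, here is a rough guess at the kind of operation, assuming numpy and Pillow; the six sample values are made up.

CODE
# A guess at the kind of stretch involved: inflate a 6-sample strip to map size.
# Assumes numpy and Pillow; the sample values are made up.
import numpy as np
from PIL import Image

profile = np.array([[90, 140, 200, 160, 110, 80]], dtype=np.uint8)   # 1 x 6 "image"

# Bilinear resize smears the six samples across a 360 x 180 map grid.
strip = Image.fromarray(profile, mode="L").resize((360, 180), resample=Image.BILINEAR)
strip.save("stretched_map.png")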

JRehling
An old chestnut from the technology of image file compression: If two different source images are compressed and they make identical target files, you cannot recover from the target which one was the original source.

These one-row images present a stark situation: If you have three pixels which are, in sequence: BLACK - GRAY - WHITE, you cannot determine whether the real object (at, say, 1x100 pixel resolution) had a sharp cliff from black to white or a gradual transition. You cannot. The information is not there.
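Here is a tiny worked version of that point, assuming numpy: a sharp cliff and a gradual ramp reduce to the same three pixels, so the three pixels alone cannot distinguish them.

CODE
# A sharp cliff and a gradual ramp give identical three-pixel downsamples.
# Assumes numpy; values chosen only for illustration.
import numpy as np

n = 96
cliff = np.where(np.arange(n) < n // 2, 0.1, 0.9)        # abrupt step at the centre

ramp = np.full(n, 0.1)                                    # gradual over the middle third
ramp[n // 3: 2 * n // 3] = np.linspace(0.1, 0.9, n // 3)
ramp[2 * n // 3:] = 0.9

def to_three_pixels(profile):
    return np.array([block.mean() for block in np.split(profile, 3)])

print(to_three_pixels(cliff))   # [0.1 0.5 0.9]
print(to_three_pixels(ramp))    # [0.1 0.5 0.9]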

But there are some possible (and related) saving graces that can give you additional information:

1) You may have a priori information about the likely transitions. If you knew, for example, that the image was of Mercury, you would have a lot of constraints on the norms for transitions. If you knew, moreover, that the image was of Mercury at a high phase angle, you would have still more information. But if you didn't know that, you'd be much more limited in your ability to guess between abrupt versus gradual transitions.

Just knowing that the image is of a body in space (and not, say, a Captcha of blurred text) is potentially useful information, but that only goes so far. Iapetus, Mars, and Mercury have profoundly different norms for how sharp/gradual transitions are. You can guess with one set of norms and luckily get some details right, but it was a guess. If you guess the world is visually like Mars but it turns out to be visually like Iapetus, your guess is simply wrong. And if you have no a priori information, the guess remains a guess.

2) The one-row case is not a common one, so you have information from adjacent rows which might inform how abrupt/gradual the transitions are. This doesn't provide new information in an absolute sense (the real object might have sharply defined square blocks as its true shading!), but one can infer norms across the surface and then use those locally.

3) When you have selective high-resolution imaging of a world but only low-resolution imaging in many other areas, you can use this information to set the parameters in (1). This seems applicable for, e.g., Europa and Pluto.

But given, say, an exoplanet with no high-resolution data possible, the ability to guess at finer details than we can see in the raw image is going to be close to nil.
ZLD
I absolutely do agree that a measured 'image' a single pixel in height leaves the raw information as the best obtainable at the time. Without lots of other data, there's nothing else to work from.

However, it wouldn't be completely useless to make inferences based on lots of other collected data, and then to follow up by defining several scenarios based on multiple interpretations of the data. Isn't that the basis for designing future experiments most of the time?
JRehling
That's all well and good; to make the process more mature, one would define the parameters distinguishing one situation from another, e.g., Mercury at high phase angle, Mars at high phase angle, Iapetus at high phase angle, Mercury at low phase angle, etc. (Certainly, resolution is another important determiner of context: rough at one resolution is smooth at another.)

Then, for any given image, you could say, here's a P% increase in resolution assuming a [KNOWN WORLD]-like surface at this phase angle, and whatever one sees or doesn't see, the prediction is appropriately contextualized. If the assumption is correct, the prediction should be correct. If we know that the assumption is questionable, then we know that the prediction is questionable.
alex_k
QUOTE (ZLD @ Jul 29 2015, 10:20 PM) *
Also, finally got around to looking through the paper. If I am understanding correctly, the very orange image is based on a 6x1 pixel image and then stretched out.

I understand it's an estimate, i.e., if the "ground truth" image is downsampled to 6x1, the two will be comparable. And I think the authors underestimated the resulting image, because it has frequency structure that allowed the extrapolation to be performed.

QUOTE (ZLD @ Jul 29 2015, 10:20 PM) *
I could see their method being pretty useful in observing exoplanets if the resolution can hit a few pixels across.

Yes, I think the method should be very promising for approximate mapping of exoplanets, Kuiper belt objects, etc. I'm sure it can be modified not only for 1-pixel observations but for 2x2 and any other resolution available from HST for the object. With quality models and adequate algorithms, it's possible to extract rich information even from several pixels.

Update: Most probably, the original resolution of the "orange" image was 60x30 px.
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2024 Invision Power Services, Inc.