QUOTE (RoverDriver @ Jan 26 2009, 04:13 PM)
It might be so, but as I was saying, I would not be surprised if even the FT was somehow mostly influenced by intensity rather than texture.
I have to think about it.
Paolo
ok, I think the "texture classifier" approach is worth a try
Here is an attempt to create a map (of the area south of Victoria) with a multi-scale bank of texture filters (my own variant of "mini-Gabor" filters, basically designed to capture granularity, directionality, edgeness and other micro-texture features).
[attachment: texture map]
There are 13 filters applied to each 9x9-pixel moving window (window spacing = 2 pixels, with the remaining pixels interpolated by bilateral upsampling).
To capture features at multiple scales I decompose the original image into a Gaussian pyramid and apply each filter to each pixel at each level of the pyramid.
So altogether we have 13 filters at 7 spatial resolution scales.
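To make the pipeline concrete, here is a minimal NumPy sketch of the same idea: a Gaussian pyramid plus a filter bank, yielding one feature vector per pixel. The actual 13 "mini-Gabor" filters are the author's own design and not published here, so the 5-filter bank below (oriented gratings plus a center-surround filter) is a hypothetical stand-in, and nearest-neighbor upsampling replaces the bilateral upsampling for brevity.

```python
import numpy as np

def conv2(img, k):
    """2D correlation with reflect padding, pure NumPy."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(p, k.shape)
    return np.einsum('ijkl,kl->ij', win, k)

def gaussian_pyramid(img, levels):
    """Blur with a 3x3 binomial kernel, then subsample by 2, per level."""
    g = np.outer([1., 2., 1.], [1., 2., 1.]) / 16.0
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(conv2(pyr[-1], g)[::2, ::2])
    return pyr

def toy_filter_bank(size=9, n_orient=4):
    """Hypothetical stand-in bank: oriented gratings under a Gaussian
    envelope (directionality) plus a center-surround filter (granularity).
    All filters are zero-mean, so flat intensity gives zero response."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    sigma = size / 4.0
    env = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    bank = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        u = xs * np.cos(theta) + ys * np.sin(theta)
        bank.append(env * np.cos(2 * np.pi * u / (size / 2.0)))
    r2 = xs**2 + ys**2   # Laplacian-of-Gaussian-like center-surround
    bank.append((r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2)))
    return [f - f.mean() for f in bank]

def texture_features(img, levels):
    """Per-pixel feature stack: |filter response| at every pyramid level,
    upsampled back to full resolution (nearest-neighbor for brevity)."""
    h, w = img.shape
    bank = toy_filter_bank()
    planes = []
    for lvl, im in enumerate(gaussian_pyramid(img, levels)):
        for f in bank:
            resp = np.abs(conv2(im, f))
            up = np.kron(resp, np.ones((2**lvl, 2**lvl)))[:h, :w]
            if up.shape != (h, w):   # pad if a pyramid level was odd-sized
                up = np.pad(up, ((0, h - up.shape[0]), (0, w - up.shape[1])),
                            mode='edge')
            planes.append(up)
    return np.dstack(planes)   # shape (h, w, n_filters * levels)
```

With the real 13-filter bank and 7 levels this would give the 91 features per pixel described above; the toy version yields 5 filters x levels.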
I used a PCA-based mapper to reduce this 91-dimensional feature space into a 2D color space (the a/b chroma plane of LAB) that can be used for visually analyzing the resulting texture map, overlaid with the brightness information of the original b/w image.
As this is only intended as a first test, I did not try to assign the color mapping to any meaningful scale in the sense of "dangerous/easy".
The different colors just represent the two most significant dimensions of overall textural variation, as produced by the PCA 91-to-2 dimension reduction.
(Coincidentally, a quick eigenvalue analysis shows that two dimensions are already sufficient to capture about 80% of the total variation...)
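The PCA reduction and the eigenvalue check can be sketched in a few lines. This is a generic implementation of the standard technique, not the author's code; the mapping of the two PCA scores onto the LAB a/b chroma axes (with L carrying the original brightness) is one plausible reading of the post, and the rescaling to roughly [-100, 100] is an assumption.

```python
import numpy as np

def pca_reduce(features, n_components=2):
    """PCA via eigendecomposition of the feature covariance matrix.
    features: (n_pixels, n_dims) array, e.g. n_dims = 13 filters x 7 scales = 91.
    Returns the projected scores and the fraction of variance explained."""
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]             # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals[:n_components].sum() / eigvals.sum()
    return centered @ eigvecs[:, :n_components], explained

def to_lab_chroma(scores):
    """Map the two PCA scores onto the a/b chroma axes of LAB, rescaled
    to roughly [-100, 100]; L would carry the original image brightness."""
    s = scores - scores.mean(axis=0)
    s = s / (np.abs(s).max(axis=0) + 1e-12)
    return s * 100.0
```

Here `explained` corresponds to the quick eigenvalue analysis mentioned above: a value around 0.8 would mean two dimensions capture about 80% of the total textural variation.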
RESULT:
At first glance, the usual "terrain classes" seem to be distinguished: red tones = soft sand, blue/cyan = north-south-trending ripples, orange = east-west-oriented ripples, greenish = bedrock(?), and so on...
Now, of course, it would be the task of the geologists and rover driver specialists to assign the colors to "meaningful" terrain classes.
For this first test I applied the algorithm only to a reduced 2000x1000-pixel crop of PSP_009141_1780, which took about 200 seconds of running time.
So in principle the approach should be feasible for full or near-full-resolution imagery as well (thanks to the implementation in good old plain C).
If time permits I'm going to download the whole HiRISE-JP2 tonight and try another run at a finer resolution level ....
P.S.: I have no idea if this is useful at all; it's just an attempt to see what incorporating multiscale texture algorithms could add to the existing brightness-based analysis ...