In the meantime I've pinned down a set of parameters which, for frames 6 to 20 of the OpNav3 sequence pia19179-16.gif, returns results accurate enough to attempt the next analysis step.
I'm using this globe for the projections:
![](http://i.imgur.com/RKV8Oavt.png)
(Using the actual distance (146,000 km) and size data (975/909 km diameter) for Ceres, rotation steps of 5.8° between images, a Ceres axis tilt of -15° around the x-axis and -10° around the z-axis, and an apparent equator diameter of 108 pixels.)
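As a quick sanity check, the stated geometry can be turned into an angular size and an image scale; a minimal sketch (variable names are mine, values from the text):

```python
import math

# Assumed viewing geometry (values from the text above)
distance_km = 146000.0        # Dawn-to-Ceres distance
equator_diameter_km = 975.0   # equatorial diameter of Ceres
equator_pixels = 108          # apparent equator diameter in the frames

# Apparent angular diameter of the equator as seen from the spacecraft
angular_diameter_deg = math.degrees(
    2 * math.atan(equator_diameter_km / 2 / distance_km))

# Resulting image scale at the disk centre
km_per_pixel = equator_diameter_km / equator_pixels

print(round(angular_diameter_deg, 3), round(km_per_pixel, 2))
```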
First, if you look at the full sequence (square-encoded), you may notice that the right-most 5 frames rotate:
![](http://i.imgur.com/8CWZMLNt.gif)
Therefore I'm using only frames 6 to 20 to compute a weighted average:
![](http://i.imgur.com/HottFY6t.png)
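The merging step can be sketched as follows; this is only a minimal numpy illustration (the frame and weight arrays here are random stand-ins for the reprojected frames 6 to 20 and a per-pixel confidence, e.g. zero outside the projected disk):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((15, 64, 64))    # stand-in for reprojected frames 6..20
weights = rng.random((15, 64, 64))   # stand-in per-pixel weights

# Weighted average over the frame axis, guarding against
# pixels whose total weight is zero
wsum = weights.sum(axis=0)
merged = np.where(wsum > 0,
                  (frames * weights).sum(axis=0) / np.maximum(wsum, 1e-12),
                  0.0)

print(merged.shape)
```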
Since the human eye can also merge a rapid sequence of images, here is a fast animated GIF of the subsequence:
![](http://i.imgur.com/LObAIogt.gif)
You may note that the brightness of nearby pixels can vary differently over the frames. This can indicate different slopes, or more formally, different surface normal vectors.
Different brightness can also indicate different albedo.
Combining albedo and surface orientation yields one of the simplest reflectance models, the Lambertian or diffuse reflectance model.
In the case of just one color (grey) and one light source (the Sun), the diffuse reflection is described by just three variable real-valued parameters: albedo (color), and two angles to describe the surface normal relative to the Sun; the Sun intensity can be considered as constant.
Actually the two angles can be reduced to one angle relative to the Sun direction, since the model is rotationally symmetric about the direction towards the Sun. The intensity (brightness) of the pixel is then proportional to the cosine of that angle.
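The Lambertian model is simple enough to state in a few lines of code; a sketch (the function name is mine):

```python
import numpy as np

def lambert(albedo, normal, sun_dir):
    """Lambertian (diffuse) intensity: albedo times the cosine of the
    angle between surface normal and Sun direction, clamped at zero
    for surfaces facing away from the Sun."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    return albedo * max(0.0, float(np.dot(n, s)))

# A patch facing the Sun directly returns the full albedo ...
print(lambert(0.09, [0, 0, 1], [0, 0, 1]))   # 0.09
# ... one tilted by 60 degrees returns half of it (cos 60° = 0.5)
tilted = [np.sin(np.radians(60)), 0, np.cos(np.radians(60))]
print(round(lambert(0.09, tilted, [0, 0, 1]), 4))   # 0.045
```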
If we now follow one pixel over several frames, the angle of the surface normal relative to the Sun changes, resulting in varying grey values as a function of the frame number.
The model applies to linearized intensities, so instead of the above square-encoded frame sequence, a linearized version is needed:
![](http://i.imgur.com/F5CBoy2t.gif)
Raw images are usually square-root encoded; therefore I've squared the pia19179-16.gif images to obtain the linearized version. If they are instead gamma-corrected with gamma = 2.2 for standard displays, the result is rather similar.
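In code, the two decodings compare like this; a small numpy sketch over the full 8-bit range shows that squaring and gamma-2.2 decoding stay fairly close:

```python
import numpy as np

# Normalized 8-bit grey values
raw = np.arange(256) / 255.0

# Linearize by squaring (inverse of square-root encoding) ...
linear_sq = raw ** 2
# ... versus decoding with a display gamma of 2.2
linear_gamma = raw ** 2.2

# Largest deviation between the two decodings over the whole range
max_diff = np.max(np.abs(linear_sq - linear_gamma))
print(round(max_diff, 3))
```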
Merge of linearized frames 6 to 20 by weighted averaging:
![](http://i.imgur.com/0HrQZ8Kt.png)
The first of the following two diagrams shows the grey value of one pixel of the longitude/latitude map as a function of the frame number.
The second diagram shows the grey-value curves of the 720 pixels of one longitude over all latitudes (in 0.25° steps), i.e. one column of pixels:
![](http://i.imgur.com/FQojPnKt.png)
(For the diagrams here, the origin of the lon/lat maps is the lower left corner.)
A text version of the considered column is attached.
Many of the curves in the diagram are rather noisy, while others appear reasonably smooth. The jagged ones are probably mostly due to moiré-like artifacts near the border of the projected area.
Over the next days I'll try to match the reflectance curve at each pixel position, via RMS minimization, to the simulated reflectance curve of a slope on the surface, in order to retrieve albedo/slope data for each pixel.
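A strongly simplified 1D sketch of that fit (the real problem has two slope angles and the actual per-frame geometry; all names here are hypothetical, and the "observed" curve is synthetic):

```python
import numpy as np

# Sub-spacecraft rotation per frame, 5.8 deg steps for frames 6..20
frame_rot = np.radians(5.8) * np.arange(15)

def lambert_curve(albedo, slope, rot):
    # Lambertian intensity, clamped at zero for surfaces facing away
    return albedo * np.clip(np.cos(rot - slope), 0.0, None)

# Synthetic "observed" curve for a pixel with albedo 0.09, slope 12 deg
observed = lambert_curve(0.09, np.radians(12.0), frame_rot)

# Brute-force grid over albedo and slope; pick the RMS minimum
albedos = np.linspace(0.01, 0.20, 191)              # 0.001 steps
slopes = np.radians(np.linspace(-45.0, 45.0, 901))  # 0.1 deg steps
curves = albedos[:, None, None] * np.clip(
    np.cos(frame_rot[None, None, :] - slopes[None, :, None]), 0.0, None)
rms = np.sqrt(np.mean((curves - observed) ** 2, axis=-1))
i, j = np.unravel_index(np.argmin(rms), rms.shape)
albedo_fit, slope_fit = albedos[i], slopes[j]

print(round(albedo_fit, 3), round(np.degrees(slope_fit), 1))
```

A gradient-based minimizer would of course replace the grid search in practice; the grid just makes the RMS criterion explicit.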
If the result isn't too noisy, it may be usable as an albedo map, and as a basis to infer a topographic map.
If the data turn out to be of good quality, the residuals may provide shininess information according to the more general Phong reflectance model, usable as an indicator of surface roughness.