Full Version: DSCOVR
scalbers
Yes, this makes sense - thanks for clarifying. Quite the project to bring all the frames together into an evolving map of the Earth, and nice to take the subway ride. Perhaps you've considered this, but I would imagine that the cylindrically projected version could be converted into a rotation movie. The animated map could be reprojected into the viewpoint of DSCOVR, and we would then see a uniformly rotating Earth with the jumps removed. This is along the lines of what was discussed at the AGU conference panel discussion.
Explorer1
The solar eclipse sequence is up on the main site today. I just figured out that you can animate the image by using the left and right arrow keys (on Firefox, at least)!
scalbers
I'm continuing to think about the most realistic brightness/contrast adjustment for the DSCOVR imagery. One line of reasoning is that in green light the oceanic areas (with overlying atmosphere) should have about 12% reflectance. The brightest clouds would be about 100% reflectance. So that is a ratio of about 8.5 in brightness and a gamma-corrected ratio of 2.62. In other words, if the bright white clouds are 255 counts, the oceanic areas near the center of the image (though outside of sun glint) should be about 97 counts. This means I should try a greater contrast reduction for the DSCOVR imagery than I had done previously (in post #81). I have posed a question to the DSCOVR team somewhat along these lines.

It's also interesting to note that the brightest clouds are found near the center of the Earth. Near the limb they are paler looking. Some of this is related to there being more intervening atmosphere above the clouds with the more grazing path. However it also seems to be related to a reduction of backscattering when light hits the cloud at a glancing angle and/or on the side. This can be compared with limb darkening on other cloudy planets and moons in the Solar System.
scalbers
Here is an updated version of a synthesized Earth image using land and 3-D weather data (left) and the corresponding DSCOVR image (right). I did tweak the color & intensity a bit on the DSCOVR image.

Click to view attachment Click to view attachment

It's interesting that one can get a handle on what the Earth colors should be by considering that over the ocean, most of the signal is actually light scattered from the sky rather than from the water. We are in essence looking down at the sky (with its attendant Rayleigh scattering), and that is the main reason the Blue Marble is blue. There's also evidence the DSCOVR imagery (as posted on their web site) is contrast-stretched, since the recent solar eclipse has an oversized region around the umbra where all the details disappear.
Explorer1
Wow, that really does look much more like the Apollo imagery. I did a side-by-side comparison of the blue marble in December, when the views were almost identical, and the differences between the images were clear.
scalbers
Thanks Explorer1. One thing with the synthetic image (2 posts back) is that the clouds near the limb look too bright. One of the reasons turns out to be that ozone absorption of the light rays interacting with clouds and terrain was left out. So here's an update with this effect mostly included, along with some aerosol adjustments.

Click to view attachment

(Edited Mar 27 1640UTC)
scalbers
More details on these synthesized Earth images can be found in my inaugural Planetary Society blog post.

There is a series of comments on this post. I had to cut my two comments a bit short to fit within the character limit. Thus, to be more complete, I would insert the following paragraph between these two comments:

..........................................

The question about brightness and contrast relates in part to step (3) in my post. One way to check this is to consider that the brightest clouds have a reflectance of about 100%. The cloud-free ocean regions with the Rayleigh scattering (and a slight augmentation from aerosols) should have a reflectance of around 12% in green light (550nm). If the bright white clouds are set to a pixel value of 255, then the green component of the ocean regions should be 255 * (0.12 ^ 0.45) or 98 counts. The 0.45 power is the gamma correction I'm using to account for the non-linear brightness relationship between pixel count and displayed intensity on a typical computer monitor.

..........................................

I might also add that the question of how to set brightness, contrast, and color saturation applies just as much to everyday photography as to spacecraft imagery. I'm following the notion that it's good to have the displayed image be as linearly proportional as possible to the actual scene, with the same color saturation.
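For anyone who wants to check the arithmetic, here is a tiny Python sketch of the gamma-scaled pixel values described above (the 0.45 exponent and the 12% ocean reflectance are my working assumptions, not official calibration values):

CODE
# Gamma-scaled pixel counts for the brightest clouds vs. clear ocean (green light).
cloud_reflectance = 1.00   # brightest clouds, ~100% reflectance
ocean_reflectance = 0.12   # clear ocean plus Rayleigh-scattering atmosphere
gamma = 0.45               # approximate display gamma-correction exponent

cloud_counts = 255 * cloud_reflectance ** gamma
ocean_counts = 255 * ocean_reflectance ** gamma
print(round(cloud_counts), round(ocean_counts))   # 255 98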

(Edited May 31, 2016)
Explorer1
The space weather instruments are finally done with commissioning: http://spaceflightnow.com/2016/06/27/new-a...ive-next-month/
Is it just me, or does the embedded image of the eclipse look different from those on the EPIC page, a lot brighter/less muted?

Also, this is the first I'm hearing of a successor launching in 2022!
scalbers
I like the URL and where they are going with the imagery:

http://blueturn.earth/

https://vimeo.com/173723357

On another note pertaining to the character of the blueness of the Earth, I can offer the notion that the ground view of the overhead sky with thin scattered clouds (and the sun at a moderately low altitude) is rather similar (with a bit of imagination) to the space view looking straight down at the ocean. If you have a lot of imagination you can get a sense of vertigo looking up at such a sky - and think you're really looking down.

Thus I'd consider that the contrast between the darker blue and the brighter white should be less than what we see in the videos above, assuming we want the display to be linearly proportional to the actual brightness.
Explorer1
One year highlight video is as good as I hoped: https://www.youtube.com/watch?v=CFrP6QfbC2g
scalbers
Here is the Blueturn version with a somewhat smoother presentation: https://vimeo.com/175935487
Michael Boccara
Thanks, Steve, for pointing to my videos.

A word about the technique: this is a simple interpolation of EPIC images, based on orthographic projection onto a 3D sphere and linear blending in geodesic space.
The results are good enough to provide a new Earth-gazing experience, like in this video of the week around the summer solstice:
https://vimeo.com/172956335
The less time between images, the better the quality. Note that EPIC images are separated by around 1 hour in summer and around 2 hours in winter. But on some exceptional occasions, like the March 9th eclipse, NASA gave us images only 20 minutes apart, which led to the best interpolation results:
https://vimeo.com/170798080

Some work remains to fix quality issues, like the artifacts on the limbs and the lighting correction in interpolated images.
Note that the interpolation runs in real-time (30fps) in an interactive app, provided the images are already downloaded from NASA and uploaded into the GPU texture memory. So a lot of the effort is about paging the textures efficiently to provide a smooth video experience.
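To give a feel for the blending, here is a minimal CPU sketch in Python only; the real thing is done per-fragment on the GPU after projecting both frames onto the sphere, and the names and timestamps below are made up:

CODE
import numpy as np

def blend_frames(frame_a, frame_b, t_a, t_b, t):
    # Linear time-weighted blend of two co-registered frames (H x W x 3 arrays)
    # taken at times t_a < t < t_b.
    w = (t - t_a) / (t_b - t_a)          # 0 at frame_a, 1 at frame_b
    return (1.0 - w) * frame_a + w * frame_b

# e.g. an intermediate view halfway between two hourly EPIC frames:
# mid = blend_frames(img_t0, img_t1, 0.0, 1.0, 0.5)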

One may find it difficult to see the movement of the clouds, because their speed is very slow compared to the rotation of the Earth. But you can see them better if you keep looking at the same geographic point, like in this video of a sandstorm in Egypt:
https://www.instagram.com/p/BHKDx8RDobB/

scalbers
Thought I'd mention that I'm attempting to resolve a discrepancy in the geometry contained in the metadata. For a case that I'm simulating, the image at 18:04 UTC on September 20 states that the SEV angle is 9.2 degrees (on the website). If I use the subpoint (centroid) of the spacecraft from the json file (4.3N and 102.6W) though I come up with a solar elevation angle consistent with an SEV angle of about 10.4 degrees. My simulated image also shows a bit more limb shading than the actual image. Judging from my simulated image the sub-point looks OK since the continents line up pretty well. Thus I wonder if the stated time of the image could be off by a few minutes and is actually about 18:08?
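For reference, the kind of cross-check I'm doing looks roughly like this in Python (a sketch only: the sub-solar point would come from an ephemeris for the image time and is left as a placeholder here; the sub-spacecraft point is the one from the json file):

CODE
import numpy as np

def angular_separation_deg(lat1, lon1, lat2, lon2):
    # Great-circle angle between two sub-points; with both bodies far away this
    # is essentially the SEV (Sun-Earth-Vehicle) angle at the Earth's center.
    lat1, lon1, lat2, lon2 = np.radians([lat1, lon1, lat2, lon2])
    v1 = np.array([np.cos(lat1) * np.cos(lon1), np.cos(lat1) * np.sin(lon1), np.sin(lat1)])
    v2 = np.array([np.cos(lat2) * np.cos(lon2), np.cos(lat2) * np.sin(lon2), np.sin(lat2)])
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

# Sub-spacecraft point 4.3N, 102.6W from the json file; the sub-solar point is a placeholder:
# sev = angular_separation_deg(4.3, -102.6, sub_solar_lat, sub_solar_lon)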
Explorer1
Any idea what the bright yellow spot in this image around Ecuador is (a much more amateur question!)? http://epic.gsfc.nasa.gov/epic-archive/nat...12163939_01.png
Cosmic ray hit, downlink issue, something else?

Strange how hurricanes look harmless, even cute, at such a distance...
john_s
Looks to be in about the right place on the disk to be a specular reflection from a lake, which would be cool.

John
ngunn
There is a place called Lagunas on the Amazon tributary Maranon in about the right place. However, my atlas also shows that another tributary, the Ucayali, has wide reaches not too far away. (Both locations are in Peru.)
scalbers
Here's an advance simulated animation (click on MP4 link) of next year's eclipse from the DSCOVR perspective. This is without clouds.

Click to view attachment

http://stevealbers.net/albers/allsky/4ld_polar.mp4

Click to view attachment

And while I'm here, this is a simulated vs actual recent DSCOVR view as a blinking animation. Some improvements have been made since my description of this in the TPS blog. The case is from September 20.
fredk
Nice, Steve. You mentioned before the interesting fact that when you are looking at the ocean, most of the light is scattered up in the atmosphere above the water, rather than being a reflection of the sky (or of diffuse scattered sunlight, i.e. an extended opposition surge) in the water.

This makes me wonder what the disk would look like if the atmosphere were gone. So no scattered light in the air or reflection of sky in water. I.e., what is the colour of the water itself, and how dark would it be? Presumably the land would also be quite a bit more contrasty. Is it easy for you to remove the air from a simulation?...
scalbers
Thanks fredk. The funny thing is that once I inadvertently ran this without air and the ocean indeed looked darker. This was a goofy accidental run (image below) with Mars atmospheric pressure and Earth aerosols. The display looks somewhat reasonable, though the water looks too gray and thus would need more work. The ocean (outside of sun-glint areas) should be around a factor of 10 darker for the case of few aerosols in the sky and little sediment in the water, though the ratio could be smaller otherwise. The color would also vary depending on sediment content and the like, ranging from blue-green to sometimes more brown.

Click to view attachment

Note that sun-glint over water is different from the opposition effect that happens over the land. The sun-glint region is controlled by wave action, though I suppose this could be extended a bit depending on forward scattering by atmospheric aerosols then reflecting off of the water, diffused again by wave slopes.
Michael Boccara
QUOTE (scalbers @ Oct 13 2016, 08:10 PM) *
Thought I'd mention that I'm attempting to resolve a discrepancy in the geometry contained in the metadata. For a case that I'm simulating, the image at 18:04 UTC on September 20 states that the SEV angle is 9.2 degrees (on the website). If I use the subpoint (centroid) of the spacecraft from the json file (4.3N and 102.6W) though I come up with a solar elevation angle consistent with an SEV angle of about 10.4 degrees. My simulated image also shows a bit more limb shading than the actual image. Judging from my simulated image the sub-point looks OK since the continents line up pretty well. Thus I wonder if the stated time of the image could be off by a few minutes and is actually about 18:08?


Hi Steve,

I also have a weird discrepancy when comparing the ephemeris data provided on the EPIC website with the images themselves.
See this video I generated from my Blueturn app of the July 5th Moon photobombing, in which I also render a virtual Moon model at the location provided in the ephemeris. I just don't understand the difference. I double-checked and couldn't find any error in my perspective projection matrix. It looks like either the Moon position is wrong, or the DSCOVR position (all in J2000), or the time stamp itself. Or maybe this is because my rendering engine is OpenGL-based and uses single-precision floating point.

What do you think? I saw your nice simulation of the future eclipse across the US, so maybe you could also re-simulate the July 5th, 2016 Moon crossing and tell me if you see the same difference as I do...

See the video, or a direct deep link to my WebGL app (the virtual ephemeris-based Moon is right-most):

https://vimeo.com/189285144/6916063e34

http://blueturn.earth/app/EarthPlayer/?dat...0&cameraY=0
(Press 'm' to show the virtual Moon)
fredk
QUOTE (scalbers @ Oct 30 2016, 04:17 PM) *
The funny thing is that once I inadvertently ran this without air and the ocean indeed looked darker. This was a goofy accidental run (image below) with Mars atmospheric pressure and Earth aerosols.

Thanks a lot, Steve. This is very cool: Earth without air. Comparing your with-air and without-air views, I can certainly see your point about the oceans being dominated by light from the sky - the Earth is the blue planet because our sky is blue. Apart from the continents, I can almost imagine these views as fish-eye views of the sky with patchy clouds from the ground.

I'm still curious about one detail: switching off the air makes the oceans much darker, but how much of that darkening is due to the removal of scattered light in the air above the water, and how much is due to the removal of sky light reflected back up from the water's surface (i.e., the removal of a sort of wide-angle "sky glint")? My guess would be that the former would dominate, since the water's surface is not a very good reflector (apart from large angles of incidence).
scalbers
Indeed it's interesting to imagine the aspects of symmetry between looking up at the sky and looking down from space - I can almost get dizzy looking up and imagining this.

If we assume no aerosols, then I agree the light scattered upward by the sky is the much larger contribution. This is because the Rayleigh phase function is pretty similar upward and downward, and only about 8% of the downward diffuse light is reflected by the water. If aerosols are present the scattered/reflected ratio may vary some (considering the asymmetry factor), though that probably doesn't change the main conclusion for most cases.
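As a back-of-envelope illustration (taking the upward-scattered and downward diffuse Rayleigh radiances as roughly equal, and 8% surface reflection, both rough assumptions):

CODE
rayleigh_up = 1.00        # light scattered upward by the air column (arbitrary units)
water_reflected = 0.08    # ~8% of the comparable downward diffuse light reflected by the water
total = rayleigh_up + water_reflected
print(rayleigh_up / total, water_reflected / total)   # roughly 0.93 vs 0.07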

A fun example to help illustrate some of this is to note that Lake Titicaca is much darker than the ocean areas, due to the high altitude and less air present.
fredk
The symmetry of the Rayleigh phase function tells us one more interesting thing. Neglecting the subdominant light scattered or reflected from the water, the intensity of the ocean (near the centre of the disk but away from the sun glint) seen from space with the sun roughly behind your back would be similar to the Rayleigh-scattered intensity of the sky (away from the horizon) from the ground (near sea level) when the sun is high. Of course getting into the (mainly forward-scattered) Mie regime spoils this and will make the sky from the ground generally brighter than the ocean from above.

But basically the intensity of the sky near noon on the clearest, least dusty day would be similar to the intensity (and colour) of the ocean when viewing the full Earth from above. For those of us who aren't going to make it into space, at least we can imagine a bit more quantitatively now!
JRehling
The question of what Earth would look like without its atmosphere is potentially ambiguous; it could mean:

1) What would the view from space be if the light reflecting off the surface/ocean were not altered on its path up to the camera.
2) What would the surface/ocean itself look like if it had a black sky above it.
3) Both (1) and (2).

A related example: In towns/cities when there is snow on the ground and cloud cover overhead, night can be astonishingly bright because streetlights reflect off the clouds, and that light reflects off the snow, in what is effectively a damped infinite feedback loop.

The situation looking at normal, natural surfaces from above has some degree of this, with the sky altering how the surface looks and the surface, surely, altering how the sky looks.

If you wanted to address (1), an "easy" way to do it would be to ground-truth the DSCOVR images by taking images of isotropic surfaces (e.g., the ocean, snow, certain deserts). Compare your camera's color values with DSCOVR pixels of the same surface unit and determine the function that relates the two. Then apply the inverse function to DSCOVR images of the whole planet.
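A rough Python sketch of that idea (fitting the DSCOVR-to-camera mapping directly per channel rather than fitting one way and inverting; the matched sample arrays are assumed to be built by hand from the isotropic surface units):

CODE
import numpy as np

def fit_channel_maps(dscovr_samples, camera_samples, degree=2):
    # Per-channel polynomial mapping from DSCOVR values to ground-truth camera
    # values over matched surface units (both arrays are N x 3).
    return [np.polyfit(dscovr_samples[:, c], camera_samples[:, c], degree) for c in range(3)]

def apply_channel_maps(image, coeffs):
    # Apply the fitted mapping to a whole H x W x 3 DSCOVR image.
    out = np.empty(image.shape, dtype=float)
    for c in range(3):
        out[..., c] = np.polyval(coeffs[c], image[..., c].astype(float))
    return out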

FWIW, I recently did something like this with images of Mercury that I took in a daytime sky. I subtracted the R, G, and B values of the background sky from the whole image, including the portions containing Mercury. The result gives a black sky and a brownish-grey Mercury approximating the colors seen by Messenger. That is the "up-looking" analogue of the "down-looking" correction described in (1).
Stratespace
I'm beginning to despair a bit over the DSCOVR images.
I first worked with the PNG images provided by their server, associated with their metadata. After days of work, I still couldn't understand what I was seeing: when projected onto a map with Spice kernels, the data did not seem very well registered, and there were slight shifts that changed depending on the local view angle. I contacted NASA, who kindly replied that the PNG images were not scientifically accurate in terms of localization, and that I should work with the L1B calibrated data instead.
To make it short, those L1B data are calibrated and include all bands registered in a common reference frame. In other words, for each image you have the gray level for each band, as well as the local latitude/longitude corresponding to each pixel - a map of coordinates, if you prefer.
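In principle those per-pixel coordinate maps are all you need. Here is a minimal Python sketch of how I'd expect to use them (it assumes the lat/lon arrays come from the L1B file with off-disk pixels flagged as NaN, which is an assumption about the fill value):

CODE
import numpy as np

def remap_to_latlon_grid(band, lats, lons, res_deg=0.25):
    # Drop L1B pixels into an equirectangular grid using the per-pixel
    # latitude/longitude arrays (all H x W); nearest-cell, last-in-wins.
    ny, nx = int(180 / res_deg), int(360 / res_deg)
    grid = np.full((ny, nx), np.nan)
    ok = np.isfinite(lats) & np.isfinite(lons)
    iy = np.clip(((90.0 - lats[ok]) / res_deg).astype(int), 0, ny - 1)
    ix = np.clip(((lons[ok] + 180.0) / res_deg).astype(int), 0, nx - 1)
    grid[iy, ix] = band[ok]
    return grid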
But after hours of playing with those data, it appears the problem is even worse than before: those L1B data are even more wrong, and it's impossible, as is, to project the images properly onto a map.
As a short demonstration, I projected a "true" coastline map onto the images according to their lat/long maps. Here is the result:


After a bit of investigation, I finally understood where the problem lies in those raw calibrated data. Unfortunately, the maps associated with the images have been generated very roughly: the DSCOVR EPIC team considered all "non-null" pixels as "the Earth" and the darkness of space as "not Earth". As a result, the Earth's atmosphere is treated as being the ground, and ten pixels outside the Earth the maps still indicate valid latitudes and longitudes!
You can see this on the following images, where I show a portion of Earth's limb, and the same portion with the size of the Earth incorrectly given by the metadata superimposed on it:

As a consequence, when you project your coastline map onto the images for debugging, you see the coast floating up into the upper atmosphere:


I think we can all agree it is clearly off-target...

I've already tried different techniques to correct for this issue, such as detecting the "true" size of the Earth in the images and shrinking all the lat/long maps accordingly, but this is very hard in practice, as you need to distinguish between lit portions of the limb that are clearly visible and barely visible portions of the limb that are already in the night. Remember that DSCOVR orbits around L1, not at L1, so shadows are visible.
I've implemented alternative techniques as well, such as cloud removal and recognition of different patterns on the Earth to morph the metadata they provide into a more correct geometry, but it is very hard to make it 100% automated with high confidence without spending weeks and weeks of effort.
As a last resort, I've tried working with the L1A data (uncalibrated), but it fails for the same reason.

My conclusion so far: I'm done with those data. They lack reliable metadata for working with them at pixel-level resolution in an automated process, to make animated maps for example. It's okay to make movies from the original images themselves (no transformation needed), and it's okay to transform one of those images, but it's pointless working on them for a smooth transformed animation, at least without days and days of work. The optical flow and other filtering techniques I've implemented to guess automatically what the correct lat/long grid should be for each image are quite complex, but still insufficient to do the job. That's a shame; the outcome would have been awesome...
scalbers
Indeed it's tricky to get accurate mapping. Here is a somewhat empirical fit used in my recent blinking comparison of synthetic vs DSCOVR:

Click to view attachment

http://stevealbers.net/allsky/cases/dscovr...k_162641808.gif

I did have some success in simulating the limb shading to help with the fitting. There is still some extra atmosphere appearing in the actual image for some reason.

It's interesting to consider how thick the atmosphere appears from this vantage point, and where the (often invisible) limb of the solid Earth is located. The simulations may provide useful estimates of the distance between the limb and the first "non-zero" pixel based on the atmosphere and on limb shading relating to the phase angle.
Phil Stooke
How about this: don't use the limb detection routine alone to locate the limb; use it plus a limb-fitting routine to get a best fit, and use that to establish the central (sub-spacecraft) pixel. Use that central point plus a calculated radius based on range and geometry to fit the 'true limb' to the image. You can adjust some of the geometry parameters until they give optimum results.
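Something along these lines for the limb fit, perhaps (a minimal Python sketch of an algebraic least-squares circle fit to detected limb pixel coordinates; terminator handling is left out):

CODE
import numpy as np

def fit_limb_circle(x, y):
    # Kasa algebraic least-squares circle fit: returns center (xc, yc) and radius.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xc, yc, np.sqrt(c + xc**2 + yc**2)

# The fitted center gives the sub-spacecraft pixel; the fitted radius could then
# be replaced by one computed from range and geometry.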

More simply, you could use your existing routine but multiply by the fraction necessary to shrink the coastline map to fit the image. Once established that fraction should be fairly constant, or at least can be varied as a function of range.

Phil
Stratespace
The limb detection is largely impacted by the uncertainties associated with the terminator. In some images, the pixels disappear when the sun is locally 10° above the horizon, and in other images they disappear when the sun is virtually 0° above the horizon!
In addition, it appears that a shrink/translate transform is not enough to compensate for all of the errors in the metadata, which is why I switched to an optical flow that tries to fit the coastlines and other salient features (not an affine transformation).
It sort of works, but requires significant tuning to run on a batch of several images. I can't afford to spend too much time making it work on thousands of images with a very low error rate.
In other words:
QUOTE (Phil Stooke @ Dec 10 2016, 09:14 PM) *
More simply, you could use your existing routine but multiply by the fraction necessary to shrink the coastline map to fit the image.
doesn't work.

I'm really curious to know how the scientists can actually work with such errors. Do they go through the same kind of corrective process (hopefully with more time than two hours on a weekend, like us)? Could we expect them to correct their calibrated data accordingly one day?
scalbers
Maybe the scientists are doing something like this: https://ntrs.nasa.gov/archive/nasa/casi.ntr...20160011149.pdf
Stratespace
You are right, thank you very much for the link!
Floyd
Here is a maybe crazy, maybe not so crazy idea. Could interested scientists or institutions set up 1 to 3 dozen lasers around the globe that point at DSCOVR during daylight and act as fiducial markers? The images would all have a set of hot pixels for perfect alignment. I'm sure a few universities across the globe would be happy to operate a facility to put themselves on the map as reference points. Laser frequencies could be chosen to blind only one channel of one pixel for each fiducial laser.

Stratespace
QUOTE (Floyd @ Dec 11 2016, 03:32 PM) *
Here is a maybe crazy, maybe not so crazy idea. Could interested scientists or institutions set up 1 to 3 dozen lasers around the globe that point at DSCOVR during daylight and act as fiducial markers? The images would all have a set of hot pixels for perfect alignment. I'm sure a few universities across the globe would be happy to operate a facility to put themselves on the map as reference points. Laser frequencies could be chosen to blind only one channel of one pixel for each fiducial laser.
According to the paper linked above, they don't need this. They cross-correlate the EPIC images with images taken simultaneously by satellites in LEO. They achieve a precision of a fraction of a pixel, sufficient for their needs.
Considering they already have very precise image registration for scientific purposes (called "navigation" precision in their paper), it's very unfortunate they don't update the L1B data with those corrected lat/lon values...
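The core of such a cross-correlation is simple enough; here is a crude Python sketch (integer-pixel phase correlation between two co-located grayscale patches; the paper's actual method refines the peak to sub-pixel precision):

CODE
import numpy as np

def patch_shift(patch_a, patch_b):
    # Phase correlation between two same-size grayscale patches;
    # returns the (dy, dx) translation of patch_b relative to patch_a.
    F = np.conj(np.fft.fft2(patch_a)) * np.fft.fft2(patch_b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # wrap shifts larger than half the patch into negative offsets
    return (dy - ny if dy > ny // 2 else dy, dx - nx if dx > nx // 2 else dx)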
scalbers
It seems in the reference I linked that the image correlation is being done mainly in areas near the center of the disk. Even when I do this manually, there seem to be discrepancies that show up when we get very close to the limb. I suppose this relates to what was mentioned above about the EPIC team's decision on where the limb is when registering and combining the images from the various channels. It sounds like I might get better results with my blinking comparison if I worked with the raw data rather than the web imagery.
Explorer1
Big changes to the public website interface, plus an 'enhanced colour' option:

http://epic.gsfc.nasa.gov/
Michael Boccara
Following the previous discussion with Stratespace about DSCOVR data inaccuracies, I have some new information from the DSCOVR team: they finally acknowledged an error in their ephemeris, at least in the lunar position, because they were using geodetic coordinates instead of geocentric. It caused an absurd "rosette-shaped" path for the Moon around the Earth, as shown in the video below (sorry for the very artisanal screen capture, showing my Unity3D development environment):
https://drive.google.com/file/d/17C7FVMH5oU...y9hZKnTlLg/view

This same error is also the cause of a problem I had with the famous "Moon photobombing" images of July 5th, 2016, when the Moon passed through EPIC's field of view. Here's an article for those who missed it:
https://www.nasa.gov/feature/goddard/2016/n...-time-in-a-year
Now see the video I made from my Blueturn app, which shows interpolated EPIC images of July 5 together with a 3D model of the Moon (rightmost) based on their ephemeris:
https://vimeo.com/189285144/6916063e34
I am now waiting for the EPIC team to fix their database, and hopefully I'll then be able to integrate a correct 3D model of the Moon and fix the above video.

I am currently in discussion with the DSCOVR team to find out whether this error also applies to DSCOVR's position and attitude, in which case there is some new hope of being able to have an accurate orthophoto calibration of the images in 3D.
JRehling
If I were to try to build a robust solution to this, I think I'd try the following. In large part, I think this follows more or less the algorithm that we humans use when inspecting an image of the Earth.

Preparatory indexing:
1) Make an index of the shapes of coastlines at the resolution of ~ 5km/pixel. In particular, index segments where coastlines change orientation such as the Strait of Hormuz, the east coast of Somalia, the Baja peninsula, Gibraltar, southern Italy, Tierra del Fuego, Newfoundland, Michigan, etc.

Processing a single image:
2) At the time the image was taken, make a list of the coastline segments that are located within ~60° of the sub-solar point. Perform a transformation to adjust them to how they should appear from the direction of the Sun, which will approximate the geometry of DSCOVR.

3) Run edge detection on the image, excluding any edges that are bounded by white regions, which are probably clouds.

4) Match the detected segments against the projected indexed segments from (2).

5) If three or more segments are matched (or possibly just two that are far apart), you now have a good registration between the image and the Earth.

Probably the tricky step is (4), but there's research on this.
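Step (2) is the most mechanical part; here is a minimal Python sketch of it (assuming a spherical Earth and an orthographic view centred on the sub-solar point, which only approximates DSCOVR's actual vantage point):

CODE
import numpy as np

def project_from_subsolar(lats, lons, lat0, lon0):
    # Orthographic projection of coastline points (degrees) as seen from the
    # sub-solar direction; returns x, y in Earth radii plus a mask keeping
    # points within ~60 degrees of the sub-solar point.
    lat, lon = np.radians(lats), np.radians(lons)
    lat0, lon0 = np.radians(lat0), np.radians(lon0)
    cos_c = np.sin(lat0) * np.sin(lat) + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0)
    x = np.cos(lat) * np.sin(lon - lon0)
    y = np.cos(lat0) * np.sin(lat) - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)
    return x, y, cos_c > 0.5    # cos(60 deg) = 0.5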
scalbers
Interesting to consider this procedure. I wonder how this solution would work at the limb. I've been able to match the coastlines and other features fairly well. Clouds in my matching were also useful to check. However, the extreme limb is where things appeared to drift off, possibly due to the setting of the reference limb in the atmosphere as mentioned earlier. It seems this might work OK with the raw data (however that would be available) and less well with the displayed web images or L1B image data. It's also helpful to consider the actual position of DSCOVR, which can be around 10 degrees from the direction of the Sun.

http://stevealbers.net/albers/allsky/outerspace.html
JRehling
QUOTE (scalbers @ Jan 24 2017, 12:53 PM) *
I wonder how this solution would work at the limb.


I wonder how possible it is to capture the limb. We know that we can see stars and the Sun when they would, on an airless globe, be below the horizon. That means, conversely, that vantage points in space have a view of points on the surface beyond the literal horizon, which means that other points must be projected to other locations. In principle, this means something very messy is happening at the limb. And a small displacement near the limb corresponds to a large difference in position on the map. The devil is in the details as to the magnitude of that effect, or if it affects such a tiny boundary around the disk as to be negligible.
scalbers
Using my simulated imagery (link 2 posts above) as an example, it seems possible to determine how far the (often obscured) solid-surface limb is located below the top of the visible atmosphere. With a non-zero phase angle, the shading effects can also be considered. This is the more significant aspect, I think, with the visible atmosphere extending perhaps 30-60 km above the limb.

Refraction is also of interest, as you note. The actual lateral displacement of the limb (and locations nearby) from refraction is only about 1 km, smaller than the camera resolution. This small amount can still allow another 100 km or so of land to be squeezed into theoretical visibility near the limb.
Michael Boccara
Hi

Sharing some nice results I had with interpolating EPIC images and projecting them onto a planar map (equirectangular).

https://vimeo.com/207296528/b9b8eee67c

The video is at its optimal quality, in 4K resolution and 120 Hz.

The images are generated in real-time as I stream the EPIC images (and their metadata) from the NASA website.
Interpolation is done by simple blending of perspective projections onto a 3D ellipsoid model of the Earth, executed on the GPU via a custom fragment shader. It's good enough to look almost like optical flow.
The conversion from the 3D ellipsoid model to an equirectangular map is also done in real-time, via a vertex shader on the GPU.
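For anyone wanting to try the same reprojection offline, here is a rough CPU equivalent in Python (a sketch assuming a spherical Earth, nearest-neighbour sampling, and a disk that exactly fills the square frame, which isn't quite true of the real images):

CODE
import numpy as np

def ortho_to_equirect(img, lat0_deg, lon0_deg, out_w=2048):
    # Resample an orthographic full-disk view (H x W x 3) onto an
    # equirectangular map, given the sub-spacecraft latitude/longitude.
    h, w = img.shape[:2]
    out_h = out_w // 2
    lat = np.radians(np.linspace(90, -90, out_h, endpoint=False))
    lon = np.radians(np.linspace(-180, 180, out_w, endpoint=False))
    lat, lon = np.meshgrid(lat, lon, indexing="ij")
    lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
    cos_c = np.sin(lat0) * np.sin(lat) + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0)
    x = np.cos(lat) * np.sin(lon - lon0)                  # -1..1 across the disk
    y = np.cos(lat0) * np.sin(lat) - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)
    col = np.clip(((x + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    row = np.clip(((1 - (y + 1) / 2) * (h - 1)).astype(int), 0, h - 1)
    out = np.zeros((out_h, out_w, 3), dtype=img.dtype)
    visible = cos_c > 0                                   # keep the visible hemisphere only
    out[visible] = img[row[visible], col[visible]]
    return out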

Here is another video from my Blueturn app (freely available on all platforms), which better shows the transition between spherical and planar.

https://vimeo.com/207296473/28f01f0807

Enjoy!
scalbers
Looks really nice to see the smooth changes in the clouds - interesting to see how much they evolve during the course of a day. It's a fun challenge to try and perceive this with a vertical perspective view where the Earth is rotating.

As a quick note I often like to suggest a bit lower contrast between the blue ocean/sky and the white clouds to have a more linearly proportional displayed brightness.
Michael Boccara
Thanks Steve.
This is indeed a nice global view of the cloud motion without any collage artifacts.
And taking note of your remark about the blue/white contrast: aren't you suggesting rather to increase it?

My next plan for the default sphere view is indeed to add some 3D navigation to pivot around the Earth beyond the L1 viewpoint. It's a simple thing to do with Unity. A geosynchronous view from above the pole (one with constant daylight) would indeed be an interesting vantage point to see the dynamics of the Coriolis force on the clouds. Maybe soon on the SOS? wink.gif
scalbers
For the contrast I was thinking of reducing it, by increasing the brightness and having more of a sky-blue color in the clear-sky areas over the ocean. The bright clouds would then stay about the same. Here's an updated version of my earlier blog post on the Planetary Society site with a detailed rationale. Another example of this is ugordan's very nice LROC colorized image. Even this one, though, appears to have some contrast enhancement.

Click to view attachment

One current thing I'm doing is improving the sun glint so it looks more accurate for a crescent Earth (maybe for a future version of DSCOVR) as well as when fully lit. Here is an animation of the various phases of the Earth.

Indeed SOS will be fun to look at this with. It's easy to change the viewpoint then to see a polar view. I'll try this out with both geosynchronous and sun-synchronous. Note that SOS Explorer can also do the 3D navigation. There is a tradeoff to the versatility of various viewpoints in that we wouldn't then be properly seeing the hazier-looking limb.
Stratespace
QUOTE (Michael Boccara @ Mar 7 2017, 04:13 PM) *
Sharing some nice results I had with interpolating EPIC images and projecting them onto a planar map (equirectangular).
Great work, congrats! I haven't worked on the data again since last time, but apparently you achieve much better navigation/projection performance than I did. How did you improve it so dramatically? I want to get back to working with such metadata!
Michael Boccara
QUOTE (Stratespace @ Mar 13 2017, 09:43 PM) *
Great work, congrats! I haven't worked on the data again since last time, but apparently you achieve much better navigation/projection performance than I did. How did you improve it so dramatically? I want to get back to working with such metadata!


Thanks Stratespace.
I had dramatic improvements after calculating the enclosing ellipse of the Earth instead of the enclosing circle. Plus, I had a bug in the optimal enclosing circle.
But the funniest thing is that after I did that, my resulting ellipse always had its normalized center rounded to (0.500, 0.500) (yes, zeros to the 3rd decimal), and the axis sizes constantly at (0.777, 0.776), accounting for the ellipsoid's polar squeeze. It means that the images were already aligned by NASA's EPIC team. In other words, the image is centered on the Earth's center to 1-pixel precision. In other words, I worked hard for nothing smile.gif That was not the case a few months ago; it seems a recent refresh of the data brought this calibration improvement.

Bottom line: you can proceed with your work and use the EPIC metadata as-is.
Note that I'm using the L1B data from the EPIC website (https://epic.gsfc.nasa.gov/), not from the ASDC archive (https://eosweb.larc.nasa.gov/project/dscovr/dscovr_table).
I don't think it makes much of a difference. However, please note this explanation I once received from a member of the EPIC team:
The level 1B (L1B) data is the science data product. This has the raw calibrated data that the scientists use. It also includes the complete geolocation information (per pixel lats/lons/angles, etc) and the astronomical/geolocation values required to do the calculations. The complete astronomical/geolocation metadata has been added to the images.


Michael
Michael Boccara
Hi

This is my version of the eclipse of last week, based on 13 DSCOVR images separated by 20 minutes each. NASA tuned DSCOVR specially for the occasion:

https://vimeo.com/230632867

Also as an interactive video via the online app:
http://app.blueturn.earth/?date=2017-08-21_15-17-45

Enjoy

Thanks

Michael
monty python
Thank you. This is the kind of video my friends with little astronomy knowledge can appreciate. At my location in Iowa we had a 90% eclipse, and eclipse glasses sold out fast!
Michael Boccara
Hi,

I released a new version of my app Blueturn. It is online as a web app and also available on Android. The iOS update will arrive in a few days.

Besides improving the basic feature of browsing and interpolating DSCOVR/EPIC images into a smooth interactive video, the app now lets you switch vantage points (DSCOVR, L1, Moon) and access geostationary views in 3D or in 2D maps (Mercator and Plate-Carree).
It's also worth zooming out to see the Lissajous path of DSCOVR around the L1 point.
I also added an enhanced view by applying transparency to darker pixels and using a CG-illuminated Blue Marble model underneath.
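Conceptually the enhanced view is something like the following Python sketch (the alpha curve here is my own guess for illustration, not the app's published formula):

CODE
import numpy as np

def enhance_over_basemap(epic_rgb, basemap_rgb):
    # Composite an EPIC frame over an illuminated Blue Marble layer,
    # making darker EPIC pixels more transparent.
    lum = epic_rgb.astype(float).mean(axis=-1, keepdims=True) / 255.0
    alpha = np.clip(lum * 1.5, 0.0, 1.0)     # dark pixels -> low alpha
    return (alpha * epic_rgb + (1.0 - alpha) * basemap_rgb).astype(np.uint8)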

Enjoy:
http://app.blueturn.earth

(Click the down-arrow in the top-right corner for advanced features)

Michael
scalbers
Nice to see this more flexible version of the app is now available. As we were talking about offline there is now the "SWIM" option in the advanced features. This is a color enhancement to produce colors and contrast more similar to the simulated DSCOVR images I've been making.
Michael Boccara
QUOTE (scalbers @ Sep 13 2017, 07:29 PM) *
As we were talking about offline there is now the "SWIM" option in the advanced features. This is a color enhancement to produce colors and contrast more similar to the simulated DSCOVR images I've been making.


Yes, and as a shortcut I just made this quick little video with SWIM mode enabled:
https://drive.google.com/file/d/0B0fW3eqrNr...iew?usp=sharing
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2024 Invision Power Services, Inc.