Technically, red shifting is caused by movement (the Doppler effect). What NASA is doing with the images is more like a transposition. They just scoot the frequencies sensed by the cameras into the visible spectrum so we can see them. Cool images either way. 🙂
Tom
More than 60%, and better tools than Photoshop - astro imaging software contains all sorts of error-correcting and deconvolving tools, and operates on 96-or-more-bit images (e.g. luminance plus 3 filters, 24 bits per pixel). Photoshop's not up to it... Look at PixInsight, for example...
Hexagonal mirrors... Those diffraction spikes are a bit disturbing.
Basically they un-redshift it.
Nah, they just took the mono luminosity of the sensor, normalised (re-scaled) the values within the image and then applied that to a colour palette. Not really un-redshifting, more like using colour crayons to follow the infrared image.
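To make that concrete, here's a minimal Python sketch of the idea - not NASA's actual pipeline, and the file name and choice of palette are placeholders: take a single-filter mono image, normalise it to 0..1, and push it through a colour map.

```python
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits   # FITS is the usual container for telescope data

# Hypothetical single-filter (mono) image; the file name is made up.
mono = fits.getdata("jwst_filter_example.fits").astype(float)

# Normalise (re-scale) to 0..1, with a percentile stretch so a few hot
# pixels don't wash out the faint stuff.
lo, hi = np.percentile(mono, [0.5, 99.5])
scaled = np.clip((mono - lo) / (hi - lo), 0.0, 1.0)

# "Colour crayons": map the mono luminosity through an arbitrary palette.
rgb = plt.cm.inferno(scaled)[..., :3]
plt.imsave("false_colour.png", rgb)
```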
It's probably more complex than is suggested:
a) sensor noise (not all pixels work perfectly)
b) telescope convolution of the image
c) image object luminosity being down in the noise means stacking multiple images. The noise has a Gaussian distribution (except for hot/misbehaving pixels) and the signal of the object doesn't, so if you align the images the object signal gets stronger faster than the surrounding noise (there's a rough stacking sketch after this list). There are more advanced noise techniques, but randomness is difficult to remove without a lot of data about the randomness. You could ask whether the noise comes from a quantum source or not, but in the case of this question it should be "how much noise is in the pixel?"
d) Multiple images can be stacked with the frames offset so you effectively sample at sub-pixel resolution (this is called super resolution and was developed for Hubble). The downside is that your noise is blurred across the image. This is how I get better resolution than my scope's Dawes limit and sensor size would otherwise allow.
e) Dawes limit... as you get closer, the light is spread over more area and more sensor pixels, adding to the noise, etc. If you know the scope's point spread function you can reconstruct the image and remove a large amount of the blur (see the deconvolution sketch below), but it's reconstructed data... which complicates scientific imaging.
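On point c), here's a rough numpy sketch (a toy model, not anyone's actual pipeline) of why stacking pulls a faint object out of the noise: the object's signal adds up coherently across aligned frames, while the Gaussian noise only grows as the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a faint "star" whose per-frame signal sits well below the noise.
signal = np.zeros((64, 64))
signal[32, 32] = 0.5          # object flux
noise_sigma = 2.0             # per-frame noise, 4x the object flux
n_frames = 400

# Already-aligned frames: same signal every time, fresh Gaussian noise each time.
frames = [signal + rng.normal(0.0, noise_sigma, signal.shape)
          for _ in range(n_frames)]

single = frames[0]
stacked = np.mean(frames, axis=0)

def peak_snr(img):
    # Star pixel over the scatter of every other pixel.
    background = np.delete(img.ravel(), 32 * 64 + 32)
    return img[32, 32] / background.std()

print(f"SNR in one frame: {peak_snr(single):.2f}")
print(f"SNR in the stack: {peak_snr(stacked):.2f}  (sqrt({n_frames}) = {np.sqrt(n_frames):.0f}x better)")
```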
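And on e), a bare-bones Richardson-Lucy loop, which is one standard way of deconvolving with a known point spread function (whether any given pipeline uses this exact method is an assumption on my part):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution sketch.

    blurred : observed image (assumed to be the true scene convolved with psf, plus noise)
    psf     : the scope's point spread function
    """
    psf = psf / psf.sum()                  # PSF must integrate to 1
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    for _ in range(iterations):
        # Blur the current estimate and compare it with what was actually observed...
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # ...then push that correction back through the flipped PSF.
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

Run it for too many iterations and it starts sharpening the noise instead of the object, which is part of why deconvolved data counts as reconstructed rather than measured.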
I used to have PixInsight. Great tool for processing images. I remember using it on an aeroplane, and a guy asked me later, as I went past, what it was (he was a photographer).
Yes, it's more like transposing. Or is it pitch shifting? I don't know what the sensors are like in the telescope, but the images do have to end up in RGB color space for us to see them on our monitors. How is that mapped from the telescope sensor(s) to standard RGB?
Is it just re-scaled to shorter wavelengths, or is there some more sophisticated mapping of values? I would hope that there are more than just 3 wavelength sensors in the telescope. Translating multiple infrared sensors into 3 colors (RGB) would need some sort of remapping rather than just a pitch shift. But of course not everything in the image is red-shifted by the same amount. Do they take that into account when making images that we can see?
Note there is a lot of artistic license added as well. They want the images to look pretty*. As much was admitted on the NASA live stream last week.
*within limits of retaining optimal scientific data.
Shadows from the struts, they say. It is a piece of info that says that star is in the foreground and really "close".
dave
Yes, I've read about this with Hubble and other scopes. The "Wow!" images released to the public are for pleasure more than for science. They do their best to make them look great.
All astrophotographers do that (I have a permanent dome with 3 computer controlled scopes... 🙂 ). Even though it makes it look pretty, there is good scientific value to assigning colours to narrowband imaging - anything that helps humans visualise is a help, as long as you understand the translation.
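Roughly what that colour assignment looks like in code - a sketch of the general idea only, with made-up file names for three filters (longest wavelength mapped to red, shortest to blue):

```python
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

def stretch(img, lo_pct=0.5, hi_pct=99.5):
    """Rescale one filter's mono image to 0..1 with a percentile stretch."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Example filter images, longest wavelength first (file names are hypothetical).
red   = stretch(fits.getdata("long_wavelength_filter.fits").astype(float))
green = stretch(fits.getdata("mid_wavelength_filter.fits").astype(float))
blue  = stretch(fits.getdata("short_wavelength_filter.fits").astype(float))

# Chromatic ordering: the longer the wavelength, the "warmer" the channel.
rgb = np.dstack([red, green, blue])
plt.imsave("composite.png", rgb)
```

Narrowband imagers do the same thing with, for example, sulphur, hydrogen-alpha and oxygen filters mapped to R, G and B (the so-called Hubble palette) - the colours are a translation aid, not what the eye would see.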
I used 2” filters for RGBL imaging; JWST also has selectable filters/diffraction gratings for its IR sensors.
I just wish Orion's "Trapezium" was on the list at the start 🙂 It's one of my favourite images to test resolving power - once the power comes back on here I'll dig out an image. Getting the binary stars is fun 🙂