Idea: using super-resolution imaging techniques to recover a signal buried in 32-bit ADC noise.

In imaging it's possible to recover information (and increase resolution) by using multiple images that are aligned and stacked. It also means an object (a signal) can be recovered from below the noise floor of a standard image.

[Attached image: one of the author's astro photos, showing a galaxy and four stuck sensor pixels.]


Here's one of my astro images. Note the four dots - they are stuck camera sensor pixels. The imaged object (a galaxy) is used as the reference point to align each of the images, which are slightly offset from each other.

This means you end up with sets of pixels (i.e. samples) in a grid stacked on top of each other - aligned not by the grid but by the central point (and image) of the galaxy (our signal of interest).

So, in the same vein, it should be possible to generate a low-jitter signal, say 1 kHz, and take an FFT of it. Next, move the 1 kHz tone a small amount down and then up in frequency, taking FFTs each time.
Then create an FFT grid 10X the size, scale each FFT up 10X, and stack them (add the bins) with the central points of the generated signal right on top of each other, on the bin that represents 1 kHz. The dynamic and static noise will then shift around.

The result would be a more detailed FFT with a lower noise floor. In theory you could calculate how many captures it would take to lower the noise floor by a predictable amount (treating the random noise as Gaussian).
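That Gaussian-averaging claim is easy to check numerically. Here's a minimal pure-Python sketch (the sample rate, record length, tone amplitude and noise level are all made-up illustration values, not a real measurement setup): coherently stack aligned captures of a 1 kHz tone buried in Gaussian noise, and the noise averages down as sqrt(N) while the tone survives.

```python
import math
import random

random.seed(0)

FS = 48_000    # sample rate in Hz (assumed for illustration)
F0 = 1_000     # test tone, as in the post
N = 480        # samples per record = 10 full cycles of 1 kHz
RECORDS = 100  # number of aligned records to stack
AMP = 0.1      # tone amplitude, 20 dB below the 1.0 RMS noise

def capture():
    """One noisy record: the tone buried in Gaussian noise."""
    return [AMP * math.sin(2 * math.pi * F0 * n / FS) + random.gauss(0, 1.0)
            for n in range(N)]

# Coherent stacking: average RECORDS aligned captures sample by sample.
avg = [0.0] * N
for _ in range(RECORDS):
    for n, x in enumerate(capture()):
        avg[n] += x / RECORDS

# Recover the tone amplitude by correlating with the known 1 kHz sine.
amp_est = 2 * sum(avg[n] * math.sin(2 * math.pi * F0 * n / FS)
                  for n in range(N)) / N
print(amp_est)
```

With RECORDS = 100 the per-sample noise RMS drops from 1.0 to about 1/sqrt(100) = 0.1, so a 0.1-amplitude tone at the noise floor of a single capture becomes cleanly measurable.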

A 24-bit ADC could, in theory, recover more than 21 bits. It is, though, a long measurement and thus subject to temperature drift etc.

Thoughts?
 
Seems a lot like what we do with averaged FFTs already? You seem to be adding some autocorrelation for spatial alignment when processing images.

Thing is, to use one ADC in multiple passes, it would appear necessary to repeat the exact analog input signal multiple times. The ways we have to do that are prerecorded digital, tape, and vinyl. I don't know that any of them are more than 21 bits to begin with. OTOH, perhaps one could use multiple parallel ADCs to capture the same live event, then sum the outputs to average out random noise that way.
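The parallel-ADC version can be sketched with a toy quantizer (everything here - the 8-bit resolution, the noise level, the 64 channels - is an invented illustration, not a real converter model): each channel's independent input noise acts as dither, so the channel average resolves the input more finely than one channel's LSB.

```python
import random

random.seed(1)

BITS = 8                  # deliberately coarse "ADC" so the effect is visible
STEP = 2.0 / (1 << BITS)  # one LSB over a -1 V .. +1 V range
M = 64                    # number of parallel ADCs sampling the same event

def adc(x):
    """One ADC channel: independent noise (acting as dither), then quantize."""
    x += random.gauss(0, STEP)     # noise comparable to one LSB
    x = max(-1.0, min(1.0, x))     # clip to the input range
    return round(x / STEP) * STEP  # snap to the quantizer grid

v = 0.123456  # the instantaneous "live" voltage being captured
stacked = sum(adc(v) for _ in range(M)) / M
print(stacked)
```

A single channel can only land on multiples of the ~7.8 mV step; the 64-channel average typically lands within a small fraction of a step of the true value.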
 
IMO the main challenge that you will encounter when trying to apply this to the audio realm is this: The images of the sky that you are trying to improve are essentially time-invariant over the course of your measurements so if you take multiple exposures the subject remains constant in time over that time period. This will not be the case for audio. The signal will be different from instant to instant on the msec scale, so you cannot use averaging for normal sample rates (but see the next paragraph). It might be possible to average the input from a number of very closely spaced and therefore tiny microphones, like a MEMs array, but I think you will find that the result is not all that improved.

The only real signal improvement technique that I have read about was when audio was sampled at extremely high (for audio at least) rates of several MHz (IIRC) and then these were binned/averaged to improve S/N or dynamic range. I forget exactly the result. But it did work pretty well. The project was described on this site somewhere, I think about 2-3 years ago or so.
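The arithmetic behind that oversample-and-average trick is simple: averaging R samples cuts random noise by sqrt(R), and each factor of two in noise amplitude is worth one bit, so you gain half a bit of effective resolution per doubling of the rate. As a trivial helper (the ratios are just examples):

```python
import math

def extra_bits(oversampling_ratio):
    """Effective bits gained by averaging: noise drops by sqrt(R),
    and each factor of 2 in amplitude is one bit."""
    return 0.5 * math.log2(oversampling_ratio)

print(extra_bits(4), extra_bits(256))  # 1.0 bit at 4x, 4.0 bits at 256x
```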
 
As an instrument's output is f(t), converting over time (using shifted time) - then you're right. Also a sine wave would essentially just be shifted in phase.

Take FFT( sin(2•pi•f•(t+x)) ), where t is time and x is the variation (i.e. the shift) from that time: the cycle will not change over x shifts (its phase may, but not its frequency), but its amplitude might (a string getting quieter, or signal noise changing it).
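That phase-versus-frequency point can be checked directly: a fractional time shift rotates the phase of the tone's DFT bin but leaves its magnitude untouched. A small pure-Python check (the bin number, record length, and the 3.7-sample shift are arbitrary choices for illustration):

```python
import cmath
import math

F0 = 5   # tone sits exactly on bin 5
N = 100  # record length in samples

def tone_bin(shift):
    """DFT coefficient at bin F0 for a sine shifted by `shift` samples."""
    xs = [math.sin(2 * math.pi * F0 * (n + shift) / N) for n in range(N)]
    return sum(x * cmath.exp(-2j * math.pi * F0 * n / N)
               for n, x in enumerate(xs))

a = tone_bin(0.0)
b = tone_bin(3.7)                      # fractional-sample shift
print(abs(a), abs(b))                  # equal magnitudes (both N/2)
print(cmath.phase(a), cmath.phase(b))  # phases differ by 2*pi*F0*3.7/N
```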
Also, phase correlation returns the point where there is the best match (i.e. every position has a value for how strong the correlation is).

Dynamic events get ignored as there’s little to correlate.

You’d need a constant source, and dynamic changes (like the atmosphere wobble) will be ignored for the strongest signal.

What may cause an issue is FFT artifacts - thinking of windowing, for example, and how its artifacts may appear.
 
The technique is called "boxcar integration". For a periodic signal, take one sample per period over N periods. Average them. The signal increases by N times while the noise increases by sqrt(N). The signal can be recovered even if buried in noise.
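A minimal sketch of boxcar integration (the period length, tone level, and period count are invented values): take one sample at the same phase of every period and average; a tone 26 dB below the noise RMS comes out cleanly.

```python
import math
import random

random.seed(2)

PERIOD = 48         # samples per period of the (synchronous) test tone
N_PERIODS = 10_000  # periods to integrate over
A = 0.05            # tone amplitude: 26 dB below the 1.0 RMS noise

def sample(n):
    """One noisy sample of the periodic signal at absolute sample index n."""
    return A * math.sin(2 * math.pi * n / PERIOD) + random.gauss(0, 1.0)

# Boxcar: pick the sample at the same phase in every period (index 12,
# the positive peak of the sine) and average the picks.
picks = [sample(k * PERIOD + 12) for k in range(N_PERIODS)]
est = sum(picks) / N_PERIODS
print(est)
```

After 10,000 periods the residual noise is about 1/sqrt(10,000) = 0.01, well below the 0.05 peak being measured.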

The FFT already does this kind of averaging. Make the window large enough and the FFT will pull periodic signals out of the noise.

Your ears can do that too. 🙂
Ed
 
Reading up about boxcar averaging - then yes it seems equivalent.

Boxcar has a sliding sampling window over the time of the signal of interest.

This (which was developed for Hubble by NASA) moves the box (i.e. the pixels) across the target of interest - essentially sliding the sampling window over the target signal.

The difference is that instead of needing an accurate window, the NASA version will take any sample set and translate it. It doesn't need to be that accurate, as it uses the entire signal over time as the reference point to find a sub-resolution correlation point. The more data instances, the better the resolution. NASA called it super-resolution.

In this version the sampling window is the FFT's sampling - essentially the FFT bins become the pixel windows.
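For what it's worth, the shift-and-stack / super-resolution step itself can be sketched in a few lines (the pulse standing in for the galaxy, the 10X upsampling, the noise level and the capture count are all invented for illustration): upsample each noisy capture onto a finer grid, align it to a reference by its correlation peak, and average.

```python
import math
import random

random.seed(3)

UP = 10  # super-resolution upsampling factor (the "10X" grid)
N = 64   # coarse samples per capture

def pulse(t):
    """The 'galaxy': a smooth pulse centred at coarse-sample position 32."""
    return math.exp(-((t - 32.0) ** 2) / 8.0)

def capture(offset):
    """A coarse, noisy capture of the pulse shifted by a fractional offset."""
    return [pulse(n + offset) + random.gauss(0, 0.2) for n in range(N)]

def upsample(xs):
    """Linear interpolation onto a UP-times-finer grid."""
    return [xs[i] * (1 - k / UP) + xs[i + 1] * (k / UP)
            for i in range(len(xs) - 1) for k in range(UP)]

def best_shift(ref, xs):
    """Integer shift (on the fine grid) that maximises the cross-correlation."""
    def corr(s):
        return sum(ref[i] * xs[i + s] for i in range(UP, len(ref) - UP))
    return max(range(-UP, UP + 1), key=corr)

ref = upsample(capture(0.0))
stack = ref[:]
M = 20
for _ in range(M):
    u = upsample(capture(random.uniform(-0.5, 0.5)))  # unknown sub-sample shift
    s = best_shift(ref, u)
    for i in range(UP, len(ref) - UP):
        stack[i] += u[i + s]
stack = [v / (M + 1) for v in stack]

peak = max(range(UP, len(stack) - UP), key=lambda i: stack[i])
print(peak)  # near 32 * UP on the fine grid
```

The correlation alignment is what lets captures with unknown sub-sample offsets stack coherently - the same role the galaxy's centre plays in the astro images.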
 
I think that member 'simon7000' has been doing that, measuring resistors at -150 dB.
Two decades ago, a friend used a web-cam on a very good telescope to take great pictures of Mars as it passed by. Back then, good digital telescope cameras were very pricey.