I'm looking for a way to corroborate some audio loopback measurements I'm working on.
If I:
- plug a loopback cable between Line out and Line in
- play a 1 kHz tone (at 48 kS/s)
- record that tone (for, say, a second)
- run a Fourier transform over that data
- adjust the playback and capture gains until the optimum point is found (just below the level where clipping is detected)
Are there any examples of what these spectra should look like?
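Here is roughly how I'm generating and capturing the tone, in case it helps frame the question. This is a minimal sketch that assumes the third-party sounddevice (PortAudio) library and the default playback/capture devices; the Blackman window and 0.5 full-scale amplitude are just my choices, not anything standard:

```python
# Minimal loopback sketch. Assumes the third-party sounddevice
# (PortAudio) library and that the default devices are the ones
# wired together with the loopback cable.
import numpy as np
import sounddevice as sd

FS = 48000        # sample rate, S/s
F0 = 1000.0       # test tone, Hz
DURATION = 1.0    # seconds
AMPLITUDE = 0.5   # fraction of full scale; raise toward 1.0 until clipping

# One second of a 1 kHz sine.
t = np.arange(int(FS * DURATION)) / FS
tone = (AMPLITUDE * np.sin(2 * np.pi * F0 * t)).astype(np.float32)

# Play and record simultaneously over the loopback cable.
rec = sd.playrec(tone, samplerate=FS, channels=1, blocking=True)[:, 0]

# Window before the FFT so the tone doesn't smear across bins.
win = np.blackman(len(rec))
spectrum = np.abs(np.fft.rfft(rec * win))
freqs = np.fft.rfftfreq(len(rec), 1.0 / FS)

# Magnitude in dB relative to the strongest bin (the fundamental).
mag_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
peak = spectrum.argmax()
print(f"fundamental at {freqs[peak]:.1f} Hz, 0 dB reference")
```

Plotting mag_db against freqs gives the spectrum I'd like to compare against known-good examples (fundamental at 0 dB, harmonics and the noise floor below it).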
And then if:
- I remove the 1 kHz tone from the data
- remove the subsonic/ultrasonic components (I assume there is a standard weighting that could be applied here)
then is what is left over a measure of THD+Noise?
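Concretely, this is how I'm computing the figure. Again a sketch, not a reference implementation: the ±20 Hz notch around the fundamental and the 20 Hz–20 kHz band edges are my own assumptions, and rec and FS come from the capture sketch above:

```python
# THD+N sketch: notch the fundamental out of the spectrum, band-limit
# to the audio band, and compare residual RMS to total in-band RMS.
# rec and FS come from the capture sketch above; the +/-20 Hz notch and
# the 20 Hz..20 kHz band edges are my assumptions, not a standard.
import numpy as np

def thd_plus_n(rec, fs, f0=1000.0, notch_hz=20.0, band=(20.0, 20000.0)):
    win = np.blackman(len(rec))
    power = np.abs(np.fft.rfft(rec * win)) ** 2
    freqs = np.fft.rfftfreq(len(rec), 1.0 / fs)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    fundamental = in_band & (np.abs(freqs - f0) <= notch_hz)
    residual = in_band & ~fundamental

    # Parseval: summing bin powers is proportional to time-domain power,
    # so the ratio of sums gives the RMS ratio directly.
    return 100.0 * np.sqrt(power[residual].sum() / power[in_band].sum())

print(f"THD+N = {thd_plus_n(rec, FS):.4f} %")
```

Some definitions divide by the fundamental alone rather than the total in-band power; at residual levels this low the difference is negligible.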
What sort of figures would you expect to see for THD+Noise on a nasty audio CODEC and a good audio CODEC?
In my current experiments I'm getting figures of roughly 0.005% on a laptop and 0.01% on a desktop. Do those numbers sound about right?
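For what it's worth, datasheets usually quote THD+N in dB rather than percent; the conversion is 20·log10 of the ratio, so 0.005% works out to about -86 dB and 0.01% to -80 dB:

```python
import math

# 0.005 % -> 20*log10(0.00005) = about -86 dB; 0.01 % -> -80 dB.
def thd_pct_to_db(pct):
    """THD+N in percent to dB below the fundamental."""
    return 20 * math.log10(pct / 100.0)

print(thd_pct_to_db(0.005), thd_pct_to_db(0.01))
```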
There is a whole thread that goes into great detail on such things; maybe it's what you are looking for: Digital Distortion Compensation for Measurement Setup