DIY AD1853 and AP measurement

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Terry Demol said:

Not many people are aware, but the AD1853 has a current-set
resistor (R28, p. 14 of the data sheet) on pin 10, which can be
used as a true analog volume control.

I know of no one who has used this feature, but it is
potentially a very good idea, making a totally preamp-free signal
chain possible.

I won't touch R28 :rolleyes:

It decreases the signal level, but the noise floor remains.
 
banana said:


I won't touch R28 :rolleyes:

It decreases the signal level, but the noise floor remains.

Have you tried it?

Depends on the internal topology. If you make R28 larger, the noise
gain may well be reduced, since a larger R has less noise current than
a smaller R. So the DR reduction may be less than anticipated.
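The thermal-noise argument can be sketched numerically. This is only an illustration; the resistor values below are hypothetical, not taken from the AD1853 data sheet.

```python
import math

def johnson_noise_current(r_ohms, temp_k=300.0, bandwidth_hz=20000.0):
    """RMS thermal (Johnson) noise current of a resistor over the
    given bandwidth: i_n = sqrt(4 * k * T * B / R)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k * temp_k * bandwidth_hz / r_ohms)

# Hypothetical values: doubling R halves the noise power, so the
# noise current drops by a factor of sqrt(2).
i_stock = johnson_noise_current(4.7e3)   # assumed stock R28 value
i_double = johnson_noise_current(9.4e3)  # same resistor doubled
print(round(i_stock / i_double, 3))      # → 1.414
```

So while a larger R28 reduces the full-scale signal current proportionally, its own noise current only falls as the square root, which is why the DR penalty is hard to predict without knowing the internal topology.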

I spoke with AD applications engineers WRT this, and they said there
was "some" reduction in DR. However, it may well lie below the
16-bit floor.

I have an AD1853 DAC on the bench, so I will give it a go when time
permits.

You are lucky to have APII, I am jealous.

Cheers

Terry
 
banana said:


I won't touch R28 :rolleyes:

It decreases the signal level, but the noise floor remains.


My take here: analog noise is less objectionable than quantization noise, which in turn is still nicer than nonlinearities.

Of course, with noise shaping you get lots of out-of-band noise which, combined with the nonlinearities of the op-amp input stages, will give you in-band harmonics.
 
banana said:
Here is the AP2 measurement with 16-bit input.

input 16 bit / 48 kHz, 0 dBFS
Analog output 4.0 Vrms (volume pot at maximum)
measurement bandwidth 20 kHz
THD+N = 0.002%

For 16 bits undithered, you have a theoretical limit of 98 dB S/N because of quantization noise. However, you are measuring 94 dB. At 24 bits, you are measuring 104 dB, which is compatible with the spec of the part.
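As a sanity check on that theoretical limit: the ideal S/N of an N-bit quantizer for a full-scale sine is 6.02·N + 1.76 dB. A quick sketch:

```python
def ideal_snr_db(bits):
    """Ideal quantization S/N for a full-scale sine, in dB:
    SNR = 6.02 * N + 1.76."""
    return 6.02 * bits + 1.76

print(round(ideal_snr_db(16), 1))  # → 98.1
print(round(ideal_snr_db(24), 1))  # → 146.2
```

So the 98 dB figure for 16 bits is exact; at 24 bits the quantization limit is far below what the analog stages of any real DAC can deliver, which is why the part measures 104 dB, not 146 dB.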

Looking at the plots, the harmonics are virtually the same for both measurements, first and second being at -118 dB for 16 bits and maybe a tad worse for 24 bits at -116/117 dB (why?).

The noise floor for the 16-bit measurement is at -122 dB, i.e. equivalent to 20 bits of resolution. I suspect whatever device or software generated the 16-bit input data already used dithering, because no amount of dithering in the AD1853 could bring down the quantization noise of a signal that was already truncated or rounded to 16 bits.

The noise floor for the 24-bit measurement is at -130 dB for the left channel and -135 dB for the right channel, equivalent to 21-22 bits of resolution (hence, I would feel comfortable with a limited amount of digital volume control that shifts 16-bit input data down within the 24-bit word).
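The "equivalent bits" figures can be sketched by inverting the ideal quantizer relation SNR = 6.02·N + 1.76 dB; this is a rough estimate that treats the quoted floor as an integrated S/N:

```python
def equivalent_bits(snr_db):
    """Equivalent resolution in bits from a measured S/N,
    inverting SNR = 6.02 * N + 1.76."""
    return (snr_db - 1.76) / 6.02

print(round(equivalent_bits(122), 1))  # 16-bit input noise floor → 20.0 bits
print(round(equivalent_bits(130), 1))  # 24-bit input, left ch.   → 21.3 bits
print(round(equivalent_bits(135), 1))  # 24-bit input, right ch.  → 22.1 bits
```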

OK, big question then is why the THD+N of the 16-bit, apparently 20-bit-dithered signal reads 94 dB. From eyeballing the graph and trying to imagine an RMS addition of the first five harmonics and the -122 dB noise floor, I would still expect to come out with at least 100 dB. Is there a chance that the input bandwidth on the analyser was set to much more than 20 kHz?
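That eyeball RMS addition can be sketched as a power sum in dB; the component levels below are read off the plot, so treat them as approximate:

```python
import math

def power_sum_db(levels_db):
    """Combine spectral components by summing their powers,
    then convert the total back to dB."""
    total = sum(10 ** (d / 10) for d in levels_db)
    return 10 * math.log10(total)

# Five harmonics near -118 dB plus a -122 dB noise contribution:
components = [-118] * 5 + [-122]
print(round(power_sum_db(components), 1))  # → -110.7
```

That lands around -110 dB, well better than -100 dB, which supports the suspicion that something else (such as the analyser's measurement bandwidth) dominated the 94 dB reading.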
 
capslock said:


OK, big question then is why the THD+N of the 16-bit, apparently 20-bit-dithered signal reads 94 dB. From eyeballing the graph and trying to imagine an RMS addition of the first five harmonics and the -122 dB noise floor, I would still expect to come out with at least 100 dB. Is there a chance that the input bandwidth on the analyser was set to much more than 20 kHz?

Thanks for your post; I did not want to recalculate whether the THD numbers look accurate.

Something else is much more important:

When you look at both 0 dB graphs, you see that the harmonic spectrum is absolutely the same for 16-bit and 24-bit; only the noise is reduced for 24-bit.

For this DAC, which is a modern high-end chip, higher input data resolution only reduces the noise, not the distortion.

Why ?

IMHO the linearity of the DAC is limited to 16 bloody bits or less.

It is total nonsense to use high-bit DACs if the only thing that happens is that the noise is reduced and the distortion becomes more noticeable.

Sound gets worse with more bits.
 
Bernhard, those harmonics are in the -110 to -118 dB range. The quantization noise of 18 bits is also at -110 dB. Those nonlinearities are hence better than 16 bits!

I suspect what we are seeing here are nonlinearities of the internal current sources or the external op amps at signal levels near full scale. Take a close look at the lower right-hand corner of Fig. 16 of the AD1853 data sheet. THD+N has an optimum at about -4 to -6 dB below full scale, and as you move toward full scale, THD+N degrades, which must be due to increasing nonlinearities.

This is actually a sensible optimization, as the MSB is exercised only during rare transients, where you don't care much about nonlinearities.

However, if it makes you sleep better, attenuate all incoming signals by 1 bit (6 dB) and you will get 5 to 10 dB lower harmonic distortion. The degradation of S/N is insignificant with a DAC this quiet. Alternatively, you could use the AD1955, which seems to have solved this MSB distortion issue (Fig. 12 of its data sheet).
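A minimal sketch of that 1-bit attenuation, assuming 16-bit samples delivered MSB-justified in a 24-bit word (the function name is mine, not from any driver API):

```python
import math

def attenuate_one_bit(sample16):
    """Align a signed 16-bit sample to the top of a 24-bit word,
    then shift right by one bit (about -6.02 dB). The original
    16-bit value survives losslessly in the remaining bits."""
    s24 = sample16 << 8   # MSB-justify into 24 bits
    return s24 >> 1       # one bit of attenuation

print(round(20 * math.log10(2), 2))  # one bit ≈ 6.02 dB
x = 32767                            # 16-bit positive full scale
y = attenuate_one_bit(x)
print(y << 1 == x << 8)              # → True: no information lost
```

Because the 16-bit data occupies only the top of the 24-bit word, the shifted-out bit lands in the spare LSBs rather than being truncated, so the attenuation costs nothing in input resolution; only the DAC's own noise floor limits the result.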
 