ADC Cost

SAR ADCs rely more heavily on precision analog circuits than SD ones do; digital circuits are comparatively cheap. There's one capability SAR ADCs have that SD ones can't match: a SAR's input can be time-multiplexed across channels in a multichannel data acquisition system. So that particular application tends to rule out an SD ADC.
 
Terrific explanation! Thank you 🙂 So then, for example, what would be a SAR ADC comparable to a top-end SD like the PCMD3180 or ES9822PRO? Because one thing I've definitely noticed is that once you get to a certain point with SAR ADCs, the SNR isn't near what an SD can do at the same spec.
 
I've not found any SAR ADC with as good a noise spec as the lowest-noise SD ADCs. SAR ADCs tend to have noise in the -100dB region, whereas nowadays there are SD ADCs as low as -120dB.

Of course, a SAR ADC can have its noise lowered by averaging (aka oversampling), but that normally can't provide a gain as large as 20dB.
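To put numbers on that averaging trade-off: averaging N samples of uncorrelated noise drops the noise power by a factor of N, so a 20dB gain would take 100x oversampling. A rough sketch (the function name is mine):

```python
import math

def averaging_snr_gain_db(n_avg):
    """SNR improvement from averaging n_avg samples of uncorrelated noise:
    noise power drops by a factor of n_avg, i.e. 10*log10(n_avg) dB."""
    return 10 * math.log10(n_avg)

# Each 4x averaging buys ~6 dB, so 20 dB needs 100x the sample rate:
for n in (4, 16, 100):
    print(n, round(averaging_snr_gain_db(n), 1))  # → 4 6.0, 16 12.0, 100 20.0
```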
 
A few cents' worth of info: there is another important metric - ENOB (effective number of bits). A 24-bit ADC may have 16 effective bits, depending on the application. I read a very interesting article (I believe written by a TI engineer) which put it like so: 16.7 million steps for a 150mV input signal equates to 0.01uV (that's microvolts) per step. This is out of the realm of reality. And obviously less resolution = more noise.

So, our shiny new ADC might have a 110dB SNR on paper, but in reality 100 would be wowza! Let me explain - if the input hovers around 150mVeff, its peaks reach 1Veff. Thus, a 1V FS ADC operates at least 10-12dB below FS, and so we lose 10dB of the awesome 110dB spec. Now, if we account for other real-life factors such as non-linearity, etc., we lose even more in reality.
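The headroom arithmetic in that post is easy to check: the SNR you actually see is the full-scale spec minus however many dB below full scale the signal sits. A minimal sketch (function name mine, crest factor ignored):

```python
import math

def effective_snr_db(fs_snr_db, v_fs, v_sig):
    """Full-scale SNR minus the headroom (dB below FS) the signal sits at."""
    headroom_db = 20 * math.log10(v_fs / v_sig)
    return fs_snr_db - headroom_db

# A 110 dB ADC with 1 V full scale, driven 10 dB below FS:
print(round(effective_snr_db(110, 1.0, 10 ** (-0.5)), 1))  # → 100.0
```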
 
Price mostly reflects die size, number of production steps, production volume, and supply/demand. Modern CMOS processes are dense, so the die can be small, whereas analog processes are usually spacious by comparison. Once the die is small enough, the dominant cost is packaging.
 
So I found this guy (https://www.analog.com/media/en/technical-documentation/data-sheets/250832fc.pdf) last night after the conversation I had with @abraxalito . So then would this have comparable specs to one of the SD ADCs I posted earlier?

Also, I noticed that one spec that's not talked about much in datasheet literature is delay and settling time (I'd imagine that's because marketing would be all over the place across manufacturers). In the case of a real-time system, what would be the minimum expected cost/spec of an ADC for such a system?
 
110 dB SNDR is 18 bit ENOB, quite feasible nowadays.
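That back-conversion uses the standard relation ENOB = (SNDR − 1.76) / 6.02; a quick check:

```python
def enob(sndr_db):
    """Effective number of bits from SNDR: ENOB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

print(round(enob(110), 2))  # → 17.98, i.e. about 18 bits
```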

It would be silly to let the output word length of a sigma-delta ADC limit the signal to noise ratio. There are noise contributions from the analogue part (additive noise from the loop filter, reference noise and clock jitter), from the round-off errors of the quantizer in the sigma-delta modulator and from round-off errors in the digital processing. Adding one bit suffices to reduce digital noise by 6 dB, while reducing analogue noise by 6 dB usually involves quadrupling power and area. It is therefore far cheaper in terms of power and chip area to design for a negligible noise contribution from digital than from analogue. Besides, when analogue noise dominates, it can to some extent help in dithering the digital round-off errors so that they indeed behave similar to noise rather than as distortion.
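The asymmetry described there (one cheap bit of digital word length vs quadrupling analog power and area) can be put in a short sketch; the kT/C-limited front-end assumption and the function names are mine:

```python
import math

def digital_noise_drop_db(extra_bits):
    """Each extra output bit halves the round-off step: ~6.02 dB per bit."""
    return 20 * math.log10(2) * extra_bits

def analog_cost_factor(noise_drop_db):
    """For a kT/C-limited front end, lowering noise power by X dB means
    scaling capacitance (hence area, and roughly power) by 10**(X/10)."""
    return 10 ** (noise_drop_db / 10)

print(round(digital_noise_drop_db(1), 2))  # → 6.02 dB for one more bit
print(round(analog_cost_factor(6.02)))     # → 4x area/power for the same 6 dB
```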
 
Sigma-deltas use noise-shaping, which means they effectively have much lower noise at lower frequencies. The number of output bits is often chosen to prevent it limiting the performance at low frequencies, then scaled up to a multiple of 8 bits to be easy to handle in byte-oriented systems (and I2S, which supports frames of 16/24/32 bits per sample). Internally the DSP works at many more bits to prevent rounding issues in the processing.

So you might have an ADC with ENOB above 20 at 10Hz but rather less at 20kHz, so using 24 bits for output makes sense even if in practice only 18 bits are generally noise-free.
Also, the noise floor may depend on the signal level, such that those "unnecessary" low-order bits might be useful for quiet sections.
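That frequency-dependent ENOB comes from the noise shaping. The textbook in-band SQNR for an order-L modulator at oversampling ratio OSR (standard delta-sigma formula; the function name is mine) shows how fast the in-band noise falls as the bandwidth of interest shrinks:

```python
import math

def shaped_sqnr_db(bits, osr, order):
    """In-band SQNR of an order-L noise-shaped quantizer:
    6.02*N + 1.76 - 10*log10(pi^(2L)/(2L+1)) + (2L+1)*10*log10(OSR)."""
    L = order
    return (6.02 * bits + 1.76
            - 10 * math.log10(math.pi ** (2 * L) / (2 * L + 1))
            + (2 * L + 1) * 10 * math.log10(osr))

# A 1-bit, 2nd-order modulator at OSR=64 already lands near 85 dB in band,
# and every doubling of OSR adds (2L+1)*3 ≈ 15 dB:
print(round(shaped_sqnr_db(1, 64, 2)))   # → 85
print(round(shaped_sqnr_db(1, 128, 2)))  # → 100
```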
 
Delay and settling time are specs that would apply to a DAC rather than an ADC. For what it's worth, the LTC2508-32 is a very nice part but not something for audio applications.
I mean... sound waves are still waves though. What makes an audio ADC better than a SAR for its application? Wouldn't it be better to use something more precise for similar cost?
I mean, with 2 of those you could easily do 32 bit audio so yeah I don't see why 18 ENOB would even be an issue.
I would figure you'd want less digital in the scheme of things (more stages adding delay, right?). So in the case of the LTC2508-32, where it talks about being 32-bit but with 22 no-latency bits - is this what you mean?

Would that mean overall a hybrid approach, like one SAR and SD, be optimal in interpreting the signal? Or is one truly better than the other?
 
I don't understand you.
I could give you a very long list of people who have the same issue 🤪

Do you have an application in mind where you need a very small delay?
Real time signal processing and manipulation. To be honest, I'm probably coming at this from an odd angle as it is. I have a bit of background in radar testing, so I'm just looking at it as a signal to be manipulated as opposed to pure audio. I admit it, I don't know everything lol
 
No reason you couldn't - as you very accurately pointed out, a signal is a signal, however generated. My thought was (just a WAG, as I haven't had any experience with ADCs targeted at voice/music) that it might be more difficult to get the results you're looking for than with a targeted chip. It would, of course, be overkill for the application, but so what? It's exactly the kind of thing I would use because, why not? More bits! And from a signal-processing standpoint it would be a great choice.

Hal
 
Absolutely! This whole project has been an experience. So far I've learned how to build analog filters from the ground up, and learned a ton about different process architectures. It's fun to find a hobby where you know a lot, but you also DON'T know a lot 🙂