Calibrated DAC with one chip per bit?

Once, in another thread, somebody joked: why not use a DAC for every single bit...

Now, this would not be too complicated with parallel-input DACs like the PCM53 or PCM54, or even the 18-bit PCM64.

A lot of chips could be tested and selected by comparing the output current (or, better, the voltage after the I/V stage) against the theoretical value.

If the previous or next code matched better, it could even be chosen instead.

When using PCM64s to build a 16-bit DAC, the best code could be chosen using those extra in-between bits.
 

OliverD

Member
2002-11-30 10:12 pm
Germany
For a 16-bit source like CD, you could use a 24-bit DAC and "calibrate" it for better linearity by applying a fixed mapping from each 16-bit input value to a 24-bit output value... That leaves you with 256 output values to choose from for each input value.

However, the chips available today achieve almost true 24 bit resolution, so I wonder if it is worth the effort. Also, oversampling or any kind of DSP in the player requires more resolution than 16 bit which limits the "headroom" for calibration.

Anyway, as I recall a similar approach was taken in some commercial product... I tend to connect the brand Nakamichi with it, but I'm not sure at all.
 
OliverD said:
For a 16-bit source like CD, you could use a 24-bit DAC and "calibrate" it for better linearity by applying a fixed mapping from each 16-bit input value to a 24-bit output value... That leaves you with 256 output values to choose from for each input value.
And this is very complicated: 65,536 times choosing from 256 values, and storing all that somewhere.

OliverD said:
However, the chips available today achieve almost true 24 bit resolution, so I wonder if it is worth the effort.

S*itstream maybe...
 

OliverD

Member
2002-11-30 10:12 pm
Germany
And this is very complicated: 65,536 times choosing from 256 values, and storing all that somewhere.

Actually, it's easy. If you parallelize the I2S data, you only need a 64-kByte ROM: feed the 16-bit values from the source both to the ROM's address lines and to the upper 16 bits of the DAC, while the lower 8 bits of the DAC are taken from the ROM's data bus.
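A minimal sketch of how such a ROM could be generated, assuming you already have a measurement of each code's actual output (the error model below is purely hypothetical, a stand-in for real measurement data):

```python
# Build the 64 KiB correction ROM for the 16-bit -> 24-bit scheme.
# The upper 16 DAC bits come straight from the source; the ROM supplies
# the lower 8 bits that nudge each code toward its ideal analog value.

def build_rom(measured):
    """measured[code] = actual DAC output (in LSB units of the 24-bit
    DAC) when the upper 16 bits are 'code' and the low 8 bits are 0.
    Hypothetical data - in practice this comes from a measurement rig."""
    rom = bytearray(65536)
    for code in range(65536):
        ideal = code << 8                  # ideal 24-bit value
        error = measured[code] - ideal     # deviation in 24-bit LSBs
        # The low byte can only ADD 0..255 LSBs, so clamp the correction.
        rom[code] = max(0, min(255, -error))
    return rom

# Toy error model: every code reads 3 LSBs low.
rom = build_rom([(c << 8) - 3 for c in range(65536)])
print(rom[1000])  # -> 3 : the low byte adds back the missing 3 LSBs
```

Note the asymmetry: since the low byte can only add to the output, codes that measure too high cannot be corrected this way unless the mapping is allowed to shift the upper bits as well.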

The most difficult part will be measuring the DAC and programming the 64k calibration bytes into the ROM.

S*itstream maybe...

How big would the linearity error be for, say, a PCM1704 in a 16-bit application?
 
OliverD said:


The most difficult part will be measuring the DAC and program the 64k calibration bytes to the ROM.


I started a thread once about exactly that problem.

OliverD said:


How big would the linearity error be for, say a PCM1704, in a 16 bit application?

BB states 17-bit linearity.
But I do not believe them.
The PCM63 is also very bad at low levels.
But I have not tested the PCM1702/04 yet.
 

wimms

Member
2004-03-02 2:59 pm
home
use DSP plugin
Sorry, slipped there. Not just "use", but *write* a DSP plugin and then use it.
I guess any software player you can write a real-time DSP plugin for will do. Start with a bit-perfect software player, write a DSP stage that takes untouched PCM as input, strips off some n MSBs, turns what's left into an index into correction tables, applies the LSB correction, and outputs PCM. Should be pretty simple.
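The per-sample correction step described above could be sketched like this (table size, the `correction` values, and the use of unsigned offset-binary codes are all hypothetical choices for illustration):

```python
# Sketch of the per-sample DSP correction step: use the top bits of the
# untouched 16-bit PCM as an index into a correction table, and emit a
# 24-bit sample with the LSB correction applied.

N_MSB = 8                            # how many MSBs index the table
correction = [0] * (1 << N_MSB)      # LSB corrections, set during calibration
correction[0x80] = 5                 # hypothetical: codes near mid-scale read low

def correct_sample(pcm16):
    """Map an unsigned 16-bit code to a corrected 24-bit code."""
    idx = pcm16 >> (16 - N_MSB)              # top N_MSB bits -> table index
    out24 = (pcm16 << 8) + correction[idx]   # shift to 24 bit, apply LSBs
    return max(0, min(0xFFFFFF, out24))      # clamp to the 24-bit range

print(hex(correct_sample(0x8000)))   # -> 0x800005
```

Adjusting a `correction` entry while watching the analog output would be the "scope and a slider" workflow mentioned below.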

To actually calibrate, output your calibration signal and measure the DAC output by whatever means you wish, determine which PCM code positions deviate from the linearity you are after, and apply a correction to those code positions. With a suitably written DSP plugin, you could do that in real time with a scope and a slider.

Eventually you'd end up with a table: 16-bit PCM in -> 24-bit PCM out. Most of this table should just be a copy of the input; you really only need to touch the PCM codes that you find need correction.
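The full-table variant is almost trivial to set up: start from an identity mapping and patch individual entries (the two corrections below are hypothetical examples of what a calibration run might find):

```python
# The 16-bit-in -> 24-bit-out table: a pure copy of the input (shifted
# up to 24 bit), with only the measured-off codes patched afterwards.

table = [code << 8 for code in range(65536)]   # identity mapping

# Hypothetical calibration findings:
table[0x8000] -= 12   # the MSB transition reads 12 LSBs high
table[0x0003] += 2    # one low-level code reads 2 LSBs low
```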

Such a DSP plugin gives you great freedom and a fast development cycle for changing the behaviour of your DAC, provided it has enough low-level bits to produce the output you need.

You can also be very creative in how you select the PCM codes that become indexes into the correction table. You don't need a 24-bit-deep table; as OliverD pointed out, 64K or even just 256 entries may do, depending on how bad your DAC is and how hard it is to choose the indexing function. You could even use a bitmask: say the MSB causes a static shift in linearity, then perhaps a single table entry corrects the whole DAC.
 

OliverD

Member
2002-11-30 10:12 pm
Germany
Most probably you will get away with one offset value per bit, so 16 entries should be enough, assuming that the current adder and output stage have much better linearity than the DAC itself, which I believe is a safe bet.
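The per-bit scheme could be sketched as follows (the two offsets are hypothetical; in practice they would come from measuring each single-bit code against its ideal weight):

```python
# Per-bit calibration: one signed offset (in units of the 24-bit DAC's
# LSB) per input bit; the correction is the sum of the offsets of the
# bits that are set in the sample.

bit_offset = [0] * 16
bit_offset[15] = -9      # e.g. the MSB weighs 9 LSBs too heavy
bit_offset[14] = 4       # bit 14 weighs 4 LSBs too light

def correct(pcm16):
    out = pcm16 << 8                 # ideal 24-bit value
    for b in range(16):
        if pcm16 & (1 << b):
            out += bit_offset[b]     # add this bit's measured error term
    return out

print(hex(correct(0xC000)))  # both top bits set: 0xC00000 - 9 + 4 = 0xbffffb
```

This assumes bit errors add independently, which is exactly the premise of calibrating per bit instead of per code.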

However, for a simple non-DSP solution which could be incorporated in some DIY DAC, I'd go for the 64k ROM. No real-time computing power is needed, only a fixed mapping. I prefer to keep the complexity of the final product low even if that means more expenses during design.

You can still decide whether to measure each bit of the DAC and derive the ROM map from those 16 measurements, or to calibrate each possible code. The latter definitely needs an automated calibration setup, or you will be busy for months.