Hi,
I know how to reach extremely high accuracy in 16-bit conversion.
It is really possible, but not for DIYers. 🙁
How?
Some big company will make big $$$ and we will not be able to afford it.
How could we avoid that?
Let me know what you think
Calibration:
Take a 20-bit DAC chip.
Measure the output voltage with a 6.5-digit DVM.
Calculate the theoretical output voltage of an ideal 16-bit DAC for a 16-bit data code.
Find the 20-bit data code whose output voltage on the 20-bit DAC is the closest match to the theoretical 16-bit value.
Store this 20-bit code in an EPROM.
Do this for all possible 16-bit data codes.
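A minimal sketch of that calibration loop in Python, assuming a hypothetical dac_write_code() that loads the 20-bit DAC and a hypothetical dvm_read_volts() that reads the bench DVM; to keep the search practical it only tries 20-bit codes near the zero-padded guess instead of all 2^20:

    # Build the 16-bit -> 20-bit correction table by brute-force search.
    # dac_write_code() and dvm_read_volts() are assumed helpers for the
    # 20-bit DAC under test and the 6.5-digit DVM.
    VREF = 2.0  # assumed full-scale output voltage

    def ideal_16bit_volts(code16):
        # Theoretical output of an ideal 16-bit DAC for this code.
        return VREF * code16 / (1 << 16)

    table = [0] * (1 << 16)
    for code16 in range(1 << 16):
        target = ideal_16bit_volts(code16)
        guess = code16 << 4  # zero-padded starting point
        best_code, best_err = guess, float("inf")
        for code20 in range(max(0, guess - 32), min(1 << 20, guess + 33)):
            dac_write_code(code20)
            err = abs(dvm_read_volts() - target)
            if err < best_err:
                best_code, best_err = code20, err
        table[code16] = best_code  # this table gets burned into the EPROM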
DAC:
Convert the serial data to parallel, feed it to the EPROM address inputs, read the EPROM data, convert it back to serial, and feed the 20-bit DAC.
The 16-bit data is now translated by the EPROM into a 20-bit code that very closely matches the exact value.
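In the playback path the EPROM is nothing more than a lookup table between the deserializer and the DAC; the equivalent operation, sketched in Python with the table built above:

    def convert_sample(sample16, table):
        # What the EPROM does in hardware: 16 address bits in,
        # the calibrated 20-bit code out.
        return table[sample16 & 0xFFFF]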
Next step:
This procedure could be used to eliminate distortion in non-feedback amplifiers.
Instead of measuring the output of the 20-bit DAC, the output of the amp is measured.
The amplitude error correction is stored in the EPROM.
Ok,
do it for us.
65,536 codes have to be stored in the EPROM, and roughly a million measurements have to be done and selected from.
Another, simpler thing could be to append four zero bits to a 16-bit data word and send that to a 20-bit DAC.
I tried to swap a PCM56 for a PCM61; it does not work, the sound is distorted.
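The "append four zeros" idea is just a left shift by four bits; a one-liner sketch:

    def pad_16_to_20(sample16):
        # Append four zero LSBs so the 16-bit word uses the 20-bit code range.
        return (sample16 & 0xFFFF) << 4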
It may be better to find a function to do the correction than to store every value in an EPROM. Or, if every value should be stored, use an SRAM. A DSP could do the measurements at power-up of the device, store the values, and apply them to the signal.
Unfortunately every chip is different.
So no function can be found.
6.5-digit DVMs are still very expensive.
Do you think that can be implemented in the chip?
A DAC has two channels, so 2 million measurements.
How fast is such a DVM?
At one second per measurement, this takes 23 days.
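The arithmetic behind the 23 days, as a quick check:

    measurements = 2 * 1_000_000        # two channels, about a million readings each
    days = measurements * 1.0 / 86_400  # one second per DVM reading
    print(round(days, 1))               # -> 23.1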
I would not use a DVM or standalone gear, but a DSP running a test sequence into the DAC and measuring its output with an ADC directly on the same board. The DSP runs the test sequence, reads the values it gets from the ADC, stores them in RAM, and then accepts the music input signal to compute corrected data for the DAC. This test sequence can be run at power-up, or at CD change etc., once the DAC has warmed up. All the hardware needed for this must be implemented on the DAC's PCB; no measurement by hand etc. is needed, only software. You are not doing anything different from any lab instrument that stores a correction curve for a sensor in an EEPROM, except that those are usually not that fast (thermometers, pH meters etc.).
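A minimal sketch of that power-up self-test, assuming hypothetical on-board helpers dac_write_code() and adc_read_volts() and a coarse 256-point curve kept in RAM (a real implementation would interpolate between the test points):

    def run_selftest(num_points=256):
        # Drive the DAC with known codes and read the result back
        # through the on-board ADC, once the DAC has warmed up.
        curve = {}
        step = (1 << 16) // num_points
        for code in range(0, 1 << 16, step):
            dac_write_code(code << 4)       # zero-padded 20-bit code
            curve[code] = adc_read_volts()  # store measurement in RAM
        return curve

    def correct(sample16, curve):
        # Pick the measured point whose output is closest to the ideal value.
        ideal = 2.0 * sample16 / (1 << 16)  # assumed 2.0 V full scale
        best = min(curve, key=lambda c: abs(curve[c] - ideal))
        return best << 4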
Re: Let me know what you think
Bernhard said:
Calibration:
Take a 20-bit DAC chip.
Measure the output voltage with a 6.5-digit DVM.
Calculate the theoretical output voltage of an ideal 16-bit DAC for a 16-bit data code.
Find the 20-bit data code whose output voltage on the 20-bit DAC is the closest match to the theoretical 16-bit value.
Store this 20-bit code in an EPROM.
and so on...
You forgot that
- the DAC drifts with temperature.
- the DAC drifts with time.

Bernhard said:
This procedure could be used to eliminate distortion in non-feedback amplifiers.

You forgot that nonlinear distortion not only depends on amplitude, but also on frequency.
This 'method' does not even qualify for 'Nice Try'.
Thomas
Re: Re: Let me know what you think
tl said:
and so on...
You forgot that
- the DAC drifts with temperature.
- the DAC drifts with time.
You forgot that nonlinear distortion not only depends on amplitude, but also on frequency.
This 'method' does not even qualify for 'Nice Try'.
Everything drifts with everything.
I forgot that calibration should be done after burn-in and after warm-up.
The DACs I have measured cold and after 5 minutes of warm-up had insignificant drift.
Also, if it is done the way till proposed, chip aging and temperature drift can be neglected even more.
A chip with better static linearity will have less distortion.
Your answer does not even qualify for 'Nice Try'.
Nice idea!
There are techniques where you use more bits than your final resolution to compensate for errors in DACs and ADCs. The thing is that no one could afford such a device if it had to be measured in production, let alone the fact that you would have to ship the level mapping along with the chip as well. I suppose an end-product manufacturer could do it for all his products, but it is not likely.
There are techniques for on-chip calibration that deal with signal-dependent errors as well, but when we are talking resolution above 12 bits, things get hard really fast.
Isn't the PCM1704 already specified to be linear down to 16 bits from production?
±0.5 dB error for a 1002 Hz signal at -90 dB (the sine built from the LSB of 16 bits, though at 24-bit resolution)
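That linearity figure comes from the standard low-level test signal; a sketch generating the 1002 Hz tone at -90 dBFS (roughly one 16-bit LSB of peak amplitude) quantized to 24-bit samples, with the sample rate as an assumption:

    import math

    FS = 44_100             # assumed sample rate, Hz
    F = 1002                # test frequency, Hz
    AMP = 10 ** (-90 / 20)  # -90 dBFS, about one 16-bit LSB of peak amplitude

    def test_tone(n_samples):
        # 1002 Hz sine at -90 dBFS, quantized to 24-bit integers.
        return [round(AMP * math.sin(2 * math.pi * F * i / FS) * (1 << 23))
                for i in range(n_samples)]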
Unless I'm missing something, no one has mentioned correcting the ADC non-linearities.
What you're proposing is analogous (no pun intended) to gamma correction in the film production world, where I believe some monitors have built-in colorimeters(?) to optimise themselves. A 29" unit could cost $70,000, quite expensive for a TV!
If you look at reviews of LCD panels, you will see that most panels are completely unable to show true black - they always let some light through, so the gamma correction can only go so far in manipulating the output for correct brightness.
The same would hold true for a DAC, and the limit of correction would be partly dictated by how accurate the ADC is.
A.
I don't want to spoil your fun, but almost any 24/96 converter can produce "perfect" 16-bit output... It's an easy task these days.

Bernhard said:
Hi,
I know how to reach extremely high accuracy in 16-bit conversion.
It is really possible, but not for DIYers. 🙁
Re: Re: Re: Let me know what you think
Bernhard said:
Everything drifts with everything.
I forgot that calibration should be done after burn-in and after warm-up.
The DACs I have measured cold and after 5 minutes of warm-up had insignificant drift.
Also, if it is done the way till proposed, chip aging and temperature drift can be neglected even more.
A chip with better static linearity will have less distortion.
Your answer does not even qualify for 'Nice Try'.
You could do it on a sample-by-sample basis:
Do
1 - do general calibration run;
2 - store all code offsets;
3 - pre-fetch next sample;
4 - do calibration run for that value only;
5 - convert the sample with correction applied;
6 - goto step 3.
Until done
You will need really fast stuff, but that's just implementation details 😉
Nice Try.
Jan Didden
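A minimal sketch of that per-sample loop, assuming the same hypothetical dac_write_code()/adc_read_volts() helpers as above; step 4 is where the "really fast stuff" is needed, since the re-measurement must finish within one sample period:

    def play(samples, table):
        # Sample-by-sample conversion with on-the-fly recalibration.
        for sample in samples:                # step 3: pre-fetch next sample
            code20 = table[sample]            # start from the stored offset
            dac_write_code(code20)            # step 4: re-measure this value only
            measured = adc_read_volts()
            ideal = 2.0 * sample / (1 << 16)  # assumed 2.0 V full scale
            if measured < ideal:
                code20 += 1                   # nudge by one 20-bit LSB
            elif measured > ideal:
                code20 -= 1
            table[sample] = code20            # keep the stored offsets fresh
            dac_write_code(code20)            # step 5: convert with correction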
Re: Re: Re: Let me know what you think
Bernhard said:
The DACs I have measured cold and after 5 minutes of warm-up had insignificant drift.
If your DACs are already that stable, their accuracy will already be excellent. Face it: the accuracy of a multi-bit DAC is limited by drift (thermal and over time). Using a look-up table does not help against drift.
BTW, 16-bit accuracy is not a problem. If you want true 18-bit (DC) accuracy, get an AD1139K. Hard to get and costs approx. 1000 euros.