John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
A nice recounting of Bernie Gordon's "ICs will never beat discrete" saga. JC was not alone. :)

Planet Analog - Doug Grant - The Elusive 12-Bit D/A Converter

Scott
What is your opinion of the closing lines of Mr. Doug Grant's article (Aug 2013)?

I looked through a few websites to see if anyone has yet made a "true" 12-bit D/A converter... guaranteed over temperature, time, and including all error sources. I didn't find any. Some are pretty close, at least hitting the linearity specs, and even holding them over temperature. Even 16-bit linearity is being met by some devices. Getting the full-scale accurate to 0.024 percent with an on-chip reference is a lot harder.
Will we ever get there? Was Bernie Gordon right in 1977?
Acknowledgments:
Dan Sheingold (ex-Analog Devices) and Walt Kester of Analog Devices provided some fact-checking and filled in a few details for this post.


In Dan Sheingold and Walt Kester's earlier seminal work (Dec 2004),
http://www.analog.com/library/analogDialogue/archives/39-06/data_conversion_handbook.html
on the subject of specifying DAC linearity, they express their concerns in Chapter 5, pages 14-15:

Much more could be said about testing DAC static linearity, but in light of the majority of today's applications and their emphasis on ac performance, the guidelines put forth in this discussion should be adequate to illustrate the general principles. As we have seen, foremost in importance is a detailed understanding of the particular DAC architecture so that a reasonable test/evaluation plan can be devised which minimizes the actual number of codes tested. This is highly dependent upon whether the DAC is binary-weighted (superposition generally holds), fully-decoded, or segmented. The data sheet for the DAC under test generally will provide adequate information to make this determination.
...

Finally, there are several types of DACs which generally do not have static linearity specifications, or if they do, they do not compare very well with more traditional DACs designed for low frequency applications. The first of these are DACs designed for voiceband and audio applications. This type of DAC, although fully specified in terms of ac parameters such as THD, THD + N, etc., generally lacks dc specifications (other than perhaps gain and offset); and generally should not be used in traditional industrial control or instrumentation applications where INL and DNL are critical. However, these DACs almost always use the sigma-delta architecture (either single-bit or multi-bit with data scrambling) which inherently ensures good DNL performance.
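The static-linearity terms in the excerpt (DNL, INL) can be made concrete with a small numerical sketch. This assumes an idealized static test in which the output voltage is measured for every code; it illustrates the definitions only, and is not any manufacturer's test procedure:

```python
import numpy as np

def inl_dnl(measured):
    """Compute DNL and INL (in LSBs) from a measured DAC transfer curve.

    `measured` holds one output voltage per input code, in ascending
    code order. An end-point straight-line fit is used for INL.
    """
    n_codes = len(measured)
    lsb = (measured[-1] - measured[0]) / (n_codes - 1)   # average step size
    dnl = np.diff(measured) / lsb - 1.0                  # each step's deviation from 1 LSB
    ideal = measured[0] + lsb * np.arange(n_codes)       # end-point straight line
    inl = (measured - ideal) / lsb                       # deviation from that line, in LSBs
    return dnl, inl

# A perfectly linear 4-bit DAC has zero DNL and zero INL at every code.
perfect = np.linspace(0.0, 1.0, 16)
dnl, inl = inl_dnl(perfect)
```

Note how this ties into the chapter's point about architecture: for a binary-weighted DAC, superposition lets you test far fewer codes than the full 2^N sweep shown here.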

1. What is the importance of the difference (for a customer's commercial audio application) between a specified linearity and a reliably measured, statistically weighted reported linearity?
2. Are there any industry data supporting the issue of DAC IC linearity variation with time (again, for commercial audio applications)?

George
 
Last edited:
Frank,
If you are using any type of normal soft dome or even many hard domes on the very high frequencies it isn't going to happen with any type of resolution, not if you understand the way they all break up at the upper limits. And you surely can't do that with a cone driver, so I would imagine what I am saying you have never experienced. Not that they won't make some noise at very high frequencies but I have never seem any other type of device that can cleanly work at the extreme high frequencies. It is usually noise you are hearing or very rough FR at the upper limits and most devices have trouble getting over 16Khz before some serious breakup modes in the device. Sorry but you wouldn't know what I am talking about if you didn't hear it.
And I'm sorry, but I do know the sound, Kindhornman!! I am very much aware that normal systems do make a mess, most of the time, of this area, but it's the electronics, not the speaker!! That's why I keep saying that it is very easy to hear systems distorting, because they are generally not able to reproduce this part of the spectrum well!!

I would suspect most people here would get a bit of a shock hearing a relatively conventional setup absolutely nail these sounds, but that's what I work towards all the time; it's a key metric I use for progress made. And once a system is competent at normal volumes, then you test for how loud it can go while still maintaining the integrity of those sounds. If you hear "break up" at some stage it's because the electronics are misbehaving, not the speaker - I've been able to produce those tones at ear-splitting, deafening levels, with full integrity ... because that's what achieving convincing sound is all about ...

I can say this with full confidence, because I've been able to get the most ordinary of speakers to perform, within the limits of the amplifier - over and over again.
 
Take the bottom 6 bits, put them into a 6-bit D/A, and add them to the signal, scaled appropriately.

Better than flipping a coin with dither..

jn

ps..all this goop for a half an LSB..sheesh
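The bit-splitting arithmetic jn describes can be sketched as below. The 10+6 split and the scaling are illustrative choices; with perfectly matched converters the recombination is exact, which is precisely the assumption the next reply challenges:

```python
def split_codes(code16):
    """Split a 16-bit code into a 10-bit main-DAC code and a 6-bit sub-DAC code."""
    msb10 = code16 >> 6        # top 10 bits drive the main DAC
    lsb6 = code16 & 0x3F       # bottom 6 bits drive the auxiliary DAC
    return msb10, lsb6

def recombine(msb10, lsb6, full_scale=1.0):
    """Ideal analog summation: the sub-DAC output is attenuated by 2**10."""
    main = msb10 / 2**10 * full_scale
    sub = (lsb6 / 2**6) / 2**10 * full_scale
    return main + sub

# With ideal (perfectly matched, perfectly scaled) converters the sum
# reconstructs the 16-bit value exactly:
code = 0b1010110111001101
m, l = split_codes(code)
assert abs(recombine(m, l) - code / 2**16) < 1e-12
```

In hardware, any gain mismatch between the two converters shows up as a discontinuity at every 6-bit carry boundary, which is the objection raised below.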

Stick to physics, my friend. :) You guys really need to understand the converter technology before making up these scenarios. As I joked, why not scale the second of two 16-bit converters to make a 32-bit one? Dis don't woik. Neither does the other.

1 LSB TPDF perfectly connects the dots on either the A/D or D/A - very importantly, ON ANY TIME SCALE; there is no averaging. This uncovers the inherent linearity of the device, which of course can vary. Pasting on 6 bits would invariably cause a discontinuity, which would invariably be worse.
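The 1-LSB-peak TPDF dither described here can be sketched in a few lines; the bit depth and the test signal are illustrative. TPDF dither is the sum of two independent uniform (RPDF) sources of ±0.5 LSB each, which decorrelates the quantization error from the signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_tpdf(signal, bits=16):
    """Quantize a +/-1 full-scale signal to `bits`, with 1-LSB-peak TPDF dither.

    Total error stays within +/-1.5 LSB sample by sample (no averaging
    needed), with statistics independent of the signal level.
    """
    lsb = 2.0 / 2**bits
    dither = (rng.uniform(-0.5, 0.5, signal.shape)
              + rng.uniform(-0.5, 0.5, signal.shape)) * lsb
    return np.round((signal + dither) / lsb) * lsb

# A low-level tone, only a few LSB in amplitude, survives quantization
# as tone-plus-noise rather than collapsing into correlated distortion:
x = 1e-4 * np.sin(2 * np.pi * np.arange(4096) / 64)
y = quantize_tpdf(x)
```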
 
Scott
What is your opinion of the closing lines of Mr. Doug Grant's article (Aug 2013)?

1. What is the importance of the difference (for a customer's commercial audio application) between a specified linearity and a reliably measured, statistically weighted reported linearity?
2. Are there any industry data supporting the issue of DAC IC linearity variation with time (again, for commercial audio applications)?

George

To the first point, I'm not sure of the utility of using the A/D as a primary standard. A zero and cal cycle gets you a lot; after all, they make very accurate digital scales legal for trade with far more than 12 bits of resolution.

I don't know of published time stability data, but we do take mass quantities of it in characterization.

I'm not sure what to say about the different ways of specifying it, and measuring it gets very complicated when one worries about things like how many codes are exercised.
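The "zero and cal cycle" mentioned above amounts to a two-point (offset and gain) calibration. A minimal sketch, with the drift numbers purely hypothetical:

```python
def two_point_cal(raw_zero, raw_cal, cal_weight):
    """Build a correction function from a zero reading and a known-weight reading.

    Models the instrument as reading = gain * weight + offset; the two
    readings solve for both unknowns, removing gain and offset drift.
    """
    offset = raw_zero
    gain = (raw_cal - raw_zero) / cal_weight
    return lambda raw: (raw - offset) / gain

# Hypothetical drifted scale: 2% gain error plus a 0.37-count offset.
drifted = lambda w: 1.02 * w + 0.37

# Calibrate with an empty pan and a 100-unit reference weight:
correct = two_point_cal(drifted(0.0), drifted(100.0), 100.0)
assert abs(correct(drifted(57.3)) - 57.3) < 1e-9
```

This is why the cal cycle "gets you a lot": full-scale accuracy then rests on the reference weight, not on the converter's absolute gain stability.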
 
One of the more interesting experiments was to take two 16-bit converters and use one for the six MSBs and the second for the sixteen LSBs. The idea was that the first one was essentially an AGC and really didn't change very often. It clearly showed the need for an algorithm to predict the AGC level in advance.

So a prescient A/D would solve the recording issues. Of course there is one minor problem. :)
 
And I'm sorry, but I do know the sound, Kindhornman!! I am very much aware that normal systems do make a mess, most of the time, of this area, but it's the electronics, not the speaker!! That's why I keep saying that it is very easy to hear systems distorting, because they are generally not able to reproduce this part of the spectrum well!!

I would suspect most people here would get a bit of a shock hearing a relatively conventional setup absolutely nail these sounds, but that's what I work towards all the time, that's a key metric that I use for progress made. And once a system is competent at normal volumes, then you test for loud it can go while still maintaining the integrity in those sounds. If you hear "break up" at some stage it's because the electronics are misbehaving, not the speaker - I've been able to produce those tones at ear-splitting, deafening levels, with full integrity ... because that's what achieving convincing sound is all about ...

I can say this with full confidence, because I've been able to get the most ordinary of speakers to perform, within the limits of the amplifier - over and over again.

How about some actual details about how you achieve this, Frank? I am of the understanding that the best amp has no sound of its own, adds nothing, just amplifies the source signal. You seem to indicate that you can somehow take equipment, which is not designed or built by you, and somehow make it work better.

Every piece of kit has its own inbuilt limitations based on design compromises and budget. So, bearing this in mind, just how do you perform this magic act you keep referring to, and would you do us all a favour and finally impart this wisdom? Technical details are what we need, not endless waffle.

I am obviously no expert and would never claim to be, but I have built several amps and a buffer since joining this forum; I have posted pictures of my builds and discussed the process with people on here. If I ever come across a way of making them work better together, and I do try, the folks on here would be the first to know EXACTLY how I did it.

Apart from using decent electronics and careful speaker positioning, what is left to do Frank? Please do tell.
 
How about some actual details about how you achieve this, Frank? I am of the understanding that the best amp has no sound of its own, adds nothing, just amplifies the source signal. You seem to indicate that you can somehow take equipment, which is not designed or built by you, and somehow make it work better.

Every piece of kit has its own inbuilt limitations based on design compromises and budget. So, bearing this in mind, just how do you perform this magic act you keep referring to, and would you do us all a favour and finally impart this wisdom? Technical details are what we need, not endless waffle.
Which tells me you still don't get it - the key process is being able to identify when a system is not working as well as it can, and then rectify or mitigate the issues. It is impossible to present a useful how-to, because the method essentially involves having a way of looking at a situation, largely learnt by just doing it - it's the difference between learning something by going to lectures, vs. being an apprentice looking over the shoulder of your boss.

I've mentioned before what would happen if you went to a mechanic and asked him how to fix a car that wasn't quite right, using those specific words. He would look at you as if you were a bit daft, and then, if he decided to be gentle about it, would say something along the lines of: first I find out from the customer what is wrong, then I make sure I can also pick the problem happening, then I track down the cause, and then I resolve it.

IOW, if the sound is not good enough, the mental attitude needs to be, from the word go, that something is wrong, or not adequate, with the system - that it's faulty, pure and simple. Completely disregard what is good about the sound, you must fixate on that which is not good - you become a troubleshooter.

Inbuilt limitations? You would be surprised by how good "mediocre" equipment can sound - but it will help if you ditch the idea that it can't possibly perform because "it is mediocre!" Tell yourself that it can sound good, but it isn't at that moment, because it has a number of problems - some of these will be too expensive or messy to correct, not worth the effort - but you will very likely go a long way by sorting out the easier things.

What do I do? Number one, I listen ... a strange thing to do, perhaps :D. I put on a number of recordings until the most glaringly wrong thing about the sound is screaming at me, and then I fixate on locating the cause of that; I worry only about solving that one aspect. This is obviously the tricky bit, because it's mainly hands-on experience that helps here - you need to be able to link the audible artifact to a possible cause, and having been there before, or sometimes plain trial and error, are the ways to get answers.

Yes, no actual details - because how does one describe hearing distortion, and making a mental jump to the right cause of that - at best it would be extremely involved ... and every situation is different.

Sorry about that ... :)
 
Ok. Looks like CD can sound fine for some people. 24-bit players measure and sound better/more realistic. None of the reviews yet say that 16b sounds better than 24b.

Did some of you notice how many bits the Benchmark DAC2 has, and how it uses them?

On HD files, it sounds better than the Benchmark DAC1 it replaced.

Who has a better explanation of why 24b sounds better than 16b, if it is not more bits/lower distortion at the levels actually recorded and played back - mid levels, not the nice number at 0 dB?

Everything else has just been noise.

http://benchmarkmedia.com/products/benchmark-dac2-hgc-digital-to-analog-audio-converter


THx-RNMarsh
 
Richard, after all the recent conversation about dynamic range and bit depth, it seems that 16 bits may be enough with compressed music, and perhaps we can use 24-bit depth minus half the LSB for extreme dynamic range. A change from 44.1 kHz to 96 kHz or above would seem to be the only other thing left to do - not ever more bit depth, just faster clock speeds.
 
Who has convincing DBT evidence that 24-bit sounds better?

Surely the industry that wants to upsell has an incentive to do and publish good studies.

First, one has to measure his D/A to see how it behaves at different sampling frequencies and bit depths. This may be an eye-opener. 99% of the guys here are only storytellers, so do not expect any exact answer. They will never do any exact exploration to find the reasons for sound differences. It is much easier to remain storytellers.
 
AX tech editor
Joined 2002
Paid Member
I've just been reading some Benchmark advertising. What do you guys think about this fragment:

The bottom line is that the playback hardware is the bottleneck. Properly mastered noise-shaped 16-bit recordings will not limit the SNR of the playback experience. [My note: such a system has 118dB SNR]

It is very important to understand that improved noise performance is the only advantage offered by the extended bit depth of a 24-bit system. But, in most cases, this noise advantage is totally obscured by the noise limitations of the playback system.

The real advantage provided by 24-bit systems is the ability to record and produce releases that can fully utilize the SNR available in a 16-bit system. High-quality 16-bit recordings cannot be produced using a 16-bit processing chain. But a 24-bit processing chain can create 16-bit recordings that push the limits of most playback systems. There is certainly no harm in delivering a 24-bit product to the end user, but it may offer nothing more than what could have been delivered on a 16-bit format.


Jan

PS The whole piece is here:
High-Resolution Audio - Bit Depth - Benchmark Media Systems, Inc.
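For reference, the numbers in the quoted Benchmark piece follow from the standard ideal-quantizer figure of roughly 6.02·N + 1.76 dB SNR for a full-scale sine wave; the quoted 118 dB for 16-bit material comes from noise shaping pushing quantization noise out of the audible band, beyond this raw figure. A quick sketch:

```python
def ideal_snr_db(bits):
    """Ideal SNR (dB) of an N-bit quantizer driven by a full-scale sine wave."""
    return 6.02 * bits + 1.76

# 16 bits gives roughly 98 dB raw; noise-shaped dither lowers the audible
# noise floor further (Benchmark's 118 dB figure). 24 bits gives roughly
# 146 dB, beyond what any playback electronics can actually deliver,
# which is the article's point about the playback hardware bottleneck.
snr16 = ideal_snr_db(16)
snr24 = ideal_snr_db(24)
```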
 
Ok. Looks like CD can sound fine for some people. 24-bit players measure and sound better/more realistic. None of the reviews yet say that 16b sounds better than 24b.
I downloaded the free sampler from HD Tracks, and so far I have only had a casual listen on my laptop speakers.
One thing I did notice was an unusual fluidity and relaxedness in the recordings, and an unusual clarity ... within the limits of the inbuilt 'HK' one-inch speakers, of course.
Who has a better explanation of why 24b sounds better than 16b, if it is not more bits/lower distortion at the levels actually recorded and played back - mid levels, not the nice number at 0 dB?
THx-RNMarsh
Better effective resolution at -18 dB average levels is most certainly of significant benefit.

The output of modern SD DACs is pulse-density modulation.
So, by that logic, higher bit depth translates to a higher-frequency carrier at the DAC output.
For one thing, this relaxes the steepness requirement of the reconstruction filtering, and thus the phase distortion of the recovered output audio signal.
This may be another reason.

In the thread dac-filtering-rasmussen-effect, Joe Rasmussen advocates putting a 'supercap' across the SD DAC power supply.
These supercaps are effectively an infinite capacitance with 20-40 ohms internal resistance, i.e. a damping/Zobel network that is effective down to well below the audio band.
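The claim that such a supercap behaves as a resistive damper "down to well below the audio band" can be checked with a simple series R-C model. The 1 F capacitance and 30-ohm ESR below are illustrative picks (the ESR from the 20-40 ohm range quoted above):

```python
import math

def supercap_impedance(freq_hz, c_farads=1.0, esr_ohms=30.0):
    """|Z| of a supercap modelled as an ideal capacitor in series with its ESR."""
    xc = 1.0 / (2 * math.pi * freq_hz * c_farads)   # capacitive reactance
    return math.hypot(esr_ohms, xc)                  # series combination magnitude

# At 20 Hz the reactance of 1 F is only ~8 milliohms, so across the whole
# audio band the part presents essentially just its ESR - a broadband
# resistive damper. The R-C corner sits at 1/(2*pi*R*C), a few millihertz.
z_20hz = supercap_impedance(20.0)
```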

SD DACs are effectively directly switching the power rails to the signal output line, at a high repetition rate.
The effective duty cycle is variable in sympathy with the coded audio signal.
This variable duty cycle could be said to play merry hell with the DAC supply; depending on the supply's reactances, decoupling and damping, all sorts of supply modulation can result, causing the output to deviate dynamically from its specified value.
So the quality of SD DAC supplies is of the utmost importance, but thankfully not all that difficult to achieve, in line with Joe's observations.

The SD DAC technique is an elegant workaround for the precision requirements of ladder-DAC weighting resistors.
The 'downside' is the nature of the noise shaping, which in my experience can be quite audible and imparts a sonic 'character' according to the algorithm employed.

That said, 24 bit depth is in practice a sonically better/cleaner system than 16 bit.
16 bit is generally very good, but 24 bit does take replay to another level.
In pro audio/recording world there is no argument in favour of 16 bit.

Dan.
 