Beyond the Ariel

The reality is that 96 dB is perfectly fine. One can do tests to 'prove' to oneself that this range is no problem at all in any normal listening environment, something I've done on a number of occasions, and it always confirms that conclusion. Of course, one can do ridiculous experiments, grossly exaggerating gains in various ways that have nothing to do with how one listens to real, recorded music ...
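For context, the oft-quoted 96 dB figure is just the quantization-noise floor of ideal 16-bit PCM. A quick back-of-the-envelope check (a sketch of the textbook formula, nothing more):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer with a full-scale
    sine input: 20*log10(2**N) + 1.76 dB, i.e. ~6.02 dB per bit plus
    the 1.76 dB sine-vs-noise factor. Quantization noise only."""
    return 20 * math.log10(2 ** bits) + 1.76

# The bare code range alone gives the familiar '96 dB' number:
print(round(20 * math.log10(2 ** 16), 1))   # ~96.3 dB
# With the full-scale-sine correction it is slightly higher:
print(round(dynamic_range_db(16), 1))       # ~98.1 dB
```

Either way, the point stands: the figure describes an idealized converter, with none of the implementation errors discussed below.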
I think we should not limit ourselves to numbers that are merely adequate today, even though I think you are correct as far as playback systems go. But the technology in the recording and mastering process needs to be much better, to keep the accumulation of tolerances from polluting the end result.

The auditory data commonly referenced seems to be based on steady-state signals, while we are more sensitive to sudden changes and the onset of sounds, at least in my experience.
 
Implementation issues. DACs aren't perfect devices - they introduce errors. The most significant errors for audio (as opposed to instrumentation applications) turn out to be the dynamic ones. Glitches are generated every time a DAC's output code changes - R2R ladder converters are particularly poor in this respect. The following analog circuitry has to settle at each code change, which is in effect a step change. So does the analog circuit slew-rate limit? If so, that's another error. Then there's settling time - only pure exponential settling introduces no error.

All the above issues get worse with increased density of code changes - i.e. higher sample rates.

Should one try to avoid all these problems of multibit converters by going to a sigma-delta architecture instead (as most consumer chips do these days), then you'll open a whole new can of worms in the form of noise modulation.
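For readers unfamiliar with the architecture, a first-order sigma-delta modulator can be sketched in a few lines (a toy model, not any particular chip). Its quantization noise is not a benign constant hiss: the bit patterns, and hence the noise character, shift with the input level, which is the noise-modulation problem referred to above:

```python
def sigma_delta_1bit(samples):
    """Toy first-order sigma-delta modulator: integrate the error
    between the input and the fed-back 1-bit output, then quantize
    the integrator's sign to +/-1."""
    acc = 0.0
    fb = 0.0
    bits = []
    for x in samples:
        acc += x - fb                    # accumulate the coding error
        y = 1.0 if acc >= 0 else -1.0    # 1-bit quantizer
        bits.append(y)
        fb = y                           # feed the decision back
    return bits

# A DC input of 0.5 is encoded as a bitstream whose average is ~0.5:
stream = sigma_delta_1bit([0.5] * 10000)
print(sum(stream) / len(stream))
```

With a DC input the output is exactly periodic (here the pattern repeats every four samples), and both the pattern and its spectral tones move around as the level changes - the 'idle tone' / noise-modulation behaviour that real designs must suppress with dither and higher-order loops.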
I must admit I have no feel for how large these errors are, but with the step-to-step voltage change so small, it is hard to imagine that any overshoot is larger than one sample step. In that case, a higher sample rate will reduce the voltage difference from one step to the next, keeping the overshoot at a smaller absolute level.

Most of the problems I've had with the digital format, ever since it was first introduced into consumer products, have been interactions between the player's motor servo, the digital section, and the analog section. When the first CD players were introduced, they were simply intolerable to my young, untrained ears - until someone took a player and changed the output to a tube-driven stage with its own separate regulator, effectively decoupling the last analog section from the rest. They rebranded the player as the Analogic, I think, and sold it for $600 at the time. It still sits here to this day.

Then came the technology commercially referred to as 1-bit D/A conversion - delta-sigma? Those players sounded different, and I bought one just because of the different technology; it is also still sitting here today. Now I just use a notebook with a Meridian Explorer D/A converter, after selecting the right USB cable for it. Yes, the cable made a very significant difference, which to me is an indicator that the design does not properly decouple the analog ground from the digital ground. So the basic problem still exists after all these years. This really comes down to how the professionals were educated and which ideas they accept or reject. I am still tempted to open up the Explorer and tweak it a bit more.
 
I must admit I have no feel for how large these errors are, but with the step-to-step voltage change so small, it is hard to imagine that any overshoot is larger than one sample step.

I wasn't talking about overshoots in isolation - they're just one aspect of settling transients. The major issue is the one I mentioned first: glitching within the DAC itself. The issue is serious, so much so that Burr-Brown developed their 'Co-Linear' architecture (used from the PCM63 onwards), which moves the glitching away from the zero-crossing point, where it's considered most audible. However, it only moves the glitching, it doesn't eliminate it. And in fact, because 'Co-Linear' effectively uses two 19-bit DACs (rather than one 20-bit), it exacerbates the magnitude of the glitches.
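The zero-crossing glitch is easy to see from the codes themselves. In straight offset-binary (or two's-complement) coding, the transition across midscale flips every single bit at once, and since a DAC's internal switches never change exactly simultaneously, that is where the worst glitch energy lands. A small illustrative sketch (a counting exercise, not a model of any specific part):

```python
def bit_flips(a: int, b: int, width: int = 16) -> int:
    """Number of bits that differ between two offset-binary DAC codes.
    Each flipped bit is a switch that must change state, and every
    non-simultaneous switch transition can contribute glitch energy."""
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

mid = 1 << 15                    # offset-binary midscale (zero crossing), 16 bits
print(bit_flips(mid - 1, mid))   # 16: ALL bits flip crossing zero
print(bit_flips(100, 101))       # 1: a small code step far from midscale
```

Rearranging the coding (as sign-magnitude-style schemes do) moves this worst-case transition away from the signal's zero crossing, which is the idea behind architectures like 'Co-Linear' described above - but the big multi-bit transition still has to happen somewhere.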

In that case, a higher sample rate will reduce the voltage difference from one step to the next, keeping the overshoot at a smaller absolute level.

This is indeed true - higher sample rates do have the benefit of reducing the magnitude of the code differences between successive steps. If settling time were a linear function of step size, the errors would be unchanged by an increased sampling rate. However, settling time is not in general directly proportional to step size - therefore the errors increase. Step size also increases in direct proportion to signal frequency - it's my hypothesis that rising THD+N with frequency (in a DAC chip's spec sheet) is a marker for this error source.
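The step-size-vs-frequency relation is simple to write down. For a sinewave of amplitude A sampled at fs, the worst-case sample-to-sample change is 2A·sin(pi·f/fs), which grows almost linearly with signal frequency and shrinks as the sample rate rises. A sketch:

```python
import math

def max_step(freq: float, fs: float, amplitude: float = 1.0) -> float:
    """Worst-case sample-to-sample change of a sampled sinewave:
    max over phase of |sin(2*pi*f*(n+1)/fs) - sin(2*pi*f*n/fs)|,
    which works out to 2*A*sin(pi*f/fs)."""
    return 2 * amplitude * math.sin(math.pi * freq / fs)

fs = 44100.0
for f in (1000.0, 10000.0, 20000.0):
    print(f, max_step(f, fs))
```

At 20 kHz in a 44.1 kHz system, a full-scale sine can demand nearly the entire converter range in a single sample period, while doubling fs roughly halves every step - which is exactly why step-size-dependent (non-exponential) settling shows up as distortion rising with frequency.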
 
I have not studied DACs in detail yet, simply because they do not seem to be the dominant factor in my experience; rather, the majority of issues seem to surround the application of them. So I guess I should focus on the analog section, unless it turns out that all DACs share the same issue and it dominates the other factors.
 
Implementation issues. DACs aren't perfect devices - they introduce errors. The most significant errors for audio (as opposed to instrumentation applications) turn out to be the dynamic ones. Glitches are generated every time a DAC's output code changes - R2R ladder converters are particularly poor in this respect. The following analog circuitry has to settle at each code change, which is in effect a step change. So does the analog circuit slew-rate limit? If so, that's another error. Then there's settling time - only pure exponential settling introduces no error.

All the above issues get worse with increased density of code changes - i.e. higher sample rates.

Should one try to avoid all these problems of multibit converters by going to a sigma-delta architecture instead (as most consumer chips do these days), then you'll open a whole new can of worms in the form of noise modulation.

That sounds more like a limitation of current technology rather than something inherent in the math.
 
the 2-way was already realized

Yet to be born. It is obvious that some obstacles have prevented it from reaching the design goals. Lots of ideas are floating around, but hardly anything conclusive.

In a previous post by Lynn, he answered a question by a new follower of this thread, as to what the current design is for the BTA.
He (Lynn) mentioned the 2-way was already built. I think the design is elegantly simple, and actually quite inspiring. The 3-way version is also quite clever. To be technical, though, it's not really a 3-way but a 2.1 system.

I don't know how many pages back, but it's there for all to read.
 
Scott L,
Agreed - from what Lynn said, a friend of his is building the prototypes as we speak. His only real remaining questions seem to be about the bottom-end implementation.

On another note, has anyone done a comparison - perhaps I have missed it - of a TAD 2001 and one of the Be-diaphragm Radian 1" drivers? I would be interested to see the two compared. I know it is not a one-to-one comparison, as the TAD is a long-throat driver based on older JBL designs and the Radian is a short pancake-type driver. Anyone?
 
I didn't get any sense that Lynn has stopped participating in this thread. At this time he seems to be up in the air only on how he wants to configure the bass section: two 15" drivers on separate angled baffles, side by side, or one on top of the other. He was talking about placing the bottom driver close to the floor to remove the floor bounce, I think. It's a tricky section to match to his horn section in radiation pattern.
 
In a previous post by Lynn, he answered a question by a new follower of this thread, as to what the current design is for the BTA.
He (Lynn) mentioned the 2-way was already built. I think the design is elegantly simple, and actually quite inspiring. The 3-way version is also quite clever. To be technical, though, it's not really a 3-way but a 2.1 system.

I don't know how many pages back, but it's there for all to read.
I think it is obvious that experiments with various configurations are going on. Obviously Lynn is looking for more from the speakers, and there are lots of possibilities that vary the sound. I once worked on a design where two drivers connected in parallel in separate enclosures were not moving together; it turned out that different cable lengths made the difference. Two drivers in the same enclosure can also cause problems. As far as I can make out from the information, and from my sense of how aggressive a result Lynn wants to achieve, it is still quite a long way from finalization.
 
That sounds more like a limitation of current technology rather than something inherent in the math.

Yep - the thing that higher sample rates buy us in theory is a better phase response at HF. Bandlimiting by the AAF for RBCD has to be fairly severe, and this brings ringing along with it. Going to higher sample rates allows the AAF (needed prior to the ADC) to have a more benign time-domain response.
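One way to put numbers on that is the standard Kaiser-window estimate for FIR filter length, N ≈ (A − 7.95)/(2.285·Δω): a narrow transition band forces a long, ringing impulse response. The sketch below compares an RBCD-style brickwall (20 kHz passband edge, 22.05 kHz stopband edge) with a relaxed filter a 96 kHz system could use; the 96 kHz corner frequencies are illustrative assumptions, not from any standard:

```python
import math

def kaiser_fir_length(atten_db: float, transition_hz: float, fs: float) -> int:
    """Kaiser-window estimate of the FIR taps needed for a lowpass with
    `atten_db` of stopband attenuation across a given transition band.
    More taps = a longer impulse response = more pre/post ringing."""
    d_omega = 2 * math.pi * transition_hz / fs
    return max(1, math.ceil((atten_db - 7.95) / (2.285 * d_omega)))

# RBCD: pass 20 kHz, stop 22.05 kHz -> a 2.05 kHz transition band
n_rbcd = kaiser_fir_length(100, 2050, 44100)
# At 96 kHz the transition can be relaxed to, say, 20 kHz..28 kHz
n_hi = kaiser_fir_length(100, 8000, 96000)

print(n_rbcd, n_rbcd / 44100)   # taps and impulse-response span in seconds
print(n_hi, n_hi / 96000)
```

With these assumptions the RBCD filter's impulse response spans roughly 3 ms, versus under 1 ms for the relaxed filter - a direct picture of the 'more benign time-domain response' that higher sample rates permit.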

Since all engineering is about trade-offs, there's a sweet spot. Bob Stuart of Meridian's view (as far as I remember) is that somewhere around 60 kHz would be about right.