Bob Cordell's Power amplifier book

Administrator
Joined 2004
Paid Member
Mark,
You continue to drag up a subjectivist opinion at every opportunity.

You also forget, people with the instrumentation and training also listen and are rather good at it. Now, it isn't my job to educate you. I've done all the work, and bought the test gear and spent decades honing my craft. Follow the same path, walk in the shoes of knowledge and then let's hear what you have to say. I'm pretty sure your tune would change.

I'm not going to review what I have learned over and over again. I refuse to listen to every argument, as it sucks up time and has yet to offer one shred of proof that the results are repeatable by anyone else, or that all variables have been controlled well enough to allow any conclusions to be drawn. As I have said so many times, if you believe this stuff, then great. But do not attempt to "re-educate" those who have learned how things really are, invested the money and time, and actually know. Remember, you just listen. We listen, measure, and draw valid conclusions. You're missing a great deal of the information we have access to.
 
Administrator
Joined 2004
Paid Member
Hi Bob,
You are so right. The sound of various compressors depends greatly on the method of volume or gain reduction, signal detection, and side-chain processing. It comes down even to the audio buffers and muting method, and power supply issues may also matter. Many compressors tap the signal right after the level-adjustment pot, before any other processing, and use that to trigger the compression.
 
You continue to drag up a subjectivist opinion at every opportunity.
What I am trying to do is balance a form of extremism with a counterweight. The goal should be to end up somewhere in the middle, where there is a place for measurements and a place for subjective listening. Otherwise you would have engineered devices, like video monitors, that nobody ever checked to find out whether they look good to humans. The designers would just have some color meter and brightness meter and assume that's all there is to it. IMHO, it would be stupid to design for human use without checking to see if your two meters missed anything important to humans. If you have a small company of one or two people, you can't afford formal DBT at every juncture. You have to do the best you can with what you have. If you have good eyes and ears, then use them too, and let the market decide if you chose correctly.

EDIT: I used to design complex medical systems that cost a few million $ each. They had to be safely operated by humans. We don't have FFTs to show if a certain user interface is safe for human operators to interact with. We were not the size of NASA or Boeing either, although we had a high percentage of PhDs. It took a lot of testing and common sense, but I assure you some things had to be evaluated subjectively. Other things were evaluated with test instruments. That's the real world for many engineered devices.
 
Hi Bob,
You are so right. The sound of various compressors depends greatly on the method of volume or gain reduction, signal detection, and side-chain processing. It comes down even to the audio buffers and muting method, and power supply issues may also matter. Many compressors tap the signal right after the level-adjustment pot, before any other processing, and use that to trigger the compression.
Yes, I think you are referring to the feedforward side chain compressor architecture.

The legendary Fairchild 670 stereo tube compressor and many of its clones and descendants use the feedback side chain approach. A very accurate clone of the 670 can be had for a mere $30,000.

Cheers,
Bob
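For readers following the side-chain discussion, here is a hypothetical toy compressor (all constants and the one-pole smoothing scheme are invented for illustration, not taken from any real unit) showing the essential difference between feedforward and feedback detection: where the level detector taps the signal. With the same static ratio, the feedback topology ends up applying less gain reduction, which is one reason its effective compression curve feels softer:

```python
# Toy compressor sketch: feedforward vs. feedback side-chain detection.
# All numbers (threshold, ratio, smoothing constant) are illustrative.

def compress(samples, threshold, ratio, feedback=False):
    out = []
    gain = 1.0
    for x in samples:
        y = x * gain
        # Feedforward detects the input; feedback detects the output,
        # placing the detector inside the gain-control loop.
        level = abs(y) if feedback else abs(x)
        if level > threshold:
            # Gain implied by a static ratio curve above threshold
            target = (threshold + (level - threshold) / ratio) / level
        else:
            target = 1.0
        gain += 0.1 * (target - gain)  # crude one-pole attack/release
        out.append(x * gain)
    return out

loud = [1.0] * 50  # a sustained full-scale input
ff = compress(loud, threshold=0.5, ratio=4.0)
fb = compress(loud, threshold=0.5, ratio=4.0, feedback=True)
print(round(ff[-1], 3), round(fb[-1], 3))  # feedback settles higher
```

With these made-up constants the feedforward version settles near 0.625 and the feedback version near 0.75 for the same nominal 4:1 ratio, illustrating the softer effective ratio of the feedback topology.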
 
Member
Joined 2010
Paid Member
Two things about DACs worth noting. The first relates to clock jitter and some of its origins. Clock recovery from the biphase-mark (Manchester-style) encoded SPDIF input is often done with a phase-locked loop (PLL) in the digital audio interface (DAI) chip that precedes the DAC chip. A PLL is a feedback circuit, and here the gain of its phase detector depends on the transition density of the input data, since the detector only produces output on data transitions. If there is a static error, its magnitude is reduced by the amount of loop gain in the PLL. Virtually all digital data streams have data-dependent transition density, and this can lead to clock jitter whenever an error signal is required to pull the VCO free-running frequency to the exact frequency of the incoming data.

This effect depends on the details of how the DAI chip recovers the clock and on the quality of its PLL. A sloppy on-chip VCO creates more clock jitter, while a precise crystal-controlled VCO (VCXO) with adequate control range can greatly reduce the effect. The quality and behavior of the DAI chip really matters. Some DACs use other, more sophisticated ways to handle clocking, some of which clock the DAC with an ultra-low-jitter fixed-frequency crystal oscillator and use special means to deal with the resulting asynchrony between the crystal clock and the incoming data rate.
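The transition-density effect described above can be put in rough numbers. In this toy model (all gain figures are invented for illustration, not taken from any real DAI chip), a first-order loop's steady-state phase error is the frequency offset divided by the effective loop gain, and the effective gain scales with transition density, so sparse data leaves a larger static phase error than dense data:

```python
# Toy model: a first-order PLL recovering a clock from a data stream.
# The phase detector only acts on data transitions, so its effective
# gain scales with transition density. With a static frequency offset
# between VCO and data, the steady-state phase error therefore depends
# on the data pattern. All numbers are illustrative.

def steady_state_phase_error(freq_offset, kpd, kvco, transition_density):
    # First-order loop: residual phase error = offset / effective loop gain
    loop_gain = kpd * transition_density * kvco
    return freq_offset / loop_gain

dense = steady_state_phase_error(100.0, 0.5, 2000.0, 0.9)   # many transitions
sparse = steady_state_phase_error(100.0, 0.5, 2000.0, 0.3)  # few transitions

# As the data alternates between dense and sparse patterns, the recovered
# clock phase moves back and forth between these two errors: jitter that
# is correlated with the data.
print(dense, sparse)
```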
...

(1) Back in 1980 we were already using Manchester-encoded clocks with the MIL-STD-1553 bus. And we could fly supersonic planes without splashing onto the hillside!

(2) Clock jitter was effectively solved a loooong time ago by buffering the received data. The front end locks onto the clock embedded in the bus and writes the received data into a buffer. Reading and processing that data is done under a different, very-high-precision clock. BTW, audio clocks are ridiculously slow, even with DSD.

I believe I have paid off many, many years of our home mortgage by writing device drivers that work just like that.
 
Clock jitter was effectively solved a loooong time ago by buffering the received data. The front end locks onto the clock embedded in the bus and writes the received data into a buffer. Reading and processing that data is done under a different, very-high-precision clock.
That is exactly what happens in asynchronous UAC (USB Audio Class). In addition, asynchronous UAC has a feedback mechanism that lets the device control the host's data rate, so non-coherency of the clocks does not lead to buffer over/underruns.
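For illustration, here is a rough sketch of the full-speed UAC feedback idea: the device reports its actual consumption rate as a 10.14 fixed-point number of samples per 1 ms frame, and the host sizes its isochronous packets so that the long-term average matches the device clock. The sample rate and helper names are mine; real host stacks are more elaborate:

```python
# Sketch of asynchronous USB audio feedback (full-speed UAC).
# The device encodes its measured samples-per-frame rate in 10.14
# fixed point; the host accumulates the fractional part and alternates
# packet sizes so no buffer over/underrun occurs. Values illustrative.

def encode_feedback(samples_per_frame):
    # 10.14 fixed-point format used by full-speed UAC feedback endpoints
    return round(samples_per_frame * (1 << 14))

def decode_feedback(raw):
    return raw / (1 << 14)

# A DAC clock running very slightly fast consumes a hair more than the
# nominal 44.1 samples per 1 ms frame:
raw = encode_feedback(44.1002)
rate = decode_feedback(raw)

# Host side: carry the fractional remainder from frame to frame and
# send 44- or 45-sample packets accordingly.
acc = 0.0
packet_sizes = []
for _ in range(10):
    acc += rate
    n = int(acc)
    acc -= n
    packet_sizes.append(n)
print(packet_sizes, sum(packet_sizes))
```

Over these ten frames the host sends nine 44-sample packets and one 45-sample packet, tracking the device's slightly fast clock exactly on average.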
 
(1) Back in 1980 we were already using Manchester-encoded clocks with the MIL-STD-1553 bus. And we could fly supersonic planes without splashing onto the hillside!

(2) Clock jitter was effectively solved a loooong time ago by buffering the received data. The front end locks onto the clock embedded in the bus and writes the received data into a buffer. Reading and processing that data is done under a different, very-high-precision clock. BTW, audio clocks are ridiculously slow, even with DSD.

I believe I have paid off many, many years of our home mortgage by writing device drivers that work just like that.
With respect to clock jitter, reading received data into a FIFO buffer and writing it out to the DAC with an ultra-low-jitter fixed-frequency clock is simple, but there is more to it than that. Would that it were so simple. The problem is that eventually the buffer overflows or underflows, because the input and output clocks are of slightly different frequencies. This asynchrony must be handled in some way, and that is where things get a little harder.
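The over/underrun problem can be seen with a one-line drift model. In this sketch (the FIFO depth and ppm mismatch are arbitrary), a half-full 1024-sample buffer runs dry in a few minutes with only a 50 ppm difference between the two clocks:

```python
# Toy illustration of why a plain FIFO cannot fix clock asynchrony:
# with input and output clocks only 50 ppm apart, the fill level drifts
# steadily and a finite buffer must eventually over- or underrun.
# Numbers are illustrative.

fs_in = 44100.0                  # source sample rate
fs_out = 44100.0 * (1 + 50e-6)   # DAC clock 50 ppm fast
depth = 1024                     # FIFO depth in samples
fill = depth // 2                # start half full

seconds = 0.0
step = 1.0                       # simulate in 1-second chunks
while 0 < fill < depth:
    fill += (fs_in - fs_out) * step   # net samples gained per second
    seconds += step

print(f"FIFO underruns after about {seconds:.0f} s")
```

A deeper buffer only postpones the event; the drift itself has to be handled by one of the mechanisms discussed below.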

Occasionally dropping a bit or stuffing a bit can be done, but the way it is done matters a lot. Using a DSP asynchronous sample rate converter (ASRC) is another approach, but it involves significantly added complexity, and great care is needed in the design of the ASRC chip. Finally, a low-jitter VCXO can be used to match the long-term input and output frequencies perfectly.
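As a rough illustration of the ASRC idea, here is a minimal linear-interpolation resampler. Real ASRC chips use long polyphase filters and jitter-attenuating ratio estimators, so this is only a sketch of the principle: output samples are computed at fractional positions in the input stream, so nothing is dropped or stuffed and the two clocks never need to match:

```python
# Minimal linear-interpolation resampler: the crude core of the ASRC
# idea. Output samples are evaluated at fractional input positions
# given by the ratio of the two clock frequencies. Illustrative only;
# real converters use much better interpolation filters.

def resample_linear(x, ratio):
    # ratio = f_out / f_in; ratio > 1 yields more output samples
    out = []
    pos = 0.0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)
        pos += 1.0 / ratio
    return out

ramp = list(range(100))            # a ramp is preserved exactly by linear interp
y = resample_linear(ramp, 1.0001)  # output clock 100 ppm fast
print(len(ramp), len(y))
```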

Cheers,
Bob
 
With respect to clock jitter, reading received data into a FIFO buffer and writing it out to the DAC with an ultra-low-jitter fixed-frequency clock is simple, but there is more to it than that. Would that it were so simple. The problem is that eventually the buffer overflows or underflows, because the input and output clocks are of slightly different frequencies. This asynchrony must be handled in some way, and that is where things get a little harder.
As I said in post #10869, feedback from device to host in asynchronous USB solves the buffer over/underrun issues. But of course nothing similar is available for SPDIF.
 
...a low-jitter VCXO can be used to match the long-term input and output frequencies perfectly.
Some of us care about close-in phase noise more than about far-out phase noise, which affects the noise floor. Close-in phase noise is convolved with the audio, and is thus highly correlated with the sound being listened to. It's hard enough to make SOA ultra-low close-in phase-noise clocks without trying to pull on them with a PLL, even a fairly sophisticated one that works in a way analogous to some of what an ASRC does in terms of clock jitter attenuation.
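To put rough numbers on why jitter sidebands track the audio: for small peak phase deviation beta (radians), narrowband PM theory puts each first-order sideband about 20·log10(beta/2) dB below the tone, and for a sampled tone of frequency f0 with timing jitter amplitude dt, beta = 2π·f0·dt. The tone frequency and jitter values below are chosen arbitrarily for illustration:

```python
# Close-in jitter as phase modulation: each audio tone acquires
# sidebands offset by the jitter frequency, at a level set by the
# peak phase deviation beta. Numbers are illustrative.
import math

def pm_sideband_dbc(beta):
    # Relative level of each first-order PM sideband for small beta
    return 20 * math.log10(beta / 2)

# Sideband levels around a 10 kHz tone for a few jitter amplitudes:
for jitter_ps in (1, 10, 100):
    beta = 2 * math.pi * 10e3 * jitter_ps * 1e-12  # phase deviation
    print(jitter_ps, "ps ->", round(pm_sideband_dbc(beta), 1), "dBc")
```

Because these sidebands hug each tone in the program material, they rise and fall with the music itself, unlike a flat far-out noise floor.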
 
The feedback mechanism of asynchronous USB in practice eliminates jitter from the source and allows the use of a SOTA clock with ultra-low phase noise as the master clock. A VCXO or PLL is bound to be inferior in that respect. But no doubt SPDIF can be much improved with a well-executed VCXO, PLL, FIFO, or ASRC.
 
Some of us care about close-in phase noise more than about far-out phase noise, which affects the noise floor. Close-in phase noise is convolved with the audio, and is thus highly correlated with the sound being listened to. It's hard enough to make SOA ultra-low close-in phase-noise clocks without trying to pull on them with a PLL, even a fairly sophisticated one that works in a way analogous to some of what an ASRC does in terms of clock jitter attenuation.
Everyone should care about close-in phase noise. SOA crystal oscillators are readily available with extremely low close-in phase noise. Go look up the specs over at Vectron.

Cheers,
Bob