THD measurements at the smallest output power where crossover occurs

maintaining a minimum current of [...] on both outputs at all times
Have a look at LV's Circlophone:

The servo stabilizes the quiescent current, but it goes further: it actively controls the output stage so that both sides always remain active in a "warm" class AB, having most of the attributes of the full class A, including a complete absence of crossover artifacts.
 
I test for IMD specifically. A circuit with high THD really amplifies IMD problems.

I respect Pavel. I don't disagree with him on most things. It's too bad you couldn't measure what you were hearing; that would have been informative and very helpful.
 
Pavel measured opamp buffers, and IIRC posted the measurements in the thread and/or on his website after he announced which files were which. I'll have to see if I can find it, if it really matters. It's what you would expect from some decent audio opamps in non-inverting unity-gain configurations, as built by Pavel himself.
 
Good 24-bit systems can certainly get there, including when reproducing well-digitized 16-bit content. IME most of the limitations have been in the DACs.

BTW, the number is 96 dB, isn't it? 6 dB per bit times 16 bits.
SNR = 6.02*N + 1.76 dB (google is your friend 😉)

Good 24-bit systems will certainly go higher; I have designed a whole bunch myself for pro environments.

My whole point is that there is much more to this than just measurements and theoretical results.
People seem to forget the reality of things. It's not ONLY about whether we can hear things.
These kinds of numbers also tell us more about a specific system.

Keep in mind that in many cases the SNR is given @ maximum output swing.
From there you can calculate the actual noise floor.
You will be surprised how bad that can be in some cases.
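
To put a number on that, here is a minimal sketch of backing the absolute noise floor out of an SNR spec referenced to full output swing. The 100 dB / 100 W figures are illustrative assumptions of mine, not from any product discussed here:

```python
import math

def noise_floor_vrms(snr_db: float, full_scale_vrms: float) -> float:
    """Noise voltage implied by an SNR spec referenced to full output swing."""
    return full_scale_vrms / 10 ** (snr_db / 20)

# Hypothetical power amp: 100 dB SNR referenced to full output of 100 W into 8 ohms.
v_full = math.sqrt(100 * 8)                                # ~28.3 Vrms at full power
print(f"{noise_floor_vrms(100, v_full) * 1e6:.0f} uVrms")  # ~283 uVrms of noise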
 
google is your friend
So I see:

https://www.analog.com/media/en/training-seminars/tutorials/MT-001.pdf

IOW, SNR = 6.02N + 1.76dB is yet another approximation.

Many people seem to settle for this:
The maximum SNR of a digital system is about 6 dB times the number of bits per sample. Therefore, a 16-bit system gives us a maximum SNR of 16x6=96 dB.

https://benchmarkmedia.com/blogs/application_notes/14949345-high-resolution-audio-bit-depth
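
For reference, the two rules of thumb stay about 2 dB apart at any bit depth; a quick sketch comparing them:

```python
def snr_ideal_db(n_bits: int) -> float:
    """Ideal quantization SNR for a full-scale sine wave (per ADI MT-001)."""
    return 6.02 * n_bits + 1.76

def snr_rule_of_thumb_db(n_bits: int) -> float:
    """The popular '6 dB per bit' shortcut."""
    return 6.0 * n_bits

for n in (16, 24):
    print(f"{n}-bit: {snr_ideal_db(n):.2f} dB vs {snr_rule_of_thumb_db(n):.1f} dB")
# 16-bit: 98.08 dB vs 96.0 dB
# 24-bit: 146.24 dB vs 144.0 dB
```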


That said, I would agree that the purpose of measurements is often to gain useful insights into system behavior.

It's another question entirely whether the old psychoacoustics is exactly right, and/or whether its common interpretation in the audio industry is right. IME there are still some significant problems in that area.

For one example, the whole idea that the ear is a phase-insensitive FFT analyzer is easily falsified.
 
You're forgetting that other system impairments exist, Mark. Then you get analogue issues past that.

The ear is sensitive to phase differences in impulsive noise, or in the same sound arriving at both ears. It is not sensitive to the relative phase of different frequencies. Harman Kardon built a huge design premise on this false idea.

Hi b_force,
I completely agree with you. Looking past audibility does tell you a great deal about a system or device. However, most people get wound up over what they think they can hear. Injecting some reality is difficult.

Yes, SNR is given from some maximum signal level to the noise level. Electronic systems easily exceed the acoustic system for SNR.
 
So I see:

https://www.analog.com/media/en/training-seminars/tutorials/MT-001.pdf

IOW, SNR = 6.02N + 1.76dB is yet another approximation.

Many people seem to settle for this:
The maximum SNR of a digital system is about 6 dB times the number of bits per sample. Therefore, a 16-bit system gives us a maximum SNR of 16x6=96 dB.
A strange thing to settle for: it leads to errors and, above all, has no fundamental roots in the math and physics behind it.

I don't mind rules of thumb, as long as they are still derived from the original equations; the 6.02N + 1.76 approximation works well without too much error.
I don't understand why people would come up with something different.
It's also definitely not something they teach you at school or in any basic electronics book.
 
Hi b_force,
I completely agree with you. Looking past audibility does tell you a great deal about a system or device. However, most people get wound up over what they think they can hear. Injecting some reality is difficult.

Yes, SNR is given from some maximum signal level to the noise level. Electronic systems easily exceed the acoustic system for SNR.
Can you please use the @ command and type the name? It will show a list of nicknames. That way people get a notification when you reply to them. 🙂

Context and practicality are essential.
Otherwise people keep talking past each other.
 
Simulating my amplifier with 1 W RMS fed into 8 ohms gives a distortion figure of 0.0022%. The maximum voltage across the loudspeaker is 4 V (peak to zero), which is achieved with an input signal peak of 80 mV. The distortion at full power is 0.0047%, with an input signal peak of 1.2 V. At that level, the power into 8 ohms is 225 W RMS. This is far more than enough for a home setup.
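
As a quick sanity check of those figures (a sketch; the roughly 50x voltage gain is inferred from the numbers in the post, not stated there):

```python
import math

gain = 4.0 / 0.080           # 4 V peak out from an 80 mV peak input -> ~50x
v_peak_full = gain * 1.2     # ~60 V peak at the 1.2 V input peak
v_rms_full = v_peak_full / math.sqrt(2)
p_full = v_rms_full ** 2 / 8.0
print(f"gain ~{gain:.0f}x, full power ~{p_full:.0f} W into 8 ohms")  # ~225 W
```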

This is the amplifier I built with the help of generous forum members who dedicated some of their precious time to helping me. Without this forum, I would have never succeeded. For instance, LTSpice proved to be an indispensable tool. I did not know about LTSpice before joining this forum; Mooly was the person who suggested it to me. I did not know that free-to-use simulators existed.

When I was still in my early twenties, I wanted to design and build a switching power supply. After trying for some years, I realised this was not possible for me. At that time I had no simulators, and the repeated failures to build a circuit that appeared as though it should work convinced me that electronics was not for me, notwithstanding that it had been my passion during my adolescence and early adulthood. This forum was instrumental in teaching me that failures in designing a circuit are normal and that learning is very gradual and requires persistence and perseverance. Using LTSpice for many days helped me realise that real circuit design takes time and a lot of patience.
 
The ear is sensitive to phase differences in impulsive noise, or in the same sound arriving at both ears. It is not sensitive to the relative phase of different frequencies. Harman Kardon built a huge design premise on this false idea.
I take it you are referring to LF transients, ITD, and continuous sine waves, respectively? If so, the continuous-sine-wave theory is falsifiable and has in fact been falsified by the guys at Purifi, Lars Risbo and Bruno Putzeys. I'd be surprised if you haven't seen that already.

In the blog section at Purifi there are two audio files, each consisting of four continuous fixed-frequency, fixed-level sine waves. The only difference between the two audio samples is that two of the frequencies have been shifted in phase in one of the files. No other difference. They recommend listening with headphones. So far, everyone who has listened to the audio files says they sound different. Not only do they sound different, it is also interesting that their FFT power spectra are exactly identical. That shows FFTs are not good at predicting what something will sound like. Moreover, they say in the text, "I'd like to offer up a small demo to caution against this form of Popular Psychoacoustics." It is a warning against the belief that humans can't hear phase.
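
A toy version of that demo is easy to reproduce (a sketch with made-up frequencies and phase offsets of my own; these are not Purifi's actual files):

```python
import numpy as np

fs, n = 48000, 48000                      # 1 second at 48 kHz
t = np.arange(n) / fs
freqs = (500, 1000, 1500, 2000)           # four fixed tones (arbitrary choice)

def four_tones(phases):
    return sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

a = four_tones((0, 0, 0, 0))
b = four_tones((0, np.pi / 2, 0, np.pi / 2))   # shift the phase of two tones

# Identical magnitude spectra, clearly different waveforms:
print(np.max(np.abs(np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b)))))  # ~0
print(np.max(np.abs(a - b)))                                            # large
```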
 
The only difference between the two audio samples is that two of the frequencies have been shifted in phase in one of the files. No other difference.
Markw4, repeating false claims will not change facts, regardless of constant repetitions.

It is absolutely not true that the two audio files differ only in phase. The mathematical method used to produce the two audio files (MATLAB software) used phase rotation, but that doesn't mean the resulting files differ only in phase. One is amplitude modulated, the other is not. How on earth could they not sound different? There is no mystery, and this is not proof that most people can easily hear phase differences, as you often repeat.

BTW, in the original thread all of that was discussed, and Lars eventually (after a lot of pushback) agreed that the phase difference relates only to the spectrum measurement and that the files are otherwise vastly different in the time domain.

phase is the only difference between the two spectra.

The signals themselves are of course vastly different, as anyone can see from the time domain graph
The only result of these false claims was that a very distinguished member was banned because he lost his temper, exposed to such a level of nonsense.
 
Markw4, repeating false claims will not change facts, regardless of constant repetitions.
Then you should stop making false claims. What I said was correct. It is you who are confused.

Don't know how many times I have had to explain this, but here we go again: if you change the phase of a frequency in the frequency domain, it will always change the corresponding time-domain waveform. Always. It cannot be otherwise. So you are now shocked that the phase was changed and the time-domain waveform changed accordingly? Did you go to college to study this stuff? There is an equivalence between the time domain and the frequency domain. The time-domain view is a single-valued function of time; the dependent variable is amplitude. The frequency-domain view is a complex-valued function of frequency; each frequency has magnitude and phase (when expressed in polar coordinates). Starting to come back now?

When Purifi says one signal is amplitude modulated and the other is frequency modulated, they are only describing the appearance of the time-domain waveforms that result from the phase shifts in the frequency domain. Also, the process used to produce the time-domain waveforms doesn't have to be phase rotation: it could just as well have been done with filters in the time domain that separate out each frequency, plus all-pass filters to adjust the delays. That Purifi did it by phase rotation is immaterial.
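
A minimal sketch of that phase-rotation route (the tone frequencies and the 90-degree rotation are my own illustrative choices, not Purifi's):

```python
import numpy as np

fs, n = 48000, 48000                       # 1 second at 48 kHz
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in (500, 1000, 1500, 2000))

X = np.fft.rfft(x)
X[1000] *= np.exp(1j * np.pi / 2)          # rotate the 1000 Hz bin by 90 degrees
y = np.fft.irfft(X, n)                     # back to the time domain

# Magnitude spectrum unchanged, waveform changed:
print(np.max(np.abs(np.abs(np.fft.rfft(y)) - np.abs(np.fft.rfft(x)))))  # ~0
print(np.max(np.abs(y - x)))                                            # large
```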
 