Acceptable output quality for power calculation

I've seen a few videos where someone does power calculations on amplifiers, and it usually boils down to increasing output level to just below clipping at 1000Hz and using the rms value from that.

I'm experimenting with a cheap amp, and with an amplitude sweep I found that an input signal at -5.5 dBV was the point of no clipping on the output. A frequency sweep at -5.5 dBV looks pretty bad though, with a THD of 10% at 100-200Hz:

te2iM2V.png


With an amplitude sweep at 200Hz I could see that I needed to lower the input signal to -7.5 dBV to avoid clipping across the whole frequency range.

Is there an industry standard way to define the quality of the signal for getting the rms power of an amplifier? Is it just whatever looks good at 1000Hz, or is it common to include the whole frequency range? Something in between?

I tried searching, so sorry if it's very obvious.
 
The easiest and most common way would be to use an oscilloscope and look for actual clipping as maximum output is approached. Having got that figure (and that would normally be at 1kHz with a sine wave) you then derive the RMS voltage and calculate the power.
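That derivation is just Ohm's law: P = Vrms² / R. A quick sketch in Python (the 4.0 V reading below is an assumed example, not a measurement from this thread):

```python
import math

def dbv_to_vrms(dbv):
    """dBV is dB relative to 1 V RMS."""
    return 10 ** (dbv / 20)

def power_watts(vrms, load_ohms):
    """Continuous average power into a resistive load."""
    return vrms ** 2 / load_ohms

# Example: 4.0 V RMS measured just below clipping into a 4 ohm load
print(power_watts(4.0, 4.0))   # 4.0 W
print(dbv_to_vrms(-5.5))       # ~0.53 V RMS at the amp input
```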

Most amplifiers would reach the same no-clipping point over at least 20Hz to 20kHz.

Your picture is like nothing I've seen before 🙂

If in doubt over results then perform the test on the input to the amplifier as that should be clean and undistorted.

If you set the generator to 20,000Hz then you should see a single spike on the FFT at 20kHz. Can't make your image out at all. It looks to be centred around 200Hz and yet I can see 20kHz at the top ... dunno 😉

Do it the traditional way.
 
The picture above shows independent THD measurements with the QA401 at different frequencies.

Here are details from a measurement at 200Hz with input of -5.5 dBV:

jjUIcnO.png


And this is the output waveform:

T5X2ijs.png


The waveform looks pretty good when lowering the signal to -7.5 dBV:

3bYZioE.png


The waveform looks nice at 1000Hz and -5.5 dBV as well.
I think I can trust the QA401 - so it's probably just a bad amplifier. It's a Creative Pebble 2.0.

So from your response it sounds like I can usually expect a much flatter result from a frequency sweep, and that just measuring at 1000Hz is the normal way?
 
Is there an industry standard way to define the quality of the signal for getting the rms power of an amplifier? Is it just whatever looks good at 1000Hz, or is it common to include the whole frequency range? Something in between?
We usually measure output power at 1000 Hz (it is convenient because it is not too low and not too high).
I possibly don't fully understand your question, but there are several ways to rate output power. For example: there is a standard which tells us to measure max output power at 10% THD. There is another way - we may measure rated output power at any THD we want.
And so on. Look here: Audio power: RMS. Peak Power. Total System Power. PMPO.
 
Most of the small chip amps are rated at midband (1 kHz) and 5% or 10% THD. It's an industry convention but usually means just into clipping. There are several formal standards for measuring audio amp power but they are all behind paywalls and won't help much anyway. I would focus on a realistic worst-case useable distortion (3% for low power and 0.1% for high power) and look at the useable frequency extremes. 10 Hz and 100 kHz are not really useful since there is no musical content at those extremes. But 20 Hz to 20 kHz should be a range with pretty constant output level and distortion. Usually lower distortion at lower frequencies.

The amp you measured seems to have some issues.
I need to try your web interface soon, it looks pretty good.
 
Thanks Vovk Z and 1audio!

The amplifier I'm testing is rated for 4.4W into 4 ohm, so 2.2W per channel. At 1000Hz and -5 dBV input I get pretty close to that output power:

uBgI135.png


It's a little bit above clipping, and right below 3% THD. But as we saw from the measurement at 200Hz, the THD increases to about 10%. I guess they were aiming for 10% THD at the worst-case frequency in their power calculations then?
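Working backwards from the spec sheet as a sanity check (assuming a purely resistive 4 ohm load):

```python
import math

rated_power = 2.2   # W per channel, from the 4.4 W / 2-channel spec
load = 4.0          # ohm dummy load

vrms = math.sqrt(rated_power * load)   # ~2.97 V RMS needed at the output
dbv_out = 20 * math.log10(vrms)        # ~9.4 dBV output level
print(vrms, dbv_out)
```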

I don't have the experience yet to tell exactly what the clipping or distortion at different levels does to the sound, but looking at the waveform it seems a bit crazy to use 10% THD as an industry convention, as it certainly looks like it's butchering the original signal. Or maybe this amp is butchering more than usual 😀

I have a few amplifier kits I want to build, and I want to try to make something myself eventually. I think it will be interesting to compare them to their specifications and to each other.

1audio said:
I would focus on a realistic worst-case useable distortion (3% for low power and 0.1% for high power) and look at the useable frequency extremes.

Yeah, I like that idea better than the industry convention. I think I'll start here.

1audio said:
I need to try your web interface soon, it looks pretty good.

Thank you! Let me know if you have any feedback.
 
Correct. It's possible that the switching artifacts are strong enough to cause issues with the QA401's input circuitry. A passive low-pass filter is usually a good idea, but it mustn't compromise the measurements. AP has such a filter and I believe it has been reverse engineered.
The low-power digital amps are usually the worst for HF leakage. Same issue as using 10% THD as a metric. All about minimizing cost.
 
I suspect at this distortion level, you are measuring the frequency response of your supply rails and bypass caps.

At some frequencies, your amplifier supply rails sag more than at others. At the lowest frequencies you're good, because there are no AC effects. At mid frequencies, your THD suffers because you have cable inductance and your bypass caps are too small. At high frequencies, your bypass caps are big enough to prevent ripple and sag.
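A rough way to see that mechanism is to compare the impedance of the bypass capacitance against the impedance of the wiring inductance across the audio band. The component values below are illustrative guesses, not measurements of this amp:

```python
import math

def z_cap(f_hz, c_f):
    """Magnitude of a capacitor's impedance: 1 / (2*pi*f*C)."""
    return 1 / (2 * math.pi * f_hz * c_f)

def z_ind(f_hz, l_h):
    """Magnitude of an inductor's impedance: 2*pi*f*L."""
    return 2 * math.pi * f_hz * l_h

# Assumed values: 100 uF of local bypass, 1 uH of USB-cable inductance
for f in (20, 200, 2000, 20000):
    print(f"{f:>6} Hz  cap {z_cap(f, 100e-6):8.3f} ohm  cable {z_ind(f, 1e-6):.4f} ohm")
```

The cap impedance falls with frequency while the cable impedance rises, so the supply looks worst somewhere in between.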
 
Perhaps you should first show a correct benchmark loopback response, and then a DUT response at low signal level. If the DUT response at low signal level is not ruler straight, and the DUT is just an amp module with no tone controls etc., then I'd suggest your test setup needs more assessment (e.g. you may be trying to incorrectly probe a class D output).

This is the flattest I can get it, at -7.5 dBV. At -7 it has the same hump as at -5.5 from the first screenshot.

r1O4D1g.png


The hump has turned into a valley.
The valley gets deeper as the input signal gets smaller:

0kn5iBn.png


The drop at the end is pretty consistent.

There are no tone controls on the amp, just a volume control set to max when measuring. This is all I had time for today, but I'll get back with loopback tests and I'll look more at the rest of the comments.
 
The HF drop is because sampling at 48KHz filters all harmonics above about 22KHz. Switch to 192 KHz for a distortion sweep to 20 KHz so you can see the harmonics through the fifth. Have you measured the amplitude response into your load? Is it resistive?
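The arithmetic behind that: THD needs the harmonics, and anything above Nyquist (half the sample rate) simply isn't captured. A quick check of which harmonic orders survive for a given fundamental (a ~19 kHz example fundamental is assumed here):

```python
def visible_harmonics(fundamental_hz, sample_rate_hz, max_order=9):
    """Harmonic orders that land below Nyquist and can appear in the FFT."""
    nyquist = sample_rate_hz / 2
    return [n for n in range(2, max_order + 1) if n * fundamental_hz < nyquist]

print(visible_harmonics(19000, 48000))    # [] - no harmonics fit under 24 kHz
print(visible_harmonics(19000, 192000))   # [2, 3, 4, 5] - through the fifth
```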

It's quite possible that the amp is tuned to get the best result at 200 Hz (it has to do with switching and null lockout time between the transistors switching on).

If you have a simple linear amp you can swap in it helps a lot to validate the measurement setup.
 
WHY do you let the amp clip SO MUCH?

T5X2ijs.png


uBgI135.png


proper waveform is similar to this one:
3bYZioE.png

rounded top/bottom, and raise it slowly until you just detect a little flattening, sometimes starting as slight trace fattening, then going over for confirmation to be certain you get the flat topping, and then backing off a little so it just disappears. Period.

Use that RMS voltage to calculate power.

For sake of completeness repeat at 20 or 40Hz, depending on your goals, and 20kHz.
 
Just a quick post on my measurement setup:

LIoJwFs.jpg


1audio said:
Have you measured the amplitude response into your load? Is it resistive?

The dummy loads are 4 ohm 200W from Parts Express. They call them "low inductance". Do you mean like an amplitude sweep? Or is this a different kind of test?

I'm using a switching lab PSU with USB output to power the amp, since it's USB powered. I tried using my linear PSU as well (had to make a USB adapter), and the noise floor looked a bit cleaner, and slightly better THD, but it didn't make much difference on the frequency sweep. Same tendency, but some variations to the values.

The amplifier has a balanced output as far as I can understand. Here are the waveforms on the scope:

qbUgZtA.png


Yellow is output+, pink is output-, green is input and white is the math function to get the complete signal.

I've connected both + and - to the QA401 on both channels. Input to the amp is just from the + on both channels since it's not balanced input.

I've tried connecting the ground probes for the QA401 inputs to the USB ground and to the signal input ground, but it never seems to make any difference, so I mostly leave them hanging to avoid crossing cables everywhere.

Here is a closer look at the amplifier:

aZonjm1.jpg


The chip is "Chipstar CS8563S"
 
WHY do you let the amp clip SO MUCH?

I guess that's my question too. The waveform looks fine at 1000Hz at that signal level, but clips badly at 200Hz. I was wondering if power calculations were only based on the result at 1000Hz or if the whole frequency spectrum was relevant.

proper waveform is similar to this one:

rounded top/bottom, and raise it slowly until you just detect a little flattening, sometimes starting as slight trace fattening, then going over for confirmation to be certain you get the flat topping, and then backing off a little so it just disappears. Period.

Use that RMS voltage to calculate power.

For sake of completeness repeat at 20 or 40Hz, depending on your goals, and 20kHz.

I think that sounds like a good strategy for finding the max power of an amplifier at a level where it delivers consistently good output.

As I understand it, it's common to measure for 10% THD though, and then you might see that ugly clipping when driving the amplifier to its specifications.
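For reference, the THD number itself is just the root-sum-square of the harmonic amplitudes relative to the fundamental. A sketch with made-up harmonic levels (not values from my measurements):

```python
import math

def thd_percent(fundamental_vrms, harmonic_vrms):
    """THD: ratio of the RSS of the harmonic amplitudes to the fundamental."""
    return 100 * math.sqrt(sum(v ** 2 for v in harmonic_vrms)) / fundamental_vrms

# Illustrative numbers: heavy clipping pushing roughly 10% into harmonics
print(thd_percent(1.0, [0.08, 0.05, 0.03]))   # ~9.9 %
```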
 
Perhaps you should first show a correct benchmark loopback response.

I've recorded some loopback measurements now.

First an amplitude sweep for THD at 1000Hz:

yk78lvz.png


The best case is around 0 dBV.

Next is a frequency sweep for THD at 0 dBV:

2NEWrwx.png


Well, it's not exactly flat, but without any other units to compare to I'm assuming it's how it should look?

And also the spectrum at 1000Hz, 0 dBV:

W2vJiAr.png


I think that looks pretty good.
 
The HF drop is because sampling at 48KHz filters all harmonics above about 22KHz. Switch to 192 KHz for a distortion sweep to 20 KHz so you can see the harmonics through the fifth.

Here is a frequency sweep on the QA401 in loopback, set to 192KHz. In order to get THD for the extra frequencies I had to change the measurement range from 20Hz-20KHz to 20Hz-40KHz. Is that what you meant?

It certainly looks very flat from 40Hz and throughout the rest of the range, unlike the loopback measurement in my previous post. Something weird with 20Hz-40Hz though.

61W6SxF.png


Its quite possible that the amp is tuned to get the best result at 200 Hz (has to do with switching and null lockout time between the transistors switching on).

Interesting theory. Some sort of trick for cheap chips again?

If you have a simple linear amp you can swap in it helps a lot to validate the measurement setup.

I don't have any amps ready for that purpose yet. I have the JohnAudioTech preamp on a breadboard that I could measure this weekend. Not sure if a preamp is representative, or if the breadboard will ruin the results. I was planning on doing some measurements of the preamp on the breadboard and later on a perfboard to look for any differences.
 
I've set up a test of the preamp now, for comparison. It's based on a NE5532 opamp. I only connected one channel to make it easier to see the setup, and it's replicating the setup of the amplifier. It's not a balanced output, so negative on the QA401 is connected to the ground of the preamp.

TEcuOIt.jpg


Amplitude sweep for THD at 1000Hz:

7RfIvCg.png


Input at -7 dBV seems to be the point before THD rapidly gets worse.
It could probably do better with a higher supply voltage. Currently only using 2 batteries at about +-8.6V.
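Back-of-envelope on that headroom point (the 1.5 V swing loss is a typical-datasheet assumption for an NE5532, not something I measured):

```python
import math

rail = 8.6                      # V, from the two battery stacks (+/-8.6 V)
swing_loss = 1.5                # V the output can't reach near each rail (assumed)

v_peak = rail - swing_loss      # ~7.1 V peak before clipping
v_rms = v_peak / math.sqrt(2)   # ~5.0 V RMS
dbv = 20 * math.log10(v_rms)    # ~14 dBV maximum output level
print(v_rms, dbv)
```

So a higher supply voltage would indeed push the clipping point up roughly proportionally.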

Frequency sweep for THD at -7 dBV:

dH8vGG3.png


Still not very flat.

Spectrum at 1000Hz and -7 dBV looks pretty good:

FAr5nN2.png


Does anyone see any issues with my test setup that would explain the bad results on the amplifier?
 
It looks like your test setup can be trusted. The lousy performance from the little switching amp is not unusual.

I would recommend the TI amps from my experience. They work well, sound good, and many have on-board DSP you can use to optimize the driver performance.
 
Thanks for the feedback 1audio!

I don't have any TI amps currently, but nice tip. I might check them out in the future. I have a TDA1517 as part of a Velleman MK190 kit I'm going to build and test later. Curious how that will work. It seems TI sells that chip as TPA1517. Not sure if they differ in any way.

I was able to find a point where the harmonics were under control with the amp I'm discussing in this thread, at -34 dBV input signal! Not much current produced at that level.

oEOtggP.png


I've done some tests with an 8 ohm load as well. It behaves very similarly:

fYbR5Sn.png


What surprised me was the phase:

9AETXyJ.png


The channels are 180 degrees from each other!

Here is the phase in a frequency sweep:

bvJUZQo.png


Here is the same test at 4 ohm for comparison:

WmTRDzJ.png


4 ohm works like expected, while 8 ohm seems completely wacky.
I tried switching left and right on the input signal, as well as switching the loads between left and right, and none of it made any difference. So it must be the amp doing this.

Guess it's not 8 ohm compatible. Anyone seen something like this before? Is it the chip, or could it be related to the output filter?

The amp comes bundled with 4 ohm speakers, so it's not something Creative would worry about, but interesting as I thought an amp capable of 4 ohm loads would always be able to drive 8 ohm loads.