Sound Quality Vs. Measurements

The direct DAC feed to headphones reduced but did not remove this issue, so at the time I figured "it must be the recording" (and to a degree it may actually be).
Recording engineers seem to think it's a recording problem. Look at any pro audio catalog under "De-Essers".
They may be happy to hear that they need only fix the playback, not the recording. ;) IME, poor crossovers are a large part of the problem at the playback end.
 
Folks,

The problems created by SMPS illustrated:

[External image not available: spectrum analyzer plot of an SMPS DC output.]


The above is a rather excellent result. Sticking even a generic 20MHz 'scope on the DC output of most SMPSes is very illuminating.

It is possible to reduce the problems, but it tends to get so complex that you lose the cost/weight/size benefits for which you were introducing the SMPS in the first place.

An alternative could be a sinewave inverter running directly off the mains with a 400Hz output, together with 400Hz transformers (as used in aircraft/ship/military applications) instead of 50/60Hz ones. This can give many of the advantages of an SMPS while keeping the RF noise controllable.

Ciao T
 
Pano,

Recording engineers seem to think it's a recording problem. Look at any pro audio catalog under "De-Essers".

Thank you very much. I am very familiar with De-Essers and their use. And no, they do not really fix the problem either.

A funny thing is that we did not have any severe problems with this back in East Germany, where recording consoles used transformers and discrete circuitry.

They may be happy to hear that they need only fix the playback, not the recording. ;) IME, poor crossovers are a large part of the problem at the playback end.

Please re-read what I wrote. Certain recording techniques (which, interestingly, seem rarely if ever to involve the use of a De-Esser) seem to keep this problem from becoming audible, or from appearing at all (it would be interesting to know WHY).

However, my experience is that certain types of replay systems appear to handle what is on the recording in a way that aggravates the audibility, often severely so, while others appear not to cause such aggravation and, if such a thing is even possible, seem to "fix" the problem.

Of course, I have no real interest in debating this, as I do not have the problem. I merely found the fact that I do not have this problem (despite the fact that, if measured performance using the traditional set were a reliable guide to sound quality, my system should sound completely awful) both interesting and significant, as well as a bit bewildering...

I was mainly sharing my bewilderment, and considered that some may derive some benefit from my sharing it...

Ciao T
 
Folks,

The problems created by SMPS illustrated:

[External image not available: spectrum analyzer plot of an SMPS DC output.]


The above is a rather excellent result. Sticking even a generic 20MHz 'scope on the DC output of most SMPSes is very illuminating.

It is possible to reduce the problems, but it tends to get so complex that you lose the cost/weight/size benefits for which you were introducing the SMPS in the first place.

An alternative could be a sinewave inverter running directly off the mains with a 400Hz output, together with 400Hz transformers (as used in aircraft/ship/military applications) instead of 50/60Hz ones. This can give many of the advantages of an SMPS while keeping the RF noise controllable.

Ciao T

I don't understand the point being made with this measurement, not least because I do not know what was measured and how. But two observations can be made.

The first is that up to 300kHz, ripple is below 10uV against a 100mV reference level, so that would be better than 0.01%, or 80dB down. Show me a linear PS under a realistic load that does better. The second is that all the nasties are at RF, most of them pretty high up in the MHz range. Provided the source of this RF contamination is properly shielded, I don't see what would be complicated in getting rid of it.
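As a quick sanity check of that ratio (taking the 10uV and 100mV figures at face value; this is just arithmetic, not a measurement), a minimal sketch in Python:

```python
import math

# Hypothetical figures read off the plot: ~10 uV of ripple against a 100 mV reference.
ripple_v = 10e-6
reference_v = 100e-3

ratio = ripple_v / reference_v
print(f"ratio: {ratio * 100:.4f} %")              # 0.0100 %
print(f"level: {20 * math.log10(ratio):.1f} dB")  # -80.0 dB
```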

vac
 
...
The first is that up to 300kHz, ripple is below 10uV against a 100mV reference level, so that would be better than 0.01%, or 80dB down. Show me a linear PS under a realistic load that does better. The second is that all the nasties are at RF, most of them pretty high up in the MHz range. Provided the source of this RF contamination is properly shielded, I don't see what would be complicated in getting rid of it.

vac

I would not draw that conclusion at all from this data. If you said from 100kHz up to 300kHz, fine. The horizontal scale STARTS at 100kHz.

Since this is obviously a spectrum analyzer (the RBW and VBW give it away), and most SAs show nothing below 100kHz (they are high-pass filtered), you can't say anything about what the supply does below 100kHz.

While on the subject of test gear, another tip - this one relates to scopes. Scopes have a linear display, which means they show at most about 40dB of dynamic range.

Can anyone hear more than 40dB? If so, don't expect any amazing revelations from a "single" scope display. Dual displays with different input ranges, of course, are different.
Conversely, if you can't "see it" on a single scope display, that says very little about whether it's actually there.
This fundamental problem has been a big detriment to folks looking in the "time domain" for things they can hear.
To truly do it right, you need to grossly oversample with a LOT of resolution and bandwidth, and view the time domain on a log scale. This means using something like a 16-bit, 10-100 MS/s digitizer.
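To put a rough number on that, here is a minimal, hypothetical sketch (Python/numpy, synthetic data rather than a real capture): a 1kHz tone carrying a -70dB artifact, which is a small fraction of one display level on a linear ~8-bit screen, but trivial to read off once viewed in dB.

```python
import numpy as np

# Made-up example (not anyone's actual test rig): a 1 kHz tone carrying a -70 dB,
# 250 kHz artifact, "captured" at 10 MS/s with plenty of bits.
fs = 10_000_000
t = np.arange(0, 0.01, 1 / fs)
main = np.sin(2 * np.pi * 1_000 * t)
spur = 10 ** (-70 / 20) * np.sin(2 * np.pi * 250_000 * t)
x = main + spur

# On a linear screen with ~8 bits of vertical resolution (generous for one trace),
# the artifact amounts to a small fraction of a single display level:
print("artifact in display levels: %.3f" % (10 ** (-70 / 20) * 256 / 2))   # ~0.04

# Viewed on a log scale - here the residual after removing the fundamental, in dB -
# it is obvious:
residual = x - main                    # stands in for notching out the 1 kHz tone
print("residual level: %.1f dBFS" % (20 * np.log10(np.abs(residual).max())))  # ~ -70 dB
```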
 
However, my experience is that certain types of replay systems appear to handle what is on the recording in a way that aggravates the audibility, often severely so, while others appear not to cause such aggravation and, if such a thing is even possible, seem to "fix" the problem.
Spitty, essy sibilants are one of my pet peeves as well, and for a long time I blamed the recordings, especially modern (pop etc.) ones, which use lots of processing and often purposely boost the high treble on vocals. Some recordings sounded worse than others of course, but it was a problem with many recordings on many speakers.

No amount of EQ seemed to fix the issue - tame down the treble a bit with EQ and it becomes less objectionable (although never eliminated), but then the treble sounds dull and lacking in sparkle; leave enough treble to sound crisp and you're stuck with sibilance - seemingly a no-win situation.

Then I heard a pair of good ribbon tweeters for the first time and it was one of those "Ohhh" moments. Without even much effort to get the network right, I immediately noticed the lack of spitty esses or sibilance yet at the same time the top end was crisp and well balanced.

Recordings that previously had moderately annoying sibilance seemed to have none at all, and even modern pop recordings that I had found completely intolerable before were now perfectly listenable, (surprisingly good in some cases) albeit still quite hot in the treble.

The question is, which "conventional" measurements, if any, give us any clue as to why this might be? (I'm certainly far from the only one to comment on the lack of sibilance with most ribbon tweeters, so I think there is definitely something there.)

Frequency response? A good ribbon tweeter can be very flat in response, albeit usually with a bit of a gradual rise above 6-8kHz which needs correction in the network. Narrow-band smoothness is also very good in a well designed unit, although not unique - there are modern dome tweeters that are comparably flat in the 6-15kHz range, and EQ doesn't stop a sibilant tweeter from sounding sibilant. So I don't think frequency response is a significant determining factor, as important as narrow-band smoothness at high frequencies is for overall quality.

Harmonic distortion and/or IM distortion? A small ribbon tweeter has significantly higher distortion than modern high quality dome tweeters, so unless we believe the "people prefer higher distortion" myth, harmonic/IM distortion can't be the determining factor.

Directivity? Many ribbon tweeters (mine included) are waveguide loaded, and together with the vertical length of the ribbon they are fairly directional, especially vertically. However, there are plenty of small horn tweeters of comparable directivity which suffer from sibilance (especially ones using a soft dome instead of a compression driver), so I don't think directivity is a factor either.

CSD? Of all the conventional measurements, this one seems to have the most promise for explaining sibilance issues with tweeters. Many tweeters, most notably soft domes, have significant resonant decay right in the sibilance region around 6-8kHz, where the first major dome resonance occurs (the middle of the dome moving in anti-phase with the perimeter).

Of all the kinds of tweeters, foil ribbons are one of the few that are almost entirely devoid of resonant decay / energy storage in the sibilance frequency range. A few really good metal dome tweeters are also pretty clean in this range, but I think their higher-frequency resonances introduce other, slightly different but similarly annoying colorations which the ribbons don't have.
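For anyone who wants to check this on their own drivers: a cumulative spectral decay can be approximated from a measured impulse response by repeatedly sliding the analysis window past the start of the response and re-taking the spectrum. A minimal sketch (Python/numpy; the impulse response below is synthetic, just to have something to run, not data from this thread):

```python
import numpy as np

def csd(ir, fs, n_slices=40, step=5, window_len=512):
    """Approximate a cumulative spectral decay waterfall from an impulse response.

    Each slice discards `step` more samples from the start of the response and
    takes the spectrum of what remains, so later slices contain only energy that
    is still decaying (stored energy / resonances)."""
    ir = np.asarray(ir, dtype=float)
    fade = window_len // 4
    taper = np.hanning(2 * fade)[fade:]            # falling half-Hann for the trailing edge
    slices = []
    for i in range(n_slices):
        seg = ir[i * step:i * step + window_len].copy()
        if len(seg) < window_len:
            break
        seg[-fade:] *= taper
        slices.append(20 * np.log10(np.abs(np.fft.rfft(seg)) + 1e-12))
    freqs = np.fft.rfftfreq(window_len, 1 / fs)
    times = np.arange(len(slices)) * step / fs
    return freqs, times, np.array(slices)          # plot as a waterfall: level vs freq vs delay

# Placeholder data (substitute a real measured impulse response):
fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
fake_ir = np.exp(-t * 800) * np.sin(2 * np.pi * 7_000 * t)   # a slowly decaying 7 kHz resonance
freqs, times, waterfall = csd(fake_ir, fs)
```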

Despite measuring and listening to ribbons for nearly 10 years, I'm still not sure that CSD is the whole story. What I am convinced about, though, is that unnatural-sounding sibilance is almost entirely (maybe 90%) a speaker issue, not a recording issue.

If poor CSD in the speaker is indeed the root cause of sibilance then a poor recording can only reveal/excite the sibilance, not cause it.
 
A high Q resonance (either peak or null) could cause problems, but might not show up in measurements unless one of the test frequencies happened to hit it spot on. Music, where the higher frequencies could be mainly percussive noise, might excite the resonance because noise hits all frequencies within its bandwidth.

If that's in reply to my post then yes, I largely agree - my working theory is that sibilance is a matter of brief impulsive stimuli in the music that "hit" or excite high-Q driver resonances in the sibilance region, which then "hang on" for a comparatively long time due to their slow decay.

Because they don't decay as quickly as they should, the perceived integrated response is greater than it should be, even if the 1/3rd-octave steady-state frequency response were EQ'ed flat. (Looked at in the time domain with a constantly varying music stimulus, there would be too much energy in that frequency region integrated over time.)

I would say this perception mechanism is probably the same for any high-frequency, high-Q resonance - except that the frequency range where it occurs determines the character of the sound; at 6-8kHz it sounds sibilant.

Such resonances don't necessarily show up well in the frequency response plot either, especially if they're dips, as such dome resonances often are, but they are revealed quite clearly in the CSD. So in that sense there is a "conventional" measurement that does correlate with it - if indeed resonances are the key issue.
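To put rough numbers on the "hang on for a comparatively long time" idea: the ring-down time of a resonance scales directly with its Q. A minimal sketch with made-up values (Python/numpy; nothing here is measured data):

```python
import numpy as np

# Illustrative values only: a single resonance at 7 kHz, compared at Q = 2 and Q = 20.
fs = 192_000
f0 = 7_000
t = np.arange(0, 0.01, 1 / fs)

for q in (2, 20):
    # Impulse response of a 2nd-order resonance: an exponentially decaying sinusoid
    # whose envelope is exp(-w0*t / (2Q)).
    env = np.exp(-2 * np.pi * f0 * t / (2 * q))
    ring = env * np.sin(2 * np.pi * f0 * t)        # the ringing waveform itself
    # How long until the ringing has fallen 40 dB below its initial level?
    t40 = t[np.argmax(20 * np.log10(env) < -40)]
    print(f"Q = {q:2d}: rings for {t40 * 1e3:.2f} ms before dropping 40 dB")
```

The Q = 20 case rings roughly ten times longer than the Q = 2 case, which is the sort of behaviour a brief burst of programme material could plausibly make audible.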
 
Too true. A few nights ago, I had a long conversation with a recording engineer about my mike preamp. He just couldn't "get" the notion of a box of gain that simply amplified without coloring or why in the world someone would want to build such a thing.

When the job is creating the sound, fidelity may or may not be relevant. When the job is reproducing the sound, fidelity is paramount. Two distinct objectives that happen to use similar technology.
 
It is really beyond me why people believe that transforming voltage at 50 or 60 Hz would be intrinsically better than doing the same at, say, 40kHz.

I would say a big trafo by itself is simply a simpler circuit than a switching power supply. Extra circuits mean extra noise.

A switching power supply does not remove the 50/60Hz fundamental; it adds high-frequency overtones. No real magic here.

A linear voltage regulator handles lower frequencies better.

A big low-frequency trafo (a lot of winding) is usually an effective inductance by itself, which can help filter some noise out of the power path right away.

IMHO there are no real downsides to a big low-frequency trafo from a sound quality point of view. Cost, size, weight and power efficiency are other valid considerations, but they are not critical for a standalone monoblock on the floor, fed from a receptacle connected to a power plant, once we optimize the design for sound quality and are not limited by a tight budget.

Moreover, living with huge, powerful class A monsters is a tough approach (I abandoned Krell monoblocks myself several years ago). So for high-fidelity class A plus high-efficiency speakers in a living room, we are talking about amps of a few watts, where switching power supplies would be overkill anyway.
 
Originally Posted by ThorstenL
However, my experience is that certain types of replay systems appear to handle what is on the recording in a way that aggravates the audibility, often severely so, while others appear not to cause such aggravation and, if such a thing is even possible, seem to "fix" the problem.

EXACTLY, but in my wife's case it is horns and some violins. So, what objective measurement can we make on the amp/driver system that quantifies this problem? It would be easy to just say pony up for well designed, above-average equipment, except that her favorite amp is a mid-fi, but well executed, Rotel 840! I am sure my mid-fi speakers "allow" this more than decent ones would.

One gentleman who did some recording engineering in his time suggested it was common to boost the 4K range a bit to make the recording "pop". Maybe true, but I tried a gentle broad dip (via my DCX) and it did not seem to help. Of course adding that monster in the mix might have done more harm than good. I should build a passive notch filter for a better test.
 
Hi,

Then I heard a pair of good ribbon tweeters for the first time and it was one of those "Ohhh" moments. Without even much effort to get the network right, I immediately noticed the lack of spitty esses or sibilance yet at the same time the top end was crisp and well balanced.

I am rather partial to ribbon/magnetostat/electrostatic treble, but they are not requirements. I have had speakers with dual-cone (main & whizzer) full-range drivers that had no issue when driven by the right electronics, and these have a very bad CSD in the higher regions...

Ciao T
 
One gentleman who did some recording engineering in his time suggested it was common to boost the 4K range a bit to make the recording "pop". Maybe true, but I tried a gentle broad dip (via my DCX) and it did not seem to help. Of course adding that monster in the mix might have done more harm than good. I should build a passive notch filter for a better test.
Even if the recording or the speakers introduce a narrow peak at 4kHz (which could indeed cause a harsh or fatiguing sound), applying a gentle broad dip will not fix it: the net result is still not flat, and the ringing in the time domain is not addressed. Only an exactly complementary dip would help.

Have you done a narrow-band sweep of the speakers? See any peaks around the 4kHz region? If you do, try applying a parametric EQ that precisely corrects it (correct centre frequency, bandwidth, and amplitude) and see what you notice.

It doesn't take much of a peak in that region to upset things - a 1dB, 1/3rd-octave peak around 4kHz is plenty to cause a forward, slightly aggressive/harsh sound.
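For what it's worth, an "exactly complementary dip" can be tried cheaply in software before building any hardware. Below is a minimal sketch of a standard peaking-EQ biquad (the RBJ audio EQ cookbook form); the 4kHz / -3dB / Q=4 values are placeholders to experiment with, not a prescription:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (RBJ audio EQ cookbook form)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Placeholder correction: a -3 dB dip at 4 kHz with a Q of 4 - adjust centre
# frequency, Q and gain to exactly mirror the measured peak.
fs = 48_000
b, a = peaking_eq(fs, 4_000, -3.0, 4.0)

# Apply to some audio (x would normally be a mono float array read from a file):
x = np.random.randn(fs)       # stand-in test signal
y = lfilter(b, a, x)
```

Sweep the centre frequency, Q and gain until the measured peak nulls out; if the harshness tracks the correction, that points at the speaker rather than the recording.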
 