Sound Quality Vs. Measurements

Status
Not open for further replies.
Hi,

My point was that cables aren't directional, because otherwise it would show up as distortion and it doesn't; and now it has morphed into this. Funny.

I wish we would more clearly distinguish between cable (a finished item) and bare wire.
A piece of wire is directional in the sense that it is left in an unnatural state by the manufacturing process it goes through when it leaves the die.
In that sense, and only in that sense, it is true.
After that it's anyone's guess whether that would be enough to make an audible difference, regardless of whether it would be measurable at all.
The directionality is measurable, as I stated before, and in the context of research it is an important issue, but this research has nothing to do with anything audio related, not directly anyway.
This research focuses on the effects of crystal boundaries, voids, etc. In short, we're talking submicron layers here, sputtering and plating techniques and so forth.
Heck, they can even grow supersized crystals on sublayers and what have you.

To cut a long story short, one of the major players in cables for audio/video use is based in the Netherlands and I can assure you that they keep a very close eye on all this research and implement some of it in their product.
They have had the factory in the US scratching their heads more than once, just looking at all the demands they had to meet and asking what the hell all this is going to be used for...

I can also assure you that they're not the only audio company funding this research. From cartridge manufacturers to speaker makers, they are there.

Some people only seem to hear things after they've been able to measure them. Me, I'm curious, I like to know why I hear what I hear.

Respectfully, ;)
 
Read my sig: the wide open-loop BW idea is still taken as gospel, to the point that one very successful high-end designer takes an amplifier with huge BW and slew rate and claims that simply loading the VAS resistively to kill the gain and raise the apparent open-loop BW has great benefit.

My point has always been that this idea is wrong, and teaching it to folks that come to learn some basic engineering is a bad idea. Same goes for the sound of feedback.
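Why the idea is wrong can be checked with a minimal single-pole model (all numbers below are made up for illustration, not taken from any particular amplifier): two amps with the same gain-bandwidth product, one with the VAS resistively loaded (low gain, "wide" open-loop bandwidth) and one without (high gain, narrow open-loop bandwidth). Above the dominant pole, the loop gain, and hence the distortion suppression, is the same either way:

```python
# Minimal single-pole sketch: same gain-bandwidth product (10 MHz),
# reached two ways. All numbers are illustrative, not from any real amp.
import math

def loop_gain(f, a0, f_pole, closed_loop_gain):
    """Magnitude of the loop gain A*beta for a single-pole open-loop response."""
    a = a0 / math.sqrt(1 + (f / f_pole) ** 2)  # open-loop gain magnitude at f
    return a / closed_loop_gain                 # beta = 1 / closed-loop gain

G = 20.0  # closed-loop gain

# "Wide" OL bandwidth: VAS resistively loaded -> gain 10^4, pole at 1 kHz.
# "Narrow" OL bandwidth: unloaded VAS -> gain 10^6, pole at 10 Hz.
for f in (1e3, 20e3):
    wide = loop_gain(f, 1e4, 1e3, G)
    narrow = loop_gain(f, 1e6, 10.0, G)
    print(f"{f/1e3:4.0f} kHz: loop gain, wide OL BW = {wide:6.1f}; narrow OL BW = {narrow:6.1f}")
```

At 20 kHz both amps have a loop gain of about 25, so the feedback available to reduce distortion is identical; at 1 kHz the "narrow" amp actually has more. Resistive loading throws feedback away at low frequencies and buys nothing at high ones.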

Before I forget, I remain surprised at how rarely the high quiescent bias in the output stage of the Otala/Lohstroh amplifier is credited. This was a large departure from the practice of the day, and considering its impact on reducing crossover distortion it IMO had as much or more to do with the sound.

I do NOT recall Otala/Lohstroh ever proposing that, but I could be wrong. If they did, I'd say that was wrong.

But I'll also say that this may well have been an overzealous fan, who noticed the fact that in their amp the VAS is indeed biased at 20 mA, a relatively high level of current.

I think the point was to bias the VAS at a current at which the transistors used have about reached their peak performance in the given circuit. In my experience, this will usually be at 10-15 mA, depending on the transistor. For the 2SA1381, 15 mA is better; for the BF721, 10 mA is just fine; etc. One size does not fit all.

As for biasing the output stage, my finding is that just about everything worthwhile happens by about 130 mA per output trannie. More gives next to nothing; you have to hog it all the way to full class A for anything bigger to happen. On the other hand, I should add that I like paralleled/series pairs of no less than 3 and rarely more than 4. 130 mA per trannie may not seem like much, but keep in mind that most Japanese amps are biased at 20-40 mA per trannie, so it is a fair bit more.
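The 130 mA figure can be put in perspective with a bit of arithmetic. Assuming a complementary push-pull output stage (a textbook result, not anything specific to the amps discussed here), the stage stays in class A as long as the peak output current does not exceed twice the total quiescent current:

```python
# Rough arithmetic: how much power a push-pull output stage delivers in
# class A, given the per-device quiescent bias. Numbers from the post:
# 3 paralleled devices, 8-ohm load.

def class_a_watts(n_devices, iq_per_device, r_load=8.0):
    """Power into r_load up to which a complementary push-pull stage stays
    in class A: peak load current <= 2 * (n_devices * iq_per_device)."""
    i_peak = 2 * n_devices * iq_per_device
    return (i_peak ** 2) * r_load / 2  # average power of a sine at i_peak

print(class_a_watts(3, 0.130))  # 3 devices at 130 mA
print(class_a_watts(3, 0.030))  # 3 devices at a more typical 30 mA
```

So three devices at 130 mA give roughly the first 2.4 W into 8 ohms in class A, where most listening happens, versus about 0.13 W at a 30 mA bias.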

But that's just my opinion.
 
I once had an interesting experience showing that sometimes reducing the open-loop gain with a resistor loading the driver stage can, in fact, make the circuit MORE LINEAR than without.
This is where you have to take more into account than just a simplified understanding of a design.
It goes like this:
The DRIVE IMPEDANCE to the 'follower' at the output stage can be very important, sometimes dominating the overall distortion. How can this be? Is a follower not almost perfect, open loop? Here is the difference: the bipolar follower becomes a BETA MULTIPLIER when the drive impedance and the input impedance presented by the follower are comparable. This is so in many, many standard power amp circuits. Therefore the assumptions of linearity as calculated for the follower might be way off, because the Gm of the bipolar transistor(s) is virtually out of the equation. NOW beta dominates, and its linearity, which depends both on the device and its operating point.
This is why it is not always a losing thing to load the driver stage with a resistor. By doing so, you present a lower drive impedance to the follower circuit and you usually linearize it further, compared to having a very high drive impedance.
Now, is it a good idea to add this resistor? I don't usually do it; haven't done it for decades, in fact. But my requirements are for very low distortion at rated output, so I can't afford to throw away the gain, and therefore the feedback. Unfortunately, my open-loop bandwidth is usually about 4 kHz (est.) in my power amps, and not the 20 kHz that I would like to see. Does this make an audible difference? Probably. I make good amps, but they are not my best, in principle.
By the way, I talked to Charles Hansen of Ayre yesterday. It would be useful to keep an eye on his efforts, as I do.
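The beta-multiplier point above can be sketched numerically. The model below is a bare emitter follower with the source impedance referred to the emitter as Rs/(beta+1); every value in it (a Darlington-ish beta swinging between 2000 and 6000 over the cycle, 50k for an unloaded VAS output impedance versus 3k3 with a loading resistor, an 8-ohm load) is an illustrative assumption, not a measurement:

```python
# Sketch of the beta-multiplier effect: an emitter follower's gain starts
# to depend on beta once the drive impedance is comparable to the input
# impedance the follower presents. All numbers are illustrative.

def follower_gain(r_source, beta, r_e=1.0, r_load=8.0):
    """Small-signal gain of an emitter follower driven from r_source.
    Referred to the emitter, the source impedance shrinks by (beta + 1),
    so beta enters the gain directly."""
    return r_load / (r_load + r_e + r_source / (beta + 1))

def gain_spread(r_source, beta_lo=2000, beta_hi=6000):
    """Relative gain variation as beta swings over the signal cycle
    (beta varies with collector current in a real output stage)."""
    g_lo = follower_gain(r_source, beta_lo)
    g_hi = follower_gain(r_source, beta_hi)
    return (g_hi - g_lo) / g_lo

print(f"unloaded VAS (50k drive): {gain_spread(50e3):.0%} gain variation")
print(f"VAS loaded (3k3 drive):   {gain_spread(3.3e3):.0%} gain variation")
```

With the 50k drive impedance the follower's gain swings by roughly a factor of two as beta varies; with the 3k3 loading resistor the variation drops to around a tenth of that, which is exactly the linearization John describes.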
 
Almost no one is interested.

Class A lovers tend to dislike elements which reduce distortion to ppm levels, such as high NFB factors or various EC schemes.

Those who are not Class A lovers generally regard the above as means to avoid elevated dissipation; highest efficiency (and all of its merits) remains their primary objective.

(a bit like inviting both a Jesuit and an inner Jihadist to visit a brothel, after a bribe over a copious meal)
 
The curious thing about many is the "obsessive" belief that conventional electronics, say amplifiers, are "good enough". No, they're not - and I was reminded of this by Pavel's take on the ExtremA sound, http://www.diyaudio.com/forums/solid-state/96853-extrema-class-strikes-back-24.html#post4070637. Once exposed to what well debugged systems can achieve, in reproducing all that has been captured in recordings, one can never take conventional playback seriously again ...
 
The debugging is the interesting part. The straightforward measurements might give you clues, indications of final capabilities, but they're typically only a first step. If otherwise, why do systems built with "superbly measuring" components sound pretty blah most of the time?

I'm lazy; building a "real amplifier" is not very interesting. The juicy part is getting hold of one already built, which is sorta OK to listen to, and kicking it into shape. Like the F1 car which had all the best designers working on it, but it's a dog: fails to finish, crashes, limps in at the rear of the field every time. It's the last stage of fiddly, constant refinement where the real work is done, and that finally makes it a winner ...
 
This is the problem that we fight here. Why bother with amp design details, IF nobody can hear a difference? When can nobody hear a difference? When subjected to ABX testing designed and run by true believers that there is no difference.

What a bunch of crap.

I don't care about my beliefs; I care only about reality, and I change my beliefs according to the evidence provided. The same goes for all scientific people.

You, sir, are the one who has fixed beliefs and refuses to change them when faced with evidence showing your beliefs are wrong.
 

Why am I not shocked that it is supposedly audio companies looking at this? Any pointers to the research, please...
And why is this not discussed in other areas... why just in audio? Audio is not at the cutting edge of technology, yet it seems to have discovered phenomena that have passed the rest of electronics by!
As to crystal boundaries, have you looked at the research relating to PCB copper foils, crystal structure, signal propagation, etc.?
 
If the shield is carrying the return this is impossible; if you want to shield RF then it has to be connected at both ends.

I completely understand this. But it should also be clear that when you disconnect the shield, with both sender and receiver connected to ground, "directivity" of cables can be heard. Of course it has nothing to do with directivity, but everything to do with RF interference.

In the pro audio industry, it's very common to disconnect the shield at one end. Strange how often professional people have so little knowledge of basic concepts like this.
 

Marce, while your request for relevant links is quite logical, it is hard to do.

From my days of association with Philips, I remember that I had to pry out some of their research projects. Big companies often research aspects they are unwilling even to admit they are researching. I suspect they investigate even the very improbable possibilities, just in case there's some truth in them.

For example, I was shown and demoed a prototype CD player, obviously from its construction intended to show the world what Philips knows about the CD medium, and trust me, it sounded like nothing I had ever heard before, even from the likes of Wadia. It sort of made it to the market, meaning it did appear as a finished product (in several guises), but only in Japan, never anywhere else. Why, I have no idea. The point is, unless you closely followed the Japanese market, you'd never know it even existed.

Since Philips is a mass manufacturer, and since at the time it still owned Marantz, my GUESS is that they didn't want to promote in-house competition. Just like they made the Black Tulip and Laboratory ranges just once and never again.
 

Very nice post, thanks John, that's very clear. It's a valid reason for deliberately lowering the VAS Zout in the interest of making the output stage more linear. I'm glad we left the 'but it makes the OL BW wider' nonsense behind.

Jan
 