Sound Quality Vs. Measurements

Status
Not open for further replies.
Thorsten,

First let me say I really enjoy your posts, and from them I (right or wrong) sense some of your motivation. This image I've formed of you just doesn't line up with the x3 PSU testing you noted a few pages back.

I understand there is never enough time, but how could someone like you let such test results go without getting to the bottom of them? You went as far as modifying production units, which makes me want to believe that you did find a measurable metric, or at least are still working on it?

Thanks
-Antonio

Three identical production units were pulled from the line and tested. We selected amplifiers with levels less than 0.1dB different ...

... One unit was unchanged from the production line ...

If you would like me to propose any mechanism or physical process that would explain the outcome (including my own certainty in picking "my" design and preferring it, whereas others clearly did not prefer it), I would have to admit to being clueless.

I would not generalise beyond the actual individual amplifier design tested, but I suspect there are many lessons hidden in such a small and simple test.

Ciao T
 
Sy,

My apologies - I said IEC when I meant EBU. Perhaps if I'd had one less cup o' coffee, I would have typed "ASTM." But the same comment applies to all the alphabet-soup standards and recommendations.

A better name for these recommendations would be "best practice".

In any case, controlled listening is the sine qua non of objective evaluation of subjective aural perception and/or preference.

As long as we can say "controlled listening with tests implemented according to 'best practice'" you will find me entirely in agreement.

Ciao T
 
Personally I don't agree with the definitions of objective-subjective as having a direct connection to measurements or no measurements. To me "objective" basically means "without bias". The listener is aware of the sound and nothing else, and so draws a conclusion from the sound and nothing else. Not the shiny chrome or the leggy dancing girls.
A reasonable conclusion from "It should be noted that [subjective] listening tests in themselves are not controlled and may be deliberately or unintentionally made to give incorrect indications" is that listening tests may be deliberately or unintentionally made to give correct indications. Seems like a pretty lame method, then. Maybe I misunderstand. But I do understand one cannot at once claim "Suitable controls...are needed" yet also claim "controls are a completely separate issue." Statistics aren't used to develop controls; controls are used to develop statistics.
I do agree measurements may give incorrect data, for whatever reason. That is a measurement issue, not an audio issue.
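To make the statistics side of this concrete: a controlled forced-choice listening test is usually scored against the chance of pure guessing. Below is a minimal sketch (an editorial illustration, not anything from the posts above) of the exact one-sided binomial p-value for such a test, assuming a simple two-alternative protocol with a 50% guessing rate:

```python
from math import comb

def abx_p_value(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """Exact one-sided binomial p-value: the probability of getting at
    least `correct` right out of `trials` if the listener were guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

# 12 correct out of 16 forced-choice trials, scored against chance
print(round(abx_p_value(12, 16), 4))  # → 0.0384
```

Twelve correct out of sixteen is about the smallest score that clears the conventional 5% threshold; with fewer trials, chance results become surprisingly easy to hit, which is one reason the controls and the statistics cannot be treated separately.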
 
Hi,

No, a standard is just that, a standard.

Correct. An example may be British Standard (appropriately abbreviated "BS") Number 5750, issued by the British Standards Institution.

Appropriately, this BS codified "best practice" in Quality Management into an auditable format by which organisations could be certified.

So, standards are codified "best practice".

I was not referring, however, to a standard, but a set of recommendations, introduced thusly:

EBU Project Group said:
This article presents a number of useful means and methods for the subjective quality assessment of audio programme material in radio and television, developed and verified by EBU Project Group P/LIST.

The methods defined in several new EBU Recommendations and Technical Documents are suitable for both operational and training purposes in broadcasting organizations.

So, while a standard may be codified best practice, the document I referenced is simply "Recommended Best Practice".

I am very comfortable that we disagree, because, alas, you are wrong.

You should sometimes AT LEAST skim-read documents before rubbishing them.

Ciao T
 
Perhaps you should read them before linking them. Or lying about me "rubbishing" them. That document does not use the term "best practice" or "recommended best practice," and especially not interchangeably (which is also incorrect). "Recommended practice" or "recommendation" are yet other distinct terms which should not be carelessly used interchangeably with "best practice."
 
Hi,

Personally I don't agree with the definitions of objective-subjective as having a direct connection to measurements or no measurements. To me "objective" basically means "without bias".

I guess we need to descend into semantics and linguistics then.

The simple word "objective" is fraught with more meanings than may be sensibly and safely contained by a dozen.

Objective | Define Objective at Dictionary.com

The ones we concern ourselves with here are likely:

dictionary.reference.com said:
adj
1. existing independently of perception or an individual's conceptions: are there objective moral values?
2. undistorted by emotion or personal bias
3. of or relating to actual and external phenomena as opposed to thoughts, feelings, etc

Personally, I relate the term objective in science (and other related areas) to "objective(1)" and "objective(3)", as the very practice of science (from Latin scientia, meaning "knowledge") is not possible if our approach is distorted by emotion or bias.

I relate the term objective as in a determination made without bias, like the legal ones King Solomon is reported to have often made, and as in how a jury or judge should regard testimony placed before them, to "objective(2)".

Now in your view, the scientific use of the term objective should be a legal one, namely "objective(2)", as opposed to the phenomenological one, namely "objective(1,3)".

At that point we must part company, I am afraid. As a result of dramatically different understandings of the actual meaning of the word objective in science, I doubt there is any ground for exchange and debate, and the rest of your missive literally cannot make sense to me, as you use words with a different meaning than I do.

Ciao T
 
Sy,

Perhaps you should read them before linking them. Or lying about me "rubbishing" them. That document does not use the term "best practice" or "recommended best practice," and especially not interchangeably (which is also incorrect). "Recommended practice" or "recommendation" are yet other distinct terms which should not be carelessly used interchangeably with "best practice."

Well, in most of the fields I have experienced (Sound Engineering, Electrics/Electronics, Accounting/Financial Systems), a recommendation or recommended practice from a professional body describes what, at least in these disciplines, is colloquially also known as "best practice", without any particular obligation for it to be actually followed.

The EBU recommendation I linked provides exactly that: best practice for the subjective assessment of the quality of programme material. I am sure this information is of equal use and applicability in assessing the impairment of programme material by electronics, speakers, cables, etc.

Now STANDARDS are rather different. They are either legally binding (electrical safety comes to mind), need to be adhered to for certain independent certifications (e.g. Underwriters Labs, to be able to get liability insurance), or need to be complied with to attain and retain certain accreditations (e.g. ISO9K in QM or ACCA for accounting).

Standards are generally not "optional" if applicable, recommendations absolutely are...

Ciao T
 
A 'standard' issued by engineers might be misinterpreted by a lawyer to be 'best practice', but in reality it may or may not be. One of the problems I found when Quality Assurance (and we always had to use the initial capitals) was introduced into IT many years ago was that it could reduce quality. The aim of QA (like a standard) is to ensure that you do what you are supposed to do and what you have said you will do. It simply ignores the issue of whether this is the correct or best practice thing to do (although managers never seemed to realise this). The aim is consistency, not quality. If your QA procedure was faulty, and many were, then it would insist that you did the job the wrong way every time.

A standard test procedure is more interested in comparing what you do now with what you did 10 years ago or will do in 10 years time. Whether it is helpful or appropriate is not of any interest to an auditor with a clipboard.
 
some may get a better idea of the issues (audio perceptual experimental design and practice) from http://www.madebydelta.com/imported...l_T4_Perceptual_Audio_Evaluation_Tutorial.pdf

I have the book, and agree with some reviewers' criticism that it's a little light on the background theory for audio perceptual experiment design - but it's a far cry from the "strawman" that several here enjoy beating on.
It does outline methods, reference standards, and their applications to audio evaluation.


in particular we had dvv's copper vs silver wired amp perceptual test - and critiques of fundamental problems from poor blinding - and possibly poor level matching, judging from the apparent dismay dvv expressed at the required tolerance
but rather than intelligently discussing those specific issues, which have well-established audio perceptual experimental design “standards”, we had a rush to "validate" dvv's experience and defend it from "impossible to satisfy" critics

"bad" experimental design is bad whether the results support your prejudices or not,
and no matter how “impractical” the amount of effort “good” experiments would require
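As a side note on the level-matching tolerance mentioned above: a quick sketch of what a dB tolerance means as a voltage ratio. The 0.1 dB figure below is purely illustrative (it is the figure Thorsten cited earlier in the thread, and one often quoted for matched-level listening tests):

```python
from math import log10  # imported for the inverse check in the usage note

def db_to_voltage_ratio(db: float) -> float:
    """Voltage ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

# A 0.1 dB matching tolerance corresponds to holding levels within
# roughly 1.2% in voltage terms:
ratio = db_to_voltage_ratio(0.1)
print(f"{(ratio - 1) * 100:.2f}% voltage difference")  # → 1.16% voltage difference
```

The inverse, `20 * log10(ratio)`, recovers the dB figure; either direction makes clear why matching to within 0.1 dB is a non-trivial practical requirement.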
 
Hi,

One of the problems I found when Quality Assurance (and we always had to use the initial capitals) was introduced into IT many years ago was that it could reduce quality. The aim of QA (like a standard) is to ensure that you do what you are supposed to do and what you have said you will do. It simply ignores the issue of whether this is the correct or best practice thing to do (although managers never seemed to realise this).

Well, I have worked in IT, though not on the programming/hardware side but as a sysadmin. Not much QA going on there. Or best practice. At least in those days, epitomised by the BOFH (been there, done that, got the T-Shirt, though it no longer fits as I'm now a Bigger B..ard, and am only portrayed in a mild form in the TV show "The IT Crowd")...

Bastard Operator From Hell - Wikipedia, the free encyclopedia

The aim is consistency, not quality. If your QA procedure was faulty, and many were, then it would insist that you did the job the wrong way every time.

Then, with respect, your QA Manuals/Standards were clearly issued by the "BS Institute"....

A standard test procedure is more interested in comparing what you do now with what you did 10 years ago or will do in 10 years time. Whether it is helpful or appropriate is not of any interest to an auditor with a clipboard.

True, having both faced Auditors and audited myself (I'm still not sure which of the two is more fun for a gentleman of superior intelligence to the average gibroni), they have their own objectives.

However, if the Auditors are worth their name, they are aware of best practice and their recommendations will not just reflect standards and checkboxes (of course these need to be ticked and woe betide you if you fail them), but also reference best practice and indicate where you fall short of best practice.

Equally, I have had Auditors revise their reports after I illustrated to them that the standard that triggered their recommendations was in clear contradiction of best practice.

If your auditors give you any less, change your auditors...

Ciao T
 
Hi,

some may get a better idea of the issues (audio perceptual experimental design and practice) from http://www.madebydelta.com/imported...l_T4_Perceptual_Audio_Evaluation_Tutorial.pdf

I find little if anything to disagree with in the above presentation.

If I compare that with the practice commonly evidenced by the ABX Mafia it clearly illustrates why I object to their tests strongly.

in particular we had dvv's copper vs silver wired amp perceptual test

What I particularly noticed was that my own posted test was met with silence.

Do I take this as agreement that even though, looking at the results of an AP2, you could not tell the differences between amplifiers, when music was played real differences were observed?

"bad" experimental design is bad whether the results support your prejudices or not and despite the “impractical” amount of effort “good” experiments would be

This applies as much to tests that fail to reject the null hypothesis as to those that appear to reject it, which is often conveniently forgotten by some "objectivists".
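This point can be made concrete with a quick statistical power calculation. The sketch below is a hedged illustration with assumed numbers (a 16-trial forced-choice test and a listener who genuinely hears the difference 70% of the time), not a description of any test discussed here:

```python
from math import comb

def p_at_least(k_min: int, n: int, p: float) -> float:
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Suppose a test requires >= 12/16 correct to call a difference "heard"
# (one-sided p ≈ 0.038 under pure guessing). If a listener genuinely
# detects the difference on 70% of trials, how often does the test
# still fail to reject the null?
n, k_crit, p_true = 16, 12, 0.70
power = p_at_least(k_crit, n, p_true)
print(f"power = {power:.2f}, miss rate = {1 - power:.2f}")  # → power = 0.45, miss rate = 0.55
```

Under these assumptions the test misses a real, fairly large effect more often than it finds it, which is exactly why a null result from a small, under-powered test proves little either way.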

Ciao T
 
I guess we need to descend into semantics and linguistics then.
Your attempt at condescension is an awfully blunt dagger, since I'm only following, not leading, the way. Re:
Objective implies in the context that...
Subjective implies in the context that...
You see, you declare that
as the very practice of science (from Latin scientia, meaning "knowledge") is not possible if our approach is distorted by emotion or bias.
and then proceed to tell us that definition (2) is irrelevant and invalid in this (yes, scientific) context. 'Tis definition (1) that is irrelevant and invalid in this context.
Very true that without common definitions there can be no meaningful communication.
PS To be honest, I think definition (1) is invalid and poorly executed. Morals depend on conceptions at some level. Since morals are... concepts!
 
Do not confuse design, quality, and reliability. Quality is whether you built what you intended, reliability is whether it works over time, design is whether it solves the problem. You can design a piece of du-du, build it exactly, and it lasts forever. (The Chevy Cavalier comes to mind.) High quality, high reliability, bad design. Of course the customer will still hate it and call it bad quality.

Me? I love standards. There are so many to choose from.
 
some may get a better idea of the issues (audio perceptual experimental design and practice) from http://www.madebydelta.com/imported...l_T4_Perceptual_Audio_Evaluation_Tutorial.pdf

I have the book, and agree with some reviewers' criticism that it's a little light on the background theory for audio perceptual experiment design - but it's a far cry from the "strawman" that several here enjoy beating on.
It does outline methods, reference standards, and their applications to audio evaluation.


in particular we had dvv's copper vs silver wired amp perceptual test - and critiques of fundamental problems from poor blinding - and possibly poor level matching, judging from the apparent dismay dvv expressed at the required tolerance
but rather than intelligently discussing those specific issues, which have well-established audio perceptual experimental design “standards”, we had a rush to "validate" dvv's experience and defend it from "impossible to satisfy" critics

"bad" experimental design is bad whether the results support your prejudices or not,
and no matter how “impractical” the amount of effort “good” experiments would require

I think the problem lies elsewhere.

Somehow, some people seem to have completely missed the goal of that exercise - we simply wanted to determine how one type of cable behaves versus another type IN OUR EQUIPMENT. Just that and no more.

We were definitely NOT evaluating copper vs silver in general; I mean, hey, we had just one sample of each, and both from the same manufacturer. The best we could hope for was an assessment of those two types of cables in just one device. It's a no-brainer that, taken as anything more than a simple test valid for only that one device, it was no test at all.

But you saw the reaction. Some people immediately proceeded to take the text apart as if we were going to be on national news, proclaiming a new truth. Why such a reaction, I honestly don't know.

What I do know, and say here and now outright, is that such behavior is about the worst thing that can happen to a discussion forum. It scares off any independent thinking by immediately branding one either a heretic or a swindler.

And some of those wanna-be "scientists" didn't even bother checking out the details; whatever for, when they "know" it's all wrong? I never claimed to be a scientist, and I never claimed this was anything other than a personal exercise, one we were sort of forced into simply because locally we have a choice of just that one pure silver cable. That's why there were no other samples, and why we couldn't even dream of doing a larger, multi-model test. Nor could we be bothered to do it; we leave such matters to the well-greased magazines.

Before somebody now slams me because of a low opinion of magazines, INCLUDING Stereophile, this comes from a guy who was a magazine editor himself from 1986 to 1997, who helped start up a new magazine, who published 312 pieces in various magazines, who published a book which sold through three editions (and was stopped cold by the local bout of wars), who authored and anchored his own weekly TV show for three seasons (and who gets stopped on the street even today and asked why there are no more shows), and who eventually got out of there because he couldn't stomach the dirt, the lies and the avarice any more.

At least allow for the possibility that I just may know what I'm talking about, don't dismiss it out of hand. I never dismiss anyone out of hand, that practice tends to lock you up inside your mind worse than in the Bastille.
 
Current is the important measurement parameter, not necessarily voltage.

One must work within the tools one has. I was looking around to see if I could rent one. No luck yet. I can't justify half of retail for a week's use. I have been watching eBay, but no luck. So the best I can do is scope voltage and a PC-based spectrum analyzer. And I can't even fool my digital snapshot camera into taking a single-shot picture. Life of an amateur.
 
One must work within the tools one has. I was looking around to see if I could rent one. No luck yet. I can't justify half of retail for a week's use. I have been watching eBay, but no luck. So the best I can do is scope voltage and a PC-based spectrum analyzer. And I can't even fool my digital snapshot camera into taking a single-shot picture. Life of an amateur.

The Tek ones are fine instruments and are able to see d.c. But for what you're interested in, a relatively low-inductance air-core coil will work, with the problem being that the current-carrying conductor (diode lead for example) has to pass through it (and make sure that lead passthrough is insulated from the coil!).

Calibration is possible if tricky. Relative measurements otoh are still illuminating. And don't forget a cheap AM/FM portable radio to get some idea of what's being sprayed around.
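For those relative measurements, the expected signal level from such an air-core sense coil wound around a conductor can be estimated with the usual Rogowski-style mutual inductance, M = μ0·N·A/(2πr). A rough sketch follows; all geometry and signal numbers are assumed, purely for illustration:

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability, H/m

def sense_coil_emf(n_turns: int, area_m2: float, radius_m: float,
                   i_peak: float, freq_hz: float) -> float:
    """Peak EMF induced in an air-core coil wound around a conductor:
    M = mu0*N*A/(2*pi*r), peak V = 2*pi*f*M*I for a sinusoidal current."""
    m = MU0 * n_turns * area_m2 / (2 * pi * radius_m)
    return 2 * pi * freq_hz * m * i_peak

# Illustrative numbers (assumed, not from the post): 50 turns of 5 mm
# diameter, wound 10 mm from a lead carrying 1 A peak at 100 kHz
v = sense_coil_emf(50, pi * 0.0025**2, 0.010, 1.0, 100e3)
print(f"{v * 1000:.2f} mV")  # → 12.34 mV
```

Millivolt-level outputs like this are easily seen on a scope, which is why uncalibrated, relative current sensing with a homemade coil is still illuminating even without a commercial current probe.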
 
Somehow, some people seem to have completely missed the goal of that exercise - we simply wanted to determine how one type of cable behaves versus another type IN OUR EQUIPMENT. Just that and no more.

I understood that quite well. And as I said, this is no knock on your engineering abilities or your honesty or your sincerity, but rather on your ability to set up a properly controlled sensory test. There's no shame in that; not very many engineers can, as it's not part of the training or the usual job duties. I certainly didn't learn any of that in school; I had to pick it up the hard way, working in several industries where accurate, repeatable, and replicable data from sensory tests (haptic and organoleptic, in my case) were critical, and poorly designed or executed tests would have put us out of business.

I would recommend again that you read through the little article in the last (Vol 2) issue of Linear Audio. It specifically deals with the sort of small-scale testing you were trying to do in your wire experiments.
 