Audiophile Ethernet Switch

Status
Not open for further replies.
frugal-phile™
Joined 2001
Paid Member
No evidence of this exists.

I was there. I saw it happen. Something was wrong, so I searched for answers. In the case of jitter, the human ear/brain turns out to be particularly sensitive to time errors.

Now we have super-duper 1-bit DAC chips that allow manufacturers to just assemble the support circuitry around them, and they do a good job. That brings some pretty good DACs for peanuts. It is a bit more work to get an R-R right, but with potential benefits.

dave
 
It does require a hearing-to-motor-response conversion step. There is cognition there, and that is what needs to be removed.
I'm sorry, but that makes no sense. Without cognition we don't have hearing. What would you be testing?
A good DBT is way harder to do well than most think.
Yes!
ABX is particularly bad, with a ton of gotchas, and really only has the power to conclude that things sound different. Any other result is statistically meaningless (because of the beta error?). An ABX test can be used to prove that two things sound different. Nothing else.

dave
That is the entire point of the ABX test, but not a “gotcha”. If you cannot prove a statistically significant difference, there’s no point to continue to find a preference. Since all sighted testing, even for an audible difference, is horribly flawed by bias, there really isn’t much choice.

The typical things people think are problems with the ABX method are simply the result of not understanding the concept well.
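To make the statistics concrete: a standard ABX run is scored against a one-sided binomial test, since a listener who hears no difference is right half the time by guessing. A minimal sketch in Python (the 16-trial run with 12 correct answers is a made-up example, not data from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value for an ABX run: the probability of
    scoring at least `correct` out of `trials` purely by guessing
    (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials
p = abx_p_value(12, 16)
print(round(p, 4))  # 0.0384 -> significant at the usual 5% level
```

Note this only ever addresses "different vs. not proven different"; a non-significant score does not demonstrate equivalence, which is the beta-error point raised above.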
 
I was there. I saw it happen. Something was wrong, so I searched for answers.
Are you trying to say that your claim of personal observation is sufficient proofs?
In the case of jitter, the human ear/brain turns out to be particularly sensitive to time errors.
Please clearly define what you are calling “jitter” and “time errors”.
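For readers following along: in data conversion, "jitter" conventionally means random deviation of the sampling instants from their nominal grid, and the resulting amplitude error scales with the signal's slew rate. A back-of-envelope sketch of the textbook SNR limit for a full-scale sine (the 20 kHz / 1 ns figures are purely illustrative, not claims about any device in this thread):

```python
import math

def jitter_snr_db(freq_hz: float, jitter_rms_s: float) -> float:
    """Best-case SNR limit imposed by sampling-clock jitter on a
    full-scale sine wave: SNR = -20*log10(2*pi*f*t_jitter)."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

# Illustrative numbers: a 20 kHz sine sampled with 1 ns RMS clock jitter
print(f"{jitter_snr_db(20_000, 1e-9):.1f} dB")  # ~78.0 dB
```

The formula only bounds random jitter; correlated (signal-dependent) jitter produces discrete sidebands instead of a noise floor, which is a separate discussion.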
Now we have super-duper 1-bit DAC chips that allow manufacturers to just assemble the support circuitry around them, and they do a good job. That brings some pretty good DACs for peanuts. It is a bit more work to get an R-R right, but with potential benefits.

dave
R-R?
 
I was there. I saw it happen. Something was wrong, so I searched for answers. In the case of jitter, the human ear/brain turns out to be particularly sensitive to time errors.

Now we have super-duper 1-bit DAC chips that allow manufacturers to just assemble the support circuitry around them, and they do a good job. That brings some pretty good DACs for peanuts. It is a bit more work to get an R-R right, but with potential benefits.

dave

They are not 1 bit. I am afraid you have very little idea what you are on about, no offense.
 
frugal-phile™
Joined 2001
Paid Member
I'm sorry, but that makes no sense. Without cognition we don't have hearing. What would you be testing?

Perhaps I used the word cognition wrong. It takes a conscious action to hit the button; you just hear. I want to directly measure the brain under the influence of music.

If you cannot prove a statistically significant difference, there’s no point to continue to find a preference.

The result is only applicable to that specific instance. You still don’t know if there is a difference. The test has told you nothing.

dave
 
More than 1 bit. Do some research. All you seem to do is thread-crap with ignorance. You do know jitter was well known in data conversion before digital audio was even a thing. The exact effects are described in many papers. Do you even know what they are?

Btw, R2R does not even mean resistor to resistor.
 
A good DBT is way harder to do well than most think.
It depends. A good-enough test, able to be performed by a DIY audio fanatic, that would reveal useful results for him/her (provided he/she wants to listen ;)) is not that hard to do. If you want to do one to publishable "medical" standards then I agree, but who here, in their right mind and/or not in the audio business, would want to do that?
 
Im sorry but that makes no sense. Without cognition we don’t have hearing. What would you be testing?

I'd think planet10's statement is related to the internal judgement processes of the test participants.
We know that participants react differently, doing the same evaluation, under different test protocols.

The accepted explanation (up to now) is based on the assumption that the internal cognitive processes change due to the different tasks.
It was already observed in 1952 that participants got better results in A/B tests than in ABX tests.

That is the entire point of the ABX test, but not a “gotcha”.

The "gotcha", so to speak, exists due to the more difficult task and is further strengthened by the common statistical analysis based on the proportion of correct answers.
It is known that this "proportion correct" number is most likely lower in ABX tests than in other test protocols like A/B (paired comparison), with the same number of participants evaluating the same sensory difference under different test conditions.
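One way to see why proportion correct drops in ABX, without appealing to any particular paper, is a small Monte Carlo sketch: an idealized observer with unit Gaussian internal noise, using the obvious decision rules for each task. All parameters here (the sensory distance d', the trial count) are assumptions for illustration, not a model of the cited experiments:

```python
import random

def simulate(d_prime: float, trials: int = 100_000, seed: int = 1):
    """Monte Carlo comparison of an A/B (2AFC) task vs an ABX task for
    an observer whose internal observations carry unit Gaussian noise."""
    rng = random.Random(seed)
    ab_correct = abx_correct = 0
    for _ in range(trials):
        a = rng.gauss(0.0, 1.0)        # observation of stimulus A
        b = rng.gauss(d_prime, 1.0)    # observation of stimulus B
        # A/B: "which is higher?" -- correct if B's observation is higher
        if b > a:
            ab_correct += 1
        # ABX: X is A or B at random; the "differencing" rule picks
        # whichever reference observation is closer to X's observation
        x_is_a = rng.random() < 0.5
        x = rng.gauss(0.0 if x_is_a else d_prime, 1.0)
        picked_a = abs(x - a) < abs(x - b)
        if picked_a == x_is_a:
            abx_correct += 1
    return ab_correct / trials, abx_correct / trials

pc_ab, pc_abx = simulate(1.0)
print(f"A/B: {pc_ab:.3f}  ABX: {pc_abx:.3f}")  # ABX proportion correct is lower
```

The extra noisy observation of X, plus the comparison step, pushes the same underlying sensitivity to a lower score in ABX; that is the mechanism the "proportion correct" argument above relies on.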

If you cannot prove a statistically significant difference, there’s no point to continue to find a preference.

It depends as usual; test orthodoxy usually works that way, first testing for difference (or equality/non equality) then doing preference tests, but there is no test police observing all experiments. :)

As difference is usually the least interesting point (people are usually looking for something better), and because an existing preference implies a difference, and given that A/B tests are usually easier on the participants, it is a good idea to do preference tests. Segmentation might happen (meaning both alternatives are liked equally), but there are ways to handle that.

Since all sighted testing, even for an audible difference, is horribly flawed by bias, there really isn’t much choice.<snip>

It is not that they are flawed by bias (how could we know if not by further experiments?) but that the risk of bias impact could be high.
 
<snip> You do know jitter was well known in data conversion before digital audio was even a thing. The exact effects are described in many papers. Do you even know what they are?

<snip>

Does it really make a difference if some reasons were already known but omitted in audio?
Even "dither" was known (in general since ~1870, AFAIR, and for picture coding since the 1960s), and it was mentioned in publications both by Sony and by Barry Blesser, but obviously it wasn't used for some time after the invention of the CDDA.
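For anyone unfamiliar with dither, the core trick fits in a few lines: adding TPDF noise before the quantizer turns signal-correlated truncation error into benign noise, so signals below 1 LSB survive on average instead of vanishing. A toy sketch (step size and the 0.3 LSB input are arbitrary illustration values):

```python
import random

def quantize(x: float, step: float = 1.0) -> float:
    """Plain quantizer: round to the nearest step (no dither)."""
    return step * round(x / step)

def quantize_tpdf(x: float, rng: random.Random, step: float = 1.0) -> float:
    """Quantizer with non-subtractive TPDF dither: the sum of two
    independent uniform variables, each spanning +/- 0.5 LSB, is added
    before rounding, which linearizes the average transfer curve."""
    d = (rng.random() - 0.5) * step + (rng.random() - 0.5) * step
    return step * round((x + d) / step)

rng = random.Random(42)
# A constant 0.3 LSB input: undithered, it always rounds to 0 ...
assert quantize(0.3) == 0.0
# ... but the dithered outputs average out to the true value
mean = sum(quantize_tpdf(0.3, rng) for _ in range(20_000)) / 20_000
print(f"{mean:.3f}")  # close to 0.3
```

Each dithered sample is still a whole step, but the density of nonzero samples encodes the sub-LSB value, which is exactly what low-level audio detail needs.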

I can remember the outrage among German EEs after the first audio magazine wrote in reviews about different sound quality among the CD players reviewed (outrage based on the usual measured numbers being way below the thresholds), although it was already known that a single DA converter used for both channels wasn't the best idea.

The same happened when listeners reported sound quality differences when using the S/PDIF interface; it took some time, but Hawksford/Dunn and Julian Dunn examined the situation and found some explanations.
Same happened when listeners reported audible difference between S/PDIF and Toslink.
Same happened when listeners reported inferior sonic quality when using the USB interface.
 
Perhaps I used the word cognition wrong. It takes a conscious action to hit the button; you just hear. I want to directly measure the brain under the influence of music.
Been done. And you think ABX is difficult!!!

The result is only applicable to that specific instance.
You still don’t know if there is a difference. The test has told you nothing.

dave
You clearly don't understand the concept. I guess we can stop here.
 
I'd think planet10's statement is related to the internal judgement processes of the test participants.
We know that participants react differently, doing the same evaluation, under different test protocols.
Sounds to me like he wants to measure the brain directly. Possible, but even less practical.
The accepted explanation (up to now) is based on the assumption that the internal cognitive processes change due to the different tasks.
It was already observed in 1952 that participants got better results in A/B tests than in ABX tests.
It would be great to have that 1952 reference.
The "gotcha", so to speak, exists due to the more difficult task and is further strengthened by the common statistical analysis based on the proportion of correct answers.
It is known that this "proportion correct" number is most likely lower in ABX tests than in other test protocols like A/B (paired comparison), with the same number of participants evaluating the same sensory difference under different test conditions.
"It is known"...by whom?

Different test conditions often produce different results. But A/B with no control, and done the typical sighted way by amateurs is not scientifically valid.

It depends as usual; test orthodoxy usually works that way, first testing for difference (or equality/non equality) then doing preference tests, but there is no test police observing all experiments. :)
If you throw "good science" out the window, then, sure.
As difference is usually the least interesting point (people are usually looking for something better), and because an existing preference implies a difference, and given that A/B tests are usually easier on the participants, it is a good idea to do preference tests. Segmentation might happen (meaning both alternatives are liked equally), but there are ways to handle that.
But what you just outlined is the source of data corrupted by bias.

It is not that they are flawed by bias (how could we know if not by further experiments?) but that the risk of bias impact could be high.

Agreed. But that fact is not recognized by the people forming strong opinions by their results and ignoring even the possibility of bias. That's a big problem.
 
Sounds to me like he wants to measure the brain directly. Possible, but even less practical.
It would be great to have that 1952 reference.

The first time I've read about additional imaging techniques used along with psychoacoustical experiments was by Oohashi et al., who used PET scans and EEG. Today fMRI comes into play more often, trying to separate the conscious answer given by the test participants from the physiological reactions.

The experiments within SDT have shown that additional factors, like extra bonuses paid for successes, had a strong influence on the so-called ROC, meaning that when judging the same sensory difference the internal judgement barrier was set higher, to a more conservative level. It would be interesting to see whether the physiological response is different (via a differently working feedback path) or whether it is just another response by the brain areas forming the conscious answer.

And we still rely on these answers when doing controlled listening tests; hence my assumption about what planet10 was alluding to.

The 1952 remark was stated by Harris in a letter to the editors:

In this laboratory we have made some comparisons among DLs for pitch as measured by the ABX technique and by a two category forced-choice judgment variation of the constants method (tones A, B, subject forced to guess B "higher" or "lower"). Judgments were subjectively somewhat easier to make with the AB than with the ABX method, but a greater difference appeared in that the DLs were uniformly smaller by AB than by ABX. On a recent visit to this laboratory, Professor W. A. Rosenblith and Dr. Stevens collected some DLs by the AB method with similar results.
The case seems to be that ABX is too complicated to yield the finest measures of sensitivity.

( J. Donald Harris, Remarks on the Determination of a Differential Threshold by the So-Called ABX Technique, J. Acoust. Soc. Am. 24, 417 (1952).)

"It is known"...by whom?<snip>

The sensory experimenters in the food-science field are very much interested in exploring the strengths of the various test protocols (the developers of Signal Detection Theory were as well), and over the years they have run experiments to assess the differences; see for example this excerpt from one experiment:

[Table 2 excerpt from Huang & Lawless (1998): proportion correct and consistency for the same panel under different test protocols]


The same group of participants, judging exactly the same sensory difference. Retasting in each trial was allowed, and the participants were informed what the difference (and its direction) was.
Not only did the proportion of correct answers vary (even between quite similar protocols like ABX and Triangle), but the consistency was quite different as well.

(Yu-Ting Huang,Harry Lawless, Sensitivity of the ABX Discrimination Test, Journal of Sensory Studies 13 (1998) 229-239.)

Different test conditions often produce different results. But A/B with no control, and done the typical sighted way by amateurs is not scientifically valid.

In this context I'm always talking about controlled listening tests (including the "blind" property), regardless of whether it's A/B, ABX, Triangle or some other protocol; otherwise I'd mention "sighted".
 
AX tech editor
Joined 2002
Paid Member
Just to back up, "Subjective" means human judgement is involved. "Objective" means measurements are involved. Since human perception is the end goal, the ABX/DBT becomes a very useful, if cumbersome and prohibitively expensive, tool.

The outcome of a well planned and executed DB test with statistically significant results can be considered an objective result. That is the goal.

Jan
 