Audiophile Ethernet Switch

frugal-phile™
...those have become a new folk legend...

Not new. A resurgence. There are many still holding onto their Philips TDA1541/TDA1543 DACs.

Multibit DACs are much harder to do (largely because few, if any, multibit DAC chips are made anymore), and therefore pricier, than simpler chip DACs. A growing market for them, I take it, means something.

dave
 

TNT

I was there. I saw it happen. Something was wrong. Search for answers. In the case of jitter, the human ear/brain turns out to be particularly sensitive to time errors.
-snip-
dave

Sure - but "digital" jitter doesn't manifest itself as timing errors on the analog side. Jitter actually creates distortion. It's a timing error in the digital domain, so it can easily be confused if one is not well versed in digital PCM technology.
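To illustrate the point, here is a minimal sketch with assumed numbers (plain NumPy, not anyone's actual measurement): sample a pure tone with a slightly jittered sampling clock, and the error shows up as distortion/noise spread across the spectrum, not as a time shift of the tone on the analog side.

```python
import numpy as np

fs = 48_000          # nominal sample rate (Hz)
f0 = 1_000           # test tone (Hz)
n = 1 << 16          # number of samples analysed
rng = np.random.default_rng(0)

t_ideal = np.arange(n) / fs
t_jittered = t_ideal + rng.normal(0.0, 1e-9, n)   # 1 ns RMS random clock jitter

ideal = np.sin(2 * np.pi * f0 * t_ideal)          # perfect sampling clock
jittered = np.sin(2 * np.pi * f0 * t_jittered)    # jittered sampling clock

# The error is not a time shift of the tone: take an FFT of `err` and it
# shows up as noise/sideband energy around the tone, i.e. added distortion.
err = jittered - ideal
err_db = 10 * np.log10(np.sum(err**2) / np.sum(ideal**2))
print(f"error energy relative to the tone: {err_db:.1f} dB")
```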

//
 
The biggest objection to ABX testing comes from subjectivists because it shows them that they imagine most of the differences they hear. If there's an "obvious" difference between DUTs, then an ABX test should come back as 100%.

I’d dial that back from 100% to “statistically significant” based on the number of subjects and trials. But I basically agree with where you’re going.
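For what "statistically significant" means in practice, here is a minimal sketch of the usual analysis with hypothetical numbers (SciPy assumed as a dependency): count correct answers over a set of forced-choice trials and run a one-sided exact binomial test against 50% guessing.

```python
from scipy.stats import binomtest

# Hypothetical ABX run: 16 trials, 12 correct identifications.
trials, correct = 16, 12

# One-sided exact binomial test against pure guessing (p = 0.5).
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct -> p = {result.pvalue:.3f}")
# p < 0.05 is the usual threshold; with few trials even a real,
# audible difference can easily fail to reach it.
```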
 
As are 99% of the ABX tests done by amateurs.

dave

To pull off a real ABX/DBT you need a comparator system and precision gain matching at the very minimum. Often, much more than that. That lets out nearly every amateur.

However, I can’t recall an amateur actually claiming to do a real ABX/DBT in the first place. So, I have no idea why anyone would state the 99% figure.
 
I’d dial that back from 100% to “statistically significant” based on the number of subjects and trials. But I basically agree with where you’re going.

It is a matter of definition: if you define "obvious" as "must be detected by inexperienced participants in an ABX", then it would work.
Otherwise we know that even quite large differences can remain undetected under specific test conditions; one example of a one-dimensional difference was included in the table I've posted recently.

OTOH we know that experienced listeners (trained to detect the difference and trained under the specific test conditions) are able to reach surprising levels of sensitivity in controlled listening tests.

The problem exists because the ABX protocol seems to be more difficult for participants, and because the usual analysis (using the number of correct trial answers and running an exact binomial test) raises the risk of beta errors.

Unfortunately nobody (actually only quite a small number of members) tells listeners that they need training under the specific test conditions, and that they should use so-called positive controls to check whether they reach sufficient sensitivity under these conditions. A negative control must be used as well.

To pull off a real ABX/DBT you need a comparator system and precision gain matching at the very minimum. Often, much more than that. That lets out nearly every amateur.

Back in the beginning that was true, but today (it began with Arny's pcabx) there are software tools (the Foobar ABX tool is quite often mentioned) that allow everyone to do this kind of test.

However, I can’t recall an amateur actually claiming to do a real ABX/DBT in the first place. So, I have no idea why anyone would state the 99% figure.
You'll find some in this forum (and in others as well) where totally inexperienced members are doing ABX runs with as few as 6-10 trials, which can be fine if one is experienced, but if not lets the beta-error risk skyrocket.
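A rough sketch of why so few trials inflate the beta-error risk, using a hypothetical listener and assumed numbers: even someone who genuinely hears the difference 70% of the time will usually fail to reach significance in an 8-trial run.

```python
from math import comb

def min_correct_for_significance(n, alpha=0.05):
    """Smallest number of correct answers whose one-sided binomial
    p-value against guessing (p = 0.5) is at or below alpha."""
    for k in range(n + 1):
        p = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
        if p <= alpha:
            return k
    return None

def power(n, p_true, alpha=0.05):
    """Probability that a listener who answers correctly with
    probability p_true actually passes the test (1 - beta)."""
    k_min = min_correct_for_significance(n, alpha)
    return sum(comb(n, i) * p_true**i * (1 - p_true)**(n - i)
               for i in range(k_min, n + 1))

# Hypothetical listener who hears the difference 70% of the time:
for n in (8, 16, 40):
    print(f"{n:2d} trials: power = {power(n, 0.7):.2f}")
```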

I'd assume that the overall situation, and the acceptance of controlled listening tests, would be much better if everyone who demands test results would point out the importance of training for reaching correct results.
 
It is a matter of definition: if you define "obvious" as "must be detected by inexperienced participants in an ABX", then it would work. Otherwise we know that even quite large differences can remain undetected under specific test conditions; one example of a one-dimensional difference was included in the table I've posted recently.
Agreed, since that’s pretty much what I said.
OTOH we know that experienced listeners (trained to detect the difference and trained under the specific test conditions) are able to reach surprising levels of sensitivity in controlled listening tests.
ABX tests have shown this to be true, but OTOH this is not a problem with the test method; it's the introduction of a specific characteristic within the test subjects, an effective population sample bias and resulting data skew. If it's known, and the goal of the test is to qualify a difference as generally audible to anyone, and trained listeners make up a significantly large portion of the subject group, then you have skewed the results and are no longer answering the original question.

You obviously know this. I fail to see it as an ABX flaw when it’s clearly a matter of test subject selection as it relates to the desired information.
The problem exists because the ABX protocol seems to be more difficult for participants, and because the usual analysis (using the number of correct trial answers and running an exact binomial test) raises the risk of beta errors.
While I agree that an ABX trial is more involved, it does consist of a set of three A/B comparisons, the first of which is the classic A/B, no different from the “usual analysis”, followed by two more choice pairings, also no different from the “usual analysis”. The challenge then comes in matching identical test pairings. I have not looked up your references yet, but the inferiority of the ABX protocol isn’t jumping out so far.
Unfortunately nobody (actually only quite a small number of members) tells listeners that they need training under the specific test conditions, and that they should use so-called positive controls to check whether they reach sufficient sensitivity under these conditions. A negative control must be used as well.
Nobody? Who's that? Perhaps a researcher who isn't applying the correct controls. If my question is whether the average listener can hear something, there's no need or desire for training. Since trained listeners are a minute segment of the population, most valid population samples would not be invalidated by excluding them.

If my question is whether any human can detect something, then I'd not only want trained listeners in the sample, I'd probably train a few, then do a run with no trained listeners as the control.

Highlighting uninformed application of any test protocol does not negate the protocol's advantage.
Back in the beginning that was true, but today (it began with Arny's pcabx) there are software tools (the Foobar ABX tool is quite often mentioned) that allow everyone to do this kind of test.
Nope, still true. Arny's ABX Comparator (actually David Clark's box) was an expensive tool. The only hardware comparator available today is also an expensive tool. The software tools often suffer from various problems. I've tried every one I could easily find out about. One often-mentioned system does not switch quickly, another only compares audio samples by restarting from the beginning, and none had the ability to adjust gain. Also, they are all useless for working with hardware DUTs; they are useful only for working with files. Even then, if comparing two files of differing sample rates, they depend on the DAC tracking that switch quickly, and some don't. It's a nice amateur tool, but not very versatile.

I'm happy to report my Clark ABX Comparator is still working well after 37 years.
You'll find some in this forum (and in others as well) where totally inexperienced members are doing ABX runs with as few as 6-10 trials, which can be fine if one is experienced, but if not lets the beta-error risk skyrocket.
Agreed. 6-10 trials is never enough, even for someone experienced. You need more than that just to represent the results of varying stimulus music.

Again, any tool in uneducated hands most likely produces poor results. A hammer will do that too.
I'd assume that the overall situation, and the acceptance of controlled listening tests, would be much better if everyone who demands test results would point out the importance of training for reaching correct results.
Training is important for a specific type of test and desired data set. It's not to be assumed to always be essential, but it often is. Even a bit of training like "here's what you're listening for" is sometimes better, but again, only if that type of data skew is beneficial.

Interesting how absolute we are, right?

All of this just points out how difficult a good ABX test is to perform.

Just to keep the thread on track, I would like to point out that devising a good ABX test to compare network switches would be one of the most complex and expensive setups yet. Two networks, two switches, two destination devices, two source devices, all identical but the switches. And the source devices would need to be started in perfect sync. The two networks would need other devices on them too, with identical, specific and controlled data streams giving the switch something to do that is valid and identical. Multiple streaming methods would have to be tested. And the usual large, trained test-subject sample and number of trials. Given what we know about data networks and how streaming works, there is practically zero impetus to engage in such a test, pro or otherwise.

And that forces indirect A (time delay) B subjective, sighted comparisons dripping with expectation bias.
 