The Black Hole......

Relax Bob, people do this all the time. They create a blind test that throws serious listeners off. IF we did not hear well enough, we would hear NO difference between the files. Bill, you know better.

Why would any blind test throw any listeners off? Just another, of many, excuses to hide the reality that these SELF-proclaimed golden ears can't hear as well as their egos believe. I love the bit about blaming the computer: "now that I've fixed it, the other file sounds better." You mean now that I know which file is which, I like the other one better.
 
Yes, just have a look through "PC Based" here on DIYA - those folks really slog through the mud in the deepest of trenches - compared to merely having to select a resolution / sample rate from a pull-down menu! Or select the right radio buttons...
 
Computers have been reliably used in SOTA music playback since the late '80s / early '90s. Just get the proper additional hardware and application(s). Not cheap, though.

Oh. That's all, and then everything works just fine out of the box with my next Win 10 upgrade.

Sure. Piece of cake. I have never had the patience needed for any computer soft/hardware that wasn't PnP and didn't work perfectly every time. These PC systems have way too many traps and little things to know about.

And I have no use for a DAW either. So.... I just want to play music with a minimum of hassles and surprises. So, I just download files and play them through the DAC-3. I am never disappointed nor surprised by something sounding bad, like "A" did but shouldn't have.

THx-RNMarsh
 
CBDB -- we've gone through this a hundred times: testing* does drink up a bunch of cognitive capability (especially among untrained subjects), which certainly makes blind tests less sensitive. Well, any test with an untrained subject, but blinded testing protocols are typically more stringent. Plus, we can't have flame wars about two different unblinded protocols, can we? 😉

That said, lower sensitivity does not mean NO sensitivity, so the usual loud-mouthed claims of huge differences would very much show up. Small differences will assuredly be much harder to detect compared to an "ideal tester". Anyone who throws the baby out with the bathwater in terms of testing has clearly shown their ignorance or ideology.

* (blind testing, but really any stressful situation in general, so it's protocol dependent and one could certainly design a sighted experiment that has greater cognitive load than a blinded test)
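To put rough numbers on "lower sensitivity is not no sensitivity", here's a toy sketch (plain Python, with made-up per-trial hit rates, not measured data) of a 16-trial ABX-style run: a listener whose per-trial hit rate survives the protocol at 95% or even 75% still clears the usual p < 0.05 hurdle, while one knocked down to 55% lands near chance.

```python
# Toy sketch: one-sided binomial p-values for a 16-trial ABX-style run.
# The per-trial hit rates are illustrative assumptions, not measured data.
from math import comb

def p_value(hits, trials, chance=0.5):
    """Probability of getting at least `hits` correct by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

trials = 16
for true_hit_rate in (0.95, 0.75, 0.55):      # "huge", "moderate", "barely there"
    expected_hits = round(true_hit_rate * trials)
    print(f"hit rate {true_hit_rate:.2f}: ~{expected_hits}/{trials} correct, "
          f"p ≈ {p_value(expected_hits, trials):.4f}")
```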
 
Oh. That's all, and then everything works just fine out of the box with my next Win 10 upgrade.

Sure. Piece of cake. I have never had the patience needed for any computer soft/hardware that wasn't PnP and didn't work perfectly every time. These PC systems have way too many traps and little things to know about.
This is what I do not understand. You (and others) go to great lengths in setting up the listening environment, adjusting crossovers, turntables, DSPs and tape bias, selecting cables, replacing capacitors, upgrading op-amps and so on, and yet you can't configure one stinkin' computer that any teenager can?
A computer is a universal appliance; it was never meant to be a SOTA audio playback device out of the box.
If you can't master this part, you should refrain from making statements on sample rates, bit depths and modern digital audio playback technology, I must say.
 
CBDB -- we've gone through this a hundred times: testing* does drink up a bunch of cognitive capability (especially among untrained subjects), which certainly makes blind tests less sensitive. Well, any test with an untrained subject, but blinded testing protocols are typically more stringent. Plus, we can't have flame wars about two different unblinded protocols, can we? 😉

That said, lower sensitivity does not mean NO sensitivity, so the usual loud-mouthed claims of huge differences would very much show up. Small differences will assuredly be much harder to detect compared to an "ideal tester". Anyone who throws the baby out with the bathwater in terms of testing has clearly shown their ignorance or ideology.

* (blind testing, but really any stressful situation in general, so it's protocol dependent and one could certainly design a sighted experiment that has greater cognitive load than a blinded test)

See, for example, the experiments on "inattentional blindness/deafness":

[image: still from the "invisible gorilla" selective-attention experiment]


Lots of people don't see the "gorilla" in ... err ... a "sighted seeing test".

Arguing about "huge differences" is a kind of "ad hoc" rescue of the "blind tests show every difference" myth.

As, in the normal routine, no one clears up the vocabulary/meaning or uses any positive controls (and, to make it even worse, neglects the negative controls as well, not to mention a clearly expressed hypothesis to test), we simply don't know the degree of difference needed to be detected in such not-so-well-planned-and-executed experiments.

Part of the problem seems to be that people don't want to learn about the problems of studies which they obviously like because those studies offered corroboration of their own beliefs. It's only human, of course, but as long as you cite Meyer/Moran as an example of good science or rate it as "well done" (no pun intended), it most likely will not get better.......
 
This is what I do not understand. You (and others) go to great lengths in setting up the listening environment, adjusting crossovers, turntables, DSPs and tape bias, selecting cables, replacing capacitors, upgrading op-amps and so on, and yet you can't configure one stinkin' computer that any teenager can?
A computer is a universal appliance; it was never meant to be a SOTA audio playback device out of the box.
If you can't master this part, you should refrain from making statements on sample rates, bit depths and modern digital audio playback technology, I must say.

And let him who has never been surprised by the craziness of how new Windows versions handle things cast the first stone....
 
I don't hold Meyer/Moran in any esteem, Jakob. I think it's a crap study, to be blunt. I honestly think everything I've read (that's hardly comprehensive, for sure) in terms of audible perceptibility has large caveats that need applying.

I agree that positive controls are necessary to make more definitive claims. Sadly, most studies are bereft of that wing of their experimental design, or, really, of any characterization of their subjects against a synthetic test suite.

I feel like I'm being painted into a caricature: I don't try to rescue the "blind test shows every difference" myth, either, as obviously experiments have huge grey areas of interpretation where we have to be cautious taking any sort of conclusion from them. I thought that's what the post of mine you quoted emphasized! On the other hand, I think your rebuttal about "huge differences" is also an ad hoc "devastating caveat" to the idea that "blind testing has notable merits, if difficult".

We'll have to agree to disagree in terms of where the line lies. I find the perpetual claims of "you have to do this (insert some plausible, if unlikely, thing) or you're doing it wrong" obnoxious. I also find the "tests show absolutely what's going on with no caveats" black-and-white thinking odious, which I have tried to refute as well.
 
Richard - "Not for me. Too masochistic."

Yes, it amazes me the engineering workload some of the younger generations will go through to get something realized, like the Raspberry Pi-based stuff I see available.

I'm stuck in a paradigm with a very strong valence around real-world analog: things I can see, touch, handle and manipulate with my own hands - and perhaps a few common power tools - while the rest of the world races onward with their own implementations using 0201 "ground pepper flake" parts, assembled for them by contract PCB manufacturers.

I wouldn't be one of the guys trying to 3D-print his own speaker cone or waveguide design...
 
(Every 45 or so minutes) Pull a vinyl disc out of a sleeve, clean it carefully on a regular basis, set the needle to the correct location, flip the disc halfway through, and carefully return it to its sleeve at the end...

I'll stick with my digital manipulations. 🙂
 
Using computers for audio sucks; it just happens to be what we have right now in our technological evolution.

Mac and Windows currently want to make it seamless for you to handle every format that comes your way. To that end they make a lot of compromises so the audio doesn't blast out at you at any time. For critical work it sucks. For background listening it's "glorious".

To get any control over audio in a critical setting you have to move up to professional, or at least qualified semi-professional, tools. You need well-written drivers, which tend to be ASIO these days. And you need an interface that doesn't have built-in problems; that pretty much excludes USB. There may be some USB 2 audio drivers that aren't bad - not sure, haven't kept up on that. So today it's Thunderbolt or a built-in card (Lynx, RME, etc.). Win 10, as far as I know, has screwed it up even more. And Mac wants to SRC everything you do to the first audio format you used since logging in. Oh yeah.
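For what it's worth, here's a minimal sketch of the "exclusive mode" route on Windows, assuming the python-sounddevice and soundfile packages; the file name and device choice are placeholders, and an ASIO driver would be addressed the same way via sd.AsioSettings. The point is simply to open the device at the file's own sample rate so the OS mixer and its resampler stay out of the path.

```python
# Minimal sketch: play a file at its native rate over a WASAPI exclusive-mode stream.
# "music.flac" and the device selection are placeholders, not a recommendation.
import sounddevice as sd
import soundfile as sf

data, fs = sf.read("music.flac")                  # keep the file's own sample rate
wasapi_exclusive = sd.WasapiSettings(exclusive=True)

sd.play(data, samplerate=fs,
        device=None,                              # or an explicit WASAPI device index
        extra_settings=wasapi_exclusive)
sd.wait()
```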

SRC has come a long way over the years; see SRC Comparisons for test results of a wide variety of SRCs, ranging from atrocious to technically beautiful.
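One common way such comparisons are scored: resample a known pure tone (or sweep) and measure everything in the output that isn't the tone. A rough sketch, using scipy.signal.resample_poly purely as a stand-in resampler, not any of the converters in the linked tests:

```python
# Rough sketch of quantifying SRC quality: resample a pure tone 96k -> 48k and
# measure how much energy lands anywhere other than the tone itself.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out, f0 = 96000, 48000, 1000
t = np.arange(fs_in) / fs_in                     # one second of signal
x = np.sin(2 * np.pi * f0 * t)

y = resample_poly(x, fs_out, fs_in)              # 2:1 decimation
y = y[len(y)//4 : -len(y)//4]                    # drop filter edge effects

spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))**2
tone_bin = round(f0 * len(y) / fs_out)
tone = spectrum[tone_bin-2 : tone_bin+3].sum()   # tone plus a little window leakage
residue = spectrum.sum() - tone
print("artifacts relative to tone: %.1f dB" % (10 * np.log10(residue / tone)))
```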

Deterministic conversion vs. SRC conversion is an interesting question. When Benchmark introduced SRC conversion, it was marketed as a jitter-reducing (immunity) technique, and it is quite good at that. Jitter was a big problem in the industry at the time. Of course it always will be, but it has become less of a problem as good design has become more commonplace in products. But the question remains: can SRC conversion be as good, in the nth degree, as deterministic conversion? Now Benchmark seems to be introducing other reasons to use that technique - filters - not a bad reason, as has been discussed here already.

One of the biggest advantages of digital is the uniformity of delivery: what you have and what I have is the same. It's just incumbent upon us to make the final delivery good. Just like with cooking: same set of ingredients, different results.

* deterministic conversion - the incoming set of data defines the outcome in a repeatable fashion (in a perfect world), i.e., PCM data converted at the base rate.

* not deterministic - DSD (a random modulator in the conversion) and any up- or down-sampling, integer-ratio or not.
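A toy way to see the distinction (just a sketch, not tied to any actual DAC): straight requantization of the same PCM data is repeatable bit for bit, while adding a random dither component - a crude stand-in for the randomized modulator mentioned above - makes successive runs differ.

```python
# Toy illustration of "deterministic" vs. "not deterministic" conversion.
# Plain rounding of PCM data is bit-for-bit repeatable; TPDF dither (a crude
# stand-in for a randomized modulator) makes each run slightly different.
import numpy as np

rng = np.random.default_rng()

def requantize_plain(x):
    return np.round(x * 32767).astype(np.int16)

def requantize_dithered(x):
    dither = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.round(x * 32767 + dither).astype(np.int16)

t = np.arange(44100) / 44100
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

print(np.array_equal(requantize_plain(x), requantize_plain(x)))        # True
print(np.array_equal(requantize_dithered(x), requantize_dithered(x)))  # almost surely False
```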

random thoughts

Cheers
Alan
 