The threshold point for that test was between 64 kbps and 128 kbps MP3, depending on the music excerpts and the participant.
McGill University ran a similar test, and their threshold was found to be 192 kbps.
Anyway, that's below today's standard of 320 kbps.
Old news. At least for me.
Regarding inattentional deafness, that may account for some of the observed limits of ABX testing for small differences. However, I suspect there is more to it than only that. For example, echoic memory appears to be shorter for some types of very small distortion, although I don't know why; it would need some study to figure out.
Unfortunately, for the purpose of changing minds of people who have already decided they know what people can and can't hear based on preexisting research, pointing to research not directly aimed at understanding limitations of ABX, etc., is likely to be discounted. It would take new, overwhelming, direct research to get some people to reconsider. And that's probably not going to happen anytime soon, maybe not ever, as nobody cares to pay for it. It would be expensive, some tens of thousands of dollars and up just to get started for an initial study - so says Earl Geddes who has done related research before.
And just a little peed off, I'd warrant. It's possible that Fiio made a very good DAC by happy accident. It's nice to know that you don't have to spend thousands of beans to get high-end results; I'd always suspected as much, even though a whole industry is trying to convince me otherwise.
I'm not surprised, I'm downright stunned! I've got tens of thousands in gear here that is now iron-marked "illusion" all over...
Yes, they do.
Over a hundred participants and thousands of rounds.
Original files were genuine uncompressed 24/96.
In fact, the uncompressed 24/96 was indistinguishable from the lossy AAC 256 version.
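Purely to illustrate the statistics behind a claim like that - the counts below are made up, not the actual numbers from that test - here is a minimal sketch of how "no audible difference" is usually judged: check whether the correct answers beat coin-flipping.

```python
# Minimal sketch: exact one-sided binomial test against chance.
# Trial counts are hypothetical, NOT the figures from the test discussed above.
from math import comb

def p_value_at_least(k, n, p=0.5):
    """Chance of getting k or more correct answers out of n trials
    if the listener is purely guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 1000      # hypothetical total rounds across all participants
n_correct = 520      # hypothetical number of correct identifications

p = p_value_at_least(n_correct, n_trials)
print(f"{n_correct}/{n_trials} correct -> p = {p:.3f}")
# A p-value well above 0.05 is what "indistinguishable" means in practice:
# the results are consistent with guessing.
```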
Foo. I've recorded, mixed, and mastered music. Most people in my experience can hear a difference on a professional reproduction system using near-field monitors. For inexperienced listeners, it may be necessary to ask them to compare cymbals to start with. It's unusual for people not to hear a difference there if asked to listen carefully. From there you can go on pointing out all the differences, and they will say, "wow, I never listened to it that way before."
But I would agree that inexperienced listeners, including very experienced musicians, may not notice unless asked to compare specific things.
To me, there is a difference between something being inaudible, and it simply not being noticed by people who aren't listening for it.
I think the bottom line of all that is: we are (or were) overestimating human capacities, and therefore making audio reproduction gear that is far beyond our needs.
Hearing a difference (in a perceptive, subjective way) is completely different from proving it (with an ABX test).
Do we REALLY have to point that out, again?
I'm perceiving differences all the time. Still, I'm unable to prove my capacities through a blind test.
It's all ILLUSIONS. Our brains are playing tricks on us. Is that so hard to understand, or to swallow? Me, I do understand perfectly, but it's still sideways in my throat... 😉
That's a reasonable point, but the question I have is this: if we agree there is an absence of all but the most tangential, speculative evidence, and given all we do know about how easily perceptual biases creep in, why such vehement argument against carefully run ABX with experienced listeners?
It's very difficult not to conclude that it's because the outcomes are disadvantageous in some way.
I am not opposed to blind testing at all; in fact, it is necessary. However, present implementations of ABX have some problems IMHO. Like THD years ago when it was all we could measure - people thought it correlated perfectly with how people hear, but now we know it does so only poorly. I'm sure it had its defenders until the bitter end.
So it will probably go with ABX as we have it now.
Everything points toward a "We can actually hear much less than we thought" kind of thing.
Everything.
The futility of a demonstration with video sources, HD vs. 4K on a 50'' screen at a 30 ft viewing distance, is easy to understand: it's visual and can be easily compared.
The only thing that let the illusions breathe for so long on planet Audiophilia is the fact that audio is not as easy to compare. That's it. Simple as that. There is no mumbo-jumbo in the video domain because the illusion wouldn't breathe for a minute.
Now, if your last hope of believing the audio illusions are not illusions... is to diminish the very concept of the ABX test, well my dear friend: you're fcuked. The illusions will prevail in your mind, you'll chew on that blue pill and enjoy your mind-created steak for the rest of your life, while others are going to Truthville.
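As an aside, the video half of that claim can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a standard 16:9 panel and the usual ~1 arcminute figure quoted for 20/20 acuity; it only illustrates why the 30 ft / 50'' demonstration is futile.

```python
# Rough check: at 30 ft, even a single 1080p pixel on a 50" 16:9 screen
# subtends far less than the ~1 arcminute typically quoted for 20/20 vision,
# so HD vs. 4K cannot be told apart at that distance.
import math

DIAGONAL_IN = 50.0
ASPECT = 16 / 9
DISTANCE_IN = 30 * 12           # 30 ft viewing distance, in inches

width_in = DIAGONAL_IN * ASPECT / math.hypot(ASPECT, 1)

for name, h_pixels in (("1080p", 1920), ("4K", 3840)):
    pixel_in = width_in / h_pixels
    arcmin = math.degrees(math.atan2(pixel_in, DISTANCE_IN)) * 60
    print(f"{name}: one pixel subtends about {arcmin:.2f} arcmin")

print("Typical 20/20 acuity is roughly 1 arcmin, so neither pixel grid is resolvable.")
```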
I admit I wasn't listening to music when the 1970s TIM/IMD/crossover distortion debates happened, but I think we've had research showing differing sensitivity to differing harmonics since before the transistor. Manufacturers might have preferred to give a single THD value and claim it was sufficient to characterize a piece of equipment, but did science really hold that position?
If there is a "longer term" audibility effect that short-term ABX is missing, one so patently obvious that people are willing to argue the case for it over 50+ pages, it should be relatively easy to produce some direct prima facie evidence, surely? Not of the mechanism, but of its existence.
I test myself blind. I don't claim to hear any difference unless I can repeatedly hear it blind, and do so across multiple sessions. I don't use any of the currently available ABX programs to do it. It's not the only way to test blind. My personal conclusion is that results obtained with a particular method or particular software are valid only for that method or software. It's possible to get better results with foobar ABX than with some of the alternatives. It's also quite possible to cheat with foobar ABX or other ABX software, if one wished to do so. My further conclusion is that it is possible to hear much smaller distortion than is generally recognized around this forum. But I will heartily agree with anyone that one can be easily fooled when trying to detect small distortion, and that blind testing is needed to separate real detections from mistaken ones. It's a problem that only blind testing can address - but not ABX as we have it today. With that, I will bow out of this conversation because it is a never ending argument that won't die.
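The post doesn't spell out the author's actual procedure, so purely as a hypothetical illustration of one way to self-administer blind trials without dedicated ABX software - file names and the trial count are invented for the example - something like this would do:

```python
# Hypothetical sketch only (not the poster's method): copy the two stimuli to
# anonymized trial files in a random order, keep the key hidden until scoring,
# then tally guesses after one or more listening sessions.
import json
import random
import shutil

FILES = {"A": "original.wav", "B": "processed.wav"}   # placeholder stimuli
N_TRIALS = 16

def prepare_session(seed=None):
    rng = random.Random(seed)
    key = [rng.choice("AB") for _ in range(N_TRIALS)]
    for i, label in enumerate(key, start=1):
        shutil.copyfile(FILES[label], f"trial_{i:02d}.wav")
    with open("session_key.json", "w") as f:          # don't look until done
        json.dump(key, f)

def score(answers):
    """answers: list like ["A", "B", ...] written down while listening blind."""
    with open("session_key.json") as f:
        key = json.load(f)
    return sum(a == k for a, k in zip(answers, key))

# Usage: prepare_session(), listen to trial_01.wav ... trial_16.wav across as
# many sessions as you like, note your guesses, then call score(answers).
```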
Well, in my statement that I quoted at the top of the post I stated "forced choice blind testing", by which I meant ABX testing, as I had done every other time I used the phrase.
The paper above describes a very interesting result, thanks for that - frequencies above the accepted limit of human hearing affecting perceived sound quality and (not surprisingly, if you accept that mental properties supervene on brain states) brain activity. The subjects were (again, not surprisingly) not able to consciously identify the high-frequency content, but they were - explicitly - able to state that preference in the context of a blind AB test of short (three-minute, separated by ten seconds) snippets. The effect absolutely did not require the "longer term" to become apparent, unless by "longer term" you mean six minutes. Nothing in there discredits blind, short-term AB testing; rather, it depended upon it.
This is why I suggested that jonbon use preference testing rather than ABX. ABX testing is very specific in its requirements & mostly not suitable for what jonbon is trying to do - it is a spot-the-difference style of test which, as I said, is skewed to false negatives for all sorts of reasons.
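To put the false-negative concern in concrete terms - with purely illustrative numbers (a 65% true hit rate and a few short trial counts, not data from any test discussed here) - a quick sketch of how often a genuine but small audible difference would still fail the usual p < 0.05 hurdle:

```python
# Illustrative sketch of statistical power in short spot-the-difference runs.
# Even a listener who really hears the difference on 65% of trials will often
# fail to reach significance in a short run. Numbers are made up for illustration.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def threshold_for_significance(n, alpha=0.05):
    """Smallest number of correct answers out of n that beats guessing at level alpha."""
    for k in range(n + 1):
        if sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1)) <= alpha:
            return k
    return n + 1

def miss_rate(n, true_p, alpha=0.05):
    """Probability the run stays below the significance threshold (a false negative)."""
    k_star = threshold_for_significance(n, alpha)
    return sum(binom_pmf(i, n, true_p) for i in range(0, k_star))

for n in (10, 16, 30):
    print(f"{n} trials, true hit rate 0.65: miss rate = {miss_rate(n, 0.65):.2f}")
```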
I thought this might be the result of posting such papers - nothing will satisfy unless it directly states in its conclusions exactly the words I used.
Inattentional deafness is not a new concept, but again, how does that support the existence of the "longer term" audibility effects you posit? It simply means that if you concentrate on aspect "X" of a recording, you become less sensitive to aspect "Y". The example given does not relate to sound quality (the subjects were asked to count timpani beats); do you have any examples that relate to evaluating audio quality?
I have shown in many links that ABX testing is prone to false negatives - I have shown research which details how inaudible sounds affect us physiologically.
There is no funding or interest in the research community for testing audio quality, as others have stated.
Yes, but he shows how precarious such AB testing is when differences of 3 dB can't be sensed.
Dave Moulton's blog post isn't peer-reviewed research, but it's notable that in that piece he too concludes that blind AB testing is the best available option for subtle audibility effects, quote "So, I recommend that you depend on blind (or better, double blind) testing to find out answers to questions about the audibility of effects like 96 kHz. sampling rates or 24-bit words.".
You will have to draw your own conclusions from your own experience & the research that shows that sound of which we are not conscious can affect our emotional center.
In general, these links are either highly tangential to the hypothesis that there exist "longer term" audibility effects in the evaluation of sound quality, or fail to support it at all. I really hoped you had something directly pertinent / less speculative when you said your claims were "verifiable" by "perceptual research". Since at least one of your "references" is linked direct from a google.ie search, one wonders if this is serious presentation of your evidence or just a 'Gish Gallop'.
I didn't bring up bias & agenda - you just did.
To return to my earlier question, since you have again brought up "bias and agenda" - which DACs have you designed and / or sold?
Thank you, that's better than before. Your best points in one place. If we could reset and take up any questions from here forward, that might be good for the discussion.
I believe in only expending energy in providing information when I believe the recipient is actually interested in the information & not, as was the case here, just trying to win a debating point. It's all about motivation - why should I expend the energy when there is no motivation to learn on the part of the questioner?
Your post was the decider in my putting in the effort, but as you can see the questioner is not interested in my reply, wanting something else instead - which is exactly what my original test question teased out of him. As you have stated many times, there is no money or motivation in the research community to research sound quality as he wants, so his request for such research is not possible to satisfy.
No doubt he will point to this as some sort of failing on my part?
That's a reasonable point, but the question I have is this: if we agree there is an absence of all but the most tangential, speculative evidence, and given all we do know about how easily perceptual biases creep in, why such vehement argument against carefully run ABX with experienced listeners?
It's very difficult not to conclude that it's because the outcomes are disadvantageous in some way.
If there was a vehement argument against "carefully run ABX with experienced listeners", perhaps you can point it out. I have pointed out the weaknesses in ABX testing & suggested preference testing instead. Anyone who doesn't want to evaluate the strengths/weaknesses of a test is simply not being objective, despite what they claim.
Hi Jon, no offense, but I remember a couple of years ago you had another post that tested several DACs and gave marks in individual categories - how did you test that time?
I'm sorry, but this is flatly untrue (my bold).
mmerrill99 said:
I didn't bring up bias & agenda - you just did.
My reluctance to spoon feed any papers here was based on the lack of demonstrated motivation i.e. the unwillingness to learn or consider any viewpoints/research which didn't match their own bias/agenda
I don't require evidence that matches the exact words you said, no. Those papers aren't even close to supporting your hypothesis of "longer term" audibility, which is what I've been questioning throughout. They are tangential at best. If that's what you've got then fair enough, but in that case you set yourself up making a categorical statement that "All of what I say is verifiable in the perceptual testing research".
So which DACs do you design and / or sell? How much do they cost?
I admit I wasn't listening to music when the 1970s TIM/IMD/crossover distortion debates happened, but I think we've had research showing differing sensitivity to differing harmonics since before the transistor. Manufacturers might have preferred to give a single THD value and claim it was sufficient to characterize a piece of equipment, but did science really hold that position?
If there is a "longer term" audibility effect that short-term ABX is missing, one so patently obvious that people are willing to argue the case for it over 50+ pages, it should be relatively easy to produce some direct prima facie evidence, surely? Not of the mechanism, but of its existence.
You now exaggerate, as is typical - no such debate has happened over 50 pages; discussion of ABX testing certainly has. The rest of what you say is in your head & you pick on one statement & act like a dog with a bone.
I'm sure in your head you have won that debate hands down - I can hear your mental fanfare from here.