Test your ears in my new ABX test

Have you been able to discern the files in an ABX test?

  • Yes, I was able to discern the files and had a positive result
    Votes: 3 (20.0%)
  • No, I was not able to discern the files in an ABX test
    Votes: 12 (80.0%)

Total voters: 15
Poll closed.
Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
If there is general agreement on a test file and the file is available, I will repeat the same test with the new file.

However, based on my extended experience with A/B testing, a different music sample will not change the result, unless you suggest something with musical content below -60 dBFS, where people could use excess gain to discern the higher noise. Such noise evaluation, however, completely defeats the original goal: to discern crossover distortion.
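The kind of low-level nonlinearity under test can be illustrated with a toy model. The sketch below is my own illustration, not Pavel's actual processing, and the dead-band width is an arbitrary assumption: it applies a symmetric dead band around zero to a 1 kHz sine, a crude stand-in for class-B crossover distortion, and shows that the resulting error stays far below the fundamental.

```python
import math

def crossover_distort(x, deadband=0.01):
    """A crude model of class-B crossover distortion: a symmetric dead band
    around zero. Samples inside +/-deadband are zeroed; the rest are shifted
    toward zero by the dead-band width."""
    if abs(x) <= deadband:
        return 0.0
    return math.copysign(abs(x) - deadband, x)

# Distort a 1 kHz sine at 48 kHz; the per-sample error never exceeds the
# dead-band width, i.e. it sits about -34 dB below the 0.5 peak here.
fs = 48000
clean = [0.5 * math.sin(2 * math.pi * 1000 * i / fs) for i in range(fs)]
dirty = [crossover_distort(s) for s in clean]
err = max(abs(c - d) for c, d in zip(clean, dirty))
print(err)  # approximately 0.01, the dead-band width
```

The point of the toy model is the one made in the post: the distortion products are bounded by the dead-band width, so with real music their energy is easily masked by the programme's own harmonics.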
 
Member
Joined 2014
Paid Member
The problem with this test sample file is that it has a lot of harmonics.
The distortion is lower in level than the harmonics and is therefore buried in them.
It is very difficult to hear any difference with this example.

You mean music has harmonics? No sh*t! Since for many of us the point of a hi-fi is to listen to music, not test tones, I think this makes a very important point. I will try to analyse some Arvo Pärt minimalist piano to see if there is anything there that might be more revealing, but I doubt it, and his music is not something everyone likes to listen to :).
 
Administrator
Joined 2004
Paid Member
I have some anechoic recordings that might be simple enough for a clean comparison. It would be interesting to know whether Pavel's crossover distortion can be heard on some recordings but not on others. And there are some recent recordings, very dry and clean, that might be worth a try.
 
The guys had no interest in trying to cheat.
Could not be bothered is more like it.
All that said, I understand that the recorded file may sound more pleasing to someone. All I wanted was proof that the files were really discerned. I am not trying to offend anyone; I just want people to understand that the brain's evaluation of sound is a complex process, and that we may be unintentionally biased and fooled by our own brains.
I think you have enough info from two of us to say there are discernible differences.
I also reckon the Steely Dan track is a poor choice....it's full of 'studio' distortions and is overly 'produced'.....feel free to discuss.....

Dan.
 
Member
Joined 2014
Paid Member
Enough info? Erm, no. No evidence at all, just a load of words, and now a load of excuses. Zero credibility to your 'discernible differences', I'm afraid. You can't claim 'I was able to pretty reliably pick the differences' and then diss the music choice after the event.

Of course this was exactly what some of us expected you to do as you do it every time. What are you so scared of?
 
Re new file test -

first of all, before you guys start with recommendations, try the 1 kHz pure tone and the 1 kHz distorted tone through my amp. Please forget Mooly's Texan, which is close to a defective device.

The files are at the link below, and the names are self-explanatory:


http://pmacura.cz/1khz_549.zip

Post a positive ABX protocol; it must be based on a part of the signal with steady-state amplitude.

If there is no result with the pure sine, it is useless to try any further music files. You must not change the volume when switching between the files.
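For anyone who wants to prepare a comparable reference signal rather than download the zip, a steady-state pure tone can be written with nothing but the Python standard library. This is a sketch under my own assumptions (the output file name, 2-second length, and -6 dB amplitude are arbitrary choices); it does not reproduce Pavel's actual 1khz_549.wav:

```python
import math
import struct
import wave

def write_tone(path, freq=1000.0, seconds=2.0, fs=44100, amp=0.5):
    """Write a mono 16-bit PCM WAV containing a steady-state sine tone,
    suitable as the 'pure' reference in an ABX comparison."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(fs)
        frames = bytearray()
        for i in range(int(seconds * fs)):
            s = amp * math.sin(2 * math.pi * freq * i / fs)
            frames += struct.pack("<h", int(s * 32767))
        w.writeframes(bytes(frames))

write_tone("1khz_pure.wav")
```

The distorted counterpart in the thread's test was, per Pavel's description, the same tone played through the amplifier under test and re-recorded.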
 
When I decided to get back into music recording, roughly 15 years ago, I bought a 24/96 A/D/A recording interface with built-in mic preamps and balanced I/O, and I made the purchase based on specifications that should have been fine.

When I tried using it, recorded vocals sounded muffled, and the only way to make them sound approximately right to me was to use DSP to apply an EQ that ramped up HF at about 0.5 dB/octave (a straight-line ramp on a log/log scale). I don't know why that particular EQ, but it sounded better and closer to right.
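For reference, a ramp that is a straight line on a log/log plot at 0.5 dB/octave corresponds to a gain proportional to log2 of frequency. A minimal sketch (the 1 kHz reference frequency is my assumption, not something the poster specified):

```python
import math

def ramp_gain_db(freq_hz, ref_hz=1000.0, slope_db_per_octave=0.5):
    """Gain of a straight-line (log/log) EQ ramp: 0 dB at the reference
    frequency, rising by slope_db_per_octave for each doubling of frequency."""
    return slope_db_per_octave * math.log2(freq_hz / ref_hz)

for f in (1000, 2000, 4000, 8000, 16000):
    print(f, round(ramp_gain_db(f), 3))
# 1 kHz -> 0 dB, 2 kHz -> +0.5 dB, ... 16 kHz -> +2.0 dB
```

So over the four octaves from 1 kHz to 16 kHz the total lift is only about 2 dB, which is consistent with the poster's point that the interface measured flat yet still sounded dull.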

Got rid of that interface soon as I could and replaced it with an Apogee Rosetta A/D which was supposed to be pretty good, back at the time anyway. It didn't sound much better to me, so I sold it about a week later, and got a Lynx2 instead. Much better, but not perfect.

Anyway, why recount old history? I'll tell you. The sound difference between the last two test files here, now that we know oo was the processed file, reminds me of that first A/D I had that lost HF detail in some way despite having flat frequency response. Maybe it was like a perceptual encoder with flat FR that just threw away some HF detail somehow, I don't know.

The recent test file cc did sound to me like it had a little more HF distortion, though maybe not crossover distortion; I would guess it was some other harmonic distortion from however the recording was made. The other file, oo, sounded like it lacked some of that distortion, and the difference between the files was slight in terms of how they sounded to me.

So, trying to keep moving along towards the point: if we keep doing these tests with the same A/D/A hardware Pavel has been using so far, we should expect to keep getting results that are obscured by the data-conversion chain in ways that are not fully characterized. It's a joke to show graphs of amplifier distortion while the data-converter chain's contribution to the final file's sound remains unknown, as though it were negligible. It's not negligible, IMHO.

All this talk about finding other music or amplifiers to run tests on misses the point, to me. How about swapping the data converters for something better and running the test(s) again that way? Probably more revealing than trying different music.

If that isn't feasible, could we at least find a very clean hi-res source file and compare that source to itself after only passing through the data-conversion chain? We could at least start to get a better idea of how negligible or non-negligible it is. Otherwise, to me anyway, future tests of this nature probably aren't worth listening to, because it's a game of obscuring the amplifier distortion with the data-converter distortion and pretending we're not.

And of course, the use of 16/44 isn't helping to provide clarity either. Even though good 2nd-order dither statistics are adequate for most humans most of the time, it still sounds grainy to some people, and not the same as real analog noise. Given a choice, I would probably prefer 24/44 or better, with some tape hiss blended in if we need to raise the noise floor for some reason.
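Reading "2nd-order dither" as TPDF (triangular) dither, i.e. the sum of two independent uniform values — my interpretation, not necessarily the poster's — a dithered 16-bit quantizer can be sketched like this:

```python
import random

def quantize_16bit_tpdf(x, rng):
    """Quantize one float sample in [-1.0, 1.0) to a 16-bit integer using
    TPDF dither: the sum of two independent uniform values of +/-0.5 LSB
    each, giving a triangular distribution 2 LSB wide peak-to-peak."""
    lsb = 1.0 / 32768.0
    dither = (rng.random() - 0.5) * lsb + (rng.random() - 0.5) * lsb
    q = round((x + dither) * 32768.0)
    return max(-32768, min(32767, q))   # clamp to the 16-bit range

rng = random.Random(1)
samples = [quantize_16bit_tpdf(0.25, rng) for _ in range(1000)]
# With dither the quantization error is decorrelated from the signal:
# the average lands on the true value (0.25 * 32768 = 8192) rather than
# on a fixed rounding offset.
print(sum(samples) / len(samples))
```

This is why properly dithered 16-bit behaves like an analog channel with a noise floor; the "graininess" complaint in the post is about how that noise floor sounds, not about correlated truncation error.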
 
The first file test, with the Lacinato ABX audio testing tool:
Completed trials: 8
Number correct: 5 (62.5%)
Confidence that your results are better than chance: 0.63671875

Last played: B

Individual test results [choice/actual] from among 2 possible files:
Test #1 [ A / A ]
Test #2 [ B / B ]
Test #3 [ B / B ]
Test #4 [ A / A ]
Test #5 [ A / B ]
Test #6 [ B / B ]
Test #7 [ B / A ]
Test #8 [ A / B ]
With foobar2000 I was unable to discern them.

I tried again with JRiver and it was easy for me.
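For anyone checking the arithmetic: the "confidence" figure Lacinato reports is one minus the one-sided binomial tail probability of scoring at least that many hits by coin-flipping. A small sketch (my own reconstruction of the statistic, which reproduces the 5-of-8 figure in the log above):

```python
from math import comb

def abx_confidence(correct, trials):
    """One-sided binomial test against p = 0.5: one minus the probability
    of getting `correct` or more hits out of `trials` by pure guessing."""
    p_at_least = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
    return 1.0 - p_at_least

print(abx_confidence(5, 8))  # 0.63671875, matching the Lacinato log above
```

Note that 5 of 8 leaves roughly a 36% chance of the score arising by guessing alone, which is why it is not taken as a positive result.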
 
And of course, the use of 16/44 isn't helping to provide clarity either. Even though good 2nd-order dither statistics are adequate for most humans most of the time, it still sounds grainy to some people, and not the same as real analog noise. Given a choice, I would probably prefer 24/44 or better, with some tape hiss blended in if we need to raise the noise floor for some reason.

Tape hiss is probably highly colored, and it is what mixing engineers expect to hear. There are also high-quality Gaussian random number generators.
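As a sketch of that alternative, the Box-Muller transform turns pairs of uniform variates into exactly Gaussian samples using only the standard library (the sample count and unit RMS target below are arbitrary choices of mine):

```python
import math
import random

def gaussian_noise(n, rms=1.0, seed=0):
    """White Gaussian noise via the Box-Muller transform. Unlike tape hiss,
    its spectrum is flat and its amplitude statistics are exactly Gaussian."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        u1 = max(rng.random(), 1e-300)   # avoid log(0)
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(rms * r * math.cos(2.0 * math.pi * u2))
        out.append(rms * r * math.sin(2.0 * math.pi * u2))
    return out[:n]

noise = gaussian_noise(10000)
mean = sum(noise) / len(noise)
measured_rms = math.sqrt(sum(s * s for s in noise) / len(noise))
print(round(mean, 2), round(measured_rms, 2))
```

To imitate tape hiss rather than white noise, the output would additionally need shaping toward the hiss's colored spectrum, which is the point being made above.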
 
Administrator
Joined 2007
Paid Member
Mooly likes this one, but I have a problem: I'm pasting the verification code into the appropriate box and it says it's not valid.

No apparent signature found. Please make sure you've pasted the log correctly, without any additional text, with a '-- signature --' footer.

Code:
foo_abx 2.0.4 report
foobar2000 v1.3.16
2017-11-05 14:27:28

File A: 1khz_549.wav
SHA1: fb97a16900ced6548fa0773a310e70bf056f9d87
File B: 1khz_gen.wav
SHA1: ae1954bae888e3af13dce14654cd318bef230f30

Output:
DS : Primary Sound Driver
Crossfading: NO

14:27:28 : Test started.
14:27:40 : 01/01
14:27:48 : 01/02
14:27:58 : 02/03
14:28:07 : 03/04
14:28:16 : 04/05
14:28:24 : 05/06
14:28:35 : 06/07
14:28:48 : 07/08
14:28:48 : Test finished.

 ---------- 
Total: 7/8
Probability that you were guessing: 3.5%

 -- signature -- 
257bb569582a2c331064aadcbfc9703e51d5b51b
 
The second test (1 kHz) with Lacinato ABX

Completed trials: 9
Number correct: 7 (77.77778%)
Confidence that your results are better than chance: 0.91015625

Last played: B

Individual test results [choice/actual] from among 2 possible files:
Test #1 [ A / A ]
Test #2 [ A / A ]
Test #3 [ B / B ]
Test #4 [ A / A ]
Test #5 [ B / A ]
Test #6 [ B / B ]
Test #7 [ A / B ]
Test #8 [ A / A ]
Test #9 [ B / B ]

Now, with JRiver, it was easy again.
 
Administrator
Joined 2007
Paid Member
A good result this time but again the log will not verify.

Code:
foo_abx 2.0.4 report
foobar2000 v1.3.16
2017-11-05 14:36:13

File A: 1khz_549.wav
SHA1: fb97a16900ced6548fa0773a310e70bf056f9d87
File B: 1khz_gen.wav
SHA1: ae1954bae888e3af13dce14654cd318bef230f30

Output:
DS : Primary Sound Driver
Crossfading: NO

14:36:13 : Test started.
14:36:26 : 01/01
14:36:36 : 02/02
14:36:59 : 03/03
14:37:10 : 04/04
14:37:20 : 05/05
14:37:30 : 06/06
14:37:39 : 07/07
14:37:48 : 08/08
14:37:48 : Test finished.

 ---------- 
Total: 8/8
Probability that you were guessing: 0.4%

 -- signature -- 
40fcc6fdc360f7138084065d7c9147d635e4f274
 