RF Attenuators = Jitter Reducers

Do you have a SPDIF transformer in your digital device?

  • Yes: 40 votes (71.4%)
  • No: 16 votes (28.6%)
Total voters: 56
What we are considering here, IMHO, is whether a $12 item inserted into an audio setup can make a sonic improvement, nothing more and nothing less.

Yes, there is something more: the modified Hiface USB-to-SPDIF converter. Have we forgotten about this one? It's not a $12 "buy it and listen" item.


Hearing differences? I hear differences all the time. But one very revealing test is to put the same song on repeat. Yep, I'll hear differences each time it plays. But I don't think it's the system. :)

 
Sorry, Thorsten, you're still completely off base. You still don't understand the difference between signal to noise, resolution, and noise floor, or if you do, you're deliberately conflating them for the sake of misleading the gullible.

Having gone through the fruitless exercise of trying to teach you basic bit-counting a couple of years ago, I have no desire to go through that again. You'll have to find someone else to teach you the basics.
 
Hi 'biki,

Basically now we're down to an argument between subjectivists and engineers.

Due to your generally vitriolic manner I usually ignore you, but this requires correction.

The argument is not between engineers and subjectivists.

It is between pseudo-objectivist debunkers, who are happy to post measurements as "proof" that any first-year EE student should be able to recognise as fatally flawed, as well as their sycophants, on one side, and the subjectivists on the other.

Most real engineers had the good sense to stay out of this mudfight. The few that remain occasionally throw something into the ring, at least to point out gross issues with either side.

My point is that, from a scientific viewpoint, all the arguments on both sides are sufficiently flawed to make them best ignored. Sadly this leaves us with a huge thread that is essentially devoid of meaningful information (and to which you have contributed in a disproportionate manner, I cannot fail to observe, statistically speaking).

As for measurements, there are enough extant in the public domain and I feel no need to add to them. As to me testing anything originating from a commercial source, I think the conflict of interest alone should suffice to rule me out; but, as said, I have signed certain undertakings (which I intend to honour) that mean I cannot anyway.

__________________
Mike zerO Romeo Oscar November

Don'tchasay!? Spot on my old man.

Ciao T
 
Hello Mr. Didden

Dave, that's an easy one - just do a controlled subjective listening test.

You noticed that a single blind preference test was undertaken by two distinct individuals, who established the same preference?

This is quite an effort and can be analysed as to its statistical consequences and the likelihood of the same result being exhibited as a consequence of chance. Of course, the small N in the test limits its significance severely, but this is the same for any of these tests undertaken casually by individuals.

Despite this I think the method is a reasonable attempt to minimise bias, so stop dumping on it and give the man some credit.

Ciao T
 
Testing for difference

Demonstrating the existence of any sonic difference is a prerequisite to investigations of cause.

There is no necessity for testing with more than one person to determine whether there is a difference.

If one person can reliably detect a difference then a difference is demonstrated.

If a particular person cannot detect a difference, this does not demonstrate that there is no difference. This is why testing with more than one person is desirable.

If testing with more than one person does not indicate a difference then this still does not mean that there is no difference, there might be one person in the world who can hear the difference, but who wasn't in the tested group. As the tested group gets larger without detecting a difference the existence of the difference is increasingly in doubt.

The testing should be done, or at least recorded, on an individual basis, so that the aggregated numbers do not hide the one person who can detect the difference.

If, say, only one person in 50 can detect a difference, this tends to invalidate the usefulness of the modification, but the difference still exists.

If, say, 50 people are tested and no difference is detected, then the existence of a difference is extremely unlikely, and I for one would see little point in continuing testing beyond this point.
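As a concrete sketch of that individual-basis scoring, each listener's run can be checked with a one-sided binomial test against guessing; the listener names and scores below are hypothetical:

```python
from scipy.stats import binom

def p_value(correct, trials, chance=0.5):
    """Probability of scoring at least `correct` out of `trials` by pure guessing."""
    return binom.sf(correct - 1, trials, chance)

# Hypothetical per-listener results: (correct answers, trials)
listeners = {"A": (14, 16), "B": (9, 16), "C": (16, 16)}

for name, (k, n) in listeners.items():
    print(f"{name}: {k}/{n} correct, p = {p_value(k, n):.5f}")

# A (p ~ 0.002) and C (p ~ 0.00002) each demonstrate a difference on their own;
# B (p ~ 0.40) is consistent with guessing. Pooling all three into one aggregate
# score would dilute C's result - exactly the hiding effect described above.
```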

w

ThorstenL, I am happier to be characterised by you as vitriolic than to imagine that I would ever characterise myself as 'the lone sage in the wilderness'.
 
As for measurements, there are enough extant in the public domain and I feel no need to add to them.
That's a cheap out, T. Why all talk, no action? I'd love to see some of these - really - I'm open to learning. Come on, you can do it!

As to me testing anything originating from a commercial source, I think the conflict of interest alone should suffice to rule me out...
Sigh, yet another cop out.
 
Hello Mr. Didden

You noticed that a single blind preference test was undertaken by two distinct individuals, who established the same preference?

This is quite an effort and can be analysed as to its statistical consequences and the likelihood of the same result being exhibited as a consequence of chance. Of course, the small N in the test limits its significance severely, but this is the same for any of these tests undertaken casually by individuals.

Despite this I think the method is a reasonable attempt to minimise bias, so stop dumping on it and give the man some credit.

Ciao T

Hi T,

Yes, I noticed what *might* have been a single blind test, although from the description it isn't clear that that is indeed what it was.

And I DID explicitly give him credit - you may have missed that post. And no, I didn't dump on him; I just answered his question. Again, with this thread going so fast, you may have missed the question.

jan didden
 
Hi,

Before I embark on a controlled listening test, could we get some agreement on what qualifies, so that this is not a waste of time? What group size, how many cables, double blind, level matching, etc.?

Okay, many questions.

Fit the First - Statistics. Understand that the underlying analysis method is statistics. The issue is that we try to determine whether the test result was likely due to chance or to an actual difference being present.

Such statistical analysis is subject to two possible errors: a "Type A" (Type I) error, where we conclude in error that a difference is present when the test results were caused by chance, and a "Type B" (Type II) error, where we conclude that no difference exists when in fact one is present.

Les Leventhal presented the required math to work out the likelihood of both errors. The upshot is simple: small datasets (the N I have been referring to) combined with a high significance level (such as commonly demanded by the ABX/DBT Mafia) will result in a very large risk of Type B errors.

IN PRINCIPLE one may perform a much more advanced statistical analysis that gives more meaningful results, but the ABX/DBT Mafia, in their never-ending quest to rid audio of the Bogey Man and Voodoo, refuse to engage in such practices, as this would stop their tests, which invariably feature very small N and high significance, from reliably returning "null" results.

It means that if you want a test that will not reliably gloss over existing differences, and you want to put one over on the ABX/DBT Mafia, you require a very large N. If you do the test with very few participants and sensibly low numbers of trials you are playing "their" game and you will return "null" results.
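To put numbers on that, here is a minimal sketch assuming a 10-trial forced-choice test judged at the usual 5% significance level, and a listener who genuinely detects the difference 70% of the time (both figures are illustrative assumptions, not data from any test above):

```python
from scipy.stats import binom

n, chance, p_real = 10, 0.5, 0.7   # trials, guessing rate, assumed real detection rate

# Smallest passing score whose probability under pure guessing is at most 5%
k_crit = int(binom.ppf(0.95, n, chance)) + 1       # -> 9 out of 10

alpha = binom.sf(k_crit - 1, n, chance)  # Type A risk: a guesser passes anyway
beta = binom.cdf(k_crit - 1, n, p_real)  # Type B risk: a real detector fails to pass

print(f"pass mark: {k_crit}/{n}")
print(f"Type A risk: {alpha:.3f}")   # ~0.011
print(f"Type B risk: {beta:.3f}")    # ~0.85
```

Under these assumptions a listener who really hears the difference seven times out of ten still returns a "null" result about 85% of the time, which is the imbalance described above.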

So much on statistics.

Fit the Second - Human perception. It is fickle and deceptive.

NB1 - Test situations create stress (remember your final exams).

Any stress militates against perceptual sensitivity. As we cannot avoid the knowledge of a "test situation", we need to reduce stress.

Long test sequences show increasing stress. Usually five sequential presentations in one go tend to be the observed limit before attention falls off, stress rises, and sensitivity goes to hell.

So make sure to have multiple breaks between presentations, preferably activity-filled ones that can reset stress and attention levels. As you are going to have very few participants, the best you can do is run multiple tests with the same participants. While in my view it does not really add to confidence levels, it is acceptable to the ABX/DBT Mafia, and hence you may as well employ it to maximise your chances.

NB2 - Beware of expectation bias.

Expectation bias is the result of having expectations about the outcome while undertaking the test. I once did a blind test at an audio society meeting, comparing my own modifications of a Marantz CD-67 to a stock machine. I knew my mods were better. I knew my machine would win.

(Note: Sy's measurements would have pegged them "not different". Note 2: yes, it is the CD-67 from the TNT-Audio modification article.)

Well, once the "blinds" were down I could NOT hear any difference. I literally could not. But over 70% of those present (a group of maybe a little under 20, mostly seated very sub-ideally) could hear differences with good reliability (4/5) and expressed a preference for the modified unit.

Intrigued by my own failure, I did another test, in which I told the participants we would test for mains cable sound differences. The group was very small, four plus me, so statistical significance is low; we may treat this as "anecdotal evidence".

Most had no particular opinion on this; all cables were cheap-to-make DIY mains cables, and should they prove effective, anyone could build their own. One participant, an EE with a BBC background, was a very vocal cable sceptic. I professed openly that I had not listened either and had no idea whether there was a difference or not, but would like to find out.

What I really did was to reverse the polarity of one speaker in the stereo set. Everyone heard the differences correctly, 10/10, except our "cable sceptic", WHO SCORED RANDOM! Due to the small N there is no real significance, but one may assert that in this case expectation bias was sufficient to mask something as gross as wiring one speaker out of polarity.

Fit the Third - Calibrate your tests.

Just as I ask that any measurements be accompanied by demonstrations of the limits the equipment can resolve, as proof that the tests are competently implemented, any listening test should be calibrated with "known audibly different" stimuli, to ensure sufficient sensitivity to at least distinguish known audible phenomena.

I suggest as a minimum: polarity reversal of one channel, a 1dB level difference, and a 0.3dB level difference. I personally feel that at times adding both-channel polarity reversal helps to screen out "clothears", but as this is not generally acknowledged as "audible", consider it one to do for "extra credit" only.
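For the level-difference stimuli, the required gains are straightforward to compute and apply digitally; a minimal sketch (the sample values are stand-ins):

```python
def gain(db):
    """Linear amplitude factor for a level change given in dB."""
    return 10 ** (db / 20.0)

print(f"-1.0 dB -> x{gain(-1.0):.4f}")   # ~0.8913
print(f"-0.3 dB -> x{gain(-0.3):.4f}")   # ~0.9661

# Applying the 0.3dB attenuation to one version of the test signal:
samples = [0.5, -0.25, 0.125]            # stand-in for real audio sample data
attenuated = [s * gain(-0.3) for s in samples]
```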

In fact, the routine lack of any such calibration in the ABX/DBT Mafia's testing is one of my strongest arguments (together with "awfully bad statistics") for simply disregarding their extensive set of published "null" results as evidence.

Fit the Fourth - Let's get down to the dirty work.

I would suggest that you try for an N no lower than 100. This allows you to select a significance level that carries roughly equal risks of Type A and Type B errors. With 20 participants and 5 trials each, this requirement is satisfied.

As we would like some "calibration", and would like to remove "clothears" but keep "goldenears", you will probably need to start with a larger number of participants; and, so as not to be rude, you should not dismiss the "clothears" early.

So perhaps 50 participants, with the 20 who score highest in the preliminary calibration tests having their actual tests included in the final analysis (you should not cherry-pick people who score highly in the final real tests, but excluding participants who show a low sensitivity to known audible stimuli is acceptable).

Now, you can of course use much smaller numbers, but you will find not only that the significance (that is, the general applicability of your test results) is very low, but equally that the risk of Type B errors (erroneously returning null results) becomes unacceptably large.
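As a check on the N = 100 claim, a small sketch (assuming, purely for illustration, a listener pool that detects the real difference 60% of the time) shows where the two risks roughly balance:

```python
from scipy.stats import binom

n, chance, p_real = 100, 0.5, 0.6   # total trials, guessing rate, assumed detection rate

for k in range(53, 59):             # candidate pass marks (total correct out of 100)
    alpha = binom.sf(k - 1, n, chance)   # Type A: guessers reach k anyway
    beta = binom.cdf(k - 1, n, p_real)   # Type B: real detectors fall short of k
    print(f"pass mark {k}/100: Type A = {alpha:.3f}, Type B = {beta:.3f}")

# Around a pass mark of 55-56 both risks land near 0.13-0.18, i.e. roughly
# balanced - compare the ~0.85 Type B risk of a 10-trial test at 5% significance.
```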

If following all that you do not feel much like undertaking a "controlled listening test", I cannot blame you.

Ciao T
 
Hi,

Okay, many questions.

[...]

If following all that you do not feel much like undertaking a "controlled listening test", I cannot blame you.

Ciao T



Short AB tests don't work because people have little memory of sound perception. People trust the visual sense, with FFTs and measurements, yet there is no means to model the correlation between visual representations of sound and actual sound perception. It's beyond the scope of even the highest-ranked neuroscientists, let alone us engineers.
 
Ridiculous

This is the most ridiculous thread I have ever followed. But then I have never even bothered to click on any of the "Do cables/capacitors really make a difference?" threads. Thanks for the laughs! This isn't medicine. It's a $12 piece of audio gear. Demanding testing in a 100-participant double-blind ABX listening test at an accredited university is so absurd, it's beyond words. Hah! Give me a break. Demanding a one-person blind test is still over the top. No big audio company ever does certified blind testing on finished products, let alone on changes to each component spec. Nothing would ever get done. Measurement guys measure, listener guys do sighted listening. The reports are taken for their value on either side and the designs move forward.
 
Sy,

Sorry, Thorsten, you're still completely off base.

In this case, please:

A) Post the formula for estimating the FFT noise floor of a digital signal with a given wordlength, for a given number of FFT bins, and solve it for your measurement (the standard estimate is sketched below).

B) Explain the discrepancy of your measurement against this level as something other than poor experimental technique.

C) Estimate the confidence level of your measurements in equivalent wordlength.
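For reference, the standard textbook estimate behind A) can be sketched as follows, assuming an ideal quantizer, a full-scale sine reference, and a rectangular window (a real analysis window raises the per-bin floor by a few dB of noise bandwidth):

```python
import math

def fft_noise_floor_dbfs(bits, fft_size):
    # Quantization SNR of an ideal b-bit converter with a full-scale sine:
    # 6.02*b + 1.76 dB. An M-point FFT spreads that noise over M/2 bins, so
    # the per-bin floor sits a further 10*log10(M/2) dB down ("processing gain").
    return -(6.02 * bits + 1.76) - 10 * math.log10(fft_size / 2)

print(fft_noise_floor_dbfs(16, 65536))   # ~ -143 dBFS: 16 bits, 64k-point FFT
print(fft_noise_floor_dbfs(24, 65536))   # ~ -191 dBFS: 24 bits, same FFT
```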

Having gone through the fruitless exercise of trying to teach you basic bit-counting a couple of years ago, I have no desire to go through that again.

Sy, you disappoint me. You mean you still have not learned the difference between an FFT noisefloor and a noise measurement in classic analogue terms?

I'm tempted to add more snide comments, but they are uncalled for.

Ciao T
 
Sadly, he's flipped.

Over into verbose mode I mean.

The more they talk, however, the greater the chance that they will tie themselves in knots.

If you do the test with very few participants and sensibly low numbers of trials you are playing "their" game and you will return "null" results.

This isn't a conspiracy, ThorstenL. 'they' aren't out to get 'you'.

NB2 - Beware of expectation bias...

...I knew my mods were better. I knew my machine would win.

Jesus.

If following all that you do not feel much like undertaking a "controlled listening test", I cannot blame you.

Thanks for the encouragement. I guess that outcome would suit you. No instrumented test, no listening test. Then 'the lone sage in the wilderness' could carry on his conversation with God.

You need to seek professional advice.

w
 
Hi 'biki

I doubt that we shall be hearing a great deal more from ThorstenL, as he has descended to the level of name calling.

I am sorry to read about your health issues and hope you recover soon and as fully as can be expected. Given your ordeal, I'll chalk your style up to stress and possibly medication issues (I had friends go through similar) and I'll cut you some slack.

Hope you get better soon.

Ciao T
 
Hi Scott,

Demanding testing in a 100-participant double-blind ABX listening test at an accredited university is so absurd, it's beyond words.

I agree. My short missive merely suggested what would have to be done IF one wanted a reasonable assurance that potentially audible issues would be detected by the test, while at the same time fulfilling the requirements for "proven different" as set down by the ABX/DBT Mafia.

I also suggested that any attempt to satisfy the ABX/DBT Mafia's requirements with smaller numbers was playing their game. The "game" being, of course, the "only one listener needs to hear the difference" gig ("game" should be read here in the same way as when applied to confidence tricksters): whenever "DBT Test Ace" Tony Faulkner walks into the room (I have seen him get 10/10 in tests where I could hear nothing, simply because the test setup was so bad), they cry foul and pack up.

Ciao T
 
Unreasonable

Hi Scott,
I agree. My short missive merely suggested what would have to be done IF one wanted a reasonable assurance that potentially audible issues would be detected by the test, while at the same time fulfilling the requirements for "proven different" as set down by the ABX/DBT Mafia.

I also suggested that any attempt to satisfy the ABX/DBT Mafia's requirements with smaller numbers was playing their game.
Thanks. I did understand where you were coming from, but many others have been swept up in actually believing a controlled blind test is a reasonable demand in these discussions.
 
This is the most ridiculous thread I have ever followed. [...] Measurement guys measure, listener guys do sighted listening. The reports are taken for their value on either side and the designs move forward.


Well said.



The attenuators do improve the SQ in the systems I have tried them in. Also, jkeny's modded Hiface sounds better than the original one.


After that, I'm not interested in debating with people who are here just for an argument, who have pretty crappy-sounding systems themselves anyway (which throws their viewpoints and opinions into question), and who feel they have an absolute right to comment endlessly that something cannot work while refusing to try it.

Sorry if the truth hurts, but there it is.

I'm fully expecting some kind of censure after those comments, but feel it's worth it.


Fran
 