DAC blind test: NO audible difference whatsoever

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Steve: The larger question is whether you could identify your Empirical Audio DAC compared to something like Benchmark’s DAC3 HGC in a blind test of your choosing? (Usually, manufacturers don’t overtly weigh in on discussions like this.)

There is another designer/manufacturer here who could also make history. I’ve mentioned already that Alan Shaw of Harbeth Speakers has a challenge: if you can identify your well-designed commercial amp compared to the one he uses in a blind test of his speakers, you can have his speakers ($15K) for free. I’m sure he would do a DAC comparison as well. A guy could take a short hop across the Irish Channel over to Sussex, England for an afternoon of listening and be home by evening. The victory would be such an awesome marketing coup. I’m sure JA would put the person who could do this on the cover of Stereophile.

I’m willing to throw in $50 for a crowdfunded test to see a result.

The reason that legitimate designers should support testing is that it sanitizes the high-end audio world, which is currently infected with dubious products. It builds confidence, especially for new customers, that most products provide value at some level. Bringing new people into the hobby and growing the total market would be a benefit for all.

Mike
 
Steve: The larger question is whether you could identify your Empirical Audio DAC compared to something like Benchmark’s DAC3 HGC in a blind test of your choosing? (Usually, manufacturers don’t overtly weigh in on discussions like this.)

I have heard so many DACs at shows that, even though I have not done extensive listening tests with the DAC3 HGC, I think I could pick it out of the lineup in an ABX test. My DAC sounds so different from the typical DAC that I think it would be easy. It would have to be in my own system, however; I don't trust other amps and cables to be transparent enough. If someone wanted to bring one out, say from Portland, we could do the test. I've done this before with customers. They send me cables and other devices to test. I have some here right now, as a matter of fact, but I will not reveal them.

There is another designer/manufacturer here who could also make history. I’ve mentioned already that Alan Shaw of Harbeth Speakers has a challenge: if you can identify your well-designed commercial amp compared to the one he uses in a blind test of his speakers, you can have his speakers ($15K) for free. I’m sure he would do a DAC comparison as well. A guy could take a short hop across the Irish Channel over to Sussex, England for an afternoon of listening and be home by evening. The victory would be such an awesome marketing coup. I’m sure JA would put the person who could do this on the cover of Stereophile.

Maybe. They get a lot of money for those front pages. Not cheap to fly to Britain either.

The reason that legitimate designers should support testing is that it sanitizes the high-end audio world, which is currently infected with dubious products. It builds confidence, especially for new customers, that most products provide value at some level. Bringing new people into the hobby and growing the total market would be a benefit for all.

Mike

I agree. I'm all for this, as well as for developing a better set of measurements that actually correlate with sound quality. I wish JA would spearhead this.

The problem is, I believe, that some companies are afraid the emperor's clothes might be revealed. What happens when the $50K DAC is beaten by the $13K DAC? Not a pretty sight, and they would have a cow and claim some kind of fixed test, fake news, etc. Their lawyers would probably be phoning you....

Steve N.
Empirical Audio
 
...a blind test of your choosing...

Blind testing of some type is necessary to be honest with oneself. However, for detecting very small differences, ABX as it exists now is probably not the most sensitive blind protocol. Unfortunately, the readily available blind-testing systems, both software and hardware, appear to be exclusively ABX.

Some specific features I would recommend to improve existing ABX would be automated looping of a selected time region of an audio track, and one-button switching between A and B: just a source toggle that can be operated by touch, with no looking at a box or computer screen required. It should, as much as possible, be easily operable by someone with their eyes closed and their attention focused elsewhere. That's how I do it.

Also, there needs to be some way to find which section of sound to loop. That could be done in another program, with the start and stop times typed into the ABX system, or the ABX system's functionality could be expanded to make it easy to browse through a track and zero in on an exact area to loop. Currently, such functionality exists in audio editor programs, and one of those could be used to find good start and stop times for the selection to be used for testing.

Finally, the whole system needs to be clean enough not to create masking distortion that could obscure whatever DUT differences there may be to listen for, unless perhaps the purpose of the test is to see whether any difference is detectable under some particular, perhaps compromised, test conditions.

The only other way I am aware of to deal with the problem of distraction by the ABX system is to spend a lot of time practicing with it. Although it is distracting to operate when trying to detect very small differences, there is some evidence that the distraction factor can be reduced with a lot of practice.

Personally, the latter option of practicing with ABX is not of interest to me. I have other things I want to do with my time.
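
For anyone who wants to tinker with the looping/toggle idea above, here is a rough sketch of just the switching part in Python. It is not a full ABX harness (no hidden X, no randomization, no trial logging), and the file names, loop times and libraries (sounddevice and soundfile) are only placeholders for whatever your own setup uses.

Code:
# Minimal loop-and-toggle A/B player - a sketch of the idea only, not a test harness.
# Assumes two time-aligned, level-matched captures of the same selection;
# "capture_a.wav" and "capture_b.wav" are placeholder file names.
# Press Enter to silently toggle the source, type q + Enter to quit.
import numpy as np
import sounddevice as sd
import soundfile as sf

LOOP_START, LOOP_END = 12.0, 19.5        # seconds; picked beforehand in an audio editor

a, fs = sf.read("capture_a.wav", dtype="float32", always_2d=True)
b, fs_b = sf.read("capture_b.wav", dtype="float32", always_2d=True)
assert fs == fs_b and a.shape[1] == b.shape[1], "captures must match in rate and channels"

s, e = int(LOOP_START * fs), int(LOOP_END * fs)
clips = [a[s:e], b[s:e]]                 # the looped selection from each source
current = 0                              # which clip is playing (never displayed)
pos = 0                                  # playback position within the loop

def callback(outdata, frames, time, status):
    """Feed the output stream from the selected clip, wrapping at the loop end."""
    global pos
    clip = clips[current]
    filled = 0
    while filled < frames:
        chunk = min(frames - filled, len(clip) - pos)
        outdata[filled:filled + chunk] = clip[pos:pos + chunk]
        filled += chunk
        pos = (pos + chunk) % len(clip)

with sd.OutputStream(samplerate=fs, channels=clips[0].shape[1], callback=callback):
    while True:                          # one key, eyes closed, nothing to look at
        key = input()                    # a blank line (Enter) toggles A/B
        if key.strip().lower() == "q":
            break
        current ^= 1                     # silent toggle, no printout

A real blind setup would add a hidden, randomized X, keep all feedback off the screen until the run is finished, and log every trial.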
 
Or colored enough - how do you decide the difference? This is a serious question, since I've met more than one very serious audiophile who won't compare on anything but low-power SETs and horns. If you don't think that chain is colored, then there is not much to talk about.

If Steve could differentiate between two DACs set to equal volumes, does it matter what playback equipment he can do this on? If there's an audible difference, does that not mean there's a difference between the DACs that is audible?
 
If Steve could differentiate between two DACs set to equal volumes, does it matter what playback equipment he can do this on? If there's an audible difference, does that not mean there's a difference between the DACs that is audible?

Of course if you design in a deliberate coloration (and I have examples of this) there is going to be a difference. I'm still not sure you are on board with this concept. People have been doing it for years, deliberately allowing non-signal-related artifacts because they think, PREFERENCE-wise, they are better. Difference vs. preference is very important and ignored all the time.
 
Scott, not everybody has exactly the same beliefs regarding sound, just as they may have their own beliefs in other areas of life. There are also people with personality types who think they know the answer to everything and like to talk about it at great length. It would be good if we could avoid lumping a whole lot of people with different beliefs under one very loaded word, audiophile, where one mistaken idea could condemn someone to being lumped in and associated with every crazy idea ever out there.

Steve, I would say this about DACs: one of the properties that makes a DAC a good one is immunity from being audibly affected by different cable types. With enough reclocking, or stages of reclocking, it should be possible to clean up any cable-induced distortion of the digital pulse train (for cables of reasonable length, not excessively long). So to me, if a DAC sounds different with different cables of reasonable quality, such as those commonly sold at consumer electronics outlets, then that tells me something is wrong with the DAC. Either a DAC should be able to handle the incoming digital signal 100% without affecting the sound, or the red light that says "data error" should light up.
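
To put a rough number on why good reclocking settles this, here is a back-of-the-envelope sketch using the usual small-angle phase-modulation approximation for sinusoidal jitter (each sideband sits at roughly 20*log10(pi*f*J) dB below a tone of frequency f for peak jitter J); the jitter values are illustrative, not measurements of any particular DAC or cable.

Code:
# Approximate level of jitter-induced sidebands, small-angle PM approximation:
# each sideband sits at about 20*log10(pi * f * J) dB below a tone of frequency f
# when the sampling clock carries sinusoidal jitter of peak amplitude J seconds.
# Illustrative numbers only - not measurements of any particular DAC or cable.
import math

def sideband_dbc(tone_hz: float, jitter_peak_s: float) -> float:
    """Sideband level relative to the tone, in dBc (small-angle approximation)."""
    return 20 * math.log10(math.pi * tone_hz * jitter_peak_s)

for jitter in (10e-9, 1e-9, 100e-12, 10e-12):          # 10 ns down to 10 ps peak
    print(f"{jitter * 1e12:7.0f} ps of jitter on a 10 kHz tone -> "
          f"{sideband_dbc(10e3, jitter):7.1f} dBc sideband")

So if the reclocking stages reduce whatever the cable contributes to a few tens of picoseconds at the converter, the resulting sidebands sit far below the level of the music, which is exactly why a cable-sensitive DAC suggests a design problem rather than a cable problem. (The approximation ignores the shape of the jitter spectrum and noise-like jitter, so treat it as an order-of-magnitude guide.)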

mmerrill99, very oddly, it is sometimes possible to hear distortion that should be inaudible on systems that are sufficiently nonlinear. Even a laptop computer's built-in sound, lousy as it is, can on some odd occasions expose very low-level distortion that should be, and is, inaudible on a clean, low-distortion system. It's an odd quirk, but useless for practical audibility testing. It would be like wearing distortion-detector hearing aids, not like natural human hearing. Therefore, audibility-test systems should both measure well and sound good. For the most sensitive tests they should be the best we know how to do, or as close as we can reasonably get. On the other hand, mixing and matching unmeasured components until some kind of distortion-detector hearing aid has been produced could be considered a cheat, like doping in the Olympics. That's one reason it might matter what kind of playback equipment was used.
 
Or colored enough - how do you decide the difference? This is a serious question, since I've met more than one very serious audiophile who won't compare on anything but low-power SETs and horns. If you don't think that chain is colored, then there is not much to talk about.

Of course if you design in a deliberate coloration (and I have examples of this) there is going to be a difference. I'm still not sure you are on board with this concept. People have been doing it for years, deliberately allowing non-signal-related artifacts because they think, PREFERENCE-wise, they are better. Difference vs. preference is very important and ignored all the time.

Steve said he could differentiate his DAC from others in blind tests, but only using his own system, as he considers it transparent. In your first quote you seemed to suggest that his system may not be transparent.

I pointed out that it makes no difference, as the contention in this thread is that there are no audible differences between 'competently designed' DACs, and if Steve is able to consistently differentiate his DAC from other 'competently designed' DACs then that claim is false.

Is there some point about colored DACs I'm missing, as I can't see how it fits into the discussion?
 
.....

mmerrill99, very oddly, it is sometimes possible to hear distortion that should be inaudible on systems that are sufficiently nonlinear. Even a laptop computer's built-in sound, lousy as it is, can on some odd occasions expose very low-level distortion that should be, and is, inaudible on a clean, low-distortion system. It's an odd quirk, but useless for practical audibility testing. It would be like wearing distortion-detector hearing aids, not like natural human hearing. Therefore, audibility-test systems should both measure well and sound good. For the most sensitive tests they should be the best we know how to do, or as close as we can reasonably get. On the other hand, mixing and matching unmeasured components until some kind of distortion-detector hearing aid has been produced could be considered a cheat, like doping in the Olympics. That's one reason it might matter what kind of playback equipment was used.

Yes, that is a point, and one could argue that it shows there is a specific non-linearity in one DAC which interacts with a non-linear system in an audible way that another DAC doesn't, i.e. there are differences (whether they should be audible in a 'competently designed' system is another point). But there are non-linearities in all systems, and if one continually invokes the 'competently designed' term for everything which reveals differences then we are in the logical fallacy of Bulverism. (I'm not saying you are doing this.)

The only resolution to this is to be able to show the interaction of non-linearities between a specific DAC & system, and that is going to be a tall order except for the most gross non-linearities.

In a way it all comes back to having controls in the blind test which give some measure of the quality of the test & participants.

I understand what you are saying, but I don't think this is what Scott meant.
 
Steve, I would say this about DACs: one of the properties that makes a DAC a good one is immunity from being audibly affected by different cable types. With enough reclocking, or stages of reclocking, it should be possible to clean up any cable-induced distortion of the digital pulse train (for cables of reasonable length, not excessively long). So to me, if a DAC sounds different with different cables of reasonable quality, such as those commonly sold at consumer electronics outlets, then that tells me something is wrong with the DAC. Either a DAC should be able to handle the incoming digital signal 100% without affecting the sound, or the red light that says "data error" should light up.

Mark, I think what Steve is suggesting is that, in his opinion, his system doesn't mask the subtle differences between DACs.

Your point is very close to the 'competently designed' Bulverism I just mentioned, but I know your posts by now & know that's not what you mean. I'm just pointing out the oft-seen use of this technique, i.e. Q: what is 'competently designed'? A: any DAC which displays no audible difference from other 'competently designed' DACs. Contained in the definition is the conclusion - it's circular logic at its most profound.
 
<snip>

In a way it all comes back to having controls in the blind test which give some measure of the quality of the test & participants.

<snip>

Which is one detail that must be considered. (Negative controls as well.)

It still seems to be misunderstood that statistical tests/analysis and the actual experimental conditions, as parts of the scientific examination of a real hypothesis, are two different things.

Good design of experiments is needed, and there is - besides the ITU recommendation we've mentioned - a lot of literature out there, but most people seem not to be interested.
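
To be clear, the statistics are the easy part; a minimal sketch of the usual exact-binomial scoring of a forced-choice run (the trial counts below are just examples) takes a dozen lines, and it says nothing at all about whether the experiment that produced the numbers was objective, valid and reliable.

Code:
# Exact binomial scoring of a forced-choice ABX run (chance = 0.5 per trial).
# This is only the analysis step; it cannot rescue a badly designed experiment,
# and the same scoring should also be applied to hidden negative controls
# (A-vs-A trials) to check the false-positive rate.  Trial counts are examples.
from math import comb

def p_value(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided probability of getting at least `correct` hits by guessing alone."""
    return sum(comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
               for k in range(correct, trials + 1))

for correct, trials in ((12, 16), (14, 16), (20, 32)):
    print(f"{correct}/{trials} correct: p = {p_value(correct, trials):.4f}")

The number of trials also has to be fixed before the run; stopping early or re-running until the p-value looks good invalidates it, which is again a design-of-experiments issue, not a statistics issue.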
 
Of course if you design in a deliberate coloration (and I have examples of this) there is going to be a difference. I'm still not sure you are on board with this concept. People have been doing it for years, deliberately allowing non-signal-related artifacts because they think, PREFERENCE-wise, they are better. Difference vs. preference is very important and ignored all the time.

Sometimes the distinction between difference and preference might be neglected, but furthermore there is a distinction between difference and transparency. Testing for the latter is often even more difficult, especially wrt multidimensional evaluation.
 
Of course I meant "most people (i.e. of those who are doing/conducting tests) don't seem to be interested". :)
Yep, but don't forget those onlookers who consider null results as data points, the accumulation of which they claim 'strongly suggests' (wink, wink, nod, nod) that there is no audible difference between 'competently designed' X (for X = DACs, amplifiers, preamps, etc.)

Why introduce facts & any kind of attempted rigor into this collective delusion? It's usually not welcomed.
 
Mark, I think what Steve is suggesting is that, in his opinion, his system doesn't mask the subtle differences between DACs.

Your point is very close to the 'competently designed' Bulverism I just mentioned, but I know your posts by now & know that's not what you mean. I'm just pointing out the oft-seen use of this technique, i.e. Q: what is 'competently designed'? A: any DAC which displays no audible difference from other 'competently designed' DACs. Contained in the definition is the conclusion - it's circular logic at its most profound.

Okay.
Regarding Scott's comment, I think he may have been saying that we don't know whether people using equipment known to be pretty nonlinear have constructed what might be considered distortion-detector hearing aids, or not. But the more nonlinear the system, the more likely that type of thing is to happen.

Regarding circular definitions, I don't think it would be circular to say that there should be measurements providing some kind of figure of merit for things like cable-effect immunity. Right now we don't have a standard test that I am aware of, but if we did, it would be better for a DAC to reject cable effects at -150 dB rather than at only -90 dB. It should be a number derived from measurement, and like any other number of that type it should be included in product specifications so people can take it into consideration as they wish. (This is assuming, of course, that cables actually do have some audible effect on many or most DACs, which I still have some doubts about. If only a small number of DACs do, that would suggest they might be defective in design, in which case it would make sense to investigate carefully.)
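
Just to illustrate what such a figure of merit could look like (a sketch only - the file names are placeholders, and a real measurement would need sample-accurate alignment, gain matching and a capture chain quieter than the residual being measured), one could null two captures of the DAC's analog output made with different cables and express the residual relative to the signal.

Code:
# Sketch of a possible cable-effect-immunity figure of merit: capture the DAC's
# analog output playing the same material through cable A and through cable B,
# then express the RMS of the difference (the null residual) relative to the
# RMS of the signal, in dB.  More negative is better (-150 dB beats -90 dB).
# File names are placeholders; real captures need sample-accurate alignment
# and level matching before the subtraction means anything.
import numpy as np
import soundfile as sf

cap_a, fs = sf.read("dac_out_cable_a.wav", dtype="float64", always_2d=True)
cap_b, _ = sf.read("dac_out_cable_b.wav", dtype="float64", always_2d=True)

n = min(len(cap_a), len(cap_b))
residual = cap_a[:n] - cap_b[:n]

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(x))))

immunity_db = 20 * np.log10(rms(residual) / rms(cap_a[:n]))
print(f"cable-effect residual: {immunity_db:.1f} dB relative to the signal")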
 
Good design of experiments is needed and there is - beside the ITU recommendation we´ve mentioned - a lot of literature out there, but most people seem to be not interested.

I have had a lack of interest, I think, because I don't know anything about their provenance. Where is the science behind them? Are they just some guy's opinions, or what? I would need to familiarize myself with the basis for the recommendations in order to defend them here. It carries no weight at all with people here that the ITU did this or that when nobody even knows what the ITU is or how it arrives at recommendations.
 
Sure, but this is a very tricky area, as I said, bordering on Bulverism.

Without proof, we can't dismiss all DACs which sound different as 'incompetently designed'. A blameless DAC/amplifier/etc. can only be considered without flaw within the definition of the known distortions of the time, which obviously includes known measurements.

Before jitter was discovered & measured, were there DACs which, although they had a good figure of merit, were still audibly distinguishable?

The same applies to any & all devices & figures of merit - we always have to bear in mind that we don't know what we don't know - the unknown unknowns. We similarly can't expect devices to be immune to unknown unknowns - how could they be designed to be? What we can expect is that some devices will be immune to certain unknown unknowns purely by chance or through unintended consequences.

Sometimes these devices give us insight into what the unknown unknowns might be and, hopefully, over time they become known knowns.
 
I have had a lack of interest I think because I don't know anything about their provenience. Where is the science behind them? Are they just some guys opinions, or what? I would need to familiarize myself with the basis for the recommendations in order to defend them here. It carries no weight at all with people here ITU did this or that when nobody even knows what ITU is, or how they arrive at recommendations.

Actually I didn't wonder about the lack of interest in defending the ITU recommendations, but about the lack of interest in good DOE (wrt audio listening tests), because good experiments are needed (and good execution as well). All the data sampling and statistical calculations don't help if a test wasn't objective, valid and reliable, which essentially means if it wasn't a _good_ experiment.

It seems that I have increasing problems getting my message across in my posts... :confused:
 