DAC blind test: NO audible difference whatsoever

I have had a lack of interest, I think, because I don't know anything about their provenance. Where is the science behind them? Are they just some guy's opinions, or what? I would need to familiarize myself with the basis for the recommendations in order to defend them here. It carries no weight at all with people here that ITU did this or that when nobody even knows what ITU is, or how they arrive at recommendations.
ITU BS are the International Telecommunication Union's broadcasting standards, AFAIR, & they have been involved in this sort of audibility research for a long time now - just as the BBC has had its own research department.

The recommendations in the ITU BS document (ITU-R BS.1116) are one of the 'gold standards' for how to do such testing: "METHODS FOR THE SUBJECTIVE ASSESSMENT OF SMALL IMPAIRMENTS IN AUDIO SYSTEMS INCLUDING MULTICHANNEL SOUND SYSTEMS"

Although it centres on the ABC/HR blind testing method, it nonetheless gives recommendations about the correct methods involved in perceptual testing generally.
 
Actually, I wasn't wondering about the lack of interest in defending the ITU recommendations, but about the lack of interest in good DOE (design of experiments, wrt audio listening tests), because good experiments are needed (and good execution as well). All the data sampling and statistical calculation won't help if a test wasn't objective, valid and reliable, which essentially means if it wasn't a _good_ experiment.
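To be fair to the statistics side, that part at least is mechanical once the experiment itself is sound. As a minimal sketch, here is the usual significance calculation for an ABX run - the one-sided binomial probability of scoring at least that many correct by guessing alone. The function name and the 12-of-16 figures are just illustrative, not from any particular test:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting >= `correct` hits in
    `trials` ABX trials if the listener were purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 correct out of 16 trials
p = abx_p_value(12, 16)
print(round(p, 4))  # 0.0384
```

Note that even 12/16 only just clears the conventional 0.05 threshold - which is exactly why the design and execution matter more than the arithmetic.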

It seems I'm having increasing problems getting my message across in my posts......:confused:

Yes, I am very much in agreement with you that DOE is critical. However, it seems to me it is hard to say much about it at a general level.

When it comes down to designing a particular experiment someone with expertise in that area should be on the research team. I also think they need to have someone with expert listening skills, for want of a better term, to help validate the test setup.

I have always held that engineers undertaking biomedical research on their own are liable to make mistakes. Even people trained to work in that area make mistakes, it's a very complicated area in which to do research.

As far as publishing hearing research, I think enough information should be included for someone else to be able to replicate the experiment. Types of equipment used, measurements of test system performance, and how test subjects were recruited (randomly?) should be included. Probably a lot more than that as well. How about videotaping the testing of a few people (with their consent, of course)?
 
Sometimes the distinction between difference and preference might be neglected, but beyond that there is a distinction between difference and transparency. Testing for the latter is often even more difficult, especially wrt multidimensional evaluation.

Transparency is often used as a generic "goodness" factor - for instance, someone here found the Bryston to be the only transparent amplifier for him. If the SET guy substituted in the Bryston, he could find the SET more transparent; otherwise there's a conundrum, or he simply likes the less transparent sound.

If transparency does not equate (at least largely) with preference, I fail to get the whole point of the exercise. Same goes for transparency and accuracy: I don't see how to have one without the other.
 
@mmerrill99, why don't you explain instead how to competently design the systems you refer to continuously, and then we won't need to test anything as they will all perform exactly the same. No listening needed, ABX can be thrown out the window, and the mmerrill99 method would be applied universally to all designs.:wiz:

We will refer to this as the mmerrill99 method.
 
@mmerrill99, why don't you explain instead how to competently design the systems you refer to continuously, and then we won't need to test anything as they will all perform exactly the same. No listening needed, ABX can be thrown out the window, and the mmerrill99 method would be applied universally to all designs.:wiz:
I think you are reading me wrong - 'competently designed' is a phrase some people use but cannot define when asked. I'm using it as an example of a meaningless phrase.

Please re-read my post with that in mind
 
If transparency does not equate (a least a lot) with preference I fail to get the whole point of the exercise. Same goes for transparency and accuracy, I don't see how to have one and not the other.

Um, I happen to have a Bryston amp. When I couldn't hear the difference between PMA's violin Toccata files on it, I tried the DAC-3 headphone amp, on which I could hear the difference. So, now I believe the DAC-3 headphone amp is probably more transparent than the Bryston. So, I guess I prefer the DAC-3 for critical listening and the Bryston for casual listening.

It seems to me transparency should be measurable and audible (at least to some people), and is probably some combination of low noise and low distortion. If someone happens to like that, then they may prefer it. Otherwise, maybe not.
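For what it's worth, "low noise and low distortion" is roughly what a THD+N figure tries to capture in one number. A rough sketch of how one might estimate it from a captured test tone - the function name and the guard-band width are my own illustrative choices, not any standard implementation:

```python
import numpy as np

def thd_n_ratio(signal, fs, f0, guard_bins=4):
    """Rough THD+N estimate: window, FFT, null out the bins around the
    fundamental, and compare residual power to total power."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    fundamental = np.abs(freqs - f0) <= guard_bins * fs / n
    total = np.sum(spectrum ** 2)
    residual = np.sum(spectrum[~fundamental] ** 2)
    return np.sqrt(residual / total)

fs = 48000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 1000 * t)
dirty = clean + 0.01 * np.sin(2 * np.pi * 3000 * t)  # 1% third harmonic
print(20 * np.log10(thd_n_ratio(dirty, fs, 1000)))   # ≈ -40 dB
```

Whether a residual at -40 dB, or -80 dB, or -110 dB is audible to a given listener is of course exactly the question the blind tests are meant to answer.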

That's how it looks to me.

On the other hand, Sean Olive has said that people mostly tend to prefer systems that measure well when tested blind. If he is right about that, then maybe there is a fair amount of statistical correlation between preference and transparency.
 
When I couldn't hear the difference between PMA's violin Toccata files on it, I tried the DAC-3 headphone amp, on which I could hear the difference.

Both through the headphones? Otherwise you made a second major swap and there is no experiment happening. Even so, headphones on a high-power power amp vs a dedicated headphone amp has its own issues. Or how about trying to drive your speakers with the headphone amp and repeating the test?
 
Um, I happen to have a Bryston amp. When I couldn't hear the difference between PMA's violin Toccata files on it, I tried the DAC-3 headphone amp, on which I could hear the difference. So, now I believe the DAC-3 headphone amp is probably more transparent than the Bryston. So, I guess I prefer the DAC-3 for critical listening and the Bryston for casual listening.

It seems to me transparency should be measurable and audible (at least to some people), and is probably some combination of low noise and low distortion. If someone happens to like that, then they may prefer it. Otherwise, maybe not.

That's how it looks to me.

On the other hand, Sean Olive has said that people mostly tend to prefer systems that measure well when tested blind. If he is right about that, then maybe there is a fair amount of statistical correlation between preference and transparency.

Yeah, I don't know why this idea that preference = colored is being promulgated - it's the exception, I believe? As the Olive tests showed, it wasn't for that particular setup, & why would we assume it should be different for other situations/devices?
 
Both through the headphones? Otherwise you made a second major swap and there is no experiment happening. Even so, headphones on a high-power power amp vs a dedicated headphone amp has its own issues. Or how about trying to drive your speakers with the headphone amp and repeating the test?

They are two different systems being swapped, not individual components. I didn't mention it, but the Bryston is an original one, at least 20 years old. It measures a lot worse than the DAC-3, so it is probably not unreasonable to posit that most of the difference has more to do with the amps than the speakers. But I would agree I have not proven it. Not sure it would be easy to do: the headphone amp isn't rated for 8 ohms (I don't think, at least not at lowest distortion), and the Bryston output is too hot to use with headphones without an attenuator. Then I would have to prove the attenuator wasn't changing anything.
 
If Steve could differentiate between two DACs set to equal volumes, does it matter what playback equipment he can do this on? If there's an audible difference, doesn't it mean there's a difference in the DACs that is audible?

It does make a difference. All it takes is a bad cable or a poor active preamp and the masking effect of distortion, compression and noise will drown out the differences.

Steve N.
Empirical Audio
 
Um, I happen to have a Bryston amp. When I couldn't hear the difference between PMA's violin Toccata files on it, I tried the DAC-3 headphone amp, on which I could hear the difference. So, now I believe the DAC-3 headphone amp is probably more transparent than the Bryston. So, I guess I prefer the DAC-3 for critical listening and the Bryston for casual listening.

It seems to me transparency should be measurable and audible (at least to some people), and is probably some combination of low noise and low distortion. If someone happens to like that, then they may prefer it. Otherwise, maybe not.

That's how it looks to me.

On the other hand, Sean Olive has said that people mostly tend to prefer systems that measure well when tested blind. If he is right about that, then maybe there is a fair amount of statistical correlation between preference and transparency.

Headphones are not a good comparison device IME. The 3-D imaging and venue cues will not be apparent. Need an acoustically tuned room and good speakers.

I have good headphones and a modded tube headamp. Not good enough.

Steve N.
Empirical Audio
 
I don't know where the idea of designing an amplifier to do anything other than "output = X times the input no matter what you hang on the output" came from. This is verifiable in the lab.

Classical measurements on amplifiers are sorely lacking. They do not address the dynamic response of the amp to a music waveform. Impulse and step response are insufficient, and even these are usually not in the measurement suite.

You cannot get the same steady-state measurements on two amplifiers and expect them to sound the same.

Steve N.
Empirical Audio
 
I don't know where the idea of designing an amplifier to do anything other than "output = X times the input no matter what you hang on the output" came from. This is verifiable in the lab.

Some designers will make sure there is some third harmonic down at -60 dB to -80 dB to create an impression of a little more detail and clarity to the sound. Some people listen for that and down at a very low level it doesn't sound like distortion, it sounds like it's supposed to be there. By comparison, an amp that doesn't have it can sound like some detail is missing.
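To put a number on the level being talked about, here is a small sketch that synthesizes a 1 kHz tone with a third harmonic 70 dB down and confirms the level from the spectrum. The -70 dB figure is just the midpoint of the range mentioned above; the frequencies fall on exact FFT bins, so no window is needed:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
h3_level = 10 ** (-70 / 20)          # -70 dBc as a linear amplitude
tone = np.sin(2 * np.pi * 1000 * t)
colored = tone + h3_level * np.sin(2 * np.pi * 3000 * t)

# Verify the harmonic level from the spectrum: with exact-bin
# frequencies, each sinusoid lands in a single rfft bin.
spectrum = np.abs(np.fft.rfft(colored))
print(20 * np.log10(spectrum[3000] / spectrum[1000]))  # ≈ -70.0
```

At that level the harmonic amplitude is about 0.03% of the fundamental - which gives a feel for how subtle the "added detail" effect being described would be.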
 
Classical measurements on amplifiers are sorely lacking. They do not address the dynamic response of the amp to a music waveform. Impulse and step response are insufficient, and even these are usually not in the measurement suite.

You cannot get the same steady-state measurements on two amplifiers and expect them to sound the same.

Steve N.
Empirical Audio

Sorry, I thought it was obvious I was talking about ANY input stimulus. You immediately assume we sit around the campfire with our 1 kHz oscillators; frankly, it gets tiresome.
 
Headphones are not a good comparison device IME. The 3-D imaging and venue cues will not be apparent. Need an acoustically tuned room and good speakers.

I have good headphones and a modded tube headamp. Not good enough.

Steve N.
Empirical Audio

Headphones are not useful for *some* purposes, such as mixing. On that I would agree. As far as hearing very low level distortion between two wav files, they worked fine. I would have preferred to use speakers, but in that case I think I needed a cleaner amplifier and the only one I had was a headphone amp. As I said, it worked for the purpose at hand.

As far as rooms, for critical listening for low-level distortion I usually use so-called near-field monitors. I don't want room reflections interfering with the direct sound. For mixing, near-field monitors are also a good way to go. For mastering, bigger speakers and a treated room are needed. Not everybody can afford a mastering-level system at home, though; it would cost several hundred thousand dollars and up to do it right.
 