World's Best DACs

As I've said earlier, you missed Kunchur's point, but your screenshots illustrated exactly what he meant.

That's self-contradictory. I designed the experiment that I posted screenshots of to show what Kunchur was talking about, and even you admit that I did the experiment correctly. More likely: you misunderstood what I wrote.

Secondly, the results of my experiment contradicted Kunchur's claims. If they illustrated exactly what he meant, then they accurately prove his claims to be wrong.

He was talking about the ability to resolve the essence of the original signal (which might have looked like the first signal in your 1024k screenshot, just for simplicity at this point), which undoubtedly isn't possible with a 44.1 kHz sampling system.

Except my second screenshot proved otherwise. There was a measurable difference after downsampling.

It does not help that this signal, after being distorted to the max during downsampling, is still different from the second one, which was also distorted to the max during downsampling...

I can't do anything about the laws of physics - downsampling can reasonably be expected to visibly distort a wave, and that is how the real world works.
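To make that concrete, here is a small sketch (mine, not the thread's actual experiment; the harmonic counts and cutoffs are illustrative): a 7 kHz square wave is a sum of odd harmonics at 7, 21, 35, ... kHz, so band-limiting it to the 22.05 kHz Nyquist frequency of 44.1 kHz sampling keeps only the first two terms and visibly changes the waveform.

```python
import numpy as np

def square_partial(f0, t, f_limit):
    """Fourier-series square wave, keeping odd harmonics below f_limit."""
    y = np.zeros_like(t)
    k = 1
    while k * f0 < f_limit:
        y += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 2
    return y

t = np.linspace(0, 1e-3, 20_000, endpoint=False)  # dense 1 ms time grid
wideband    = square_partial(7_000, t, 96_000)    # odd harmonics up to 91 kHz
bandlimited = square_partial(7_000, t, 22_050)    # 7 and 21 kHz terms only

# The band-limited version deviates strongly from the wideband one,
# mostly around the square wave's edges, where the missing high
# harmonics used to live:
print(np.max(np.abs(wideband - bandlimited)))
```

The deviation is large on the scale of the unit-amplitude square wave, which is the "visible distortion" being talked about.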

But that misses an important point when it comes to audio. In audio, the most important question always is: "What's audible?"

It has been shown many times, with DBTs involving a wide variety of listeners, signals, and music chosen to be challenging, that 44.1 kHz downsampling done right reproduces everything that the human ear can reliably perceive.
 
Your example proved something that nobody had questioned (at least not Kunchur). ;)
Therefore what I've written wasn't self-contradictory...


That's why I wrote that you've tried to use two "tricks" from Schopenhauer's list.

Wrt "it matters if it is audible" i totally agree and, as said a couple of years before, imo Kunchur was a bit to fast, although additional preliminary experiments might have shown promising results, as confirmations with music samples are needed to justify his conclusions.

Wrt the question of whether it has already been shown that 44.1 kHz is transparent, I hope that you don't refer to the Meyer/Moran experiments.
 
Your example proved something that nobody had questioned (at least not Kunchur). ;)

It was just the thesis that he and others have been trying to prove for years.

http://boson.physics.sc.edu/~kunchur/papers/Audibility-of-time-misalignment-of-acoustic-signals---Kunchur.pdf

Abstract:

"Misalignment in timing between drivers in a speaker system and temporal smearing of signals in
components and cables have long been alleged to cause degradation of fidelity in audio reproduction.
It has also been noted that listeners prefer higher sampling rates (e.g., 96 kHz) than the 44.1
kHz of the digital compact disk, even though the 22 kHz Nyquist frequency of the latter already
exceeds the nominal single-tone high-frequency hearing limit fmax ∼ 18 kHz. These qualitative and anecdotal observations point to the possibility that human hearing may be sensitive to temporal errors, τ, that are shorter than the reciprocal of the limiting angular frequency [2πfmax]⁻¹ ≈ 9 μs, thus necessitating bandwidths in audio equipment that are much higher than fmax in order to preserve fidelity.

The blind trials of the present work provide quantitative proof of this by assessing
the discernability of time misalignment between signals from spatially displaced speakers. The
experiment found a displacement threshold of d≈2 mm corresponding to a delay discrimination of
τ≈6 μs."
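The two figures quoted in the abstract are easy to sanity-check (a sketch of mine; the speed of sound is taken as 343 m/s):

```python
import math

# The ~9 us figure: reciprocal of the limiting angular frequency.
f_max = 18_000                       # nominal hearing limit, Hz
tau = 1 / (2 * math.pi * f_max)      # [2*pi*f_max]^-1
print(round(tau * 1e6, 1))           # -> 8.8, i.e. the paper's ~9 us

# The ~6 us figure: 2 mm displacement divided by the speed of sound.
d = 2e-3                             # 2 mm displacement threshold
c = 343.0                            # speed of sound, m/s
print(round(d / c * 1e6, 1))         # -> 5.8, i.e. the paper's ~6 us
```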

My examples showed that differences of < 5 μs between digital signals survived downsampling to a 44.1 kHz sampling rate.
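A minimal sketch of that claim (my reconstruction, not the original screenshots; the 7 kHz tone and 10 ms duration are arbitrary choices): a 5 μs delay is only a fraction of the 44.1 kHz sample period (~22.7 μs), yet the delayed and undelayed versions remain clearly distinct after sampling.

```python
import numpy as np

fs = 44_100
t = np.arange(441) / fs                  # 10 ms of sample instants
f = 7_000                                # in-band test tone
a = np.sin(2 * np.pi * f * t)            # original
b = np.sin(2 * np.pi * f * (t - 5e-6))   # same tone delayed by 5 us

print(1 / fs)                            # sample period: ~2.27e-5 s (22.7 us)
print(np.max(np.abs(a - b)))             # ~0.22 on a full-scale-1.0 signal
```

This is the expected behaviour: a band-limited delay shows up as a phase shift of every spectral component, and phase is not quantized to the sample grid.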

It is well known that while the audiophile preference for sampling frequencies > 44.1 kHz is widespread, reliably discerning an audible difference due to well-done downsampling to 44.1 kHz is a goal that remains to be accomplished.

An example of this failure to prove a reliably audible difference may be found here: https://secure.aes.org/forum/pubs/conventions/?ID=416
 
Audibility of temporal smearing and time misalignment of acoustic signals talks about arrival-time differences and then tries to back-track that to how signals are sampled. I think the use of square waves serves more to confuse the researchers than to help, and sends them off the scent. Thus, the conclusions grossly mis-attribute a need for frequency resolution versus the inter-track temporal resolution (how finely you can shift the phase of a sinusoid in a 16/44 signal). I just read the paper and his AES talk outline.
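For a rough idea of "how finely you can shift the phase of a sinusoid in a 16/44 signal", here is a back-of-envelope sketch (my own, under the simplifying assumptions of a full-scale sine and a purely amplitude-based criterion): the steepest slope of A·sin(2πft) is A·2πf, so a time shift τ moves a sample value by up to A·2πf·τ, and that exceeds one 16-bit LSB once τ is around 1/(32768·2πf).

```python
import math

def min_resolvable_shift(f, bits=16):
    """Smallest time shift of a full-scale sine at frequency f that
    moves at least one sample by one LSB (steepest-slope estimate)."""
    lsb = 2 / 2**bits                 # one LSB relative to full scale of 2A
    return lsb / (2 * math.pi * f)

print(min_resolvable_shift(7_000))    # ~7e-10 s, i.e. well under 1 ns
```

So the representable phase granularity of an in-band sinusoid at 16/44 is orders of magnitude finer than the 22.7 μs sample period.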

By "inter-track", do you mean "intra-track"?

If a body of tests points in one direction, even if they all tend to be underpowered, that makes for a strong suggestion of which way the data tends. If a body of tests points in all directions, especially while underpowered, then Bayes shrugs his shoulders. Yes, the meta-analyses need to be done carefully.

Stating that a body of tests shows ... is in fact a crude form of meta-analysis and obviously misses the question of data heterogeneity.

Normally one must suspect that some data is missing; see for example the so-called "amplifier challenge": according to Clark, several thousand people did the test, but no one got more than 65% correct answers. It is extremely unlikely that this happened, so there might have been a problem with data tracking.
Btw, experiments within SDT (signal detection theory) have shown that it is generally not a good idea to combine listening tests with money bets.

Something similar holds for the ABX tests: most are not well documented, and given that we are talking, according to arnyk's description in the "resistor thread", literally about hundreds of ABX tests of EUTs considered to be inaudible, should positive results really be entirely missing? Quite unlikely too.

Otoh, if you analyze the quite small number of well-documented and methodologically sound listening tests, the picture is a bit different.
So I have doubts about the body of tests all pointing in the same direction.
 
By "inter-track", do you mean "intra-track"?
Stating that a body of tests shows ... is in fact a crude form of meta-analysis and obviously misses the question of data heterogeneity.

Normally one must suspect that some data is missing; see for example the so-called "amplifier challenge": according to Clark, several thousand people did the test, but no one got more than 65% correct answers.

Reference: AES E-Library, "Ten Years of A/B/X Testing"

Here is an example of a person posting under an alias attempting to overcome reliable evidence presented by, and agreed with by, established authorities in the field:

It is extremely unlikely that this happened, so there might have been a problem with data tracking.

Is this article an explanation for this?


http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf
 
Post #1038:
Could you cite some well documented tests in which that happened?
How about it? That question asked about the following:
Btw, experiments within SDT (signal detection theory) have shown that it is generally not a good idea to combine listening tests with money bets.

Otoh, if you analyze the quite small number of well-documented and methodologically sound listening tests, the picture is a bit different.
So I have doubts about the body of tests all pointing in the same direction.
 
It was just the thesis that he and others have been trying to prove for years.


My examples showed that differences of < 5 μs between digital signals survived downsampling to a 44.1 kHz sampling rate.
<snip>

Kunchur's point was that higher sampling rates should be used to preserve the essential characteristic of the original signal.

Your point is that two different signals (both having lost their essential characteristics during downsampling) are still different.

These are two completely different topics.

<snip>
Here is an example of a person posting under an alias attempting to overcome reliable evidence presented by, and agreed with by, established authorities in the field:

Actually it is simply an argument based on statistics.
If no difference in Clark's amplifier-challenge tests was audible, then we have to assume that all the participants were guessing randomly.
Calculate the probability that, within a couple of thousand tests, the number of tests with >65% hits is zero; you will get P(X=0 | n=1000) < 1 × 10⁻⁶.

The probability is therefore high that there was a problem with data sampling.
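Under an assumption the post does not state — say each of ~1000 tests was a 20-trial forced-choice run, so ">65% hits" means 14 or more correct — the claim can be checked directly:

```python
import math

n_trials, n_tests = 20, 1000   # assumed trial/test counts, not from the post

# Chance that a single guessing participant (p = 0.5 per trial)
# scores more than 65% (i.e. 14+ of 20) by luck:
p_one = sum(math.comb(n_trials, k) for k in range(14, n_trials + 1)) / 2**n_trials
print(p_one)            # ~0.058

# Chance that NONE of 1000 guessing participants ever does:
p_none = (1 - p_one) ** n_tests
print(p_none)           # ~1.6e-26, far below the quoted 1e-6 bound
```

The exact figure depends on the assumed trial count per test, but any reasonable choice leaves "zero tests above 65%" astronomically unlikely under pure guessing.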

Post #1038:
How about it? That question asked about the following:

Any good introductory textbook will provide some insight; see for example:
Thomas D. Wickens, Elementary Signal Detection Theory, Oxford University Press, 2002

No, I really meant inter-track, as it's a path-length difference between the channels in the experiment.

Kunchur did three different experiments in a diotic setup and got the result that the temporal resolution of the hearing sense was better than expected. He concluded that phase changes of the spectral components could be the reason for the audibility.

In a real-world 44.1 kHz sampling system people usually try to avoid any content above 20 kHz, so it seems to be a valid conclusion that removing the 21 kHz component of the 7 kHz square wave will be audible too.
(As said before, the conclusion needs further experimental confirmation, especially wrt music.)
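The arithmetic behind the 21 kHz remark (a sketch; the cutoff values are typical figures, not from the paper): a 7 kHz square wave has energy only at its odd harmonics, so the 44.1 kHz Nyquist limit would in principle still admit the 21 kHz term, while a typical ~20 kHz filter removes it and leaves a pure 7 kHz sine.

```python
f0 = 7_000
harmonics = [k * f0 for k in range(1, 14, 2)]   # first seven odd harmonics

print(harmonics)                                # [7000, 21000, 35000, 49000, 63000, 77000, 91000]
print([f for f in harmonics if f < 22_050])     # below the 44.1 kHz Nyquist limit: [7000, 21000]
print([f for f in harmonics if f < 20_000])     # below a typical 20 kHz cutoff: [7000]
```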

But in which way did Kunchur do this:
Thus, the conclusions grossly mis-attribute a need for frequency resolution versus the inter-track temporal resolution (how finely you can shift the phase of a sinusoid in a 16/44 signal)
 
Kunchur's point was that higher sampling rates should be used to preserve the essential characteristic of the original signal.

Exactly what is "The essential characteristic of the original signal"?

Of course you can't tell me, because there is no formal, generally agreed-upon meaning for that phrase.

Classic pseudoscience: set out to prove something that is not well-defined and wave your hands until you've fooled most of the people.

Here's an example of relevant science:

"Higher sampling rates should be used to avoid causing audible changes to the signal."

Unfortunately, it is an audiophile dream that lacks sufficient reliable evidence to support it.
 
Actually it is simply an argument based on statistics.
If no difference in Clark's amplifier-challenge tests was audible, then we have to assume that all the participants were guessing randomly.
Calculate the probability that, within a couple of thousand tests, the number of tests with >65% hits is zero; you will get P(X=0 | n=1000) < 1 × 10⁻⁶.

The probability is therefore high that there was a problem with data sampling.
If, assume, might... Speculation is all you have. :rolleyes:

Any good introductory textbook will provide some insight; see for example:
Thomas D. Wickens, Elementary Signal Detection Theory, Oxford University Press, 2002
I was talking about "well documented tests in which that happened". So you don't know of such a test to cite. Again, you argue from speculation. Let us know when you find actual events "in which that happened".
 
If no difference in Clark's amplifier-challenge tests was audible, then we have to assume that all the participants were guessing randomly.
Calculate the probability that, within a couple of thousand tests, the number of tests with >65% hits is zero; you will get P(X=0 | n=1000) < 1 × 10⁻⁶.

The probability is therefore high that there was a problem with data sampling.

A pretty damning check on the quality of the data…

dave
 
If no difference in Clark's amplifier-challenge tests was audible, then we have to assume that all the participants were guessing randomly.

I suspect you don't realize that there are two different people named Clark who are relevant in this discussion: one is David L. Clark, and the other is Richard Clark.

I'd be curious to know which one you are talking about.

However, there's a bigger problem, and it relates to your misunderstanding of the meaning of scientific findings. I see someone who lives in a black-and-white world. I see a world with shades of grey.

The relevant shade of grey is that when you combine the test results from a large group of people, your statistics relate to that large group, and not necessarily to each member individually. Just because they did not, as a group, reliably hear a difference does not prove that none of them heard a difference.
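A toy illustration of that point (the numbers are hypothetical): pool one genuine hearer with 999 guessers and the group statistic collapses to chance, even though the individual result is highly significant on its own.

```python
import math

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One listener who truly hears a difference scores 18/20:
p_individual = binom_tail(20, 18)     # ~2e-4 under the guessing hypothesis
print(p_individual)

# The other 999 guess and average 10/20 each; pooled over everyone:
pooled = 18 + 999 * 10                # 10008 correct out of 20000 trials
print(pooled / 20000)                 # 0.5004: the pool looks like pure chance
```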
 
The mantra is the meaningless "downward dynamic range." As long as this is something claimed but neither measurable nor distinguishable in ears-only testing (as a separate matter from the plain-vanilla and easily measurable "dynamic range" and "noise"), it is an unanswerable tautology. There, I saved Dave some keystrokes.
 
The mantra is the meaningless "downward dynamic range."

An analogy to the meaningless "DDR" is the meaningless "GOOD SOUND". Good sound can be perceived and (to a certain degree) can be "measured". The problem, of course, is that measurements cannot be 100% correlated with subjectively perceived good sound, because good sound has too many variables. Each variable can be measured, but the exact relationship between these variables (the formula for good sound) is not clear.

As long as this is something claimed but not measurable nor distinguishable in ears-only testing (as a separate matter from the plain vanilla and easily measurable "dynamic range" and "noise"), it is an unanswerable tautology.

As long as anyone is willing to define the formula, everything can be measured.

Defining a formula for "good sound" is too difficult as it varies with individual tastes. But I think "DDR" is a lot simpler and IMO not subjective.

The first problem is, of course, that sound perception happens in the acoustic domain. If it has to be measured, it has to be measured after the speaker, probably using a mic and electronics that are way better than human ears.

As it is a perceived quality, DNR is irrelevant. "Noise" or SNR is more relevant, except that we are measuring the lower amplitudes. Then, because we are also expecting a perceived distinction between two signals, this will relate to distortion.

Is there any common measurement "technology" that does this? I think not. Maybe this is what Dave means by "sound people can hear that cannot be measured with the current audio measuring technology." This statement has to be read between the lines. It doesn't mean that we don't have the tools; rather, we don't have the knowledge or the effort.