They are? Where.
dave
The discussion in that link to Stereophile contains several posts by JJ, a highly respected member of the professional audio community; they at least deserve a read.
As I see it, distinct events separable by 5 µs in the SAME channel imply hundreds of kHz of bandwidth. I don't see how this applies to music or to any real microphones or instruments, and I don't see the experiments bearing this out, for the same reasons jcx, arnyk, and JJ give.
The discussion in that link to Stereophile contains several posts by JJ, a highly respected member of the professional audio community; they at least deserve a read.
I understand he is, but reading that discussion, this sticks out:
And your evidence that their studies are flawed? Well, you have none.
dave
Whoa, the aforementioned Stereophile thread is quite the utter ******* match.
And a failure to understand sampling theory, which is a pity.
Planet10, please don't drag stats into this--unless you're willing to acknowledge that the probability that something shown null is actually a true positive (a type II error) is shrinking, and dramatically. And, no, this isn't the American Physical Society, and we're not gunning for 5-sigma robustness in our stats. A look over at the Quantum Entanglement thread will tell us that we probably don't have a deterministic world. 😀
I understand he is, but reading that discussion, this sticks out:
dave
Flawed is not necessary; lack of relevance is, just as I question poorly understood brain activity in response to an acoustical input as proof of audibility.
IATD, the logic of experiments, misleading reasoning
Please, please keep in mind that we're talking about 2 signals, each presented separately to its own ear.
In our head, the 2 signals from the R and L ears are (nonlinearly) encoded into nerve impulses and compared in a complex, nonlinear neural net that is wired to both auditory nerve bundles.
Neural nets can have delays and short-term memory: https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks
and do a great job at the nonlinear signal-processing function of correlation: https://en.wikipedia.org/wiki/Cross-correlation
The signals are not linearly subtracted in the source's PCM representation - this is "too simple" a "model", and trying to reason from this viewpoint is going to lead you astray.
We quite regularly create and judge different delays as we move our heads in a stationary external sound field.
A few degrees of azimuth (horizontal-angle) resolution corresponds to a few - roughly 2 - tens of microseconds, if interaural time delay is considered an important part of that known human hearing ability.
Kunchur's apparent misunderstanding of signal theory and digital audio "time resolution" led him to make odd physical and electrical arrangements to "fix" a digital audio "time resolution" that was never broken.
The odd experimental techniques may have introduced other possible confounding mechanisms;
one possibility is the physical acoustics of his sliding-box transducer changing its radiation pattern with changing distance to nearby diffracting edges,
which may or may not have affected his conclusions on human interaural time delay sensitivity.
So there is room to argue methods - which then casts doubt on results and shows the need for more experimentation.
Arny and others seem quite convinced that discriminating 10 microseconds is solid; 5 may be speculative and needs more careful training, testing, and replication.
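The signal-level half of the argument above — that two 44.1 kHz channels can carry an interaural delay far smaller than one 22.7 µs sample period, recoverable by correlation — can be sketched in a few lines. This is my own minimal numpy illustration, not code from the thread; the 10 µs delay, the 5 kHz noise band, and the cross-spectrum phase-slope estimator are all arbitrary choices for the demo:

```python
import numpy as np

fs = 44_100                          # ordinary CD-class sample rate
n = 1 << 14
rng = np.random.default_rng(0)

# Band-limited "program material": white noise low-passed to 5 kHz
freqs = np.fft.rfftfreq(n, d=1/fs)
spec = np.fft.rfft(rng.standard_normal(n))
spec[freqs > 5000] = 0
left = np.fft.irfft(spec, n=n)

# Right channel: the same waveform delayed by 10 us -- less than half the
# 22.7 us sample period -- applied as an exact linear phase shift
tau = 10e-6
right = np.fft.irfft(np.fft.rfft(left) * np.exp(-2j*np.pi*freqs*tau), n=n)

# Recover the delay from the cross-spectrum phase slope (the frequency-domain
# counterpart of the cross-correlation linked above)
cross = np.fft.rfft(left) * np.conj(np.fft.rfft(right))
band = (freqs > 100) & (freqs < 5000)        # only bins that carry energy
slope = np.polyfit(freqs[band], np.unwrap(np.angle(cross[band])), 1)[0]
tau_est = slope / (2*np.pi)
print(round(tau_est * 1e6, 2))               # -> 10.0 (microseconds)
```

The delay comes back intact because it lives in the relative sample values of the two channels, not in the sample grid itself.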
The discussion in that link to Stereophile contains several posts by JJ, a highly respected member of the professional audio community; they at least deserve a read.
And since the self-proclaimed no-credentials authorities here dismiss his comments and the hard physical evidence posted on this thread that agrees with him, what should we conclude? I conclude that JJ's comments and the evidence are over their heads.
As I see it, distinct events separable by 5 µs in the SAME channel imply hundreds of kHz of bandwidth,
Right, and the example I posted here had to be created with a 1 MHz sample rate.
Nevertheless, when downsampled to 44.1 kHz, microsecond-level differences remained readily detectable, as JJ and others claimed, and contrary to Kunchur's naive and erroneous ramblings.
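The 1 MHz example from the thread isn't reproduced here, but the effect described can be sketched with a hypothetical stand-in. This is my own illustration: I use 705.6 kHz (16 × 44.1 kHz) as the master rate so the decimation factor is an integer, and the band-limited click and ~5.7 µs offset are arbitrary choices:

```python
import numpy as np

fs_hi = 16 * 44_100                     # 705.6 kHz master rate, chosen so that
n = 1 << 15                             # decimation to 44.1 kHz is an exact 16:1

def bandlimited_click(offset_samples):
    """A 20 kHz band-limited impulse, optionally shifted on the fine grid."""
    x = np.zeros(n)
    x[n // 2 + offset_samples] = 1.0
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1/fs_hi)
    spec[f > 20_000] = 0.0              # anti-alias before decimating
    return np.fft.irfft(spec, n=n)

a = bandlimited_click(0)
b = bandlimited_click(4)                # 4 / 705600 s ~= 5.7 us later

a44, b44 = a[::16], b[::16]             # decimate both to 44.1 kHz

# The microsecond offset survives: it is encoded in the sample *values*,
# not lost below the 22.7 us sample period
diff = np.sqrt(np.mean((a44 - b44)**2) / np.mean(a44**2))
print(diff > 0.1)                       # -> True: an easily measurable difference
```

The relative RMS difference between the two 44.1 kHz streams comes out around 0.4, i.e. nowhere near buried in quantization or rounding.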
I don't see how this applies to music or to any real microphones or instruments, and I don't see the experiments bearing this out, for the same reasons jcx, arnyk, and JJ give.
And, just for grins, despite their irrelevance to the real world: that minuscule difference was still found when the data was downsampled to 44.1 kHz, no matter what Kunchur claimed.
Whoa, the aforementioned Stereophile thread is quite the utter ******* match.
That comes with the territory. If it's too hot in the kitchen... ;-)
The discussion in that link to Stereophile contains several posts by JJ, a highly respected member of the professional audio community; they at least deserve a read.
Really surprising in that sad discussion (sad because it is so typical of discussions about such topics) is that jj did not criticize Kunchur's assertions themselves, but rather the more or less out-of-context excerpts quoted by others in the discussion.
Arnyk's post didn't help either, as he had already missed Kunchur's point back then.
Kunchur's experiments were not related to interaural time differences (IATD), and it is nearly unbelievable that literally hundreds of posts were nevertheless wasted on that. Add the usual ad hominem attacks and you know why I wrote about "mumbo jumbo".
As I see it, distinct events separable by 5 µs in the SAME channel imply hundreds of kHz of bandwidth. I don't see how this applies to music or to any real microphones or instruments, and I don't see the experiments bearing this out, for the same reasons jcx, arnyk, and JJ give.
Maybe I missed a few relevant snippets in that circus, but jj obviously did read one of the original papers quite late, misinterpreted something (IMO mostly due to the heated-up discussion and falsely attributed claims), and commented in only one or two posts; the rest was related to the personal fights.
And a failure to understand sampling theory, which is a pity.
I don't know if you attribute the failed understanding to Kunchur, but if you do, you are mistaken on that point.
Planet10, please don't drag stats into this--unless you're willing to acknowledge that the probability that something shown null is actually a true positive (a type II error) is shrinking, and dramatically.
As we are discussing sensory tests subject to statistical analysis, I somehow don't understand your assertion. The probability of a type 2 error shrinks only if the test is not underpowered.
In reality, in audio tests it is quite uncommon to see any a priori estimation of effect sizes or calculation of the power and sample sizes needed.
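The point about power and sample size can be made concrete with a small sketch (mine, not from the thread): the exact power of a one-sided binomial ABX test. The 60% true hit rate, the 0.05 significance level, and the trial counts are illustrative assumptions, not figures from any cited study:

```python
from math import comb

def abx_power(n_trials, p_true, alpha=0.05, p_null=0.5):
    """Power of a one-sided binomial ABX test: the probability of rejecting
    'just guessing' (p = p_null) when the listener's true hit rate is p_true."""
    def upper_tail(k, p):
        # P(X >= k) for X ~ Binomial(n_trials, p), computed exactly
        return sum(comb(n_trials, i) * p**i * (1 - p)**(n_trials - i)
                   for i in range(k, n_trials + 1))
    # Smallest number of correct answers that rejects guessing at level alpha
    k_crit = next(k for k in range(n_trials + 1)
                  if upper_tail(k, p_null) <= alpha)
    return upper_tail(k_crit, p_true)

# A listener who is genuinely right 60% of the time slips through a 16-trial
# test about 5 times out of 6 -- the classic underpowered null result
print(round(abx_power(16, 0.60), 2))     # -> 0.17
# Even 100 trials only brings the power to roughly 0.6
print(round(abx_power(100, 0.60), 2))
```

In other words, a pile of null results from 16-trial tests says very little on its own, which is exactly why the a-priori power calculation matters.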
Most of the in-depth discussions took place in other forums:
https://www.hydrogenaud.io/forums/
International Skeptics Forum
So I can understand 'jj' not adding much to the Stereophile forum.
*****************************
I just looked at a 115-post thread on the subject in one of the above forums; only a very few of the posts were from 'jj', aka 'Woodinville'.
When he builds a mechanical sliding mechanism to move one tweeter back and forth relative to another along the listening path to implement sub-sample delays, he has thoroughly shown he doesn't understand digital signal theory.
When he likens the resulting multipath radiation "time smearing" to representing a single audio channel, he is completely "off the reservation".
Understanding digital signal theory, he could have dispensed with the mechanics and the physical-acoustics confounders of the changing geometry by simply calculating the fractionally delayed signal and playing it out with a ca. 2003 PC motherboard chipset.
Now, if the hypothesis hadn't been framed as "time smearing" and had instead been considered a probe of the relative phase of the "ultrasonic" 21 kHz 3rd harmonic of his wideband square-wave drive waveform, and of its nonlinear interactions in the ear - that's another story.
Even with the alternative hypothesis, controlling the relative phase of the 3rd harmonic of his 7 kHz test square wave could have been handled by the 48 ks/s "computer/pro digital audio" standard vs. the CD 44.1 "consumer" sample rate - since all motherboard chipsets did 48 k, and some even did sample-rate conversion rather than playing 44.1 natively.
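The software alternative proposed above — computing the sub-sample delay instead of sliding a tweeter — can be sketched generically. This is not jcx's or Kunchur's actual code; it is a standard FFT phase-shift fractional delay of my own choosing, demonstrated on a hypothetical 1 kHz tone at the 48 ks/s rate mentioned above:

```python
import numpy as np

def fractional_delay(x, delay_samples):
    """Delay a periodic signal by a non-integer number of samples via an
    exact linear phase shift in the frequency domain."""
    n = len(x)
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(n)               # frequency in cycles/sample
    return np.fft.irfft(spec * np.exp(-2j*np.pi*f*delay_samples), n=n)

fs = 48_000                               # the 48 ks/s "pro" rate
tau = 5e-6                                # 5 us = 0.24 samples at 48 kHz
t = np.arange(480) / fs                   # exactly 10 cycles of a 1 kHz tone
x = np.sin(2*np.pi*1000*t)
y = fractional_delay(x, tau * fs)

# y matches the analytically delayed tone to near machine precision
err = np.max(np.abs(y - np.sin(2*np.pi*1000*(t - tau))))
print(err < 1e-9)                         # -> True
```

No moving boxes, no changing diffraction geometry: the microsecond shift is exact to floating-point precision, which is the whole point of doing it in the digital domain.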
I don't know if you attribute the failed understanding to Kunchur, but if you do, you are mistaken on that point.
Audibility of temporal smearing and time misalignment of acoustic signals talks about arrival-time differences and then tries to track that back to how signals are sampled. I think the use of square waves serves more to confuse the researchers than to help, and sends them off the scent. Thus, the conclusions grossly mis-attribute a need for frequency resolution versus inter-track temporal resolution (how finely you can shift the phase of a sinusoid in a 16/44 signal). I just read the paper and his AES talk outline.
As we are discussing sensory tests subject to statistical analysis, I somehow don't understand your assertion. The probability of a type 2 error shrinks only if the test is not underpowered.
In reality, in audio tests it is quite uncommon to see any a priori estimation of effect sizes or calculation of the power and sample sizes needed.
If a body of tests points in one direction, even if they all tend to be underpowered, that makes for a strong suggestion of which way the data tend. If a body of tests points in all directions, especially while underpowered, then Bayes shrugs his shoulders. Yes, the meta-analyses need to be done carefully.
Maybe I missed a few relevant snippets in that circus, but jj obviously did read one of the original papers quite late, misinterpreted something (IMO mostly due to the heated-up discussion and falsely attributed claims), and commented in only one or two posts; the rest was related to the personal fights.
What's there to miss? The statement was very clearly about separating SPL events that occurred 5 µs apart in one channel, period - nothing to do with IATD. Clear separation implies hundreds of kHz of BW, just as in astronomy it implies a certain spatial sampling (note the acoustic/optic analogy breaks down, since there is no negative optical flux). IMHO, as soon as I saw the analogy I thought there was some expectation bias.
It is perhaps easy to confuse an optical image (two dimensions) with stereo (two channels); for the argument here, the optical image is in a sense one-dimensional (both eyes see the same thing).
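The bandwidth point — that cleanly separating two events 5 µs apart in one channel needs hundreds of kHz — can be illustrated with a quick sketch. This is my own toy demo, not anything from the thread; the idealized brick-wall filters and the peak-counting heuristic are crude illustrative choices:

```python
import numpy as np

fs = 1_000_000                         # 1 MHz grid, fine enough to place events 5 us apart
n = 2000
x = np.zeros(n)
x[500] = 1.0                           # first event
x[505] = 1.0                           # second event, 5 us later

def brickwall_lowpass(sig, cutoff_hz):
    """Idealized brick-wall low-pass via FFT bin zeroing (illustration only)."""
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), d=1/fs)
    spec[f > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(sig))

def count_peaks(sig):
    """Count local maxima above half the global max -- a crude separability test."""
    thr = 0.5 * sig.max()
    return sum(1 for i in range(1, len(sig) - 1)
               if sig[i] > thr and sig[i] > sig[i - 1] and sig[i] >= sig[i + 1])

narrow = brickwall_lowpass(x, 20_000)   # audio bandwidth: the two events blur into one
wide = brickwall_lowpass(x, 200_000)    # hundreds of kHz: still two distinct events
print(count_peaks(narrow), count_peaks(wide))   # -> 1 2
```

At 20 kHz each impulse smears into a ~50 µs-wide pulse, so the pair merges into a single bump; at 200 kHz the pulses stay narrower than their 5 µs spacing and remain distinct — which is exactly why "separating events in ONE channel" is a very different claim from detecting an interchannel delay.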
When he builds a mechanical sliding mechanism to move one tweeter back and forth relative to another along the listening path to implement sub-sample delays, he has thoroughly shown he doesn't understand digital signal theory.
I'm sorry, but that is not a valid conclusion (formal logic).
...Understanding digital signal theory, he could have dispensed with the mechanics and the physical-acoustics confounders of the changing geometry by simply calculating the fractionally delayed signal and playing it out with a ca. 2003 PC motherboard chipset.
A couple of years ago I wrote that I'd like to see whether his results could be confirmed in the experiment that you've described above.
But he chose another method, and if you tried to execute your experimental approach, I'm sure you'd find yourself confronted with a whole bunch of confounders too, just different ones.
Btw, I wish more publications gave as much detailed information as Kunchur's did. And no, that does not mean there aren't questionable decisions or points.
I'm sorry, but that is not a valid conclusion (formal logic).
Please explain in more detail. Appealing to formal logic without showing an application of it turns formal logic into something like an amulet: it gets waved around, but that proves nothing and sheds no light.
Yes, you have to read a little more - he goes on at length explaining that ca. 2003 digital audio on a PC couldn't do the microsecond delay he wanted - which is factually wrong.
First of all, I owe Scott an apology, as I cited him in my post:
http://www.diyaudio.com/forums/lounge/280626-worlds-best-dacs-108.html#post4548814
and accused him of using some items from Schopenhauer's list, but my comments weren't related to his post; they addressed arnyk's post.
Please explain in more detail. Appealing to formal logic without showing an application of it turns formal logic into something like an amulet: it gets waved around, but that proves nothing and sheds no light.
Formally correct is a conclusion that is true whenever the premises are fulfilled.
Obviously an experimenter might choose this specific setup even though he understands digital signal theory.
Yes, you have to read a little more - he goes on at length explaining that ca. 2003 digital audio on a PC couldn't do the microsecond delay he wanted - which is factually wrong.
Could you please be more specific about the publication in which "he goes on at length....."?
Formally correct is a conclusion that is true whenever the premises are fulfilled.
Since the premise was shown incorrect in a number of different ways, including in the graphs I posted on this thread earlier, that doesn't apply.
No, jcx's premise and conclusion were:
When he builds a mechanical sliding mechanism to move one tweeter back and forth relative to another along the listening path to implement sub-sample delays, he has thoroughly shown he doesn't understand digital signal theory.
As I've said earlier, you missed Kunchur's point, but your screenshots illustrated exactly what he meant.
He was talking about the ability to resolve the essence of the original signal (which might have looked like the first signal in your 1024k screenshot - just for simplicity at this point), which undoubtedly isn't possible with a 44.1 kHz sampling system.
It does not help that this signal, after being distorted to the max during downsampling, is still different from the second one, also distorted to the max during downsampling......