Now where did this new straw-man come from?
It is adequate. It is not excellent.
-RNM
No straw man, I regularly see it attacked as rubbish here.
I do think it's excellent and I'm not yet sold on the audible merits of high-res. Again, that doesn't mean I think a little more breathing room would hurt.
Big IMO: the reason high-resolution formats and sample rates caught on in recent years has nothing to do with sound quality. It is broadband bandwidth and ultra-cheap storage, to the point that they are almost free.
Get a free, larger, new fridge and stuff it. The new large fridge is going to make you feel so much better, even if the beer tastes exactly like before.
Hostie de criss de câlice de tabarnak! (a string of Québécois sacres, i.e. church-derived swear words)
Cinq six boîtes de tomates vartes. ("Five, six boxes of green tomatoes", with the dialectal "vartes" for "vertes")
No straw man, I regularly see it attacked as rubbish here.
I do think it's excellent and I'm not yet sold on the audible merits of high-res. Again, that doesn't mean I think a little more breathing room would hurt.
You don't need anyone here to change your mind. It's for those who have a high-res system and want to explore the best we can do, not what is adequate.
If I were satisfied that CD sounded like real music, I would say it is adequate too. I hear the improvements with higher sampling rates, and there are solid reasons for that. It isn't just that there is more available bandwidth. I hear it, too. So to me, with my system, it is a real improvement.
Thx-Richard
The problem is that a sine wave has a single frequency in its spectrum only if it has infinite duration. Some people do not realize that a sine wave turned on at some defined instant, or turned off at some defined instant, is no longer frequency limited and no longer has a single line in its spectrum. So it may easily violate the Nyquist criterion. Hence the "first cycle distortion" and similar nonsense. Technically it is a single frequency only if we analyze between the turn-on and turn-off instants and have enough time for a windowed or synchronized spectral analysis.
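As a rough numerical illustration of this point (my own sketch, not from the post; it assumes Python with NumPy): a tone that is switched on and off is no longer a single spectral line, and some of its energy spreads past the 22.05 kHz Redbook Nyquist limit even though its nominal frequency is well below it.

```python
# Sketch: spectrum of a continuous 15 kHz tone vs. the same tone gated on
# for only 4 ms. The gated burst spreads energy past 22.05 kHz.
import numpy as np

fs = 705600                        # fine analysis rate (16 x 44.1 kHz)
f0 = 15000.0                       # nominal tone frequency, Hz
n = np.arange(int(fs * 0.02))      # 20 ms of samples (an exact number of cycles)
tone = np.sin(2 * np.pi * f0 * n / fs)

burst = tone.copy()
t = n / fs
burst[(t < 0.008) | (t > 0.012)] = 0.0     # tone exists only between 8 ms and 12 ms

freqs = np.fft.rfftfreq(len(n), 1 / fs)
above = freqs > 22050                       # beyond the Redbook Nyquist limit

def energy_above(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec[above].sum() / spec.sum()

print("continuous tone, energy above 22.05 kHz: %.2e" % energy_above(tone))
print("gated burst,     energy above 22.05 kHz: %.2e" % energy_above(burst))
```

The exact numbers are not the point; the point is that the act of gating, by itself, creates out-of-band content that a band-limited chain cannot pass, so a recorded tone cannot start or stop instantaneously.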
Although we've gone down some inaccurate paths regarding sampling, one very good point has been raised: approaching the Nyquist limit in the frequency domain means that the signal is required to be less variable in the time domain. One way to think about it is that an amplitude variation generates "sidebands" in the frequency domain that try to creep outside the Nyquist limit. This is not a fault of Nyquist sampling, but rather a consequence of its definition.
PMA has encapsulated this perfectly, IMO. True clarity is always beautiful to see.
Much thanks, as always,
Chris
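To put a hedged number on the "sidebands" described above (again my own NumPy sketch, not anyone's measurement): a 20 kHz carrier fully amplitude-modulated at 4 kHz has components at 16, 20 and 24 kHz. The 24 kHz sideband lies above the Redbook Nyquist limit, so an ideal brick-wall filter at 22.05 kHz removes it, and the surviving signal necessarily has a less variable envelope.

```python
# Sketch: removing an above-Nyquist sideband makes the envelope less variable.
import numpy as np

fs = 192000                                   # fine analysis grid
t = np.arange(int(fs * 0.05)) / fs            # 50 ms
x = (1.0 + np.cos(2 * np.pi * 4000 * t)) * np.cos(2 * np.pi * 20000 * t)

# Ideal brick-wall low-pass at 22.05 kHz, applied in the frequency domain.
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 22050] = 0.0
y = np.fft.irfft(X, len(x))

def envelope(sig):
    """Magnitude of the analytic signal (Hilbert transform via FFT)."""
    S = np.fft.fft(sig)
    h = np.zeros(len(sig))
    h[0] = 1.0
    h[len(sig) // 2] = 1.0
    h[1:len(sig) // 2] = 2.0
    return np.abs(np.fft.ifft(S * h))

mid = slice(len(x) // 4, 3 * len(x) // 4)      # stay away from the record edges
print("original envelope swing: %.2f" % np.ptp(envelope(x)[mid]))   # about 2.0
print("filtered envelope swing: %.2f" % np.ptp(envelope(y)[mid]))   # about 1.0
```

The filter does not "fail"; it simply cannot pass the part of the amplitude variation that lives above Nyquist, so the time-domain shape of what does survive is different.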
I hear it, too. So to me, with my system, it is a real improvement.
Thx-Richard
Joke:
Double blinded? 😀
If someone says he cannot hear the difference, then neither can anyone else, because his listening ability is the best.
Tss Tsss "vertes"
Tss Tss, I told you Québécois can be challenging. Leave it as "vartes" and read the phrase quickly. It will take on a totally new meaning for you.
Yes, you were punked

JN,
Stereo! Stereo? It is very rare for me to do a stereo system. Starting at the bottom, for restaurant background music, the left and right channels would be swapped for about half of the seating. Left-to-right balance would be acceptable for less than a third of the seating area.
Second, we have places of public assembly; music stores often do them as stereo loudspeaker systems. Then the system operators have to send monaural signals to the loudspeakers, otherwise the audience not in the center will have reduced speech intelligibility. (Think of a church with a lectern on one side and the pulpit on the other.)
Next we get to stadia. Folks have tried two-channel systems: it almost doubles the cost, the imaging gets quite confused with distributed systems, and although some announcers might seem two-faced, they still have only one mouth.
Even in concert systems where everything is "stereo", your favorite "pan" pot is usually quite close to centered on each and every input. Otherwise many folks would not get the full range of the "music." (Just my take on some really bad performances I have been paid to attend. Probably the reason I don't like to do live performances.)
For large performance venues, left-center-right systems really do work best. They can accommodate stereo reproduction and, with proper loudspeaker array choice, actually cover almost all of the audience.
But then we could get into reverberant field issues...
Indeed. I was obliged to ask Gougeule; it is hard to find the "sacre" behind it, as it is not usual in my poor country.
So, are we making peace? We played enough to the crowd, right? And Scott will be happy.
Hi Chris,
I very much agree with your recent, well-formulated, down-to-earth postings.
But this one sounds (again) as if Nyquist does not apply to short events.
How can any sideband creep outside the Nyquist criterion while a brick-wall filter is there to prevent this from happening?
Hans
And let's not confuse aliases with mirror images (A/D vs. D/A).
Nyquist only applies to A/D 😀
Hans
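A small sketch of that distinction (mine, in NumPy; "mirrors" here meaning the images produced on the D/A side): sampling a 30 kHz tone at 44.1 kHz folds it down to an alias at 14.1 kHz, whereas naively upsampling a 10 kHz tone by zero-stuffing creates an image at 34.1 kHz that the reconstruction filter has to remove.

```python
# Sketch: aliasing on the A/D side vs. imaging on the D/A side.
import numpy as np

fs = 44100
n = np.arange(4410)                              # 0.1 s

# A/D: a 30 kHz tone sampled at 44.1 kHz aliases to 44.1 - 30 = 14.1 kHz.
x = np.sin(2 * np.pi * 30000 * n / fs)
f = np.fft.rfftfreq(len(n), 1 / fs)
print("alias lands at %.0f Hz" % f[np.argmax(np.abs(np.fft.rfft(x)))])

# D/A: zero-stuffing a 10 kHz tone to 88.2 kHz leaves an image at
# 44.1 - 10 = 34.1 kHz, which the reconstruction (interpolation) filter removes.
y = np.sin(2 * np.pi * 10000 * n / fs)
up = np.zeros(2 * len(y))
up[::2] = y                                      # naive 2x upsampling, no filter
f2 = np.fft.rfftfreq(len(up), 1 / (2 * fs))
spec = np.abs(np.fft.rfft(up))
spec[f2 < 20000] = 0.0                           # look only above the original band
print("image lands at %.0f Hz" % f2[np.argmax(spec)])
```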
For a sound system that has to produce a stable and reproducible soundstage, it requires understanding localization. That localization parametrics are unheard of or unknown by some here on this thread astounds me.
I think the issue is that you have the chasing-the-weeds DAC mob who don't care about soundstage, and the pragmatic bunch who view it as a mind trick with only two sources anyway. The lack of any pickup of proper soundfield reproduction means that those of us who are interested are a tiny minority.
That cosine-rule, level-only pan pots still exist boggles the imagination; there is no further need to support mono compatibility.
This irks me too, but my solution is to consider pan-potted studio music as not in the pile of reference recordings for checking my system out. It should be noted that a lot of people still listen in mono on those hateful home assistant things (and Sonos etc.).
That nobody even considers the significant difference between point, line, and planar speakers in the IID near-field equations is boggling.
And some of us have an unholy mix of those in one speaker 🙂
The assumption that Redbook is by default adequate to support interchannel localization requirements is also astounding. Whenever I post the base requirements, the first response is always "bs", followed by "it's not a problem", followed by misunderstandings of rudimentary sampling concepts.
I'm still lost on the assertion that it isn't, TBH. The highest note you are likely to get in any recording has a fundamental of around 4 kHz, so unless I am being very dense the first four harmonics have no issues, and the fifth, not being a pure tone and riding on the others, is likely not to suffer zero-crossing issues either. There will be some forms of synthetic creation where you might be able to generate the odd confounder, but for real music in a real acoustic space (which is what I care about) I am having difficulty spotting the flaw.
The assumption that Redbook is by default adequate to support interchannel localization requirements is also astounding. Whenever I post the base requirements, the first response is always "bs", followed by "it's not a problem", followed by misunderstandings of rudimentary sampling concepts.
It is important, if one wishes to engage in a discussion, to actually read and understand what is being posted.
I hope for those days. Alas, I suspect on this forum, the tone has become one of turf wars, condescension, and testosterone. I see no indication of change.
Jn
Jn, I think you are mixing up real facts into an untrue conclusion: that Redbook is inadequate to support interchannel localization requirements.
It is true that recognizable ITD cues are shorter than the time between samples in Redbook.
However, it is also true that localization through ITD is limited to stimuli below 3.5 kHz. It is also true that the ear codes for zero crossings of tones, and that these zero crossings are all the brain has for determining localization.
Now, back to Redbook. It can reproduce a 3.5 kHz tone with uncanny precision, at ppm-level error. The zero crossings of the reproduced tone are totally unrelated to the sampling frequency. The zero crossings of the reproduced tone are only related to, and fully determined by, the zero crossings of the tone before digitizing. There is no 'graininess' in this, besides the possible graininess of time itself.
Since it is the temporal precision of the reproduced zero crossings that determines the potential quality of ITD, and since this temporal precision is perfect under Redbook, there is no issue here.
At last, a proper analysis.
Thank you.
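For what it is worth, the zero-crossing argument above is easy to check numerically. Below is a minimal sketch (my own, assuming Python with NumPy and an ideal band-limited reconstruction; the helper names are mine): a 3.5 kHz tone is given a 5 µs interchannel delay, far smaller than the 22.7 µs Redbook sample period, sampled at 44.1 kHz, and reconstructed by FFT interpolation. The recovered zero-crossing offset comes back as 5 µs, not quantized to the sampling grid.

```python
# Sketch: a sub-sample interchannel delay survives 44.1 kHz sampling.
import numpy as np

fs = 44100
f0 = 3500.0
tau = 5e-6                                  # interchannel delay: 5 us << 22.7 us sample period
N = 4410                                    # 0.1 s = exactly 350 cycles, so FFT interpolation is exact
t = np.arange(N) / fs
left = np.sin(2 * np.pi * f0 * t)           # what the ADC stores for each channel
right = np.sin(2 * np.pi * f0 * (t - tau))

def resample_fft(x, factor):
    """Ideal band-limited interpolation of a periodic signal via FFT zero-padding."""
    X = np.fft.rfft(x)
    Xp = np.zeros(factor * len(x) // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, factor * len(x)) * factor

UP = 64                                     # reconstruct on a 2.8224 MHz grid
fine_fs = fs * UP
left_u = resample_fft(left, UP)
right_u = resample_fft(right, UP)

def next_rising_zero(x, rate, t_start):
    """Time of the first negative-to-positive zero crossing after t_start seconds."""
    i = int(t_start * rate)
    while not (x[i] < 0.0 <= x[i + 1]):
        i += 1
    frac = -x[i] / (x[i + 1] - x[i])        # linear interpolation between fine samples
    return (i + frac) / rate

zl = next_rising_zero(left_u, fine_fs, 0.0501)
zr = next_rising_zero(right_u, fine_fs, 0.0501)
print("recovered interchannel delay: %.4f us (expected 5.0000 us)" % ((zr - zl) * 1e6))
```

This only addresses the timing of a steady tone, which is exactly the case described above; it says nothing about level (IID) cues or about content above the band.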
