John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
In a multitrack studio with a 64-channel mixing board, is every signal processing function analog? If not, are all the channels A/Ds with local clocks, or is there a master?

I'm actually not worried about 1/4 Hz at 20 kHz, nor at 1 kHz. I would worry about maintaining ITD though.

jn
I would expect most professional gear takes extreme care in controlling interchannel latency, but maybe someone has some numbers/specs. I doubt component tolerances in 64 purely analog paths would maintain a few degrees at 20 kHz. I would think your reference recordings for ITD would need to be simple mic pairs and 2 channels.
 
For simple gain and pan, I'd assume analog could keep interchannel latency well below a microsecond. If a single source branches to right and left and is then analog filtered in some way, certainly all bets are off. Once digital, I could see filters maintaining interchannel lock if there's only one clock.
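For a rough sense of scale, here's a back-of-envelope sketch (my own numbers, not from the thread: each channel modeled as a single-pole low-pass with an assumed 200 kHz corner, and a 5% component mismatch between channels):

```python
import math

def group_delay(f_hz, rc_s):
    # Group delay of a single-pole RC low-pass: tau = RC / (1 + (w*RC)^2)
    w = 2 * math.pi * f_hz
    return rc_s / (1 + (w * rc_s) ** 2)

f = 20e3                               # analysis frequency: 20 kHz
rc_nom = 1 / (2 * math.pi * 200e3)     # assumed 200 kHz pole in each channel
rc_hi = rc_nom * 1.05                  # +5% component tolerance on one channel

skew_s = abs(group_delay(f, rc_hi) - group_delay(f, rc_nom))
skew_deg = skew_s * f * 360            # same skew expressed in degrees at 20 kHz

print(f"interchannel skew: {skew_s * 1e9:.0f} ns = {skew_deg:.2f} deg at 20 kHz")
```

With these assumed numbers the skew comes out around 40 ns, about 0.3 degrees at 20 kHz — comfortably under a microsecond for a simple gain path; steeper analog filtering would of course multiply this.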

There's a master clock. There're even upscale ones!

Thanks,
Chris

Thanks. Guess I may have to upgrade from my Behringer UB502 and UB802, eh?

That is, if I wish to listen to something other than In-A-Gadda-Da-Vida...

jn
 
In a multitrack studio with a 64-channel mixing board, is every signal processing function analog? If not, are all the channels A/Ds with local clocks, or is there a master?

I'm actually not worried about 1/4 Hz at 20 kHz, nor at 1 kHz. I would worry about maintaining ITD though.

jn

In the world of pop effects all bets are off (no surprise). However, for a classical recording with multiple microphones, all the ADCs will be locked to a common word clock. There will be some skew, nanoseconds in the cabling and possibly more in the internal logic, but not anything by acoustic standards. In post-production most effects are digital and in sync. If an external analog effect is used, it will probably have enough impact that delays are not an issue. And it can be realigned on a DAW. There are bigger issues mixing dynamic, ribbon and condenser mics on a common source.
 
Where is the 20kHz band limit of which you speak? Even if the LP were band limited, the arm-cartridge resonance is producing higher sidebands than were recorded. That's the idea of finding a situation where something isn't right to test for audibility.

Scott means the red book CD band limiting.

Based on my investigations, I confirm the effect you mentioned: arm-cartridge resonance produces sidebands, which in turn build up successive IMD products, producing the effect of artificial bandwidth extension.
Low-resolution FFTs hide the effect, which becomes an eye-opener with high-resolution FFTs.
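The FFT-resolution point can be illustrated with a generic numeric sketch (my own synthetic signal, not George's measurement): a 1 kHz tone amplitude-modulated at 8 Hz carries sidebands only 8 Hz from the carrier, which a long FFT resolves cleanly and a short FFT cannot:

```python
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs              # 2 s record -> 0.5 Hz FFT bin spacing
# 1 kHz carrier with 10% AM at 8 Hz: sidebands at 992 and 1008 Hz, amplitude 0.05
x = (1 + 0.1 * np.cos(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)

X = np.abs(np.fft.rfft(x)) / len(x)     # normalized so a unit sine reads 0.5
side = X[2016]                          # 1008 Hz sideband (bin index = f / 0.5 Hz)
floor = X[2060]                         # 1030 Hz: a nearby, signal-free bin

# A 50 ms slice instead would give 20 Hz bins, wider than the 8 Hz sideband
# offset, so the same sidebands would be buried inside the carrier's bin.
print(f"sideband {side:.4f} vs floor {floor:.2e}")
```

With 0.5 Hz bins the sideband stands far above the floor; with 20 Hz bins the same energy is indistinguishable from the carrier.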

Vinyl has so many mechanical artifacts I doubt any judgement about digital quality will come from digitizing LPs.

You are too polite :D
Vinyl was not designed and engineered to be a data-storage medium that stands up to detailed measurement.
It was designed to be a transcription medium for voice and music.
Reporting the mechanical issues is not to be taken as bashing the medium.

SY, I have the feeling you've read those two posts of charlesp210 the wrong way.

Denon spec their compliance numbers at 100 Hz, and apparently it's significantly higher at ca. 10 Hz. I have heard a factor of 1.5 used.
Thanks. It will be interesting to see or obtain actual data.

I am in the third day of trying to bring out any reliable, consistent, measurable hint of compliance change vs frequency.
Nothing up to now.
I will devote one more day to this, but I don't have great expectations anymore.
Mass decoupling alone, as the frequency increases, is enough to mask any compliance change.
SY, if you have the means to measure elastomer compliance with established laboratory procedures, I can send you the elastomer ring from a DL-103 suspension.

George
 
Vinyl was not designed and engineered to be a data-storage medium that stands up to detailed measurement.
It was designed to be a transcription medium for voice and music.
Reporting the mechanical issues is not to be taken as bashing the medium.

I've read that if a modern engineering analysis of phonographs could have been done at the time, they never would have bothered. Just too difficult.

But you could say the same thing about photography. The image on a flat piece of paper isn't really a whole lot like the original subject. Yet I've read that some aboriginal people, when first shown a photograph, acted as if they could not tell it from the real people who were photographed.

These thoughts are slippery, but fun.

Thanks, as always,
Chris
 
jneutron said:
I did not mean reconstruct then re-sample. My question was geared towards a single 20 kHz sine being fed into two independent 96k converters which are not synced to each other.
Sorry, I misunderstood.

OK, two sets of independent samples at nominally the same rate will be different, but completely equivalent. You can't compare them sample-for-sample, but you can reconstruct the original analogue signal identically (apart from noise) from either.
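A minimal numeric sketch of that claim (my own, with assumed numbers: a 20 kHz tone, two 96k sample grids offset by an arbitrary 3.7 µs, ideal sinc reconstruction) — the two sample sets share not a single value, yet reconstruct the same waveform:

```python
import numpy as np

fs = 96000
f0 = 20000                               # 20 kHz tone, comfortably under fs/2

def sample(offset_s, n=2000):
    # One converter: n samples of the "analog" sine; clock phase = offset_s
    t = np.arange(n) / fs + offset_s
    return t, np.sin(2 * np.pi * f0 * t)

def sinc_reconstruct(t_samp, x, t_out):
    # Ideal Shannon reconstruction: a sinc kernel centered on each sample instant
    return np.array([np.sum(x * np.sinc((ti - t_samp) * fs)) for ti in t_out])

t_a, x_a = sample(0.0)                   # converter A
t_b, x_b = sample(3.7e-6)                # converter B, arbitrary 3.7 us clock offset

# Compare both reconstructions on a common grid, away from the record's edges
t_out = np.linspace(0.005, 0.015, 200)
y_a = sinc_reconstruct(t_a, x_a, t_out)
y_b = sinc_reconstruct(t_b, x_b, t_out)

print(f"max disagreement: {np.max(np.abs(y_a - y_b)):.1e}")
```

The disagreement is only the truncation error of the finite sinc sum; both reconstructions also match the original analogue sine to the same accuracy.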
 
From 3 years ago. j.j. on time resolution:

To the sampling issue, it is simply false that one cannot reproduce an ITD smaller than a sampling instant. In fact, we can resolve, at 20kHz, for a "Redbook CD", a time resolution proportional to, and very near, 1/(2*pi*20000*2^16) in a mechanical fashion. This shows, I think, that sampling is simply not an issue (please note, per Shannon, the absolute absence of the sampling rate in that equation beyond the knowledge that the sampling rate is sufficient to reproduce 20kHz). Arguing that one needs 500kHz because of interchannel time resolution due to sampling is simply wrong, period. Arguing that you need 500kHz bandwidth, likewise, is just wrong.
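Plugging the numbers into j.j.'s expression (reading the 2^16 as the 16-bit amplitude resolution of Redbook) gives the scale he's referring to, on the order of a hundred picoseconds:

```python
import math

f = 20000          # highest frequency of interest for Redbook, Hz
levels = 2 ** 16   # 16-bit amplitude resolution

dt = 1 / (2 * math.pi * f * levels)    # j.j.'s time-resolution scale, seconds
print(f"{dt * 1e12:.0f} ps")           # -> 121 ps
```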
 
it seems this just doesn't get through to some people

RedBook CD has plenty of bits for nanosecond resolution

this can be simmed - like I have with the delayed click tone bursts before when Kunchur's lack of DSP prowess has come up

LTspice - free, already used by many here


below I have created 2 50 ms raised-sine enveloped 1 kHz tone bursts: B1, B3

they are delayed by td1-td2 = -119 us

added 2 LSB pp TPDF dither (B2, B4), then wrote both out to a 2-channel .wav file at 16/44100


the next asc reads the .wav, then I run an fft on the burst waveform

the cursors are set to the peak at 1 kHz

calculating from displayed phase diff of -42.8401° / 360° * 1 ms = -119.000278 us

that's sub-nanosecond resolution folks - encoded, transmitted through a CD resolution .wav file

all the moving parts are visible, you can play with the sims yourself

(LTspice group delay calc seems to have a rounding? error - looks like I'll be emailing mike)

just to see it wasn't dumb luck I can change the td2 value to 12 us for a diff of 11 us - which is pretty much halfway between 2 samples @ 44.1k
-3.96005° / 360° * 1 ms = -11.000139 us

again sub nanosecond temporal resolution - just coincidence?


things you can try with the sims - edit the BV for shorter, higher frequency, lower amplitude - say 5 ms of 6 kHz at -20 dB: -23.7746° / 360° / 6 kHz = 11.007 us

that's 7 ns error due to shorter burst and lower amplitude showing limiting with dither noise floor

[Attached images: temporal_asc.png, temporal_plt.png]
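For anyone without LTspice, the same experiment can be approximated in a few lines of Python (my own re-creation of the idea, not jcx's files: 16-bit TPDF-dithered raised-sine 1 kHz bursts at 44.1 kHz, with the 119 µs interchannel delay recovered from the FFT phase difference at the carrier bin):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, N = 44100, 1000, 44100           # 1 s record: 1 kHz lands on an exact bin
tau = 119e-6                             # interchannel delay to encode

def burst(delay_s):
    # 50 ms raised-sine enveloped 1 kHz tone burst, starting 100 ms in
    t = np.arange(N) / fs - 0.1 - delay_s
    env = np.where((t >= 0) & (t < 0.05),
                   0.5 * (1 - np.cos(2 * np.pi * t / 0.05)), 0.0)
    return 0.9 * env * np.sin(2 * np.pi * f0 * t)

def quantize16(x):
    # Scale to 16 bits, add 2 LSB pp TPDF dither, round: models the .wav step
    dither = rng.random(N) - rng.random(N)
    return np.round(x * 32767 + dither) / 32767

a = quantize16(burst(0.0))
b = quantize16(burst(tau))

k = f0 * N // fs                         # DFT bin of the 1 kHz carrier
phase = np.angle(np.fft.rfft(b)[k] / np.fft.rfft(a)[k])
tau_est = -phase / (2 * np.pi * f0)      # delayed channel -> negative phase slope
print(f"recovered delay: {tau_est * 1e6:.4f} us")
```

The recovered delay comes back within a few nanoseconds of the encoded 119 µs — a tiny fraction of the 22.7 µs sample period — matching the spirit of the LTspice result above.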
 
The trouble is, 16-bit replay for a lot of people, a lot of the time, doesn't sound quite right; there's always a niggle, an itch that doesn't go away, that can't be scratched to make it better ... there must be a reason for that irritation! And an easy culprit is lack of resolution; it sorta sounds right as a "reason", so it will do in the absence of a more satisfying explanation ...

Unfortunately, :p, the real answer is, and always will be, that the quality of the implementation of the replay chain is where the action is - and until that is addressed the ongoing thrashing about this matter will continue ... :D
 
Frank,
It always seems to come back to just some lousy transfers from old analog tapes to the digital format; not the format itself, but the mastering engineers messing things up or just using lousy multi-generational tapes. We have all, I imagine, heard really good CD sound at some point, and if on the same system a CD sounds bad, that is where the problem lies. You can blame it on everything else and give us your prose, but it really is no different from a lousy vinyl album or the compressed garbage that so many engineers seem to put out there, thinking we won't notice.
 
What people hear as being 'wrong' with 16-bit CD sound will, I'm sure, vary between individuals, but what I hear, or listen for, is different from what you're talking about. I have miserable transfers on CD which I know sound absolutely revolting on normal systems - and those are the recordings I want to 'rescue', in the sense that I want to be able to enjoy listening to them. Thus I look firstly for a system capable of revealing everything, absolutely everything, that's been captured - and the majority of systems do fail here: at the recent hifi show there were only a couple that showed this capability; most suffered from the thick-blanket-over-the-speakers syndrome. And secondly, that this detail is rendered with absolutely minimal extra distortion imparted by the replay chain. IME, when these two conditions are met the really difficult recordings do 'work'; the ear/brain is getting the right information to untangle the sound.

I'm sure your experiences are very different from that, and that therefore what I say doesn't make sense - but this exercise of recovering sufficient quality from recordings to bring them to life is something I've done over and over again, for years now. As they say, I could do it with my eyes shut, :D ... sorry 'bout that ...
 
...
However for a classical recording with multiple microphones all the ADCs will be locked to a common word clock. There will be some skew, nanoseconds in the cabling and possibly more in the internal logic, but not anything by acoustic standards. ...

It is a bit more complex than that. Word clock only guarantees that all converters are running at the same sample rate; it does not automatically guarantee that all ADC chips are sampling at the same time.
We're OK if we're using one multi-channel box (or console), but if more than one converter box is used in the recording setup, it can be problematic.
See attachment and google for some Crystal app notes.
Best,

 

Attachments: ADC-sync.jpg (99.1 KB)
Frank,
That just doesn't compute. Unless you are doing some masking or harmonic changes, a great system should not be able to correct a lousy recording; that just says you have changed the material from what is actually on the disk. I am not saying you might not find some system that makes a bad recording sound listenable, but then it is doing something it shouldn't; if anything, a great system should let you hear what somebody did wrong on the recording side. You can't fix a mathematical problem that is wrong by doing something after the fact, but you can round up an answer and hide the errors; that is what it appears you are attempting to do with a lousy recording.

And I do agree that there are many terrible-sounding systems at audio shows, but that has nothing to do with correcting a bad recording; it should just sound that much worse.
 
The recording is not being "corrected", the system is allowing you to hear more of the part of the recording that is of interest - the actual music rather than all of the distortion artifacts. Think of it this way: you're in a room listening to a live instrument but something in the room, not connected to the musician or instrument at all, is making an irritating, jarring sound - this is going to get in the way of you enjoying the sound of the instrument, and the louder that off sound is, the more irky the tonal quality, the worse things get. Now, make it even worse: that sound, having nothing to do with the live instrument, actually modulates, varies itself to match the intensity of the musical sound - this is going to drive you crazy! The solution: get up, and locate the source of that irritation, and sort it out, minimise its impact.

This is subjectively what happens when you refine the sound of the replay; you are reducing the intensity of the "off" components in the sound, and then the elements of the musical event captured are more clearly heard - people will say, what information are you talking about? Well, even the really primitive recordings do capture the acoustic clues, you can hear the echo in 100 year old recordings, they make aural sense - and are satisfying to hear ...
 
It is a bit more complex than that. Word clock only guarantees that all converters are running at the same sample rate; it does not automatically guarantee that all ADC chips are sampling at the same time.
We're OK if we're using one multi-channel box (or console), but if more than one converter box is used in the recording setup, it can be problematic.
See attachment and google for some Crystal app notes.
Best,


I did reference that there can be internal delays as well, but 81 ns!!! That's only about 28 µm of acoustic path length. I think this suggests the stereo pairs need to be going through the same ADC chip. You would also need phase-matched cables with known thermal coefficients for distributing the clock. These are the only cables I know of in the industrial market that rival audiophile cables for cost.

Actually, I would not be surprised if there were internal processing delays in ADC boxes on the order of microseconds; however, I don't think that would really mess up the acoustics much. I'm sure JN can help with the necessary scale on this issue.
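To put those time skews on an acoustic scale (simple arithmetic, taking the speed of sound in air as 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 C

def skew_to_path_mm(skew_s):
    # Acoustic path length equivalent to an interchannel time skew
    return SPEED_OF_SOUND * skew_s * 1e3

print(f"81 ns   -> {skew_to_path_mm(81e-9):.4f} mm")   # word-clock skew above
print(f"1 us    -> {skew_to_path_mm(1e-6):.3f} mm")    # processing-delay scale
print(f"22.7 us -> {skew_to_path_mm(22.7e-6):.1f} mm") # one sample at 44.1 kHz
```

Even a whole sample period at 44.1 kHz corresponds to under a centimetre of microphone placement.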
 
Tracks can easily be slid around in time in the DAW, and commonly are when multitracking (to correct latency issues). But plenty of great recordings of real-time events like orchestral works were made before DAWs, and nobody got too bent about timing from the various microphones at various distances in the same space. Even Decca, Mercury and RCA used occasional spot mics in the Golden Age.

Thanks,
Chris
 
Very nice demonstration of digital reconstruction filter capability (right?). Is that the same functional element as what is also called an interpolation filter?

The amazing ability of reconstruction filters to reconstruct, say, a 19 kHz tone where there are just barely more than 2 samples per cycle…that was demonstrated to me in the mid-1980s, and I quit arguing about it. Though I would still say nothing is perfect, but that's a quibble.

Anyway, regarding human hearing, where is the bandwidth (aka time resolution) limit? Is it similar to a mechanical filter in parts of the ear or cochlea? Or is it later, in some kind of neural-network processing? How is it that people don't hear tones above whatever, say 16.1 kHz for me? It could be a mechanical input limitation, and that might end the story, maybe, that we need higher bandwidths. On the other hand, there are many peculiarities of human perception. In a lot of areas, it is widely believed by scientists that we "sense" things only after having responded to them. What we sense is actually the response, and our consciousness creates a story that explains what we sense after the fact. This explains a lot of well-known anomalies. It is our consciousness that does the reporting, but it doesn't directly do the sensing. So that's one of many questionable areas I wonder about.

But back to the obvious electronic stuff, with hypothetical assumptions. If it were true that the ear's mechanical bandwidth limit were actually around 100 kHz, then to achieve the same response at the neural level we would need something like a 200 kHz sampling rate, right? Would we ultimately be able to "perceive" the difference, even if we don't hear tones that high? I don't know; many people claim to hear the effect of bandwidth extension above 20 kHz, and I believe at least one positive result was published in JAES. Hardly solid proof, I know. Or maybe it was published in some psychology journal, which you might say had lower standards, though being peer-reviewed FWIW.

Anyway, given higher-bandwidth response, the same would apply in the time domain, right? While a digital reconstruction can resolve a single instant down into nanosecond dust, it cannot possibly reconstruct multiple events within one sampling interval, right?

Where does this ability to reconstruct end? If it is only limited (if at all) for multiple events within a single sampling interval, what about one event at 4/5 of one interval, and another at 1/5 of the next? Those two events are now 2/5 of a sampling interval apart, but can they still both be resolved to the nanosecond by a digital reconstruction filter? It seems to me that the minimum time difference a sampling system can resolve is the sampling interval. Below that, you get one pulse in between, or something like that.
 