John Curl's Blowtorch preamplifier part II

In those methods I mentioned, the data rate is either locked onto the DA clock of the CD player, or handled in asynchronous ways. Smart buffering means that the circuitry doesn't have to care if clock and data rate keep varying with respect to each other: there's lots of padding, so to speak, between what comes in and what the DA circuit needs. The ultra-low-jitter clock regenerators create the clock signal from the data rate, removing any significant wobble as they do so.

Frank

It seems you are confusing the two methods. In buffering you are transferring the serial data to parallel and only one clock is required.

The issue was resampling and how the clock rate is embedded in the word clock. Even the clock-jitter cleaner chips use PLLs to clean up the dirty clock. Now if you have a 44,100 Hz word clock, how long would it take a clock cleaner chip to settle to the rated femtoseconds of jitter?

It seems every serial data format requires a PLL to recover the clock data.

Then the issue is not rms jitter but worst case. With 850 picoseconds of rms jitter, my AP System 2 shows the 13 MSBs are without error, but bits 14, 15 & 16 get errors. Now are there errors at that level, or is the AP wrong?

Oh the settling time would be less than the total playing time of the CD, but not by much.
 
It seems you are confusing the two methods. In buffering you are transferring the serial data to parallel and only one clock is required.
I think we have general confusion here, :) ...

Okay, buffering in the sense I used it has nothing to do with serial or parallel issues; I'm talking about data being transferred between two areas of a system which are running with unsynchronised clocks. The data could be parallel, serial or standing on its head, :D

The issue was resampling and how the clock rate is embedded in the word clock. Even the clock-jitter cleaner chips use PLLs to clean up the dirty clock. Now if you have a 44,100 Hz word clock, how long would it take a clock cleaner chip to settle to the rated femtoseconds of jitter?
The resampling I was doing is completely offline: the data in a computer file is transformed into another format in another computer file -- no DA was harmed, or involved, in the process ... :D

As regards the settling time of clock cleaners, I'm sure it's faster than anyone could hear an audible difference.

It seems every serial data format requires a PLL to recover the clock data.

Then the issue is not rms jitter but worst case. With 850 picoseconds of rms jitter, my AP System 2 shows the 13 MSBs are without error, but bits 14, 15 & 16 get errors. Now are there errors at that level, or is the AP wrong?

Oh the settling time would be less than the total playing time of the CD, but not by much.
I'm not following you here: where are you measuring this jitter? And why are you getting errors of this magnitude? This does not compute! What you're saying here, overall, does not make sense to me.

Frank
 
Frank, I'm not sure I understood your last reply to me. Please explain.
Do you agree or not that an LPF at 300 kHz will "sound better" than one at 50 kHz?

About clock cleaners, any PLL will introduce a lot of noise and jitter.

I agree that digital manipulations are transparent until the data leaves the computer and walks away on USB, FireWire or any other digital bus: that is where the problems start. Noise CAN enter the digital stream somehow and make it less perfect and solid than it should be.
(Note that I purposefully didn't mention S/PDIF and its can of worms).
 
Now if you have a 44,100 Hz word clock, how long would it take a clock cleaner chip to settle to the rated femtoseconds of jitter?

It seems every serial data format requires a PLL to recover the clock data.

Then the issue is not rms jitter but worst case. With 850 picoseconds of rms jitter, my AP System 2 shows the 13 MSBs are without error, but bits 14, 15 & 16 get errors. Now are there errors at that level, or is the AP wrong?

Oh the settling time would be less than the total playing time of the CD, but not by much.

Clock recovery for 100G Ethernet is rated in femtoseconds; you're making numbers up again. On what do you base your settling-time estimate? You are orders of magnitude off. Jitter and bit error rate are related, and there is much literature on this. You are definitely not dropping any bits of data in the literal sense.

A priori, in a serial data system there is no higher probability (in general) of an error in one bit position than in another. In a simplistic sense, the jitter moves the effective sampling point, adding noise to the eventual output. There is random jitter, pattern-dependent jitter, and so on; again, there are mass quantities of stuff out there on this.
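
For concreteness, here is a minimal sketch of that sampling-point effect, using the textbook estimate SNR = -20·log10(2π·f·tj) for a full-scale sine; the formula, the 20 kHz test tone and the 50 ps comparison value are assumptions of mine, with the 850 ps figure taken from the post above:

```python
import math

# A minimal sketch, not Scott's math: the textbook estimate for the
# noise a jittery sampling clock adds to a full-scale sine of
# frequency f is SNR = -20*log10(2*pi*f*tj), tj = rms jitter.
# 850 ps is the figure quoted above; 50 ps is an illustrative
# "good clock" value for comparison.

def jitter_snr_db(f_hz: float, tj_s: float) -> float:
    return -20.0 * math.log10(2.0 * math.pi * f_hz * tj_s)

for tj in (850e-12, 50e-12):
    snr = jitter_snr_db(20e3, tj)  # 20 kHz: worst case in the audio band
    print(f"{tj * 1e12:4.0f} ps rms -> SNR ~ {snr:5.1f} dB (~{snr / 6.02:.0f} bits)")

# 850 ps works out to roughly 79 dB at 20 kHz, i.e. about 13 clean
# bits, which is at least consistent with the "13 MSBs without error"
# observation quoted above.
```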

Brute force, a modest FIFO and an independent clock, is an alternative; there are several threads here on this.
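
A toy sketch of what that brute-force approach boils down to; the buffer depth and the names are illustrative only:

```python
from collections import deque

# Toy sketch of the FIFO-plus-independent-clock idea: the source side
# writes at its own (wandering) rate, the DAC side reads at its fixed
# local clock, and the buffer depth absorbs the drift between the two.

FIFO_DEPTH = 8192                  # samples of "padding" between domains
fifo = deque(maxlen=FIFO_DEPTH)

def source_write(sample):
    """Runs in the incoming-data clock domain (jittery, recovered)."""
    fifo.append(sample)

def dac_read():
    """Runs on the local low-jitter DAC clock, never the link clock."""
    return fifo.popleft() if fifo else 0   # underrun would need rate matching

# Half-fill the buffer before starting playback so it can absorb
# drift in either direction.
for n in range(FIFO_DEPTH // 2):
    source_write(n)
print(dac_read(), len(fifo))
```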
 
Yes. The evil is the flux.

It's also known as copying the CD to an SD card and using one good clock in an outboard DAC. I think the point is being lost that there is no time information per se in the file as data, only that the samples are supposedly exactly equally spaced at SOME frequency, hopefully very near 44,100 Hz in this case. A quick survey shows that 25-50 ps of jitter would beat many very expensive commercial component DACs out there; I think ED is riding his -150 dB hobby horse again.
 
It's also known as copying the CD to an SD card and using one good clock in an outboard DAC. I think the point is being lost that there is no time information per se in the file as data, only that the samples are supposedly exactly equally spaced at SOME frequency, hopefully very near 44,100 Hz in this case. A quick survey shows that 25-50 ps of jitter would beat many very expensive commercial component DACs out there; I think ED is riding his -150 dB hobby horse again.

Scott

The higher the clock frequency, the faster the settling or lock time. On commercial products I have looked at, 200 ns of jitter required 4 minutes to lock at a clock of 10 MHz.

If you look at the folks who use the CS8420 reclocking chip, they seem to think that if you start the chip before sending data, it does not sound as good as it should. Measuring the chip showed a "peak" jitter of around 35 ns if started early, as opposed to 4.5 ns if the data came first.

Now, I don't know the integration time of my AP. If you look at the phase-noise ratings of crystal oscillators, you will see the noise goes up as the frequency decreases. This 1/f effect may very well translate into enough jitter to cause missed bits.

Now, back in the day of stand-alone audio delays, a single bad bit every 16 seconds would result in a service call. As these were at first 12-bit devices, even an error in bit 10 would show up as a problem.

Now when you oversample, it is clear you also raise the stopband results.

Now, if you want to argue the math that shows the settling time of a PLL for low jitter at sub-hertz bandwidth, please give me a reference.
 
Scott,

My back-of-envelope calculation shows that going from 22 µs to 120 fs requires 41 time constants. If my clock frequency is 44,100 Hz, I would want my loop filter's ripple there to be less than -120 dB, or, with a single-pole filter at 20 dB/decade, another factor of 10^6; so that would require 22 µs x 10^6 x 41, or 902 seconds, for a perfect system. Noise would of course increase the lock time.
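
A quick sketch of that envelope arithmetic exactly as stated, with the plain first-order settling ratio as a cross-check; all inputs are the post's own figures, taken at face value:

```python
import math

# The post's inputs, taken at face value (not verified here):
period = 22e-6                     # one word-clock period, ~1/44,100 s
ripple_factor = 10 ** (120 / 20)   # -120 dB ripple -> 10^6 with one pole
n_tau = 41                         # time constants claimed in the post

tau = period * ripple_factor               # implied loop time constant, ~22 s
print(f"lock time ~ {n_tau * tau:.0f} s")  # -> ~902 s, as stated

# Cross-check: plain first-order exponential settling from 22 us down
# to 120 fs would need ln(22e-6 / 120e-15) time constants:
print(f"ln(ratio) = {math.log(22e-6 / 120e-15):.1f}")  # ~19
```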

Now, the jitter-reduction chips make some assumptions about their dirty clock signal. One is that the clean clock can start off quite close (0.001% or better); the second clever technique is, when the clock is lost, to hold the last known frequency. So with a 100 MHz data rate (and an even higher clock frequency) they can reacquire a signal very much faster.

Now, of course, these techniques could be applied to an audio word clock, but as the PLL is analog, using 1% parts would limit your lock range to around 5%, so a lock could be achieved in a minute or so. Of course, spending that much effort on the clock is silly when there are better methods of reproducing music.

Ripping a CD to static memory would be a much better solution, as discussed. Of course, once you standardize this, it clearly makes the CD obsolete as a distribution method.

Now, as to bit errors: I assume you understand the process for receiving serial data. The incoming signal is used to derive a master clock that is faster (2x, 4x, 16x, etc.) than the data. The idea is to be able to determine the leading and trailing edges and then use the sample between them for the data value.

In some digital audio systems the master clock is the word-rate clock (44.1 kHz, 48 kHz, etc.). Then the packages of bytes are sent. If there is enough jitter or frequency shift during the transmission, the last bits may not be sampled at the center of their time period. So the last bits could, in the really nasty case, be mis-sampled! That is why the more important data should come after the hello bits but still near the start of the word (or data frame, if you prefer).

So the random missing bit can become an issue.
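
A toy illustration of that mechanism; the frame length and the deliberately exaggerated 1% clock error are assumptions for the sake of the example:

```python
# Toy model of the mechanism above: the receiver times the whole frame
# from the clock it recovered at the start, so a frequency error
# accumulates across the frame and the effective sampling point of the
# later bits walks away from bit center. A 1% error is deliberately
# exaggerated so the effect shows up inside one short frame.

BITS_PER_FRAME = 64     # illustrative frame length
FREQ_ERROR = 0.01       # 1% clock-rate mismatch during the frame

for bit in range(BITS_PER_FRAME):
    drift = bit * FREQ_ERROR         # offset in unit intervals (bit periods)
    if drift > 0.5:                  # past bit center: the wrong bit is read
        print(f"bits from position {bit} onward are mis-sampled")
        break
else:
    print("sampling point stayed inside every bit cell")
```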

Now, error rate is a biggie. In my measurements, I found it interesting that different fiber-optic cables affected jitter. A bit of dirt was more than enough to get the jitter up to what I think may be a problem level.

Now, as to the issue of what people can actually hear, I will turn it around to: "If it costs nothing but careful design time to reduce these issues, is it worth it?"

Now, when the system is fully up and running, I will have a chance to see if it is different from the systems that use other audio data transmission methods.

ES
 
fas42,
Could you go into a little depth on the differences between clocking in a CDP and clocking when using a hard-drive-based system? What I think I am reading is that as long as the data buffer stays ahead of the clock frequency, the DA master clock will have no problems. And as long as the hard-drive information is stored as a lossless file with identical bit-for-bit information, there should be no difference between the two types of storage media, CD or hard drive, except for where the master clock is located?

Forget this question; I missed the last page of explanations.
 
Scott

This 1/f effect may very well translate into enough jitter to cause missed bits.

Nope, I don't know where you get this stuff. I'll give that 4-minute number to one of our telecom experts. A typical 10G com link settles to a 10^-12 error rate in microseconds. You throw out numbers like -120 dB of ripple in PLL filters; where's the reality check? I see nothing like that in the literature or in the patents Dimitri listed.
 
Thanks for this; my main concern is the design of the experiments. I didn't read every word, but the above paper is interesting. The biwa is a good choice of instrument (the sibilance of a shakuhachi played by a master would also be interesting). One observation: the mic has a large rising response past 20 kHz and, beyond that, ripples that look a lot like those of a high-order filter. With a strongly plucked instrument like the biwa, I would think it important to address the effects on the peak and shape of transients. I certainly am willing to entertain the possibility that the leading edge of transients can shift in localization. I definitely have heard the sibilant sounds of instruments have an off-image "noise" component added. OTOH, I have perfectly enjoyable CDs and LPs of traditional Japanese instruments.

BTW - These instruments make sense as being right there in the room with you (as does solo acoustic guitar); I find almost any recorded examples to be so far from the reality that I would probably look for something more fundamental being wrong.

They at least tried to find out whether the super tweeter reproducing the >20 kHz content contributes a lot of IM distortion in the audio band.
They found around 25 dB SPL when measuring the tweeter alone, which translates to -55 dB with respect to the ~80 dB SPL level.
They argued this IM in the audio band would be masked in any case, which seems a bit too optimistic.
I don't know the sound of these instruments as you do, and have never seen any spectral analysis of their sound, so the authors might be right.

I totally agree: quite often something fundamental is missing in existing recordings, but the >20 kHz thing might be useful in making a system better, although it can already work quite well without this extension in frequency range.

BTW, it was a bit surprising for me, but I was not able to find measurements of microphone IM distortion at high frequencies and somewhat normal sound levels.
Holger Pastille's PhD thesis is interesting (with respect to laser-interferometer measurements of membrane movement and the discussion of the various distortion mechanisms at work) but concentrates on lower frequencies (up to 1 kHz) and high sound levels.

Before you mentioned it, I would not have considered it a serious problem, as harmonic distortion drops fast at lower sound levels.

@ kgrlee,

the Meyer & Moran experiment is unfortunately not an example of good scientific practice and it should not have passed the review board.

Nevertheless, their results may be correct, but it is simply not possible to draw any conclusions about the reasons, due to missing data and methodological flaws.
 
Nope, I don't know where you get this stuff. I'll give that 4-minute number to one of our telecom experts. A typical 10G com link settles to a 10^-12 error rate in microseconds.

Scott,

I think the problem is we are talking apples and bananas here. The question is who is which. :)

I'd be interested in knowing the error rate of the entire system. For 44,100 Hz audio, 10^-12 would be a few times each year. 10^-7 looks like it would be enough to show up in music reproduction.
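
For concreteness, the arithmetic behind those two figures, assuming 16-bit stereo PCM at 44.1 kHz (the post doesn't give a word width or channel count):

```python
# Assumes 16-bit stereo PCM at 44.1 kHz played continuously.
bits_per_second = 44_100 * 16 * 2        # ~1.41 Mbit/s
seconds_per_year = 365 * 24 * 3600

for ber in (1e-12, 1e-7):
    per_year = bits_per_second * seconds_per_year * ber
    print(f"BER {ber:g}: ~{per_year:,.0f} bit errors per year of play")

# 1e-12 -> ~45 a year (a few dozen, i.e. "a few times each year");
# 1e-7  -> ~4.5 million a year, roughly one every several seconds.
```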

ES
 
Scott,

I think the problem is we are talking apples and bananas here. The question is who is which. :)

I'd be interested in knowing the error rate of the entire system. For 44,100 Hz audio, 10^-12 would be a few times each year. 10^-7 looks like it would be enough to show up in music reproduction.

ES

Well, you've got me confused again. The eye diagram on a typical audio link is orders of magnitude beyond what is needed for even one dropped bit per CD.
 
the Meyer & Moran experiment is unfortunately not an example of good scientific practice and it should not have passed the review board.

Interesting; this would make the prof at my university who uses this paper in the Digital Processing course blush. If you could substantiate your opinion (what was wrong in the paper and what it would take to correct the mistakes), I could feed it back to him after the holidays.

EDIT: I am sure you are aware of this and this.
 
Scott,
I know that the references you are making are in the area of microphone distortion and IM above the 20 kHz range, but I think this is a very similar situation on the reproduction side of things, with loudspeakers. If you have devices that are linear to above the 20 kHz bandwidth, without the hash that is present in most upper-frequency devices, I think you would see a marked improvement in the reproduction of the sounds of instruments that have that type of upper-frequency tonal quality.

Here is my take on that as regards loudspeakers. Not only does the device in question need a frequency response with a bandwidth wide enough to reproduce those frequencies, we also have to get the phase response correct. If you have a device whose impedance rises with frequency and is more than purely resistive, you will never reproduce those frequencies correctly: phase shift is going to destroy those tones. We need a perfectly flat impedance curve, time-coherent zero phase shift, and flat frequency response. Only then can we produce those upper frequencies with any accuracy. It doesn't matter how good the electronic chain before the loudspeaker is; if you can't reproduce a coherent waveform from the loudspeakers, you are only chasing fairy dust.
 