John Curl's Blowtorch preamplifier part II

Ed, I would never question your work. Equipment for live sound has an entirely different set of processing latency issues that the rest of us don't have to deal with. Putting 44.1k into an SRC at 48k in even your cheapest sound card does not produce dropouts, but of course there the latency does not matter.

When I read just about any manufacturer's data sheets/white papers I see numbers like 50-100 ps of jitter, so what is the issue?

http://www.cirrus.com/en/pubs/white...e_of_spdif_digital_interface_transceivers.pdf
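
For a sense of scale (a back-of-the-envelope of my own, not a figure from the Cirrus paper), the jitter-limited SNR for a full-scale sine at frequency f with RMS sampling jitter tj is roughly -20*log10(2*pi*f*tj):

# Jitter-limited SNR of a full-scale sine: SNR ~ -20*log10(2*pi*f*tj).
# Textbook sampling-jitter formula; the jitter values are examples, not measurements.
import math

def jitter_snr_db(f_hz, tj_rms_s):
    return -20 * math.log10(2 * math.pi * f_hz * tj_rms_s)

for tj in (50e-12, 100e-12, 5e-9, 20e-9):
    print(f"{tj * 1e12:8.0f} ps RMS -> {jitter_snr_db(20e3, tj):6.1f} dB SNR at 20 kHz")

# 50-100 ps keeps a 20 kHz tone around 98-104 dB; 5-20 ns at the converter clock
# drops it to roughly 52-64 dB.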

The sound card is working with one or two inputs; the console, with 48 or more. I was surprised that it did not do the rate conversion. They now sell an add-on card to do it. The first time the issue showed up we ran that input as analog until I got a stand-alone converter.

When I was playing with reclocking the data, the popular chip (I forget the part number) had to be booted correctly to give less than 5 ns as measured on my AP System 2; otherwise it had at least 20 ns. That was actually noticeable on the test bench.

These days there are systems (Dante) that will work for my systems, so I use them and don't worry about the issue anymore. (The older CobraNet never did well, although a lot of folks used it and some still do.)

Your cite puts the jitter limit at 32 ns max, not RMS!

"The competitor’s transceiver loses lock entirely with jitter input greater than 350mUI." UI = 192 ns at 44,100 Hz.

And that is a single-link system.
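
A quick sanity check on the unit-interval arithmetic, assuming the standard S/PDIF framing of 64 time slots per sample frame with biphase-mark coding, i.e. 128 UI per frame (which puts 1 UI at 44.1 kHz nearer 177 ns than 192 ns):

# S/PDIF unit interval, assuming 64 time slots per frame and 2 UI per slot (biphase-mark).
UI_PER_FRAME = 128

def ui_ns(fs_hz):
    return 1e9 / (fs_hz * UI_PER_FRAME)

for fs in (32_000, 44_100, 48_000, 96_000):
    ui = ui_ns(fs)
    print(f"fs = {fs:6d} Hz   UI = {ui:6.1f} ns   350 mUI = {0.35 * ui:5.1f} ns   7 UI = {7 * ui:6.0f} ns")

# At 44.1 kHz: UI ~ 177 ns, so 350 mUI is ~ 62 ns and 7 UI is ~ 1.24 us.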
 

Ed, the reference is about an SPDIF receiver not dropping bits, not about the jitter at the DAC or in the reproduced audio. They state that the Wolfson receiver handles 7 UI before data is lost and that the recovered clock is at 50 ps. You said the guys hear it; what system actually clocks data into the DAC with a clock that has 12 ns of jitter?

"Note that the WM8805 easily passes this test with input jitter levels up to 500mUI. In fact, it can be shown that the WM8805 tolerates jitter up to 7UI, significantly more than can reasonably be expected in real audio systems."

It's right there in their Figure 5: 5 UI input jitter, 50 ps output jitter.
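
That combination (multiple UI of tolerated input jitter, tens of picoseconds on the recovered clock) is what a clock-recovery PLL with a low loop bandwidth buys you. A generic first-order jitter-transfer sketch; the corner frequency is a placeholder, not the WM8805's published loop bandwidth:

# Generic first-order PLL jitter transfer: |H(jf)| = 1 / sqrt(1 + (f/fc)^2).
# fc is a placeholder loop bandwidth; the point is only that jitter well above
# the loop bandwidth barely reaches the recovered clock.
import math

def jitter_attenuation_db(f_jitter_hz, fc_hz=50.0):
    h = 1.0 / math.sqrt(1.0 + (f_jitter_hz / fc_hz) ** 2)
    return 20 * math.log10(h)

for f in (10, 100, 1_000, 10_000):
    print(f"incoming jitter at {f:6d} Hz -> {jitter_attenuation_db(f):6.1f} dB through the PLL")

# Above fc the attenuation grows about 20 dB per decade; the ~50 ps residual is
# set by the receiver's own clock, not by whatever arrives on the S/PDIF input.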
 
Except... no listening tests. "may have" "may produce" "potentially-audible"

And of course, no mention of the receding-in-the-rear-view-mirror claims that jitter is the most important aspect.
I was just impressed that he allowed that some differences might be inaudible.

The mention of feedback, on the other hand, is perilously close to that tail-chasing dog we all know and love 🙂

His pitch, of course, is for the patented topology using feedforward and feedback, which is also very, very old. And the limitations of conventional feedback in correcting (particularly) dead-zone crossover distortion have most to do with the system gain going to zero in the crossover region.

Considering that Olive hasn't even gotten to any amplifier DBTs yet, although he has admitted that he ought to, the likelihood of DAC DBTs anytime soon is rather remote.
 
Ed, the reference is about an SPDIF receiver not dropping bits, not about the jitter at the DAC or in the reproduced audio. They state that the Wolfson receiver handles 7 UI before data is lost and that the recovered clock is at 50 ps. You said the guys hear it; what system actually clocks data into the DAC with a clock that has 12 ns of jitter?

It's right there in their Figure 5: 5 UI input jitter, 50 ps output jitter.

The data is dropped in transit. The stadium is not the insides of a CD player.

They give the unit interval as 132 ns; in transmission there is actually more time per bit once all the data packets have been pulled out of the audio stream on the way to the D/A. So a jitter of 50 ns RMS should have no trouble screwing up an occasional packet.
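
A toy estimate of how plausible that is (my model, not Ed's measurements): assume Gaussian timing error and call it a bit error whenever an edge lands more than half a unit interval from where it belongs:

# Toy decision-error model: Gaussian jitter, error if an edge is displaced by
# more than UI/2. Purely illustrative; real links degrade less gracefully.
import math

def edge_error_prob(ui_ns, sigma_ns):
    x = (ui_ns / 2) / sigma_ns
    return 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian tail beyond UI/2

for sigma in (5, 20, 50):
    print(f"sigma = {sigma:2d} ns RMS -> P(edge error) ~ {edge_error_prob(132, sigma):.2e}")

# 5 ns RMS is effectively error-free; at 50 ns RMS roughly one edge in ten lands
# in the wrong cell, which would certainly corrupt an occasional packet.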
 

OK, so finally: it's a problem unique to you, and the rest of us don't have to care about it.
 
Yes.
Which, there, translates to overload margin (avoid overdriving the DAC). See the "inter-sample overs" section.
This kind of distortion is easy to notice (if I can hear it, everyone can hear it) when feeding an ordinary DAC from a USB source, e.g. a computer-based digital source.
Use the volume control of the USB source (don't worry, it's a transparent volume control) and reduce the volume of the source by 3 to 6 dB.
Listen again at the reduced level.
If you leave the volume control there, you only sacrifice 3 to 6 dB of the maximum attainable SNR.

George

Yeah. It's interesting reading the number of DACs that are at their best performance at -12 dBFS. -3/-6 would probably work.
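
A concrete illustration of the inter-sample-over mechanism George describes (a synthetic worst case, not a specific recording): an fs/4 tone sampled 45 degrees off its peaks has sample values right at 0 dBFS but reconstructs to about +3 dBFS, and a few dB of digital trim restores the headroom.

# Inter-sample "overs", synthetic worst case. Illustration only.
import numpy as np

fs, n = 44_100, 4096
t = np.arange(n)
x = np.cos(2 * np.pi * (fs / 4) * t / fs + np.pi / 4)   # fs/4 tone, 45 deg offset
x /= np.abs(x).max()                                    # sample peaks now exactly 0 dBFS

def true_peak_dbfs(sig, oversample=8):
    # crude FFT interpolation to approximate the reconstructed (analog) peak
    spec = np.fft.rfft(sig)
    up = np.zeros(oversample * (len(spec) - 1) + 1, dtype=complex)
    up[:len(spec)] = spec
    rec = np.fft.irfft(up, n=oversample * len(sig)) * oversample
    return 20 * np.log10(np.abs(rec).max())

print("sample peak        :  0.0 dBFS")
print(f"reconstructed peak : {true_peak_dbfs(x):+5.1f} dBFS")                    # ~ +3.0
print(f"after a -6 dB trim : {true_peak_dbfs(x * 10 ** (-6 / 20)):+5.1f} dBFS")  # back below 0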
 
So quit SYing on me.

If by "Sying" you mean asking you for information that you never gave in support of a ridiculous statement, then I guess I am doing so. Look at how far we've had to go in order to find out exactly what you're talking about.

We could now talk about what the actual issue was.
So you mean I can stop Sying you for information, you're gonna give it??

Now quit confusing time alignment with jitter.
Do me a favor, show me exactly where I'm confusing time alignment with jitter?

Otherwise, I'm too smart to fall for such a poorly done diversion.

Bit errors are the issue; the standards are quite old and not always completely complied with.

Um, duh? Look up CRC. Or, if you're old like me, Gray code... 😀

John
 
Yes.
Which, there, translates to overload margin (avoid overdriving the DAC). See the "inter-sample overs" section.
This kind of distortion is easy to notice (if I can hear it, everyone can hear it) when feeding an ordinary DAC from a USB source, e.g. a computer-based digital source.

Any examples of recordings that have that (and are not MP3)? I don't know if I have heard it, but it would be nice to have an example we can all play with.
 
People like John Siau and me are design engineers, often at VP level, depending on the company. We are NOT MARKETING; that is another type of individual, AND it is difficult to keep marketing completely happy when we admit to our less-than-perfect designs.
Many here try to confuse MARKETING with DESIGN. They are completely different functions, usually done by completely different people, and we serious designers just want to do the best job possible. Marketing wants to make the best presentation possible. It is not always the same thing, and my marketing people are always trying to quiet me, and I, in turn, am always criticizing them if they exaggerate one of the products that I am associated with. It is an uneasy relationship, but doable.

😎🙂

Too many here will use marketing descriptions as the basis for an argument for or against a technical position. I would include spec/data sheets in that marketing. You will never see - not even a clue - anything which is negative, nor other tests which might show other imperfections. It just isn't good marketing otherwise. Specmanship is its own art form. Standards help a lot but are limited also. So, getting the 'complete' picture often requires doing your own T&M.


THx-RNMarsh
 
The data is dropped in transit. The stadium is not the insides of a CD player.

They give the unit interval as 132 ns; in transmission there is actually more time per bit once all the data packets have been pulled out of the audio stream on the way to the D/A. So a jitter of 50 ns RMS should have no trouble screwing up an occasional packet.

Really? 5 UI would be 5 × 132 ns, or 660 ns of jitter, before the data is dropped, and that is not a particularly demanding spec. If you are using fiber optic and it's appropriate for the distance, I don't see how you could get the equivalent of 600' of dispersion in a stadium. Are you using plastic fiber? Fiber is by definition point to point; no intermediate points reflecting, etc., unless it's a bad installation. Are there repeaters? Have you hooked up a scope and looked at the eye pattern? Maybe a bit error rate test? There are older communications BERTs that would work around the data rates you are using.
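
On the BERT suggestion, a rule of thumb (my numbers; the line rate below is just the raw S/PDIF cell rate at 44.1 kHz, so swap in the real link rate): to be about 95% confident the BER is below a target when zero errors show up, you need to push roughly 3/BER bits through:

# Rule-of-thumb BER test length: ~3/BER bits for 95% confidence with zero errors.
line_rate = 44_100 * 128                 # ~5.64 Mbit/s of channel cells; placeholder

for ber in (1e-7, 1e-9, 1e-12):
    seconds = 3 / ber / line_rate
    print(f"to confirm BER < {ber:.0e}: run at least {seconds:12.0f} s (~{seconds / 3600:8.2f} h)")

# Confirming 1e-9 takes under ten minutes at this rate, so if dropouts happen
# every few songs the eye pattern should make the problem obvious immediately.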
 
Back in the '80s maybe, but an awful lot were remastered for CD as well.

Still not sure where this great improvement is coming from.

There is a lot of reissued stuff, especially classical, but old R&R groups also. And blues.
Getting hold of the owners and obtaining the "original" master is often hard to do, and transfers take a lot of care to prevent further deterioration. So, unless it was cut and mastered recently, chances are it will have that bass roll-off. And some today will make a single copy or 2nd-gen master to work from and still cut bass for the rising popularity of LP, again..... rather than produce two working masters -- one for CD and one for LP. And now there may be some truth in making music play louder, as well.

I do not know with absolute certainty, either. But I am narrowing things down one by one to the likely remaining cause(s).



THx-RNMarsh
 
Last edited:
There is error correction, but with the large amount of data even good correction lets some errors through. Can you imagine what a single MSB error would do to the audio? No, I don't recall what the exact scheme used is.

What are you using, two tin cans and a piece of string? What kind of error correction is sensitive to the amount of data? A CRC catches every single-bit error, not some percentage of them that varies with data volume, and it doesn't matter whether the error is in the MSB or the LSB. This is getting embarrassing.
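
For what it's worth, a quick demonstration of the single-bit point, using a plain CRC-16/CCITT as a stand-in (the thread never says which check the actual link uses):

# CRC-16/CCITT-FALSE as a stand-in check: flipping any single bit of a block
# always changes the CRC, so no single-bit error gets past it, MSB or LSB alike.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

msg = bytes(range(64))                         # arbitrary 64-byte test block
good = crc16_ccitt(msg)
missed = 0
for i in range(len(msg) * 8):                  # flip every bit, one at a time
    corrupted = bytearray(msg)
    corrupted[i // 8] ^= 1 << (i % 8)
    if crc16_ccitt(bytes(corrupted)) == good:
        missed += 1
print(f"single-bit flips tested: {len(msg) * 8}, undetected: {missed}")   # undetected: 0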
 