Asynchronous I2S FIFO project, an ultimate weapon to fight the jitter

What you are showing here is IMO a pathological example of a fractional PLL. Directly comparing the 1 Hz normalized phase noise between an integer PLL and a fractional PLL is completely invalid if the differences in phase detector input frequency are not accounted for, by adding a correction term to the integer PLL phase noise figure:

Fractional PLL advantage [dB] = 10*log10(fPD(FPLL) / fPD(IPLL))

Since a fractional PLL allows a larger phase detector input frequency (fPD), fractional PLLs allow for lower phase noise performance. But a direct comparison would make sense only if the phase detector frequencies were the same.
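
To put a number on it, here is a quick sketch of that correction term (the PD frequencies below are made up, just to show the scale of the effect):

```python
import math

# Hypothetical phase detector frequencies: an integer-N PLL limited to a
# small channel raster vs. a fractional-N PLL running its PD much faster.
f_pd_integer = 100e3    # 100 kHz PD frequency, integer-N PLL
f_pd_fractional = 50e6  # 50 MHz PD frequency, fractional-N PLL

# For the same output frequency, in-band phase noise improves by
# 10*log10 of the PD frequency ratio, because the multiplication factor
# N (and its 20*log10(N) penalty) shrinks proportionally.
advantage_db = 10 * math.log10(f_pd_fractional / f_pd_integer)
print(f"Fractional PLL advantage: {advantage_db:.1f} dB")  # ~27.0 dB
```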

What you are showing appears to me as the output of an uncompensated fractional PLL, which is a design issue, not a fractional PLL limitation or disadvantage.
 
This is not a PLL, but a fractional divider. The PLL stage comes before it (from 54MHz to 3GHz) and is not discussed in these posts. Please read the description.

The fractional divider cycles between IN/DIVI and IN/(DIVI + 1) in such a way that the fractional part is "emulated" on average.
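
A minimal first-order model of that scheme, in case it helps (the DIVI/DIVF names follow the register fields mentioned here; the actual RPi clock manager additionally offers MASH noise shaping, which this sketch omits):

```python
# First-order fractional divider: divide by DIVI on most cycles and by
# DIVI + 1 on accumulator overflow, so the long-term average division
# ratio approaches DIVI + DIVF/4096 (assuming a 12-bit fractional field).

DIVI = 15    # integer part of the divider
DIVF = 1843  # fractional part, out of 4096

def divider_sequence(n_cycles):
    """Yield the per-cycle divide ratio chosen by the accumulator."""
    acc = 0
    for _ in range(n_cycles):
        acc += DIVF
        if acc >= 4096:
            acc -= 4096
            yield DIVI + 1  # overflow cycle: divide by DIVI + 1
        else:
            yield DIVI      # normal cycle: divide by DIVI

ratios = list(divider_sequence(100_000))
print("average ratio:", sum(ratios) / len(ratios))  # ~15.44995
print("target ratio: ", DIVI + DIVF / 4096)         # 15.449951...
```

The instantaneous output period keeps toggling between DIVI and DIVI + 1 input cycles; only the average is right, and that toggling is exactly the deterministic jitter under discussion.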

You may not like the principle (neither do I), but that's how the RPi's internal clock for the PCM output is generated. That is the reason to dump master I2S altogether and slave it to an external clock feeding the DAC directly.
 
Yeah, the RPi4's SoC was originally designed for set-top boxes, not really aimed at high-end audio. The typical audio output would be HDMI, which the VC4 part supports at 192kHz/24bit/8ch.

But the PCM controller can be slaved, runs up to 768kHz/2ch/24bit, the four Cortex-A72 cores with LPDDR4 RAM offer decent DSP performance, the cost-effective CM4 variant already has eMMC onboard instead of the SD card, and the support team is responsive. Any more serious use case would use the external DAC-located clock anyway.
 
I think this is well understood and Ian has excellent devices to improve this. Another clock worth looking at is the SiTime Emerald range, utilising MEMS timing.

I2S is very well understood for audio; it is a 40-year-old technology. Equally well understood is the importance of the ADC/DAC sampling clock: conversion quality is directly related to the quality of the sampling clock.
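
For anyone who wants numbers on that last point, the textbook jitter limit for a full-scale sine at frequency f sampled with RMS clock jitter t_j is SNR = -20*log10(2*pi*f*t_j). A quick sketch:

```python
import math

def jitter_snr_limit_db(f_signal_hz, jitter_rms_s):
    """Best-case SNR of a full-scale sine, limited only by sampling jitter."""
    return -20 * math.log10(2 * math.pi * f_signal_hz * jitter_rms_s)

# 100 ps RMS clock jitter caps a 20 kHz tone at ~98 dB...
print(f"{jitter_snr_limit_db(20e3, 100e-12):.1f} dB")
# ...but the same jitter still allows ~118 dB at 2 kHz: jitter hurts most
# at high signal frequencies, which is why the sampling clock matters.
print(f"{jitter_snr_limit_db(2e3, 100e-12):.1f} dB")
```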

However, if a device like the UpTone EtherREGEN, which has sold 2600 units (I don't own one, nor have I heard it), has strong positive sentiment and is very rarely re-sold, maybe that is worth investigating. Clocking data transmission in the digital domain is extremely complex, as such systems are by nature designed to deal with far larger bandwidths and distances.

Ethernet was designed by nature to tolerate errors, to keep costs feasible. Packets, error detection bits, data encoding and FIFOs are fundamental to how it deals with signal degradation and jitter. The jitter requirements for infrastructure fibre are extremely high, and the eye pattern of a sub-sea cable is unbelievably impressive. Residential networking has far looser requirements for error rates, error recovery, packet size, inter-packet gap and the number of retimed segments. The requirements for period or cycle-to-cycle jitter are modest, even where accumulated jitter could result in an error within every packet. The designers are only focused on getting every packet to the other side; timing is just an obstacle.
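
To make "error detection bits" concrete: every Ethernet frame ends with a 32-bit CRC (the Frame Check Sequence), and a frame whose CRC does not match at the receiver is silently dropped by the MAC. A sketch of the principle using Python's CRC-32 (the real FCS uses the same polynomial, though with different bit-ordering conventions):

```python
import zlib

payload = b"audio samples travelling inside an Ethernet frame..."

# The transmitting MAC appends a CRC-32 over the frame (the FCS field).
fcs = zlib.crc32(payload)

# A single bit flipped on the wire...
corrupted = bytearray(payload)
corrupted[10] ^= 0x04

# ...fails the CRC check, and the receiving MAC discards the frame.
print(zlib.crc32(payload) == fcs)           # True  -> frame accepted
print(zlib.crc32(bytes(corrupted)) == fcs)  # False -> frame dropped
```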

Often a home router is a very cheap product with many compromises. Does it make sense for the local Ethernet encoder on a Pi4 to use a low phase noise clock as its local reference, or does the quality/phase noise of the signal arriving at the SoC matter? We know that is the case in enterprise networking.

If so, how would that have any way of affecting the audio output on the GPIO, if at all? If you understand networking, I can provide a more technical, deeper response, but I'm very interested to hear whether anyone has looked into this.
 
Debating clocking in networking devices is a total waste of time. Years of such debates on the Internet have always revealed that the supporters lack elementary knowledge in networking fundamentals, protocols and layering, and you have already started showing as much:

Ethernet was not designed to tolerate errors. Ethernet is one of the possible networking Physical Layer standards, as much as fibre optics, old Token Ring, etc. It is the upper layer (in the OSI model, the Transport Layer) TCP protocol that implements error tolerance, retransmission, etc. TCP provides a reliable virtual-circuit connection between applications; that is, a connection is established before data transmission begins. Data is sent without errors or duplication and is received in the same order as it is sent. Nothing to do with clocking: the fundamental principle of the OSI model is to make each layer independent of the lower layers' implementation, using only an abstract interface.
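
That layering independence is trivial to demonstrate: whatever the link-layer clocking looked like, the bytes an application reads from a TCP socket are identical to the bytes written. A self-contained localhost sketch (Python 3.8+):

```python
import hashlib
import socket
import threading

PAYLOAD = bytes(range(256)) * 4096  # 1 MiB test stream

def server(listener):
    conn, _ = listener.accept()
    received = b""
    while chunk := conn.recv(65536):
        received += chunk
    conn.close()
    # The application sees a bit-perfect, in-order byte stream; nothing
    # about the lower layers' clock jitter is visible at this level.
    print("bit perfect:", hashlib.sha256(received).digest()
          == hashlib.sha256(PAYLOAD).digest())

listener = socket.create_server(("127.0.0.1", 0))
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname())
client.sendall(PAYLOAD)
client.close()
t.join()
```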

P.S. The UpTone EtherREGEN is one of those shameless scams intended for those audiophiles with deep pockets and a flat parietal lobe.
 
LOL. You have no idea of the variety of networking and internet projects I have implemented, been involved in or been the authority on in EMEA or America.

When were you last involved in the design of an architecture for a global roll-out of networking solutions? :rolleyes:

Also, that is complete nonsense. Of course Ethernet was designed to tolerate errors; please google it.
 
This post is so wrong, and your understanding of networking is extremely limited. You are talking about virtual-circuit connections between applications; that isn't a thing at Layer 4 (TCP). That is Layer 7.

You do not understand networking.
 
You can't reason someone out of an opinion that was not arrived at by reasoning...

Ahh syn08. I am not trying to beat you up, but you cannot say it's not worthwhile exploring and that everyone else has a limited understanding of networking when your own post is 90% wrong. A little knowledge can often be more dangerous than no knowledge. Probably most audiophiles :D.

You are confusing the OSI model with the TCP/IP model and TCP's function itself.

However, as we all know, software is reliant on the hardware it sits on, or we wouldn't have so many networking vendors such as Juniper, Huawei, HP or Cisco with a variety of switching and routing platforms.

I haven't formed an opinion just yet, and I cannot see how these audiophile devices could have an impact.

But I'm not curious about Layer 3 and above. I am curious about Layer 1 and 2.
 
Yeah, you are not the first scammer around, more or less ignorant, pointing to my lack of knowledge, lack of understanding (RF, negative feedback, networking, current feedback, double blind testing, etc…) and alleged wrong posts, I am about to build a collection of characters and add you guys to my resume. Can you please drop some names supporting your statements? That would certainly impress the audiophiles.

Meantime, you may want to reflect on the reality that TCP guarantees delivering a bit-perfect stream, and in the right order, so how would, for example, a DAC know that the switch has a good or bad jitter clock? No need to reply, I am sure my limited knowledge won't allow me to understand your reasoning.
 
Well, I am glad I was contentious enough to hit your resume!

I think a lot of people who don't come from a computer engineering background always fall back on "bits are bits", "bit perfect", "can't be improved", etc.

They may well be right.

However, someone from a computer science background would understand that data transmission systems and the ancillary sub-systems within networking and computing environments will never be that simple. There is some extremely clever engineering, which people take for granted, that allows bit-perfect transmission at high bandwidth over potentially huge distances.

You may be bit perfect by the end, but does the journey matter (particularly the timing)? The question is: could these practices be of benefit in lowering phase noise at the SoCs, and if so, how could that ever affect audio performance?

I refer to data mapping, bit transitions, overheads on the data stream, clock recovery, error detection, encoding processes, sequencing, domain crossing, re-timing, EMI coupling to the data signal, and many more.
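
One item from that list, the encoding that guarantees bit transitions for clock recovery, is easy to illustrate. Classic 10 Mbit/s Ethernet used Manchester coding, where every bit cell contains a mid-cell transition the receiver can lock to (faster flavours use 4B/5B, 8b/10b, etc. for the same purpose). A sketch:

```python
def manchester_encode(bits):
    """IEEE 802.3 Manchester coding: 0 -> high-to-low, 1 -> low-to-high.
    Every bit cell has a mid-cell transition, so the receiver recovers
    the sender's clock from the data itself -- no separate clock line."""
    line = []
    for b in bits:
        line += [0, 1] if b else [1, 0]
    return line

data = [1, 1, 1, 1, 0, 0, 0, 0]  # long runs with no transitions of their own
print(manchester_encode(data))
# [0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]
# Even an all-ones or all-zeros payload toggles the line every bit cell,
# which is what keeps the receiver's recovered clock locked.
```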

These may have no benefit whatsoever. I am interested to see if someone has looked into this and proven that. I am not sure how my curiosity turns me into a scammer, but so be it.