Low-phase-noise clock for Ethernet, 25 MHz

I have said this before, but for some reason some people have a hard time hearing: it's not about bit-faults, resends, error checking or anything similar. We can safely assume that the data reaching the DAC chip is bit-perfect, and it will be bit-perfect with or without tweaks. Improving the digital chain before the DAC gets it is not about changing or correcting bits.

If you fail to understand that, then I can understand why you don't see the points about tweaks and improvements to the digital transmissions coming before the DAC.
 
Have a look at any CD player PCB, locate the microprocessor chip and then its xtal. You'll notice that, on a properly designed PCB, there's a fair amount of effort invested in segregating the ground fill around this microprocessor chip and its xtal from the rest of the PCB ground fill. Furthermore, the power supply rails that feed the microprocessor chip and its xtal are heavily decoupled from the rest of the PCB rails. Expensive CD players have completely separate rails supplying the microprocessor sections from the rest of the electronics located on that same PCB.

Nothing special about this; a competent engineer designing high-speed digital circuitry uses multilayer PCB layouts to minimise noise, and a cheap CD player is no different from an expensive one. Why do you think these types of circuits typically use a minimum of four layers?

By decreasing the noise of any microprocessor's switching or any xtal located on the PCB that shares the ground fill with the rest of the digital audio playback system (including digital audio over Ethernet), you will improve the overall performance of that PCB. Any decrease in noise in the digital replay chain will be immensely appreciated by human ears.

Firstly, the clocking mechanism for a CD player is vastly different from that of an IP Ethernet network interface.

Secondly, a properly designed device will be engineered to have minimal noise, and this has nothing to do with the clocking accuracy of an Ethernet interface.

Thirdly, an Ethernet physical interface is a balanced transmission line and is impervious to noise.
Take, for example, the inside of a building: there will be hundreds of Ethernet cables bundled together transmitting at gigabits per second. How do you think this works without them interfering with each other?

Fourthly, most noise in a digital replay chain will occur in the DAC.
 
I know exactly what I am doing, I am experimenting.
No, you're flailing. Experimenting would be the application of the scientific method. That's not happening here at all.
And I will report whatever I find here, but even with the same clock the card sounds better than my old ethernet switch run on battery
Make measurements of the clock phase noise, and of the error rate at the receive end. Uncontrolled, sighted, biased listening tests are meaningless. Or you could create a duplicate unmodified system and perform controlled tests.
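For the error rate you don't even need test gear; on a Linux streamer the kernel already keeps per-interface receive counters. A minimal sketch (the interface name "eth0" is just an example):

```python
# Minimal sketch (Linux only): read the kernel's per-interface receive
# counters from /proc/net/dev. The interface name "eth0" is just an example.

def rx_stats(iface="eth0"):
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:            # skip the two header lines
            name, data = line.split(":", 1)
            if name.strip() == iface:
                fields = data.split()
                # receive columns: bytes, packets, errs, drop, fifo, frame, ...
                return {"rx_packets": int(fields[1]),
                        "rx_errors":  int(fields[2]),
                        "rx_dropped": int(fields[3])}
    raise ValueError(f"interface {iface!r} not found")

stats = rx_stats("eth0")
print(stats)
if stats["rx_packets"]:
    print("errors per received packet: %.2e"
          % (stats["rx_errors"] / stats["rx_packets"]))
```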
(and yes, I know you don't understand how that can be).
I understand that in the application of Ethernet transmission, the phase noise of a typical clock is far below anything that can affect the transmitted data. And I understand that if you make the effort to change something, then make an uncontrolled listening test, whatever you report hearing remains unsubstantiated.
But surely you have something better to do than troll every thread I post in?
I feel very strongly that forums, in particular DIY forums, have the potential to distribute information very widely to the DIY community, which, while enthusiastic, is in many cases not well oriented to the technology. When I see information being posted that is completely wrong, that can easily mislead someone (yourself included) down a pointless path, or worse, that could easily result in a new myth, the only responsible thing to do is to post the correct information.

You could do this yourself with just a tiny bit of research into the technologies first.
 
I have said this before, but for some reason some people have a hard time hearing: it's not about bit-faults, resends, error checking or anything similar. We can safely assume that the data reaching the DAC chip is bit-perfect, and it will be bit-perfect with or without tweaks. Improving the digital chain before the DAC gets it is not about changing or correcting bits.
If you don't change the bits, you don't change the resulting sound. I agree that if the data arrives at the DAC bit-perfect, then it will be bit-perfect without the tweaks. But if it's arriving bit-perfect, then what are we trying to change?
If you fail to understand that, then I can understand why you don't see the points about tweaks and improvements to the digital transmissions coming before the DAC.
OK, help me out then. The data arrives at the DAC bit-perfect, no corruption or changes. And then you do a mod that results in no change to the data and how it arrives. What difference do you expect to get at the output of the DAC if the data going in is the same as before the mod?
 
I have said this before, but for some reason some people have a hard time hearing: it's not about bit-faults, resends, error checking or anything similar. We can safely assume that the data reaching the DAC chip is bit-perfect, and it will be bit-perfect with or without tweaks. Improving the digital chain before the DAC gets it is not about changing or correcting bits.

Then what is your point? TCP/IP will transmit whatever data the streamer hands it, valid or not, and the Ethernet and TCP layers will ensure that the data gets to its endpoint error-free.
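And that is trivial to verify: hash the source file and the copy the streamer actually received, and compare. A minimal sketch (the file paths are placeholders):

```python
# Minimal sketch: confirm a streamed file arrived bit-perfect by comparing
# SHA-256 digests of the source and the received copy. Paths are placeholders.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

src = sha256_of("/music/track.flac")          # copy on the server
dst = sha256_of("/tmp/received_track.flac")   # copy the streamer received
print("bit-perfect" if src == dst else "data was altered in transit")
```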

If you fail to understand that, then I can understand why you don't see the points about tweaks and improvements to the digital transmissions coming before the DAC.

We can safely assume that you do not understand what we are telling you.
 
Here are some things I do know:
- Electronic noise affects a DAC negatively (both its clock and its analogue stage)
- All electronic activity generates electronic noise
- Switched electronics, especially switched voltage regulators, often produce lots of electronic noise.
- All metallic conductors can transport electronic noise
- Galvanic isolation only removes some types of electronic noise (it's not even used for that purpose in Ethernet; it's there to prevent voltage spikes from traversing whole networks)

With this in mind, it's not hard to understand why more than the actual bits affect the sound from the digital side of the DAC, and why tweaks like the ones I have made to the Ethernet card work.

And yet I see again and again talk about "affecting transmitted data", "not enough to cause bit-faults" etc.
 
- Galvanic isolation only removes some types of electronic noise (it's not even used for that purpose in Ethernet; it's there to prevent voltage spikes from traversing whole networks)
Additionally, maybe it has something to do with the physical interface being a balanced transmission line, making it resistant to noise; lots of data cables in buildings don't have noise problems at gigabits per second.

With this in mind, it's not hard to understand why more than the actual bits affect the sound from the digital side of the DAC, and why tweaks like the ones I have made to the Ethernet card work.

And yet I see again and again talk about "affecting transmitted data", "not enough to cause bit-faults" etc.
Well, maybe it's a problem with your DAC.
 
I know exactly what I am doing, I am experimenting. And I will report whatever I find here, but even with the same clock the card sounds better than my old ethernet switch run on battery (and yes, I know you don't understand how that can be).

But surely you have something better to do than troll every thread I post in?

Believe me, I know.

//
 
I have said this before, but for some reason some people have a hard time hearing: it's not about bit-faults, resends, error checking or anything similar. We can safely assume that the data reaching the DAC chip is bit-perfect, and it will be bit-perfect with or without tweaks. Improving the digital chain before the DAC gets it is not about changing or correcting bits.

If you fail to understand that, then I can understand why you don't see the points about tweaks and improvements to the digital transmissions coming before the DAC.

We all understand this. Still, you are out in very deep water. Sinking, actually.

//
 
Mods, I asked people not to make this into a debate about whether Ethernet tweaks can improve the sound or not. I wanted this to be a DIY thread and not yet another "bits-are-bits" flaming thread. Any chance you could clean up the thread?

You see, you can't ask to be permitted to go uncommented when dubious/false statements are being spread. It doesn't work like that.

Sorry - not here. There are other forums, I'm sure, that would gladly welcome you.

//
 
If you don't change the bits, you don't change the resulting sound. I agree that if the data arrives at the DAC bit-perfect, then it will be bit-perfect without the tweaks. But if it's arriving bit-perfect, then what are we trying to change?

OK, help me out then. The data arrives at the DAC bit-perfect, no corruption or changes. And then you do a mod that results in no change to the data and how it arrives. What difference do you expect to get at the output of the DAC if the data going in is the same as before the mod?

Perhaps I'm misreading it, but I'm puzzled by the assertions of this post. Bit-perfect transfer is all that matters as long as those bits remain only in the digital domain. There, jitter is irrelevant, so long as it is not so severe as to provoke actual bit errors in the transfer. Should those bits represent a signal, however, and eventually be converted to the analog domain, then jitter does come into play, and at magnitudes far lower than required to provoke bit errors. Instability in the domain-conversion interval will produce errors in the reproduction relative to the original waveform. This is 'conversion jitter'.

You must keep in mind that signals are two-dimensional constructs, consisting of an amplitude dimension and a time dimension. The sample value carries the amplitude information while the D/A conversion clock carries the time information. The D/A conversion clock intervals ideally will exactly match the A/D conversion clock intervals. This is why signal jitter is measured in units of time (nanoseconds, picoseconds): it represents the degree of error in the time-dimension information. The correct data converted at the incorrect time instant is equivalent to incorrect data converted at the correct time instant; they are corollaries. The effect is both theoretically valid and measurable, showing up as sidebands on a spectrum-analyzer jitter test. However, the threshold for audibility of conversion jitter remains in dispute.
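To see what that time-dimension error does, here is a rough simulation sketch (illustrative values only, with a deliberately huge 10 ns of sinusoidal jitter): exact sample values converted at jittered instants produce sidebands at f_signal ± f_jitter.

```python
# Rough numpy sketch: exact sample values converted at jittered time instants.
# All numbers are illustrative; 10 ns of sinusoidal jitter is deliberately huge.
import numpy as np

fs    = 192_000        # nominal conversion rate, Hz
n     = fs             # one second of samples -> 1 Hz FFT bins
f_sig = 10_000.0       # tone frequency, Hz
f_jit = 3_000.0        # jitter (modulating) frequency, Hz
t_jit = 10e-9          # peak jitter, seconds

t_ideal = np.arange(n) / fs
t_real  = t_ideal + t_jit * np.sin(2 * np.pi * f_jit * t_ideal)

# "Correct data at the incorrect time instant": the tone evaluated at the
# jittered conversion instants instead of the ideal ones.
x = np.sin(2 * np.pi * f_sig * t_real)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / (n / 4)   # carrier ~ 0 dB
freqs    = np.fft.rfftfreq(n, 1 / fs)
db       = 20 * np.log10(spectrum + 1e-20)

for f in (f_sig - f_jit, f_sig, f_sig + f_jit):
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[i]:8.0f} Hz  {db[i]:8.1f} dB")
```

For these example numbers the sidebands land roughly 70 dB below the tone (pi x 10 kHz x 10 ns ≈ 3.1e-4); with realistic picosecond-class jitter they fall far lower still.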
 
Here are some things I do know:
- Electronic noise affects a DAC negatively (both its clock and its analogue stage)
Correct, and agreed.
- All electronic activity generates electronic noise
Not true at all. Noise is always relative, and always measured relative to its impact on a desired signal or result. Thus, there is a lot of electronic activity that generates no noise.
- Switched electronics, especially switched voltage regulators, often produce lots of electronic noise.
Often is a relative term. You cannot deal with noise of any kind using sweeping generalities. There are many properly designed switched regulators that produce no more noise in the regulated output voltage than an analog regulator does. If measurements are taken, the problem, if any, would be identified.
- All metallic conductors can transport electronic noise
"Can" is correct, "Do" would not be. Again, this is case-specific. In order of a conductor to transport electronic noise, the noise must be introduced into the conductor. There are several known ways for this to happen, and thus many known means of avoiding the situation. It is possible for conductors to be arranged in a manner where they carry desired signals while rejecting noise. Balanced signal transmission is one, shielded conductors are another, proper grounding techniques a third...and so on.
- Galvanic isolation only removes some types of electronic noise (it's not even used for that purpose in Ethernet; it's there to prevent voltage spikes from traversing whole networks)
"Galvanic isolation" is simply preventing conduction. It doesn't remove noise at all, but it may prevent certain means of introducing noise to a conductor. In Ethernet devices it's necessary because devices are not guaranteed to share the same ground, and unshielded twisted pairs are used. Those are two different issues, an transformer at each end solves both by rejection of common-mode signals. Voltage spikes are a special case of common-mode signals.
With this in mind, it's not hard to understand why more than the actual bits affect the sound from the digital side of the DAC, and why tweaks like the ones I have made to the Ethernet card work.
You have it backwards. All of these issues have already been dealt with in a NIC, by convention, and of necessity. And the reason they have is the concern for data integrity, in other words, the impact noise could have on the transmitted data, the "bits" themselves.
And yet I see again and again talk about "affecting transmitted data", "not enough to cause bit-faults" etc.
You see that because the entire Ethernet system along with TCP/IP is specifically and purposely designed to eliminate the effects of noise and other forms of data corruption.

Again I ask: if you don't change the "bits", what difference will the reconstructed audio contain?
 
...the threshold for audibility of conversion jitter remains in dispute.

Hopefully people here understand that the term 'threshold of audibility' as used in psychoacoustics is not a hard limit. Rather, it is an estimate of an average value for a particular population (i.e. half the people in the population are not expected to be able to hear below the threshold).
 
You are changing the sound by reducing the amount of electronic noise going into the DAC.
Noise does not go through the DAC in the first place. The DAC's single job is to take a series of digital words and reconstruct them into an analog signal. If the digital words are correct, and unchanged, the signal integrity is preserved all the way to the audio.
And if you don't think that matters, ask yourself why all DACs have linear voltage regulators, when switched ones are cheaper, smaller and more efficient.
Not all have linear regulators; in fact, most DACs in the world do not. The audiophile market demands linear regulation as a marketing concept. If you were to look at the pro audio market you would find the bulk of DACs use switched regulators, because it's the only practical means of handling the kind of power and regulation required. I can show you a device with 16 DACs in it, and a switching regulator.

Switching regulators are not all bad. Some are, some are not. Not all produce harmful noise in all applications. Good engineering permits their effective use without negative impact.

You've already admitted your lack of knowledge of electronics in Post #17, and yet you vehemently ignore information from knowledgeable individuals on the basis of your "superior knowledge". You ask for help, then reject actual engineering help. Sometimes asking for assistance with a project will result in the revelation that the project itself is a bad idea. That's life in technology.
 
Noise does not go through the DAC in the first place. The DAC's single job is to take a series of digital words and reconstruct them into an analog signal. If the digital words are correct, and unchanged, the signal integrity is preserved all the way to the audio.

Roughly true in theory, sometimes not true in reality. Timing errors on I2S and MCLK signals, including noise mixed with those signals, are known to have audibly affected the analog outputs of some DACs. The effects of jitter are well documented in the DAC literature.
 
Again I ask: if you don't change the "bits", what difference will the reconstructed audio contain?
Didn't I already answer this? From a data-integrity standpoint, there is no change, but we are talking about audible changes in a DAC.

I am sure electronic noise affects a DAC in many ways, but the two I know of are:
1. The clock oscillator will decrease in accuracy, which will mean more clock jitter (google "audio jitter" or something similar if you want to read up on what this is)
2. The analogue output section of the DAC, whose job is to generate a very accurate analogue signal for the amplifiers, will have electronic noise added into the mix and will end up doing that job less accurately.
 
Perhaps I'm misreading it, but I'm puzzled by the assertions of this post. Bit-perfect transfer is all that matters as long as those bits remain only in the digital domain. There, jitter is irrelevant, so long as it is not so severe as to provoke actual bit errors in the transfer. Should those bits represent a signal, however, and eventually be converted to the analog domain, then jitter does come into play, and at magnitudes far lower than required to provoke bit errors. Instability in the domain-conversion interval will produce errors in the reproduction relative to the original waveform. This is 'conversion jitter'.

You must keep in mind that signals are two-dimensional constructs, consisting of an amplitude dimension and a time dimension. The sample value carries the amplitude information while the D/A conversion clock carries the time information. The D/A conversion clock intervals ideally will exactly match the A/D conversion clock intervals. This is why signal jitter is measured in units of time (nanoseconds, picoseconds): it represents the degree of error in the time-dimension information. The correct data converted at the incorrect time instant is equivalent to incorrect data converted at the correct time instant; they are corollaries. The effect is both theoretically valid and measurable, showing up as sidebands on a spectrum-analyzer jitter test. However, the threshold for audibility of conversion jitter remains in dispute.

All correct, but look at the context. Network data transmission with Ethernet and TCP/IP layers assumes there will be issues in the time domain, sometimes massive latency, and some packets won't even arrive in sequential order because of a retransmission request. There must be a buffer somewhere to reassemble this mess into coherent and error-free data. The buffer size could be very significant, even for local data links from PC to DAC over Ethernet. But the buffer implies reclocking, by definition. Reclocking is where the jitter would be removed, or if badly done, introduced. Bad reclocking is highly unlikely.
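A toy sketch of that buffer-and-reclock idea (names and packet contents are made up): the network side may deliver packets late and out of order, but the DAC side always pulls samples in sequence, one per tick of its own steady clock.

```python
# Toy sketch: packets may arrive late or out of order, but a reorder buffer
# hands samples to the DAC clock domain in sequence. Names and packet
# contents are illustrative only.
import collections

class ReorderBuffer:
    def __init__(self):
        self.pending = {}                       # seq -> payload, waiting for gaps to fill
        self.next_seq = 0
        self.fifo = collections.deque()         # in-order samples ready for the DAC clock

    def packet_in(self, seq, payload):
        """Network side: packets arrive whenever, in any order."""
        self.pending[seq] = payload
        while self.next_seq in self.pending:    # drain every packet that is now in order
            self.fifo.extend(self.pending.pop(self.next_seq))
            self.next_seq += 1

    def clock_out(self):
        """DAC side: one sample per tick of the steady local conversion clock."""
        return self.fifo.popleft() if self.fifo else 0   # underrun -> silence

buf = ReorderBuffer()
buf.packet_in(1, [4, 5, 6])     # arrives early / out of order
buf.packet_in(0, [1, 2, 3])     # retransmitted packet arrives late
print([buf.clock_out() for _ in range(6)])   # -> [1, 2, 3, 4, 5, 6]
```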

As to jitter audibility, there can be no single-figure threshold. Jitter is essentially frequency modulation of the audio signal. The resulting spectrum is determined by the modulation index, i.e. the magnitude and frequency of the modulating signal. Because the jitter (modulating signal) spectrum could be a simple single frequency, or far more complex than a single tone, the resulting spectrum, and thus the threshold of audibility, must at least take that spectrum into account. Then, like all forms of distortion, the audibility threshold varies with the spectrum of the desired signal because of masking. All we know for sure is that jitter is easily measured, but not easily quantified as to audibility. All we can say is "less is better".
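As a rough order-of-magnitude illustration, for a single sinusoidal jitter component with a small modulation index the sideband level is easy to estimate (example numbers only):

```python
# Back-of-envelope sketch, assuming a single sinusoidal jitter component and a
# small modulation index: beta = 2*pi*f_signal*t_jitter_peak, and the sidebands
# at f_signal +/- f_jitter sit roughly beta/2 below the tone. Example numbers only.
import math

def jitter_sideband_dbc(f_signal_hz, t_jitter_peak_s):
    beta = 2 * math.pi * f_signal_hz * t_jitter_peak_s   # modulation index
    return 20 * math.log10(beta / 2)                     # sideband level re. the tone, dBc

for tj in (1e-9, 100e-12, 10e-12):                       # 1 ns, 100 ps, 10 ps peak
    print(f"{tj * 1e12:6.0f} ps peak jitter on a 10 kHz tone -> "
          f"{jitter_sideband_dbc(10_000, tj):7.1f} dBc sidebands")
```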

But again, since packet transmission is not synchronous or even sequential, there must be buffered reclocking. Any reclocking completely takes care of data time domain issues.
 