Hi,
I'm still fascinated by this ESS DAC that appears to have it all: the best THD and noise specs in the industry, oblivious to incoming sample rate, and not even slightly affected by source jitter - using this DAC, jitter does, indeed, appear to be a "non-issue".
"Appears" seems the operative word here... There is very limited hard data in the public domain and (surprisingly perhaps, given that it is such a perfect chip) no independent measurements.
Thorsten, you said it could not perform like that and be "real time". Well, Googling around, I find the chip's designer saying that it has a latency of about 833us - obviously FIFOs and DPLLs/'frequency locked loops' are for losers!
First, there is latency, as you comment: 833us, or in other words 36 samples.
So clearly, it is not real time.
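(For reference, a quick back-of-the-envelope conversion of my own, nothing taken from ESS documentation:)

Code:
# converting the quoted 833us delay into samples at the two common rates
for fs in (44_100, 48_000):
    print(fs, "Hz:", round(833e-6 * fs, 1), "samples")
# 44100 Hz: 36.7 samples
# 48000 Hz: 40.0 samples

So the 36-sample figure presumably refers to 44.1kHz material.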
I am not at liberty to discuss the technical details of AMR's solution, but I can tell you that a similar delay is obtainable using a correctly designed FIFO system.
However, I would suggest that debating two solutions of which one has no real knowledge or technical details is for losers.
Figure 3 shows the DAC being completely unaffected by '2ns random jitter' while a "competitor" suffers badly. So how do you eliminate source jitter without any buffering..? I can't get my head around it.
Well, first, this figure seems not in line with the claims for the performance elsewhere in the document (e.g. an at best 16-bit equivalent noise floor, compared with claims of 127dB, which is a little over 21 bits).
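(Converting the dB claim to "equivalent bits" with the usual rules of thumb, my own arithmetic:)

Code:
# equivalent bits for a claimed 127dB dynamic range, using the ideal
# quantiser formula DR = 6.02*N + 1.76 dB, and the cruder 6dB-per-bit rule
dr_db = 127
print((dr_db - 1.76) / 6.02)   # ~20.8 bits
print(dr_db / 6.02)            # ~21.1 bits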
Secondly, it is quite trivial to make jitter "disappear as if by magic" (a very apt term in this case BTW, magic, as in stage-magic, as in "illusion") using an ASRC. There are plenty of other DAC's that do it.
As I do not "believe" in magic I prefer not to rely on ASRC's to "remove jitter, as if by magic".
The publicly available data sheets and test results are obviously just meant to be tokens of the conventional documentation that other manufacturers provide, and you have to sign an NDA before being given anything very technical.
Yes. A very draconian NDA.
Despite (because of?) this, the DACs are widely acknowledged as the best in the world. As a result, everyone expects these DACs to sound great, and they do, apparently.
Well, if I were cynical, I would suggest that the whole mystique regarding the data, performance etc. is clearly calculated to get the DAC talked about and talked up.
I am aware of a number of cases where direct evaluation against competitors did not result in a design win for ESS; one may speculate as to why this is. However, surely, if this DAC were really as good as is being said, everyone would use it (after all, it is readily available for use in volume, unlike some of the other "magic" DACs, such as the TDA1541 or AD1862).
Here's the manual of a commercial DAC based on one of these devices, with seemingly comprehensive test results:
You are aware that "resonessencelab" has the ESS in the name for a reason?
What I find interesting is the way in which their measurements fail to confirm the performance claims in the datasheet.
Purely out of intellectual curiosity, can anyone suggest how this miracle works? We're talking about a designer who has come up with something that seemingly blows the efforts of massive corporations like TI and Analog Devices out of the water, with other big names queuing up to use these devices.
It does. How? You mean by integrating ASRC and DAC on one die? There is nothing in the performance results in the public domain that some other DACs (if necessary in mono mode and several in parallel; after all, we have a lot of budget to match or beat the ESS) combined with an ASRC cannot do.
I am not aware of a lot of design wins for the ESS DAC's, especially given how long they have been available and how good they are supposed to be.
Surely you FIFO people must be a little bit curious?!
Why should I be curious? I am not curious at all. I know. I could even tell you, ahhhm, actually I can(da)not.
Ciao T
It's extremely simple and clever.
@peufeu
Many thanks for that explanation. I haven't quite worked it out yet, though! I hope you don't mind me asking a lot of questions on how you think it might work!
What would you say are the fundamental differences between your suggestion for ESS's method and the conventional methods (AD1896, say)? Yes, they may use different mechanisms, but why aren't they equivalent mathematically? Is it just that the AD1896 doesn't do its calculations to high enough precision, or that its interpolating filters are in some way flawed?
OK. So, I can see how you can measure the arrival time of an incoming sample to a certain quantization in time - say to the nearest 10ns. By low pass filtering these sample times you can accurately deduce the jitter-less time between samples, or the sample rate - it's the same thing. You can presumably get it to picosecond accuracy if you do it for long enough and with double precision floats or whatever. Presumably it would be equivalent to implementing a (D)PLL.
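Something like this little numpy sketch is what I have in mind (purely my own illustration of the idea, not a claim about what ESS actually do):

Code:
import numpy as np

# Estimate the true sample rate from arrival times that are both jittered
# (2ns rms) and quantised to a 10ns counter, by low-pass filtering the
# measured period - essentially a crude software DPLL.
rng = np.random.default_rng(0)
fs_true = 44_100.0
n = 200_000
arrival = np.arange(n) / fs_true + rng.normal(0.0, 2e-9, n)  # jittered times
arrival = np.round(arrival / 10e-9) * 10e-9                  # 10ns counter

period = 1.02 / fs_true        # deliberately wrong starting guess
alpha = 1e-4                   # very slow first-order low-pass
for i in range(1, n):
    period += alpha * ((arrival[i] - arrival[i - 1]) - period)

print(f"estimated rate {1 / period:.3f} Hz vs true {fs_true} Hz")

Even with 2ns of jitter and a 10ns counter it recovers the rate to a small fraction of a hertz, and the longer you average the better it gets.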
I can see that if I have incoming samples at one sample rate I can oversample the waveform to a higher rate and low pass filter the result to get intermediate sample points. This is normally done with a brick wall sinc filter..? I've played with changing sample rates like this myself (using a PC so I had access to double precision and large kernel), and it sounds perfect to my ears, and looks good on the FFT of a sine wave.
When you say the upsampling rate has to be synchronous with the incoming sample rate (and is therefore jittery itself), do you mean it has to be an integer multiple of it? In other words, I ignore when the samples arrive, and just calculate a perfect upsampled and filtered version as though they were sampled perfectly regularly? I'm assuming I'm using a sinc filter, but are you saying that ESS do something different? Bicubic interpolation?
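For concreteness, the fixed-ratio oversampling I have in mind would be something like this (my own toy version with a windowed-sinc filter; I have no idea what filter ESS actually use):

Code:
import numpy as np

def oversample(x, n_up, taps=255):
    # zero-stuff by n_up, then low-pass at the ORIGINAL Nyquist frequency
    stuffed = np.zeros(len(x) * n_up)
    stuffed[::n_up] = x
    k = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(k / n_up) * np.hamming(taps)   # windowed-sinc low-pass
    h *= n_up / h.sum()                        # make up the 1/n_up level loss
    return np.convolve(stuffed, h, mode="same")

fs, n_up = 44_100, 8
x = np.sin(2 * np.pi * 1000 * np.arange(4096) / fs)   # 1kHz test tone
y = oversample(x, n_up)    # same waveform on an 8x grid (352.8kHz)

The oversampled grid here is rigidly tied to a multiple of the incoming rate, which is what I meant by the upsampling being 'synchronous' with it.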
Tying these together, I can see how I could take another clock - asynchronous at high speed (not the same as the upsampling rate? Higher? Lower?) - and stream the interpolated data out with linear interpolation to 'join the dots'. But I have to know the incoming sample rate extremely accurately so I don't 'run out of data' or be permanently reading out too slowly and slipping against the incoming waveform (maybe that wouldn't matter if it was a small enough error)...
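And the 'join the dots' step I'm picturing is roughly this (again only my own sketch, and assuming linear interpolation is good enough once the data is heavily oversampled):

Code:
import numpy as np

# Read a heavily oversampled stream out on an independent output clock by
# linear interpolation between the two nearest oversampled points. Only
# the *ratio* of the two rates has to be known accurately.
fs_over, fs_out = 352_800.0, 384_000.0        # stand-in rates
y = np.sin(2 * np.pi * 1000 * np.arange(20_000) / fs_over)  # stand-in data

ratio = fs_over / fs_out                      # input samples per output sample
pos = np.arange(int((len(y) - 2) / ratio)) * ratio
idx = pos.astype(int)
frac = pos - idx
out = y[idx] * (1 - frac) + y[idx + 1] * frac # samples on the output clock

If the ratio is even slightly wrong, the read position slowly drifts against the incoming data, which is exactly the 'running out of data' worry above.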
If the incoming sample rate changes suddenly I may be in trouble..? Isn't this what the large FIFO people are trying to avoid, and also they are ensuring that any output playback rate adjustments are tiny and low frequency? How does a DAC without a FIFO do better than that, because at first glance it would have to tweak the output rate more brutally in response to an incoming sample rate change (= jitter)?
(Apologies if I've got the whole thing completely wrong!)
Thanks Thorsten.
(Sorry, I should have put a 'smiley' next to the "losers" bit. I was being a bit overly ironic!)
I thought that your idea was to use a large FIFO so that you could make very small and infrequent adjustments to the sample rate (which sounds perfectly sensible to me). How does this fit in with a guaranteed(?) latency of just 36 samples?
You're right, I am discussing two systems of which I have no real knowledge. But humour me! I am fascinated by this stuff, for some reason.
Hi,
Here's how I think it works (I may be mistaken of course).
I think you fail to address several issues:
1) Oversampling - can only be done to a fixed multiple, otherwise we are back at ASRC. ASRC can of course have arbitrary conversion of rates.
ASRC must take place somewhere to get from the source clock to 40MHz (which is 833.3 times 48kHz and 907.029-something times 44.1kHz).
So we do not get away from ASRC, and linear-interpolation ASRC does not really work unless the actual sample rates are for all practical purposes identical.
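(The arithmetic behind those figures, for anyone who wants to check:)

Code:
print(40e6 / 48_000)    # 833.33... clock periods per 48kHz sample
print(40e6 / 44_100)    # 907.03... clock periods per 44.1kHz sample
# neither ratio is an integer, so no fixed oversampling multiple gets you
# from either source rate onto the 40MHz clock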
The precise way ESS does it is still not clear (to me at least) from the Patent Application. But it is subject to the same universal laws.
It does appear to be better than some of the other standalone ASRC chips; however, overall this is hard to be sure of... The problem is that the limited analogue quality of the DAC obscures this, and hence it shows no better results than common-or-garden ASRCs...
2) Clock rate does not equal sample rate, so the DS modulator in the Sabre works at 40MHz but that does not mean an equivalent sample rate of 40MHz.
The Sabre does have a lot more DAC hardware on board than other DACs if you use the 8 channels combined into two stereo channels, but it is far more comparable to other manufacturers' DACs if all channels are used as singles.
Once we use 8 channels' worth of the leading manufacturers' DACs with a good ASRC chip, we will find the results not dramatically different from the Sabre (at least in measured terms) and still likely have a lower BOM cost than using the Sabre DAC (which may correspond to the comparative lack of design wins).
Ciao T
Hi,
I thought that your idea was to use a large FIFO so that you could make very small and infrequent adjustments to the sample rate (which sounds perfectly sensible to me).
The question of course is "how big is 'large'"?
If my sample frequencies at the input and output of the FIFO differ by less than 0.001Hz as a long-term average, how large would the buffer have to be to ensure an update interval of over 180 seconds for the clock?
In practice a larger buffer than the theoretical minimum is needed of course, as we need to balance reliable and quick locking against an absolute requirement never to over- or underflow the buffer.
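(A quick sketch of the arithmetic, taking the 0.001Hz figure above at face value:)

Code:
delta_f = 0.001     # Hz, long-term rate mismatch between input and output
interval = 180.0    # s, minimum time between clock adjustments
print(delta_f * interval, "samples of fill-level drift per interval")
# -> 0.18 samples: the theoretical minimum buffer is tiny, and the real
#    size is set by lock acquisition and the never-overflow requirement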
How does this fit in with a guaranteed(?) latency of just 36 samples?
First, we cannot be sure that group delay of the Sabre is really fixed at this precise level and cannot vary. Second, 36 samples actually is a LOT of samples.
Some of the extant secondary PLL systems rely on the buffering of samples in either the NPC SM5842 Digital Filter or the CS8412/8414 Receiver, which in either case is only three samples...
You're right, I am discussing two systems of which I have no real knowledge. But humour me! I am fascinated by this stuff, for some reason.
Sure. Do me a favour though.
Please study the different extant approaches to "jitter rejection"/"jitter reduction" etc. and then draw your own conclusions and design your own solution. It will teach you by far better how to do things and how this stuff works.
Ciao T
This I understand, but personally I am happy with the supposedly horrendous jitter performance of a PLL-locked system - or something like the PCM2702. I thought the idea was to get much better performance than that, though.
36 samples actually is a LOT of samples.
Some of the extant secondary PLL systems rely on the buffering of samples in either the NPC SM5842 Digital Filter or the CS8412/8414 Receiver, which in either case is only three samples...
Please study the different extant approaches to "jitter rejection"/"jitter reduction" etc. and then draw your own conclusions and design your own solution. It will teach you by far better how to do things and how this stuff works.
My problem is that it all seems roughly equivalent to me: whether it's done by playing back the 'bit perfect' sample values at the corrected time, or interpolating values asynchronously. Where do the nuggets of real, novel genius lie? Studying the existing systems on paper doesn't always make that clear without teasing further information out with a lot of questions!
What I'm really trying to understand is why people like yourself are striving to get better than what we already have. I read all the time that all the digital sources in my possession are inferior because of jitter and that I am therefore not an audiophile, and I just want to make sure that when these critics of my system (not you) implement seemingly miraculous jitter-reduction schemes they really are doing what they claim!
Designing my own system would be a purely academic exercise, and if I was actually minded to build my own DIY DAC I would probably just bypass the problem completely and have a go at asynchronous USB (now that I know it exists!).
Hi,
This I understand, but personally I am happy with the supposedly horrendous jitter performance of a PLL-locked system - or something like the PCM2702.
Good for you.
Actually, correctly implemented and with the PC correctly set up to minimise software-induced jitter, the PCM270X series of USB chips is not that bad. Better than any "bog standard" SPDIF system.
I thought the idea was to get much better performance than that, though.
Sure, who says the results of our DAC are no better than a PCM270X?
My problem is that it all seems roughly equivalent to me: whether it's done by playing back the 'bit perfect' sample values at the corrected time, or interpolating values asynchronously.
Sure, I agree on roughly equivalent. Just as oranges and lemons are roughly equivalent as well; I mean, they are both fruits, and even both citrus fruits, right?
(Oranges and lemons, say the bells of St Clement's...)
Where do the nuggets of real, novel genius lie? Studying the existing systems on paper doesn't always make that clear without teasing further information out with a lot of questions!
I did not say study on paper, I said "make your own"...
What I'm really trying to understand is why people yourself are striving to get better than what we already have.
Several reasons.
First, the better is always the enemy of the good.
Secondly, familiarity breeds contempt.
I read all the time that all the digital sources in my possession are inferior because of jitter and that I am therefore not an audiophile,
Hogwash. If you like it and it's good enough for you, even next to and directly compared with what you have, it's good enough.
Plus, who wants to be any sort of -phile? Seems to all rhyme with pedo-...
and I just want to make sure that when these critics of my system (not you) implement seemingly miraculous jitter-reduction schemes they really are doing what they claim!
With the right instrumentation it is easy to measure.
Setting the AP2 up for jitter measurements and then sticking 1 or 2 UI worth of random jitter onto the output signal makes a complete mess out of the output of most DACs; some even lose lock. Ours performs so-so, showing that getting SPDIF right does help, but is not the cure.
Switch on the Jitter Killer and the noisefloor drops, sidebands disappear, the broadening of the spectral lines in the FFT disappears. Use instead a single jitter frequency and you can see loads of related sidebands on the "without jitter killer" plot and they magically disappear when it is engaged.
So in the technical sense it clearly works. Measurably.
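(If you want to see the single-frequency case for yourself, a toy simulation along these lines shows the effect; this is just sampling arithmetic, nothing to do with the AP2 or with AMR's actual circuit:)

Code:
import numpy as np

# A 10kHz tone reproduced on a clock with 2ns of sinusoidal jitter at 3kHz
# grows sidebands at 10kHz +/- 3kHz in the spectrum.
fs, f0, fj, aj, n = 192_000, 10_000, 3_000, 2e-9, 1 << 16
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * (t + aj * np.sin(2 * np.pi * fj * t)))

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-20)
spec -= spec.max()
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (f0 - fj, f0, f0 + fj):
    k = np.argmin(np.abs(freqs - f))
    print(f"{freqs[k]:8.1f} Hz  {spec[k]:7.1f} dB")  # sidebands stand well clear of the floor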
if I was actually minded to build my own DIY DAC I would probably just bypass the problem completely and have a go at asynchronous USB (now that I know it exists!).
This I agree with. But this is not a solution that I'd like to present as the only option for a commercial DAC. And if the Async USB sounds excellent and the SPDIF input rotten, the customers will not be too pleased.
Ciao T
So correct me if I'm wrong: are we saying that if we play an audio stream through an ASRC DAC without a FIFO (except maybe for a 'vestigial' one that exists in a filter or interpolator) then sure, we can tolerate and eliminate N ns of random jitter as long as the stream is at the 'right' frequency for the duration of the test - the output would be as if there was no jitter. However, if the playback rate is 'wrong' and has to be adjusted during the test, then that's when things get interesting, and this would show up in the FFTs etc.? If so, how would we specify this in the test procedure and results?
I'm still fascinated by this ESS DAC that appears to have it all: the best THD and noise specs in the industry, oblivious to incoming sample rate, and not even slightly affected by source jitter - using this DAC, jitter does, indeed, appear to be a "non-issue".
I've built a few DACs based on the ESS chip and they are rather nice. I have not measured jitter (I'm not a "jitter bug") but they do sound nice, whatever they are doing. They tend to have a rather subtle, smooth sound compared to other chips. Perhaps a little too smooth for some folks, judging by the comments at audio shows. Most people are used to more "bite."
Don't know if it's the jitter or something else but - anecdotally - they do sound different than the BB, AD, TI chips.
Hi,
So correct me if I'm wrong: are we saying that if we play an audio stream through an ASRC DAC without a FIFO (except maybe for a 'vestigial' one that exists in a filter or interpolator) then sure, we can tolerate and eliminate N ns of random jitter
We can hide the jitter from being measurable with conventional tests. That is not the same as "eliminating" it.
then that's when things get interesting, and this would show up in the FFTs etc.? If so, how would we specify this in the test procedure and results?
FFT is only valid for steady state signals and many averaged samples. You cannot FFT a single sample...
Clearly it is the wrong tool to investigate transient or dynamic phenomena...
Ciao T
Don't know if it's the jitter or something else but - anecdotally - they do sound different than the BB, AD, TI chips.
Quite possibly it's noise modulation that's related to 'bite'. I've found the TDA1541A has more 'bite' than my previous reference DAC, an AD one. ESS claim the noise modulation on their DAC is better than other low-bit ones, even though they admit it's not perfect.
Yes, I've wondered about that. Or simply their implementation of the output buffers and filtering inside the chip.
Hi,
Quite possibly its noise modulation that's related to 'bite'. I've found the TDA1541A has more 'bite' than my previous reference DAC, an AD one.
You have a TDA1541 with heavy noise modulation? Interesting.
I sincerely hope it is not one of these boards from Chinese eBay, all copied mindlessly from Thomas's DAC presented here ages ago, with more bugs than you find under a stone in a wet part of a park, and fitted with a re-marked R1 chip (R1 chip = relaxed spec)?
ESS claim the noise modulation on their DAC is better than other low-bit ones, even they admit its not perfect though.
The problem with DS DACs is that they use noise modulation as a fundamental operating principle. The ESS DAC has (in theory) a 128-level modulator in each channel, so it needs less noise modulation than parts from vendors with only 25 levels.
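(If you want to see why the level count matters, a deliberately crude first-order toy modulator makes the point; purely illustrative, and of course not how the Sabre's modulator is actually built:)

Code:
import numpy as np

def quantise(v, levels):
    # uniform quantiser with `levels` output values spanning [-1, +1]
    idx = np.clip(np.round((v + 1) / 2 * (levels - 1)), 0, levels - 1)
    return idx / (levels - 1) * 2 - 1

def first_order_ds(x, levels):
    y = np.empty_like(x)
    integ, prev = 0.0, 0.0
    for i, s in enumerate(x):
        integ += s - prev                 # integrate the input/output error
        y[i] = prev = quantise(integ, levels)
    return y

fs_mod, n = 2_822_400, 65_536             # e.g. a 64x-oversampled modulator
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs_mod)
for m in (2, 25, 128):
    err = first_order_ds(x, m) - x
    print(f"{m:3d}-level quantiser: raw error rms = {err.std():.4f}")

The more levels the quantiser has, the smaller the raw error the loop has to shape away, and the less the noise floor moves around with the signal.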
Ciao T
FFT is only valid for steady state signals and many averaged samples.
Nonsense. It is trivial to use Fourier transforms on nonrepetitive phenomena and they are routinely employed in this manner in many fields, not just audio. See, for example, my papers on using FT to elucidate charge flow response to voltage steps in germanium and on electrodeposition of semiconductors.
The statement "You cannot FFT a single sample..." is rather misleading. For example, consider the Fourier transform of a delta function.
edit: Anyone with Excel and three brain cells can demonstrate this for themselves.
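Here's the same demonstration in a few lines of numpy, for those without Excel:

Code:
import numpy as np

x = np.zeros(64)
x[10] = 1.0                     # a single-sample impulse (a discrete delta)
print(np.abs(np.fft.fft(x)))    # magnitude is exactly 1.0 in every bin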
You have a TDA1541 with heavy noise modulation? Interesting.
No, rather the opposite. I had an AD1955 which I considered to be fairly good until I compared it with a TDA1541A, that's when I realised what I'd been missing. Bite.
I sincerely hope it is not one of these boards from the chinese e-bay, all copied mindlessly of Thomas's DAC presented here ages ago, with more bugs than you find under a stone in a wet part of a park and fitted with a re-marked R1 Chip (R1 chip = relaxed spec)?
It is indeed one from Taobao with crap design (like async reclocking from the CS8412). But I've tarted it up a bit - thrown out a lot and installed my own digital filter. At under 500rmb I'm by no means complaining 🙂
The problem with DS DAC's, they use noise modulation as fundamental operating principle. The ESS DAC has (in theory) a 128 Level modulator in each channel, so it needs less noisemodulation than parts from vendores with only 25 Levels.
Yep. But seeing as I can't make head nor tail of their ASRC marketing pitch I've been giving that one a very wide berth. Wisdom from the 'Sage of Omaha' comes to mind - don't invest in what you don't understand 😀
Hi,
No, rather the opposite. I had an AD1955 which I considered to be fairly good until I compared it with a TDA1541A, that's when I realised what I'd been missing. Bite.
Not sure what "bite" means, maybe what I'd call "musical realism"?
As for the qualities of the TDA1541(A), I have known them since the late 90s...
It is indeed one from Taobao with crap design (like async reclocking from the CS8412). But I've tarted it up a bit - thrown out a lot and installed my own digital filter. At under 500rmb I'm by no means complaining 🙂
You should have at least gotten the one with the WM8805 receiver.
Ciao T
"Bite" usually means attack on the transients. At least that's how I was using it. More "ting" in your cowbell.
Whether or not more bite is more accurate, I don't know.
Sy,
Nonsense. It is trivial to use Fourier transforms on nonrepetitive phenomena and they are routinely employed in this manner is many fields, not just audio.
I did not say you could not apply FFT, only that it was not an appropriate Tool.
The statement "You cannot FFT a single sample..." is rather misleading. For example, consider the Fourier transform of a delta function.
Okay, you CAN apply FFT to a single sample, but the result does not tell you much.
So, we still have the problem that we need to look at essentially individual, single sample accuracy. How do YOU propose to do that using FT?
Ciao T
Not sure what "bite" means, maybe what I'd call "musical realism"?
If you like, yes. I was speaking in relative terms. Like how you only notice the taste of MSG when it's absent and suddenly things start tasting more like their original taste.
As for the qualities of the TDA1541(A), I have know them since the late 90's...
My first CD player had one (maybe not the A, can't be sure) and that was in the 1980s. Of course, the implementation sucked. LM833 for I/V.
You should have at least gotten the one with the WM8805 receiver.
Can't for the life of me think why. I'm feeding it from a QA550 with I2S.
Asking my question again another way:
If I performed ten 1kHz sine wave FFT tests on a DAC and found that in nine of them I had a hint of a double, or 'spread' peak (say), but in one of them I had a single, perfect peak, purely as a function of how the DAC was performing for a couple of seconds, how would I then specify its performance?
If it was the other way round, and only 1 out of 10 tests showed a problem? Or 1 in a 1000? Who knows, that single anomaly might actually be audible if listening for it, but only occurs once every hour.
Just suggesting that testing and specifying DACs might not be as straightforward as it seems.
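One way I can imagine phrasing it in a measurement (just a sketch of my own, with a made-up capture array) would be to keep a per-bin worst case alongside the usual average, so a rare glitch doesn't simply get averaged away:

Code:
import numpy as np

def average_and_max_hold(capture, seg=1 << 14):
    # split a long capture into short windows, FFT each, and return both
    # the mean spectrum and the per-bin worst case, in dB
    win = np.hanning(seg)
    frames = [capture[i:i + seg] * win
              for i in range(0, len(capture) - seg + 1, seg)]
    spectra_db = 20 * np.log10(np.abs(np.fft.rfft(frames, axis=-1)) + 1e-20)
    return spectra_db.mean(axis=0), spectra_db.max(axis=0)

# hypothetical usage on a long recording of the DAC playing a 1kHz tone:
# avg_db, worst_db = average_and_max_hold(recorded_output)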