A plain FIFO is a workaround for aligning independent clock domains. If the source clock cannot be controlled from the sink (as it can be with async USB and, AFAIK, the LMS network protocol), some form of ASRC is basically the only correct option. It can be either proper resampling (camilladsp, jackd, pulseaudio, ESS DACs, AD1896, SRC4190) or primitive dropping/repeating/extrapolating of samples (gstreamer). Some FIFOs combine a large memory with sample dropping/repetition during silent passages, which is essentially analogous to the primitive ASRC done by some software. Professional hardware uses ASRC chips or implements ASRC in its FPGAs.
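To make the primitive approach concrete, here is a minimal sketch (the function and threshold values are made up for illustration) of buffer-driven drop/repeat rate adaptation:

```python
import numpy as np

def adapt_rate(block, fill_level, target=0.5, deadband=0.1):
    """Primitive rate adaptation: drop or repeat one sample per block,
    depending on how far the receive FIFO fill level is from its target."""
    mid = len(block) // 2
    if fill_level > target + deadband:
        # Buffer filling up: sink clock is slow relative to source, drop a sample.
        return np.delete(block, mid)
    if fill_level < target - deadband:
        # Buffer draining: sink clock is fast relative to source, repeat a sample.
        return np.insert(block, mid, block[mid])
    return block  # clocks close enough: pass through bit-exact

# Example: the buffer is 75% full, so one sample gets dropped.
samples = np.arange(10, dtype=np.int16)
print(len(adapt_rate(samples, fill_level=0.75)))  # prints 9
```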
Regarding Mark's question about debugging, to find the bug, BasicHIFI1 could try this:
Repeat
Find a rubber duck
Ask the duck for assistance
Until you find a rubber duck willing to help
Explain each and every line of code to the duck, paying particular attention to signed and unsigned integers
Even if the rubber duck doesn't know the answer, the process of explaining the code may help to find the error.
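On the signed/unsigned point, a classic bug of this type is easy to demonstrate in a few lines (a generic example, not BasicHIFI1's actual code): the same 16-bit PCM bytes read with the wrong signedness turn small negative samples into near-full-scale values.

```python
import numpy as np

# A quiet 16-bit PCM signal, correctly stored as signed samples.
pcm = np.array([0, 100, -100, 50, -50], dtype=np.int16)

# The bug: the very same bytes reinterpreted as unsigned 16-bit.
wrong = pcm.view(np.uint16)

print(pcm)    # [   0  100 -100   50  -50]
print(wrong)  # [    0   100 65436    50 65486]  <- negatives become near full scale
```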
Maybe a little bit different take on FIFO, ASRC, and/or SRC as it pertains to dacs: there are always tradeoffs in engineering. For bit-perfect reproduction the gold standard is probably asynchronous USB with FIFO buffering in the PC. Computers are fast enough nowadays to play more than 100 tracks at once in a DAW, all bit-perfect. There should hopefully be no need to drop or add samples and/or frames to stay in sync with the DAC-clocked USB data requests.
The next best option in terms of SQ is probably a well-designed hardware FIFO at the dac (such as iancanada FIFO_Pi, now in its 3rd generation IIUC). The main downside of such a FIFO is the time delay; it's not a good solution for real-time use. The benefit is that it can be effectively bit-perfect, and buffer underruns and/or overruns during playback can be made very rare, if they ever occur at all in practical use.
Next best is a well-designed ASRC (better than a FIFO if real-time performance is needed). The problem with ASRC is the PPLL, which must track incoming jitter. As a type of PLL it can only attenuate jitter, not eliminate it completely. Also, numerical accuracy is limited in most hardware ASRC chips. AK4137 and SRC4392 have the best measured distortion specs; they specify jitter tolerance differently, so they are not as easily compared. IME, in practice most hardware ASRC implementations do not minimize power supply noise enough to reduce ASRC PPLL jitter to very minimal levels, although very good performance is achievable if incoming data jitter is minimal.
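To put a rough number on the "attenuate, not eliminate" point: treating the tracking loop as an idealized first-order low-pass with a hypothetical 1 Hz corner (real ASRC loop filters differ), jitter below the corner passes essentially unattenuated while higher-frequency jitter is rolled off:

```python
import numpy as np

fc = 1.0  # assumed loop corner frequency in Hz (hypothetical)
for f in (0.1, 1.0, 10.0, 1000.0):  # jitter frequencies in Hz
    # First-order low-pass jitter transfer: |H(f)| = 1 / sqrt(1 + (f/fc)^2)
    h = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)
    print(f"{f:7.1f} Hz jitter -> {20 * np.log10(h):6.1f} dB")
# ~0 dB at 0.1 Hz (passes through), -3 dB at the corner, ~-60 dB at 1 kHz.
```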
Relative to the term ASRC, SRC implies synchronous 'Sample Rate Conversion', which has no PPLL and does not attempt to attenuate jitter. It's best used in a PC, where fast buffering can keep the SRC process operating without synchronization problems.
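For instance, 44.1 kHz to 48 kHz is the fixed rational ratio 160/147, so a synchronous SRC is just a polyphase filter at that ratio; a minimal sketch with scipy (filter quality settings left at defaults):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 48000
t = np.arange(fs_in) / fs_in              # one second of signal
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone at 44.1 kHz

# 48000/44100 reduces exactly to 160/147; no clock tracking involved.
y = resample_poly(x, up=160, down=147)
print(len(x), "->", len(y))               # 44100 -> 48000 samples
```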
All the foregoing just my personal opinion, nothing more 🙂
Thanks, phofman and Markw4, it is clear to me now. So there is no way to have it bit-exact without dropped, repeated, or interpolated samples when the transmitting and receiving sides are not synchronized, and different programs use different approaches.
So there is no way to have it bit-exact without dropped, repeated, or interpolated samples when the transmitting and receiving sides are not synchronized
Exactly. Therefore engineers should strive for the correct option: only one clock domain, preferably close to the DAC, as that's the place where the clock actually makes a difference. Every decent recording studio uses a distributed master clock and slaves all processing devices, incl. computer soundcards, to this clock signal.
I am not aware of any network streaming protocol apart from LMS with rate-sync feedback. DLNA uses RTP, and I could not find any rate-feedback field in the RTCP receiver report format. Does anyone know some other network protocol with rate feedback built in?
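For reference, here is a sketch of unpacking one receiver report block as defined in RFC 3550 section 6.4.1; the field list shows the receiver only reports loss and jitter statistics, nothing that directly feeds a rate correction back to the sender:

```python
import struct

def parse_rr_block(data: bytes) -> dict:
    """Unpack one 24-byte RTCP receiver report block (RFC 3550, sec. 6.4.1)."""
    ssrc, lost_word, ext_seq, jitter, lsr, dlsr = struct.unpack("!6I", data[:24])
    # Only loss/jitter statistics below - no field asks the sender to
    # speed up or slow down its sample clock.
    return {
        "ssrc": ssrc,                            # stream this report describes
        "fraction_lost": lost_word >> 24,        # fixed-point, units of 1/256
        "cumulative_lost": lost_word & 0xFFFFFF,
        "ext_highest_seq": ext_seq,
        "interarrival_jitter": jitter,           # in RTP timestamp units
        "last_sr": lsr,                          # middle 32 bits of SR NTP time
        "delay_since_last_sr": dlsr,             # units of 1/65536 second
    }

print(parse_rr_block(bytes(24)))                 # all-zero example block
```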
Does this mean that somewhere in the sequence errors creep in that are not error-corrected?
The noise can get into the USB power, which can then affect dac sound quality. The effect was measured by one of the forum members here, KSTR. That said, if you were only asking about corrupt files in particular, most often they won't play back at all, or will abruptly stop playing if disk-drive read errors occur. CDs have different error handling, so certain types of potentially audible data-read errors are possible when playing back from CD in real time.
As for CDs, I have some that stop playing in the middle of a song and skip to the next one. I also have a CD that skips and is unplayable, but if I rip that CD to disk, the files transfer fine. Makes you wonder if there are errors somewhere that affect sound quality, but because there are no interruptions we assume everything is OK. What sorts of distortions come through?
The correct explanation was found only some time after a number of dac users reporting audible differences had been assured they were imagining something that was physically impossible.
What sort of distortions? Maybe this?
https://www.sweetwater.com/insync/digital-distortion/
Digital Distortion
By Sweetwater on Oct 12, 2015, 6:28 PM
This refers to clipping that occurs in the digital domain. There are two basic types of digital distortion: distortion resulting from a signal overloading a digital circuit, which sounds harsh and unpleasant, or an overdrive effect generated by a digital algorithm.
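The first kind is easy to reproduce numerically: drive a sine above digital full scale and hard clipping flattens the peaks, adding odd harmonics. A toy demonstration (levels chosen arbitrarily):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 1.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone, ~3.5 dB over full scale
clipped = np.clip(x, -1.0, 1.0)           # hard clipping at the digital rails

# One-second signal -> FFT bins are 1 Hz apart; check the odd harmonics.
mags = 2 * np.abs(np.fft.rfft(clipped)) / len(clipped)
for k in (1000, 3000, 5000):
    print(f"{k} Hz: {20 * np.log10(mags[k] + 1e-12):6.1f} dBFS")
```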
I think I was misled by the advertising slogan of "Perfect Sound Forever"
https://www.stereophile.com/asweseeit/656/index.html
When some unknown copywriter coined that immortal phrase to promote the worldwide launch of Compact Disc in late 1982, little did he or she foresee how quickly it would become a term of ridicule.
Audio CD read errors: https://en.wikipedia.org/wiki/C2_error
In contrast, data CDs have a further layer of error detection and correction, so wav files on data CDs are likely to have fewer read-error issues.
Some other potential CD player issues: http://www.industrial-electronics.c...ital_data_and_vibrational_jitter_effects.html
Back to DSP using software: there is an example here, but it may require some study before I understand it and even more time to implement.
https://new.pythonforengineers.com/blog/audio-and-digital-signal-processingdsp-in-python/
https://klyshko.github.io/teaching/2019-02-22-teaching
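As a starting point in the same spirit as those tutorials, here is a tiny self-contained example (generate a test tone, find its spectral peak); the parameter choices are arbitrary:

```python
import numpy as np

fs = 44100                                   # sample rate in Hz
t = np.arange(fs) / fs                       # one second worth of samples
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # 440 Hz sine at half scale

# Magnitude spectrum; for a 1-second signal the bins are 1 Hz apart.
mags = np.abs(np.fft.rfft(tone))
print("peak at", int(np.argmax(mags)), "Hz")  # prints: peak at 440 Hz
```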
The drive can compensate by supplying a "best guess" of what the missing data was, then supplying the missing data. - Wikipedia
They do this with music, really? I don't want to listen to a 'best guess'!
You're kidding, right?
Is it better, then, to use a digital player with wav files as the source? Do these have errors as well?
IMHO, CD rips on a PC are probably the better way to go. Other people enjoy their old CD players.
Regarding errors, it seems like you still don't quite get it. All audio reproduction is imperfect; every teeny, tiny distortion and/or noise is an error. The question is more like: what does it take for you to be happy with a practical reproduction system? How perfect do you require?
I guess I am hung up on the 'error correction' that takes place in the computer world: if you copy a file and it copies without any errors, then it is assumed to be a perfect copy. I am talking about the digital part that was supposed to be perfect but that, too, seems to experience errors. Come to think of it, computer files sometimes do not turn out correctly; for example, a word-processing document may have some strange characters in it. So there it is, nothing is perfect.
File data can get corrupted by editing software. Editing is not the same thing as copying one file to make a second file with identical contents. Data files have things like checksums used to validate that the data is error-free. When a file is copied, the checksum is copied too, so a data error occurring in that process should be easily detected. On the other hand, after editing some data, when the new version of the file is saved, a new checksum value has to be calculated and added to the file. That is a very different process from simple copying.
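A quick sketch of that copy-then-verify idea, using a SHA-256 hash as the checksum (the file names are hypothetical):

```python
import hashlib
import shutil

def sha256_of(path):
    """Hash a file in chunks so large audio files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

shutil.copyfile("track.wav", "track_copy.wav")          # hypothetical files
# Matching hashes demonstrate the copy is bit-identical to the original.
assert sha256_of("track.wav") == sha256_of("track_copy.wav")
```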
Will "Good enough sound for most for quite awhile" do ?I think I was misled by advertising slogan of "Perfect Sound Forever"
My long-time classmate, who was part of the team that brought out the first PC disk drives at MiniScribe, told me stories of the challenges of disk drives in general. It was initially thought by makers of mainframe disk drives that what MiniScribe was aiming for was a lost cause. (Most readers probably don't remember MiniScribe.) Well, we have PC drives today, so that says it all. https://en.wikipedia.org/wiki/MiniScribe
What was shocking to me was the error rates in hard disk drives. They are HUGE. So how are we able to get perfect files? Error-correction algorithms built into the drives. This explained why we had higher-performance drives sporting dual processors etc. One of the challenges was error correction on the fly, and as processor speeds increased, it helped drives become faster in that area as well.
There will always be some level of bit rot as time goes along. As long as there is sufficient data to recover the file you are OK. It's all mathematics once again.
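As the simplest possible illustration of that mathematics (real drives use much stronger Reed-Solomon/LDPC codes, not this toy scheme), a single XOR parity block lets you rebuild any one lost data block:

```python
# Toy single-parity erasure code: the parity block is the XOR of all data blocks.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x0a\x0b", b"\xf0\x0f"]
parity = xor_blocks(data)

# Pretend block 1 rotted away: rebuild it from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("recovered:", recovered.hex())      # prints: recovered: 0a0b
```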
I have actually used an IBM XT that had this type of drive; I started out with 'floppy disks' that were 5 1/4 inch plastic. Is there a perfect lossless storage system for data? Of course, digital mastering of CDs takes data loss to new levels. Is there any hope those recordings can be recovered?
Is there a perfect lossless storage system for data?
Not really. Nothing lasts forever.