Linux Audio the way to go!?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Exactly!

phofman said:
I thought this has been thoroughly discussed here before. The whole chain is clocked by sound card clock (or the 1ms USB controller HW clock). If the SW side cannot keep up, sonically ugly xruns occur. No other buffers matter, the whole chain is asynchronous. There is no blackbox in linux PC audio chain (there are many black boxes in windows though).

The latency setup has no direct influence on the resultant jitter - i.e. on the sound card clock. The only impact can be through noise of power supply/EMI/RMI. I have not seen any study/measurement relating variations in CPU load (percentage points when decoding FLAC in playback rate) to PSU noise.


Exactly what I like to hear... rational thought! There are no magical unexplained gremlins (much as audio 'enthusiasts' always like to claim exist, e.g. that cables sound different).

Was bored so did a little test using RMAA with variable load conditions:

http://www.grimrulers.f2s.com/loadmyth/Comparison.htm

CPU usage:

[CPU usage screenshot - externally hosted image no longer available]


All identical, even under this impossible worst case scenario.
 
phofman said:
I thought this has been thoroughly discussed here before.

As usual it ended up in a purely theoretical explanation from your side and a practical finding from my side.
I am not saying that your theories sound wrong to me. I (and a few others) just experience something else.

phofman said:
The whole chain is clocked by sound card clock (or the 1ms USB controller HW clock).

1. AFAIK the USB 12 MHz clock pretty much impacts the receiver (especially on adaptive devices)
2. If I reduce the 1 ms frame in the ALSA USB driver, my sound improves
3. If I switch off dynamic bandwidth allocation of USB in the kernel, my sound improves
4. If I limit nrpacks to one per URB, my sound improves.
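For readers wanting to reproduce tweaks 3 and 4: a sketch of how the nrpacks tweak was set on kernels of that era (assumptions: nrpacks was a snd-usb-audio module parameter at the time and was later removed upstream; the file name below is hypothetical):

```shell
# Hypothetical /etc/modprobe.d/snd-usb-audio.conf fragment.
# nrpacks limited the number of isochronous packets bundled into one URB
# in older snd-usb-audio drivers (the parameter no longer exists upstream).
options snd-usb-audio nrpacks=1
```

After editing, the module has to be reloaded (`modprobe -r snd-usb-audio && modprobe snd-usb-audio`) for the option to take effect. The dynamic bandwidth allocation mentioned in point 3 was, as far as I can tell, a kernel build-time option rather than a runtime switch.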

phofman said:
No other buffers matter, the whole chain is asynchronous.

Theoretically.

phofman said:
The latency setup has no direct influence on the resultant jitter - i.e. on the sound card clock.

As I said, due to whatever reasons people (incl. myself) experience different things.

Your statement I regard again as a theoretical statement.

phofman said:
I have not seen any study/measurement relating variations in CPU load (percentage points when decoding FLAC in playback rate)

That's the point. The PC must be regarded as a black box. There are no studies, just theses, theories and endless different experiences and opinions.

Even USB-DAC freaks like Gordon Rankin (Wavelength Audio) meanwhile have to admit that something odd is going on inside the PC. In the early days they were shooting at the people claiming to hear differences in sound when using different players. From what I read they have slightly changed that behaviour.

So prove your theories, or rather theses, as they should be called (and don't forget to put all these fragments into an overall framework). And please always describe the HW setup (PC & audio) you are referring to. Otherwise we can forget about this discussion anyhow.

We can stop this discussion once and forever once we have all the evidence on the table. ;)
 
Re: Exactly!

Theo404 said:
Exactly what I like to hear... ............. (much as audio 'enthusiasts' always like to claim exist, e.g. that cables sound different).

Here it is again. It has absolutely no value to discuss topics with people showing such an attitude. They just want to read what they think is right ( and pretty easy to understand and explain).

I cannot talk about different cables with somebody who might use external Logitech PC speakers and has never experienced a difference. That's correct.

However, if somebody had been awake in high-school physics class or similar, he should have learned something:

It is a physical fact that inductance, capacitance, resistance, shielding, connectors, termination - you name it - will impact a signal.

Ever heard of filters? EMI/RFI? Reflections?

Claiming that all people who experience this difference are nuts is something I'd call pretty ignorant. (Perhaps it can be excused that these people just don't know any better - though this would be even worse.)
 
rossco_50 said:
http://www.diyhifi.org/forums/viewtopic.php?f=2&t=1834

See hifizen's comments near the end of the thread. He appears to have seen measurements, but not necessarily related to audio.

His arguments seem plausible. Although from the CPU core voltage POV it might make sense to run at 100% CPU, I personally put more emphasis on minimum power consumption and heat generation, minimizing the real sonic noise :)

When playing back FLAC you can see the bit rate changing; this suggested to me a variation in the decompression load - but that might not be the case.

Decompressing FLAC probably generates a variable CPU load, but do you know what CPU load pattern your VGA driver generates? Or your desktop environment with applets? Or your kernel filesystem cache manager? These all share the CPU. Plus the modern multicore CPUs make it even less predictable.



Yes, the discussion revolves around MIDI playback. That scenario has a completely different set of requirements, all driven by the need for low latency.


I am sure there are other internal timers within alsa and dependent on the playback software chosen. Can we not see what these are from source code?

Yes, there are several software timers in the ALSA chain. They relate to the difference between the application playing into the ALSA buffer and the sound card reading from the buffer. The application is not necessarily driven directly by sound card interrupts (the IRQ handler must be kept as fast as possible, as it directly affects kernel latency and stability - all other interrupts are switched off at that time). In the end it is the sound card clock which dictates the sound pace and jitter.
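As a rough sketch of what the "latency setup" actually controls: the worst-case software-side delay is simply the ALSA buffer size divided by the sample rate (illustrative numbers only, not from any specific card):

```shell
# Buffer latency in ms = buffer_frames * 1000 / sample_rate.
# A hypothetical 8192-frame ALSA buffer at 44.1 kHz:
rate=44100
buffer_frames=8192
echo "$((buffer_frames * 1000 / rate)) ms"   # prints: 185 ms
```

A larger buffer raises this figure, but only changes how much slack the software side has before an xrun; the DAC still consumes samples at the pace of the card clock.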
 
Re: Exactly!

Theo404 said:

Exactly what I like to hear... rational thought! There are no magical unexplained gremlins (much as audio 'enthusiasts' always like to claim exist, e.g. that cables sound different).


Honestly, I have no problem with the cable claims as they are explainable by various EMI/RFI shielding performance and different EMI/RFI resilience of the audio equipment.

All identical, even under this impossible worst case scenario.

Thanks a lot for the measurements; they should be linked in the Permanent links of this section :)
 
phofman said:


His arguments seem plausible. Although from the CPU core voltage POV it might make sense to run at 100% CPU, I personally put more emphasis on minimum power consumption and heat generation, minimizing the real sonic noise :)

This has been my approach so far, using a fanless, compact-flash-based system and an old Cyrix III CPU which has around 10 W heat dissipation.



Decompressing FLAC probably generates a variable CPU load, but do you know what CPU load pattern your VGA driver generates? Or your desktop environment with applets? Or your kernel filesystem cache manager? These all share the CPU. Plus the modern multicore CPUs make it even less predictable.

My system is headless and boots into the command line. I'm not sure if there is a way to completely turn off the VGA driver. The filesystem cache manager - can that be disabled? Puppy Linux only writes to disk every 30 minutes, but I haven't looked specifically at what is going on cache-wise.
 
soundcheck said:
As usual it ended up in a purely theoretical explanation from your side and a practical finding from my side.

I am not saying that your theories sound wrong to me. I (and a few others) just experience something else.

Please post links to the experience of the others describing better sonic (!) results from decreasing the latency.

1. Afaik the USB 12Mhz clock pretty much impacts the receiver (especially on adaptive devices)

Yes, and the 1 ms clock is produced by dividing the 12 MHz USB HW clock (generated by a PLL in the southbridge anyway). I have posted a link to the diagram in the Intel UHCI datasheet.
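To put numbers on that 1 ms frame clock: at 44.1 kHz a frame cannot carry a whole number of samples, so the host alternates packet sizes. A quick sketch of the arithmetic (nothing device-specific):

```shell
# 44100 samples/s over 1000 frames/s gives 44.1 samples per 1 ms frame
# on average, so the host mixes 44- and 45-sample packets (nine 44s and
# one 45 every 10 ms keeps the average exact).
rate=44100
frames_per_sec=1000
base=$((rate / frames_per_sec))              # 44 samples in most frames
echo "frames carry $base or $((base + 1)) samples"
echo "$((rate * 10 / frames_per_sec)) samples every 10 ms"
```

This is why adaptive-mode receivers must recover their clock from the long-term average of the packet stream rather than from any single frame.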

2. If I reduce the 1 ms frame in the ALSA USB driver, my sound improves
3. If I switch off dynamic bandwidth allocation of USB in the kernel, my sound improves
4. If I limit nrpacks to one per URB, my sound improves.

The only result I get by forcing the USB controller to throw 1000 IRQs per second is occasional xruns, which are almost impossible to avoid completely on my low-performance system forced to this latency level. Which, I am afraid, is the case for many inexperienced Linux users following your resolute tutorials step by step. By drastically lowering latency they have to introduce ramdisks, disable many services, pre-convert formats to native WAVs, fight with highly unstable RT kernels (I keep getting crashes with Ubuntu Studio RT kernels), etc. to avoid the xruns. All because audiophiles are on a useless quest for low-latency playback, because someone said low latency = better sound.


As I said, due to whatever reasons people (incl. myself) experience different things.

Your statement I regard again as a theoretical statement.

Again, why don't you quote those people.

That's the point. The PC must be regarded as a black box. There are no studies, just theses, theories and endless different experiences and opinions.


Even USB-DAC freaks like Gordon Rankin (Wavelength Audio) meanwhile do have to admit that something odd is going on inside the PC.

Please quote.

So prove your theories, or rather theses, as they should be called (and don't forget to put all these fragments into an overall framework). And please always describe the HW setup (PC & audio) you are referring to. Otherwise we can forget about this discussion anyhow.

What kind of proof, technically, do you have in mind? BTW I have not seen anything on your side either, apart from your subjective listening experience. At least I tried to show the code, which you have refused to look at, claiming you do not care about technical details.
 
>>What kind of proof technically do you have in mind?

1. Please avoid fluffy RMAA loopback diagrams.

2. As long as you can't hear any differences, it doesn't matter what you do - you'll be chasing ghosts.

3. There is PC HW, sound HW, HW-related SW (firmware), HW clocks, SW timers, power supplies, OS, scheduling, IRQ handling, ALSA, applications, float vs. fixed processing, DSP... All this is technical stuff. If you bring all that together, including the demanded evidence, you might find the answer. A PhD thesis couldn't be more comprehensive.

>>BTW I have not seen anything on your side either, apart of your subjective listening experience. At least I tried to show the code which you have refused to look at, claiming you do not care about technical details.

I do admit, and always did, that I don't have any evidence for what I am saying.
But here I am not alone. Nobody can prove the opposite either.


That's one reason why I put some effort into the Wiki. Everybody will be able to duplicate what I am saying. I always said, everybody is invited to try it and to give feedback.

I do have a lot of confirmation when looking at Audio Asylum user CICS with his CPLAY and CMP solution, which goes almost 100% in the direction of what I am saying. There are quite a number of users who confirm this approach. He goes even further, e.g. by changing the voltage of the RAM, even changing the RAM itself, and so forth.

Using a small machine like a FitPC also seems to deliver better results (Rossco!? and John Swensson on AA).
As a matter of fact, I started to do it the same way on a normal PC at a time when no FitPCs were available. I just turned everything possible off, gave the audio chain maximum priorities and used the applications which are made for realtime performance. I don't have any HDD access. My display ports are switched off and even my Thinkpad T60p notebook can run without fans during playback.
There are other tweaks (kernel and scheduler parameters) which I skip for now.

If you ask me to prove what I am saying: try the Wiki and give feedback. If more people do it, we might find a pattern and an answer.

BTW: I am just not able to analyse the code that well. I am not a programmer.
That's why I "refused" to look at it. My time is pretty limited to cover all my little projects. ;)
 
Re: Re: Exactly!

Here it is again. It has absolutely no value to discuss topics with people showing such an attitude. They just want to read what they think is right ( and pretty easy to understand and explain).

Maybe the cable comment was a little inflammatory; I merely meant to suggest that some people can overthink (for want of a better phrase) audio chains, and the product is $1000 cables and such.

I can not talk about different cables to somebody who might use external Logitech PC speakers and never experienced a difference. That's correct.

Damn! Got me...


However, if somebody had been awake at highschool physics class or similar

I think university was when I started to drift off....



EMI/RFI? reflections?

I'm not saying cables shouldn't be shielded... just that after dropping large amounts of cash on essentially JUST A CABLE, suddenly declaring your system to have become more 'open' and 'dynamic' is heinous when the electrical characteristics that have changed are so minuscule that audibility is virtually impossible...



1. Please avoid fluffy RMAA loopback diagrams

What would you suggest I use.... golden ears maybe?


I hope this doesn't sound too rude, as I'm very grateful for things like your wiki, soundcheck; your work in advancing the PC as a source is brilliant. I have followed it after my own endeavours with Linux audio and do believe it to be the best my system has sounded so far... It just annoys me that the audiophile verbiage brigade seem to have got into it and are starting down the same line as cables... pure silver USB cables anyone? CPU demagnetizers?
 
phofman said:


Yes, and the 1ms clock is produced by dividing the 12 MHz USB HW clock (generated by PLL in the southbridge anyways). I have posted a link to the diagram in Intel UHCI datasheet.


That is one of the few places where direct interaction can occur. PLLs can be very noise-sensitive, and the USB clock was not designed for low jitter/phase noise - not really necessary for a mouse. However, the rest of your remarks are sound.

I don't have the stuff at hand (or a machine set up right) to measure the radiated noise. I'll try for that this weekend, but I suspect that the differences come from secondary effects, not primary effects.

If any of this is the case switching the main CPU clock to spread spectrum should change the sound significantly.
 
Jitter in Juli@ soundcard

I have been measuring the jitter in the clocking scheme in the Juli@ card to see how much jitter is introduced in its logic.

It's not very much, measured on a cycle-to-cycle basis.

Bit clock on the card-to-card interface:
44.1 kHz: 155 ps
88.2 kHz: 110 ps
176.4 kHz: 66.8 ps

48 kHz: 132 ps
96 kHz: 90 ps
192 kHz: 157 ps

Master clock:
44.1 kHz: 87.9 ps
88.2 kHz: 60.0 ps
176.4 kHz: 59.7 ps

48 kHz: 145 ps
96 kHz: 138 ps
192 kHz: 130 ps

Word clock:
44.1 kHz: 200 ps
88.2 kHz: 135 ps
176.4 kHz: 83.6 ps

48 kHz: 194 ps
96 kHz: 158 ps
192 kHz: 130 ps

22.5792 MHz: 65 ps
24.576 MHz: 120 ps

Measured cycle-to-cycle with an HP 5370A w/ HP 5363A probes.

200 ps is the worst case. The jitter buildup through the logic is pretty small, and all of these readings are probably as close to the limits of measurement as possible today.

These figures won't show slower jitter and drift below the measurement window; they were all standard deviations over 10K samples. The instrument's internal limit is 20 ps, and it does check that well on a very low-noise crystal oscillator.

They were all made while playing content.

Playing the same file as:
FLAC: bit clock jitter 138 ps
WAV: bit clock jitter 143 ps

I don't see any smoking gun here.

Looking at phase noise is the next task, but it's much harder to do and I need to get some really good VCXOs.

(I'm borrowing from Linus Torvalds' approach to saving important data: publish it on a public web site and let the world do the work for you.)
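One way to put the figures above in perspective: assuming a conventional 64 x fs I2S bit clock (an assumption on my part; the Juli@'s actual clock tree may divide differently), the bit period at 44.1 kHz is more than three orders of magnitude larger than the measured jitter:

```shell
# Bit clock at 64 * fs: 64 * 44100 Hz = 2.8224 MHz.
# Period in ps = 1e12 / frequency.
fs=44100
bitclock=$((64 * fs))                        # 2822400 Hz
period_ps=$((1000000000000 / bitclock))
echo "bit period: ${period_ps} ps"           # prints: bit period: 354308 ps
```

Against a ~354 ns bit period, 150-200 ps of cycle-to-cycle jitter is roughly 0.05% of a unit interval, consistent with the "no smoking gun" reading of the data.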
 
1audio said:


If any of this is the case switching the main CPU clock to spread spectrum should change the sound significantly.


Asynchronous USB and galvanic isolation are IMO the solution to cover this area, if we talk USB devices.

However, this still won't be sufficient to get rid of all the mess.

Even the "Asynchronous USB" guru Rankin is talking about "timing" and "more" issues.

Gordon Rankin - USB TIMING

Steve Nugent talks about "drifts", which are usually not covered by standard measurements.

It won't be that easy to figure it out. All these guys have tons of equipment to do some decent measurements.

Steve Nugent is still using Foobar 0.83 (5 years old I guess ;) ) because it sounds best to him.
 
One more:

What I experienced already in the early days: the better the setup became, the better the low end got. This was most obvious. Very dry and sharp bass was the result.

Why is that?

Just an idea - (shoot me if you want ;) ):

The low frequencies will be spread over quite a huge number of samples. What would happen if these frequencies got out of sync, if there were a random sample "drift" in the system?

I would like to post again the RME theory about Latency jitter. This might be one of the sources:

RME Latency Jitter

All kinds of interrupts, timers and potentially several buffers in the chain will cause this or that problem; even though they run asynchronously, they are still part of the "almost realtime" audio chain. These latency-jitter effects should be somewhat cumulative.
 
The hazard of an in-out loop is that it will conceal any problems related to sampling, since the same sample clock is used for both the in and the out. Use two separate machines.

"Drift" should be easy to measure, it would be analogous to tape speed irregularties (Wow, Flutter etc.) or changes in group delay. All standard analog measurements.

The problem becomes harder to master when the cpu is modifying every sample (gain adjust and eq). Lets get it right first and then try to modify the audio.
 
rossco_50 said:

My system is headless and boots into the command line. I'm not sure if there is a way to completely turn off the VGA driver. The filesystem cache manager - can that be disabled? Puppy Linux only writes to disk every 30 minutes, but I haven't looked specifically at what is going on cache-wise.

I gave it as an example that the CPU always has more to do than running the playback application alone, which makes the CPU consumption pattern rather difficult to predict, and general recommendations hard to make.
 
soundcheck said:
>>What kind of proof technically do you have in mind?

3. There is PC HW, sound HW, HW-related SW (firmware), HW clocks, SW timers, power supplies, OS, scheduling, IRQ handling, ALSA, applications, float vs. fixed processing, DSP... All this is technical stuff. If you bring all that together, including the demanded evidence, you might find the answer. A PhD thesis couldn't be more comprehensive.

Heaping up a bunch of technical words does not make the issue more complicated. IF the chain is bit-perfect and fast enough to deliver the data on time (which basically covers all your above-mentioned "causes" and is very simple to check), the only other influence is electrical noise (both radiated and power-supply-based). You could write a PhD thesis about the impact of all possible causes on noise, but very likely the results would not be consistent across various motherboards, components, SW versions etc.
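The "very simple to check" part deserves a concrete illustration: lossless means a decode round-trip is bit-identical to the original. A sketch using gzip as a stand-in for FLAC (same lossless round-trip property, no audio hardware or files required):

```shell
# Any lossless codec must reproduce its input byte-for-byte.
# gzip stands in for FLAC here purely for illustration.
tmpdir=$(mktemp -d)
printf 'pretend these bytes are PCM samples' > "$tmpdir/orig.raw"
gzip -c "$tmpdir/orig.raw" > "$tmpdir/orig.raw.gz"
gzip -dc "$tmpdir/orig.raw.gz" > "$tmpdir/roundtrip.raw"
cmp -s "$tmpdir/orig.raw" "$tmpdir/roundtrip.raw" && echo "bit-perfect"
rm -r "$tmpdir"
```

The same comparison applied to captured ALSA output (e.g. via the snd-aloop loopback device) is how a bit-perfect playback chain is typically verified; anything that survives such a check can only differ through timing or electrical noise.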

I do have a lot of confirmation when looking at Audio Asylum user CICS with his CPLAY and CMP solution, which goes almost 100% in the direction of what I am saying. There are quite a number of users who confirm this approach. He goes even further, e.g. by changing the voltage of the RAM, even changing the RAM itself, and so forth.

If you read the discussions, it is all about generating less noise (plus the bit-perfect playback which is non-trivial to achieve in windows but a non-issue in linux). Some recommendations make sense (turning off HDD), some are very situation-specific (a different combination of SW/HW can produce a very different noise level). And I believe some are plain wrong - i.e. the quest for low latency.

Using a small machine like a FitPC also seems to deliver better results (Rossco!? and John Swensson on AA).
As a matter of fact, I started to do it the same way on a normal PC at a time when no FitPCs were available. I just turned everything possible off, gave the audio chain maximum priorities and used the applications which are made for realtime performance. I don't have any HDD access. My display ports are switched off and even my Thinkpad T60p notebook can run without fans during playback.
There are other tweaks (kernel and scheduler parameters) which I skip for now.

I am all for the low-performance, low-consumption, low-load scenario, EXCEPT the low latency. If you raise latency, you do not need the higher-load RT kernel. That requirement stands in complete opposition to the other ones.

I would like to post again the RME theory about Latency jitter. This might be one of the sources:

RME Latency Jitter

All kinds of interrupts, timers and potentially several buffers in the chain will cause this or that problem; even though they run asynchronously, they are still part of the "almost realtime" audio chain. These latency-jitter effects should be somewhat cumulative.

Soundcheck, did you read the article? It has nothing to do with audio jitter. RME talks about timing of merged output from several data sources, how to make sure the SW generated metronome beats fit samples triggered by MIDI input. You are mixing completely unrelated technical issues.

Steve Nugent talks about "drifts", which are ususally not covered by standard measurements.

Please quote, http://www.google.com/search?q=audioengr+drift+site:audioasylum.com&hl=en&start=30&sa=N produced no meaningful results.
 
I don't see why Steve Nugent's or CICS's etc. comments should provide any more evidence than soundcheck's. Their posts are also only based on subjective listening.

There is evidence that jitter is not changing between software players or lossless file formats, and that there is a growing number of people claiming to hear differences between software where all other equipment variables remain the same.
 
rossco_50 said:
I don't see why Steve Nugent's or CICS's etc. comments should provide any more evidence than soundcheck's. Their posts are also only based on subjective listening.

There is evidence that jitter is not changing between software players or lossless file formats, and that there is a growing number of people claiming to hear differences between software where all other equipment variables remain the same.

Whether it is jitter or somebody shaking the wire, I couldn't care less. At least there are others with high-resolution systems and best-of-breed DACs & reclockers confirming that there are differences.

I really had a good laugh the other day when I read the comment from Gordon: "There is something else going on that we don't know yet - for sure it can't be jitter - it must be a timing issue".
 