Linux Audio the way to go!?

So, is there a way to start applications in realtime then?
Perhaps you can try using "schedtool" instead of chrt (it should not make any difference, but just in case...).

Unless that happens to work by chance, you should basically try to debug your troubles with chrt and understand what's going wrong.

Have you looked at the system logs and dmesg? Any messages?

Try at least using strace, ltrace, etc. to see where (and possibly get a hint on why) the launched program crashes, if that's what happens (or is it chrt itself that crashes before being able to actually launch the requested program?).

As a quick guess, a possibility could be that you don't have the right permissions to run an application in real time. Remember that recent Linux systems (including the latest Ubuntu releases) use mandatory access control frameworks such as SELinux or AppArmor to improve security, so you may need extra policy permissions to do just about anything which has security implications. Even if an executable is suid, if the (original) user does not have the right permissions, it will fail.
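
For what it's worth, on most systems the more common blocker is simply the rtprio resource limit enforced via PAM rather than any MAC policy. A rough sketch of what I'd check (the "audio" group and the limits.conf paths are only the usual defaults, and aplay/test.wav merely stand in for your actual player and file):

Code:
    # what RT priority is your shell allowed to request? (0 = none)
    ulimit -r

    # the usual pam_limits configuration granting RT scheduling to the audio group:
    #   @audio   -   rtprio    95
    #   @audio   -   memlock   unlimited
    grep -r rtprio /etc/security/limits.conf /etc/security/limits.d/ 2>/dev/null

    # then launch the player under a modest RT priority and trace the failure
    strace -f -o /tmp/chrt.log chrt -f 10 aplay test.wav
    grep -iE 'sched_setscheduler|EPERM' /tmp/chrt.log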

That said, I don't see why you'd need RT just to play audio on an otherwise (mostly) idle machine.

On my own audio system I do online HDCD decoding (when present) and upsampling to 24/192 using the best algorithms available (either using sox or ALSA). I do not use real-time scheduling for audio processes (let alone any special "RT" kernel).

Yet it works perfectly, sounds great... and I never experience a single audio underrun unless I really heavily load the machine by running some other (really very heavy) task at the same time. Which means that the machine can "keep pace" without any need for RT scheduling... and that using it would not make any practical difference.
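
(In case anyone wants to try that kind of upsampling, here is a minimal sox sketch; the file names are placeholders and the exact flags are a matter of taste, not necessarily what I run:)

Code:
    # upsample a 16/44.1 file to 24/192 with sox's "very high quality" rate converter
    sox input_44k.flac -b 24 output_192k.flac rate -v 192000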
 
How real-time is the Linux kernel with the RT extensions? As I understand it, they improve the responsiveness of the kernel and make sure no process will hog the system, but that's not the same as absolutely deterministic performance.
I seem to get better audio when I increase the latency, up to a point. However, I'm not interested in using a desktop or laptop box for maximum quality audio. I think a dedicated box makes more sense, especially when the rest of the audio chain costs more than the PC. Some digital audio cables cost more than a fully decked-out gamer PC. I think more value lies in an optimized playback system.

Also, recording needs are in a sense in conflict with playback needs. Low latency is important for overdubbing. In the ancient world of tape, overdubbing used the record head to play back the audio reference for syncing, while the play head, which gave much better performance, was used for actual playback. I think there is a bit of a parallel here.
 
That said, I don't see why you'd need RT just to play audio on an otherwise (mostly) idle machine.
...
Yet it works perfectly, sounds great... and I never experience a single audio underrun unless I really heavily load the machine by running some other (really very heavy) task at the same time. Which means that the machine can "keep pace" without any need for RT scheduling... and that using it would not make any practical difference.

Hmm, the first RT kernel and RT scheduling with the xmms player were an audible step forward to these ears of mine. (Good for everybody who gets excellent sound without all the setup trouble! I don't.) It's OK with me if anybody hears/thinks/says differently.
There has already been much discussion (even in these 169 pages) about what we hear and what we don't. I really didn't want to kick that off again.
The point is, there is no sense for me in still having UbuntuStudio's RT kernel running when I can't fully use it the way it worked before, because of a buggy glib that comes with it. And in the past there have been a handful of people who got their Rosegarden to work properly only after they'd changed their RTC frequency to 1024, not before. So that RT thing might well do something for the performance of a PC. I would have liked it if the UbuntuStudio people, who definitely seem to care about RT performance (else they wouldn't have applied the full set of Ingo Molnar's rt-patches), hadn't forced the use of PulseAudio, and would hand out an RT kit that works for geeks like me even if I want to start audio players in RT as before (whatever that is good for, accepted).

Generally, maybe one distribution a year instead of x.04 and x.10 might
take a little pressure off the Ubuntu developers and might be a step forward in terms of a well-engineered distribution.
 
JACK audio question

Hi,

Very good thread here!

Perhaps you can help:

I have a problem with the following setup:
gmusicbrowser -> BruteFIR -> JACK

On each new song I play, a new connection (directly to the outputs of JACK :mad: ) is made by JACK itself, in addition to the connections I have made myself.
That's bad, because the music plays through my filters and directly to the outputs at the same time.

How can I stop that?
I would like to have a permanent setup. How can I prevent JACK from making a new connection every time the next song of the playlist starts, or when I click on a new song to play?

How can I do an automatic setup which starts BruteFIR and JACK and then waits for the player?
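
I imagine something along these lines with the standard JACK command-line tools (only a sketch: the port names below are examples, the real ones would need to be checked with jack_lsp, and the jackd/brutefir options have to match my hardware and filter config), but I don't know how to keep it from being undone on every new song:

Code:
    #!/bin/sh
    # start JACK on the ALSA device, then BruteFIR with its config file
    jackd -d alsa -d hw:0 -r 44100 &
    sleep 2
    brutefir ~/brutefir_config &
    sleep 2

    # list all ports and their current connections (to get the real names)
    jack_lsp -c

    # break the unwanted direct connection, then route player -> BruteFIR -> outputs
    jack_disconnect gmusicbrowser:out_0 system:playback_1 2>/dev/null
    jack_disconnect gmusicbrowser:out_1 system:playback_2 2>/dev/null
    jack_connect gmusicbrowser:out_0 brutefir:input-0
    jack_connect gmusicbrowser:out_1 brutefir:input-1
    jack_connect brutefir:output-0 system:playback_1
    jack_connect brutefir:output-1 system:playback_2

Would a patchbay tool like aj-snapshot or jack_plumbing be the right way to re-apply such a connection set automatically every time the player reconnects?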

Thanks a lot.
 
UbuntuStudio people, who definitely seem to care about RT performance (else they wouldn't have applied the full set of Ingo Molnar's rt-patches)
That's for completely different reasons. "UbuntuStudio" was never meant to be used for "hi-end" audio. It is (mostly) about recording and multimedia (A/V) production. For such applications latency does matter, for various reasons (e.g. A/V sync, using MIDI keyboards, etc.).

On the other hand, just playing an audio stream (which does not need to be synchronized with anything else) only requires filling up a buffer without ever allowing it to run empty (otherwise you get an underrun, which is a clearly audible "skip"). Latency in this case does not matter: buffer filling is asynchronous, while the data is always retrieved at a fixed pace defined by the audio clock (which is completely independent of latency).
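
(To make that concrete, a trivial aplay sketch; the file name is a placeholder and the numbers are purely illustrative:)

Code:
    # ask ALSA for a deliberately large buffer (500 ms) with long periods (100 ms);
    # as long as the buffer never runs empty, scheduling latency is irrelevant
    aplay --buffer-time=500000 --period-time=100000 test.wav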

Maybe there can be some (very indirect) consequences for the audio output only if you use an isochronous USB (or FireWire) audio link. But that's something you should never ever use anyway if you care about audio quality... use a PCI or PCI-e internal card or a USB 2.0 asynchronous audio link instead and you can forget about all that madness.

Generally, maybe one distribution a year instead of x.04 and x.10 might
take a little pressure off the Ubuntu developers and might be a step forward in terms of a well-engineered distribution.
I tend to agree with that. One release per year, perhaps with some optional partial upgrade in the middle, should be more than enough. If you like to be on the edge, you can track Debian testing (or even Sid).

On the other hand, that's why there are "LTS" releases... though IMHO there is something wrong with the way they assign LTS status to a release: in fact those are not necessarily any better than a regular release when they come out! (Fortunately, at least they tend to get better after a while.)
 
Hi there.

1. Latency

By now it should be a known fact that load, poor drivers, poor software and so forth on the PC cause clearly audible non-linearities in the audio stream - even if that stream is bit-transparent.

99% of all people I've talked to report changes in sound on the highest quality equipment when the PC environment gets improved. (Have a look at the Audio Asylum discussions or elsewhere.)

The "lower latency" factor is just an indicator that fewer processes are impacting the stream or better HW is used or better drivers are in place to get the overall impact on the data-stream down.


An isynchronous vs. asynchrounous USB discussion is kind of irrelevant at this point. Even highest quality asynchrounous USB DACs do sound different if improvements are applied on the PC side.

It seems that only full galvanic isolation and advanced rebuffering/reclocking, usually with an FPGA+buffer type of device on the DAC side, will resolve the issues.

Again . "Low Latency" just indicates that the stream gets less impacted by other sources. It is a good way to measure how well - in audio terms - your PC is performing.


2. Distribution Life Cycles

I do not agree with longer release cycles as long as Linux is facing severe driver problems at all ends. As long as driver handling is tied that closely to the kernel, and a specific kernel is tied to an LTS release, I see a need to keep those cycles short.

Instead of chasing fancy window managers, GNOME shells or the like, the Linux workforce should concentrate on certain key projects or key areas. There is plenty of stuff to do. The Ubuntu people seem to completely ignore the "poor" base -- incl. ALSA -- that they build their stuff on.

A good - rather rock-solid - base is actually the main success factor of Apple. They've got a pretty rock-solid base. They set the base and the rules. And others do the job (programming rather great apps and drivers).

The latest Linux project, "WeTab" (based on MeeGo Linux), clearly shows that it is quite easy to put something together quickly that "works".
Their initial idea was to compete against the iPad.
Though I guess slowly but surely they are realizing what it means to count on Linux and the respective apps as the base for a highest-quality consumer system.
Just read the first reviews of that device and you'll see what I'm talking about. I felt like "been there, experienced that".


Since I switched to the Squeezebox Touch I finally manage to spend 85% of my time on the music and no longer on the PC.
It's gonna stay that way for a while. :)

I have no idea how Linux would be able to get out of the mess. I am sure
there are options.


Cheers
 
Maybe I'm OT or maybe I'm opening a can of worms here...
but golden-eared audiophiles are always passing judgements on equipment that are hard to verify.

Just because some people known in the industry make a claim doesn't automatically make it true or audible to the rest of us.
As long as the data is correct and the transfer to the DAC is asynchronous, the PC should not matter. Add optical transmission and we have galvanic separation as well.

Feel free to prove me wrong, but I prefer some scientific explanation over subjective opinions. Just my $0.02...
 
Maybe I'm OT or maybe I'm opening a can of worms here...
but golden-eared audiophiles are always passing judgements on equipment that are hard to verify.

sometimes they behave like those scientists who invent a problem just to get a publication!

Feel free to prove me wrong, but I prefer some scientific explanation over subjective opinions. Just my $0.02...

The problem is that not everything is measurable, and in this field small marginal differences are really important. However, I find that most audiophiles do not use rational thinking to make judgments.

What seems not very clear is that *sound*, in the music context, is a component of the art itself. I personally find nothing wrong with having an *aesthetic reference for sound* as a term of comparison. In my opinion most audiophiles perceive their *own sound*, and there is nothing wrong with that. The wrong thing is when people do not realize this and think that their *sound aesthetics* is the reality (the hi-fi) that everybody should perceive.

Sorry for the OT philosophical digression :)
Pietro
 
The point is:

1. The manufacturers were claiming in the early days that bits are bits.
2. The software programmers were claiming bits are bits.
3. The DIY-Audio gurus were claiming bits are bits.

IMO none of them proved to be right about it.

They all had to learn that things are a bit more complicated running a PC as source.

Meanwhile you see manufacturers admitting the effects.
Meanwhile you see software companies (see Sonic Studio - Amarra or Pure Music, J.R. River) addressing the obvious issues.

None of them have shown relevant measurements yet!

Asking for measurements is always the easiest way to shut up hobbyists with limited budgets. That's a pretty well known fact.
Though I am more than happy that even manufacturers won't manage to measure those effects. That's why I won't even try.

Here is a nice article from Jeff Rowland: AudioMeasurements
 
the problem is that not everything is measurable

By definition your ears are simply a "measuring apparatus", hence by definition if it's not measurable then it's irrelevant with regard to how we hear it...

That said, your comment would be reasonable in the context of "not easily measurable", since clearly the way the ear integrates sound is a complex affair, involving time-based measurements, frequency averaging and some function of peak envelope levels.

We don't yet seem to have many good models for predicting how audio will actually sound to the listener. However, have a look at some of the analysis which drops out of the DRC project - there are some models there which seem to be getting closer.
 
The point is:

1. The manufacturers were claiming in the early days that bits are bits.
2. The software programmers were claiming bits are bits.
3. The DIY-Audio gurus were claiming bits are bits.

IMO none of them proved to be right about it.

What complete, unsubstantiated b*llox!

The point is:
They all had to learn that things are a bit more complicated running a PC as source.

?? "They *all* ..."?

Well group 1) in general claim that a PC is no good because they have something else more expensive to sell you

Group 3) are by definition learning avidly about everything

Group 2) are a class who claim that running a PC is incredibly complicated and are paid to make it simpler

Myth busted?

Meanwhile you see manufacturers admitting the effects.

..."the effects"?

Unsubstantiated claim which doesn't even state what "the claim" actually is

Make specific claims so that we can show you are talking complete rubbish please...

Meanwhile you see software companies (see Sonic Studio - Amarra or Pure Music, J.R. River) addressing the obvious issues.

.."the obvious issues"...

Another unsubstantiated claim referring to some unknown issue

Look it's simple:
Scientific theory - Wikipedia, the free encyclopedia
The defining characteristic of a scientific theory is that it makes falsifiable or testable predictions.

You are not even offering good "opinion", let alone theory.

None of them have shown relevant measurements yet!

"None" of them?

Tell me what "measurements" (since you made only a handwaving claim) you want to see and it's fairly likely we can find a manufacturer offering some measurements.

In my experience many of the bigger and better manufacturers write papers for the AES and offer quite substantial measurement data. One that's quite prolific, say, is Harman Kardon, and you can find a load of really good info from them.

Asking for measurements is always the easiest way to shut up hobbyists with limited budgets. That's a pretty well known fact.

Err, we are on DIY audio right? Have you read any of the posts here?

It's practically a definition that you move from being an amateur to a "hobbyist" the moment you stop guessing and take some measurements? Does anyone seriously develop stuff *without* taking measurements?

Let's redefine the field right now and state that if there are no measurements then "it's an opinion"; it only becomes science once we see some measurements...

E.g. here is a description of a crossover:
http://www.acourate.com/HorbachKeeleCrossover/AES_Keele_LinearPhaseXOFilters.pdf
It's science because it comes with formulae and measurements.

On the other hand your post fails even to make specific claims, let alone support them with evidence or measurement...

Though I am more than happy that even manufacturers won't manage to measure those effects. That's why I won't even try.

I'm not even sure what you are talking about, but if you could, say, find me a single speaker manufacturer in the whole world who doesn't actually measure their product as part of the development process, then... well, I'm not buying from them anyway...

Measurement is just a basic part of good science. It's prevalent in all industries as a basic premise of improving how we work. To claim it doesn't exist in ANY mature industry is just madness?

Here is a nice article from Jeff Rowland: AudioMeasurements

Which seems to disprove your complete post?!!! Did you not read it?

It starts: "There is definitely a close relationship between test measurement (specifications) and subjective sound quality."


Look, you are a chap with plenty of time on your hands; just cut the useless posts, write about specific things that you know about, and stop wasting your time with this pointless, non-specific handwaving?
 
Hi there.

1. Latency

By now it should be a known fact that load, poor drivers, poor software and so forth on the PC cause clearly audible non-linearities in the audio stream - even if that stream is bit-transparent.

I don't know where to start taking your post apart... There is so much wrong here:

- You start with the caption "Latency" and then within one sentence you bounce from system load to non-linearity culminating in some chatter about bit-transparency

Look, your claim is basically that:
- Given a single stream of bits output from a PC
- Given a single device which takes in those bits and outputs an analogue set of levels
- Feeding that *exact* stream of bits into the device can cause different analogue output levels assuming certain apparently external factors change

Now as it stands that claim seems reasonable (when stated a bit more precisely than your hand waving). However, it's dull and useless without some statement about how this influence could occur

Certainly claiming that the bits themselves are somehow magic and can be different is just madness... A better attack is:
- System load on the PC affects voltage or RFI, which in turn affects our device
- The communication channel between the PC and the device also carries a timing channel and the timing frequency varies from run to run


The "lower latency" factor is just an indicator that fewer processes are impacting the stream or better HW is used or better drivers are in place to get the overall impact on the data-stream down.

None of the above is correct or accurate in any realistic sense? OK, granted in a relaxed context we might let this pass since there is an element of truth, but since you are trying to offer an opinion from a position of "expertise" I think we need to correct this:

"lower latency" means exactly "lower latency", nothing else.

1) It does not mean that fewer processes are involved. In fact even a reasonable grasp of the subject would show that exactly the same number of processes are involved - the whole point of the low latency linux kernel is that the same number of processes get to run, but they run more frequently and for a shorter time each. (In general this means a loss of efficiency and it's almost never a beneficial change except where certain levels of interactivity are required)

2) It certainly does not mean that better hardware is in place. I can see almost no way to link "better-ness" of hardware with latency? Perhaps we could strain an analogy and see that certain USB devices would be affected?

3) I can see no way that "better drivers are in place to get the overall impact on the data-stream down" can be affected by system latency? In fact by definition as you decrease latency you will raise system load since more time is spent spinning your wheels.


Low latency with computers is not a "have / have not" thing; it's about the computer being able to schedule tasks at a certain maximum frequency. Increasing the speed at which you spin between processes is rarely valuable, and in general you want to spin your wheels as slowly as possible while still hitting your scheduling requirements.
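
(If anyone wants to put numbers on that instead of arguing: the rt-tests package ships cyclictest, which measures exactly this worst-case scheduling jitter. A minimal sketch - the priority, interval and duration here are arbitrary choices:)

Code:
    # measure worst-case wakeup latency of a SCHED_FIFO thread per core for ~100 s,
    # while loading the machine however you like in another terminal
    sudo cyclictest --smp -p 80 -i 1000 -l 100000 -m

The "Max" column is the figure that matters: as long as it stays well below the time your playback buffer can bridge, tighter RT tuning cannot change whether the buffer underruns.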

An analogy might be watching a tennis (or baseball or cricket) machine which fires balls out. It has a hopper which needs feeding from time to time and you could either:
- Run each ball over super fast after each shot
- Fill it up every X balls
- Fill it up when it's just about empty

Clearly with a well-engineered machine it should make zero difference to its performance how often you fill it up with "ammunition", provided that you fill it up before it actually runs out.

DACs roughly fit this analogy. They have a hopper into which you load often very large amounts of data (many will take more than 2 seconds of data at a time). We might postulate that turning off the computer for the 2 seconds between buffer fills might give the best sound, but you are claiming that things sound best when we fill up the buffer repeatedly when it's only a tiny bit empty (which should cause more RFI and more power fluctuations in general).

Clearly with a "well engineered" device it should make zero difference how often you refill the data hopper, but definitely any differences which appear in an arbitrary design will be down to other influences such as power ripple or clocking caused by your refill schedule and not the action itself of refilling



An isynchronous vs. asynchrounous USB discussion is kind of irrelevant at this point. Even highest quality asynchrounous USB DACs do sound different if improvements are applied on the PC side.

Well it seems 100% relevant to *your* point? Do you actually understand the difference?

"Isochronous" usb (note your mis-smelling) means that latency is massively important since the computer latency defines the accuracy of the clocking implementation on the end device.

Asynchronous (again, check your spelling) means that latency is largely irrelevant assuming good engineering of the end device (sure, you can always create a badly implemented device where it makes a difference). With this kind of device the clocking should be externally implemented, hence the latency of the feed is irrelevant.


It seems that only full galvanic isolation and advanced rebuffering/reclocking, usually with an FPGA+buffer type of device on the DAC side, will resolve the issues.

Unless the issues were RFI?

However, NOW you are starting to actually get to a smattering of science. So yes, you now appear to be hinting at an opinion that it's not the bits themselves which vary, but that the DAC is affected by external factors such as having to transmit a clock and power, and not transmit RFI, down to the device in addition to the data bits?

This would be a good line to continue - why not just post this rather than all the cr*p in the first half of your email?

Again . "Low Latency" just indicates that the stream gets less impacted by other sources. It is a good way to measure how well - in audio terms - your PC is performing.

No it's not. No it's not? It's not even on the agenda as any of these things?

Low latency means how often the computer spins its wheels servicing tasks. Consider scheduling timing: if you had some processes running and a latency defined as 1/100th of a second, then each process would get to run for a maximum of 1/100th of a second before being interrupted and another process being given a go. Lowering the latency to 1/1000th of a second means that you now interrupt each process much more frequently and spend more time swapping processes in and out. Now that's scheduling frequency and part of what we mean by "low-latency Linux"; just for completeness I will point out that there is a second aspect, and that's making the operating system itself run its own processes in such a way that they are of lower granularity.

However, I repeat that for non-live playback (i.e. you aren't running a recording studio), you can fill your DAC buffer with perhaps 2 seconds of audio data, and so you have no special low-latency requirements here.
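
(The arithmetic behind that "2 seconds", for anyone who wants to sanity-check it - assuming plain CD-format audio:)

Code:
    # 2 s of 44.1 kHz, 16-bit, stereo PCM:
    echo $(( 44100 * 2 * 2 * 2 ))   # = 352800 bytes, i.e. a few hundred kB of buffer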

Actually I run my own system with much more aggressive latency requirements because I insert a realtime filter into my audio playback, so now I do have "realtime" requirements and still I can easily meet 10ms scheduling requirements on a stock kernel under load doing video playback...


Myth busted?
 
Dear Ed.

Did you have a bad day!?!? Man -- relax a bit.

If you want to play the smart..s please open up your own scientific thread.
And don't hijack this one.

All those subjects have been discussed here several times over the last 4 years.

I don't know what you try to prove here. You don't have ANYTHING in your hands either.

And please don't come with all your DRC background. I don't want to get into sound quality messed up by FIR filtering and the questionable quality of in-room measurements again.


Please - do me a favour -- open up your own thread and stay out of this one.

Cheers
 
I forgot:

As some of you might know, I applied the same OS tweaks to the Linux-based SB Touch and made the tweaks public.


I am receiving quite a number of mails confirming the improvements made with those tweaks.


I am 100% sure that if dear Ed started measuring before/after, there wouldn't be any measurable differences.

Guys -- I couldn't care less. My proposal won't cost you more than 30 minutes of fun.

BTW, did I tell you that even Stereophile referred to my Touch blog in last month's SB Touch review article?

Guys -- I am not the one who lives on the wrong planet.

Cheers
 
And please don't come with all your DRC background. I don't want to get into sound quality messed up by FIR filtering and the questionable quality of in-room measurements again.

I'm not sure how/why you are defending your posts about "Low Latency" by randomly jumping topic to spew your rant about DRC?? How did you even make that logical leap??!

Actually, you have completely failed to address any of the shortcomings in your original post and retorted something like: "scientific method is bad, shoot from the hip is good"

However, let's examine your most recent response in the light of some of the things you have said previously in this very thread:


soundcheck said:
I am currently trying to get my EMU 0404USB running to do some recordings for DRC.
I am running the Touch now with brutefir convolution on the SqueezeboxServer.
In my acoustically pretty poor living room this makes a huge difference.

..interesting...

soundcheck said:
Me too! I am looking forward to a FIR filter based crossover.

Look, I'm not attacking you. Don't be so defensive. You simply seem to spend a bunch of time writing about random things outside of your experience, which in turn gather responses disputing your "facts"

Why not write some more stuff about these wondrous SB modifications of yours? Or try to be more specific about these unspecified "it's so much better" responses you received from people who allegedly tried your mods? This is DIY audio, and modifying and improving things is well on topic! As I stated already, I actually think the SB things are a really, really interesting idea as the basis for some decent quality audio. However, I absolutely hate "handwaving" and unproven claims.

Look, the SB is a "budget" item, so it's very likely that it can be easily improved. However, get to the meat - which mods improve what? How have you measured the improvements? What else can be improved? That is the kind of stuff I like to read about.

Good luck
 
Aha, it occurs to me that your rant about DRC stems from your failure to read my 3-line post to someone else. Honestly, it's not hard to read three sentences and get the correct context.

So, to rebut your rant:

I was making no claims about the quality of DRC's correction filters; I was pointing out that the project also provides some very powerful tools for quickly and easily analysing an audio system. Here are some example output graphs (these are generated for you by a script from the impulse response of any system - how you generate the IR is up to you, but the project also contains tools to help you produce one).

Example output:
DRC: Digital Room Correction

(Seriously, don't you all immediately want this kind of transparency about how your listening space measures?!)
 
To make one thing clear related to DRC.

On my lower-quality living room gear, situated in an acoustically pretty poor room, DRC is in fact the lesser of two evils. The performance of that system improved substantially.

Though I would never apply that kind of filtering on my main system, which is situated in an acoustically optimized room.


And I do not switch subjects. You came up with DRC. I consider DRC - and that kind of filtering in general - just another layer of messing around with the original data.
 