LADSPA plugin programming for Linux audio crossovers

That's why I advise people to stick to ecasound.

Honestly, what I have seen done with ALSA is simply a couple of filters. I would hardly consider that a crossover. You need to be able to accommodate many filters, and different ways to route the signal. Then you need to make it easy for the user to update/modify the filter parameters. Ecasound does all of this in spades.
 
Hi Charlie,

First, as I noted in my first post, I posted the configuration because you had expressed an interest in an alternative to ecasound; since you are no longer interested, I will cease polluting your thread.

Second, since you have not pursued the ALSA route, it stands to reason that you have no support for the implication that ALSA can do only "a couple of filters".

Although this does not bear on the ALSA vs. ecasound decision: assuming, arguendo, that your implication is correct, the number of filters is a design criterion. To wit, some people report good results with Samuel Harsch's crossover, which requires only two filters and a delay per channel, clearly doable with ALSA.

Kindest regards,

M
 
[High horse] Any crossover is part of a series of compromises in which you give and you get. In the case of our digital crossovers, these choices affect the very integrity of the digital source. From here, they then influence the amplification chain and all of the usual parameters of transducer choice, implementation, and acoustics. Ideally, a crossover system should contain as many compensatory tools as possible. Yet, diligent design and smart choices in other areas can limit the need for complex signal processing. How we balance compromises is very individual. IMHO, there is no substitute for using highly capable drivers in optimal enclosures. Then, simple filters are the best solution! That's how I roll... :spin:
[/High horse]
 
Once you bring the signal into the digital domain with sufficient headroom and data precision, you can perform many, many operations on the data without incurring any real "harm" in terms of noise, distortion, etc. This is very much unlike analog processing, where each active stage adds noise and distortion, and the physical circuit can add hum, crosstalk, and so on. The "less is more" mantra simply does not apply to DSP the way it does to other types of signal processing. For every signal processing operation that you omit from the crossover under the guise of "simpler is better", you sacrifice the performance it would bring to the system. Weighing the benefits to the loudspeaker against the detriment to the signal, the scale tips strongly toward intensive DSP processing.
 
I hope Burning Amp went well, Charlie!

Based on listening experience, I would make a distinction. When signals are resampled between sample-rate families (e.g. 44.1 -> 48, and their multiples) AND THEN intensively DSP'd, fidelity suffers. In those instances it is better to resample as high as possible and keep the filters simple. Of course, the ear can and should decide.

But hey, especially now that non-resampling sources are really coming into their own, I really hope we can get your filters running on my little non-resampling platform. Once the ALSA syntax is cracked, it is broadly sharable. More on that after a bunch more shop work...

Frank
 
Compared to DSP filters manipulating samples, the resampling process has the potential to do some bad things to the audio stream, and (to use your words) this can certainly impact the fidelity to some degree. Still, it is not as bad as, for instance, multiple A-to-D and D-to-A conversions.

I assume you have and listen to audio sources that have a variety of sample rates? Maybe some ripped CDs at 16/44.1, some high res files, and some streams. At least this is the case for me. Is your goal or current approach to keep the same sample rate throughout the chain all the way up to and including the DAC, with the sample rate changing on the fly as needed? If so, that IS removing the possibility of sample rate conversion effects.

My opinion is that one sample rate conversion, using one of the high-quality sinc-based algorithms, doesn't really affect the fidelity (as you say) of the audio stream to a significant or audible degree. In my system, I know that there is some sample rate limitation somewhere between the source and the DAC's output. This could be the highest sample rate at which I can effectively transmit my data (I'm streaming audio wirelessly), or perhaps my DAC's highest sample rate, or perhaps one of the pieces of software that I use to process, send, or receive the audio stream. So I resample all streams to that rate, no matter what the rate of the source (if different), right at the beginning of the chain.
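As a concrete illustration of "resample everything once at the head of the chain", here is a minimal sketch using SoX's sinc-based rate conversion. The target rate and file names are hypothetical, not the poster's actual setup:

```shell
# Convert a source file to a single system-wide rate (96 kHz here,
# chosen only as an example) with SoX's very-high-quality resampler.
# All later DSP then runs at this one rate, so no further sample
# rate conversion is needed anywhere downstream.
sox input_44k1.flac -b 24 resampled_96k.flac rate -v 96000
```

The `-v` option selects SoX's highest-quality `rate` setting; everything after this point (ecasound, ALSA, the DAC) would then be configured for the one chosen rate.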
 
Is your goal or current approach to keep the same sample rate throughout the chain all the way up to and including the DAC, with the sample rate changing on the fly as needed?

Yes. The Debian kernel running the signal renderer adapts to the recording's sample rate multiple. This rate is then respected by the player and filters to avoid any/all data interpolations. My quest is to make the most of the BBB player's SOC for digital filtering. This recipe may not best serve all systems, but it is pretty sweet with my high resolution DACs, amps and transducers.

The DIY isolator/reclocker hardware comes from Twisted Pear Audio using a BBB or Amanero, and some analogous hardware is available from ACKO.

So this is one case where the 'simplicity' equation comes in. On the one hand, is avoiding fractional-rate resampling, but being limited (by the CPU) to simpler DSP filters, better or worse than, on the other hand, resampling the signal but having tons of CPU to run any DSP imaginable? In the past, simple has worked best for me. But the future... It would be interesting to see if multiple asynchronous USB->Amanero isolator/reclockers could give the best of DSP control plus avoid fractional resampling. ...for someone else to wrangle; my wagon is hitched to synchronous filtering in the little BBB for now. 😛

Cheers!
 
Hi Charlie,

It was great to see you and the RP2 DSP system at Burning Amp. The system sounded really nice!

I liked the USB WiFi adapter and DAC you used. What were they?

Regards,

Rob

Hi Rob,

Sure, happy to help. The DACs I was using are the Sabre Tiny USB DAC from HiFiMeDiy:
HiFimeDIY Sabre Tiny USB DAC

The WiFi dongles with antenna that I was using were bought via eBay and are NLA (no longer available). This one, from Adafruit, seems similar:
USB WiFi (802.11b/g/n) Module with Antenna for Raspberry Pi
The least expensive option that is plug and play under Raspbian is the WiPi mini WiFi adapter sold by MCM:
Wi-Pi Raspberry Pi 802.11n Wireless Adapter
The only drawback is that when the adapter is plugged directly into the Pi, the Pi itself tends to interfere with the signal: there is metal (the USB port housings on the Pi) close to the adapter, and the Pi is likely spewing interference. I solved this problem by adding a 12" USB extension cable like this one and plugging the WiPi into the end of that. The added distance improved connectivity a lot.

The WiFi adapter with antenna is slightly better in terms of signal pickup, but the little WiPi is still pretty good and, since these adapters are plug and play on all of my Linux systems, they are handy to have around.
 
Charlie, could you please share something about feeding multiple USB sound cards from ecasound?
Was the result OK? Could you post a script, or explain how you piped the output to these cards?

I want to implement it in my system, and maybe build some simple DIY NOS DACs. I have a cheap Behringer UCA202 USB card and the sound is very good/natural. I want to buy 2 more sound cards.
 
Sure, no problem. Sound quality is excellent. I use the DACs I linked to two posts above...

My system is still dismantled after a recent show, but recalling from memory what I do is this:
I connect all the DACs to the Raspberry Pi
I use the commands "aplay -L" and "aplay -l" (lowercase L) to see what ALSA sees in terms of devices, what they are called, etc.

In my system, when I connect one DAC to the Pi it is just called "DAC". If I connect another identical DAC to the Pi, it appears as "DAC_2" or something like that. The OS appends the number when two devices of the same type are present. The order (which is DAC and which is DAC_2) always remained the same and may have something to do with which USB ports the devices are plugged into.

To output to these from ecasound I use -o:alsa,DAC and -o:alsa,DAC_2, as explained in the ecasound man pages.

Before you get excited about implementing a particular DAC, I would first search on the web to see if anyone has been able to successfully use that DAC under Linux, or if people are reporting problems. Some USB chips or DAC chips just don't work under linux without a lot of extra programming. Luckily, many ARE plug and play under Linux, just not all.
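To make the device discovery and routing above concrete, here is a hedged sketch of a two-way split to two USB DACs. The chain names, crossover frequency, and input device are hypothetical, and -efl/-efh are ecasound's built-in low/highpass effects standing in for the LADSPA crossover filters discussed in this thread:

```shell
# List ALSA playback devices by name and by card number
aplay -L
aplay -l

# Hypothetical two-way crossover: one input feeds two chains,
# each chain gets a filter and goes to its own USB DAC by name.
ecasound -b:256 \
  -a:woofer,tweeter -i:alsa,default \
  -a:woofer  -efl:2000 -o:alsa,DAC \
  -a:tweeter -efh:2000 -o:alsa,DAC_2
```

A real setup would replace -efl/-efh with LADSPA plugins via ecasound's -el:plugin_name,param,... option.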
 
Charlie - I'm still using the filters with great results. I'm now moving to the next project: how to input 6 channels to ecasound from Kodi, for surround sound when watching movies. I think the solution is to use the ALSA loopback. I haven't used the loopback before, so would you mind sharing how you set up yours (for the 2-channel input)?

Kodi's internal engine uses 32-bit float. Is this the same input format the LADSPA filters expect? I want to avoid having to resample on the input side, to save CPU power.
 
ALSA Loopback:
Running "sudo modprobe snd-aloop" will add an ALSA loopback "card" (good for testing; it lasts until reboot).
To make it permanent, add "snd-aloop" to the end of the list in the file "/etc/modules" (at least under Raspbian), so that the loopback is still there after a reboot.

Always check which card number the loopback is assigned. Once the line is in the modules file, the number should remain the same from then on.

In my system I "play" to the loopback device, e.g. card 1, device 0, and receive that same signal back "out" of the loopback on card 1, device 1. It's like a "tube" where you put audio into end "0" and get it out of the other end "1".
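The steps above can be sketched as shell commands. The card number (1) is only an example and must be checked with aplay -l on the actual system:

```shell
# Load the loopback driver now (lost on reboot)
sudo modprobe snd-aloop

# Make it permanent across reboots (Raspbian-style /etc/modules)
echo "snd-aloop" | sudo tee -a /etc/modules

# Find out which card number the loopback was assigned
aplay -l

# Quick test of the "tube": play a test tone into one end...
speaker-test -D hw:1,0 -c 2 -t sine &
# ...and record that same signal back out of the other end
arecord -D hw:1,1 -f cd -d 5 loopback_check.wav
```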
 
Thank you! Got it to show up in kodi but no sound yet. Is the ecasound syntax in your case something like: ecasound -i alsa,hw:1,1 -o alsa

In ecasound I use:
Code:
-i:alsahw,2,1,0
-o:alsa,front:DAC
for input and output respectively. The main difference seems to be that I am also specifying the subdevice. The syntax is:
alsahw,CARD#,DEVICE#,SUBDEVICE#
In this case I am using card 2, device 1, subdevice 0. In my player I send the output to card 2 (it shows up by name as "Loopback"), device 0 (this was not clear, and I had to figure out by trial and error whether to play to device 0 or 1), and subdevice 0. For the subdevices, my player offered a list of 8 different loopback "modes" ("default, front, 2.1, ..., 7.1 channel") and I just chose the first one, which is evidently mapped to subdevice 0.

NOTE that in my system and in the -o option shown above, the loopback is card 2. To discover which card is which, use aplay -l (that's a small "L"). In my system the loopback is always the last card, so if I add or remove audio hardware (e.g. a USB DAC) the card number changes.
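Putting the pieces of this post together, the whole chain might look like the following hedged sketch; the card number, sample format, and filter are placeholders for the actual system's values, with -efl standing in for the real LADSPA crossover filters:

```shell
# The player writes to the loopback's "in" end (card 2, device 0).
# ecasound reads the "out" end (card 2, device 1, subdevice 0),
# filters, and plays to the USB DAC by name.
ecasound -b:256 \
  -f:f32_le,2,44100 \
  -i:alsahw,2,1,0 \
  -efl:2000 \
  -o:alsa,front:DAC
```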
 
Many thanks!
After some hard fighting with ALSA, I got it to work with Kodi: 2 channels work really well. I only briefly tested 6 to 8 channels at 44100 with no filters applied, but will need to spend some time mapping out the channels properly and seeing whether the processing causes too many issues with lip sync. On my RPi2, ecasound with AC3 5.1 and no filters applied used only 6% CPU, with 60% for the movie. We'll see how much the RPi2 can handle once I add an active crossover for the mains, low-pass and high-pass for the rest, and play some more demanding video content.

Will try out how many channels and filters I can use before running into trouble.

BTW: I'm using OSMC as the base system, which is preconfigured for Kodi. I believe it is based on Debian Jessie. It's pretty slimmed down and optimized for the RPi, which makes it a good starting point if one doesn't want to start from scratch. I still haven't been able to confirm Kodi's audio engine, but I believe it's 32-bit float that is automatically downsampled to the DAC spec. I need to see if I can tap into the 32-bit stream directly.
 
NOTE that in my system and in the -o option shown above, the loopback is card 2. To discover which card is which, use aplay -l (that's a small "L"). In my system the loopback is always the last card, so if I add or remove audio hardware (e.g. a USB DAC) the card number changes.

That is why it is recommended to use card names instead of card indices. E.g. hw:CARDNAME,1, where CARDNAME is the first short (single word) name in aplay -l.
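As an illustration of reading the card name out of aplay -l (the output shown in the comment is a hypothetical example, not from the poster's system):

```shell
# aplay -l prints lines such as:
#   card 2: Loopback [Loopback], device 1: Loopback PCM [Loopback PCM]
# The first short name ("Loopback") is the CARDNAME, so the
# index-independent device string for the loopback's capture end is:
arecord -D hw:Loopback,1 -f cd -d 5 check.wav
```

This keeps working even if plugging in another USB DAC shuffles the card numbers.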
 
Greetings Charlie and all,

Life has been good over in BBB land and I have been playing with a system for creating remote controls of 'smart stuff' on an Android or iOS device. It is generally applicable to any Linux machine, and can interface via TCP, UDP, and HTTP (but not SSH). You guys could make some pretty cool controllers for your RPi systems! E.g., manage Ecasound options, etc... I have a system-specific prototype running on my BBB using a Python Asyncore server that is very lightweight. Anything that can be done from a shell script or via a GPIO pin can be controlled using a number of RTG widgets. I cobbled working code together without much knowledge of Python nor of best practices for efficiency and security. It seems there may be mutual benefit in chatting about this. The Android/iOS app is called NetIO, easily found via web search... I started a thread: http://www.diyaudio.com/forums/twisted-pear/281776-control-bbb-based-audio-appliances.html. Your comments and ideas would be much appreciated either here or there.

All the best,

Frank
 
That is why it is recommended to use card names instead of card indices. E.g. hw:CARDNAME,1, where CARDNAME is the first short (single word) name in aplay -l.

I was not able to get that to work for input from the loopback "card", at least when I was first getting everything going. I should re-check. This might be because the loopback also has a subdevice associated with it, and the CARDNAME-style specifier for ecasound (-i:alsa,CARDNAME) doesn't include the subdevice... again, I will have to check (or someone else can).

On the other hand, I use the CARDNAME when specifying which output to use and that works well.