CamillaDSP - Cross-platform IIR and FIR engine for crossovers, room correction etc.

Now that you mention it, I recall reading about Jack (https://jackaudio.org/) earlier, so thank you! I think this will be the way to go, since I don't think CamillaDSP directly supports the use of multiple output devices. I will investigate and report back.

Since it is a single physical device and I have configured the MK5 to use its internal clock as the clock source, there should not be any problems with drift? I don't fully understand yet how this relates to having multiple audio devices on the Windows side.
 
I hit a roadblock: the Motu driver created multiple audio devices, one per two output channels. So for the total of 10 output channels I have 5 devices. How would I configure these in the CamillaDSP playback section?
I guess splitting the channels up onto individual devices like that makes sense in some cases. Unfortunately it's really inconvenient for apps like camilladsp that only support outputting to a single device. If you manage to combine them back into a single device using something like Jack then it should be ok, and there should not be any drift.

Outputting to several devices has been requested before and I have considered it. Unfortunately this would require a major redesign, and I don't think the benefits are large enough to motivate it.
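For reference, the playback section of a CamillaDSP config always points at exactly one device. A minimal sketch of what that part looks like with the Windows Wasapi backend is below; the device name is hypothetical, so use the exact name reported by ListDevices.
Code:
# Sketch of the playback half of a devices section (Wasapi backend on Windows).
# The device name is an assumption - use the name reported by ListDevices.
devices:
  samplerate: 96000
  chunksize: 4096
  # capture section omitted
  playback:
    type: Wasapi
    channels: 2
    device: "Speakers (MOTU UltraLite-mk5)"
    format: FLOAT32LE
Since only one device entry is possible here, the five 2-channel devices created by the driver cannot all be addressed directly; they would first have to be aggregated into one device by something like Jack.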
 
Henris: The splitting of the channels into separate stereo devices is done by the Motu driver, IMO. In Linux the device reports as a single multichannel device: https://www.diyaudio.com/community/...overs-room-correction-etc.349818/post-6759897

Unfortunately the stock MS UAC2 driver will not handle your card correctly, as the card uses implicit feedback which the driver does not support.

An option would be connecting your card to an RPi4 with a Linux/CamillaDSP/USB audio+network gadget configuration, running the DSP on the RPi and presenting the combo to Windows as a composite X-channel USB audio + network device for configuration. This option will be available, eventually.
 
... An option would be connecting your card to an RPi4 with a Linux/CamillaDSP/USB audio+network gadget configuration, running the DSP on the RPi and presenting the combo to Windows as a composite X-channel USB audio + network device for configuration ...

A really great and interesting option, and not only for this specific use!

Therefore, I would eagerly like to set up such an RPi CamillaDSP audio gadget to perform the DSP on the RPi, including recent approaches like your https://github.com/pavhofman/gaudio_ctl. But I simply do not know how best to set up the RPi as a Linux audio gadget, nor how to communicate with it over USB and interact with the ALSA layer, and I have not found advice on how to do so that matches my level of (in)experience.

So, is there some kind of HowTo for Dummies covering the basics of this audio gadget option? I would be very grateful for some basic clues/advice and the possibility to share experiences. Perhaps this could also happen in a dedicated GitHub project with a nice ReadMe?
 
Henris: The splitting of the channels into separate stereo devices is done by the Motu driver, IMO. In Linux the device reports as a single multichannel device: https://www.diyaudio.com/community/...overs-room-correction-etc.349818/post-6759897

Unfortunately the stock MS UAC2 driver will not handle your card correctly, as the card uses implicit feedback which the driver does not support.

An option would be connecting your card to an RPi4 with a Linux/CamillaDSP/USB audio+network gadget configuration, running the DSP on the RPi and presenting the combo to Windows as a composite X-channel USB audio + network device for configuration. This option will be available, eventually.
Well, Jack / Jack2 was yet another rabbit hole ;) The only documentation I could find regarding aggregating devices was this: https://jackaudio.org/faq/multiple_devices.html, and the only part potentially applicable to Windows ("Use the JACK2 audio adapter") states "More information is needed on this option". Even the Windows installer failed (I had to manually register JackRouter.dll), so it is evident that this software is not focused on Windows and is mainly intended for pro AV experts.

But I did see something interesting in the Jack2 settings:
[screenshot: Jack2 device settings]

It lists all the different interfaces (not devices?), and beside the split DirectSound and Wasapi items there is a single ASIO "MOTU UltraLite-mk5" item. This item is not visible in the Windows devices, nor is it visible in the output of CDSP ListDevices. Could this be used with CDSP? I will try it anyway.

Since my end game is a Linux-based NUC running CDSP, this might be the time to switch. It was still good to have the MK5 connected to a Windows machine, since the device settings are available only through the CueMix 5 software, which AFAIK does not run on Linux. One setting in particular was pretty nasty: by default the main volume of the MK5 controls only channels 1-2 and leaves the others at 0 dB. I almost blew my compression drivers...
 

TNT:
Maybe we could ask phofman, who seems to be a real wizard, to describe how to get from a newly bought RPi4 to a functioning, >2 channel, Camilla-based system? Pretty please? For "dummies"...

Perhaps a new thread would be good? There one could read a how-to for a common part and then we could have specifics per USB sound card... in the end this thread could describe a number of different configurations if owners of different boards could chime in and test/get help to succeed...

//
 
Intel NUC + Ubuntu + MOTU Ultralite MK5 + playback source on same machine
I've got to this part of the wiki:
https://github.com/HEnquist/camilladsp/blob/master/backend_alsa.md#alsa-loopback
https://github.com/HEnquist/camilladsp-config
But I'm having a hard time getting anything to work. I do get sound out of the MK5 if I select it in the OS settings as the output device and then play with Spotify. But whenever I try to introduce CDSP with the loopback device into the playback chain, it either does not produce any sound or CDSP fails to start.

Steps I have done:
  • Copied the asound.conf to /etc and modified the rate to match the CDSP config (96000). No other changes.
  • Activated aloop with "sudo modprobe snd_aloop".
  • As the OS-level output device, left the default "Analog output - Built-in Audio", which is the loopback device.
  • As the OS-level input device, tried either the loopback or the MOTU. The loopback was the default but seemed to "reserve" a loopback capture subdevice when selected.

If I select something other than the loopback as the OS-level input device, then CDSP goes into a weird mode where it says that the MK5 only supports 22 channels, and with 22 channels only 44.1 kHz is supported. The attached config files are from this state. The target is 10 channels at a 192000 rate.
Code:
2022-01-02 21:06:39.303870 DEBUG [src/alsadevice.rs:402] Playback: supported channels, min: 22, max: 22, list: [22]
2022-01-02 21:06:39.303883 DEBUG [src/alsadevice.rs:403] Playback: setting channels to 22
2022-01-02 21:06:39.303890 DEBUG [src/alsadevice.rs:407] Playback: supported samplerates: Discrete([44100])
2022-01-02 21:06:39.303896 DEBUG [src/alsadevice.rs:408] Playback: setting rate to 96000
2022-01-02 21:06:39.303918 DEBUG [src/alsadevice.rs:402] Capture: supported channels, min: 1, max: 32, list: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]
2022-01-02 21:06:39.303928 DEBUG [src/alsadevice.rs:403] Capture: setting channels to 2
2022-01-02 21:06:39.303937 DEBUG [src/alsadevice.rs:407] Capture: supported samplerates: Range(8000, 192000)
2022-01-02 21:06:39.303941 DEBUG [src/alsadevice.rs:408] Capture: setting rate to 96000
2022-01-02 21:06:39.303947 DEBUG [src/alsadevice.rs:412] Capture: supported sample formats: [S16LE, S24LE, S24LE3, S32LE, FLOAT32LE]
2022-01-02 21:06:39.303950 DEBUG [src/alsadevice.rs:413] Capture: setting format to S32LE
2022-01-02 21:06:39.304018 ERROR [src/bin.rs:344] Playback error: ALSA function 'snd_pcm_hw_params_set_rate' failed with error 'EINVAL: Invalid argument'

I think I'm just missing something trivial. To me it seems like whenever I get CDSP running it does not actually bind to the loopback; the output of aplay -l does not change.
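For reference, a minimal devices section for the loopback -> MK5 chain could look something like the sketch below. The ALSA card names are assumptions, so check the real ones with aplay -l / arecord -l.
Code:
# Sketch only - card names are assumptions, verify with "aplay -l" / "arecord -l".
devices:
  samplerate: 96000
  chunksize: 4096
  capture:
    type: Alsa
    channels: 2
    device: "hw:Loopback,1"      # capture side of snd-aloop; players write to hw:Loopback,0
    format: S32LE
  playback:
    type: Alsa
    channels: 10
    device: "hw:UltraLitemk5,0"  # hypothetical name; point at the raw hw: device
    format: S32LE
If CDSP reports unexpected limits such as 22 channels at 44.1 kHz only, it is worth double-checking that the playback device string really points at the raw hw: device of the MK5 and that nothing else has the card open.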
 

Attachments

  • config_motu.zip
  • asound.zip
Perhaps a new thread would be good? There one could read a how-to for a common part and then we could have specifics per USB sound card... in the end this thread could describe a number of different configurations if owners of different boards could chime in and test/get help to succeed...
It will take a while for the patches to settle down; there are still some unresolved issues being discussed with other gadget developers to make the technology practically usable. The gadget is running well on my RPi4, but having patches accepted upstream takes more than their proper function: the added features must satisfy the use cases of other kernel developers, a consensus must be reached, and responses are not always immediate (if any) :)

Afterwards the gadget will almost certainly be merged/integrated with camilladsp in some convenient way. Several options are possible, but the final approach cannot be implemented yet. There was a reason for the word "eventually" :) If things work out well, Linux 5.17 should have all the missing pieces.
 
Hi all

Rust is a main topic of mine, and I wrote some of the SIMD libs that you might want to use for performance (https://github.com/Lokathor/wide); Rust is great at threading, so maybe I can take a look at the code with that in mind?

My question is: is there a basic set of hardware that people recommend for creating a home theatre DSP with this? I'm thinking 8 inputs, 8 outputs (RCA), or possibly HDMI? I want to achieve Source -> AVR -> Preouts -> NUC PC with CamillaDSP -> PowerAmps.

Most PCs only seem to have mic in (2 ch), though some have a full 7.1 out.
 
If you want to do analog in/out, an audio interface like the MOTU Ultralite Mk5 would work. You would use the Mk5 as both the capture and playback device in CamillaDSP.
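In the config that simply means pointing both the capture and playback entries at the same interface; a sketch, assuming Linux/ALSA and a hypothetical card name:
Code:
# Sketch only - "hw:UltraLitemk5" is an assumed ALSA name, check with aplay -l.
  capture:
    type: Alsa
    channels: 8
    device: "hw:UltraLitemk5,0"   # analog inputs from the AVR pre-outs
    format: S32LE
  playback:
    type: Alsa
    channels: 8
    device: "hw:UltraLitemk5,0"   # analog outputs to the power amps
    format: S32LE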

HDMI solutions are more difficult, but if you have a 7.1 PCM HDMI source you can use a Meridian HD621 to split that into 4 stereo AES outputs and then route those to a USB / AES input DAC like the Okto DAC8 pro, which has the ability to take an AES input, send it to a computer via USB for DSP, and then receive the DSP'd signal back via USB. As the Okto is difficult to get, you could similarly use a miniDSP U-DIO8 to receive the AES inputs, send them via USB to your computer for DSP, and then route the DSP'd signal to its AES outputs feeding 4 stereo AES-input DACs.

Michael
 
I want to achieve Source -> AVR -> Preouts -> NUC PC with CamillaDSP -> PowerAmps.
For an HTPC I would be a bit cautious about the latency introduced by the buffers in the capture -> camilladsp -> playback chain when configured for reliable operation. But hopefully the latency should be more or less constant, allowing it to be compensated by delaying the video in the player/AVR.
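As a very rough back-of-the-envelope figure, assuming a 4096-sample chunksize at 48 kHz and roughly one chunk of buffering on each side:

$$t_{\text{latency}} \approx \frac{2 \cdot \text{chunksize}}{f_s} = \frac{2 \cdot 4096}{48000} \approx 170\ \text{ms}$$

The exact number depends on the chunksize, the target level and the device buffers, but it gives an idea of the video delay that would need to be dialled in.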
 
I have a convolution algorithm question; if someone can answer it I would be very appreciative.

I have been trying a couple of ways to "merge" a correction FIR with an XO FIR to generate a single merged FIR filter. Both ways I have tried produce an output FIR twice the length of the individual input FIRs. I was assuming they would merge in parallel and retain the same number of taps/samples/delay, but it appears they are merging serially and doubling the number of taps/samples/delay.

Is there a "parallel" convolution versus a "serial" convolution ? If so, are their any good reads/links on why one would be used over another in an audio application (besides the obvious latency issues) ?

Thanks much.

Code:
# Combining FIR filters with CamillaDSP
devices:
  samplerate: 96000   # assumed value for this sketch
  chunksize: 4096     # assumed value for this sketch
  capture:
    channels: 1
    format: FLOAT64LE
    type: File
    # Target PS correction FIR - matching 256K-tap FLOAT64LE FIR
    filename: /path_to/PS_FIR.pcm
  playback:
    channels: 1
    format: FLOAT64LE
    type: File
    filename: /path_to/MERGED_PSXO_FIR.pcm

filters:
  XO_FIR:
    type: Conv
    parameters:
      filename: /path_to/XO_FIR.wav  # matching 256K-tap FLOAT64LE FIR
      type: Wav
      channel: 0

pipeline:
  - type: Filter
    channel: 0
    names:
      - XO_FIR
 
I have been trying a couple of ways to "merge" a correction FIR with an XO FIR to generate a single merged FIR filter. Both ways I have tried produce an output FIR twice the length of the individual input FIRs.
This is just how convolution works. Mathematically, convolving a signal first with one IR and then with the other is equivalent to first convolving the two IRs with each other and then convolving the signal with that result.

Let's say both IRs consist of a large main feature in the middle, surrounded by wiggles that get smaller and smaller towards each end. To calculate the discrete convolution of the two, we slide one over the other. For each step, we multiply all overlapping elements and then sum the products; this sum is one point of the result. The first point is where the IRs overlap in just a single sample, and the last point is where they have slid past each other so that only the end samples overlap. This gives length1 + length2 - 1 new points.
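In symbols, for impulse responses $h_1$ and $h_2$ with $N_1$ and $N_2$ taps:

$$y[n] = (h_1 * h_2)[n] = \sum_{k} h_1[k]\, h_2[n-k], \qquad n = 0, \ldots, N_1 + N_2 - 2$$

so the merged FIR has $N_1 + N_2 - 1$ taps, roughly twice the length when both inputs are the same size.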

Because the wiggles at the start and end of the IRs have small amplitude, the result will have very low amplitude far from the center. You should be able to truncate it back down to the length of the IRs you started with; just apply a window function to make sure the ends go to zero.