CamillaDSP - Cross-platform IIR and FIR engine for crossovers, room correction etc.

I didn't realise that CPAL is not able to provide meaningful feedback to the caller. If it requires custom patches for CPAL, then that's what must be done. Pity that I don't know my way around Rust. As much as I'd love to add another language to my resume, my IRL situation precludes me from doing that for the time being. I love the pace and direction you're taking CamillaDSP, no easy feat!
The thing with CPAL is that it presents a single, simple API for a whole bunch of very different systems. This is really convenient, but it also means you don't get access to anything that is specific to one API. CPAL is also a big project used by many, which means development is slow and careful. I tried to start a discussion some time back about adding support for WASAPI exclusive mode (and I was willing to work on it) but I never got any feedback. But it's probably outside the scope of CPAL anyway, same as any CoreAudio-specific notifications.
I think the only reasonable way forward is to skip CPAL and use CoreAudio and WASAPI directly. But this is a big job and not something I'm planning on starting right now.


Hello! I know very little about the intricacies of Linux; I can only generate "brilliant" ideas and ask questions, so sorry if they are stupid.
1. When will it be possible to use Camilla as an already-debugged application with a working graphical interface on an ARM device? Will it become possible to download an image, write it to a microSD card for a Pi or BBB, and enjoy life?
2. How good is Camilla's signal processing? Is it no worse than the DSP in Roon, HQPlayer, JRiver and the other players (I'm talking about the ARM version)?
3. (Very important, IMHO) Is it possible to set up Camilla so that the ARM device it is installed on can receive a stream from any UPnP player (BubbleUPnP, for example), and Camilla in turn sends the already-processed signal to another network endpoint capable of outputting 2-8 channels? For example a BBB (BeagleBone Black) with the PURE firmware from Pavel Pogodin, which can output a signal via UPnP (MPD), squeezelite and others. Essentially, IMHO, a setup where each device does its own thing: Camilla runs on a Pi 3/4 and sends the already-processed DSP channels to the BBB endpoint.
4. Is a version of Camilla for the BBB possible?
5. Will filter impulses generated in rePhase work in Camilla?
6. Is there a time delay setting?
1. What should the application do? CamillaDSP is shipped with the latest Moode release as a preview, and there is ongoing work to integrate it nicely. Others are also working on pCP.
2. You mean the numerical accuracy? CamillaDSP gives results close to the theoretical limit for the float format you use, with the numerical noise in the -150 dB range for 32-bit and -300 dB for 64-bit. The others are probably similar.
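As a rough sanity check of those figures, the per-operation rounding floor of each float format is set by its machine epsilon; real pipelines accumulate somewhat more noise over many operations, so this is only a back-of-the-envelope sketch, not a measurement of CamillaDSP itself:

```rust
/// Rough quantization noise floor of a float format, in dB relative to
/// full scale: 20*log10(machine epsilon). Only an order-of-magnitude check
/// of the numbers quoted above.
fn noise_floor_db(epsilon: f64) -> f64 {
    20.0 * epsilon.log10()
}

fn main() {
    let f32_floor = noise_floor_db(f32::EPSILON as f64); // about -138 dB
    let f64_floor = noise_floor_db(f64::EPSILON);        // about -313 dB
    assert!(f32_floor < -135.0 && f32_floor > -142.0);
    assert!(f64_floor < -310.0 && f64_floor > -316.0);
    println!("f32: {:.1} dB, f64: {:.1} dB", f32_floor, f64_floor);
}
```

The quoted -150/-300 dB figures are in the same ballpark: accumulated processing noise lands a little above or below the single-operation epsilon floor depending on the filter chain.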
3. CamillaDSP itself doesn't support any networking, but it would be possible to use it as the DSP engine for a project like this. I know of one such project, ConvoProxy. It's still in very early development and not generally available yet, but worth keeping an eye on. I only know of this thread about it (in German): ConvoProxy (Convolving Proxy System) - aktives-hoeren.de
4. Haven't tried on a BBB, but I think the CPU might be a bit weak.
5. Yes
6. Yes

@HenrikEnquist:

It would be nice to either rate limit the "No data to play, dropping a callback" message to once per second or have a way to turn it off completely. I want to be able to see the other debug and trace messages, but this one quickly clutters the screen and/or log files if you pause the playback or hit a track with a different sample rate.
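A rate limit like the one requested could be sketched as below. This is a generic illustration of the idea, not CamillaDSP's actual logging code, and all names are made up:

```rust
use std::time::{Duration, Instant};

/// Emits at most one message per interval and counts the suppressed ones,
/// so the log still shows how often the condition occurred.
struct RateLimitedLog {
    interval: Duration,
    last: Option<Instant>,
    suppressed: u64,
}

impl RateLimitedLog {
    fn new(interval: Duration) -> Self {
        RateLimitedLog { interval, last: None, suppressed: 0 }
    }

    /// Returns Some(suppressed_count) if the message should be emitted now,
    /// or None if it should be dropped this time.
    fn should_emit(&mut self, now: Instant) -> Option<u64> {
        match self.last {
            Some(t) if now.duration_since(t) < self.interval => {
                self.suppressed += 1;
                None
            }
            _ => {
                self.last = Some(now);
                let n = self.suppressed;
                self.suppressed = 0;
                Some(n)
            }
        }
    }
}

fn main() {
    let mut limiter = RateLimitedLog::new(Duration::from_secs(1));
    let t0 = Instant::now();
    // First message passes; the next two within the same second are dropped.
    assert_eq!(limiter.should_emit(t0), Some(0));
    assert_eq!(limiter.should_emit(t0 + Duration::from_millis(100)), None);
    assert_eq!(limiter.should_emit(t0 + Duration::from_millis(900)), None);
    // After a second the message passes again, reporting 2 suppressed copies.
    assert_eq!(limiter.should_emit(t0 + Duration::from_millis(1100)), Some(2));
    println!("rate limiter ok");
}
```

The "suppressed N copies" count avoids hiding underruns completely while keeping the log readable.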


The easiest way is if you compile the code yourself; with Cargo it's very easy. Just change the code here to your liking: camilladsp/cpaldevice.rs at develop · HEnquist/camilladsp · GitHub
 
The thing with CPAL is that it presents a single, simple API for a whole bunch of very different systems. This is really convenient, but it also means you don't get access to anything that is specific to one API. CPAL is also a big project used by many, which means development is slow and careful. I tried to start a discussion some time back about adding support for WASAPI exclusive mode (and I was willing to work on it) but I never got any feedback. But it's probably outside the scope of CPAL anyway, same as any CoreAudio-specific notifications.
I think the only reasonable way forward is to skip CPAL and use CoreAudio and WASAPI directly. But this is a big job and not something I'm planning on starting right now.
If all else fails, the CPAL repo can be forked and changes made there. Since upstream moves slowly, it shouldn't be too much work to stay in sync with the changes. I agree that reworking CamillaDSP to use the CoreAudio and WASAPI APIs directly is too much work. I'd much rather see if I could hack a solution in place first.


The easiest way is if you compile the code yourself; with Cargo it's very easy. Just change the code here to your liking: camilladsp/cpaldevice.rs at develop · HEnquist/camilladsp · GitHub
Thanks, I'm already compiling the code myself anyway. I just figured that I might not be the only one wanting to dial it down a bit, since it obscures relevant debug and trace information. Not receiving any samples when playback is stopped is, after all, perfectly normal behaviour and hardly justifies logging at warning level, IMHO of course :)

Another thing: if I watch a YouTube video at 48 kHz, and then play a few tracks using Roon Radio at 44.1 kHz, 88.2 kHz and 96 kHz (Tidal Master), that's not something CamillaDSP can handle even with the resampler turned on. Is it only intended for batch processing of a single sample rate, or should it be able to handle arbitrary input sample rates as well?
 
I did measurements with ARTA and CamillaDSP and I'm not happy.

My setup:

Line out L (Laptop, ARTA) -> Line in L (PC, CamillaDSP)/Line out L -> Line in L (Laptop, ARTA)

and parallel for dual channel measurement mode (phase-information for ARTA):

Line out R (Laptop, ARTA) -> Line in R (Laptop, ARTA)

The result of a good measurement is this:
Dropbox - OK.jpg
Dropbox - OK2.jpg

But when I restart the PC and CamillaDSP it looks like this:
Dropbox - notOK.jpg
Dropbox - notOK2.jpg

Every time I restart my PC and CamillaDSP I get a different:
- ripple before the impulse
- phase (with the cursor on the impulse maximum)
- delay (between 68 and 87 ms)

Any ideas/solution?

Here are my configs:

Start CamillaDSP from a terminal:
/home/media/Schreibtisch/camilladsp -v /home/media/Schreibtisch/8Channel.yml

asound.conf
pcm.Soundcard {
    type hw
    card D2
    device 0
    format S32_LE
    rate 192000
}

ctl.Soundcard {
    type hw
    card D2
}

8Channel.yml
devices:
  samplerate: 192000
  chunksize: 8192
  target_level: 4096
  adjust_period: 10
  capture:
    type: Alsa
    channels: 2
    device: "Soundcard"
    format: S32LE
  playback:
    type: Alsa
    channels: 8
    device: "Soundcard"
    format: S32LE

mixers:
  8ChannelMixer:
    channels:
      in: 2
      out: 8
    mapping:
      # TT (woofer); channel 0 left / channel 1 right
      - dest: 0
        sources:
          - channel: 0
            gain: 0
            inverted: false
      - dest: 1
        sources:
          - channel: 1
            gain: 0
            inverted: false
      # MT (midrange); channel 0 left / channel 1 right
      - dest: 2
        sources:
          - channel: 0
            gain: 0
            inverted: false
      - dest: 3
        sources:
          - channel: 1
            gain: 0
            inverted: false
      # HT (tweeter); channel 0 left / channel 1 right
      - dest: 4
        sources:
          - channel: 0
            gain: 0
            inverted: false
      - dest: 5
        sources:
          - channel: 1
            gain: 0
            inverted: false
      # Open; channel 0 left / channel 1 right
      - dest: 6
        sources:
          - channel: 0
            gain: 0
            inverted: false
      - dest: 7
        sources:
          - channel: 1
            gain: 0
            inverted: false

pipeline:
  - type: Mixer
    name: 8ChannelMixer
 
If all else fails, the CPAL repo can be forked and changes made there. Since upstream moves slowly, it shouldn't be too much work to stay in sync with the changes. I agree that reworking CamillaDSP to use the CoreAudio and WASAPI APIs directly is too much work. I'd much rather see if I could hack a solution in place first.
I have considered forking cpal, but after looking closer at the code I decided not to. But there are large chunks of code in cpal I can lift into CamillaDSP with little changes. I don't think the amount of work is unreasonable, and I think it's worth doing. Just not right now.

Thanks, I'm already compiling the code myself anyway. I just figured that I might not be the only one wanting to dial it down a bit, since it obscures relevant debug and trace information. Not receiving any samples when playback is stopped is, after all, perfectly normal behaviour and hardly justifies logging at warning level, IMHO of course :)
I don't want to hide buffer underruns! You get a lot of them because that RME device is quirky. Not unusual; all devices seem to have quirks, especially when using S/PDIF input, and of course they all have different ones.

Another thing: if I watch a YouTube video at 48 kHz, and then play a few tracks using Roon Radio at 44.1 kHz, 88.2 kHz and 96 kHz (Tidal Master), that's not something CamillaDSP can handle even with the resampler turned on. Is it only intended for batch processing of a single sample rate, or should it be able to handle arbitrary input sample rates as well?
Where is it getting the audio from? Still spdif?
It's built to process a single sample rate (that is allowed to drift a little) until told to load a different configuration. When running with ALSA, this can be solved by the alsa_cdsp plugin. Other platforms need other solutions.
 
... My setup:

Line out L (Laptop, ARTA) -> Line in L (PC, CamillaDSP)/Line out L -> Line in L (Laptop, ARTA)

and parallel for dual channel measurement mode (phase-information for ARTA):

Line out R (Laptop, ARTA) -> Line in R (Laptop, ARTA)
This might not be a problem with CamillaDSP. It's more likely a problem with your setup.

With the first setup you introduce another set of conversions, and therefore a daisy chain of DA-AD-DA-AD stages, into your signal path. The second setup has only one DA-AD stage. Therefore I guess this is not a problem with CamillaDSP itself. For example, you could test another configuration:

Line out L (Laptop, ARTA) -> Line in L (PC, Sox/ or Brutefir)/Line out L -> Line in L (Laptop, ARTA)

I am sure you will get the same result. Another test would be to connect your PC running CamillaDSP via S/PDIF in/out. Going digital and avoiding the useless extra AD-DA stage within your PC, I bet you will find no more differences between your two setups. And best would be to go completely digital, without any DA and AD stages.

SPDIF out L (Laptop, ARTA) -> SPDIF in L (PC, CamillaDSP)/SPDIF out L -> SPDIF in L (Laptop, ARTA)
 
Yes, I believe it will give the same result!

I bet alsaloop will give the same mess, because it does not omit the daisy-chained DA-AD-DA-AD stages.

You are both right! It's the same issue with alsaloop. Every time I stop/start the loop connection I get a different result. Apparently the two sound cards interact with each other.
My ARTA sound card has optical S/PDIF and the CamillaDSP PC coaxial S/PDIF, therefore I cannot test this option.

I assume that the setup is ok now?!
 
No CamillaDSP lover running Windows? :-(
There should be quite a few. I compiled the number of downloads for all the binaries:
(attached charts: downloads_bars.png, downloads.png)
 
Yes, that is what I assumed. It sounds great. I also cast my impulse in REW format and got back my frequency-domain input, so it must be correct. Running pink noise seems off in the measurement. Eventually I will re-measure a frequency sweep through the Pi to check further.
Thank you again, CamillaDSP is a wonderful app. No reason not to use it for DSP once you are already in the digital domain.
 
Banned Sock Puppet
Joined 2020
Hello! I know very little about the intricacies of Linux,

1. When will it be possible to use Camilla as an already debugged application with a working graphical interface on an ARM device? ...


you can send a stream from any UPnP player, the same BubbleUPnP, and from Camilla, in turn, an already processed signal to another network endpoint capable of output 2-8 channels.


Henrik, thank you for what you are doing. I am especially pleased that you are making a version of Camilla for single-board devices, since there are a lot of programs on x86, but for ARM there is no such software at all, as far as I know.


When will it be possible to use Camilla as an already debugged application with a working graphical interface on an ARM device?
It's exactly what we are doing!
I have been testing lots of different configurations since the end of last year, as Raspberry Pi OS (Buster) works on both 32- and 64-bit Linux, as well as x86/x64 Linux.


This with a PROPER PCI sound card makes for the best test and comparison system ever.


I don't rely on USB audio: it's inherently drifty and unreliable, and my opinions of RME (over the years) are rude and unprintable :rolleyes:; and if you add that dreadful Sony/Philips consumer interface (S/PDIF) you are asking for trouble.

The way forward is gonna be audio over IP (gigabit + PCIe, which the CM4 has).

As far as I am concerned for ARM the way forward is called CM4, and forget the rest.

I also wanted to say: if you want streaming and networking stuff, you can send it out of VLC no problem (incl. multicast!), so why make life difficult when you can make it impossible?
 
@HenrikEnquist:

I think I've stumbled upon an unintended limitation. The step size on delay settings appears to be 0.1 ms, which unfortunately is too coarse. I discovered this because whenever I made small increments from 3.80 ms to 3.84 ms nothing would change in the REW impulse measurements, but when I changed it to 3.85 ms, REW suddenly showed a change from 3.80 ms to 3.90 ms. This limitation sadly means that I cannot time-align my speaker drivers to the degree that I need.

Since 0.1 ms corresponds to a rather large offset distance of 3.4 cm, a step size of 0.01 ms is needed to allow for proper time alignment. I noticed that miniDSP also allows for this granularity.
 
@HenrikEnquist:

I think I've stumbled upon an unintended limitation. The step size on delay settings appears to be 0.1 ms, which unfortunately is too coarse. I discovered this because whenever I made small increments from 3.80 ms to 3.84 ms nothing would change in the REW impulse measurements, but when I changed it to 3.85 ms, REW suddenly showed a change from 3.80 ms to 3.90 ms. This limitation sadly means that I cannot time-align my speaker drivers to the degree that I need.

Since 0.1 ms corresponds to a rather large offset distance of 3.4 cm, a step size of 0.01 ms is needed to allow for proper time alignment. I noticed that miniDSP also allows for this granularity.
That's odd, there should be no 0.1 ms limit. The minimum step is one sample, so at 44.1 kHz the step size should be 0.023 ms (and 0.0104 ms at 96 kHz). I'll check the math again to make sure it's not rounding off the number somehow.
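The sample-rounding described above can be sketched like this (hypothetical helper names; 343 m/s assumed for the speed of sound):

```rust
/// Round a requested delay in milliseconds to whole samples, which is the
/// minimum resolution described above.
fn delay_in_samples(delay_ms: f64, samplerate: f64) -> i64 {
    (delay_ms / 1000.0 * samplerate).round() as i64
}

/// Smallest possible delay step at a given sample rate, in milliseconds.
fn delay_step_ms(samplerate: f64) -> f64 {
    1000.0 / samplerate
}

/// Distance sound travels during `delay_ms`, in cm, assuming 343 m/s.
fn delay_to_distance_cm(delay_ms: f64) -> f64 {
    343.0 * delay_ms / 10.0
}

fn main() {
    // ~0.0227 ms per sample at 44.1 kHz, ~0.0104 ms at 96 kHz.
    assert!((delay_step_ms(44100.0) - 0.0227).abs() < 1e-4);
    assert!((delay_step_ms(96000.0) - 0.0104).abs() < 1e-4);
    // 3.80 ms and 3.81 ms round to the same sample at 44.1 kHz...
    assert_eq!(delay_in_samples(3.80, 44100.0), delay_in_samples(3.81, 44100.0));
    // ...while 3.80 ms and 3.84 ms land on different samples at 96 kHz.
    assert_ne!(delay_in_samples(3.80, 96000.0), delay_in_samples(3.84, 96000.0));
    // 0.1 ms corresponds to roughly 3.4 cm of driver offset.
    println!("0.1 ms = {:.2} cm", delay_to_distance_cm(0.1));
}
```

So very small increments can round to the same sample count at lower rates, which may explain some apparently "stuck" steps even without a 0.1 ms limit.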
 
Yesterday we spent several hours running the entire modular system at a local "hi end audio" shop.

This should be of interest here.

This was a "LIVE" demo showing how effective it can be to cure a violent room resonance, one the shop had actually paid to have identified in an acoustic study, but done nothing about in three years.

I introduced a -7 dB notch using Camilla, running one of the latest Linux kernels (SMP Debian 4.19.160-2 (2020-11-28) i686 GNU/Linux; Linux raspberrypi_x86 4.19.0-13-686-pae).

A notch at 70 Hz was introduced with a steep Q of 5, which means literally only about a quarter of an octave wide.
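For reference, such a notch could be expressed in a CamillaDSP config roughly like this (a sketch only: the filter name is made up, and the exact pipeline syntax may vary between CamillaDSP versions):

```yaml
filters:
  room_notch:
    type: Biquad
    parameters:
      type: Peaking
      freq: 70
      gain: -7.0
      q: 5.0

pipeline:
  - type: Filter
    channel: 0
    names:
      - room_notch
  - type: Filter
    channel: 1
    names:
      - room_notch
```

Swapping between this config and one without the filter step gives the "notch vs. flat" comparison described below.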

People were then allowed to "vote" on the effect of introducing the notch or removing it and running it "flat".

To aid easy swaps of profiles, Camilla was run as a system service, with two iterations of the service and two separate copies of the DSP named "camilla" and "camilla2". We could then leave VLC as the player and swap the services in and out on the fly, to demo the presence or absence of the notch.

The whole system was run "light" off a microSD flash card, so Linux in fact runs in RAM, not on a hard disk of any sort, which makes it run even better.

It's very astonishing to be able to show someone a whole system running off a USB card reader with a microSD, but that is what we used!

The owner of the shop unsurprisingly voted AGAINST the removal of his room resonance (the system being driven by the superb valve amp, with balanced XLR out from the onboard AKM DAC on the PC card), stating it removed "emotion".

The others, funnily enough, didn't agree.
What was astonishing was to see how much extra energy the resonance put into the room, which made the music actually sound louder! So much for emotion; it's all psychological.

We all know the brain trains itself to remove repetitive resonance.

This fell into exactly this category, so when you suddenly remove the "ear filter" it comes as a big shock, but of course I had to explain this in great detail as an engineer (why, what, how, when).

It makes for a moot point in commercial sales: how a pair of speakers sold from a shop can sound underwhelming or simply "bad" in another environment... and more.

In short, a fascinating couple of hours.
I did explain if you played speakers with no DSP in a dead room (no resonance) and recorded the result with some good DPA mics....

..then MEASURE the room into which they were going to be installed, you could most definitely develop a sales advantage by being able to correct for the new environment already in advance, instead of selling speakers by plain guesswork,

(never mind the big changes introduced by the amps, we found on a another test....)

so:-
After proving I had made ZERO changes to the levels, and that Camilla is totally incapable of making changes to the rest of the frequency spectrum, we concluded that the speakers used (brand new Wharfedales) could sound very different depending on which room they were installed in, and that the removal of the nasty resonance by electronic means meant a literally incredible improvement in clarity.

(eg. Some speakers sound terrible on heavy metal or hard rock, but sound great on Classical stuff....idem the amps...)

Finally fyi.

We had some glitches, which I removed via the sound card interface by trying block size changes: from 512 (bad) to 1024, which was still glitchy, ending up finally at 4096, which was really bombproof and is playing back native 24-bit audio.
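The block-size trade-off here is essentially buffering latency versus dropout resistance: each buffer of N frames adds N/samplerate of delay, but gives the system more headroom before an underrun. A quick way to see the numbers (assuming 48 kHz playback, since the demo's actual rate isn't stated):

```rust
/// Latency contributed by one buffer of `chunksize` frames at `samplerate` Hz,
/// in milliseconds. Bigger blocks are more dropout-proof but add delay.
fn chunk_latency_ms(chunksize: u32, samplerate: f64) -> f64 {
    chunksize as f64 / samplerate * 1000.0
}

fn main() {
    // Assumption: 48 kHz playback rate.
    assert!((chunk_latency_ms(512, 48000.0) - 10.67).abs() < 0.01);
    assert!((chunk_latency_ms(1024, 48000.0) - 21.33).abs() < 0.01);
    assert!((chunk_latency_ms(4096, 48000.0) - 85.33).abs() < 0.01);
    println!(
        "4096 frames at 48 kHz = {:.1} ms per buffer",
        chunk_latency_ms(4096, 48000.0)
    );
}
```

For pure playback (as opposed to live monitoring) the extra ~85 ms from a 4096-frame buffer is inaudible, which is why the big block size was the safe choice.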

I could write more about it, but to me it was the most convincing demonstration I have ever seen, about the validity of software programmable HD audio.

I compare now with, say, dedicated and much more expensive solutions such as Arcam, Oppo or others. And even more simply: if one given DAC gives bad results, you just throw it out and use a different one.
In one test a 1000-euro Arcam was a complete fail, but the OEM went into denial when shown the evidence.

Camilla again shows we can get around all that, because you only have to change relatively cheap hardware interfaces to get a different result.
 
All are wrong, because it was not a blind test :)

Need of DSP is clear to anybody, who measured his speakers and found out his room accustic instead. That's why no high-end studio likes this. I have an audiophile friend, he has Accuphase with very expensive speakers. I offered him my Umik-1, but he admits, he does not want to know it. It would spoil his believe.