CamillaDSP - Cross-platform IIR and FIR engine for crossovers, room correction etc.

All is now perfect! The attached updated Python script properly controls sample rate switching and launches camilladsp. Set it up as a systemd service to run automatically on boot.

Please disregard the original Python script.

Unfortunately, this setting has broken my sample rate switching script. I probably need to switch it back to the internal clock, restart camillaDSP, then switch it to the toslink clock source. I will try this tomorrow (if I can figure out the alsamixer command line) and, if this doesn't break anything else :rolleyes:, all will be perfect!
 

Attachments

  • startdsp.zip
phofman: Indeed it is. It's not related to the kernel or Linux at all, just the default on the USBStreamer to use the internal clock rather than the TOSLink clock.

I'm not sure why they would make this the default, but maybe they don't have a good way to detect the frequency outside of a few percent of the internal clock. So setting the internal clock "close" based on the WM8804's rate detection, launching camillaDSP/arecord, and then switching to the TOSLink-recovered clock so the XMOS PLL can zero in on the exact frequency seems to work.
 
I have a few comments on the script if you want to make it a bit nicer. There is a lot of code duplication, which makes it easy to make mistakes. I would start by making a lookup table for the frequency:
Code:
rate_table = { 
    (True, False, False): 32000, 
    (False, True, False): 44100, 
    (True, True, False): 48000, 
    (etc...) 
}
Then get the frequency like this:
Code:
frequency = rate_table.get((in0, in1, in2), lastfrequency)
That returns the new rate from the table, or the old one if the combination wasn't found in the table.


The long function then becomes something like this (untested!):
Code:
rate_table = { 
    (True, False, False): 32000, 
    (False, True, False): 44100, 
    (True, True, False): 48000, 
    (etc...) 
} 
 
def freqChange(channel):
    sleep(1)
    global lastfrequency
    global p
    # Read the three sample rate indicator pins from the WM8804
    in2 = GPIO.input(INPUT2)
    in1 = GPIO.input(INPUT1)
    in0 = GPIO.input(INPUT0)
    # Look up the new rate, keeping the old one if the pin combination is unknown
    frequency = rate_table.get((in0, in1, in2), lastfrequency)
    if frequency != lastfrequency:
        print("Frequency Changed to: {}".format(frequency))
        lastfrequency = frequency
        # Stop the running camilladsp instance
        p.terminate()
        returncode = p.wait()
        # Switch the USBStreamer to its internal clock before restarting at the new rate
        o = subprocess.Popen(["/usr/bin/amixer", "-c", "USBStreamer", "sset", "miniDSP Clock Selector Clock Source", "miniDSP Internal Clock"])
        sleep(1)
        p = subprocess.Popen(["/etc/camilladsp/camilladsp", "/etc/camilladsp/{}.yml".format(frequency)])
        sleep(1)
        # Then switch back to the TOSLINK recovered clock so the PLL locks to the incoming stream
        q = subprocess.Popen(["/usr/bin/amixer", "-c", "USBStreamer", "sset", "miniDSP Clock Selector Clock Source", "miniDSP TOSLINK Clock"])
        sleep(1)
 
phofman: If the sample rate of the input SPDIF stream changes (like from 44.1kHz to 96kHz or whatever), the USBStreamer can't seem to figure out the new rate on the TOSLINK clock unless the internal clock is used first to get it close. I'm not sure why it behaves like that, maybe the PLL for clock recovery isn't that advanced. That was the bug that crept up after I figured out the amixer clock switching stuff.
 
LSP

I published my setup for playing music at home: GitHub - thoelf/Linux-Stream-Player. The player supports two modes, both using CamillaDSP. I switch between the modes by clicking an icon in a dock.

One mode is playing directly on the server with Squeezebox server at a variable samplerate, piping from SqueezeLite to CamillaDSP.

The other mode is to stream from the browser to MPD on the server at a static samplerate, using the loopback interface for CamillaDSP. I couldn't pipe directly from MPD.
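For anyone curious what the loopback mode looks like in practice, the CamillaDSP side is roughly the sketch below. This is an assumption-laden sketch, not the config from the repo: it assumes the snd-aloop module is loaded, that MPD plays to hw:Loopback,0, and that the real output card and sample formats match your hardware.
Code:
devices:
  samplerate: 44100
  chunksize: 1024
  capture:
    type: Alsa
    channels: 2
    device: "hw:Loopback,1"   # assumed capture side of snd-aloop; MPD plays to hw:Loopback,0
    format: S32LE
  playback:
    type: Alsa
    channels: 2
    device: "hw:0"            # assumed output card, adjust to the actual DAC
    format: S32LE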
 
After some hours of trying different stuff, I am now running MPD with Spotify as a service. When I start camilladsp from the command line with my test configuration (1 kHz stereo lowpass), I hear lowpassed music in the headphone out of the HP T510 thin client. So this is a success for me. I need help with one last step - running camilladsp as a service, so that it starts automatically without login. I was able to google some clues, but have failed so far. Could you please help me with this? (I failed miserably to install the web interface for CamillaDSP, too, but that would be the next step.) Thanks in advance!
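For reference, a minimal systemd unit for this could look like the sketch below (untested here; the binary path and config path are assumptions that need to match your install). Save it as /etc/systemd/system/camilladsp.service, then run `sudo systemctl daemon-reload` and `sudo systemctl enable --now camilladsp`.
Code:
[Unit]
Description=CamillaDSP
After=sound.target

[Service]
# Adjust these paths to where camilladsp and your config actually live
ExecStart=/usr/local/bin/camilladsp /etc/camilladsp/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target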
 
Novice user

Novice here ....
Don't understand the Virtual Audio/CoreAudio stuff. Maybe someone can PM me and explain whether a virtual audio device like BlackHole (total failure for me) or SoundFlower (total success for me) has inputs or outputs or both, and how they work.

.... In any-case taking baby steps. Have the following playing music :

iTunes --> Soundflower (2ch) --> camilladsp --> Okto DAC8 Pro --> B&K ST4420M --> Mission 761

Mac Mini 2011 server, quad-core i7 2 GHz, 16 GB memory and SSD
OS X High Sierra 10.13.6
DAC8 Pro in PureUSB mode.

Just in case a fellow novice needs to check a camilladsp config for this audio path, I've attached the config.

Thanks Henrik. Great stuff.
Regards
Simmonds



---
devices:
  samplerate: 44100
  chunksize: 1024
  capture:
    type: CoreAudio
    channels: 2
    device: "Soundflower (2ch)"
    format: FLOAT32LE
  playback:
    type: CoreAudio
    channels: 2
    device: "DAC8PRO"
    format: FLOAT32LE

mixers:
  2chpassthrough:
    channels:
      in: 2
      out: 2
    mapping:
      - dest: 0
        sources:
          - channel: 0
            gain: 0
            inverted: false
      - dest: 1
        sources:
          - channel: 1
            gain: 0
            inverted: false

pipeline:
  - type: Mixer
    name: 2chpassthrough

PS: For those not familiar with .yml files, the indentation is significant - if a copy and paste loses it, the config will not load.
 
Novice here ....
Don't understand the Virtual Audio/CoreAudio stuff. Maybe someone can PM me and explain whether a virtual audio device like BlackHole (total failure for me) or SoundFlower (total success for me) has inputs or outputs or both, and how they work.
BlackHole / SoundFlower are loopback devices. They present themselves as real soundcards, but it's all "fake". In practice they behave just like a real card that has its inputs connected directly to its outputs. So when some application is using a virtual soundcard to play some signal, that same signal can be recorded from the capture side of the same virtual card by another application.



There is no easy and general way to send sound directly from one application to another. That is why we have to put these virtual cards in between.
 
Thanks.
I think I got confused by CoreAudio's "Built-In Output". Seems obvious, but then there is also "Internal Speakers". So does iTunes send to "Built-In Output" or "Internal Speakers"? Throw in OS X's Audio MIDI "Aggregate device" and "Multi Channel Output Devices", and routing between apps left me with more questions than answers. Are virtual devices in CoreAudio automatically connected to Built-In Audio? Do I need to use an aggregate device with a virtual audio device? I didn't have to with SoundFlower .... and the list goes on.
Way off topic.

In any case, I'm reading up on RePhase to try to do room correction.
 
Built-In Output is just the built-in soundcard of the Mac. On my MacBook I have "Internal Speakers" of type "Built-in". If I plug headphones into the built-in headphone jack, it changes name to "Headphones" and type to "Headphone port".

Aggregate device is another virtual device that merges several cards together as one, to make a 4-channel card out of two stereo cards for example. Seems like a great idea, but in practice the synchronization isn't good enough to use for crossovers. You don't need one.


I haven't seen a "Multi Channel Output device", so I don't know what it means.



You should simply select SoundFlower as the default playback device, so that all applications output their sound to SoundFlower. camilladsp then records from SoundFlower and plays back on the card you specify in the config (DAC8PRO).
 
Finally! Version 0.5.0 is published. I put in two new features since the last beta. Now camilladsp can read filter coefficients from .wav (which can have more than one channel), and there is subsample precision on delays.
Adding the wav-reading capability means some changes to the config for convolution filters. The old "File" type has been renamed "Raw", and there is a new type "Wav". See the readme!
Old config files with "File" are accepted by CamillaDSP for now, but not by the latest plotting tools.
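As a quick sketch of what the new convolution filter config can look like (the filter name, filename and channel index here are placeholders, not from the release notes):
Code:
filters:
  fir_left:
    type: Conv
    parameters:
      type: Wav                          # new in 0.5.0
      filename: filters/correction.wav   # placeholder path
      channel: 0                         # which channel of the wav to use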



The full list of changes since 0.4.2:
New features:
- Add RMS and Peak measurement for each channel at input and output.
- Add a `Volume` filter for volume control.
- Add exit codes.
- Adapt `check` output to be more suitable for scripts.
- Search for filter coefficient files with relative paths first in config file dir.
- Add `ShibataLow` dither types.
- Add option to write logs to file.
- Skip processing of channels that are not used in the pipeline.
- Update to new faster RustFFT.
- Overriding samplerate also scales chunksize.
- Use updated faster resampler.
- Enable experimental neon support in resampler via `neon` feature.
- Add `Loudness` volume control filter.
- Add mute options in mixer and Gain filters.
- Add mute function to Volume and Loudness filters, with websocket commands.
- Add `debug` feature for extra logging.
- Improve validation of filters.
- Setting to enable retry on reads from Alsa capture devices (helps avoid driver bugs/quirks for some devices).
- Optionally avoid blocking reads on Alsa capture devices (helps avoid driver bugs/quirks for some devices).
- Read FIR coefficients from WAV.
- Add subsample delay.

Bugfixes:
- Don't block playback for CoreAudio/Wasapi if there is no data in time.
- Validate `silence_threshold` and `silence_timeout` fields.
- Fix panic when reloading config if a new filter was defined but not added to the pipeline.
- Check for mixer parameter changes when reloading config.
- Token substitution and overrides also work via websocket.
- Don't exit on SIGHUP when waiting for a config.
- Fix handling of negative values when reading filter coeffs in S24LE3 format.
- Gain filters react to mute setting on reload.
- Fix noise in output when resampling and muting all channels in mixer.
- Fix handling of negative values for input and output in S24LE format.


Get it here: Release v0.5.0 · HEnquist/camilladsp · GitHub
 
Finally! Version 0.5.0 is published. ...
Adding the wav-reading capability means some changes to the config for convolution filters. The old "File" type has been renamed "Raw", and there is a new type "Wav". See the readme!
Old config files with "File" are accepted by CamillaDSP for now, but not by the latest plotting tools.
...

GREAT. Thanks for all of your hard work !!!

Thanks for adding the n-channel .WAV FIR filter format support.

As far as CamillaDSP is concerned, is there any benefit in choosing between the new .WAV and the old .DBL formats?

I know programs like Audiolense can place 8 channels of FIR filters into a single .wav file, but does CamillaDSP have a speed/performance preference?
 
As far as CamillaDSP is concerned, is there any benefit in choosing between the new .WAV and the old .DBL formats?
Once the DSP is running, it doesn't matter from what type of file the coefficients were read.

The advantages of .WAV are that you can't mess up the format, and it's convenient to be able to store more than one channel in a single file.
When the DSP starts, each convolution filter reads its coefficients from disk. When it reads from a wav, it first reads all the audio data in the file (all the data for all the channels) and then extracts the data for the channel it needs. That means that a file with many channels is a little slower to read, but the file system cache should be efficient at helping with that. For a single-channel wav, there is no speed difference compared to .dbl.
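For completeness, a .dbl file maps onto the renamed Raw type roughly like this (the filter name and filename are placeholders; a .dbl file is just raw 64-bit little-endian float samples):
Code:
filters:
  fir_from_dbl:
    type: Conv
    parameters:
      type: Raw                  # formerly called "File"
      filename: filter_left.dbl  # placeholder path
      format: FLOAT64LE          # .dbl = raw 64-bit little-endian floats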