CamillaDSP - Cross-platform IIR and FIR engine for crossovers, room correction etc

IMO this has nothing to do with CDSP. The Java controller cannot get the URL of the next song (the error log we already discussed), yet asks MPD to play it ("To :1"), making MPD output "Number too large: 1".
I completely agree. CDSP itself works nicely. If I run CDSP as a service and let MPD convert all files to a constant sample rate, everything works. However, when I use alsa_cdsp to handle different sample rates and bit depths, it does not work. The problem seems to be the communication between alsa_cdsp and the OpenHome playlist, with both mediaplayer and ohPlayer. In mediaplayer, when I play the first track in the playlist and then press "stop", alsa_cdsp should stop CDSP but does not; instead the system hangs. Also, alsa_cdsp is not able to handle switching from track 1 to track 2 in an OpenHome playlist. In ohPlayer, playing the first track works too, and I can press stop and (re)start, but switching from track 1 to track 2 in the playlist does not work (as in mediaplayer) and the system hangs. Thus, alsa_cdsp seems to have a problem with track management in OpenHome playlists. Both programs (mediaplayer, ohPlayer) work perfectly without alsa_cdsp, e.g. when the audio output is directed to an ALSA device, or with CDSP as a service and the ALSA loopback. Thus, alsa_cdsp seems to be the problem.
 
@Dual01: IIUC your chain is: mediaplayer as a controller starts and controls MPD, which does the actual audio work, accessing the tracks (as instructed by mediaplayer) and sending them to the ALSA device.


The problem seems to be the communication between alsa_cdsp and the OpenHome playlist, with both mediaplayer and ohPlayer.
There is no direct communication between the two; they are in different layers, with MPD between them.
In mediaplayer, when I play the first track in the playlist and then press "stop", alsa_cdsp should stop CDSP but does not; instead the system hangs.
IMO you need to diagnose what actually happens when you press stop in the controller: what command is sent to MPD, and what MPD does with it. Yes, it is possible that MPD uses a sequence of ALSA API calls that was not anticipated when alsa_cdsp was developed, so it does not respond correctly, e.g. interrupting the playback and staying stuck instead.
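One way to see this is to talk to MPD's control protocol directly and compare its state before and after pressing stop in the controller. A minimal sketch in Python (it assumes MPD's default control port 6600 on localhost, adjust to your setup; status and currentsong are standard MPD protocol commands):

Code:
import socket

# Minimal MPD protocol client: connect, send one command, print the reply.
# Host and port are assumptions (6600 is MPD's default control port).
def mpd_query(command, host="localhost", port=6600):
    with socket.create_connection((host, port), timeout=5) as sock:
        f = sock.makefile("rw", encoding="utf-8", newline="\n")
        print(f.readline().strip())          # greeting, e.g. "OK MPD 0.23.5"
        f.write(command + "\n")
        f.flush()
        for line in f:
            line = line.strip()
            print(line)
            if line == "OK" or line.startswith("ACK"):   # end of reply
                break

mpd_query("status")        # playback state, elapsed time, any error field
mpd_query("currentsong")   # which song MPD currently considers "current"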

Also, alsa_cdsp is not able to handle switching from track 1 to track 2 in an OpenHome playlist
alsa_cdsp has no idea about any track 1 or track 2; it just receives audio samples from MPD.

The errors you have sent are not related to the actual ALSA layer; they are from upper layers: mediaplayer cannot form the URL of the next track, and MPD complains about being told to play track 1 while no track 1 was added prior to the request.

Code:
Jan 11 16:09:01 raspberrypi mpd[659]: exception: Number too large: 1
Jan 11 16:09:01 raspberrypi mpd[659]: exception: No such song

Code:
2025-01-05 01:04:50,480 [Thread-6] DEBUG [org.rpi.player.PlayManager] Set Next AV Track :
2025-01-05 01:04:50,480 [Thread-6] DEBUG [org.rpi.mpdplayer.MPDPlayer] PreLoad Next Track:
2025-01-05 01:04:50,480 [Thread-6] ERROR [org.rpi.radio.parsers.FileParser] java.net.MalformedURLException: no protocol:
2025-01-05 01:04:50,481 [Thread-6] ERROR [org.rpi.mpdplayer.TCPConnector] Error: ACK [50@0] {addid} No such song

I would suggest moving this discussion to a different thread, as it does not seem to be related to CDSP.
 
there is a problem with the web interface on my mobile phone
I made some changes to improve the looks on computer screens, but those seem to make it worse on mobile. Making a UI that works well on both is tricky and very time-consuming. I can see if the next version can behave a bit better, but it will never be great on mobile since it's not made for it.

Does this include the new version of the mixer UI?
No, that will come in the next version. It requires making the rules for mixer config a bit stricter, which is a (small) breaking change.
 
Have you considered having separate UIs depending on screen resolution?
Yes, that would be the best solution, but making the GUI fit nicely on a small phone screen would need a major redesign, and then both versions would need to be maintained.

A more realistic way is to use the desktop version and just make sure it's rendered ok-ish also on small screens, so that it's at least possible to use (with a lot of scrolling and zooming). It used to work, so it should not be too hard to sort out.
 
Hi Henrik

Thank you for No. 3!

I saw on GitHub that you seem to be continuing your work ... e.g. on a RACE filter for CamillaDSP. Nice (nice, nice, once again, nice ...). Certainly many habitual stereophonic listeners have also amused themselves with xtalk-minimizing techniques. CDSP-RACE could therefore become a very easy-to-use and fun option for many of us. I even guess that my own trustworthy, efficient custom RACE processor (RACE == Rather Awkward Carpenter Edition) is now slowly on its way to extinction; you doomed it! Simply because CDSP-RACE might eventually be more comfortable to handle than constantly having to keep the rim of a wall precisely 2 cm in front of one's nose while desperately trying to relax by listening to music. Time will soon tell ...


[Attached image: Race800.JPG]



By the way, let me indulge in some fantasies about a possibly different and quite complex approach to "other" (better?) stereo: what about RACE'd trinaural? One might first be upsampling the two stereo channels into three channels, the third channel being the time-frequency-synthesized center channel. The upsampled third (center) channel is then no longer relevant for Xtalk, because it is centered. As there are now three channels emitting the audio energy, the L and R channels together might propagate less energy than within a two-channel system. Instead, the two trinaural sides may carry a much higher share of relevant side information than in a classic two-channel stereo system, where both sides also have to provide the central stage information.

Be aware when talking about trinaural that this kind of setup might be not only more complex in terms of gear, but also acoustically quite tricky: you can physically match the placements of both side speakers quite easily to mirror the listening room's geometry symmetrically, thereby matching both their acoustic radiation impedances and their room-resonance stimulation patterns. The acoustic radiation impedance of the center speaker and its room-resonance stimulation pattern, however, will be completely different. Center and sides will therefore behave acoustically quite differently, causing a mutual mismatch. Consequently, you would have to go to free-field conditions for a well-matched trinaural setup. Therefore, personally, I would not opt for a trinaural system for home listening. Nevertheless, it would be interesting to theoretically investigate the effects, the options and the eventual benefits of RACE for true trinaural systems as well.
 
I. Addendum:

Oh my ... too much off-topic writing about the real-life room-acoustical consequences of a physical trinaural setup, and no on-topic writing at all about the idea I wanted to expose. So then, the idea is ...

... not to apply RACE directly to a stereo signal, but to apply/process RACE to the two side channels of a previously upmixed trinaural dataset instead.
  • Perform a decomposition and upmixing of the two stereo channels into three trinaural channels, the third channel being the middle channel, as described in the linked paper. The two upmixed side channels may then carry a higher density/relevance of lateral information than the two original stereo channels.
  • Now apply RACE to the two upmixed trinaural side channels only, leaving the center channel unprocessed.
  • Finally, downmix the two RACE'd trinaural side channels and the centre channel into a two-channel stereo signal again.
Does such an approach make sense?
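To make the three steps concrete, here is a rough sketch in Python/NumPy. Beware: the center extraction is the sophisticated (and patented) part, so it is replaced here by a crude mid/side placeholder rather than the time-frequency method from the paper, and the RACE delay and gain values are purely illustrative assumptions:

Code:
import numpy as np

def naive_upmix(left, right):
    """Crude 2 -> 3 decomposition placeholder: plain mid/side instead of
    the time-frequency method from the paper. Reconstruction is exact:
    side + center gives back the original channel."""
    center = 0.5 * (left + right)
    return left - center, center, right - center

def race(left, right, delay_samples, gain):
    """Plain RACE recursion: each output channel subtracts a delayed,
    attenuated copy of the OTHER output channel (recursive cross-feed).
    Real implementations also band-limit the cross-feed; omitted here."""
    n = len(left)
    out_l, out_r = np.zeros(n), np.zeros(n)
    for i in range(n):
        fb_l = out_r[i - delay_samples] if i >= delay_samples else 0.0
        fb_r = out_l[i - delay_samples] if i >= delay_samples else 0.0
        out_l[i] = left[i] - gain * fb_l
        out_r[i] = right[i] - gain * fb_r
    return out_l, out_r

def trinaural_race(left, right, delay_samples=3, gain=0.6):
    """Steps 1-3 from the list above: upmix, RACE the sides only,
    downmix with the untouched center. The delay/gain defaults are
    only illustrative (3 samples at 44.1 kHz is roughly 68 us)."""
    side_l, center, side_r = naive_upmix(left, right)
    race_l, race_r = race(side_l, side_r, delay_samples, gain)
    return race_l + center, race_r + center

# Quick smoke test: with gain=0 the output reproduces the input exactly.
fs = 44100
noise = np.random.randn(2, fs)
out_l, out_r = trinaural_race(noise[0], noise[1])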

II. Erratum:

Sorry for the confused terminology in my first post. I erroneously wrote about upsampling; that is wrong. What I meant was decomposition and upmixing, as described in the linked paper.
 
I would like to thank the author for the excellent DSP software. Today I updated to version 3.0.0 and probably encountered a minor bug in the GUI. When listing available ALSA devices for capture or playback, the list is incomplete – the names are not displayed. And if a device does not have a description in .asoundrc, it does not appear in the list of available devices at all (e.g., dsnoop).


Best regards,
Marek



Code:
arecord -L
null
    Discard all samples (playback) or generate zero samples (capture)
mono_in
    First input channel to mono input device
hw:CARD=sndrpihifiberry,DEV=0
    snd_rpi_hifiberry_dacplusadc, HiFiBerry DAC+ADC HiFi multicodec-0
    Direct hardware device without any conversions
plughw:CARD=sndrpihifiberry,DEV=0
    snd_rpi_hifiberry_dacplusadc, HiFiBerry DAC+ADC HiFi multicodec-0
    Hardware device with all software conversions
default:CARD=sndrpihifiberry
    snd_rpi_hifiberry_dacplusadc, HiFiBerry DAC+ADC HiFi multicodec-0
    Default Audio Device
sysdefault:CARD=sndrpihifiberry
    snd_rpi_hifiberry_dacplusadc, HiFiBerry DAC+ADC HiFi multicodec-0
    Default Audio Device
dsnoop:CARD=sndrpihifiberry,DEV=0
    snd_rpi_hifiberry_dacplusadc, HiFiBerry DAC+ADC HiFi multicodec-0
    Direct sample snooping device

Code:
# Slave definition: capture from the HiFiBerry hardware device
pcm_slave.ins {
    pcm "hw:0,0"
    rate 44100
    channels 2
}

# dsnoop device exposing only the first input channel;
# note it carries a description hint, unlike a plain dsnoop
pcm.mono_in {
    type dsnoop
    ipc_key 12345
    slave ins
    bindings.0 0
    hint {
        description "First input channel to mono input device"
    }
}



[Screenshot: Snímek obrazovky z 2025-01-13 20-12-47.png]
 
When listing available ALSA devices for capture or playback, the list is incomplete – the names are not displayed.
Could you give an example, please?
And if a device does not have a description in .asoundrc, it does not appear in the list of available devices at all (e.g., dsnoop).
Yes: while the description is not required by aplay -L (https://github.com/alsa-project/alsa-utils/blob/master/aplay/aplay.c#L355-L365), it is required in cdsp (https://github.com/HEnquist/camilla...c24c17182b63d051/src/alsadevice_utils.rs#L126). Maybe the description requirement could be dropped, that is true.
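For illustration, the hint list is essentially what arecord -L prints above: device names at column 0 and optional, indented description lines beneath. A tiny Python sketch (not CamillaDSP's actual code, just the idea) of enumerating capture devices with the description treated as optional, falling back to the name, which would keep description-less devices such as dsnoop in the list:

Code:
import subprocess

def capture_devices():
    """Parse `arecord -L`: names start at column 0, description lines
    are indented. The description may be missing entirely (e.g. a
    dsnoop pcm defined without a hint), so fall back to the name."""
    out = subprocess.run(["arecord", "-L"], capture_output=True,
                         text=True, check=True).stdout
    devices, name = {}, None
    for line in out.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():        # new device name
            name = line.strip()
            devices[name] = []
        elif name is not None:           # indented description line
            devices[name].append(line.strip())
    # Devices without a description keep their name as the label.
    return {n: " ".join(d) if d else n for n, d in devices.items()}

for name, desc in capture_devices().items():
    print(f"{name}: {desc}")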
 
Version 2.0.3, which I have been using until now, displayed the name in the device selection for playback or capture, including whether it was hw, plughw, etc. In the new version I can't see this anymore (see the screenshot above). I tested it in Brave 1.73.105 and Firefox 134.

screenshot from version 2.0.3:
[Screenshot: Snímek obrazovky z 2025-01-13 21-15-30.png]
 
A more realistic way is to use the desktop version and just make sure it's rendered ok-ish also on small screens, so that it's at least possible to use (with a lot of scrolling and zooming). It used to work, so it should not be too hard to sort out.
I really only need the basic functionality on mobile. Could an alternative option be to optimise the "Shortcuts" tab/screen for mobile screens? The optimised page would display the shortcuts to change configs and adjust volume, with the sidebar's summary boxes directly below, to enable scrolling without zooming or panning.
 
I saw on GitHub that you seem to be continuing your work ... e.g. on a RACE filter for CamillaDSP
I have implemented RACE with CamillaDSP and have been using it for a few months now. When set up properly I find it addictive. I have one version with adjustable delay, gain and effect bandpass frequencies via the new V3 UI, and another using fixed delay and gain via convolution. I will share it in a separate post but will need a bit of time to clean things up.
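For the curious, a fixed delay/gain RACE can indeed be folded into convolution filters: solving the RACE recursion yields a geometric tap series, with same-channel taps at even multiples of the delay and inverted cross-feed taps at odd multiples. A minimal sketch of generating such FIR taps (illustrative values only, not necessarily the exact filters described above):

Code:
import numpy as np

def race_fir(delay_samples, gain, n_taps):
    """FIR equivalent of the RACE recursion, truncated to n_taps.
    Solving out_l = in_l - g*z^-d*out_r (and symmetrically for R)
    gives direct taps +g^(2k) at even multiples of d and cross taps
    -g^(2k+1) at odd multiples of d."""
    direct, cross = np.zeros(n_taps), np.zeros(n_taps)
    k = 0
    while k * delay_samples < n_taps:
        if k % 2 == 0:
            direct[k * delay_samples] = (-gain) ** k
        else:
            cross[k * delay_samples] = (-gain) ** k
        k += 1
    return direct, cross

# Example: 3-sample delay (~68 us at 44.1 kHz), gain 0.6, 4096 taps.
direct, cross = race_fir(3, 0.6, 4096)
# Left output = in_l convolved with direct plus in_r convolved with
# cross (mirrored for the right channel):
# out_l = np.convolve(in_l, direct) + np.convolve(in_r, cross)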
 
Oh my ... too much off-topic
😄

... not to apply RACE directly to a stereo signal, but to apply/process RACE to the two side channels of a previously upmixed trinaural dataset instead.
I was thinking about this, but didn't really know how to generate the center channel. I will read the thesis! Hopefully there is some example code or something so it's possible to try how it works without implementing the whole thing.
 
I was thinking about this, but didn't really know how to generate the center channel. I will read the thesis! Hopefully there is some example code or something so it's possible to try how it works without implementing the whole thing.

Beware ... the method described in this thesis was protected by a patent in 2015. Do patents actually restrict DIY implementations and non-commercial use? Or not?

Before diving into all these maths, I guess it would be useful to theoretically assess the benefits of RACE-ing the trinaural sides vs. the standard stereo channels. Maybe there is no real difference at all? So all this work would be in vain?

Uli Brüggemann's Acourate Convolver performs center-channel extraction to trinaural, as an example. As described in another forum, it most probably approximates or even follows the approach described in the thesis of Sebastian Kraft et al. That said, be aware that the Acourate Convolver reaches only a rather slim user base: Windows only, along with an awkwardly proprietary filter format, and it demands the purchase of Acourate as well. Therefore an open and universal trinaural processor might be a nice toy, indeed ...
 
Beware ... the method described in this thesis was protected by a patent in 2015. Do patents actually restrict DIY implementations and non-commercial use? Or not?
Hmm that's a problem. I think it's ok to use patented technology in personal projects, but I don't think it's ok to share the code as open source.
Before diving into all these maths, I guess it would be useful to theoretically assess the benefits of RACE-ing the trinaural sides vs. the standard stereo channels. Maybe there is no real difference at all? So all this work would be in vain?
Yes, it needs a pretty big investment of time; I need to have some idea of whether it's worth it or not.
Maybe by just trying it out with Acourate?