The capture and playback devices are opened independently. But of course if a particular device is already in use by e.g. PA/pipewire, it cannot be used by CDSP, which is what I suspect is happening. The chain needs a proper description (a text "diagram" would do) to start with, IMO.
What might be happening is that when the first program opens the device, a second one cannot access it. Have you tried doing e.g. a loopback measurement with a single program? For example under Windows I use ARTA. This would confirm that the MOTU mk5 can be operated as full duplex (simultaneous read/write) under ALSA.
On Ubuntu (ALSA only) if CamillaDSP is NOT running I can make loopback measurements with REW (both physical loopbacks and the UL mk5 software loopback) without issue.
On Mac OS I can make loopback measurements in REW with CamillaDSP running using Blackhole-2ch and also UL mk5 software loopback as capture devices.
So it seems like the issue is specific to Linux and not inherent to CamillaDSP or REW. It does seem like once CamillaDSP is started on Linux any playback / capture devices are no longer accessible by other programs. For example I am unable to use the UL mk5 software loopback (play to outputs 1-2 of UL mk5 and it will provide that as inputs 9-10) when CamillaDSP is running, as I get device busy errors.
Michael
If CDSP opens the alsa multichannel playback UL device, logically that device gets busy and no other process can access the output channels 1-2.
If the above is the case, you could create two PCM devices with .asoundrc, one with channels 1, 2 for the loopback, the other with channels needed by CDSP.
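To illustrate the idea, here is a minimal sketch of such a split using the dshare plugin (the hw name, ipc_key and channel count are assumptions - adjust them to the actual card):

```
# Shared slave: the whole multichannel playback device
pcm_slave.outs {
    pcm "hw:UltraLitemk5"
    channels 18
}

# Channels 1-2 only, for the loopback measurement
pcm.loop12 {
    type dshare
    ipc_key 4242
    slave outs
    bindings.0 0
    bindings.1 1
}
```

A second dshare device on the same slave (same ipc_key) would then carry the remaining channels for CDSP.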
But maybe your setup is different from what I got the impression from your description.
My feeling is that there are several issues that may not be related.
One of them is that the M4 and UL both have some issues with simultaneous playback and capture. It's possible to trigger that with just aplay and arecord, see this message: https://github.com/HEnquist/camilladsp/issues/114#issuecomment-1260514331
I think they are both using the standard usb audio driver, @phofman please correct me if I'm wrong. So it's quite possible that more devices are affected by this. I have an M4, but I don't really know where to start digging into this.
I agree. IMO the problems described in this thread ("device busy") are caused by processes really taking the device, the by far most common problem being hit here and in that audioscience thread. And by far most frequently caused by PA/pipewire/both fighting for the device (even both PA and pipewire running simultaneously, which a recent report on a different forum revealed).
That issue from audiosciencereview seems to be something else - as if, when the playback device is not snd_pcm_pause'd and receives no data (which is what CDSP does, IMO), the capture alsa device of the duplex USB device gets stalled too, resulting in a read/write error on capture (see that arecord log in https://www.audiosciencereview.com/...amilladsp-tutorial.29656/page-34#post-1324541 ). It may be a problem in the snd_usb_audio driver, a bug in the USB receiver firmware (many manufacturers use a common base for their UAC2 implementation), or maybe CDSP should call the pause as the alsa specs intend (I do not know). It's possible to ask on the alsa-devel mailing list.
Really appreciate all the help. I will do a proper writeup on my setup so we can dig deeper. The error message from REW is the same as mdsimon2 is getting but there might be several differences still in the setup.
The pc in question is an Intel NUC fully dedicated for this purpose so I can also start from scratch. But as said I will provide full details on the setup shortly.
As a workaround, I wonder if you can open the UL mk5 with a process that simultaneously maps inputs and output to separate subdevices of the ALSA loopback. Then another process could come and read and write from/to the ALSA loopback devices as needed. For example, using ecasound it would be something like this:
Code:
ecasound -B:rt -z:mixmode,sum \
-a:getdata -f:32,4,96000 -i:alsahw,X,0 -o:alsahw,Y,0,0
-a:putdata -f:32,4,96000 -i:alsahw,Y,1,1 -o:alsahw,X,0
Above, X is the ALSA card number for your UL mk5 interface. Y is the ALSA loopback card number. Note that these may change after reboots.
This uses the ecasound alsa I/O format, which is:
-i[:]alsahw,card_number,device_number,subdevice_number
If you omit the subdevice, or any of the alsa parameters, the value will be defaulted to 0 by ecasound.
The formatting statement is -f:[bit depth],[channels],[rate] and controls the audio parameters. Change to suit. These specifications can be different for each line, since the loopbacks are independent.
The ALSA loopback is similar to N pipes, with the default installation giving you 8 separate pipes. The SUBDEVICE number indicates which pipe to use. Each pipe has two ends, numbered 0 and 1. The end is specified by the DEVICE of the loopback. The in/out direction for an end is arbitrary; however, I have used the convention that inputs to the loopback are on end=device=0 and outputs from the loopback are on end=device=1.
I tried the ecasound command on a computer, reading from the built-in codec to the loopback and writing from the loopback to the built-in codec at the same time, and it seemed to work, although I did not actually check the audio itself. I believe that, when nothing is reading or writing from/to the loopback, it will either send zeroes or send unclaimed inputs to /dev/null and keep on happily doing that forever, while keeping the actual audio interface open/busy. This means you can have this arrangement running in the background anytime and use it whenever you want.
Once the ecasound command is running successfully, open a new terminal and run other programs that will get data from or put data to each loopback instead of the audio interface directly.
To get (input) data, specify as input the ALSA device hw:Y,1,0
To write (output) data, specify as output the ALSA device hw:Y,0,1
Make sure you use the same bit depth, channel count, and sample rate that you specified in the ecasound command. Note that each loopback subdevice is independent of the others, so the one getting data from the audio interface could use 2 channels only, and the other one putting audio data back to the audio interface could have 8 channels, as long as the audio interface supports being used in that way.
Feel free to give that a try if you want. Might solve your issue.
Any questions, just post a follow up or PM me. It's always possible that I made a mistake when writing the commands above! So if it doesn't work right away, check the loopback usage or ask me for help.
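As a concrete (hypothetical) example, assuming the loopback ended up as ALSA card 2 and using the same format as in the ecasound command above, the "other programs" could be plain arecord/aplay:

```
# Capture side: read what ecasound forwards from the interface (the hw:Y,1,0 end)
arecord -D hw:2,1,0 -f S32_LE -c 4 -r 96000 captured.wav

# Playback side: write audio for ecasound to forward to the interface (the hw:Y,0,1 end)
aplay -D hw:2,0,1 -f S32_LE -c 4 -r 96000 music.wav
```

The card number 2 here is an assumption; check yours with aplay -l / arecord -l first.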
If CDSP opens the alsa multichannel playback UL device, logically that device gets busy and no other process can access the output channels 1-2.
If the above is the case, you could create two PCM devices with .asoundrc, one with channels 1, 2 for the loopback, the other with channels needed by CDSP.
But maybe your setup is different from what I got the impression from your description.
Thanks for the suggestion!
I believe I have created the PCM devices but for some reason I cannot use them in CamillaDSP. They work perfectly outside of CamillaDSP, I can play to ch 1-2 using aplay with one PCM device and at the same time use REW to play to channels 3-4 with another PCM device all while using the REW RTA to record the loopback channels on inputs 9-10. However when I specify my "playback" PCM device as my CamillaDSP playback device I get the following error.
[src/bin.rs:344] Playback error: ALSA function 'snd_pcm_open' failed with error 'ENOENT: No such file or directory'
I can use other PCM devices (like default) as playback devices without issue in CamillaDSP.
Here is my .asoundrc which I pretty much stole from here -> https://bootlin.com/blog/audio-multi-channel-routing-and-mixing-using-alsalib/
Code:
pcm_slave.outs {
pcm "hw:UltraLitemk5"
rate 88200
channels 18
}
pcm.loopback {
type dshare
ipc_key 4242
slave outs
bindings.0 0
bindings.1 1
}
pcm.playback {
type dshare
ipc_key 4242
slave outs
bindings.2 2
bindings.3 3
bindings.4 4
bindings.5 5
bindings.6 6
bindings.7 7
bindings.8 8
bindings.9 9
bindings.10 10
bindings.11 11
bindings.12 12
bindings.13 13
bindings.14 14
bindings.15 15
bindings.16 16
bindings.17 17
}
Is there anything special that needs to be done to specify these PCM devices as playback devices in CamillaDSP?
Michael
Code:
ecasound -B:rt -z:mixmode,sum \
-a:getdata -f:32,4,96000 -i:alsahw,X,0 -o:alsahw,Y,0,0
-a:putdata -f:32,4,96000 -i:alsahw,Y,1,1 -o:alsahw,X,0
Oops, I noticed that I forgot a line continuation character at the end of the second line above.
Also, Ecasound commands are often multi-line, so I typically put them in a bash shell script. Good practice is to start the file with #!/bin/bash to tell the interpreter what to use. Here it is all together now:
Code:
#!/bin/bash
ecasound -B:rt -z:mixmode,sum \
-a:getdata -f:32,4,96000 -i:alsahw,X,0 -o:alsahw,Y,0,0 \
-a:putdata -f:32,4,96000 -i:alsahw,Y,1,1 -o:alsahw,X,0
Save the (text) file as "test.sh" or whatever name you wish. Then make that file executable using chmod:
Code:
chmod +x test.sh
Run the file as usual, e.g.
Code:
./test.sh
OK, realized I needed to use /etc/asound.conf instead of .asoundrc. I can now use the PCM devices as output devices but it still looks like once CamillaDSP is running it locks out the use of the interface.
Here is what I get when I try to play a test file to the "loopback" PCM device while using the "playback" PCM device in CamillaDSP.
aplay -D loopback 1k_88200_L.wav
ALSA lib pcm_direct.c:2127:(_snd_pcm_direct_new) unable to create IPC semaphore
aplay: main:831: audio open error: Permission denied
Very much out of my depth with this ALSA configuration stuff so maybe there is something to be done so that it doesn't lock out?
Michael
Michael, should not bindings channels start from zero, so that all channels in the new PCM device are defined?
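In other words (a sketch only - in a dshare binding the key is the channel index of the new PCM device and the value is the slave channel it maps to), the "playback" device would become something like:

```
pcm.playback {
    type dshare
    ipc_key 4242
    slave outs
    bindings.0 2
    bindings.1 3
    bindings.2 4
    # ... continuing up to bindings.15 17
}
```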
I'm still working on getting my setup documented. I'm also running tests with the recommended aplay/arecord commands, trying to figure out if things work at the lower level. But I realised that I had forgotten something potentially important related to Fedora versions. Especially versions 34 and 35 contain major changes to pipewire/wireplumber.
@mdsimon2 which OS/version are you running? When going through my notes I realised that I had briefly managed to get this (CDSP+REW) working. This was when I was on Fedora 34. I was in the middle of setting the new mic/loopback up, and then on one of the reboots Fedora suggested running updates. I did not realise it was going to do a version upgrade to 35, but that was the result.
On Fedora 35 my Input device list contained only "Analog input - Built-in Audio". On Fedora 34 I had "UltraLite-mk5" listed and also selected as the default input device. And that worked with CamillaDSP + REW: I was able to run a measurement sweep and both the microphone input and the loopback reference signal were coming through. This was such a brief moment that I forgot about it. Even then I remember that getting REW to show channel numbers instead of L/R was a hassle.
I am now on Fedora 36, thinking it could not get any worse. Initially it also contained only the "Analog input..." device. The "UltraLite-mk5" was missing. It took me a while to find that the default "Profile" for the MK5 was set to "Off". When I changed it with pavucontrol to any value other than "Off" it became visible in the Input device selection list. I ended up using the "Pro" profile because it was recommended for these kinds of pro audio interfaces.
Using MK5 with Pro profile did not directly help with REW since if I selected the "UltraLite-mk5 Pro" as the input device the microphone did not work in REW regardless of CDSP running or not. I had to create a virtual source inside pipewire configuration connecting the MK5 capture-AUX0 to it. This is named "MOTU MIC1" in below screenshot. When it is selected as the input device, selecting "default (default)" in REW makes the mic work (without CDSP).
TLDR; For me it seems like something broke the CDSP+REW combo when upgrading from Fedora 34 -> 35. I can identify two changes so far: Wireplumber becoming the default media session manager and the introduction of "profiles". Most likely a lot of changes to Pipewire too.
I will continue my lower level testing. I will also try switching back to the old pipewire-session-manager to see if it makes a difference.
Thank god this is a hobby 😉
Henris, do you really need pipewire/pulseaudio in your measurement chain? What functionality does it serve which cannot be provided by plain alsa, in your specific setup?
I have no idea. I ended up on Linux/Fedora when I hit problems on the Windows side. And since then I have followed the path set by Henrik (Fedora, camilladsp-config project). I have yet to see a tutorial using plain alsa config. I'm all for a simpler setup, but I'm at a point where I understand that I do not understand enough to be able to be creative.
These are my playback, capture and measurement loopback chains. I will read more using plain alsa. Your examples to Michael are a good starting point.
Playback chain (stereo to two two-channel speakers + two subs): Spotify/REW -> alsa loopback device -> CDSP -> MK5 alsa hw playback device (6 channels) -> MK5
Capture chain: Analog mic -> MK5 MIC1 input -> MK5 alsa hw capture device AUX0 -> pipewire virtual loopback source -> REW
Measurement loopback chain (cloned from R): REW -> alsa loopback device -> CDSP -> MK5 alsa hw playback device (1 channel) -> MK5 output 10 -> cable -> MK5 input 3 -> MK5 alsa hw capture device AUX0 -> pipewire virtual loopback source -> REW
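For reference, the CDSP hop in the playback chain would correspond to a devices section roughly like this in the CamillaDSP YAML config (the device names, channel counts, rate and formats here are assumptions read off the chain description, not a tested config):

```yaml
devices:
  samplerate: 96000
  chunksize: 1024
  capture:
    type: Alsa
    channels: 2
    device: "hw:Loopback,1"   # reading end of the ALSA loopback fed by Spotify/REW
    format: S32LE
  playback:
    type: Alsa
    channels: 6
    device: "hw:UltraLitemk5" # MK5 multichannel playback device
    format: S32LE
```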
Found this in Pipewire open issues:
Audio Input regression on behringer umc404hd after update
https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/2703
I'm at least running exactly the same Pipewire and kernel versions. Gives even more motivation to perform those aplay/arecord tests and find out if I could go plain alsa.
Michael, should not bindings channels start from zero, so that all channels in the new PCM device are defined?
Thanks for the tip, I am out of town for the weekend but will give that a try when I return.
@mdsimon2 which OS/version are you running?
Ubuntu Server 22.04 64 bit on a raspberry pi 4 using plain ALSA.
Michael
I assume the above is not running at the same time with the below, right? Otherwise you would need to create the split devices like we discussed with Michael in the previous posts.
Playback chain (stereo to two two-channel speakers + two subs): Spotify/REW -> alsa loopback device -> CDSP -> MK5 alsa hw playback device (6 channels) -> MK5
Capture chain: Analog mic -> MK5 MIC1 input -> MK5 alsa hw capture device AUX0 -> pipewire virtual loopback source -> REW
Measurement loopback chain (cloned from R): REW -> alsa loopback device -> CDSP -> MK5 alsa hw playback device (1 channel) -> MK5 output 10 -> cable -> MK5 input 3 -> MK5 alsa hw capture device AUX0 -> pipewire virtual loopback source -> REW
IMO there is no need to put PW in your capture chains. Just let REW capture directly from the alsa hw device. Should duplex problems like those of Michael arise, they need to be fixed.
Of course you must tell PW to ignore the MK5 device. That may be the current problem - IMO PA/PW cannot ignore a device in one direction only, so you cannot tell it to ignore MK5 on playback and use on capture. As a result PW "consumes" the playback MK5 alsa device, making it "device busy" for CDSP. Solution - get rid of PW in all your chains.
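With WirePlumber (the default on Fedora 35+), ignoring the card could look like the sketch below, dropped into ~/.config/wireplumber/main.lua.d/. The device.name pattern is an assumption - check the actual name with wpctl status first:

```lua
-- 51-ignore-motu.lua: hide the MOTU from PipeWire so plain alsa apps can open it
rule = {
  matches = {
    {
      -- match the ALSA card by name; adjust the pattern to your system
      { "device.name", "matches", "alsa_card.usb-MOTU_UltraLite-mk5*" },
    },
  },
  apply_properties = {
    ["device.disabled"] = true,
  },
}
table.insert(alsa_monitor.rules, rule)
```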
Did both CDSP and aplay run under the same user? Alsa IPC semaphores have permissions 0600 by default, i.e. only owner rw. Alternatively you can try adding ipc_perm 666 below both occurrences of ipc_key in your asound config.
Here is what I get when I try to play a test file to the "loopback" PCM device while using the "playback" PCM device in CamillaDSP.
aplay -D loopback 1k_88200_L.wav
ALSA lib pcm_direct.c:2127:(_snd_pcm_direct_new) unable to create IPC semaphore
aplay: main:831: audio open error: Permission denied
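For the ipc_perm route, the sketch below shows where the option would go in the earlier asound config (the same line would be added under the other ipc_key as well; 0666 lets any user attach to the shared IPC objects):

```
pcm.loopback {
    type dshare
    ipc_key 4242
    ipc_perm 0666   # default is 0600 (owner-only)
    slave outs
    bindings.0 0
    bindings.1 1
}
```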
Ran the arecord and aplay tests with my UL MK5. I decided to record to a wav just to make sure the microphone is really recorded. While the recording worked nicely the playback failed due to incorrect channel count. Did some more reading and wound up defining slave devices for testing purposes in /etc/asound.conf.
Code:
pcm_slave.ins {
pcm "hw:UltraLitemk5,0,0"
rate 96000
channels 16
}
pcm.mic1 {
type dsnoop
ipc_key 516001
slave ins
bindings.0 0
}
pcm_slave.outs {
pcm "hw:UltraLitemk5,0,0"
rate 96000
channels 18
}
pcm.out0 {
type dshare
ipc_key 516002
slave outs
bindings.0 0
}
After that I was able to record with:
Code:
arecord -f S24_3LE -d 30 -r 96000 -c 1 --device="pcm.mic1" /tmp/test-mic.wav
And playback with:
Code:
aplay --device="pcm.out0" /tmp/test-mic.wav
But the simultaneous playback and capture failed: if playback was active the recording failed, and if recording was active the playback failed.
While doing the alsa slave config I noticed that the card and device IDs were the same for playback and capture:
aplay -l
arecord -l
Should it be like this? Wouldn't this explain why the simultaneous playback and capture fails?