I ain't very bright so I have a stupid question.
The manual states "For performing fixed ratio resampling, like resampling from 44.1kHz to 96kHz (which corresponds to a precise ratio of 147/320), choose the "Synchronous" variant" ... which is cool because that's what I am doing.
It also states "When using the rate adjust feature to match capture and playback devices, one of the "Async" variants must be used"
Just for my education, under what scenarios would you need to use rate adjust?
Thanks,
Peter
The question to ask is whether your capture and playback devices have different clocks. If they have different clocks you need rate adjust.
In the case of something like a USB audio interface with an internal clock and analog input/output, both the capture and playback devices are using the same internal clock, so no rate adjust is required. In this case you should see a very stable buffer level.
If your capture and playback devices have different clocks, you will need to use rate adjust to bridge the clock domains and prevent buffer under/overruns. For example, if you were using a SPDIF-to-USB card as input (clocked by the incoming SPDIF stream) and a USB DAC as output (using its internal clock), you would need to enable rate adjust and async resampling. Without rate adjust / resampling you would see a steadily decreasing or increasing buffer level. With rate adjust / resampling you will see a varying buffer level, but CamillaDSP should always be adjusting to try to maintain your target level.
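As a rough sketch of that S/PDIF-in / USB-DAC-out case, the devices section could look something like this (CamillaDSP v1.x keys; the device names, chunksize and target_level are just placeholders to adapt to your system):

```yaml
devices:
  samplerate: 96000
  chunksize: 2048
  target_level: 2048              # buffer level the rate controller tries to hold
  enable_rate_adjust: true        # measure drift between the two clock domains
  enable_resampling: true
  resampler_type: BalancedAsync   # async resampler absorbs the measured drift
  capture_samplerate: 96000       # same nominal rate, but a different clock
  capture:
    type: Alsa
    channels: 2
    device: "hw:SPDIF"            # placeholder: S/PDIF-to-USB capture card
    format: S32LE
  playback:
    type: Alsa
    channels: 2
    device: "hw:DAC"              # placeholder: USB DAC
    format: S32LE
```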
If your capture device is an ALSA loopback, rate adjust is still required but async resampling is not, as CamillaDSP can adjust the virtual clock of the loopback device.
Hope this helps.
Michael
You could change it to log? Did you ever think about making the config another tab and leaving the Equalizer scale thing with status stuff on the main page? That would allow you to stretch it out.

This is actually quite similar to how it looks now 🙂 Look at the screenshot here: https://www.diyaudio.com/community/...m-correction-etc.349818/page-170#post-7087214
Right now the scale starts at -50dB, and it's linear in dB. I would like to change that to get a larger range. Sometimes it's useful to see if there is some very low level noise, so would like to expand the scale range up to 100 dB or so. I'm thinking something similar to your sketch, where it gets finer towards the right. Gridlines at 0, -3, -6, -12, -24, -48 and -96? I'll try that to see how it looks. But this is maybe something that should wait until the next version. Have to stop somewhere if it's ever going to get ready 🙂
Hi Henrik and all the rest,

CamillaDSP runs all filters in a single thread. This is because the filtering pipeline can be built very freely, and it's difficult to figure out what can be run in parallel with what. I am thinking about ways of splitting it into several threads, but it won't be easy. And splitting into more threads is likely to increase the minimum possible latency because of the overhead of shuffling data between threads.
However, even a weak CPU today is quite powerful. For example, a Raspberry Pi 4 can do FIR filtering of 8 channels, with 262k taps per channel at 192 kHz, while using just over half a core. A Gemini Lake should be in the same ballpark.
I was going to ask about the HW requirements until I found this post from you back in Dec -21, but I don't know if it fully answers what I would like to do. I have long had the idea of a 4-way active DIY speaker project, but besides the crossover filtering I also want to linearize both the frequency and phase response to a very fine degree (using rePhase), which probably translates to many, many taps I guess. Would you say an RPi4 is still powerful enough, or does one have to step up to a mini PC with a more powerful AMD or Intel CPU?
And a quick question regarding choice of OS: I am not all too familiar with the RPi, but what OSes is it possible to run and use CDSP on?
Would it be possible and reasonable to run a so-called headless Linux distro, maybe something like TinyCore? I was just thinking of running the leanest Linux possible so it can spend as much processing power as possible running CDSP.
Oh, and one more question: how about specialized kernels? Does it help to run some low-latency kernel? Any thoughts and suggestions around this?
Regards Michael
If your capture device is an ALSA loopback, rate adjust is still required but async resampling is not, as CamillaDSP can adjust the virtual clock of the loopback device.
Many thanks for that!!!
The above is my use case, and as per the Camilla doco, 44.1 -> 96 is a fixed ratio, so sync (not async) resampling works.
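For reference, a sketch of the devices section for that use case (44.1 kHz in from an ALSA loopback, 96 kHz out, synchronous resampling; CamillaDSP v1.x keys, device names are placeholders):

```yaml
devices:
  samplerate: 96000             # pipeline and playback rate
  capture_samplerate: 44100     # rate delivered by the loopback
  chunksize: 2048
  enable_resampling: true
  resampler_type: Synchronous   # fixed-ratio 44.1k -> 96k
  enable_rate_adjust: true      # still needed with a loopback, as noted above
  capture:
    type: Alsa
    channels: 2
    device: "hw:Loopback,1"     # placeholder loopback subdevice
    format: S32LE
  playback:
    type: Alsa
    channels: 2
    device: "hw:DAC"            # placeholder USB DAC
    format: S32LE
```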
@HenrikEnquist
What's the probability of routing the ALSA playback on a per-channel basis (i.e. Front L+R to DAC A, Surround L+R to DAC B etc) rather than globally?
This would allow the use of multiple 2-channel DACs... yes, you might get some slight latency difference, but that would not be noticeable (IMHO) given the acoustical differences due to speaker placements/room acoustics.
Thanks,
Peter
You can do it with an Alsa multi plugin. See here: https://alsa.opensrc.org/TwoCardsAsOne
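The kind of thing that page describes is an asound.conf / .asoundrc definition along these lines (a sketch only; the card names here are placeholders):

```
# Combine two 2-channel cards into one 4-channel PCM
pcm.multi {
    type multi
    slaves.a.pcm "hw:DacA"        # placeholder: first 2-channel DAC
    slaves.a.channels 2
    slaves.b.pcm "hw:DacB"        # placeholder: second 2-channel DAC
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

# Present it as a normal 4-channel device that can be used for playback
pcm.quad {
    type route
    slave.pcm "multi"
    slave.channels 4
    ttable.0.0 1
    ttable.1.1 1
    ttable.2.2 1
    ttable.3.3 1
}
```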
This seems to work alright as long as the DACs have synchronized clocks. In practice this basically means simple USB DACs that run in synchronous mode. They lock their sample rate to the USB bus clock, so they will keep in sync with each other as long as they are connected to the same USB bus. High quality DACs normally use async mode. Then each DAC runs from its own clock, and they drift apart. This means you get a varying latency difference, and when the difference gets too large there will be glitches from buffer underruns.
I hate to jump in regarding this issue and talk about some "other" software, but it is something that I brought up with Henrik recently about CDSP, so it's relevant here.
What you describe is possible under other audio processing software. I use Gstreamer for my own routing and DSP work. It happens to do many other things that I like, so I designed an app around it. It is much more complicated than CDSP, so get ready for a deep dive if you want to give it a try.
Gstreamer allows the user to define a reference for the pipeline clock. The user can choose whether this is the source, sink, or the local clock on the computer. All other sources or sinks are slaved to this clock by manipulating samples or the current location in the ring buffer. This works even for multiple asynchronous clocks. Gstreamer's method for synchronization is more crude than what Henrik has done in CDSP, but Henrik (from what he told me) has designed the CDSP architecture for only one source and one sink, and to make it possible for CDSP to use multiple sinks would require re-writing much of the code. So it looks like that is not going to happen. But wait, there is another way...
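To make that a bit more concrete, here is a minimal sketch (not my actual app) of how a GStreamer pipeline can be told to use the computer's own clock, assuming PyGObject and the standard RTP/ALSA plugins are installed:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Receive an uncompressed 24-bit stereo RTP stream and play it on the default ALSA device.
pipeline = Gst.parse_launch(
    'udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,'
    'encoding-name=L24,channels=2" ! rtpjitterbuffer ! rtpL24depay ! '
    'audioconvert ! audioresample ! alsasink'
)

# Use the local system clock (kept accurate with chrony/NTP) as the pipeline clock,
# instead of letting the audio sink or the source provide it.
pipeline.use_clock(Gst.SystemClock.obtain())

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

With something like this running on each client, and the system clocks kept in sync, playback stays aligned in the way described below.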
Gstreamer's ability to sync to the local clock is very useful to me. I stream uncompressed audio from one computer (let's call it my HTPC) to various loudspeaker systems in my home, over WiFi. On some systems I use two separate computers to run the left and right loudspeakers, and on each there is a Gstreamer instance that receives the audio stream and does all the DSP. These must be kept tightly in sync, to within 100 usec or so, or the stereo image will start to wander. At the same time I have set up a stratum 1 GPS-based time server on my LAN and I use chrony to sync all of my audio network clients to that reference clock. With only one hop via WiFi I can get very good synchronization, e.g. to within 10 usec. Then I configure Gstreamer to use the local clock (the client's clock) as the pipeline clock, and sync RX and TX to that. As a result, every single audio client is in sync. I can play multiple loudspeaker systems in the same room and they play back as one. Under this scenario I can, for example, have multiple distributed subwoofers in addition to the main L/R speakers, each with their own computer client, all in sync. I do this because one of my hobby interests is creating wireless active loudspeaker projects.
Anyway, even if Henrik did not change CDSP to use more than one source or sink, if he could make it possible to sync both source and sink to the local clock you could deploy multiple computers, each running one DAC, and get the same benefit as long as the local clocks were all tightly synced to one time reference. It probably would not be as good as a single client running a multichannel DAC, but it would really increase the utility from the user standpoint. Raise your hand if you think this would be a useful feature in Camilla.
Sorry, missed this one before! The numbers in that old post are still valid, there haven't been any significant speed increases since then. For a pair of 4-way speakers you also need 8 channels like in the old post. I would say that 262k taps per channel is quite a lot, you will hardly need more than that. It's definitely possible to create longer filters, but I don't think you would gain anything useful from the extra length. So a Pi4 is very likely more than sufficient.
For OS I would suggest to start with Raspberry Pi OS since it's simple to set up and common. TinyCore is a special one that runs from RAM. It's very slim, but more difficult to customize than "normal" distributions. If you want something lean, just go with the RPiOS version without a graphical desktop. And don't bother with special kernels.
I see two scenarios where syncing both capture and playback to the local clock would be possible. One is when capturing from the alsa loopback. Since it has a pitch control, that could be used to control the rate of the incoming data. Then the asynchronous resampler could be used to sync the playback device. The other case is when capturing from file, pipe or stdin. This would need a complete rewrite of that backend, to allow it to read and pass along data at a very precise rate. The other capture sources don't provide any way to control the rate of the data.
There is one remaining problem though. For this to make sense at all, the playback on several individual systems must be started at the same time, with high precision. I guess RTP or whatever Gstreamer uses can help with that, but I really don't have any experience.
Some more progress on the GUI 🙂
The level meter now goes down to -108 dB.
And I split the too-complicated "Apply to DSP" button into three separate buttons with one function each. There are also separate controls for saving and applying automatically, as well as indicators for when there are changes that have not been saved or applied to the DSP.
I'll clean up a little, then release a new preview.
I'll update the macOS setup script also so TNT can run it and give feedback 🙂
Exactly, the gstreamer setup works because the transmitter uses RTP, which puts timestamps within the stream, and all the gstreamer receivers synchronize playback to these timestamps.
Also got some errors here.

Ah yes of course, the wonderful CODE block garbled stuff as usual.
Try this one: Translate REW xml filter config · GitHub
BTW: NOOB in Python. How do I run this script? macOS, Python --version 3.8.9
There was a typo on the first row, fixed now (an "i" had gotten lost so it said "mport" instead of "import"). Try again now! If you still have trouble, just post the error message here.
Thanks, totally blind here it seems 🙂
Started now like this:
python3 xmltranslate.py "test.xml"

Had to install pyyaml:
Traceback (most recent call last):
  File "xmltranslate.py", line 3, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'

pip3 install pyyaml

Now it works.
New gui release candidate!
You need
- camilladsp v1.0.1: https://github.com/HEnquist/camilladsp/releases/tag/v1.0.1
- pycamilladsp v1.0.0: https://github.com/HEnquist/pycamilladsp/releases/tag/v1.0.0
- pycamilladsp-plot v1.0.2-rc1: https://github.com/HEnquist/pycamilladsp-plot/releases/tag/v1.0.2-rc1
- camillagui-backend v1.0.0-rc6: https://github.com/HEnquist/camillagui-backend/releases/tag/v1.0.0-rc6
On macOS, you can try the updated (but untested) setup script in branch "gui_rc100": https://github.com/HEnquist/camilladsp-setupscripts/tree/gui_rc100
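As one possible alternative for the two Python packages, they can also be installed straight from those tags with pip (assuming git and pip3 are available; camilladsp itself and camillagui-backend still come from their release pages as linked above):

```
pip3 install git+https://github.com/HEnquist/pycamilladsp.git@v1.0.0
pip3 install git+https://github.com/HEnquist/pycamilladsp-plot.git@v1.0.2-rc1
```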
I tried updating the GUI but get some weird behavior where it smashes all of my configs together.
Switching back to rc5 solves the issue.
This is on a RPi4 running Ubuntu Server 22.04 64 bit (5.15.0-1013-raspi kernel).
Michael
It looks like it's running the old frontend still. The browser may be getting that from its cache instead of fetching the new one. Can you try just reloading the page?
Thanks. Seems to work just fine. Much better sounding also...... kidding!
However, in my install it always says I have pycamilladsp v0.6.0 instead of v1.0.0 (I have installed it many times, cleared the cache, tried different browsers etc). Raspberry Pi OS Lite on an RPi 3B+. On my RPi 4 with Ubuntu it says v1.0.0.
Probably just a minor mistake
EDIT: Fixed
Hey guys,
Hope you are doing well and having a nice day today.
I have searched through this mega thread but am still a little uncertain about whether or not this is a dumb plan I'm hatching up. I could use the briefest hand holding if anyone has a spare hand for a moment?
The preamble
I do animation for a living and (for fun) I am building a small DIY speaker system to use in my studio (2× satellite desktop speakers + 2× separate woofers, crossed at around 160 Hz). I am very interested in incorporating CamillaDSP to apply the crossover for each woofer, as well as doing some room EQ for 2 possible configurations (sitting and standing desk). I can run this on the computer I work on; it will not be required for any other audio input. As this system will be used for my computer speakers, I can't allow for any noticeable lag between the screen content and the audio. I'm just not sure that this is even possible?
The plan
- Install a Motu M4 interface on my main workstation computer (M1 MAX Mac Studio).
- Install CamillaDSP on the same workstation.
- Route system audio into CamillaDSP (via Rogue Amoeba's Loopback or Blackhole 2).
- Apply room EQ and 2× crossovers.
- Analog out to 2× amps driving 2× desktop speakers and 2× separate woofers.
So my main question at this point is whether this is even possible or not?
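As a rough sketch of what the CamillaDSP side of this plan could look like (v1.x-style config; the CoreAudio device names, sample rate and chunksize are placeholders to adapt, and the exact CoreAudio options may differ between versions):

```yaml
devices:
  samplerate: 48000
  chunksize: 512                  # small chunks keep the added latency low
  capture:
    type: CoreAudio
    channels: 2
    device: "BlackHole 2ch"       # virtual device that system audio is routed into
    format: FLOAT32LE
  playback:
    type: CoreAudio
    channels: 4
    device: "MOTU M4"             # name as shown in Audio MIDI Setup
    format: FLOAT32LE

mixers:
  to4ch:                          # copy L/R to a second pair of channels for the woofers
    channels:
      in: 2
      out: 4
    mapping:
      - dest: 0
        sources: [{channel: 0, gain: 0, inverted: false}]
      - dest: 1
        sources: [{channel: 1, gain: 0, inverted: false}]
      - dest: 2
        sources: [{channel: 0, gain: 0, inverted: false}]
      - dest: 3
        sources: [{channel: 1, gain: 0, inverted: false}]

filters:
  hp160:
    type: BiquadCombo
    parameters: {type: LinkwitzRileyHighpass, freq: 160, order: 4}
  lp160:
    type: BiquadCombo
    parameters: {type: LinkwitzRileyLowpass, freq: 160, order: 4}

pipeline:
  - type: Mixer
    name: to4ch
  - {type: Filter, channel: 0, names: [hp160]}   # left satellite
  - {type: Filter, channel: 1, names: [hp160]}   # right satellite
  - {type: Filter, channel: 2, names: [lp160]}   # woofer 1
  - {type: Filter, channel: 3, names: [lp160]}   # woofer 2
```

Each chunk is roughly 10 ms at these settings and the total latency is typically a few chunks, which is usually low enough that lip sync with the screen is not a problem. Room EQ filters would be added to the filters section and appended to the per-channel names lists.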