Multichannel WLAN streaming

Hi experts,

Is it, or would it be, possible to stream the separate channels of a surround audio source via WiFi?

Hope my question is clear... I will try to explain a bit more.

Currently I am running a stereo streaming setup: NAS to a Raspberry Pi with an I2S DAC. Works fine!
Is it possible, for example, to somehow split surround audio into its separate channels, play the front channels on my stereo setup, and send the rear left and right to other Raspberry Pis (or similar) with DAC, amp, and speakers?
So, kind of building a wireless surround setup.
 
It is definitely "possible" to do. The important question is "how".

You could use gstreamer to do this. Are you familiar with that platform? You would need to accept input into gstreamer using a loopback device from the audio subsystem, then de-interleave the audio into separate channels. From those you create two new streams for front and rear, re-interleave them, and send them out via WiFi as separate streams.

I created a gstreamer application in Linux that does some similar stuff, but it only accepts stereo input. Link here:
GSASysCon: A bash-script-based streaming audio system controller for gstreamer
It can stream left/right/mono/stereo to any client or set of clients (e.g. a Raspberry Pi). I just use the "testing" variant of gstreamer, called gst-launch-1.0, to run the whole thing. Works great.

There would be a learning curve for you in gstreamer. I could give you some help if/when you get stuck. But it is definitely something that gstreamer can handle.

The (much) simpler approach would be to stream all channels to front and rear and just use the ones that you need where you need them. In that case you might be able to find an off-the-shelf application.
 
gstreamer is a very capable but very complicated platform. The documentation is not exactly great in terms of examples of how to get started, how to assemble a pipeline to test it, and so on. I can help with this.

Have you installed gstreamer? Check by typing on the command line:
Code:
gst-launch-1.0 --version
This will produce a short message about gstreamer if it is installed.
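If it is not installed, something like the following should pull in the basics on a Debian/Raspbian-type system (exact package names may differ on your distribution, so treat this as a rough pointer):
Code:
sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-alsa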

I only know the command line "test" version of gstreamer. I like it because I can try things from the command line without having to code up a program (e.g. in C++ or whatever). I found that I can do pretty much everything I need using this approach, and while it would not be acceptable for commercial software, it works perfectly.

So, about gstreamer:
A typical "pipeline" consists of a source, some stuff in between that says what to do with the data, and a sink. Pipelines are built from elements. When describing the pipeline, each element must be separated by an exclamation point "!", with a couple of exceptions, but let's just use that rule for now. Before the pipeline on the command line is placed the command "gst-launch-1.0", which invokes the gstreamer interpreter for the remaining input. Note that there is no "!" after "gst-launch-1.0" and before the first element in the pipeline.

Unless you want to just play files from the command line with gstreamer, you will probably want to use another program to play the file, and then use gstreamer to do the processing (in this case splitting the channels into two or more audio streams, sending those streams over WiFi, etc.). You will need a program like jack, or the ALSA loopback (snd-aloop), to connect the output of your "player" software to the input used by gstreamer. I do NOT recommend piping stdout to stdin. It sucks. Instead, load the ALSA loopback by typing
Code:
sudo modprobe snd-aloop
If the OS ships with the ALSA loopback driver, this loads it. To load it automatically at each bootup, edit the file /etc/modules and add "snd-aloop" on its own line, without the quotes of course. I will now assume that you have the ALSA loopback working. What you will do is tell your audio player or streamer program to "play" to the loopback. Then you tell gstreamer to use the loopback as its source. I usually address ALSA devices by card number, and the hardware (no conversion) interface. In gstreamer, you do something like:
Code:
gst-launch-1.0 alsasrc device=hw:X,1 ! alsasink device=hw:Y,0
In the above command I have used "X" and "Y" to indicate an unknown card number. You need to know what this is for your own system. To find that, type
Code:
aplay -l
That's a lower case "L", above. Note the card number for the loopback, and for your soundcard. Put these in for X and Y and run the command in a terminal.
Now, play some audio to the loopback from your player. It probably doesn't work! Why? Well, the loopback sets its properties based on the first connection made to it. But that was gstreamer, which did not specify any properties, so it probably just used the default sample rate and format of ALSA, which is often 16-bit, 48 kHz stereo. If you played something else to the loopback, it either doesn't work at all or the audio is not right.

Kill the gstreamer process and stop and close your player (this will make sure it releases its alsa connection). Next, open the player first and play something to the loopback. Then run the gstreamer command. Now you probably will get audio from your playback device (DAC), or whatever you specified as an audio sink in your gstreamer pipeline.
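An alternative (just a sketch, assuming your player is sending 16-bit/44.1 kHz stereo) is to pin down the format on the gstreamer side with a caps filter, so that gstreamer asks the loopback for a known format rather than whatever ALSA defaults to:
Code:
gst-launch-1.0 alsasrc device=hw:X,1 ! audio/x-raw,format=S16LE,rate=44100,channels=2 ! audioconvert ! alsasink device=hw:Y,0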

This is a simple example that I wrote off the top of my head. To achieve what you want to do will take a more complicated gstreamer pipeline, but see if you can get this working first, just to get some practice playing around. Then go online and read about the following gstreamer elements:
Code:
deinterleave
interleave
audioconvert
queue
multiudpsink
udpsrc
rtpjitterbuffer
rtpL16pay/rtpL24pay
rtpL16depay/rtpL24depay
You will need these to build a pipeline that splits the audio and streams certain channels to certain remote clients over your LAN using WiFi or wired connections.
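To give a feel for how those pieces fit together, here is a rough, untested sketch of a stereo sender and a matching receiver (the IP address, port, sample rate, and ALSA card numbers are just placeholders). The multichannel version would add deinterleave/interleave in front of the sender to pick out the channels that go to each client:
Code:
# on the sender:
gst-launch-1.0 alsasrc device=hw:X,1 ! audioconvert ! audio/x-raw,format=S16BE,rate=48000,channels=2 ! rtpL16pay ! udpsink host=192.168.1.50 port=5004

# on the receiving client:
gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=L16,channels=2" ! rtpjitterbuffer latency=100 ! rtpL16depay ! audioconvert ! alsasink device=hw:Y,0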

If you look into my code, look at the files GSASysCon.sh and config.sys. In GSASysCon I build the pipeline based on input that the user specifies in text files; if you understand bash scripting you will be able to see how I build my "sender" and "receiver" pipelines in the functions build_and_launch_gstreamer_pipeline and launch_system_clients. I do all of this from my "server" computer, where my audio files and player software reside, and the gstreamer pipeline is built on the client by first connecting via ssh and then sending commands over that connection which are run on the client. You can do that, or something different, but you will need some way to run gst-launch-1.0 on the client when you want it to receive the audio stream.
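In case it helps to see that idea in isolation: the launch-on-the-client step can be as crude as running the receiver pipeline over ssh (hypothetical user and host, and the same placeholder receiver pipeline as above):
Code:
ssh pi@192.168.1.50 'gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=L16,channels=2" ! rtpjitterbuffer latency=100 ! rtpL16depay ! audioconvert ! alsasink device=hw:0,0'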
 
I am also planning some mods to my own gstreamer application, including local playback of the input (it currently only supports remote playback clients, over a local LAN). At the same time I am trying to figure out an interface for implementing LADSPA plugins so that gstreamer can handle both streaming and DSP filtering without having to pass the output off to another program. I might just have the user write gstreamer code for the crossover, if I can make it simple enough to do. This will really extend the capabilities of the program, so it should be a nice update for the next release.

While I was thinking through all of these plans, I realized that there is a problem with using gstreamer alone for what you want to do. I believe you want to run gstreamer on the computer near your front speakers and stream audio to your rear speakers over WiFi. The problem is that this creates latency (from sending the audio over a wireless or wired network). It might only be a few tens of milliseconds, but it will likely be more depending on the type of connection used and the processing on the client. With WiFi you usually want at least 50-100 msec of buffering on the receiving client as well. These latencies are fixed - you choose them to suit the application, to minimize data loss, etc. To keep the front and rear speakers "in time" you also need to delay the output to the local front speakers by the same amount. But there is no native way to do this in gstreamer, which I found a bit surprising. I am not aware of a gstreamer element that introduces a delay in the data stream without having to tweak timestamps or other low-level stuff. Seems like a bit of an omission on their part!

Luckily there is a solution - use a LADSPA plugin! These are the same type of plugins I will be using to implement the crossover functionality. It turns out that there are a few different delay lines available as LADSPA plugins. In fact, if you just install LADSPA under Linux like this:
Code:
sudo apt-get install ladspa-sdk
You get a couple of plugins as part of the install, and one of those is delay.so, a "Simple Delay Line". Gstreamer is pretty smart about LADSPA plugins: as soon as the plugins are installed on the system, gstreamer wraps each one as a gstreamer pipeline element. Nice. To see a list of the LADSPA plugins available under gstreamer, run
Code:
gst-inspect-1.0 ladspa
To get more info on each one, replace "ladspa" in the line above with the actual name of the plugin, e.g. "ladspa-delay-so-delay-5s". Gstreamer comes up with these names automagically somehow. When you list a plugin, you will see all the available properties. You will need to set these to make the plugin work the way you want. For example, the 5sec delay plugin has:
name
parent
qos
delay
dry-wet-balance
Only the last two are useful. The delay is the delay in seconds. The dry-wet-balance is the mix of delayed and non-delayed signal. You want this set to 1, which is 100% wet (meaning delayed) audio.

Once you get to that stage I will explain how to implement the delay.
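In the meantime, as a rough preview (assuming the element name and properties listed above, and a made-up 45 ms value; exact caps and channel handling may need some audioconvert tweaking), the delay could sit in a local playback pipeline something like this:
Code:
# delay the locally played audio by 45 ms, 100% wet:
gst-launch-1.0 alsasrc device=hw:X,1 ! audioconvert ! ladspa-delay-so-delay-5s delay=0.045 dry-wet-balance=1.0 ! audioconvert ! alsasink device=hw:Y,0
# note: the Simple Delay Line may be a mono plugin; a stereo chain might need deinterleave/interleave around it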

I thought I would put this out there in case anyone else wants to do this kind of thing.
 
I wanted to post an update about the streaming setup in the original post of this thread. It turns out that my current GSASysCon streaming audio controller can more or less accommodate this type of arrangement, with the source feeding the front speakers and two clients receiving streaming audio via WiFi for the rear speakers.

I designed GSASysCon for streaming audio to remote clients over a LAN (wired or WiFi). To do this I use ssh to log into the remote client, launch the gstreamer RX pipeline, and then run other programs if/when necessary (like ecasound, for a DSP crossover). But one can also ssh into the localhost and then stream to a local port. It seems redundant, and in a sense it is, but it actually solves the problem in which a delay is needed on the local computer to compensate for the latency of streaming to, and buffering on, the remote clients. When ssh-ing into, and streaming to, the localhost, the gstreamer RX buffering introduces some fixed amount of latency just as if it were on a remote client.
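In other words (placeholder port, format, and user name again), the receiver half is the same as on a remote client, it just runs on 127.0.0.1 while the sender's udpsink also points at 127.0.0.1:
Code:
ssh me@127.0.0.1 'gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=L16,channels=2" ! rtpjitterbuffer latency=100 ! rtpL16depay ! audioconvert ! alsasink device=hw:Y,0'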

I will be testing out this "local streaming" capability of GSASysCon while I continue to develop it as a LADSPA host (e.g. DSP crossover functionality via gstreamer+LADSPA). At the same time I will try to expand the streaming capability to an arbitrary number of channels (it is currently designed primarily for mono or stereo audio).
 
OK, after a little troubleshooting I have been able to get "local playback" working under GSASysCon. I needed to specify the localhost address, 127.0.0.1 as the CLIENT IP, and used sshpass to feed the password to ssh when logging into the localhost.

I listened through headphones while GSASysCon had a loudspeaker system playing in the next room. With the latencies set equal for each system there was an offset, but I could dial it out just by ear so that the playback timing seemed identical for each; this required an additional 45 milliseconds of delay via the LATENCY setting in GSASysCon for the local client. I could also have dialed this in by making a dual-channel measurement in ARTA, or other software that can measure the time difference between two channels.

I never thought of doing "local streaming", but I am glad you brought it up. Once I implement the LADSPA plug-in DSP capabilities, GSASysCon could be used on the localhost simply to implement a DSP crossover. It could completely replace my current software for that (ecasound) and would replicate the function of hardware DSP boxes like the miniDSP. GSASysCon can also be used to stream to clients and then perform LADSPA DSP on the client as well.
 
Great news, Charlie.
Can you estimate the computing power needed for the "head" and the "clients"?
Will a Raspi be good enough?
I think so, yes. I currently use R-Pi 2 and 3 models with the existing GSASysCon and they seem to have plenty of "headroom".

One advantage of using gstreamer for both streaming and the crossover processing is that gstreamer is a multithreaded application. My current crossover software, ecasound, is not, so gstreamer should be able to use the CPU more efficiently by spreading the processing over multiple cores.
 