A bash-script-based streaming audio system client controller for gstreamer

It works because the gstreamer plugin performs asynchronous reclocking (adding/removing samples as needed) between the incoming stream, which is timed by the synchronized system clock, and the output DAC clock (the USB controller clock). Very likely it would work for an asynchronous USB DAC too (clocked by the DAC itself).
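(If I read the gstreamer docs correctly, this reclocking behavior corresponds to the audio sink's slave-method property, where the default "skew" mode is the one that drops or inserts samples to track the pipeline clock. A quick way to check what your sink supports:)
Code:
gst-inspect-1.0 alsasink | grep -A4 slave-method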

According to a gstreamer reference page on clocks and synchronization that I found:
When the pipeline goes to the PLAYING state, it will go over all elements in the pipeline from sink to source and ask each element if they can provide a clock. The last element that can provide a clock will be used as the clock provider in the pipeline. This algorithm prefers a clock from an audio sink in a typical playback pipeline and a clock from source elements in a typical capture pipeline.
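A quick way to see this in action (hedged: provide-clock is a property of the audio sink base class, if I understand the docs correctly) is to take the sink's clock out of consideration and watch the pipeline fall back to the system clock:
Code:
# default provide-clock=true: the alsasink's (DAC controller's) clock wins
# provide-clock=false: the pipeline falls back to GstSystemClock
gst-launch-1.0 -v audiotestsrc ! audioconvert ! alsasink provide-clock=false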

I don't typically send data directly to the DAC on systems using multiple clients, but instead route it through a local ALSA loopback. I assume the loopback uses the system clock as its timing reference. I use the loopback to route the audio to ecasound, with which I implement DSP crossover functions. Ecasound sends its output directly to the DAC(s). This is typical of my systems where the left and right speakers are separate gstreamer clients.
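Roughly, the plumbing looks like this (a sketch only; the file source, device names, and .asoundrc aliases are placeholders, and the real ecasound chain setup is omitted):
Code:
# the gstreamer client drops the received audio into one side of the loopback
gst-launch-1.0 uridecodebin uri=file:///tmp/test.flac ! audioconvert ! \
   audioresample ! alsasink device=hw:Loopback,0,0
# ecasound picks it up on the other side and sends it on to the DAC(s);
# "loop_in" and "dac_out" would be aliases defined in ~/.asoundrc
ecasound -i alsa,loop_in -o alsa,dac_out ...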

I do have a system consisting of only one client, where I just send the gstreamer output to a single stereo DAC (one that uses USB adaptive mode), but I can't carefully compare its synchrony to the other systems, and within the system synchrony is a non-issue because there is only one client and one DAC.

Anyway, I would like to know about all of this on a more intimate level, so thanks for bringing up the topic. Let's keep up the discussion.
 
Six months down the road from the last post in this thread, and I am working on some more "features" to add to this package.

In the last post, above, I mention that on clients in the system gstreamer is receiving the streaming audio and then passing it to an ALSA loopback so that another program (e.g. ecasound) can use it as input. Ecasound is used to implement a DSP crossover (in software) before passing the audio out via the DAC(s). Ecasound uses LADSPA filter plugins that I wrote to do all the filtering, and provides routing functions for splitting the input audio into the different bands that are then filtered.

What has been a challenge is getting the clock that controls the ALSA loopback to run at the same speed as the source clock. I have been using NTP for this purpose, and even built a stratum 1 (GPS based) time server using a Raspberry Pi 3 that helps to keep all my Linux boxes in very tight sync (jitter is under 50 µs). But I recently discovered (thanks Phofman!) that gstreamer can support LADSPA plugins. I had known that gstreamer came with a couple of very simple ones, but I did not know that any and all LADSPA plugins that are registered with the O/S are available to gstreamer. Gstreamer sort of "repackages" each plugin, giving it a new gstreamer-specific name and assigning names to each parameter.
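If you want to see what is available on your own machine, gst-inspect will list the wrapped plugins and show their generated element and control names (assuming your plugins are installed where the ladspa module scans, e.g. on LADSPA_PATH). Using my ACDf plugin as an example:
Code:
gst-inspect-1.0 ladspa               # list every LADSPA plugin gstreamer has wrapped
gst-inspect-1.0 ladspa-acdf-so-acdf  # show one wrapped element and its control names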

After a lot of experimentation, I finally figured out how to create the same crossover functionality under gstreamer that I have been doing with ecasound. This means that I do not need to pass the audio via a loopback, instead I can implement the DSP crossover right within gstreamer itself, and gstreamer can send the audio to the DAC(s) directly. Because timing information will be managed completely within gstreamer, this may eliminate the need for NTP (or at least I hope so).
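As a rough illustration only (using my ACDf plugin's wrapped name; the type codes below are just placeholders for the low-pass and high-pass settings), a 2-way mono crossover entirely inside gstreamer could look something like:
Code:
gst-launch-1.0 audiotestsrc wave=pink-noise ! audioconvert ! tee name=t \
   t. ! queue ! ladspa-acdf-so-acdf type=21 fp=2500 qp=0.707 ! alsasink device=hw:1 \
   t. ! queue ! ladspa-acdf-so-acdf type=22 fp=2500 qp=0.707 ! alsasink device=hw:2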

So far I have tested out a skeleton of a DSP crossover. By "tested out" I mean I have routed and processed the audio using LADSPA plugins, and gstreamer seems to work (i.e. not crash). I have not yet been able to implement a real crossover on a loudspeaker and/or do any testing to make sure that the LADSPA filters are working properly, timing is correct between channels, etc. I am nonetheless very encouraged at this stage.

I have been thinking about how the user can describe the crossover system within GSASysCon (the existing program I wrote) without too much additional work. Initially I thought that I could create a crossover meta-language that could be read in as a config file and then translated into the appropriate gstreamer commands. But after thinking about it, the user would probably be just as well served if I gave a good overview of how to write the gstreamer (gst-launch) commands and then let the user write the pipeline themselves. Then I can just insert the user's gstreamer pipeline from the config file directly into the pipeline that GSASysCon already builds as part of the streaming audio chain. The complexity of gst-launch will be about the same as whatever meta-language I come up with, and I won't need to write an entire interpreter for it!

A side bonus of implementing everything using gstreamer is that all the code and config files can be located on the server, and nothing on the client. This helps to simplify setup, and allows swapping out client hardware without a lot of re-configuration.
 
A couple weeks later...

I started planning the recoding, and doing some more testing. I have local playback working now (actually, it was always possible).

Next up is extending the input format to an arbitrary number of channels so that it would be capable of handling e.g. surround sound system audio. I have that planned out. Just need to start coding the changes.

After that I will be implementing the crossover functionality. For now this will be on the client side only (seems to make sense, right?). The user will need to write gstreamer code and implement LADSPA plugins, but I have figured out how to do this and it is not too complicated to explain to others. The user gstreamer elements will just be inserted into the client pipeline directly, which allows for a lot of freedom to do whatever is desired. At the same time I will release a slightly modified version of my LADSPA plugins that has more succinct control names so that instead of having to write filter_pole_frequency-in-hertz=250 you can just write Fp=250. Since you need to specify multiple control values for each LADSPA plugin, keeping the control names short helps to make the code readable.

Finally, just today the issue of INPUT SWITCHING came up. Currently GSASysCon continually monitors ONE input, which is a bit limiting, although you could mix multiple sources into this input. It is certainly possible to have another application feeding the lone GSASysCon input that would perform input switching for GSASysCon by connecting and disconnecting other inputs to it. This would probably best be written as a separate application that also uses gstreamer, and it does not necessarily need to be integrated into the current code. I will be giving the design of this "input switching" code some thought over the next few days and see what I come up with.
 
Working on allowing for an arbitrary number of input channels... the motivation is to allow the input to be some kind of multichannel surround sound.

The question then is how to allow the user to specify the formula for mixing audio channels. I think I have a solution to this, which would be to allow operations within the following format:
output_channel = SUM{ scalar*input_ch1, scalar*input_ch2, ..., scalar*input_chN }
where scalar is a float value that can be positive or negative.

Then a downmix of 5-channel audio (Lfront, Rfront, Center, Lrear, Rrear), e.g.
DOWNMIXED_LEFT = L + 0.707*C + 0.5*L_surround
is achieved by writing for the output channel:
input_ch0+0.707*input_ch2+0.5*input_ch3

Using the same formalism, it would be possible to UPMIX. For example given LEFT, RIGHT input channels input_ch0 and input_ch1:
(L-R) would be written as: input_ch0-input_ch1
(L+R) would be written as: input_ch0+input_ch1
(R-L) would be written as: input_ch1-input_ch0
This is one version of Gerzon's "trinaural" reproduction formulas.

This type of user-directed system-specific input mixing permits playback on loudspeaker systems with a range of playback channels no matter how many channels are present in the audio input. The configuration file for each playback system specifies how the N input channels are mixed up/down and sent on to the loudspeaker system connected to the M-channel playback client. In this way a movie in 5.1 surround can be played in the TV room in 5.1 surround, in another room as 2-channel audio thru stereo loudspeakers, and somewhere else in the home in mono from a wall or ceiling speaker.
 
After some planning I had the chance to code this up today. Works great!

I managed to simplify how the user must declare the output channels, upmixing, and downmixing equations, etc. The user supplies a list of input channels in the order they should appear in the output. Simple expressions for mixing channels are also possible with this formalism.

Given that the channel numbering starts with channel 0, we have:
stereo = "0 1"
left = "0"
mix stereo to mono = "0+1"
downmix 7.1 to stereo = "0+0.7*2+0.5*4 1+0.7*3+0.5*5"

In the above, the equations for mixing are formed by connecting expressions of the form "scalar*channel_id" via a "+" or "-". So "0+0.7*2+0.5*4" means:
output channel#0 should be created by summing input_channel#0, 0.7 times input_channel#2, and 0.5 times input_channel#4.
Rules: The scalar must always appear to the left of the "*". Since the channel list is space delimited, no spaces are allowed within a mixing expression.
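To make the parsing rules concrete, here is a minimal bash sketch (a hypothetical helper, not the actual GSASysCon code) that splits a mixing expression into its gain/channel terms:
Code:
#!/bin/bash
# split an expression like "0+0.7*2+0.5*4" into signed terms by putting
# a space in front of each +/- sign, then pick apart each term
expr="0+0.7*2+0.5*4"
for term in $(echo "$expr" | sed -e 's/+/ +/g' -e 's/-/ -/g'); do
   sign="${term:0:1}"; body="$term"; scalar=1
   if [ "$sign" = "+" ] || [ "$sign" = "-" ]; then body="${term:1}"; fi
   case "$body" in
      *\**) scalar="${body%%\**}"; body="${body##*\*}" ;;   # split scalar*channel
   esac
   if [ "$sign" = "-" ]; then scalar="-$scalar"; fi
   echo "input channel $body, gain $scalar"
done
# prints: input channel 0, gain 1
#         input channel 2, gain 0.7
#         input channel 4, gain 0.5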

I can pretty much handle anything this way! Nice.
 
After thinking more how this can be a "whole house" streaming solution I realized that I needed to integrate some kind of volume control into the interface.

I turned to experience I gained on a previous project:
Linux USB Preamp project

The idea is to create a new ALSA device that feeds into the loopback that is in turn feeding GSASysCon. Software should send audio to this new device instead of directly to the loopback. I found that something like this in my ALSA config file (~/.asoundrc) worked pretty well:
Code:
pcm.GSASysConInput {
   type   softvol
   slave {
      pcm   "CARD=Loopback,DEV=0"   #send the output of this device to the slave device
   }
   control {
      name   GSASysConVol    #new control name, or name of existing control to override
      card   Loopback        #can use card name or number
   }
   min_dB -40.0    # Minimum dB when slider is at 1% (0% is muted)
   max_dB 10.0     # Maximum dB when slider is at 100%; >0 dB is a boost of the signal
                   # These result in the default control value (80%) having a level of 0dB (unchanged)
   # Resolution: number of levels the control is able to take on, including the 0% and 100% endpoints
   #   NOTE - for muting use a control with resolution=2 
   resolution 101  #resolution=101 ensures that each 1% step will actually result in a volume change
}
To manipulate this new "volume control" I will add some code in GSASysCon that uses amixer to change the control level. This is the same approach that I used in my Linux USB Preamp project. The general UI will be something like this (below), a "bar graph" rendered using text, e.g.:
Code:
VOL #|||||||||||||||||||||||||                          # [55]
The above is supposed to look like a volume control slider, with the setting (in percent) in square brackets at right. When the user presses v or V the volume control mode is entered and the slider graphic is displayed. By pressing +/- or the keys u/U or d/D the volume can be made to increase or decrease and the slider will be re-rendered to reflect the new level. After some preset time the display will revert back to the usual playback client list. I could also implement muting in this way via m/M keys.
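For reference, the amixer calls would be along these lines (note that ALSA only creates the softvol control the first time something opens the GSASysConInput device):
Code:
amixer -c Loopback sset GSASysConVol 55%    # jump directly to 55%
amixer -c Loopback sset GSASysConVol 1%+    # one step up
amixer -c Loopback sset GSASysConVol 1%-    # one step down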

One motivation behind this functionality is so that I (or the user) can use a CD player connected as a source via an SPDIF-->USB dongle, a path that has no volume control. This also avoids the D-to-A and A-to-D conversions that would be needed if the analog output of the CD player were used instead. Any source that lacks a volume control is a good candidate for this control.
 
Reading your previous Linux USB preamp project, I was wondering if you would ever be interested in making a hardware product for GNU/Linux DSP?

I could imagine such a hardware product being achieved more cheaply and easily using this free software & hardware project's computer card spec:
website wiki, lots of info, try the site map: Rhombus-Tech
tons of info in updates and a intro: Earth-friendly EOMA68 Computing Devices | Crowd Supply
mailing list, tons of info: arm-netbook Info Page

The brains are in a standard form factor. So for a gear maker this would mean no MOQ for SoCs, software support, and not designing one's product around an SBC, which change all the time and go obsolete/unsupported quickly.
Instead the gear maker gets to focus on the functions of their gear.

So: DAC, power supply, an Arduino-like chip for lots of different control options, a screen, and designing a box to fit it all in. They can make a simpler PCB, since fewer layers = cheaper PCB, because the expensive CPU/RAM layers are already done in the computer card.

In one of the existing housings (aka devices) being made, the laptop, an Arduino-like chip is already used to do lots of things.

When the SoC of a computer card is EOL etc., the maker can still make the same DSP product and just provide a different, newer computer card with it.

There are options of a more powerful CPU card for fancier filters, or a cheaper, less powerful CPU card for basic DSP at less cost to the user.

Yeah, I'm keen on this EOMA68 project :D. I would fancy pledging money for a ready-made free-software DSP using EOMA68 :)

Sorry, I guess you're just interested in developing the software: music servers, crossovers, equalization etc. Which is great :), I just can't help myself at the excitement of the thought of a free-software DSP. :D
 
Reading your previous Linux USB preamp project, I was wondering if you would ever be interested in making a hardware product for GNU/Linux DSP?

As you have guessed, I am more of a software person. I find all the hardware capabilities that I could ever need in inexpensive single-board computers and the like. I don't think that there is a need for dedicated hardware per se. You can buy a decent DAC for not much money; it just plugs in and does its job. The fun for me is in developing software to do new things when I cannot buy and customize something to my satisfaction.

If you look around this web site you will find there are other people who have developed or are developing projects along the line of what you are (likely) envisioning.
 
Reporting on some progress:

The server-side channel routing/mixing is working. I am experiencing some issues with gstreamer, and am trying to get more info on that, but it is looking like it will more or less work as advertised.

The client-side LADSPA for filtering/crossover/DSP is currently not working. It has been coded up, but I need to find out why the pipeline isn't playing correctly. I plan to test this in more detail tomorrow. I had tested a couple of pipelines that included LADSPA filters a few weeks back, but evidently without sufficiently detailed debug information. At the time it looked like there were no errors and everything was working. Now I can see that the pipeline sets up without errors but then stays in the PAUSED state, so it doesn't produce any audio. Not very useful!

The volume control is working, but I am still deciding how to implement the control functionality itself in GSASysCon. I might start out having only a numerical readout and no "bar graph" to keep it simple.
 
UPDATE:

After a long day of trial and error I seem to have figured out how to get the LADSPA plugins working. This is really great news, because it allows GSASysCon to be a DSP crossover platform that can replace ecasound. It should prove to be an improvement over ecasound in several ways.

I will continue to experiment with gstreamer pipelines and ladspa plugins over the next few days to make sure I have all of this figured out and coded up properly within GSASysCon.
 
AND THE GSTREAMER RABBIT HOLE GETS DEEPER!!!

So, just when I thought I had everything working I did some multi-channel testing. By multichannel I mean more than 2 channels (2=stereo). That's when the wheels started to fall off...

I have been using my computer's onboard 7.1 (i.e. 8 output channels) audio to test gstreamer pipelines outside of GSASysCon. It turns out that with 2-channel stereo you get a "free pass", so to speak: things just work as expected - perhaps stereo is a default case in gstreamer? When I increased the number of channels to 4, or 5, or 8, things would either break or would be mixed down to stereo, with those stereo channels then duplicated into the other output channels. Hmmm... that's not right!

After a lot of Googling and reading of gstreamer source code and documentation, I started learning how channel-masks can be used to place channels correctly. Given the skeletal documentation available for gstreamer it has been a bit of a learning curve, but I now seem to have gotten 8 channels working properly and am able to put the output into the desired channels. I need to take what I learned and update some of the code that I thought was already finished, so it is a bit of one-step-forward-two-steps-back.
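For anyone trying the same thing, the trick boils down to putting an explicit channel-mask into the caps (hedged: 0xc3f should be the standard 7.1 mask of FL FR FC LFE RL RR SL SR, if I have the position bits right):
Code:
# without the channel-mask, channel counts above 2 may get mixed down or
# mis-assigned; with it, all 8 channels land where intended
gst-launch-1.0 audiotestsrc ! audioconvert ! \
   "audio/x-raw,format=S16LE,channels=8,channel-mask=(bitmask)0xc3f" ! \
   alsasink device=hw:0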

At the same time I have been experimenting with LADSPA using my ACDf plugin and thankfully that is working as expected.

Looking Ahead...
I will continue to code, debug, and test everything related to the channel bitmasks. Up to 8 channels seems relatively well defined; however, I have an audio interface with 10 output channels and I wonder what channels 9 and 10 are supposed to be? Gstreamer defines 28 different channel types, so there are still lots of possibilities...

To allow for flexibility and future reconfiguration I will probably make it possible for users to provide channel masks. In this way different types of hardware can be accommodated. I can code up some internal defaults so the user does not need to provide the channel masks when their hardware obeys the "normal" channel assignments.

In any case, this is more or less a bump in the road and I am still confident that all of the new features will be implemented in the near future as time permits me to work on the code.
 
UPDATE:

Still chipping away at the coding tasks here. I have been able to get more things figured out and working in bits and pieces by directly writing and running gstreamer code. I am doing lots of planning for how to implement them in the main program, which builds the gstreamer pipelines on demand. Lots to do yet, but still looking promising.
 
It's alive! I'm doing some listening, streaming to localhost, while I shake out the system.

I have been testing various clock management modes. Controlling the source of the clock for the system, from end to end, is one interesting new capability of the code. This should allow the user to deploy multiple DACs (e.g. two or more stereo DACs) and synchronize playback across them.

I still need to confirm that the LADSPA plugin handling is working as expected.
 
Yesterday I tested out a system under Gstreamer 1.4 (a rather old version!) using my ACDf LADSPA plugin to create a simple crossover system between a small monitor speaker and a mono subwoofer. Once I had it figured out, it all went pretty well. I created a stream of left, right, and summed left+right channels. Using the mixing equation formalism, this is "0 1 0+1". These three channels are then streamed to the client, where I do the DSP filtering. The output is split between two stereo DACs. Everything stays nicely synchronized, without glitches. I am still using NTP to synchronize all the computers on my LAN, so I can use GstSystemClock on each machine.

On the client I came up with a formalism to describe how the input should connect to LADSPA plugins and to output channels. I do this via:
ROUTE=A,B,C,D
where
A=index of an input channel
B=index of an output channel
C=index of a DAC or sink
D=channel mask
Each ROUTE describes how to connect a single channel, like in a patchbay. As an example, in my system I have three input channels (channels 0, 1, and 2) and each DAC has two output channels, e.g. DAC0_channel0, DAC0_channel1, DAC1_channel0, and DAC1_channel1. So the route descriptions are:
ROUTE=0,0,0,0 -->routes input ch 0 to DAC 0, ch 0
ROUTE=1,1,0,1 -->routes input ch 1 to DAC 0, ch 1
ROUTE=2,0,1,0 -->routes input ch 2 to DAC 1, ch 0
ROUTE=2,1,1,1 -->routes input ch 2 to DAC 1, ch 1

I won't get into the channel masks here.

Now let's say we want to apply some LADSPA filters to these routes. All you need to do is add some gstreamer LADSPA elements after each ROUTE statement and the filters will automatically be inlined into the pipeline for that route. To make things readable, you can declare each element on a separate line and they will all be joined with gstreamer's exclamation-point link character ("!") behind the scenes.

Gstreamer automatically collects all the existing LADSPA plugins that have been installed on the operating system and transforms them into the Gstreamer version, giving them a compound name. For instance, my plugin is called ACDf. Under gstreamer the name is "ladspa-acdf-so-acdf". So, for example if I want a 100Hz second order Butterworth high pass filter on the route for input channel 0 to output channel 0 on DAC 0 I write:
ROUTE=0,0,0,0
ladspa-acdf-so-acdf type=22 fp=100 qp=0.707
ROUTE=...


A 4th-order Linkwitz-Riley filter is just two identical 2nd-order Butterworth filters in series. To implement a 100Hz LR4 the description becomes:
ROUTE=0,0,0,0
ladspa-acdf-so-acdf type=22 fp=100 qp=0.707
ladspa-acdf-so-acdf type=22 fp=100 qp=0.707
ROUTE=...
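Behind the scenes those lines are simply joined with "!" into the route's pipeline, so the LR4 above becomes a fragment like:
Code:
... ! ladspa-acdf-so-acdf type=22 fp=100 qp=0.707 ! ladspa-acdf-so-acdf type=22 fp=100 qp=0.707 ! ...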

You can add as many LADSPA filters as you would like, since they are computationally very lightweight.

Since this seems to be working well so far I am thinking about how I can improve the crossover functionality and flexibility.
 
I have been thinking about how to expand on my "ROUTE" nomenclature. What I would really like to do is include filters that could then feed multiple downstream channels. This is often done in loudspeaker crossovers, e.g. the "PRE" block in the following diagram:
[Figure two_way1.jpg: a two-way crossover; the input IN passes through a shared PRE filter block, then splits into WOOFER and TWEETER branches with their own filters]

There is a single channel entering at "IN". It passes thru two filters within PRE and then is split (or teed) into two new channels WOOFER and TWEETER where a couple of other filters are applied.

Under my existing code the ROUTEs can only specify an input channel as their source, so any filters in the "PRE" block would need to be duplicated in all ROUTEs that use that channel as input. It's workable, but a bit cumbersome.

Instead I could borrow an idea from ecasound: the loop. With ecasound's LOOP device you can send output to it and then use it as input for as many chains as you would like. In gstreamer the equivalent is a TEE element. I could implement the very same thing shown in the figure above, like this:
Code:
TEE=0,A
  pre filter 1
  pre filter 2
ROUTE=A,0,0,0
  woofer filter 1
  woofer filter 2
ROUTE=A,1,0,1
  tweeter filter 1
  tweeter filter 2
  tweeter filter 3
ROUTE=END
Let's step through the instructions, above.
TEE=0,A says to use input channel 0 as the source for a tee called "A". So here we are sending input channel 0 to "A". The next two lines describe filters that we want to apply to the audio produced by A.
ROUTE=A,0,0,0 says to take audio from "A" and send it to channel 0 of DAC 0. The next two lines are the filters for this route, for the woofer.
ROUTE=A,1,0,1 says to take audio from "A" and send it to channel 1 of DAC 0. The next three lines are the filters for this route, for the tweeter.
ROUTE=END says that the declaration for this route is completed.

This would essentially duplicate the functionality of ecasound's LOOP, and would allow for a branching structure like what is shown in the figure. It's often used in loudspeaker crossover work, where the PRE filters are system-wide, and the woofer and tweeter filters are specific to that driver. PRE filters are often EQ, and it is useful to apply them globally.
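In gst-launch terms the TEE/ROUTE block above would expand to something roughly like the following sketch (the ladspa-* names are placeholders for real filter elements, and the channel handling is simplified):
Code:
gst-launch-1.0 uridecodebin uri=file:///tmp/test.flac ! audioconvert ! \
   "audio/x-raw,channels=1" ! ladspa-pre-filter-1 ! ladspa-pre-filter-2 ! tee name=A \
   interleave name=i ! audioconvert ! alsasink device=hw:0 \
   A. ! queue ! ladspa-woofer-filter-1 ! ladspa-woofer-filter-2 ! i.sink_0 \
   A. ! queue ! ladspa-tweeter-filter-1 ! ladspa-tweeter-filter-2 ! ladspa-tweeter-filter-3 ! i.sink_1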
 
UPDATE:

I have implemented a new ROUTE formalism that allows the user to tee an audio channel whenever that is desired on the client side. This will provide more flexibility for LADSPA crossover signal routing, as I illustrated with the figure in the last post.

With that in place, I started to do more testing of multichannel streaming. Then I encountered some strange problems that I can't seem to solve. Streaming up to 2 channels of audio is no problem, but when I attempt to stream 3 channels (I also tried 4) things don't work on the client side. After lots of testing, it seems to have something to do with either the RTP depayloading or the receiving of the UDP stream on the client side, and channel assignments.

I can't find any kind of documentation about this online, and multiple posts to the gstreamer-devel forum have gone unanswered. So at this point, after trying to find a way past this problem for a few days, I am about ready to throw in the towel on multichannel streaming. This would be a bit of a disappointment, since I thought I would be able to stream e.g. 5.1/7.1 audio, or mixed-down/up-mixed audio, to clients. Until I can figure out a solution, 2 channels will be the limit. There is still a possibility that I will come across a solution that supports multichannel streaming, but at this point I am more or less done wasting time on it.

For my own uses (and quite possibly for most other people too) this doesn't represent any kind of setback. I only listen to stereo sources, and my primary goals were to be able to implement a DSP crossover using LADSPA under gstreamer, and get better control of the clocking used on both the sender and receiver sides. All of that seems to be working pretty well.

The only functionality that I need to modify is to move the "mixing" capability from the server side to the client side. Currently all you can do on the client side is "split" (e.g. tee) the input. But if you wanted to stream a stereo source and support a system of left speaker, right speaker, and mono subwoofer you need to be able to downmix the input to mono for the sub.
 
yet another UPDATE:

I have been doing more testing on different platforms and OSes. I have at my disposal R-Pi 2 and 3 boards as well as a couple of BayTrail J1900 Linux boxes. These run different versions of Gstreamer. After some checking, it seems that some problems are due to the Gstreamer versions that I am running, and to the fact that I am running the command-line version of Gstreamer, which is not really the best for complicated applications. I SHOULD be able to do everything I need with it, but perhaps the command-line launcher does not receive as much vetting as the code-based API stuff... at least that is my guess.

I have had the best success overall on my J1900 Linux boxes. These are running Ubuntu 16.04, which comes with Gstreamer 1.8.3, and Ubuntu 17.04, which comes with Gstreamer 1.10.4. I can run multiple USB DACs, implement LADSPA plugins, etc.

I have a Pi2 system that was running an old version of Raspbian, and it was not working until I upgraded it to Raspbian Stretch. Now it mostly works, although sometimes it does not run properly when I attempt to use multiple DACs, even though it supposedly also has Gstreamer 1.10.4. It could be a problem with USB, or with the build on the ARM. I just don't know.

Today I tried a couple other boards: Tinker Board running Tinker OS 2.0.4, and a Pi 3 running Ubuntu Mate. I could not get either of these to work properly, but I was not all that patient when trying.

Gstreamer 1.12.4 is currently available via Ubuntu 17.10; however, I have not tried it yet, since I am waiting to upgrade my systems until 18.04 (which will come with Gstreamer 1.14.x) is released in May. With each new version come some bugfixes, and eventually everything should work as advertised.

Currently I don't think that I will release the new code that includes the LADSPA DSP crossover functionality until I have had more time to test it with various hardware and software, and I see things working more reliably, or at least it is clear on which systems I can expect success... kind of frustrating, but unless I want to re-write everything in C++ or Python this is the nature of the beast.

I will post new info here when I have it. In the meantime I will keep testing. I am hoping to try out some new hardware platforms in the next couple of months, so there are lots of testing possibilities ahead.
 
UPDATE: most problems have now been fixed!

By chance I discovered a complete set of detailed gstreamer docs online. As a result I was able to implement some additional buffering and latency options for a few key pipeline elements. This seems to have eliminated a couple of strange and not-very-reproducible problems that were occasionally causing audio glitches. Why complete and detailed documentation is not available on the gstreamer site itself is a bit perplexing...

The good news is that I seem to have most everything working as I had envisioned. I tested the input mixing, multiple-channel streaming (well, at least more than 2 channels!), the LADSPA DSP plugins, etc. on multiple hardware platforms, including the TinkerBoard and a Pi3 running Raspbian Stretch. These are generally running Gstreamer version 1.10.4 or later. I will continue to do more testing and listening, but so far it's looking very positive.

With that in the rear-view mirror, I can move ahead with some ideas that would improve the flexibility of the program. Currently mixing is only done on the server side, and the LADSPA DSP crossover filtering is only done on the client side. It would be beneficial if all capabilities were available on both the server and the client. Maybe you want to apply the DSP filtering on the server and then just stream the audio right to the client's DAC? Or perhaps you want to implement a DSP crossover on the "server" (where the audio source is located) without any streaming. I will figure out how to implement this over the next few days.

Check back here for additional updates and progress reports.
 
Hi, I have been testing using MPD in a "client/server" mode, and found this Japanese site that documents a simple method using ncat: Home * papalius/symphonic-mpd Wiki * GitHub

In MPD you define one output per "client", with the following syntax:

Code:
audio_output {
    type       "pipe"
    name       "PIPE"
    format     "44100:16:2"
    always_on  "yes"
    command    "ncat 192.168.x.x 4444"
}

(not sure the format option does anything)

Then on the client side I run this command (not sure all of it is necessary either):

Code:
/usr/bin/ncat -kl 4444 -e "/usr/bin/aplay -M -t raw -Dplug:default -f cd"

It works. I imagine you could create a command to run ecasound, for example.

What are the advantages of the solution you are developing ?
 
Hi, I have been testing using MPD in a "client/server" mode... What are the advantages of the solution you are developing ?

Well, if your goal is to stream audio to one client, then there are MANY ways to do that. Netcatting the stream is one of the more primitive ones. And yes, you probably could pipe/loop the audio to ecasound on the client. This is exactly what I have done in the past with my own code.

One of the goals of my project is to achieve tight synchronization of all playback sinks, across all clients. Some programs claim to "tightly" sync playback, but by this they mean they can at best get around 1 msec synchronicity, and often it's worse. One application I have in mind has a separate client in each of the left and right loudspeakers of a stereo pair. In this case 1 msec timing differences are clearly audible, and at least 10x better synchronization is necessary.

Since Gstreamer includes mechanisms for clock control that are not available on most other platforms, it makes sense to use it. Gstreamer is freely available, is continually being updated and developed, and comes preinstalled with many flavors of Linux (e.g. Debian, Ubuntu, Raspbian, etc.). At the same time I am folding in the DSP processing that I used to achieve through ecasound. This means that Gstreamer will be in control of the timing from source to sink, something that is not possible with netcat, or when passing the audio off to ecasound.

My code includes a system of configuration and playback control files, including the DSP filtering capabilities. There is also a simple interface to turn systems on and off, and systems can include multiple local and streaming clients. It can be run in an interactive mode, or in a one-shot mode that permits it to be called from script files or used as the backend of another control system (e.g. one with a pretty GUI). This makes it very versatile and powerful. I have multiple DIY loudspeaker systems in my home, and this allows me to switch them on and off using any interface that can connect to the main computer via ssh. I use an Android tablet that I can take around with me. It runs the control interface as well as an MPD control client with which I can adjust volume and playback source, but any mobile interface to any player software would work equally well. I can also adjust the DSP filtering via the same SSH connection, so that I can tune the system from my listening position.

I basically cooked up all the features that I need as a DIY loudspeaker builder and built the system around Gstreamer. It's perfect for my needs and, thanks to the easy-to-use control interface, for others in my household who want to fire up some music. Simple to use, yet extremely capable.
 