The DIY Streaming Audio Thread

@Forta: Wow, that seems... quite complicated! For example, I am currently able to stream using a concatenated (piped) combination of ffmpeg and vlc, like this:
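(The command itself didn't survive in the thread; as a purely hypothetical illustration, a pipe of that kind - file name, codec settings, and port are my placeholders, not the actual command - might look like this:)
Code:
ffmpeg -i input.flac -c:a aac -b:a 256k -f mpegts pipe:1 | cvlc - --sout '#std{access=http,mux=ts,dst=:8080/stream}'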

I cannot see why this ought to be more complicated. Both approaches require launching one program on each host. The only difference is that I wrote explicitly about some additional aspects that you have taken for granted, which, if not treated properly, lead to suboptimal performance at best (as in the ALSA loopback case).

In the meantime, I am still considering using GStreamer, but am undecided whether it will bring me much more than I have now in terms of performance.

Certainly the GStreamer API gives you more control and flexibility, e.g. better control of clock source selection, which you might find interesting in your xo projects. But it's off-topic. If you are inclined, look at the GStreamer Application Development Manual as well as the Plugin Writer's Guide.
 
Well, I have to admit that you are probably right on these points. I'm still clinging to VLC for streaming... see the next post for an update on that.
 
Update: I have been able to eliminate ffmpeg from the streaming audio toolchain. I've moved to a new player: Audacious.

I have been experimenting with several different ways to connect a player (VLC), a transcoder (ffmpeg), and a streaming server (VLC). These have included combinations of ALSA loopback and piping stdout-to-stdin under Linux. More often than not, when I could get the toolchain working I would discover some drawback or shortcoming, and the future of that approach was looking rather dim.

I decided to try eliminating ffmpeg and just using VLC, both as the player and as the streamer. I tried connecting these both via ALSA (successful, sort of...) and via piping data through the stdout/stdin route (unsuccessful). When connecting with ALSA, the problem I was experiencing had to do with resampling - the "ugly" resampler was always used, and this causes some nasty artifacts, most notably on piano or trumpet. I like to listen to jazz, and I could pick out these artifacts (if I listened for them) from the other side of my house. But the VLC streaming side was working pretty well: I could transcode to AAC and stream using RTP/RTSP over my LAN. How could I continue to use that?

Well, how about another player? My server is running Ubuntu, so I browsed for music player applications that I could install. One that I found works really well is Audacious. It has a simple interface that should be usable on my X-windows/tablet controller. It can play local files and internet streams by direct URL entry, and can maintain playlists (I have not done much yet with those). Most importantly, it has a high-quality resampler. Now all audio is resampled to 48kHz (the ALSA default rate) before entering the ALSA loopback, so the VLC streamer doesn't have to resample when transcoding, resulting in a clean audio stream.
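(For anyone following along, the loopback plumbing is roughly this - a sketch only, since card names and the exact sout options depend on your system; Audacious plays into the loopback's playback end, and vlc captures the other end:)
Code:
# load the ALSA loopback kernel module (device naming varies by system)
sudo modprobe snd-aloop
# Audacious outputs to hw:Loopback,0; vlc captures hw:Loopback,1 and streams:
cvlc alsa://plughw:Loopback,1 --sout '#transcode{acodec=mp4a,ab=256,channels=2,samplerate=48000}:rtp{dst=192.168.10.111,port=1234,sdp=rtsp://@:10111/stream}'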

In the meantime, I've been reading up on GStreamer and may try that out in the near future now that I have figured out how to install it under Raspbian.
 
Hi Charlie,

You may be able to use just vlc. The example below uses vlc from the command line. It's from old notes of mine using vlc 1.03.

The example shows how to set up vlc on the endpoints (hosts) to listen on UDP port 1234 for an incoming UDP stream muxed as MPEG-TS (transport stream), and how to set up vlc on the sending host to stream a file simultaneously to 3 endpoints, with and without transcoding. Note that the sending host can also be an endpoint, since one instance of vlc can stream to another running on the same host :)

Endpoint
vlc udp://@:1234 --demux=ts

Sender (no transcoding)
vlc FILE.EXT --sout=#duplicate{dst=std{access=udp,mux=ts,dst=ENDPOINT-A:1234},dst=std{access=udp,mux=ts,dst=ENDPOINT-B:1234},dst=std{access=udp,mux=ts,dst=ENDPOINT-C:1234}} --sout-keep

Sender (with transcoding)
vlc FILE.EXT --sout=#transcode{acodec=mp4a,ab=256,channels=2,samplerate=44100}:duplicate{dst=std{access=udp,mux=ts,dst=ENDPOINT-A:1234},dst=std{access=udp,mux=ts,dst=ENDPOINT-B:1234},dst=std{access=udp,mux=ts,dst=ENDPOINT-C:1234}} --sout-keep

Regards,
Tim
 
I didn't know that you could do that, that is, stream to a port on another machine, where the stream is then played locally by a player listening on that port. I guess the only disadvantage is that you need to know the (fixed) IP addresses of all the clients where you want the stream to play. I have been doing it the opposite way: an instance of vlc streams audio to a local port on the audio SERVER, and all the clients then connect there to pick up and play the stream. Under that scheme only the server IP address must be fixed (so the clients know where to find the stream), although since this is operating only on my local network I can fix the IP addresses on all the clients if I want to.
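(For comparison, my pull arrangement looks roughly like this - a sketch; the port, device, and SERVER name are illustrative stand-ins, not my exact commands:)
Code:
# server: capture the ALSA loopback, transcode, and serve the stream over HTTP
cvlc alsa://plughw:0,1 --sout '#transcode{acodec=mp4a,ab=256,channels=2}:std{access=http,mux=ts,dst=:8080/stream}'
# each client pulls the stream from the server
cvlc http://SERVER:8080/stream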

I will give this a try tomorrow and see how it works.

Thanks for the suggestions!
 
Hi Charlie,

A couple of things:

(1) Assuming names resolve on your network, just use a hostname instead of an IP address; that way it doesn't matter what IP address is assigned to the endpoint hosts.

(2) Once vlc is started on the endpoint host, it listens for incoming streams indefinitely, starts playback when a stream is received, and then, when the stream ends or is paused, returns to the listening state. If you configure vlc to start at boot then it's totally hands-off after that (see the example after this list).

(3) This form of streaming is called push streaming. It's similar to IP multicast but uses a multi-unicast approach instead, which will work nicely over WiFi.
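As a concrete illustration of point (2), the endpoint listener could be started at boot with, say, a user crontab entry like this (one way among many; adjust to your system):

@reboot cvlc udp://@:1234 --demux=ts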

It's not limited to just audio files. Video files, streaming sources, and HDTV MPEG-TS feeds can also be simultaneously streamed to endpoints :)

Regards,
Tim
 
I can't establish a connection with the clients; that is to say, they won't accept the input. I get warnings about dropped UDP packets sent too late, and the error:
Code:
udp access out warning: send error: Connection refused

Any ideas? Do I need to open the port on the clients? How do I do that?
 
Hi Charlie,

My notes were from the old vlc 1.03 version, so it's possible that a newer vlc version won't work with that syntax.

A firewall on the endpoints might be blocking incoming packets to port 1234...?
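For example, with ufw on an Ubuntu-based endpoint the port could be opened like this (illustrative; use whatever firewall tool your distro ships):

sudo ufw allow 1234/udp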

Regards,
Tim
 
I got it working. I had been trying to use cvlc from the login shell - that didn't work. If I fired up the GUI and then opened a terminal shell window there, the exact same command worked! The GUI VLC could also open the stream.

I'm on a client now. Using cvlc -vvv udp://@:1234 the stream plays. The -vvv makes vlc output lots of debug info so I can see what is going on. When both clients are playing I get a similar experience to the "push" method I had been using previously with the TS muxer: the streams quickly lose sync. I can hear a slight hiccup in the playback and I get this message in the cvlc output:
Code:
[74700508] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 200 ms)
The problem is that the change in PTS delay is different on each machine - on this one 200 msec and on the other 120 msec, so there is now an 80 msec difference between them. This happens from time to time at unpredictable intervals. I think it is in response to a glitch of one kind or another, and I get messages that might point to various problems.

I have also tried the mp4 muxer and the performance seems a little better, but it still cannot prevent the streams from eventually getting out of sync. I would rather the clients just drop the stream and rebuffer it, but evidently that's not how vlc works.

Anyway, I was able to modify your setup to accept the ALSA loopback as before (on the server), so that part is working fine. I will keep playing with this setup and try a few things. Will post again later.
 
UPDATE:

I was still experiencing problems with the UDP streaming (quick loss of sync). I decided to try multiple unicast RTP streams, with the SDP for each fed by RTSP from a separate port on the server (I suppose that could be pushed to the clients as well).

This definitely works better. Between RTP, RTSP, and the client's -vvv output I could see what was going on. The VLC player on the client is using Live555 for demux and RTSP handling. I still get messages about the PTS delay changing, etc., but the big difference is that some resampling of the audio now happens from time to time to bring playback back into relatively close sync. I think this is coming from the RTP protocol. RTSP also sends data from the clients back to the server once per minute, but I am not sure this really does anything related to playback sync. I believe I could serve the SDP info for the RTP streams over HTTP, but I have yet to try that.

Will post more later.
 
There are definitely minor playback adjustments going on under this scheme. I see messages like:
Code:
[008fc638] core audio output warning: playback too late (60158): up-sampling
[008fc638] core audio output debug: resampling stopped (drift: -55 us)
and
Code:
[008fc638] core audio output warning: playback too late (60034): up-sampling
[008fc638] core audio output debug: resampling stopped (drift: 414 us)

This seems to keep things better synchronized. Using a larger cache helps minimize the need for these corrections, too.
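(For reference, the cache I keep mentioning is vlc's network caching value in msec, set on the client like this - the URL here is an illustrative placeholder; older vlc builds used per-protocol options like --rtsp-caching instead:)
Code:
cvlc --network-caching=500 rtsp://SERVER:PORT/stream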
 
After some extended inspection of vlc -vvv log files over the past couple of days I have figured out a number of playback pitfalls and come up with a way around them. I was experiencing some odd behavior from the clients: the system might play fine for a while and then lose sync by hundreds of msec up to a second, or one or both clients would stop playing altogether. It turns out that many kinds of things can go wrong with the stream. These can cause the player to abort, rebuffer with an added delay, etc. By looking at the log file I was able to identify how these events are reported. I then compiled a list of keywords to look for in the log output.

Once I knew how to identify when a problem had occurred, the question was how to "fix" it. The solution I came up with is to simply kill and restart the vlc instance on the client when a problem is detected. After quite a lot of Googling to bootstrap my bash-fu, I was able to write a shell script that does this and creates some additional log files recording the error message and the time at which the process is killed and restarted. I can also parse out lines from the vlc log relating to the temporary resampling messages that I show in the post above.
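(The script is more or less a loop of this shape - a simplified sketch with a hypothetical URL and keyword list, not my exact script:)
Code:
#!/bin/bash
# watch the vlc log for trouble signatures; kill and restart vlc when one appears
URL="rtsp://SERVER:10111/stream"     # illustrative placeholder
LOG=/tmp/vlc-client.log
while true; do
    cvlc -vvv --network-caching=500 "$URL" > "$LOG" 2>&1 &
    PID=$!
    while kill -0 "$PID" 2>/dev/null; do
        # the keyword list is illustrative; mine came from reading the logs
        if grep -Eq 'Connection refused|is called too late' "$LOG"; then
            echo "$(date): error seen, restarting vlc" >> /tmp/vlc-restart.log
            kill "$PID"
            break
        fi
        sleep 5
    done
    wait "$PID" 2>/dev/null
done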

When the clients are run from the shell script, playback remains sync'd. Audio is interrupted for a second when the client vlc is killed and restarted; the output is simply muted for that time. I tested this configuration by running the audio stream overnight and checking the message logs and the playback sync in the morning. Everything worked as planned.

As I reported previously, the larger the vlc client's stream cache, the less often problems are encountered. For example, when I set the cache to 1 second it might take an hour or so before anything triggers a restart. There is likely no "very large" cache value for which problems will NEVER happen, so IMO it's better to have a reasonably sized cache and respond to any problems that come along. Currently I am using 500 msec, but 300 msec works well, too.

I will test out this scheme some more and then report back, with more details on the streaming setup and shell script.
 
Tim, thanks again for suggesting the vlc scheme using duplicated output, one stream per client. It didn't work for me as-is, but it did inspire me to experiment with the concept.

What I came up with is duplicated unicast RTP plus RTSP (the latter providing the SDP info) for each client. The RTP protocol seems to work well for me (timing information on board?). After some testing, it seems that I need separate RTSP communication for each client so that they don't step on each other in terms of playback management. Here is the command string I use on the server to stream audio, taken from the ALSA loopback, to the clients on the LAN (only two currently):
Code:
vlc -vvv --intf dummy --no-media-library alsa://plughw:0,1 --sout '#transcode{vcodec=none,acodec=mp4a,ab=320,channels=2,samplerate=48000}:duplicate{dst=rtp{dst=192.168.10.111,port=1234,sdp=rtsp://@:10111/stream},dst=rtp{dst=192.168.10.112,port=1234,sdp=rtsp://@:10112/stream}}' --sout-keep
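On each client, the corresponding command to open the stream is then simply (SERVER standing in for the server's address; the second client uses port 10112):
Code:
cvlc rtsp://SERVER:10111/stream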
 
Have been doing some experimenting with the previously described streaming setup. Learning some interesting things...

STREAMING 96kHz AUDIO:
The AAC codec is capable of generating up to 96kHz streams. In VLC you can't configure this via the GUI - the maximum listed rate is always 48kHz no matter which codec is chosen. But if you manually modify the samplerate parameter in the command-line version, you can specify any of the valid rates. So I decided to try streaming 96kHz AAC. This caused the Raspberry Pi clients' CPU utilization to climb to 50-60%. It turns out that the USB DAC I had been using has a maximum rate of 48kHz, so VLC on the clients suddenly had to resample the audio. I swapped that DAC for a 96kHz-capable DAC and retried the stream. The CPU utilization fell back to its normal level of a few percent (e.g. 5%), and I could open the stream in the GUI version of VLC and check the codec: sure enough, it showed 96kHz AAC. So it seems that if you have a DAC that can accept 96kHz audio data, you can stream at this higher rate over your LAN without difficulty.

I have also been monitoring the network throughput from the server. There are no other wireless clients on this LAN, so everything is going to the loudspeakers. I am using the 320k setting for AAC encoding for both the 48kHz and 96kHz trials. But the network bandwidth used is about 170kB/s (320kbps * 2 channels * 2 clients / 8 bits per byte = 160kB/s) for 48kHz, and only increases to about 270kB/s for 96kHz. This is an increase of only around 1.5 times, not 2 times, while the sample rate doubles. Perhaps the higher rate is more efficiently encoded by AAC?

USING AN SDP FILE INSTEAD OF RTSP TO LAUNCH THE STREAM:
I see the SDP information for the RTP streams in the -vvv debug info that cvlc spits out on the server. I decided to put this info into a file on each of the clients and then point the client's vlc at the SDP file to play the stream. Sure enough, it worked. Unfortunately, when the vlc instance on the server is killed and restarted, the SDP information for each stream changes, so the clients could not continue playing. When I use RTSP, the clients keep trying to open the RTSP link on the server, which gives them the latest SDP, meaning that if and when I decide to kill and restart the server-side VLC streamer (e.g. to change some parameter) the clients can get the updated SDP info. Using RTSP is a convenient way to make this info available. However, I think that VLC can be directed to write the SDP info to a file when starting up. If the clients could be directed to open that file, then RTSP might no longer be necessary. I might explore this possibility in more detail in the future.
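(If I try it, the change would be on the server's rtp outputs - something like this, untested on my setup, with each client then reading the exported file, e.g. over a network share:)
Code:
vlc --intf dummy alsa://plughw:0,1 --sout '#transcode{acodec=mp4a,ab=320,channels=2,samplerate=48000}:rtp{dst=192.168.10.111,port=1234,sdp=file:///tmp/stream.sdp}' --sout-keep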
 
Another change to the player: Audacious is now out and I have started using MPD, with both a local client (GMPD) and an Android client (DROID MPD Client). I should have done this a long time ago! Rather than having to use an X-windows interface to interact with the player running on the server, I can use a client with an interface actually designed for the device I happen to be operating. This makes a huge difference on my Android tablet, since the X-windows control was sometimes difficult to manage, misinterpreted gestures, etc.

What I find really nice is that MPD has a configuration file in which you can set the desired audio format, enable or disable resampling, and set the resampler quality. Initially I used VLC and later Audacious. VLC would try to resample all outgoing audio, but it typically defaulted to a very poor quality resampling algorithm (changing config settings did not help). It seems that unless the audio is of float type, VLC will not use its built-in high-quality resampler; most of the time it converted the format first to fixed point (e.g. S16_LE) and then tried to resample that. I found a workaround in Audacious, which would do clean resampling to the native ALSA rate of 48kHz; however, I could not find a reliable way to change the rate. At least the resampling was clean. In MPD, on the other hand, I can set the resampling method, sample rate, and bit depth in the configuration file (MPD needs a restart for changes to take effect). If I decide to move from streaming at 48kHz to streaming at 96kHz, I can now do the resampling correctly, and cleanly, via MPD.
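(For the record, the relevant bits of the mpd.conf look roughly like this - a sketch; the device name and resampler string depend on what your MPD build supports:)
Code:
samplerate_converter "Best Sinc Interpolator"   # libsamplerate, highest quality
audio_output {
    type    "alsa"
    name    "loopback feed"
    device  "hw:Loopback,0"     # playback end of the ALSA loopback
    format  "48000:16:2"        # rate:bits:channels, forced before the loopback
}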

This is important because, later in my playback chain, I am (currently) using another instance of VLC to stream the audio. Again, due to the problem with poor quality resampling, I want the audio to already be at the rate I will be streaming at. Thus, having the playback client (MPD) resample correctly was very important at this stage.

Now that I have the playback side working well, I have to start planning how I will test out Gstreamer...
 
Main premises of this solution:

1. RTP clients of comparable performance/load
2. no world clock or similar overhead
3. LAN-restricted.
4. one server; the number of clients is limited by the size of the LAN
5. Linux only (could be ported easily, but that's not my concern)

1. Install dependencies

Python and GStreamer 1.x (1.6 tested) are needed. Details for a Debian-like distro can be found at [1]. Those packages need to be installed on the server and the clients. I also recommend installing the screen utility - it eases managing persistent terminal sessions. I didn't bother to daemonize and the like. Alsa-utils also comes in handy.

2. Check system variables

2.1 Identify LAN broadcast address to be used. On the server issue the following:

$ ip r s t 255|grep '^broad.*255'|grep -v '127.255.255.255'|cut -d ' ' -f 2

One of the displayed addresses is your broadcast address.

2.2 On every client issue the following:

# for i in /proc/sys/net/core/rmem_*; do echo "${i##*/}: $(cat $i)"; done

If the values of both are at least 32-64KB, you should be OK. For higher sample rates or more-than-2-channel streams these values should be increased. E.g., to change the default read buffer size to 96KB, issue the following:

# for i in /proc/sys/net/core/rmem_*; do echo 98304 > $i; done

To make changes persistent: man sysctl.conf
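E.g. the following two lines in /etc/sysctl.conf (or a file under /etc/sysctl.d/) make the 96KB setting survive reboots:

net.core.rmem_default = 98304
net.core.rmem_max = 98304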

2.3 Identify the ALSA loopback capture device on the server. It should be something like "hw:x,1" or "hw:x,1,y".

2.4 For every client, identify the DAC as seen by ALSA. It should be something like "hw:x,0".

2.5 You may stop NTP daemons. There is no need for them to be running.

3. Grab the software [2] and customize:

In file mserver-udp.py, replace DEFAULT_BROADCAST with the address from 2.1 and DEFAULT_ALSASRC with the device from 2.3.

In file mclient-udp.py, replace DEFAULT_ALSASINK with the device from 2.4. You may also change the sample rate that will be mandated by the RTP clients with DEFAULT_RATE. More on this limitation later.

Copy mserver-udp.py to the server and mclient-udp.py to clients.

Ensure client ingress firewall policies permit UDP traffic on DEFAULT_PORT.

4. Launch software

First, a digression. Launching the server-side script might be tricky because of how the ALSA loopback works, and because this software won't do any resampling explicitly (my design decision). The ALSA loopback won't resample either; more importantly, the audio stream parameters are defined by the first application that connects to the loopback device, remain constant, and cannot be changed unless all ends of the loopback substream are closed [3]. E.g., either a single loopback substream will handle only tracks with one designated sample rate, or the player that connects to the ALSA loopback will handle resampling, or ALSA will do it implicitly (if you define the ALSA device in your player config using the plughw plugin), or your player simply returns an error. This also shows that, if you plan to use the ALSA loopback, the explicit RTP channel definition found in mclient-udp.py won't restrict functionality any further.

You may overcome this easily by having separate infrastructures, e.g. one for a 44.1kHz 2-channel stream and another for a 48kHz 2-channel stream. To accomplish this you need two substreams of a single ALSA loop device (instead of one), two instances of mserver-udp.py (with the same broadcast but different UDP ports and ALSA sources), and two instances of mclient-udp.py (with different UDP ports and the same ALSA sinks). Instances of mclient-udp.py cannot run simultaneously on a given host; some form of management is required - not necessarily RTSP - but this is beyond the scope of this lengthy post, so I will stop here.

4.1 Server side

Ensure that the ALSA loopback substream will be configured to the same specs as the RTP clients.

Launch a player first or issue something like this (assuming hw:1,0 is playback end of the ALSA loopback):

$ gst-launch-1.0 filesrc location=path.to.some.flac ! flacparse ! flacdec ! audioconvert ! alsasink device=hw:1,0

path.to.some.flac contains a standard CD-format audio track.

On a new screen terminal session:

$ /path/to/script/mserver-udp.py

Detach the screen session. Once the Python script is running, you may stop/terminate gst-launch-1.0/the player.

4.2 Client side

On a new screen terminal session:

$ /path/to/script/mclient-udp.py

Detach the screen session. Then issue the following (optional; useful if the client is under load):

# chrt -arp 60 $(ps -eo pid,cmd|grep [m]clie | cut -d ' ' -f 1)
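For orientation, the receive pipeline that mclient-udp.py builds is roughly of this shape (illustrative only - the port, caps, and device here are placeholders; the authoritative pipeline is in the script):

$ gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=audio,encoding-name=L24,clock-rate=44100,channels=2" ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0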


[1] https://wiki.ubuntu.com/Novacut/GStreamer1.0#Installing_GStreamer_1.0_packages
[2] https://github.com/fortaa/tmp/archive/master.zip
[3] Matrix:Module-aloop - AlsaProject

Forta! Thanks for spelling all of this out for me and for providing the Python scripts to generate the GStreamer chain. Initially it all seemed very complicated, but after reading it over a few times, and reading some GStreamer documentation and examples, I realized that you were just being very detailed. I finally set this up today and it's working... well, mostly.

The problem I am experiencing can be described as very brief dropouts: 100-200 msec of muting, or a similarly brief fade out and back in, plus some occasional skipping ahead by that duration in the track and other playback irregularities. The CPU utilization on the Pi 2 clients is very low, e.g. 5% or less. On the server I can see one core running between 60%-90% and another core coming into service here and there, but the other 2 cores are doing nothing; my player is resampling the audio, so that could be what I am seeing there. So I don't think that CPU power is too low on either the server or the clients. The system read buffer on the clients was 162k, so I left that value alone. I ran the chrt command on the clients as well.

Other than that, GStreamer seems very promising. Perhaps some additional stream buffering is needed somewhere? I didn't see any included in your server or client pipelines... I had to buffer the stream on the clients by 200 msec or more for reliable playback when I was using VLC.
 
A quick update after some changes...

So... it's pretty clear to me now that broadcasting or multicasting just is not going to work for my (and probably your) wireless network. Evidently these schemes do not incorporate any packet acknowledgement and retransmission (I thought only TCP did that?), so a good percentage of packets can be "lost" or "dropped" in transmission. I read that up to 15% of packets (in one instance) were dropped. In that case I can see why my audio was not sounding very "healthy" and there were glitches...

After extensive searching for examples to follow, I managed to implement a GStreamer sink called "multiudpsink" that creates multiple UDP (unicast) streams in parallel to N destinations. This is similar to what I had been doing with VLC. I got this working, but there were still very minor glitches if you listened carefully.
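(The sending side with multiudpsink looks roughly like this - a sketch; the source device, payloader, and ports are written from memory as an illustration, not copied from the actual script:)
Code:
gst-launch-1.0 alsasrc device=hw:1,1 ! audioconvert ! rtpL24pay ! multiudpsink clients="192.168.10.111:5004,192.168.10.112:5004"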

I just so happened to receive today some new USB WiFi dongles that I ordered a few days ago. I decided to put these into play and see what happened. The form factor of the new dongles is similar to the old ones I have been using for some time - like a very small USB thumb drive with only about 1.5-2 cm sticking out - but the new ones are MUCH better. I have been using the program "wavemon" to monitor the signal strength at the clients. Previously I was getting about -37dB to -40dB signal strength and it worked fine. Now I am getting -19dB on one client and -20dB on the other! That's pretty incredible - about 20dB more, a hundredfold increase in signal power! Even better, the new dongles are dual-band, where the old ones were capable of the 2.4G band only.

Now that I am using the new WiFi dongles, the audio is completely clear and glitch-free. I am streaming uncompressed 16-bit 48kHz audio to two clients via unicast UDP. It sounds great, and synchrony between the clients is solid. I need to spend some more time evaluating and listening, but so far this seems like a very good result indeed.
 
Multiple UDP streams generate more traffic than a single broadcast. If you have problems with glitches, you may want to experiment with the rtpjitterbuffer plugin (to be placed between udpsrc and rtpL24depay).
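E.g. on the receiving side (the latency is in ms; the value, port, and caps are just illustrative starting points):

$ gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=audio,encoding-name=L24,clock-rate=48000,channels=2" ! rtpjitterbuffer latency=200 ! rtpL24depay ! audioconvert ! alsasink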

PS. And I really recommend using Ethernet for streaming. WLAN only for C&C.
 
Unfortunately, wired connectivity is not an option. I do not live in a wired home, and I'm not planning to install Ethernet to each and every place where I would like to locate a speaker. Wireless is my only option, and I have more or less committed to it as part of the system.

I was able to get pcm streaming at 16/48 to two speakers with good quality (see above). I plan to try and put another two clients into the mix today to see how the network loading influences things.

I would really like to use GStreamer to encode to AAC. I tried adapting your Python code yesterday using "faac" as the plugin/module, but that didn't work. After some more digging I will try avenc_aac today. I think I need to pipeline the encoder and then a muxer like mp4 on the server side, and do the reverse on the client side... is that correct? Can you point me to any example of that, or help me write some of that code? The Python documentation for GStreamer is pretty basic - mostly just lists of modules and their functions. Since I don't know Python at all, it's a bit of a learning curve for me at this point.
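(For the record, the shape I have in mind is something like this - wholly untested, and the element and parameter choices are my guesses:)
Code:
# server (guess): encode to AAC, mux into MPEG-TS, send over UDP
gst-launch-1.0 alsasrc device=hw:1,1 ! audioconvert ! avenc_aac ! aacparse ! mpegtsmux ! udpsink host=CLIENT port=1234
# client (guess): receive, demux, decode, play
gst-launch-1.0 udpsrc port=1234 caps="video/mpegts,systemstream=true" ! tsdemux ! aacparse ! avdec_aac ! audioconvert ! alsasink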

Also, why are you using Python when this could be done on the command line using gst-launch?
 