The DIY Streaming Audio Thread

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Interesting. Which wifi dongle are you using?

The "new" dongles with MUCH better signal pickup than the ubiquitous WiPi dongles I have been using are the Edimax EW-7811UTC. I got them from Newegg at $13 each.

I had to install a driver for them to work. To do this I had to have both the WiPi dongle (that was working) and the Edimax dongle (that was not) plugged into the Pi. Then I followed the directions in the second post on this page:
https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=102323&p=782878

I am reposting these steps below. You need internet connectivity to download the installer and then the driver. Follow each numbered step:

1. wget https://dl.dropboxusercontent.com/u/80256631/install-wifi.tar.gz
2. gunzip install-wifi.tar.gz
3. tar xvf install-wifi.tar install-wifi

There is now a utility called "install-wifi" in the current directory. Run it like this:
4. ./install-wifi -c

The utility scans the USB bus for Edimax devices (this is why the dongle needs to be plugged in) and reports which driver is needed. In this case, I typed:

5. ./install-wifi 8812au

The driver is installed. Remove the WiPi or whatever you were using to provide internet connectivity. Leave the Edimax dongle plugged in. Then:

6. sudo reboot

I logged in and started up the GUI desktop with startx. In the desktop I clicked on the internet icon at the top right, selected my WiFi network, and then re-entered the password. The dongle then connected.
 
Unfortunately wired connectivity is not an option. I do not own a wired home, and I'm not planning to install ethernet to each and every place where I would like to locate a speaker. Wireless is my only option, and I have more or less committed to that as part of the system.

You don't need to wire the house. Use powerline adapters; some of them, at around $30 per pair, can even handle IGMP sessions.

If you go the WLAN route, IMO you've got three options:

1. Buy a really good router (Cisco/Juniper/Fortinet et al.) and client adapters, so that the chances are pretty high that every part of your network implements IEEE 802.11 reasonably well. Nonetheless, buying $$$ boxes might not be enough to avoid a Charlie Foxtrot [1].
2. Wait until 802.11aa gets widespread adoption. Or, if you are brave enough, start experimenting yourself now with chips that have open-source firmware [2].
3. Resort to multiple unicast connections. Scaling capabilities aside, I'm not sure whether this is a viable option for an active XO or similar near-real-time DSP.

I would really like to use gstreamer to encode to AAC. I tried adapting your Python code yesterday using "faac" as the plugin or module, but that didn't work. After some more digging I will try avenc_aac today. I think I need to pipeline the encoder and then a muxer like mp4 on the server side, and the reverse on the client side... is that correct? Can you point me to an example of that, or help me write some of that code? The Python documentation for gstreamer is only basic, consisting of little more than lists of modules and their functions. Since I don't know Python at all, it's a bit of a learning curve for me at this point.

Try this [3]

Also, why are you using python when this could be done on the command line using gst-launch?

Sure, simple setups like the above can be created with gst-launch. Those scripts are stripped-down forks of mine, which is the main reason. Besides, the music server of my choice is written in Python and uses GStreamer extensively, so any custom changes to a stream and its transport that suit my needs are applied at the music server, e.g. sidestepping the ALSA loopback and its limitations.

[1] Why do some WiFi routers block multicast packets going from wired to wireless? - Super User
[2] http://www.sigcomm.org/sites/default/files/ccr/papers/2014/January/2567561-2567567.pdf
[3] https://github.com/fortaa/tmp/tree/master/streamer-aac
 
Forta:

Unfortunately I can't get faac installed/working. I just spent the ENTIRE day trying to do that, even installing a recent version from source in hopes that it would come along with the "bad" plugins. But I managed to do little more than damage my OS, and after fixing that I had to remove, clean, and then reinstall the Gstreamer package. Now I am up and running again... with version 1.6.0.

Using gst-inspect I found that I have the avenc_aac plugin. I tried to edit the code to substitute that in place of faac. The code would run without errors, but there is no output on the network; the pipeline seems stalled. I tried to figure out this problem on my own, but the documentation is so poor that it's impossible for me. Maybe it's caps related, or I need to link to a specific pad, or maybe I need a queue, or a muxer, or, or, or??? It's pretty disappointing to have such a promising software tool that is so difficult to use.

You seem to be very familiar with Gstreamer somehow. Any ideas on how to get a pipeline working with avenc_aac instead of faac?
 
This is a distro-related problem. Check the support forums for more help [1].

I guess setting the 'compliance' property to -2 should solve the problem. Check the sink pad specs of this plugin with gst-inspect to see if any additional constraints apply. That said, I'm not familiar with this plugin (or your distro), and you are smart enough to solve this problem. Just a bit of patience.

[1] https://bugs.launchpad.net/ubuntu/+source/gst-plugins-bad1.0/+bug/1299376
 

Setting compliance to -2 did not help. You are very complimentary about my abilities. Unfortunately I haven't been able to get AAC working (yet).

I decided to try using the command line pipelines so that I might see some debug info or error messages. For instance, I can replicate what your python script is doing like this:
Code:
gst-launch-1.0 alsasrc ! audio/x-raw,format=S32LE,rate=48000 ! audioconvert ! rtpL24pay  ! udpsink host=192.168.10.112 port=1234
The above creates a single RTP stream to a target host. It works - at least I see the stream data on my network; my wife is asleep so I can't turn on the clients to actually listen to the audio stream. Actually, I may need to debug the client script/pipelines, too...

When I try to string something together that could also do AAC encoding using the avenc_aac encoder, etc. I am using this pipeline:
Code:
gst-launch-1.0 -e alsasrc ! audio/x-raw,format=S32LE,rate=48000 ! audioconvert ! avenc_aac compliance=-2 ! mp4mux faststart=true streamable=true ! rtpmp4apay  ! udpsink host=192.168.10.112 port=1234
The result is this output:
WARNING: erroneous pipeline: could not link mp4mux0 to rtpmp4apay0
I have tried lots of variations on this theme but I haven't been able to figure it out.

You have been a great help so far. I welcome more help! If not, what is a good gstreamer forum in which I can post my questions and get help?

-Charlie
 

Not that I'm unwilling to help, but I have neither Debian nor Ubuntu, so sooner or later this will turn into a prime support service from Bangalore. We can try one more time, but afterwards you should direct yourself to Ubuntu/Debian support or the gstreamer-devel mailing list, OK? It's not my fault that the Debian maintainers ditched the faac library. Alternatively, you could change distros.

I did not use mp4mux in my script. It breaks the pipeline. Issue the following:

Code:
GST_DEBUG=4 gst-launch-1.0 -e alsasrc ! audio/x-raw,format=S32LE,rate=48000 ! audioconvert ! avenc_aac compliance=-2 ! rtpmp4apay  ! udpsink host=192.168.10.112 port=1234 &> log1.txt

Additionally:

Code:
gst-inspect-1.0 avenc_aac > log2.txt

Upload log1.txt and log2.txt here.
 
Well anyway thanks for your help and for posting your script, which will remain in this thread for future reference. I don't want to trouble you with this any longer. Honestly, I have experienced nothing but frustration with Gstreamer. Your python script works, but only for streaming PCM and not all that well IMHO. There were always some small timing jumps/skips/dropouts and the bandwidth of the resulting rtp stream was probably a little high for my clients and the wireless connection, where multicast doesn't work. Those are just the realities of my hardware I am afraid - this might work better over wired. Since I can't seem to make any modifications whatsoever to an existing script or pipeline, or create a new one that works, I can't see the fruit in continuing to bang my head on this.

I already have a VLC streaming solution (AAC encoded multiple unicasted rtp streams) that works well and is fault-tolerant on both the server and client ends. I plan to stick with that for now. It was a useful, albeit frustrating, exercise to work with Gstreamer and perhaps a couple of years down the road when documentation and tutorials improve for version 1.x I will give it a try again. For now I have other problems to solve and things to do.
 

Whatever suits you, Charlie. Hopefully you will find the right solution.
 
After my experience with Gstreamer I thought I would try some new things using VLC as the "streaming engine". Yesterday I figured out how to get VLC to stream 16-bit, 48kHz PCM data and have been doing some listening trials since then. The sound is good - output from each Pi is via the spdif (toslink) port of the UCA202 at 48kHz to a miniDSP. The only resampling in the entire chain is done by the MPD player when it outputs audio to ALSA. No transcoding either - just a change from float to S16 for streaming. The wireless network handles the higher-bandwidth stream remarkably well, and I have a very good connection with the new Edimax dongles even though they are operating at 2.4GHz. The streaming is done (out of necessity) as multiple unicast RTP streams, with the SDP info made available by HTTP instead of RTSP. Buffering does not need to be high on the client side - 200 msec works well.
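As a sketch of what such a server-side invocation might look like: the host, port, device, and SDP path below are placeholders (the command is only printed here, not executed), and `sdp=http://...` is VLC's stream-output option for serving the stream's SDP description over HTTP, which is what each client would open:

```shell
# Hypothetical per-client server command (all values are examples).
CLIENT=192.168.10.112 ; PORT=1234
CMD="cvlc alsa://hw:Loopback,1,0 --sout '#rtp{dst=${CLIENT},port=${PORT},sdp=http://:8080/client1.sdp}'"
# Print the command rather than run it, since the placeholders must be
# replaced with your real client address, port, and capture device.
echo "$CMD"
```

One such command would be run per client, each with its own destination address and SDP path.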

This is a very nice development.
 
Hey, Charlie.

I'm very impressed by your determination, nice job :)

I'm also thinking about building a wireless audio streaming solution in my home. I plan to use Raspberry Pi B+ boards as receiving nodes and a Raspberry Pi 2 as the stream sender. But I'm trying to do it with PulseAudio and its RTP implementation. So far I have tested this with one PC and two laptops. PulseAudio only supports multicast for RTP. It works very well over Ethernet but not so well over WiFi, because my router doesn't like multicast. But I think there is a good solution. I created a separate WiFi network like this:
[my wifi router] -- ethernet --> [my pc acting as an access point with wifi dongle] -- wifi --> [two laptops]

In this solution my PC was acting as a router with an access point.
From this article:
http://www.wi-fiplanet.com/tutorials/article.php/3433451/Implementing-Wi-Fi-Multicast-Solutions.htm

I understand that if I send multicast from the actual access point, then only one packet is sent through the air and all receivers can receive it at once, so the performance of this solution is not affected by the number of receivers. In my tests it behaved just as it did over Ethernet.
I believe this should also work with a WiFi connection between the router and the PC, but I can't test that now.

I'm curious whether this solution would work for you? You would just need to add another WiFi dongle, one capable of acting as an access point, to your media server PC.
 

I was not able to get multicast working to any sort of acceptable level over a wifi connection. I won't go into why that is from a technical standpoint, and why it will work on a wired LAN but not over wifi, however the very article that you linked to gives several hints about why this is the case.

I suggest that you consider abandoning multicast and move to multiple unicast streams if you want to stream over wifi. I am currently using this approach with two clients - the audio is sent as uncompressed 16-bit PCM at 48kHz using the RTP protocol. My streaming setup is built around VLC (for both client and server) plus some code on the client side to handle the occasional signal dropout or whatever. This approach is robust and is all around better than when I was trying to use compressed audio (e.g. MP3, AAC, etc) to reduce the required wifi bandwidth. It turns out that a modest WiFi system (e.g. 802.11g @ 2.4GHz, inexpensive USB WiFi dongles) has plenty of bandwidth for a pair of unicast clients. Unicast can't be scaled to many, many clients like multicast but how many clients do you really need? Also, there is likely a limit on the sample rate that can be streamed over WiFi, however, in limited testing I was able to stream 96kHz over my WiFi setup (described above) just fine.
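As a sanity check on the bandwidth claim above, the per-client payload rate of the uncompressed stream works out as follows (RTP/UDP/IP overhead excluded):

```shell
# Payload bandwidth of 16-bit stereo PCM at 48 kHz, per unicast client.
bits=16 ; rate=48000 ; channels=2
bps=$((bits * rate * channels))
echo "one client: $bps bit/s"          # 1536000 bit/s, i.e. about 1.5 Mbit/s
echo "two clients: $((2 * bps)) bit/s" # still comfortably within 802.11g capacity
```

At roughly 1.5 Mbit/s per client, a pair of unicast streams is a small fraction of what even 802.11g can deliver, which matches the experience reported above.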
 
Hi Charlie,

I never thought before that I might use VLC to stream my vinyl stuff around the house. I am using an HTTP streamer built around an RPi2 with a Cirrus audio card, VLC, and SoX as a FLAC (48 kHz, 96 kHz, and even 192 kHz...) encoder.

Code:
sox -t alsa -r 96000 hw:0 -t flac - | vlc -vvv --intf dummy - --sout '#standard{access=http,mux=ogg,dst=:8080/}'

Works wonderfully, thanks! ;)
 
The projects discussed in this thread are running on linux where full support for AP192 has not been deprecated and will not be for many years to come. The always up-to-date drivers come with the kernel.
And that's why I said "it's getting old". I'm looking for something to replace the AP192. I read the thread; I've had Ubuntu, Mint, and Cinnamon on various computers over the years. I run an RME UCX in my studio, but I'm not familiar with what's out there now for the home audio market, and I don't need another $2000 sound card just for home audio... conversely, a Sound Blaster would NOT be enough. So again, I ask the simple question: what's everyone running for a sound card?
 

Yeah, you are probably right. The problem is the multicast rate in WLAN: with the current Linux driver stack it is fixed at the lowest available rate (e.g. 1 Mbit/s). Maybe some third-party drivers behave differently... Back to your approach: I was trying with VLC but couldn't get two devices to sync :/ Do I need to take any specific steps besides using RTSP?
Anyway, I also tried gstreamer and it works pretty well. On the PC server I use gst-rtsp-server and run the test-launch app from the examples:
Code:
./examples/test-launch "( pulsesrc ! audioconvert ! rtpL16pay name=pay0 pt=96 )"

on the clients I run:
Code:
gst-launch-1.0 rtspsrc location=rtsp://$host:$port/test ! rtpL16depay ! audioconvert ! alsasink device=sysdefault

It sometimes desynchronizes for a moment, but maybe I need to play with some buffering settings. The other problem is on the Raspberry Pi client: gst-launch actually takes a lot of CPU power, around 40-50%. This is why I need to use the ALSA device=sysdefault directly; when I try with pulsesink the CPU usage is too high (50% for pulseaudio - which is not normal - plus 40% for gst-launch). This is weird, because if I use mplayer to play the same RTSP stream:
Code:
mplayer rtsp://$host:$port/test
even with PulseAudio the CPU usage is below 20% (mplayer 8%, pulseaudio 8%). Does anyone know what the problem may be? Does gstreamer need to be tweaked for the Raspberry Pi?
 
Here are some details about my current vlc based implementation, where vlc is used on the server and the client sides. This is slightly different than what I was using before (no longer use RTSP for instance). For each client I generate (on the server) a unicast stream, the destination of which is a port on the client. I use the RTP protocol to keep the multiple independent unicast streams in sync.

On the server I use mpd to play all files/internet streams, etc. Mpd has a good quality resampling algorithm (note that you have to set this up in the config file, along with the bit rate). With mpd I set the sample rate for the audio that will be used for playback on the clients. In this way vlc does not need to resample - I have identified some problems with vlc resampling (sometimes is uses a very low quality algorithm for some reason) and this approach avoids that completely, in fact you can "turn off" resampling in vlc's configuration if you would like. The output of mpd is sent, via an ALSA loopback, to vlc as a PCM bitstream. VLC then repackages that with RTP and sends it off to the various unicast destinations.
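The mpd side of this could be sketched as a config fragment. This is an assumption-laden example, not the author's actual file: the device names, resampler name, and target format are placeholders, and it presumes the ALSA loopback module is loaded (e.g. via `modprobe snd-aloop`), after which audio written to `hw:Loopback,0,0` appears for capture on `hw:Loopback,1,0`:

```
# /etc/mpd.conf (fragment, hypothetical values)
samplerate_converter  "Best Sinc Interpolator"   # high-quality resampler, if mpd was built with libsamplerate
audio_output {
    type    "alsa"
    name    "loopback out"
    device  "hw:Loopback,0,0"    # playback half of the snd-aloop device
    format  "48000:16:2"         # fixed rate/depth/channels, so vlc never has to resample
}
```

Pinning the output `format` in mpd is what lets vlc treat its input as a ready-to-send PCM bitstream.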

On each client I use VLC to receive the RTP audio stream from the local port. VLC unpacks the RTP and outputs the audio to an ALSA loopback, making it available as an input for other programs running on the client. In the event that the stream is dropped or experiences interference (it is over wireless after all) I call vlc from a shell script. In brief, the script calls vlc from within a loop such that if vlc exits (because the stream ends or any other reason) it is simply restarted again. This means that I can start the clients before the server and the system will still work because the script continuously "tries" to start up the audio stream until it is successful. There is one other condition that is tested and if found true vlc is restarted (I will get back to this later).

Serving up the Stream: previously I was using RTSP to serve up the stream. Before I really understood what was going on I simply found that this worked (more or less). Now I know a little more about what is going on. RTSP includes control commands that can be issued from the client back to the server and uses the RTP protocol for data. This is why it worked for me initially - RTP was being used to synchronize the clients. But it turns out that RTSP actually causes some problems and I experienced frequent issues. This is what drove me to create the shell script in the first place.

What I was able to do with RTSP was make SDP info (descriptors for the stream data) available to the clients. Without these VLC can't play the stream. Now what I do is I make these available to the clients via HTTP, in fact this is what the client "plays". VLC (on each client) is directed to open an HTTP file that contains the SDP info for the actual audio stream. VLC uses the SDP info like "directions" to "find" the audio stream and open it (from the local port). Because the audio is sent as a separate unicast stream to each client, there is a separate HTTP file for each containing the SDP info that is specific to the current stream and the current client.

SDP includes an identifier that is unique to each stream and even if the destination and other parameters remain unchanged if the server stops and restarts the transmission the SDP info will change. For this reason you cannot simply copy the SDP file to the clients, you have to make it available in real time to them. VLC does this well.
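For reference, an SDP description for a stream of this kind might look roughly like the following. All values here are made-up examples (VLC generates and serves the real file, including the session identifiers on the `o=` line, which is why it must be fetched live rather than copied):

```
v=0
o=- 3831986400 3831986400 IN IP4 192.168.10.1
s=Audio stream
c=IN IP4 192.168.10.112
t=0 0
m=audio 1234 RTP/AVP 96
a=rtpmap:96 L16/48000/2
```

The `m=` line names the client's destination port and a dynamic RTP payload type, and the `a=rtpmap` line describes it as 16-bit linear PCM (L16), stereo, at 48 kHz - the "directions" the client needs to open the stream.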

Pitfalls of this approach: As I mentioned, I had to develop a shell script that runs on the clients that restarts VLC when a dropout or other problem occurs. One really frustrating problem occurs when the "DTS delay" is automatically adjusted by the client in response to input not arriving in a timely manner. This is sort of like a dynamic increase of the buffering of the incoming stream. The algorithm behind this behavior only seems to increase the buffer size by a few tens or hundreds of milliseconds each time, however, it can occur from time to time, whenever the stream is not coming through fast enough to prevent the buffer from nearly or completely emptying. Each client adjusts its buffer separately (they are not aware of each other) and often the amount of adjustment is not the same for each. The result is both increasing "delay" between server and client as well as inevitable loss of synchrony between clients. For example, if I leave the system running and "uncorrected" all day long (e.g. 12 hours) a delay of 2 or 3 seconds might build up between the time that I mute the audio on the server and when the audio playback (coming out of the speakers) finally mutes. In contrast, there is only a few hundred milliseconds of delay when the streaming audio is first started up. For this reason I run vlc on the clients with debug info enabled and then my shell script greps the output. Whenever a keyword is found that indicates that the DTS delay has been increased I simply terminate and restart VLC. Luckily this doesn't happen all that frequently and there is only a dropout of 500 msec or so, which I find tolerable and you hardly notice it. I believe that this is due to occasional problems with wireless transmission in my home (e.g. from interference or whatever) as the clients are located a good distance away from the WiFi transmitter. In any case, using this method I have created a system that is robust in terms of faults and startup order, etc.
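The restart logic described above might be sketched roughly like this. Everything here is an assumption: the exact keyword to grep for depends on what your vlc build actually logs when it raises the DTS delay, and `cvlc` stands in for whatever player invocation is really used:

```shell
# Minimal sketch of the client-side watchdog: run the player once, scanning
# its debug output for the buffer-growth message.
run_player_once() {
    # Runs the player command given in "$@" and greps its combined output.
    # Returns 0 if the (assumed) keyword "DTS delay" was seen, meaning the
    # buffer grew and the player should be killed and restarted; returns
    # non-zero if the player exited without printing it.
    "$@" 2>&1 | grep -q "DTS delay"
}

# In production this would sit in an endless restart loop, e.g.:
#   while true; do run_player_once cvlc -vv http://server:8080/client1.sdp; sleep 1; done
```

The outer `while true` loop is what makes startup order irrelevant: the client simply keeps retrying until the server's stream appears.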

From my brief trial of Gstreamer, I think that similar things are going on. Gstreamer tries to keep latency as low as possible, and this inevitably leads to problems/dropouts. Gstreamer seems to correct these automatically because I was able to hear periodic glitches in the audio that seemed to indicate Gstreamer was re-syncing. I eventually turned away from Gstreamer because I was not able to really do much with it - the documentation is poor and I found Gstreamer's syntax to be confusing. It certainly has potential but I was not able to make changes to the canned implementation that I was graciously handed by another forum user, which I found both frustrating and limiting. I've been able to create a system that works pretty well based only on vlc, and that is what I will continue to use for the time being unless I can find a suitable and more convenient alternative.
 

After messing around for some time with VLC and its scary learning curve, I recently stumbled upon this...

Hipster music player: when MPD is just too mainstream - Bytopia.org

... and discovered how simple it is to send a stream from my turntables/CD players to any speaker in the house without overkill solutions like MPD or VLC.

Code:
sox -r 44100 -b16 -c2 -t alsa hw:0,0 -t flac - | sshpass -p 'pi' ssh pi@192.168.1.41 sox -t flac - -t alsa hw:2,1

SSH for TCP (no data lost...) streaming in FLAC from a capturing RPi to another playing RPi seems to be more than enough... :p
 