Need Linux Audio Player that can play local input

Status: Not open for further replies.
I am looking for a Linux (Ubuntu) music player that can "play" from a local input, e.g. the audio input on the motherboard, AND can send the audio output to an ALSA device by device and subdevice number. I would like to use a GUI player; however, a command-line tool could probably work just as well for my needs. For instance, FFmpeg might work. I'm just wondering what is out there these days (since I last looked about a year ago).

This would be used to accept an analog audio input at my "audio server" computer (e.g. from a tuner or CD player) and pass it on to my audio streaming software, which streams it to loudspeaker systems in my home. I have been using MPD, however, that cannot use a live input. I can do this with the VLC player but it sometimes invokes a terrible quality sample rate converter, so that is out of the running.
 
Not sure what you're up to.

There hasn't been much change on the Linux audio front for quite some time (years), if we take all the current distros (mpd- or squeezelite-based) and all the ARM-related stuff out of the equation.

Meanwhile, I'm not using any GUI-based player on my Ubuntu desktop system. I haven't found a single player that would satisfy my wishes.

I'm using squeezelite and LMS in the background. Unfortunately I'm not aware of a stdin/stream-recording plugin. There used to be a wave plugin reading audio streams straight from the OS.
Never tried it myself though.


You could, e.g., use a script: sox piped into ecasound. But I guess that's nothing new to you.

sox doesn't have the fancy output-routing options of ecasound, but it comes with the better DSP part and reads all kinds of inputs.
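As a rough sketch of the sox-into-ecasound idea (the device names hw:1,0 and hw:Loopback are assumptions; substitute your own capture and loopback devices):

```shell
# Capture from an ALSA input with sox, emit raw signed 16-bit stereo on
# stdout, and hand it to ecasound, which routes it to an ALSA device.
# hw:1,0 and hw:Loopback are placeholders for your capture/output devices.
sox -t alsa hw:1,0 -t raw -b 16 -e signed -r 48000 -c 2 - | \
  ecasound -f:16,2,48000 -i:stdin -o:alsa,hw:Loopback
```

This keeps sox on the input/DSP side and lets ecasound do the output routing, which is the division of labor described above.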
 
The sound quality of OSS is better than ALSA's. OSS has its own player, ossplay. It supports mmap(), which bypasses all buffering and SRC. Please see my post at the OSS forum.

How does the communication with the soundcard differ between OSS and ALSA with mmap enabled (https://git.kernel.org/cgit/linux/kernel/git/tiwai/sound.git/tree/sound/pci/ice1712/ice1724.c#n927)? And how does the additional buffer allocated by the driver (i.e. in r/w mode, when mmap mode is not requested by the player) change the sound quality?

All Envy24-based cards accept only a 32-bit format over DMA (in ALSA the format is S32_LE); they work with 24 MSBs internally.

The RW access mode is hardcoded, e.g. in:

sox https://github.com/uklauer/sox/blob/master/src/alsa.c#L118

mpv https://github.com/mpv-player/mpv/blob/master/audio/out/ao_alsa.c#L681

audacious https://github.com/audacious-media-player/audacious-plugins/blob/master/src/alsa/alsa.cc#L340


Some players have it configurable:

mpd https://github.com/sol/mpd/blob/master/src/output/alsa_output_plugin.c#L410

aplay: aplay/aplay.c in alsa-utils.git (git.alsa-project.org)
 
CharlieLaub,
Maybe this clever solution for getting analog audio (vinyl!) into the LMS/Squeezelite ecosystem gives you some inspiration and can be tweaked toward what you want to achieve...

Forum:
Streaming vinyl wirelessly to squeezebox receivers
Blog:
https://iotsblog.wordpress.com/2016/02/25/wireless-streaming-vinyl-to-squeezebox/

/bart
 
OK, some confusion here on what I am trying to do so I will explain in a little more detail.

Currently I use MPD to play files and internet streams. MPD is running on my "audio server" and I access it from the server (logged in) or from other computers on my network using an MPD client. The output of MPD is pointed to an ALSA loopback on the audio server. The other end of the loopback pipe is the input for my streaming audio control system, which allows me to simultaneously stream to any of N other computers on my network. These computers are small ARM boards located at/in each loudspeaker system. They receive the audio stream and typically process it in memory (e.g. software DSP crossover yadda yadda) before outputting the audio via one or more DACs to amps and then the drivers.

All of this works great.

But I am limited to playing files and internet streams. What if I want to play a CD? Or just connect a tuner and stream that? Or how about my non-existent turntable with granite platter and moon-rock needle? How do I stream that over my system? The answer is that I need the player software (e.g. my MPD or whatever) to be able to use as its input the analog input jack on the back of my audio server. Then it has to be able to output the audio to the ALSA loopback that I am using to connect the player to my streaming audio controller system.

As far as what I have available now, it seems that FFmpeg can do this, and I hope to give it a try soon. I will report back here about it. But I wanted to see what else might be out there that could do the same thing without being a command-line program, although that is certainly not a real problem for me.

As a practical example, let's say I want to take one of my systems to an audio show. There, someone has a turntable and wants to pair up with me - they supply the source and I supply the rest. I'd like to be able to do that. Or your buddy comes over with some tracks on CD that they want to play. Or the wife wants to listen to her favorite radio show. Etc. You get the picture.
 
I am afraid the mpd source code is not ready yet to accept user-defined stream params in alsa input


A simple arecord | aplay pipe will do. Nevertheless, again two clock domains are at work here - that of the capture card (producer) and the kernel timer in the loopback virtual soundcard (consumer).
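Spelled out, the pipe might look like this (the capture side is arecord; hw:1,0 and hw:Loopback,0 are assumed device names, and the 48 kHz / 16-bit / stereo parameters are just an example):

```shell
# Capture 48 kHz / 16-bit stereo raw audio from the input card and feed
# it straight into the ALSA loopback. Device names are assumptions;
# check "arecord -l" and "aplay -l" for the ones on your machine.
arecord -D hw:1,0 -f S16_LE -r 48000 -c 2 -t raw | \
  aplay -D hw:Loopback,0 -f S16_LE -r 48000 -c 2 -t raw
```

Both ends must be told the same format/rate, since raw data carries no header; the clock-domain caveat above still applies.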

Perhaps the Zita-ajbridge could be used somehow if the input chain involves jack. It could be restarted at various samplerates, easily scriptable.

BTW did you check [LAU] First release of zita-njbridge ?
 

Thanks for the info and link to Zita-AJBridge. Seems similar to what I am doing with gstreamer but it's really only going to work with wired networks since it uses multicast and stresses "analog like" low latency. So for my needs it's not suitable, and I don't use JACK.

Thanks for the warning about the different clocks. The capture will be done via onboard ALC892 audio (input). I will give it a try and then see if I experience any audible dropouts, buffer under/overruns, or other undesirable stuff.
 
I went ahead and tried it with FFmpeg and... it works. I was having some buffering problems, and then I recalled that on my machine the default input sampling rate is 96k, so I had to enable resampling in FFmpeg (using the SoX resampler) down to 48 kHz (the rate I stream at), since that is what the streaming software currently expects via the loopback. I also adjusted the gain of the input to 0 dB using alsamixer. After that, everything seems to be working well.

I could avoid the resampling by writing an ALSA plug definition that specifies the sample rate, but at this point I don't think I will bother. There are probably other ways to tell the machine to use 48k, or I could just stream at 96k.

Here is the ffmpeg command string I used:
Code:
ffmpeg -f alsa -ar 96000 -i hw:1,0 -af aresample=resampler=soxr -ar 48000 -f alsa hw:0,0

I may try out some other options to see how it goes.

One thing that is for sure is that my input ADC does not overload gracefully. Otherwise it seems quite usable as an input.
 
Quick update:

After experimenting a bit, I realized that there is no need for resampling at all. FFmpeg tells the onboard audio system what sampling rate to use when capturing audio from the line input. But I do have to specify that rate as part of the command string, which I was not doing before. Now I run the command:
Code:
ffmpeg -f alsa -ar 48000 -i hw:1,0 -f alsa hw:0,0
Samples of the input at 48k (and the 16 bit default depth, but I could change that) are captured and then passed to the ALSA device (snd-aloop, a loopback) where it can be used as the input for streaming audio. No extra resampling is required after the capture anywhere until the DACs spit it out at the other end. Cool. Now I can use my audio server as a preamp!
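For anyone following along, the snd-aloop loopback device used above can be set up like this (a minimal sketch; module options such as the number of substreams are left at defaults):

```shell
# Load the ALSA loopback driver; this creates a virtual card
# usually named "Loopback" with paired playback/capture ends.
sudo modprobe snd-aloop
# Confirm the loopback card is visible to ALSA
aplay -l | grep -i loopback
```

Whatever is played into one end of the loopback (here, by FFmpeg) appears as a capture device on the other end for the streaming software.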

As for xruns, there are a number of them at the beginning and then they stop coming. Even when there is an xrun, they do not produce anything audible, or at least I haven't been able to detect anything. Maybe phofman has some insight into this behavior...

Tip: add -loglevel quiet just after ffmpeg to silence all info and debug output.
 
I had to help someone with some ecasound problems today, and I decided to see whether ecasound could be used like FFmpeg, above. Sure enough, it also works well. The ecasound command string I used was:
Code:
ecasound -B:rtlowlatency -b:512 -f:16,2,48000 -i:alsa,hw:PCH -o:alsa,hw:Loopback
Ecasound makes it easier to control the buffering, so I could reduce latency somewhat compared to FFmpeg.

I think someone mentioned JACK earlier in this thread. For some reason JACK mystifies me and I have never been able to understand how to use it well, so mostly I avoid it. This kind of connection seems exactly what JACK is meant for, so if someone wanted to help me get it working under JACK I would appreciate it so that I could start learning how to use it!
 
And for a different approach: I have been using JACK for years for switching and routing analog inputs (and SPDIF, for that matter). Mpd (or whatever source) goes to JACK, and then to Brutefir. I suppose that your setup (which I don't completely understand) would also be feasible this way.

Edit: Sorry, I hadn't read your last post about JACK. The learning curve is steep at the beginning, but if I can help at some point, I will.
 
Hi all,

As suggested, I think JACK is our friend.

With JACK you get a pro sound server on your Linux box. The learning curve may be steep, but not that much 🙂

Below is some info, I hope it helps:

If you have Pulseaudio, first of all you need to prevent it from using your card:

# List of PA cards:
pactl list short cards
0 alsa_card.pci-0000_02_04.0 module-alsa-card.c
1 alsa_card.usb-miniDSP_miniStreamer-01 module-alsa-card.c

# Prevent PA from using our card:
pactl set-card-profile alsa_card.pci-0000_02_04.0 off

# List of alsa cards
cat /proc/asound/cards
0 [PCH ]: HDA-Intel - HDA Intel PCH
HDA Intel PCH at 0xfbff4000 irq 34
1 [miniStreamer ]: USB-Audio - miniStreamer

# Run JACK over our card
jackd -dalsa -dhw:PCH -r44100 & (or whatever sample rate you need)

At this moment your integrated MB sound card is controlled by JACK. You can connect an analog source to it and play it back directly on the output, which on its own is not very useful ;-)
jack_connect system:capture_1 system:playback_1
jack_connect system:capture_2 system:playback_2

The QjackCtl GUI tool makes things easy. You can also script whatever connections you need.

JACK offers you lots of possibilities, e.g.:

- Ecasound links to JACK easily; you can use filter plugins, etc...

ecasound -q --server -r -b:2048 -f:f32_le,1,44100 -G:jack,ecasound,notransport \
-n:"2xFonsA_4band_dualMono" \
-a:left \
-i jack \
-eli:1970,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0 \
-o jack,system:playback_1 \
-a:right -n:FonsA_fil-plugin \
-i jack \
-eli:1970,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0,1.0,10.0,1.0,0.0 \
-o jack,system:playback_2 &

This provides a high-quality 4-band parametric EQ (flat here).

Now Ecasound can read from any JACK-readable port, i.e. your analog input.
You can also telnet to Ecasound in order to modify the parametric EQ. See ecasound-iam:
> telnet localhost 2868
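As a non-interactive variant of that telnet session, something like this should work against a running "ecasound --server" instance (a sketch from memory: "engine-status" and "quit" are ecasound-iam commands, and 2868 is the default server port; verify against the ecasound-iam manual):

```shell
# Send a couple of ecasound-iam commands to a running "ecasound --server"
# on its default TCP port 2868, using nc(1) instead of interactive telnet.
# Command names and port are assumptions to check against ecasound-iam docs.
printf 'engine-status\nquit\n' | nc localhost 2868
```

The same channel could be scripted to push new EQ parameters (e.g. cop-set) at the running chain.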


- For desktop users, Pulseaudio apps can easily be rerouted to JACK: just stop PA from using your JACK card and use the PA-to-JACK module:

# Install the PA to Jack module:
sudo apt-get install pulseaudio-module-jack
# Load and link the module to Jack:
pactl load-module module-jack-sink channels=2 client_name=pulse_sink connect=False
# Set PA to route any audio (i.e. youtube) to Jack by default:
pacmd set-default-sink jack_out


- Temporary sound interfaces (USB...) can be added to the system; the extra card can then easily be "HQ-resampled" into the main JACK card using the zita-a2j tool (by Fons Adriaensen). Some extra CPU% is needed.

- You can easily route your audio with Jack...

...
...
 
Ooops, I'm sorry, I hadn't read your previous posts.

mpd and gmpc are my friends too, but so is JACK. This is because my active loudspeakers are crossed over by Brutefir. All the gear is connected together through JACK.

Network ports (netjack and RTP via pulseaudio) are also available to send audio to other "loudspeakers" on the LAN.

If you need to play a CD on your "audio server", mplayer is your friend. Just connect mplayer to Jack.

mplayer can also manage a DVB radio receiver plugged into your "audio server".

If you need to play an analog source, just connect it to the line-in on the sound card controlled by JACK.

If you need to plug in an extra sound card, just run the zita-a2j resampler on your JACK "audio server".

(This is explained in my previous post)

BR
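For what it's worth, the zita-a2j step mentioned above is roughly this (a sketch only: the flags are quoted from memory and hw:2 and the client name are assumptions; check zita-a2j --help on your system):

```shell
# Bridge an extra ALSA capture card into the running JACK graph,
# resampling it to JACK's rate. "hw:2" and "extra_card" are assumptions.
zita-a2j -d hw:2 -j extra_card &
# Its capture ports then appear under the chosen client name and can be
# wired up like any other JACK ports, e.g.:
jack_connect extra_card:capture_1 system:playback_1
```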
 

I understand that this is essentially a JACK application. Like I said, I can't seem to understand how to use JACK at an adequate level, and I have found these other ways to route the audio. The ecasound command I listed above actually just invokes JACK without explicitly asking for it, so I'm essentially already using it.
 