PC Based Computer music servers, crossovers, and equalization
#101
diyAudio Member
Join Date: Sep 2018
Hi,
has anyone been able to test it with Daphile? I would appreciate a short comment. Thanks!
#102
diyAudio Member
Join Date: Sep 2020
Quote:
Many thanks for your script, it has helped me a lot! I've been trying your script out on a Raspberry Pi Zero, coupled with a HifiBerry MiniAmp. I'm using an old iMac's speakers, which produce pretty good sound given their size. I'm also powering the whole thing directly from my Windows computer's USB and... it works very well!

I have two questions:

-> Would you allow me to publish your (slightly modified) scripts and HOW-TO for the rpi zero on a GitHub repo? I would of course credit you as you want for the scripts.

Sadly, I ran into a wall with the MiniAmp: I don't know yet how to set the volume. The MiniAmp doesn't have Software Control, so I added it thanks to this post, but of course it doesn't work.

-> With your gstreamer wisdom, do you know how I could set the volume of gstreamer dynamically, let's say with a little rotary controller like this one?
#103
diyAudio Member
Join Date: Mar 2007
Location: Michigan
Quote:

Here is how to create the softvol. This example creates a new control for capture only (See NOTES below):

Code:
pcm.mysoftvol {
    type asym
    # for a capture-only control, the softvol goes on the capture slave
    capture.pcm {
        type softvol
        slave.pcm "hw:CARD=Audio,DEV=0"
        control.name "Gain Capture Volume"
        control.card Audio
        min_dB -50.0
        max_dB 0.0
        resolution 51
    }
}
To set the volume of a control using amixer I use this syntax:

Code:
amixer -D hw:CARD=Audio -- sset Gain XXX

For example:

Code:
charlie@ApolloLake-1:~$ amixer -D hw:CARD=PCH -- sset Master 45
Simple mixer control 'Master',0
  Capabilities: pvolume pvolume-joined pswitch pswitch-joined
  Playback channels: Mono
  Limits: Playback 0 - 64
  Mono: Playback 45 [70%] [-19.00dB] [on]

Any problems or questions, just post again here or PM me for help. Have fun!
#104
diyAudio Member
Join Date: Apr 2005
Location: Pilsen
Another option is running the gstreamer pipeline from python. Below is a simple (crude) example: it starts a pipeline in which the volume element has a specific name, then passes the pipeline as a parameter to a new thread, which modifies the volume by finding the element in the pipeline and changing its volume property. The thread could just as easily wait for a rotary-encoder change and increment/decrement the volume as needed.

Many other things can be done quite easily too (reconfiguring the pipeline, catching messages and events, etc.). I really recommend trying the dynamic gst way: the existing pipeline can be reused with Gst.parse_launch, and individual elements are accessible via pipeline.get_by_name(). Catching gstreamer messages and events is all surprisingly easy to debug, e.g. in PyCharm with breakpoints, watches, and code evaluation.

Code:
from threading import Thread
from time import sleep

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

VOL_ELEM_NAME = 'volume_element'
VOLUME_PROP = 'volume'


def thread_function(pipeline):
    while True:
        global stop_threads
        if stop_threads:
            break
        volume_elem = pipeline.get_by_name(VOL_ELEM_NAME)
        volume = volume_elem.get_property(VOLUME_PROP)
        print("Current volume: %f" % volume)
        volume += 0.1
        volume_elem.set_property(VOLUME_PROP, volume)
        sleep(1)


Gst.init(None)

# build the pipeline
pipeline = Gst.parse_launch(
    "audiotestsrc ! volume name=%s volume=0.0 ! level ! fakesink silent=TRUE"
    % VOL_ELEM_NAME)

# start playing
pipeline.set_state(Gst.State.PLAYING)

stop_threads = False
thread = Thread(target=thread_function, args=(pipeline,))
thread.start()

# wait until EOS or error
bus = pipeline.get_bus()
bus.add_signal_watch()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE,
    Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg:
    t = msg.type
    if t == Gst.MessageType.ERROR:
        err, dbg = msg.parse_error()
        print("ERROR:", msg.src.get_name(), " ", err.message)
        if dbg:
            print("debugging info:", dbg)
    elif t == Gst.MessageType.EOS:
        print("End-Of-Stream reached")
    else:
        # this should not happen. we only asked for ERROR and EOS
        print("ERROR: Unexpected message received.")

# free resources
pipeline.set_state(Gst.State.NULL)
stop_threads = True
thread.join()
#105
diyAudio Member
Join Date: Apr 2005
Location: Pilsen
Just another example: a simple code snippet which tracks mouse clicks inside a video shown from a camera. A probe is added to one of the element pads, filtering upstream events; the event info is passed to the on_event function, which filters NAVIGATION events and passes the mouse-click co-ordinates to the function mouse_clicked, which sends zoom-in and zoom-out HTTP commands to the camera. It took me a while to figure out the structures to get to the event, but once the internals are revealed the actual control of gstreamer is trivial.

Code:
# (USER, PASSWD, CAM_ADDR, EVENT_BIN_NAME, debug() and mouse_clicked()
# are defined elsewhere in the full script)
import pydevd

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst


def on_event(pad, info):
    # this is to enable pycharm breakpoints in thread started from C
    pydevd.settrace(suspend=False, trace_only_current_thread=True)
    event = info.get_event()
    type = event.type
    if type == Gst.EventType.NAVIGATION:
        struct = event.get_structure()
        if struct.get_string('event') == 'mouse-button-press':
            debug(struct)
            mouse_clicked(struct.get_double("pointer_x").value,
                          struct.get_double("pointer_y").value)
    return Gst.PadProbeReturn.OK


Gst.init(None)

# build the pipeline
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://%s:%s@%s:554?channel=0 latency=1 ! rtph264depay "
    "! h264parse ! vaapih264dec low-latency=1 name=%s ! vaapisink fullscreen=1"
    % (USER, PASSWD, CAM_ADDR, EVENT_BIN_NAME)
)

# start playing
pipeline.set_state(Gst.State.PLAYING)

# for some reason no events come from the vaapisink bin (first in the list),
# but the second bin (vaapih264dec) works OK
bin = pipeline.get_by_name(EVENT_BIN_NAME)
# sink = 0, src = 1
pad = bin.pads[0]
pad.add_probe(Gst.PadProbeType.EVENT_UPSTREAM, on_event)

# wait until EOS or error
bus = pipeline.get_bus()
bus.add_signal_watch()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE,
    Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg:
    t = msg.type
    if t == Gst.MessageType.ERROR:
        err, dbg = msg.parse_error()
        print("ERROR:", msg.src.get_name(), " ", err.message)
        if dbg:
            print("debugging info:", dbg)
    elif t == Gst.MessageType.EOS:
        print("End-Of-Stream reached")
    else:
        # this should not happen. we only asked for ERROR and EOS
        print("ERROR: Unexpected message received.")

# free resources
pipeline.set_state(Gst.State.NULL)
#106
diyAudio Member
Join Date: Sep 2020
Thank you very much for your long and quick answers!
Charlie, your solution helped me a lot and allowed me to understand ALSA and softvol a little better. I now have softvol working and will start on the code to control the volume from the rotary controller.

phofman, although I like your solution very much (and I'm a Python dev), since I'm on a Raspberry Pi Zero I'm afraid to go in this direction and find out it's not powerful enough to run all the code at the same time. Currently the CPU is already overloaded just running Gstreamer while I access the Pi via SSH, so I know there is not much headroom. Gstreamer takes about 75-90% CPU when I send it some data.

I'll continue and keep you posted. Regards
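As a sketch of what the rotary-encoder side could look like (not from the thread; the card name, the softvol control name "Gain", and the step size are assumptions taken from Charlie's earlier example), the encoder callback can simply shell out to amixer with a relative step:

```python
# Sketch only: maps rotary-encoder detents to amixer commands for a
# softvol control. CARD and CONTROL are assumptions; adjust to your
# asound.conf. GPIO handling is left out so the command-building
# logic can be tested on any machine.
import subprocess

CARD = "hw:CARD=Audio"   # assumed card name
CONTROL = "Gain"         # assumed softvol control name
STEP = 2                 # raw volume units per encoder detent

def amixer_cmd(direction):
    """Build the amixer command for one encoder step (+1 or -1)."""
    suffix = "+" if direction > 0 else "-"
    return ["amixer", "-D", CARD, "--", "sset", CONTROL,
            "%d%s" % (STEP, suffix)]

def on_encoder_step(direction, run=subprocess.run):
    # In the real script this would be called from a GPIO edge
    # callback on the encoder's two pins (e.g. via RPi.GPIO).
    return run(amixer_cmd(direction))

if __name__ == "__main__":
    print(" ".join(amixer_cmd(+1)))  # -> amixer -D hw:CARD=Audio -- sset Gain 2+
    print(" ".join(amixer_cmd(-1)))  # -> amixer -D hw:CARD=Audio -- sset Gain 2-
```

amixer's `N+`/`N-` relative syntax avoids having to read the current volume first, so each detent is a single short subprocess call.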
#107
diyAudio Member
Join Date: Apr 2005
Location: Pilsen
The volume python code has almost no overhead; the only cost is catching the EOS and error messages, and even that is not compulsory if you can exit the script some other way.

But gstreamer, just like any other audio player, should not take any significant CPU unless it is resampling, which should be avoided. IMO the reason for your CPU load should be investigated and fixed.
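One way to check for resampling (a sketch, not from the thread): while the stream is playing, the kernel exposes the parameters actually negotiated with the driver in /proc/asound/cardX/pcmYp/subZ/hw_params. A small parser can compare the negotiated rate with the source rate; the sample text below imitates that file's format so the logic runs without hardware:

```python
# Sketch: detect resampling by comparing the source rate with the rate
# actually negotiated by the driver. While playback runs, the negotiated
# parameters appear in /proc/asound/card*/pcm*p/sub*/hw_params; the
# sample string below mimics that file's format.
SAMPLE_HW_PARAMS = """\
access: RW_INTERLEAVED
format: S32_LE
subformat: STD
channels: 2
rate: 48000 (48000/1)
period_size: 1024
buffer_size: 4096
"""

def negotiated_rate(hw_params_text):
    """Extract the sample rate from hw_params-style text."""
    for line in hw_params_text.splitlines():
        if line.startswith("rate:"):
            return int(line.split()[1])
    raise ValueError("no rate line found")

def is_resampled(source_rate, hw_params_text):
    return negotiated_rate(hw_params_text) != source_rate

if __name__ == "__main__":
    print(negotiated_rate(SAMPLE_HW_PARAMS))      # 48000
    print(is_resampled(44100, SAMPLE_HW_PARAMS))  # True: 44.1k source, 48k device
```

If the device rate differs from the source rate, something in the chain (gstreamer or the alsa plug plugin) is resampling, and that is a likely source of the CPU load.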
#108
diyAudio Member
Join Date: May 2016
Thanks Charlie for sharing your script

Last edited by Gordian; 3rd January 2021 at 10:18 AM.
#109
diyAudio Member
Join Date: May 2016
Very happy with the script, but I don't understand why my pi only accepts "plughw" instead of the advised "hw".
I reckon it's something to do with resampling, but I can't understand why. Any help is more than welcome.
#110
diyAudio Member
Join Date: Apr 2005
Location: Pilsen
hw:X is directly the soundcard driver, which supports only a limited set of combinations of channel count, sample size, and sample rate. An unsupported combination is refused with an error. The alsa plug plugin (inserted into the chain by using the plughw:X device name) does the minimum conversions necessary to adapt the requested stream parameters to the parameters accepted by the soundcard driver.
You can list your soundcard's accepted params e.g. by running

Code:
aplay --dump-hw-params -D hw:X /dev/zero

and see which conversions the plug plugin actually inserts by playing a file with the verbose flag:

Code:
aplay -v -D plughw:X your.wav
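For reference (not from the thread): plughw is simply the hw device wrapped in a plug plugin, which can also be written out explicitly in a config file. A minimal sketch, assuming the card name "Audio" from the earlier softvol example:

```
# Hypothetical ~/.asoundrc fragment: an explicit plug wrapper,
# equivalent to opening plughw:CARD=Audio directly.
pcm.myplug {
    type plug
    slave.pcm "hw:CARD=Audio,DEV=0"
}
```

Pointing the player at "myplug" then behaves like "plughw", with plug inserting rate, format, and channel conversions only when the driver refuses the requested parameters.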