A bash-script-based streaming audio system client controller for gstreamer

In addition to DAC skew (delta buffer depth) there is also the potential for clock drift (bit sync). I'm not sure how these DACs clock their replay, i.e. whether it's from a local oscillator or PLL'd to the input stream.

Is there a way to determine how much of the CPU usage is related to servicing those two USB DACs?
 
You stated above that you were running two USB DACs on a single Pi in an active crossover system. I was under the impression that two USB outputs on one board could not guarantee synchronicity, since the DACs buffer the incoming data from the USB input. Is that not the case? The question is unrelated to your software, but I was curious to know.
You raise a very good point. Gstreamer can actually deal with this situation by resampling the audio or skewing the playback pointer to each DAC (separately) to account for different consumption rates by the DAC due to slightly different DAC clocks. Forum member phofman has done some experiments with this feature and he can comment in more detail.

This doesn't have anything to do with buffering by the DAC; it's the consumption rate of data that differs, due to clock differences. Each DAC will have a slightly different idea of "time" when it has an onboard crystal. Adaptive mode DACs get their time reference from the USB bus clock, which is not all that accurate but will keep multiple DACs operating "together" perfectly. In either case, gstreamer can adjust the rate at which data is sent to the DAC, or add/remove samples, to account for the difference between the reference clock that gstreamer is using and the DAC clock.

Previously when using ecasound I used NTP to keep the clocks on each endpoint (e.g. the Pi) running at the same rate as the computer sending the audio to them. Gstreamer has now implemented something like NTP as part of its RTP audio streaming functionality, and you can use it to slave the "receiver" pipeline clock to the "sender" pipeline clock without having to run NTP on either machine. In this way, you can have a single clocksource set the rate for pipelines on two different machines. It's not as important for that clock to be perfectly accurate (what is perfectly accurate anyway?) as it is for that single clock to be used as the time reference for all participants in the playback chain so that buffer under/overruns are prevented.
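For a flavor of the transport involved, here is a minimal RTP/UDP sender/receiver pair (just a sketch: the host address, ALSA devices, and the 48kHz L16 stream format are placeholder assumptions, and the cross-machine clock slaving described above needs setup beyond what these bare pipelines show):
Code:
# sender: capture, convert to the big-endian L16 network format, stream via RTP/UDP
gst-launch-1.0 alsasrc device=hw:0 ! audioconvert ! audioresample ! \
  "audio/x-raw,format=S16BE,rate=48000,channels=2" ! rtpL16pay ! \
  udpsink host=192.168.1.50 port=5000

# receiver: depacketize and play
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)L16,channels=(int)2" ! \
  rtpjitterbuffer ! rtpL16depay ! audioconvert ! alsasink device=hw:0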
 
Thanks for your reply. Interesting. I wonder how this can be tested. Small timing differences may not be so obvious to detect - I don't know...

There would definitely be a benefit to having perfect timing and being able to use two SBCs, as opposed to going to multichannel solutions, which are not common/easy to implement, at least with Raspberry Pis. I could see using this setup with two Raspberry Pis, a good digital hat (e.g. Allo DigiOne) and full digital amplifiers, to keep the entire audio chain in the digital domain at a reasonable cost.
 
Perhaps looking at Python would make your project easier in the long run. Complicated bash scripts are difficult to maintain (workarounds due to the limited language, no debugging, etc.). Today I would not use bash for a project of this scale.

Hats off to what you have achieved.

Actually, I think the bash approach might have one interesting advantage: there is a very good chance that I can run nearly the same bash script under the Windows 10 Windows Subsystem for Linux (WSL). There are gstreamer Windows binaries available and I think I can connect to the Windows audio subsystem via gstreamer's wasapi element. The rest of the server pipeline is assigning channels and then streaming them to clients over RTP/UDP. The script also needs to make use of ssh; however, since it is all text based (no GUI) it should be available under WSL. Clients would still need to be Linux boxes, since LADSPA is not practically possible under Windows. LADSPA is not used on the server side, so no problem there.
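As a quick sanity check of the Windows audio leg, something like this should confirm that the Windows GStreamer binaries can reach the audio subsystem (a sketch; assumes the wasapi plugin from gst-plugins-bad is present in the Windows GStreamer install):
Code:
# on native Windows: play a test tone through the default WASAPI output
gst-launch-1.0.exe audiotestsrc wave=sine freq=440 ! audioconvert ! wasapisink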

I think it would be very interesting if GSASysCon could run server-side under Windows 10 WSL, with clients being small Linux machines like the Raspberry Pi, Asus Tinkerboard, etc.
 
Nothing against bash in the Linux subsystem for Windows, but a decently written Python script is directly portable, and can be distributed e.g. using PyInstaller.

Well, OK, I did not know about that possibility. Anyway, I have very little programming experience in Python, and even if I did it would be a significant undertaking to duplicate the bash script in Python...

In the meantime I have been looking into how to run GSASysCon under Windows. It looks like the Windows Subsystem for Linux can run the script, and I can even install Gstreamer under WSL (or so it seems). Most everything is in place, really. The main hurdle seems to be accessing audio from Linux applications running under WSL. ALSA is not implemented AFAIK, and while there are some attempts at porting PulseAudio to WSL, they already seem to be outdated.

What could be done is to use local streaming to send audio from (native) Windows to WSL. I actually have some experience with this - in the past I did it using FFmpeg, which can stream RTP over UDP to a local port. That's exactly what we need here. A Windows audio app would send audio to FFmpeg, or FFmpeg would capture it using DirectShow. FFmpeg would resample the audio to the desired stream sample rate (this is fixed under GSASysCon) and then stream it to a local port, e.g. 127.0.0.1:1234. Then under WSL, GSASysCon would use that RTP/UDP stream as input and would restream it to clients as usual. FFmpeg has a very good resampler built in and it is highly configurable. Local streaming is quite low latency, so I think this would work well and wouldn't require any new code in GSASysCon.
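A sketch of the Windows-side command I have in mind (the DirectShow device name is a placeholder, and 48kHz/16-bit stereo is an assumed stream format):
Code:
# capture a Windows audio device via DirectShow, resample to the fixed
# stream rate, and send L16 RTP to a local port for WSL to pick up
ffmpeg -f dshow -i audio="Stereo Mix (Realtek Audio)" \
       -ar 48000 -ac 2 -acodec pcm_s16be \
       -f rtp rtp://127.0.0.1:1234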

WSL is pretty amazing, and it seems to be implemented well. I'm hopeful about using it to make it possible to use GSASysCon on a Windows box.
 
Hi Charlie,

I am very interested in your project, which I have followed from its first steps (2015) in your threads.
I have to say that I am not able to understand everything you write in depth, only the general aspects and the problems that you face.

I am working on an unconventional speaker system where a coaxial loudspeaker (6.5" woofer + 1" compression tweeter) fires upward from the top panel of the case. Above it, an acoustic lens projects the sound around, while the shaped surface of the top panel works together with it like a horn and waveguide.
A similar spread-head scheme is seen on some omnidirectional Duevel loudspeakers; however, my spread head is divided into 4 (or 5) waveguides that direct the sound not only horizontally but, where necessary and at different angles, upward (for example to avoid sound reflections from the side walls, as happens with a dipole; see Siegfried Linkwitz's site) so that it climbs over the listener and gets lost at the back of the room. My speaker system is not omnidirectional.

I have described the way in which it is unconventional to tell you that it is much more susceptible to temporal misalignment than a traditional system.
When the two speakers are well positioned, the scene is very large, wide and deep. A really three-dimensional and natural sound.
It is difficult to believe, but if you now move one speaker forward by even less than 1 cm (0.39"), it is noticeable that the scene narrows a little and begins to become asymmetrical. Nota bene: the guitar that was in the center is still in the center; to move it, even a movement ten times larger might not be enough.
And indeed the balance control is ineffective on the "ambience", which is instead dominated by the precedence effect.

On the other hand, in the case of a temporal shift of the sound emission between speakers only the precedence effect is involved; there is no SPL gap, while in the case of a physical misalignment of the speakers both play a role. It is difficult to foresee the behavior, but we should expect a wavering scene.
Well, at the start of your project you measured a time shift greater than 50 msec; then with a realignment process it dropped to 1 msec (34.5 cm or 13.6") and then 0.5 msec (17.25 cm or 6.8" = ±8 cm, still too much for me).
But recently you have improved the process to obtain as little as 0.006 msec (2.07 mm! = ±1 mm), which "..is really pretty darn good." (#38)

Then in UsageTips.txt I read "the synchronization could be held to better than 0.02 milliseconds. A delay of 2 milliseconds can result in audible artifacts, but 0.02 msec will not.", and you are probably right, but even if I have not yet tried it, I would prefer 0.006 msec.
Is there a more extreme setting (maxpoll > 5)?

Can I query the GSASysCon system at runtime about the left/right channel latency, or is it unaware of it?

What does it mean: "By setting my NTP server to poll internet timeservers on a long time scale, and setting all local computers to poll my server on a very short timescale.."?
Does it mean that a server must always be active to query internet timeservers? If instead I turn on the server right now, how long does it take for the system to stabilize?
In general I understand that you want both the server and the clients to always be on, but I do not understand why. Can you explain what it is for?

It is stupid, but I am anxious about devices being on even when I am not using them.

And if "The goal is NOT to get accurate time/date on each client, but rather to get their clocks running at the same RATE", and can not use a internal clock of server itself because it is not accurate enough, can I to connect Raspberry to a re-clock card, for e.g. ALLO – Kali i2s Reclocker?
Is it a usable mode?

One more question: I would like to use class-D amps like the IQaudIO Pi-DigiAMP+ (44.1-192kHz, 16-24 bit), HiFiBerry Amp2 (44.1-192kHz, 16-32 bit) or Beocreate 4-channel amplifier (44.1-192kHz, 16-24 bit).
These have no USB inputs but use "..the digital I2S audio signals to reduce CPU load over USB audio solutions"; is that a problem?
Can they be used?

Congratulations on your work on this project, on ACD and ACD-L, and on the software collection hosted on your site.

Thank you,
Flavio Manganelli
 
The synchronization issue arises only when there are multiple playback clients, for instance if you use a Raspberry Pi in each speaker. In contrast, when the system uses only one playback client and that client uses one DAC, synchronicity is not an issue (it will be perfect for all channels). In your case, I suggest you consider using one SBC or computer and a multichannel DAC. I2S-input amplifiers usually have no more than 2 channels. The only exception in your list is the Beocreate amp, which has 4 channels, but I think there are better options in terms of amplification for what it costs.

Using a multichannel USB DAC and analog input amplifiers will make things much easier for you. I have started to prefer pro audio recording interfaces because they have good quality inputs (ADC) and the analog (DAC) output level can often be 2Vrms or more. This also allows you to use the SBC/computer like a hardware crossover. If the audio interface has multiple inputs, you can use the system like a DSP preamp, switching between inputs.

Glad to hear that you are using my tools (like ACD) and streaming audio controller! I'm happy to receive feedback, good or bad.
 
I've been using the new code for a couple of weeks now and I am happy with it, so I will turn my attention to updating the documentation. This will mostly be to cover the new features of local playback, using the code to act as a local software DSP without streaming, etc.

I've only tried the code with my ACD LADSPA plugins. I will be releasing a version that doesn't change anything under the hood, but makes the parameter names easier to use, since under gstreamer you set values by name instead of by position as with ecasound.
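As an illustration of the by-name vs. by-position difference, using the simple amp plugin from the LADSPA SDK (a sketch; GStreamer derives the element and property names from the plugin file and its port names, so verify them with gst-inspect-1.0):
Code:
# gstreamer: the plugin's "Gain" port becomes a named property
gst-launch-1.0 audiotestsrc ! ladspa-amp-so-amp-mono gain=0.5 ! autoaudiosink

# ecasound: the same value is passed positionally
ecasound -t:5 -i:null -el:amp_mono,0.5 -o:null

# list the wrapped element's properties
gst-inspect-1.0 ladspa-amp-so-amp-mono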

Should be just another week or two and then I can release it.

Since the code can now implement DSP in addition to streaming I will probably start a new thread and might rename the program to reflect that. I will post info here if/when that happens.
 
Update:

I still haven't written any documentation! Aaaahrgh. Too lazy. And it's somewhat complicated with the new routing and DSP features.

Instead I have been tracking down the root cause of a strange problem that was occasionally causing the audio to stop or the client to become unresponsive. Seems to have been caused by the way I implemented some new error tracking code. It sometimes prevented the gstreamer pipeline on a client from being killed, which then caused some strange audio problems or even froze the system the next time the client was started up with a new pipeline. I will keep testing that, but it seems to have been fixed now.

In the meantime, I have come up with a new application for the code: a PREAMP. Currently the way that GSASysCon works is that there is a single input, and the system switches outputs (e.g. streaming clients) on and off. This causes various remote speaker systems to turn on and play, or turn off; GSASysCon functions as a streaming controller. In the new "PREAMP mode" each system can have an input as well as an output defined. The major difference is that only one system can be "ON" at a time. This is like the input select knob on an analog preamp. Selecting a new system will turn off the previous system and start up the new one. Since each system can have an input defined, you can configure a different system for each input that you have on your interface (analog input(s), digital input(s), local loopback from the output of a player/streaming player, etc.). The new LADSPA/DSP feature for implementing crossovers remains available for all systems as before.

I think I have figured out how to implement the preamp feature in the code and I will work on that for the time being. I can make use of this in a standalone/demo system that I am building, so I want to get it up and running in the near future.
 
Update on PREAMP mode revisions:

This is coming along well. I've coded up a bunch of things related to the new PREAMP mode, and in the last few days I have been figuring out how to design the volume control and what capabilities I can implement. This feature will use amixer to set ALSA controls on an audio interface's input or output(s). The user can specify multiple ALSA controls to manage, across multiple audio cards, e.g. when using multiple identical stereo DACs for output. The code will adjust the level of these controls up or down together, and the interface for doing so can be shared across multiple GSASysCon instances - when you adjust the volume in one, it is immediately reflected in the others.
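Under the hood this boils down to amixer calls like the following (a sketch only; the card numbers and the 'Master' control name are placeholders that depend on your interface):
Code:
# step two identical DACs up together by the same increment
amixer -c 1 sset 'Master' 2%+
amixer -c 2 sset 'Master' 2%+

# query the current level, e.g. so another instance can display it
amixer -c 1 sget 'Master'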

Another feature I have integrated into the PREAMP mode user interface is a "power control". By pushing a key, GSASysCon calls a user script. The script should interface with GPIO pins or other means of I/O that power up/down amplifiers or other equipment. Since this can be done in many different ways the script lets the user choose how to interface with the power control I/O.
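For example, on a Raspberry Pi such a user script might drive an amplifier relay from a GPIO pin via the sysfs interface (purely an illustration of what such a script could do; the pin number and active level are assumptions):
Code:
#!/bin/bash
# hypothetical power-control script: GPIO 17 switches the amp relay
GPIO=17
if [ ! -d /sys/class/gpio/gpio$GPIO ]; then
    echo $GPIO > /sys/class/gpio/export
    echo out > /sys/class/gpio/gpio$GPIO/direction
fi
case "$1" in
    on)  echo 1 > /sys/class/gpio/gpio$GPIO/value ;;
    off) echo 0 > /sys/class/gpio/gpio$GPIO/value ;;
esac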

I will post more info when I have been able to debug and test the new code.
 
UPDATE OCT 2018
It's been a while since I posted about this project. In between I moved across the country and had some other distractions, and little to no time for audio. But recently things have calmed down and I was able to pick up where I left off, do some debugging, and finish some coding tasks. At this point I seem to have the new "preamp mode" up and running.

The previous revisions all used a "one-to-many" concept, that is one source and many "clients" that could be playing at the same time. Clients can be one or more computers that make up a loudspeaker system (e.g. one per speaker plus subwoofers) and multiple systems can operate at the same time. This originally was a solution to whole house audio, where I wanted to be able to direct audio to one or more of several systems that I had set up around my home, all from a single source which I call the "server".

The preamp mode does more of the conventional "preamp" thing, that is to say switching among inputs into a single output. The loudspeaker system would/could be physically local but could still have remote clients, e.g. for remote or distributed subwoofers, etc. To be able to use digital sources I have added volume control capability. A digital source, e.g. the SPDIF output of a CD player, will not include any volume control. In the GSASysCon code, volume control is implemented using the ALSA program amixer. If there is no native volume control capability for the input, you can add one using ALSA's softvol and then control that. I dropped the idea of simultaneous adjustment of multiple volume controls, since this was prone to problems and typically you just want to control the level of a single input anyway.
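For reference, adding a softvol control for an otherwise volume-less input looks roughly like this in ~/.asoundrc (a sketch; the card/device numbers and the control name are placeholders):
Code:
# wrap the hardware device in a software volume control; the control
# shows up in amixer under the name given below
pcm.spdif_in {
    type softvol
    slave.pcm "hw:0,0"
    control {
        name "SPDIF In Softvol"
        card 0
    }
}
GSASysCon (via amixer) can then adjust "SPDIF In Softvol" like any native control.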

I will be testing the preamp mode on a system here that I still have to set up after the move. I will post updates on my progress here, and when the latest version will be available for download.
 
Still coding... I'm working up a new capability.

I realized that I wanted to combine both the "preamp" mode functionality and the "streaming" mode functionality into one system. This would give you not only input switching like a preamp but also multi-client "whole house audio" in which you send the input to multiple places for playback, and can process the audio streams with DSP on the client side.

The way to do this is to make it possible to have two separate instances of GSASysCon running at the same time: one in preamp mode and the other in streaming mode. But each mode requires a distinct configuration file, and each mode's "systems" are defined differently and must lie within distinct directories. Previously the locations of these files were fixed, so I have begun to modify the code so that the user can (and must) specify where these files are located. This will allow multiple instances to run in parallel.

This parallel mode capability will extend GSASysCon nicely, so it's worth the time to implement it.
 
Charlie, kudos to your great effort. Very useful.

I still think your project has long outgrown the bash scripting hell... :)

Thanks for the kudos! I don't mind the scripting hell... it works for now and is a "function before form" kind of approach.

Someday I would very much like to port it (re-write might be more accurate) to C++. I know the language more or less but could use some help with that effort. There is much more that can be done with Gstreamer when you are not using it from the command line, ya know?

Then if someone could write me a GUI interface for it that could run under Debian/Ubuntu/Raspbian as well as some kind of remote control interface it would be a feature-rich tool for the DIY loudspeaker/DSP community.
 
The code is shaping up nicely, and I am almost done with the re-writes. I have done some testing and debugging, and I like how the volume control is working. For instance, I have been able to fix that ALSA-PulseAudio bug where muting a control (e.g. the Front audio output) also mutes other controls (like the Master audio output), but unmuting "Front" leaves "Master" muted and the sound doesn't come back on. I came up with a solution for this as part of the volume control code.
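The workaround essentially amounts to unmuting the related controls together rather than trusting a single toggle, along these lines (a sketch; the control names depend on the card):
Code:
# unmute both the control the user touched and its parent
amixer -c 0 sset 'Front' unmute
amixer -c 0 sset 'Master' unmute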

I will probably start writing documentation and tutorials this week. I have some left over from a much older version but much is no longer applicable.

Once I am ready I will start a new thread about the new code since this one is getting a bit long and things have really evolved since I started, with local playback, streaming, DSP, multiple audio sinks, and so on being added along the way.
 
ALSAINFO.sh - a script to determine soundcard output capabilities for GStreamer

I thought I would post about a "helper script" that I developed to go along with GSASysCon. I call it "ALSAINFO.sh". Remember, it is GStreamer that is doing all the audio work and GSASysCon is just a fancy way of setting up and automating the GStreamer pipelines that run audio on the local and remote computers. During development and testing using a variety of recording and playback interfaces I found that it was not always easy to figure out how to properly present the audio stream to the output device via ALSA. So I wrote ALSAINFO.sh to help figure out these issues.

In the next couple of posts I will provide some examples where I use ALSAINFO on soundcards to reveal what formatting they can accept, e.g. at what sample rate(s), channel counts, and bit depths. For the remainder of this post I will provide some background info and the motivation for ALSAINFO.sh:

THE PROBLEM:
Most people are familiar with stereo DACs or even 7.1 channel DACs, since these are often found in the consumer market. But there are pro DACs that can use many more channels. For example, one USB soundcard that I have been using is the Presonus 1818VSL. As you might guess from the naming convention, this has 18 possible input channels and 18 possible output channels (8 in and 8 out of these are via ADAT). How do you route audio to one or more particular channels on this unit? That's where ALSAINFO.sh can be very helpful.

THE TOOL:
GStreamer uses the channel numbering convention of the 22.2 format, an insanely complicated, high-end 24-channel surround-sound scheme for high-definition audio/video systems. This means that a GStreamer pipeline can address and route up to 24 separate channels of audio, as long as your device can support it. This number of channels seems high, but remember that pro audio interfaces can have channel counts approaching and even exceeding it.

THE SOLUTION:
The way ALSAINFO.sh works is that it briefly runs GStreamer processes that send output to the device in a fixed audio format while the number of output channels is increased up to 24. When GStreamer throws an error, that combination of format and channel count is not supported. When GStreamer launches successfully, it reports the channel mask (channel assignments) used to achieve that number of channels. This is a critical piece of information, which was not obvious to me and was one of the motivations for writing ALSAINFO.sh. For example, maybe you only want to send 4 channels of audio to your onboard soundcard. These channels are not 0, 1, 2, and 3 as you might expect, but instead are 0, 1, 4, and 5. Also, no channel assignments are used for "pro audio" recording interfaces. Instead all channels are deemed "positionless", meaning they are not assigned to a particular place like "front, right", "front, left", or "center". Instead you have to supply audio to ALL output channels, and only the order in which a channel appears determines to which physical output on the device the audio will be routed.
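The core of the probe is a loop along these lines (a simplified sketch of the idea, not the actual script):
Code:
# try increasing channel counts; -v prints the negotiated caps,
# including the channel-mask, when a combination is accepted
for ch in $(seq 2 2 24); do
    if gst-launch-1.0 -v audiotestsrc num-buffers=10 ! \
         "audio/x-raw,format=S16LE,rate=44100,channels=$ch" ! \
         alsasink device=hw:2,0 > /tmp/probe.log 2>&1; then
        echo "$ch channels: supported (caps in /tmp/probe.log)"
    else
        echo "$ch channels: not supported"
    fi
done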

ALSAINFO is very helpful in figuring out all of these issues. It was in part inspired by the program alsacap (found at volkerschatz.com) for testing soundcard capabilities under ALSA; however, I found that program was of no use for determining GStreamer format and channel assignments, so I hacked together some code that would do the job.

I will be releasing ALSAINFO.sh along with the next version of GSASysCon, once I can get the documentation fleshed out sufficiently.
 
USING ALSAINFO - command line syntax.

ALSAINFO.sh is a bash shell script that runs under Linux and requires GStreamer (version 1.12 or higher is recommended).


ALSAINFO.sh is called using the following command line syntax:

./ALSAINFO.sh alsa_device [sample_rate] [bit_depth_and_format]

where the [...] indicates an optional parameter.

The sample_rate should be provided in Hertz, e.g. CD-quality is 44100 Hertz not 44.1 or 44.1k.

The bit_depth_and_format is one of the following:
S8, U8, S16LE, S16BE, U16LE, U16BE, S24_32LE, S24_32BE, U24_32LE, U24_32BE, S32LE, S32BE, U32LE, U32BE, S24LE, S24BE, U24LE, U24BE, S20LE, S20BE, U20LE, U20BE, S18LE, S18BE, U18LE, U18BE, F32LE, F32BE, F64LE, F64BE

Note that GStreamer can support 24 bit formats like S24LE, S24_32LE, etc.

For more info on GStreamer audio formats, see:
Raw Audio Media Types
 
For the first example I will test an Asus Xonar U7 USB DAC using ALSAINFO.sh:

On my system it's currently card 2, so let's try:

Code:
./ALSAINFO.sh hw:2,0

The output is:
Code:
testing ALSA device: hw:2,0
using the default sample rate of 44100 Hz
using the default audio format of S16LE


This device will accept 2 channels of S16LE audio data
The bitmask for this mode is: 0x0000000000000003
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2

This device will accept 4 channels of S16LE audio data
The bitmask for this mode is: 0x0000000000000033
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20

This device will accept 6 channels of S16LE audio data
The bitmask for this mode is: 0x000000000000003f
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 2 is used in this mode. Its channel mask is: 0x4
   channel 3 is used in this mode. Its channel mask is: 0x8
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20

This device will accept 8 channels of S16LE audio data
The bitmask for this mode is: 0x0000000000000c3f
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 2 is used in this mode. Its channel mask is: 0x4
   channel 3 is used in this mode. Its channel mask is: 0x8
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20
   channel 10 is used in this mode. Its channel mask is: 0x400
   channel 11 is used in this mode. Its channel mask is: 0x800

Since they were not specified by the user, the script uses the default sample rate and audio format, which are 44100 Hz and S16LE (16-bit audio).

We can see that four modes are supported, with 2, 4, 6, and 8 channels. For each mode we get a list of the channels used, their bitmasks, and the overall bitmask for the mode. In GSASysCon the user would specify the channel numbers in decimal format (e.g. for "channel 10" the user tells GSASysCon to use channel "10"). You can ignore the channel masks - they were needed during development of GSASysCon. Note that "channel 10" is the 7th channel in the 8-channel audio mode. It's not so intuitive, but if you want audio coming out of this soundcard you need to use the channel numbers as GStreamer shows them.

Let's probe some other combinations. How about 32-bit, 96kHz audio? Just enter:
Code:
./ALSAINFO.sh hw:2,0 96000 S32LE
The output is:
Code:
testing ALSA device: hw:2,0
using a sample rate of 96000 Hz
using an audio format of S32LE


WARNING: the audio format was changed to: S24LE
This device will accept 2 channels of S24LE audio data
The bitmask for this mode is: 0x0000000000000003
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2

Looks like 32 bits is not supported, but let's try 24-bit audio in the format suggested above:
Code:
./ALSAINFO.sh hw:2,0 96000 S24LE
The output is:
Code:
testing ALSA device: hw:2,0
using a sample rate of 96000 Hz
using an audio format of S24LE


This device will accept 2 channels of S24LE audio data
The bitmask for this mode is: 0x0000000000000003
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2

This device will accept 4 channels of S24LE audio data
The bitmask for this mode is: 0x0000000000000033
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20

This device will accept 6 channels of S24LE audio data
The bitmask for this mode is: 0x000000000000003f
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 2 is used in this mode. Its channel mask is: 0x4
   channel 3 is used in this mode. Its channel mask is: 0x8
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20

This device will accept 8 channels of S24LE audio data
The bitmask for this mode is: 0x0000000000000c3f
   channel 0 is used in this mode. Its channel mask is: 0x1
   channel 1 is used in this mode. Its channel mask is: 0x2
   channel 2 is used in this mode. Its channel mask is: 0x4
   channel 3 is used in this mode. Its channel mask is: 0x8
   channel 4 is used in this mode. Its channel mask is: 0x10
   channel 5 is used in this mode. Its channel mask is: 0x20
   channel 10 is used in this mode. Its channel mask is: 0x400
   channel 11 is used in this mode. Its channel mask is: 0x800

So, under GStreamer this device supports 24 bit, 96kHz audio for 2,4,6, and 8 channels as shown.

Other combinations of sample rate and format can be probed by trying different parameters, as sketched below.
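For example, a quick way to map out a card is to loop over the combinations of interest:
Code:
# probe a few common rate/format combinations on card hw:2,0
for rate in 44100 48000 96000 192000; do
    for fmt in S16LE S24LE S32LE; do
        echo "=== $rate Hz, $fmt ==="
        ./ALSAINFO.sh hw:2,0 $rate $fmt
    done
done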
 