combine two "single channels" of digital audio

Sorry if the title is misleading; my English isn't as good as I'd like.

I have a digital audio source whose output connectors each carry only one channel, NOT two channels as usual. Now I want to combine those two "single lines" into one standard digital audio stream, so I can use a single cable, as usual, into a two-channel/stereo input. As both single streams come from one source and should share the same sync, my idea is to combine the signals with a basic logic OR circuit.

Looking at the digital audio stream protocol, the right and left channel data are placed in the serial digital audio stream in such a way that they never appear at the same time. Of course, with two different (single-channel) sources that could happen, but in my case both channels are just split apart earlier inside my source. The source is the Trinnov Amethyst.

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

The other idea is to use a passive splitter (transformer) in the reverse direction: two input coils and one output coil on a transformer. Each single channel goes into one coil, and the input coils are wired in series.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

What do you think about this, and why would or wouldn't it work?

Thanks for any comment

Robert
 
If what you have now is mono tracks in SPDIF format, then both channels are already in use; they just contain identical bits. If you want to combine two mono SPDIF streams into one, it depends in part on where they come from. If they are coming from two different SPDIF devices, then they will have at least slightly different clock rates that would have to be unified. If they are coming from two mono wav files in one physical device, like a computer, then that is a different situation. So how you would do it depends in part on details you haven't described yet.

However, generally speaking, mixing two SPDIF signals down into one SPDIF signal is something DSP chips can do fairly easily, I would think. A suitable SigmaDSP processor should be able to do it if you need real-time streaming. If you can do the processing off-line, then it could be done in a computer. If you need the two mono channels combined into one stereo SPDIF signal, and you need them continuously musically synchronized in time, that might be hard to guarantee unless you use off-line processing.
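For the off-line route, SoX can do the combining in one command. A minimal sketch, assuming two mono WAV files at the same sample rate (the file names are just placeholders):

    # -M merges the inputs into one multichannel file:
    # input 1 becomes the left channel, input 2 the right
    sox -M left.wav right.wav stereo.wav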
 
Thanks MarkW4 for your very fast reply and your thoughts.

The Trinnov Amethyst is an audio processor; its big selling point is its excellent room correction (RC) capability. It's remarkable how good the sound gets with the Trinnov. Of course, critical listening and experience are necessary to get there.

The main point is that the Trinnov has two outputs, each carrying a single channel. I call that "mono", but it's not; it's left and right. My speakers have digital inputs only, and I can force them to receive either left or right, so I know it's just left or right; no mono here.

As both outputs come from one source (the Trinnov), and the music comes from the built-in network card (in stereo), I strongly think the two channels are synchronized anyway, as both are ultimately "combined" in the ears via the loudspeakers. And before that, both signals get processed (RC) together, so to me that synchronization is logically necessary.

The whole music stream comes either from a HD or "live" via TIDAL and ROON. So by my logic, the channels have to be synchronized. The signals are AES.

The output of that processor goes directly to the active loudspeakers, which have only digital inputs (Meridian audio DSP).

The outputs of the Trinnov are also single channels, but that is no problem, as only one channel is needed for each speaker, and each speaker can be forced to either channel one or two.

That layout - with the standard preamp (Meridian) in FRONT of the Trinnov - is only for playback.

For the "calibration" layout, the Trinnov have to be BEFORE the Meridian preamp, in that way also the M-preamp get corrected, as the Trinnov generate the "calibration signal" and send these then through the M-preamp to the speaker and through the room back to the mikes that are connected to the Trinnov. In that way the whole chain is corrected.

But as the Trinnov has single-channel outputs and the M-preamp has the usual connectors that carry two channels each, I have to put the single channels together into one audio stream so I can use a single cable with a single AES/XLR connector.

Hopefully my explanation is good enough.

Thanks Robert
 
I guess I'm confused about what you need to do. Somewhere you have some digital sources, Roon, HD audio, that come from somewhere, maybe a computer or player, then go into a Meridian digital preamp, then out of the preamp and into the Amethyst room EQ and effects processor, then out of there and into the digital speakers. It seems the signals stay digital through the whole chain until they are finally converted to analog by the DACs inside the speakers.

The Amethyst shouldn't need to compensate or do any corrective processing for the Meridian preamp or the Roon player, since the signal is always digital until the speakers. There is nothing to correct for except the speakers and the room.

At face value, there would not seem to be any reason for needing to combine digital channels.
 
Hello MarkW4,

Sorry for the confusion here.

My audio chain for everyday playback is (from source to the ears):

TIDAL/ROON network >>>

>>> Meridian ID40 network card (streaming) inside the M-preamp ... then further on in the Meridian processor ... the upmix (the DSP presets) to MCh (only the fronts are relevant here) >>>

>>> into the Trinnov Amethyst (which processes the input music with the data stored during calibration) >>>

>>> into the Meridian digital speakers (M-DSP).

xxxxxxxxxxxxxxxxxxxxxxxxxxxx

As I WANT to also correct the M-processor (Meridian does processing that I don't want/like), the M-preamp has to be inside the chain when the Trinnov runs its calibration process. To do that, the Trinnov has to be BEFORE the 861, so the calibration runs through the M-preamp.

xxxxxxxxxxxxxxxxxxxxxxxxxxxx

So the calibration layout has to be:

Trinnov generates the calibration signals (L, R, C) >>>

>>> into the digital input of the M-preamp (standard RCA connectors) ... all Meridian DSP presets are forced off here (but the signal still gets processed) >>>

>>> on into the three front digital speakers >>>

>>> the sound travels through the room and >>>

>>> back to the mikes, which now feed the Trinnov.

Done this way, the whole audio chain runs through the Trinnov, and everything gets corrected as well as possible.

The Trinnov processor then stores the correction data in a preset. That data is then used during playback (with the playback layout).

Up to the speaker, the music stays digital; no D/A or A/D is done. Inside the speaker, the crossover processing and more is done, and after all that, the D/A conversion happens and the analog music is amplified for the individual drivers.

Hopefully that explains it.

Robert
 
The type of DSP processing that the Trinnov does can't fix anything you don't like about the Meridian preamp (except maybe if it has tone controls you don't like, but then you should just turn them off instead of trying to correct them).

Why do I say that? Because Trinnov corrects frequency response and maybe phase response, and it can also add some delay, reverb, and limiter effects. It cannot correct for anything except frequency and phase, but there cannot be anything of that nature to correct for in a digital mixer. The correction Trinnov does is basically only applicable to speakers and room acoustics. I say that in principle, based on how correction DSP for hi-fi systems can work.

If there is something you don't like about the Meridian sound, then you should fix that inside the Meridian, and not expect the Trinnov to be able to help. If I may ask, what is it about the Meridian preamp's sound that you don't like and want to correct?
 
xxxxxxxxxxxxxxxxxxxxxxxxxxxx

As I WANT to also correct the M-processor (Meridian does processing that I don't want/like), the M-preamp has to be inside the chain when the Trinnov runs its calibration process. To do that, the Trinnov has to be BEFORE the 861, so the calibration runs through the M-preamp.

xxxxxxxxxxxxxxxxxxxxxxxxxxxx

You can configure how the inputs map to the outputs using the speaker routing matrix.
 
Any properly designed soundcard can/will slave its clock to an incoming SPDIF signal. Therefore, even two "proper", correctly set up soundcards (USB, PCI(e), ...) with SPDIF inputs will pass synchronous signals to the computer. It is then trivial to merge the channels and play them back through the SPDIF output of either of the two cards. Most cards with SPDIF inputs offer SPDIF outputs too, all timed by the same clock.

Few soundcards have multiple SPDIF inputs, but many inexpensive cards have one. Just use two cards and a small fanless, headless PC running Linux.
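As a minimal sketch of the merging side, assuming both cards are slaved to the incoming AES/SPDIF and therefore really share one clock (the device names hw:1,0 and hw:2,0, and which subframe actually carries the audio, are assumptions):

    # ~/.asoundrc - a capture PCM combining one channel from each card
    # via the ALSA "multi" plugin; this only behaves if the two cards
    # genuinely run off the same clock, as described above
    pcm.merged {
        type multi
        slaves.a.pcm "hw:1,0"      # card receiving the left channel (assumed)
        slaves.a.channels 2
        slaves.b.pcm "hw:2,0"      # card receiving the right channel (assumed)
        slaves.b.channels 2
        bindings.0.slave a
        bindings.0.channel 0       # output channel 0 = channel 0 of card a
        bindings.1.slave b
        bindings.1.channel 1       # output channel 1 = channel 1 of card b
    }

Recording from it (e.g. arecord -D merged -c 2 ...) then yields an ordinary interleaved stereo stream.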
 
Any properly designed soundcard can/will slave its clock to an incoming SPDIF signal. Therefore, even two "proper", correctly set up soundcards (USB, PCI(e), ...) with SPDIF inputs will pass synchronous signals to the computer. It is then trivial to merge the channels and play them back through the SPDIF output of either of the two cards. Most cards with SPDIF inputs offer SPDIF outputs too, all timed by the same clock.

Few soundcards have multiple SPDIF inputs, but many inexpensive cards have one. Just use two cards and a small fanless, headless PC running Linux.

I would argue that this is not as trivial as you suggest. That is because many software programs (ecasound, SoX, etc.) must be launched with the sample rate already known, and if it changes later, the software will not know about it. The only reliable method that I know of is doing it via ALSA, maybe via a LADSPA plugin or other ALSA routing, etc. I would not call that "trivial" by any stretch of the imagination, although it is certainly doable.

If you know of Linux software that will change its internal rate to match a changing source rate on a digital input it is reading from, I would very much like to know about it.
 
OK, the trivial solution is for a single pre-configured fs.

If I were to build this loop, I would write a script that periodically checks the SPDIF status values from the soundcard (either via the CLI utility iecset, or by reading the corresponding PCM controls directly with amixer). Once the frequency of the incoming SPDIF stream has been detected by the soundcard's SPDIF receiver and the corresponding buffers are being read by the driver, I would start the loopback (e.g. sox) at that fs and keep checking. When the fs changes or the stream stops, I would restart or stop the loopback as needed. Some soundcards have a checking thread implemented in the driver which periodically checks the incoming stream and compares it with the fs of the currently opened ALSA stream; if a change is detected, it closes the stream, which kicks out the reading app - see e.g. the AK4114 SPDIF receiver used by the ESI Juli@ (sound/ak4114.c in the tiwai/sound tree on GitHub). Nevertheless, the app would be killed/restarted by the auxiliary periodic check anyway.
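A rough sketch of such a script, assuming the driver reports the incoming rate via iecset (the card index, the sox/ALSA device names, and the exact "Rate:" line format are assumptions and vary between drivers):

    #!/bin/sh
    # Poll the SPDIF receiver and (re)start a sox loopback at the detected fs.
    CARD=1          # assumed index of the card with the SPDIF input
    CUR_FS=""
    LOOP_PID=""
    while true; do
        # iecset prints the channel-status block, including a "Rate: NNNNN Hz" line
        FS=$(iecset -c "$CARD" 2>/dev/null | awk '/[Rr]ate:/ {print $2; exit}')
        if [ "$FS" != "$CUR_FS" ]; then
            [ -n "$LOOP_PID" ] && kill "$LOOP_PID" 2>/dev/null
            LOOP_PID=""
            if [ -n "$FS" ]; then
                # restart the loopback at the newly detected sample rate
                sox -q -t alsa -r "$FS" -c 2 hw:1,0 -t alsa -r "$FS" -c 2 hw:0,0 &
                LOOP_PID=$!
            fi
            CUR_FS="$FS"
        fi
        sleep 1
    done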
 
If resampling is acceptable, in Linux just use the ALSA plug plugin and loop everything back at a fixed fs. But IMO there is no need to resample; a decent soundcard knows what is coming in. The simpler ones only report the SPDIF preamble; the better ones measure the incoming rate by comparing it against a known clock.
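For that fixed-fs variant, a minimal ~/.asoundrc sketch (the slave device name and the rate are assumptions):

    # "plug" converts whatever format/rate arrives to the fixed slave rate
    pcm.fixed48k {
        type plug
        slave {
            pcm "hw:0,0"     # assumed output device
            rate 48000       # everything gets resampled to this fixed fs
        }
    }

The loopback then simply plays to pcm "fixed48k" instead of the raw hw device.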
 
A sound card capable of receiving multiple SPDIF inputs will have something like an SRC4192 on each input. One SPDIF input stream, or the sound card itself, must be selected as the clock to use. All SPDIF inputs not selected as the clock source will be resampled in hardware to that clock.
 
Okay, for each card you have to select a clock: the card's own clock, or the incoming SPDIF as clock. If the incoming clocks aren't in sync with each other, then the samples will not line up sample to sample, or at least that is normally the case. So they drift out of time with each other unless ASRC is performed on at least one. If the card clocks are used as the clock, then both SPDIF streams will be resampled. Also, with more than one sound card, most OSes will only access one at a time, to prevent problems from multiple out-of-sync clock domains (since the sound cards' on-board clocks are never at exactly the same frequency).
 
But the OP is asking about merging two stereo SPDIF streams produced by the same device, where only a single channel is used in each stream. Why should the two streams be asynchronous?

Linux can work with any number of soundcards at the same time. If the cards run synchronously, no adaptive resampling is needed. It has been discussed here many times, e.g. in Charlie's threads.

What Charlie was pointing out is the fact that the Linux sound system is pull-based. The app suggests parameters, and the chain either accepts the values or refuses (or offers working values). Therefore I suggested reading the acceptable values first via some other channel (the IEC status variables) and asking directly for the correct sample rate.
 
Understood, if the OP described it correctly. However, sound cards and OSes tend to be designed to work whether or not the multiple SPDIF streams come from the same clock source. Therefore, they may require ASRC whether it is needed or not. That would be the case with Windows, except for ASIO devices not used as system defaults; however, most application programs aren't designed to work that way. Linux might be different, but it might take some fiddling to get it to work, if it works at all. It's not clear it would be a foolproof system for an unsophisticated user, or that it could operate reliably for long periods without expert intervention.
 
However, sound cards and OSes tend to be designed to work whether or not the multiple SPDIF streams come from the same clock source. Therefore, they may require ASRC whether it is needed or not. That would be the case with Windows, except for ASIO devices not used as system defaults; however, most application programs aren't designed to work that way.

I am afraid I will have to ask you for specific information; honestly, I do not understand exactly what you mean. Why should ASRC be required even though it is not needed?

I try to learn soundcards down to the details of their HW implementation. While there may be a professional soundcard with HW ASRC, I do not know of any. Do you have an example?

Linux might be different, but it might take some fiddling to get it to work, if it works at all. It's not clear it would be a foolproof system for an unsophisticated user, or that it could operate reliably for long periods without expert intervention.

Everything requires knowledge. In Linux you do not have to guess how things work; all the source code is available, and sound code/principles are not extremely complicated (unlike, e.g., video).

If you take two soundcards with SPDIF inputs which report at least the SPDIF preamble (such as all Envy24-based cards and most others), use the simple alsaloop utility ("alsaloop - command-line PCM loopback", see the Ubuntu manpage), plus polling of the incoming rate with another control loop.
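A minimal invocation, with the same caveat that the device names are assumptions:

    # capture from the SPDIF input of card 1, play out through card 0;
    # -t requests the loop latency in microseconds
    alsaloop -C hw:1,0 -P hw:0,0 -t 50000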

If the streams are asynchronous, it is a bit more complicated to avoid a fixed timeshift between the two channels (caused by the delay of adaptively resampling one channel). But jack with zita (Zita-ajbridge), or gstreamer with a properly configured clock source (e.g. the GStreamer-devel thread "multi audio track combine"; Charlie knows the details :) ), should handle that OK.
 