External clocking is *hard*. If you read the RME site you get some insight into the technology they use. The issue is that the clocks in the card run at several MHz, and a clock that fast does not travel "long" distances easily. So instead the clock is normally sent at a fraction of the full speed, e.g. RME Word Clock, or S/PDIF (where the clock is embedded in the data stream).
Recovering that clock signal is where all the fuss about jitter comes from. If you need to regenerate a MHz-speed clock from something running at 44.1 kHz, it is clearly going to carry some timing error, if only because the 0-to-1 transition has a finite slope. There is a ton of material on the web, but it's pretty dry and mostly serves to demonstrate that if this problem were easily solved, the pro audio firms would already be doing it far more cheaply.
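To get a feel for the scale, here is a back-of-envelope calculation in Octave (the numbers are purely illustrative, not from any particular card): a little noise on a slow edge turns directly into timing error.

```
% Timing error from voltage noise on a finite-slew-rate clock edge.
% All figures are invented for illustration.
vnoise = 5e-3;            % 5 mV of noise riding on the clock line
trise  = 10e-9;           % 10 ns rise time over a 3.3 V swing
slew   = 3.3 / trise;     % slew rate at the transition, V/s
tj     = vnoise / slew;   % noise / slew rate = induced timing error
printf("induced jitter ~ %.1f ps\n", tj * 1e12);   % ~15 ps
```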
However, you don't necessarily need that level of accuracy anyway. If you did, then since this is diyaudio.com I would suggest simply buying 24 DAC chips, paralleling them three per channel to give an 8-channel DAC, and running them all from one clock on the circuit board. Feel free to double or quadruple the number of DACs for even better linearity...
To be clear, the issue is like this:
- Imagine three buckets.
- Punch a precisely sized hole in each one which lets the water run out at 44.1 kHz (whatever that means in water terms).
- Fill each bucket using a precisely sized measuring beaker which holds 4096 bytes of water (again, whatever that means).
- Each iteration you add one beakerful to each bucket.
Now it should be obvious that each bucket will leak at a *fractionally* different rate from the others. So eventually one bucket will run empty, or overflow, while the others are still normal.
To make this work you need some way to tweak the flow rate of each bucket, and to monitor the levels very precisely, so that you can keep them all flowing at very nearly the same rate. This takes some clever software, but it's clearly doable.
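For what it's worth, here is a toy Octave sketch of that control loop (all numbers invented for illustration): three "buckets" draining at fractionally different rates, with a small PI-style trim on each drain rate holding the levels at the target.

```
% Toy model of the bucket problem. Three devices drain at slightly
% different true rates; a proportional-integral trim keeps the
% buffer levels steady. Gains and rates are illustrative only.
nominal = 44100;                              % samples/s flowing in
rates   = nominal * (1 + 50e-6*randn(3,1));   % true drain rates, ~50 ppm apart
target  = 4096;  level = target*ones(3,1);    % buffer levels, start on target
trim    = zeros(3,1);                         % per-device rate correction
dt = 0.01;  kp = 3e-5;  ki = 1e-7;            % 10 ms steps, loop gains
for n = 1:20000
  err   = level - target;
  trim  = trim + ki*err;                      % integral: absorb the rate offset
  level = level + nominal*dt - rates.*(1 + trim + kp*err)*dt;
end
printf("levels after %g s: %s\n", 20000*dt, mat2str(round(level'), 6));
```

Run long enough, the levels settle on 4096 while each trim converges on that device's ppm offset.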
Fair enough?
Octave filters
Hello:
Free software gurus say: release early and release often. Well, I can fulfil only the first condition, so here it goes.
The zip file contains three Octave .m scripts, which should be copied to a directory in the Octave path. Also, for join_imp.m to work, the Octave utilities that Denis Sbragion kindly added to the DRC_fir distribution must be in the Octave path.
eqf.m
eqf equalizes and filters an impulse response, performing a minimum-phase inversion of a smoothed version of its magnitude response and then applying a steep linear-phase crossover.
It is intended for designing a reasonably correct multiway loudspeaker from scratch, before using DRC_fir for additional correction. Edit all the variable values between the "#User Data" marks to suit your needs.
It expects an impulse response in 32-bit float PCM format, from a measurement of a raw driver in its cabinet. Take care of your tweeters!
We can safely assume that a driver has minimum-phase behaviour, so a minimum-phase equalization can give us a good result, provided the measurement is anechoic. Trying to equalize interference with this script is a route to disaster, so you have to isolate a portion of the impulse response that ends before the first reflection.
That is the purpose of the GSTInit and GSTFinal variables. Just look at your impulse in a sound editor like Audacity and choose sensible points for the time window extremes (expressed in seconds).
GSLExp is the final length of the filter, expressed as the exponent of a power of two.
GSSmoothWidt is the fraction of an octave over which the response is smoothed before inversion.
CFLowF and CFHighF are the crossover points. Set one to zero for no crossover at that extreme.
The settings with the TW prefix control the transition to no equalization at all: the interval beyond the crossover points over which to make the transition, plus absolute limits, which come in handy for tweeters and woofers. They have no effect at all if a crossover comes before them.
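As a rough illustration of the inversion step only (this is not the actual eqf.m code; the smoothing is omitted, and `imp` just stands for your windowed impulse response), the minimum-phase inverse can be built from the real cepstrum:

```
% Sketch: minimum-phase inversion of a magnitude response via the
% real cepstrum. 'imp' is a windowed impulse response (assumed).
N = 2^13;                          % e.g. GSLExp = 13 -> 8192 taps
H = abs(fft(imp, N));              % magnitude response
H = max(H, 1e-4*max(H));           % floor it so the inverse stays bounded
c = real(ifft(log(1 ./ H)));       % real cepstrum of the inverted magnitude
c(2:N/2)   = 2*c(2:N/2);           % fold: double the positive quefrencies...
c(N/2+2:N) = 0;                    % ...and discard the negative ones
eqfilt = real(ifft(exp(fft(c))));  % minimum-phase inverse filter
```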
join_imp.m
I said before that an anechoic impulse is needed if minimum-phase equalization is intended. That poses a problem for low-frequency measurements. Fortunately, it is customary to join the magnitude responses of a near-field and a windowed far-field measurement to obtain a useful equivalent of a true anechoic magnitude response.
That is what this script does: it takes two impulse responses, filters them with a linear-phase crossover that mixes them over a sensible interval, scales them so they join at the correct level, time-aligns them, and saves the mix, so you can use it like any other impulse with eqf.m.
The settings are similar; just note that you only have to window the far-field impulse, and you should enter the driver radius so the program can find the upper limit of validity of the near-field response, following the rules stated by Keele in http://www.dbkeele.com/PDF/Keele (1974-04%20AES%20Published)%20-%20Nearfield%20Paper.pdf
From the time window settings, a lower limit for the validity of the far-field magnitude response is derived, and the crossover frequency is set at the logarithmic centre of that interval.
You can set the attenuation of the filters at those limits with CFAtt, in dB. That controls how much the impulses are mixed over that frequency interval.
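Again as a rough sketch only (not the actual join_imp.m; `nf_imp`, `ff_imp` and the 300 Hz splice point are placeholders), the splice in magnitude terms looks like this:

```
% Sketch: splice near-field and far-field magnitude responses.
% Level-match at the splice frequency, then crossfade around it.
Nfft = 2^14;  fs = 44100;                 % assumed sample rate
f  = (0:Nfft-1)' * fs / Nfft;             % frequency axis, Hz
NF = abs(fft(nf_imp, Nfft));              % near-field magnitude
FF = abs(fft(ff_imp, Nfft));              % far-field magnitude
fc = 300;                                 % splice frequency (log centre)
[dummy, k] = min(abs(f - fc));            % bin nearest the splice point
NF = NF * (FF(k) / NF(k));                % scale NF to meet FF at the joint
w  = 1 ./ (1 + (f/fc).^8);                % smooth low-pass weight for NF
M  = w.*NF + (1 - w).*FF;                 % spliced magnitude response
```

The real script works on the impulses themselves so the result can be fed straight to eqf.m, but the level-matching and crossfade idea is the same.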
And that's all, folks. I hope you find it useful, and I welcome your comments and criticism.
Ed, consider this a draft for the DRC wiki. I simply didn't want to delay it any longer, as work pressure has made time scarce lately.
Cheers,
Roberto
Re: word clocks
Fair enough, but look at four RAID drives in a SATA configuration: four 'buckets' of data, and the data all has to get to the controller at the same time. Yes, the analogy breaks down a bit, but with CPU designers moving to serial interfaces more and more, aligning the bits from multiple streams is essential.
Now, hard drives can pause where an audio stream can't, but we're talking about vastly different data rates here.
But this is just engineering; any digital engineer who works with serial devices could put together an external clocking circuit, I would think. (BS, EE, Cornell, 1982.)
I think the big reason these cards are so expensive has nothing to do with external clocks; it's the quantity of production.
For board level devices, how many you make has a much larger impact on final cost than any amount of engineering.
I don't think the problem is that external clocks are THAT hard to do; I think it's that you stray into a much lower-demand market segment.
jgwinner:
Your analogy is flawed. For certain parts of a DAC there is a requirement for clocks to be accurate to within a few nanoseconds of each other.
But there is no requirement for the drives in a RAID array to be synchronised to any great extent at all (other than that they all finish). Notice that you *can* RAID several drives with completely different specifications (in that it is feasible, not that it's a good idea). In an ATA-based RAID you may have two devices on the same channel, and hence by definition they can't be synced, because you can't address both drives at the same time!
External clocking *is* hard. That is not to say, though, that the problem you pose needs to be solved the hard way. There are a few ways you could use several sound devices, without hard synchronisation, to make an adequate solution. However, this is idle speculation unless you are going to step up and write the sync software/hardware?
There is some Linux code which uses resampling to keep several separate devices in sync. Probably not good enough across several main drivers, but probably good enough for a sub/main split, or for the rear speakers.
RR: that looks excellent. I am keen to give your scripts a whirl myself. Can I read all the main details of your idea of splicing near and far field in that paper, or do you have some other references as well?
Cheers all
Ed W
Thanks Ed, I will be very interested in your comments.
I think Keele covered all the main details in his paper. Many programs use the idea; I learned it from using Speaker Workshop, although I don't use it anymore, since I find the log sweep method of measuring unbeatable.
Keele also defines a scaling factor for SPL that is very useful for approximate settings of your preamp or program, although it is difficult to use for scaling inside the program because it depends heavily on an exact measurement of effective cone radius.
What I think is new, to my knowledge, is implementing it as an impulse response synthesis, but that is a matter of convenience.
Cheers,
Roberto
Ed:
Bear with me; it's an out-of-the-box analogy, but it's relevant. It wasn't my main point, though; mass production was.
Ed said: But, there is no requirement for drives in a raid array to be synchronised to any great extent at all
When read, no - but when 'played back' they have to be, or the bits get scrambled. In a way, using dissimilar drives proves my point: the bytes are retrieved, assembled, and presented to the file level as a single piece, and there's no bit 'jitter' there, or a 0 would turn into a 1 and the file would be scrambled. Right?
== John ==
John,
Incorrect. The smallest stripe size on most RAID arrays is 512 bytes, and typically you would use larger block sizes, e.g. 32 KB or 128 KB.
With read-ahead you would typically pull several MB from the array: each disk reads several tracks, in blocks (with labels to allow re-assembly), and once every disk has read a significant chunk of data the whole block is re-assembled and sent back to the server.
So at the bit level the data does not need to be synchronous. The whole system simply pauses until each underlying device has read all its data, then re-assembles it at a macro level.
In the case of some parts of the DAC, not only does each bit have to be synchronous, but you also need sub-bit accuracy! With an oversampling DAC you may be trying to reconstruct a clock which is ticking 128 times faster than the sample rate. Quite a leap over a RAID system which only needs to be in sync every 10 *million* bits or so...!
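To put numbers on the DAC side (illustrative figures, assuming a 128x master clock):

```
% Scale of the DAC timing requirement (assumed 128x oversampling).
fs   = 44100;                 % sample rate
mclk = 128 * fs;              % master clock: 5.6448 MHz
printf("master clock period ~ %.0f ns\n", 1e9 / mclk);   % ~177 ns
% To keep jitter a small fraction of that period, the recovered
% clock needs to be stable to nanoseconds or better, continuously.
```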
So, no, your analogy does not take us further in understanding this problem (IMHO).
Why not stop pushing this rock uphill, read up on the existing bits of software which do exactly what you want to do, and report back what you learn instead? There is no point speculating when the information is already out there.
(I don't mean this to sound negative, although no doubt it does. This problem has solutions; just read up on them first, is all I ask.)
Forcing thread priority?
First off let me admit that I have not yet read through this entire 40-some-odd page thread.
As a suggestion to those hoping to use a computer as both a crossover and a general-purpose source machine: by setting the thread priority of the crossover processes higher than that of other processes, I would think you could eliminate the popping and other noises caused by processing power being taken away from the crossover threads.
I apologize if this has already been tried/suggested; I just thought I would get it out there, as it might be a while before I get through this entire thread. (Oh, but I will! 😉)
Oh, and I would also like to thank the OP and the other contributors to what has been one of the most useful and informative threads for a PC enthusiast such as myself.
Dear mbutzkies,
could you please contact me? Your e-mail via the forum is disabled.
Thank you,
M
Re: Forcing thread priority?
stelleg151 said: As a suggestion to those hoping to use a computer as both a crossover and a general-purpose source machine: by setting the thread priority of the crossover processes higher than that of other processes, I would think you could eliminate the popping and other noises caused by processing power being taken away from the crossover threads.
Yep, that's a mandatory starting point. However, it also requires that your operating system is fairly real-time capable...
The point is that you can crank the priority up to maximum, but if the OS only schedules your thread to run every 100 ms and you are only buffering 20 ms, then you will still get pops and clicks.
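The arithmetic is simple enough (typical values assumed):

```
% How much scheduling latency a given buffer can absorb.
fs = 44100;  frames = 1024;                     % a typical period size
printf("buffer = %.1f ms\n", 1000*frames/fs);   % ~23 ms
% If the scheduler can stall the audio thread for longer than this,
% the buffer underruns and you hear a click, whatever the priority.
```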
I run my HTPC on Linux, and in general you can crank the interactivity up pretty high. The convolver thread runs at realtime priority on my machine, and the HTPC software I use raises the priority of the threads doing audio and video processing relative to the others. Basically it's hard to make it skip, even when severely pushing the machine (e.g. several compile jobs in the background maxing out the CPU).
Ed W
Has anyone successfully used Console with an M-Audio FireWire 410? I'm going nuts trying to figure this out. The WDM output needs to be patched to the ASIO input. Any suggestions, or am I just using the wrong card for this process?
Thanks,
-Frank
I'm not sure about the FireWire versions of the card, but the normal PCI M-Audio cards don't work correctly with Console as far as I know. The problem is exactly what you describe: the WDM output does not appear as an assignable/routable source to send to an ASIO input. This may be stale information, though.
ADAT higher than 16/44.1???
I've got an RME HDSP9652 sound card. I'm currently sending 8 channels of 16/44.1 audio through one ADAT port, and I'd like to start sending 24/44.1 audio. I'm also using the Art Teknika VST plugin host Console. The output options in Console only let me specify the sample rate (e.g. 44.1, 96 kHz), but not the bit depth.
My sound card supports S/MUX, which apparently allows you to combine two 48 kHz ADAT channels into a 96 kHz channel. But for the time being I don't want a higher sample rate, just a higher bit depth.
Is this more a limitation of my VST plugin host? Would Cubase or some other plugin host allow easy conversion to 24-bit/44.1 kHz?
And would 24-bit/44.1 kHz still travel over a single ADAT channel, or would I have to pair two channels?
BTW, I'm using an Alesis AI4 to break the ADAT out into AES/EBU or S/PDIF.
jgwinner said:
Fair enough, but look at four RAID drives in a SATA configuration: four 'buckets' of data, and the data all has to get to the controller at the same time. Yes, the analogy breaks down a bit, but with CPU designers moving to serial interfaces more and more, aligning the bits from multiple streams is essential.
I'm using a PCI SATA150 RAID controller from Promise, and it has caused me nothing but headaches. Streaming sound is, counterintuitively, worse than from a single fast drive. The problem seems to be the PCI bus itself: a video card or a fast disk controller can saturate the bus, and it also tends to have higher bus priority than your sound card. The result is small pauses/clicks in the audio when the hard drive kicks in.
I have not found an adequate solution to this. None of the media players I've tried will fully read a song into memory before playing.
BURRometer said: Has anyone successfully used Console with an M-Audio FireWire 410? I'm going nuts trying to figure this out. The WDM output needs to be patched to the ASIO input. Any suggestions, or am I just using the wrong card for this process?
I've got an M-Audio FireWire Solo and Art Teknika Console. I, too, am having problems bridging the gap between WDM applications and ASIO.
Most Windows media players and apps use WDM, and I haven't found any way to connect a WDM music player to an ASIO input in software. So I did it the cheesy way: I physically connected the output of the M-Audio to an input on an RME HDSP9652. Now I can play any WDM Windows app into, and out of, the ASIO sound card running crossover duty.
I was not able to route my HDSP9652's outputs back into one of its own inputs; I had to use another sound card. It might have made sense to use a sound card with a word clock output and sync it to the input of the RME HDSP9652.
Daveis said:
Most Windows media players and apps use WDM, and I haven't found any way to connect a WDM music player to an ASIO input in software.
RME drivers let you route the SPDIF output back to the SPDIF input internally. All you have to do is assign the SPDIF output as the output device for Windows Media Player and the SPDIF input as the source for Console.
Make sure that the sync is set to internal in your RME card.
Thunau said:
RME drivers let you route the SPDIF output back to the SPDIF input internally.
I've made SPDIF out the default output for Windows Media Player.
I've made SPDIF input the source for Console. Actually, I've always had this.
I'm not sure what you mean by setting the sync to internal.
Under Hammerfall DSP Settings I see:
- MME
- Buffer Size
- Options
- SPDIF IN
- SPDIF OUT
- Word Clock Out
- Clock Mode
- Pref Sync Ref
Still no luck. The HDSP mixer shows playback on SPDIF, but nothing on the SPDIF input.
In the RME TotalMix mixer application, Ctrl-click on the SPDIF output channel label and it will turn red, indicating internal loopback; no cables required. I believe it's explained in the manual.
I have the older Hammerfall card, but a friend of mine has the DSP version, and that's how he routes audio between applications.
This is from the HDSP9652 manual. It might be possible to sacrifice an ADAT output and send it to one of the inputs with the clock set to internal, but according to the manual, hardware loopback is not possible on the HDSP9652. Bummer.
28.5 Recording a Subgroup (Loopback)
TotalMix supports a routing of the subgroup outputs (= hardware outputs, bottom row) to the recording software. Unfortunately this feature is not available with the HDSP 9652, as the FPGA of the card has no resources left for a hardware implementation. Therefore this chapter describes the loopback mode when used with an external cable loop.
A loopback is used to record the playback signal. This way, complete submixes can be recorded, the playback of a software can be recorded by another software, and several input signals can be mixed into one record channel. Please note these important issues: