multiamping with a digital crossover is another method to "split/combine" DAC outputs for increased linearity
the digital crossover divides the source into limited frequency bands appropriate for each driver
driver sensitivity padding can be done in analog by tweaking each driver's amplifier gain
in each band the noise bandwidth is lower, which allows reducing the signal level relative to full scale in the DAC; that can reduce distortion from smooth nonlinearities such as the SiO2 voltage coefficient or amp and resistor nonlinearity internal to the DAC
the bandlimited signal in each DAC also has fewer spectral components to interact and cause IMD, and some of the harmonic and IMD distortion components will fall outside the driver's audio bandwidth
the above advantages are likely enough for the sub/woofer/mid channels; only the tweeter DAC would need multiple DACs
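To make the headroom point concrete, here is a minimal sketch of a 3-way digital crossover in Python (the 4th-order Butterworth filters, the 300 Hz / 3 kHz crossover points and the 96 kHz rate are assumed examples, not anything specified above): each DAC only ever sees its own band, so its peak level relative to full scale drops and fewer spectral components are left to intermodulate.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96_000                    # assumed sample rate, Hz
f1, f2 = 300.0, 3_000.0        # assumed crossover frequencies, Hz

# digital crossover: split one source into woofer / mid / tweeter bands
low  = butter(4, f1,       btype="low",  fs=fs, output="sos")
mid  = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
high = butter(4, f2,       btype="high", fs=fs, output="sos")

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, fs)          # one second of near-full-scale source

bands = {
    "woofer":  sosfilt(low,  x),
    "mid":     sosfilt(mid,  x),
    "tweeter": sosfilt(high, x),
}

# each band-limited signal uses less of its DAC's full scale than the source,
# which is the headroom being described above
for name, y in bands.items():
    print(f"{name:8s} peak = {np.max(np.abs(y)):.3f}  (source peak = {np.max(np.abs(x)):.3f})")
```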
Originally posted by SunRa
what do you think about the way accuphase is paralleling the chips? (the new method)

Uh, that's DSD, not PCM.
Originally posted by jcx
the digital crossover divides the source into limited frequency bands appropriate for each driver

Sure, but you can do that with an analog crossover as well.

Originally posted by jcx
driver sensitivity padding can be done in analog by tweaking each driver's amplifier gain

In a multi-amped setup, you can use a passive analog line-level crossover such as this and end up with the same result.

Originally posted by jcx
in each band the noise bandwidth is lower, which allows reducing the signal level relative to full scale in the DAC; that can reduce distortion from smooth nonlinearities such as the SiO2 voltage coefficient or amp and resistor nonlinearity internal to the DAC
the bandlimited signal in each DAC also has fewer spectral components to interact and cause IMD, and some of the harmonic and IMD distortion components will fall outside the driver's audio bandwidth
the above advantages are likely enough for the sub/woofer/mid channels; only the tweeter DAC would need multiple DACs

The restricted bandwidth would provide the benefits you describe, but I don't think it would overtake the benefits of using the same number of DACs together in a full-band setup, unless they were optimized for specific bands, in silicon.
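For reference, the benefit of the full-band paralleling mentioned just above is noise averaging: with N DACs carrying the same signal, uncorrelated noise and mismatch errors average down by roughly sqrt(N), about 6 dB for four chips. A toy simulation with made-up noise numbers (not AD1955 specs):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = np.sin(2 * np.pi * np.arange(n) * 0.01)     # arbitrary test tone

def snr_db(clean, noisy):
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

n_dacs = 4                                  # e.g. four chips summed
noise_rms = 1e-3                            # made-up per-DAC noise level

one_dac = signal + rng.normal(0, noise_rms, n)
paralleled = np.mean(
    [signal + rng.normal(0, noise_rms, n) for _ in range(n_dacs)], axis=0
)

print(f"single DAC     : {snr_db(signal, one_dac):.1f} dB")
print(f"{n_dacs} in parallel  : {snr_db(signal, paralleled):.1f} dB  (~{10 * np.log10(n_dacs):.1f} dB better)")
```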
Hello abzug,
yes, I was aware it was DSD, not PCM, but I think that type of delaying between each chip can be done with PCM also. I was asking about the theoretical aspect of this procedure.
Then you need an even higher frequency clock, while retaining low jitter. You'll find the prices for such clocks astronomical.
Ok, I am trying to learn, so if you can explain a little more it would be very helpful. Why would I need a higher frequency clock (assuming the procedure is implemented with PCM, not DSD)? What does this have to do with delaying the signal to each chip?
If you are performing linear interpolation on a PCM datastream, you will not need a clock frequency higher than the native bit rate.
abzug said: "The delays are spaced apart at a fraction of a normal clock cycle."
There are a number of ways to introduce that delay without resorting to higher frequency clocks. For example, the AD9510 and the Xilinx clock manager block.
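To illustrate the linear-interpolation point above (this is just one way to read it, not a description of any particular product's method): the sub-sample delay each paralleled DAC needs can be computed in the data itself, from the two neighbouring PCM samples, so everything still runs off the native clock. The four-DAC, quarter-sample spacing below is an assumed example.

```python
import numpy as np

fs = 44_100
n = np.arange(256, dtype=float)
x = 0.5 * np.sin(2 * np.pi * 1_000.0 * n / fs)    # toy PCM stream, 1 kHz tone

n_dacs = 4                                        # assumed number of paralleled DACs
# each DAC gets the stream delayed by k/n_dacs of a sample period,
# approximated by linear interpolation between existing samples
streams = [np.interp(n - k / n_dacs, n, x) for k in range(n_dacs)]

# the summed analog outputs approximate the original signal
combined = np.mean(streams, axis=0)
print("max deviation from source:", np.max(np.abs(combined[8:-8] - x[8:-8])))
```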
Good to know, thank you.
Could you please guide me to some articles/books/something in regard to paralleling techniques or basic digital signal processing? I really want to understand these things better.
Originally posted by b-square
There are a number of ways to introduce that delay without resorting to higher frequency clocks. For example, the AD9510 and the Xilinx clock manager block.

Looking at the 9510, the delay linearity is half an LSB of the 5-bit setting, i.e. 1/64 of the range, which for the 10 ns maximum range is over 0.15 ns. That's much worse than even the clock jitter!
Moreover, the limit from the clock jitter remains, from the end of one set of staggered delay pulses triggered by a clock edge to the next.
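The arithmetic behind that figure, using the numbers quoted in the post (5-bit fine-delay setting, 10 ns maximum range; check the AD9510 datasheet for the actual spec):

```python
full_scale_ns = 10.0                 # quoted maximum delay range, ns
bits = 5                             # quoted fine-delay setting width, bits

lsb_ns = full_scale_ns / 2**bits     # one LSB of the delay setting: 0.3125 ns
half_lsb_ns = lsb_ns / 2             # half-LSB linearity, 1/64 of range: ~0.156 ns

print(f"LSB = {lsb_ns:.4f} ns, half LSB = {half_lsb_ns:.4f} ns")
# ~0.156 ns of delay error, i.e. well above the clock jitter it is compared to
```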
I haven't thought this through, but is there a reason a funky combination of flip-flops couldn't delay the signal to each DAC by any multiple of a single bit-clock cycle? At 44.1 kHz (with a 64*fs bit clock), that would be 1/2,822,400th of a second. If you've oversampled to 88.2 kHz, then a single bit-clock delay is the same as the Accuphase.
Just an idea.
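A quick check of those numbers (the 64*fs bit clock is as stated in the post; the four-DAC stagger is just an assumed example of how the shift-register delays might be assigned):

```python
fs = 44_100                     # base sample rate, Hz
bit_clock = 64 * fs             # 2,822,400 Hz, as in the post
t_bck = 1.0 / bit_clock         # one bit-clock period, seconds

print(f"bit clock           = {bit_clock:,} Hz")
print(f"one bit-clock cycle = {t_bck * 1e9:.1f} ns")

# hypothetical: four DACs, each clocked one bit-clock cycle later than the
# last by passing the data through a chain of flip-flops (a shift register)
for k in range(4):
    print(f"DAC {k}: delay = {k} bit-clock cycle(s) = {k * t_bck * 1e9:.1f} ns")
```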
The CD2 used multiple DACs (with increasing delays to each one) to actively lowpass out the HF noise produced by their high-order noise-shaping and/or sigma-delta modulation? I didn't know these kinds of DACs were even being used twenty-plus years ago.
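If that is indeed what the CD2 did (I haven't checked the actual design), the reason it works is that summing N outputs, each delayed by one more clock period than the last, is an N-tap moving average: a sinc-shaped lowpass with nulls at multiples of fs/N. A rough sketch of that response, with N = 4 and a 64 * 44.1 kHz update rate chosen arbitrarily:

```python
import numpy as np

n_dacs = 4                      # assumed number of delayed DACs
fs = 64 * 44_100                # assumed DAC update rate, Hz (2,822,400)

# summing n_dacs outputs, each delayed one clock more than the last,
# is a moving-average FIR: h[n] = 1/N for n = 0..N-1
h = np.ones(n_dacs) / n_dacs
k = np.arange(n_dacs)

for f in (20e3, 352.8e3, fs / n_dacs):           # audio band, mid, first null
    w = 2 * np.pi * f / fs
    mag = abs(np.sum(h * np.exp(-1j * w * k)))
    print(f"{f / 1e3:8.1f} kHz: {20 * np.log10(max(mag, 1e-12)):8.2f} dB")

# nulls sit at multiples of fs/n_dacs (705.6 kHz here), so out-of-band
# noise-shaping products are attenuated while the audio band is barely touched
```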