During my development of TDA1387-based DACs I came to the tentative conclusion that fewer chips were generally better in SQ terms. That was based on experience that strongly hinted that the power supply was the limiting factor and that more chips meant heavier demands on the supply. I also found that more output current made an I/V stage (other than a purely passive one) harder to get sounding good.
The power supply sensitivity turned out to be to a large degree dependent on the variation in output voltage at the DAC's current outputs - purely passive I/V usually has the largest output voltage variation and consequently the highest sensitivity to power supply rail noise. By 'noise' here I don't just mean random noise; load-induced (i.e. signal-correlated) noise is normally a bigger influence on SQ.
A 'lightbulb' moment came when I considered that a step-up transformer could be used to ensure a very low voltage variation at the DAC's output whilst still allowing a high enough I/V resistor to be used so as not to need a voltage gain stage to create a 2VRMS output signal. Use of a step-up transformer isn't new - Audio Note had a patent on it (now expired). What is new is using a high-ratio step-up transformer in conjunction with a very large number of paralleled DAC chips - the parallel array allows an even smaller variation of DAC output voltage and hence lower power supply sensitivity.
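To put rough numbers on the reflected-impedance idea, here's a back-of-envelope comparison in Python - the resistor values and array current are illustrative assumptions only, not the final design values:

```python
# Rough comparison of the voltage swing at the DAC output pins for a plain
# passive I/V versus an I/V resistor on the secondary of a step-up trafo.
# All component values below are illustrative assumptions, not the design values.

i_peak = 0.04          # assumed peak signal current from the parallel array, A

# Plain passive I/V: the full ~2.8 V peak (2 VRMS) appears across the I/V
# resistor, so the DAC output pins swing by the whole output voltage.
r_iv_passive = 70.0    # ohms, sized so 40 mA peak gives roughly 2 VRMS out
v_swing_passive = i_peak * r_iv_passive

# Transformer-coupled: the I/V resistor sits on the secondary and appears at
# the primary divided by the turns ratio squared.
n = 100                               # secondary:primary turns ratio
r_iv_secondary = 6800.0               # ohms, example secondary-side I/V resistor
r_reflected = r_iv_secondary / n**2   # what the DAC outputs actually see

v_swing_trafo = i_peak * r_reflected

print(f"DAC pin swing, passive I/V   : {v_swing_passive:.2f} V peak")
print(f"Reflected load through trafo : {r_reflected:.2f} ohm")
print(f"DAC pin swing, trafo-coupled : {v_swing_trafo * 1e3:.1f} mV peak")
```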
I've attached an outline sketch of the arrangement. A large number of paralleled DACs generates a peak current in the region of a few tens of mA. The DAC arrays are arranged to generate that current in 'push-pull' mode (aka 'balanced') so that there isn't any need to block the DC through the trafo - the DC currents are applied out of phase and hence cancel within the core of the trafo. The secondary of the trafo typically has 100X the turns of the primary, meaning the standard 2VRMS is generated directly from 20mV at the DAC and only needs filtering (to attenuate images) and buffering to create a low output impedance for driving cables and a power amp or preamp.
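Working those numbers backwards as a sanity check (the turns ratio is as stated above; the secondary-side I/V resistor value is the same illustrative assumption as before):

```python
# Working the stated numbers backwards: what array current gives 2 VRMS at the
# secondary?  The turns ratio is from the post; the secondary-side I/V resistor
# is an illustrative assumption.
import math

n = 100                  # secondary:primary turns ratio
r_iv_secondary = 6800.0  # ohms, assumed I/V resistor on the secondary
v_out_rms = 2.0          # target output level, V RMS

# Secondary current is the primary (array) current divided by n, so:
i_primary_rms = v_out_rms * n / r_iv_secondary
i_primary_peak = i_primary_rms * math.sqrt(2)
v_primary_rms = v_out_rms / n    # voltage seen at the DAC outputs

print(f"Array signal current : {i_primary_rms * 1e3:.1f} mA RMS "
      f"({i_primary_peak * 1e3:.1f} mA peak)")    # a few tens of mA, as stated
print(f"Voltage at DAC output: {v_primary_rms * 1e3:.0f} mV RMS")  # ~20 mV
```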
From my earliest experiments with a step-up transformer, I found that the primary inductance is what sets the LF roll-off and that the usual core material I use (PC40) doesn't give enough inductance (in other words, doesn't have high enough mu) for an optimized design. So I have moved over to using 10K material, which has about 4X higher mu but a lower peak flux capability. There aren't too many forms of core that this 10K material turns up in (at least in off-the-shelf quantities) on Taobao, so for my earliest public design I have settled on EP17. The wish to create a fully balanced design means using two EP17s per channel, for a total of four. Fortunately they're cheap to buy on Taobao; the main cost is going to be in the labour of winding them.
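A rough sketch of how the primary inductance sets the LF corner - the AL value, turn count and reflected load here are assumptions for illustration, not measured EP17/10K figures:

```python
# Rough estimate of the LF -3 dB point set by the primary inductance.  With a
# current-output array (high source impedance) the primary inductance shunts
# the reflected I/V load, and the corner falls roughly where the inductive
# reactance equals that resistance.  AL and turn count are illustrative guesses.
import math

al = 4000e-9       # H per turn^2, assumed for an ungapped EP17 in high-mu material
n_primary = 50     # assumed primary turns
l_primary = al * n_primary**2

r_reflected = 6800.0 / 100**2   # secondary I/V resistor reflected to the primary

f_3db = r_reflected / (2 * math.pi * l_primary)
print(f"Primary inductance : {l_primary * 1e3:.1f} mH")
print(f"Approx. LF -3 dB   : {f_3db:.1f} Hz")

# Roughly 4X higher mu (PC40 -> 10K) gives ~4X the inductance for the same
# winding, pushing this corner ~4X lower.
```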
Gerbers are attached for a 36-DAC board - four of these are needed to feed the four EP17 transformers so that the output created is 2VRMS across the I/V resistors. In the sketch the output filter's shown single-ended, but for the first design I'm going balanced, necessitating doubling up on the filter and buffers, followed by a transformer between the two phases to create a single-ended output.
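For a sense of scale, and as a placeholder for the image-filter sizing (the filter topology and corner frequency here are assumed, not the design's actual filter):

```python
# Scale of the array, plus a placeholder estimate for the image filter.  The
# filter here is a plain first-order low-pass purely to show the kind of
# attenuation involved; the corner and topology actually used may differ.
import math

boards = 4
dacs_per_board = 36
print(f"Total TDA1387s across both channels: {boards * dacs_per_board}")

fs = 44100.0              # source sample rate, Hz
f_signal = 1000.0         # example in-band tone, Hz
f_image = fs - f_signal   # first NOS image of that tone

f_corner = 30000.0        # assumed filter corner, Hz

def lowpass_attenuation_db(f, fc):
    """Attenuation of a first-order low-pass at frequency f, in dB (positive)."""
    return 10 * math.log10(1 + (f / fc) ** 2)

print(f"First image at {f_image / 1e3:.1f} kHz attenuated by "
      f"{lowpass_attenuation_db(f_image, f_corner):.1f} dB")
```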