My no-DAC project: FPGA and transistors

Compare that to the LTC6957 data sheet, which is a sine-to-logic converter with
optional CMOS/LVDS/PECL outputs, all on the same process by the same designers.
ECL wins hands down, LVPECL that is.

It's my favorite:)
It shines and sonically outperforms all the logic inverters being used in the Well Tempered Master Clock thread.
I just wish they also made a CML output.
 
Indeed it's an old measurement result, but that does not make it invalid; we measure the same things day in, day out here in the lab.

One has to be careful with any logic family WRT edge speeds. CMOS likes FAST edges, and one has to be careful about sine-wave to square-wave conversion: CMOS does not like slow sine-wave edges due to its lack of gain.

The LTC part has a high-gain input stage to square up the input sine wave.

Quote from the LTC datasheet: "Optimized Conversion of Sine Wave Signals to Logic Levels".

With normal fast edges CMOS logic wins hands down over ECL; you just have to highlight an application where CMOS is weaker...

We use PECL every day and can tell you that close-in carrier phase noise is lower with the CMOS AC family.

Here is a link to a more recent measurement of AC logic phase noise:-

74AC04 Residual Phase Noise Measurements

Important to note that again the AC CMOS is being used to square a sine-wave input - the slow input edges have a big impact on CMOS LF noise (as can be seen from the lowering PN with increasing input level).

As an example, we use the PECL 100EL32 divider when we need the speed; however a CMOS NC7SZ74 wired up as a divider (up to say 200 MHz operation) will have about 15 dB lower phase noise around 10 Hz offset.

The LTC part is good due to its high-gain input squarer - but once you have fast edges, CMOS has lower PN as a logic family.

I have a few different PN systems in the lab, from Agilent 3048A systems to the E550x series, plus Wavecrest etc., and I'm more than happy to make measurements for you... but we work here with ultra-low-PN designs every day and have some idea what we're talking about...

This is the residual system noise floor of our E5500 PN system - so we can and DO measure PN down to very low levels...

Dropbox - Agilent E5500 System Confidence Test.jpg

The E5500 PN system is a COMPLETE pain to use - but it still sets the performance standard in the industry - it's a whole computer-controlled rack of equipment:-

Dropbox - E5500 PN system.JPG

At the far right of the image above you can just see a corner of the Wavecrest SIA4000 PN system in the second rack, which is far easier to use but does not have as good close-in PN...

Just saying that I work with ULPN designs day in, day out... I know what I'm talking about simply from many, many years of daily experience; unlike most others I can ACTUALLY measure ULPN designs in the lab - and there is PLENTY of BS WRT ULPN clock design... but this is a DIY forum so I only take such things with a pinch of salt...

The thing that keeps popping up is: how do you get from a sine signal (crystal) to really fast edges?

After all, every squaring step also amplifies and generates phase noise.

What's the minimum edge rate needed to ensure the best phase noise and jitter specs?

Many phase noise setups can measure sine waves, yet no one uses sine waves to clock the parts.
What's missing is the edge rate needed to reach the best measurement results; this depends on the DAC structure used, the PCB layout, the logic family, and probably more. Once you've chosen a logic family, the required edge rates are fixed by that choice.
 
Really cool, quick 'n' dirty implementation, Marcel :)

The interleaved DAC structure is used in Tektronix' AWGs, see the attached picture.

Link:
https://www.google.com/url?sa=t&sou...FjAIegQIBRAB&usg=AOvVaw39N4uCW5WZswm6aDcYkkv-

Looking at the picture, it uses two physical DACs, one for the odd and one for the even parts of the data stream. It seems each DAC only ever gets every other sample, demultiplexed.

Looking closer at the problems they faced, these may only be of interest for the extra bandwidth they achieve by effectively doubling it, but I think it concerns all the data.
I took your solution to resemble this structure, with the tight matching it needs.

Who said it needs tight matching? It needs some degree of matching, that is, the even and odd weights should not be totally different.

Anyway, there is some similarity, but the Tektronix scheme converts different data with the odd and even DACs, while the scheme I wrote about converts the same data with the odd and even DACs (at half a clock cycle shifted moments).
 
Member
Joined 2017
Paid Member
Are those numbers true for AD as well as DA conversion?
Are they the same for all OSRs?

I don't know how Signalyst generates the various SDM data, but with NRZ DAC structures I know harmonics go up when OSR goes down at high signal levels. I also don't know whether the modulation is higher or lower with different filters or modulators, and whether these correlate with significant differences in output signal level. They can, but don't have to.
E.g. the SACD format is bound to rules like not allowing more than 24 (or fewer than 4) out of 28 successive bits to be 1. That's an index of 71.4%.

OSR has nothing to do with modulation depth. Modulation depth (amplitude usage) depends on the order of the 1-bit DSM: the lower the order, the more modulation depth. ADC and DAC have the same restriction, where an ADC goes from analog to a 1-bit DSM pulse train, and a DAC from 32-bit or 24-bit PCM to 1-bit DSM. If you use a multibit DSM, 5 bits or more has almost no restriction; 2, 3, and 4 bits have some restrictions.

Rough output pulse trains of a 1-bit DSM over 16 successive bits:
11111111 11111111 - 1st and 2nd order can output 16 ones; 100%.
11111110 11111110 - 3rd order can do 14 ones and 2 zeros; 75%.
11111011 11011110 - 4th order: 13 ones and 3 zeros; 63%.
11101110 11101110 - 6th order and above: 12 ones and 4 zeros; 50%.

I don't think SACD can output 24 ones and 4 zeros, because it employs a 7th-order DSM. "Zero" on SACD is 11010010, which means 4 ones and 4 zeros. You can't choose patterns like 11110000 or 11100010, because more than 3 successive ones or zeros mean low-frequency content, and a high-order DSM doesn't allow low frequencies: the noise-shaping algorithm wipes them away.

OSR in DSM and sampling frequency in PCM have the same meaning. As long as your bandwidth requirement is satisfied, higher OSR, like higher sampling frequency, is redundant. If your BW is 20 kHz, 64 OSR with 7th order or higher is enough; at 128 OSR, 4th order or higher is enough. The difference is out-of-band noise, as with PCM.
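To make the run-length idea above concrete, here is a toy 2nd-order 1-bit delta-sigma modulator in Python. This is my own minimal sketch, not any of the actual modulators discussed; higher orders need carefully scaled coefficients to stay stable, which is exactly where the modulation-depth limits come from. The average of the ±1 bit stream recovers the input level, and you can count how long the runs of identical bits get:

```python
import numpy as np

def dsm2(x):
    """Minimal 2nd-order 1-bit delta-sigma modulator (illustration only).
    x: input samples in [-1, 1]; returns a +/-1 bit stream."""
    s1 = s2 = 0.0
    v = 1.0
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        s1 += xn - v                       # first integrator
        s2 += s1 - v                       # second integrator
        v = 1.0 if s2 >= 0 else -1.0       # 1-bit quantizer
        y[n] = v
    return y

def longest_run(bits):
    """Longest run of identical consecutive symbols."""
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Half-scale DC input: averaging the bit stream recovers the level,
# and the runs of ones stay short despite the 75% ones density.
bits = dsm2(np.full(65536, 0.5))
print(bits.mean())         # close to 0.5
print(longest_run(bits))
```

The higher-order modulators discussed in this thread behave the same way in principle, only with much stricter limits on how far the input may approach full scale before the loop goes unstable.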
 
OSR has nothing to do with modulation depth. Modulation depth (amplitude usage) depends on the order of the 1-bit DSM: the lower the order, the more modulation depth. ADC and DAC have the same restriction, where an ADC goes from analog to a 1-bit DSM pulse train, and a DAC from 32-bit or 24-bit PCM to 1-bit DSM. If you use a multibit DSM, 5 bits or more has almost no restriction; 2, 3, and 4 bits have some restrictions.

Rough output pulse trains of a 1-bit DSM over 16 successive bits:
11111111 11111111 - 1st and 2nd order can output 16 ones; 100%.
11111110 11111110 - 3rd order can do 14 ones and 2 zeros; 75%.
11111011 11011110 - 4th order: 13 ones and 3 zeros; 63%.
11101110 11101110 - 6th order and above: 12 ones and 4 zeros; 50%.

I don't think SACD can output 24 ones and 4 zeros, because it employs a 7th-order DSM. "Zero" on SACD is 11010010, which means 4 ones and 4 zeros. You can't choose patterns like 11110000 or 11100010, because more than 3 successive ones or zeros mean low-frequency content, and a high-order DSM doesn't allow low frequencies: the noise-shaping algorithm wipes them away.

OSR in DSM and sampling frequency in PCM have the same meaning. As long as your bandwidth requirement is satisfied, higher OSR, like higher sampling frequency, is redundant. If your BW is 20 kHz, 64 OSR with 7th order or higher is enough; at 128 OSR, 4th order or higher is enough. The difference is out-of-band noise, as with PCM.

I think that's too static a way of looking at it.
How else would you control pre- or post-ringing (filter slopes) if it's just a random pulse train, where you can take an arbitrary row of bits out of the stream and define 0 as an even sum of 1s and 0s? The exact sequence of those 0s and 1s must have specific meaning. In other words: how can you control filter types as well as algorithms within a specific OSR, and have volume control, if the modulation depth is fixed by the choice of order?
I can't see this being a fixed thing within PCM-to-DSM conversion.

An SACD pressing plant will discard a master if it doesn't follow this 24/28 (or 4/28) rule.

All I'm saying is that higher OSR might relax the modulation depth rules. Or there is another explanation for why high modulation depths generate lower distortion at higher OSR, or HQPlayer uses some secret ingredient (an intermediate stage with multibit DSM, perhaps?).
 
Human hearing is NOT based on frequency... It's based on time!

".... I'm basically against time-domain analysis because there are many chances to misunderstand, and human ears are based on frequency domain."

With all due respect you are wrong, so wrong that it is painful to read, as you are clearly a highly educated and intelligent engineer and this one falsehood is blinding you to other possibilities.
This is the single most damaging falsehood accepted as irrefutable fact by 99.99% of all audio designers, electronic / digital and mechanical / transducers.

Highly abbreviated explanation of my statement:
(1) Ignoring bone conduction and sub 20Hz infrasound, 100% of human hearing is based on air pressure changes. Nothing else, frequency variation does not feature in the detection of sound, only later in the brain decoding of sound does frequency variation come into play.
(2) In a vacuum we can't hear anything because there is no air to increase / decrease (compression / rarefaction) in pressure.
(3) These Air Pressure Events (APE's) have a clearly defined start time, duration and end time and it is this timing information which forms 100% of the sound input information which is sent to our brain.
(4) ALL spatial / 3 D sound information is decoded using time domain data... NOT frequency domain!! Please consider the implications of this fact.... The number crunching power required in order to accurately process the tiny (low digit micro not milli second) differences in arrival time from left to right ear as an insect flies around our head while we monitor dozens of other jungle sounds are way beyond any computer and is at the heart of our "fight or flight" response.
(5) Our eardrum(s) moves due to these APE's and then the tiny hairs / nerve ending / inner ear biology and brain combine to perform the most astonishingly accurate decoding, identification and location of all sounds.

A deep understanding of just how remarkable our ear-brain combination is, is only just being developed, and audiology is an exciting area of discovery which is now "cool" again thanks to the recent popularity of 3D audio, i.e. Dolby Atmos / VR and gaming, which demands the most accurate location of sounds in 3D space in order to identify the precise location of enemies / prey / teammates etc.

Anyway, sorry to go off topic on an old post, but my point is that human hearing is all about the TIME domain, NOT the frequency domain, and all audio designers need to understand this fact and incorporate it into their designs... Ironically, IF we had a perfectly time-domain-accurate digital source / DSP processing, then the DSP could correct any frequency-domain errors and give us the "perfect" frequency response from any loudspeaker, assuming the transducer were able to maintain perfect time-domain behaviour... HIGHLY unlikely!
Cheers
Alex.
 
Hi, Zoran. I'm afraid I misunderstand what you mean. But I think your schematic requires accurate resistor matching. The advantage of 1-bit DSM is that there is no need to care about resistor matching. The attached pic has one resistor per group of transistors: R4 for the negative ones and R1 for the positive ones. One resistor is the essential point of 1-bit DSM. Each tap doesn't carry exactly the same current (about 0.1% accuracy), but the imperfection results in a slight frequency response error, not distortion.

I am asking because I wonder if I can use this topology for "switching" the R-2R net of, for instance, a 24-bit discrete DAC? I am aware that is a different topic, but you already have experience with this BJT topology. Somehow I think these BJTs will sound better than the MOSFETs inside the chips usually used for driving and switching an R-2R net.
.
Yes, the resistors should be matched and of very tight tolerance for R-2R ladder output DACs.
 
Who said it needs tight matching? It needs some degree of matching, that is, the even and odd weights should not be totally different.

Anyway, there is some similarity, but the Tektronix scheme converts different data with the odd and even DACs, while the scheme I wrote about converts the same data with the odd and even DACs (at half a clock cycle shifted moments).

Yes we're talking about different structures, very similar though.

The interleaved DAC should be very well matched. Otherwise every odd or even bit has different energy, as one DAC never complements (weighs) the other, given its difference in bit length, amplitude, etc.

I understand now that your example doesn't use different DACs for every bit (or sum of odd & even bits) in the bitstream, though you could. The part about specifically weighing taps 1 and 2 etc. threw me off course, but it's just a standard shift-register function, where the zeros are also added. I thought I recognized the interleaved structure ;-)
 
That would not achieve the aim of doubling the sample rate.

The aim is to achieve an effective doubling of the sample rate, while they actually don't, no.

The interleaved DAC, when well implemented, leaves more usable bandwidth because the noise products of both cancel each other out. So while at high frequencies there's normally too much noise to usefully generate signals, interleaving clears up the noise, making more bandwidth available and effectively doubling the sample rate. Funny thing, right?!

Edit: but I don't see an easy use case for high resolution audio with this.
 
The aim is to achieve an effective doubling of the sample rate, while they actually don't, no.

The interleaved DAC, when well implemented, leaves more usable bandwidth because the noise products of both cancel each other out. So while at high frequencies there's normally too much noise to usefully generate signals, interleaving clears up the noise, making more bandwidth available and effectively doubling the sample rate. Funny thing, right?!

Edit: but I don't see an easy use case for high resolution audio with this.

The technique described in the Tektronix application note was also used in the early days of CD. It was not about noise but about speed and settling time.
 
I think that's too static a way of looking at it.
How else would you control pre- or post-ringing (filter slopes) if it's just a random pulse train, where you can take an arbitrary row of bits out of the stream and define 0 as an even sum of 1s and 0s? The exact sequence of those 0s and 1s must have specific meaning. In other words: how can you control filter types as well as algorithms within a specific OSR, and have volume control, if the modulation depth is fixed by the choice of order?
I can't see this being a fixed thing within PCM-to-DSM conversion.

An SACD pressing plant will discard a master if it doesn't follow this 24/28 (or 4/28) rule.

All I'm saying is that higher OSR might relax the modulation depth rules. Or there is another explanation for why high modulation depths generate lower distortion at higher OSR, or HQPlayer uses some secret ingredient (an intermediate stage with multibit DSM, perhaps?).

Yes, I agree with you that a static analysis doesn't make sense. But you need at least a one-second pulse train for an accurate analysis. One second at 64 OSR means 3,072,000 bits; it's impossible to work with such a number by hand. My rough calculation is an averaged one, to explain the limitation of modulation depth. You can find 8 successive ones or zeros even in an 8th-order DSM pulse train, but it occurs only two or three times per second. The averaged occurrence of successive ones or zeros is under the control of the order: the higher the order, the smaller.

High-order DSM has many integrators; the 8th order has eight. Many integrators can easily change the state of the 1-bit quantizer inside the DSM feedback loop, because an integrator is a kind of accumulator. That's the reason high order has a high-frequency spectrum; in other words, high-order DSM can have effective noise-shaping. If you employ high-order DSM (many integrators), noise-shaping is an automatic function. 8th order at 64 OSR can have 30 kHz bandwidth, where you have low distortion up to 30 kHz. At 128 OSR the BW is 60 kHz; at 256 OSR, 120 kHz. You can't control the principle. The only knob is the order.
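The "noise-shaping is automatic" behaviour is easy to see numerically. Below is a rough Python sketch using a toy 2nd-order 1-bit modulator of my own (the 7th/8th-order modulators discussed here shape far more aggressively, and the test-tone bin and band edges are arbitrary choices of mine): the spectrum of the ±1 bit stream is quiet in the band near DC and rises towards fs/2.

```python
import numpy as np

def dsm2(x):
    # Minimal 2nd-order 1-bit delta-sigma modulator (illustration only).
    s1 = s2 = 0.0
    v = 1.0
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        s1 += xn - v
        s2 += s1 - v
        v = 1.0 if s2 >= 0 else -1.0
        y[n] = v
    return y

N = 1 << 16
sig_bin = 37                          # in-band test-tone bin (arbitrary)
x = 0.5 * np.sin(2 * np.pi * sig_bin * np.arange(N) / N)

spec = np.abs(np.fft.rfft(dsm2(x) * np.hanning(N)))
noise = spec.copy()
noise[sig_bin - 2:sig_bin + 3] = 0    # notch out the test tone itself

inband = np.sqrt(np.mean(noise[: N // 128] ** 2))   # ~64-OSR audio band
highband = np.sqrt(np.mean(noise[N // 4:] ** 2))    # top quarter of spectrum
print(highband / inband)              # >> 1: noise pushed to high frequencies
```

The quantization noise hasn't gone away; the loop has simply moved it out of the band of interest, which is exactly what the order of the modulator buys you.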

As long as you are in the digital domain, the principle holds. But our goal is DA conversion, which means the output is always in the analog domain. DAC topology often results in different phenomena in the analog domain: high OSR can sometimes give low distortion and vice versa. There is no concrete principle in the analog domain, because DAC topology dominates the situation.
 
With all due respect you are wrong, so wrong that it is painful to read, as you are clearly a highly educated and intelligent engineer and this one falsehood is blinding you to other possibilities.
This is the single most damaging falsehood accepted as irrefutable fact by 99.99% of all audio designers, electronic / digital and mechanical / transducers.

The time domain and frequency domain don't conflict with each other; they are just the positions from which your eyes look. Some analyses are easier from the time domain than the frequency domain, and vice versa. If you want to get through a maze, you have two choices: start from the goal or from the entrance, and starting from the goal is the optimum selection. I have a lot of experience ripping vinyl for my music library. The best noise and click reduction processing system for me (RX7 by iZotope) is frequency-domain software. It does a perfect job of refreshing old-school music files. AFAIK, recent excellent mastering software is frequency-domain (FFT-based). You can do the same job in the time domain if you are sure of your success. But it's not easy to improve sound quality if you stay in the time domain, like solving a maze from the entrance.
 
I am asking because I wonder if I can use this topology for "switching" the R-2R net of, for instance, a 24-bit discrete DAC? I am aware that is a different topic, but you already have experience with this BJT topology. Somehow I think these BJTs will sound better than the MOSFETs inside the chips usually used for driving and switching an R-2R net.
.
Yes, the resistors should be matched and of very tight tolerance for R-2R ladder output DACs.

An accurate phase relation between the 24 ladder-bit switches can give a smaller glitch impulse than a standard MOSFET switch. But the performance of a multibit DAC depends on resistor accuracy, though less glitch is useful.
 
...
The interleaved dac structure is used in Tektronix' AWG's, see attached picture.

Link:
https://www.google.com/url?sa=t&sou...FjAIegQIBRAB&usg=AOvVaw39N4uCW5WZswm6aDcYkkv-

Looking at the picture, it uses two physical DACs, one for the odd and one for the even parts of the data stream. It seems each DAC only ever gets every other sample, demultiplexed.

Looking closer at the problems they faced, these may only be of interest for the extra bandwidth they achieve by effectively doubling it, but I think it concerns all the data.
I took your solution to resemble this structure, with the tight matching it needs.

Can someone explain how this works? As far as I can make out, it splits the signal stream into odd & even samples, directing them to different DACs, with the DAC handling the even samples slipped by half a clock? I'm probably thick, but what I don't understand is this statement from the Tektronix PDF:
  • Odd images (including the one in the first Nyquist band) are the same and they have the same phase.
  • Even images for each DAC are the same but with opposite phase. As a result they cancel each other in the combined signal
In most DACs bits are processed on the rising edge of the clock pulse, so I can't work out how the above two statements make sense.

What am I missing?
 
As far as I have understood, the Tek architecture requires an adder instead of an analog switch to interleave two DACs because their application is a very high sampling rate. If your fs is 48 kHz and the input is 10 kHz, the 1st image (odd) is 0-10 kHz and 0+10 kHz: the target one. The 2nd (even) is 48 kHz-10 kHz and 48 kHz+10 kHz. The 3rd (odd) is 96 kHz-10 kHz and 96 kHz+10 kHz. The 1st image is the one the DAC is meant to output; the 2nd and higher are images to be removed. That's why your available bandwidth is usually up to 24 kHz: fs/2.

In the Tek interleaving system, the two DACs' odd images have the same phase, but their even images have opposite phase. So simple addition leaves only the odd images: the 2nd image (38 kHz and 58 kHz) disappears. In other words, the output of the adder is the same as data sampled at 2x fs. Two 48 kHz-sampled DACs can act as one 96 kHz-sampled DAC through an adder, not a switch. If your sampling frequency is very high, that's advantageous.
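For what it's worth, the image cancellation can be checked numerically. Here is a rough numpy sketch of the addition-based interleave using the 48 kHz / 10 kHz numbers above; the oversampled simulation grid and the ideal zero-order-hold DAC model are my own assumptions, not Tek's implementation. Two ZOH DACs are fed the same tone on grids offset by half a period, and their sum kills the 38 kHz image that each DAC shows on its own:

```python
import numpy as np

fs, f0 = 48_000, 10_000     # per-DAC sample rate and test tone
M, N = 16, 960              # sim points per DAC period; DAC samples (20 ms)

# Each DAC holds its sample for a full period Ts (zero-order hold).
# DAC B samples the tone half a period later and its output is delayed Ts/2.
a = np.repeat(np.sin(2 * np.pi * f0 * np.arange(N) / fs), M)
b = np.repeat(np.sin(2 * np.pi * f0 * (np.arange(N) + 0.5) / fs), M)
combined = a + np.roll(b, M // 2)   # roll is safe: the record is periodic

freqs = np.fft.rfftfreq(combined.size, d=1.0 / (M * fs))
level = lambda sig, f: (
    np.abs(np.fft.rfft(sig))[np.argmin(np.abs(freqs - f))] / sig.size
)

print(level(a, 38_000) / level(a, 10_000))                 # single DAC: big image
print(level(combined, 38_000) / level(combined, 10_000))   # summed: image gone
```

The first ratio is the 48 kHz − 10 kHz image each DAC produces alone; after the addition that image drops to numerical noise, while the odd images around 96 kHz remain, exactly as the description above says.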
 
Someone please give a tip for a good flip-flop for making a shift register, to adapt serial to parallel data for a discrete DAC?

I soldered one piece with 4 x 74HC164 8-bit shift registers. Working fine, but I think it could be much better with flip-flops. And that would give +data and -data parallel streams, if the FF has Q and -Q outputs?

Looking at the stuff people use in this thread, it's really non-standard: either FPGA-based and solid, or too small to hand-solder (for most), or a different logic family / voltage levels, etc.

This topic of which flip-flop to pick has gone on and on in the DSC1 thread (or another one?); best you ask there, I'm afraid.

Last but not least: IIRC the DSC1s usually use the HCT family, supposedly for lower distortion or other reasons, and those parts seem to be hard to get.

Good luck Zoran!
 
In most DACs bits are processed on the rising edge of the clock pulse, so I can't work out how the above two statements make sense.

What am I missing?

In addition: it's highly unlikely that Tektronix uses CMOS, so probably it's some ECL family they use. With ECL all signals are usually differential, so clocking on the opposite edge is just a matter of connecting the clock to the inverted clock input; it makes no difference to the flip-flop, it's just a routing thing.
 
As far as I have understood, the Tek architecture requires an adder instead of an analog switch to interleave two DACs because their application is a very high sampling rate. If your fs is 48 kHz and the input is 10 kHz, the 1st image (odd) is 0-10 kHz and 0+10 kHz: the target one. The 2nd (even) is 48 kHz-10 kHz and 48 kHz+10 kHz. The 3rd (odd) is 96 kHz-10 kHz and 96 kHz+10 kHz. The 1st image is the one the DAC is meant to output; the 2nd and higher are images to be removed. That's why your available bandwidth is usually up to 24 kHz: fs/2.

In the Tek interleaving system, the two DACs' odd images have the same phase, but their even images have opposite phase. So simple addition leaves only the odd images: the 2nd image (38 kHz and 58 kHz) disappears. In other words, the output of the adder is the same as data sampled at 2x fs. Two 48 kHz-sampled DACs can act as one 96 kHz-sampled DAC through an adder, not a switch. If your sampling frequency is very high, that's advantageous.

In addition: it's highly unlikely that Tektronix uses CMOS, so probably it's some ECL family they use. With ECL all signals are usually differential, so clocking on the opposite edge is just a matter of connecting the clock to the inverted clock input; it makes no difference to the flip-flop, it's just a routing thing.

Thanks for the answers, but I still can't fathom its method of working - my main stumbling block being the negative-phase output from one DAC? Sorry if I'm being stubbornly stupid.

But I found this Tektronix PDF (interleaved DACs in AWGs, a bit further in), which explains in Fig. 13 that the input signal is delayed by half a clock & phase shifted to the DAC, which also runs on a delayed clock: http://www.tek.com/dl/76W_29216_0_MR_Letter.pdf

Always another question though :) - is this not the same as delaying the negative-phase analog output of a differential-output DAC by half a clock?

Why isn't this a commonly used approach in DACs? There are other stated advantages besides increased bandwidth, according to Tek:
  • The disadvantage of using images is that the amplitude will be greatly reduced, which will affect the S/N ratio.
  • DAC interleaving also results in a signal-to-quantization noise ratio (SQNR) improvement. Quantization noise from both DACs is uncorrelated, so total noise power doubles while signal power quadruples. As a result, SQNR is improved by 3 dB, just as expected of an ideal DAC running at twice the speed.
.
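That last 3 dB figure is just power arithmetic: summing the two DACs doubles the signal amplitude (4x power) while the two uncorrelated quantization-noise powers merely add (2x), and 10·log10(4/2) ≈ 3 dB. A quick Monte-Carlo sanity check of my own, modelling each DAC's quantization error as independent uniform noise (the step size q is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 1_000_000, 1 / 256          # sample count; quantizer step (arbitrary)
sig = np.sin(2 * np.pi * 0.01 * np.arange(n))

# model each DAC's quantization error as independent uniform noise
na = rng.uniform(-q / 2, q / 2, n)
nb = rng.uniform(-q / 2, q / 2, n)

snr = lambda s, e: 10 * np.log10(np.mean(s**2) / np.mean(e**2))
single = snr(sig, na)              # one DAC on its own
summed = snr(sig + sig, na + nb)   # interleaved pair: signal adds coherently
print(summed - single)             # ~3 dB
```

This only models the uncorrelated-noise assumption Tek states; in a real interleaved pair the two error sequences also contain correlated nonlinearity, so the measured improvement would fall short of the ideal 3 dB.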
 