Ideas wanted -- USB DAC product

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I'm an engineer with a PhD, not a dark artist. I care about specs, not how people interpret them.

For example, dcs and aries carat both have horrible specs. Only man among that list has some good figures.

I don't mind spending $20k on a top of the line audio analyzer or spending 500 hours to build one, but I only care about measured electrical data.

I would recommend getting used gear on ebay. My jitter measurement system alone is $130K new.

But I do like your last point -- dropping the headphone amp and letting the user figure out how to add their own.

High-end headphone amps are usually tube based. I am designing a fully balanced one right now, but it's SS.

Measurements are certainly important, but after 16 years designing in this industry, following 25 years in the computer industry, I have found that measurements are insufficient. This is not because they couldn't in principle characterize everything; it's because the current state of the art in audio measurement is woefully inadequate. Just read some of the Stereophile reviews. The reviewer hears a difference in a cable and there is no measurement that correlates to this or even shows it. Even newer effects like jitter are not correlated to sound quality. This is something I am working to change, BTW.

Given that measurements are inadequate at this point in time, it is extremely important that you assemble an audio system that is highly resolving, with low noise and low distortion so you can do listening tests and "tune" your products by changing parts and adding parts, mainly decoupling caps. You will need a system that images superbly, with good focus, depth and width. You will need to acoustically treat your listening room, or design one from scratch, including power considerations. You will need a system with no ground-loops in it. I have personally spent 20 years doing this and I have that now. Most of the equipment in my system is designed by me and if not, it is highly customized/modified by me. I found this necessary because I literally found no products that delivered the dynamics, clarity and imaging that I needed.

How did I get good at modding? I spent 10 years modding other companies' products, including amps, preamps, DACs and digital audio interfaces. I sold 40 mods in all. Through this process, I discovered the way NOT to do a lot of design and implementation. I even learned bad practices from Sony products. You would think they would know what they are doing. I identified many fine passive components that are key to performance in my products, a few from Australia, UK, Europe and Asia. Many of these are hand-made or contain exotic elements. The appeal of these passives is that they perform closer to the textbook-ideal characteristics.

I also came up with a few novel inventions and implementation techniques that are my trade secrets. These are the kinds of ideas you will need to separate yourself from the pack, otherwise you will be just another cookbook designer. It's the reviews that will propel you into a successful high-end audio business, not having a product with lots of appealing features. Audiophiles talk to each other and they know what is good and what is not. Expect to spend a lot exhibiting at shows, and try to get your equipment into several rooms. I personally spend very little on advertising. Most of my products are sold word-of-mouth. This way I can charge less and be the giant-killer on the block.

You need to have a strategy. I agree with the other posters. A $2-3K DAC is not really that high-end. You will be competing against PSAudio and others that crowd this market. Most of the high-performing DACs are in the $10-15K range. The latest crop are using multi-bit ladder designs, not Sigma-Delta or single-bit. A lot more work involved in designing one of these.

Final advice: License and purchase technology in which you are not an expert. Nobody is expert in everything. I license Ethernet firmware, several generations of USB firmware, DC regulator and other technologies because I am not an expert in those areas.

Steve N.
Empirical Audio
 
When designing an SDM DAC from scratch, there's another form of challenge, namely inter-symbol interference (ISI), which basically means the current symbol's output level depends on the previous symbol. This causes auto-correlation, hence multiplicative noise, which results in THD degradation. For this reason, our SDM DAC has a blank pulse insertion mechanism to isolate output DAC bitstream symbols from each other. This decreases DC gain by ~1.9dB. IMHO, greatly reducing THD at the cost of 1.9dB of DR loss is a good deal.

ISI usually also messes up the noise floor (including the idle channel noise), not just the THD.

Your remark about the "blank pulse insertion mechanism" makes me rather curious. Is this simply a return-to-zero DAC with 80 % duty cycle, a PWM-like circuit that avoids patterns with 0 % or 100 % duty cycle, or some exotic proprietary circuit you can't tell us anything about until the patent is granted?
 
The SDM is designed not to be aggressive, so the time constants are relatively short and any idle tone falls far beyond the hearing range.

The ISI mitigation is an 80% duty cycle RZI encoder (the I part cancels with the inverting op-amp I/V converter to give an in-phase output). In addition, there's a PWM stage before the RZI stage that gives "one" (actually 0.58) extra bit of resolution (so we can use 8 differential output elements instead of 16 for a 4-bit thermometer-code DAC), clocked at 12.288Msps.

Our output data stream:

b0~b3: data[0], b4: 0, b5~b8: data[1], b9: 0.

Each bit corresponds to 8.138ns and is clocked out by a 122.88MHz master clock. The reason I want to operate my master clock close to 122.88MHz is that telecom systems use that frequency, and most low-jitter PLLs and XOs are rated for it.
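
To make the framing concrete, here's a small Python sketch of the 10-bit frame described above: two symbols per frame, each serialized as four PWM slots plus one forced-zero guard slot at the 122.88MHz bit clock. The 0..4 PWM-level mapping is my own assumption for illustration, not the actual encoder.

```python
MASTER_CLK_HZ = 122_880_000
BITS_PER_FRAME = 10                                # b0..b9
FRAME_RATE_HZ = MASTER_CLK_HZ // BITS_PER_FRAME    # 12,288,000 frames/s

def pwm_symbol(level: int) -> list[int]:
    """Encode a level 0..4 as 'level' ones followed by zeros (PWM),
    ending in a guard zero so the waveform always returns to zero."""
    assert 0 <= level <= 4
    return [1] * level + [0] * (4 - level) + [0]   # 5 bit-slots total

def frame(sym_a: int, sym_b: int) -> list[int]:
    """b0~b3: data[0], b4: 0, b5~b8: data[1], b9: 0."""
    return pwm_symbol(sym_a) + pwm_symbol(sym_b)

bit_period_ns = 1e9 / MASTER_CLK_HZ                # ~8.138 ns per bit slot
bits = frame(3, 4)                                 # [1,1,1,0,0, 1,1,1,1,0]
```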
 
All filters with feedback (SDM and IIR) have this potential phase reversal, which causes ringing and, in the worst case, parasitic oscillation. The oscillation frequency is determined by the feedback delay. The more aggressively the filter is designed (higher feedback coefficients, longer time constants, etc.), the easier it is to provoke an oscillation. With a shorter delay, noise attenuation/shaping at lower frequencies will be worse, but the oscillation frequency shifts higher. This is the fundamental reason why we oversample the input to 12.288Msps -- to widen the gap between passband and stopband, so the modulator has less to worry about.

Right now the quantizer is basically splitting 32-bit modulated words into 16 zones, from 0 to 15. The 16 states are mapped onto 16 output elements, 0x0 turning no elements on and 0xf turning on all but one. Which particular elements get turned on is decided by a random generator -- this is the scrambler.

The quantizer output feeds back to the SDM as the delta part.
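
A minimal Python sketch of that quantize-and-scramble step (my own simplification for illustration; the real RTL details will of course differ):

```python
import random

N_ELEMENTS = 16

def quantize(word: int) -> int:
    """Split a 32-bit unsigned modulator word into 16 zones (0..15)
    by taking the top 4 bits."""
    return (word >> 28) & 0xF

def scramble(level: int, rng: random.Random) -> list[int]:
    """Turn on 'level' of the 16 unit elements, chosen at random each
    sample, so element mismatch averages out to noise instead of tones."""
    on = set(rng.sample(range(N_ELEMENTS), level))
    return [1 if i in on else 0 for i in range(N_ELEMENTS)]

rng = random.Random(0)
level = quantize(0xF000_0000)      # zone 15: all but one element on
elements = scramble(level, rng)
```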
 
For my understanding: are you writing about large-signal oscillations, a.k.a. overload cycles, or about small peaks in the output spectrum of a normally working sigma-delta, which is what I know as idle tones?

You can prevent large-signal oscillations by limiting state variables or by resetting, as you already do according to one of your earlier posts.

Idle tones in IIR filters can be avoided altogether by using dither wherever numbers are rounded (1 LSB peak-peak uniform dither or 2 LSB peak-peak triangular if you also want to get rid of noise modulation). In a single-bit sigma-delta, you can't do that because there are too few quantization levels available, but it's very well possible in a multibit sigma-delta.
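
For what it's worth, a TPDF-dithered rounding step can be sketched in a few lines of Python (a generic illustration of the idea, not tied to any particular modulator):

```python
import random

def dithered_round(x: float, rng: random.Random, lsb: float = 1.0) -> float:
    """Round with 2-LSB peak-to-peak triangular (TPDF) dither: the sum
    of two independent uniform +-0.5 LSB values. This decorrelates the
    rounding error from the signal and suppresses noise modulation."""
    tpdf = (rng.random() - 0.5) * lsb + (rng.random() - 0.5) * lsb
    return round((x + tpdf) / lsb) * lsb

rng = random.Random(1)
# A constant input no longer rounds to a constant output, so the
# repetitive patterns that cause idle tones never form; the long-run
# average still converges to the input value.
outs = [dithered_round(0.4, rng) for _ in range(1000)]
```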

Of course in a multibit sigma-delta you have to make sure that the algorithm that selects the unit elements doesn't generate tones of its own. With a random generator that shouldn't be an issue, though.
 
I'm writing about large signal oscillation. Idle tone is not my biggest concern for a 4th order SDM.

I'm currently not much into dithering, as I thought the SDM would automagically take care of the residual error, and I was not able to trigger this idle tone in my simulation. But since you just reminded me of this dithering technique, I will try it later. Thanks for the heads up.

The DEM is of course driven by an LFSR random number generator with configurable feedback tap connection.
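
Such an LFSR is only a few lines of logic; here's a software model (the taps follow the common x^16 + x^14 + x^13 + x^11 + 1 maximal-length polynomial as an example; the actual tap configuration is of course register-selectable in hardware):

```python
def lfsr_step(state: int, taps=(16, 14, 13, 11), width: int = 16) -> int:
    """One step of a Fibonacci LFSR. Taps are numbered from the output
    (LSB) end, so tap 'width' is the output stage itself. With a
    maximal-length tap set the state repeats every 2**width - 1 steps."""
    fb = 0
    for t in taps:
        fb ^= (state >> (width - t)) & 1
    return (state >> 1) | (fb << (width - 1))

state = 0xACE1            # any nonzero seed works
stream = []
for _ in range(16):
    state = lfsr_step(state)
    stream.append(state & 1)   # pseudo-random bit for the DEM scrambler
```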
 
Thunderbolt is a wonderful protocol, but again, I see no reason to use it. USB2.0 provides enough bandwidth for many, many channels of 32/768 audio, yet its latency is not too high. Sure, Thunderbolt has virtually no latency since it is just PCIe, but the extra overhead caused by USB is less than 1ms anyway.

Besides, Thunderbolt, which was promised by Intel as an open protocol, remains closed, and you need to sign an NDA and talk with an Intel representative before being able to access their documents. Unlike USB, you can't just grab an interface chip and start working on it.
And where is the trouble in a day or two of delay for a great protocol?
USB is also proprietary and you need to pay royalties - although I don't remember the actual cost.

In my experience as a user of Thunderbolt's predecessor (Firewire), USB is like a poorly designed toy in comparison. Firewire is far superior in speed (especially FW800), stability and overall user experience - they abandoned a great protocol for a garbage one. Even the USB connectors cause trouble for millions of users trying to find the right side each time (they could have made them work in both orientations, or with no orientation at all - were they designed by sloppy/low-IQ developers or what?).


In this stratospheric price range, DACs would not come with headphone amps built in, that would be too much of a compromise.
What if the hp amp can honor the DAC's great specs and be considered top-notch/reference?


'Ultra high-end' is not a spec classification in the audio world, it's a marketing term. If you're wanting to sell on specs alone, good luck convincing the customer base that great specs are a guarantee of great sound. You'll need to spend more on marketing that proposition than on DAC R&D and fancy test gear, I reckon.

Incidentally if you go over to MSB's website armed with your PhD you might notice their figures are fudged. The fact that this passes completely unnoticed by the consuming public might tell you something....

Seems to me you'll want to open up a completely new market, one of believers in specs over and above reviewers and dealers and related hangers-on.

Very interesting views. I'm a spec guy too, and though I agree, I think there is a market of people who appreciate great specs, as long as you can convince them that the specs are true and exceptional - e.g. verified by one or more independent companies/organizations.

Now, the ultimate way to convince most customers would be to devise a test that can be repeated and verified by a large number of people, proving the superiority of your product over those with inferior specs - and to publish the official results. That would also shock the market and draw attention :)
 
And where is the trouble in a day or two of delay for a great protocol?
USB is also proprietary and you need to pay royalties - although I don't remember the actual cost.

In my experience as a user of Thunderbolt's predecessor (Firewire), USB is like a poorly designed toy in comparison. Firewire is far superior in speed (especially FW800), stability and overall user experience - they abandoned a great protocol for a garbage one. Even the USB connectors cause trouble for millions of users trying to find the right side each time (they could have made them work in both orientations, or with no orientation at all - were they designed by sloppy/low-IQ developers or what?).

FW/TBT require more than a few weeks or months of work. FW is dead; nobody uses it anymore besides the nostalgic guys. TBT is essentially PCIe, so I need to develop a PCIe-enabled gadget. PCIe-enabled FPGAs are expensive, and I need to pay a four-figure licensing fee for the PCIe controller IPs, plus an equally expensive membership fee to use the TBT interface (TBT wraps PCIe in its own protocol, so I need to license both technologies).

USB is royalty-free, with all documents available for free and no non-waived patents. The only fee is for a proper USB vendor ID, which costs $3500, but there are cheap sources of VID/PID pairs banned by the USB consortium -- the IDs still work and are still unique. Also, if I want to put the USB-compliant logo on the product, I need USB-IF membership, which is $2000 per year. In my case, I don't care about the logo, so I don't need to join USB-IF.

Overall, there's no fee for me to use USB besides the time invested on writing my own USB audio class 2.0 driver stack.

FWIW, my design uses a reversible USB Type-C connector, so even if someone wants to plug it in the wrong way around, it's not possible.

If anyone can't plug the USB plug into the computer in the right orientation, then they are not my customer. This is prosumer-grade gear, not for some 16-year-old kid wanting a boombox connected to their phone. For this reason, I'm not going to pay the Apple tax for MFi.

What if the hp amp can honor the DAC's great specs and be considered top-notch/reference?

Yes, but there are technical complications. The DAC has a master digital volume control, so the signal applied to an integrated HPA is already attenuated. If I want independent volume control between the master out and the HPA, I have to add a PGA chip, and even that requires that the master output never be totally muted. Also, PGA chips are noisier and less linear than my digital volume control, so the headphone output would be crappy anyway. If I want truly independent, lossless volume control, I need a second physical DAC, which almost doubles the cost.

Very interesting views. I'm a spec guy too, and though I agree, I think there is a market of people who appreciate great specs, as long as you can convince them that the specs are true and exceptional - e.g. verified by one or more independent companies/organizations.

Now, the ultimate way to convince most customers would be to devise a test that can be repeated and verified by a large number of people, proving the superiority of your product over those with inferior specs - and to publish the official results. That would also shock the market and draw attention :)

I am also working on my own open-source audio analyzer that focuses on detecting extremely low residual harmonics for THD and IMD characterization. DR (basically noise) is relatively easy to measure. The design will be released under an open-source license, so everyone can use it.

However, I'm not going to compare measured data against competitors' products. It's considered very offensive and uneducated. Let the reviewers do the comparison.
 
Mola-mola looks like the one to beat.

It looks to me like they rated DR as SNR. IMHO there's no way to get 140dB of real SNR from a single-bit DAC (assuming a 1kHz test tone).

Also, their "not measurable" THD claim doesn't square with their -150dB estimate. "Not measurable by the latest AP" means less than roughly -120dB. To measure lower, you need to design a filter bank for each harmonic band, but that's doable and can get you down to -140dB and lower.

Curious - where did you get that idea from?

Liability. If I publish test data with wording that suggests my testing is absolutely accurate, then should my measurement method turn out to be flawed, I get into trouble.
 
It looks to me like they rated DR as SNR. IMHO there's no way to get 140dB of real SNR from a single-bit DAC (assuming a 1kHz test tone).

Is your 'humble opinion' founded on some reasoning or evidence? I'd be interested in hearing what's led you to having this opinion.

Also, their "not measurable" THD claim doesn't square with their -150dB estimate. "Not measurable by the latest AP" means less than roughly -120dB. To measure lower, you need to design a filter bank for each harmonic band, but that's doable and can get you down to -140dB and lower.

An FFT is a filter bank; it lets you discern harmonics below -120dB. At least it did when I used an AP (well over a decade ago now), and that was one of the original dual-domain models. But you do need to suppress the fundamental with the AP's own notch filter, otherwise the ADC itself contributes artifacts.

Liability. If I publish test data with wording that suggests my testing is absolutely accurate, then should my measurement method turn out to be flawed, I get into trouble.

You're not responding here to what I was asking about. You'd said it was 'offensive' and 'uneducated'. Not 'opening myself to liability to litigation'.
 
Is your 'humble opinion' founded on some reasoning or evidence? I'd be interested in hearing what's led you to having this opinion.

I stand corrected. I did a calculation, and it seems 140dB SNR is possible if they did everything right.

From their website, it seems they are using an interpolation filter to get the sample rate up to 3.125Msps, then using an SDM to dither that into 4-bit PWM (100/3.125 = 32; since ISI mitigation requires inserting at least one zero between symbols, they can achieve up to 16 intensity levels, hence 4 bits). The PWM part gives them ~24dB of SNR, which means that to get their claimed 140dB SNR (leaving some margin for the analog circuitry, let's say the digital part has 150dB SNR), the SDM needs to provide 126dB of noise shaping.

I don't know how many orders of SDM they've used, but I've never seen any commercial SDM using more than fifth order (HyperStream) due to stability issues, so assuming a 5th order SDM, they need an oversample ratio of at least 32x.

In reality, SDM modulators have a trade-off between noise shaping and stability, so I would assume they need a considerably less aggressive SDM to keep their 5th-order system stable. Based on a 64x oversampling assumption, at a 3.125Msps SDM input data rate, the NOS sample rate should be no more than 48ksps, representing up to ~24kHz.
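
The arithmetic above in one place (just restating this post's own numbers, with my round-figure assumptions):

```python
# SNR budget sketch for the hypothesized architecture (my assumptions,
# not the manufacturer's published design).
pwm_bits = 4
pwm_snr_db = 6.02 * pwm_bits          # ~24 dB from the 4-bit PWM alone
digital_target_db = 150.0             # margin above the claimed 140 dB
shaping_needed_db = digital_target_db - pwm_snr_db   # ~126 dB

sdm_rate_sps = 3.125e6
osr = 64                              # assumed oversampling ratio
base_rate_sps = sdm_rate_sps / osr    # ~48.8 ksps -> ~24 kHz bandwidth
```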

An FFT is a filter bank; it lets you discern harmonics below -120dB. At least it did when I used an AP (well over a decade ago now), and that was one of the original dual-domain models. But you do need to suppress the fundamental with the AP's own notch filter, otherwise the ADC itself contributes artifacts.

Exactly. In the OPA1612 datasheet they mention the THD floor of the AP2, and that's basically what you can do without an analog filter bank (i.e., just using the digital AP in its stock configuration).

I would say testing a DAC is easier than testing an opamp because you don't have to generate a perfect sine wave. Still, you need an analog filter bank to get rid of the ADC's residual noise, even if your FFT has unlimited bin count.

The highest model in the AP product line can only do -117dB without a custom analog filter bank. If that's the basis of their "unmeasurable" claim, then it says nothing about their -150dB figure.

Single-bit DACs tend to be very linear, and I have no doubt that they can get it right, but lower than -136dB just seems weird to me, because that's about as low as the best opamps go, unless there's an opamp with lower THD than the OPA1612.

You're not responding here to what I was asking about. You'd said it was 'offensive' and 'uneducated'. Not 'opening myself to liability to litigation'.

Breaking commercial laws on unfair competition is offensive, and educated people don't resort to breaking laws. It's both a matter of manners and a legal requirement.
 
From their website, it seems they are using an interpolation filter to get the sample rate up to 3.125Msps, then using an SDM to dither that into 4-bit PWM (100/3.125 = 32; since ISI mitigation requires inserting at least one zero between symbols, they can achieve up to 16 intensity levels, hence 4 bits).

ISTM PWM is inherently RTZ so do they still need a whole symbol's worth of zeroes in between and not just ensure they keep the modulation below 100%? (I'm no expert in this field so my question might be rather too naive).

The PWM part gives them ~24dB of SNR, which means that to get their claimed 140dB SNR (leaving some margin for the analog circuitry, let's say the digital part has 150dB SNR), the SDM needs to provide 126dB of noise shaping.
ISTM (again perhaps naively) that the SNRs are taking into account the whole bandwidth but Bruno's firstly doing a 32 tap transversal filter then subsequent analog filtering. If he no longer needs a whole symbol's silence between symbols then the first number approaches 30dB right?

In reality, SDM modulators have this trade off between noise shaping and stability, so I would assume they need considerably less aggressive SDM to keep their 5th order system stable. Based on a 64x oversampling assumption, at 3.125Msps SDM input data rate, the NOS sample rate should be no more than 48ksps, or represents up to 24kHz.
In Ncore, Bruno implemented a 5th-order modulator in the analog domain, so I'd guess he has something of a handle on the stability issues.

I would say testing a DAC is easier than testing an opamp because you don't have to generate a perfect sine wave. Still, you need an analog filter bank to get rid of the ADC's residual noise, even if your FFT has unlimited bin count.
It's the notch filter (Bruno talks about this in one of his blog posts) that circumvents the ADC dynamic-range (SFDR) issues.
 
It isn't a commercial product, but I have a DIY sigma-delta here that I can switch between a fifth-order and a strongly chaotic seventh-order mode.

The noise floor of a sigma-delta DAC is normally set by anything but shaped quantization noise: analogue circuit noise, clock jitter, ISI due to incomplete settling, crosstalk from the sigma-delta bitstream to the reference and the clock, you name it.
 
ISTM PWM is inherently RTZ so do they still need a whole symbol's worth of zeroes in between and not just ensure they keep the modulation below 100%? (I'm no expert in this field so my question might be rather too naive).

ISTM (again perhaps naively) that the SNRs are taking into account the whole bandwidth but Bruno's firstly doing a 32 tap transversal filter then subsequent analog filtering. If he no longer needs a whole symbol's silence between symbols then the first number approaches 30dB right?

I don't quite understand -- did you mean he has an ADC that samples the DAC's output back and uses that to correct his filter's tap coefficients?

Even if that works, how does it relate to ISI? You still need to insert blanks to mitigate ISI, unless you have a calibration table for it.

I mean, a PWM generator producing 10000000 doesn't output exactly half the energy of the same generator producing the 11000000 pattern.

Therefore, you need to insert a zero after every symbol bit: to convey 11000000, you actually generate 1010000000000000, so that each symbol returns to zero.
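
That stuffing step is trivially expressible in code -- a sketch of the transformation described above:

```python
def insert_blanks(symbol_bits: str) -> str:
    """Force a zero after every PWM bit so the line always returns to
    zero between bits, making each bit's energy independent of its
    neighbours (the ISI mitigation described above)."""
    return "".join(b + "0" for b in symbol_bits)

# The example from the post: 11000000 becomes 1010000000000000.
encoded = insert_blanks("11000000")
```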

It's the notch filter (Bruno talks about this in one of his blog posts) that circumvents the ADC dynamic-range (SFDR) issues.

Yes. A notch to push the 1kHz test tone down as far as possible, then a series of BPFs to pick out the intended harmonics.

Can you drop me a link to his blog? I think I can learn a lot from his posts.
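
To show the principle, here's a pure-Python sketch of such a fundamental-suppressing notch (an RBJ-cookbook biquad; a real analyzer would use an analog notch ahead of the ADC, plus band-pass stages for each harmonic):

```python
import math

def biquad_notch(fs: float, f0: float, q: float):
    """RBJ audio-EQ-cookbook notch coefficients, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def filt(b, a, x):
    """Direct-form II transposed biquad."""
    y, w1, w2 = [], 0.0, 0.0
    for s in x:
        out = b[0] * s + w1
        w1 = b[1] * s - a[1] * out + w2
        w2 = b[2] * s - a[2] * out
        y.append(out)
    return y

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

fs = 48_000
b, a = biquad_notch(fs, 1_000, q=5)
fund = [math.sin(2 * math.pi * 1_000 * n / fs) for n in range(fs)]
h2 = [math.sin(2 * math.pi * 2_000 * n / fs) for n in range(fs)]
# Measure steady state (second half) so the startup transient has died out:
# the 1 kHz fundamental is crushed while the 2 kHz harmonic passes intact.
atten_fund_db = 20 * math.log10(rms(filt(b, a, fund)[fs // 2:]) / rms(fund))
atten_h2_db = 20 * math.log10(rms(filt(b, a, h2)[fs // 2:]) / rms(h2))
```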
 
I don't know how many orders of SDM they've used, but I've never seen any commercial SDM using more than fifth order (HyperStream) due to stability issues, so assuming a 5th order SDM, they need an oversample ratio of at least 32x.

In reality, SDM modulators have a trade-off between noise shaping and stability, so I would assume they need a considerably less aggressive SDM to keep their 5th-order system stable. Based on a 64x oversampling assumption, at a 3.125Msps SDM input data rate, the NOS sample rate should be no more than 48ksps, representing up to ~24kHz.


I have successfully implemented a 7th-order DSM, an exact copy of the one in the book "Understanding Delta-Sigma Data Converters", on an Artix-7 FPGA. My circuit is a 7th-order version of Fig. 4.27 on page 110. You can easily calculate the necessary coefficients with Matlab or Python.

A DSM has stability problems when the input becomes large, especially a 1-bit DSM. Once oscillation occurs, it can't recover from the abnormal state even if the input returns to zero. An external hardware reset is needed to clear the abnormality. But this is simple logic and easy to implement. The latest DSMs described in the book above do not need to worry about the stability problem; they are stable and excellent. I think it's better to use the book rather than developing your own original DSM.
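
To illustrate the reset idea in miniature, here's a plain 2nd-order MOD2 sketch in Python (not the 7th-order CIFB topology from the book; the limit value is arbitrary):

```python
def mod2(x, limit=8.0):
    """Second-order single-bit delta-sigma modulator with a hard reset:
    if either integrator exceeds 'limit', both are cleared -- the
    overload-recovery mechanism described above, in its simplest form."""
    i1 = i2 = 0.0
    out = []
    for s in x:
        v = 1.0 if i2 >= 0 else -1.0    # 1-bit quantizer
        i1 += s - v                     # first integrator
        i2 += i1 - v                    # second integrator
        if abs(i1) > limit or abs(i2) > limit:
            i1 = i2 = 0.0               # hardware-reset equivalent
        out.append(v)
    return out

# The bitstream average tracks a DC input within the quantization noise.
y = mod2([0.3] * 10_000)
```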
 