Philips Engineers

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Benchmark says intersample overs cause HF noise bursts. Not clear that old men are going to be able to hear them. :)

EDIT: It may not have been a problem before part of the reconstruction process was made digital. So long as the reconstruction filter has sufficient headroom above digital full scale for peaks between samples, it shouldn't be a problem.
 
About the Benchmark article ... it's an internal, corporate White Paper. It would be ideal if Benchmark submitted the paper to AES (or other). Or, at least, have their "TEST METHODOLOGY" (and results) confirmed by an independent agency w/o financial bias.

Benchmark notes:
It is possible to build interpolators that will not clip or overload, but this is not being done by the D/A and SRC chip manufacturers. For this reason, Benchmark has moved some of the digital processing outside of the D/A chip.

Methinks that the original ADC and DAC and DF designers would not have ignored such an issue if it were obvious, relevant or important.

And, lastly, a minor point: what about inter-sample undershoots? These "shoots" also miss the mark. But at 4x oversampling (and above), the reconstructed signal is good enough, and the analog reconstruction filter can smooth it out as necessary.
 
Inter-sample overs are a consequence of the "loudness war." Before the war, recordings had enough headroom to guarantee no clipping, so when oversampling filters were first developed there was no need to worry about this problem. Recent CDs with "rail-to-rail" amplitude no longer have enough headroom to be free from inter-sample overs.

I did a simple OS simulation on a recent CD file, from 44.1k to 88.2k (2x OSR). See post #17.
Solving the intersample DAC clipping problem for about ten euros
Recent CDs create inter-sample overs in playback, and such files may have already clipped in the mastering process. I'm sure inter-sample overs now exist everywhere, though I don't know whether they are audible.
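Such a worst case is easy to reproduce in a few lines. Here is a minimal sketch (pure Python with ideal sinc interpolation; my own illustration, not the post #17 simulation): an 11.025 kHz tone at 44.1 kHz, with its samples normalized to 0 dBFS, has a true waveform peak about 3 dB above full scale.

```python
import math

fs = 44100.0
f = fs / 4              # 11.025 kHz, the worst-case tone
n_samples = 64

# Sample a sine whose 45-degree phase puts every sample at ~0.707 of the
# true peak, then normalize so the *samples* just reach digital full scale.
phase = math.pi / 4
x = [math.sin(2 * math.pi * f * n / fs + phase) for n in range(n_samples)]
peak_sample = max(abs(v) for v in x)
x = [v / peak_sample for v in x]        # samples now peak at exactly 1.0 FS

# Ideal band-limited (sinc) interpolation on an 8x finer grid reveals the
# true waveform peak that lies *between* the samples.
def sinc_interp(x, t):
    return sum(v * math.sin(math.pi * (t - n)) / (math.pi * (t - n))
               if t != n else v
               for n, v in enumerate(x))

grid = [n / 8 for n in range(8 * 8, 8 * (n_samples - 8))]   # skip edges
true_peak = max(abs(sinc_interp(x, t)) for t in grid)
overs_db = 20 * math.log10(true_peak)
print(f"true inter-sample peak: {true_peak:.3f} FS (+{overs_db:.2f} dB)")
```

The samples themselves never exceed 0 dBFS, so a sample-value clip indicator stays dark, yet the reconstructed waveform is roughly sqrt(2) (about +3 dB) over full scale.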
 
Speaking of loudness wars, way back before digital audio, in my early teens, I used to wonder why 45 rpm singles sounded so bad compared to LPs. It was only in the past few years that I heard it was because radio stations mostly played 45s, and the record companies knew even then that louder sells more. Apparently, mastering engineers were told to make them as loud as they could in order to make them competitive on the radio.
 
hollowman said:
Methinks that the original ADC and DAC and DF designers would not have ignored such an issue if it were obvious, relevant or important.
To get overshoot problems you need high level audio (near clipping for a significant part of the time), digital reconstruction filters, and a DAC used at full scale. Remove any one of these and the problem goes away. CD originally had moderate audio levels and analogue filters, so two of the necessary conditions were not present. When Philips introduced oversampling (i.e. a digital filter) they still had the benefit of moderate audio levels.

Now that DACs have improved over what was available back then, it seems to me that the simplest way to avoid the problem is to ensure that the digital filter and DAC have a wider word size than the audio stream and then use them below full scale. This compromises on distortion and noise floor, but avoids clipping. Alternatively, listen to different music.
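A back-of-the-envelope sketch of that trade-off (the numbers are illustrative; sqrt(2) is the theoretical worst-case inter-sample peak, and 3.1 dB is the figure quoted elsewhere in this thread):

```python
import math

def headroom_gain(db):
    """Linear gain corresponding to a given dB of digital headroom."""
    return 10 ** (-db / 20)

# Theoretical worst-case inter-sample peak: sqrt(2) ~ +3.01 dB (fs/4 tone)
worst_case_peak = math.sqrt(2)

g = headroom_gain(3.1)              # ~3 dB cut ahead of the interpolator
peak_after = worst_case_peak * g
print(f"peak after 3.1 dB cut: {peak_after:.3f} FS")   # below 1.0: no overflow

# The cost: each 6.02 dB of headroom spends one bit of SNR, which is why a
# wider internal word (e.g. 24-bit audio in a 32-bit path) hides the loss.
snr_cost_bits = 3.1 / (20 * math.log10(2))
print(f"SNR cost: ~{snr_cost_bits:.2f} bits")
```

So about half a bit of SNR buys immunity from the worst-case over, provided the filter and DAC word is wider than the incoming audio.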
 
Now that DACs have improved over what was available back then, it seems to me that the simplest way to avoid the problem is to ensure that the digital filter and DAC have a wider word size than the audio stream and then use them below full scale. This compromises on distortion and noise floor, but avoids clipping. Alternatively, listen to different music.

There was a proposal to design a circuit that would LSB-justify the data and extend the MSB, though I think the issue then was settling time. Probably not much use for it now.
 
About the putative "overshoot" issue with the digital filter in the playback device ...

Isn't it (the DF) just seeing (processing) "strings of numbers" in the digital domain?
Why should it care about "loudness"?

I looked at the data sheets of several DFs and saw no mention of an "inter-sample overshoot" or "headroom" parameter, or even some sort of "sample dynamic range" metric.

The improved performance reported in Benchmark's white paper may be due to digitally lowering the volume before the DF, which allows the electronic components to operate more linearly. To get a more accurate assessment, perhaps Benchmark should take measurements at the I2S output of the DF (which isn't possible, AFAIK, in the ESS DAC they use).
 
They use an SRC, which I'm pretty sure also has room in its DSP to configure the volume cut. So if you looked at the I2S coming out of the SRC...

And, yes, treating something like the 4490 as a 1 V out, 23-bit DAC erodes the SNR by 6 dB but improves the distortion notably. One does have a good amount of SNR to work with in these modern DACs.

The loudness wars have the mix being pushed up to the MSB far, far more frequently, so the probability of hitting an intersample over is much higher than with something not compressed into oblivion.
 
hollowman said:
About the Benchmark article ... it's an internal, corporate White Paper. It would be ideal if Benchmark submitted the paper to AES (or other). Or, at least, have their "TEST METHODOLOGY" (and results) confirmed by an independent agency w/o financial bias.

There is nothing technically wrong in their white paper; it is only straightforward math. As I thought, the 3.1 dB number comes from the worst possible case: an 11.025 kHz signal (at a 44.1k sample rate) at 3.1 dB over full scale. The question is how this gets onto the CD in the first place (they actually found only one of these in 5000 CDs). If you were feeding an analog signal chain into a 44.1k ADC, presumably the clipping indicator would be blinking away on the continuous analog signal.
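For reference, the quoted worst case works out like this (a quick check, assuming the textbook fs/4 construction):

```python
import math

# A tone at exactly fs/4 (11.025 kHz at 44.1 kHz), phased so every sample
# lands at sin(45 deg) = 0.707 of the true peak. Normalizing those samples
# to 0 dBFS pushes the continuous waveform sqrt(2) above full scale:
over_db = 20 * math.log10(math.sqrt(2))
print(f"worst-case inter-sample over: +{over_db:.2f} dB")   # ~3.01 dB
```

That is 3.01 dB by the pure math; the 3.1 dB figure presumably allows for a tone slightly off fs/4.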

The music example was 3.7 overs/sec, with 0.8 dB the worst. Again, IMO the problem rests with the record production. If you downsample and then normalize, you are effectively creating the clipping in the later upsampled data. Simply normalizing at the maximum sample rate and then downsampling would reduce the problem dramatically (or eliminate it).

None of this changes the fact that many pop CDs are packed with true hard clipping these days.
 
@hollowman, The digital filters probably use fixed-point math. The filter math is scaled so that the biggest number that can be represented in the DSP registers without overflow is the same as digital FS for 24-bit audio. The problem then arises when two close-together-in-time incoming samples are close to or at FS. The reconstruction math calculates (or tries to calculate) that there is a peak between the two samples extending above digital FS, which causes a calculation overflow. The result, sound-wise, is a burst of HF noise each time it happens.
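A hypothetical sketch of why that overflow sounds like a glitch rather than soft clipping (assuming a two's-complement accumulator that wraps on overflow; real chips vary, and some saturate instead, as noted later in the thread):

```python
# 24-bit two's-complement arithmetic, as a digital filter DSP might use.
BITS = 24
FS = 2 ** (BITS - 1)             # 8388608 = digital full scale (signed)

def wrap24(v):
    """Two's-complement wraparound of an overflowing 24-bit result."""
    return ((v + FS) % (2 * FS)) - FS

# A legal near-full-scale value passes through unchanged...
print(wrap24(FS - 1))            # 8388607

# ...but an interpolated inter-sample peak computed at 1.2x full scale
# wraps to a large value of the OPPOSITE sign: a full-scale glitch.
peak = int(1.2 * (FS - 1))
print(wrap24(peak))              # large negative number
```

The output jumps from near positive full scale to deep negative in one sample, which is a broadband (HF-rich) transient, consistent with the "bursts of HF noise" description.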
 
Analog VU meters may never show the problem: gain may have been added after digitization (though not always).

Sometimes ADCs are intentionally clipped to create loudness in a way that sounds different from limiting. Sometimes the digital level is reduced a bit after clipping the ADC in order to reduce excessive additional distortion on playback.

Most digital clipping indicators do not show intersample overs. They must be running an internal reconstruction filter to know when it happens. However, digital meters that do catch intersample overs are becoming more common.
 
Sometimes ADCs are intentionally clipped to create loudness in a way that sounds different from limiting. Sometimes the digital level is reduced a bit after clipping the ADC in order to reduce excessive additional distortion on playback.

If this is fair game, who cares about the overs? My cheap digital field recorders have an analog clipping indicator at +/- full scale; VU meters on digital recorders make no sense.
 
hollowman said:
About the putative "overshoot" issue with the digital filter in the playback device ...

Isn't it (the DF) just seeing (processing) "strings of numbers" in the digital domain?
Why should it care about "loudness"?
The issue with digital filters, as with all filters (apart from trivial first-order filters), is that sometimes the maximum output can be bigger than the maximum input. This is of no consequence with an analogue filter but can push a digital filter beyond full scale at the output if the input is at or near full scale.

Simple explanations of digital audio often give the impression that the reconstruction filter merely puts a smooth curve between the sample points, like a low order low pass filter. Serious reconstruction filters are high order.

Another way of looking at this is to consider a square wave, of peak amplitude 1. Now use a unity gain filter to remove all harmonics, just leaving the fundamental. The peak amplitude of the fundamental is 4/pi (IIRC), which is greater than 1. Overshoot is real.
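A quick numerical check of the 4/pi figure (just integrating the fundamental Fourier coefficient of a unit square wave; nothing beyond the standard series):

```python
import math

# Fundamental Fourier coefficient b1 of a unit square wave sign(sin t):
# b1 = (1/pi) * integral over one period of sq(t) * sin(t) dt = 4/pi
N = 200_000
dt = 2 * math.pi / N
b1 = sum((1.0 if math.sin(k * dt) >= 0 else -1.0) * math.sin(k * dt)
         for k in range(N)) * dt / math.pi
print(f"fundamental peak: {b1:.4f}")   # ~1.2732 = 4/pi, greater than 1
```

So a unity-gain filter that merely removes harmonics leaves a signal whose peak is about 27% higher than the input's: overshoot without any gain.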
 
My cheap digital field recorders have an analog clipping indicator at +- full scale, VU meters on digital recorders makes no sense.

If you have an oversampling ADC on board, you could in theory still get an overshoot problem in the digital decimation chain that your analogue meter won't see, like in DF96's square wave example - although musical waveforms typically don't look like square waves at all. In any case, for field recording you need to keep plenty of headroom anyway, just in case someone is going to shout or a musician is going to play a bit louder than expected.

What surprises me most about that Benchmark story is that they describe the Steely Dan recording with 3.7 intersample overs per second as "a spectacular CD recording with lots of dynamics and a low noise floor." Apparently it is not a severely compressed loudness war type of recording.
 
Skeptical about Benchmark's story

What surprises me most about that Benchmark story is that they describe the Steely Dan recording with 3.7 intersample overs per second as "a spectacular CD recording with lots of dynamics and a low noise floor." Apparently it is not a severely compressed loudness war type of recording.
I'm skeptical of Benchmark's claims about that CD. What they describe as "intersample overs" in that recording (if they are indeed claiming they're not deliberate) may simply be the result of intentional DSP sound editing. Many modern recordings are heavily (and deliberately) processed (e.g., in Pro Tools), and this processing can leave behind "digital footprints" (artifacts) that can be interpreted in MANY ways.
I.e., the producers, artists, Spotify, and fans WANT that type of sound ... just as Metallica wanted the compressed sound on Death Magnetic.

BTW, the Benchmark "paper" makes a fairly bold claim here:

FAULTY D/A AND SRC CHIPS
Every D/A chip and SRC chip that we have tested here at Benchmark has an intersample clipping problem! To the best of our knowledge, no chip manufacturer has adequately addressed this problem. For this reason, virtually every audio device on the market has an intersample overload problem. This problem is most noticeable when playing 44.1 kHz sample rates.
 
Steely Dan recording with 3.7 intersample overs per second as "a spectacular CD recording with lots of dynamics and a low noise floor." Apparently it is not a severely compressed loudness war type of recording.

I would await a serious evaluation of actual audibility. As shown, their graphics show a simple clip, not an LSB rollover and full-scale glitch. I question the audibility of 3.7 glitches per second (at most 9.6% of full scale, using their numbers) that appear only during a near-full-scale transient. Yes, it's wrong, but I think they are exaggerating the issue with respect to many recordings.

And I repeat my original opinion that the problem is a lack of deep understanding in the creation of the source. Every recording should initially be done in a way that holds up at any oversampling frequency, i.e., future-proof. This is not that different from the fact that some early CDs were 16-bit undithered, as the great professors from Canada pointed out many years ago.

@hollowman I think you're overstating your objections; you can always feed unrealistic computer-generated signals to the DSP and get garbage out. Their white paper might be self-serving, but it is not technically wrong.
 
I should've been clearer:

Modern monolithic DAC chips are multi-function devices, i.e., DF + D/A. In addition, many have built-in I/V conversion (i.e., they are voltage-out) and may even contain an internal volume control. In a way, they are a big "black box", and that's what makes me question Benchmark's conclusions. That is: after lowering the gain before the DF, they conclude that the better measured performance is because of their "HIGH-HEADROOM INTERPOLATION".
 