Philips Engineers

I would await a serious evaluation of actual audibility. As shown, their graphics show a simple clip, not an LSB rollover and full-scale glitch. I question the audibility of 3.7 glitches per second (to use their numbers, a maximum of 9.6% of full scale) that appear only during a near-full-scale transient. Yes, it's wrong, but I think they are exaggerating the issue with respect to many recordings.

I don't know whether it is audible either. Establishing that would require well-controlled double-blind tests, which would definitely be disputed if the result were that it is not audible. It also depends a lot on the recordings used, of course.

In any case, it certainly is silly to have high-performance audio equipment clip unnecessarily, so I like to avoid it whether it's audible or not.

And I repeat my original opinion that the problem is a lack of deep understanding in the creation of the source. Every recording should be made from the start to work at any oversampling frequency, i.e. future-proof. This is not that different from the fact that some early CDs were 16-bit undithered, as the great professors from Canada pointed out many years ago.

Better mastering techniques would definitely be the best solution to the issue, but given that recordings are made the way they are made, in my opinion, audio DACs should be designed to deal with that.
 
Better mastering techniques would definitely be the best solution to the issue, but given that recordings are made the way they are made, in my opinion, audio DACs should be designed to deal with that.

There are DACs that won't play a simple 1 kHz full-scale sine wave without clipping (which has nothing to do with inter-sample overs), so there are worse things going on. BTW, I own a copy of the very first Sony/Philips test CD (brought back from Japan in 1982); I should check the music samples. I already know it's undithered.
 
Yes, both reports are theoretically true; you can get the same result as long as your measurement conditions are correct. Many people want 0 dBFS operation of their DAC, but it's not a wise setup. Recent commercial DACs often have better THD+N at -3 dBFS than at 0 dBFS. Furthermore, operating at -3 dBFS keeps you free of inter-sample overs. The advantages of -3 dBFS outweigh the disadvantages. My setup is usually between -3 dBFS and -9 dBFS. Why don't you use -3 dBFS? :)
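As a rough illustration of where that 3 dB figure comes from (my own sketch in Python with numpy/scipy, not taken from either report): a sine at exactly fs/4, sampled with a 45-degree phase offset so every sample lands at 0.707 of full scale, reconstructs to a waveform roughly 3 dB above its sample peaks once those samples are normalized to 0 dBFS.

import numpy as np
from scipy.signal import resample_poly

fs = 44100
f = fs / 4                        # 11.025 kHz, the worst-case tone at 44.1 kHz
n = np.arange(4096)
# 45-degree offset: every sample sits at 0.707 of the continuous peak
x = np.sin(2 * np.pi * f * n / fs + np.pi / 4)
x /= np.max(np.abs(x))            # "normalize" the sample peaks to 0 dBFS

y = resample_poly(x, 8, 1)        # 8x oversampling approximates reconstruction

print("sample peak: %+.2f dBFS" % (20 * np.log10(np.max(np.abs(x)))))
print("true peak:   %+.2f dBFS" % (20 * np.log10(np.max(np.abs(y)))))
# expect roughly +3 dB true peak, hence the ~3 dB of headroom discussed above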
 
This is exactly the same material as the Benchmark white paper. Read carefully: they normalize computer-generated signals, essentially creating (in the case of 11.025 kHz) a virtual input 3 dB over full scale.

The fact that we are talking about digital overs as a consequence of normalizing in 2018 is a complete joke and typical of the evolution of digital audio. Did they think all this through.....?

Any normalization process, presumably done by whatever DAW (Pro Tools etc.), should at least conceptually upsample to, say, 384 kHz and then do the normalizing at 48 kHz based on what would come OUT of a digital filter.

Surely that is not too difficult.

T
 
Any normalization process, presumably done by whatever DAW (Pro Tools etc.), should at least conceptually upsample to, say, 384 kHz and then do the normalizing at 48 kHz based on what would come OUT of a digital filter.

Surely that is not too difficult.

Especially since, as I mentioned earlier, the same problem with analog anti-imaging filters was well known in the beginning.
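A minimal sketch of that upsample-then-normalize idea, in Python with numpy/scipy (not anything an actual DAW does, and the -0.3 dBTP target is just an assumption): estimate the peak of what would come out of a reconstruction filter and derive the gain from that, rather than from the raw sample peaks.

import numpy as np
from scipy.signal import resample_poly

def true_peak_normalize(x, oversample=8, target_dbtp=-0.3):
    # Estimate the reconstructed ("true") peak by oversampling, then scale the
    # original samples so that estimated peak sits at target_dbtp.
    y = resample_poly(x, oversample, 1)
    true_peak = np.max(np.abs(y))
    gain = 10 ** (target_dbtp / 20) / true_peak
    return x * gain, 20 * np.log10(true_peak)

# Example: sample peaks at 0 dBFS, true peak about +3 dB
n = np.arange(44100)
x = np.sin(2 * np.pi * 11025 * n / 44100 + np.pi / 4)
x /= np.max(np.abs(x))
x_norm, tp_db = true_peak_normalize(x)
print("true peak before: %+.2f dBFS" % tp_db)
print("sample peak after: %+.2f dBFS" % (20 * np.log10(np.max(np.abs(x_norm)))))

For what it's worth, the ITU-R BS.1770 true-peak meter works along similar lines, oversampling by 4x before looking for the peak.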
 
Did they think all this through.....?

What hasn't been mentioned is that no one realised how lossy compression would take over. As Benchmark note, MP3s can be a source of a huge number of sample overs. Again, easily fixed with ReplayGain.
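At playback the ReplayGain fix is just a fixed linear gain read from the file's tag; a minimal sketch in Python (the gain and preamp values are placeholders, not from any real tag) looks like this, and since loud modern masters typically carry several dB of negative gain, it also restores the headroom that overs need.

import numpy as np

def apply_replaygain(samples, gain_db, preamp_db=0.0):
    # samples: float array scaled to [-1, 1]; gain_db: value from the ReplayGain tag
    gain = 10 ** ((gain_db + preamp_db) / 20)
    return samples * gain          # typically well below unity for loud modern masters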

Back in the day, of course, CD was the brave new world where the limitations of vinyl could be thrown out and true full-range, high-dynamic-range recordings could be made, ushering in a new realism to domestic replay. Then something went wrong.
 
The reports are not really the same. Especially when it comes to each report's conclusions.

They are the same technical issues around down/up-sampling and normalization. It is amusing that CoolEdit had a warning for this, and it's how old now, more than 20 years?

BTW, I checked the CBS/Sony test disc (1983): not only is it undithered 16-bit, but there are some clips and potential overs on some of the music samples. OTOH, most of the samples don't look anything like current CDs; they have real dynamic range, so the flaws are rare.
 
I do agree that ISOs can be generated given the 0dBFS+ condition.
Benchmark notes "INTERSAMPLE OVERS ARE A COMMON OCCURRENCE IN CD RECORDINGS", and proceeds to give several infographic examples, such as the Steely Dan album. I'm not sure if what they are calling "ISO's" (red spikes) are really ISO's. But for the sake of argument, let's say they were. What if those "ISO's" were the result of DSP manipulation (as Tham pointed out)? This may be some kind of efx, or reverb, or eq.
Then, trying to compensate for the anomalies ("ISOs") in the playback device would change the sonic characteristics of the music -- not what the artist wanted! After all, this is Steely Dan -- known for their Grammy-award-winning engineering.

As Tham noted: " 0dBFS+ levels can be created through subsequent manipulation or processing of a digital recording. For example, if a digital recording is "amplified" (by multiplying each digital sample by a constant amount) it may then subsequently lead to 0dBFS+ levels."

BTW: The pro-audio community (producers, engineers) has known about ISOs (aka inter-sample peaks, ISPs) for quite some time:
Intersample Overs and Peak ceiling - Gearslutz Pro Audio Community
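For anyone who would rather scan their own rips than take the infographics at face value, here is a rough inter-sample-over counter in Python (my own sketch, assuming the numpy, scipy and soundfile packages; the file name is a placeholder, and this is not Benchmark's exact method):

import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

def count_intersample_overs(x, oversample=4):
    # x: float samples in [-1, 1]; returns the number of oversampled points
    # above digital full scale and the estimated true peak in dBFS.
    y = resample_poly(x, oversample, 1, axis=0)   # approximate reconstruction
    peaks = np.abs(y)
    return int(np.count_nonzero(peaks > 1.0)), 20 * np.log10(peaks.max())

x, fs = sf.read("some_track.flac")                # placeholder file name
overs, tp_db = count_intersample_overs(x)
print("%d oversampled points above 0 dBFS, true peak %+.2f dBFS" % (overs, tp_db))
# For very long files, process in overlapping blocks to keep memory in check.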
 
Then, trying to compensate for the anomalies ("ISOs") in the playback device would change the sonic characteristics of the music -- not what the artist wanted! After all, this is Steely Dan -- known for their Grammy-award-winning engineering.

That seems rather bizarre, as it would mean that the artist wants it to sound different depending on what DAC is used to play it back.
 
That seems rather bizarre, as it would mean that the artist wants it to sound different depending on what DAC is used to play it back.

They want people to like it and buy it. If it is not reasonably competitively loud, it will be a problem and the record company will not approve the master. There has been no perfect solution available.

At least mastering for iTunes includes setting levels so that iPhones will not be pushed into excessive distortion. Maybe it will have some influence on the rest of the industry; we will have to see.
 
Got it: you were thinking about the digital signal processing as such, I was thinking about DACs and mastering practices.

Right, I meant that if it really bothered you, you could rip all your CDs and convert to floating point normalized to 32768.0000; nothing at all is lost there. Then there are lots of good software SRC solutions to upsample and create 24-bit files at, say, 176.4 kHz or something like that, renormalized after all the computations are done.
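A rough sketch of that offline route in Python, assuming the soundfile and scipy packages (file names are placeholders; soundfile already does the divide-by-32768 when it converts the 16-bit samples to float):

import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

x, fs = sf.read("ripped_cd_track.wav")   # 16-bit rip read as float in [-1, 1]
y = resample_poly(x, 4, 1, axis=0)       # 44.1 kHz -> 176.4 kHz

# Renormalize after the SRC math, leaving a small margin below 0 dBFS
peak = np.max(np.abs(y))
if peak > 1.0:
    y *= 0.999 / peak

sf.write("upsampled_track.wav", y, 4 * fs, subtype="PCM_24")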
 