Is There a "Standard" for CD Antialias Filtering?

In the recording chain leading up to any digital media there has to be an implicit or explicit antialiasing filter. (OK, I guess you COULD leave it out, but from at least an information theory standpoint you wouldn't want to. I'm not going to quibble over nuances like that.)

Is there a formal standard, or even a working consensus, for the effective characteristics of the antialiasing filter in CD-quality audio? Yeah, we know it wants to be "almost flat" to about 20 kHz and "reject everything" above 22.05 kHz. You can figure that out by the second day of your "Introduction to Digital Signal Processing" class. And I know that in a modern recording chain the overall antialias function will probably be distributed among analog and digital sections, so you can't point to a handful of components and call it "THE antialiasing filter". ("Oversampling" and "Decimation" don't show up until at least the 3rd week; maybe halfway through the class.)

But in practical terms, does the performance of the CD's antialias filter mean -3 dB at 20 kHz and -20 dB above 22.05 kHz; or -1 dB at 20 kHz with -40 dB or better above 22.05 kHz; or something else? Where can I find a reference?
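For a rough feel of what those candidate numbers would demand, here is a sketch (my own illustration using scipy; the two specs are just the hypothetical examples above, not figures from any published standard) that estimates the analog elliptic filter order each would require:

```python
# Sketch: estimate the analog filter order needed for two hypothetical
# CD anti-alias specs. The specs are the examples from the question,
# not figures from any Philips/Sony document.
import numpy as np
from scipy import signal

f_pass = 20000.0   # edge of the audio band, Hz
f_stop = 22050.0   # fs/2 for 44.1 kHz sampling, Hz

specs = [
    ("-3 dB at 20 kHz, -20 dB above 22.05 kHz", 3.0, 20.0),
    ("-1 dB at 20 kHz, -40 dB above 22.05 kHz", 1.0, 40.0),
]

for name, rp, rs in specs:
    # ellipord returns the minimum elliptic filter order that meets the
    # spec; analog=True means an s-domain prototype with rad/s frequencies.
    order, _ = signal.ellipord(2 * np.pi * f_pass, 2 * np.pi * f_stop,
                               rp, rs, analog=True)
    print(f"{name}: elliptic order ~ {order}")
```

Even the tolerant-sounding version of the spec forces a fairly steep filter, simply because the transition band is only about 2 kHz wide.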

I thought Philips/Sony had published guidelines for this, or even a formal spec for anybody who wanted to paste the "CD" logo on their product, back around 1980, but after running my search engine at full throttle all evening I can't find it.

Dale
 
The alias images are mirrored about fs/2, so you need to reject as much of the image as possible. For CD you have a "guard space" of 44.1/2 − 20 = 2.05 kHz below and above fs/2. That means the first alias image starts in full at 20 + 2×2.05 = 24.1 kHz; put the other way round, any input content above 24.1 kHz folds straight back into the audio band.
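To make the folding arithmetic concrete, here is a small sketch (my own illustration; the test frequencies are arbitrary) of where an unfiltered input tone lands after sampling at 44.1 kHz:

```python
# Sketch: where does an input tone alias to when sampled at fs = 44.1 kHz
# with no anti-alias filter? Aliases mirror about multiples of fs/2.
fs = 44100.0

def alias_freq(f_in, fs=fs):
    """Frequency observed after sampling a tone at f_in Hz."""
    f = f_in % fs                      # fold into [0, fs)
    return fs - f if f > fs / 2 else f

for f_in in (21000.0, 23000.0, 24100.0, 25000.0, 30000.0):
    print(f"{f_in/1e3:5.1f} kHz in  ->  {alias_freq(f_in)/1e3:6.2f} kHz out")

# 24.1 kHz folds to exactly 20.0 kHz; anything above it lands inside the
# audio band, so the filter must be fully attenuating by 24.1 kHz to
# protect signals below 20 kHz.
```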

The only way to make a real analog anti-aliasing filter work is to raise the sampling frequency in order to separate the images. That way you don't need a brick-wall filter but a more doable one - every DAC manufacturer has a recommended filter in their datasheet.
Even from the first DACs made by Philips and Sony, OS was seen as necessary (4x as a minimum). They didn't "publish guidelines"; they made the first DACs for the CD format, and they made them 4x.
And... digital filtering is not possible without oversampling.
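To put a number on how much the higher sampling rate relaxes the analog filter, here is a sketch comparing 1x and 4x sampling (assuming, arbitrarily, a Butterworth response with 1 dB of passband ripple to 20 kHz and 60 dB of stopband attenuation; the figures are illustrative, not from any datasheet):

```python
# Sketch: how oversampling relaxes the analog anti-alias filter.
# Stopband edge is taken as fs - 20 kHz, the point where aliases start
# to land back inside the audio band.
import numpy as np
from scipy import signal

f_pass = 20e3
rp, rs = 1.0, 60.0   # illustrative passband ripple / stopband attenuation, dB

for label, fs in (("1x, fs = 44.1 kHz", 44.1e3),
                  ("4x, fs = 176.4 kHz", 176.4e3)):
    f_stop = fs - f_pass                     # 24.1 kHz vs 156.4 kHz
    order, _ = signal.buttord(2 * np.pi * f_pass, 2 * np.pi * f_stop,
                              rp, rs, analog=True)
    print(f"{label}: Butterworth order ~ {order}")
```

With these made-up numbers the 1x case comes out at around order 40, while the 4x case needs only about 4 - which is the whole point of moving the brick wall into the digital domain.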
 
There's a CD 'Red Book' that is supposed to talk about such specs, probably somewhere on the 'net.
Then again, nothing stops anyone from implementing what they think is the 'best' filter.
Some here run their DACs without any anti-aliasing filter and accept all the HF noise above 22 kHz, and even insist it sounds better that way. YMMV.

jan didden
 
Does that mean there is oversampling during recording (with a relaxed antialiasing filter), and the resulting digital stream is downsampled to 44.1 kHz before CD mastering?

Yes, all the multibit ADCs that I know of use an oversampling plus anti-alias filter combo, analog + digital.
For modern delta-sigma (single-bit) ones, the OS is implied.
If the aliasing error were only ultrasonic noise, I could see not using a filter. But unfortunately there are alias products that get "folded" back into the audio band.
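As a minimal sketch of the digital half of that combo (the 4x factor, the test tones and the use of scipy's polyphase resampler are my own assumptions for illustration): a 4x-oversampled capture is low-pass filtered with a linear-phase FIR and then decimated to 44.1 kHz, which is what kills the content that would otherwise fold back.

```python
# Sketch: 4x-oversampled capture decimated to 44.1 kHz.
import numpy as np
from scipy import signal

fs_os = 4 * 44100              # oversampled rate, 176.4 kHz
t = np.arange(fs_os) / fs_os   # one second of signal

# 1 kHz in-band tone plus a 30 kHz ultrasonic tone that would alias to
# 44.1 - 30 = 14.1 kHz if we simply kept every 4th sample.
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)

# resample_poly applies a linear-phase FIR anti-alias filter before
# decimating by 4 - the "digital" part of the analog + digital combo.
filtered = signal.resample_poly(x, up=1, down=4)
naive = x[::4]                 # decimation with no digital filter at all

for name, y in (("filtered", filtered), ("naive", naive)):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), d=1 / 44100)
    k = np.argmin(np.abs(freqs - 14100))   # bin of the would-be alias
    print(f"{name}: 14.1 kHz alias at {20 * np.log10(spec[k] / spec.max()):6.1f} dB"
          " relative to the 1 kHz tone")
```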
 
Thank you, Jan.
There's a CD 'Red Book' that is supposed to talk about such specs, probably somewhere on the 'net.
I found many references to the "Red Book" but didn't find any actual quotes, or even a formal title for it. And I couldn't find the document itself.


Then again, nothing stops anyone from implementing what they think is the 'best' filter.
One nice thing about "standards" is that you can always create a new one. Or at least arrogantly claim that your opinion should be treated as a "standard".

Dale
 
Some here run their DACs without any anti-aliasing filter and accept all the HF noise above 22 kHz, and even insist it sounds better that way. YMMV.

I believe you're thinking of the anti-imaging filter, not the anti-aliasing. Other end of the digestive tract. :D

If the anti-aliasing and anti-imaging filters had been standardized by Red Book (they weren't), it would have allowed phase-correct CDs to be made. The phase shifts probably aren't that audibly significant, but if they could have been avoided, then why not?
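For a feel of the size of those phase shifts, here is a sketch (a 7th-order digital elliptic low-pass with 0.5 dB ripple and 60 dB stopband, an arbitrary stand-in for a sharp non-linear-phase anti-alias filter, nothing taken from the Red Book) that prints the group delay across the audio band:

```python
# Sketch: group delay of a sharp IIR anti-alias filter near the band edge.
# The filter is an arbitrary 7th-order elliptic, chosen only to illustrate
# how the delay rises toward 20 kHz.
import numpy as np
from scipy import signal

fs = 44100
b, a = signal.ellip(7, 0.5, 60, 20000, fs=fs)   # low-pass at 20 kHz

freqs = np.array([1000, 5000, 10000, 15000, 19000, 20000], dtype=float)
w, gd = signal.group_delay((b, a), w=freqs, fs=fs)

for f, d in zip(w, gd):
    print(f"{f/1e3:5.1f} kHz: group delay {d:6.1f} samples ({d/fs*1e6:7.1f} us)")
# The delay is far from constant near 20 kHz - the kind of phase behaviour
# that a standardized filter could, in principle, have been corrected for.
```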
 
SoNic_real_one said:
They standardized just the bare minimum - the CD's
When designing a long-term system architecture, standardising the bare minimum is the best option. The CD acts as the interface between the recorder and player. By defining the interface it leaves plenty of scope for improvements in technology. This is how railways, roads, the telephone system and the internet work - define the interface, leave the rest flexible.

Sometimes people are tempted to make the interface definition flexible, with scope for future changes. My experience in IT is that this almost always fails: it creates complications, and it nearly always turns out that you soon want a change which you haven't allowed for. An example of poor interface design: almost any Microsoft file format, which is why they cause such trouble with intergenerational compatibility.
 
Even from the first DACs made by Philips and Sony, OS was seen as necessary (4x as a minimum).

Well, Sony's first player (CDP-101) had no oversampling and used an analogue brick-wall filter.

Philips' first player (CD100) did use 4x oversampling (with noise shaping), but this was mainly to achieve 16-bit performance from the 14-bit DAC (TDA1540).

And there were plenty of 2x oversampling filters in use back in the mid-1980s (e.g. the Sony CX23034 used in the CDP-302ES, CDP-502ES etc.).
 
Sony used a brick-wall filter in their first product, then they quickly moved to OS. Is that a sign that analog brick-wall filtering was working? Or was it just a sign that the technology wasn't yet capable of making DACs fast enough to cope with an OS signal? Sony needed to have something out quickly, in competition with Philips.
To process OS you need faster settling times in the DAC stage, with low distortion. That needed some time to develop.

Or... why do you think they moved as fast as they did to 2x, 4x, 8x, 16x??? A conspiracy to make bad sound?
 
Fascinating chapter of industrial history. Obviously consortia contain contradictions, with each party looking to take up a differentiated position on the same bandwagon.

Sony's insistence on 44.1 kHz and 16 bits was a bit last-minute. That suggests to me that there can have been no agreed exact standard for filtering during the recording process.

Perhaps Sony had less to gain from oversampling because their DACs needed a downstream audio switching stage that required stringent filtering anyway. Also, the PWM current output was timed by a clock running at over 512 times the sample rate, which was a limiting feature. In their players using the TDA1541 I think they followed Philips' practice.

Who made the machines used in the recording process? I would guess that de facto standards evolved in the industry.

Ian
 