John Curl's Blowtorch preamplifier part II

I have heard playback of a tape multitrack mixdown, and the same stereo signal recorded to CD.
The CD version lost much 'life' and 'presence' when compared to the tape version.
The differences were not subtle.

Dan.

Yes. There is no arguing that point. Others can try, but I cannot be convinced otherwise. Going off on a tangent about bit error correction for CDs et al. is just a smoke-screen as far as I am concerned. The issue is still why LP and CD do not sound like the original master tape recording. There have been a few replies that are accurate about this, but not down to any hardware/software specifics that we can go after. So, I surmise from experience that a big sound changer occurs when compression and limiters are used. And they are mainstream in usage.
And I might include compression codecs in here too.

Thx-RNMarsh
 
BTW I described earlier how complex the audio coding on a CD is and that there's no chance in hell that one could selectively physically modify pits or lands to 'upgrade' your audio.

It is actually much more complex than I described, so read and shiver:

"Each audio sample is a signed 16-bit two's complement integer, with sample values ranging from −32768 to +32767. The source audio data is divided into frames, containing twelve samples each (six left and right samples, alternating), for a total of 192 bits (24 bytes) of audio data per frame.

This stream of audio frames, as a whole, is then subjected to CIRC encoding, which segments and rearranges the data and expands it with parity bits in a way that allows occasional read errors to be detected and corrected. CIRC encoding also interleaves the audio frames throughout the disc over several consecutive frames so that the information will be more resistant to burst errors. Therefore, a physical frame on the disc will actually contain information from multiple logical audio frames. This process adds 64 bits of error correction data to each frame. After this, 8 bits of subcode or subchannel data are added to each of these encoded frames, which is used for control and addressing when playing the CD.

CIRC encoding plus the subcode byte generate 33-bytes long frames, called "channel-data" frames. These frames are then modulated through eight-to-fourteen modulation (EFM), where each 8-bit word is replaced with a corresponding 14-bit word designed to reduce the number of transitions between 0 and 1. This reduces the density of physical pits on the disc and provides an additional degree of error tolerance. Three "merging" bits are added before each 14-bit word for disambiguation and synchronization. In total there are 33 × (14 + 3) = 561 bits. A 27-bit word (a 24-bit pattern plus 3 merging bits) is added to the beginning of each frame to assist with synchronization, so the reading device can locate frames easily. With this, a frame ends up containing 588 bits of "channel data" (which are decoded to only 192 bits music).

The frames of channel data are finally written to disc physically in the form of pits and lands, with each pit or land representing a series of zeroes, and with the transition points—the edge of each pit—representing 1."

jan

(From Wikipedia: http://en.wikipedia.org/wiki/Red_Book_(audio_CD_standard) )
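To tie the quoted figures together, here is a quick back-of-the-envelope check (nothing beyond the arithmetic already in the quote) of how 192 bits of music per frame become 588 channel bits on the disc, and what that does to the bit rate:

```python
# Back-of-the-envelope check of the Red Book frame arithmetic quoted above
# (only the figures from the quote, nothing authoritative added).

samples_per_frame = 12                        # 6 left + 6 right, 16 bits each
audio_bytes = samples_per_frame * 2           # 24 bytes = 192 bits of music
circ_parity_bytes = 8                         # 64 bits of CIRC error-correction data
subcode_bytes = 1                             # 8 bits of subcode
channel_bytes = audio_bytes + circ_parity_bytes + subcode_bytes   # 33 bytes

efm_bits = channel_bytes * (14 + 3)           # EFM: each byte -> 14 bits + 3 merging bits
sync_bits = 24 + 3                            # sync word plus its merging bits
frame_bits = efm_bits + sync_bits             # 588 channel bits per frame

print("audio bits per frame  :", audio_bytes * 8)              # 192
print("channel bits per frame:", frame_bits)                   # 588
print("overhead factor       : %.4fx" % (frame_bits / (audio_bytes * 8)))

audio_rate = 44_100 * 2 * 16                  # 1.4112 Mbit/s of 16/44.1 stereo audio
channel_rate = audio_rate * frame_bits / (audio_bytes * 8)
print("channel bit rate      : %.4f Mbit/s" % (channel_rate / 1e6))   # ~4.3218
```

In other words, roughly two-thirds of what is physically on the disc is synchronisation, error-correction and modulation overhead rather than music, which is exactly why selectively 'editing' pits by hand is a non-starter.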
Yes, I've known of the old 8-to-14 coding for 24 years. From what I have seen under the microscope, you can burn a CD with pits and lands that have a more defined edge than those on mass-produced CDs. Stampers wear out, and that edge becomes not as crisp as it should be. You're physically modifying the pit, I guess, by cutting the edges in a much more defined way than a worn stamper does. So from a production standpoint the stamping is not up to the Red Book standard. This is where the quality of production falls far short of the book. There may be no chance in hell of doing what I said, but then again I'm not in hell any more. Regards
 
Richard,
Surely you understand the phase shift that compressors and limiters introduce into the signal. The distortion from this step alone would be audible, beyond just the change in dynamic range. Apart from radio, I don't understand why everyone insists on using these methods; give me the dynamic range of the original piece. Now, if we are doing PA, limiters are going to save some devices from going up in smoke, but why a soft passage needs to be as loud as a loud passage in music I just don't understand.
 
They are getting pretty close. The Zetex DDFA chips (such as those used in the NAD gear) leave only the final output-stage power transistors before the speaker cable, and even those have direct digital feedback to the Zetex, but methinks the modulator is still an analogue device. All very black-box though, 100% un-DIY-friendly.
Philips/NXP had the clarity (and integrity even) to refer to their power output chip for use with digital-domain modulators as a "One-Bit D-A Converter" iirc. And that's just what it is.

What does "direct digital feedback" mean in this Zetex/NAD context? If it is somehow encoding both voltage level and timing information then that's not digital. If it is an A/D converter producing the bits then the output stream's timing and accurate encoding of voltages is crucial. And there are a lot of problems with that approach, particularly for IM-like distortions, according to some with whom I've spoken.

TI doesn't talk about it openly, but will in unguarded moments admit that they had to add a form of analog feedback (details undisclosed) to their popular line of "digital" amp chips to improve performance, particularly with respect to power supply rejection.*

(general remarks not addressed to anyone in particular): Digital is wonderful for symbolic processing and data storage, and once you're in that domain, get as much done there as you can. But be mindful of the requirements on timing, and don't suppose because something is characterized for symbol processing --- say as prosaic as a NAND gate --- that using that hardware for production of an analog signal (for which in some circumstances it may work just fine) makes the signal prima facie a digital one.



*I heard this fairly recently from a third party who was pitching some firmware to them, so I don't think it is in violation of a very old NDA.
 
Richard,
Surely you understand the phase shift that compressors and limiters introduce into the signal. The distortion from this step alone would be audible, beyond just the change in dynamic range. Apart from radio, I don't understand why everyone insists on using these methods; give me the dynamic range of the original piece. Now, if we are doing PA, limiters are going to save some devices from going up in smoke, but why a soft passage needs to be as loud as a loud passage in music I just don't understand.

I do understand why they do it... besides commercial attention to adverts --- a huge amount of listening is done in noisy places (like cars), and so a lot of the sound could not be heard in these environments at all if it wasn't reduced to a smaller dynamic range.

[OK SY, give us the details and differences on compression algorithms and compressed dynamic range and limiting etc... all used on the raw recording... I know you are going to do it.]

The effects are not limited to just limiting and dynamic range; real distortion (harmonic and IM) is added, the amount depending on the method used.
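As a rough illustration of that point, here is a toy example (my own sketch, with arbitrary levels and sample rate; a real compressor or limiter with attack and release acts more gently, but the mechanism, gain that depends on level, is the same): hard-limit a 1 kHz tone and estimate the harmonic content that appears.

```python
# Toy illustration of the harmonic distortion added by hard limiting: clip a
# 1 kHz sine at a few thresholds and estimate the THD from its spectrum.
# (Illustrative only; not a model of any particular limiter or compressor.)
import numpy as np

fs = 96_000
f0 = 1_000
n = fs                                   # one second of signal, 1 Hz bins
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * f0 * t)

def thd(sig, n_harmonics=10):
    """Root-sum-square of harmonics 2..N relative to the fundamental."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
    fund = spec[f0 - 2:f0 + 3].max()
    harm = [spec[k * f0 - 2:k * f0 + 3].max() for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harm))) / fund

for threshold in (1.0, 0.9, 0.7, 0.5):
    clipped = np.clip(tone, -threshold, threshold)
    print("limit at %.1f of peak -> THD ~ %5.2f %%" % (threshold, 100 * thd(clipped)))
```

The untouched tone shows essentially nothing, while clipping the top of the waveform generates a whole ladder of odd harmonics; gentler gain-riding spreads the same kind of by-products around more politely, but it does not make them disappear.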

Thx-RNMarsh
 
Why do I care about the difference between the raw master and playback medium? Because that raw master sounds a lot more like real live music in the home.

The MP3 seems, IMO, to be an intermediate step to streaming and the cloud. But we will also have the opportunity to stream right off raw masters (2-mike acoustic recordings) or mixed-down 2nd-generation masters (multi-mike/multi-channel), and everyone will be able to hear better quality sound... not just the few with master recorders making recordings live or in the studio.
LG. 🙂


Thx-RNMarsh
 
OK

AES-3 Coax, Fiber and twisted pair, not consumer versions.

As to fuses: in a loudspeaker line, running above 40% of the current rating will create intermodulation distortion that in my OPINION would be a problem.

In a power supply, a fuse run at 80% of its current rating, on the input or the rails, may also cause problems in a poor design.

The forward vs reverse difference on a fuse in an AC line is so small that in my OPINION folks hearing it are crazier than SY. (Ok really crazy as SY is not yet too far gone.) 🙂

Jan, according to the US electrical code, typical house wiring from the AC panel all the way to the power source should have less than 0.06 ohms. The branch wiring, connectors, power cord and switch will be much greater than that. The fuse has even more resistance than that combination at, say, 80% capacity.

Now, as to CD quality, please remember the original CDs were only 9-bit linear, as the first ones used the Sony A/D chip that was a 9 & 7 device based on Stockham's designs.

So when talking about any CD issues one can probably find examples from really awful to quite respectable. However I rarely find anywhere nearly as much concern about the acoustic environment. A system is only as good as its weakest link. Most of the folks I know who care record at 24/192. Why? Because they can!

ES

PS BTW, PSRR in switching amps certainly is well known!
 
Philips/NXP had the clarity (and integrity even) to refer to their power output chip for use with digital-domain modulators as a "One-Bit D-A Converter" iirc. And that's just what it is.

What does "direct digital feedback" mean in this Zetex/NAD context? If it is somehow encoding both voltage level and timing information then that's not digital. If it is an A/D converter producing the bits then the output stream's timing and accurate encoding of voltages is crucial. And there are a lot of problems with that approach, particularly for IM-like distortions, according to some with whom I've spoken.

I can't be exactly sure; there have been a couple of threads on here and even one of the designers popped his head in, but there is no public datasheet. From what I understand, though, it's simply a digital sampling of the output stage, which is then processed in the DSP as PCM, and the error-corrected signal fed back through the modulator.

It's hard to find out much about it, but the Zetex DDFA (Direct Digital Feedback Amplifier... I guess) has an internally generated 108 MHz PLL-type clock, digital inputs, internal multichannel DSP, volume, EQ (some biquads) and a class D-type output (yes, I know class D doesn't stand for digital). The output stage of the DAC modulator directly drives a power stage, and I guess the feedback corrects for non-linearities in the transistors and perhaps the output filter/coil. Don't quote me on that; it's based on slightly foggy memory and deliberately obfuscated detail to begin with.
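For what it's worth, the general idea of digital feedback around a non-linear output stage can be sketched numerically. This is purely a toy model of the concept (the loop, the gains, the sample rate and the cubic non-linearity below are all invented; nothing here comes from a Zetex datasheet):

```python
# Toy numerical sketch of "digital feedback around a non-linear output stage".
# Everything here is invented for illustration; it is not the DDFA architecture.
import numpy as np

fs = 1_000_000            # loop rate, 1 MHz (arbitrary)
f_sig = 1_000             # 1 kHz test tone
n = 100_000
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f_sig * t)      # the digital input signal

def output_stage(v):
    """Crude power-stage model: gain error plus a soft third-order non-linearity."""
    return 0.95 * v - 0.05 * v**3

def run(loop_gain):
    """Drive the stage; digitally integrate (input - measured output) as a correction."""
    integ = 0.0
    y = np.empty(n)
    for i in range(n):
        drive = x[i] + integ                 # modulator input = signal + correction
        y[i] = output_stage(drive)           # what actually comes out of the stage
        integ += loop_gain * (x[i] - y[i])   # the "digital sampling of the output"
    return y

def residual(sig):
    """Rough ratio of non-fundamental to fundamental spectral content."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
    k = round(f_sig * n / fs)
    fund = spec[k - 2:k + 3].sum()
    return (spec[10:].sum() - fund) / fund

print("open loop  :", residual(run(0.0)))
print("closed loop:", residual(run(0.5)))
```

In this toy model the loop knocks the stage's gain error and third-harmonic distortion down by roughly the loop gain within its bandwidth, which is presumably the sort of thing the real DSP loop does far more cleverly.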

There's a thread here from a couple of years ago that was trying to get the numbers up to group-buy some eval boards, but I gather Zetex weren't keen, as it fell quiet; then there was a corporate takeover of the assets, AFAIK, and I lost the trail. I think it's exclusive to NAD and… well, I dunno, I haven't read of anyone else high-profile using them. The designer of the reclocking stages popped his head in here to tout its immunity and superiority compared to everything ever made… admittedly the touted numbers are pretty special for a power amp, including the jitter spec. I was hoping Demian might have some inside info.

TI doesn't talk about it openly, but will in unguarded moments admit that they had to add a form of analog feedback (details undisclosed) to their popular line of "digital" amp chips to improve performance, particularly with respect to power supply rejection.*

(general remarks not addressed to anyone in particular): Digital is wonderful for symbolic processing and data storage, and once you're in that domain, get as much done there as you can. But be mindful of the requirements on timing, and don't suppose because something is characterized for symbol processing --- say as prosaic as a NAND gate --- that using that hardware for production of an analog signal (for which in some circumstances it may work just fine) makes the signal prima facie a digital one.


Of course, I've had the same argument with numerous people on here; this one just seems to extend it to a DSP control loop. But the actual output stage sounds quite analogue, more analogue than they let on, though like I said, I'm foggy on the exact details.

That being said, subjective reports and reviews of the NAD HT receivers and integrated amps that use the Zetex are quite positive.
 
Now, as to CD quality, please remember the original CDs were only 9-bit linear, as the first ones used the Sony A/D chip that was a 9 & 7 device based on Stockham's designs.

Just to clarify, you mean by this that distortion was 0.2%? As I recall, the Sony PCM units from the dawn of the CD era were quite a bit better than that.

However I rarely find anywhere nearly as much concern about the acoustic environment.

Bingo, the single most important factor in music reproduction. Not as esoteric as worrying about Maxwell Demons and quantum bubbamayseh, but has the virtue of unquestioned audibility.
 
In a power supply, a fuse run at 80% of its current rating, on the input or the rails, may also cause problems in a poor design.
The purpose of a line fuse should be the same as what the NEC requires breakers to do: reliably conduct current, but clear the line in the event of a fault.

Jan, according to the US electrical code, typical house wiring from the AC panel all the way to the power source should have less than 0.06 ohms.
Where in NEC 2008 does it say that? I've not come across it. I've seen a 4% drop cited as where consumers will detect incandescent lighting dimming, but no hard spec for droop caused by the wiring.

The utility is required to make sure the bolted-fault current at the panel is limited so that arc flash is not a problem in the consumer panel. Industrial users are on their own; they have to make sure legacy panels' arc-flash ratings are not exceeded by utility upgrades to higher-efficiency transformers.

jn
 
<snip> I can't be exactly sure; there have been a couple of threads on here and even one of the designers popped his head in, but there is no public datasheet. From what I understand, though, it's simply a digital sampling of the output stage, which is then processed in the DSP as PCM, and the error-corrected signal fed back through the modulator. <snip>
So my interpretation of what "digital sampling of the output stage" could be is something that looks at the pulse edges and retrieves timing information. Important and interesting stuff to be sure, but missing the crucial information as to the voltage levels, which will be load-dependent, history-dependent, and temperature-dependent. Hence, such additional information is needed to assure the ultimate audio-band waveform accuracy.

The amps do have decent specs as far as I've seen them disclosed. I think there was a little mid-course correction at one point, but it is always dangerous to read retailers' blurbs and take what they say as a correct translation of the manufacturer's data.
 
Just to clarify, you mean by this that distortion was 0.2%? As I recall, the Sony PCM units from the dawn of the CD era were quite a bit better than that.

No, I mean the chip used for analog-to-digital conversion first did a 9-bit conversion and then treated the remainder to a 7-bit conversion. This was done so the zero crossing would not be the splitting point. The distortion would depend on the actual 9-bit converter's LSB accuracy and the follow-on conversion. All that was guaranteed was 9 bits of linearity. Typical specs are a different issue. Do look up Stockham's papers on the A/D design.
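Roughly, that describes a subranging conversion: a coarse pass, reconstruct and subtract, then a fine pass on the residue. Purely as a toy sketch of that structure (not the actual Sony/Stockham circuit; the numbers below only exist to show why a coarse-stage error limits the overall linearity):

```python
# Toy sketch of a two-step (subranging) conversion: a 9-bit coarse pass, then a
# 7-bit pass on the residue. Illustrative only; not the real Sony chip.
import numpy as np

def two_step_adc(v, dac_error_lsb=0.0):
    """Convert v in [-1, 1) to a 16-bit code via 9 coarse + 7 fine bits.

    dac_error_lsb models an error (in coarse LSBs) in reconstructing the coarse
    value before the fine pass; that error lands directly in the output word.
    """
    coarse_lsb = 2.0 / 2**9                                  # 512 coarse steps over +/-1
    coarse = np.clip(np.floor((v + 1.0) / coarse_lsb), 0, 2**9 - 1)
    # residue as the fine stage sees it, including any coarse reconstruction error
    residue = (v + 1.0) - (coarse + dac_error_lsb) * coarse_lsb
    fine_lsb = coarse_lsb / 2**7                             # 128 fine steps per coarse step
    fine = np.clip(np.floor(residue / fine_lsb), 0, 2**7 - 1)
    return (coarse * 2**7 + fine).astype(int) - 2**15        # signed 16-bit code

v = np.linspace(-0.99, 0.99, 9)
print(two_step_adc(v))                                       # plain 16-bit codes
print(two_step_adc(v, dac_error_lsb=-0.25) - two_step_adc(v))
# A quarter-LSB error in the coarse step shifts the result by ~32 of the 65536
# codes: coarse-stage errors pass straight through to the output, which is why
# only the coarse (9-bit) stage's linearity could be guaranteed.
```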

The purpose of a line fuse should be the same as what the NEC requires breakers to do: reliably conduct current, but clear the line in the event of a fault.


Where in NEC 2008 does it say that? I've not come across it. I've seen a 4% drop cited as where consumers will detect incandescent lighting dimming, but no hard spec for droop caused by the wiring.

The utility is required to make sure the bolted-fault current at the panel is limited so that arc flash is not a problem in the consumer panel. Industrial users are on their own; they have to make sure legacy panels' arc-flash ratings are not exceeded by utility upgrades to higher-efficiency transformers.

jn

The unspoken issue in fuse selection is tolerances! The typical spec is that it will blow in under 10 seconds for a 200% overload and much longer at 110%. However, as audio equipment (except for class A stuff) draws a variable current, looking at different fuses the sweet spot is for normal current draw to be 40% of the rating or less! Once you get up to 80%, the resistance curve really swings.

The NEC recommends that the voltage drop from no load to full load at the panel not exceed 5%. For a 200 amp, 240 volt service that would be 6 volts at 100 amps per leg, or 6/100, i.e. 0.06 ohms.

It also allows no more than a 5% drop for a branch circuit. For a 20 amp, 120 volt circuit that would be 0.3 ohms. Using code-compliant 12 gauge wire for a 20 amp circuit, that works out to a run of about 95 feet, a bit more than typical.
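For anyone who wants to check those figures, the arithmetic is just Ohm's law plus a standard copper wire table (the ~1.6 milliohm-per-foot figure for 12 AWG below is the usual handbook value, not something from the NEC itself):

```python
# Quick check of the voltage-drop arithmetic above: Ohm's law plus a standard
# copper wire table (the 12 AWG resistance is the usual handbook value).

drop_fraction = 0.05                          # 5% drop budget

# Service feed: 5% of a 120 V leg, at 100 A per leg
service_ohms = drop_fraction * 120 / 100
print("service feed budget: %.2f ohm" % service_ohms)        # 0.06 ohm

# Branch circuit: 5% of 120 V at 20 A
branch_ohms = drop_fraction * 120 / 20
print("branch budget      : %.2f ohm" % branch_ohms)         # 0.30 ohm

# Longest 12 AWG run that stays inside the branch budget (out and back)
awg12_ohm_per_ft = 1.588e-3
max_run_ft = branch_ohms / (2 * awg12_ohm_per_ft)
print("max 12 AWG run     : %.0f ft" % max_run_ft)           # ~94 ft, the ~95 ft above
```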

ES
 
Jan, according to the US electrical code, typical house wiring from the AC panel all the way to the power source should have less than 0.06 ohms. ES!

Yeah, sure, but the DISTORTION of the mains is much higher, like 5% or 10%, so the additional 0.1% from the fuse doesn't really upset the rectifier, I'd think.
You know, the fuse acts as a PTC, so it may actually restore the mains toward a sine wave, if it does anything at all. Fuses to the rescue!

jan
 
Yeah, sure, but the DISTORTION of the mains is much higher, like 5% or 10%, so the additional 0.1% from the fuse doesn't really upset the rectifier, I'd think.
You know, the fuse acts as a PTC, so it may actually restore the mains toward a sine wave, if it does anything at all. Fuses to the rescue!

jan

Once the distortion rises above 5%, that is actually considered a problem. A fuse going from 20% to 80% of its rating can drop a volt or two. Now, the line distortion is a constant: the peak AC line voltage will be lower than that of a perfect sine wave, resulting in less rail voltage.

The fuse drop will be modulated by the program material and will actually behave like IM distortion. So you expect to see an increase in ripple and even some audio on the rails from the fuse changes. Of course, you will also see some audio on the rails pretty much any time you look. This is not a surprise to most designers, which is why even in basic amplifier circuits you will see additional RC filtering on the input differential stage.
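One crude way to picture that mechanism is to treat the fuse as a resistance that rises with its own dissipation and recovers with a thermal time constant, then drive it with a program-like current and watch the drop follow the envelope. This is only a toy sketch; every parameter value below is invented for illustration:

```python
# Toy model: the fuse as a slowly varying thermal resistance in series with the
# supply. Its resistance rises with dissipation and recovers with a thermal
# time constant, so the voltage drop across it follows the program envelope.
import numpy as np

fs = 10_000
t = np.arange(0, 2.0, 1 / fs)

# "Program": a 1 kHz tone drawn as supply current, amplitude-modulated at 5 Hz
i_supply = (2.0 + 1.5 * np.sin(2 * np.pi * 5 * t)) * np.abs(np.sin(2 * np.pi * 1000 * t))

r_cold = 0.050      # ohms, cold fuse resistance (invented)
alpha = 0.02        # ohms of rise per watt of dissipation (invented)
tau = 0.1           # thermal time constant in seconds (invented)

r = np.empty_like(t)
r[0] = r_cold
for k in range(1, len(t)):
    p = i_supply[k - 1] ** 2 * r[k - 1]                    # instantaneous dissipation
    r_target = r_cold + alpha * p                          # where the resistance is heading
    r[k] = r[k - 1] + (r_target - r[k - 1]) / (tau * fs)   # one-pole thermal lag

v_drop = i_supply * r
print("fuse drop : %.0f .. %.0f mV" % (v_drop.min() * 1e3, v_drop.max() * 1e3))
print("resistance: %.1f .. %.1f mOhm" % (r.min() * 1e3, r.max() * 1e3))
# The 5 Hz program envelope shows up in the drop, i.e. the rails get modulated
# at rates related to the music itself, which is the IM-like mechanism above.
```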

What SY was mentioning is being able to hear a difference in the final audio from flipping over a fuse. That is a much tougher issue, as there is a bit of a difference, but it is only in the millivolt range.
 
What SY was mentioning is being able to hear a difference in the final audio from flipping over a fuse. That is a much tougher issue, as there is a bit of a difference, but it is only in the millivolt range.

Data, repeatability, and error bars still missing. This claim is highly dubious, if what you're saying is that there's a difference with AC between replacing the fuse in the same "direction" and replacing the fuse in the reverse "direction."

No, I mean the chip used for analog-to-digital conversion first did a 9-bit conversion and then treated the remainder to a 7-bit conversion. This was done so the zero crossing would not be the splitting point. The distortion would depend on the actual 9-bit converter's LSB accuracy and the follow-on conversion. All that was guaranteed was 9 bits of linearity. Typical specs are a different issue.

OK, so the linearity was better than "9 bits." The only way it's "9 bits" is if the remainders are consistently thrown away. The 1610 (the standard at the time of CD's introduction) guaranteed 0.05% distortion.
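For reference, the rough equivalence behind those figures (a back-of-the-envelope conversion only, taking one LSB relative to full scale as the error floor, the same way the 0.2% figure was arrived at above):

```python
# Back-of-the-envelope conversion between "N bits of linearity" and a
# distortion percentage, taking one LSB relative to full scale as the floor.
import math

def pct_for_bits(bits):
    return 100.0 / 2**bits

def bits_for_pct(pct):
    return math.log2(100.0 / pct)

print("9-bit linearity   ~ %.2f %%" % pct_for_bits(9))         # ~0.20 %
print("0.05 %% distortion ~ %.1f bits" % bits_for_pct(0.05))   # ~11 bits
```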
 
Where in NEC 2008 does it say that? I've not come across it. I've seen a 4% drop cited as where consumers will detect incandescent lighting dimming, but no hard spec for droop caused by the wiring.

jn

Better for a high-end product would be to use magnetic circuit breakers instead of thermal fuses.

I have, and you can buy, a hand-held test instrument which will test the AC lines for NEC compliance on a variety of NEC specs/conditions. One such instrument is made by IDEAL Industries Inc. in Illinois, USA.

For one test, the IDEAL places a full load on the AC line for a fraction of a second and records the voltage drop. IDEAL states that anything higher than a 5% drop means the cable/wire gauge is too small.

Thx-RNMarsh
 
I don't quite understand this analogy.
Just to put the record straight, digital electronics disciplines are separate from analogue electronics disciplines,
The analogy, which I should have made clearer, is that the circuitry which handles the 'data' as an analogue entity, in other words the DAC and everything downstream from it, also exists in the same electrical environment. Yes, theory and good engineering should exclude all deleterious interaction, specifically the digital impacting the analogue; however, at some point, to some degree, all the electronics are connected, even if the linkage is mainly RF in nature. Unless everything is battery powered, the linkage extends back via the mains power commonality.

The next, key question is whether any distortion artifacts introduced this way are audible - and this is obviously where the "big fights" are. Personally, I have no trouble hearing the impact ... others may not ...

Edit: I'm not going to argue about the 'release agent' - something along these lines was made a big thing of at one time, and I used the concept as a simple example of where manufacturers may differ in the 'quality' of their product ...
 
No, I mean the chip used for analog-to-digital conversion first did a 9-bit conversion and then treated the remainder to a 7-bit conversion. This was done so the zero crossing would not be the splitting point. The distortion would depend on the actual 9-bit converter's LSB accuracy and the follow-on conversion. All that was guaranteed was 9 bits of linearity. Typical specs are a different issue. Do look up Stockham's papers on the A/D design.

What's the point? This is really tired stuff to bring up. Why not talk about Rudy van de Plassche's papers from Philips circa 1982? Ed, you really got me today.
 
Where I have made progress, personally, in improving SQ is that I have no interest in separating the elements that make up the audio replay chain into various modules, independent areas of electrical activity, that just happen at the end of the chain to produce sound. They are all part of a single mechanism, for me, whose job is to operate with as much integrity as possible to get the job done; so, if I hear a 'problem' I have no hesitation in deciding whether to point to anything, anywhere, in this gestalt - the speaker, cable connections, RFI, mains power quality, etc. - they all have an equal chance of being the 'weak link' ...
 