John Curl's Blowtorch preamplifier part II

One reason not to get involved with the ML9600 is that, according to Alesis and a number of Alesis authorized dealers, the ML9600 is obsolete. For example, this is the Alesis web page of obsolete products:

Legacy Products

As far as its feature set goes, the obvious comparison is with one of the many computer audio interfaces that exploit the hardware that many people already have, yielding superior flexibility at a far lower cost.

Just for grins:

12 reasons why hi-res audio will never go mainstream | DAR__KO
I was the Alesis Service Center Admin. in the pre-bankruptcy Alesis. I have had an ML9600 since it was a current product, and I still have it. Trust me, it still works just the same now as it did when there was technical support.
 
Another example of a really poor reconstruction filter. The measured signal is a 100 Hz square wave (44.1 kHz sampling) at the soundcard DAC analog output; the measuring sampling rate is 6.25 MHz. The filter may look "OK" visually (no pre-ringing), but as a result there is a train of mirror images.
 

Attachments

  • poorfilterfft.PNG
  • poorfilterstep.PNG
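For anyone who wants to see the effect without a 6.25 MHz digitizer, here is a rough numpy sketch of my own (not the measurement above; the levels and the linear-interpolation "filter" are only assumptions standing in for a weak reconstruction with no pre-ringing):

```python
# Illustrative sketch only (my own, not the measurement above).
# A 100 Hz band-limited square at 44.1 kHz, "reconstructed" by linear
# interpolation - a filter with no pre-ringing, but one that leaves a
# train of spectral images above Nyquist.
import numpy as np

fs, f0, ratio, n = 44_100, 100, 128, 1 << 14
t = np.arange(n) / fs

# band-limited square: odd harmonics up to Nyquist
square = sum(np.sin(2 * np.pi * k * f0 * t) / k
             for k in range(1, fs // (2 * f0), 2))

# weak reconstruction: linear interpolation up to ~5.6 MHz
t_hi = np.arange(n * ratio) / (fs * ratio)
weak = np.interp(t_hi, t, square)

spec = np.abs(np.fft.rfft(weak * np.hanning(weak.size)))
freqs = np.fft.rfftfreq(weak.size, d=1 / (fs * ratio))

in_band = spec[(freqs > 50) & (freqs < 20_000)].max()
images = spec[(freqs > fs / 2) & (freqs < 2 * fs)].max()
print(f"strongest image re. in-band peak: {20*np.log10(images/in_band):.1f} dB")
# an ideal brickwall reconstruction would leave essentially nothing above 22.05 kHz
```

The point of the printout is the contrast: with an ideal brickwall the band above 22.05 kHz is empty down to the numerical floor, while the weak filter leaves the whole image train.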
I am sorry, but you did not get it; maybe I did not explain it well enough.

1) It is not a graphics artifact and it is not a simulation; it is the output of a sound card, digitized with a very fast digitizer. The "modulation" in the time domain is just another view of the twin-tone in the frequency domain. The twin-tone is the result of the mirror-image spectral line at 44.1 - 21.5 = 22.6 kHz, which in turn is the result of insufficient steepness of the reconstruction filter. Please check the attached images (and the numeric sketch after this list).

2) Everything you wrote about filters is OK, but it is pure theory. I have measured dozens of CD players and sound cards, and what I am showing is the usual, common behaviour of real implementations.

3) It is an oversampling DAC, quite up to date (AK).
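To make the arithmetic in point 1 concrete, here is a small numpy sketch of my own (the tone level and the linear-interpolation "filter" are assumptions, not the actual card): the image at 44.1 - 21.5 = 22.6 kHz survives a weak reconstruction almost unattenuated, and the resulting twin-tone beats at 1.1 kHz on a scope.

```python
# Rough numbers-only check (my own sketch, not the measurement above).
# A 21.5 kHz tone sampled at 44.1 kHz leaves a mirror at fs - f = 22.6 kHz
# if the reconstruction filter is weak; the pair beats at 1.1 kHz.
import numpy as np

fs, f_sig = 44_100.0, 21_500.0
f_img = fs - f_sig              # 22.6 kHz mirror image
print(f_img, f_img - f_sig)     # 22600.0 1100.0 -> the visible "modulation"

ratio, n = 64, 1 << 14
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * f_sig * t)

# "weak filter": plain linear interpolation to a ~2.8 MHz analysis rate
t_hi = np.arange(n * ratio) / (fs * ratio)
weak = np.interp(t_hi, t, samples)

spec = np.abs(np.fft.rfft(weak * np.hanning(weak.size)))
freqs = np.fft.rfftfreq(weak.size, d=1 / (fs * ratio))
peak = lambda f: spec[np.abs(freqs - f) < 100].max()
print(20 * np.log10(peak(f_img) / peak(f_sig)))   # image only ~1 dB below the signal
```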


This neatly demonstrates why avoiding the output filter, or not making it steep enough in order to minimize ringing, simply leaves one with Nyquist-related problems. Better to go for 24/192, accept the slightly higher noise floor, and be done with it.
 
No problem, George, it is the playback. I am really too brief in my posts and I prefer images over explanations.

Reconstruction in the real world really is an issue. One last example: a small 17 kHz signal (i.e. an audio-band signal) at 44.1 kHz sampling. The reconstruction filter is very poor (but no pre-ringing!!! :D :D), and please also note the HF interference in the MHz range at the analog output.
As said several times, let's go to higher sampling rates to have enough margin for audio-band signals.

Someone might say "no problem, inaudible". But please think about intermodulation in tweeters: I can guarantee that examples like these will, and do, create audible intermodulation products there (a toy numeric sketch follows the attachments).
 

Attachments

  • 17_time.PNG
  • 17_spectrum.PNG
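As a back-of-the-envelope illustration of the tweeter-IMD point, here is a toy model of my own (the image level and the 5% second-order nonlinearity are pure assumptions, not a measurement of any real driver):

```python
# Hedged sketch (my own toy model, not a measurement): if the 27.1 kHz image
# of a 17 kHz tone (44.1 kHz sampling) reaches a tweeter with even mild
# 2nd-order nonlinearity, the difference product lands at 27.1 - 17 = 10.1 kHz,
# squarely inside the audible band.
import numpy as np

fs_hi = 1_000_000                     # analysis rate, Hz
t = np.arange(1 << 16) / fs_hi
f_sig, f_img = 17_000.0, 44_100.0 - 17_000.0

drive = np.sin(2 * np.pi * f_sig * t) + 0.3 * np.sin(2 * np.pi * f_img * t)
tweeter = drive + 0.05 * drive**2     # assumed 2nd-order nonlinearity

spec = np.abs(np.fft.rfft(tweeter * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs_hi)
for f in (f_img - f_sig, f_img + f_sig):   # difference and sum products
    level = spec[np.abs(freqs - f) < 60].max()
    ref = spec[np.abs(freqs - f_sig) < 60].max()
    print(f"{f/1000:.1f} kHz IMD: {20*np.log10(level/ref):.1f} dB re. 17 kHz")
```

In this toy model the 10.1 kHz difference product comes out a few tens of dB below the 17 kHz tone - low, but well inside the audible band, which is the point being made.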
Not really - the explanation for the apparent modulation of signals whose frequencies are very close to the Nyquist frequency comes out of modulation theory. Basically we are moving from AM to SSB.

Once again - the reason for the "modulation" is a mirror image at F(sampling) - F(signal). For signals close to Fs/2, a non-ideal brickwall reconstruction filter is unable to remove the mirrors. The mirror makes a twin-tone (similar to the 19+20 kHz CCIF test tone), and this causes "modulation" at the frequency F(mirror) - F(signal).
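The "twin-tone looks like modulation" step is just the standard sum-to-product identity (nothing DAC-specific assumed here):

$$\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\!\left(2\pi\,\frac{f_2 - f_1}{2}\,t\right)\sin\!\left(2\pi\,\frac{f_1 + f_2}{2}\,t\right)$$

With f1 = 21.5 kHz and f2 = 22.6 kHz this is a 22.05 kHz carrier whose envelope repeats at f2 - f1 = 1.1 kHz; if the mirror is only partly attenuated the envelope no longer touches zero, but the 1.1 kHz beat remains.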
 
Once again - the reason for the "modulation" is a mirror image at F(sampling) - F(signal). For signals close to Fs/2, a non-ideal brickwall reconstruction filter is unable to remove the mirrors. The mirror makes a twin-tone (similar to the 19+20 kHz CCIF test tone), and this causes "modulation" at the frequency F(mirror) - F(signal).

Pavel,

Time for you to do a book. Jan will pay for it.

All,

The 44.1 kHz sample rate was originally picked to allow the data to be written to the VCRs of the day. The physical size was correctly reported.

I thought I was the only one at the time who thought the sample rate was too low. Back then the state-of-the-art A/D converters were really a composite of a 9-bit stage with a 7-bit second stage. The very first units were made of discrete components, including matched and trimmed Allen Bradley CC resistors. Later Sony copied the design into a single chip. The best result I ever got from that chip was a bit more than 14 bits. That was enough for the folks doing research at the time; moving from 11-12 bits to 14-1/2 showed no improvement in their results.

Of course, with the then state-of-the-art one-megabit Ethernet, file transfer was so slow that it was faster to move drives.
 
I am sorry, but you did not get it; maybe I did not explain it well enough.

I believe I got it, hehe ;). It is a problem of interpretation and communication only. Any interpretation based on the spectrum is pure theory, because the "spectrum" doesn't exist in reality; it is only another way to describe the signal, which is primarily described by its behaviour in time. The "mirror" component shown is an inevitable product of sampling at 44.1 kHz. If reconstructed properly, this component will not be present in the resulting signal. According to the theory, that is possible only if an ideal brickwall filter is used or, more generally, a filter achieving infinitely large attenuation above some frequency lower than half the sampling frequency. In fact you said this. If something like "modulation" or a "mirror" is observed, then it is a consequence of the imperfection of the filter. You said that, too. And this matches the theory exactly, if the theory is applied properly. The only difference between our interpretations is that I say the modulation or mirrors described in the digital domain are the inevitable product of sampling, so they are artifacts that exist in the digital domain only, whose existence can be demonstrated only under a certain interpretation of the signal structure, and which are not returned to the analog domain if the signal is reconstructed properly.

The fundamental problem is that all real signals are continuous, digitization is a purely analog process, and the "digital signal" is in fact a mathematical fiction, subject to a certain interpretation only. No doubt this interpretation is very useful, but it is necessary to bear in mind its limits. :hohoho:
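To put a number on the "reconstructed properly" case, here is a minimal sketch of my own (a truncated Whittaker-Shannon sinc sum standing in for the ideal brickwall, applied to the same assumed 21.5 kHz / 44.1 kHz example as earlier; all parameters are arbitrary):

```python
# Minimal sketch (mine) of proper reconstruction: a truncated Whittaker-
# Shannon sinc sum applied to 21.5 kHz samples taken at 44.1 kHz.
# Unlike the weak-filter case, the 22.6 kHz mirror is strongly suppressed;
# with an ideal, infinite sinc it would vanish completely.
import numpy as np

fs, f_sig, ratio, n = 44_100.0, 21_500.0, 32, 512
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * f_sig * t)

t_hi = np.arange(n * ratio) / (fs * ratio)
# x(t) = sum_k x[k] * sinc(fs*t - k), truncated to the n samples available
kernel = np.sinc(fs * t_hi[:, None] - np.arange(n)[None, :])
recon = kernel @ samples

spec = np.abs(np.fft.rfft(recon * np.hanning(recon.size)))
freqs = np.fft.rfftfreq(recon.size, d=1 / (fs * ratio))
img = spec[np.abs(freqs - (fs - f_sig)) < 250].max()
sig = spec[np.abs(freqs - f_sig) < 250].max()
# the exact figure depends on the truncation and windowing, but it is far
# below the ~-1 dB left by a weak (linear-interpolation) reconstruction
print(f"mirror re. signal: {20 * np.log10(img / sig):.0f} dB")
```

So both views agree: the mirror is inherent in the sampled data, and whether it reaches the analog output is purely a question of how close the reconstruction filter comes to the ideal.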
 
Ethernet was 10 Mb/s back then. I doubt there were hot-pluggable drives, so transferring over Ethernet was always faster than moving media around, especially given the speed of writable media like tape.

No, it was over coax and 1 megabit, although it really was just before what is now known as "thicknet", which was 10 megabit. It used a special variant of RG8 cable. Vampire taps didn't come in until a bit later.

The usage made it terribly slow, and interchangeable hard drives were common on the original Xerox Alto workstations and similar machines with 14" drives.
 