John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
And my psychoacoustics handbook's 2006 third edition still prints the graph of the audible "threshold in quiet" with even the 10% highest-resolving fraction of subjects 20 to 25 years old going vertical well before 20 kHz.

But we should keep in mind that Zwicker/Fastl talks about thresholds with single tones.
And that it is based on a sample of 100 participants, in which the spread is already quite high (note that the lines correspond to 5%-95% of the sample).
It is not mentioned whether the sample was constructed to be representative, but we can still make statistical estimates based on it.

Straightforward statistical analysis shows that several million people are most probably outside of the range shown in the graph (to both sides, of course).
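As a rough sketch of that estimate (the normal-spread assumption and the population figure are mine, not anything stated in Zwicker/Fastl): if the between-subject spread is roughly normal, the 5%-95% band in the graph spans about ±1.645 sigma, and the fraction beyond any wider band can be scaled up to the whole population.

```python
# Rough order-of-magnitude sketch (assumptions mine): treat the spread of
# hearing thresholds as normal and count people outside a +/-k sigma band.
from math import erf, sqrt

def fraction_outside(k_sigma):
    """Two-sided tail probability of a normal distribution beyond k sigma."""
    return 1.0 - erf(k_sigma / sqrt(2.0))

population = 7_000_000_000  # order-of-magnitude world population (assumed)

for k in (1.645, 2.0, 3.0):
    print(f"beyond +/-{k} sigma: {fraction_outside(k):.4%} "
          f"~ {fraction_outside(k) * population:,.0f} people")
```

Even beyond ±3 sigma, well outside the plotted 5%-95% band, the tail fraction of about 0.27% still corresponds to some tens of millions of people, which is the point being made.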

so why the exaggeration?

An exaggeration it surely is with respect to the whole population, but if somebody does perceive something, for that person it is not.
It seems to be established nowadays that temporal resolution is higher than the upper frequency bound would normally predict.

But it simply isn't "proven" that even as many as 10% of the population can tell the difference in typical recorded-music listening; we have null results from studies with hundreds of subjects, some of them audio professionals.

I guess that is related to Meyer/Moran.
While the results might be true, it is hard to tell, because it was mainly an example of not-so-well-done scientific work.
(And I am still wondering how this article made it through the review process, as it did not fulfil the basic requirements of any engineering report.)
 
Well, at least we got to the "...there's a problem if you violate the sampling theorem" part <snip>

Does someone guarantee that no `illegal` data occurs during sampling/editing or digital audio work in general?
(`Illegal` means digital data that could not exist if a signal is quantized after a proper antialiasing filter.)

It is the same problem as with intersample overs; it would not happen if everybody .....
 
But we should keep in mind that Zwicker/Fastl talks about thresholds with single tones.
And that it is based on a sample of 100 participants, in which the spread is already quite high (note that the lines correspond to 5%-95% of the sample).
It is not mentioned whether the sample was constructed to be representative, but we can still make statistical estimates based on it.

Straightforward statistical analysis shows that several million people are most probably outside of the range shown in the graph (to both sides, of course).

Applying this line of reasoning to people's height, we should have 5-metre-tall giants.

An exaggeration it surely is with respect to the whole population, but if somebody does perceive something, for that person it is not.
It seems to be established nowadays that temporal resolution is higher than the upper frequency bound would normally predict.

Temporal resolution works on the basis of phase shifts. As you may know, below roughly 3,500 Hz the nerve fibres connected to your inner ear fire at a precise point of the incoming signal; that is, they are phase-locked. Hence, with for example a 1000 Hz tone (1 ms period), the temporal resolution may be below 10 µs.

Therefore, you can't argue that, since the temporal resolution is below 10 µs, we need more than 100 kHz of bandwidth. They are not related. As a matter of fact, at higher frequencies, where the nerve fibres are no longer phase-locked, the concept of temporal resolution loses its validity.
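A quick back-of-envelope check of those numbers (my own framing of the arithmetic above): a time resolution dt on a tone of frequency f corresponds to a phase angle of 360·f·dt degrees, so 10 µs at 1 kHz is only 3.6 degrees of phase, a plausibly small angle for phase-locked firing to encode.

```python
# Convert a time resolution into the phase angle it represents at a
# given tone frequency (sketch of the reasoning in the post above).
def time_to_phase_deg(dt_seconds, freq_hz):
    """Phase angle (degrees) corresponding to a time shift at a frequency."""
    return 360.0 * freq_hz * dt_seconds

print(time_to_phase_deg(10e-6, 1000.0))  # 3.6 degrees at 1 kHz
print(time_to_phase_deg(10e-6, 3500.0))  # 12.6 degrees near the phase-locking limit
```

The same 10 µs corresponds to a larger phase angle as frequency rises, which is consistent with the point that above the phase-locking limit the notion of temporal resolution stops applying.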

vac
 
Does someone guarantee that no `illegal` data occurs during sampling/editing or digital audio work in general?
(`Illegal` means digital data that could not exist if a signal is quantized after a proper antialiasing filter.)
No, nobody makes any such guarantee. If you test a lot of equipment out there, you'll find plenty of aliasing in the digital outputs, and some amount of aliasing in the A/D. That's because, even where there are anti-alias filters, they may only reduce the 'illegal' frequencies by some tens of dB, which doesn't mean they've been eliminated completely. Cost cutting in the circuits ends up allowing just a bit of aliasing in both recording and playback.

By the way, if you're using an audio interface for such analysis, it helps to set the frequency axis to linear rather than logarithmic. The latter is better for creative tone sculpting (mastering), but when you're looking for aliasing or other errors the linear graph shows it more clearly, as a reflection in the frequency domain. I often record at 48 kHz for DVD, and when musicians use digital equipment on stage it's quite common to see a mess of aliased frequencies between 22.05 kHz and 24 kHz. These are aliased artifacts produced by budget digital gear running at 44.1 kHz without adequate reconstruction filters. Technically there should be nothing but noise between 22.05 kHz and 24 kHz, but there's quite a bit more than that, and it's clearly signal-dependent.
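For anyone who wants to see where a leaked component lands, here is a minimal sketch of the folding arithmetic (the example frequencies are mine, chosen for illustration): a tone above Nyquist folds back into the baseband around multiples of the sample rate.

```python
# Where does an out-of-band tone land after sampling at rate fs?
# It folds around multiples of fs down into the band below fs/2.
def alias_frequency(f_in, fs):
    """Baseband frequency an input tone aliases to when sampled at fs."""
    f = f_in % fs
    return fs - f if f > fs / 2 else f

# A 23 kHz image leaking past a weak anti-alias filter into a 44.1 kHz
# converter lands right back in the top of the audio band:
print(alias_frequency(23000.0, 44100.0))  # 21100.0 Hz
print(alias_frequency(25000.0, 48000.0))  # 23000.0 Hz
```

Note the aliased products are signal-dependent by construction: they track the offending input frequency, which matches the "mess" described above.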
 
Does someone guarantee that no `illegal` data occurs during sampling/editing or digital audio work in general?
(`Illegal` means digital data that could not exist if a signal is quantized after a proper antialiasing filter.)

I don't understand the point you're trying to make. Anti-aliasing filters work. Are you saying that digital media sometimes have errors? That seems quite unexceptional.
 
Temporal resolution works on the basis of phase shifts. As you may know, below roughly 3,500 Hz the nerve fibres connected to your inner ear fire at a precise point of the incoming signal; that is, they are phase-locked. Hence, with for example a 1000 Hz tone (1 ms period), the temporal resolution may be below 10 µs.

Therefore, you can't argue that, since the temporal resolution is below 10 µs, we need more than 100 kHz of bandwidth. They are not related. As a matter of fact, at higher frequencies, where the nerve fibres are no longer phase-locked, the concept of temporal resolution loses its validity.
I don't follow your reasoning. If the recording system does not preserve 10 µs events, then both periodic and nonperiodic detail is lost. We might search forever for someone who can prove they hear 100 kHz or even 50 kHz, but once you bandlimit to 20 kHz you've lost those 10 µs events.

The current reasoning as I understand it is that it's all about preserving the impulse response, not about preserving continuous ultrasonic tones. The clincher is that preserving 10 µs impulses happens to also preserve 100 kHz tones. It's not that you need the 100 kHz tones, but you get them anyway if you preserve the 10 µs impulse response.
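The smearing claim can be checked numerically. This is a toy sketch (the 192 kHz capture rate, the ideal FFT brickwall, and the half-amplitude width metric are all my own choices, not anyone's actual test setup): a one-sample impulse survives at 192 kHz, but an ideal 20 kHz brickwall spreads it to roughly 1/(2·20 kHz) = 25 µs.

```python
# Band-limit a ~5 us impulse to 20 kHz with an ideal FFT brickwall and
# measure how wide the main lobe becomes (settings are assumptions).
import numpy as np

fs = 192_000
n = 4096
x = np.zeros(n)
x[n // 2] = 1.0               # one-sample impulse at 192 kHz

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum[freqs > 20_000] = 0  # ideal 20 kHz brickwall low-pass
y = np.fft.irfft(spectrum, n)

# Width over which the filtered impulse stays above half its peak:
above_half = np.nonzero(np.abs(y) > 0.5 * np.abs(y).max())[0]
width_us = (above_half.max() - above_half.min() + 1) / fs * 1e6
print(f"half-amplitude width after 20 kHz brickwall: {width_us:.1f} us")
```

The filtered impulse comes out a few times wider than 10 µs (around 25 µs with these settings), so two events 10 µs apart would merge, which is the "you've lost those 10 µs events" point.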
 
www.hifisonix.com
Well, we may have had horrible digital music for the last 30 years (not my view, I have to say, PMA!), but I don't know that there's that much pristine information above 20 kHz on an LP either. There may be a case for a wideband digital audio format (SACD was supposed to be it, but got heavily criticized for the noise shaping and ultimately the high levels of ultrasonic noise, though most of the reviews I read gave the format a big plus for sound quality), but whether it will ever become a reality . . . I doubt it.

I have to agree with SY here as well: I find huge differences in recordings, and I think this is where most of the issues lie. I have a Fleet Foxes album ('Helplessness Blues') and the music is just wonderful . . . but the recording has a shrill sound to it, almost as if a few sharp peaks in the response up in the 1-2 kHz region make my ears ring, and it can become quite tiresome after a while. Then I put another CD on (say Eliane Elias's 'Something for You') and that issue is not there. I put a Paul Simon CD on ('The Rhythm of the Saints') - no problem, a very relaxed recording. So, where does that leave us?
 
Would the sine-sum-generated 1 kHz triangle be linear even in its middle third? ;)

The 22 kHz brickwall filtering is unacceptable for highest-fidelity music reproduction. No one has ever proven that this sharp frequency cut-off has no influence on music perception and sound quality. If the roll-off were slow, there would be no big problem. As a slow roll-off is impossible because of aliasing, we get what we get and we can complain to eternity. "We" means probably 0.00000000001% of the population, and of course the rest is what counts for the consumer music industry.

The best DAC I've ever heard uses a first-order 22 kHz filter.

John
 
It seems a rather limited application, but yes, for that portables are still needed. I was actually talking more about "in the field" (as opposed to "in the studio") music recording. If you want to record jungle noises while walking around, it would be limiting...

Ciao T

Thorsten,

Limited?

Not every discussion is an argument that has to be won at all costs. There has been a very large market for highish-end portable recorders for a long time. John will remember the Nagra, which IIRC retailed for more than a Revox or Ampex. Several models were specifically designed for the professional motion picture industry.

I have a folk LP from 1965 where one cut sounded far better than all the rest; when the CD came out in 1999, that cut was missing. Upon inquiry, it turned out that cut was an audience tape made on a Nagra, and it had been lost.
 
OK Ed,

Without Googling, I can only ID a few:

#47 - Most common pilot lamp in 6-volt tube-filament equipment... oft found behind beautiful glass jewels, as in Fender amps, ad nauseam.

TL072 - FET Op amp.

WE300 - Western Electric triode. It is apparently associated with religious rituals nowadays.

NE555 - Timer.

uA709 - Op amp.

CK722 - Germanium transistor; my first projects utilized these unstable gems.

SN7400 - TTL quad NAND gate.

Pretty poor showing, I'm afraid: looking forward to the other answers!

Howie

Howard Hoyt
CE - WXYC-FM
UNC Chapel Hill, NC
www.wxyc.org
1st on the internet

Howard,

Quite good.

ECC83 - aka 12AX7, a bit of a common audio tube.

2N107 - GE's killer-app germanium transistor.

µA703 - Fairchild's IF amplifier chip.

µL923 - Fairchild's RTL flip-flop; RTL was probably the first widely used logic family.



Reasonably close! But it is intended for just audio, and actually doesn't go quite low enough in frequency for audio, nor high enough for where I think I want to look. Thanks!

Don Moses proposed such a system to me years ago for measuring the amount of powerline noise that makes it through the power supply. In reality you need a much wider bandwidth, since there can be significant energy up to 30 MHz on the power line. I have used a system consisting of a neat 10 Hz to 40 MHz vector analyzer from OMICRON Lab and two line-isolation couplers that allow me to inject into the line at one place and see what comes out somewhere else. It allows me to measure AC line filters in operation. You could also use a spectrum analyzer and an isolator to protect it.

Quantifying degradation may be a challenge, given all the sensitive nerves here. Showing increased IM distortion or reduced SNR in the audio band when out-of-band noise is injected would be interesting. An old trick, long lost, is to use a CB radio to test for stability and noise rejection. Its closest modern equivalent is the TDMA noise from a GSM phone.
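The intermodulation mechanism behind that suggestion can be demonstrated with a toy simulation (entirely my own construction, not Demian's setup: the tone frequencies, levels, and the k2 nonlinearity coefficient are all assumed): two out-of-band components passing through a weakly nonlinear stage produce a difference product squarely inside the audio band.

```python
# Two out-of-band 'line hash' tones through a weak second-order
# nonlinearity: the difference product lands at 1 kHz, in-band.
import numpy as np

fs = 400_000                 # simulation rate, high enough for the hash
t = np.arange(fs) / fs       # one second of samples (1 Hz FFT bins)
hash1 = 0.25 * np.sin(2 * np.pi * 150_000 * t)
hash2 = 0.25 * np.sin(2 * np.pi * 151_000 * t)

def stage(x, k2=0.01):
    """Amplifier with a small second-order nonlinearity (k2 assumed)."""
    return x + k2 * x * x

spectrum = np.abs(np.fft.rfft(stage(hash1 + hash2))) * 2 / len(t)
# Second-order difference product of 150 kHz and 151 kHz at 1 kHz:
print(f"in-band IM product at 1 kHz: {20 * np.log10(spectrum[1000]):.1f} dBFS")
```

Nothing in the stimulus is below 150 kHz, yet a product appears at 1 kHz (around -64 dBFS with these made-up numbers), which is the kind of audio-band degradation an injected-noise test could quantify.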

Thanks Demian,

I have 1 to 1000 MHz covered with my RF gear. (We install moderate-size cable TV systems, 1000+ drops.)

I guess I'll find out what injecting noise does to conventional measurements. I suspect I will see a widening of the test tone's skirt at the FFT baseline; what else might be interesting?

I don't understand the point you're trying to make. Anti-aliasing filters work. Are you saying that digital media sometimes have errors? That seems quite unexceptional.

There is something there that, combined with reproduction approaches, does show up as differences in player quality. Maybe that error correction really does count! :)
 
Hi,

G Slot (pseudonym?) isn't found in Author Search at AES or ASA sites

Maybe not. Both AES and ASA tend to be suspiciously US focused and dominated.

For example, much of the research published by the IRT in Germany (which is generally excellent) does not make it to the AES (maybe the researchers fear the AES thought police, or maybe they cannot be bothered to translate to English?) ;)

The name of the gentleman in question is Gerard Slot, and he appears to be Dutch. The book is a translation from the Dutch, published by Drake, New York. The gentleman also seems to have written other treatises with interesting titles, especially those that appear to be listed only in German (wohl dem, der Deutsch kann - lucky are those who know German). :D

This is what I was able to find on the title by highlighting what John had typed and right-clicking "Google"...

Audio quality

Audio quality: requirements for high quality audio equipment
by G. Slot

Publisher: New York : Drake, 1971.
ISBN: 0877490678

Notes: Translation of Geluidskwaliteit.

Includes bibliographical references.

Book Details:
Language: eng

Physical Description: 154 p. : ill. ; 23 cm.

Ciao T
 
Administrator
I have used a system consisting of a neat 10 Hz to 40 MHz vector analyzer OMICRON Lab "Smart Measurement Solutions": Home and two line isolation couplers that allow me to inject into the line at one place and see what comes out somewhere else.

I'd love to work with such a rig. Gary Pimm and I did some testing of an AC line filter a couple of years ago using his very cool HP spectrum analyzer. But we were not injecting precise signals, just looking at what was there and what got past the filter. The problem was that line noise is forever changing.

FWIW, we found that the filter was better at keeping noise IN, than OUT. It went into the junk pile.
 
Thanks for the 'legwork', Thorsten. I picked up the book in London in 1976, at either Foyles or the Modern Book Co., and it is a hardcover edition.
It was also published by Philips Paperbacks, this is part of their description:
"This book deals with the problem of obtaining the quality of sound reproduction that will give the maximum satisfaction to those listening to it. The author considers the whole problem objectively. A chapter spent on discussing the 'problem' is followed by one on technical specifications. ..." A good book, one of the best on this subject.
 
Quote without comment: Slot P.52.

"The effect of the response sounding brighter or shriller than when the characteristic is straight (freq. resp.) starts when the roll-off is more than 6 dB/octave.
Unfortunately we know of no complete explanation of the phenomenon, nor of an exact quantitative analysis. In part it can perhaps be explained by assuming that cutting off an important part of the residue will to some extent reduce our certainty about the pitch of a tone, particularly when its fundamental is not very strong. Another explanation can be that a sharp cut-off in the frequency characteristic of a pick-up, amplifier or loudspeaker has the consequence that the part of the ear that is reacting to the cut-off frequency will react abnormally to the mutilated signal [18]."
[18] Franssen, N.V., 'Some considerations on the mechanism of directional hearing', thesis, Delft, 1960.
 