John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
The Boston test is a problem. Although I doubt that the speaker setup was state of the art (free of horizontal early reflections that disturb imaging; see my AES paper with Malcolm Hawksford and Bernd Theis), no one has proven so far that this test did not tell the truth.
When digital processing is carried all the way down to the speaker, results from 16/44 digital can be excellent. We built a digital speaker in Essex in the early '90s that sounded mesmerising,
but my speaker system at home still sounds even better to my ears. No digital processing is used in my reference speakers, but they are phase coherent. Yes, I know there are many definitions of what phase coherent means. I mean that the Hilbert transform of the transfer function of my speakers also yields the measured phase response. In Europe we call that a causal system.
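For readers unfamiliar with the term: for a causal, minimum-phase system the phase response can be recovered from the magnitude response alone, via a discrete Hilbert-transform relationship applied to the log-magnitude. A minimal pure-Python sketch of that relationship, using a toy two-tap minimum-phase filter (not any actual speaker measurement) and the cepstral form of the discrete Hilbert transform:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 64
h = [1.0, 0.5] + [0.0] * (N - 2)     # zero at z = -0.5, i.e. minimum phase
H = dft(h)

# Real cepstrum of the log-magnitude
log_mag = [math.log(abs(Hk)) for Hk in H]
cep = [c.real for c in idft(log_mag)]

# Fold to a causal cepstrum -- this is the discrete Hilbert-transform step
folded = [0.0] * N
folded[0] = cep[0]
folded[N // 2] = cep[N // 2]
for n in range(1, N // 2):
    folded[n] = 2.0 * cep[n]

# Imaginary part of DFT(folded) is the phase predicted from magnitude alone
phase_from_mag = [X.imag for X in dft(folded)]
phase_measured = [cmath.phase(Hk) for Hk in H]
err = max(abs(a - b) for a, b in zip(phase_from_mag, phase_measured))
print(f"max phase mismatch: {err:.2e} rad")
```

For a minimum-phase (causal) system the mismatch is essentially zero; a speaker with excess phase (e.g. from a non-coincident crossover) would show a clear residual.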
 
Why should they be considered 'superior' IF they usually do not reflect what people consistently hear on their own? Perhaps it is the ABX test that is at fault. I think that is so.

It may indeed be a problem with the ABX test. Why do you think it's that, rather than people's perceptions being influenced by what they think they know about the equipment (expectation effect / confirmation bias)?
 
Terry, you need to explain something here.

"Down sample to 44.1 / 24 using a) Tony Faulkners averaging program (no
LPF) and b) a high quality SRC program (typically with with brickwall
filtering)."

In this context downsampling, to me at least, is a sample rate conversion, and the brickwall filter defeats the point of the TF averaging. I'm confused. I read the Stereophile article (in part): more Wadia-esque stuff about deliberately letting some aliasing (imaging) through to avoid the dreaded Gibbs effect. Please show me some real music waveforms that exhibit significant pre-ringing. Again we are criticised because "music is not sine/square waves", yet we are shown impulse and square-wave responses as examples of how the mathematically correct way of doing things "looks like it must sound bad". Greiner did careful DBTs that showed brickwall filters are not audible. He is a good engineer, and his tests were very careful and peer reviewed.
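As background on what "pre-ringing" actually is: the impulse response of an ideal linear-phase brickwall lowpass is a sampled sinc, which rings symmetrically before and after its peak. A small pure-Python sketch (the cutoff and filter length are arbitrary illustration values, not any product's design) showing that exactly half of the ringing energy sits before the main tap:

```python
import math

fc = 0.5    # cutoff as a fraction of the sample rate (illustrative value)
M = 200     # half-length of the truncated ideal filter (illustrative value)

# Impulse response of an ideal brickwall lowpass: a sampled sinc, linear phase
h = [fc if n == 0 else math.sin(math.pi * fc * n) / (math.pi * n)
     for n in range(-M, M + 1)]

peak = max(range(len(h)), key=lambda i: abs(h[i]))   # main tap at the centre
pre_energy = sum(x * x for x in h[:peak])            # ringing BEFORE the peak
post_energy = sum(x * x for x in h[peak + 1:])       # ringing AFTER the peak
print(peak, pre_energy, post_energy)
```

The symmetry is the point of contention: the ringing is only excited by signal energy at the band edge, which is exactly the question the "show me real music waveforms" challenge raises.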
 
Scott,

All points noted and well understood.

If people listened, in isolation, to the various mechanisms for getting from 24/176.4
to 16/44.1, we might have more informed opinions.

I preferred the TF downsampling to a HQ brickwall SRC, but it was pretty
subtle. The material was acoustic / non-complex, and I am also completely
comfortable that it is very likely a euphonic effect.

We've been doing it for years with tape :spin:\_/:spin:

T
 
It's unclear to me why higher-order filters should necessarily provide faster settling. As far as I can see, the settling time isn't so much a function of the filter order as of the Q: a higher-order Bessel filter will settle more quickly than a somewhat lower-order elliptic, for example.



I don't consider a level error (linear) of 1% to be particularly worrisome. Do you? The faster the settling, the smaller this level error. Five time constants give an error about -43 dB down.
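The -43 dB figure follows directly from assuming simple single-pole (exponential) settling; a quick arithmetic check:

```python
import math

# Residual error of a single-pole exponential settler after k time constants
for k in range(1, 8):
    residual = math.exp(-k)
    print(f"{k} tau: {20 * math.log10(residual):7.2f} dB")
```

After 5 time constants the residual is e⁻⁵ ≈ 0.67%, i.e. about -43.4 dB, matching the figure quoted.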



Sorry, I'm not following here. Both methods?

For the same type of filter, the higher the order, the greater the out-of-band attenuation, and the faster you should reach your design goal.

Yes, five time constants of error on a step size of 1/2 full scale is significant.

In a ratiometric converter the maximum change is a single bit, and both methods (linear filter and inverse exponential) are about equal. In other D/A methods you can have a change from one output period to the next of up to the full scale of the converter. So if your settling error is -44 dB, that is probably not very good.
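To put the full-scale-step case in numbers (illustrative arithmetic only, again assuming single-pole settling, not a model of any particular DAC): settling a full-scale step to within 1 LSB of a 16-bit converter takes about 11 time constants, whereas 5 time constants leave the residual near -43 dB, far above the 16-bit floor of roughly -96 dB.

```python
import math

bits = 16
lsb = 2.0 ** -bits                     # LSB as a fraction of full scale
floor_db = 20 * math.log10(lsb)        # ~ -96.3 dB for 16 bits
taus_to_1lsb = math.log(1.0 / lsb)     # time constants to settle a
                                       # full-scale step to within 1 LSB
residual_5tau_db = 20 * math.log10(math.exp(-5.0))
print(f"16-bit floor: {floor_db:.1f} dB")
print(f"taus to 1 LSB: {taus_to_1lsb:.1f}")
print(f"residual after 5 taus: {residual_5tau_db:.1f} dB")
```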
 
For the same type of filter, the higher the order, the greater the out-of-band attenuation, and the faster you should reach your design goal.

Sure, but that's another design goal, related to aliasing, not settling time. Or have I missed some relationship between the two?

Yes, five time constants of error on a step size of 1/2 full scale is significant.

I could understand if this were a non-linear error, but it's entirely linear: it just results in a level change. Stereo pots (to choose a random example) introduce far greater level errors; they'd need to be jolly good to keep their tracking error below -43 dB.

In a ratiometric converter the maximum change is a single bit, and both methods (linear filter and inverse exponential) are about equal. In other D/A methods you can have a change from one output period to the next of up to the full scale of the converter. So if your settling error is -44 dB, that is probably not very good.

I'm still confused. How are linear filters and inverse exponentials different? To me, an inverse-exponentially settling filter is a linear filter. If you think it's non-linear, please explain how that comes about.
 
It is normally presented as the primary test for audio listening evaluation.

Not by anyone who has even cursory knowledge of sensory testing.

I do not look at nameplates. I just keep track of the components under test. I don't have to be told 'which is which'; I just need to know that this is L and this is M, for example, or D and H. This nameplate thing is a 'slander' meant to compromise my experience in making intelligent and correct audio decisions.

Wow, you can't even keep your story straight for two sentences in a row.
 
Sorry, SY, let me make it clear for everyone. Nameplates do NOT impress me, but listening differences between two components do. To me, a 'nameplate' could bias the gullible. I agree with that, but keeping track of different-sounding equipment still needs some sort of ID. For example, in 1978 I visited HK in Japan and was given a blind test of three circuits that I did not design or build. The evaluation equipment was good, I was very impressed with the engineers and the facility, and I gave my subjective impression of each circuit as it was presented, and compared them in ranking. They were very impressed. To me, it was just a day's work.
 
The same year, 1978, I tried the Spiegel ABX box in my own apartment, and I could not reliably tell the difference between a Levinson JC-2 line amp and a Dyna PAS-3X. That is when I realized that 'something' was wrong, and I wrote it up in a letter-to-the-editor correspondence with Dr. Lipshitz and others in 'The Audio Amateur' in 1979.
Perhaps those serious about this sort of testing should also read the 'other side', the criticisms of ABX and similar testing. They are still valid to me.
 
For example, in 1978 I visited HK in Japan and was given a blind test of three circuits that I did not design or build. The evaluation equipment was good, I was very impressed with the engineers and the facility, and I gave my subjective impression of each circuit as it was presented, and compared them in ranking. They were very impressed. To me, it was just a day's work.

What exactly was the "test"?

Sounds like you were simply presented with three options and asked to rank them. If so, I don't see that that's any sort of test.

Now, if they had presented them to you as A, B, and C, asked you to rank them, then randomly changed their designations, asked you to rank them again, and then did this a dozen times and compared the results, that would have been a test.

But as you describe it here, it doesn't seem to be any sort of test at all.

se
 
Well, HK was impressed. You see, they made up the test. Their expert listeners had already 'ranked' them, and I got the same ranking among the three designs.
In truth, I find that I can hear differences in audio equipment. That has kept me trying to design better and better audio equipment over more than four decades.
I also find that I can be fooled by ABX tests. Why, may be argued, but it is a fact, at least for me. That is why I don't participate in ABX or similar blind testing.
It is important for me to assess subtle differences in audio components, if possible. If I don't, I can't make better products, and I will be out of a job. Therefore, I use what works for me, as it has for decades. I just listen, and sometimes compare.
I usually listen with pretty good equipment, and really good source material.
If others here are happy with blind or controlled testing to show them what they can hear, so be it.
 
Well, HK was impressed. You see, they made up the test. Their expert listeners had already 'ranked' them, and I got the same ranking among the three designs.

The odds of that happening by chance would be a mere 1 in 6, even if you hadn't bothered to listen to anything. I don't know that I'd call that terribly impressive.
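The 1-in-6 figure is simply the number of possible rankings of three items (3! = 6), so a random guess matches the reference ranking one time in six:

```python
import math
from itertools import permutations

rankings = list(permutations(["A", "B", "C"]))  # every possible ranking
p_guess = 1 / len(rankings)                     # chance of matching by luck
print(len(rankings), f"{p_guess:.4f}")
```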

In truth, I find that I can hear differences in audio equipment.

But you haven't been able to actually demonstrate that "truth."

se
 