John Curl's Blowtorch preamplifier part III

Status
Not open for further replies.
I don't know what the problem is here, but I have also designed the electronics for condenser microphones, as well as the rest of the chain. My first effort was making measurement-microphone electronics to match a 1" B&K capsule. We made two of these back in 1971; one is still with me today for when I need it. Later, I modified the Nagra-based electronics for the B&K 1/2" capsules (about 10) with higher-value input resistors, lowered the noise 10dB for the GD, and added variable sensitivity by controlling the 200V bias supply. Then, in the early 1980s, I made some custom microphone electronics for some German capsules for a customer.
I still have a couple of 1/2" B&K capsules, as well as a couple of Sony 500 capsules, but I have never had the need to build the electronics for them. Most microphone electronics for these capsules 'suck' compared to what I design, except for the B&K electronics, which are usually just OK. Of course, I am a perfectionist, so my criticisms come easily. I also make standard microphone electronics for moving magnet and ribbon microphones, but that is another set of problems.
 
More nonsense. Designing mic electronics and recording with them are not the same thing. And studios have been using condensers with tube preamps (with all the extra complexity) forever. Some are still highly praised.

https://www.sweetwater.com/store/de...u47-large-diaphragm-tube-condenser-microphone

And the 50-year-old ones are worth more money.
Stop pretending you know recording.

The biggest issue with original U47s is that the VF14 tube is unobtainium.

The one in your link is a new version and uses a different tube, which is actually a glass tube inside that metal envelope.

Also, the originals were run in starved-heater mode; some info:

http://www.wagner-microphones.com/compare inside3.htm

The transformer was dual-bobbin, pi-wound, and I don't think it had the usual 80/20 core material.

http://tab-funkenwerk.com/id79.html
http://www.tab-funkenwerk.com/id58.html

A local guy (G Wagner) makes one of the best current re-issues of the U47.

http://www.wagner-microphones.com/

T
 
Sorry, I misread cbcb's criticism. I am NOT a recording engineer; I just design electronics for recording engineers. I would not know my way around a modern, or even a 50-year-old, recording studio. I only visited them sometimes to participate in some production.
However, some recording engineers want more than a typical studio provides, and that's when they bring me in.
 
john curl said:
Of course, subtle changes are addressed in telecommunications or gravity waves. It is just that they are kept as 'trade secrets'.
OK, that explains why new physics and things like directional cables are not discussed openly; they are all trade secrets. How dumb of me not to think of such a simple explanation.

Then there are military secrets; some are advanced science, some are a scam, some are merely nonsense.

scottjoplin said:
First day at BT I had to sign the Official Secrets Act; never quite understood why,
In olden days it could have been so you didn't spill the beans on all the phone tapping going on, if you came across some of it or overheard colleagues talking about it.
 
mmerrill99 said:
No, you have the wrong definition of false negative - it is not hearing something which is actually different (by measurement or by other blind tests which have established this)
Sorry, I thought that the true believers will only accept differences found in a sighted test. They discount any blind test or measurements, because blind tests are flawed and we always measure the wrong things.
 
Sorry, I thought that the true believers will only accept differences found in a sighted test. They discount any blind test or measurements, because blind tests are flawed and we always measure the wrong things.
Does this not represent a very strong bias on your part, even to the extent that you will misuse well-known & established definitions of "false negative"?

I certainly don't hold the position that all blind tests are flawed &, again, this is a misstatement of all that has been discussed so far.

I do believe however that we need some better measurements, as discussed recently here.
 
Citation? How was the testing and analysis done?

It follows from the same comparisons, already cited several times in the other threads, between ABX and other test protocols. When testing the same sensory difference, the proportion of correct answers (poc) was found to be significantly lower in ABX than in the other tests.
So, if the statistical analysis is based solely on the accumulated number of correct responses as the test statistic, then it becomes more difficult to obtain a correct positive result.
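As a rough illustration of why a lower poc matters (the 16-trial setup and the 0.7/0.6 poc figures are my own illustrative assumptions, not numbers from the cited articles): with a one-sided exact binomial test against chance, a listener whose poc drops from about 0.7 to about 0.6 under ABX sees the expected score in a 16-trial run fall from roughly 11/16 to roughly 10/16, both short of the 12/16 needed for p < 0.05. A minimal sketch:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value against chance (p = 0.5):
    probability of scoring at least `correct` of `trials` by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 trials, 12 correct is roughly the p < 0.05 threshold:
print(round(abx_p_value(12, 16), 3))  # 0.038 -> significant
# A listener at poc ~0.7 expects about 11/16; at poc ~0.6, about 10/16:
print(round(abx_p_value(11, 16), 3))  # 0.105 -> not significant
print(round(abx_p_value(10, 16), 3))  # 0.227 -> not significant
```

So a protocol that shaves even one expected correct answer off a short run can turn a borderline result into a clear "no difference found".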

As a general comment on Leventhal's critique (and not your comment), it carries with it the acknowledgment that small effects are hard to find, regardless of the protocol.

Which obviously is true, and Leventhal's analysis was indeed universal in this regard. But what, imo, often makes our discussions in these threads so difficult is a premise, usually unspoken but assumed to be true: namely, that listening under test conditions must (in some miraculous way) be the same as under "normal/casual" conditions, if not better (because the "sight confounder" is missing).

Leventhal's examples use the binomial test, and the effect size is therefore expressed as a proportion of correct responses, as you have to assume an effect size for any power analysis.

But there is a distinction between the "real" effect size and the effect size under test conditions.
That is one of the reasons why I consider the "gorilla" example a good illustration of this distinction; the "gorilla" represents, imo, an effect that is not small but can nevertheless be transformed into a "small" effect under test conditions.
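Leventhal's power argument can be sketched with nothing but the binomial distribution (the specific numbers here, 16 trials and a true poc of 0.6, are my own illustrative assumptions, not taken from his papers):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def critical_correct(n: int, alpha: float = 0.05) -> int:
    """Smallest score significant at level alpha against chance (p = 0.5)."""
    return next(k for k in range(n + 1) if binom_sf(k, n, 0.5) <= alpha)

n = 16
crit = critical_correct(n)      # 12 correct needed out of 16
power = binom_sf(crit, n, 0.6)  # chance of reaching 12 if the true poc is 0.6
print(crit, round(power, 2))    # power is only ~0.17, so the Type II error
                                # rate is ~0.83: a small real effect will
                                # usually go "undetected" in a 16-trial run
```

That is the whole point: with a short run and a small effect, a null result is the expected outcome even when a real difference exists.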

The only context in which I've seen Leventhal quoted is in attempts to discredit any and all ABX protocols (which raises alarm bells), rather than to acknowledge that his analysis essentially says, "we don't have the statistical power to say anything about small effects".

See above; it indeed reflects the reality of any empirical experiment, and it might have been used to discredit any "blind test attempt" (not only the ABX variant). But, otoh, Clark and others in particular tried hard to discredit Leventhal's analysis as unjustified "audiophile nonsense", although, as we know, Leventhal simply reported something that Clark et al. should have been aware of but apparently weren't.

(Again, not directed at you) but Clark et al.'s part of the debate should also be read before passing judgment.

Of course; the first response from Shanefield and Clark et al. I've seen was their authors' response to Leventhal's critique (both in the JAES). By the rules of such journals it was a reasonable one - failing, for obvious reasons, when trying to reject the critique - but already presenting the ad hoc argument (brought up by Dan Shanefield) that any of these allegedly small differences would be of no practical relevance.

The response to Leventhal's critique by Clark (and Nousaine, I think; Shanefield did not participate) in the letters section of Stereophile, not restrained by the strict journal rules, was far less polite, if not overly aggressive, and tried to rebut Leventhal's analysis with questionable if not absurd (from a statistical viewpoint) arguments.
Reading that, I was quite shocked by the lack of professionalism and began to question their motives, as it seemed they were not so much interested in finding the truth.

A criticism I would make is that it'd be nice to show the effectiveness of Clark's chosen ABX protocol against varying, synthetic positives to have a better characterization of their overarching experimental design. If that data exists, I haven't seen it and would be most appreciative of anyone who can link it.
The articles I've cited were not about synthetic but about real differences/positives, and showed the reported differences under real experimental conditions.
In association with other analysis attempts, there were additional simulations done wrt predictions following from modelling the internal judgement processes. I'll have to dig deeper in my library for that......

One thing that gets ignored is that Leventhal's assessment also leaves wide open the point that purported large effects are generally within the detection capability of these lower-N experiments.

Surely, but that brings us back to the (hidden) premise I've mentioned above. We have to remember that in most other fields with "controlled blind experiments" there is usually a variable assessable by a conventional measuring device; so in these experiments with humans there is an objective measurement of an effect (still variable between individuals, though) and, in addition, a response given by a subjective "rating" from the humans.

In listening tests this objective part is missing (Oohashi et al. tried to reach something comparable by incorporating PET scans and EEG; recently some researchers tried fMRI to assess listeners' responses to different reverberation times for concert-hall acoustics), and all we have is the subjective response of the participants, which might mainly reflect the impact of the test condition rather than a response to the alteration of the independent variable.

Agreed on the sieve property, but in reality that doesn't help in most cases.
Let me mention again Shanefield's ad hoc argument of missing practical relevance. It seems plausible, but the scientific approach requires checking whether it is true in reality. I have never heard of any attempt by the ABX audio-test crew to do so. But I have to mention that Arny Krueger (on his old PCABX page) did very well in pointing out the need for training to be really prepared, and even cited the ITU recommendations.
 
...so in these experiments with humans there is an objective measurement of an effect (still variable between individuals, though) and, in addition, a response given by a subjective "rating" from the humans...

The factors above all discredit any short testing regimen. Forget about variability between different humans; my opinion, after testing many, is that there is very little correlation between most people for anything more than gross differences. Even within a single test subject, variability is significant. Our perception varies second by second: as muscles tense in our jaw and inner ear, as what we are looking at changes, as our minds wander from one thing to another, as we feel hungry or thirsty, or have minor sinus and other ailments. Add to this expectation bias, which is subtle and devilish to train away except by repetition... Longer testing sessions, over a few days or more, average out these factors as much as possible.

Except for gross sonic differences, I believe any of these tests really only test the individuals, not the equipment. Of course there is value in that result, but not necessarily as applied to subtle equipment differences.

Cheers,
Howie
 

TNT

But there is a distinction between the "real" effect size and the effect size under test conditions.
That is one of the reasons why I consider the "gorilla" example a good illustration of this distinction; the "gorilla" represents, imo, an effect that is not small but can nevertheless be transformed into a "small" effect under test conditions.

Well, imo, if gorillas are not detected, it is a small effect (yes, the animal is quite large...) - so small that gorillas do, in fact, not matter. Then some might think that gorillas are a big deal once they learn that they are there (= sighted ;)).

But there is the whole trick - the hypothesis on trial is "gorillas are not detectable" (or: electrolytic capacitors sound no different from film capacitors) - if they can't be detected, they aren't.

So you can watch basketball with gorillas your whole life and not suffer at all, i.e., have the same experience as those watching basketball without gorillas.

Gorilla-less basketball tickets are also much cheaper ;)

Ignorance can be bliss.

//
 
An interesting (depressing for me) coincidence that you should mention that: I've recently begun to notice that my tinnitus sometimes varies, usually for the worse, when I chew :mad:

Yes, I can tell when I wake up whether there was a teeth-clenching event (even if a tooth was not broken). My dentist showed me a study where subjects wore strain gauges at night; you honestly would not believe the pressure numbers.
 
"As easy"? Surely a sighted test makes incorrect results (for 'hearing') at least slightly more likely?

As a scientist you know that plausible hypotheses are still hypotheses and have to be examined in a sufficient way.
One way to do so is to compare the poc (proportion of correct trials) with stimuli whose status is known (either known to be perceptible or known to be identical) - something that was done routinely within the experiments for SDT (Signal Detection Theory) since roughly the 1940s/50s.

If you analyze well-documented listening tests, you'll see that, for example, the poc in the case of presenting identical stimuli twice might be as low as 20% (under controlled "blind" listening-test conditions) in multidimensional evaluations.
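SDT's standard way to separate real sensitivity from response bias is d' = z(hit rate) - z(false-alarm rate). A toy sketch (the 90%/80% rates below are invented numbers chosen only to echo the ~20%-correct-on-identical-pairs figure above, not data from any cited test):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A listener says "different" on 90% of truly different pairs (hits),
# but also on 80% of identical pairs (false alarms, i.e. only 20%
# correct on the identical presentations, as mentioned above).
print(round(d_prime(0.90, 0.80), 2))  # ~0.44: low real sensitivity,
                                      # despite the impressive hit rate
```

This is why the control trials with identical stimuli matter: without the false-alarm rate, a high "hit rate" alone says almost nothing.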

As said before, there is a risk of being flawed due to the "knowing about" effect, but it depends on a lot of variables. I think I've proposed it before in other threads: it helps a lot to do some experiments yourself (meaning, conduct "blind" tests with other people) to get a better feeling for what is really happening under such "unusual" conditions.
Test some people who know neither about "high end" in general nor about specific brands. If you test for amplifier differences, you'll notice that the usual black-box design of amplifiers does not attract them, and that it often does not make a difference if one black box is named "......" and the other "......".

It should be mentioned that it could be quite different when doing tests with "high end" aficionados.

Difference I would expect. Significant difference is perhaps a bit more surprising. How significant, enough to render some protocols unhelpful?

As usual, it depends. If you know about the pitfalls of a method (and know about the countermeasures), it can still be useful.
If you don't care (the reason might be ignorance or arrogance), it is more like "cargo cult testing".

The problem with calculating probability is that you have to make some assumptions, which may or may not be reasonable assumptions and may or may not appear to be reasonable to others.

Surely, but that isn't different: in any case you carry a lot of assumptions (e.g., that it's slightly more likely to get wrong results in "sighted listening", or about the relevance/degree of a difference and its associated outcome in experiments, or about the objectivity of experimenters, and so on and on....)

My concern remains that, as a general rule, people object to tests when they don't like the result - especially if they find the result commercially challenging.

Of course it can be.....

Having decided that they don't like the result, they can then almost always find some grounds on which to criticise the test.

As a scientist you already know that this doesn't invalidate the criticism; it is either justified or it is not. You might not like it for the reasons you've mentioned, but that is a totally different matter.

The same people can be curiously accepting when a test gives the result they prefer. This can happen on both sides of the 'audio debate', of course.

Again, as a scientist you already know that it could happen for good reasons. For example, I don't like the ABX protocol myself, and, as written before, I noticed that two people I tried it with experienced quite severe difficulties, so I dropped it from my method list. But I nevertheless have to accept that other people might have very different experiences, so if listeners like, for example, Paul Frindle or Bruno Putzeys report results from ABX tests, I don't generally have a problem accepting them, despite my general concerns.

ABX seems to be acquiring the same notoriety as THD. It is just a protocol, as THD is just a number. It is not as important to us as those opposed to it seem to imagine.

As a scientist you are able to make a distinction between "bashing" and justified criticism based on scientific evidence. I'm sure we can agree that consideration of the known pitfalls is quite rare, despite the fact that these pitfalls have been known for at least 60 years.

Of course, ABX is just a protocol, but as there is a constant demand for "doing an ABX", now justified by its anti-cheating aspect, it is, imo, the most commonly used protocol in the amateur testing department.

Instead of bashing ABX, perhaps some people ought to try to come up with good reasons why they trust the results of sighted tests?

Which is a completely different topic, isn't it?

We all learn in our lives to rate the subjective advice of other people differently, according to our experiences with that kind of advice.
 
As a scientist you know that

As someone who worked in two scientific institutes as a scientist during my professional career, I know that scientists do not differ much from "ordinary people": many of them have their beliefs and biases, and in order to protect their methods some of them are willing to hide results that do not confirm their theories. Of course not everyone behaves this way, but the range is as wide as with other people.
 
As someone who worked in two scientific institutes as a scientist during my professional career, I know that scientists do not differ much from "ordinary people": many of them have their beliefs and biases, and in order to protect their methods some of them are willing to hide results that do not confirm their theories. Of course not everyone behaves this way, but the range is as wide as with other people.

No dispute there, but the scientist really _should_ know, although sometimes behaving differently (for whatever reason).

@DF96,

This has happened to me before too, so PMA's advice is a good one, as it guards against a lot of the mischief that can happen.....
 