John Curl's Blowtorch preamplifier part II

Administrator
Joined 2004
Paid Member
Hi John (Curl),
The most rational comment you have made in my experience, Soundminded. ;-)
Figures. Now THAT makes perfect sense. :) <-- this is understood by browsers. :)

Hi John (jneutron),
Completely agree.
Claims of "you're wrong but I can't tell you why" don't float anybody's boat.
JC and Soundminded,
Take note. Those comments really don't fly. They tell me that you have no way to back yourself up. No credible way at any rate. If there is a statement that you feel has to be made, but you cannot support what you are saying, then just be quiet. Doing that will make you both appear more intelligent than the other lines we are used to from you.

-Chris
 
Hi Soundminded,
The frequency of a sound wave with a period of 5 µs is 200 kHz; the frequency of one with a period of 2 µs is 500 kHz. The limit of human hearing is in the vicinity of 20 kHz. The understanding of how the human brain interprets the neural information it gets from the ears determines not only what is and is not important in accurate sound reproduction but also the performance criteria for a sound system intended to reproduce it accurately. The two go hand in hand. The combination of our lack of knowledge of how sound works and of how the brain hears means that current sound systems cannot be designed with the expectation of reproducing the subjective experience of a live musical performance. Until that gap is closed, investment in expensive sound reproducing equipment is a waste of money, and trying to perfect a failed paradigm is a waste of time and effort. Meanwhile, the claims of vast superiority for the most expensive equipment and of technological breakthroughs never let up. The reasons are not only commercial but also relate to egos.
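The period-to-frequency arithmetic above is just f = 1/T; a quick sketch of the quoted figures (the function name is mine, purely illustrative):

```python
def period_to_freq_hz(period_s: float) -> float:
    """Frequency in Hz of a signal whose period is given in seconds."""
    return 1.0 / period_s

print(period_to_freq_hz(5e-6))  # 5 µs period -> 200 kHz
print(period_to_freq_hz(2e-6))  # 2 µs period -> 500 kHz
```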
I normally use swept sine, swept filter or fixed signal and swept filter. My equipment has a minimum I.F. of 3 Hz, which I do occasionally use. I can look up the number of data points if I have to, or you can. It's an HP 3585A (old but nice to use). I also use a 3580A (1 Hz I.F. and adaptive sweep) and recently got hold of an HP 35665A which I'm getting used to - that's a "Dynamic Signal Analyzer" which uses FFT. So do most of the digital 'scopes on the market (even USB types and sound card type systems). Newer, nicer gear does make life easier, and allows you to "see" more detail and information. From what you are saying, none of this makes any difference? It sure does, and it's helpful information. Even JC uses test equipment.

Resolution does make a difference, and so does knowing where to look for anomalies or problems. No one I know would bother taking a measurement that does not have enough data points. The normal complete scans are simply to look for areas of interest, then you go hunting in what you know are happy hunting grounds.

Give up on good sound equipment? That makes no sense at all. The better the equipment is, the more enjoyable it is to listen to it. Just because it's not perfect doesn't mean you can't enjoy what is good. Besides, I don't really want to hear someone three rows over cough or sneeze - do you?

Now, you were talking about differences in sound arrival time, and alluded to those differences demanding a much higher cut-off frequency than the accepted 20 kHz (you are looking at an arrival difference of 5 µs, requiring a 200 kHz bandwidth). This "top end" is highly variable between people, and most often well below your minimum limits. The reason is that you are confusing the amplitude process and the timing functions that occur in our brains. You can easily bandwidth-limit a pulse to some low frequency, and delay one event by 2 or 5 µs with respect to the other event. You seem to be confused as to how these things are related.
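The distinction being made here, that a signal bandlimited far below 200 kHz can still carry a microsecond-scale delay, can be shown with a toy numerical experiment. This is my own illustrative sketch, not anyone's actual test setup; the sample rate, tone frequency, and burst shape are arbitrary assumptions:

```python
import math

FS = 1_000_000   # 1 MHz analysis grid (a fine time axis, not audio bandwidth)
F_SIG = 2_000    # a 2 kHz tone burst: content far below even 20 kHz
DELAY_S = 5e-6   # the 5 microsecond delay under discussion
N = 2000         # 2 ms of signal

def burst(t: float) -> float:
    """A 2 kHz tone under a smooth Hann envelope: a bandlimited pulse."""
    if t < 0 or t > N / FS:
        return 0.0
    env = 0.5 - 0.5 * math.cos(2 * math.pi * t * FS / N)
    return env * math.sin(2 * math.pi * F_SIG * t)

left = [burst(n / FS) for n in range(N)]
right = [burst(n / FS - DELAY_S) for n in range(N)]  # same pulse, 5 µs later

# Brute-force cross-correlation over lags of 0..20 samples (0..20 µs):
# the correlation peak recovers the delay even though the signal itself
# contains nothing remotely near 200 kHz.
best_lag = max(range(21),
               key=lambda lag: sum(left[n - lag] * right[n]
                                   for n in range(lag, N)))
print(best_lag / FS)  # recovered delay in seconds
```

The point matches the post: timing acuity between two channels is a comparison process, not evidence of ultrasonic hearing.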

So sure, we don't know everything. But, we do know more these days than we did in 1970 (for example). Should we assume everything we do know is invalid because we don't know everything? You can, I prefer to continue learning. We correct what we find is inaccurate as we go along while building on what has been proved to be accurate. That is how the journey of learning happens.

-Chris
 
Take a look at Bill Waslo's excellent article "Reflecting on Echoes and the Cepstrum: A look at Quefrency Alanysis and Hearing" in Speaker Builder, Aug 1994. Bill also had a very nice paper on cepstral theory on his website, but it doesn't appear to still be there. It's an interesting technique- we played around with this during my Nicolet days.
My copy of Speaker Builder (Aug 1994), really Vol. 15 #6 or Six:1994, has a Bill Waslo article "Absolute SPL Sensitivity Measuring with IMP".

Should I look in another issue?
 

JN and I have had discussions before on another site going back many years.

JN reported that the human brain/ears can detect differences of sound arrival between the two ears as small as 2 to 5 microseconds, based on his measurements. I accept that as correct. However, any implication that this translates into an ability to hear sounds beyond 20 kHz is not a logical conclusion. What is logical is that the ability to detect the direction of the source of a sound, and to make rapid and critical judgements not only about where the source is but about other "factors" of the source, is far greater than can be explained by simply looking at the amplitude variation with time of the sound that reaches the ear. The binaural recording/playback method meets all of the criteria that model presupposes. The sounds arrive at each ear with the same relative time relationship, loudness relationship, and differences in spectral content as they would for a human sitting exactly where the recording microphones are located, at the orifices of two human ears, yet the system clearly doesn't work. The brain almost instantly figures out that the illusion is a false one. This is not a learned response; it is wired into the way the brain processes auditory impulses. How? The correct answer to that question opens a door to an entirely different view about sound and hearing, one that puts current knowledge in a very different perspective. It does not necessarily negate what is known to be true, but it sheds new light on it. It's a door I went through 37 years ago and I've never turned back.

BTW, how do your sweep generator/tracking filter measurement results compare to your FFT measurements for audio equipment? Clearly the FR of the tracking filter will add some measure of increased tolerance (band of uncertainty) to the measured results.
 
JN and I have had discussions before on another site going back many years.

Yup
JN reported that the human brain/ears can detect differences of sound arrival between two ears with as small a difference as 2 to 5 microseconds based on his measurements.
No, No, No, No......

I never said "my measurements"...:eek:

Nordmark first reported measured results of 1.5 microseconds for dithered signals. He also provided data for dithers from 0 to 6 µs, and the human response from 500 Hz out to 12 kHz...

I repeat, I have never attributed that information to my own effort... It was Jan Nordmark whose work I first read of. He published back in 1974, and I was informed of the paper and research by Jon Risch of audioasylum fame.

While Jon and I do not generally seem to see eye to eye, I credit him for providing me this information..

I accept that as correct. However, any implication that this translates into an ability to hear sounds beyond 20 khz is not a logical conclusion.

Waaaay back, I absolutely stated that the inverted bandwidth cannot in any way, shape or form, allow anyone to reach the conclusion that humans are capable of hearing information in the 200 to 500 kHz range...never.

Again, it's nice to hear from you ...tis been a while..

Cheers, John
 
The binaural recording/playback method meets all of the criteria that model presupposes. The sounds arrive at each ear with the same relative time relationship, loudness relationship, and differences in spectral content as they would for a human sitting exactly where the recording microphones are located, at the orifices of two human ears, yet the system clearly doesn't work.
No. A stereo reproduction system is incapable of producing the spherical wavefronts necessary for the correct soundfield humans are designed to respond to.

David Griesinger even started to touch on this issue with normal pan-based lateral movement in a rudimentary form, with his figure 5 (or 6, if you count correctly) graph showing angular distortion of location vs IID and frequency.

You and I have always agreed that the current model of stereo reproduction is incapable of generating the correct spatial soundfield..

This is why your "you are incorrect and I can't tell you or I'll hafta kill ya" schtick was such a surprise to me.. This is the first time I've seen you do that...to me, anyway..

Cheers, John
 

"This is why your "you are incorrect and I can't tell you or I'll hafta kill ya" schtick was such a surprise to me.. This is the first time I've seen you do that...to me, anyway.."

The theory my patent is based on may have a new lease on life. I thought it would be a dead issue and I was willing to discuss it in great detail until just a few years ago when I was still posting on that other site that seems to place its priorities on selling its sponsors' products even if it uses the soft sell approach. That has changed. There might just be another chance for it yet. There are limits on what I'm willing to talk about now.

But here is a novel thought, something for you to think about that you may not have considered. In the evolution of our senses, which helps our brain model the exterior world to enhance our chances for survival, the fact that higher animals have two ears allows their brain not only to determine where a source of sound is directionally with respect to our own location, but to assess how large and powerful it likely is, how far away it is, and the size and nature of any enclosed space the hearer and the source are mutually in. This helps determine whether it is a threat (predator) or an opportunity (prey or potential mate) and what the appropriate instinctive response is: to run towards it or away from it.

Ever wonder why dogs are frightened by thunder? Why audiophiles often play recordings, and rock bands perform, at deafening sound levels? Why "imaging", where the perceived distance to a source matters, has become very important to audiophiles? All of these have to do with the perceived power of the source. I have identified at least five factors that influence this judgement and how the ear-brain mechanism for making that judgement works.

Hearing is the primary sense, as important for survival as sight, because hearing a sound is often the first clue to the presence of something new in our environment. It is no accident that when you hear a sound, your instinct is to turn your head towards it to look at it. The senses of sight and hearing work together. How this mechanism works, and why the brain can determine the direction of the source so quickly and accurately, is one part of a complementary theory that justifies the effort to create much better sound reproducing systems than we have today. The mechanism by which the acoustics of a venue defeat that ability, and how they work together, is also part of the theory. That's the part that informs the brain about the physical space you are in.

The perceived power of musical instruments and groups that perform at large public venues, such as pipe organs, symphony orchestras, large choruses, and even a grand piano, cannot be duplicated without duplicating the acoustical effects of those places. Those effects constitute most of the sound that is heard in them. Their ability to fill up those spaces even at modest SPLs is a demonstration of their intrinsic power, which no sound reproducing system today can duplicate. While recordings made using the existing technology cannot capture these effects, they can be reconstructed from what is available. That's where the theory and invention come in.
 
Soundmind, I think I learned something from your input. Of course, I respect your need for confidentiality. Only people who do not contribute to the outside world with new ideas may take offense, unless they are 'fishing' for new ideas, as often happens.

Then here is some more food for thought. Consider two situations: one where you are seated in a concert hall with a symphony orchestra playing a passage at, say, 85 dB, and another where you are at home listening to the same passage on a recording, also at 85 dB. At the live performance the musicians may be, and sound, 40 to 60 feet away. At home the perceived source is 10 to 15 feet away. The direct sound falls off with the square of the distance, so this variable alone may add a factor of 9 to 16 to the perceived power or impact of the sound. Your home listening room is about 3,000 to 5,000 cubic feet in size, the concert hall between 600,000 and 1,000,000 cubic feet. The reverberant sound field of each note in your room dies out in about 0.2 to 0.3 seconds; in the concert hall, in as much as 2 to 3 seconds or more. The perceived power of the orchestra is, by my calculations, as much as 10,000 to 20,000 times as great as the recording, or 40 to 43 dB, when they are at the same loudness. That is how much louder you'd have to make the recording to equal the impact of the live performance. For a large church or cathedral, it could be 50 to 60 dB, a factor of 100,000 to 1 million. This, IMO, is the main reason why audiophiles play recordings so loud: they have no impact otherwise, because the acoustic effects which give clues to the power of the sound source are missing. If Dr. Bose wrote even one thing worth remembering in his white paper, it's that only 16 feet from the performing stage of Boston Symphony Hall, the reverberant sound field which results from the acoustics already constituted 89% of the total sound field, and his graph showed that as you went further back in the hall the percentage continued to increase. This illustrates the importance of that field (not that his 901 speaker had a prayer of duplicating it any more than any other speaker does).
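The arithmetic in that paragraph is inverse-square distance scaling plus power ratios expressed in decibels; a minimal check of the quoted figures (the helper name is mine, purely illustrative):

```python
import math

def db(power_ratio: float) -> float:
    """Express a power ratio in decibels."""
    return 10.0 * math.log10(power_ratio)

print((60 / 15) ** 2)   # distance ratio of 4 -> inverse-square factor of 16
print(db(10_000))       # a 10,000:1 power ratio is 40 dB
print(db(20_000))       # a 20,000:1 power ratio is about 43 dB
print(db(1_000_000))    # a 1,000,000:1 power ratio is 60 dB
```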

The number one correlation between a measured characteristic of a concert hall and the hall's ranking as acoustically desirable, as reported by Leo Beranek in his paper comparing 59 concert halls using 20 measured parameters as judged by golden-eared conductors and other "cognoscenti", is the binaural quality index (1 - interaural cross-correlation), in other words the perception of space in it. Number two was bass response. Bass clearly has an important role to play in the subjective impact of sound (that's why thunder scares the dogs; the same thing happens when hot air balloons are near my house).
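The binaural quality index mentioned here is 1 minus the interaural cross-correlation coefficient (IACC), the peak of the normalized cross-correlation between the two ear signals over a small lag window. A bare-bones sketch of the computation (my own simplification; real IACC measurements use band-filtered binaural impulse responses and a roughly plus/minus 1 ms lag window):

```python
import math

def iacc(left, right, max_lag):
    """Peak normalized cross-correlation over lags -max_lag..+max_lag samples."""
    norm = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    n = len(left)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[i] * right[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        best = max(best, abs(s) / norm)
    return best

# Identical ear signals: IACC = 1, so the binaural quality index 1 - IACC
# is essentially 0 (no perceived spaciousness); decorrelated ear signals
# push the index toward 1.
sig = [math.sin(0.1 * n) for n in range(500)]
print(1.0 - iacc(sig, sig, 10))
```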

These are some of the issues my ideas deal with. The electronics become a servant of a much larger problem.
 
Soundmind, of course you are correct on this. However, I like female singers singing close to me in a smallish room. There, high quality playback can be VERY effective.
I agree with you about perceived 'power'. Once 40 years ago, we took our early GD system to Grace Cathedral (purported 7 sec reverb time) to add to a Tibetan Horn group. What 'power'! HONK ON! ;-)
 
Administrator
Joined 2004
Paid Member
Hi Soundminded,
I am actually very relieved to read your complete response. Originally I was worried that you were yet another fringe character clutching at straws. Your point of view on sound reproduction is accurate for music performed live. Anything produced in a studio can't claim anything but that it's a creative effort produced to please us in some way. I do have to say though, few audiophiles "crank" rock music up. They will end up with reduced sensitivity (hearing loss) doing that over time. I must admit that I do turn things up past what is reasonable from time to time.

Your comments regarding room acoustics have been recognized for quite some time now. Yamaha (for example) spent a great deal of time and money to capture the sonic signatures of various venues to include in their sound processors. If they weren't at the front of that field, they must have been in the top few. But either way, the actual experience can not be reproduced exactly in a home setting. Even if you include the 7 speaker setup they came up with.

However, I still have to point out that good equipment doesn't actually attempt to recreate the venue where the recording took place. If the ad copy and testimonials give that impression, well ... that's advertising for you. Add that to the long list of lies the audio community tries to pass off as fact. Nope, an audio system is only there to amuse, a (hopefully) joyful experience. Otherwise, we wouldn't return to that activity as much, would we? Once you accept that viewpoint, better equipment can be justified by the improved listening experience. If you stick to the idea of an attempt to recreate an audio event, then I have to agree with you. The equipment isn't there yet, and probably will not be for a long, long time. The link coupling the air to the information is a glaring weak point in every respect. The equipment used can make an audible difference, but nothing yet can claim to do a good job of recreating the real experience.

JN and I have had discussions before on another site going back many years.
That's something that is between you two. Most of us wouldn't have known that.

BTW, how does your sweep generator/tracking filter measurement results compare to your FFT measurements for audio equipment? Clearly the FR of the tracking filter will add some measure of increased tolerence (band of uncertainty) to the measured results.
Good question, and one I can't answer yet. I'm not yet familiar with the Signal Analyzer. What I can say is that each has a stated accuracy for time and amplitude. The Spectrum Analyzer does not process any input in relation to time. It has one input channel and one output channel. The tracking generator is internal to the unit and is calibrated with the input channel - so it tracks well. The Dynamic Signal Analyzer does output a signal and has two inputs. It is an early model, so current instruments would be much better in every regard - assuming the same quality of design and manufacture. This instrument does process signals related to time (the other channel), and will produce phase information as well as amplitude information. However, information from each will not increase the uncertainty if you are considering data from both. You just have to characterize each instrument in relation to the other. The same holds true for sound card based instruments.

The most difficult thing to deal with would be the actual characteristics of a sound card based system. The characteristics of the sound card make and model would have to be accurately known in order to begin to assign measurement limits. The effect of the software and host system may very well play a part as well, unless the sound card had its own on-board controller. Once you have that figured out, the added analog front end would need to be characterized as well. I haven't even considered examining my setup in detail to get an idea of accuracy limits (I have two sound card based systems). In fact, I haven't used either in at least 6 months now.

What I can say is that most things I can hear, I can find in measurements. Keeping in mind that I'm normally looking for problems in a design or product. Most things I do to improve the quality of sound I can also see as a reduced distortion or noise level. Listening to things, and measuring them in various ways seem to go hand in hand. Both keep you on track. Using either one method alone tends to allow you to go off on a tangent and end up where you don't want to be. Just look at all the pure "design by ear" folks out there for confirmation. Just as lost are the people who only design by measurements.

Your examination into the mechanisms that developed and shaped how we hear and react to things is a great place to begin building a system to recreate a live sound event. I've read a little on that, not enough to claim a good understanding, but a general idea of how the process functions. I'll leave it to people like yourself to learn more from. I still don't think that the performance of present day audio equipment interferes with the enjoyment of some nice music though.

-Chris
 

"Hi Soundminded,
I am actually very relieved to read your complete response. Originally I was worried that you were yet another fringe character clutching at straws. "

It depends on what you mean by fringe character. If you mean someone who does not accept the conventional wisdom as gospel when it doesn't make sense to him, looks for his own answers that do, is uncompromising with errors and knows a fudge when he sees one, then I plead guilty on all counts. And not just in the realm of audio and sound, I'm that way about everything.

If I didn't have a working model of my idea that I've been experimenting with, then I might just be clutching at straws. But my theories are not only well grounded on paper, they can be made to work... sometimes. It isn't easy, especially when you are as uncompromising a person as I am, even with my own efforts.

I don't do this for a living; it's a hobby. I'm up to any challenge, and the more impossible it seems, the better I like it and the more determined I am to solve it. All of my ideas about sound stem from a mathematical model I devised purely by accident 37 years ago. Since that time it hasn't changed one jot. Figuring out how to apply it was one of the hard parts. But even harder by far was trying to figure out if being able to do this was actually better or just different. If it is better, why is it better? Do we try for accurately reproducing sound for its own sake, or is there a valid reason that justifies the time, cost, and effort? Understanding hearing and how it relates to the perception of music is one of the challenges that gives me some insight into the answer to that question.

If all that sounds too philosophical, then consider that people who love the sound of live music are willing to go to extremes to hear it at its best. Rather than sit satisfied hearing recordings in the comfort of their own home, they will go to places where they will be surrounded by two thousand or more strangers, confined to a seat for an hour or more, exposed to communicable diseases, and distracted by extraneous noises they have no control over. Multimillionaires and billionaires will donate millions to build concert halls and opera houses, while auctioneers have no problem selling the rarest and most priceless musical instruments for a king's ransom. It isn't just for their history that they command so much money; it's for the sound they can make in the right skilled hands. These are sounds that have so far eluded the ability of our technology to duplicate. Not by a little, but by a lot.

I've been fortunate to hear live and recorded music all of my life. I hear both most days of my life. I've known musicians at every level of skill from rank beginners to world famous performing artists and at every level in between. The sounds that are possible from live music can enrich life and bring pleasure that is unmatched by any recording machine. It is sad but true that these experiences are restricted to a small percentage of the population and not necessarily by cost but by knowledge. Perhaps if more people knew more about music there would be greater demand for better technology and a real incentive to invent it. Is that being on the fringe? Then so be it if it is.
 
Then here is some more food for thought. Consider two situations, one where you are seated in a concert hall with a symphony orchestra playing a passage at say 85db and another where you are at home listening to the same passage on a recording also at 85 db. At the live performace the musicians may be and sound 40 to 60 feet away. At home the perceived source is 10 to 15 feet away. The direct sound falls off with the square of the distance so this variable alone may add a factor of 9 to 16 to the perceived power or impact of the sound. Your home listening room is about 3000 to 5000 cubic feet in size, the concert hall between 600,000 to 1,000,000 cubic feet. The reverberant sound field of each note in your room dies out in about .2 to .3 seconds, in the concert all as much as 2 to 3 seconds or more. The perceived power of the orchestra is by my calculations as much as 10,000 to 20,000 times as great as the recording or 40 to 43 db when they are at the same loudness. That is how much louder you'd have to make the recording to equal the impact of the live playback. For a large church or cathedral, it could be 50 to 60 db, a factor of 100,000 to 1 million. This IMO is the main reason why audiophiles play recordings so loud, it has no impact otherwise because the acoustic effects which give clues to the power of the sound source is missing. If Dr. Bose wrote even one thing worth remembering in his white paper, it's that only 16 feet from the performing stage of Boston Symphony Hall, the reverberant sound field which results from the acoustis already constituted 89% of the total sound field and his graph showed that as you went further back in the hall the percentage continued to increase. This illustrates the importance of that field (not that his 901 speaker had a prayer of duplicating it anymore than any other speaker does.)

The number one correlation between a measured characteristic of a concert hall and that hall's ranking as acoustically desirable, as reported by Leo Beranek in his paper comparing 59 concert halls using 20 measured parameters as judged by golden-eared conductors and other "cognoscenti," is the binaural quality index (1 minus the interaural cross-correlation), in other words the perception of space in the hall. Number two was bass response. Bass clearly has an important role to play in the subjective impact of sound (that's why thunder scares the dogs; the same thing happens when hot air balloons are near my house).
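For the curious, the binaural quality index is straightforward to compute from a pair of ear signals. A minimal sketch of the 1 - IACC definition (my own illustration, not Beranek's exact measurement procedure; the test signal and lag range are assumptions):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation over
    lags of roughly +/-1 ms (the interaural delay range)."""
    n = len(left)
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(left[lag:], right[:n - lag])
        else:
            c = np.dot(left[:n + lag], right[-lag:])
        best = max(best, abs(c) / norm)
    return best

# Identical ear signals -> IACC = 1, so BQI = 1 - IACC = 0
# (no spaciousness); decorrelating the ears raises the BQI.
fs = 48000
t = np.arange(fs // 10) / fs
sig = np.sin(2 * np.pi * 440 * t)
print(f"BQI, identical ears: {1 - iacc(sig, sig, fs):.2f}")  # 0.00
```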

These are some of the issues my ideas deal with. The electronics become a servant of a much larger problem.

Right, two fundamental differences i can think of - there may be more - between home listening rooms and concert halls or outdoor listening experience are decorrelation of sounds, and numbers of modes. A concert hall or outdoors allow for greater amounts of both.

Crude experiment to illustrate:

Simply adding HF-rolled-off, delayed side and rear channel(s) to an ordinary stereo setup will add greatly to perceived SPL, even though they are operated at very low SPLs. They also add to the sensation of spaciousness. Adding extra subs, suitably distributed and EQ'd, adds further to the sensation of much greater loudness and spaciousness.

As you know, there are now quite a number of systems for doing multichannel, but basically what they add to the home experience is some kind of extra decorrelation and greater LF modal distribution.
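A bare-bones sketch of the side/rear feed described above: delay the main signal, roll off its highs with a one-pole lowpass, and attenuate it. The delay, cutoff, and gain values here are placeholders, not recommendations:

```python
import numpy as np

def side_channel(main, fs, delay_ms=20.0, cutoff_hz=4000.0, gain_db=-12.0):
    """Derive a delayed, HF-rolled-off, attenuated side/rear feed."""
    # Delay: prepend zeros so the side feed arrives after the mains.
    delay = int(fs * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(delay), main])[:len(main)]
    # One-pole lowpass for the HF roll-off.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    out = np.empty_like(delayed)
    y = 0.0
    for i, x in enumerate(delayed):
        y += alpha * (x - y)   # first-order lowpass
        out[i] = y
    return out * 10 ** (gain_db / 20)  # run at low level

fs = 48000
rng = np.random.default_rng(1)
mains = rng.standard_normal(fs)   # one second of noise as a stand-in
rears = side_channel(mains, fs)
print(len(rears) == len(mains))   # True
```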
 
Then here is some more food for thought. Consider two situations: one where you are seated in a concert hall with a symphony orchestra playing a passage at, say, 85 dB, and another where you are at home listening to the same passage on a recording, also at 85 dB. At the live performance the musicians may be, and sound, 40 to 60 feet away. At home the perceived source is 10 to 15 feet away. Direct sound intensity falls off with the square of the distance, so this variable alone may add a factor of 9 to 16 to the perceived power or impact of the sound. Your home listening room is about 3,000 to 5,000 cubic feet in size; the concert hall is between 600,000 and 1,000,000 cubic feet.

However, the recording was made at the concert hall, so there are aural cues that the sound is from a large room. This, of course, is confounded with the ambient acoustics of the playback environment. Is it better, then, to play back in an acoustically dead room?

The number one correlation between a measured characteristic of a concert hall and that hall's ranking as acoustically desirable, as reported by Leo Beranek in his paper comparing 59 concert halls using 20 measured parameters as judged by golden-eared conductors and other "cognoscenti," is the binaural quality index (1 minus the interaural cross-correlation), in other words the perception of space in the hall.


I'd like to find unbiased reviews of the acoustics at the Tokyo Opera City concert hall with its high BQI. I have mixed thoughts about the five or six recordings I have from there. Of recordings, Disney Concert Hall sounds much better to me.



 
However, the recording was made at the concert hall, so there are aural cues that the sound is from a large room. This, of course, is confounded with the ambient acoustics of the playback environment. Is it better, then, to play back in an acoustically dead room?




I'd like to find unbiased reviews of the acoustics at the Tokyo Opera City concert hall with its high BQI. I have mixed thoughts about the five or six recordings I have from there. Of recordings, Disney Concert Hall sounds much better to me.




When a commercial recording is made, microphones are placed much closer to the musicians than anyone in the audience is, and the microphones usually have a cardioid pickup pattern. This reduces the acoustical effect that gets onto the recording, as a percentage of the total sound field, to a small fraction of what the audience hears even in the closer seats. Qualitatively, too, it is inadequate to be reproduced accurately, and it cannot be separated from the directly arriving sounds on playback. Various four-channel and other multichannel efforts have not worked well. Other miking techniques have been tried too: in addition to binaural recordings, I think ambisonics (not to be confused with ambiophonics) has its own miking technique. As far as I can tell, none of these techniques works well, despite the wild claims made by their proponents.

Beranek designed one hall in Tokyo; it may be the one you are referring to. He claims it is an excellent-sounding hall, among the best in the world. He seems pretty honest and straightforward about his failures as well as his successes. He makes no apology for what is now called Avery Fisher Hall at Lincoln Center, although his explanation of why it happened is surely different from that of those who managed the fiasco at the time; they, of course, blame him. Just because a hall sounds good live doesn't mean that recordings made there will also be excellent. Besides the enormous range of different miking techniques possible, what enhances a live experience may not necessarily make for the best recording environment. It can in fact create major difficulties for the recording engineer.
 
Administrator
Joined 2004
Paid Member
Hi Soundminded,
I'm not demanding to know what you are working on, and I can understand that a working concept that may be on the way to market at some point is worth some protection.

I think your views as stated would fit many other people here. My definition of a "fringe character" is one who embraces ideas simply because someone they respect has supported them. Someone who knowingly ignores the known physics of a process and creates what amounts to their own fantasy world, then attempts to force that reality on everyone else. I can think of a few people around here who do that, but there is no point in identifying them. Some members will know who I'm referring to.

My main points were that most music is created to be an experience, not a reproduction of a live performance. You'd need all the sights and smells in addition to the sound to be successful, I think. So that type of music only needs to be enjoyed, though some intervention may be required to enjoy it (like cutting some bass on Madonna productions). For a recording that is supposed to capture "the real event", I'd say a person's best attempt should be made to reproduce the sound, not to modify it unless there is a clear problem with the "software". In either case, better equipment helps (defined as actually better, not necessarily more expensive or a brand name thought to be superior). It's a hobby, or a luxury, meant to create a pleasing experience. Go ahead and make yourself happy, and better equipment can do that (unless it's not actually better :) ). In other words, make the best of a situation that is not perfect.

-Chris
 
Member
Joined 2004
Paid Member
The most difficult thing to deal with would be the actual characteristics of a sound card based system. The characteristics of the sound card make and model would have to be accurately known in order to begin to assign measurement limits. The effect of the software and host system may very well play a part as well, unless the sound card has its own on-board controller. Once you have that figured out, the added analog front end would need to be characterized as well. I haven't even considered examining my setup in detail to get an idea of its accuracy limits (I have two sound card based systems). In fact, I haven't used either in at least 6 months now.

-Chris

One of Bill Waslo's key innovations in his Praxis system is a self-calibration process that corrects for both the record and reproduce sides of the soundcard. It greatly reduces the uncertainty of the system. There are limitations to soundcard-based systems that a dedicated analyzer overcomes, but for the measurements above, everything from frequency response to CSD is done easily and quickly with the soundcard-based Praxis.
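The general idea behind such a self-calibration (my own simplified illustration, not Praxis's actual algorithm) is a loopback measurement divided out of subsequent measurements, so the card's combined record/reproduce response cancels:

```python
import numpy as np

def calibrate(dut_spectrum, loopback_spectrum, floor=1e-6):
    """Divide a measured spectrum by the loopback reference; bins
    where the reference is near zero (outside the card's passband)
    are left untouched rather than divided by noise."""
    ref = np.where(np.abs(loopback_spectrum) > floor, loopback_spectrum, 1.0)
    return dut_spectrum / ref

# Simulated demo: the card tilts the response; dividing by the
# loopback measurement recovers the device under test.
freqs = np.arange(1, 101)
card = 1.0 / (1.0 + 1j * freqs / 50.0)    # pretend soundcard roll-off
dut = 1.0 / (1.0 + 1j * freqs / 20.0)     # pretend DUT response
measured_loopback = card                  # output cabled back to input
measured_dut = card * dut                 # DUT measured through the card
print(np.allclose(calibrate(measured_dut, measured_loopback), dut))  # True
```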

Measuring very precise, small time differences in band-limited systems is not difficult, but the measurement chain needs the necessary time resolution. In a sampled-data system (digital audio) there may be issues with reproducing the small time differences that could be important. Possibly this is the biggest benefit of high-resolution (higher sample rate) recordings. Microphones are not much better than they were 20 years ago; in many cases they are the same mikes they were 40 years ago, and very few recording mikes have response to 20 kHz, let alone higher.
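As an illustration of the first point, time resolution far finer than one sample period falls out of ordinary cross-correlation with peak interpolation. A sketch with made-up signals (the pulse shape and the fractional delay are assumptions for the demo):

```python
import numpy as np

def time_delay(a, b, fs):
    """Delay of b relative to a, in seconds, with sub-sample
    resolution from parabolic interpolation of the correlation peak."""
    corr = np.correlate(b, a, mode="full")
    peak = int(np.argmax(corr))
    # Fit a parabola through the three points around the integer peak.
    y0, y1, y2 = corr[peak - 1], corr[peak], corr[peak + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset
    return (peak - (len(a) - 1) + frac) / fs

# Band-limited pulse delayed by 0.3 of one sample period:
fs = 48000
t = np.arange(-200, 200, dtype=float)
a = np.sinc(t / 8.0)              # pulse band-limited well below fs/2
b = np.sinc((t - 0.3) / 8.0)      # same pulse, 0.3 samples later
print(f"{time_delay(a, b, fs) * fs:.2f} samples")  # close to 0.30
```

The catch the post alludes to is on the reproduce side: a converter can encode such offsets, but the whole playback chain has to preserve them.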
 
Administrator
Joined 2004
Paid Member
Hi Soundminded,
I should have mentioned that I've also been fortunate to have worked in commercial music recording studios, and live events as well. I have repaired audio equipment used in consumer reproduction, live events and recording studios for over 30 years. There is a wide gulf between what an engineer has to work with in the way of beds, and what a customer may receive on whatever media it's sold on. I understand your points regarding your musical experiences.

-Chris
 
Status
Not open for further replies.