Sorry, I'll have to disagree on that: if we have to look at Hi-Fi as a means to convincingly replicate a live music performance, I say we need not go further than stereo, simply because humans have two ears, not four (or should that be 5.1?).
So we only need two mics that map all signals, regardless of their origin, to two speakers in front of us? I'm sorry, but this does not and will never work at reproducing any kind of soundstage, even one that is only coming from in front of you. When I'm walking down the street and a car is coming up the road from behind me, I know it's behind me; I'm also able to make a pretty good guess at the distance and position of said car, relative to me, without turning to look at it. The same applies when a plane is flying overhead: I know it's overhead, even though I don't have an ear on top of my head 😛.
When we listen to sounds in real life, whether they are musical sounds or otherwise, we rely on three different cues to tell us the direction of the sound: level (currently the only one of the three cues represented by stereo/pan-potted mono setups), phase, and the Haas effect.
Here's a link explaining why stereo doesn't work for localising sounds; it also shows some nifty tricks you can do with delays, demonstrating how the ear determines direction by phase. Please, please read and digest this text; it will help dispel any notion that stereo isn't the problem.
Ambisonics Text
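To put rough numbers on the phase/timing cue mentioned above: the interaural time difference (ITD) can be approximated with Woodworth's spherical-head formula. A quick sketch, where the head radius and speed of sound are typical textbook values, not taken from the linked text:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# The delay grows from zero straight ahead to roughly 0.65 ms at the side,
# which is the timing cue the brain decodes into direction.
for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:4.0f} us")
```

A pan pot changes only the level between two front speakers, so this whole family of timing cues is simply absent from a pan-potted mix.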
I think there are two options for beating these flaws, always using a special set of microphones for capturing and a stereo system for playback (possibly with dipole speakers):
Can you elaborate on this special set of microphones? Because at the very least they will have phase anomalies (from being spaced apart even the slightest bit) and will map all signals, regardless of their direction, to the front two speakers.
1- Recording inside an anechoic chamber and playback in a "normal" room;
2- Recording inside a normal environment and playback in an anechoic chamber.
First option seems to be the most "user-friendly"
Eek, I can't see many musicians wanting to play in an anechoic chamber. Firstly, the sound will be incredibly dull, and secondly, how would anyone justify the expense to the accounts manager?
If musicians start playing in anechoic chambers, shouldn't we be going to listen to musicians play in anechoic chambers as well?
The thing is, the room is part of the original performance as well; we shouldn't be discounting it any more than the music. I understand we all have the problem of our own room adding to/changing the sound of the original recording, but surely there are better ways of reducing such effects (KYW's idea of controlled directivity, for instance).
We should always try to bring the original performance space into our own rooms, rather than trying to have the performers play in our rooms. Surely, the former represents a higher degree of accuracy to source than the latter.
Alright, no digital processing, no EQ'ing (provided the microphones' response curves are flat enough). All those natural colourations that contribute to what we call the 3D effect will originate from the listening room response and nothing else.
Yes, but these reflections are still mapped to the two front speakers and you still have phase anomalies between the two mics.
There's no need for a second pair of speakers playing back-wall reflections, because under these conditions the brain will decode every perceived sound into a single three-dimensional source.
I'm sorry, but this is not the case. While our ears are incredibly clever organs, they won't be able to decipher which signals from our front two speakers are supposed to be coming from the front, left, back, slightly to the right, above us... This just isn't necessary in real life, because, given that the delay is significant enough, our brain will notice the reverberation as coming from one direction (or several) and the original source of the sound from another.
Our senses function as a system, each one depending on the others to make us aware of the outside world. A good experiment is asking your beloved to offer you two different apples, or whatever is at hand, to taste while you're not allowed to see or smell them. Chances are you won't distinguish a Starking from a Granny Smith. Or pork from beef.
This is a gross simplification. Smell goes hand in hand with taste, but knowing the direction of a sound doesn't rely on sight. I knew the car was behind me, didn't I? If I didn't, I would have a hell of a time trying to cross the road, what with sounds coming from all directions mapping out approximately where the cars are and the direction they are travelling in. I knew that plane was there, even though I don't spend much of my time looking up into the sky for planes so I can relate their position to the sound that's arriving at my ears.
So, it's possible that one still needs, say, a visual aid to help visualize a credible aural picture (punnnnnnnn), and sad to say, staring at glowing valves doesn't help. Perhaps the current Home Theatre trend has a word or two to say. But more than 2 speakers? No way.
Hmm, the current home speaker trend relies on 5.1, which is just the mapping of mono signals around 5.1 speakers; this is not Ambisonics, so please don't bundle it in with such. Please read the first text I linked to and the one in this post. If you are so set on continuing to believe 2 speakers are all that's needed, you probably won't gain anything from them. But for those who approach with an open mind, they will be highly revealing of the flaws and fallacies pertaining to today's "stereo" systems.
BTW here's the stereo mic set I was referring to:
Ah, a dummy head, indeed it is a clever microphone, which can enhance stereo recording and playback somewhat. But the phase anomalies still remain, the incorrect mapping remains.
There is one microphone, which I like to think is even more special than this one....
PauSim said: Originally posted by derf
Sorry, I'll have to disagree on that: if we have to look at Hi-Fi as a means to convincingly replicate a live music performance, I say we need not go further than stereo, simply because humans have two ears, not four (or should that be 5.1?).
While listening to an ultra hi-end system, if we keep thinking "hum... sounds great, but still not close to the real thing...", the stereo approach is not to blame; it's the recording technique and the listening environment that are flawed.
I think there are two options for beating these flaws, always using a special set of microphones for capturing and a stereo system for playback (possibly with dipole speakers):
1- Recording inside an anechoic chamber and playback in a "normal" room;
2- Recording inside a normal environment and playback in an anechoic chamber.
First option seems to be the most "user-friendly" 😉
Alright, no digital processing, no EQ'ing (provided the microphones' response curves are flat enough). All those natural colourations that contribute to what we call the 3D effect will originate from the listening room response and nothing else.
There's no need for a second pair of speakers playing back-wall reflections, because under these conditions the brain will decode every perceived sound into a single three-dimensional source.
Our senses function as a system, each one depending on the others to make us aware of the outside world. A good experiment is asking your beloved to offer you two different apples, or whatever is at hand, to taste while you're not allowed to see or smell them. Chances are you won't distinguish a Starking from a Granny Smith. Or pork from beef.
So, it's possible that one still needs, say, a visual aid to help visualize a credible aural picture (punnnnnnnn), and sad to say, staring at glowing valves doesn't help. Perhaps the current Home Theatre trend has a word or two to say.
But more than 2 speakers? No way!
Excellent post.
SY, I will be at the Jazz festival all weekend if you want to hook up and go on a date 😀 You and my GF can fight over my exquisiteness 😎

geez... wrt wires and cables...
No one has mentioned the ever-present "skin effect", well known in the microwave field and often mentioned along with "intergranular dipoles" and other anomalous multifactorial phenomena, including "spin-induced ambiguities", "co-morbid impurity directional mis-coupling", etc. etc.
Why not... surely you've heard the improvements in your systems when compensatory efforts are made to minimize their interactions in the signal chain...
I knew this would come up in the conversation 😉. Please note I know very little about psychoacoustics, but I suspect it has a lot to explain about this stuff. Let me illustrate with a short joke: two guys are walking by the docks and the smart one says, "So sad, did you see it? A dead seagull..." and the dumb guy looks up to the sky and says, "Where? Where?".
Don't get me wrong, but if you were walking in the middle of the road and someone on a delta wing rang a bicycle bell above you, your immediate impulse would be to turn your head or get out of the way, just because you knew where the sound was supposed to be coming from, right?
Long ago I played this trick on a friend's dog: I recorded the dog's barks on a Casio SK-1 (a four-note polyphonic toy sampler) and played back the sample a few notes higher and lower, so it sounded like four different dogs barking. Guess what? The animal ran madly to the balcony, barking at the street.
About locating the car that's coming from behind: that also has a lot to do with the Doppler effect, a scientific and very measurable fact that explains why a noise or light source coming towards us, passing by and running away has its pitch or colour frequency in downward progression while its amplitude increases and decreases. That's what the expanding-Universe theory is based on.
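The Doppler shift described here is easy to quantify for a source moving straight toward or away from a stationary listener. A minimal sketch, where 343 m/s is a typical speed of sound in air and the car figures are purely illustrative:

```python
def doppler_shift(f_source_hz, v_source_ms, c=343.0):
    """Observed frequency for a source moving directly toward the
    listener (v > 0) or directly away (v < 0), listener stationary."""
    return f_source_hz * c / (c - v_source_ms)

# A 200 Hz engine note at 20 m/s (~72 km/h):
approaching = doppler_shift(200, 20)    # pitch raised while closing in
receding = doppler_shift(200, -20)      # pitch lowered while moving away
print(round(approaching, 1), round(receding, 1))
```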
It's that and reverberance. It's not only left-right sound reflections that count; the floor you're stepping on (while in the street) also plays a fundamental role, as does the ceiling while you're at home. Both provide sound reflections that help identify and localize sound sources on a vertical axis. The ears receive, the brain decodes.
Eek, I can't see many musicians wanting to play in an anechoic chamber. Firstly the sound will be incredibly dull (snip)
That's an easy one: almost every studio recording room sounds dead enough (just not at anechoic level) that musicians don't feel comfortable playing in it. That's when some sound processing is required by them, to listen through the monitoring bus (using, TADAMMM, headphones). What they're hearing while performing serves as a guide and is not actually recorded. This is common studio practice.
and secondly, how would anyone justify the expense to the accounts manager?
By saving on the time (read: money) required to record the entire performance without the use of multi-tracking, and listening, and consequent mixing, and listening, and re-recording, and re-listening, and remixing, and re-listening...
If musicians start playing in anechoic chambers, shouldn't we be going to listen musicians play in anechoic chambers as well?
I think either you didn't get my idea or you're writing along as you're reading, finding the answers to your questions later in my very same post. But as English is not my native language, and I don't write much, I'll take that as my fault.
So, my original idea is that our brain needs some cues to perceive recorded sounds as realistic. As a practical case: when you decide to listen to your favourite band playing live in the comfort of your home, and that's not possible for a number of $$$ reasons, you start to build a home system, just to verify that it's just not the same thing. Why? Because the performance was recorded in a different venue, and your listening room is biasing, with its own sonic signature, every acoustical nuance the recording tried to preserve. This alone is IMHO more than sufficient for the brain not to be easily deceived.

So, either the musicians record inside a room with no sonic signature at all, and the resulting playback will have the listening room's ambience added for "the sense of being there" (sic TL), or (in case your favourite band is the London Symphony Orchestra, so it would be physically impossible to fit hundreds of players inside your house anyway for you to compare) your desire is to be beamed into the Royal Albert Hall: unless you are in the Saharan desert, you'll have to listen inside a room with no sonic signature at all. That's why audiophiles fill their rooms with acoustic treatment materials, and most are happy with their stereos, up to a certain point.
We should always try to bring the original performance space into our own rooms, rather than trying to have the performers play in our rooms. Surely, the former represents a higher degree of accuracy to source than the latter.
Yep. For that I can't see any other solution than to use earphones, provided the recordings were made with the dummy-head microphones in STEREO. 😉 I know some recordings that use heavy digital processing, like RSS (Roland Sound Space), but they're all made for entertainment purposes, like flying singing pygmies and such.
(snip) our brain will notice the reverberation as coming from one direction (or several) and the original source of the sound from another.
Not if a non-reverberant location is in the game.
I only suggested an experiment. Of course things aren't so linear. This is a gross simplification (snip snip snip)
Ah, a dummy head, indeed it is a clever microphone, which can enhance stereo recording and playback somewhat. But the phase anomalies still remain, the incorrect mapping remains.
Nope, the head is there precisely to avoid phase anomalies, like the phase plug in the centre of a dynamic speaker driver.
I think the ear's shape is not there just to amplify but to somehow help locate sound sources. The direct, non-reflected sound from a car's engine coming from behind, for instance, will have some harmonics attenuated by the pinna (ear pavilion), just like when you place a speaker's grille to tame an incisive tweeter, while reflected sound from the side walls will arrive later but will have most of those harmonics intact. In principle. And of course, the brain will decode.
You may think such a small area of the ear can only work with very small wavelengths, and that's true. In fact, most people into audio know that it's the higher frequencies that are most readily perceived in terms of direction. That's why you can place a subwoofer in a corner without compromising the stereo image.
Hmmm, here's a bizarre idea: if we have to use a dummy head for capturing a stereo scene, why not some ear-shaped horn loudspeakers for reproducing it? Mirror-imaged, for STEREO. 😉
I added the Ambisonics site to my Favourites. I'll read it when I have the time.
Cheers
About locating the car that's coming from behind: that also has a lot to do with the Doppler effect, a scientific and very measurable fact that explains why a noise or light source coming towards us, passing by and running away has its pitch or colour frequency in downward progression while its amplitude increases and decreases.
Hmm, the Doppler effect has little to do with the location of a *static* sound source, although it will help you roughly determine the speed at which a moving vehicle is coming towards you. The moving car was just an example; for argument's sake, why don't we say the car is stationary and idling? There's no Doppler effect now, but I (and hopefully you and everyone else too) know the car is there, and can guess at the approximate distance and position of said car without having to turn to look at it. This is down to the three things I mentioned earlier: level, phase and the Haas effect. All explained in the texts I have linked to.
It's that and reverberance. It's not only left-right sound reflections that count; the floor you're stepping on (while in the street) also plays a fundamental role, as does the ceiling while you're at home. Both provide sound reflections that help identify and localize sound sources on a vertical axis. The ears receive, the brain decodes.
Indeed, reverberations from every angle count. It's the time between the reverberation reaching your ears and the original sound reaching your ears that helps your brain determine what is source and what is reverb. Hence your brain will point you in the direction of the source and read the reverb as exactly that: reverb.
As you've said, even the reflections from the floor and ceiling count. Alas, phase relationships in your typical pan-potted stereo mix are next to non-existent and incorrectly mapped (all sound comes from the front). Even recordings done with a stereo pair will have incorrect mapping, all to the front; so how do our ears tell what's supposed to be coming from the front and what's supposed to be coming from elsewhere?
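The fusion window being described here is the precedence (Haas) effect, and it can be summarised in rough bands. The boundaries below are approximate textbook figures and depend on signal type and relative level:

```python
def precedence_percept(reflection_delay_ms):
    """Rough Haas-effect bands for a single reflection at a level
    similar to the direct sound. Boundaries are approximate."""
    if reflection_delay_ms < 1.0:
        # Both arrivals contribute to one image ("summing localisation").
        return "image shifts between the two arrivals"
    if reflection_delay_ms <= 35.0:
        # Fused: direction is taken from the first arrival.
        return "fused, localised to the first arrival"
    # Late arrivals break apart and are heard separately.
    return "heard as a discrete echo"

print(precedence_percept(0.3))
print(precedence_percept(15))
print(precedence_percept(80))
```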
That's an easy one: almost every studio recording room sounds dead enough (just not at anechoic level) that musicians don't feel comfortable playing in it. That's when some sound processing is required by them, to listen through the monitoring bus (using, TADAMMM, headphones). What they're hearing while performing serves as a guide and is not actually recorded. This is common studio practice.
Common studio practice pays little regard to accurately capturing a soundstage to any degree. It mostly relies on pan-potted, close-miked mono signals (these only deal with level, not the other two cues needed for accurate localisation; it's all in the texts), which the studio engineer can do exactly what he pleases with. There's little reference to where the original musician was situated in relation to the studio, and there's rarely an attempt to capture all the musicians together with one stereo pair, pretty much making the idea of imaging or a soundstage in such recordings a moot point.
By saving on the time (read: money) required to record the entire performance without the use of multi-tracking, and listening, and consequent mixing, and listening, and re-recording, and re-listening, and remixing, and re-listening...
If the musicians are arranged correctly and the correct mic is used, it makes the above unnecessary. As you said yourself, "the studio is dead enough for musicians", so why take them anywhere else? Even if someone would let you use their anechoic chamber, how many studios do you think could afford, let alone justify, such a thing?
So, my original idea is that our brain needs some cues to perceive recorded sounds as realistic. As a practical case: when you decide to listen to your favourite band playing live in the comfort of your home, and that's not possible for a number of $$$ reasons, you start to build a home system, just to verify that it's just not the same thing. Why? Because the performance was recorded in a different venue, and your listening room is biasing, with its own sonic signature, every acoustical nuance the recording tried to preserve. This alone is IMHO more than sufficient for the brain not to be easily deceived.
Well, then we should be working on our room and system setup, rather than trying to implicate our own room's resonances in the playback. You can recreate a venue to an extent, given that the three cues pertaining to sound localisation are maintained and that the sound is correctly mapped. Stereo can't do this, and it never will; please don't discount Ambisonics' ability to do this prior to reading the texts (so far you've only bookmarked them, correct?).
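For what it's worth, first-order Ambisonics doesn't pan a source between speaker feeds at all: it encodes the source's full direction into four signals (B-format W/X/Y/Z), and a separate decoder later maps those onto whatever speaker layout is available. A minimal encode sketch using the traditional FuMa convention, purely illustrative and not taken from the linked texts:

```python
import math

def encode_bformat(sample, azimuth_deg, elevation_deg=0.0):
    """Encode one mono sample into first-order B-format (FuMa W, X, Y, Z).
    Azimuth 0 = front, 90 = left; elevation 90 = straight up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)                # omnidirectional pressure
    x = sample * math.cos(az) * math.cos(el)   # front-back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)   # left-right figure-of-eight
    z = sample * math.sin(el)                  # up-down figure-of-eight
    return w, x, y, z

# A source directly behind keeps X = -1: it is not folded to the front.
w, x, y, z = encode_bformat(1.0, 180.0)
```

Because the direction survives in the encoded signals, the decoder, not the recording, decides how many speakers reproduce it.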
Yep. For that I can't see any other solution than to use earphones, provided the recordings were made with the dummy-head microphones in STEREO. I know some recordings that use heavy digital processing, like RSS (Roland Sound Space), but they're all made for entertainment purposes, like flying singing pygmies and such.
Firstly, I'm kind of confused as to how you agree with me about recreating the source, yet you think musicians should start being recorded in an anechoic chamber. How many concerts do you think will take place in an anechoic chamber?
Also, trying to listen at "realistic" levels inside a pair of headphones can potentially be dangerous and painful; I'd advise against it, dummy head or not.
I posted a picture of a microphone that can get closer than any other to capturing the source, including the dummy head, but you've yet to comment on it or acknowledge my posting it, even question what it was...
Not if a non-reverberant location is in the game.
Life contains a fair amount of reverb. Unless everyone starts, as you suggest, recording in an anechoic chamber, I don't see any way round this. But then I don't see it as any kind of problem in the first place.
Nope, the head is there precisely to avoid phase anomalies, like the phase plug in the centre of a dynamic speaker driver.
This is patently untrue; the head is there precisely to *cause* phase anomalies. It is there to recreate a generic human head and the effect it has on sound capture (two spaced ears/microphones will have phase differences, but the brain uses and decodes such differences; microphones cannot). So any sound recorded with it will contain, to a degree, the cues needed for sound localisation (read: phase differences between the two microphones/ears). This works best when using headphones, as the sound is right next to our ears: there is no room effect, no reverb. This is the dummy head's primary function: recreating these things when they aren't available.
I think the ear's shape is not there just to amplify but to somehow help locate sound sources. The direct, non-reflected sound from a car's engine coming from behind, for instance, will have some harmonics attenuated by the pinna (ear pavilion), just like when you place a speaker's grille to tame an incisive tweeter, while reflected sound from the side walls will arrive later but will have most of those harmonics intact. In principle. And of course, the brain will decode.
The harmonics of anything have very little to do with localisation (save the fact that a signal off to the left will have reduced high-frequency content when arriving at the right ear, and vice versa).
How does a reflected sound get away with not being "attenuated by the pinna" while the direct sound does not? A direct sound will have more of the "original" harmonics intact than a reflected sound. The reflected sound might have different harmonics, due to it bouncing off this object and that, but it will also usually exhibit a loss in high-frequency content, due to the fragility of such frequencies.
I added the Ambisonics site to my Favourites. I'll read it when I have the time.
To be honest, I suggest you read it ASAP. It will stop you repeating the same fallacies ad infinitum.
It's pretty much dead as a format. That's a pity, because it works really, really well and tackles the REAL issues of sound reproduction (as opposed to the rather trivial matter of wires).
It's funny how, when we start talking about the REAL issues of sound reproduction, the thread grinds to a halt. I've only had 3 different people reply with regard to Ambisonics; I don't know how many people have bothered to read the texts.
As you say, it's a pity.
derf said: As you say, it's a pity.
Maybe because those who know how real unamplified instruments and live performances sound don't really care for multi-channel (read: more than two speakers).
I'm one of those who you can't convince to put speakers on my back, or (worse still) a center channel.
Also, processing the signal is something that doesn't appeal to me AT ALL.
Each one has his priorities and his own way to enjoy the music.
I prefer it natural, unprocessed.
The ambience and reverberations I can hear with a good stereo recording on my system are more than enough, no need to synthetically "simulate" reverberations of churches, halls... for that you have Yamaha A/V amps with tens of DSP modes.

PS: sorry, but discussing cables is more interesting.😀
Hi all
A little off the side of the real topic, perhaps, but...
If we really only wanted to listen to music, a good mono system would be all that was needed. I speak from experience; I used only mono for almost 3 years. Spending all your resources on musical bliss can be very rewarding. Stereo effects have nothing to do with music.
cheers 😉
Maybe because those who know how real unamplified instruments and live performances sound don't really care for multi-channel (read: more than two speakers).
Perhaps that is so, but I bet all those who dismiss Ambisonics have never even heard an Ambisonic system. I'd be very surprised if such people have even read the texts (read: proofs) on why Ambisonics is a superior capture/reproduction method compared to stereo. May I ask, did you read the texts I provided?
Yet these are the same people who can post pages and pages about the superiority of this cable over the next, without providing anything in the way of proof, reliable tests or measurements.
I'm one of those who you can't convince to put speakers on my back, or (worse still) a center channel.
OK, so even if you heard an Ambisonic system (not the same as the systems you're bundling it in with) and decided it was the better capture/reproduction method, you'd stick with your current stereo system; you couldn't be moved?
PS: sorry, but discussing cables is more interesting.
Yes, Carlos, I think you may be right; it's probably more interesting for some.
So, I'm thinking: would it be possible to split the thread from where I first mentioned Ambisonics into a new thread about Ambisonics? I have no wish to hijack the cables thread any more, and it's just possible a dedicated thread will garner interest from some slightly more open-minded individuals.
Mods, is this easily done?
Dude, the cable thing is done. As for Ambisonics, yeah, I read that. I also downloaded an ambisonic recording (or binaural [same thing?]) of a guy in a Volkswagen with his wife and listened to it on my MD-70's. It doesn't sound right.
That's all I have to say about it, because I won't have a leg to stand on if it gets technical. Now to post something far more interesting, to me at least.
derf said: May I ask, did you read the texts I have provided?
Yes, I did.
Also, years ago I read all about Ambisonics and its development process, and about other surround formats from the 70's too.
Good reading, but doesn't interest me, sorry...
Madmike, Ambisonics and binaural recordings are two very different things.
Ambisonics uses the microphone shown in my avatar, whilst binaural uses the head microphone shown earlier, or you can use your own head with two mics attached either side near the ears (usually small electrets, unless you really don't care what people think about you 😉 )
I'm inclined to think it's the binaural method you heard; the Ambisonic mic is beyond the reach of most people (I'm still saving for mine). Of course, there are all kinds of other factors involved when recording.
Have a listen on a pair of headphones; if it sounds more "right", then it's probably binaural.
A link would probably help...
MD-70's are headphones 🙂 I don't remember where I got that clip, and I never kept it. Someone else here must have had it, because I seem to recall getting the link from here MONTHS ago. So I can say I have never heard Ambisonics, although I did read about it. And yes, I recognise that microphone now. The other one is the stupid crash test dummy head 😀 Binaural. It does not sound right, dude. Not to me anyway.
Negative Resistance:
Yep, I take no umbrage at being viewed with your tongue somewhere else in the mouth than the usual place.
But technically it is quite simple. All it requires in practice is that the current drawn from, say, a regulated power supply is sensed by monitoring the voltage drop over a very small resistor in series with the load, and that this is applied as positive feedback within the overall negative feedback loop which is the basis of any regulated supply. What you will then measure is a slight increase in output voltage the moment a load is applied. (This principle is often used in electric motors for speed control under varying loads.) Logical thinking will then show that the only way this can happen is if the output impedance of said power supply is negative. This is used, e.g., in amplifier output circuitry, called motional feedback, and results in a very clean bass response: any deviation of loudspeaker cone travel from the original input is detected by sensing the loudspeaker "back e.m.f.", and the driver cone is "forced" to follow the applied wave.
It can be a powerful tool in bass reproduction, but as usual in life there are penalties. The particular amplifier sensing circuitry must be tailored exactly to the specific driver, to the extent that such amplifiers are usually mounted in the speaker cabinet. There are also initial pre-set adjustments to be made - a well-known driver manufacturer has revealed that drivers of the same type can vary by as much as 20% in production (and then we want high fidelity!). Such designs must also be carefully controlled for stability, as I think it is general knowledge that drivers make atrocious loads (phase angles all over the show, etc.). The technique is usually limited to the frequency band in the vicinity of the loudspeaker bass resonance, because this is where major distortion can arise.
I hope it is less of a black art now! (To boot, you can use ordinary wire for loudspeakers if you sense the fed-back signal directly at the driver terminals with a separate "sensing" cable.)
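To put a number on the "slight increase in output voltage" idea, here is a minimal numeric sketch of a regulated supply with positive current feedback. All component values (setpoint, output resistance, sense resistor, feedback gain) are illustrative assumptions, not values from the post:

```python
# Idealized model (illustrative values only): a regulated supply whose
# setpoint is bumped up by positive feedback from a current-sense resistor.

def output_voltage(i_load, v_set=12.0, r_out=0.05, r_sense=0.01, k=7.0):
    """Terminal voltage with positive current feedback.

    v_set   : regulator setpoint (V)
    r_out   : regulator's own closed-loop output resistance (ohms)
    r_sense : small series current-sense resistor (ohms)
    k       : positive-feedback gain applied to the sensed drop
    """
    # Positive feedback raises the setpoint by k * I * r_sense, while
    # (r_out + r_sense) drops the terminal voltage in the usual way.
    return v_set + i_load * (k * r_sense - (r_out + r_sense))

# Effective output impedance is -dV/dI:
z_eff = -(output_voltage(1.0) - output_voltage(0.0)) / 1.0
print(f"effective output impedance: {z_eff:+.3f} ohm")
# With these values: (0.05 + 0.01) - 7 * 0.01 = -0.01 ohm, i.e. negative,
# so the terminal voltage *rises* slightly when a load is applied.
```

If `k * r_sense` exceeds the supply's own series resistance, the slope of voltage versus current goes positive, which is exactly the negative output impedance described above.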
Phew!!
A lot said there, Carlos - and don't snub me if I repeat: I am not going back sentence for sentence; I agree with your views overall.
I want to stress one thing that I get the impression is not always realised by all: It is largely the "ambience" (room/hall effect) and not directionality of primary sources that creates realism. Even a solo instrument will sound better in stereo than in mono (or all the players sitting one behind the other for that matter).
Then (having studied this somewhat), the processing taking place in the total hearing faculty (from the auditory nerve through to the final sensation) is incredibly complex. When I last read about it, not all was clear - e.g. somewhere, right and left ear signals even get mixed before going further. Such research is very difficult; one cannot go fishing around in the brain with a probe to find out exactly what is going on where. This is only to accentuate that the brain has had years to "programme" itself and can analyse stuff that has no simple relation to direction, shape of ear lobes, etc. alone. This was implied in your post but needs to be accentuated.
Lastly (from my side) it is also necessary not to overlook the phase-related physics of recording. If one draws on a piece of paper, in its simplest form, say 5 instruments some distance away, with your ears at the position of 2 microphones, realism can only be duplicated by listening to this with earphones, i.e. left channel to left ear and vice versa ONLY. The moment this is reproduced from 2 loudspeakers (now replacing the microphones) and the ears (now some distance away) must form a picture by doing a process of differentiation a second time, something will be different. A simple geometrical drawing will illustrate this. (It must be kept in mind that it needs only a few centimetres of path difference to add/cancel high audio frequencies.)
The laws of physics preclude that stereo as it is (2 channels only) can render a true image of the original (not to say it cannot come close!). The only other alternative to nirvana is a microphone recording every instrument (sound source) and that source alone - no pick-up from next door. Again a drawing on paper with a virtual wall between listener and orchestra, i.e. where the microphones are (which then become the secondary sound sources) and then drawing and considering the difference in path lengths between the 2 systems, will illustrate.
Note that I am not saying that we cannot have something very enjoyable by existing procedures; I listen to my recordings and find them immensely satisfying (...er, well, most of the time). But these factors do exist when realism is analysed.
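The "few centimetres of path difference" remark is easy to check. Two copies of the same signal whose paths differ by d metres cancel fully when d equals half a wavelength, i.e. at f = c / (2d). A quick sketch (speed of sound assumed to be 343 m/s):

```python
# First-cancellation frequency for a given path-length difference.
C = 343.0  # assumed speed of sound in air at room temperature, m/s

def first_cancellation_hz(path_diff_m):
    """Lowest frequency fully cancelled by a path difference of half a wavelength."""
    return C / (2.0 * path_diff_m)

for d_cm in (1, 3, 5):
    f = first_cancellation_hz(d_cm / 100.0)
    print(f"{d_cm} cm path difference -> first null at {f:,.0f} Hz")
# A 3 cm difference already puts a null near 5.7 kHz, well inside the
# audible band - which is why small geometry changes audibly matter.
```

So centimetre-scale differences between the earphone geometry and the loudspeaker geometry are indeed enough to shift high-frequency addition and cancellation, as the post argues.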
OOPS - SORRY!!
Apart from compliments to Carlos, my previous reply was really stimulated by the posts from Derf and Paulo Simoes as well. Sorry, guys, you also deserve credit! One gets engrossed in the contents and forgets the authors.