Engineering the stereo image

I don't know if there are any sound engineers reading the forum, but I was curious about how modern recordings were engineered, especially with respect to creating the stereo image.

If each musician is being recorded separately on individual tracks, where does the stereo image come from? Is it created by the sound engineer in the studio?
 
preiter said:
I don't know if there are any sound engineers reading the forum, but I was curious about how modern recordings were engineered, especially with respect to creating the stereo image.

If each musician is being recorded separately on individual tracks, where does the stereo image come from? Is it created by the sound engineer in the studio?

An engineer by the name of Adolpho Waring created a circuit that is used to create the imaging. A complex system, it was designed around circulator theory, somewhat like that used during WWII for microwave waveguides. It creates the proper stimulus via a 3-D topological construct that was modelled correctly in the 50's, I believe, and adapted in the early 70's to work using modern opamps (at that time, of course), with a circuit which cross-couples the leading edges through some algorithms...this has been adapted by use of Z-transform theory today, in a system blending topology...current mixdown soundboards have this feature included, albeit the technology is sufficiently advanced that one is now able to use exactly one knob on the front panel to alter the direction of the source image in the soundfield..

So, the simple answer is this: All the individual tracks are dropped into the soundboard, put through a freakin WARING BLENDER, and the garbage output stream is what you buy in the store..

Seriously, they just use a pan pot, and alter the amplitude from side to side. That is not a stimulus that occurs in nature..
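
If anyone wants to see what that pan pot amounts to, here is a rough Python sketch of a constant-power (sine/cosine) amplitude pan and nothing more; the function name and the mapping of the knob to an angle are just my own conveniences:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Constant-power (sine/cosine) amplitude pan.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Amplitude only -- no delay, no spectral shaping -- which is the point:
    the ears never get this stimulus from a real source.
    """
    theta = (pan + 1.0) * np.pi / 4.0        # map [-1, +1] onto [0, pi/2]
    left = np.cos(theta) * mono              # cos^2 + sin^2 = 1, so total power is constant
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)  # columns are (L, R)

# e.g. a 1 kHz tone panned halfway to the right
fs = 44100
t = np.arange(fs) / fs
stereo = constant_power_pan(np.sin(2 * np.pi * 1000 * t), pan=0.5)
```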

Cheers, John
 
I have a test CD, and there's this tapping sound that goes to the right and then moves behind the listener. Something like that. At least that's what the guy on the disc tells us. He sounds like he's for real. I guess I wasn't in the mood. I mean, I don't understand those behind-the-viewer speakers in surround set-ups. The action takes place on the screen in front of me. Why would there be sound coming from behind? Makes no sense. Is the idea that I should turn around and look? Shouldn't there then be another screen behind me? Too many questions.

Maybe I'll give the test CD a serious try one day.
 
Re: Re: Re: Engineering the stereo image

rdf said:
Examine carefully the basis for their understandings, and the equations they are using.

Note they use R and L...the right and left information.

That is all...they play with phase, they may even play with amplitude, although in the brief view, I didn't see that, I assume that some really do...

Where in their signal processing is the angular variable for the original source to the mikes, where is the angle/depth vs amplitude variable..

On the reproducing end, where is the speaker placement angle, where is the distance to listener, where is the speaker angular dispersion function, where is the correction function for the two-point-source to nulled three false images???? (unless of course, you want to put an absorbent septum on your face).

What has been done is rudimentary. It is all a simple shell game to try to fool your ears into believing it's there...it is not good.

And, nowhere do these equations have a frequency dependent component.

As I said, rudimentary. And darn near all the positional information is tossed in the garbage as a result of mono compatibility.

Even implementing the correct ITD/IID algorithms is a small step, as it does not correct for the cross-head images that we are supposed to ignore.
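
Just to make the jargon concrete, here is a toy Python sketch of what an ITD/IID pan could look like; the Woodworth delay formula is standard, but the flat 6 dB level tilt is purely my own assumption, and nothing in it addresses the cross-head image problem:

```python
import numpy as np

def itd_iid_pan(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Toy ITD/IID pan: delay and attenuate the 'far' channel.

    The interaural time difference uses the Woodworth approximation,
    ITD = a/c * (theta + sin(theta)); the level difference is a flat
    tilt of up to ~6 dB (an arbitrary choice) -- real IID is strongly
    frequency dependent, and nothing here fixes cross-head images.
    """
    az = np.radians(abs(azimuth_deg))
    itd = head_radius / c * (az + np.sin(az))            # seconds
    delay = int(round(itd * fs))                         # whole-sample delay (toy)
    far_gain = 10 ** (-(abs(azimuth_deg) / 90.0) * 6.0 / 20.0)

    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    if azimuth_deg >= 0:                                 # source on the right: left ear is far
        return np.stack([far, near], axis=-1)            # columns are (L, R)
    return np.stack([near, far], axis=-1)
```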

Give me a year or two, and I think I can explain image localization stability theory to them.

If you wish, I can go into more depth tomorrow....right now, I carpool..

"welcome to the nineties.." that was funny..I gotta give you that..

What I did 10 years ago will be SOTA in another decade..not in audio, of course..(you'd be surprised how complex refrigerator magnets are...;) ;) )

Cheers, John

ps...it's all fun, I'm not offended..hope you aren't either..
 
Re: Re: Re: Re: Engineering the stereo image

jneutron said:


ps...it's all fun, I'm not offended..hope you aren't either..


Not at all! I'd be very interested in reading more, with the proviso that, lacking the background on your personal approach, it reads like half-sentences right now. A bit like walking into the middle of a conversation. I do think the simplicity of the Wikipedia format might lead to an underestimation of the degree of sophistication some of these techniques employ, but that's for tomorrow.
 
So rdf,

Are there actually studios using these gizmos on a track by track basis to array the instruments in an image?

John can nitpick the exact algorithm all day long (he will BTW), but hell, that's a start.

Short of two mics in a foam head and headphones, it will always be an illusion, but a better illusion is still a better one.
 
Re: Re: Engineering the stereo image

jneutron said:
So, the simple answer is this: All the individual tracks are dropped into the soundboard, put through a freakin WARING BLENDER, and the garbage output stream is what you buy in the store..

Seriously, they just use a pan pot, and alter the amplitude from side to side. That is not a stimulus that occurs in nature..

Cheers, John

Most of my sources seem to have been processed through a Viking blender, not Waring, as you suggest...

Trouble is, only Sibelius, Grieg, and the world-renowned Ole Bull voice properly, and the imaging, what with the fjords and steppes, seems cold and lifeless... :cold:


auplater
 

Attachments

  • ob5.jpg (8.8 KB)
rdf said:
Absolutely. One manufacturer's example:

http://www.qsound.com/2002/spotlight/main2a.asp (Princess Patricia's Canadian Light Infantry Band!?!)

To judge by listening experience only (appreciation in advance to anyone who can confirm it), I'm certain movie surround algorithms make heavy use of this technique.
Their technique is on the money. However, much of what they have done cannot be applied to stereo reproduction as we know it today.. Mixdown in the studio to two tracks destroys the majority of the information which is needed for accurate localization cues for an arbitrary playback system configuration. Note they indicate that the processing algorithms for headphone and speaker use are entirely different.

For speakers only, the geometry of the listening setup is critical. What configuration is the Q system optimized for? What distance to the speaker plane is optimal? And, if you play a stereo disk on a boombox, what happens?? How will point-source speaker drivers differ from line arrays and planar arrays??

And what they have not written down...how sensitive is the perceived imaging to head location and rotation? Does the sweet spot break up when the head is moved? (note, this is one of the issues surrounding stability, there are more). Is the sweet spot large enough for two people? If you shift to one side, does the speaker contribute more volume or less???? IOW, is the speaker toed in or out.. Does HF drop off faster than mids vs angular displacement?

They had to make significant tradeoffs, simple decisions up front.

If they have not standardized the speaker and listener position, they cannot do it correctly....only an approximation.

For HT, that is certainly good enough, and significantly better than was previously available.

I look forward to the day the media contains all the voices independently, with post processing at the playback system providing image placements. THAT removes the system geometry considerations. Providing the equivalent of 10 to 20 channels in a digital stream is trivial today.
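
As a rough sketch of what that playback-side processing could look like (a purely hypothetical renderer in Python, using tangent-law gains, with the speaker half-angle supplied by the playback system instead of being baked into the mix):

```python
import numpy as np

def render_objects(objects, speaker_half_angle_deg=30.0):
    """objects: list of (samples, azimuth_deg) pairs, azimuth positive = right.

    Each voice keeps its own track plus a position; the playback system
    turns that into channel gains for *its* speaker geometry (tangent-law
    gains here, as one example) instead of a pan frozen into two channels.
    """
    n = max(len(s) for s, _ in objects)
    mix = np.zeros((n, 2))                                   # columns are (L, R)
    t0 = np.tan(np.radians(speaker_half_angle_deg))
    for samples, az in objects:
        r = np.clip(np.tan(np.radians(az)) / t0, -1.0, 1.0)  # (gR - gL) / (gR + gL)
        gl, gr = (1.0 - r) / 2.0, (1.0 + r) / 2.0
        norm = np.sqrt(gl * gl + gr * gr)                    # keep power roughly constant
        mix[: len(samples), 0] += samples * gl / norm
        mix[: len(samples), 1] += samples * gr / norm
    return mix

# hypothetical three-voice "object" mix, rendered for a 30 degree setup
fs = 44100
t = np.arange(fs) / fs
voices = [(np.sin(2 * np.pi * f * t), az) for f, az in [(220, -25), (330, 0), (440, 25)]]
stereo = render_objects(voices)
```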


Cheers, John
 
David Griesinger is an industry pioneer in the field of spatial perception, its maintenance, and its artificial creation. On his web site he has generously made many of his original research papers available, along with a number of other seminal papers from other pioneers on the subject.
Everything you ever wanted to know and lots more. Definitely worth having a look at!

http://www.std.com/~griesngr/

Cheers, Ralph
 
jneutron said:


Does the sweet spot break up when the head is moved? (note, this is one of the issues surrounding stability, there are more).

Only if you take back the diamonds :) (oops, that was another thread) re: poobah

jneutron said:


Is the sweetspot large enough for two people? If you shift to one side, does the speaker contribute more volume or less????


If the two people are REALLY good friends maybe...:devilr:
 
ralphs99 said:
Everything you ever wanted to know and lots more. Definitely worth having a look at!

http://www.std.com/~griesngr/

Cheers, Ralph

Not everything...much is missing..

Thank you for the link, I'm printing them out to mark them up...Major nice...

Coupla notes: (this is from the pan laws pdf)
Quote""The stability of the front image is beyond the scope of this preprint...""

Well, ok..that was one of my biggies..I'll look through his other stuff for that one..

Quote:""Thus we found in all our experiments it is better to switch the presentation of a sound from left to right and ask the subject to estimate the width between the two images rather than estimating the position of a single sound image.""

This is the beginning of differential localization theory...I do not know if he continued, but this is an avenue I've been "railing" about.

Quote:""The apparent position of a sound source is also strongly influenced by past history..""

That is why I talk about differential localization....Use of that removes all dependence on past history.


ARRRRRGGGGHHH!!!! Where is figure 6???? NOW I'M PEEVED...

Oops, they have two figure 5's...whew...

Note the deviation in positioning as a result of ITD using a sine pan law. I note that there is no figure showing this for a standard sine-law IID pan. What is important here is that the relative position is frequency dependent.. That will also be present with amplitude-only pans..shame that wasn't in this paper..
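
To put a number on that frequency dependence: for the same pair of pan gains, the classical low-frequency "law of sines" and the higher-frequency "tangent law" predict different image angles. A quick Python sketch, assuming a 30 degree speaker half-angle:

```python
import numpy as np

def predicted_image_angle(gl, gr, half_angle_deg=30.0):
    """Image azimuth predicted from a pair of channel gains by the
    low-frequency 'law of sines' and the higher-frequency 'tangent law'."""
    ratio = (gl - gr) / (gl + gr)
    lo = np.degrees(np.arcsin(ratio * np.sin(np.radians(half_angle_deg))))
    hi = np.degrees(np.arctan(ratio * np.tan(np.radians(half_angle_deg))))
    return lo, hi

# constant-power pan set halfway toward the left speaker
gl, gr = np.cos(np.pi / 8), np.sin(np.pi / 8)
print(predicted_image_angle(gl, gr))    # roughly 12 deg vs 13.4 deg -- not the same place
```

That degree-or-two gap is the sort of frequency-dependent smear an amplitude-only pan can't avoid.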

I note also that there is absolutely no hint of any testing being done with both ITD and IID, nor the sensitivity of one parameter vs another with frequency and offset angle....but clearly, I speak of image stability in light of the localization parameters. Had any work been done along this line, the plots would have to be 3-D ones for clarity.

From the first paper I read, he is on the right path. I'll read all the rest to see if he stayed the course..although, in perusing the others, I do not see any indication of that. Given that he is in business, I cannot blame him for jumping off the theoretical bandwagon where he did. I, however, am under no such constraint. That is why I enjoy it so much....even the ping's poobah slaps me with don't bother me...(gotcha, wookieman);)

Cheers, John..

ps..thanks again for such a great link..
 