Linkwitz Orions beaten by Behringer.... what!!?

That sucks RE: the WM61a. I half-heartedly soldered fresh WM61a capsules earlier today, actually. Perhaps I should have been more careful with the last of my specimens...

I considered buying some of the capsules from here in the past:
Microphones
I think that these are the Primo capsules mentioned earlier in the thread. They are popular with people collecting sounds of nature, due to the low noise floor.

It's frustrating that there is not already a software version of the Realiser. I would gladly pay for it as a plugin, as well as the headtracker and mics. I can't afford the real thing, unfortunately.
 
Having modeled HRTFs before, I am pretty sure that the cupped hands would add a level of complexity that would be impossible to guess at. As far as I am concerned, all of this "cupped hands does this because..." is pure conjecture.

And to repeat a statement that I made in another thread: the ear cannot detect phase crossings of the waveform; that would completely violate the place theory of hearing, which detects the location of the peak amplitude of the wave crest along the cochlea. No zero crossings of the waveform are detected. The neurons do not fire synchronously with the waveform, but in volleys. The more neural activity in the volley, the greater the intensity sensation; the location of this volley encodes the frequency.

If the ear detected actual waveforms there would be no masking. Masking can only be explained by place theory and place theory does not allow for waveform detection.

The ear does detect zeros of the signal envelope as time difference between the ears (ITD), but this is NOT a zero crossing of the waveform.
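As a minimal illustration of that envelope-versus-waveform distinction (all signal parameters below are my own, purely for demonstration), an interaural delay applied to an amplitude-modulated high-frequency tone can be recovered from the envelopes alone, via the Hilbert transform and a cross-correlation, without ever comparing zero crossings of the carrier:

```python
# Minimal sketch (parameters are my own illustration): recovering an
# interaural time difference (ITD) from the *envelope* of a high-frequency
# signal, without looking at zero crossings of the carrier waveform.
import numpy as np
from scipy.signal import hilbert

fs = 48000                      # sample rate, Hz
t = np.arange(0, 0.2, 1 / fs)   # 200 ms of signal
itd = 0.0005                    # 0.5 ms interaural delay (assumed)

carrier = 4000.0                                 # 4 kHz carrier (assumed)
mod = 0.5 * (1 + np.cos(2 * np.pi * 100 * t))    # 100 Hz envelope carries the cue

left = mod * np.sin(2 * np.pi * carrier * t)
right = np.interp(t - itd, t, left)              # right ear gets a delayed copy

# Envelope extraction via the analytic signal (Hilbert transform)
env_l = np.abs(hilbert(left))
env_r = np.abs(hilbert(right))

# Cross-correlate the envelopes; the lag of the peak recovers the ITD
lags = np.arange(-len(t) + 1, len(t)) / fs
xcorr = np.correlate(env_l - env_l.mean(), env_r - env_r.mean(), mode="full")
print("estimated |ITD|: %.2f ms" % (abs(lags[np.argmax(xcorr)]) * 1e3))
```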
 
And to repeat a statement that I made in another thread: the ear cannot detect phase crossings of the waveform; that would completely violate the place theory of hearing, which detects the location of the peak amplitude of the wave crest along the cochlea. No zero crossings of the waveform are detected. The neurons do not fire synchronously with the waveform, but in volleys. The more neural activity in the volley, the greater the intensity sensation; the location of this volley encodes the frequency.

The ear does detect zeros of the signal envelope as time difference between the ears (ITD), but this is NOT a zero crossing of the waveform.

This is actually untrue. To quote from Pickles' Physiology of Hearing, p. 82 in the 1982 paperback edition: "At high frequencies, above 5 kHz, the nerve fibers fire with equal probability. At lower frequencies, however, it is apparent that the spike discharges are locked to one phase of the stimulating waveform."

So, what the brain processes is a tonotopically organized "flat cable", where neurons corresponding to a specific frequency band fire with a) the number of firings related to the intensity of the sound in that bin, and b) the timing of the firings phase-locked to the stimulating waveform. So the brain has both intensity and phase to work with.
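As a crude caricature of that "flat cable" (the filter bank below is my own toy construction, not a cochlear model), one can split a signal into band-pass channels and read off a per-channel intensity and instantaneous phase:

```python
# Toy sketch (bands and filters are my own, purely illustrative) of the
# "tonotopic flat cable" idea: split a signal into frequency channels,
# then report per-channel intensity and the phase of the analytic signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 48000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

# A handful of band-pass "channels" standing in for cochlear places
centers = [125, 250, 500, 1000, 2000, 4000]
for fc in centers:
    lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)          # one-octave band
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    intensity = 10 * np.log10(np.mean(y**2) + 1e-12)   # channel "rate" code
    phase = np.angle(hilbert(y))[len(y) // 2]          # instantaneous phase, mid-signal
    print(f"{fc:5d} Hz channel: level {intensity:6.1f} dB, phase {phase:+.2f} rad")
```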

It is also known where phase and intensity are processed in the brain, with inputs from both ears: nuclei of the brain stem, most likely the lateral superior olive for intensity comparisons and its medial sister for untangling phase information.

A lot of cats suffered a miserable life for science to find all this out, so we had better make good use of this hard-gained knowledge.
 
You are quoting from a 1982 text - well before the actual tests on the cochlea were ever done! The text is incorrect, except that he clearly claims that "phase locking" only occurs "at low frequencies", which I agree with if these are < 500 Hz.

The recycle time for a neuron is about 1 ms. The nerves do fire with equal probability, but not in sync; they are not "phase locked" at HF. If that were true then there would be no need for a cochlea or place theory, as the nerves would just fire "phase locked" to the waveform all the way up to 10 kHz. This would do away with the need for the whole cochlear structure.

The cochlea evolved as a means to acquire the ability to hear above 1 kHz (which is clearly an evolutionary advantage) by "placing" these frequencies along the cochlea; if the firings were "phase locked", our hearing would be limited to < 1 kHz, given the 1 ms recharge time. At frequencies < 500 Hz there is an almost synchronous waveform detection, with firings on the positive side of the waveform, just as your text suggests. But as the frequencies approach 1 kHz these firings start to become random, because the recharge can no longer keep up with the waveform. They do fire "with equal probability", but NOT synchronously with the waveform. This is exactly why our hearing sensitivity changes below 500-1000 Hz, and why there is masking above 1 kHz and not much below 500 Hz.
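Putting the arithmetic of that limit in one line, taking the roughly 1 ms recovery figure at face value:

```latex
f_{\max} \approx \frac{1}{\tau_{\text{recharge}}} = \frac{1}{1\,\mathrm{ms}} = 1\,\mathrm{kHz}
```

Anything much above that cannot be tracked by firings synchronous with the waveform, so it has to be encoded by place along the cochlea instead.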

Again, if the firings were synchronous, one could not explain masking. Only place theory can explain masking, and place theory would not be necessary if the nerves could "phase lock" to the waveform.

"A lot of cats suffered a misserable life for science to find this all out" - correct, done by friends of mine like Jont Alan at Bell Labs in the 90's, from whom I learned all of this.
 
Earl, all of this stuff has been known for a long time, from actual measurements made directly by probing nerve fibers with microelectrodes in live test animals subjected to sound sources.

As it goes in science, what stands, stands; we still use Kirchhoff's law, which dates back to the middle of the 19th century, and we will do that for centuries to come. The fact that I quote from a 1982 book only makes it puzzling that you are not aware of these facts.

I find the scientific literature a much more trustworthy source than some guy you know at Bell Labs who told you all this. It is entirely possible to actually make a real study of this subject matter, with books and what have you.

This is not to say that no further knowledge has been accumulated since 1982, or that no refinements have been made. For example, the 5 kHz quoted by Pickles might actually be more like 3.5 kHz.

Learn!
 
More testing done this weekend with a small, portable, heavy wooden barrier, only 80x70 cm, right in front of the ears, and well-known classical recordings in a stereo setup. After listening for a few hours and then going back to normal, things get "dull": the stereo spread is revealed in its full glory, a very nice window into an event, but not more.
With the barrier, the effect is definitely more "I am there", with a better sense of the hall acoustics, and the listening room disappears. I just could not switch off and nailed nearly two requiems and half of Boris Godunov in one go. Listening fatigue is also much less. I will build a proper small separator on wheels; I can't live without it now!
 
Well, the tests prove the barrier does not have to be huge to work, and it could be hidden easily. I would not be able to live with a proper ambio setup, but that type of compromise, yes!
It also depends on listening habits; I am more of a casual marathoner than an everyday sprinter.
The funny story is, on Saturday I helped one of my wife's colleagues with her stereo, a small Denon combi. I reduced the spread slightly and the listening distance, and gave her a light barrier. Obviously she laughed when she saw me "taking position", then not so much when she tried it. The last report is that she spent her whole Sunday listening. :cool:

Yes Markus, I tried that too and it does work; surround upmixing will be the next move, but I am keeping the barrier.
 
I think crosstalk cancelling is very unfair :mad: It will reveal to you the spectacular triumph of stereo, but eventually you just cannot live with the barrier :sad:

So then you seek out QSound recordings, or perhaps this --> Adventures in 3D Sound! - Studio 360. Note that I found QSound more convincing, but perhaps if the one in the link is tuned to "typical" listening setups it will be more convincing (a lot of the tracks on that site, if you follow some of the links, say they are optimised for computer speakers).

Tony.
 
I think crosstalk cancelling is very unfair :mad: It will reveal to you the spectacular triumph of stereo, but eventually you just cannot live with the barrier :sad:

I'm still undecided on why the barrier actually works. Reading through some papers by Benjamin/Brown, stereo's intensity-to-phase conversion caused by interaural crosstalk should work pretty well.

In AES convention paper 7018 they write,
"For natural sound sources in the front quadrant, the
signals observed at the listener’s ears vary in a
progressive fashion, such that the ILDs increase
relatively smoothly as the sound source moves away
from the central axis and as frequency is increased. For
stereophonic sound sources there is a region about an
octave wide for which the ILDs are observed to increase
as the intensity ratio is decreased (the sound is panned
toward center). This counterintuitive behavior is
explained by the fact that the crosstalk across the
listener’s head is delayed enough that it destructively
interferes with the direct arrival of sound from the near
speaker."
 
I was kind of interested in the crosstalk canceling, so I made some measurements.

This first figure shows the different setups:

Tests_1-5.JPG


Case 1 is just a single speaker with the mic at my listening position. It is a reference for things to come.

Case 2 is the same as Case 1, but with both speakers on. This is used to confirm that the mic is "exactly" the same distance from both speakers.

Case 3 has both speakers turned on, but with a barrier placed between the mic and the right speaker. The barrier is positioned so that the sound from the left speaker hits it edge-on, so as to minimize diffraction/reflections to the mic.

Case 4 is with only one speaker on, with the barrier positioned so that it would be perpendicular to the head and the mic positioned half a head's width to the left of the barrier. The reflection off the barrier from the left effectively creates a phantom right speaker.

Case 5 is without any barrier and with both speakers active, but with the mic offset half a head's width to the left compared to Cases 1, 2, and 3.

Figure 2 shows the frequency response for the 5 cases. As you can see, the FR for Cases 1 and 2 is almost identical except for a 6 dB increase, as would be expected with matched speakers and the mic equidistant from both. Above 3 kHz, Case 3 is very similar to Case 1: the barrier effectively blocks the higher frequencies from the right speaker. Below 3 kHz there is evidence of some diffraction around the barrier, and at lower frequencies the response tends toward the Case 2 (two-speaker) result, also as would be expected. Case 4, with the barrier perpendicular to the "head" and the mic offset from the barrier at the ear position, shows the cancellation due to the time lag of the reflected sound. The barrier is not absorptive. Notice that for Case 5 (no barrier, both speakers on, mic at the ear position) the result is almost identical to Case 4.

Tests_1-5-FR.JPG


Lastly, the impulse responses for the different cases are shown in the third figure:

Tests_1-5-IMP.JPG


It is apparent that the impulses for Cases 1 and 2 are practically identical except for level. Case 3 shows some deviation due to diffraction around the barrier. Cases 4 and 5 are again almost identical.
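For comparison, the shape of Cases 4 and 5 can be roughly predicted with a simple two-arrival model; the geometry and the level of the delayed arrival below are my guesses, not the actual measurement setup:

```python
# Sketch (geometry numbers are my guesses, not the actual setup): predicted
# magnitude response when the mic sees a direct arrival plus one delayed copy
# (barrier reflection in Case 4, far-speaker arrival in Case 5).
import numpy as np

c = 343.0                 # speed of sound, m/s
extra_path = 0.09         # assumed ~half a head's width of extra path, m
r = 0.8                   # assumed relative level of the delayed arrival
dt = extra_path / c       # ~0.26 ms delay

f = np.array([500, 1000, 1500, 1900, 2500, 3800, 5700])  # spot frequencies, Hz
H = np.abs(1 + r * np.exp(-1j * 2 * np.pi * f * dt))      # comb-filter magnitude

for fi, hi in zip(f, 20 * np.log10(H)):
    print(f"{int(fi):5d} Hz: {hi:+6.1f} dB re. the direct arrival alone")
print(f"first cancellation notch expected near {1 / (2 * dt):.0f} Hz")
```

With these assumptions the first cancellation notch lands near 1.9 kHz and repeats periodically above it, the kind of pattern a single delayed arrival would be expected to produce in both Case 4 and Case 5.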
 
Hi Rudolf. The hard barrier I used did spread the angle somewhat, yes, but it depends on the recording. The hall acoustics go wider than the speaker spread; it gets more "halo"-like. It seems, though, that it allows for better tracking of the recording technique and of the microphone spread/placement of virtual sources in the studio.

Then, looking into the AS, for some large choral recordings differences were clearly heard, going from an ensemble congested between the speakers and a confusion of voices towards a much higher definition and separation. I also got a better definition of distances, for example following the singers' moves on stage in Abbado's Godunov.