what causes a big soundstage (audio research)

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Why is it that Audio Research preamps are known for such large soundstages? Is it something they're doing? A class A tube gain stage is nothing special, same for dual transformers. But my preamp (Audio Horizons) has a much narrower stage, and I'm wondering if I can do something to address this (i.e. more capacitance, more PS bypass caps, etc.).
mahalo
 
This is a quote from the original Blumlein patent application:

Directional hearing sense is due to phase and intensity differences between sounds reaching the two ears, phase differences being more effective for the lower frequencies and intensity differences for higher frequencies.

I've been telling people this for years, after reading some papers by Toole and others. In the upper midrange we sense image localization primarily by amplitude comparisons, so frequency response matching of left and right speakers may be the primary issue. In the lower midrange, it's more about phase or timing comparisons left to right and vice versa. Any time-based cue information that we would detect and interpret in the lower midrange is typically blurred or scrambled by a second set of timing cues created at playback by inter-aural crosstalk. The Bob Carver "Holographic Generator" is an electronic approach to remedy this, but listener position is critical (tiny sweet spot). Polk made a speaker that did this acoustically: they had a second set of drivers in each cabinet that was fed an inverted and roughly 6 dB attenuated feed from the other channel. I've never heard the Polk personally, but I hear that it has a wider sweet spot. If you want the best imaging, you should understand this. Listening room acoustics typically screws this up as well.
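The Polk-style trick described above (an inverted, roughly 6 dB attenuated, slightly delayed copy of the opposite channel mixed into each side) can be sketched in a few lines. This is a hypothetical illustration, not Polk's or Carver's actual circuit; the delay and gain values here are assumptions:

```python
# Hypothetical single-tap crosstalk-cancellation sketch (NumPy).
# Each output channel gets an inverted, ~6 dB attenuated, slightly
# delayed copy of the opposite channel. Values are illustrative only.
import numpy as np

FS = 48_000        # sample rate, Hz (assumed)
DELAY_S = 90e-6    # assumed extra ear-to-speaker path delay (~3 cm / 343 m/s)
GAIN = 0.5         # -6 dB attenuation

def cancel_crosstalk(left, right, fs=FS, delay_s=DELAY_S, gain=GAIN):
    """Mix an inverted, delayed copy of the opposite channel into each side."""
    d = int(round(delay_s * fs))  # delay in whole samples
    pad = np.zeros(d)
    right_del = np.concatenate([pad, right])[: len(right)]
    left_del = np.concatenate([pad, left])[: len(left)]
    out_l = left - gain * right_del
    out_r = right - gain * left_del
    return out_l, out_r
```

With a real recording you would feed `cancel_crosstalk(left, right)` the two channel arrays; as the post notes, the result is extremely sensitive to listening position and room reflections.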
 
^^ Are you kidding , Papa is easy to meet ...... :D
 

Attachments

  • NelsonPass.jpg (49.7 KB)
Forgot to ask: are you saying that in your system, without changing anything else, if you replace the ARC with the Audio Horizons the soundstage is narrower, correct?

Without seeing the AH, I kind of doubt adding more capacitance to the power supply (or bypassing it) will make the stage wider. As has been alluded to here, soundstage (in amps/preamps) is a function of accuracy, channel separation, bandwidth, gain, etc.

Hrmm, if you want the AH to sound like the ARC, why don't you buy/keep the ARC? Again, my previous question stands... did you A/B the ARC and the AH in your system?
 
"Directional hearing sense is due to phase and intensity differences between sounds reaching the two ears, phase differences being more effective for the lower frequencies and intensity differences for higher frequencies."

IMO, despite this being the accepted conventional wisdom, it's not true. If it were, binaural sound would work. It meets ALL of the criteria of the conventional wisdom, yet binaural sound actually has no direction at all. Each sound only has relative differences of intensity between one ear and the other. If binaural sound worked, that's what we'd all be listening to. The only reason not to would be the inconvenience of wearing headphones.

I'm not sure why or even if a wide soundstage is a good thing. The only ones who hear a wide soundstage at a live performance are the conductor and those who sit in the first few rows of seats. As you move back into the audience, the angle between the extreme left and right of the stage and you the listener diminishes. Typical concert halls are about 75 feet wide with the stage being about 40 feet wide. The closest seats must be a good 15 to 20 feet from the musicians who sit closest to the apron of the stage. Go back a dozen or so rows and it's not all that wide, maybe +/- 45 degrees but not much wider. Further back and it's a narrower angle still.
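The geometry above is easy to check. Taking the post's rough figures (a 40 ft wide stage), the half-angle subtended from the center line at a few listening distances works out like this:

```python
# Sanity check of the concert-hall geometry described above: the half-angle
# from the listener's center line to the edge of a 40 ft stage, at the
# post's rough distances. Figures are the post's estimates, not measurements.
import math

STAGE_HALF_WIDTH_FT = 20.0  # half of a 40 ft stage

def half_angle_deg(distance_ft):
    """Half-angle from the center line to the stage edge, in degrees."""
    return math.degrees(math.atan(STAGE_HALF_WIDTH_FT / distance_ft))

for d in (20, 40, 60):
    print(f"{d} ft back: +/- {half_angle_deg(d):.0f} degrees")
```

At 20 ft the stage spans about ±45 degrees, matching the "dozen or so rows back" figure in the post; by 60 ft it has narrowed to about ±18 degrees.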

Nevertheless, the goal of achieving a wide soundstage seems to be an obsession with many audiophiles. I think it's one of the driving forces behind ambiophonic sound.
 
Have you tried head-tracking virtualization? I've only heard a demo of the SVS system, but it did appear to give stable spatial reproduction.

I don't see how recorded stereo/multichannel imaging can be so delicate that reasonable amplifiers can damage it.

If multi-miked mastering, room modes, reflections, damping, loudspeaker positioning, crossover group delay, etc. still often allow the perception of soundstage and imaging, what amplifier frequency and phase response properties come within orders of magnitude of those?
 
Each sound only has relative differences of intensity between one ear and the other

Sorry SoundMinded, but since the work of Lord Rayleigh (1907, IIRC) it's been established that the time of arrival at each ear (and hence the relative phase) is the localization factor between roughly 80 and 1500 Hz; this is not a matter of opinion. (And please don't take offense, it's just a correction, and maybe my English is shaky.)

OTOH, I agree with you about the misplaced obsession with width. First, width only has to be of realistic proportions; second, width means nothing without the corresponding depth. All of these are really nebulous notions.
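For what it's worth, the time-of-arrival cue behind the duplex theory can be estimated with Woodworth's classic spherical-head approximation, ITD = a(θ + sin θ)/c. The head radius and speed of sound below are standard textbook values, not anything from this thread:

```python
# Woodworth's spherical-head estimate of interaural time difference (ITD).
# a = head radius, c = speed of sound, theta = source azimuth.
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius (textbook value)
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg):
    """ITD for a far source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M * (theta + math.sin(theta)) / SPEED_OF_SOUND

print(f"max ITD (90 deg): {itd_seconds(90) * 1e6:.0f} us")  # roughly 660 us
```

The maximum ITD of roughly two-thirds of a millisecond is the scale of the timing cue being discussed; at 1500 Hz the period (667 µs) becomes comparable to it, which is one way to see why the phase cue stops being unambiguous above that range.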

PS : I can be considered as an ambiophonic addict
 
"Sorry SoundMinded, but since the work of Lord Rayleigh (1907, IIRC) it's been established that the time of arrival at each ear (and hence the relative phase) is the localization factor between roughly 80 and 1500 Hz; this is not a matter of opinion."

Yes, it is now an accepted fact... except by me. And I've told you why. You are free to consider what I've said and draw your own conclusions, or just go with the flow and believe what everyone else does, because, like the earth being flat, it was accepted as common wisdom long ago, before there was any way to put it to a real and rigorous test. In fact, in 1907 I don't think even the vacuum tube triode had been invented yet.

BTW, I've listened to Choueiri's demo and I can't hear a fly buzzing around my head; it's just left and right and maybe front, never behind. IMO the best listening is with my eyes wide shut. Fewer distractions, better concentration.
 
Yep, you are right on the principle: we often rely on the "proven statements" of somebody else. It's the comfort of passivity.

Back on topic: the crappiest way of widening a soundstage is introducing a delay of 150 to 500 µs between the channels, but this comes with a significant loss of focus, and I can't imagine serious designers using this boombox trick.
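For concreteness, the inter-channel delay trick described above looks like this in code. The 300 µs figure is just one value picked from the quoted 150-500 µs range, purely for illustration:

```python
# Sketch of the "boombox trick" described above: delaying one channel
# relative to the other to fake a wider image. Illustrative values only;
# as the post says, the cost is a significant loss of focus.
import numpy as np

def widen_by_delay(left, right, fs=48_000, delay_s=300e-6):
    """Delay the right channel by delay_s seconds relative to the left."""
    d = int(round(delay_s * fs))  # 300 us at 48 kHz -> 14 samples
    right_delayed = np.concatenate([np.zeros(d), right])[: len(right)]
    return left, right_delayed
```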
 
I recently looked at the block diagram of the RACE decoder used for Ambiophonics on their web site. That device applies a delay, adjustable from 60 to 100 microseconds, to each channel and feeds it out of phase to the opposite channel (actually it's part of a recursive loop), so that when this field arrives at one ear it is hopefully out of phase with the same sound arriving from the opposite speaker. If you sit at just the right location, this delay matches the path length difference between your ears and the speakers.

With the speakers fairly close together, I think about 20 degrees apart, and a short wall between them, this is said to be comparable to having a large wall between them right up to your head. This is claimed to make the soundstage much wider. Other surround speakers with time delays are also used; I think one pair must be at 55 degrees off axis to function as intended. Other pairs are side rear and to the rear above you. Reflections within the room should be minimized to avoid spurious fields negating the cancellation effect.

There are other decoders, but this is among the underlying principles of ambiophonics as I understand it. You can listen to processed audio clips at their website and decide for yourself what you think. Choueiri's method uses a single pair of speakers, binaural recordings and his own processor. He's also got a web site with a demo and claims.
 
I don't see the difference between what Choueiri is doing and the Bob Carver Holographic generator.

I got curious one day back in about 1983 about what a head mic recording played back through headphones would be like. I think I had been reading the Audio Cyclopedia. I put two Radio Shack condenser mics on either side of a flower pot that was roughly the size of a human head, sitting on my coffee table, and recorded myself sitting on the couch playing an acoustic guitar, into a little Sony stereo cassette recorder.

Then I flopped down on my bed with the headphones on and started the playback. The sense of stereo effect, or imaging, was so accurate that the rustling sound before I actually started to play the guitar caused me to jump up abruptly in fear, before I had time to think. I thought someone was in my house. I was dumbfounded at how real the stereo effect was.

Ever since then I've been a fan of the rather imperfect Carver Hologram circuit. I built it from a grey market schematic someone had, thought the center image was too thin and cold, and re-optimized the circuit slightly using the equivalent of a SPICE modelling program (added some reverse Fletcher-Munson EQ and another trick that's hard to explain), and now I usually prefer to have it on. Recordings vary mucho in how they are mic'd and processed, so the results are all over the place, but personally, I think it rocks big time.

The preamp I'm designing and building right now will have an L+R output that will help stabilize the center image (the lead singer or guest artist, usually), which the Hologram circuit, because of its critical listening position aspect, renders a little "phasey," as Linkwitz puts it. It seems a bit pushed back or weak. But recording techniques and playback room acoustics will deteriorate the cancellation of inter-aural crosstalk, sometimes rather substantially, so you have to judge it carefully, taking these variables into account. I have yet to hear a better way of getting that space, and that image individuation where each sound seems to have a sense of its own acoustic space. Whatever it is that gives us a sense of depth works better. Embedded reverbs become 3-D.

It's arguable that the interaural cancellation should only be applied to the frequency range of about 100 Hz to 2 kHz. Below 100 Hz, typical room acoustics have a bigger effect on how we perceive bass. Above about 2 kHz, the half wavelength becomes shorter than the distance between our two ears, so our brain doesn't know for sure which period it's trying to compare; it gives up on measuring timing differences and uses amplitude comparisons instead from about 2 kHz on up. Up around 8 kHz, our pinnae, or outer ear shape, apparently give us cues about the height of a sound source. I haven't played with that concept yet, but it sounds interesting.
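One crude way to band-limit the cancellation as argued above is to filter the crossfeed path to roughly 100 Hz - 2 kHz before subtracting it. The brick-wall FFT filter here is purely illustrative (a real design would use a proper IIR or FIR filter), and the gain and delay are assumed values:

```python
# Band-limited crossfeed sketch (NumPy): subtract only the 100 Hz - 2 kHz
# portion of the opposite channel, per the frequency-range argument above.
# The brick-wall FFT filter is a crude illustration, not a production filter.
import numpy as np

def bandlimit(x, fs, lo=100.0, hi=2000.0):
    """Zero spectral content outside [lo, hi] Hz (brick-wall, illustrative)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def band_limited_crossfeed(left, right, fs=48_000, gain=0.5, delay_s=90e-6):
    d = int(round(delay_s * fs))
    r_del = np.concatenate([np.zeros(d), right])[: len(right)]
    l_del = np.concatenate([np.zeros(d), left])[: len(left)]
    # cancel only in the band where timing cues dominate localization
    out_l = left - gain * bandlimit(r_del, fs)
    out_r = right - gain * bandlimit(l_del, fs)
    return out_l, out_r
```

Content below 100 Hz and above 2 kHz passes through each channel untouched, so the room keeps control of the bass and the amplitude-comparison region is left alone.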
 
I used to have a BBE Sonic Maximizer and it worked on similar principles.

I too would like to meet both Nelson Pass and John Broskie sometime within my lifetime, as I have read all of their material.
Amazing stuff!

jer

P.S. At least I have been able to meet a few very significant figures here on DIYaudio.:) :)
 