> The question is if things have really improved that much?

If you mean, do we finally have a good test for soundstage? Then no, we still don't have a good measurement for that.
> If you mean, do we finally have a good test for soundstage? Then no, we still don't have a good measurement for that.

Driving the thread off-topic again? 🙄
You can’t measure an imaginary thing. Soundstage does not exist. It is an illusion created by our brain while processing sound from two sources containing partially the same content (or, put another way, our brain is fooled). Either way, how do you measure an illusion in someone’s brain?
In nature, human perception determines a sound source's position from a single source.
We can only try to determine which aspects of reproduction-chain performance affect or fool our perception of soundstage. It is not a single property that can be measured once and be done with.
I’ve found what works for me, but I doubt it would be the same for everyone.
Careful Markw4... got to include the "only opinion"... helps to put it in large bold underlined capitalization too "ONLY MY OPINION"
> You can’t measure an imaginary thing. Soundstage does not exist.

If the equipment does not reproduce the pressure waveforms correctly at your ears, then your brain may not have the information it needs for virtual localization of sources. I have seen pieces of equipment that, if inserted into a reproduction chain, distort or collapse some or nearly all of the soundstage localization cues. The equipment I am thinking of may have excellent THD+N. Therefore, depending on what Hierfi was asking about in #361, the answer could be very different.
Regarding the properties that need to be reproduced, they have been described in the literature. The main ones for distance are low-level reverberation tails, the direct-to-reverberant sound ratio, and HF roll-off with distance. Reproducing them requires things like very low, symmetrical crosstalk under all conditions and at all frequencies, matched FR and phase response between channels into realistic loads, etc. IMHO, nonstationary noise from SMPS and/or Class-D modulation may also be implicated as a problem in some cases.
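For anyone who wants to poke at the first of those cues numerically, here is a minimal sketch of estimating the direct-to-reverberant ratio from a measured impulse response. Assumptions are mine: the 2.5 ms direct-sound window is a common rule of thumb rather than a standard, and the toy IR at the bottom is purely synthetic.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs, direct_window_ms=2.5):
    """Estimate DRR (dB) from an impulse response.

    Direct sound = energy in a short window around the IR peak;
    reverberant = everything after that window. The 2.5 ms window
    is an illustrative choice, not a standardized one.
    """
    peak = int(np.argmax(np.abs(ir)))
    w = int(fs * direct_window_ms / 1000)
    direct = np.sum(ir[max(0, peak - w):peak + w] ** 2)
    reverb = np.sum(ir[peak + w:] ** 2)
    return 10 * np.log10(direct / reverb)

# Toy IR: a strong direct spike followed by an exponentially
# decaying noise tail standing in for room reverberation.
fs = 48000
rng = np.random.default_rng(0)
ir = np.zeros(fs // 2)
ir[100] = 1.0
t = np.arange(len(ir)) / fs
ir += 0.01 * rng.standard_normal(len(ir)) * np.exp(-t / 0.2)

drr = direct_to_reverberant_ratio(ir, fs)
```

With a real dummy-head or in-room IR you would of course band-limit and window more carefully; this is just the shape of the calculation.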
> You can’t measure an imaginary thing.

You can't always measure a real thing either. Did you know you were going to be born? I like the discussion though... otherwise
If an imagination (illusion?) is created by external stimulation, surely it must be possible to measure the properties of those external stimuli and correlate them with the "imaginary" thing.
Some pairs of stimuli create a clear imagination, other pairs not so much - what's the difference...?
//
> If an imagination (illusion?) is created by external stimulation, surely it must be possible to measure the properties of those external stimuli and correlate them with the "imaginary" thing.

Yes, I did say the same:

> We can only try to determine which aspects of reproduction-chain performance affect or fool our perception of soundstage. It is not a single property that can be measured once and be done with.
However, there is no way to make a direct measurement of soundstage that would be comparable across devices in the way that distortion measurements of the same kind are comparable.
I think the most light could be shed on the matter by sound engineers with experience in manipulating the final illusion on studio recordings. They should know which sound properties have to be manipulated to get the desired effect. Then we can argue about which properties of reproduction devices are likely to affect that positively or negatively.
A lot can be done with processing. For me, one mandatory recording for testing a system's soundstage reproduction is the album Amused to Death by Roger Waters, whose spatial effects are enhanced with QSound. For example, it is the only album that can place sounds behind my back: the beginning of the track 'Too Much Rope' has a chariot coming from 10-11 o'clock and disappearing at 4-5 o'clock, far behind my right shoulder. And it is not the same with every amplifier, all other components in the chain being equal.
Well, if what I postulated above is true, it should be possible to create a series of synthetic stimulus pairs (i.e. stereo) and, with a mic and software, grade how precisely the three-dimensional content is rendered. No?
I think this is absolutely possible, but there is not enough interest from the people who have the broad insight and analytical prowess to pull it off. I don't think I have it - on the other hand, as Pippi Longstocking says, I don't know because I haven't tried ;-D
//
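Since nobody has tried: a minimal sketch of what generating those synthetic stimulus pairs could look like, assuming simple ITD/ILD panning. The function name and the 300 µs / 3 dB values are my own illustrative choices, not anything from the literature.

```python
import numpy as np

def spatialize(mono, fs, itd_us=0.0, ild_db=0.0):
    """Pan a mono signal using interaural time and level differences.

    Positive itd_us/ild_db delays and attenuates the left channel,
    placing the virtual source to the right; negative values mirror it.
    """
    delay = int(round(abs(itd_us) * 1e-6 * fs))
    gain = 10 ** (-abs(ild_db) / 20)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * gain
    if itd_us >= 0:          # source on the right: left ear is the far ear
        return np.stack([far, near])
    return np.stack([near, far])

# A short Hann-windowed 500 Hz burst, nudged toward the right.
fs = 48000
t = np.arange(int(0.1 * fs)) / fs
burst = np.sin(2 * np.pi * 500 * t) * np.hanning(len(t))
stereo = spatialize(burst, fs, itd_us=300, ild_db=3)
```

Stepping itd_us and ild_db through a grid would give exactly the graded series of pairs proposed above, with known "ground truth" positions to compare a measurement against.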
The problem is that the raw data wouldn't be easy to interpret. It would be about as good as a 2D picture is at presenting 3D space. We would need new software with some model of how our brain interprets the magnitude, timing, and phase differences between the left and right ears, and the measurement would have to be performed with a binaural dummy head with microphones in its ears. Too much for a hobby that is not an obsession.
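The timing half of that model is actually the easy part. A sketch of pulling the ITD out of a two-channel dummy-head recording by cross-correlation; the ±800 µs search range roughly matches a human head, and everything here is illustrative rather than a finished analyzer:

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd_us=800):
    """Estimate interaural time difference by cross-correlation.

    Returns ITD in microseconds; positive means the right ear leads
    (source on the right). The lag search is limited to physically
    plausible values (~ +/- 0.8 ms for a human head).
    """
    max_lag = int(max_itd_us * 1e-6 * fs)
    n = len(left)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left[max(0, -l):n - max(0, l)],
                   right[max(0, l):n - max(0, -l)]) for l in lags]
    return -lags[int(np.argmax(corr))] / fs * 1e6

# Toy check: white noise that reaches the left ear 14 samples late,
# i.e. a source off to the right.
fs = 48000
rng = np.random.default_rng(1)
src = rng.standard_normal(4800)
left = np.concatenate([np.zeros(14), src[:-14]])
itd = estimate_itd(left, src, fs)   # about +292 us
```

A real analyzer would do this per critical band and combine it with level and spectral cues, which is where the modeling effort actually goes.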
On headphones I have finally figured out how to get the soundstage way out in front of me, and all around me, way beyond the headphones - ten to thirty feet out in front of me, like a good electrostatic speaker setup. Like surround sound.
I thought it was 'agreed' that soundstage exists, at least in part, in the domain of psychoacoustics. How we interpret visual and auditory cues varies - a bit like the way some people see shapes in clouds and others (me) don't.
I doubt it'll ever be measurable except, possibly, by an AI that has been trained on the experiences of thousands of listeners - a bit like the way they've been trained to perform (and have shown skill at) analysis of mammograms and other scans.
Synthetic spatialization is already being used in car audio to create a different virtual soundstage for each passenger. The spatial information on the original recording is removed and can be replaced with spatial information from other venues, selected per passenger for their personal space. Some people think it sounds fantastic; other people think it's poorly done and sounds bad.
Of course, instead of publishing in AES or IEEE, they keep the exact DSP a trade secret. So there is no "proof" of what they claim, and no published DBT to give almighty "proof" of perception. You just have to listen to it and trust your ears to judge whether it works.
There are also VST plugins, such as the S1 Stereo Imager from Waves. I tried it in Samplitude X3 playing into the Sound Lab ESLs. It sounds phony, but it can sort of modify the soundstage. The problem is that stretching out the width leaves a hole in the middle, or at least a damaged centre image (depending on settings).
Oh, by the way, I got there by listening, not by measuring. And lots of experimenting, which involves... listening. Don't try this at home, kids - stay in school!
We had a lot of fun with the PS1's synthetic spatializations as 9 year olds... as an adult I've never been fond of the various AVR iterations
> On headphones I have finally figured out how to get the soundstage way out in front of me, and all around me, way beyond the headphones - ten to thirty feet out in front of me, like a good electrostatic speaker setup. Like surround sound.

Please elaborate and let's discuss! My results:
https://www.diyaudio.com/community/threads/open-wing-headphone-crossfeed-stereo-sound.391630/
If a test signal is panned, say, halfway between left and centre, its image should appear to the left of centre from the listening position. Could stereo mics on either side of the listening position pick up what arrives from each side, so that someone who knows how to process that difference could plot a pan point on the screen?
A step-panned sweep could then show how well the speakers image?
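For the level-panning part of that idea, inverting the usual constant-power pan law gives a pan point directly from the two channel levels. A sketch, assuming sin/cos panning and ignoring timing cues entirely (so this checks level panning only):

```python
import numpy as np

def estimate_pan(left, right):
    """Infer pan position from channel RMS levels, assuming a
    constant-power (sin/cos) pan law. Returns -1 (hard left)
    through 0 (centre) to +1 (hard right)."""
    theta = np.arctan2(np.sqrt(np.mean(right ** 2)),
                       np.sqrt(np.mean(left ** 2)))
    return (theta - np.pi / 4) / (np.pi / 4)

# A tone panned halfway between centre and left (theta = pi/8).
t = np.linspace(0, 0.1, 4800, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
left = np.cos(np.pi / 8) * tone
right = np.sin(np.pi / 8) * tone
print(round(estimate_pan(left, right), 2))  # prints -0.5
```

Stepping the pan angle and plotting the estimate against the intended position would give exactly the step-panned sweep suggested above; doing it through microphones at the listening position adds the room, which is the interesting part.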
Regarding volume panning: it is only one of the methods humans use for lateral localization; ITD is another important one. Also, not everyone hears volume panning as spatial positioning.
https://en.wikipedia.org/wiki/Sound_localization
https://en.wikipedia.org/wiki/Interaural_time_difference
As an aside, soundstage is no more imaginary than color perception from RGB display devices. The human perceptual system is fooled or not, depending only on how someone wants to think about it.
> Somehow this thread brings back memories from the early 80's.

Official song of this thread. 👍
Listening once is mandatory before typing any response. I just did. 🙂
Thread: Measuring the Imaginary