Bigger midranges/speakers have better imaging?

So while you guys are wondering about what image is, where it might be, and how to measure it, I believe that it is measured using simple tools like microphones lol...

the first part of this statement seems a little condescending and tries to create the impression you know more about it than the rest of us.
now i understand why you are evading answering my questions in a direct fashion.

you've got a DM
 
If you aren't "wondering", then what is it? What's your explanation? I've given mine. What's your definition of "image" in the context of loudspeaker design, or however you may find it useful?
i agree with Adason; currently we only have subjective means of evaluation.
OK, here is your position, and I disagree with it. It is no different from measuring tone and other aspects of the signal. Measurements are objective; how we perceive them is subjective. There are subjective and objective aspects of tone... there are subjective and objective aspects of image... what's the confusion?
 
here's my low-brow thinking... we're trying to come up with a means to measure, quantify, or visualize a complex phenomenon with the equivalent of a stone-age hammer.
after losing some sleep pondering the problem, i feel that using an impulse response is wrong: as a test signal, isn't its intent to freeze time so we can see what's ringing after the event? but the very phenomenon (image) we want to examine is in large part something that comes about as a result of changes in phase over time, at multiple frequencies, and over distance. (i wish i could turn that into a mathematical formula!)
so how would an impulse be used in this case?
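one thing that eased my mind a little while poking at this: the Fourier transform of an impulse response hands you magnitude and phase at every frequency, so the "changes in phase at multiple frequencies" part isn't lost, it's sitting right there in the IR. a rough sketch of what i mean, assuming an IR already captured with REW or ARTA and saved to a file i'm calling ir_left.wav (made-up name):

```python
# pull magnitude and phase vs. frequency out of a measured impulse response
# (sketch only: ir_left.wav is a made-up filename for an IR captured elsewhere)
import numpy as np
from scipy.io import wavfile

fs, ir = wavfile.read("ir_left.wav")
ir = ir.astype(float)
if ir.ndim > 1:
    ir = ir[:, 0]            # keep one channel if the file happens to be stereo

spectrum = np.fft.rfft(ir)
freqs = np.fft.rfftfreq(len(ir), d=1.0 / fs)

magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
phase_deg = np.degrees(np.unwrap(np.angle(spectrum)))   # unwrapped phase

# print a few spot frequencies just to show the phase information is in there
for f_target in (100, 1000, 10000):
    i = int(np.argmin(np.abs(freqs - f_target)))
    print(f"{freqs[i]:8.1f} Hz: {magnitude_db[i]:6.1f} dB, phase {phase_deg[i]:9.1f} deg")
```

what a single IR doesn't give you, of course, is the two-ear, over-distance part, which i suppose is why people reach for binaural or array measurements.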
seeing impulse response abbreviated as IR caused me to think of "infrared" and wonder: could using that like a sonar sweep succeed in showing changes in energy over a field? sound is a form of energy, no? (i hope i won't be treated like a football over this!)

are you insinuating i'm confused?
 
Again... soundstage from a stereo signal is only affected by crosstalk... either in the source or through both early and late reflections from the environment. You would have to do extensive null sampling to evaluate... in other words... not going to happen... far too many environmental factors to take into account. I'm not even going to entertain room reverb decay, or modes and other resonances that show up in the time domain.
 
If you aren't "wondering", then what is it? What's your explanation? I've given mine. What's your definition of "image" in the context of loudspeaker design, or however you may find it useful?

OK, here is your position, and I disagree with it. It is no different from measuring tone and other aspects of the signal. Measurements are objective; how we perceive them is subjective. There are subjective and objective aspects of tone... there are subjective and objective aspects of image... what's the confusion?

Is an image only a measurable, quantifiable acoustic event? Think: loudness, distance, direction, frequency response, near-field acoustic holography. Add any measurement tools you can imagine.

Or is it an image of an acoustic event mapped on a human's neural network?

Or both?

I am afraid that in the context of this thread we are speaking of the human experience of an acoustic event. You know, like being there in your own head.

Human perception is completely measurable. If you read the PDFs that I have posted here, you can see that human hearing can quantify distance and azimuth. That means humans can form a model of how far away, and in what direction, a sound came from.

Research has found that humans with long-term complete blindness use the visual cortex in their brains to map auditory spatial information.

Human perception is objectively measurable and quantifiable. The subjective side becomes a bit more fuzzy, don't you think?

Thanks DT

Still confused?

Do not hit it with a hammer.

Poke it with a stick.
 
but the very phenomenon (image) we want to examine is in large part something that comes about as a result of changes in phase over time, at multiple frequencies, and over distance. (i wish i could turn that into a mathematical formula!)
so how would an impulse be used in this case?
Again, we shouldn't conflate the components of soundstage with the design of a speaker for reproducing it.

It is not necessary to measure the soundstage to design a speaker.
 
I am afraid that in the context of this thread we are speaking of the human experience of an acoustic event
No post that I read made a distinction suggesting we are focused on perception rather than the literal sound field.
Human perception is objectively measurable and quantifiable.
The conflict was over whether or not image is measurable. My position remains that it is. Have you changed yours?
 
Again... soundstage from a stereo signal is only affected by crosstalk
DeepSeek - "Soundstage perception is dominated by lateral reflections (not crosstalk alone). Crosstalk cancellation (e.g., BACCH SP) addresses interaural coherence, but room modes/decay still corrupt imaging. Nulls are frequency-dependent; psychoacoustic masking limits audibility."
 
Just a note: at 20 kHz the phase flips every 0.85 cm, the wavelength being 1.7 cm.

Yes, and happily we rely on delta amplitude for high-frequency localization (because of our head shadow, which also imposes an interaural time gap).

Phase tracking is important at lower frequencies because the wavelengths are so long that our head is no longer an obstacle, and we can't rely on amplitude to locate a lateral source.
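For anyone who wants the raw numbers, here's a minimal back-of-envelope sketch in Python (343 m/s and an 8.75 cm head radius are textbook assumptions, and Woodworth's formula is only a spherical-head approximation):

```python
# back-of-envelope numbers for the wavelength / ITD discussion above
# assumptions: speed of sound 343 m/s, average head radius ~8.75 cm,
# Woodworth's spherical-head approximation for the maximum ITD
import numpy as np

c = 343.0      # speed of sound, m/s
a = 0.0875     # head radius, m

for f in (200.0, 1500.0, 20000.0):
    wavelength = c / f
    print(f"{f:7.0f} Hz: wavelength {wavelength * 100:6.2f} cm, "
          f"phase flips every {wavelength * 50:5.2f} cm (half wavelength)")

theta = np.pi / 2                     # source fully to one side
itd_max = (a / c) * (theta + np.sin(theta))
print(f"max interaural time difference ~ {itd_max * 1e6:.0f} microseconds")
```

That roughly 0.66 ms ceiling on ITD, against a period of only 50 µs at 20 kHz, is exactly why phase stops being a usable lateral cue up there.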

@camplo, you quote AI and that's it? Please make it obvious to everyone by indicating the source of your quote.
 
I usually don't post huge amounts of AI-derived content in a thread debate, but DeepSeek with its reasoning feature turned on is intriguing to me:

"Measuring Imaging in Acoustics: A Structured Approach

Imaging in acoustics refers to the spatial perception of sound, including direction, distance, depth, and the arrangement of sound sources. Measurement involves both objective physical analysis and subjective perceptual evaluation:

1. Objective Physical Measurements

  • Interaural Time Difference (ITD) & Interaural Level Difference (ILD)
    • Measured using dummy heads (e.g., KEMAR) with ear microphones to quantify directional cues (time/level differences between ears).
    • Tools: Binaural microphones, oscilloscopes, or software like MATLAB for signal analysis.
  • Head-Related Transfer Functions (HRTFs)
    • Capture how an individual’s anatomy filters sound using specialized equipment (e.g., spherical microphone arrays or 3D scanning).
    • Applied in spatial audio systems (e.g., Dolby Atmos) to simulate realistic imaging.
  • Room Impulse Response (RIR)
    • Measures reflections, reverberation, and decay using a calibrated speaker and measurement microphone (e.g., B&K mics).
    • Software: Dirac, ARTA, or REW for analysis of time-domain behavior affecting imaging.
  • Beamforming & Acoustic Imaging
    • Uses microphone arrays (e.g., 64-channel systems) to localize sound sources and generate spatial heatmaps.
    • Tools: Acoustic cameras (e.g., Sorama) or software like Comsol for sound field visualization.
  • Near Field Acoustic Holography (NFAH)
    • Reconstructs sound fields near sources to identify spatial contributions (e.g., loudspeaker driver interactions).
    • Tools: Brüel & Kjær’s NAH systems or numerical methods (Fourier-based algorithms).

2. Subjective/Perceptual Evaluation

  • Psychoacoustic Testing
    • Listening tests (e.g., MUSHRA or ABX protocols) where subjects rate imaging attributes (e.g., "Can you locate the violin?").
    • Metrics: Localization accuracy, perceived depth, and source width.
  • Localization Blur
    • Quantifies the smallest angular change a listener can detect (e.g., 1° azimuth for trained listeners).
    • Measured via controlled lab experiments with rotating speaker setups.
  • Spatial Release from Masking (SRM)
    • Evaluates how well listeners separate target sounds from noise using spatial cues (e.g., speech-in-noise tests).
    • Indicates imaging clarity in complex environments.
  • Auditory Scene Analysis Models
    • Computational models (e.g., auditory grouping algorithms) predict perceptual segregation of sound sources.
    • Tools: Auditory Modeling Toolbox (AMT) or Python libraries (e.g., librosa).

3. Advanced/Research Tools

  • fMRI/Neuroimaging
    • Maps brain activity (e.g., visual cortex in blind individuals) during spatial hearing tasks.
    • Research-focused but clarifies neural correlates of imaging perception.
  • Binaural Recording & Playback
    • Uses dummy heads (KEMAR) to capture and replay spatial audio via headphones, mimicking human hearing for subjective evaluation.

Key Considerations

  • Calibration: Ensure measurement tools (mics, arrays) are phase-matched and calibrated to IEC standards.
  • Environmental Control: Minimize room reflections (anechoic chambers) or use time-windowing in RIR analysis.
  • Subject Variability: Account for individual HRTF differences and hearing acuity in perceptual tests.
Conclusion: Imaging is quantified through a blend of physics-based metrics (ITD, HRTFs, beamforming) and perceptual testing (localization blur, psychoacoustics). Advanced tools like NFAH and fMRI bridge the gap between objective data and human experience."
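To make the ITD/ILD item above concrete, here is a minimal sketch of my own (not DeepSeek's) that estimates both from a two-channel binaural capture using plain cross-correlation; binaural.wav is a placeholder filename, and it assumes a single dominant source in a reasonably dry recording (e.g. from a dummy head):

```python
# estimate ITD and ILD from a two-channel binaural recording
# (sketch only: binaural.wav is a placeholder; assumes one dominant source
#  and a reasonably dry capture, e.g. from a dummy head)
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, correlation_lags

fs, x = wavfile.read("binaural.wav")
x = x.astype(float)
left, right = x[:, 0], x[:, 1]

# ITD: lag of the cross-correlation peak, restricted to +/- 1 ms,
# which is already generous for a human-sized head
max_lag = int(0.001 * fs)
corr = correlate(left, right, mode="full")
lags = correlation_lags(len(left), len(right), mode="full")
window = np.abs(lags) <= max_lag
best_lag = lags[window][np.argmax(corr[window])]
itd_us = 1e6 * best_lag / fs

# ILD: broadband RMS level difference between the channels
ild_db = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))

# the lag's sign convention follows scipy's correlate ordering; worth checking
# once against a synthetic signal with a known delay before trusting left/right
print(f"ITD ~ {itd_us:7.1f} microseconds")
print(f"ILD ~ {ild_db:5.1f} dB (positive = left channel louder)")
```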
 
I don't need someone else to tell me; in my opinion, it is plain to see. In an anechoic chamber a good loudspeaker will give near-perfect imaging.

If you cannot disprove this statement, you cannot prove that imaging cannot be measured. In this scenario, the only bottlenecks would be the loudspeaker and the source material.
Ehm...
If that's your opinion, what can I say?
The first statement is simply wrong, so what follows is... debatable?!
Why can't you call a sound complete inside an anechoic room?
If sound travels... ehm, we're sure it does, right?! OK, it travels and bounces and loses energy.
Now, I'm not going to tell you that all that HRTF, ITD and the like is meaningless, but there is no brain in the KEMAR; maybe that's the only useful thing an AI can do, playing tones to develop its own taste.
 