> it is very easy, just listen...

Not so easy in real life. Seeing where the microphones are placed in a YouTube video tells you little about the soundstage, since you don't know how the sound engineer did the mixing. Also, using YouTube videos for any listening evaluation is questionable.
I've attended numerous concerts, but rarely if ever was the concert recorded. And even if it was, I don't get to sit in the best seat of the concert hall, which is most likely what the sound engineer uses as a reference.
The image in post #251 shows 3rd harmonic in what I would call "compressive" phase.
In a stereo sound field there should be lateral localization cues and depth cues (two different things). ITD and ILD are for lateral (horizontal) localization. OTOH, depth cues are mainly given by the ratio of direct to reverberant sound and by HF loss with distance.

"It includes differences in the precise time at which the sound reaches each ear (interaural time difference, ITD) and the intensity of the sound reaching each ear (interaural level difference, ILD). The ITDs are more effective for localizing low frequencies (lower than 1500 Hz), and the ILDs are more effective for localizing high frequencies (higher than 1500 Hz). Frequencies in the range of 2000–4000 Hz are poorly localized. In addition, spectral changes that occur when the sound encounters body parts, such as the torso, head, and pinnae, also provide information on the location of the sound source."
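To make the lateral cues concrete, here is a minimal numpy sketch (my own illustration; the ITD and ILD values are just plausible examples, not measured data) that imposes an ITD and an ILD on a mono tone to shift its apparent lateral position:

```python
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs
tone = np.sin(2 * np.pi * 500 * t)     # 500 Hz: in the ITD-dominated region (< 1500 Hz)

itd = 300e-6                           # 300 us interaural time difference (illustrative)
ild_db = 3.0                           # 3 dB interaural level difference (illustrative)

lag = int(round(itd * fs))             # ITD as a whole-sample delay (14 samples here)
left = tone
right = np.concatenate([np.zeros(lag), tone[:-lag]])  # right ear lags -> source pulls left
right = right * 10 ** (-ild_db / 20)                  # right ear also quieter by the ILD

stereo = np.column_stack([left, right])
# note: these are purely lateral cues -- depth would require changing the
# direct/reverberant ratio and HF roll-off, which this sketch does not do
```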
Some of what has been discussed regarding H2 phase was described as affecting depth perception.
In a somewhat related vein, I have also seen some cases where a particular DAC's clock jitter resulted in a narrow (compressed laterally in the space between the speakers) and forward (in front of the speakers) virtual soundstage. IME, more normally the stereo illusion of depth starts at the plane of the speakers and extends out behind them.
It might be useful to differentiate between a created 'soundstage', the aural equivalent of CGI, and attempts to accurately capture the vector information a human head experiences at the event. For me a fantastic example of the latter is a two-mic (modified Jecklin) organ recording linked by a DIYA member, entitled AF7aExcrptA.flac. Can't remember the thread. The building behind my speakers disappears when it's played.
> In a stereo sound field there should be lateral localization cues and depth cues (two different things). ITD and ILD are for lateral (horizontal) localization. OTOH, depth cues are mainly given by the ratio of direct to reverberant sound and by HF loss with distance.
@Markw4 - Much appreciated. I'll dig back through. I may be trying to educate myself beyond my intelligence. I'm curious about the topic. It's fun. However, everything I've found over the years is either way above my head and/or anecdotal. I need a Master's student to do a research summary for me. I'm old, dumb and lazy.

What you've mentioned certainly makes sense. I may have confused "fluctuations in volume" (Kim and Kuwada) cited as a key variable in terms of the perception of distance aurally with an absolute difference (ILD) between what's perceived with each ear.
As I start to soak in more of the terminology and 'truly' understand their meanings and measurements, I may get somewhere.
Much appreciated.
Cheers!
Edited to correct ILD (level) from ITD (timing).
> The image in post #251 shows 3rd harmonic in what I would call "compressive" phase.
Yes, about to post that.... I kept counting peaks... also it shows a phase shift....
About H2: assuming no phase shift, negative-phase H2 will slow the rise of the leading edge of the fundamental. Will this measure as a temporary low-pass filter for a particular fundamental frequency? How will this affect a wide bandwidth, say across the audio spectrum?
Has anyone ever taken the effects of H2 (positive phase), H2 (negative phase), and H3 on the fundamental and then calculated, over time, the filtering action? In some cases this might be like turning down the treble, and in others turning it up... like riding the treble control.
It is a frequency-based distortion, but when you analyze its effect over time (dV/dt), it becomes a time-based distortion. What kind of graph would we use? A waterfall?
Here's an example of what harmonics do to the fundamental... unfortunately it shows only positive-phase harmonics, but you get the idea of how the fundamental gets modulated... Note how the 3rd harmonic makes the fundamental look like a square wave!
Of course, no one with a straight face would design an amp with harmonics of the same amplitude as the fundamental... unless you are building a square-wave generator... but you get the idea...
https://www.electronics-tutorials.ws/accircuits/harmonics.html
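A quick numpy sketch of the same ideas (my own toy example, not from that page; the 10% H2 level is exaggerated so the numbers are obvious), showing how the sign (phase) of H2 steepens or slows the leading edge of the fundamental, and how summed odd harmonics push toward a square wave:

```python
import numpy as np

fs, f0 = 96000, 100.0
t = np.arange(int(0.02 * fs)) / fs            # one 20 ms cycle of a 100 Hz fundamental
fund = np.sin(2 * np.pi * f0 * t)

# 10% H2 added in "positive" vs "negative" phase (sign of the harmonic term)
h2_pos = fund + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
h2_neg = fund - 0.1 * np.sin(2 * np.pi * 2 * f0 * t)

# odd harmonics at 1/n amplitude push the shape toward a square wave
# (plot this to see the effect described on that page)
square_ish = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in (1, 3, 5, 7))

# numerical dV/dt at the zero crossing, i.e. the leading-edge slope
slope = lambda x: (x[1] - x[0]) * fs
print(f"fundamental: {slope(fund):7.1f} V/s")
print(f"+10% H2:     {slope(h2_pos):7.1f} V/s")   # steeper leading edge
print(f"-10% H2:     {slope(h2_neg):7.1f} V/s")   # slower: the temporary 'low-pass' look
```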
A review article from 2015:
Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss
https://pmc.ncbi.nlm.nih.gov/articl...
Thanks for the correction.
> The image in post #251 shows 3rd harmonic in what I would call "compressive" phase.
The image is the distortion residual, combining all the distortion separated from the fundamental. It looks like a mix of H3 + H2, producing uneven summed distortion waveform peaks.
This is how pure H2 in negative phase would look.
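For anyone who wants to reproduce that, a small numpy sketch (my own construction, with arbitrary 1% harmonic levels) comparing the peak heights of a pure negative-phase H2 residual against an H2N + H3 mix:

```python
import numpy as np

fs, f0 = 96000, 1000.0
t = np.arange(int(0.002 * fs)) / fs      # two cycles of a 1 kHz fundamental

def peak_heights(x):
    # heights of the local maxima in the waveform
    m = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
    return np.round(x[1:-1][m], 4)

h2n = -0.01 * np.sin(2 * np.pi * 2 * f0 * t)        # pure H2, negative phase
mix = h2n + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)   # H2N + H3 summed residual

print("pure H2N peaks:", peak_heights(h2n))   # all equal
print("H2N + H3 peaks:", peak_heights(mix))   # uneven, as in the posted residual
```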
> Not so easy in real life. Seeing where the microphones are placed in a YouTube video tells you little about the soundstage, since you don't know how the sound engineer did the mixing. Also, using YouTube videos for any listening evaluation is questionable.

It would be a sad world if YouTube sounded bad on a great audio system; on the contrary, I find great pleasure in every sound played through it! It's a love/hate relationship, as ads invade a peaceful listening session and I know the original version of a song would give more pleasure.
For discovering new songs and fast browsing I find it incomparable, and I love video clips!
@tonyEE - Maybe what you've clearly articulated is what I am poorly trying to describe. I don't have any 'real' understanding around the topic, as if that wasn't painfully obvious.
A saying sticks with me, something akin to: the differences in the sonics of amplifiers aren't in what they do well, they're in what they do poorly. I think it may have been Bob Cordell, but again... I don't know. I'd like to give credit where it's due.
So, I've been trying to understand distortion characteristics. My gut (there's no knowledge or research behind it that I've found or can understand) says that if the harmonic structure of the distortion changes with load, frequency, and power (three common variables that I've seen affect distortion "character"), then the "tone" of even a single instrument with overtones should change over its frequency range and at different SPLs.
Crudely stated...
Assume a speaker's impedance in the bass region is 4R at some phase angle (I still don't understand that... but) and in the treble region is 16R at another phase angle.
Assume an amplifier at 1W / 1kHz / 8R (strictly resistive load) produces a "2nd-harmonic-dominant" THD.
Assume that same amplifier at 10W / 1kHz / 4R (strictly resistive load) produces a "3rd-harmonic-dominant" THD.
Assume further that the THD does not remain perfectly consistent from 20Hz to 20kHz.
That's a pretty realistic set of conditions for some amplifiers that may be considered wonderful to listen to.
BUT... what I wonder out loud with a group of people much smarter than me reading...
Doesn't that mess with everything? The "secret sauce" of the distortion being added isn't consistent at various levels.
So... it might be an overstatement / oversimplification, but a light strike on a piano note may sound tonally different than a heavy strike. The lower notes on the piano would have a different "distortion effect" than the higher notes, I'd assume.
So, I suppose one of my theories is that "bad sound" may not simply be the distortion character at one unique frequency and load. It may be how the distortion varies with "real music" and with "real speakers".
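If it helps, here is a toy numpy model of that gut feeling (entirely hypothetical coefficients, not any real amplifier): a weakly curved transfer function whose square term makes H2 and whose cube term makes H3, so the H2/H3 balance shifts on its own as the drive level changes:

```python
import numpy as np

def toy_amp(x):
    # hypothetical weakly nonlinear amp (made-up coefficients):
    # the x**2 term generates H2 and the x**3 term generates H3.
    # since x**2 scales as level^2 and x**3 as level^3, the H2/H3
    # balance shifts with drive level -- the "secret sauce" isn't constant.
    return x + 0.01 * x**2 + 0.002 * x**3

fs, f0 = 96000, 440.0
t = np.arange(fs) / fs                    # 1 s of signal -> FFT bins are 1 Hz apart
for level in (0.1, 1.0):                  # light vs heavy "strike"
    y = toy_amp(level * np.sin(2 * np.pi * f0 * t))
    spec = np.abs(np.fft.rfft(y)) * 2 / len(y)
    h1, h2, h3 = (spec[int(n * f0)] for n in (1, 2, 3))
    print(f"level {level}: H2 {20*np.log10(h2/h1):6.1f} dBc, "
          f"H3 {20*np.log10(h3/h1):6.1f} dBc")
```

In this toy model H2 rises 20 dB and H3 rises 40 dB (relative to the fundamental) for a 20 dB increase in level, so the harmonic "profile" of a soft and a hard strike really would differ.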
Edited to add - In the inverse - "better" sound might simply be "more consistent" distortion vs. "low" distortion.
Sorry for the long post... maybe you can make sense of some of it. Cheers!
> Doesn't that mess with everything?

I would think so. To understand things better, it's very helpful to know what a "transfer function" is. It may also be helpful to know that talking about one frequency at a time, at one particular level, giving one particular distortion profile, is really only valid when only one frequency is present. So why do we perform distortion analysis that way? Because that way it's possible for the human mind to make sense of a spectral display. OTOH, if there are too many frequencies at once, the spectral display can become an indecipherable, messy jumble of spectral lines.
In a better, closer-to-realistic model than the above, there is assumed to be a slightly curved transfer function that produces the distortion. The curvature can change with frequency, but rather than think of the curvature change in terms of frequency (the frequency-domain view), we can think of the curvature (viewed in the time domain) as a function of the rate of change of voltage. There is a relationship between the two views: higher frequencies have a higher rate of change in time, but combinations of multiple frequencies added together can have even higher rates of change with respect to time.
The latter paragraph gets closer to the way of thinking about and modeling distortion that was proposed by Earl Geddes.
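A small numpy check of that last point about rates of change (my own example, arbitrary frequencies): the sum of two tones slews faster than either tone alone, so a curvature that depends on dV/dt gets exercised harder by combinations than by single tones:

```python
import numpy as np

fs = 192000
t = np.arange(int(0.05 * fs)) / fs
a = np.sin(2 * np.pi * 1000 * t)          # 1 kHz tone
b = np.sin(2 * np.pi * 9000 * t)          # 9 kHz tone

max_slew = lambda x: np.max(np.abs(np.diff(x))) * fs   # numerical max |dV/dt|
print(f"1 kHz alone: {max_slew(a):9.0f} V/s")          # ~ 2*pi*1000
print(f"9 kHz alone: {max_slew(b):9.0f} V/s")          # ~ 2*pi*9000
print(f"both summed: {max_slew(a + b):9.0f} V/s")      # more than either alone
```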
> OTOH, if there are too many frequencies at once, the spectral display can become an indecipherable, messy jumble of spectral lines.

So you find multitone measurements indecipherable?
Depends. It sure is nice to have a computer program to help with some of the deciphering when there are a large number of tones. But multitone tests are also a special case: fixed-amplitude frequencies chosen to make their distortion products more distinguishable.
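For illustration, a minimal numpy sketch of how such a stimulus might be built (one common approach, not any particular analyzer's method): tones placed exactly on FFT bins so distortion and noise fall in the empty bins between them, with randomized phases to keep the crest factor reasonable:

```python
import numpy as np

fs, n = 48000, 1 << 16                    # 65536-point record so tones sit on exact bins
rng = np.random.default_rng(0)
bins = np.unique((np.geomspace(20, 20000, 32) * n / fs).astype(int))  # log-spaced tone bins
t = np.arange(n)

# randomized phases keep the crest factor of the sum reasonable
x = sum(np.sin(2 * np.pi * k * t / n + rng.uniform(0, 2 * np.pi)) for k in bins)
x /= np.max(np.abs(x))

spec = 20 * np.log10(np.abs(np.fft.rfft(x)) * 2 / n + 1e-12)
print("tone bin (dBFS):", round(spec[bins[5]], 1))
print("gap bin  (dBFS):", round(spec[bins[5] + 3], 1))  # no DUT here, so just numerical
                                                        # noise; a real DUT's distortion
                                                        # products pile up in these gaps
```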
@Markw4 -- I have limited recollection of transfer functions, but not enough to apply. I'll see what I can dredge from the memory ...
Separate, but related, re: how to make sense of some of it. I personally like looking at it in the time domain. FFTs are nice, and I understand their purpose, but my brain registers a little better seeing the fundamental voltage and the residual voltage superimposed on the same image. I can't make that type of visualization with an FFT. I've seen some really 'funky' looking residual waveforms that I'd never have picked up on simply by looking at an FFT. Now, I don't know if it actually "matters"... but that's part of the fun.
> Now, I don't know if it actually "matters"...

In my opinion it can matter. The crest factor of the distortion can matter, especially at lower frequencies. We don't see that in the spectral view since phase information has been discarded.
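A small numpy illustration of that point (my own example, arbitrary 1% harmonic levels): two distortion residuals with identical magnitude spectra but different phase, giving different shapes and different crest factors, which a magnitude-only spectral plot cannot distinguish:

```python
import numpy as np

def crest_db(x):
    # peak-to-RMS ratio of a waveform, in dB
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

fs, f0 = 48000, 100
t = np.arange(fs) / fs    # 1 s, whole cycles of a 100 Hz fundamental

# two residuals with IDENTICAL magnitude spectra (1% H2 + 1% H3),
# differing only in the phase of the H2 component
res_a = 0.01 * np.sin(2*np.pi*2*f0*t) + 0.01 * np.sin(2*np.pi*3*f0*t)
res_b = 0.01 * np.cos(2*np.pi*2*f0*t) + 0.01 * np.sin(2*np.pi*3*f0*t)

print(f"residual A crest: {crest_db(res_a):.2f} dB")
print(f"residual B crest: {crest_db(res_b):.2f} dB")   # different shape, different crest,
                                                       # same magnitude-only FFT plot
```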
EDIT: Some info on transfer functions at: https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_distortion.pdf This includes some graphical representations to help develop some intuition. Working graphically is also how we used to plot approximate distortion waveforms for things like tube circuits.
Here is a graphic showing a linear transfer function:
> We don't see that in the spectral view since phase information has been discarded.

Why do you stubbornly repeat that, after it was pointed out to you more than once that no phase information is discarded?
See again here:
> Why do you stubbornly repeat that, after it was pointed out to you more than once that no phase information is discarded?

An FFT produces sine and cosine outputs. These are then converted to polar form, which is magnitude and phase angle. From that, REW can display phase. However, for graphical spectral-analysis plots, only the magnitude information is displayed.
This is easy to prove: if you average many FFTs after phase information has been discarded, the noise doesn't tend to cancel out to zero. Rather, it is boxcar-filtered to give the average noise level.
OTOH, if you average many FFT datasets with phase retained, the noise is reduced toward zero as the number of averages approaches infinity (however, this requires synchronous sampling in order not to also cancel out some or all of the desired spectral information).
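A quick numpy demonstration of the difference (my own sketch; the tone is placed exactly on a bin to satisfy the synchronous-sampling requirement):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_avg, k = 4096, 1000, 64             # tone exactly on bin 64 -> synchronous sampling
t = np.arange(n)

acc_cplx = np.zeros(n // 2 + 1, dtype=complex)
acc_mag = np.zeros(n // 2 + 1)
for _ in range(n_avg):
    x = np.sin(2 * np.pi * k * t / n) + 0.1 * rng.standard_normal(n)
    spec = np.fft.rfft(x)
    acc_cplx += spec                     # phase retained (coherent/vector average)
    acc_mag += np.abs(spec)              # phase discarded (magnitude average)

noise_bins = np.r_[10:60, 70:120]        # bins well away from the tone
print("coherent avg noise :", np.mean(np.abs(acc_cplx / n_avg)[noise_bins]))
print("magnitude avg noise:", np.mean((acc_mag / n_avg)[noise_bins]))
# the coherent average keeps shrinking like 1/sqrt(n_avg);
# the magnitude average just settles at the mean noise level
```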
> Does THD accurately predict good sound quality? And is subjective SQ useful to assess amps?

A good question to pique interest...
Thoughts?
For the first question: I think the word "accurately" is fundamental, so my answer is no.
For the second question:
I would definitely say yes.
As you may have gathered, I am certainly not a designer or an engineer in this field, because from what I have read so far in the thread, perhaps all those who have answered can be divided into two philosophies.
One includes people who, by profession or by nature, tend to measure everything measurable to get answers about electro-acoustic components and about how a sound event unfolds in all its details.
The other simply listens to the sound event, combining the components they like the most for a result they like the most.
I belong to this second group, but I like to think that "hybrid" professionals exist.
Generally, the former pursue high fidelity; the latter are labeled as my-fi.
High fidelity, in my opinion, is an illusion: an amplifier considered the most hi-fi today, because it is the best at reproducing the sound most faithfully to the original, could be overtaken in a year and considered second; in two years, twentieth; and in three years, hundredth.
So when it becomes the second, can it still be called hi-fi?
And when it becomes the hundredth?
If what I read is true, an amplifier whose THD percentages are not very low, and which instead uses them to give musicality to the harmonic profile, is not hi-fi.
Do we put it in two-hundredth place?
Thousandth?
But then, if we assume the second is no longer hi-fi, what difference is there from the thousandth?
Who establishes that the second is still hi-fi compared to the first, while the twentieth is not?
And the hundredth, yes or no?
And the thousandth?
At this point I start thinking that my-fi is much more coherent, because if I build my setup with the "twentieth in the ranking" (or the thousandth), which may be a Nelson Pass design, and which maybe I even like a lot, I have already come out ahead of all those who chase first place, which is perhaps never definitive anyway.
As for reproduction more faithful to the original, how many recordings can be relied on for that?
Considering the quality of the medium, of the recording equipment, and of the microphone placement in the room, for the credibility of the soundstage...
Am I forgetting something? Probably.
But in essence, what has more value, in my opinion, is experience, both the designer's and the listener's.
That way, I believe the result is assured, as close as possible to one's objective.
Which is not a small thing, but above all it applies to everyone, those who chase hi-fi and those who chase my-fi.
The purist, in my opinion, is not "superior".
Sorry for the long post and my opinion.
> This is easy to prove: if you average many FFTs after phase information has been discarded, the noise doesn't tend to cancel out to zero. Rather, it is boxcar-filtered to give the average noise level.

Yes, an infinite number of averages would be needed to reach zero, with proper accounting for the phase of each averaged sample, since noise is random in both magnitude and phase. But the measured fundamental and harmonics are not random in phase and magnitude (yes, of course, they have some unknown variation). So including the actual phase of every sample in the averaging math wouldn't have a big effect.