Should we touch on the issue that the results are actually complex numbers?
Amazingly, computers have no problem with complex numbers. In the real world (pun!), we call that "phase."
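To make that concrete, here is a minimal sketch (plain Python, naive DFT, standard library only; the 8-point record and 45-degree phase are just illustrative choices): each frequency bin comes out of the transform as a complex number whose magnitude is the amplitude and whose angle is the phase.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2) -- fine for a demo."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# One cycle of a cosine with a 45-degree phase offset, sampled at 8 points.
n = 8
x = [math.cos(2 * math.pi * t / n + math.pi / 4) for t in range(n)]

X = dft(x)  # every bin is a complex number
# Bin 1 carries the tone: |X[1]| * 2/n is its amplitude, and the angle of
# the complex value is exactly the pi/4 (45-degree) phase we put in.
print(round(abs(X[1]) * 2 / n, 6))   # ~1.0
print(round(cmath.phase(X[1]), 6))   # ~0.785398 (= pi/4)
```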
I do have a bit of a chuckle that virtually all the digital audio systems I see today are "24" bit. Even the low cost ones. To me there is a large difference between 24 bit arithmetic and a 24 bit accurate converter. (I don't want to get into the specific qualifications of true 24 bits, but 1/2 LSB accurate at a little more than double measurement bandwidth would be a start. The last time I complained about 24 bits someone pointed out a chip that was accurate to 24 bits... in a few seconds.)
As I said the analog hardware will limit you long before the math.
Should we touch on the issue that the results are actually complex numbers?
Are you sure they are not half imaginary? Really Ed, you have to go over complex math.
No. Frequency and time are conjugate variables. One representation is exactly equivalent to the other.
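A quick sketch of that exact equivalence (plain Python, naive DFT and inverse; the sample values are arbitrary): transform any waveform to the frequency domain and back, and the original comes out to machine precision.

```python
import cmath
import math

def transform(x, sign):
    """Naive DFT (sign=-1) or un-normalized inverse DFT (sign=+1)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

x = [0.3, -1.2, 0.0, 2.5, 0.7, -0.4, 1.1, -2.0]   # arbitrary samples
X = transform(x, -1)                               # frequency-domain view
x_back = [v / len(X) for v in transform(X, +1)]    # back to the time domain

err = max(abs(a - b) for a, b in zip(x, x_back))
print(err < 1e-12)   # True: no information was lost in either direction
```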
yes, of course.
Is there a peak level higher than what gets displayed? Isn't the display of each harmonic level's average?
Thx,
RNM
The other chuckle is that more hairs are split by native English speakers than those for whom it is a second language. (Although there are certainly some language gaps also.)
No, it is from the thread "University has no clothes" in the Lounge. Non-native English speakers often learned science and technology that was less end-product-oriented, i.e. more fundamental: aimed at better understanding rather than easier application.
RNMarsh said: "Is there a peak level higher than what gets displayed? Isn't the display of each harmonic level's average?"

The FFT calculation works with peak levels. The display can show whatever the programmer decides (or the user requests). The actual numbers coming out of the calculation depend to some extent on the windowing function used. If no windowing is done (a 'rectangular window'), then you can get 'ringing', just like a brickwall filter: a time window for an FFT is the dual of a frequency filter.
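A toy illustration of that windowing point (plain Python, naive DFT; the record length, tone frequency, and observation bin are arbitrary choices): a tone that does not fit an integer number of cycles into the record leaks badly across the spectrum with a rectangular window, far less with a Hann window.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2) -- fine for a demo."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64
f = 4.5   # cycles per record: deliberately NOT an integer
x = [math.sin(2 * math.pi * f * t / n) for t in range(n)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / n) for t in range(n)]

rect_mag = [abs(v) for v in dft(x)]                                  # no window
hann_mag = [abs(v) for v in dft([s * w for s, w in zip(x, hann)])]   # Hann window

# Far from the tone (bin 20), the rectangular window leaks much more:
print(rect_mag[20] > 10 * hann_mag[20])   # True
```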
The FFT calc is averaging. The instrument is calibrated in RMS, and the peak or peak-to-peak is higher than shown. Is this true?
Thx, Richard
Are you sure they are not half imaginary? Really Ed, you have to go over complex math.
Do you really want to get into this? Or is there anything specific you want to review.
I did consider complex math a bit tough when I was exposed to my first use of it. Of course I was getting taught by a family friend. I would have barely passed a course in it, if for real.
The classic story is that I had asked about Boltzmann's constant as used in semiconductors. I was told it was basic, so I went back to the books. I still didn't get it and asked again, explaining that I was having trouble with something basic. The answer was "I said it was basic, not simple."
RNMarsh said: "The FFT calc is averaging."

The FFT tells you how much of, say, 1kHz there is in a time waveform. The result is averaged over the entire time covered by the waveform. If you want to ask questions about when the 1kHz appears then you need to do wavelet transforms rather than plain Fourier.
Given a 1kHz sine wave, the RMS value is 0.707 times the peak. The FFT calculates the peak, but the display might show RMS. It is exactly the same information, because multiplying every value by a constant does not add or subtract information.
I am not certain exactly what you are asking, or why. Could you clarify, as that might help us explain it better?
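As a small numeric sketch of that peak-vs-RMS point (plain Python; the example spectrum values are made up):

```python
import math

# For a sine wave, RMS = peak / sqrt(2), i.e. ~0.707 * peak.
peak = 1.0
rms = peak / math.sqrt(2)
print(round(rms, 3))   # 0.707

# Rescaling a whole spectrum from peak to RMS multiplies every bin by the
# same constant, so the ratios between harmonics -- what distortion figures
# are built from -- are unchanged. No information is gained or lost.
spectrum_peak = [1.0, 0.1, 0.01]   # made-up fundamental plus two harmonics
spectrum_rms = [v / math.sqrt(2) for v in spectrum_peak]
print(math.isclose(spectrum_rms[1] / spectrum_rms[0],
                   spectrum_peak[1] / spectrum_peak[0]))   # True
```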
why are people apparently complaining about the different information that is readily visible in time series vs fft
that is the point - the two views contain the same "information" but present it completely differently - that is the advantage of having, and using, both views
complaining that the amplitude in an fft bin is ~ a whole-record "average" is missing the point - that is exactly what you use the Fourier view to find out: what the frequency components of the recorded waveform are
for an "eyeball" evaluation of the signal frequency components of a simple test tone you have to scroll through the whole time record, recognizing and marking "features" like zero-crossing times, to see if there is a pattern to the timing
conversely it's hard to "find" the edge of a step function "by eyeball" in the FFT data
they are different tools for looking at different features in the data – having the advantage that they both contain all of the original information in either view
simon7000 said: "I did consider complex math a bit tough when I was exposed to my first use of it."

The name 'complex' is an unfortunate historical accident, which I am sure discourages people. Even worse is 'imaginary', as imaginary numbers are just as real as 'real' numbers.
Several ways to think of complex numbers:
1. a 2-D vector
2. a pair of real numbers with rules about how to combine them
3. a quite natural extension of real numbers, just as irrational, rational, negative numbers were (in reverse order) successive natural extensions of positive integers
Boltzmann's constant can be thought of as a way to convert from the way we measure temperature (K) to the way nature measures temperature (J).
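A back-of-envelope sketch of that conversion (plain Python; 300 K is just a convenient "room temperature"):

```python
# Boltzmann's constant converts temperature in kelvin to energy in joules.
k = 1.380649e-23        # J/K (exact since the 2019 SI redefinition)
q = 1.602176634e-19     # elementary charge in C (also exact)
T = 300.0               # a convenient "room temperature" in kelvin

kT = k * T              # thermal energy in joules
print(f"kT = {kT:.3e} J")                # ~4.142e-21 J
print(f"kT/q = {kT / q * 1000:.1f} mV")  # ~25.9 mV
```

kT/q at room temperature is the familiar ~26 mV "thermal voltage" that shows up all over semiconductor equations, which ties back to the Boltzmann-in-semiconductors story above.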
Ok, I'll try it another way: isn't the amplitude on the screen of the FFT display all in RMS voltage values? I put in 1 volt RMS and that is standardized at 0 dBm on the display screen. All the harmonics are in RMS (actually, they should be average).
Although we break the waveform down into sine waves and the like, the ear does not hear all waveform shapes with the same average level as equally loud. Some shapes are easier to detect than others, and so would be distortion with those waveform shapes.
So I am wondering if we should consider that peak levels of sine waveforms are a bit more detectable than it might otherwise appear, by some factor. Just for sine waveforms; other factors for other waveforms... a music factor?
Thx, Richard
It is all about finding the particular combination of strictly sinusoidal waveforms of different amplitudes whose sum recreates the waveform in question. This analysis does not care whether the waveform is electrical, or what equivalent DC would be needed to heat a conductor to the same temperature (i.e. the "average" value).
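The classic worked example of that decomposition is building a square wave from odd sine harmonics (a plain-Python sketch; the evaluation point and harmonic counts are arbitrary):

```python
import math

def square_from_harmonics(t, n_harmonics):
    """Fourier synthesis of a +/-1 square wave from odd sine harmonics,
    each odd harmonic k entering with amplitude 1/k."""
    return (4 / math.pi) * sum(math.sin((2 * i + 1) * t) / (2 * i + 1)
                               for i in range(n_harmonics))

# Evaluated mid-way through the positive half-cycle, the sum of pure sines
# closes in on the square wave's flat top at +1 as harmonics are added:
for n in (1, 10, 100):
    print(n, round(square_from_harmonics(math.pi / 2, n), 3))
# -> 1 1.273, 10 0.968, 100 0.997
```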
Boltzmann's constant can be thought of as a way to convert from the way we measure temperature (K) to the way nature measures temperature (J).
IIRC - the average energy per (quadratic) degree of freedom in a system = kT/2; what I find hard to understand is how the texts pull the number of degrees of freedom in a system out of their heads (at least in some cases it seems that way).
Scott, I always hated everything about gas theories, with their statistical-math apparatus to explain experimental data. I don't know what is wrong, but it does not look like really firm science. And Einstein said, "God does not play dice." 😉
trying to get closer to what the ear/brain does in response to sound is the subject of psychoacoustics - with anatomy, neural limits leading us to some incomplete views
the "filter bank" "place theory" seems to fit a large fraction of the psychoacoustic data - is the dominant model
the psychoacoustic model can loosely be approximated by doing overlapping short term fft on the sound pressure arriving at the eardrum - few? ms length, 24 "critical band" "frequency bins" covering 20-20 kHz
that doesn't mean every "audible feature" of a musical performance recording will stand out to the unaided eyeball in a full length fft of the time series SPL data
we aren't very good at knowing what to do with the complex phase data of the fft - often isn't shown
but psychoacoustic data compression algorithms do a great job largely based on this "filter bank" model - can throw out >75% of the Shannon-Hartley Channel Capacity of Redbook CD with few able to DBT the difference - and even then often on just a few selections of music
another fun factoid - these algorithms are only using 6-7 bits per critical band - implying we really don't resolve "features" in this "space" to better than ~1%
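The arithmetic behind that ~1% figure is just the step size of a 6-7 bit quantizer (a trivial Python sketch):

```python
# 2^bits quantization levels per critical band -> step size as a percentage
for bits in (6, 7):
    levels = 2 ** bits
    print(f"{bits} bits -> {levels} levels -> {100 / levels:.2f}% steps")
# 6 bits gives 64 levels (~1.6% steps); 7 bits gives 128 (~0.8% steps)
```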
RNMarsh said: "Ok. I'll try it another way ---- isn't the amplitude on the screen of the FFT display all in rms voltage value. .. music factor?"
You are right: changing the phase of the harmonics will change the envelope of the time-domain signal. You can create some interesting effects with this type of manipulation; there might have been some on that AES flexidisc (I don't recall). At the -100dB or lower level, I doubt these are audible.
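A small sketch of that phase effect (plain Python; the 0.3 third-harmonic amplitude is a made-up value for the demo): the two signals have identical harmonic amplitudes, so an FFT magnitude display shows them identically, yet their time-domain peaks differ.

```python
import math

n = 1000
third = 0.3   # made-up 3rd-harmonic amplitude

def waveform(phase3):
    """One cycle of a fundamental plus a 3rd harmonic at relative phase phase3."""
    return [math.sin(2 * math.pi * t / n)
            + third * math.sin(3 * 2 * math.pi * t / n + phase3)
            for t in range(n)]

a = waveform(0.0)       # 3rd harmonic in phase with the fundamental
b = waveform(math.pi)   # same amplitude, 3rd harmonic inverted

# Identical harmonic amplitudes -> identical FFT magnitude display,
# but the time-domain peaks differ (~0.92 vs ~1.30):
print(round(max(a), 2), round(max(b), 2))
```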
Scott, I always hated everything about gas theories, with their statistical-math apparatus to explain experimental data. I don't know what is wrong, but it does not look like really firm science. And Einstein said, "God does not play dice." 😉
Except that statistical mechanics gives the correct answers and accurate physical insights. You may not like it, but it's an excellent description.
Dick, don't forget phase.
Jcx,
this theory does not explain the significance of phase coherence of wideband sounds. As a result, people who religiously believe in this theory as a complete explanation of hearing phenomena deny the significance of phase coherence. Of course it may represent one of the mechanisms involved in sound (and pressure) perception, but it is incomplete.
Except that statistical mechanics gives the correct answers and accurate physical insights. You may not like it, but it's an excellent description.
Exactly. Like a wheelchair helps a lot when you don't know how to use your muscles.
Not at all the same. The statistical approach allows you to predict far more things far more accurately than the Rule of Dumb. It's the way nature works. No hidden variables, it's not a matter of not "knowing," it's just how the world is.
- John Curl's Blowtorch preamplifier part II