FFT + Multi-Tone Discussion

This will only work with common-mode noise - if the noise is series mode, a ferrite will unfortunately not help.
I get that about the common mode noise.

I think that they have relatively little effect in my experience for this kind of thing, because the impedance the clamp-on types add to the common-mode current path is relatively small. I suppose multiple turns through the same ferrite structure would be better, although that would also force whatever currents are passing through the shield to now pass on the other conductors as well. Maybe not what you want.
 
Of course, you can have a scenario where two frequencies fall really close together. If you use a short FFT length they'd be lumped together into the same bin, so you lose frequency resolution (but the FFT is faster and requires less memory). If resolving frequencies that are very close together is a concern, all you have to do is increase the FFT length (or decrease the sampling frequency). For example, if you use a "1M" FFT (1,048,576 samples = 1024 * 1024) at a 44.1 kHz sample rate, you get a frequency resolution of 42 mHz (= 0.042 Hz).
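
If you want to check the arithmetic, it's just sample rate divided by FFT length; a quick sketch (assuming 44.1 kHz sampling, which is what the 42 mHz figure implies):

```python
# Bin width (frequency resolution) is simply sample_rate / fft_length.
fs = 44_100  # sample rate in Hz (assumed; the 42 mHz figure implies 44.1 kHz)
for n in (8_192, 65_536, 1_048_576):  # "8k", "64k", "1M" FFTs
    print(f"{n:>9}-point FFT at {fs} Hz -> bin width = {fs / n:.4f} Hz")

# Two tones closer together than one bin width land in the same bin and
# cannot be resolved, no matter which window you use.
```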

Tom
I didn't use Gemini, but I did find this in the REW Help files:

FFT Length

The FFT Length determines the basic frequency resolution of the analyser, which is sample rate divided by FFT length. The shortest FFT is 8,192 (often abbreviated as 8k) which is also the length of the blocks of input data that are fed to the analyser. An 8k FFT has a frequency resolution of approximately 6Hz for data sampled at 48kHz. As the FFT length is increased the analyser starts to overlap its FFTs, calculating a new FFT for every block of input data. The degree of overlap is 50% for 16k, 75% for 32k, 87.5% for 64k and 93.75% for 128k. The overlap ensures that spectral details are not missed when a Window is applied to the data. The maximum overlap allowed can be limited using the Max Overlap control below to reduce processor loading at higher FFT lengths.

Relevant?? BTW, Multitone works the same way.
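
For what it's worth, the overlap percentages in that quote follow directly from the fixed 8k input block size; a quick sketch of my reading of it (not REW's actual code):

```python
# New input always arrives in 8,192-sample blocks, so an N-point FFT reuses
# the previous N - 8192 samples: overlap fraction = 1 - 8192 / N.
BLOCK = 8_192
for n in (8_192, 16_384, 32_768, 65_536, 131_072):
    print(f"{n // 1024:>4}k FFT: overlap = {1 - BLOCK / n:.2%}")
# -> 0%, 50%, 75%, 87.5%, 93.75%, matching the figures quoted above.
```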
 
SOTA DACs have surprisingly little out-of-band noise; it is incredibly well suppressed. Visible noise shaping was an issue years ago, but that is no longer the case.


If post #368 is a reply to post #366, then please keep in mind that I was referring to the spectrum of the digital signal entering the coarse DAC. Per the Gerzon-Craven noise shaping theorem, it has to have lots of out-of-band quantization noise. When you push that noise far enough out of band and have a good analogue reconstruction filter, it may not show up at the filter output.

If post #368 has nothing to do with post #366, then please ignore what I just wrote.
 
I want to circle back around to the basic idea of this thread.

I can see why professionals might want to jump for an Audio Precision system. Nobody will question the measurements. It adds general cred. It's all in one box, aside from the computer, so in an industrial setting there's less chance for screw-ups and error. The performance is generally state of the art. What's not to like? Aside from the price. And the software maintenance fees.

So, here is my very subjective opinion. (Anatech - flush this post if you see fit to.)

For the rest of us, we can make really great measurements for not a huge investment.

At one end, you have boxes like the Focusrite Scarlett Solo. Stand-alone, you're probably limited to measuring distortion down to around -100 dB or a little better. At the other end of cheap, you have the QA403 and the Cosmos boxes. All have free software available to run just about any test you like.

If you want to measure harmonic distortion beyond -120 to -135 dB, you probably need a notch filter. But, guess what! So does the Audio Precision system. It just happens that they have the notch filter built in. (FWIW, E1DA will sell you a notch filter already built for $79 plus whatever the shipping and tariff might be this week. You just need to do the alignment and supply your own box if you want one. The boxed, already-aligned unit costs not much more.)
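
Rough arithmetic for why the notch buys you that extra reach (the numbers below are purely illustrative, not the specs of any particular analyzer or notch):

```python
# A passive notch suppresses the fundamental but passes the harmonics, so
# the DUT's harmonics get a head start over the analyzer's own residual.
notch_depth_db = 40        # assumed suppression of the fundamental
analyzer_floor_dbc = -120  # assumed analyzer self-distortion/noise residual

effective_floor_dbc = analyzer_floor_dbc - notch_depth_db
print(f"Harmonics measurable down to roughly {effective_floor_dbc} dBc")
# -> about -160 dBc relative to the original fundamental, which is why a
#    notch is needed once you chase distortion below -120 to -135 dB.
```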

That brings me to my point. You can pretty much do as well in terms of test performance with less than $1000 of test equipment as you can with the AP. You just have to be creative and understand what you're doing. But, I'd argue that you should understand what you're doing to make good measurements with an AP and be creative to really test as many performance aspects as you can.
 
Hi CG,
Nope. You are on point. I won't flush posts unless they are disruptive. We have a mod team, not single cowboys. Subjective opinions have value if they are based on what is real and what is reasonable.

I happen to agree with you on all your points. I post as a member, that's all.

I am sure that AP is trading on their reputation more than anything else. The hardware shouldn't be that expensive, and the software sure as heck isn't.
 
I didn't use Gemini, but I did find this in the REW Help files:
Mark suggested I run a Google search and for once I found the AI result usable. So that's how I ended up with Gemini. 🙂 I was just citing my sources.

As the FFT length is increased the analyser starts to overlap its FFTs, calculating a new FFT for every block of input data.
I'm willing to bet that this overlap is done in the time domain. That does not mean that the FFT bins overlap or that spurious signals can hide between the bins.

As I understand it, the window function will attenuate the head and tail of the block of samples. That could attenuate a signal that you actually wanted to look at. So the way I read it, each FFT is done on N samples with a sliding window that advances in steps of N/x, where x is some small integer (so N is an integer multiple of the step size). That's my best guess based on the description. I'm sure John Mulcahy would explain if you ask.
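
Something like this is what I have in mind; it's a sketch of my guess, not REW's actual code:

```python
import numpy as np

# An N-point window slides over the input in hops of N // x samples, so the
# tapered edges of one frame fall in the untapered middle of another frame.
def overlapped_spectra(signal, n_fft=8192, x=4, fs=48_000):
    window = np.hanning(n_fft)
    hop = n_fft // x                      # x = 4 -> 75% overlap
    spectra = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        block = signal[start:start + n_fft] * window
        spectra.append(np.abs(np.fft.rfft(block)))
    return np.array(spectra), np.fft.rfftfreq(n_fft, d=1 / fs)
```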

Tom
 
I'm willing to bet that this overlap is done in the time domain. That does not mean that the FFT bins overlap or that spurious signals can hide between the bins.

I was focused on this part:

"The overlap ensures that spectral details are not missed when a Window is applied to the data."

Relevant??

In the end, it may not matter. With long FFT lengths, the bins are tiny. It wouldn't take much for everything to drift a bit. This is why I suggested the notion of saving the peak data. The idea is to catch sorta random events as well as to allow for drift over time.
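
A max-hold across the individual FFT frames is all I mean by saving the peak data; a minimal sketch (overlapped_spectra() is the hypothetical helper sketched a few posts back):

```python
import numpy as np

def max_hold(frames):
    """Peak-hold spectrum: frames is a 2-D array, one magnitude spectrum per row."""
    return np.max(frames, axis=0)

# Hypothetical usage with the earlier sketch:
#   frames, freqs = overlapped_spectra(signal)
#   peak_spectrum = max_hold(frames)   # catches slow drift and brief spurs
```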
 
I am sure that AP is trading on their reputation more than anything else. The hardware shouldn't be that expensive, and the software sure as heck isn't.
That could be, although development costs are considerable these days. Plus, the market isn't large, so the development cost and the rest need to be amortized over not so many units. If they dropped the price in half, would they sell twice as many units?

It's even worse with software. Just keeping up with operating system version changes can be a challenge.

I watched the software guys at my job suffer through this. The testing requirement was monumental. Not audio test software, though.
 
Don't forget, they don't design or build from scratch. The same can be said for Keysight, Tektronix, or anyone else. Software is built on previous drivers and user interfaces, mostly adding features.

The costs for test equipment have gone out of sight, and build quality has dropped badly. My experience with front-line support is disappointing, a real waste of everyone's time. Nope, there is no excuse for the costs in the test and measurement industry. On top of that, manufacturing has moved to cheap labour markets, and I do question quality control.

As far as debugging is concerned, you have fixed (known) hardware here. No surprises at all. You run into trouble when you get cute or fancy; this holds true for a large telecommunications system as much as it does for a piece of test equipment. That is an internal decision. As a user, you just want it to work and be simple to use, as long as it does the job. It had better not fail for the cost you're paying, and with the amount of past experience, there is zero excuse for early equipment failure. They know beforehand the most likely things that will die.
 
An eye opener is to look at the output of the DAC before the reconstruction filter with a wide-band spectrum analyzer. You need to go to at least 5 MHz, maybe higher with the latest DAC chips. The out-of-band spectral energy is quite large, and challenging for opamps to process. I have used LC prefilters, which seem to make a big difference even if the measured results are similar.

Bit error rate testing would be really interesting on an audio link: send pseudorandom data into the DAC, pass it through the analog link to an ADC, and see what the error rate is.
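
A toy sketch of the idea (it assumes the DAC-to-ADC loopback capture is already time-aligned and gain-matched, which is the hard part in a real setup):

```python
import numpy as np

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 100_000)      # pseudorandom data
tx_levels = 2.0 * tx_bits - 1.0            # map bits to +/-1 sample values

# Stand-in for DAC -> analog link -> ADC: additive noise only.
# Replace this line with the real aligned capture.
rx_levels = tx_levels + rng.normal(0, 0.3, tx_levels.size)

rx_bits = (rx_levels > 0).astype(int)      # hard-decision slicer
print(f"Bit error rate: {np.mean(rx_bits != tx_bits):.2e}")
```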

However, while perfection here is a valid and desirable goal, the reality of the sound field in a room may require substantial changes for the result to be perceived as an accurate reproduction of the experience (even though few have ever heard both the original session and the reproduction of it). I think this is one aspect that makes it so challenging to pin down audio standards for what counts as good.

When I said there are thresholds where improvement in performance will make no change, I was thinking of physical noise limits (e.g., a moving-magnet cartridge has a thermal noise floor, making lower-noise electronics moot) or distortion products well below the physical ability of transducers to respond.

If you know how to make something better, do it. But don't just fix what you can test easily.

Years ago I was shown a test setup that excited the incoming power with a swept signal while a tracking spectrum analyzer looked for it in the audio output. I never got to actually use it. However, I have accumulated what would be needed to duplicate the setup, except for the energy to set it all up. It would be interesting to quantify the actual noise resistance of various audio products.
 
I would offer that the simple reason for buying professional measurement tools is precisely that they are known and characterised entities.

I've not done any professional audio work, but I have worked professionally in optics and photonics, where lasers and photon detectors share similar measurement issues. Although each individual piece of professional equipment needs further characterisation as part of the experiment, the key is that they at least meet the detailed data sheet specs consistently from device to device - you're buying a known reference that your customers recognise.
 
Hi Mark,
.........
We know all about audio rectification and everything else. Now you have derailed the discussion from audio analyzers into a focused attempt to debunk the value of measuring things. Oddly enough, you are referring to these measurements, meaning they show up.
That is exactly his only goal, "a focused attempt to debunk the value of measuring things". So we have another pointless thread, hand in hand with the former Blowtorch threads, Bybee's, etc. You have the power to change it, Chris.
 
The nearest competitor to AP was (past tense) R+S. They stopped production of all AAs around 3 years ago and I very much doubt we'll see them back in the audio game any time soon. AP has a monopoly, in that you can show screenshots and values via their hardware, in the knowledge that they are produced by machines that are "inherently calibrated".

I suffer from G.A.S. when it comes to test gear (I have way too much of it). The 555 gave me no G.A.S. (a replacement for the UPV would). Our UPVs are used every day, typically for hours at a time. The 555 gets switched on, on very rare occasions when ultimate THD+N is needed (we aren't in the DAC or class D power amp arms race, so that is a blue moon) or 1 MHz FFT (that is a useful function, as typical RF analysers don't have its low noise floor). Again, this is a rare example as one hopes to be in a place where what's happening above 100-or-so kHz is not of importance (the UPV goes to 250 kHz and in the time I've had the 555 I can't think of an example where the 250 kHz-1 MHz range told me something that I didn't already know...but I live in hope that I'll find a use for it, other than telling customers that we have it).

Oh - and there's the fan. I "choose" not to ignore it on a machine that cost £35K (£42K inc. tax). I have a selection of Papst fans that are silent, but powerful enough to chop your fingers off (and they were made decades ago). I have no idea why AP would fit a junk fan on such a high-end machine...

edit - I do know why AP would have fitted that fan: they'll be able to fit a quiet one and call it 'Rev C'. You'll have to pay thousands to get it. Maybe they'll throw in standalone THD readings at the same time, instead of lumping everything together as THD+N?
 
That does not mean that the FFT bins overlap or that spurious signals can hide between the bins.
An FFT is a decomposition (or a change of basis) of a signal. Instead of representing the signal as a set of sample amplitudes at sample times, it is represented as a set of sine wave magnitudes and phases at the FFT bin centre frequencies. The RTA spectrum shows the magnitude components of those tones. If the input signal is at an FFT bin centre frequency it will appear in a single bin, as it only needs a single frequency to represent it. If it is not at a bin centre frequency it will end up represented as combinations of nearby bin frequencies; how far that tone spreads is influenced by the window choice, as an underlying feature of the FFT is that it treats the input as periodic over the FFT length. When it isn't (i.e. the input has frequencies that are not at bin centres), the window helps deal with the discontinuities that would occur at the block boundaries by tapering the data away there.
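
A quick numerical illustration of that spreading (my own sketch, nothing to do with REW's internals):

```python
import numpy as np

fs, n = 48_000, 8_192
t = np.arange(n) / fs
bin_width = fs / n                                   # ~5.86 Hz

for f in (1_000 * bin_width, 1_000.5 * bin_width):   # on-bin vs. half a bin off
    for name, w in (("rect", np.ones(n)), ("hann", np.hanning(n))):
        spec = np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t) * w))
        spec /= spec.max()
        occupied = np.sum(spec > 1e-3)               # bins above -60 dB
        print(f"f = {f:8.2f} Hz, {name:4s} window: {occupied:4d} bins above -60 dB")

# The on-bin tone needs one bin (three with the Hann taper); the off-bin tone
# spreads across many bins with no window and far fewer with the Hann window.
```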
 
An eye opener is to look at the output of the DAC before the reconstruction filter with a wide-band spectrum analyzer. You need to go to at least 5 MHz, maybe higher with the latest DAC chips. The out-of-band spectral energy is quite large, and challenging for opamps to process. I have used LC prefilters, which seem to make a big difference even if the measured results are similar.

I would expect it to be messy with lots of HF stuff. Many DAC>opamp circuits feature filters and/or bandwidth limiting capacitors in the opamp feedback path for this reason. D-S DACs have a lot of switching, digital filtering, decimation, etc. going on. Luckily a lot of this stuff is far above the audio band.
 
That is exactly his only goal, "a focused attempt to debunk the value of measuring things". So we have another pointless thread, hand in hand with the former Blowtorch threads, Bybee's, etc. You have the power to change it, Chris.
You couldn't be more wrong. In fact, I have posted links here for people to go and read your posts at ASR on measuring RF sensitivity in opamps such as the LM4562. I have posted links to bohrok2610's posts on measuring noise skirts with high-resolution FFTs. I have nothing against measurement, nothing at all.

What I am trying to do is raise awareness of what commonly used measurement instruments can and cannot measure well, and how what we can most easily measure does and does not correlate with how humans hear.

IOW, I am trying to educate people who are willing to think more deeply about the limits of today's common measurements. If people can't understand that, then we are like the men who came before us in history and made claims like, "everything that can be invented already has been invented." The only difference is that we are doing that exact same type of thinking with FFT measurements in the field of audio. Too many people think an FFT measures everything that can possibly matter, or that it measures everything that is really important.

If you want to know what I am really trying to do, it's the same thing @1audio is trying to get you to understand in #390. The next goal in audio is to reproduce the experience of being in a listening space. An FFT analyzer such as is in wide use today is inadequate to that task, at least in its current form. That doesn't mean I want you to stop measuring. It means I want you to understand the limitations of today's FFT analyzers, and to understand that over-reliance on them, or over-focus on them as if they were all there is, interferes with further advances in audio reproduction. Maybe 1audio explains it more pleasantly than I do. Or maybe he is so nice about it that it makes it easy for most people to ignore him. The jury is still out on that.

EDIT: All this talk has also got me thinking about my many debates and arguments with Syn08 in the past. IIRC, one of his arguments was that getting people in an audio forum like this to understand FFTs and to take them as the holy grail of audio measurements was justified because you couldn't expect amateurs to understand anything more advanced than that. Trying to get them to understand Hilbert transforms or other more advanced ideas would only confuse them; then they would go back to wasting money on audio devices that were expensive snake oil. So better to keep the measurement masses ignorant in order to protect them from themselves.

Well, is that what you want? To be protected from yourselves because you are incapable of understanding more complex ideas than simple FFTs with PSS signals? Personally, I found that type of thinking insulting to everyone in this forum, but since then I have come to understand Syn08 was not entirely wrong about it.
 
Many DAC>opamp circuits feature filters and/or bandwidth limiting capacitors in the opamp feedback path for this reason.
IIRC, @MarcelvdG and ThorstenL had a discussion about that very matter in Marcel's RTZ DAC thread. What they talked about was some of the noise being so far above what an opamp can effectively handle with feedback that some RF would go around the opamp through the feedback cap and be partially absorbed by the open-loop output stage impedance of the opamp. Anything not absorbed there would not be filtered at that point.
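
To put rough numbers on that picture (all component values below are my assumptions for illustration, not anything from Marcel's thread):

```python
import numpy as np

# Far above the opamp's bandwidth the loop gain is ~0, and the feedback cap
# is a lower impedance than the feedback resistor, so DAC RF current flows
# through the cap and develops a voltage across the opamp's open-loop output
# impedance instead of being servoed away by feedback.
r_fb = 1_000.0                 # assumed feedback resistor, ohms
c_fb = 330e-12                 # assumed feedback capacitor, farads
r_out_ol = 50.0                # assumed open-loop output impedance, ohms
i_rf = 1e-3                    # assume 1 mA of RF current from the DAC

for f in (1e6, 5e6, 20e6):
    z_cap = 1 / (2 * np.pi * f * c_fb)
    print(f"{f/1e6:5.1f} MHz: |Z_cap| = {z_cap:6.1f} ohm (vs. R_fb = {r_fb:.0f} ohm)")

print(f"-> roughly {i_rf * r_out_ol * 1e3:.0f} mV of RF left sitting on the output node")
```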

EDIT: Maybe some of the above dovetails with what 1audio said about LC filtering before the opamp in #390? Measurements may look much the same, didn't he more or less say, but it helps some anyway. Helps with what, then, you might ask. If nothing much shows on an FFT, there can't be any benefit, so it must not help with anything? Or maybe it's that not everything that matters shows up well on an FFT? My bet would be on the latter.
 
Hi Mark,
Okay, I can agree with you to a point. Often I try to illustrate the limitations of real test equipment applied to the task at hand. At no point do I attempt to debunk the entire concept, nor do I drag in factors that can't be controlled.

Case in point. Pointing to a system in a listening environment (totally specific case, no standards) and how a person might hear it. This is a case that you cannot test, and if you do, it applies to one specific case and time. Useless to the rest of the world. If you then point out an exception to what is being said on this, no one can replicate your findings. This kind of thing is only disruptive, nothing more. Other valuable points are then lost as people argue with you. That has got to stop; it doesn't further anything you're trying to get across either. Now as soon as you attempt to bring in psychology, you are way off topic and firmly in undefined, uncontrollable areas. I think at that point it's time to delete that entire post, you know better.

The entire premise of a test is to control all factors and measure the result under standard test conditions. This is then reproducible by anyone with comparable equipment and resources. When we calibrate any equipment, we use traceable artifacts or primary standards. This keeps everyone on the same page. Everything is controlled, and tests are done in a prescribed manner. No one gets creative and deviates from procedure. The same holds true for audio tests. A scientific method.

Nothing says you can't devise a new test, but it has to place the equipment under test under realistic conditions using realistic signals. Otherwise the results are not valid for normal use.
 
Hi Demian,
Agreed. Present any op amp with signals beyond its range and you end up with non-linear behavior. DAC filters are a prime case in point. The HF energy can be well beyond the ability of that op amp to pass, so it can't follow the signal. As you've found, paying attention to that first filter stage can make a big difference.

The test you saw is a valuable troubleshooting technique. Monitor the output and lock the trigger to an aggressor signal, injected at various points. Power supply being one. Used mostly in developing a circuit design, also on the service bench to find issues.

As for digital accuracy, years ago there were things called the C1 and C2 error flags in CD players and DAT machines (DSP chips - systems). We used to monitor those flags to fine-tune mechanisms for the lowest error rate. Digital transmission has fewer impairments, but it is a serial (SIP) type transmission without error control. Media normally has error control information plus the repetition of information that can be used to correct the data. The C2 flag was for uncorrectable errors. Long ago they made these flags inaccessible. The original name for the DSP chip was "error detection, correction and concealment". They dropped the "concealment" term later. That function has not changed.
 
Case in point. Pointing to a system in a listening environment (totally specific case, no standards) and how a person might hear it. This is a case that you cannot test, and if you do, it applies to one specific case and time. Useless to the rest of the world. If you then point out an exception to what is being said on this, no one can replicate your findings. This kind of thing is only disruptive, nothing more.
Hi Chris,

With all due respect, I don't think you are being quite fair. Nobody PM'ed or asked in the open forum what equipment I would recommend. Similarly, I would be surprised if you were to say that anyone asked you for a list of all the tests you know how to do, all the test equipment they would need to buy to replicate what you can do, how they should set up their test environment, etc., so they can replicate what you do and what you claim. Plain logic would then suggest that what you say and do is, as a practical matter, just as disruptive and unreproducible as what I might claim. The only difference is theoretical. If someone wanted to replicate your test setup, it might be more straightforward than trying to replicate my listening setup, mainly because of differences in rooms (unless maybe you have a custom-built shielded cage or something like that).

Regarding the psychology thing, I think it was largely an experiment, and it has largely failed to produce any good results. Long before I came to this forum there was already a tradition of referring to at least one psychological effect: Dunning-Kruger. And long before I got here, Earl Geddes had already used the "streetlight effect" in one of his AES papers. What I did when I got here was update the list of known psychological biases, including linking to the list at Wikipedia. I also objected to the misuse (before I got here) of the term "expectation bias" by psychoacoustics pioneers such as James Johnston. All of the above people and history certainly precede anything I have said.

Anyway, so far I haven't found that bringing up psychological effects has been very useful. In fact, rather the opposite has happened; now over at that other website where SINAD is king, they use known psychological bias effects to explain away people who don't agree with them. However, I still think in small ways it might be useful to talk about measured human biases now and then. After all, it is based on measured data, and in the form of models it does bear some resemblance to how humans actually take shortcuts in reasoning. Dunno though if on average it just aggravates people or if it is a good way to get them thinking more carefully. At this point I'm kind of leaning towards the former assessment.

Mark