Scaling voltage waveforms for distortion analysis

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I am using Igor Pro from WaveMetrics and my sound card to do some basic measurements of amps and speakers.

I would like to implement basic distortion analysis.

I can obtain time-aligned input and output voltage waveforms.

I would like to scale the output waveform to the input waveform so as to minimize the difference between them, and then subtract one from the other to give a distortion measurement.

My question is: what method(s) can be used to find the scale factor that minimizes the difference between two waveforms?
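For reference, one standard approach (an assumption on my part, not necessarily what any particular analyzer uses) is a least-squares fit: the scale factor minimizing the squared difference has a simple closed form, and the residual after subtraction is the distortion estimate. A minimal sketch with made-up example signals:

```python
import numpy as np

# Hypothetical test signals: "input" x, and "output" y that is a scaled
# copy of x plus a small added distortion component.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)
y = 2.0 * x + 0.01 * np.sin(2 * np.pi * 15 * t)

# Least-squares scale: minimize ||y - a*x||^2 over a.
# Closed form: a = <x, y> / <x, x>.
a = np.dot(x, y) / np.dot(x, x)

# What remains after subtracting the scaled input is the distortion residual.
residual = y - a * x

# A distortion figure: RMS of the residual relative to RMS of the
# scaled fundamental (here about 0.005, i.e. 0.5%).
distortion_ratio = np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean((a * x) ** 2))
print(a, distortion_ratio)
```

The same closed form follows from setting the derivative of the squared error to zero, so no iterative fitting is needed.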

Thanks.
 
I think I need to add some info to clarify my question.

The input and output waveforms are frequency sweeps. It would be much easier to find the best-fit scaling at a single frequency, but I am interested in obtaining a THD measurement from a single sweep across the audible spectrum.

Also, how do distortion analyzers handle phase shifts? Are phase shifts considered part of the distortion, or are the waveforms phase-aligned before the distortion analysis?
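For what it's worth, a common way to remove a pure delay before comparing waveforms (again an assumption, not a claim about how commercial analyzers work) is cross-correlation: the lag at the correlation peak estimates the delay, which can then be undone. A sketch with a constructed example:

```python
import numpy as np

# Hypothetical example: y is x delayed (circularly) by 7 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.roll(x, 7)

# Full cross-correlation of y against x; the index of the peak,
# re-centered on zero lag, estimates the delay in samples.
corr = np.correlate(y, x, mode="full")
lag = np.argmax(corr) - (len(x) - 1)
print(lag)  # expect 7 for this constructed example

# Undo the delay before subtracting the waveforms.
y_aligned = np.roll(y, -lag)
```

This only corrects a constant time shift; a frequency-dependent phase response (as a real amp or speaker has) would still show up in the residual unless it is modeled separately.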
 