why don't we use predistortion more in the audio world?

People who don't understand something will sometimes claim that it is beyond conventional analysis. That way they can either feel more comfortable in their ignorance by believing that nobody else understands it either, or they can leave the field supposedly open for their unconventional 'explanation'.

I believe you, but... presumably you will come down just as hard on anyone who still thinks that vinyl and valves are, in some mysterious way, better than digital and solid state despite all the 'conventional' evidence to the contrary? On second thoughts, probably best to let that one lie..!
 
try clicking a link? reading the whole post?

Perhaps we could shift our attention from amplifiers to speakers, then? We know about (linear) room correction and so on, but can we gain anything from nonlinear correction of speakers using pre-distortion?


...
dynamic speakers are physically performance-limited "plants"; sensing adds expense, and economical sensors have accuracy and noise limits - so only limited negative feedback can be applied, and that really only for subs/woofers


for lots of info on dynamic speaker driver distortions, measurements and design, cruise the Klippel site.

specifically for a summary view of loudspeaker distortion correction try:

http://cogsys.imm.dtu.dk/nonlincomp/Klippel.pdf

and it's often worthwhile poking around the sites Google directs you to, to see if related material is in the higher-level directories:

Symposium on Nonlinear Compensation of Loudspeakers
 
CopperTop said:
presumably you will come down just as hard on anyone who still thinks that vinyl and valves are, in some mysterious way, better than digital and solid state despite all the 'conventional' evidence to the contrary?
I normally stay out of that debate! I use both CD and LP, and recognise that they both sound nice but different. I am cautious about dogmatic assertions by either side, as the most dogmatic people tend to be the least knowledgeable. Note that there we are talking about a preference, quite different from the faithful amplification of a voltage signal (which is all an amp is supposed to do).

tsiros said:
"predistortion" is used in hifis.

it's called "equalising"
Predistortion and equalisation are quite different, although the former may need to incorporate some of the latter. Equalisation is undoing the effect of a fixed linear filter. Predistortion is undoing the effect of nonlinearity, which may include signal-dependent filters.

Equalisation is relatively easy. Predistortion is harder, which is why it is generally only used when NFB cannot be used.
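
For what it's worth, here is a tiny numerical sketch of that distinction (entirely my own illustration: the one-pole roll-off and the tanh-style compression are made-up stand-ins for the linear and nonlinear parts of a system). The equaliser applies a fixed inverse filter; the predistorter pre-warps each sample through the inverse of the nonlinearity.

```python
# Toy sketch only: a made-up one-pole roll-off stands in for "a fixed linear
# filter" and a made-up tanh curve stands in for "a nonlinearity".
import numpy as np
from scipy.signal import lfilter

a = 0.3  # pole of the pretend treble roll-off

def system_linear(x):
    # y[n] = (1 - a) x[n] + a y[n-1]  -- the fixed linear filter to be equalised
    return lfilter([1 - a], [1, -a], x)

def equalise(x):
    # Exact inverse of the filter above: swap numerator and denominator.
    return lfilter([1, -a], [1 - a], x)

def system_nonlinear(x):
    # The pretend output-stage compression: a static nonlinearity.
    return np.tanh(x)

def predistort(x):
    # Pre-warp through the inverse nonlinearity so tanh(predistort(x)) ~= x.
    return np.arctanh(np.clip(x, -0.999, 0.999))  # arctanh only exists on (-1, 1)

t = np.linspace(0, 1, 1000)
x = 0.8 * np.sin(2 * np.pi * 5 * t)

print(np.max(np.abs(system_linear(equalise(x)) - x)))       # ~0: EQ undoes the filter
print(np.max(np.abs(system_nonlinear(predistort(x)) - x)))  # ~0: predistortion undoes tanh
```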
 
the only good thing about lp is that, being analog, it doesn't require any decoding.

everything else (freq resp, noise floor, dynamic range, stereo separation) is worse when compared even to cd, let alone higher quality digital sources.

yes, it sounds different, but it is one of the very rare cases where i will insist that it is worse (whereas, for example, i will say that tubes sound different but not necessarily worse than ss amps)
 
I normally stay out of that debate! I use both CD and LP, and recognise that they both sound nice but different.

Hi DF96

I can't help but notice a contradiction between your earlier dismissal of things that 'defy conventional analysis', and your use of LP. If everything is "just algebra" and open to mathematical analysis, then you should be able to 'pre-distort' (you see I'm staying on-topic!) your digital audio to sound just like an LP with none of the drawbacks, or to transfer your vinyl to digital format and perform whatever rigorous and non-hand-waving processing needs to be done to keep it sounding like vinyl. (Simply recording it at 44.1/16 seems to work fine for me!)
 
no, I just play with amplifier design - not because it is the limiting factor in audio but because that's my expertise after decades of employment designing electronics for scientific/industrial instruments

have done some motion control: designed the circuits, coded the DSP, dissed the Mech E selection of actuators, pointed out better spots for the sensors given the product performance requirements – which is why I have a little more general knowledge of Control Theory as a cross-discipline subject, with details and methods other EEs and analog circuit designers don't often think about
 
Why would I want to make either one sound like the other? I also listen to FM radio - should I add a 19kHz pilot tone to my CD or put a 15kHz sharp cutoff filter on LP?

Fair enough. I'm so used to people around here claiming that LP is superior to digital for some as yet undefined reason that I forget that some people may just like to play it for nostalgia reasons, or because they like the ceremony, or the smell of the vinyl, or the 12" artwork.

But of course you could analyse why they both sound nice but different, using conventional analysis..? And transfer vinyl to digital transparently..? If you wanted to..? Just checking!
 
nfb does everything you would need to do in order to calculate a predistortion signal, and applies it. The only difference is delay, and incomplete cancellation due to limited gain. What you might calculate by successive approximation, nfb does continuously in search of equilibrium.
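
Roughly, in toy form (my own sketch; f() is an arbitrary soft-compression curve, not any particular amplifier): the predistorter solves f(u) = x by successive approximation ahead of time, while the feedback model lets the error term steer u continuously, and both settle on the same equilibrium.

```python
# Toy comparison only; f() is an arbitrary soft-compression curve.
def f(u):
    return u - 0.1 * u**3          # the "plant" to be corrected

def predistort_sample(x, iters=30):
    # Offline successive approximation: nudge u until f(u) matches the target x.
    u = x
    for _ in range(iters):
        u += x - f(u)              # step by the remaining error
    return u

def feedback_sample(x, loop_gain=200.0, dt=1e-4, steps=5000):
    # Crude discrete-time model of a feedback loop: the error (x - f(u))
    # continuously steers u toward the same equilibrium f(u) = x.
    u = 0.0
    for _ in range(steps):
        u += dt * loop_gain * (x - f(u))
    return u

x = 0.9
u_pre, u_fb = predistort_sample(x), feedback_sample(x)
print(u_pre, u_fb, f(u_pre), f(u_fb))   # both u's agree, and f(u) ~= 0.9
```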

Perhaps you could put your zero-feedback amp into the feedback loop of a second, zero distortion, amplifier. Record your music in this configuration, and then play it back through your amp as usual. This might illustrate the point that feedback is an inevitable part of error correction.

However it was done, some problems of nfb would remain, such as a tendency to aggravate clipping, unless some intelligence was added.

Surely it would be better to add the same intelligence to an ordinary feedback loop? Doesn't the OP's question amount to "can nfb be improved by intelligent processing of the feedback signal?" Presumably in real time.
 
If you really want to get aggressive with pre-distortion, do it totally in the digital domain, offline, and fully end to end. That is, modify the digital data to create a "new" recording, a Dynagroove sort of thing, and monitor the quality only by a mic in front of the speaker. Get a particularly difficult recording, get software clever enough to drive the system harder and harder at the worst bits, repetitively, until it works out general rules of thumb to use. Then apply those rules of thumb to a full recording, record the output at different volumes, and fine-tune by going around the loop any number of times in slow, human time, until you effectively create a completely new set of recordings that give a mighty close match to the original recording, at a reasonable number of volume levels, from the point of view of what the mic registers. Painful, yes, but no reason why something like this can't get cleverer and cleverer at working it all out ...

Frank
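
A bare-bones sketch of that offline loop, under heavy assumptions: play_and_record() is a made-up placeholder for whatever hardware path plays a candidate through the speaker and captures the mic, and the simple error-feedback update is just one naive way to nudge the "new" recording toward matching the original at the mic.

```python
import numpy as np

def play_and_record(candidate, level_db):
    # Hypothetical placeholder: play `candidate` through the full chain at
    # `level_db` and return the time-aligned microphone capture.
    raise NotImplementedError("wire this to your own playback/mic path")

def refine_recording(original, level_db, passes=10, step=0.5):
    candidate = original.copy()                # start from the untouched recording
    for _ in range(passes):                    # slow, human-time iteration
        captured = play_and_record(candidate, level_db)
        error = original - captured            # what the mic got wrong this pass
        candidate = candidate + step * error   # pre-compensate for the next pass
    return candidate                           # the "new" recording for this level

# One corrected version per playback level, e.g.:
# corrected_85dB = refine_recording(track, level_db=85)
```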
 
Assuming that the necessary output is within the capability of the equipment, is all distortion correctable, in principle, by modifying the input? Are there categories of correctability, perhaps defined somewhere in systems theory?

What about room reflections, for example? Is anti-reverb possible?

Also, are there sounds that my room cannot support, regardless of how I try to generate them?
 
monitor the quality only by mic in front of speaker
This makes no sense, as the environment causes distortions of much bigger magnitude than the rest of the system. The (pair of) microphones should be at positions identical to those of the microphones where the original (hopefully live) recording was made--and hopefully those were separated by the average width of a human head. If the original recording was not made in such a way, then there's already a level of unrealism introduced which probably can only be fixed by getting a proper recording.
 
This makes no sense, as the environment causes distortions of much bigger magnitude than the rest of the system. The (pair of) microphones should be at positions identical to those of the microphones where the original (hopefully live) recording was made--and hopefully those were separated by the average width of a human head. If the original recording was not made in such a way, then there's already a level of unrealism introduced which probably can only be fixed by getting a proper recording.
No, I mean in front as in right in front, only a few inches away from the drivers: the direct sound is far, far greater in amplitude than any reflections. These days you can get microphones capable of handling above 140dB sound levels without going bad -- here you're worrying about precisely what the air vibrations at the source are doing ...

Frank
 
Actually, Audyssey works this way. It asks you to put a calibrated microphone in 6 different listening positions, measures the frequency response in each position, decides what is caused by the speakers and what by the microphone positions, and adjusts each channel for phase and frequency response. It stores this data in EEPROM, and loads it each time you boot it again.

Pretty smart software, and works pretty well.
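
As a much-simplified sketch of the multi-position idea (not Audyssey's actual algorithm, which is proprietary): average the magnitude responses measured at the different seats, then build a boost-limited inverse correction from that average.

```python
# Simplified illustration only: plain magnitude averaging and a boost-limited
# inverse, not the real (proprietary) algorithm.
import numpy as np

def average_response(impulse_responses, nfft=4096):
    # impulse_responses: one measured IR per microphone position.
    mags = [np.abs(np.fft.rfft(ir, nfft)) for ir in impulse_responses]
    return np.mean(mags, axis=0)                   # what the positions share

def correction_filter(avg_mag, max_boost_db=6.0):
    target = np.median(avg_mag)                    # aim for flat at the median level
    gain = target / np.maximum(avg_mag, 1e-9)
    gain = np.minimum(gain, 10 ** (max_boost_db / 20))   # don't over-drive dips
    kernel = np.fft.irfft(gain)                    # zero-phase correction kernel
    return np.fft.fftshift(kernel)

# e.g. with six sweep-derived impulse responses, one per listening position:
# eq = correction_filter(average_response([ir1, ir2, ir3, ir4, ir5, ir6]))
```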
 
here you're worrying about precisely what the air vibrations at the source are doing ...
You're arbitrarily omitting a significant path of the signal--that between the speakers and the ears (which the microphones of the original recording are stand-ins for--because you can't be there with your ears listening, they put microphones in your place so you can listen to it later).
 
Actually, Audyssey works this way. It asks you to put a calibrated microphone in 6 different listening positions, measures the frequency response in each position, decides what is caused by the speakers and what by the microphone positions, and adjusts each channel for phase and frequency response. It stores this data in EEPROM, and loads it each time you boot it again.

Pretty smart software, and works pretty well.
The problem is you're trying to do a best-fit minimization (akin to least squares) over multiple positions, which in essence means you're trading off full minimization of distortion in one specific spot in order to expand the area within which the listener may be positioned while getting sort-of-OK results. Of course, this is not a knock against Audyssey; it's a problem with all speaker systems versus the use of headphones and binaural recordings. The only way around it using speaker systems and listeners that are not in a specific fixed location is active user tracking, similar to advanced 3D displays that track the viewer's position and render the correct view for that particular viewer. It's quite difficult to implement, but one would be able to get away with using only two speakers and still get fully positional 3D audio. Such a system would need a sufficiently finely sampled set of HRTF measurements of the listening space (think of moving the microphones around and making measurements in a grid pattern) so that interpolation between the grid points gives sufficiently precise HRTF coefficients for the DSP to do its predistortion + crosstalk cancellation + inverse HRTF convolution while maintaining good 3D results.
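
The interpolation step might look something like the sketch below (everything here is invented for illustration: the grid layout, the head-tracker input and the filter lengths; the crosstalk-cancellation and inverse-HRTF stages are omitted). Given FIR filters measured on a grid of listener positions, bilinearly interpolate a filter for the tracked position and convolve.

```python
# Illustration only: grid, coordinates and filters are invented; crosstalk
# cancellation and the inverse-HRTF stage are left out.
import numpy as np

def interpolate_hrtf(grid, x, y):
    # grid[i][j] is the FIR measured at grid position (i, j); x, y are the
    # tracked listener coordinates in grid units.
    i, j = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * grid[i][j]
            + fx * (1 - fy) * grid[i + 1][j]
            + (1 - fx) * fy * grid[i][j + 1]
            + fx * fy * grid[i + 1][j + 1])

def render_block(audio_block, grid, tracked_xy):
    h = interpolate_hrtf(grid, *tracked_xy)   # filter for the current head position
    return np.convolve(audio_block, h)        # one such path per ear/speaker pair

# grid would hold the measured FIRs; tracked_xy comes from the head tracker.
```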
 