AI And Amp Voicing

I work in IT. While I have no formal experience with AI, I understand its ability to quickly comb through massive, similar data sets to find patterns.

This evening, I watched a couple of YouTube interviews with Papa. Their content got me reflecting on the various pieces he's written on amplifier voicing. In particular, I enjoyed the explanation of negative-phase 2nd-harmonic distortion that accompanied the Korg NuTube B1 article.

I've built the Aleph-J, F6, M2, and F2J (in that order), and have become familiar with their different characters (though I wish I had a wider variety of speakers to sample them with). I imagine Mr Pass spent a considerable amount of time developing each of these characters.

I began to wonder if AI could help us better understand which amplification artifacts (or lack thereof) are going to sound "better". I'm not sure what the data sets would be. Perhaps a combination of topologies, measurements, reviews, and sales figures?
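To make the idea a little more concrete, here's a rough sketch of the kind of pattern-finding I have in mind. Every amplifier profile, feature value, and listener rating in it is invented purely for illustration; a real data set would have to come from actual measurements and controlled listening tests.

```python
# Toy sketch: correlate per-amplifier measurements with listener ratings.
# All amplifier profiles, feature values, and ratings below are made up
# just to show the shape of the analysis; nothing here is real data.

import numpy as np

# Hypothetical feature set for a handful of amps:
# [2nd harmonic level (dB rel. fundamental), 3rd harmonic level (dB),
#  2nd harmonic phase (degrees), damping factor]
features = np.array([
    [-50.0, -70.0, 180.0,  20.0],   # "Aleph-J"-like profile (made up)
    [-60.0, -65.0,   0.0,  40.0],   # "F6"-like profile (made up)
    [-55.0, -75.0, 180.0,  15.0],   # "M2"-like profile (made up)
    [-45.0, -60.0,   0.0,  30.0],   # "F2J"-like profile (made up)
    [-80.0, -90.0,   0.0, 200.0],   # generic high-feedback amp (made up)
])
feature_names = ["H2 level", "H3 level", "H2 phase", "damping factor"]

# Hypothetical averaged listener preference scores (1-10), also invented.
ratings = np.array([8.5, 7.8, 9.0, 7.5, 6.0])

# Simple first pass: Pearson correlation of each feature with the ratings.
# A real study would need far more amps and a proper model, but the
# pattern-finding idea is the same.
for i, name in enumerate(feature_names):
    r = np.corrcoef(features[:, i], ratings)[0, 1]
    print(f"{name:>15}: correlation with preference = {r:+.2f}")
```

With enough real data points, you could swap the simple correlation for a proper model and start asking which measurable traits actually track with what listeners prefer.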

While we can't yet measure the processing happening between our ears, maybe we can analyze more data to better understand why some of what makes it through is more pleasing than the rest. Or maybe it's just too subjective. My nirvana is an M2x/Ishikawa driven by a Korg B1. Someone else's might not be. But perhaps there are similarities between the two that represent an important data point we haven't yet paid attention to.

In short, I wonder whether we'll ever reach the point where measurements could conclusively indicate how pleasing most people will find a particular amplifier in a given application.

Just my musings, anyway.