Is it possible to cover the whole spectrum, with high SPL and low distortion, using a 2-way?

The mere fact that a lot of papers are written on the basis of a smallish non random sample exactly goes to my point.
Their value IS LIMITED because results cannot be extrapolated uniformly to a much larger population.
Ehm, that is kind of a weird general statement.
Depending on how the experiments are done, some most definitely can be extrapolated to a much bigger population.

In this case we would start another brainstorm session/thought experiment by asking what attributes the average human being has compared to trained listening experts. That approach will bring us much further than criticizing without being constructive.

My criticism is therefore based on the fundamental limitations of the statistical model. This is what it is.
There are always limitations. In any field of science.
Yet we can get extremely precise and accurate results.

Questioning results and evidence based criticism is, after all, what science is about.
Science is also about understanding orders of magnitude and importance in context.
Which (very often) can also be investigated even with a very limited experiment.
Just because you weren't able to investigate all possible options does absolutely NOT make the conclusion or the experiment invalid. Nor the general conclusions drawn from those experiments.

That being said, care has to be taken to read very carefully how a certain experiment was done and what conclusions were drawn. That's not always objective and legitimate, I totally agree on that.

As you no doubt remember, Toole points out the 'experts' were frequently wrong and performed significantly worse in double blind testing than random students picked for the tests.
I have read that, and that is kind of a strange statement.
There aren't that many variables here: either it's how the experiment was done, or the age of the people involved.
But it just doesn't make any sense why a trained expert would perform worse than an average Joe.
If that was true, we could quite literally ditch the whole idea of being an expert by itself.
The question here would not be IF those "experts" were performing worse, but WHY they were performing worse and why random students would perform better. And most important of all, how big that difference is.

My general opinion is that Toole goes over this far too quickly and superficially.
I am not saying he isn't right, but I am saying he also doesn't come with any proper answers either.

I was trying to make a clear point which I never anticipated would be contentious, my apologies for the OT.
The point wasn't clear, and it was debatable from the beginning.
In general these kind of subjects are always open for discussion, that's the nature of them.
Which is important, because it keeps us fresh and with an open mind.
It's sometimes very easy to fall into judgements quickly.

After so many pages I legit don't know what off or on-topic is inside this topic anymore to be very honest.
There seems to be zero correlation.
 
Hi Mark

Thanks for the feedback. Do the XRs also have that "non-horn-like" sound the XT1464 has, or are there more differences than the radiation pattern (and probably the XT going lower)?
I always wondered how the beefy coax drivers like the DCX 464 and BMS would work with these small horns if they don't have to go low in a four way system.

Regards

Charles
 
Yes, I feel sure the DCX 464 and BMS would work fine on the XR's to at least 700Hz.

I don't remember any major difference between the XRs and the XT1464, other than the XT1464 seemed to sound a wee bit more real,
which I'm guessing was due to my hearing needing & liking the XT1464's tighter HF/VHF pattern rolloff (despite the common 60 deg spec).


The 3TX I had were the 60 deg version, and they used a BMS 4593, reaching down to a pair of 10"s.
I switched the passive xovers in them to active, and crossed somewhere between 650-700Hz... don't remember exactly.
Worked great. Here's a thread: https://forums.prosoundweb.com/index.php/topic,164582.msg1516535.html#msg1516535

I tried the 90 deg XR in the 3TX and that worked great too, but I didn't really try to find the exact best xover frequency, and just stuck with whatever was used for the 60 deg.
 
Did I just mention deductive logic and reasoning?

There is nothing wrong with having a certain bias, as long as you keep the bias in mind.

So a test can be done with super highly skilled listener experts.
Let's say they can't even hear the difference in a certain experiment.
That automatically (by the very definition of deduction) means that, on average, people are not able to hear it either.
Probably (very likely) even less so.

Fully randomized is by definition not possible, btw; that's the reason why we need to use things like SD (standard deviation) and SEM (standard error of the mean).
So a bias is by definition inevitable, because fully random is only possible with an infinite number of test subjects (people).
As long as you keep the bias in mind, and write your conclusion accordingly, there is nothing wrong with it.
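To make the SD/SEM point concrete, here is a minimal sketch in plain Python. The "listening scores" are made-up hypothetical numbers drawn from one population; it just shows that while the sample SD stays about the same size, the SEM (our uncertainty about the population mean) shrinks roughly with the square root of the sample size.

```python
import random
import statistics

random.seed(0)  # reproducible illustration

def sem(sample):
    # Standard error of the mean: sample SD divided by sqrt(n).
    return statistics.stdev(sample) / len(sample) ** 0.5

# Hypothetical listening scores from the same population (mean 50, SD 10).
small = [random.gauss(50, 10) for _ in range(10)]
large = [random.gauss(50, 10) for _ in range(1000)]

# SD estimates are similar, but the SEM of the large sample is ~10x smaller.
print(round(sem(small), 2), round(sem(large), 2))
```

So even a biased, smallish sample gives a usable mean with a quantifiable uncertainty; the bias just needs to be stated next to the numbers.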

These techniques are being used in other fields of physics, biology and archeology all the time.
In the world of audio they just keep being pedantic about certain things.

It's also kind of ironic, because in the very vast majority of the hundreds of papers I have read (incl. Toole's, for example), the experiments were done with just a handful of test subjects, often not more than 5-10, and only with the academics or a couple of students that were on hand. So very, very far from random.

Yet, it gives us enough insight to get an order of magnitude at least.
Which is in most cases already more than enough to determine how meaningful it is in the whole chain of variables.

Weirdly enough some people don't seem to understand the definition of "order of magnitude" and still try to squeeze out more and more decimals. As if that is going to give any significant improvements.
A good example of this is distortion (THD+N).
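To illustrate why squeezing out extra THD decimals stops mattering: THD is just the RMS sum of the harmonic amplitudes relative to the fundamental, so once the number is a couple of orders of magnitude below audibility, further decimals change nothing in practice. A minimal sketch (the amplitude values are hypothetical):

```python
import math

def thd(harmonic_amplitudes):
    # THD as the RMS of harmonics 2..N relative to the fundamental.
    fundamental, *harmonics = harmonic_amplitudes
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical driver: fundamental 1.0, 2nd harmonic 1%, 3rd harmonic 0.5%.
print(f"{thd([1.0, 0.01, 0.005]) * 100:.3f}%")  # ~1.118% THD
```

Note the raw number carries no psychoacoustic weighting at all: a 1% 2nd harmonic and a 1% 7th harmonic give the same THD yet sound very different.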

edit: Forgot to add. There is also nothing wrong with the initial tests not being totally random.
Once again, it will give enough insight to see in what order of magnitude we are talking about.
If the results seem interesting enough, a follow-up can always be organized to make the results a bit more objective.

Another idea is to first estimate how much better "experts" will perform compared to totally random people.
Once you have some numbers, the results can be adjusted accordingly.
That still won't give perfectly random results, but you will at least get better ones.
Again, when the outcome looks promising, a follow up can always be organized.

"Science would best progress using deductive reasoning as its primary emphasis, known as critical rationalism."

and

"A pseudo-science is set up to look for evidence that supports its claims.
A science is set up to challenge its claims and look for evidence that might prove it false."


Don't be surprised if 80% of so-called science actually belongs to the first category.
 
I can say the XR1464C's 60 deg horiz pattern didn't seem to have as sharp a -6dB edge as the 60 deg XT1464 has.
Can also say if I wanted to use a single stack of the modulars for 90 coverage, I'd likely go with the RCF 950, just from positive experiences with it in the PM90s.
Mark, how do you rate the HF950 compared to the other horns - especially the XT1464 and XR1496C, based on your subjective experience?
 
FaitalPRO HF1440 + 18sound XR1464C horn, crossed 1st order at 1kHz to 4x ScanSpeak 15WU's.

 
"Science would best progress using deductive reasoning as its primary emphasis, known as critical rationalism."

and

"A pseudo-science is set up to look for evidence that supports its claims.
A science is set up to challenge its claims and look for evidence that might prove it false."


Don't be surprised if 80% of so-called science actually belongs to the first category.
If you take it that loosely and simply, yes.

Especially in the field of acoustics, I have seen so many papers using FEM/BEM techniques to "prove" certain ideas by running the results through FEM/BEM simulations, often totally unaware of the false assumptions they started with.
In other words, the entire simulation is based on an assumption that has no fundamental basis whatsoever.
The papers go on for pages about all the maths of the FEM/BEM simulation, while totally neglecting the fundamentals.
In some extreme cases, they don't even prove anything at all.

One of the classic examples of how science should be done is the old gold foil experiment by Rutherford (the Geiger–Marsden experiments). Yes, that's a while back.

I have a background in applied physics, especially in statistics and experimental physics.
Disproving your own findings is a fundamental part of the whole experiment.

I really don't understand how some of those papers, theses and PhD reports would even pass.
Back in the day I would have gotten a big red cross through it all and had it thrown in the garbage bin.
 
Several points:

I do write as succinctly as possible. Thus, it is certainly possible that someone misunderstands my assumptions and hence might question my conclusions. Best would be to ask. It takes a lot of time to write such that no one can misinterpret what was said.

Good discussion of the value of subjective listening. I would agree that fairly large problems can be worked out in non-scientific studies or by deduction, but details cannot. The more unobvious the flaws the more a well controlled experiment is required. THD and IMD certainly come to mind. Used for decades, when studied in depth it turns out that they don't correlate to anything, let alone subjective impressions. That makes them not very useful for discussions of the audibility of nonlinearities.

As you no doubt remember, Toole points out the 'experts' were frequently wrong and performed significantly worse in double blind testing than random students picked for the tests.
I have seen this many times in my career as well.

Another idea is first to estimate how much more "experts" will be compared to totally random people.

Toole/Olive have also studied this as well. Their conclusion was that experts agree with the masses on average, but it takes many more novices to arrive at statistically significant results.
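That "many more novices" conclusion can be made concrete with a back-of-the-envelope power calculation: noisier listeners mean a smaller standardized effect, and the required sample size grows with the inverse square of the effect size. A rough sketch using the normal approximation (the effect sizes for "experts" vs "novices" are made-up illustrative numbers, not Toole/Olive's actual data):

```python
import math

def n_required(effect_size, z_alpha=1.96, z_beta=0.84):
    # Rough normal-approximation sample size for detecting a mean shift
    # of `effect_size` standard deviations (two-sided alpha = 0.05,
    # power = 0.80); n = ((z_alpha + z_beta) / d)^2, rounded up.
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical: trained listeners rate consistently (large standardized
# effect), novices are much noisier (small standardized effect).
print(n_required(0.8))   # experts: 13 trials
print(n_required(0.25))  # novices: 126 trials, roughly 10x more
```

The ratings end up at the same mean either way; the noise level of the panel only dictates how many subjects you need before the result becomes statistically significant.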
 
I have seen so many papers using FEM/BEM techniques to prove certain ideas, by running the results through FEM/BEM simulations. Often totally unaware of the false assumptions they started with.
I've seen this as well. I used to encourage my FEA analysts to get away from their computer screens long enough to understand exactly what the problem is that they are trying to solve. Not just to generate numbers, charts and graphs that are irrelevant to the actual problem.
 
I've seen this as well. I used to encourage my FEA analysts to get away from their computer screens long enough to understand exactly what the problem is that they are trying to solve. Not just to generate numbers, charts and graphs that are irrelevant to the actual problem.
Or as I would say, just get their hands dirty, gain some experience and especially some practical idea of what we are talking about.

Technically speaking, THD (and IMD) does correlate to some extent.
Every human being has no problem hearing 20-30% THD of any kind.
Fact is, those are just plain raw numbers. Meaning they have zero connection with psycho-acoustics, or with any preferences we have. After a while we can say that the order of magnitude is becoming so small that other factors are just so much more significant. Strangely enough, some speaker manufacturers seem to totally neglect this and just dismiss it all "since it's clear as a mirror". Go figure.

As for the "experts" listening tests.
To me, it falls a bit in the category of "correlation doesn't imply causation".
I believe the results, but they don't say much by themselves.
Or in other words: what is the reason why they perform worse than an average person?

That statement by itself doesn't tell us anything without any proof and will just be a loose statement.
Just an observation nothing more, nothing less.

I can come up with some ideas why experts would perform worse, but there aren't many.
The ones that come to mind are age, as well as ear fatigue (caused by dealing with a lot of audio and noise).

I can come up with more ideas why experts would actually just perform basically the same as average people.
They just have better knowledge to name and point out the differences, but that doesn't make anyone a better listener.

The only way to get a better understanding of this, is to investigate and perform a test of experts vs non-experts.

But once again, I find it a little far-fetched for most research, since order of magnitude is what we care about in the first place. Unless there is a very big reason to believe that the test panel in question performed significantly differently.
And we are talking about orders of magnitude of difference here.
I personally find that difficult to believe as long as there are enough participants of all kinds of ages.

The original point I was making was in response to someone saying that those experiments are expensive and all.
Which I don't think is true at all; you just have to be a lot more creative and get out of that academic bubble.
 
That's total nonsense.
Well it's a little extreme and blunt, but there is a little truth in it.

I know people who are in the medical field.
Let me say it this way, there are other priorities than being objective in some cases.
Even more so when there are politics involved.

In the field of audio and acoustics, I have definitely seen many papers clearly being biased a certain particular way.
 
That's total nonsense.
Perhaps you're confident that financing / sponsorship of 'scientific' institutes is (still) founded on purely 'noble / honorable' objectives (in quotes, because even these adjectives are subject to corruption) and that all researchers are equally facilitated, even if a discipline / research field has few or no marketable applications and serves no interests other than science for the sake of science.

In this respect too, there is much to learn from history.
However, be very critical when selecting literature.
 
Science operating outside of academia is called technology.
Ehm, no?

Science is nothing more than a method, and every person on this planet is allowed to contribute no matter their position, status, credentials, race or beliefs.
As long as people stick to this method of working and testing, it is called science.

Saying that it belongs only to academia sounds highly arrogant and disrespectful to me.
As someone with a scientific background I would also like to distance myself from such statements,
as well as refer back to the meaning and foundations of what science is all about.

The definition of technology is also very different.
Technology is the result of (scientific) knowledge and skills, or the methods used in industrial or commercial production.
 
The scientific method is of course applicable to all research: private, military, business etc. It ain't science, though, unless it's published in a scientific journal and subjected to peer review, i.e. academia. Applied engineering, on the other hand, is technology based on science that does not have to meet the generalized outcome criteria that science requires. I don't know if English is your first language, since 'science' traditionally has a different and stricter meaning in English than, for instance, the German Wissenschaft or the French la science.