Beyond the Ariel

I would encourage everyone to try RePhase (just search the forum). You can fix phase aberrations and listen. Using foobar2000, you can quickly switch it on and off. It helps if someone else does the switching, so that you don't know whether it's enabled or not.

I could not hear a difference.

Also, remember that moving the horn 1/4 inch isn't going to magically fix all the phase rotations. It will only mean a smoother crossover. The phase rotation of the crossover will still remain, as will all other phase aberrations in the system (vented box, etc.).
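For anyone who wants to roll their own version of this blind test, here is a minimal Python sketch (not RePhase itself; file names, crossover frequency and Q are placeholders I chose) that writes a phase-rotated copy of a track using a second-order all-pass biquad with RBJ Audio EQ Cookbook coefficients. Switching between the two files in foobar is then easy:

```python
# Minimal sketch: apply a 2nd-order all-pass (RBJ cookbook biquad) at an
# assumed 3 kHz crossover so the original and phase-rotated versions can be
# blind-compared. File names, f0 and Q are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

fs, x = wavfile.read("test_track.wav")       # hypothetical input file
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)                       # fold to mono for simplicity

f0, Q = 3000.0, 0.707                        # assumed crossover freq and Q
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 - alpha, -2 * np.cos(w0), 1 + alpha]  # all-pass numerator
a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]  # all-pass denominator

y = lfilter(b, a, x)                         # same magnitude, rotated phase
wavfile.write("test_track_allpass.wav", fs,
              np.clip(y, -32768, 32767).astype(np.int16))
```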
 
I've read that the frequency response of each nerve in human hearing is limited to about 2 kHz*. If true, is this a coincidence?

*And that the 20 kHz limit is achieved in the brain when taking into account the input from many nerve inputs (software).

Got a cite?

Oohashi et al. (2000), "Inaudible High-Frequency Sounds Affect Brain Activity: Hypersonic Effect", Journal of Neurophysiology, vol. 83(6), pp. 3548-3558, suggests other pathways for sound perception.

I need to do some homework and see what's happened since (e.g. Reiko Yagi, Emi Nishina, Manabu Honda, Tsutomu Oohashi, "Modulatory effect of inaudible high-frequency sounds on human acoustic perception", Neuroscience Letters, vol. 351(3), 20 November 2003, pp. 191-195).
 
I would encourage everyone to try RePhase (just search the forum). You can fix phase aberrations and listen. Using foobar2000, you can quickly switch it on and off. It helps if someone else does the switching, so that you don't know whether it's enabled or not.

I could not hear a difference.

Also, remember that moving the horn 1/4 inch isn't going to magically fix all the phase rotations. It will only mean a smoother crossover. The phase rotation of the crossover will still remain, as will all other phase aberrations in the system (vented box, etc.).

I should mention that the new loudspeaker (which I will call LTO for now) is not linear-phase in the sense that it can reproduce nice-looking square waves. Like the Ariel, it resembles an allpass filter at the crossover frequency.
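To make that concrete, here is a small sketch of mine (assuming scipy and an arbitrary 1 kHz crossover) showing that a 4th-order Linkwitz-Riley crossover sums to flat magnitude but all-pass phase:

```python
# Sketch: an LR4 crossover (each branch = two cascaded 2nd-order Butterworth
# sections) sums to an all-pass: flat magnitude, rotating phase. The 1 kHz
# crossover frequency is an arbitrary example.
import numpy as np
from scipy import signal

fs, fc = 48000, 1000.0
bl, al = signal.butter(2, fc, btype="low", fs=fs)
bh, ah = signal.butter(2, fc, btype="high", fs=fs)

w, Hl = signal.freqz(bl, al, worN=2048, fs=fs)
_, Hh = signal.freqz(bh, ah, worN=2048, fs=fs)
Hsum = Hl**2 + Hh**2                  # square = cascade twice, then sum

print("max magnitude deviation (dB):",
      np.max(np.abs(20 * np.log10(np.abs(Hsum)))))
idx = np.argmin(np.abs(w - fc))
print("phase at crossover (deg):", np.degrees(np.angle(Hsum[idx])))
```

The magnitude deviation prints as essentially zero, while the phase passes through -180 degrees at the crossover and rotates a full turn across the band, which is exactly why the summed output cannot reproduce a square wave.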

In terms of time-to-decay, loudspeakers with 1st-order crossovers are not necessarily better than allpass-filter loudspeakers. Why? Because the time decays from the drivers themselves can be a lot longer than the relatively brief up-and-down from the allpass function.

Now, if the drivers were perfect, yes, 1st-order crossovers would certainly look a lot nicer in the time domain. But in the world where we live, the time-to-decay is dominated by the driver itself, with horns in particular needing special attention to minimize diffraction at the horn-mouth and in the throat. Kinks in the throat (as seen in the JBL Bi-Radial or Altec sectoral horns) cause far more clutter in the time domain than a well-designed high-order crossover.

In addition, diaphragm excursion rises very rapidly at frequencies below the horn cutoff. High-order filters are much more effective in shielding the driver from the LF energy that creates IM distortion than 1st-order filters.
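Putting rough numbers on that protection (a sketch of mine with an assumed 1 kHz crossover):

```python
# Sketch: how much signal reaches the driver one and two octaves below an
# assumed 1 kHz crossover, 1st-order vs. 4th-order Butterworth high-pass.
import numpy as np
from scipy import signal

fs, fc = 48000, 1000.0
b1, a1 = signal.butter(1, fc, btype="high", fs=fs)
b4, a4 = signal.butter(4, fc, btype="high", fs=fs)

for f in (500.0, 250.0):                    # one and two octaves below fc
    _, h1 = signal.freqz(b1, a1, worN=[f], fs=fs)
    _, h4 = signal.freqz(b4, a4, worN=[f], fs=fs)
    print(f"{f:5.0f} Hz: 1st order {20*np.log10(abs(h1[0])):6.1f} dB, "
          f"4th order {20*np.log10(abs(h4[0])):6.1f} dB")
```

Roughly -7 dB versus -24 dB one octave down, with the gap widening by another 6 dB versus 24 dB each octave below that; since excursion also rises steeply below horn cutoff, the 1st-order filter leaves the diaphragm far more exposed to the LF energy that creates IM distortion.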

If people are OK with multiple amplifiers and digital signal processing, this can be sidestepped with FIR-type filters in the digital domain, but the LTO is aimed at the triode-amplifier enthusiasts.
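For completeness, here is what that FIR route looks like in miniature (my example, not the LTO): a linear-phase FIR low-pass plus its complementary high-pass sums to a pure delay, i.e. perfect reconstruction:

```python
# Sketch: complementary linear-phase FIR crossover. The high-pass is a
# delayed unit impulse minus the low-pass, so the two branches sum to a
# pure delay. 1 kHz and 255 taps are arbitrary choices.
import numpy as np
from scipy import signal

fs, fc, ntaps = 48000, 1000.0, 255
lp = signal.firwin(ntaps, fc, fs=fs)    # linear-phase low-pass
hp = -lp.copy()
hp[ntaps // 2] += 1.0                   # add the delayed delta

total = lp + hp                         # should be a delayed unit impulse
print("delay:", np.argmax(total), "samples,  max error:",
      np.max(np.abs(total - np.eye(1, ntaps, ntaps // 2)[0])))
```

The price is the fixed latency of half the filter length, plus the multi-amp DSP chain, which is the trade-off mentioned above.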
 
I certainly do not know the answer to that one. Geddes just said above that we need more than one cycle. If there are studies that show this, or hints as to where to find them, please let us know. It's an interesting subject.

sound is simply pressure on the eardrum. while a partial wave might be 'sensed', it might not have a 'sound' per se.

anybody have an audio generator that can send a single/partial wave down the pipe or an ADC that can 'edit' a wave from an audio generator?
 
I certainly do not know the answer to that one. Geddes just said above that we need more than one cycle. If there are studies that show this, or hints as to where to find them, please let us know. It's an interesting subject.

But in any case, the harmonics would take at most 1/2 as long for a full cycle (the nth harmonic completes a cycle in 1/n of the fundamental's period), maybe 1/3 to 1/4 as long.

here is a link to an online generator.

Online Tone Wave File Generator | Sine, Sweep, Noise & more

set to a pulse signal with level -6 and width 1 sample, i hear only a brief 'click'. this is likely only the sound of the audio circuit in the computer turning on and off.

it certainly makes sense that a complete cycle is required for humans to 'detect' a sound (hear it).
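For anyone who would rather generate the test files themselves than trust an online generator's output stage, here is a small sketch (all values are just examples) that writes one cycle of a 1 kHz sine and a single-sample click, each padded with silence:

```python
# Sketch: write a WAV with one cycle of a 1 kHz sine and, for comparison,
# a single-sample click, each surrounded by silence.
import numpy as np
from scipy.io import wavfile

fs, f = 48000, 1000.0
one_cycle = np.sin(2 * np.pi * f * np.arange(int(fs / f)) / fs)
click = np.zeros(48)
click[0] = 1.0                               # 1-sample impulse

pad = np.zeros(fs // 2)                      # half a second of silence
out = np.concatenate([pad, one_cycle, pad, click, pad])
wavfile.write("burst.wav", fs, (0.5 * out * 32767).astype(np.int16))
```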
 
On the audibility of phase, my impression is that our ears work in several different ways, so what might be important in one frequency range may not be in another.

Also, terms like group delay, which were intended to be used over a narrow bandwidth, are confusing when applied over a large span.
For example, ANY response feature takes 10 times longer if the frequency is 1/10 as high, so subwoofers must have large group delays associated with them.
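To put a number on that scaling, here is a small sketch of mine (arbitrary 500 Hz and 50 Hz cutoffs, a decade apart) measuring the group delay of the same 4th-order high-pass at its corner:

```python
# Sketch: the same 4th-order Butterworth high-pass scaled to 500 Hz and to
# 50 Hz; the group delay at the corner is 10x longer at the lower frequency.
import numpy as np
from scipy import signal

fs = 48000
for fc in (500.0, 50.0):
    sos = signal.butter(4, fc, btype="high", output="sos", fs=fs)
    f = np.linspace(0.8 * fc, 1.2 * fc, 401)      # band around the corner
    _, h = signal.sosfreqz(sos, worN=f, fs=fs)
    gd = -np.gradient(np.unwrap(np.angle(h)), 2 * np.pi * f)   # seconds
    print(f"fc = {fc:5.0f} Hz: group delay at fc ~ {1000*gd[200]:.2f} ms")
```

The two delays differ by a factor of ten, so a subwoofer's large group delay is the same filter behavior stretched out in time, not necessarily a defect.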

It seems to me that our hearing system is something which we have all "learned", like learning to speak, ride a bike or play guitar.
As our one and only reference, we are also unaware of its limitations and even its complexity. We think in terms of "an image", but that is something constructed in our brains based on the inputs from two ears whose responses change wildly with incoming angle. We hear none of that as problems or comb filtering; that's how we have learned to tell direction and so on.

Rather than try to emulate or re-create some of how our ears work (pinna recordings etc.), the object could also be simply to reproduce the input signal intact: voltage waveshape in, pressure waveshape out.
One recurring theme in marketing is that what can be done partially governs what is thought to be necessary by those selling the stuff.

Without DSP, it had been nearly impossible to make a loudspeaker whose radiation occupied one point in time over a broad band (an ideal Dick Heyser described more than 25 years ago). That meant that a normal multi-way loudspeaker had sources which were too far apart to add into one source in the horizontal and vertical planes, and offset in time due to the crossover and to each driver's physical depth relative to its acoustic phase where they interact.
So, in part because of a complex source radiation, what one hears listening to a loudspeaker is often very different from what one hears simulating the magnitude and phase sampled in only one place (like a microphone does) and auditioning "phase" over headphones. Oddly, once you remove the two-ear spatial stuff, all of a loudspeaker's flaws are more audible when you listen to it through a measurement mic over headphones.

What we did at work was to make generation-loss recordings using that approach too (a 24/96 MI recorder); it was useful to see how "faithful" not only what I was working on was, but also our competition. Here an Earthworks measurement mic was placed at one meter on axis, and a music track was played at about 90 dB at 1 meter.
Usually that was done on a tower well off the ground, so that the captured sound was the loudspeaker's radiation (reflections far down in level). For a while we also made a parallel generation copy of the music track (through the recorder alone); the degradation in that was negligible.

Want to know where a loudspeaker's audible warts are? Just make a generation-loss recording.
Want to hear how much difference phase makes in a given system?
Set up a multi-way speaker with a DSP speaker controller and make a recording both ways.

I think most hifi fans would find it hard to believe that most loudspeakers sound pretty lame on the first pass, many sound bad at generation 2, and very few are even tolerable after three generations.
Conversely, the perfect loudspeaker, like a perfect anything else, could be inserted in the signal path and cause no degradation no matter how many generations you choose.
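A software approximation of this test, for those who cannot rig a tower and mic (a sketch only: convolving with a measured impulse response captures the linear flaws but not driver nonlinearity or the room; file names are hypothetical):

```python
# Sketch: simulate generation loss by repeatedly convolving a track with a
# measured loudspeaker impulse response, writing each generation to disk.
# Assumes mono WAV files at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, ir = wavfile.read("speaker_ir.wav")     # measured impulse response
fs2, x = wavfile.read("track.wav")          # music track
assert fs == fs2
ir = ir.astype(np.float64)
x = x.astype(np.float64)

for gen in range(1, 4):                     # three generations
    x = fftconvolve(x, ir)[:len(x)]
    x /= np.max(np.abs(x))                  # renormalize each pass
    wavfile.write(f"gen{gen}.wav", fs, (x * 32767).astype(np.int16))
```

Comparing gen3.wav against the original should expose the speaker's signature in much the way described above, minus the acoustic and nonlinear contributions a real mic capture would add.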

The reality of loudspeaker faithfulness, or the absence of it, is rather shocking; it reminds me of something Don Davis said once.
An electronics engineer looked at the magnitude and phase for a loudspeaker and proclaimed, "Well, obviously, it's broken."

If you're game, try the generation-loss or mic-capture approach to evaluating crossovers and changes; it was very useful when we weren't sure of the direction.
Best,
Tom Danley
Danley Sound Labs
 
I've read that the frequency response of each nerve in human hearing is limited to about 2 kHz*. If true, is this a coincidence?

*And that the 20 kHz limit is achieved in the brain when taking into account the input from many nerve inputs (software).

Mostly true, but the common thought is that each nerve cell takes about 1 ms to recharge, so this is more like 1 kHz. This limitation is clearly why we have a cochlea. Without a cochlea we could not hear past 1 kHz. The cochlea spreads frequencies along its length and does indeed encode frequencies > 1 kHz in "software" and "place". This encoding and decoding is exceedingly complex.
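As a concrete picture of that "place" encoding, here is Greenwood's classic place-to-frequency fit for the human cochlea (the constants are from Greenwood's published fit; this sketch is purely illustrative):

```python
# Sketch: Greenwood place-frequency map, f = A * (10**(a*x) - k), with the
# published human constants A = 165.4 Hz, a = 2.1, k = 0.88, where x is the
# fractional distance from the cochlear apex.
def greenwood(x, A=165.4, a=2.1, k=0.88):
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood(x):8.1f} Hz")
```

The map runs from roughly 20 Hz at the apex to about 20 kHz at the base, which is how frequencies far above the ~1 kHz neural rate limit still get represented.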
 
sound is simply pressure on the eardrum. while a partial wave might be 'sensed', it might not have a 'sound' per se.

anybody have an audio generator that can send a single/partial wave down the pipe or an ADC that can 'edit' a wave from an audio generator?

This is the problem: it is impossible to even define a wave like this that does not contain much higher frequencies that could be detected. No single-period wave can be created at a single frequency. The closest that we can come to this is a wave packet with a Gaussian envelope, but this still takes several periods to accomplish.
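To put numbers on this, a toy sketch of mine (arbitrary 1 kHz) comparing the spectral width of a single cycle with a Gaussian-windowed packet of roughly eight cycles:

```python
# Sketch: a single cycle of a sine is spectrally broad; a Gaussian-windowed
# packet of several cycles is far narrower. Width measured at -20 dB.
import numpy as np

fs, f = 48000, 1000.0
n1 = int(fs / f)
single = np.sin(2 * np.pi * f * np.arange(n1) / fs)

n8 = 8 * n1                                   # about 8 cycles in the packet
t = np.arange(n8) / fs
env = np.exp(-0.5 * ((t - t.mean()) / (t.mean() / 3)) ** 2)
packet = env * np.sin(2 * np.pi * f * t)

def width_20db(sig):
    S = np.abs(np.fft.rfft(sig, 1 << 16))
    fr = np.fft.rfftfreq(1 << 16, 1 / fs)
    keep = fr[S > 0.1 * S.max()]              # bins within -20 dB of peak
    return keep.max() - keep.min()

print(f"single cycle:    ~{width_20db(single):5.0f} Hz wide")
print(f"Gaussian packet: ~{width_20db(packet):5.0f} Hz wide")
```

The single cycle smears over several kilohertz, while the packet stays within a few hundred hertz of 1 kHz; narrowing it further requires an even longer envelope.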
 
If you're game, try the generation-loss or mic-capture approach to evaluating crossovers and changes; it was very useful when we weren't sure of the direction.
Best,
Tom Danley
Danley Sound Labs

Hi Tom,

Trouble is, this is virtually impossible for most people to do; I can't. When I first read this idea I liked it, but I have never been able to try it.
 
There is a well-known early electronic noise-music piece that was just a short spoken passage endlessly played back and re-recorded until it became all noise. Is this not all you have to do? Tape generations would have been a part of it then; recorded and re-recorded digitally, it would mostly be the loudspeaker artifacts multiplying, unless I am missing something.
 
This is the problem: it is impossible to even define a wave like this that does not contain much higher frequencies that could be detected. No single-period wave can be created at a single frequency. The closest that we can come to this is a wave packet with a Gaussian envelope, but this still takes several periods to accomplish.


I would theorize that a quarter cycle of a wave could be enough for the brain to synthesize the rest. It's the information contained in that first quarter cycle of a single-frequency wave that defines the frequency and amplitude. If the ear senses the change in acceleration (the third derivative of displacement) and the peak displacement through the eardrum, shouldn't that suffice for the brain to make up the remaining wave? Again, this is just a hypothesis.
 
It is difficult for this to be done with any certainty mathematically. I did a lot of work with "linear predictive coding", which attempts to do what you suggest mathematically. The problem is that the "predicted" results have huge uncertainties when the data is limited. For the brain to do this reliably would not be possible with less than several cycles. Remember, these are not systems with perfect resolution; noise and errors are part and parcel of any and all human perception.
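A toy illustration of that uncertainty (my sketch, not the original LPC work): fit linear-prediction coefficients to a noisy 1 kHz sine by least squares and free-run the predictor forward, once from a quarter cycle of data and once from a full cycle:

```python
# Sketch: least-squares AR ("linear predictive") fit to a noisy sine,
# extrapolated forward. Compare how fit length affects prediction error.
import numpy as np

rng = np.random.default_rng(0)
fs, f, order = 48000.0, 1000.0, 4
t = np.arange(96) / fs                    # two cycles of 1 kHz at 48 kHz
x = np.sin(2 * np.pi * f * t) + 0.01 * rng.standard_normal(96)

def ar_extrapolate(n_fit):
    # fit: predict x[i+order] from the previous `order` samples
    X = np.array([x[i:i + order] for i in range(n_fit - order)])
    a = np.linalg.lstsq(X, x[order:n_fit], rcond=None)[0]
    pred = list(x[:n_fit])
    for _ in range(96 - n_fit):           # free-run past the fitted data
        pred.append(np.dot(a, pred[-order:]))
    return np.max(np.abs(np.array(pred[n_fit:]) - x[n_fit:]))

print("fit on 1/4 cycle -> max extrapolation error:", ar_extrapolate(12))
print("fit on 1 cycle   -> max extrapolation error:", ar_extrapolate(48))
```

The short fit has far fewer equations constraining the coefficients, so noise has a much larger effect on where its free-run prediction ends up; that is the same uncertainty problem the brain would face with less than a cycle of input.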
 
Musical Noise,
I think we have two things happening here. One is: does the brain need a complete cycle to be triggered, or can it actually identify the signal within the first 90 degrees of the waveform? We would probably discount that in such a short time frame, but at the same time, how do you produce only 1/4 of a waveform and have it instantly stop? That seems impossible on its face. So how many full waves do we need before we identify and react to a sound? It seems we are looking at multiple functions here: the initial rise time triggering the neurons, the brain's processing, and our, let's say, comprehension that the event has happened. Theory and science do not always align, as we don't understand all of the brain functions and nerve impulse response times.
 
Musical Noise,
I think we have two things happening here. One is: does the brain need a complete cycle to be triggered, or can it actually identify the signal within the first 90 degrees of the waveform? We would probably discount that in such a short time frame, but at the same time, how do you produce only 1/4 of a waveform and have it instantly stop? That seems impossible on its face. So how many full waves do we need before we identify and react to a sound? It seems we are looking at multiple functions here: the initial rise time triggering the neurons, the brain's processing, and our, let's say, comprehension that the event has happened. Theory and science do not always align, as we don't understand all of the brain functions and nerve impulse response times.

we have completely 'shanghaied' this forum topic here.

someone mentioned 'ear training' above. perhaps it is possible to 'train' the ear to do just this. i participated in a study a few years back on overtones in the human voice at the FSU School of Music. the tester was attempting to determine the relationship between the singer's ear (detection of pitch and the ability to maintain intonation) and overtones in the voice. i recognized what he was after in the study and told him. of course harmonics are the determinant in this study.

anyway, here is something on wiki that is interesting. i googled "can human hear tone burst" and this was one of the results in the list.

Absolute threshold of hearing - Wikipedia, the free encyclopedia