Not sure what "inter-aurals" means in this context? Are you meaning the difference in arrival time at the two ears?
Got any refs to test results on this?
No. Just makin it up as I go along. 😉
Nordmark was the first I found to test inter-aural.
Trying to remember who is current, kinda busy at the moment. I do remember the numbers 11/11/11, which was the date of one of his writeups..wait, Greisinger...that's the name...he had quite a bit on his site..
There are others, but for now I'm wrestling with issues that are far more important than audio, so I will not be pursuing it for anywhere from two to three weeks. 🙁 Life sucks. Sorry
My concern is this: a stereo program will have images consistent with the pan-pot placements (studio mix). There will be a center image for some part of the program content. Panning moves some of it to either side; of course it is usually level-only panning.
If part of the content is delayed by a timeframe which exceeds human sensitivity thresholds, the placement of the image with respect to a central one will be changed.
Not "head in vice" arguments, but rather, relative shifts from one perceived image to another.
Settling time of the wires into the load will depend on the relationship between the characteristic impedance of the wire and that of the load. The load impedance has a wild frequency dependence, both as measured steady state and even more so dynamically (as in, copper shorting ring vs. no ring kinda thing).
Since we have ITD sensitivity at the microsecond level, settling times of the cable into the load in that range cannot be discounted. That is why I use the RF characteristic impedance; the ITD timescales are in that regime.
John
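The settling-time argument above can be put in numbers with a simple bounce-diagram model. This sketch is my own, under assumptions not stated in the post: a lossless line, purely resistive source and load, and "settled" meaning the bouncing residual has fallen below 1% of the signal. The 0.1 ohm amplifier output impedance and 3 m cable length are illustrative values.

```python
import math

def reflection_coefficient(z, z0):
    """Voltage reflection coefficient at a termination z on a line of impedance z0."""
    return (z - z0) / (z + z0)

def round_trips_to_settle(z_source, z_load, z0, tol=0.01):
    """Round trips until the bouncing residual falls below `tol`.

    Each round trip scales the residual by the product of the two
    reflection coefficients, so the residual after n trips is |g|^n.
    """
    g = abs(reflection_coefficient(z_source, z0) *
            reflection_coefficient(z_load, z0))
    if g == 0.0:
        return 0  # matched at one end: nothing keeps bouncing
    return math.ceil(math.log(tol) / math.log(g))

# Example numbers from the discussion: ~150 ohm speaker cable into an
# 8 ohm load, driven by a low-impedance (0.1 ohm) amplifier output.
trips = round_trips_to_settle(0.1, 8.0, 150.0)
# For 3 m of cable at ~2/3 the speed of light, one round trip is
# 2 * 3 / 2e8 = 30 ns, so several dozen trips put settling on the
# order of a microsecond. Matching z0 to the load makes it vanish:
# round_trips_to_settle(0.1, 8.0, 8.0) == 0.
```

Whether a bouncing residual of this size matters audibly is exactly what the thread is debating; the sketch only shows how the mismatch sets the timescale.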
jneutron,
Wait, are you saying that signals below 50 kHz or 20 kHz can't have relative delays, interaural or otherwise, smaller than some amount (tens of microseconds)? Because that is certainly not true: try signal nulling/cancelling tests and you'll find that delay differences in the nanoseconds can be significant. That might even be what is going on in ears; RF bandwidths aren't necessary.
Or that delays that would be the same in both channels would still matter for image placement?
Do you have any proof that humans are not sensitive to content at the 1.5 uSec level? Otherwise, we may have to reject such an unfounded claim.
Let's see where the numbers go.
As far as the pickup element is concerned, inner hair cells stop coding for phase at around 4-5 kHz. So for ease let's start out with a 1 kHz tone, which has a period of 1000 uSec. 1.5 uSec is 0.0015 of that, or a phase shift of about half a degree. The limit of phase detectability would be more like 10 degrees or more.
I have my ideas as to whether this is perceptible, but first I want to invite others to give their thoughts.
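For anyone who wants to check the arithmetic, the conversion from a fixed time offset to phase shift on a steady tone is one line (my addition, just restating the numbers above):

```python
# Phase shift (degrees) produced by a fixed time offset at a given frequency:
# a delay of delay_s seconds is delay_s * freq_hz of a cycle.
def phase_shift_deg(delay_s, freq_hz):
    return 360.0 * delay_s * freq_hz

# 1.5 us on a 1 kHz tone (1000 us period) is 1.5/1000 of a cycle,
# i.e. roughly half a degree of phase.
shift = phase_shift_deg(1.5e-6, 1000.0)
```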
I don't have the studies at hand but I believe the stimulations were usually ~impulsive, not tones. Hadn't known that Griesinger was involved.
jneutron,
Wait, are you saying that signals below 50 kHz or 20 kHz can't have relative delays, interaural or otherwise, smaller than some amount (tens of microseconds)? Because that is certainly not true: try signal nulling/cancelling tests and you'll find that delay differences in the nanoseconds can be significant. That might even be what is going on in ears; RF bandwidths aren't necessary.
Or that delays that would be the same in both channels would still matter for image placement?
I would concentrate on midband where we are most sensitive to ITD.
If an image is centered, delay of the keying information on both channels by the same amount, in the low microsecond range, will be of no concern.
If an image is physically offset quite a bit as a result of panning, changing the keying information in both channels will not guarantee the image stays put. Humans will be sensitive at some level to that shift relative to a central image. If there is no other image to use as a reference, then the argument reduces to head-in-vice.
If the relative motion of the voice coil causes the dynamic load to change for the keying information, again there is the possibility of an image to image relative shift.
If the settling time is close to zero as a result of getting the RF impedance close to that of the load, all this goes away. It's the 150-ohm-plus cable impedance in concert with a 1 (typically 4 or 8) to 60 ohm load that changes settling times.
John
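To put a scale on the image-shift argument, a common textbook model (my addition; the Woodworth approximation, with assumed head radius and speed of sound) maps a source's horizontal angle to the ITD it produces:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Woodworth approximation: ITD in seconds for a distant source.

    azimuth_deg is the source angle off the median plane; 0.0875 m and
    343 m/s are typical assumed values for head radius and sound speed.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + math.sin(theta))

# A fully lateral source gives roughly 650 us, the classic maximum ITD.
itd_90 = itd_woodworth(90.0)
# Near the median plane the slope is about 9 us per degree of azimuth,
# so a 1.5 us interchannel offset corresponds to a fraction of a degree
# of apparent image shift.
```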
We'll see, I guess; the above makes no sense to me at all. That time-spread stuff is plain old sinc-function nonsense and has nothing to do with group delay. I'll remind you that sinc reconstruction (the thing they want to avoid) is linear phase (a pure delay).
EDIT - In this context what you added makes even less sense. There are few (any?) sound cards or players with DC response.
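The linear-phase point is easy to demonstrate. A sketch of my own: build a symmetric windowed-sinc reconstruction filter (the half-band cutoff, 101 taps, and Hamming window are arbitrary choices of mine) and confirm that its passband phase is exactly -omega * (N-1)/2, i.e. a pure 50-sample delay with no group-delay distortion.

```python
import math
import cmath

N = 101                 # odd tap count -> integer group delay of (N-1)/2
MID = (N - 1) // 2

def tap(n):
    """Half-band sinc (cutoff fs/4) tap with a Hamming window."""
    x = n - MID
    core = 0.5 if x == 0 else math.sin(math.pi * x / 2) / (math.pi * x)
    window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))
    return core * window

h = [tap(n) for n in range(N)]

# Symmetry h[n] == h[N-1-n] is the textbook linear-phase condition.
assert all(abs(h[n] - h[N - 1 - n]) < 1e-12 for n in range(N))

# Evaluate the frequency response at a passband frequency and check that
# the phase equals -omega * MID: a pure delay of 50 samples.
omega = 0.01  # rad/sample, well inside the passband
H = sum(hn * cmath.exp(-1j * omega * n) for n, hn in enumerate(h))
assert abs(cmath.phase(H) + omega * MID) < 1e-9
```

The same delay applies to every passband frequency, which is what "pure delay" means; truncating or windowing the sinc changes the amplitude response, not the phase linearity.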
First of all, I do not think they are telling all... they beat around the bush a lot to avoid core elements of what they are affecting. That's a reason why what they have disclosed doesn't seem like what they are actually doing, based upon what the sound change is. Let's see their patents.
The DC thing is an add-on of my own... it may have nothing to do with what they are doing; it's just there for those who remember that the bad (blurred or indistinct) bass, before and after, was not frequency response but a GD effect. Similar to that is the high end, where filtering is abundant in digital.
THx-RNMarsh
Mono, certainly not. As I recall, Greisinger was sitting in a concert, a string quartet, and was calculating the ITD necessary to distinguish the performers' horizontal locations on stage from his listening position.
John
edit: bcarso, Greisinger is the most noteworthy audio guy I recall. There were plenty of neuro type researchers also, I just don't know their names off the top of my head.
GrIEsinger I respect for his courage in asking Sid Harman why Harman featured a "boom box" in the pages of an annual report years ago, in an event at an LA AES convention called "An Afternoon with Sidney Harman". Sid responded: DOCTOR Griesinger, that is not a boom box but a mini-system, a best-selling product category.
We were dragooned into attending the event, which was in the format of a question-and-answer session. One person said privately "an afternoon with SH---that would be about ten minutes".
One question that was asked was whether he would have done things differently if he were starting out that day, and he replied that he would have been less autocratic. Had I had a death wish and plenty of f*ck-you money, I might have hollered out "Well, too late now!"
The other thing I was impressed by with DG was how he had managed to draw a salary AND royalty income from Harman.
This reminds me of when I was dallying with phased arrays for mobile phone masts. Spoiler, MIMO is much better. It was only when messing around with the numbers that it strikes home how much clever stuff goes on in our heads to get the directional accuracy that we have.
Then we screw it up by only having 2 channels to recreate a soundfield...
Mono, certainly not. As I recall, Greisinger was sitting in a concert, a string quartet, and was calculating the ITD necessary to distinguish the performers horizontal locations relative on stage from his listening position.
Just this weekend I saw 4 of his discrete opamps at Jan Didden's man cave complex. Would like to have the equipment required to find their flaws.
As to the audibility, agree. ITD goes kaput at wavelengths smaller than the distance between your ears, so to come back to the 1.5 uSec: at 1 kHz that would be about half a degree of phase shift, so of absolutely no consequence.
Even theoretically it is difficult to make bad cables for audio.
Make 'em big so they can deliver the I, keep the loop area and thus Ls under control, and that's it IMV. I've seen too many claims about speaker cables over the years that have ended up being debunked to lose any sleep over the subject.
I spoke to Dick Sequerra this afternoon and he says that he has never used an electrolytic cap in the MET7. The value has changed over time from 6 uF to 20 uF; originally it was Mylar, later polypropylene. He knows everything that I know about caps, and MORE! He hates electrolytics and ceramics, just like I do.
Years ago, when I was trying to measure my room for decay and reverb time and near/far-field speaker FR, I developed the RTA concept and, with a co-worker, designed and published it. But I learned, and said in the RTA article, that the room had to be well behaved/balanced so that as little EQ was used as possible, mostly for the direct, on-axis response measured at 1 meter or up to the critical distance. Then I was experimenting with controlled dispersion using JBL lenses etc. I came to the further conclusion that with as little reflection off side walls as possible, and sitting in the near field, an equalizer can help. And with narrower dispersion, the near-field/critical distance could be extended.
Now I have that type of JBL speaker system again (M2) and tried the Audyssey Pro EQ back at the listening position, well into the room, pointing their cal'ed mic at the ceiling as they instructed. I took 12+ measurements, stored in the Audyssey EQ Pro; it averaged them in some fashion and came up with the correction I needed. Well, I have been listening to it for a while and it ain't right below 300 Hz. The Audyssey has a bypass button (with and without EQ), and it did just fine from the midrange up but messed up the bass with its averaging algorithm. I see the curves, and it appears that it takes the min and max response and sets the output for the middle range?
I like it better the way JBL has it. Meanwhile, I am looking for a way to use the EQ on 500 Hz and higher only. Only the on-axis response can be EQ'ed.
Below 500 Hz or so it is room response, which has to be dealt with differently. Maybe the Crown amp/DSP can do that... I saw something about that in their lit.
Here is the latest R&D summary on speakers and EQ from F.Toole seminar (M2 included). It is an excellent presentation with powerful info:
https://www.youtube.com/watch?v=zrpUDuUtxPM
THx-RNMarsh
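One standard way to put a number on where "room response" takes over from the loudspeaker is the Schroeder frequency, f_s ≈ 2000·sqrt(RT60/V). This is my addition, not from the post, and the RT60 and room volume below are illustrative values, not RNMarsh's room:

```python
import math

def schroeder_freq_hz(rt60_s, volume_m3):
    """Schroeder frequency: below this, discrete room modes dominate
    and EQ has to be treated as a room problem, not a speaker problem."""
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

# A fairly dead 60 m^3 listening room with an RT60 of 0.4 s lands
# around 160 Hz; livelier or smaller rooms push it higher, consistent
# with only EQ'ing the on-axis response above a few hundred Hz.
f_s = schroeder_freq_hz(0.4, 60.0)
```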
I found the Audyssey algorithm to be virtually useless as implemented in an Onkyo receiver. What was particularly odd was that there was no low-frequency function that might have helped tame a major room resonance---although you are only going to have that effective at a few listener positions. Multiple subs and EQ are the way to go, and much less unsightly than a plethora of traps.
The best room bass I have ever heard is at Toole's house, the result of multiple subs and Todd Welti-derived individual EQs. It is serenely uniform and largely independent of listener position. I hope he can preserve it when he transitions to an envisioned 24 channels! As I am quite happy with listening to music through a "two-channel window" I don't think I would mess with it, but Floyd reminds me that we can't take it with us 🙂
Can do for power distribution, buss bars have already been mentioned... flat conductors broadside coupled.
As well as skin effect there is also the proximity effect, which we discussed a couple of years ago on here; this can further compound the problem, as current density is increased in certain areas of the conductor. I'll try to find the link I posted that showed it.
Thanks marce, I'll look up the proximity effect. Reality is so much more complex than the simplifications we start with.
Does stranded wire (non-litz) versus solid copper, at equivalent resistivity, make a difference in terms of skin effect? I believe not. One teacher said yes a few years ago at boot camp; I think he was wrong on this.
I have learned things here on diyaudio and on this thread too. Thanks, and sorry for any noisy input from me; I've been wasting way too much time on YouTube and have posted links because of a need to interact.
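For reference on the stranded-vs-solid question: the standard skin-depth formula, with copper constants assumed by me. Ordinary (non-litz) stranding does not change the result, because the strands are not insulated from one another and current redistributes across the whole bundle just as in a solid conductor:

```python
import math

RHO_CU = 1.68e-8         # ohm*m, copper resistivity near room temperature
MU_0 = 4.0e-7 * math.pi  # H/m, permeability (copper is ~non-magnetic)

def skin_depth_m(freq_hz, rho=RHO_CU, mu=MU_0):
    """Skin depth: the depth at which current density falls to 1/e."""
    return math.sqrt(2.0 * rho / (2.0 * math.pi * freq_hz * mu))

# Even at the top of the audio band the skin depth in copper is still
# about half a millimetre, comparable to typical strand radii, which is
# why audio-band skin effect in speaker cable is small to begin with.
depth_20k = skin_depth_m(20e3)
```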
First of all, I do not think they are telling all....
What they don't just come out and say is that they think they have a better LOSSY codec, which is somewhat insulting to people's intelligence. It's all a big yawn to me since it goes against one of my primary rules: don't bet the farm on a band-aid to whatever technological limits happen to exist in a narrow time window. In two years everyone might have unlimited BW and storage for a song (so to speak).
Now if they claim their lossy codec is better than lossless, what would you think?
EDIT - Please don't hold the lossless vs. lossy point in doubt; they clearly claim compression below the Shannon entropy limit.
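For the record, the bound being invoked: the Shannon entropy of the symbol distribution is the floor on average bits per symbol for any lossless code. A minimal sketch of mine, treating the data as i.i.d. bytes (the simplest form of the bound; real sources have lower entropy rates once inter-symbol structure is modeled):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data: bytes) -> float:
    """Shannon entropy of the byte distribution: the lossless floor
    on average bits per byte under an i.i.d. model."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant run compresses to ~0 bits/byte losslessly, while a flat
# byte distribution needs the full 8 bits/byte; any codec claiming
# fewer on such data must be throwing information away (i.e. lossy).
low = entropy_bits_per_symbol(b"aaaaaaaa")
high = entropy_bits_per_symbol(bytes(range(256)))
```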
I spoke to Dick Sequerra this afternoon and he says that he has never used an electrolytic cap in the MET7. The value has changed over time from 6 uF to 20 uF; originally it was Mylar, later polypropylene. He knows everything that I know about caps, and MORE! He hates electrolytics and ceramics, just like I do.
Then I was mis-informed.
Wow - interesting guy 🙂
Does not suffer fools to say the least, he was a friend of one of my neighbors.
- John Curl's Blowtorch preamplifier part II