Arny, I think you came here recklessly. No one has been ignoring you. I think you came with a different perspective than many of us here.
Thanks for the personal attack characterizing me as an unthinking, careless person. It reveals a state of mind you may not be aware of.
Of course I came here with a different perspective than many of those here, and it appears that, in your mind, that is a fatal and unmanageable error on my part.
As I have shown, there are many simple logical flaws in the customary way some think around here.
It is clear that there is very limited openness to any thinking but the established thought patterns of those who feel they are the sole determiners of such things around here.
How long do you think I had to read, in order to determine this? ;-)
Lol, three of us posted together.
Perhaps the best way for any future files would be to just present the raw files untrimmed, with no fade-ins, fade-outs, etc., and let those who want to align them perfectly do so themselves.
Just a thought.
Or, you could follow some simple guidelines, such as editing the starts and stops to within a millisecond or two. It takes a second or three per file, and that may be more effort than you think is worthwhile.
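If anyone would rather script the trimming than do it by hand, here's a rough sketch of the idea in Python (assuming numpy and soundfile are installed; the file name and the -60 dB threshold are just placeholders to tune to taste):

Code:
# Rough sketch: trim leading/trailing silence from a WAV so the audible
# start lands within a sample or two of t = 0.
# Assumes: pip install numpy soundfile; "clip.wav" is a placeholder name.
import numpy as np
import soundfile as sf

data, rate = sf.read("clip.wav")               # (samples,) or (samples, channels)
mono = np.abs(data if data.ndim == 1 else data.mean(axis=1))

threshold = mono.max() * 0.001                 # about -60 dB relative to peak
above = np.nonzero(mono > threshold)[0]
start, stop = above[0], above[-1] + 1          # first/last samples above threshold

sf.write("clip_trimmed.wav", data[start:stop], rate)
print(f"trimmed {start / rate * 1000:.2f} ms from the start")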
I've never figured out how to align files other than by eye and visually chopping them. Using Audacity, using LTspice, it's all hard-won knowledge.
And SY himself will never want to conduct such a test because he knows the effort required to conduct a controlled test. So I believe many of us have a similar perspective on this test and its purpose. There are many things involved, such as how to attract participants.
In fact, several people did conduct controlled listening tests on their own, so the claim that it is too hard would seem to fail a reasonable logical test.
It would appear to me that providing the materials for high-quality tests, given that much of the work had already been done and not that much remained, would attract new participants at a low cost.
I suspect that many non-insiders see a struggle between time-honored customs with obvious flaws and modern technology, rooted in a belief in pre-existing sufficiency, personal infallibility, and a lack of desire to make even minor changes.
I tried to make my suggestions as low-key as possible... The personal attacks just came all by themselves... ;-)
I've never figured out how to align files other than by eye and visually chopping them. Using Audacity, using LTspice, it's all hard-won knowledge.
There is no way other than using visual means that I know of, but with a little practice it takes only a few seconds.
In the process one may examine one's data a little more closely, which is not all bad! ;-)
I guess the insight of using cue marks and then chopping based on them was very helpful.
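For what it's worth, the offset between two takes can also be estimated automatically with a cross-correlation rather than by eye. A rough sketch, assuming Python with numpy, scipy and soundfile installed and placeholder file names; it only correlates the first few seconds to keep it quick, and it does none of the drift or gain matching a tool like DiffMaker does:

Code:
# Rough sketch: estimate how much "test.wav" lags "reference.wav" and trim it.
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

ref, rate = sf.read("reference.wav")
test, rate2 = sf.read("test.wav")
assert rate == rate2, "resample first if the sample rates differ"

def mono(x):
    return x if x.ndim == 1 else x.mean(axis=1)

n = min(len(ref), len(test), rate * 5)          # use only the first ~5 seconds
corr = correlate(mono(test)[:n], mono(ref)[:n], mode="full", method="fft")
lag = int(correlation_lags(n, n, mode="full")[np.argmax(corr)])

print(f"test lags reference by {lag} samples ({lag / rate * 1000:.2f} ms)")
if lag > 0:
    sf.write("test_aligned.wav", test[lag:], rate)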
Finding faults in naively designed experiments is like finding cows on dairy farms.
Then don't participate in it. Simple!
Jay said: And SY himself will never want to conduct such a test because...
...he's already done a well-controlled test of this variable, published the results, and is well aware of the propensity of some people to cheat in tests where there are holes in the blinding procedure and, yes, uncontrolled variables other than the variable being tested.
Thanks for the personal attack characterizing me as an unthinking, careless person.
Sorry, but that was not an attack 🙂
It reveals a state of mind you may not be aware of.
Anything is possible. As I observed it, you came here at post #68 (IIRC) with your complaint regarding "no reference file". Many posted against you, thinking that you hadn't read carefully that the reference would be provided later, for reasons clearly outlined. Even SY was against you.
Then somebody came along suggesting Lacinato ABX, but you surprisingly responded as though it was Foobar he was suggesting. You didn't read carefully, that's what I thought. You might have been busy or in a hurry, or you might just have been careless (note: English is not my language, so I may pick the wrong word).
Mondogenerator, IIRC, mentioned that he often talks before thinking. Lately he picked a preference (in the driver test I talked about previously) and regretted it; he shouldn't have picked that clip. It was a 2-week blind test and he made up his mind in a few minutes. Do I think that he is "terrible"? No. I think it takes a mature personality to know and acknowledge one's own weaknesses and laugh about them.
Am I wrong to assume that you're a mature personality who won't be offended by my post?
How long do you think I had to read, in order to determine this? ;-)
Those smileys are the reason I think you're not that emotional a person 😀
...he's already done a well-controlled test of this variable, published the results,
I think I've tried to find it, with no result. I believe it is not online but in a magazine, correct? I'm interested to see what a controlled test is like. Is it possible at all? I think yes, it is possible. But I doubt I would agree with any blind test. My prejudice.
Audio DiffMaker
> B-A parameters: -663,9usec, 0,211dB (L), *0,032dB (R)..Corr Depth: 32,1 dB (L), 48,4 dB (R)
> C-A parameters: -377,3usec, 0,106dB (L), *0,007dB (R)..Corr Depth: 38,3 dB (L), 65,5 dB (R)
> D-A parameters: -99,65msec, 0,433dB (L), *0,271dB (R)..Corr Depth: 26,0 dB (L), 30,4 dB (R)
> E-A parameters: -285,7usec, 0,258dB (L), *0,058dB (R)..Corr Depth: 30,4 dB (L), 43,9 dB (R)
> F-A parameters: -149usec, 0,304dB (L), *0,087dB (R)..Corr Depth: 29,0 dB (L), 40,0 dB (R)
Mooly, regarding auto-synchronising: prepare a file with added sawtooth tones at the beginning and end, and use Audio DiffMaker.
Look at this (the comments too):
Archimago's Musings: PROTOCOL: [UPDATED] The DiffMaker Audio Composite (DMAC) Test.
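In case anyone wants to add those marker tones programmatically rather than in an editor, here is a rough sketch (Python with numpy, scipy and soundfile assumed; the 1 kHz / 50 ms burst is an arbitrary choice, not anything DiffMaker requires):

Code:
# Rough sketch: prepend and append a short sawtooth burst to a WAV so an
# aligner has a sharp feature to lock onto. "clip.wav" is a placeholder name.
import numpy as np
import soundfile as sf
from scipy.signal import sawtooth

data, rate = sf.read("clip.wav")

t = np.arange(int(0.05 * rate)) / rate           # 50 ms marker
marker = 0.5 * sawtooth(2 * np.pi * 1000 * t)    # 1 kHz sawtooth at -6 dBFS
if data.ndim == 2:                               # match the channel count
    marker = np.tile(marker[:, None], (1, data.shape[1]))

sf.write("clip_marked.wav", np.concatenate([marker, data, marker]), rate)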
Thanks for the link, sikahr.
I've never used DiffMaker and so don't know what all the info represents. I'm guessing it's the time difference between each file and the reference (A), and some measure of RMS levels averaged over time? No idea what corr depth represents 🙂
Mooly, it's freeware that aligns two files temporally, then subtracts them to create a new file of the difference. Thank Bill Waslo for another gem.
From the link:
If you peruse the DiffMaker site, it's quite obvious what this program does. It basically takes two recordings of the audio (presumably under 2 conditions or with different hardware), inverts one of them, and applies it to the other to see if the signals "null" each other out. The "magic" of course is in the algorithm used to align the samples in terms of time (including sample rate drift), and signal amplitude. If the recordings are identical, there should be a complete null where the result is silence. The program will create the "null" WAV file to review (very useful) and spit out a number representing the amount of "audio energy" left in the resulting null'ed audio file - expressed as dB's. The program calls this the "Correlated Null Depth". The higher this value, the more correlated the 2 samples are (ie. the "closer" they sound).
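To make the "Correlated Null Depth" idea concrete, here is a back-of-envelope sketch of a plain null test. This is not DiffMaker's actual algorithm (no time-drift or gain compensation), just subtract-and-measure, assuming Python with numpy and soundfile and two already-aligned files with the same sample rate and channel count:

Code:
# Rough sketch: subtract two aligned files and express the residual in dB
# below the reference. File names are placeholders.
import numpy as np
import soundfile as sf

a, rate = sf.read("reference.wav")
b, rate2 = sf.read("test.wav")
assert rate == rate2, "resample first if the sample rates differ"

n = min(len(a), len(b))
diff = a[:n] - b[:n]

rms = lambda x: np.sqrt(np.mean(np.square(x)))
depth_db = 20 * np.log10(rms(a[:n]) / max(rms(diff), 1e-12))

sf.write("difference.wav", diff, rate)               # listen to what's left over
print(f"residual is {depth_db:.1f} dB below the reference")

The bigger that number, the better the two files null against each other, which is the same sense in which the blog uses the term.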
The TLE2072 is an oddball. I've used it and really wanted to like it, but over many weeks didn't. Was it a real problem, though, or was it imagined?
If it does happen to you, I believe it is real. It didn't take me weeks, of course, only half an hour for fatigue, and a few minutes for unmusicality when compared head-to-head with the TL072. But this is only ONE occasion. The more occasions, the more I will believe.
I don't think the TLE2072 (C) is an improvement on the TL072 (A), because from the spec it is obvious that they traded off some specs for higher slew rate and bandwidth, meaning that it will be better only for limited purposes. I don't know what the real problem with high-slew-rate opamps is. If it is stability, then you have confirmed that there was no oscillation. Funny that this time the LM4562 (E) behaved well. It is even hard for me to believe that the opamp sounded wooly, almost as wooly as the JRC4560 (F).
I felt fatigue with the NE5532 (D) in less than half an hour. This is my classic experience with this opamp, so no surprise for me here. This strengthens my judgement of this opamp. In a head-to-head comparison with the 833 (B), the 833 wins hands down.
Only the TL072 (A) and LM4562 (E) are outside my expectations. But I know the LM4562 only from a previous listening test (I have this opamp, the JRC4562, but have never really listened to it except in amplifiers, such as the Apex A9).
The TL072 (A), anyhow, has proven to me too to be musical and not fatiguing, just like my real-life experience. But I didn't expect it to sound so dynamic, outperforming all the other opamps. And the lack of definition (high noise?) was indeed interesting. I will pay attention to that the next time I compare the TL072 in my normal circuits.
Mooly, it's freeware that aligns two files temporally, then subtracts them to create a new file of the difference. Thank Bill Waslo for another gem.
Thanks SY. Only so many hours in the day ya know for learning new stuff 😉
From the link:
If you peruse the DiffMaker site, it's quite obvious what this program does. It basically takes two recordings of the audio (presumably under 2 conditions or with different hardware), inverts one of them, and applies it to the other to see if the signals "null" each other out. The "magic" of course is in the algorithm used to align the samples in terms of time (including sample rate drift), and signal amplitude. If the recordings are identical, there should be a complete null where the result is silence. The program will create the "null" WAV file to review (very useful) and spit out a number representing the amount of "audio energy" left in the resulting null'ed audio file - expressed as dB's. The program calls this the "Correlated Null Depth". The higher this value, the more correlated the 2 samples are (ie. the "closer" they sound).
Thanks for the explanation.
So can you use DiffMaker to align the files for you and then export the audio as separate files? What I'm getting at is whether I could have aligned all my six files to... let's say a seventh track arbitrarily called 'Timing'... and then re-exported the files back as .wav. If so, then all six would be in perfect agreement. Is that possible, even if done a file at a time?
Yes, possible, one file at a time.
My advice: play a little with the program; it's all very clear (at least to me).
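If clicking through the program six times gets old, the same one-file-at-a-time idea can be scripted along the lines of the cross-correlation sketch above (Python with numpy, scipy and soundfile assumed; 'timing.wav' and the single-letter file names are placeholders):

Code:
# Rough sketch: align several takes to one "Timing" reference, one at a time,
# and re-export each as .wav.
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

ref, rate = sf.read("timing.wav")
ref_m = ref if ref.ndim == 1 else ref.mean(axis=1)

for name in ["A", "B", "C", "D", "E", "F"]:            # placeholder file names
    take, _ = sf.read(f"{name}.wav")
    take_m = take if take.ndim == 1 else take.mean(axis=1)
    n = min(len(ref_m), len(take_m))
    corr = correlate(take_m[:n], ref_m[:n], method="fft")
    lag = int(correlation_lags(n, n)[np.argmax(corr)])
    if lag > 0:                                        # take starts late: trim it
        aligned = take[lag:]
    else:                                              # take starts early: pad it
        aligned = np.concatenate([np.zeros((-lag,) + take.shape[1:]), take])
    sf.write(f"{name}_aligned.wav", aligned, rate)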
Don't try to use DiffMaker on files longer than 30 seconds. It can run into problems after that.
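If a capture runs longer than that, one workaround is to chop it into sub-30-second pieces first and null each piece separately. A rough sketch, assuming Python with soundfile and a placeholder file name:

Code:
# Rough sketch: split a long WAV into 25-second chunks so each piece stays
# comfortably under the ~30-second limit mentioned above.
import soundfile as sf

data, rate = sf.read("long_capture.wav")
chunk = 25 * rate                                      # 25 seconds of samples

for i in range(0, len(data), chunk):
    sf.write(f"long_capture_part{i // chunk:02d}.wav", data[i:i + chunk], rate)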