Open Source DSP XOs - Page 31 - diyAudio
Old 30th June 2012, 02:15 PM   #301
diyAudio Member
 
steph_tsf
 
Join Date: Mar 2008
Quote:
Originally Posted by abraxalito View Post
He already said that a silent PC could do millions of taps - you're seriously not giving him a hard enough time here steph by limiting it to six thousand
Indeed; however, in the long run fools and trolls can be right. There may be a method for condensing time, in the form of hardware artificially maintained at the causal edge, providing the answer to any computation one microsecond after the question has been asked.

This way a one-million-tap FIR could execute in one microsecond, fast enough for six 96 kHz audio channels. Currently we have no idea how to do this, because we don't know how the brain operates, how it stores data, or how it processes data.

On March 15th 2012, the French scientist Jean-Marie Souriau died. From 1960 onward he kept suggesting that we are not in reality itself. He kept suggesting that we are equipped with hard-wired "group engines" of a mathematical nature, acting as perception devices. He kept suggesting that classical mechanics is one mathematical group (the Galilean group), that relativity is another (the Poincaré group), and that general relativity is yet another. He kept suggesting that we, as living organisms, are trapped in particular, complicated sections of "hidden groups". Indeed, the whole discipline of geometry is nowadays described in terms of mathematical groups.

According to Jean-Marie Souriau, from a geometrical and mathematical perspective, time and energy can be manipulated, even annihilated, using particular group sections regarded as geometric sections. In an interview dated December 27th 2010, he said that monkeys have abilities that humans don't have, because they have a better "Euclidean group" wired into their brains. In the same interview Jean-Marie Souriau said that some day mathematics, geometry and physics will meet on the neuroscience playground and generate huge progress, but that it won't happen quickly because those sciences are usually kept apart.

A very optimistic theory - nothing to do with Jean-Marie Souriau's words here - would be that once you manage to apply the correct geometric transformation, you end up "virtually" connected to "reality", hence able to manipulate time, space and energy at will. There are people saying that humans can be brought to do this by accident (near-death experiences), by drugs (DMT), and possibly, later on, by science. You will always find people ready to say that a copper wire has infinite digital processing power because it permanently executes a one-million-sample FFT followed by an inverse FFT on the signal it is conveying, taking the required energy from nowhere and exactly compensating the inherent delay by executing in the future. Or in the past? And that is only a copper wire. Imagine what they would say about a whole brain. Gosh, I'm completely lost!

Last edited by steph_tsf; 30th June 2012 at 02:25 PM.
  Reply With Quote
Old 30th June 2012, 02:24 PM   #302
chaparK
diyAudio Member
 
Join Date: Apr 2010
Location: Luxembourg
Quote:
Originally Posted by abraxalito View Post
You're welcome to contribute to that thread I already linked, this one clearly isn't for you
CuTop is right, and the point is this: if you guys intend to make extensive use of long FIRs, then you should stop counting the MACs of the direct (time-domain) implementation and look instead at the classic ways to implement FIRs in the frequency domain. You will save a lot of processing resources - but you'll have to implement an FFT and deal with output delay.

For you, abraxalito:

A FIR is a convolution, denoted

y[n] = x[n]*h[n]

y is the output, x the input and h is your filter. n is the time index, and * is the convolution operator.

In order to do it in the frequency domain, you first compute the Fourier transforms of x and h:

X = F(x) and H = F(h)

The convolution in the time domain translates into a simple multiplication in the frequency domain:

Y = X . H

where Y is the Fourier transform of your output and '.' is the multiply operator.

All you need to do now is to compute the inverse Fourier transform of Y in order to recover the time-domain samples.

So the global operation is:

y[n] = invF(F(x[n]).F(h[n]))

Now, what's the fuss?
Well, the direct form of convolution is power hungry: an M-tap filter costs M multiply-accumulates per output sample, so the work for a block of output comparable to the filter length grows with the *square* of that length.
The frequency-domain convolution, despite looking more complicated, is much less power hungry if you compute the Fourier transform and its inverse using an FFT - roughly N·log(N) operations instead of N².
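
To make the recipe above concrete, here is a minimal numpy sketch (my own illustration of the y[n] = invF(F(x[n]).F(h[n])) identity, not production code). Note the zero-padding: without it the spectral product implements a circular rather than a linear convolution.

Code:
import numpy as np

def fft_fir(x, h):
    """Linear convolution of input x with FIR h, done in the frequency domain.

    Equivalent to np.convolve(x, h), but computed as invF(F(x) . F(h)),
    zero-padded so the product gives a linear (not circular) convolution.
    """
    n = len(x) + len(h) - 1               # length of the linear convolution
    nfft = 1 << (n - 1).bit_length()      # next power of two, FFT-friendly
    Y = np.fft.rfft(x, nfft) * np.fft.rfft(h, nfft)
    return np.fft.irfft(Y, nfft)[:n]

# quick check against the direct (time-domain) form
x = np.random.randn(48000)                # one second at 48 kHz
h = np.random.randn(1024)                 # a 1024-tap filter
assert np.allclose(fft_fir(x, h), np.convolve(x, h))

For streaming audio you would of course process the input in blocks (overlap-add or overlap-save) instead of transforming the whole signal at once.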

Hope it makes sense!
  Reply With Quote
Old 30th June 2012, 02:30 PM   #303
diyAudio Member
 
Join Date: Feb 2009
Location: UK
Quote:
Originally Posted by steph_tsf View Post
Indeed; however, in the long run fools and trolls can be right.
Nice.
  Reply With Quote
Old 30th June 2012, 02:30 PM   #304
diyAudio Member
 
abraxalito
 
Join Date: Sep 2007
Location: Hangzhou - Marco Polo's 'most beautiful city'. 700yrs is a long time though...
Blog Entries: 104
Quote:
Originally Posted by chaparK View Post
CuTop is right, and the point is this: if you guys intend to make extensive use of long FIRs, then you should stop counting the MACs of the direct (time-domain) implementation and look instead at the classic ways to implement FIRs in the frequency domain.
Sure, but it's a very big 'IF' there from my pov. I'm content with FIRs in the range 3-128 taps (take a look at my blog for my 3-tap one). If I were doing bass control (no plans to, as yet) I'd decimate down first.

Quote:
You will save a lot of processing resources - but you'll have to implement a FFT and deal with output delay.
That's debatable - I'm definitely interested if you can show how it all fits on a simple M0/M3 with 8k RAM and 32k flash (and no FPU). Otherwise you're tilting at windmills
__________________
I have the advantage of having found out how hard it is to get to really know something... how easy it is to make mistakes and fool yourself. - Richard Feynman
  Reply With Quote
Old 30th June 2012, 02:44 PM   #305
diyAudio Member
 
steph_tsf
 
Join Date: Mar 2008
Quote:
Originally Posted by chaparK View Post
CuTop is right, and the point is this: if you guys intend to make extensive use of long FIRs, then you should stop counting the MACs of the direct (time-domain) implementation and look instead at the classic ways to implement FIRs in the frequency domain. You will save a lot of processing resources - but you'll have to implement an FFT and deal with output delay. Hope it makes sense!
Yes indeed, that's a strong point. I'm very curious to see how this will look, practically.

Most of the time, there will be a Bode plot (gain and phase) associated with a given speaker driver, exhibiting a 2nd-order high-pass at something like 100 Hz (Q maybe 1.0), a few semi-random in-band irregularities within a 6 dB corridor from 100 Hz to 5 kHz, possibly a +10 dB resonance at 5 kHz, then a quite irregular low-pass above 5 kHz, possibly 3rd-order, with a few high-frequency resonances corresponding to the cone breaking up. Most of the time, the designer's ambition is to reshape that Bode plot into a desired one, like a nice 4th-order double-Butterworth highpass at 300 Hz and a 4th-order double-Butterworth lowpass at 3 kHz, with the phase exactly matching a minimum-phase behaviour.

The direct FIR method is very simple: you take the actual driver impulse response, you take the idealized (target) driver impulse response, and by comparing them you derive the FIR coefficients that you need. You apply a windowing function, and you are done.
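
As an aside, one common way to turn that comparison into numbers (a sketch of mine, not necessarily the exact procedure meant above) is to divide the target spectrum by the measured spectrum, regularise, come back to the time domain and window:

Code:
import numpy as np

def correction_fir(measured, target, taps=1024, eps=1e-3):
    """Illustrative only: FIR that reshapes the measured driver response
    towards the target response. measured/target are impulse responses
    at the same sample rate; eps regularises near-zero (out-of-band) bins."""
    nfft = 1 << (max(len(measured), len(target), taps) - 1).bit_length()
    M = np.fft.rfft(measured, nfft)
    T = np.fft.rfft(target, nfft)
    H = T * np.conj(M) / (np.abs(M) ** 2 + eps)   # regularised target / measured
    h = np.fft.irfft(H, nfft)[:taps]
    taper = np.hanning(2 * taps)[taps:]           # fade out the tail only
    return h * taper                              # windowing, as described above

Real designs also have to handle the overall delay and the choice of a minimum-phase versus linear-phase target, which this sketch glosses over.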

Based on this, how would you calculate the FFT coefficients (real and imaginary) that generate exactly the same filtering function, both in magnitude and in phase?

Say we apply a 1024-point FFT at 48 kHz, leading to a frequency resolution of about 47 Hz. Are the FFT and inverse FFT feasible, in real time, on a Cortex-M4 clocked at 72 MHz?

Last edited by steph_tsf; 30th June 2012 at 02:54 PM.
  Reply With Quote
Old 30th June 2012, 02:51 PM   #306
chaparK
diyAudio Member
 
Join Date: Apr 2010
Location: Luxembourg
Quote:
Originally Posted by abraxalito View Post
That's debatable - I'm definitely interested if you can show how it all fits on a simple M0/M3 with 8k RAM and 32k flash (and no FPU). Otherwise you're tilting at windmills
I've seen FFTs running on tiny fixed-point processors; I don't think you're going to have a problem running one on your ARM.
It's not my purpose to show anything - I just reacted when CuTop was accused of trolling, which he wasn't. Sorry if I let you think I was trying to demonstrate something.

Now, regarding the FFT on the ARM, I know there's a DSP library from ARM themselves. There should be an FFT in it, and if that's the case you won't even have to implement it.
  Reply With Quote
Old 30th June 2012, 03:04 PM   #307
chaparK
diyAudio Member
 
Join Date: Apr 2010
Location: Luxembourg
Quote:
Originally Posted by steph_tsf View Post
Based on this, how would you calculate the FFT coefficients (real and imaginary) that generate exactly the same filtering function, both in magnitude and in phase?
OK, so your filter coefficients in the time domain are
h[n] = h[0], h[1], ..., h[M] where M+1 is the length of your filter

Your frequency-domain coefficients are simply the Fourier transform of the sequence h:

H(k) = F(h[n])

Quote:
Say we apply a 1024-point FFT at 48 kHz, leading to a frequency resolution of about 47 Hz. Are the FFT and inverse FFT feasible, in real time, on a Cortex-M4 clocked at 72 MHz?
For the convolution itself, you don't need to care about 'frequency resolution'. The length of the filter determines what you can achieve, but that is independent of how the FFT convolution is carried out.

Now why do you want a 1024-point FFT?
Let's say your original filter is a 1024-point sequence. You can always split it into 8 successive 128-point sequences and thus use a 128-point FFT...
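
A rough sketch of that idea (my own illustration, offline rather than streaming): split h into 128-tap partitions, convolve the input with each partition through the FFT, and add the partial results at offsets of 128 samples. Note that each per-partition FFT still has to be zero-padded so the spectral product gives a linear, not circular, convolution:

Code:
import numpy as np

def partitioned_fft_fir(x, h, part_len=128):
    """Partitioned FFT convolution (offline sketch, not a streaming version).

    h is split into part_len-tap partitions; x is convolved with each
    partition in the frequency domain and the partial outputs are summed
    at offsets of part_len samples. The result equals np.convolve(x, h)."""
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(h), part_len):
        h_p = h[start:start + part_len]
        n = len(x) + len(h_p) - 1
        nfft = 1 << (n - 1).bit_length()     # zero-pad: linear, not circular
        y_p = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(h_p, nfft), nfft)[:n]
        y[start:start + n] += y_p            # this partition acts 'start' samples later
    return y

# sanity check: same result as the direct 1024-tap convolution
x = np.random.randn(4096)
h = np.random.randn(1024)
assert np.allclose(partitioned_fft_fir(x, h), np.convolve(x, h))

A real-time version would also chop the input into blocks (overlap-add or overlap-save) and reuse the precomputed partition spectra, but the point stands: the 1024-tap result is numerically the same filtering action (up to rounding), so nothing is lost in 'frequency resolution' by using shorter FFTs.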
  Reply With Quote
Old 30th June 2012, 03:08 PM   #308
diyAudio Member
 
abraxalito
 
Join Date: Sep 2007
Location: Hangzhou - Marco Polo's 'most beautiful city'. 700yrs is a long time though...
Blog Entries: 104
Quote:
Originally Posted by chaparK View Post
I've seen FFTs running on tiny fixed-point processors, I don't think you're going to have a problem running it on your ARM.
I've coded FFTs - on a 68k back in 1986. It did use up a fair amount of RAM (I had 64k bytes though), and RAM is quite limited in this application as I copy the code to RAM for increased speed. This is not just about writing the code: your argument was that it would be a better use of resources. That's what I'd like demonstrated - for you to back up that claim. Justify that for (say) a 128-point filter kernel, an FFT convolution would be a better use of resources in this particular instance.
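
(For a rough feel of the numbers being argued about, here is a back-of-envelope sketch of mine using textbook operation counts for overlap-save convolution; it ignores memory traffic, bit reversal, twiddle generation and all the other overhead that bites on a small fixed-point part, so take the constants with a grain of salt.)

Code:
# Rough multiply counts per output sample: direct FIR versus overlap-save FFT.
# Assumptions (mine): FFT size N = 2*L, each length-N transform costs about
# 2*N*log2(N) real multiplies (a complex-FFT-level estimate; real FFTs are
# cheaper), and one complex multiply costs 4 real multiplies.
import math

def direct_per_sample(L):
    return L                                   # one MAC per tap per output sample

def overlap_save_per_sample(L, N):
    block = N - L + 1                          # new output samples per block
    ffts = 2 * (2 * N * math.log2(N))          # one forward + one inverse transform
    product = 4 * (N // 2 + 1)                 # spectral multiply
    return (ffts + product) / block

for L in (128, 1024, 65536):
    N = 2 * L
    print(L, direct_per_sample(L), round(overlap_save_per_sample(L, N)))
    # -> 128 taps:   ~128 vs ~68   (barely 2:1 before any overhead)
    # -> 1024 taps:  ~1024 vs ~92
    # -> 65536 taps: ~65536 vs ~140

On those rough numbers a 128-tap kernel gains barely a factor of two before overhead, whereas tens of thousands of taps gain a couple of orders of magnitude.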

Quote:
It's not my purpose to show anything - I just reacted when CuTop was accused of trolling, which he wasn't.
He wasn't accused of trolling; his behaviour was described as trolling. Clearly he was, and he's done nothing since which is not consistent with trolling.

Quote:
Now re. the FFT on the ARM, I know there's a DSP library from ARM themselves. There should be a FFT in it, and if it's the case you even won't have to implement it
I have no doubt there is. But the FFT is just part of the solution to FFT convolution, and a part I'm quite familiar with - enough for me to code it for myself. However, this thread is about FIR filters which can be modelled using LTSpice.

Since you clearly know a fair amount about DSP, do you have any answer to the question I posed earlier about round-off errors in FFT convolution when using fixed point, and how those compare with direct FIR convolution? I'm curious to know whether Dr Smith's arguments hold up for fixed point.
__________________
I have the advantage of having found out how hard it is to get to really know something... how easy it is to make mistakes and fool yourself. - Richard Feynman
  Reply With Quote
Old 30th June 2012, 03:13 PM   #309
diyAudio Member
 
steph_tsf
 
Join Date: Mar 2008
Quote:
Originally Posted by chaparK View Post
For the convolution itself, you don't need to care about 'frequency resolution'. Now why do you want a 1024-point FFT?
Let's say your original filter is a 1024-point sequence. You can always split it into 8 successive 128-point sequences and thus use a 128-point FFT...
Don't you have the impression that, operating at a 48 kHz sampling frequency, you need a 1024-point FFT (hence a 47 Hz frequency resolution) to exercise decent control over the low part of the spectrum, say between 30 Hz and 300 Hz? In the example I supplied, you may have noticed that for accurately controlling the 300 Hz highpass (as the target), we may need a frequency resolution better than the quoted 47 Hz.

I have the impression that your assertions are correct from a mathematical point of view, but completely decoupled from, and potentially wrong in, the physical application.
  Reply With Quote
Old 30th June 2012, 04:16 PM   #310
diyAudio Member
 
Join Date: Feb 2009
Location: UK
Quote:
Originally Posted by abraxalito View Post
He wasn't accused of trolling; his behaviour was described as trolling. Clearly he was, and he's done nothing since which is not consistent with trolling.
Nice.

It's a real dilemma in these forums.

There's an interesting-sounding thread about DSP and active crossovers called "Open Source DSP XOs". It doesn't mention "No PCs allowed". It doesn't mention "No FFTs allowed".

There are 29 pages of stuff about specific hardware, and the difficulty of implementing a simple crossover with any number of esoteric processors.

Whereas I've got a system running on a PC using my own software which can implement millions of FIR taps. Personally, I'm using 65536 taps for each of my six channels and the PC isn't even breaking into a sweat. Why shouldn't I ask why people aren't doing the same thing? Mine is an "Open source DSP XO" (anyone is welcome to my code if they ask nicely) so I'm not even off topic.

Ah, but didn't I know? No FFTs or PCs are allowed in this thread, because everyone here knows there's something mysteriously wrong with that approach. Eject the outsider as a troll!

But then it turns out that some people here, at least, don't know about FFTs and how they can bestow upon you virtually unlimited FIR processing power. Maybe there is some theory about 32 bit floats not providing enough precision, but why not use 64 bits instead? Your PC can do it!

But anyway, I'll leave you to it. Good luck!
  Reply With Quote
