STM32H7 Dual Software EQ - so close, yet so far.

I've been looking at how I spend the CPU budget I have left after optimisation.

It comes back to the two EQ channels. I feel like I can make it work for this prototype, but it also feels a bit "half done". Having 1.5 EQ channels is a bit of a cop-out, even though it works for my immediate needs.

Additionally, there are things I have not done. Holes I have not patched. Features I need to add. Not least interaction with the GUI.

More worrying things (performance- and CPU-cycle-cost-wise) include:

  • Silence detection - putting 0s through IIR filters doesn't always result in 0s out (the feedback state only decays asymptotically, and quantisation can sustain small limit cycles), which results in a fissle/crackle sound over silence. A sketch of one mitigation follows below.
  • Anti-clipping, limiter, compressor (per EQ).
  • Energy, "volume", loudness or just plain digital dB calculations, producing a periodic output for a graphical "bar meter" per channel.

All of these things might end up costing that 0.5 EQ channel.
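
For the silence detection, what I have in mind is something like the sketch below: watch the block peak, and once the input has been quiet long enough, flush the filter state and output true zeros. This is only an outline; the thresholds, block size and state buffer layout are placeholders, not code from the project.

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE      64      /* samples per processing block             */
#define SILENCE_THRESH  1e-6f   /* roughly -120 dBFS, pick to taste         */
#define SILENCE_BLOCKS  100     /* quiet blocks required before hard-muting */

/* Placeholder for whatever per-channel filter state the EQ keeps. */
typedef struct {
    float    state[4 * 5];      /* e.g. 5 biquads x 4 state words */
    uint32_t quiet_blocks;
    int      muted;
} eq_channel_t;

/* Returns 1 if this block should be hard-muted: the IIR state is flushed
 * and the output forced to true zeros so the decaying feedback can't
 * crackle away over silence.                                             */
static int silence_gate(eq_channel_t *ch, const float *in, float *out)
{
    float peak = 0.0f;
    for (uint32_t i = 0; i < BLOCK_SIZE; i++) {
        float a = fabsf(in[i]);
        if (a > peak)
            peak = a;
    }

    if (peak < SILENCE_THRESH) {
        if (ch->quiet_blocks < SILENCE_BLOCKS) {
            ch->quiet_blocks++;
        } else if (!ch->muted) {
            memset(ch->state, 0, sizeof(ch->state));  /* flush filter history */
            ch->muted = 1;
        }
    } else {
        ch->quiet_blocks = 0;
        ch->muted = 0;
    }

    if (ch->muted)
        memset(out, 0, BLOCK_SIZE * sizeof(float));

    return ch->muted;
}
```

The per-block peak scan is cheap next to the biquads themselves, and the same peak value could double as the input to the bar meter.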

Again, depending on optimisations such as ARM DSP accelerated biquad filters, it might be better to make a single-channel EQ (multi-in, multi-out) and provide 5, 6 or 7 full 24-bit/96 kHz bands, rather than shoehorn in 2 channels of 5 and 3 bands and have little overhead left.
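
For reference, this is roughly what one N-band channel looks like on top of the CMSIS-DSP cascaded biquads. The coefficient values are left out; the point is just how the arrays and the per-block call scale with the band count.

```c
#include "arm_math.h"

#define NUM_BANDS   6      /* one biquad per EQ band, 5..7 to taste */
#define BLOCK_SIZE  64

/* 5 coefficients per stage: b0, b1, b2 and the two feedback terms
 * (negated relative to the textbook form, per the CMSIS convention). */
static float32_t eq_coeffs[5 * NUM_BANDS];
/* 4 state words per stage for the direct form 1 implementation.      */
static float32_t eq_state[4 * NUM_BANDS];

static arm_biquad_casd_df1_inst_f32 eq;

void eq_init(void)
{
    /* eq_coeffs would be filled from the peaking/shelf design of each band. */
    arm_biquad_cascade_df1_init_f32(&eq, NUM_BANDS, eq_coeffs, eq_state);
}

void eq_process(float32_t *in, float32_t *out)
{
    arm_biquad_cascade_df1_f32(&eq, in, out, BLOCK_SIZE);
}
```

Going from 5 to 7 bands only changes NUM_BANDS and the two array sizes, so the per-sample cost scales linearly and is easy to budget.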

It would mean pre-routing I2S between the 2x H7-cum-DSPs. That is why that "Spare DSP" is sitting there waiting for its opportunity.

Because it's 100% bespoke low-level code, I have many options, such as simply using the H7s as "send/return" digital EQs and routing any channel to any EQ. I can also intermix "bands" and "channels" so that band 1 runs on both channel 1 and channel 2, but bands 2 and 3 are channel 1 only... etc. The main downside of that is complexity, particularly in terms of config, HID and user interaction. With that kind of dynamic configuration you are moving toward Analog's SigmaDSPs... requiring the GUI to be offloaded to the PC!
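
To make the band/channel intermixing concrete, a hypothetical config structure (nothing here is implemented and the field names are made up):

```c
#include <stdint.h>

#define MAX_BANDS     8
#define NUM_CHANNELS  2

/* Each band carries a bitmask of the channels it applies to, so one band
 * can run on both channels while others are channel 0 only.             */
typedef struct {
    float   freq_hz;
    float   gain_db;
    float   q;
    uint8_t channel_mask;   /* bit n set => band runs on channel n */
    uint8_t enabled;
} eq_band_cfg_t;

typedef struct {
    eq_band_cfg_t band[MAX_BANDS];
    uint8_t       num_bands;
} eq_routing_cfg_t;

/* Example: band 0 on both channels, bands 1 and 2 on channel 0 only. */
static const eq_routing_cfg_t demo_cfg = {
    .band = {
        {  100.0f,  3.0f, 0.7f, 0x03, 1 },
        { 1000.0f, -2.0f, 1.0f, 0x01, 1 },
        { 8000.0f,  1.5f, 0.7f, 0x01, 1 },
    },
    .num_bands = 3,
};
```

The complexity cost is visible straight away: every field in that structure needs a home in the HID or GUI.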

The other aspect is whether there will be anything at the end worth sharing with the community, and therefore whether it needs to be dynamic and configurable for many use cases... versus just hardcoding it for exactly what I want now and recompiling if that changes. Seems selfish, but maybe this is just my thing and nobody would really be interested in such a guerrilla Software DSP. (Which it has just named itself.)
 
By the way, I found it cool that if you trigger on the LRClk, the data trace dances to the beat. It makes sense (little endian), but I didn't expect it and had a little "chortle". It's pretty cool, as the triggering at 20% and 80% (or whatever is defined) as logic low and high is correct. It actually gave me ideas about a similar log2 volume meter from bit testing.
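
The bit-testing idea, roughly: take the peak magnitude of a block and find its highest set bit, which is floor(log2) for free and gives a bar with roughly 6 dB per segment. A minimal sketch, assuming 24-bit samples sitting in an int32 and GCC's CLZ builtin:

```c
#include <stdint.h>

#define METER_SEGMENTS  24   /* one segment per bit of a 24-bit sample */

/* Returns 0..METER_SEGMENTS: the position of the highest set bit of the
 * block's peak magnitude, i.e. floor(log2(peak)) + 1. Each segment is
 * roughly 6 dB, which is plenty of resolution for a bar meter.         */
static uint32_t bar_meter_level(const int32_t *samples, uint32_t n)
{
    uint32_t peak = 0;
    for (uint32_t i = 0; i < n; i++) {
        int32_t  s   = samples[i];
        uint32_t mag = (uint32_t)(s < 0 ? -s : s);   /* 24-bit audio in an int32 */
        if (mag > peak)
            peak = mag;
    }
    if (peak == 0)
        return 0;                                    /* silence: nothing lit */

    /* __builtin_clz maps to the single-cycle CLZ instruction on the M7. */
    uint32_t msb = 31u - (uint32_t)__builtin_clz(peak);
    return (msb >= METER_SEGMENTS) ? METER_SEGMENTS : msb + 1u;
}
```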

An interesting point here is... I thought... that edge detection circuits for lower frequencies like I2S tend not to be rising-edge detectors anymore, like in old TTL circuits. Instead they supersample the whole waveform and pick the transition sample point, say simply by putting the waveform through an OR gate with the master clock and feeding the result through a shift register, allowing the output to be taken as the first high/low "bit", with its timing accurate and synchronised to the master clock. To be affected by jitter it would need to jitter by half a master clock cycle.

I could be wrong though. I know UART does this over-sampling and I thought I read the STM32 I2S peripherals work that way.
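
Just to pin down what I mean, here is a software analogue of that shift-register idea; it is not a claim about how the ST peripheral is actually implemented.

```c
#include <stdint.h>

/* Software model of oversampled edge detection: the line is sampled once
 * per master-clock tick and shifted into a history register. An edge is
 * reported only when the last few samples agree on a new stable level.   */
typedef struct {
    uint8_t history;     /* last 8 samples of the line, LSB = newest */
    uint8_t level;       /* current debounced logic level            */
} edge_detect_t;

/* Call once per master-clock tick with the raw sampled line level.
 * Returns +1 on a rising edge, -1 on a falling edge, 0 otherwise.   */
static int edge_detect_tick(edge_detect_t *d, uint8_t sample)
{
    d->history = (uint8_t)((d->history << 1) | (sample & 1u));

    /* Require three consecutive agreeing samples before accepting the
     * new level, similar to UART majority-vote oversampling.          */
    uint8_t last3 = d->history & 0x07u;

    if (d->level == 0 && last3 == 0x07u) {
        d->level = 1;
        return +1;
    }
    if (d->level == 1 && last3 == 0x00u) {
        d->level = 0;
        return -1;
    }
    return 0;
}
```

A glitch shorter than the agreement window gets ignored, and the reported edge is quantised to the master clock, which is the property I was getting at above.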
 
Just for interest, as I haven't done any further work on it yet: I did notice that the bit clock "jitter" anomaly was/is transient. It's always at a specific interval, so it may have a traceable cause. Anyway, presently it's not doing it; rising and falling edges are congruent down as far as my scope can see, even with the persistence set to infinity and the trace run for an hour. Not one edge out of place.

Then an hour later the anomaly is back. I'm going to wait till I start tidying up into PCBs before I diagnose clock noise or anomalies. Breadboards are a source of many such issues.