PIC32 bare metal Audio DSP with Serial Console

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Hello,

I am looking for somebody with the required knowledge and experience to team up on a PIC32MX and PIC32MZ bare metal Audio DSP with Serial Console.

A PIC32 microcontroller equipped with one or more SPI interfaces supporting "audio mode" is used as a digital signal processor for audio. By audio, I mean 44.1 kHz or 48 kHz, 16-bit or 24-bit, always in stereo.

The whole digital audio processing shall happen in the PIC32 SPI interrupt service routine :

Read the PIC32 SPI_RX buffer containing the audio sample that just arrived
Process the audio on a sample-by-sample basis, using the shadow register set and the MADD instruction.
The audio signal flow is easy and fixed, like eight IIR Biquad filters in series, followed by one 128-tap FIR filter, for each audio channel.
Write the filtered audio into the PIC32 SPI_TX buffer
As written above, the SPI interrupt service routine processes one audio sample at a time (actually two samples, as there is a left audio channel and a right audio channel)
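To picture the ISR workload, here is a minimal C sketch of the per-sample chain described above (eight biquads in series followed by a 128-tap FIR, one channel shown). All names, the Q2.29 coefficient format, and the structure layout are illustrative assumptions on my part, not a specification:

```c
#include <stdint.h>

#define NUM_BIQUADS 8
#define FIR_TAPS    128
#define Q_SHIFT     29              /* assumed Q2.29 coefficient format */

typedef struct {
    int32_t b0, b1, b2, a1, a2;     /* fixed-point coefficients         */
    int32_t x1, x2, y1, y2;         /* previous inputs and outputs      */
} biquad_t;

/* Direct Form 1 biquad; the 64-bit accumulate maps to MADD on MIPS32. */
static int32_t biquad_step(biquad_t *f, int32_t x)
{
    int64_t acc;
    acc  = (int64_t)f->b0 * x;
    acc += (int64_t)f->b1 * f->x1;
    acc += (int64_t)f->b2 * f->x2;
    acc += (int64_t)f->a1 * f->y1;
    acc += (int64_t)f->a2 * f->y2;
    int32_t y = (int32_t)(acc >> Q_SHIFT);
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}

/* One channel: eight biquads in series, then a 128-tap circular FIR. */
static int32_t process_sample(biquad_t bq[NUM_BIQUADS],
                              const int32_t fir_coef[FIR_TAPS],
                              int32_t fir_delay[FIR_TAPS],
                              int *fir_pos, int32_t x)
{
    for (int i = 0; i < NUM_BIQUADS; i++)
        x = biquad_step(&bq[i], x);

    fir_delay[*fir_pos] = x;
    int64_t acc = 0;
    int p = *fir_pos;
    for (int i = 0; i < FIR_TAPS; i++) {
        acc += (int64_t)fir_coef[i] * fir_delay[p];
        p = (p + FIR_TAPS - 1) % FIR_TAPS;   /* walk backwards, circular */
    }
    *fir_pos = (*fir_pos + 1) % FIR_TAPS;
    return (int32_t)(acc >> Q_SHIFT);
}
```

With unity coefficients this chain passes samples through unchanged, which makes it easy to verify before loading real filter coefficients.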

Phase I : the PIC32 serial console shall enable the user to send simple text commands and parameters to the PIC32 :

Reset the PIC32 SPI interface
Disable the PIC32 SPI interrupt
Initialize the PIC32 SPI interface as slave
Enable the PIC32 SPI interrupt
Send the list of the coefficients of the IIR Biquad filters (there are 2 x 8 IIR Biquad filters, each requiring 5 coefficients)
Send the list of the coefficients of the FIR filters (there are 2 x 128 coefficients)

From the user perspective, Phase I transforms the PIC32 single board computer into a specialized Audio DSP that is easy to control using a serial console. Once connected, the user issues a few text commands to set the IIR Biquad filter coefficients and the FIR filter coefficients. The user then issues a command validating the whole configuration, which enables the SPI interrupt, and the Audio DSP starts immediately. This "enabled" state is to be written into Flash; as a consequence, in the "enabled" state, the Audio DSP also starts immediately after a power-on, or after pressing the reset button. At any moment, the user can send a text command disabling the SPI interrupt, and at any moment, another one re-enabling it.
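As a rough illustration of what such a Phase I console handler could look like (the command names, the text format, and the coefficient encoding are all my own assumptions here, not a defined protocol):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical Phase I command grammar (illustrative only):
 *   "biq <index 0..15> <b0> <b1> <b2> <a1> <a2>"   load one biquad
 *   "enable" / "disable"                           gate the SPI interrupt
 */
#define NUM_BIQUADS_TOTAL 16            /* 2 channels x 8 biquads */

typedef struct { int32_t c[5]; } biquad_coefs_t;

static biquad_coefs_t g_biq[NUM_BIQUADS_TOTAL];
static int g_spi_irq_enabled;

/* Returns 0 on success, -1 on a malformed command. */
static int console_handle_line(const char *line)
{
    int idx;
    long b0, b1, b2, a1, a2;

    if (strcmp(line, "enable") == 0)  { g_spi_irq_enabled = 1; return 0; }
    if (strcmp(line, "disable") == 0) { g_spi_irq_enabled = 0; return 0; }

    if (sscanf(line, "biq %d %ld %ld %ld %ld %ld",
               &idx, &b0, &b1, &b2, &a1, &a2) == 6
        && idx >= 0 && idx < NUM_BIQUADS_TOTAL) {
        /* On the real target, write coefficients only while the SPI
         * interrupt is disabled, then re-enable it. */
        g_biq[idx] = (biquad_coefs_t){{ (int32_t)b0, (int32_t)b1,
                                        (int32_t)b2, (int32_t)a1,
                                        (int32_t)a2 }};
        return 0;
    }
    return -1;
}
```

The "enable" state persisting across power cycles would additionally require a Flash write, which is target-specific and omitted here.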

Phase II : the PIC32 serial console shall enable the user to :

Enumerate and specify all audio boxes (SPI_RX as signal source, a few IIR Biquad filters, a few FIR filters, a few mixers, etc., and SPI_TX as signal sink)
Describe how they connect (send the corresponding netlist)
The netlist shall obey the same conventions as a netlist coming out of LTspice, describing an interconnection of user-defined subcircuits.
This way, the user has the opportunity to sketch a signal flow using LTspice, then export the LTspice netlist.
Please note, it remains the responsibility of the user to specify all parameters associated with each subcircuit.
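One way such a netlist could be held in RAM is sketched below. The box types, field names, and limits are assumptions for illustration; the real format would be derived from the LTspice-style netlist mentioned above:

```c
#include <stdint.h>

/* Each audio box is a node with a type, input net numbers, an output
 * net, and an index into a coefficient table. Net numbers are assumed
 * to be small integers (< 64 here). */
typedef enum { BOX_SPI_RX, BOX_BIQUAD, BOX_FIR, BOX_MIXER, BOX_SPI_TX } box_type_t;

#define MAX_INPUTS 4

typedef struct {
    box_type_t type;
    int        inputs[MAX_INPUTS];  /* net numbers feeding this box   */
    int        n_inputs;
    int        output_net;          /* net number this box drives     */
    int        param_index;         /* index into a coefficient table */
} audio_box_t;

/* Boxes must be evaluated in dependency order: a box may only read
 * nets already produced this sample. This catches forward references
 * in a netlist that claims to be topologically ordered. */
static int netlist_is_ordered(const audio_box_t *boxes, int n)
{
    int produced[64] = {0};         /* net -> already driven this pass */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < boxes[i].n_inputs; j++)
            if (!produced[boxes[i].inputs[j]])
                return 0;           /* reads a net not yet produced    */
        produced[boxes[i].output_net] = 1;
    }
    return 1;
}
```

With such a check in place, the ISR can simply walk the box array in order, once per sample.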

Thus, in Phase II, the PIC32 serial console shall install various payloads inside the PIC32 :

a) parameters in storage_Flash + executable code in main_loop_Flash doing the initialization of audio-specific PIC32 hardware (example : the SPI block)
b) parameters in storage_Flash + executable code in main_loop_Flash doing the initialization and control of external audio hardware (example : a DAC requiring I2C for its initialization and volume control)
c) parameters in storage_Flash + executable code in main_loop_Flash doing the initialization and control of external non-audio hardware (example : reading the output of a 3-terminal infrared decoder, reading the output of a rotary encoder, both for the volume control)
d) parameters in storage_Flash + executable code in SPI_interrupt_service_routine_Flash for all the audio DSP blocks, in the proper order (following the netlist)

There may be a Phase III, but I doubt it is worth working on.
In Phase III, the PIC32 serial console shall enable the user to specify some audio Buffer_Length, say 256.
The main_loop is then in charge of processing many audio samples in a row, say 256 (or 512, counting the left and right channels).

Like in Phases I and II, there is an interrupt request each time a new audio sample comes in. In Phase III, the SPI interrupt service routine increments a counter telling how many samples got in, and that's all.
Main_loop is supposed to detect that the counter equals Buffer_Length. As a consequence, the audio buffer acts as a "stable" audio buffer : no more writes occur into it. The main_loop can thus read it, doing the DSP on the 256 audio samples in a row.

In parallel, the counter gets reset to zero, and another buffer gets used by the SPI interrupt service routine as "progressive" audio buffer for grabbing the following incoming audio samples.
As soon as the counter again equals Buffer_Length, the two buffers' roles get exchanged.
This guarantees no interference between the software that's processing the audio (main_loop), and the software that's grabbing the incoming audio (interrupt service routine).
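The ping-pong scheme just described can be sketched in C as follows. Names are illustrative; on a real PIC32 the grab function would live inside the SPI ISR and read SPI2BUF, and the volatile flags would need care:

```c
#include <stdint.h>

#define BUFFER_LENGTH 256

static int32_t buf_a[BUFFER_LENGTH], buf_b[BUFFER_LENGTH];
static int32_t *grab_buf = buf_a;       /* "progressive" buffer (ISR)  */
static int32_t *work_buf = buf_b;       /* "stable" buffer (main_loop) */
static volatile int sample_count = 0;
static volatile int buffer_full  = 0;

/* Called once per incoming sample (stand-in for the SPI ISR body). */
static void isr_grab_sample(int32_t sample)
{
    grab_buf[sample_count++] = sample;
    if (sample_count == BUFFER_LENGTH) {
        int32_t *t = grab_buf;          /* exchange the buffer roles   */
        grab_buf = work_buf;
        work_buf = t;
        sample_count = 0;
        buffer_full  = 1;               /* tell main_loop to process   */
    }
}

/* main_loop side: returns the stable buffer when ready, else NULL. */
static int32_t *mainloop_poll(void)
{
    if (!buffer_full)
        return 0;
    buffer_full = 0;
    return work_buf;
}
```

Because the ISR only ever writes the "progressive" buffer and main_loop only ever reads the "stable" one, the two never touch the same memory at the same time.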

Phase III is not mandatory.
Phase III doesn't improve the quality of the audio DSP.
Phase III may find some utility in specific cases where Phase I or Phase II happens to consume all the available processing power, in the context of a main_loop that is in charge of much more than waiting for the next audio sample. A main_loop can be in charge of audio streaming over Wifi or USB. A main_loop can be in charge of running an operating system like Linux. By grouping the processing of 256 audio samples in a row, in a main_loop that is interrupted by a very short SPI ISR costing almost no processing power, the main_loop gains more control, and becomes able to de-schedule low-priority tasks to ensure that high-priority tasks meet their deadlines (and typically, an audio DSP task is such a high-priority task).

There is no guarantee that Phase III is more effective than Phase I or Phase II : the SPI ISR simplification of Phase III gets counterbalanced by the added sophistication of the Phase III audio DSP, now in the main_loop, which must deal with a counter and a pointer to know where to read in the audio buffer, and to know when it is time to stop the audio DSP and wait for the next chunk of 256 audio samples.
On top of this, Phase III introduces an audio latency that may be incompatible with applications like live audio, or adaptive filters. Phase III gets mentioned here to preempt people laughing and saying "hey, no audio buffer, how come, that's ridiculous".

What's ridiculous indeed, is to rely on a cannon to kill a fly.
Audio DSP is 80% made of IIR Biquad filters and FIR filters, all relying on the multiply-accumulate instruction. Provided that you keep the coefficient list in sync with the filter signal cells, the assembly code can be trivial. Relying on a C compiler can ruin the efficiency, especially if your programming and debug environment is fuzzy regarding the optimization levels, the CPU core features, and the DSP extensions.
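As an illustration of the coefficient handling this implies, here is one way to quantize floating-point biquad coefficients to a fixed-point format suitable for integer multiply-accumulate. The Q2.29 format is an assumption on my part, chosen so that coefficients slightly above 2 in magnitude (common for a1 in low-frequency biquads) still fit:

```c
#include <stdint.h>

#define COEF_Q 29                   /* assumed Q2.29 coefficient format */

/* Convert a floating-point coefficient to Q2.29, rounding to nearest
 * and saturating to the int32 range rather than wrapping. */
static int32_t coef_to_q29(double c)
{
    double scaled = c * (double)(1 << COEF_Q);
    if (scaled >= (double)INT32_MAX) return INT32_MAX;
    if (scaled <= (double)INT32_MIN) return INT32_MIN;
    return (int32_t)(scaled + (scaled >= 0.0 ? 0.5 : -0.5));
}
```

After accumulating coefficient-times-sample products into a 64-bit accumulator (the MADD pattern), shifting right by COEF_Q restores the sample scale.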

The audio clock jitter must be taken for what it is. No microcontroller in the world was designed to act as a high quality audio clock master. Some are better than others, like the STM32F4 and STM32F7, which feature a dedicated 256 x Fs clock input able to clock their I2S or TDM blocks. Today in 2016, there are specialized integrated circuits able to synthesize a high quality recovered clock in difficult cases, like when the audio is coming from USB or SPDIF. Knowing this, why ask the microcontroller to act as audio clock master ? As soon as a microcontroller is able to operate as audio clock slave, one must opt for that feature. Same for the DACs following the microcontroller. In a nutshell, the microcontroller and the DACs should exploit the high quality recovered clock elaborated from the device that is delivering the audio.

Once you understand this, you realize that the ideal hardware remains simple. Homemade digital audio is feasible in 2016, using inexpensive off-the-shelf elements :

PCM2706 USB Decoding Module USB to I2S DAC Decoder Headphone Amplifier DIY | eBay
Connects on any PC.
Operates as standard USB stereo soundcard.
Instead of delivering analog audio, it delivers digital audio formatted as I2S.
The audio clock that's conveyed on USB gets duly regularized by the PCM2706 chip internal RAM and PLL, so it can locally serve as master audio clock.
Such I2S clock and data to be grabbed by the PIC32MX SPI_RX, operating as SPI slave.

BV508A PIC32 Microcontroller ESP8266 Ready Development Board with Bypic | eBay
PIC32 board.
Clocked at 50 MHz.
Has the SPI with audio capability, equivalent to I2S.
Operates as slave regarding the I2S/SPI.
This enables connecting multiple PIC32MX boards in parallel, grabbing the same digital audio signal, in case more MIPS are required.
Executes one or more IIR Biquad filters and FIR filters.
Takes the incoming audio, and splits it for separately feeding a woofer and a tweeter in some optimal way, including all required frequency response corrections.

PCM5102 DAC Decoder Encoder I2S Player Assembled Board 32bit 384K Beyond PCM1794 | eBay
Stereo DAC easy to use, as it doesn't require a software configuration.
Receives the digital audio data coming from the PIC32MX SPI_TX.
Operates as slave.
Delivers stereo analog audio.
You can hook power amplifiers at the analog outputs, for directly driving a woofer, or directly driving a tweeter.

So, can you please provide guidance about Phase I, knowing that the final ambition is Phase II? I'm alone on this project. I think I can manage to get through it, with some help. This is something I have been thinking about for twenty years.

Back in 1996 I was the author of a DSP56002 project, writing a DSP application for the medical sector in DSP56002 assembly language. It was based on the Motorola DSP56002EVM and the Domain Technologies serial debugger. See the hardware here :

http://www.nxp.com/files/dsp/doc/inactive/DSP56002EVMP.pdf

Speaking about the software, it took one week to get something similar to Phase I. The tight project deadline was incompatible with the development of something resembling Phase II. This has been my regret for 20 years.

The medical application was a Brainstem Auditory Evoked Potentials machine, based on a Windows 3.11 PC with an Intel 486DX inside (133 MHz), equipped with a 640 x 480 VGA color display and a 115200 baud serial interface (RS232).
There was a console command, asking the DSP56002EVM to update the coefficients of an IIR Biquad used as 2nd-order highpass filter, and to update the coefficients of an IIR Biquad used as 2nd-order lowpass filter.

There was a console command asking the DSP56002EVM to send a burst of 640 consecutive 16-bit samples acquired by the DSP56002EVM to the PC. The acquired signal consisted of sub-microvolt voltage variations buried in the ambient electromagnetic pollution, collected by three electrodes attached to the patient's head, connected to a single-channel differential voltage amplifier galvanically isolated from the PC. Such a data burst was displayed on the VGA display thanks to a Visual Basic application that I wrote, running on the PC. The data burst consisted of the average of many time-aligned synchronized frames (up to 1024 frames), correlated to an audio stimulation made using headphones.

Such 1024-frame averaging leads to the auto-cancellation of the ambient electromagnetic pollution.
Such 1024-frame averaging relied on a 48-bit real-time accumulation, a piece of cake for the DSP56002.
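In modern C, that accumulate-and-average step might be sketched as follows, with int64_t standing in for the DSP56002's native 48-bit accumulator (frame length and names are illustrative):

```c
#include <stdint.h>

#define FRAME_LEN 640               /* samples per synchronized frame */

static int64_t acc[FRAME_LEN];      /* wide accumulators, one per bin */
static int     frames_accumulated;

static void average_reset(void)
{
    for (int i = 0; i < FRAME_LEN; i++) acc[i] = 0;
    frames_accumulated = 0;
}

/* Add one stimulus-locked frame to the running sum. Uncorrelated
 * mains/EMI noise averages toward zero over many frames; the evoked
 * response, being time-locked to the stimulus, does not. */
static void average_add_frame(const int16_t *frame)
{
    for (int i = 0; i < FRAME_LEN; i++) acc[i] += frame[i];
    frames_accumulated++;
}

static void average_read(int16_t *out)
{
    for (int i = 0; i < FRAME_LEN; i++)
        out[i] = (int16_t)(acc[i] / frames_accumulated);
}
```

With 16-bit samples and 1024 frames, the sum needs at most 26 bits, so a 48-bit accumulator never overflows.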

The DSP56002 was also in charge of real-time synthesizing the audio stimulation, IIR Biquad filtering an internally generated impulse signal recurring approximately 10 times per second. The audio stimulation occurrence acted as the trigger for the acquisition & accumulation. Using such a machine and software, a physician could visualize the realtime activity of the various auditory neuronal clusters following each other along the brainstem. This is because neuronal activity induces first an influx of sodium ions (leading to massive local depolarization), followed by a rapid efflux of potassium ions from the neuron (leading to local repolarization).

This is also because the speed of neuronal influx propagation is related to the length and condition of the axons linking a cluster to the following cluster. As an example, if the myelin sheath gets damaged (Multiple Sclerosis), the propagation speed becomes pathologically slow. If this happens in the auditory system, deafness is the consequence.

Using such a machine and software, the physician could store the patient data and protocol digitally. The physician could send them digitally over the Local Area Network, and possibly anywhere in the world using the internet, then still in its infancy.
The project got halted in 1997 because the 115200 baud RS232 could not cope with the data throughput required for real-time monitoring of muscle activity, for investigating Multiple Sclerosis.

The USB 1.0 specification, published in January 1996 and featuring a 12 Mbit/s data throughput, made us conclude that we should hook a USB 1.0 controller supporting the full rate onto the DSP56002 8-bit host port. Unfortunately, the first USB controller sold by Intel was difficult to use, making us lose a lot of time. Between 1997 and 1999 came other USB 1.0 controllers, successively presented by CYPRESS, ANCHOR, THESYS, NETCHIP, NATIONAL SEMICONDUCTOR, and PHILIPS. Not all were able to operate at full speed. Not all were able to connect to an 8-bit host port. Most were insufficiently documented. Most had short commercial lives. So we stayed away.

In 1999, we sadly realized that we should not have ignored the possibility of fitting an IEEE 1284 EPP (Enhanced Parallel Port) onto the DSP56002 8-bit host port, relying on some off-the-shelf Windows driver. Actually, we should have subcontracted this to somebody more competent than us. That way, any PC equipped with two EPP ports (one for a printer, the other for the DSP56002 board) could have been used, as soon as 1996 or 1997. Back in 1996, we had not figured out that the EPP standard, introduced in 1994, could provide up to 2 MByte/s bandwidth, approximately 15 times the speed achieved with a normal Centronics port, with far less CPU overhead.

Based on this painful experience, my attitude towards Audio DSP using a PIC32MX or PIC32MZ (or possibly an ARM Cortex-M4 or M7) is that it is never too late to try something both simple and effective. By working, since day one, on an audio platform that may appear under-performing compared to a Raspberry Pi, BeagleBone, MiniDSP, or a full-blown Android TV Box, all audio enthusiasts wanting to enter the digital signal processing world will get the opportunity to concentrate on essentials like IIR Biquad filters and FIR filters, specifying them the way they want (in Phase I), and laying them out the way they want (in Phase II).

Thanks for reading.
 
Hi,
I have done what you describe as phase 1 and much of what you describe as phase 2 using a PIC32MX. The description of it is here...

http://www.diyaudio.com/forums/blog...icrocontroller-potential-dsp-dds-element.html

http://www.diyaudio.com/forums/blog...p-crossover-implemented-using-pic32mx450.html

And for good measure, I uploaded all the source code etc, here...
http://www.diyaudio.com/forums/blogs/googlyone/1098-code-dual-channel-dds.html


A few things you will need to get your head around...
- If you want to avoid jitter on your LRCLK / SPICLK you will need to be rather judicious in your selection of the clock dividers in the PIC. For example, if you use a 4 MHz internal clock in the PIC, 44.1 kHz and 48 kHz will be non-integer multiples of the clock. The PIC will happily do this for you, but you will see jitter on the audio interface signal timings. :(

That is OK provided you use the PIC as the MASTER and drive all the clock, which is what I did.
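To make the divider arithmetic concrete: assuming the PIC32 SPI baud formula Fsck = Fpb / (2 * (SPIxBRG + 1)) and a hypothetical 40 MHz peripheral bus clock (check both against your part's datasheet), the nearest reachable bit clock for 64 x 48 kHz misses the target by several percent:

```c
#include <stdint.h>

/* Nearest SPIxBRG value for a target bit clock, assuming
 * Fsck = Fpb / (2 * (BRG + 1)): round Fpb / (2 * target) to the
 * nearest integer, then subtract 1. */
static uint32_t nearest_brg(uint32_t fpb, uint32_t target_fsck)
{
    uint32_t divider = (fpb + target_fsck) / (2u * target_fsck);
    return divider ? divider - 1u : 0u;
}

/* The bit clock actually produced by a given BRG value. */
static uint32_t actual_fsck(uint32_t fpb, uint32_t brg)
{
    return fpb / (2u * (brg + 1u));
}
```

With fpb = 40 MHz and a target of 64 x 48 kHz = 3.072 MHz, the nearest divider gives about 2.857 MHz, roughly 7% off, which is exactly why a neat clock multiple (or letting something else master the clock) matters.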

An alternative would be to code some form of ASRC into the PIC. This would be required for example if you wanted to drive digital audio into the PIC. I have toyed with that idea but never convinced myself to spend the time to really get into it.

The code I uploaded, as described in the blog, is pretty much it for a PIC running at 100 MHz. There are a few microseconds spare, but not so many that you could pop in an extra filter.

I never tried to make the thing "fully configurable" in the way you describe, but it is fully configurable in any way that a sane person would want to set any parameter... in real time, with audio running... One thing you learn when you start coding real time systems at the edge of a processor's performance is that you HAVE to keep things deterministic. To that end, the code I wrote runs all the functions. If one is not used, then its parameters are set to null values.

I rather doubt that an external configuration of a hard coded PIC DSP would in practice be any different, unless you recompile the code for each configuration. Good luck getting that to work reliably though!

Getting the optimisation right is really important, as is managing the overhead of function calls, and indeed how variables are accessed. Watch how you store data and how it is referenced - there are HUGE differences in efficiency between direct pointer access and indirect approaches.

The code does include a simple user interface that allows configuration of all parameters. This runs in the "spare time" between the DSP code executing each audio cycle.

Oh - and the code I uploaded reads from the EEPROM on my board, but because I am using the SPI for dual service (driving one audio channel and driving the SPI EEPROM) I never bothered writing the code that would allow the EEPROM to be written while the audio is active. Everything else in the code seems to run just fine - though to be honest I don't use this as a crossover on a daily basis.


Fundamentally however - once you get this sort of thing up and running, it is really awesome what you can make a thing like a PIC32 do. Have fun!
 
Hi, I have done what you describe as phase 1 and much of what you describe as phase 2 using a PIC32MX. The description of it is here...
http://www.diyaudio.com/forums/blog...icrocontroller-potential-dsp-dds-element.html
http://www.diyaudio.com/forums/blog...p-crossover-implemented-using-pic32mx450.html
And for good measure, I uploaded all the source code etc, here...
http://www.diyaudio.com/forums/blogs/googlyone/1098-code-dual-channel-dds.html
Hi googlyone, it looks encouraging indeed. Presently, I have a few questions for you :

1. Is it feasible to configure the PIC32 SPI in "audio mode", as slave, receiving a 44.1 kHz or a 48 kHz sampling frequency with 24-bit audio resolution? Is there some fatal restriction (like the max bitrate) when configuring the PIC32 SPI as slave? Is that the reason why you asked the PIC32 to generate a master audio clock from its CPU clock? Is that the reason why you configured the PIC32 SPI as master?

2. In your blog, you wrote "I think a full stereo DSP crossover with 4th-order filters, time alignment and two parametrics is not a bad start." I fully agree with you.

3. In your blog, you wrote "The code optimisation is currently at a "super basic" level. By that I mean that I have structured the 32 bit multiply and add code such that the compiler can use the MADD instruction - and I have been a bit cautious about selection of variables etc. But I suspect there is a fair bit of extra optimisation to be had in there." This will be the first time I program a PIC32. Can you please help me climb the learning curve quickly by letting me study the source code of the IIR Biquad filter that you implemented for the stereo DSP crossover? Any recommendation for the source code of a FIR filter?

4. You wrote "... much of what you describe as phase 2 using a PIC32MX". Does it mean that you were in the process of programming the foundation that's required for enabling users to specify their crossover topology, relying on a kind of netlist, by sending commands through the serial console?

Again, thanks
Steph
 
-1- The PIC's SPI Port
Yes, you can receive pretty much anything on the SPI. 24-bit is no problem.

I used the CPU to generate the clocks as I wanted to be able to drive an ADC and DAC. You need a clock for this...

If you simply receive on the SPI, you may or may not have an MCLK signal; certainly the PIC doesn't need one. So when you have finished the DSP and want to drive the DAC, you now need to resolve:
- Output word timing.
- MCLK source for the DAC

The word timing will be simple enough, you could make it synchronous to the input word timing.

MCLK is somewhat less obvious. MCLK is very important for most delta-sigma DACs, so you don't want to be bodging this up with a cruddy clock. For this reason it struck me that I was far better off using the PIC clock subsystem, and picking a clock rate that was a neat multiple of the internal clock. That way I already had a nice clean MCLK, and could drive both the ADC and DAC from clean clock signals.

-3- The source code is out there, isn't it? I will check to see what I posted.

I did not do any inline assembler, but I did force the compiler to use the MADD instructions, and arranged the code to take advantage of this. I am not a regular programmer, so every time I do something like this I end up learning it all over again...

-4- No netlist. A user interface that allows the user to
- Set crossover slope (for highpass and lowpass for each band)
- Set crossover freq (per above)
- Set delay for channel (per band)
- Select a biquad filter per band (para, HP, LP, shelving)
- Select inversion / not
- Set gain offset for the band

All the above are "hard coded" in software, but to be honest the code for each is so trivial that on each SPI sample, all the code for every function is run. It amounts to a bunch of lines of code (with rather careful use of variables to allow the MADD instructions to be used).

I will check whether I posted the code...
 
Steph,
These are the C and header files for the DSP I did in a PIC32... View attachment PIC32 DSP.zip

Things to be aware of:
- This was an intellectual exercise as much as anything for me
- Therefore
- I was pretty lazy about the way I changed the program from the original user interface for an ADAU1442 DSP to the PIC (I added the DSP core code in and grafted it onto my old application)
- As a lead-in exercise for myself I made a DDS synthesiser; some of that code might be left lying around
- As noted earlier, once the thing was running, the interest in polishing it kind of waned (I built a batch of 8 ADAU1442 DSPs a couple of years ago), so I had no need for another DSP

- You DO need to get the optimisation set on the Microchip compiler. Level 1 is enough, and available in the free compiler.

- The code assumes that you have ADC and DAC on the SPI ports.

- Most of the calcs are actually 32 bit, with LONG LONG intermediary values.


The really important file is dig_cross.c; all the action happens in there, and almost all of it in one ISR. 99% of the code is about user interfaces, parameter checks, reading from ROM and (less successfully) writing to ROM.

The user interface is inside an infinite loop in main(), and it is basically a simple state machine. Display updates are to a 16*2 LCD, and that code is similarly less than something to be proud of, but it works, and has legacy back over a decade to a PIC18F design that forced that structure... one day I ought to redo it, but hey, it works!

Have fun...
 
MCLK is somewhat less obvious. MCLK is very important for most delta-sigma DACs, so you don't want to be bodging this up with a cruddy clock. For this reason it struck me that I was far better off using the PIC clock subsystem, and picking a clock rate that was a neat multiple of the internal clock. That way I already had a nice clean MCLK, and could drive both the ADC and DAC from clean clock signals.
So was my opinion, before the arrival of high quality DACs like the PCM5102A and TDA7801, featuring an on-board PLL in charge of generating the internal 256 x Fs MCLK signal from the Frame_sync signal. Thanks to such an evolution, the PIC32 and the DACs can operate as I2S slaves, and as audio clock slaves. This is a game-changing detail. A stereo 24-bit 48 kHz system requires a 2.304 Mbit/s SPI/I2S bitrate. Do you think that 40 MHz or 50 MHz PIC32s having their SPI/I2S configured as slave are capable of this?
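For reference, the 2.304 Mbit/s figure is just the raw data rate, derived below; whether a given PIC32 sustains it as SPI slave, and whether the I2S frame pads 24-bit data into 32-bit slots (raising the line rate), are separate datasheet questions:

```c
#include <stdint.h>

/* Raw audio data rate in bits per second: sampling frequency times
 * bits per sample times number of channels. */
static uint32_t i2s_bitrate(uint32_t fs, uint32_t bits, uint32_t channels)
{
    return fs * bits * channels;
}
```

For instance, 48 kHz x 24 bits x 2 channels gives the 2.304 Mbit/s quoted above, while 24-bit data carried in 32-bit slots would put 3.072 Mbit/s on the wire.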
 
These are the C and header files for the DSP I did in a PIC32. The really important file is dig_cross.c. All the action happens in here, and almost all of it in one ISR.
Many thanks for the link you provided in post #6 above. I went through the dig_cross.c file. Can you please tell me if you see gross errors in the SPI ISR code below? Actually this is your SPI ISR code, reworked by me to create a simple template (two IIR Biquad filters in series, in stereo) that everybody should be able to understand, and possibly expand. Again, one zillion thanks.

Code:
void __ISR(_SPI_2_VECTOR, ipl3SRS) SPI2InterruptHandler(void) {

/* test Code */

//mPORTBToggleBits(BIT_8);
//Nop();


/*****************************************************************/
/*                                                               */
/*                DIRECT FORM 1 IIR BIQUAD FILTER                */
/*                                                               */
/*                                                               */
/*     Xn0 ..>....>.. b0 ..>.. SUM .......>.........>.. Yn0      */
/*              .               .                .               */
/*              .               .                .               */
/*             Xn1              .               Yn1              */
/*              .               .                .               */
/*              .               .                .               */
/*              ..>.. b1 ..>.. SUM ..<.. a1 ..<....              */
/*              .               .                .               */
/*              .               .                .               */
/*             Xn2              .               Yn2              */
/*              .               .                .               */
/*              .               .                .               */
/*              ..>.. b2 ..>.. SUM ..<.. a2 ..<....              */
/*                                                               */
/*****************************************************************/


/*****************************************************************/
/* DF1 IIR Biquad filter 0 ***************************************/
/* shift old audio samples, read SPI2BUF *************************/
/* and convert the 24 bit audio data into an integer *************/
Y0n2_L = Y0n1_L;          
Y0n1_L = Y0n0_L;           
X0n2_L = X0n1_L;          
X0n1_L = X0n0_L;           
X0n0_L = SPI2BUF;
if (X0n0_L & 0x00800000)        /* sign-extend the 24-bit sample */
    X0n0_L |= 0xff000000;
Y0n2_R = Y0n1_R;          
Y0n1_R = Y0n0_R;           
X0n2_R = X0n1_R;          
X0n1_R = X0n0_R;           
X0n0_R = SPI2BUF;
if (X0n0_R & 0x00800000)        /* sign-extend the 24-bit sample */
    X0n0_R |= 0xff000000;
/*compute the new audio sample, rescale the new audio sample */
Acc_Reg =  (long long) F0_b0*X0n0_L;
Acc_Reg += (long long) F0_b1*X0n1_L;
Acc_Reg += (long long) F0_b2*X0n2_L;
Acc_Reg += (long long) F0_a1*Y0n1_L;
Acc_Reg += (long long) F0_a2*Y0n2_L;
Acc_Reg_Shift;
Y0n0_L_Assignment;
Acc_Reg =  (long long) F0_b0*X0n0_R;
Acc_Reg += (long long) F0_b1*X0n1_R;
Acc_Reg += (long long) F0_b2*X0n2_R ;
Acc_Reg += (long long) F0_a1*Y0n1_R ;
Acc_Reg += (long long) F0_a2*Y0n2_R ;
Acc_Reg_Shift;
Y0n0_R_Assignment;
/*****************************************************************/

/*****************************************************************/
/* DF1 IIR Biquad filter 1 ***************************************/
/* shift old audio samples, read SPI2BUF *************************/
/* and convert the 24 bit audio data into an integer *************/
Y1n2_L = Y1n1_L;
Y1n1_L = Y1n0_L;
X1n2_L = X1n1_L;
X1n1_L = X1n0_L;
X1n0_L = Y0n0_L;                /* takes filter 0 output as input*/
Y1n2_R = Y1n1_R;
Y1n1_R = Y1n0_R;
X1n2_R = X1n1_R;
X1n1_R = X1n0_R;
X1n0_R = Y0n0_R;                /* takes filter 0 output as input*/
/*compute the new audio sample, rescale the new audio sample *****/
Acc_Reg =  (long long) F1_b0*X1n0_L;
Acc_Reg += (long long) F1_b1*X1n1_L;
Acc_Reg += (long long) F1_b2*X1n2_L;
Acc_Reg += (long long) F1_a1*Y1n1_L;
Acc_Reg += (long long) F1_a2*Y1n2_L;
Acc_Reg_Shift;
Y1n0_L_Assignment;
Acc_Reg =  (long long) F1_b0*X1n0_R;
Acc_Reg += (long long) F1_b1*X1n1_R;
Acc_Reg += (long long) F1_b2*X1n2_R ;
Acc_Reg += (long long) F1_a1*Y1n1_R ;
Acc_Reg += (long long) F1_a2*Y1n2_R ;
Acc_Reg_Shift;
Y1n0_R_Assignment;
/*****************************************************************/

/* copy last internal filter data to output variables ************/

Output_Data_L = Y1n0_L;
Output_Data_R = Y1n0_R;

/* this applies output limits while allowing the filters  ********/
/* to operate with larger values than the DAC can output  ********/

Out_Test_Temp = Y1n0_L & DAC_Valid_BitMask;
if (Out_Test_Temp & 0x80000000)
    {
        if (!(Out_Test_Temp ^ 0xff800000))
            Output_Data_L = Y1n0_L;
        else
            Output_Data_L = -DAC_Output_Max;
    }
else if (Out_Test_Temp)
    Output_Data_L = DAC_Output_Max;
else
    Output_Data_L = Y1n0_L;

Out_Test_Temp = Y1n0_R & DAC_Valid_BitMask;
if (Out_Test_Temp & 0x80000000)
    {
        if (!(Out_Test_Temp ^ 0xff800000))
            Output_Data_R = Y1n0_R;
        else
            Output_Data_R = -DAC_Output_Max;
    }
else if (Out_Test_Temp)
    Output_Data_R = DAC_Output_Max;
else
    Output_Data_R = Y1n0_R;

/* cheat here and do not mask off the top bits *********************/
/* as they have been cleared above                                 */
/* point the write pointers at the right spot **********************/

*(Delay_Buffer_L + Write_Pointer_Offset) = Output_Data_L;
*(Delay_Buffer_R + Write_Pointer_Offset) = Output_Data_R;
Write_Pointer_Offset++;
Write_Pointer_Offset = Write_Pointer_Offset & Delay_Buf_Size_Mask;

/* copy last audio data to SPI2BUF *********************************/

SPI2BUF = *(Delay_Buffer_L + Rd_Offset);
SPI2BUF = *(Delay_Buffer_R + Rd_Offset);

/* copy last audio data to SPI1BUF also ****************************/

SPI1BUF = *(Delay_Buffer_L + Rd_Offset);
SPI1BUF = *(Delay_Buffer_R + Rd_Offset);

/* increment read pointers *****************************************/

Rd_Offset++;
Rd_Offset = Rd_Offset & Delay_Buf_Size_Mask;


//mPORTBToggleBits(BIT_8);
//Nop();


INTClearFlag(INT_SPI2);

}

I take note that Level 1 optimization is required with the free PIC32 compiler. I should have written this as a comment in the C source code.
 
Steph,
Good on you! It is pleasing to see someone else out there with the interest and enthusiasm to see just how much cool stuff we can do with something like a PIC. I congratulate you.

But. It is 20:00hrs here, and I just logged on after a glass or two of Shiraz, so...

The code you have generated looks to have all the right bits, and incorporates the sections that will be essential for the PIC compiler to optimise appropriately.

You will end up with code delivering 32 bits integer results, which will happily deliver you 24 bits for I/O.

Is the gross detail right? Yes. Is the fine detail right, or is there a big "oops"? Not sure. The easiest way to tell is to compile and run it. If there is a bug, you will find it in the detail, and it will be easily fixed. That is the beauty of processors like the PIC.

I see some macros in there. I assume they are all good; I am getting into the habit of using them, as they do make the code that much easier to read.

I do note that delays are ridiculously simple with something like a PIC, aren't they!

Give it a whirl and see how it goes. If on the weekend I get some spare time, I will revive this project in MPLAB and cut and paste your code in there. Wife and time allowing!
 
Neither of these is likely to satisfy the golden eared brigade. That said however, the surprise interest for me is the TDA7801.

If you feed this with an active crossover, it will stun you just how much "output" (read: volume) you can get without distortion, and also how clear the reproduction is, given that you can cross over at 80 Hz odd and get all that cone excursion off the mid / treble.

The distortion is far from earth-shattering, but I do think that it would make a really cool board to pair a super cheap PIC with a super simple amplifier IC.

It might be better to use something like the PCM5102A if you want lots of watts, but I suspect that the TDA device could be very versatile and, provided people don't set crazy expectations, give great "bang for your buck".
 
Hi googlyone, unfortunately nobody is selling an "assembled TDA7801 board" on eBay. The reason is easy to understand: such a board requires two I2S ports, one I2C port, and a generous 15 Volt power supply.

Possibly, an "assembled TDA7801 Audio Dock with remote and text display" could find a market on eBay. The board would embed a powerful PIC32 (possibly a PIC32MZ) featuring two I2S and one I2C for the TDA7801, plus one SPI for the text display. However, at this stage, I don't know how it would grab stereo audio. It could be USB audio, requiring a PCM2706 as an extra. It could be SPDIF or TOSLINK, requiring a CS8412 as an extra. How to set up the internal filters making up the stereo 2-way crossover and the equalization? There should be an electret microphone (requiring an ADC and a preamp as extras), and a display for visualizing the gain and the phase (requiring a graphical display as an extra). You shall not oblige the user to pay for a graphical display that is only required during setup. You thus need to rely on an existing graphical display, like a smartphone. The smartphone shall run a webpage displaying the measured gain and phase. Is this feasible through a Wifi module? Is this feasible through a Bluetooth module? Either requires one serial port (aka RS232) or one more SPI on the PIC32 side. Not a problem for a PIC32MZ. Your next project, perhaps?

Please note, the ADC picking up the mike signal must feature an internal PLL, internally generating a 256 x Fs MCLK driven by the 44.1 kHz or 48 kHz frame-sync entering the chip. The AD1938 Codec has such a feature. I don't know if there is a simple stereo ADC having such a feature yet. To be able to graph the phase, you also need to grab the reference signal. You thus need a stereo ADC. To be able to measure the three relevant transfer functions (bare speaker, DSP correction applied, final result), you need one more ADC. See the attached .pdf sketch.

Possibly, an "assembled AD1938 Audio Dock with remote and text display" could find a market on eBay.
 

Attachments

  • DSP before DAC - mono.pdf