diyAudio forum thread: A bare bones framework for audio DSP on a PC
(http://www.diyaudio.com/forums/pc-based/212982-bare-bones-framework-audio-dsp-pc.html)

CopperTop 20th May 2012 11:19 AM

A bare bones framework for audio DSP on a PC
 
As someone who can just about program in 'C', and without money to burn on dedicated DSP boards, I was keen to use an old PC for audio, specifically for experimenting with FFT-based active crossovers. What I really wanted was to use the PC effectively as a super-fast microcontroller with SPDIF and DACs attached, not indulge in programming GUIs and drop-down menus etc.

I was also keen to have access to the 'Wave' channel on the PC so that I could use any software source for input, such as audio editing packages, internet streams etc. The problem was the 'learning curve' for understanding how the operating system interacts with sound cards, how to send audio to several sample-synchronised DAC outputs, and other things that a low-level DSP board would make obvious.

Luckily I stumbled across the BASS audio library from Un4seen Developments, which gave me the tools I needed to build a bare bones framework on which I could hang any experimental DSP algorithms. (I'm sure there are other ways to do this sort of thing with DirectX etc., but it didn't appeal to me.)

The PC is so unbelievably powerful that you can get away with the most inefficient programming techniques, plus it has the advantage of being the 'target' device itself, which speeds up the iteration process no end.

Here is how to build a bare-bones framework for audio DSP.

You'll need:

(a) a PC with Windows XP (or maybe Vista or Linux, but I'm less sure of those alternatives)
(b) a means of compiling programs
(c) a reasonable quality sound card that will let you select any source as input to your software (SPDIF, CD, 'Wave' etc.) and provide enough analogue outputs. Two very cheap options, readily available secondhand are:
- Creative Audigy 2 ZS plus Kx open source driver (very cheap, fixed 48kHz sample rate but happy to resample any other sample rate with 'adequate' quality)
- Creative X-Fi (not Xtreme Audio) and the Creative supplied software (very high quality, can be bit perfect at popular sample rates, perfect resampling otherwise)
(d) The free BASS audio library.

The basic software 'infrastructure' provides a means of doing the following:

- Detect available sound cards
- Detect input sources (SPDIF, CD, Wave), and output destinations (ASIO output channels mapped to physical outputs like Line L/R, Centre L/R etc., and you can change the mapping using the sound card's 'applets')
- Select card, input source and output destinations
- Start 'recording'
- Start 'playback'

Your DSP code runs in between recording and playback, processing the recorded samples and pushing them out for playback over ASIO channels. Latency depends mainly on how complex the processing is. You can use this framework to build your own active crossovers, room correction algorithms, guitar effects etc.

Build your code with the BASS and BASS ASIO libraries (and put the DLLs in with the executable) and use the following basic framework. I've stripped out all the error checking etc. to simplify it; refer to the BASS documentation for gospel info on how to use each function properly! The BASS library comes with quite a few examples, but none of them is quite as targeted and simplified as the program below.

/*****************************************************************************/
#include <windows.h>   // for Sleep()
#include "bass.h"
#include "bassasio.h"
#include <math.h>
#include <stdio.h>
/*****************************************************************************/
//#define TEST_TONES           // use this option to generate a separate test tone at each output

#define SAMPLE_RATE_HZ 48000   // The Audigy always runs at 48 kHz, but re-samples any incoming data
                               // so it will read a 44k1 stream transparently. You can use 44k1 directly
                               // with the X-Fi.

#define INPUT_CHAN_TYPE 1
#define OUTPUT_CHAN_TYPE 0

#define NO_FLAGS 0

#define NUM_OP_CHANS 6         // as many outputs as you need, e.g. a three-way stereo crossover
#define ASIO_START_CHAN 2      // for the Audigy, by default ASIO channels 2-8 are mapped to physical outputs,
                               // but this can be changed in the Kx router

#define INPUT_VOL 1.0
#define OUTPUT_VOL 1.0

#define STEREO 2

#define BUF_LENGTH 65536       // you can be generous with a PC. N.B. This doesn't define the latency.

#define PI 3.141592654

/*****************************************************************************/
// create buffers for incoming stereo samples
short pbuf_l [BUF_LENGTH];
short pbuf_r [BUF_LENGTH];

// pointers for circular buffer
int wr_ptr = 0;
int rd_ptr = 0;

/*****************************************************************************/
// RECORD - buffer the recorded data
BOOL CALLBACK RecordingCallback(HRECORD handle, short *r_buffer, DWORD length, DWORD user)
{
    static unsigned int ii;

    //printf ("R %d\n", length); // just for debugging

    /* always transfer from low level sampled array into our own circular buffer */
    for (ii = 0; ii < length/2; ii += 2) // 'length' is in bytes
    {
        pbuf_l [wr_ptr] = r_buffer [ii];
        pbuf_r [wr_ptr] = r_buffer [ii + 1];

        if (wr_ptr < BUF_LENGTH-1)
        {
            wr_ptr ++;
        }
        else
        {
            wr_ptr = 0;
        }
    }

    return TRUE;
}
/*****************************************************************************/
// ASIO playback function
DWORD CALLBACK AsioOutProc(BOOL input, DWORD channel, float *buffer, DWORD length, void *user)
{
    unsigned int ii, ch;
    static float phase [NUM_OP_CHANS] = {0,0,0,0,0,0}; // if using test tones

    //printf ("P %d\n", length); // just for debugging

#ifdef TEST_TONES
    for (ii = 0; ii < length / 4; ii += NUM_OP_CHANS)
    {
        for (ch = 0; ch < NUM_OP_CHANS; ch ++)
        {
            buffer [ii + ch] = (float) sin ((float) phase [ch]) / 10;

            phase [ch] += (float)0.01 * (ch + 1);

            if (phase [ch] > 2*PI)
            {
                phase [ch] -= 2*PI;
            }
        }
    }

#else // as a demonstration, copy incoming audio to outputs

    for (ii = 0; ii < length / 4; ii += NUM_OP_CHANS)
    {
        for (ch = 0; ch < NUM_OP_CHANS; ch ++)
        {
            // feed stereo input left to even channels, right to odd
            if (ch % 2 == 0)
            {
                buffer [ii + ch] = (float) pbuf_l [rd_ptr] / 32768;
            }
            else
            {
                buffer [ii + ch] = (float) pbuf_r [rd_ptr] / 32768;
            }
        }

        if (rd_ptr < BUF_LENGTH-1)
        {
            rd_ptr ++;
        }
        else
        {
            rd_ptr = 0;
        }
    }
#endif
    return length;
}
/*****************************************************************************/
int main(void)
{
    int a;
    BASS_DEVICEINFO device_info;
    BASS_ASIO_DEVICEINFO asio_device_info;
    char *name;
    // variables for recording and playback device assignment
    int rec_dev, pb_dev, rec_ip;

    printf ("List of ASIO output devices:\n");
    for (a = 0; BASS_ASIO_GetDeviceInfo(a, &asio_device_info); a++)
    {
        printf ("dev %d: %s\ndriver: %s\n", a, asio_device_info.name, asio_device_info.driver);

        if (BASS_ASIO_Init(a, 0))
        {
            BASS_ASIO_CHANNELINFO i;
            int b;
            for (b = 0; BASS_ASIO_ChannelGetInfo(1, b, &i); b++)
            {
                printf ("\tin %d: %s (group %d, format %d)\n", b, i.name, i.group, i.format);
            }
            for (b = 0; BASS_ASIO_ChannelGetInfo(0, b, &i); b++)
            {
                printf ("\tout %d: %s (group %d, format %d)\n", b, i.name, i.group, i.format);
            }
        }
        BASS_ASIO_Free();
    }
    printf ("Use which ASIO playback device ? ");
    scanf ("%d", &pb_dev);
    printf ("\n");

    // list recording devices
    printf ("List of recording devices:\n");
    for (a = 0; BASS_RecordGetDeviceInfo(a, &device_info); a++)
    {
        printf ("%d: Device:%s Driver:%s\n", a, device_info.name, device_info.driver);
    }

    printf ("Use which recording device ? ");
    scanf ("%d", &rec_dev);
    printf ("\n");

    BASS_RecordInit(rec_dev);

    printf ("List of recording inputs:\n");
    for (a = 0; (name = BASS_RecordGetInputName(a)); a++)
    {
        float vol;
        int s = BASS_RecordGetInput(a, &vol);
        printf ("%d: %s \n", a, name);
    }

    printf ("Use which recording input ? ");
    scanf ("%d", &rec_ip);
    printf ("\n");

    BASS_RecordSetInput (rec_ip, NO_FLAGS, INPUT_VOL);

    // initialise the desired ASIO output device
    BASS_ASIO_Init(pb_dev, NO_FLAGS);

    // set the sample rate
    BASS_ASIO_SetRate ((double)SAMPLE_RATE_HZ);

    // enable first ASIO output channel
    BASS_ASIO_ChannelEnable(OUTPUT_CHAN_TYPE, ASIO_START_CHAN, &AsioOutProc, 0);

    // using floating-point
    BASS_ASIO_ChannelSetFormat(OUTPUT_CHAN_TYPE, ASIO_START_CHAN, BASS_ASIO_FORMAT_FLOAT);

    // set the output volume for this channel
    BASS_ASIO_ChannelSetVolume (OUTPUT_CHAN_TYPE, ASIO_START_CHAN, (double)OUTPUT_VOL);

    // and join the next N ASIO output channels to it to make one 6 channel group
    for (a = 1; a < NUM_OP_CHANS; a++)
    {
        BASS_ASIO_ChannelJoin(OUTPUT_CHAN_TYPE, ASIO_START_CHAN + a, ASIO_START_CHAN);
    }

    /* kick off an audio recording */
    // start recording @ SAMPLE_RATE_HZ, 16-bit stereo
    BASS_RecordStart(SAMPLE_RATE_HZ, STEREO, NO_FLAGS, (RECORDPROC*) &RecordingCallback, 0);

    // Put a pause in here to ensure that enough recorded samples have
    // built up before starting playback. You may wish to think carefully about this
    // to reduce latency.
    while (wr_ptr - rd_ptr < 4096);

    // start playing back over ASIO
    BASS_ASIO_Start (4096); // You can force a specific buffer length with a non-zero argument

    // loop forever, and let the callbacks do all the work
    while (1)
    {
        Sleep (500);
    }

    return 1;
}
/*****************************************************************************/
The above program detects available ASIO output channels and asks you to select which device you want to use for output. A number of ASIO output channels are then 'joined' to form a sample-synchronised group.

The program then detects available recording input devices and asks you to select one, plus the input source. Inputs and outputs should all be on the same sound card.

Two callback functions are set up:
- a recording function pulls 16 bit samples from the stereo recording source and sticks them into a circular buffer
- a playback function continually pulls samples from the circular buffer and shoves floats into several ASIO output channels as a simple demonstration. There is also an option to generate a test tone at a different frequency on each analogue output.

The two callbacks produce and use data at exactly the same average sample rate.

Your DSP code can live in either callback function, or the main loop if you arrange it that way.
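
For example, here is a minimal sketch (not part of the framework itself, and not code I've tested - LP_COEFF and lp_state are just names invented for the illustration) of an alternative playback callback that applies a simple one-pole low-pass filter to every output channel instead of a straight copy:

#define LP_COEFF 0.05f // crude smoothing constant, chosen arbitrarily for the illustration

DWORD CALLBACK AsioOutProcFiltered(BOOL input, DWORD channel, float *buffer, DWORD length, void *user)
{
    static float lp_state [NUM_OP_CHANS]; // one filter state per output channel
    unsigned int ii, ch;

    for (ii = 0; ii < length / 4; ii += NUM_OP_CHANS)
    {
        float in_l = (float) pbuf_l [rd_ptr] / 32768;
        float in_r = (float) pbuf_r [rd_ptr] / 32768;

        for (ch = 0; ch < NUM_OP_CHANS; ch ++)
        {
            float x = (ch % 2 == 0) ? in_l : in_r; // left to even channels, right to odd

            // y[n] = y[n-1] + a * (x[n] - y[n-1]) : a simple one-pole low-pass
            lp_state [ch] += LP_COEFF * (x - lp_state [ch]);

            buffer [ii + ch] = lp_state [ch];
        }

        if (rd_ptr < BUF_LENGTH-1)
        {
            rd_ptr ++;
        }
        else
        {
            rd_ptr = 0;
        }
    }

    return length;
}

You would pass &AsioOutProcFiltered to BASS_ASIO_ChannelEnable in place of &AsioOutProc; more ambitious processing (biquads, FFT convolution, per-channel delays) slots in at exactly the same point.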

The above program uses about 1% of a 7 year old PC's processing power. I have found that my three way stereo crossover using very inefficient code and large FFTs takes about 30%. PCs are very powerful!

You can see how the output data is interleaved in the playback function. ASIO channels 2-7 are grouped so that you just have to stuff floats in channel sequence into the buffer. The 'length / 4' is because length is in bytes, but we're using 32 bit floats (could use 'sizeof (float)').

The two sound cards mentioned above have flexible routing allowing the ASIO output channels to be mapped to any physical output(s), and this can be accomplished using the sound cards' own 'applets' (I won't go into it here, but could explain it in another post). For both cards, 'Wave' is routed to the main stereo outputs by default, but can be turned off, thus freeing up those outputs for your processed audio. The Kx driver (and additional GUI-based software) is indispensable for the Audigy, and for the X-Fi use 'Audio Creation Mode' within the Creative-supplied control panel.

In practice, the idea is that you won't get any sound out of the sound card until the program is running.

Programs such as media players etc. will feed the default audio device as set up in the Windows control panel as normal, and it is this device and input that you select as your recording source at runtime.

geraldfryjr 20th May 2012 02:59 PM

This is a very cool thread and I will follow it very closely, as this is something that I would very much like to learn how to do.
I know nothing about today's modern programming techniques, so this looks like a neat place to start.

I do have the X-FI card and a GINA24 card as well as my main ASIO types.

Very Cool !!!

:cheers:

Jer :)

CopperTop 21st May 2012 12:34 AM

@geraldfryjr

Thanks for your comment.

In order to keep everything open source and ultra-cheap (or free) I just downloaded Code::Blocks, an open source C/C++ IDE, as an alternative to Microsoft Visual Studio etc.

(I downloaded the binary release with MinGW included.)

I created a 'console project' called 'Active Crossover' and cut and pasted the program text from the first post into main.c.

I placed bass.lib and bassasio.lib in the same directory as main.c.

In Project->Build Options->Linker Settings I added bass.lib and bassasio.lib. (For some reason I had to select 'No' when asked whether to use a relative path, as the alternative failed to compile.)

I then built the program using the Build menu (there were a few warnings but no errors). I placed bass.dll and bassasio.dll in the Bin\debug directory along with the newly-generated Active Crossover.exe.

I ran a media player program, having previously disabled direct routing from 'Wave' to any analogue outputs in my Audigy 2 ZS card (using the Kx '4xDSP' GUI); I know you can do the same in the X-Fi Audio Creation control panel. I then double-clicked on Active Crossover.exe, selected the appropriate sound card options when prompted, and music came out of my headphones. Hopefully the same would happen for you.

CopperTop 21st May 2012 10:28 AM

Just spotted an error in the code. Replace the '4096' with something bigger in this line:

while (wr_ptr - rd_ptr < 4096);

e.g. 10000

or you will get glitches. You can set specific callback periods for recording (see the BASS documentation for BASS_RecordStart (..)) and ASIO playback (BASS_ASIO_Start (..)), which you might wish to do if low latency is important.
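
One other thing worth mentioning: the simple 'wr_ptr - rd_ptr' subtraction goes negative once wr_ptr has wrapped back to zero while rd_ptr hasn't, so if you want to test the buffer fill level anywhere other than at start-up, a wrap-safe version along these lines is safer (just a sketch - buffered_frames, wait_for_prefill and PREFILL_FRAMES are names I've made up; the globals are those of the framework):

#define PREFILL_FRAMES 10000 // the larger figure suggested above

// fill level of the circular buffer, in stereo frames, safe across wrap-around
static int buffered_frames(void)
{
    return (wr_ptr - rd_ptr + BUF_LENGTH) % BUF_LENGTH;
}

// then, in main(), instead of "while (wr_ptr - rd_ptr < 4096);" :
static void wait_for_prefill(void)
{
    while (buffered_frames() < PREFILL_FRAMES)
    {
        Sleep (1); // don't spin the CPU flat out while waiting
    }
}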

Other than that, if you can get the program to compile and run successfully, you're home and dry, in that you can now place any DSP processing you like in between the record and playback callbacks. I've been using the FFTW library (http://www.fftw.org/) for FFT-based crossover filtering, which could also give you room correction 'for free', for example.
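
To give a flavour (and this is only a minimal sketch, not the code I actually run - FFT_SIZE and the per-bin 'response' array are placeholders, and you still need overlap-add or overlap-save on top of this to avoid circular-convolution artefacts), the bare FFTW mechanics for filtering one block of samples look roughly like this, using FFTW's single-precision interface (link with -lfftw3f):

#include <fftw3.h>

#define FFT_SIZE 4096

// Filter one block of FFT_SIZE samples by multiplying each frequency bin by a
// real (zero-phase) response. Buffers and plans are created once and reused.
void fft_filter_block(const float *in, float *out, const float *response)
{
    static float *time_buf = NULL;
    static fftwf_complex *freq_buf = NULL;
    static fftwf_plan fwd, inv;
    int k;

    if (time_buf == NULL) // set up buffers and plans on the first call
    {
        time_buf = (float *) fftwf_malloc(sizeof(float) * FFT_SIZE);
        freq_buf = (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * (FFT_SIZE/2 + 1));
        fwd = fftwf_plan_dft_r2c_1d(FFT_SIZE, time_buf, freq_buf, FFTW_ESTIMATE);
        inv = fftwf_plan_dft_c2r_1d(FFT_SIZE, freq_buf, time_buf, FFTW_ESTIMATE);
    }

    for (k = 0; k < FFT_SIZE; k++)
    {
        time_buf [k] = in [k];
    }

    fftwf_execute(fwd); // time -> frequency

    for (k = 0; k <= FFT_SIZE/2; k++) // scale each bin by the desired response
    {
        freq_buf [k][0] *= response [k]; // real part
        freq_buf [k][1] *= response [k]; // imaginary part
    }

    fftwf_execute(inv); // frequency -> time (unnormalised)

    for (k = 0; k < FFT_SIZE; k++)
    {
        out [k] = time_buf [k] / FFT_SIZE; // FFTW's c2r transform needs dividing by N
    }
}

In the framework above you would call something like this once per output channel from the playback callback, with a different response array for the woofer, mid and tweeter bands.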

boris81 22nd May 2012 07:49 PM

Wonderful!
Thank you for documenting it!

spot 23rd May 2012 03:05 AM

This is great!

Have you had any problems with the input sample rate being fractionally different from the output sample rate, causing buffers to eventually underrun or overrun?

This can happen if the input soundcard and the output soundcard are different devices, or does ASIO4all overcome this?

CopperTop 23rd May 2012 08:02 AM

Quote:

Originally Posted by spot (Post 3033381)
This is great!

Have you had any problems with the input sample rate being fractionally different from the output sample rate, causing buffers to eventually underrun or overrun?

This can happen if the input soundcard and the output soundcard are different devices, or does ASIO4all overcome this?

@spot

I would certainly imagine that using two different sound cards would cause the problem you describe. In fact, I mentioned it on this thread http://www.diyaudio.com/forums/pc-ba...er-pc-dsp.html last night. In my system, I use only the one sound card in order to avoid the clock drifting problem - you can run the whole thing 'bit perfect'.

If you go via SPDIF, I believe the X-Fi card can re-sample effectively transparently, though. The Audigy 2 ZS will also re-sample, but the quality is reputedly not as good. I just don't know the mechanisms well enough to understand how re-sampling would work via a software-only route, e.g. a media player streaming to one sound card (presumably locked to its sample clock), then a program like mine picking up that stream and playing it over ASIO to a second sound card.

I'm not using ASIO4ALL (as far as I know), but instead relying on the BASS ASIO library and Creative's ASIO drivers for the X-Fi. If using the Audigy 2 ZS, it would then be the Kx project drivers.

Which sound card(s) do you have? I'd be very interested to know which cards allow you to effectively intercept a 'Wave' stream before feeding a processed version to multiple outputs. In my experience, the default is for sound cards to automatically route the 'Wave' stream to the main stereo output, thus preventing you from using those outputs for processed audio. The Kx driver and the X-Fi allow you to turn that internal routing off, however.

P.S. Thanks for your nice comment! (and also boris81 and geraldfry)

spot 23rd May 2012 11:02 AM

I'm interested in using a virtual loopback application also. Basically I want to take sound from an HTPC program (2 or 5.1 channels) -> virtual sound card -> DSP filtering -> real sound card (many channels) -> amplifiers -> speakers, where there is a separate amp for each woofer and each tweeter etc.
The sound card would probably be an external USB 6 or 8 channel cheapie, but I was hoping to use a few of the cheap but good quality 192kHz USB stereo ones. Using your software it looks like this second option might be easy - I might need to write up-sampling code (not hard).

CopperTop 23rd May 2012 11:36 AM

Quote:

Originally Posted by spot (Post 3033694)
I'm interested in using a virtual loopback application also. Basically I want to take sound from an HTPC program (2 or 5.1 channels) -> virtual sound card -> DSP filtering -> real sound card (many channels) -> amplifiers -> speakers, where there is a separate amp for each woofer and each tweeter etc.
The sound card would probably be an external USB 6 or 8 channel cheapie, but I was hoping to use a few of the cheap but good quality 192kHz USB stereo ones. Using your software it looks like this second option might be easy - I might need to write up-sampling code (not hard).

An external USB DAC may be a simpler option. As I understand it, most USB DACs implicitly have to re-sample, or adjust themselves to the audio stream, and are not strictly 'bit perfect' (they either re-sample, or the bits stay the same and the clock adjusts itself from time to time), so the system you describe would work.

I believe, however, there is also the concept of asynchronous USB, where the external card requests packets of data from the PC and thus sets the sample rate itself (like an internal sound card), which might complicate things. Unless the virtual sound card can be slaved to the USB card, and the HTPC program slaved to the virtual sound card, thus locking the source and playback sample rates together? We need an expert on virtual sound cards.

Here's an entertaining read concerning the development of a USB-based DAC using non-asynchronous USB. Much thought went into the method for matching the DAC's clock to the USB audio stream using a close-to-ideal PLL.
The D/A diaries: A personal memoir of engineering heartache and triumph

phofman 23rd May 2012 02:03 PM

Did you look at Jack for Windows? IMO it supports ASIO only, but that could be sufficient for your needs. http://www.diyaudio.com/forums/pc-ba...ml#post2929879

From the multiple-clocks point of view, a USB DAC works like any other sound card: it consumes the stream independently of the other sound cards. Adaptive mode runs at the pace of the internal USB controller clock, asynchronous mode at the pace of the clock in the DAC. The advantage is that two adaptive DACs fed from the same USB controller (e.g. an add-on PCI/PCIe card with two ports) run synchronously, since the two ports share the same clock. However, most USB inputs (capture, record) run asynchronously, even in sound cards with adaptive output (playback) mode.

IMO you need to tie the "virtual soundcard" to the clock of the second, physical soundcard. That is why I am suggesting the technology designed for connecting various audio applications within a single clock domain - Jack, Live Professor? (http://www.diyaudio.com/forums/pc-ba...ml#post3003868) Jack even has a facility for adaptive resampling when joining two clock domains: http://lac.linuxaudio.org/2012/papers/23.pdf

