A bare bones framework for audio DSP on a PC

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
As someone who can just about program in 'C', and without money to burn on dedicated DSP boards, I was keen to use an old PC for audio, specifically for experimenting with active crossovers based on FFTs. What I really wanted was to use the PC effectively as a super-fast microcontroller with SPDIF and DACs attached, not to indulge in programming GUIs, drop-down menus and so on.

I was also keen to have access to the 'Wave' channel on the PC so that I could use any software source for input, such as audio editing packages, internet streams etc. The problem was the 'learning curve' for understanding how the operating system interacts with sound cards, and how to send audio to several sample-synchronised DAC outputs and other things that a low level DSP board would make obvious.

Luckily I stumbled across the BASS audio library (Un4seen Developments - 2MIDI / BASS / MID2XM / MO3 / XM-EXE / XMPlay), which gave me the tools I needed to build a bare-bones framework on which I could hang any experimental DSP algorithms. (I'm sure there are other ways to do this sort of thing with DirectX etc., but they didn't appeal to me.)

The PC is so unbelievably powerful that you can get away with the most inefficient programming techniques, and it has the advantage of being the 'target' device itself, which speeds up the iteration process no end.

Here is how to build a bare-bones framework for audio DSP.

You'll need:

(a) a PC with Windows XP (or maybe Vista or Linux, but I'm less sure of those alternatives)
(b) a means of compiling programs
(c) a reasonable quality sound card that will let you select any source as input to your software (SPDIF, CD, 'Wave' etc.) and provide enough analogue outputs. Two very cheap options, readily available secondhand are:
- Creative Audigy 2 ZS plus Kx open source driver (very cheap, fixed 48kHz sample rate but happy to resample any other sample rate with 'adequate' quality)
- Creative X-Fi (not Xtreme Audio) and the Creative supplied software (very high quality, can be bit perfect at popular sample rates, perfect resampling otherwise)
(d) The free BASS audio library.

The basic software 'infrastructure' provides a means of doing the following:

- Detect available sound cards
- Detect input sources (SPDIF, CD, Wave), and output destinations (ASIO output channels mapped to physical outputs like Line L/R, Centre L/R etc., and you can change the mapping using the sound card's 'applets')
- Select card, input source and output destinations
- Start 'recording'
- Start 'playback'

Your DSP code runs in between recording and playback, processing the recorded samples and pushing them out for playback over ASIO channels. Latency depends mainly on how complex the processing is. You can use this framework to build your own active crossovers, room correction algorithms, guitar effects etc.

Build your code against the BASS and BASS ASIO libraries (and put the DLLs alongside the executable) and use the following basic framework. I've stripped out all the error checking etc. to simplify it; refer to the BASS documentation for gospel info on how to use each function properly! The BASS library comes with quite a few examples, but none of them is quite as targeted and simplified as the program below.

/*****************************************************************************/
#include "bass.h"
#include "bassasio.h"
#include <math.h>
#include <stdio.h>
/*****************************************************************************/
//#define TEST_TONES // use this option to generate a separate test tone at each output
#define SAMPLE_RATE_HZ 48000 // The Audigy always runs at 48 kHz, but re-samples any incoming data
// so it will read a 44k1 stream transparently. You can use 44k1 directly
// with the X-Fi.

#define INPUT_CHAN_TYPE 1
#define OUTPUT_CHAN_TYPE 0

#define NO_FLAGS 0

#define NUM_OP_CHANS 6 // as many outputs as you need, e.g. a three-way stereo crossover
#define ASIO_START_CHAN 2 // for the Audigy, by default ASIO channels 2-8 are mapped to physical outputs
// but this can be changed in the Kx router

#define INPUT_VOL 1.0
#define OUTPUT_VOL 1.0

#define STEREO 2

#define BUF_LENGTH 65536 // you can be generous with a PC. N.B. this doesn't define the latency.

#define PI 3.141592654

/*****************************************************************************/
// create buffers for incoming stereo samples
short pbuf_l [BUF_LENGTH];
short pbuf_r [BUF_LENGTH];

// pointers for circular buffer
int wr_ptr = 0;
int rd_ptr = 0;

/*****************************************************************************/
// RECORD - buffer the recorded data
BOOL CALLBACK RecordingCallback(HRECORD handle, short *r_buffer, DWORD length, DWORD user)
{
    static unsigned int ii;

    //printf ("R %d\n", length); // just for debugging

    /* always transfer from low level sampled array into our own circular buffer */
    for (ii = 0; ii < length/2; ii += 2) // 'length' is in bytes
    {
        pbuf_l [wr_ptr] = r_buffer [ii];
        pbuf_r [wr_ptr] = r_buffer [ii + 1];

        if (wr_ptr < BUF_LENGTH-1)
        {
            wr_ptr ++;
        }
        else
        {
            wr_ptr = 0;
        }
    }

    return TRUE;
}
/*****************************************************************************/
// ASIO playback function
DWORD CALLBACK AsioOutProc(BOOL input, DWORD channel, float *buffer, DWORD length, void *user)
{
    unsigned int ii, ch;
    static float phase [NUM_OP_CHANS] = {0,0,0,0,0,0}; // if using test tones

    //printf ("P %d\n", length); // just for debugging

#ifdef TEST_TONES
    for (ii = 0; ii < length / 4; ii += NUM_OP_CHANS)
    {
        for (ch = 0; ch < NUM_OP_CHANS; ch ++)
        {
            buffer [ii + ch] = (float) sin ((float) phase [ch]) / 10;

            phase [ch] += (float)0.01 * (ch + 1);

            if (phase [ch] > 2*PI)
            {
                phase [ch] -= 2*PI;
            }
        }
    }

#else // as a demonstration, copy incoming audio to the outputs

    for (ii = 0; ii < length / 4; ii += NUM_OP_CHANS)
    {
        for (ch = 0; ch < NUM_OP_CHANS; ch ++)
        {
            // feed stereo input left to even channels, right to odd
            if (ch % 2 == 0)
            {
                buffer [ii + ch] = (float) pbuf_l [rd_ptr] / 32768;
            }
            else
            {
                buffer [ii + ch] = (float) pbuf_r [rd_ptr] / 32768;
            }
        }

        if (rd_ptr < BUF_LENGTH-1)
        {
            rd_ptr ++;
        }
        else
        {
            rd_ptr = 0;
        }
    }
#endif
    return length;
}
/*****************************************************************************/
int main(void)
{
    int a;
    BASS_DEVICEINFO device_info;
    BASS_ASIO_DEVICEINFO asio_device_info;
    const char *name;
    // variables for recording and playback device assignment
    int rec_dev, pb_dev, rec_ip;

    printf ("List of ASIO output devices:\n");
    for (a = 0; BASS_ASIO_GetDeviceInfo(a, &asio_device_info); a++)
    {
        printf ("dev %d: %s\ndriver: %s\n", a, asio_device_info.name, asio_device_info.driver);

        if (BASS_ASIO_Init(a, 0))
        {
            BASS_ASIO_CHANNELINFO i;
            int b;

            for (b = 0; BASS_ASIO_ChannelGetInfo(1, b, &i); b++)
            {
                printf ("\tin %d: %s (group %d, format %d)\n", b, i.name, i.group, i.format);
            }
            for (b = 0; BASS_ASIO_ChannelGetInfo(0, b, &i); b++)
            {
                printf ("\tout %d: %s (group %d, format %d)\n", b, i.name, i.group, i.format);
            }
        }
        BASS_ASIO_Free();
    }
    printf ("Use which ASIO playback device ? ");
    scanf ("%d", &pb_dev);
    printf ("\n");

    // list recording devices
    printf ("List of recording devices:\n");
    for (a = 0; BASS_RecordGetDeviceInfo(a, &device_info); a++)
    {
        printf ("%d: Device:%s Driver:%s\n", a, device_info.name, device_info.driver);
    }

    printf ("Use which recording device ? ");
    scanf ("%d", &rec_dev);
    printf ("\n");

    BASS_RecordInit(rec_dev);

    printf ("List of recording inputs:\n");
    for (a = 0; (name = BASS_RecordGetInputName(a)); a++)
    {
        float vol;
        BASS_RecordGetInput(a, &vol); // current input state/volume, unused here
        printf ("%d: %s \n", a, name);
    }

    printf ("Use which recording input ? ");
    scanf ("%d", &rec_ip);
    printf ("\n");

    BASS_RecordSetInput (rec_ip, NO_FLAGS, INPUT_VOL);

    // initialise the desired ASIO output device
    BASS_ASIO_Init(pb_dev, NO_FLAGS);

    // set the sample rate
    BASS_ASIO_SetRate ((double)SAMPLE_RATE_HZ);

    // enable the first ASIO output channel
    BASS_ASIO_ChannelEnable(OUTPUT_CHAN_TYPE, ASIO_START_CHAN, &AsioOutProc, 0);

    // use floating-point samples
    BASS_ASIO_ChannelSetFormat(OUTPUT_CHAN_TYPE, ASIO_START_CHAN, BASS_ASIO_FORMAT_FLOAT);

    // set the output volume for this channel
    BASS_ASIO_ChannelSetVolume (OUTPUT_CHAN_TYPE, ASIO_START_CHAN, (double)OUTPUT_VOL);

    // and join the next N ASIO output channels to it to make one six-channel group
    for (a = 1; a < NUM_OP_CHANS; a++)
    {
        BASS_ASIO_ChannelJoin(OUTPUT_CHAN_TYPE, ASIO_START_CHAN + a, ASIO_START_CHAN);
    }

    /* kick off an audio recording */
    // start recording at SAMPLE_RATE_HZ Hz, 16-bit stereo
    BASS_RecordStart(SAMPLE_RATE_HZ, STEREO, NO_FLAGS, (RECORDPROC*) &RecordingCallback, 0);

    // Put a pause in here to ensure that enough recorded samples have
    // built up before starting playback. You may wish to think carefully about this
    // to reduce latency
    while (wr_ptr - rd_ptr < 4096);

    // start playing back over ASIO
    BASS_ASIO_Start (4096); // you can force a specific buffer length with a non-zero argument

    // loop forever, and let the callbacks do all the work
    while (1)
    {
        Sleep (500);
    }

    return 1;
}
/*****************************************************************************/
/*****************************************************************************/
The above program detects the available ASIO devices and their output channels, and asks you to select which device you want to use for output. A number of ASIO output channels are then 'joined' to form a sample-synchronised group.

The program then detects available recording input devices and asks you to select one, plus the input source. Inputs and outputs should all be on the same sound card.

Two callback functions are set up:
- a recording function pulls 16-bit samples from the stereo recording source and sticks them into a circular buffer
- a playback function continually pulls samples from the circular buffer and shoves floats into several ASIO output channels as a simple demonstration. There is also an option to generate test tones at individual frequencies at each analogue output.

The two callbacks produce and use data at exactly the same average sample rate.

Your DSP code can live in either callback function, or the main loop if you arrange it that way.

The above program uses about 1% of a seven-year-old PC's processing power. I have found that my three-way stereo crossover, using very inefficient code and large FFTs, takes about 30%. PCs are very powerful!

You can see how the output data is interleaved in the playback function. ASIO channels 2-7 are grouped, so you just have to stuff floats in channel sequence into the buffer. The 'length / 4' is because 'length' is in bytes, but we're using 32-bit floats (you could use 'sizeof (float)').
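To make the interleaving concrete, here is a stand-alone sketch (the function name and parameters are my own invention, not part of the program above) of the packing the playback callback performs: within each frame of num_chans floats, the left sample goes to the even channels and the right sample to the odd ones.

```c
/* Illustrative only - hypothetical helper, not part of the program above.
   Packs separate left/right 16-bit buffers into an interleaved float buffer
   of 'num_chans' channels: left feeds the even channels, right the odd ones.
   'length_bytes' is in bytes, as in the ASIO callback, hence the division
   by sizeof (float). */
static void fill_interleaved (float *buffer, unsigned length_bytes,
                              const short *left, const short *right,
                              unsigned num_chans)
{
    unsigned num_floats = length_bytes / sizeof (float);
    unsigned frame = 0;
    unsigned ii, ch;

    for (ii = 0; ii < num_floats; ii += num_chans, frame++)
    {
        for (ch = 0; ch < num_chans; ch++)
        {
            short s = (ch % 2 == 0) ? left [frame] : right [frame];
            buffer [ii + ch] = (float) s / 32768;
        }
    }
}
```

For one six-channel frame, buffer[0], buffer[2] and buffer[4] all receive the left sample, and buffer[1], buffer[3] and buffer[5] the right.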

The two sound cards mentioned above have flexible routing allowing the ASIO output channels to be mapped to any physical output(s), and this can be accomplished using the sound cards' own 'applets' (I won't go into it here, but could explain it in another post). For both cards, 'Wave' is routed to the main stereo outputs by default, but can be turned off, thus freeing up those outputs for your processed audio. The Kx driver (and additional GUI-based software) is indispensable for the Audigy; for the X-Fi, use 'Audio Creation Mode' within the Creative-supplied control panel.

In practice, the idea is that you won't get any sound out of the sound card until the program is running.

Programs such as media players etc. will feed the default audio device as set up in the Windows control panel as normal, and it is this device and input that you select as your recording source at runtime.
 
This is a very cool thread and I will follow this very closely.
As this is something that I would very much like to learn how to do.
But as I know nothing about today's modern programming techniques, this looks like a neat place to start.

I do have the X-FI card and a GINA24 card as well as my main ASIO types.

Very Cool !!!

:cheers:

Jer :)
 
@geraldfryjr

Thanks for your comment.

In order to keep everything open source and ultra-cheap (or free), I just downloaded Code::Blocks, an open-source C IDE, as an alternative to Microsoft Visual Studio etc.

Download binary

(I downloaded the version with MinGW included)

I created a 'console project' called 'Active Crossover' and cut and pasted the program text from the first post into main.c.

I placed bass.lib and bassasio.lib in the same directory as main.c.

In Project->Build Options->Linker Settings I added bass.lib and bassasio.lib. (For some reason I had to select 'No' when asked whether to use a relative path, as the alternative failed to compile.)

I then built the program using the Build menu (there were a few warnings but no errors). I placed bass.dll and bassasio.dll in the Bin\debug directory along with the newly-generated Active Crossover.exe.

I ran a media player program having previously disabled direct routing from 'Wave' to any analogue outputs in my Audigy 2 ZS card (using the Kx '4xDSP' GUI), but I know you can do the same in the X-Fi Audio Creation control panel. I then double-clicked on Active Crossover.exe, selected the appropriate sound card options when prompted, and music came out of my headphones. Hopefully the same would happen for you.
 
Just spotted an error in the code. Replace the '4096' with something bigger in this line:

while (wr_ptr - rd_ptr < 4096);

e.g. 10000

or you will get glitches. You can set specific callback periods for recording (see the BASS documentation for BASS_RecordStart (..)) and ASIO playback (BASS_ASIO_Start (..)), which you might wish to do if low latency is important.

Other than that, if you can get the program to compile and run successfully, you're home and dry in that you can now place any DSP processing you like in between the record and playback callbacks. I've been using the FFTW library FFTW Home Page for FFT-based crossover filtering, which could also give you room correction 'for free', for example.
 
This is great!

Have you had any problems with the input sample rate being fractionally different from the output sample rate, causing buffers to eventually underrun or overrun?

This can happen if the input soundcard and the output soundcard are different devices, or does ASIO4all overcome this?
 
@spot

I would certainly imagine that using two different sound cards would cause the problem you describe. In fact, I mentioned it on this thread http://www.diyaudio.com/forums/pc-based/213131-virtual-loopback-audio-driver-pc-dsp.html last night. In my system, I use only the one sound card in order to avoid the clock drifting problem - you can run the whole thing 'bit perfect'.

If you go via SPDIF, I believe the X-Fi card can re-sample effectively transparently, though. The Audigy 2 ZS will also re-sample, but the quality is reputedly not as good. I just don't know the mechanisms well enough to understand how re-sampling would work via a software-only route e.g. A media player streaming to one sound card (presumably locked to its sample clock), then a program like mine picking up that stream and then playing it over ASIO to a second sound card.

I'm not using ASIO4ALL (as far as I know), but instead relying on the BASS ASIO library and Creative's ASIO drivers for the X-Fi. If using the Audigy 2 ZS, it would then be the Kx project drivers.

Which sound card(s) do you have? I'd be very interested to know which cards allow you to effectively intercept a 'Wave' stream before feeding a processed version to multiple outputs. In my experience, the default is for sound cards to automatically route the 'Wave' stream to the main stereo output, thus preventing you from using those outputs for processed audio. The Kx driver and the X-Fi allow you to turn that internal routing off, however.

P.S. Thanks for your nice comment! (and also boris81 and geraldfry)
 
I'm interested in using a virtual loopback application also. Basically I want to take sound from an HTPC program (2 or 5.1 channels) -> virtual sound card -> DSP filtering -> real sound card (many channels) -> amplifiers -> speakers, where there is a separate amp for each woofer, each tweeter etc.
The sound card would probably be an external USB 6- or 8-channel cheapie, but I was hoping to use a few of the cheap but good-quality 192 kHz USB stereo ones. Using your software it looks like this second option might be easy - I might need to write up-sampling code (not hard).
 
An external USB DAC may be a simpler option. As I understand it, most USB DACs implicitly have to re-sample, or adjust themselves to the audio stream, and are not strictly 'bit perfect' (either they re-sample, or the bits stay the same and the clock adjusts itself from time to time), so the system you describe would work.

I believe, however, there is also the concept of asynchronous USB, where the external card requests packets of data from the PC and thus sets the sample rate itself (like an internal sound card), which might complicate things. Unless the virtual sound card can be slaved to the USB card, and the HTPC program slaved to the virtual sound card, thus locking the source and playback sample rates together? We need an expert on virtual sound cards.

Here's an entertaining read concerning the development of a USB-based DAC using non-asynchronous USB. Much thought went into the method for matching the DAC's clock to the USB audio stream using a close-to-ideal PLL.
The D/A diaries: A personal memoir of engineering heartache and triumph
 
Did you look at JACK for Windows? IMO it supports ASIO only, but that could be sufficient for your needs. http://www.diyaudio.com/forums/pc-based/203793-vst-windows-7-system-audio.html#post2929879

From the multiple-clocks point of view, a USB DAC works like any other sound card: it consumes the stream independently of the other sound cards. Adaptive mode runs at the pace of the internal USB controller clock; asynchronous mode at the pace of the clock in the DAC. The advantage is that two adaptive DACs fed from the same USB controller (e.g. an add-on PCI/PCIe one with two ports) run synchronously, since the two ports share the same clock. However, most USB inputs (capture, record) run asynchronously, even in sound cards with adaptive output (playback) mode.

IMO you need to tie the "virtual soundcard" to the clock of the physical second soundcard. That is why I am suggesting the technology designed for connecting various audio applications within a single clock domain - JACK, LiveProfessor? http://www.diyaudio.com/forums/pc-based/203793-vst-windows-7-system-audio.html#post3003868 JACK even has a facility for adaptive resampling when joining two clock domains: http://lac.linuxaudio.org/2012/papers/23.pdf
 
Just a note about what you can do with a PC when it comes to DSP. Your PC has a performance measured in the tens of GFLOPS! It won't even break into a sweat for most audio processing applications.

In audio, the most obvious thing might be to implement an active crossover. You could emulate standard analogue filters using IIR (Infinite Impulse Response) filters, but the most exciting possibilities lie in using FIR (Finite Impulse Response) filters to implement linear-phase filters (Linear phase - Wikipedia, the free encyclopedia) or even digital room correction (Digital room correction - Wikipedia, the free encyclopedia).

At its heart is the idea of convolving (Convolution - Wikipedia, the free encyclopedia) the incoming audio with a pre-defined impulse response. The impulse response can be one that gives you a linear-phase band-pass filter, or the inverse of your listening room's impulse response - or a combination of the two, i.e. you can apply individual room correction to each driver at the same time as the crossover filtering.

On the surface, convolution looks as though it should be an incredibly processor-hungry activity. However, this isn't the case, thanks to the Fast Fourier Transform (FFT) and the 'convolution theorem', which states that convolution in the time domain is the exact equivalent of multiplication in the frequency domain. All you have to do is calculate the forward FFTs of your filter's impulse response and of the incoming audio, multiply them together using complex arithmetic, then take the inverse FFT to get your convolved audio. Because the FFT is a very efficient operation, whose complexity scales with only the log of the size of the input array, it is possible to do real-time convolution of huge FIR filters.

The number of FIR 'taps' you can process on a PC is almost incredible. The open-source application BruteFIR (http://www.ludd.luth.se/~torger/brutefir.html#whatis) can process over 3 million taps at a 44.1 kHz sample rate on a 1 GHz Athlon. Most processors are much faster than this, so will far exceed even that unbelievable figure.

BruteFIR is based on the FFTW library (FFTW Home Page), which is also what I have used for my playing with active crossovers. For the complex multiplication I have simply used the following inefficient code:

/*****************************************************************************/
// complex structure definition
struct cpx
{
    double real;
    double imag;
};
/*****************************************************************************/
struct cpx prod (struct cpx x, struct cpx y)
{
    struct cpx z;

    z.real = x.real * y.real - x.imag * y.imag;
    z.imag = x.imag * y.real + x.real * y.imag;

    return z;
}
/*****************************************************************************/

So to implement an active crossover:
1. Define filter function for each driver (frequency and phase response) as an array of complex numbers.
2. Take the inverse FFT to get the impulse response of the filter.
3. Smooth it with a Hamming window or similar.
4. Take the forward FFT to get the pre-defined filter smoothed frequency response (only have to do this once before 'runtime').

5. Continuously assemble FFT arrays of incoming audio as real values only.
6. Take the forward FFT of the audio.
7. Complex multiply it by the filter's (previously-defined) frequency response.
8. Take the inverse FFT to get the audio convolved with the filter's impulse response.
9. Use the overlap-add method (Overlap-add method - Wikipedia, the free encyclopedia) to continuously produce the filtered output for each driver (it is not necessary to use 'windowing' at the output stage).

This, of course, is complicated by the fact that the FFTs of real signals are symmetrical about the Nyquist point, so the filter's FFT arrays have to be assembled in symmetrical mirrored form, and overlap-add requires the arrays to be zero-padded to a larger size before being added into the output buffer, etc.
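The overlap-add bookkeeping of step 9 can be shown in isolation. This is my own sketch, not the original code; for clarity each block is convolved directly in the time domain, whereas in the real program each block convolution would be replaced by the zero-padded forward-FFT / complex-multiply / inverse-FFT sequence described above.

```c
/* Illustration only - hypothetical helper, not from the original program.
   Overlap-add: split the input into blocks of 'block_len' samples, convolve
   each block with the 'm'-tap impulse response 'h' (each partial result is
   block_len + m - 1 samples long), and add each partial result into the
   output at the block's own offset. 'y' must hold x_len + m - 1 samples
   and be zeroed by the caller. */
static void overlap_add (const double *x, int x_len,
                         const double *h, int m,
                         double *y, int block_len)
{
    int start, i, j;

    for (start = 0; start < x_len; start += block_len)
    {
        int len = (x_len - start < block_len) ? (x_len - start) : block_len;

        for (i = 0; i < len; i++)
        {
            for (j = 0; j < m; j++)
            {
                y [start + i + j] += x [start + i] * h [j];
            }
        }
    }
}
```

Because each block's result is block_len + m - 1 samples long, its tail overlaps the head of the next block's result, and the additions reconstruct the exact linear convolution of the whole stream.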

It takes a while to get your head around it all, but I will endeavour to 'minimalise' my active crossover code and place it here later.
 
A few questions about the framework:

Have you tried with a recent PC motherboard featuring the Realtek HD Audio subsystem (three line-level stereo minijacks as outputs)?

On a quite old PC not featuring Realtek HD Audio, have you tried attaching a CM6206-based USB multichannel audio board?

Have you tried using ASIO4ALL for converting an on-board Realtek HD Audio subsystem, or a CM6206-based USB attachment, into an ASIO peripheral?

Have you tried VAC (Virtual Audio Cable) for the ASIO signal routing?
 
Hi steph. I'm afraid I have to answer in the negative to testing with all those permutations. My tests are limited to a Creative X-Fi and an older Audigy 2. I have previously used BASS successfully with an M-Audio 2496 card. The PCs I have used are old Dell mini-towers, plus a Sony Vaio desktop machine.

In principle, I don't see why it shouldn't work on any PC with an ASIO-capable card/chipset, provided the drivers are correct.

I did spend a little while looking into the pros and cons of virtual audio cables vs. two sound cards linked with SPDIF etc. I was delighted, however, to find that it was possible to use a single sound card as the simultaneous destination for media player applications, the source for my DSP program, and the destination for the processed audio. This must be the neatest way to do it, surely..?
 
hello,

installed bass.net

created a new CLR Console project on Microsoft Visual C++2010
entered the bare bones framework C source code (from above)

added bass.net in system references using the solution navigator
copied bass.dll and bassasio.dll in project subdirectory

warnings and errors at the .exe generation stage
could not generate the executable
see attached .zip

any advice welcome
 

Attachments

- CopperTop (CLR Console) 0.10.zip (2.9 KB)
- CopperTop (CLR Console) 0.10.jpg (291.6 KB)
Could you try changing line 162 to this?:
BASS_ASIO_ChannelEnable(OUTPUT_CHAN_TYPE, START_CHAN, (ASIOPROC*) AsioOutProc,0);
Unfortunately, the compiler needs to know about START_CHAN. The build generates the following report:


1>------ Build started: Project: CopperTop (CLR Console) 0.10, Configuration: Debug Win32 ------
1> CopperTop (CLR Console) 0.10.cpp
1>CopperTop (CLR Console) 0.10.cpp(134): warning C4996: 'scanf': This function or variable may be unsafe. Consider using scanf_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
1> C:\Program Files\Microsoft Visual Studio 10.0\VC\include\stdio.h(304) : see declaration of 'scanf'
1>CopperTop (CLR Console) 0.10.cpp(143): warning C4996: 'scanf': This function or variable may be unsafe. Consider using scanf_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
1> C:\Program Files\Microsoft Visual Studio 10.0\VC\include\stdio.h(304) : see declaration of 'scanf'
1>CopperTop (CLR Console) 0.10.cpp(147): error C2440: '=' : cannot convert from 'const char *' to 'char *'
1> Conversion loses qualifiers
1>CopperTop (CLR Console) 0.10.cpp(163): error C2065: 'START_CHAN' : undeclared identifier
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
 
CopperTop,

For clarity, I assume you are using the standard audio inputs and processing with BASS then sending to the standard outputs. Having programmed with BASS for a couple of years (great library!) I have thought of a similar concept but potentially more flexible:
- Use virtual soundcard software that means that any software player can be used, rather than mucking around with 2 soundcards, loopback etc. The Windows DDK has sample code MSVAD (virtual audio driver) that installs as a software soundcard which can be used as your PC default soundcard.
- Route the stream from MSVAD to a user mode application running on the PC that runs the bass libraries. You could enhance MSVAD to also expose a soundcard input, directing the bass code to use as input, but it would be cleaner to use in memory buffer transfer (the msvad sample shows how).
- Once in BASS, you could use the BASS DSP libraries but I would prefer to use VST for highest quality & features. Having played with VST plugins with BASS, it works but sometimes the plugin UI gets screwed up.
- Bass then outputs to your selected output soundcard.

This way you have a desktop 'DSP' application that hosts the VST plugins via BASS and is completely flexible for any audio player (eg. players like Zune that can't select the output audio device).

I have compiled MSVAD and got the basics working, but have not had time to explore this further (I program in VB; my C++ is rusty, which doesn't help).
 
Hi deandob

I think I am using a neater solution than that:

Providing the drivers/control applet allow it, it is possible for a single sound card to function simultaneously as
1. The stereo destination for any software player (this just needs you to set the sound card as the default device in the Windows Control Panel; alternatively the sound card can act as a hardware input interface - line in, SPDIF etc.).
2. The stereo source for your own DSP application (just choose 'Wave' in BASS as your input if you want to process the output from software players, or select line in, SPDIF etc. to process hardware inputs);
3. The multi-channel destination for your own DSP application e.g. eight analogue output channels. (Just send your eight outputs over ASIO from your BASS-based application)

No Virtual Sound cards in sight and, naturally, all the sample clocks are locked together.

Some sound cards and their drivers insist on providing a direct route from the input to the output, but even this may not necessarily be a huge problem, as it may only take up two of eight outputs. However, the impression I am getting is that the more professional cards will allow you to provide the functions that I list above, all at the same time, with no direct link from input to output.

Certainly this arrangement is possible with the humble Creative Audigy 2 (provided you use the open source Kx Project drivers and edit the DSP/router configuration), and it definitely works with the Creative X-Fi and its standard drivers.

I've been using it for a while and, as far as I can tell, it's that rare thing - a perfect solution. In fact, I worry that I'm missing something when people talk about needing two sound cards or commercial Virtual Sound card software, as these all involve extra complexity, expense and non-bit-perfect resampling...
 
OK, I understand: you are relying on the routing features of the soundcard. I did something similar with the EMU driver PatchMix on my previous HTPC (now using Lynx, but the driver/mixer isn't as powerful) and it does work well.
I didn't know BASS could capture the output stream of software players by just selecting WAVE as the input, I'll try that.

Sample clocks are not an issue here, as the PC will be processing asynchronously; there will be a latency delay, but with a modern PC it would be minimal.

What I am talking about would be independent of the soundcard and an integrated software solution using MSVAD and BASS. I am planning on using this approach to send the DSP processed audio stream via USB to a DIY DAC in multichannel. Maximum flexibility but coding needed!
 