Build in uSecs of DELAY into DAC??

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Ok,

I have seen a university project that did this, and I guess the cheapo portable CD players actually do it (15-60 secs of anti-skip), but does anyone have any references, citations, or suggestions about how to build a specific (fixed, if that is easiest) amount of straight-up delay into the digital stream of a DAC? Assume you can modify or build a known DAC circuit that is suitable.

The simple idea is that I'd like to delay (for example) an HF array a fixed amount of time - external digital delays (as used in a typical PA/SR system) are not going to be a suitable "high-end" solution here...

_-_-bear :Pawprint:
 
I can't really see what is wrong with something like a Behringer DCX2496, which does this.

You can easily build a delay circuit for S/PDIF, or (even easier, but with more delay parts) for the MCLK / BCLK / SDATA / "L/RCLK" lines - for the latter you need three delays, and you could clock them with BCLK. You can get logic chips which have multiple delay elements and serial I/O ports (some even programmable), or do a round-robin RAM-based variant. The former obviously requires an input receiver etc.

You could also do the same kind of thing using a computer.

You probably also owe it to yourself to search for [Erland Unruh DAC] for a really cool implementation of a digitally controlled delay buffer for input receiving.

Also, PS Audio did some interesting things in their older products - Ultralink or Audiolink or something like that. The Genesis Digital Lens was also a jitter attenuator, courtesy of a digital delay.
 
mSecs not uSecs, etc...

Sorry, my title should have read mSecs... not uSecs...

But I'll look for that citation...

The Behringer products are nice enough, but generally not terribly good to listen to, have a higher-than-optimal noise level, use all tiny, tiny surface-mount stuff, and are very tough to get into for mods (even changing out opamps) unless you have an SMD station and a microscope. Other than that they're very cost effective. Also, afaik, no schematics are available. (And, I think everything passes through a DSP chip or two, which is a black box as far as what it is doing or not...)

I'm looking for ultimate quality above all else.

And being a digital ignoramus, I need genuine information and insight into this sort of implementation. <---

But, does anyone know of a site with the Behringer schematics?

_-_-bear :Pawprint:
 
How much of a delay do you need? You could try to use a large FIFO chip (search Digi-Key for "FIFO"). If you just need a few ms of delay then you could use the data and LRclock lines as the data in/out of a large FIFO and the bitclock to clock it. If you need more than that, you will have to parallelize the datastream and use the LRclock to clock the samples in/out of the FIFO.
 
macboy said:
How much of a delay do you need? You could try to use a large FIFO chip (search Digi-Key for "FIFO"). If you just need a few ms of delay then you could use the data and LRclock lines as the data in/out of a large FIFO and the bitclock to clock it. If you need more than that, you will have to parallelize the datastream and use the LRclock to clock the samples in/out of the FIFO.

Unsure at the moment, but likely in the 10 mSec range, iirc.

The idea is to time delay the tweeter vs. mids where there is a physical distance between the two...

...being able to set up the amount of the delay, even if that is more or less in hardware, is essential. A random amount of delay won't do the trick.

So, how would this work with a FIFO?

I wonder, though: if you delay the data, regardless of where it is in the data stream, and the bitclock is sync'd to the input signal, then once the source ends, does the bitclock go out of sync? Hmmm... been many years since I last looked at the guts of a DAC in detail, guess that shows...

_-_-bear :Pawprint:
 
bear said:
Unsure at the moment, but likely in the 10 mSec range, iirc.

The idea is to time delay the tweeter vs. mids where there is a physical distance between the two...

...being able to set up the amount of the delay, even if that is more or less in hardware, is essential. A random amount of delay won't do the trick.

So, how would this work with a FIFO?
So if you wanted to delay the serial datastream, then you would probably need about a 32K-deep FIFO, since the bitclock would be around 2.82 MHz for CD if I'm not mistaken (and I certainly could be). A 32K FIFO would give you up to 32768/2.822M = 11.6 ms delay maximum. If you need more, you need a bigger device. Luckily there isn't much price difference; they are all expensive :p . Since the delay is dependent on clock frequency, you may need to detect the difference between 48 kHz and 44.1 kHz inputs, unless you know you'll only ever use 44.1.

The TI SN74V263/273/283/293 FIFOs have a programmable almost-full flag. This is an output pin which indicates that a user-specified threshold has been surpassed. At this point, you would enable the output clock. This would be a duplicate of the input bitclock, thereby emptying the FIFO at exactly the same rate as you are filling it. The actual delay is determined by the threshold that you program for the almost-full flag.

I wonder, though: if you delay the data, regardless of where it is in the data stream, and the bitclock is sync'd to the input signal, then once the source ends, does the bitclock go out of sync? Hmmm... been many years since I last looked at the guts of a DAC in detail, guess that shows...
Once the source ends, the output clock will stop too, since it is the same clock (you've simply started the output clock a few ms later to create delay). This would result in a few ms of audio getting 'stuck' in the FIFO. You wouldn't know the difference, until a valid clock starts up again and you get a brief blip of old audio before the new audio. So maybe you could build a clock loss detection circuit (like a timer that is reset on each rising clock edge... no clock edge and the timer expires) which would reset the FIFO to get it ready for new data.
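The clock-loss detector suggested above can be modeled in software as a retriggerable watchdog: a counter reset on every bit-clock edge and advanced by a free-running reference; if it ever reaches the timeout, the clock is declared lost and the FIFO gets reset. The timeout value and names here are illustrative assumptions, not from any datasheet:

```c
#include <stdbool.h>

#define TIMEOUT 8          /* reference ticks without a clock edge => lost */

static unsigned watchdog = 0;

/* Call on every rising bitclock edge: retrigger the watchdog. */
void clock_edge(void)
{
    watchdog = 0;
}

/* Call on every tick of a free-running reference clock.
 * Returns true when the bitclock has been absent for TIMEOUT ticks,
 * i.e. the FIFO should be reset to flush the stale samples. */
bool reference_tick(void)
{
    if (watchdog < TIMEOUT)
        watchdog++;
    return watchdog >= TIMEOUT;
}
```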
 
commercial digital delay

I understand that professional audio uses delay circuits to equalize arrival times from speakers in large auditoriums.

Perhaps a search for such a device will reveal a design.

I was once asked to design such a device but the project only got as far as a block diagram.

It was as follows:

1. A clock running at 48 kHz; in this case maybe 44.1 kHz is more appropriate. The clock ran two counters.
2. An A to D feeding a 16 bit wide RAM array clocked by counter 1
3. A DIP switch that would preload counter 2 every time counter 1 overflowed.
4. The RAM data at counter 2 address fed a D to A to convert it all back to analog.

The RAM was of course arranged to be a circular buffer with new data overwriting old data. 65K words would provide about a second and a half of maximum delay. 65K words requires a 16-bit counter; you can scale as needed for different maximum delays.

If you don't use dual port RAM then double the clock and write/read on alternating cycles and divide by 2 for the counters.

You can use cheap DRAM because the sequential addressing of this scheme automatically refreshes the RAM without needing separate refresh cycles.
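A minimal software model of that block diagram, with counter 1 as the write address and counter 2 trailing it by the DIP-switch offset. The buffer is scaled down from 65K words for illustration, and all names are mine:

```c
#define RAM_WORDS 256          /* scaled down from 65K words for illustration */

static short ram[RAM_WORDS];   /* the circular sample buffer */

/* One sample period: counter 1 writes the fresh A/D sample, counter 2
 * (write address minus the DIP-switch delay) reads the delayed sample
 * for the D/A. New data overwrites old automatically as the counters wrap. */
short delay_step(short adc_sample, unsigned *wr, unsigned delay)
{
    ram[*wr] = adc_sample;                               /* A/D -> RAM */
    unsigned rd = (*wr + RAM_WORDS - delay) % RAM_WORDS; /* counter 2 address */
    *wr = (*wr + 1) % RAM_WORDS;                         /* advance counter 1 */
    return ram[rd];                                      /* RAM -> D/A */
}
```

Called once per sample, this returns the input delayed by `delay` sample periods (after the buffer has filled past the offset).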
 
rfbrw said:
The simplest solution for a fixed delay in a source synchronous setup is a large shift register implemented in a CPLD or a FPGA.
You need a lot of memory though... a typical 64-macrocell CPLD may only provide 1 or 2 samples of delay on 32/16-bit audio, and an FPGA which has enough memory to give tens of ms of delay isn't going to be cheap, and it's probably going to be a fine-pitch SMT / separate-low-core-voltage type of chip.

10 msec of delay requires 1764 bytes @ 44.1 kHz, 16-bit stereo... a dsPIC30F4013 microcontroller has 2048 bytes of data RAM and a codec interface which supports I2S, and it's a 'nice' 5V-powered 40-pin DIP. As long as you don't increase the sample rate or the delay, you might be able to do it using only this chip.

If the 2048 bytes of RAM isn't enough, you could probably hook up a 62256-style SRAM chip to the PIC with little or no additional logic.
 
Another option is a DSP (don't stop reading yet!) such as the TI TAS3103. This is a special-purpose DSP which has various audio processing blocks in it, including a bunch of biquad (IIR) filters for EQ, HP, LP, etc., mixers, compressors/expanders, and a delay block. This can provide reverb, echo, or in our case, delay. The total delay is IIRC 43 ms for all channels at 48 kHz. That's a little better than 20 ms stereo. It is fully configurable via a relatively simple I2C interface to a microcontroller. You do not need to write a single line of code for the DSP; just tell it how much to delay the signal and it does it. It takes I2S in (almost every format in existence) and I2S out, so it can go right inline with the digital audio signal en route to the DAC.
 
I feel like we're getting warmer...

Whenever I get around trying to build something in digital I get a little weak in the knees... not very much experience with it. :(

Pinkmouse, the idea is to do it somewhere between the SPDIF and the I/V converter... generally speaking. Avoiding a "separate" A/D - D/A conversion process in the signal path.

So far the TI chip appears to offer the cleanest dumb bunny route to a solution... assuming that when inserted into the I2S stream in my DAC the whole thing doesn't radically change character! (yeah, I know that digital is supposed to be impervious to sound changes but somehow it doesn't seem to actually work out quite like that).

But is it a flat-pack multi-pin surface-mount chip?? Jeepers, what's a guy with a regular soldering iron going to do with it??

And what do you do to tell it what to do? I think you're saying it gets a one-time interface with a computer that gives it a command or two?? (Yeah, ok, I gotta go to the TI site and look at it...)

Signed,

Fat fingers and Big Thumbs...

:D

_-_-bear :Pawprint:
 
gmarsh said:

You need a lot of memory though... a typical 64-macrocell CPLD may only provide 1 or 2 samples of delay on 32/16-bit audio, and an FPGA which has enough memory to give tens of ms of delay isn't going to be cheap, and it's probably going to be a fine-pitch SMT / separate-low-core-voltage type of chip.

10 msec of delay requires 1764 bytes @ 44.1 kHz, 16-bit stereo... a dsPIC30F4013 microcontroller has 2048 bytes of data RAM and a codec interface which supports I2S, and it's a 'nice' 5V-powered 40-pin DIP. As long as you don't increase the sample rate or the delay, you might be able to do it using only this chip.

If the 2048 bytes of RAM isn't enough, you could probably hook up a 62256-style SRAM chip to the PIC with little or no additional logic.

The problem with using a PIC is that the data is continuous and RAM buffers can have complicated addressing to contend with. A large shift register in something like a XC3S250 would be very simple.
 
rfbrw said:
The problem with using a PIC is that the data is continuous and RAM buffers can have complicated addressing to contend with. A large shift register in something like a XC3S250 would be very simple.

Circular buffers are easy to do.

Spartan3?! Soldering 0.5mm SMT packages to a board and supplying them with 1.2V core supplies capable of sinking as well as sourcing current, doing 5V->3.3V translation in and out of the part, etc... that's not easy. The XC3S200 (the 250 doesn't exist) is also an expensive enough chip on its own without any of the extra hardware you'll need to make it work in this application.

And for a newbie, making a DIP-switch-controlled variable-length shift register in Verilog/VHDL and getting all the timing right is probably going to be just as hard as programming a PIC.
 
gmarsh said:


circular buffers are easy to do.

But you still need some form of control logic.


Spartan3?! Soldering 0.5mm SMT packages to a board and supplying them with 1.2V core supplies capable of sinking as well as sourcing current, doing 5V->3.3V translation in and out of the part, etc... that's not easy. The XC3S200 (the 250 doesn't exist) is also an expensive enough chip on its own without any of the extra hardware you'll need to make it work in this application.

The S3E family lists a 250E but that may well be vapourware. The XC3S400, in the fairly user-friendly PQ208 or TQ144, is around $27 in singles. As to the difficulty of turning it into a working design, what say we agree to disagree.


And for a newbie, making a dip-switch controlled variable-length shift register in verilog/vhdl and getting all the timing right is probably going to be just as hard as programming a PIC.

I suppose a 28000-bit shift register can be a bit of a pain if one sticks to schematic design.
 
rfbrw said:
But you still need some form of control logic.

Nope, the dsPIC alone (+crystal, decoupling caps, DIP switches to set delay, etc) will work just fine without any control logic.

It's much cheaper than the $27 spartan route. Don't forget that you'll need a XCF04S or a host processor to configure the FPGA and a bunch of other power supply stuff which brings the cost up even more, and finally you'll certainly be fabricating a PCB for the spartan while you can protoboard a DIP dsPIC.

As for the software, again it's easy:

three global variables needed:
- write pointer
- maximum buffer size
- delay (read from DIP switches on power-on)

operation (run this whenever the codec interface IRQ fires):
- Read the codec receive data register. Store the data into the write pointer location, and decrement the write pointer. Jump to <max buffer size-1> if the write pointer is zero.
- Add the delay to the write pointer offset, modulo the result with the max buffer size; that's your read pointer. Read that memory location and write it to the codec transmit data register.

Designing hardware/software like this is my day job... An FPGA will work, but it's a costly and inefficient way to do it.
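The interrupt routine described above can be sketched in C, modeled as a plain function called once per codec IRQ. The `rx` argument and return value stand in for the codec receive/transmit data registers; these names and the 16-bit sample buffer are my assumptions, not actual dsPIC register names:

```c
#include <stdint.h>

#define BUF_SIZE 2048                 /* sample buffer, sized for ~10 ms @ 44.1 kHz stereo */

static int16_t buf[BUF_SIZE];
static unsigned wr = BUF_SIZE - 1;    /* write pointer, counts down and wraps */

/* One codec interrupt: store the received sample at the write pointer,
 * read back the sample 'delay' positions ahead (i.e. written 'delay'
 * interrupts ago, since the pointer counts down), then decrement. */
int16_t codec_irq(int16_t rx, unsigned delay)
{
    buf[wr] = rx;                             /* store received sample */
    unsigned rd = (wr + delay) % BUF_SIZE;    /* read pointer trails by 'delay' */
    wr = (wr == 0) ? BUF_SIZE - 1 : wr - 1;   /* decrement, wrap to top */
    return buf[rd];                           /* goes to transmit register */
}
```

At 44.1 kHz stereo, `delay` would be set from the DIP switches at power-on, two buffer slots per stereo sample period.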
 
gmarsh said:


Nope, the dsPIC alone (+crystal, decoupling caps, DIP switches to set delay, etc) will work just fine without any control logic.

It's much cheaper than the $27 spartan route. Don't forget that you'll need a XCF04S or a host processor to configure the FPGA and a bunch of other power supply stuff which brings the cost up even more, and finally you'll certainly be fabricating a PCB for the spartan while you can protoboard a DIP dsPIC.

As for the software, again it's easy:

three global variables needed:
- write pointer
- maximum buffer size
- delay (read from DIP switches on power-on)

operation (run this whenever the codec interface IRQ fires):
- Read codec recieve data register. Store data into write pointer location, and decrement write pointer. Jump to <max buffer size-1> if write pointer is zero.
- add delay to write pointer offset, modulo result with max buffer size, that's your read pointer. Read the memory location and write it to the codec transmit data register.

Designing hardware/software like this is my day job... An FPGA will work, but it's a costly and inefficient way to do it.

As I said, let's agree to disagree.
 