Wave Field Synthesis Project

Hello all, I'm here with a few questions for your help.

I'm intrigued by the possibility of a wave field synthesis project for my personal use.

I have a 12'x10' listening/measurement room with nearly anechoic response. The walls and ceiling are treated with barrier materials to keep noise out, and I have placed absorption up to 8" thick on top of the barriers. I also have a few diffraction panels that I have kept portable for adjustable placement around the room.


I understand the gist of the theory behind wave field synthesis, and I have the technical background in DSP to make the project a reality. The main obstacle I face is the cost of digital-to-analogue conversion. I believe I could use a Focusrite rack-mountable unit with a Dante connection to my computers, but I'm concerned it would require so many connections that I would saturate the Dante network and run out of money before I have enough unique audio channels.

One of my priorities for the project is recyclability, so that once I am done, I can resell or reuse the components if I want to. Focusrite does not seem to fit that criterion.

So far, I already have 500 ScanSpeak 1" tweeters and 200 Audax 3" mid/full-range drivers. With these, I can cover the entire perimeter of the room with single-spaced tweeters and mids.
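As a rough sanity check on those counts, a short sketch estimates the centre-to-centre spacing and the frequency above which spatial aliasing would set in, using the common f = c / (2·Δx) estimate; the ideal even single-row spacing and c = 343 m/s are assumptions, not design numbers:

```python
# Rough sanity check of driver spacing and the resulting spatial-aliasing
# frequency for the 12' x 10' perimeter. The driver counts are from the
# post; even spacing and c = 343 m/s are assumptions for illustration.

FT_TO_M = 0.3048
C = 343.0  # speed of sound, m/s

perimeter_m = 2 * (12 + 10) * FT_TO_M   # ~13.4 m of wall to line with drivers

for name, count in (("tweeters", 500), ("mids", 200)):
    spacing = perimeter_m / count        # centre-to-centre spacing, m
    f_alias = C / (2 * spacing)          # common WFS spatial-aliasing estimate
    print(f"{name:8s}: spacing ~{spacing * 100:.1f} cm, "
          f"aliasing above ~{f_alias / 1000:.1f} kHz")
```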

I also have around 1000 Philips TDA8588BJ 4x50 W chips that I plan to use for amplification. Since I have more amplifier channels than drivers, I am open to active DSP for each channel instead of passive crossovers.

I'm interested in hearing feedback from the community on a few things.

1. Should I use central DSP on my compute blades? This would pair with slave DACs like the Focusrite over the Dante protocol.

2. Should I duplicate the spatial info to a set of DSP capable DACs around the perimeter of the room? If so, how many discrete radial directions should I consider?

3. As I divide the speakers into groups for their respective directions, should I plan on constant spacing between the speakers with blending/shading between different radial directions? Or should I space the speakers logarithmically in discrete groups that do not blend/shade with each other?


Any thoughts are appreciated and I welcome any constructive criticisms.
 
Hahaha yes, very expensive.

I priced out the Focusrite option, and I think I would be able to complete the project for $70k.

Given that I already have the drivers and amps, and nobody is going to buy them off me, I figure they cost me basically nothing.

Looking at DACs online, I think I could get the price down to about $5 per channel, and that wouldn't be an issue financially. The main kicker is interfacing all the different DACs to a single source.
 
Cabling will probably be the most fun part of this! I'll be sure to take good pictures.

Yes, the price will be high, but if I keep it under $10k, I'll be happy. I paid $2,200 for the drivers (the seller had no idea what he had), and I got the amps for free after a local company stopped using them and had excess stock.
 
Wave Field - wish I knew more. I have a bunch of ESL cells and have been dreaming of a wall of sound for some years, though obviously starting with stereo recordings and ending up with some kind of panned wall of localization.

Now, with your resources (as with mine), you can blend the sound across the wall(s) by suitably diverting the L and R signals to cannily placed drivers. In other words, use acoustical blending rather than 1000 amps with DSP delays, crossovers, and pan-pots.

It would be monumentally simpler to wire your great army of drivers in series-parallel configurations, resulting in merely a few dozen 8 Ohm assemblies.
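To put rough numbers on "a few dozen 8 Ohm assemblies", here is a minimal sketch; the nominal per-driver impedance and the 4-series x 4-parallel grouping are assumptions for illustration, not specs of the actual ScanSpeak or Audax parts:

```python
# Rough check of the series-parallel suggestion: wire N drivers in series,
# then M of those strings in parallel. Per-driver impedance is assumed to
# be a nominal 8 ohms purely for illustration.

import math

Z_DRIVER = 8.0          # assumed nominal impedance per driver, ohms
SERIES = 4              # drivers per series string
PARALLEL = 4            # strings in parallel per assembly

z_assembly = (Z_DRIVER * SERIES) / PARALLEL    # 32 ohms / 4 = 8 ohms net
drivers_per_assembly = SERIES * PARALLEL

total_drivers = 500 + 200                      # tweeters + mids from the thread
assemblies = math.ceil(total_drivers / drivers_per_assembly)

print(f"each assembly: {drivers_per_assembly} drivers at {z_assembly:.0f} ohms net")
print(f"{total_drivers} drivers -> about {assemblies} assemblies")
```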

B.
 

There is a tremendous difference between a series-parallel or shaded array and true WFS, which is actually capable of producing sound from sources both inside and outside the working volume. Shaded arrays really just give nice room coverage and good power handling, but offer nothing unique from a sound perspective.

Numerous studies have shown that the human ear is sensitive to location in music at an animalistic level, and that any struggle to locate a sound source on the part of the listener breaks the suspension of disbelief that a great system can bring.

WFS creates a so-called 2.5-dimensional wavefront for each signal that reproduces what was in the room at the time of recording. If you listen to pop music, do whatever you want with your speakers - it really doesn't matter from an objective standpoint, since the music was never real in the first place.
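For what it's worth, a minimal sketch of the delay-and-gain idea behind a WFS-style renderer for one virtual source; this is a simplified delay-and-sum reading of the 2.5D driving function, with made-up driver positions and source location, not a full WFS implementation:

```python
# Minimal sketch: one delay and one gain per loudspeaker for a virtual
# point source behind a linear array. Simplified view only; a real 2.5D
# WFS driving function also includes a pre-filter and window weights.

import math

C = 343.0      # speed of sound, m/s
FS = 48000     # sample rate, Hz

# Hypothetical linear array: 16 drivers along one wall, 10 cm apart.
speakers = [(0.1 * n, 0.0) for n in range(16)]

# Virtual source 1 m behind the array, roughly centred on it.
source = (0.75, -1.0)

for i, (sx, sy) in enumerate(speakers):
    d = math.hypot(sx - source[0], sy - source[1])   # source-to-driver distance
    delay_samples = d / C * FS                       # farther drivers fire later
    gain = 1.0 / math.sqrt(d)                        # ~1/sqrt(r) amplitude roll-off
    print(f"driver {i:2d}: delay {delay_samples:6.1f} samples, gain {gain:.2f}")
```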

To reproduce an orchestra, a degree of harmonic entropy needs to exist around an envelope in the time domain where the sound is coherent. That is exactly what true WFS provides. I've owned McIntosh line arrays, and while they are good speakers, they are not realistic the way a Manger driver is.

Unfortunately, the Manger driver is terrible at almost everything but realism on simple sounds, and it has horrific distortion and power handling.

After doing a bit of research, it seems that the bending-wave mode of the driver produces realistic wavefronts at most high frequencies, but lacks low-frequency credibility due to a mismatch in the speed of propagation across the surface of the driver.

TL;DR

1000 amps with DSP for each channel will sound much better, so that is what I'll do.
 
Assuming, just to be nice and not because I believe it, that there's any content in your post which isn't prattle and blather, you'd still need 700 recording mics in the concert hall, corresponding to each of the locations where you are placing your reproducing speakers, in order to capture the sound field and reproduce it in a perfectly anechoic space.

B.
 

Thank you for such an erudite addition to this thread. Unfortunately, it has nothing to do with WFS.


WFS is about the wavefront produced, not the source content. Educate yourself before attempting to crap on a thread.
 

Getting back to my original questions, I have a few answers.

1. Central compute will be necessary for a correct WFS representation with control over source distance. I can use IRCAM Spat (IRCAM Forumnet | Spat).

This still leaves the question of digital delivery and conversion. I'm leaning towards AES67, since Dante is proprietary, and I could implement AES67 on a handful of NVIDIA TK1 dev boards I have. At less than $100 per board, I should be able to keep the cost close to $10 per channel.

2. EMPAC suggests that only 18 addressable directions are necessary for a project of this scale. If my primary goal were to inject synthetic sources inside the room, this number would double for each discrete Fraunhofer region I wanted to address. Since I only want to generate a single Fraunhofer region, 18 discrete addressable channels spanning 180° each should be enough (a rough far-field distance estimate is sketched just after these answers).

3. To minimize spatial aliasing, I'll need active control over the impulse response across the entire array for each of the 18 channels. Additionally, I'll need active control over phase angle for at least half of each channel's drivers.
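For a sense of scale on where the far-field (Fraunhofer) region actually begins for an array this size, here is a minimal sketch using the common 2D²/λ rule of thumb; the aperture (one 12 ft wall) and the example frequencies are assumptions for illustration only:

```python
# Rule-of-thumb Fraunhofer (far-field) distance d = 2*D^2 / lambda for an
# assumed aperture of one 12 ft wall. Frequencies are arbitrary examples.

C = 343.0                  # speed of sound, m/s
aperture_m = 12 * 0.3048   # assume the full 12 ft wall radiates, ~3.66 m

for f_hz in (200, 1000, 5000):
    wavelength = C / f_hz
    d_far = 2 * aperture_m ** 2 / wavelength
    print(f"{f_hz:5d} Hz: far field begins roughly {d_far:.0f} m out")
```

If that rule of thumb applies here at all, any listener in a 12'x10' room sits well inside the near field of the array over most of the audio band.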


So it looks like I'll need to brush up on AES67. My M1000e chassis have both fiber and copper networking cards, and I have over 10 Gb/s of bandwidth to work with, so no issue there.
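As a quick check on that bandwidth claim, here is a sketch assuming 700 channels of 24-bit/48 kHz audio; the channel count and the ~15% packet-overhead figure are assumptions for illustration, not AES67 specifics:

```python
# Back-of-the-envelope AES67 payload estimate. Channel count, bit depth and
# overhead factor are assumptions; real streams add RTP/UDP/IP/Ethernet
# framing on top of the raw audio payload.

CHANNELS = 700          # assumed: roughly one channel per driver position
FS = 48000              # sample rate, Hz
BITS = 24               # bits per sample (L24)

audio_bps = CHANNELS * FS * BITS
OVERHEAD = 1.15         # assumed ~15% framing overhead

print(f"raw audio: {audio_bps / 1e6:.0f} Mbit/s")
print(f"with ~15% overhead: {audio_bps * OVERHEAD / 1e9:.2f} Gbit/s "
      f"(comfortably inside a 10 Gbit/s fabric)")
```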



I'd rather not have to write a new driver for the fiber interplane to generate optical signals natively, so for now I'll be pursuing a source in China for an Ethernet-capable Android dev board. Perhaps an Android dev board with MHL support could process and stream the PCM outputs of each of the 18 channels.


Thoughts?
 
Found a source for ES9018 DACs that will run me $4 per channel. That is the plan unless someone knows of a serious problem with that chip.

The current plan for feeding the DACs is M1000e --> AES67 over Cat5 --> Altera FPGA (TBD) --> I2S --> ES9018 DAC --> Philips TDA8588 amps.

A few more questions about this:

1. How concerned should I be about globally clocking these DACs? Is that even possible? I have zero experience with word clocks or anything like that. Would it appreciably improve the sound? (A rough feel for how fast unsynchronised clocks drift apart is sketched at the end of this post.)


2. If I want to minimize distortion from these chip amps and preserve the amp's headroom, should I consider adding a dummy load in series with the speakers? I think the noise from this amp is pretty nasty at low levels, but performance is pretty good just before saturation.
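On the series dummy-load idea, a minimal voltage-divider sketch (assuming a nominal 8 Ω driver, which may not match the actual parts) shows what a series resistor costs in level and in wasted power:

```python
# With speaker impedance Z and a series resistor R, the speaker only sees
# Z / (Z + R) of the amp's output voltage, and a fraction R / (Z + R) of
# the delivered power heats the resistor. Values are assumptions only.

import math

Z_SPEAKER = 8.0   # assumed nominal driver impedance, ohms

for r_series in (2.0, 4.0, 8.0, 16.0):
    v_ratio = Z_SPEAKER / (Z_SPEAKER + r_series)
    level_db = 20 * math.log10(v_ratio)
    wasted = r_series / (Z_SPEAKER + r_series)   # power fraction lost in R
    print(f"R = {r_series:4.1f} ohm: speaker level {level_db:5.1f} dB, "
          f"{wasted * 100:4.0f}% of the power burned in the resistor")
```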


Any other comments or criticisms are welcome.
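Going back to question 1, here is a minimal sketch of why free-running DAC clocks eventually matter: two oscillators that disagree by some parts per million slowly slip against each other. The ppm values are generic assumptions, not specs for any particular board:

```python
# Relative slip between two free-running sample clocks at 48 kHz.
# The ppm offsets are illustrative, not measured crystal tolerances.

FS = 48000  # sample rate, Hz

for ppm in (1, 10, 50, 100):
    samples_per_sec = FS * ppm * 1e-6          # relative slip rate
    secs_per_sample = 1.0 / samples_per_sec    # time to drift by one sample
    print(f"{ppm:3d} ppm offset: drifts {samples_per_sec:6.2f} samples/s "
          f"(one full sample every {secs_per_sample:6.1f} s)")
```

At 48 kHz one sample is about 21 µs, roughly 7 mm of acoustic path, so even modest drift would smear the inter-driver timing a WFS renderer is trying to impose unless everything shares a common clock (word clock, or PTP in the AES67 case).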
 
If I might be so free: the Manger sucks... they sound terrible. I've heard better DML speakers that cost 10 euros...

I would qualify that statement by specifying what it sucks at doing:

Complex Music Reproduction

I own a pair, and I have never heard more realistic cymbals and transients.

Good luck resolving anything electronic or more than 2-3 instruments at a time.
 

Well, my statement was indeed a bit short-sighted! But I really just did not like them. It was at a show, and yes, maybe not an ideal situation. But weirdly enough, the B&O flagship was just damn sick sounding compared to the floorstanding Manger. It might as well be the woofer and the weird tuning of the port, I don't know, but I did not like it one bit. If I can listen to one again, I will, and see what it does.
 
I've got my hands on a few Max/MSP libraries to handle the DSP, and I've got 600 addressable audio outputs. Unfortunately, the last component in my D/A chain is a TDA1308.

To the best of my understanding, the current supply in this chip is limited to 60 mA, with performance dropping after about 50 mA. The output impedance is less than 1/16th of my load, so that shouldn't cause an issue.

I figure I can get about 20 mW per driver with very low distortion. The system efficiency is 143 dB @ 1 W/1 m/1 kHz before room gain. As far as I can tell, going from 1 W to 20 mW will reduce my SPL by about 17 dB, to around 126 dB continuous maximum.
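For anyone checking the arithmetic, a minimal sketch (with the load impedance assumed to be a nominal 8 Ω, which may not match the actual drivers) reproduces those numbers from the quoted current limit and the 143 dB @ 1 W reference:

```python
# Max clean power from a current-limited output stage into an assumed
# load, and the level drop relative to the 1 W reference sensitivity.
# The 8-ohm load is an assumption; the current limit and 143 dB figure
# come from the post.

import math

I_MAX = 0.050          # A, current where performance starts to drop
Z_LOAD = 8.0           # ohms, assumed nominal driver impedance

p_max = I_MAX ** 2 * Z_LOAD                 # ~20 mW per driver
drop_db = 10 * math.log10(p_max / 1.0)      # relative to 1 W reference
spl_max = 143 + drop_db                     # 143 dB @ 1 W/1 m quoted above

print(f"max clean power per driver: {p_max * 1000:.0f} mW")
print(f"level change vs 1 W: {drop_db:.1f} dB -> about {spl_max:.0f} dB max")
```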

This is quite a crapshoot, since SPL will not fall with distance the same way in a room surrounded by speakers. Another factor is the role of near-field compensation for higher-order Ambisonics in reducing the effective power output of the system... I just don't know yet how much this will impact me.

Anyway, 126 dB could certainly be enough for me, but on the off chance it is not, I would like to use my TDA8588BJ chips.

It would be highly inadvisable to use the full power of the chip, but it could serve a few roles for me, with its 3 configurable voltage rails and very high efficiency.

Any thoughts?
 
No input on the amp chips, but did you get where you needed to go with the DSP signal chain? I'm asking because, when looking into alternatives for active crossover and EQ a while ago, I found there is a lot of cheap used DSP power out there in the form of conference-room management boxes. Check out the Biamp Audia Flex.

I ended up using a sound card with a software crossover (Equalizer APO), so I've got a couple of them, plus some I/O boards, that I'll be putting up for sale soon. Let me know if you're interested.
 