Why I feel the STM32 Nucleo platform is good for DIYAudio - challenge

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Disclaimer: I have no relation with ST, so don't hesitate to challenge my statements: it may save precious time.

I feel that the STM32 Nucleo platform (http://www.st.com/content/st_com/en...cu-nucleo.html?querycriteria=productId=LN1847) is good for DIYAudio applications because it provides an interesting and maybe unique combination of:

  • Nucleo development boards, which are cheap ($20 to $30) and expose all the pins of the processor. The user can use all the features of the processor without having to develop custom boards or being limited by the choices of existing board designers,
  • Chips with multiple digital audio interfaces (I2S, S/PDIF, TDM, almost all the common serial audio interfaces). A single chip can have up to 8 of them (16 channels), plus generic interfaces like USB (including USB audio) and Ethernet,
  • DSP functions,
  • I2C control of other digital devices,
  • Hardware Abstraction layer to help portability of the code and libraries from one processor or board to another,

  • A fast-evolving processor family, widespread in industry,

  • Use of common ARM Cortex cores, which eases porting code to MCUs from other manufacturers (common development toolchain, common libraries),
  • Free development toolchain (not perfect, but free)

Digital-audio friendly, versatile, open, cheap, obsolescence resistant, no specific board to develop and manufacture: appealing, no?

What is not so good:

  • (very) small community,
  • Must be programmed in C,

  • Not as much power as a real DSP (maybe)

With suitable simple open-source libraries or code templates, it seems to me that it could be the Swiss Army knife of audio applications:

  • Fit the specific interface needs of one's application,

  • Glue together the needed audio components (ADC, DAC, FDA, mics…),

  • Mix/Filter/crossover all data as needed,

  • Stay with the development board, or implement a dedicated board...

SBCs are OK as long as you can cope with a single digital audio interface (except the BBB, with a specific driver).

USB-to-I2S interfaces are OK as long as you just want to use them as an interface and have no DSP processing to perform.

DSP boxes are often focused on one interfacing scheme: only analog, or only S/PDIF, or only I2S… Most of the products are not open. Some may be expensive.

XMOS products look great, but how much is the entry ticket?

Analog Devices' SigmaStudio looks great, but can we have USB or Ethernet input? freeDSP is great and currently offers 2 analog in => 4 analog out. What about other needs?

Please challenge:

  • Other development boards/platforms to consider (please give the entry ticket price)?
  • Show stoppers for high level audio?
  • Not interesting for the DIYAudio community because…

I'm presently working on a proof of concept: async USB => DSP => multiple S/PDIF for a multi-amp project. I will share the sources.

JMF
 
Dear Abraxalito,

I have gone through part of the thread. It is huge, with almost 700 messages.

In my quest to find an elegant solution to my need, I very often had the impression of lagging behind others who explored the same path several years ago. But I found conclusions only in rare cases. It looks as if all those discussions did not aggregate into shared knowledge, with people like me reinventing the wheel rather than developing effective tools. Dedicated wikis recording the experiments would be nice in some cases.

Would you accept to share your conclusions after this long journey? Did some people develop software usable by the community? Did some approaches prove to be inefficient?

All feedback would be really appreciated :)

JMF
 
Happy to talk about my own experience of using Cortex M CPUs in digital projects.

First of all, my needs don't align well with what I notice others doing in this space, thinking particularly of steph_tsf who has contributed hugely to the DSP XOs thread and also to your recent enquiries in relation to the STM32 processors. Steph likes to use the off-the-shelf codecs whereas I like to develop my own DACs. I've spent much more time on DAC (and amplifier) development in the past 6 years than I have on microcontrollers.

The question that intrigues me is how DSP can be used to enhance the listening experience, so one of the experiments I did early on was playing around with one or two different oversampling filters to use with my DAC. From that I discovered that the best oversampling is none. Once I had worked out that conclusion, the need for a uC diminished considerably; all my DACs since have been NOS.

Another working assumption that I've been going along with which appears contrary to the majority position here is that jitter isn't particularly important to listening satisfaction. So when I saw your title about 'can low jitter be achieved?' I figured it wouldn't be particularly relevant to me.

Lastly I have shied away from using USB due to issues with common-mode noise from computers and their power supplies. Seems USB is fairly popular though here.

Where I do plan to go with ARM M4 is in the development of a TFcard player. I have purchased a succession of card-based players from Taobao, starting off with the QA550 which has been an excellent workhorse over the years. TFcards are the medium of convenience for me, given the prices seem always to fall and their miniscule size means all my music will fit in a wallet.

About your remarks about shared knowledge, I take the view that open sourcing everything is an unnecessary overhead in time and effort. I prefer to operate on a 'pull open' basis rather than placing everything in the public domain. This is because I value my time, not because I have any kind of desire to keep my designs private. I welcome enquiries from individuals curious to learn how I implemented things, it seems to me curiosity is the necessary ingredient and I have often found it lacking on DIYA in that plenty will ask (because that's easy) but few will follow up (because that requires dedication).

Does any of this help?
 
Thanks Abraxalito for sharing your experience,

I have the same feeling that the people digging into those Cortex-M MCUs are those who don't find what they need off the shelf, and they all seem to have differently aligned needs to fulfill. This reinforces my understanding that those MCUs can play a role in fixing all those specific needs.

Your point about jitter is interesting. It was, in my understanding, one of the few variables that remain once you consider that you transfer "bit perfect" digital data. However, I have no hands-on blind-listening experience of jitter effects. Good to know that it might not be so important.

About USB and ground noise, this is my weak point. I'm much better at software than at hardware design (which means very low hardware design skills). My target is to avoid the big/main pitfalls. Help much appreciated there.

I'm interested in USB as:
- if asynchronous, it solves the different-time-domains issue,
- it is a common audio interface, immediately recognized by all OSes,
- it has proved successful in many designs (maybe by correctly addressing the common-mode noise),
- it has the potential to support a lot of features, like multi-channel (but hard to do because of specific software and drivers...)

I like your idea of the TFcard player. Struggling with async USB, I was asking myself if pushing the raw file to the stm32 board and then playing the file wouldn't be the easiest and cleanest option. But this means having onboard memory and finding an easy way to implement such a process with something like MPD... I also have the intuition that it may reduce flexibility in music-flow management. However, MCUs are good candidates for such things... I would love to have MPD on the MCU, but I think that it is not possible.

Thanks for sharing your view about open source. As you say there are many ways to share things.

JMF
 
I think it's great that you are trying to make this a viable platform for audio. A few thoughts...

I think that a lot of what has traditionally been "embedded programming" is in the midst of a sea change that we are all adjusting to.

To understand what's happening, the critical thing to focus on is "the average custom application developer." Two decades ago, this might have been someone working in Visual Basic or Delphi; a decade or so ago, it might have been someone developing a web application using PHP or RubyOnRails.

One way or another, their projects don't have big budgets or long timelines. They leverage lots of libraries, frameworks, and IDE capabilities to get something working quickly, then tweak it until it's good enough and move on. The absolute efficiency of the code doesn't matter nearly as much as speed of development and developer efficiency. The hardware they run on is similarly pragmatic.

What's significant about today is that the basic hardware that has supported a growing population of average application developers in both creating applications and deploying them is now available for ~$10-30. For that, you can have a device that runs a general purpose operating system (usually Linux) with plenty of memory, storage and processor cycles. Moreover, such devices are pretty small, and power consumption is quite low, and getting lower.

So, now, I'd argue, it's increasingly practical for people to create application-specific devices by integrating a suitable System On Module + some application-specific peripherals, in much the same way, and using much the same skills, as a Visual Basic developer at the turn of the century.

Now that I've said all that, I guess I have to make an actual point :) Or, I'll ask some questions:

Given that the niche for this effort is already somewhat narrow, what happens in a couple of years when there is a $20 board running Debian or Ubuntu that finally has gigabit ethernet and multiple I2S channels?

Sure, by then, the Nucleo will probably be cheaper too, but say it gets to $5-10, would that really be enough to tip the scales away from a $20 Ubuntu device? Particularly when the price difference is only likely to be spread over a handful of devices?

The other question is whether focusing on C + templates and documentation is going to be the best route to making this as widely useful as it could be.

Perhaps more significantly, if the focus is on C + templates, what kind of contributions are you most likely to get from other people? Will they be things that make it easier for average application developers to make use of the platform, or will they be things that make things easier for other C programmers?

What if you focused on making sure that there were interfaces and libraries that made it super easy for Python, Ruby and JavaScript developers to leverage existing hardware and software capabilities for creating great audio systems?

Yeah, I know, it doesn't scratch the same itch, but it's still worth considering. Right now I'm trying to decide how much effort I want to put into learning an embedded Linux environment (LEDE/OpenWRT), which has the advantage of running on $10 hardware, but the disadvantage of not already having LADSPA packages, and of requiring cross-compilation. I'm not far from deciding just to use the RPi3 as my platform of choice and putting up with the compromises, limits, and workarounds, even though I'd much rather use something with faster Ethernet and storage (and/or an unencumbered USB bus), and multiple I2S interfaces.

"Your point about jitter is interesting. It was in my understanding one of the few variables that remain once you consider that you transfer a "bit perfect" digital data."

To the extent that this is/was actually true, its apparent significance has likely been elevated in order to sell people things, by creating the perception of an unfulfilled need.

"However I have no blind test listening hands on experience about jitter effect. Good to know that it could not be so important."

Blind listening is great, but you can also approach it from a theoretical/mathematical perspective.

You can use existing rules of thumb synthesized from detailed study of human auditory perception to come up with a noise/distortion "budget" for the entire audio system. Using a sound theoretical model for the impact of jitter on noise and distortion, calculate the amounts that might result from various amounts of jitter.

Measure the jitter (and its contribution to noise and distortion) of the status quo. Is it even likely to be noticeable within the context of the whole system? If not, how much would the other components of the system have to improve before the USB-transport-caused jitter became noticeable? How likely/expensive are those improvements?

With that perspective, you can figure out how much focus to put into further jitter reductions.
 
"Your point about jitter is interesting. It was in my understanding one of the few variables that remain once you consider that you transfer a "bit perfect" digital data. However I have no blind test listening hands on experience about jitter effect. Good to know that it could not be so important."

Of note, assuming something GROSS isn't at play (transmission loss), any/all jitter of any remote concern is going to be on the DAC's clock.
 
Given that the niche for this effort is already somewhat narrow, what happens in a couple of years when there is a $20 board running Debian or Ubuntu that finally has gigabit ethernet and multiple I2S channels?

Just one question here - what leads you to think that multi-core CPU boards are going to support multiple I2S? It's taken quite a long time for M4-based SoCs to gain multi-channel I2S support (excluding the LPC4300's soft I2S). We already have extremely affordable gigabit Ethernet boards like FriendlyARM's NanoPi M3 ($35, octa-core A53), but none of FriendlyARM's offerings have multiple I2S that I've seen; most have none at all. Providing multiple I2S would be serving a niche within a niche, ISTM, when the HDMI port already supports multi-channel digital out.
 
Providing multiple I2S would be serving a niche within a niche ISTM when the HDMI port already supports multiple channel digital out.
Indeed.

A Raspberry Pi Zero would get stereo audio from USB (an async stereo USB soundcard), process it, and output eight channels through HDMI to an off-the-shelf 8-channel power amplifier having an HDMI input, from where you can hook up two 4-way speakers using Speakon-8 connectors.

Pros :
- all the hardware is off-the-shelf
- a lot of memory
- lots of MIPS
- HDMI enables hooking up a display for tuning the system
- could load a standard Linux distro featuring a few pre-programmed but editable audio DSP features
- Linux (additional features)
Cons :
- how to hook up a measurement microphone?
- nobody is going to program "bare metal"
- Linux (learning curve)

An STM32F7 Nucleo board would get stereo audio from USB (an async stereo USB soundcard), process it, and output 4 x I2S connecting to TDA7801 or STA326 power amplifiers, from where you can hook up two 4-way speakers using Speakon-8 connectors.

Pros :
- opportunity to rely on top notch I2S-input power amplifiers, Class-AB or Class-D
- possibility to hook an ADC on I2S for connecting a measurement microphone
- can be programmed "bare metal"
- no Linux (less complicated, no learning curve)
Cons :
- less memory than Raspberry Pi Zero
- less MIPS than Raspberry Pi Zero
- no HDMI, so can't directly connect on a display
- requires a specific "cape" (hardware development required), quite large because it hosts the eight power amplifiers
- no Linux (lack of additional features)

Regards,
Steph
 
@easp,

Thank you for sharing your understanding.

About programming languages:
I agree with you about mainstream development: high-level programming on top of many software layers. As a consequence, I sometimes think that computing power doubles every 18 months, yet we are still often doing the same things, with the same types of tools. I don't speak here of scientific calculations, but of web browsing, mail, office applications... They are just nicer.

I'm heading toward 50 now. During my schooling I learned Pascal, C and Ada. I understand, but have never programmed in, Java, PHP, RubyOnRails... So they are tools that don't come to my mind. Can they help for MCU development? If you think so, it would be interesting to share.

About the SBC platform:
I arrived at stm32 MCUs coming from an RPi equivalent (Orange Pi) => 2xUSB => FX-Audio D802. It runs Linux + MPD + ecasound + LADSPA + ALSA => LXMini.
This is really good, except that I didn't succeed in synchronizing the two USB outputs.

I agree with the messages above: SBCs don't yet allow several fully operational I2S interfaces, except the BBB (but only a 700 MHz single core, and with a dedicated kernel and driver). Either SBCs have only one I2S, or they don't expose the right pins, or the drivers are not there... HDMI output could be the royal road, but there are only a few (one?) HDMI-to-I2S converters.

Lastly, all the audio layers provide services, but sometimes it is difficult to understand what happens and what makes a thing not work.

Nucleo boards have the audio interfaces, expose all the pins, provide code examples, and let you manipulate the samples directly. Sort of "more work but less obstruction".

By the way, it is important to use the right tool in the right place. I love Linux/MPD, which really delivers. I appreciate the MPD clients with nice GUIs, developed for sure with high-level software.

MCUs' strength could be interfacing specialized silicon with each other, with a PC, with Ethernet, with mass storage, synchronizing things, crunching some numbers... It looks like a fix for tailor-made things.

JMF
 