It's a simple question, and probably easy to test, but I thought I would get a few other folks' thoughts on this. Mathematically, a square wave can be described as a weighted sum of sinusoidal tones at its fundamental frequency and its odd harmonics. In practice, a perfect square wave is limited by the rise time of the system, so the edges become more rounded as the higher-frequency harmonics are attenuated.
That said, if one were to listen to a square wave signal near the edge of the human hearing range (say 16 kHz, or whatever the highest frequency you are able to hear is), wouldn't the harmonic components be outside of your hearing range and thus not sound appreciably different from a sinusoidal signal of the same amplitude?
This is, of course, assuming that one is using an amplifier and speaker able to produce sounds past 20 kHz.
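A quick back-of-the-envelope sketch of the reasoning above: an ideal square wave contains only odd harmonics (f, 3f, 5f, ...) with amplitudes falling off as 1/n, so for a 16 kHz fundamental the lowest harmonic lands at 48 kHz. The 16 kHz fundamental and 20 kHz hearing limit below are just the illustrative numbers from the question:

```python
# Odd harmonics of an ideal square wave, assuming a 16 kHz fundamental
# and a nominal 20 kHz upper limit of hearing (illustrative values).
fundamental = 16_000    # Hz
hearing_limit = 20_000  # Hz

# Fourier series of a square wave: odd harmonics only, amplitude ~ 1/n.
harmonics = [(n, n * fundamental) for n in range(1, 10, 2)]
audible = [(n, f) for n, f in harmonics if f <= hearing_limit]

print(harmonics)  # [(1, 16000), (3, 48000), (5, 80000), (7, 112000), (9, 144000)]
print(audible)    # [(1, 16000)] -> only the fundamental falls inside hearing range
```

Everything above the fundamental sits well past 20 kHz, which is why one would expect the band-limited square wave to sound like a sine of the same fundamental.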
The reason that I ask is because of how the question relates to the notion of frequency cutoffs when designing amplifiers. Before joining this forum, when I set about designing amplifiers on my own, I assumed that the simplest approach would be to design everything to a cutoff of 20 kHz. When I joined this site, however, I was surprised to learn that many hi-fi designers design to a higher frequency cutoff than 20 kHz, sometimes 200 kHz or more.
What are other people's thoughts on this? What frequency cutoffs do you typically design to, and are there any benefits to designing to a higher cutoff than 20 kHz?