AK4499EQ - Best DAC ever

Chris, the screenshot is from the manual.
The off/on procedure is multi-level; the front panel switch is just standby (which is what it should be left in).

I do not know why it should be questioned.
 

Attachments

  • Screenshot_20201223_022040_com.google.android.apps.docs.jpg
It's kept on for convenience, to avoid a 30-minute wait; that is not the same thing as needing to be powered continuously for days to reach spec. Don't continue with the BS. I mentioned in my first reply that I saw it required 30 minutes. This whole thing was about claims on the order of days, not minutes.
 
It's kept on for convenience, to avoid a 30-minute wait; that is not the same thing as needing to be powered continuously for days to reach spec. Don't continue with the BS. I mentioned in my first reply that I saw it required 30 minutes. This whole thing was about claims on the order of days, not minutes.

Crystal phase noise and stability improve over time. The improvement varies
from sample to sample, and the same is true for the time frame. Jocko
(RIP) measured this with his test gear on NDK, Crystek and probably others.

I had quite a few discussions with him regarding various oscillator design
topics. He stated that some would improve over days, others would do so
much more rapidly. For the best SC-cut OCXOs it can take considerable time
for them to reach their lowest phase noise.

This is documented in various RF white papers and if you look for it you will
find it. We need to move on, or should I say you need to move on.


TCD
 
Just to add to my previous post, Jocko also stated (this was a few years ago) that with Crystek in particular there was significant sample-to-sample
variance in phase noise, particularly at low offset frequencies. I know he used to reject many NDK clocks if they did not 'settle' to a certain spec. I also know
that phase noise was only part of the measurement procedure; Allan variance most likely came into play, but he didn't reveal too much about this.
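For readers unfamiliar with the term, Allan variance characterises frequency stability over an averaging time from successive frequency readings. A minimal sketch, using invented synthetic data rather than anything from Jocko's actual procedure:

```python
# Standard (non-overlapping) Allan deviation sketch for fractional
# frequency readings y[k], each averaged over the same tau.
# The sample data below is synthetic, purely for illustration.
import math

def allan_deviation(y):
    """sigma_y(tau) from successive fractional-frequency averages y[k]."""
    diffs = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# Example: a clock 1e-9 high in fractional frequency, drifting 1e-10/sample
y = [1e-9 + 1e-10 * k for k in range(10)]
print(allan_deviation(y))   # constant drift gives 1e-10 / sqrt(2)
```

A steady drift shows up as a constant first difference, which is why manufacturers quote Allan deviation at several tau values rather than a single number.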

He always came back to one thing: the quality of the crystal was of major importance for really good phase noise close to the carrier. With cheaper oscillators
this varies much more than with, say, SC-cut OCXOs.

TCD
 
We need to move on

Yes, I suggest you abandon the ridiculous position that cheap AT cut crystal oscillators that are mounted on generic PCB substrate (*cough Crystek*) and not even fully sealed need several days to reach an optimum level of audible performance.

Please, let me know the mechanism by which a non-ovenized, standard AT cut XO will improve dramatically over time. Other than bits of hearsay from Jocko (RIP), I would love to know the order of magnitude of such a measured improvement, the test procedure, and ambient temperature over this time.
 
Okay, so what I find confusing with all of this talk of phase noise is that no one is really talking about the impact of the stuff after the clock.

The clocks inside the D90 go through a CPLD before being fed into the AK4499? Well, there goes the excellent 'close-in' phase noise floor you paid for with your cushy clock. Heck, even the input section of the AK4499 itself most likely swamps the noise floor of the cushy clock too.

I mean, just look at dedicated clock buffers: devices specifically designed, often with little regard for power consumption, for buffering and distributing clocks within a system. Even the best ones will add significantly to the phase noise of your cushy clock. Not that it isn't worth buying a decent clock to minimise periodic jitter, but where random jitter is concerned, the precious 'close-in', low-frequency noise floor of said clock is going to evaporate very quickly.
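The "swamping" above follows from uncorrelated random jitter adding in quadrature: the noisiest stage dominates. A quick sketch with illustrative numbers (these are assumed values, not measurements of the D90's clock path):

```python
# Uncorrelated RMS jitter contributions add root-sum-square, so a
# "cushy" low-jitter oscillator followed by a noisier buffer/CPLD is
# dominated by the buffer. All figures below are illustrative.
import math

def total_rms_jitter(*stages_fs):
    """Root-sum-square of the RMS jitter of each stage in the clock path."""
    return math.sqrt(sum(j ** 2 for j in stages_fs))

osc, buffer_ic, dac_input = 80.0, 200.0, 300.0      # fs RMS, assumed
print(total_rms_jitter(osc, buffer_ic, dac_input))  # ~369 fs total
print(total_rms_jitter(1.0, buffer_ic, dac_input))  # ~361 fs: a "perfect"
                                                    # oscillator barely helps
```

Swapping the 80 fs oscillator for a near-ideal one changes the total by only a couple of percent, which is the whole point being made in the post.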

So when someone says something like the clock needing three days to stabilise? In what possible way? In terms of phase noise? Even if it did, it wouldn't matter; the system would swamp it. In terms of frequency stability? Unless it's jumping all over the place, this doesn't matter for audio.
 
Exactly. Even if there is no logic outside, the first thing the clock goes through in these DAC ICs is likely a buffer and in some cases, a level translator. There is going to be additive jitter. Imagine what the phase noise is like in one of the "FPGA based" designs. There are plenty of articles and papers on what kind of PN/jitter you can expect from the clock tree of a standard Xilinx or Intel FPGA, and it's not all that great. Even the "best" Xilinx output primitives like ODDR will not help you.
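For anyone wanting to put numbers on this, the usual way to compare clock-tree contributions is to integrate the single-sideband phase noise L(f) over an offset band to get RMS jitter. A sketch of that calculation; the profile values below are invented for illustration, not measured FPGA or D90 data:

```python
# RMS jitter from an SSB phase-noise profile L(f) in dBc/Hz:
# j_rms = sqrt(2 * integral of 10^(L(f)/10) df) / (2*pi*f_carrier).
# Trapezoidal integration over the listed offsets. Values are made up.
import math

def rms_jitter_s(offsets_hz, l_dbc_hz, f_carrier_hz):
    """Integrate 2*10^(L(f)/10) over the offset band, return seconds RMS."""
    lin = [2 * 10 ** (l / 10) for l in l_dbc_hz]
    area = sum((lin[i] + lin[i + 1]) / 2 * (offsets_hz[i + 1] - offsets_hz[i])
               for i in range(len(offsets_hz) - 1))
    return math.sqrt(area) / (2 * math.pi * f_carrier_hz)

offsets = [100, 1e3, 10e3, 100e3, 1e6]          # Hz offsets from carrier
l_vals  = [-110, -130, -145, -150, -155]        # dBc/Hz, illustrative only
print(rms_jitter_s(offsets, l_vals, 24.576e6))  # seconds RMS
```

Running the same integral on an oscillator datasheet plot versus a buffer or FPGA output plot makes the additive-jitter argument concrete.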

I've got no problem with aiming for the best, even if it's beyond audibility, but many of these claims are just outlandish.
 
It is a well understood fact that high-precision clocks require settling time to reach their best phase noise performance. This is NOT debatable; it is an established fact proven by phase noise measurements.

We can debate all we want about the degree to which close-in phase noise may be audible, and such audibility will be system dependent, but there is no debate that clock performance does improve (lower close-in phase noise) with a certain amount of settling time.

This!

My point (and my sarcastic remark to Mark) was about audibility over three days, which implied continuous improvement in the audible range for three days. This is difficult to believe, since tests are difficult to perform unless you have two such DACs, one on for three days and one kept off until shortly before the test. Even then, how can one exclude that the changes are due to capacitors or other components? (After all, caps in power supplies have shown clearly measurable changes for several days after power-on.)

The only way to test the effect of phase noise in settling clocks would be to inject phase noise into the (three-or-more-days-warm) circuitry in a controlled fashion in the DAC (i.e., simulating the amounts measured after power-on, one day, or two) and to perform an ABX test with an external control. Anything else would not prove anything. Note that I am not an ASR a-single-measurement-tells-everything-about-a-DAC type of guy, but proper testing is proper testing. Mark has proven nothing; he has only a claim.
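The controlled-injection idea can be sketched numerically: sample a sine through a clock with added Gaussian jitter and see what SNR results. For a sine at frequency f and RMS jitter tj, theory predicts SNR ≈ -20·log10(2π·f·tj). All values below are illustrative, not a model of any real DAC:

```python
# Sketch: sample a unit sine with Gaussian timing jitter on the sample
# clock, then compare against the ideal samples to estimate the SNR the
# jitter alone would impose. Values chosen for illustration only.
import math, random

def snr_with_jitter_db(f_sig, fs, tj_rms_s, n=8192, seed=1):
    rng = random.Random(seed)
    noise_pow = 0.0
    for k in range(n):
        t = k / fs
        ideal = math.sin(2 * math.pi * f_sig * t)
        jittered = math.sin(2 * math.pi * f_sig * (t + rng.gauss(0, tj_rms_s)))
        noise_pow += (jittered - ideal) ** 2
    sig_pow = n / 2  # mean-square of a unit sine, summed over n samples
    return 10 * math.log10(sig_pow / noise_pow)

# 1 ns RMS jitter on a 10 kHz tone: theory says ~84 dB SNR
print(snr_with_jitter_db(10e3, 192e3, 1e-9))
```

Scaling this to picosecond-level differences between "cold" and "settled" clocks shows why the effect lands far below audibility thresholds, which is exactly why a controlled injection plus ABX is the only meaningful test.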

Roberto
 
Yes, I suggest you abandon the ridiculous position that cheap AT cut crystal oscillators that are mounted on generic PCB substrate (*cough Crystek*) and not even fully sealed need several days to reach an optimum level of audible performance.

I didn't say that, please read my posts more carefully and don't misquote
me.

Please, let me know the mechanism by which a non-ovenized, standard AT cut XO will improve dramatically over time.

I didn't say that either, note 'dramatically'. Read my posts more carefully.

Other than bits of hearsay from Jocko (RIP), I would love to know the order of magnitude of such a measured improvement, the test procedure, and ambient temperature over this time.

As I clearly stated, the improvement varied between samples as did the baseline phase noise, as you would expect from fairly cheap oscillators.

Chris all I ask is you read my posts more carefully before responding.

TCD
 
I listened to the D90 for a few days to see if the sound would grow on me. During that time I compared it to the DDDAC (NOS, no digital filters, analog out straight off the I/V resistors). I also have a very revealing two-gain-stage Aleph J.

NOS means significant attenuation approaching the Nyquist frequency and a good amount of imaging artefacts, which will then intermodulate with themselves and with the signal. One may like the resulting sound, but it is far from a reference.
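The attenuation and imaging above come from the zero-order-hold (sinc) response of a filterless NOS DAC, and they are easy to quantify. A quick sketch of the droop at the top of the audio band:

```python
# Zero-order-hold (NOS) frequency response: |sin(pi*f/fs)/(pi*f/fs)|.
# The same droop also sets how little the first image is attenuated.
import math

def zoh_gain_db(f, fs):
    x = math.pi * f / fs
    return 20 * math.log10(abs(math.sin(x) / x)) if x else 0.0

fs = 44100.0
print(zoh_gain_db(20000.0, fs))        # ~ -3.2 dB droop at 20 kHz
print(zoh_gain_db(fs - 20000.0, fs))   # first image of 20 kHz at 24.1 kHz,
                                       # only ~ -4.8 dB down
```

So a 20 kHz tone droops by about 3 dB while its first image at 24.1 kHz is barely any quieter, which is the imaging the post refers to.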

My impression of D90 was not positive. It sounded cold, processed, unpleasant.

IME this is the translation
  • Cold = little or no distortion
  • Processed = highs not attenuated
  • Unpleasant = the other device is more expensive, I built it myself, or it has little or no distortion

The positives were a lot of detail (but detail that is being kind of forced onto you...). It also had good separation of instruments, but again... it just seemed over-processed and pushed hard out of its analog out. The DDDAC actually had more detail, presented in a very natural, relaxed manner for you to enjoy. The D90 seemed like it was pushing everything towards the listener; that was too hard to cope with for longer than 10-15 min. On a quick A-B it wasn't that bad, but when I tried to do something else (write/read/talk to my family), the D90 was too tiring... I had to lower the volume and/or just turn it OFF.

I am just wondering if you did anything to the D90 to make it sound more natural? Maybe get rid of the op-amp analog stage and do something less harmful?

I just listen to it. It sounds lovely and extremely natural. In fact, it is the perfect DAC to simulate the sound of other devices by suitable processing upstream.
 
Exactly. Even if there is no logic outside, the first thing the clock goes through in these DAC ICs is likely a buffer and in some cases, a level translator. There is going to be additive jitter. Imagine what the phase noise is like in one of the "FPGA based" designs. There are plenty of articles and papers on what kind of PN/jitter you can expect from the clock tree of a standard Xilinx or Intel FPGA, and it's not all that great. Even the "best" Xilinx output primitives like ODDR will not help you.

In fact, people accept FPGAs without a hitch but frown upon CPLDs, ignoring the fact that the latter allow significantly higher latency predictability than the former....
 
Okay, so what I find confusing with all of this talk of phase noise is that no one is really talking about the impact of the stuff after the clock.

The clocks inside the D90 go through a CPLD before being fed into the AK4499? Well, there goes the excellent 'close-in' phase noise floor you paid for with your cushy clock. Heck, even the input section of the AK4499 itself most likely swamps the noise floor of the cushy clock too.

I mean, just look at dedicated clock buffers: devices specifically designed, often with little regard for power consumption, for buffering and distributing clocks within a system. Even the best ones will add significantly to the phase noise of your cushy clock. Not that it isn't worth buying a decent clock to minimise periodic jitter, but where random jitter is concerned, the precious 'close-in', low-frequency noise floor of said clock is going to evaporate very quickly.

Yep, it's a very complex equation and there are many factors that contribute.

Another reason why 'discrete' designs, aka Mola / Dave etc., have an advantage: they can control these factors much more carefully, with
reclocking at the last and critical point.

As much as Rob Watts is maligned by many here, he pointed out exactly what you have above, and they specifically don't use an OCXO but do use
reclocking right at the DAC array. This is good design practice.

TCD
 
I didn't say that, please read my posts more carefully and don't misquote
me.

Sorry to lump you in with Mark, then.

The rest of your post is just repeating stuff that is already known and not disputed. I would be curious if you, or anyone else, had any links to measurements showing this improvement in phase noise over a long time span. I'd like to get a better idea of the magnitude of improvement and mechanism by which this occurs. I've so far come up empty in searches, and I'm usually pretty good at locating papers. I'm not doubting that the phenomenon occurs, only its relevance.
 
I just listen to it. It sounds lovely and extremely natural. In fact, it is the perfect DAC to simulate the sound of other devices by suitable processing upstream.

I agree. This is exactly one of the reasons why the D90 sounded thin and unpleasant in my system. The worst area of performance was the low-frequency extension.
I also like the sound of NOS; it's a personal preference.
 
...do use
reclocking right at the DAC array. This is good design practice.

Yes, but with AKM DACs that means either choosing to limit supported sample rates or using a clock at twice the needed frequency, with its added phase noise. For ESS DACs the higher-frequency clock is a given.
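The trade-off comes from the two audio master-clock families and the MCLK-to-fs ratios a DAC accepts. A sketch of the arithmetic; the ratios a given AKM part actually supports should be checked against its datasheet:

```python
# The two standard audio master-clock families, and the MCLK/fs ratio
# each sample rate implies. Illustrative arithmetic only.
MCLK_44K1 = 22_579_200   # Hz, 44.1 kHz family
MCLK_48K  = 24_576_000   # Hz, 48 kHz family

for fs in (44100, 88200, 176400, 352800):
    print(fs, MCLK_44K1 // fs)   # 512x, 256x, 128x, 64x
for fs in (48000, 96000, 192000, 384000):
    print(fs, MCLK_48K // fs)    # same ratios in the 48 kHz family

# Supporting 705.6/768 kHz at a 64x ratio would need doubled clocks
# (45.1584 / 49.152 MHz), with their correspondingly higher phase noise.
print(45_158_400 // 705600)      # 64
```

This is why "reclocking right at the DAC" forces the designer to pick between the doubled-frequency clock and capping the top sample rate.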

Some people think support for the highest possible sample rates means a DAC is 'better', just like some people think lower measured THD means 'better'.
 
I agree. This is exactly one of the reasons why the D90 sounded thin and unpleasant in my system. The worst area of performance was the low-frequency extension.
I also like the sound of NOS; it's a personal preference.

Interesting, your comment about the low-frequency extension. For many reviewers, the quality of the bass range is one of the areas where the D90 is outstanding.

And it is OK not to like its sound. After all, it is music reproduction, not dialysis (quoting Pass). But it is a very correct sound, and there are always many options to tweak it, either at the source or after the output. OTOH, changes such as distortion are often not exactly reversible from a mathematical point of view, and one cannot suppress them after the fact.

But if you are happy with a different DAC, more power to you!!!