Confused by the title? Most people probably are, and that’s the point. We constantly hear that “digital is perfect” and “there cannot be differences between transports,” and so on. We hear it from engineers, computer scientists, and armchair experts. All three are wrong, but the engineers really ought to know better.
Let’s start with some basics. Most musical instruments, from the human voice to a guitar or piano, are analog. Our ears are analog. And the sound waves between the two must be analog. God did it. Don’t argue with God.
Digital is a storage method. It can only occur in the middle of this chain, with sound converted to digital and then back. The goal is 100% transparency – or, more accurately, transparency so good we cannot tell the difference. While that sounds like a cop-out, it’s not. Analog records are also intended to be 100% transparent, and they fail miserably. CD, DSD, or whatever need only fail less to be an improvement. My opinion is that, done right, digital DOES fail less and is potentially superb. It’s that word “potentially” that trips us up.
While there are many points along the chain where we can lose fidelity, I want to talk about one in particular: jitter. I want to talk about jitter for two reasons:
- It has a huge impact on sound quality in real life systems today.
- No one talked about it until recently, and very few understand what it is or why it’s a problem.
To understand jitter, first we need to understand CD playback. I will use the CD example simply because I have to pick one and it is the most common High-End format. CD music is digitized by measuring the music signal at very tiny increments. An analogy would be pixels on your screen, and in CD, they use a lot of horizontal “pixels” (samples) — 44,100 samples every second. The height, or “voltage,” of each sample is represented by a number we debate endlessly: the bit depth. CDs use 16 bits, which means 65,536 (2^16) shades of gray.
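To make the “pixels” analogy concrete, here is a minimal sketch of CD-style digitization in Python: a sine tone sampled 44,100 times per second, with each sample’s height rounded to one of 2^16 levels. The 1 kHz test tone and the helper names are my own illustration, not anyone’s actual converter code.

```python
import math

SAMPLE_RATE = 44_100      # CD samples per second (the horizontal "pixels")
BIT_DEPTH = 16            # CD bit depth
LEVELS = 2 ** BIT_DEPTH   # 65,536 possible "heights" per sample

def quantize(x: float, bits: int = BIT_DEPTH) -> int:
    """Map an analog value in [-1.0, 1.0] to the nearest integer level."""
    max_level = 2 ** (bits - 1) - 1          # 32,767 for 16-bit audio
    return round(max(-1.0, min(1.0, x)) * max_level)

# Digitize about one millisecond of a 1 kHz sine tone, CD-style.
samples = [quantize(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE // 1000)]

print(LEVELS)        # 65536
print(len(samples))  # 44 samples in roughly one millisecond
```

That is the whole “height” half of the story: each sample is just an integer between -32,768 and 32,767.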
Illustration of the height and spacing of music samples; courtesy: Wikimedia.org.
But there is another characteristic that is equally important to sound quality. In fact, mathematically it is part of the same calculation, and yet nobody talks about it. That characteristic is the time between samples. Think about height and time like a staircase; each step has a height and a tread depth — the two together determine the steepness. Similarly, the analog output of “pulse code modulation” (which CD is) is determined by the height (limited to 2^16, or 65,536, levels) and the time between samples. That time is assumed to be precisely 1/44,100 of a second. But we live in an imperfect world, and that fraction of a second varies some. The variation, which is random, is called jitter.
Because it is random, jitter is not harmonically related to the music, and is therefore, in musical terms, dissonant (or lousy sounding). So while bits are in fact bits, there is much more on the interface between a transport and a DAC than bits. There is also jitter and noise — and noise causes jitter.
Giant caveat here: jitter only matters at one place – at the DAC itself (the chip, or the discrete resistor array). Jitter at the transport, on your Ethernet, etc. does not matter on its own – so long as it is completely and cleanly clocked right at the DAC chip. That said, there are many ways to interface to the DAC, and some accomplish this cleanup better than others. So it gets murky.
Any engineer who tells you that the digital input signal cannot impact sound quality has failed to consider fully one-half of the data necessary to re-create the waveform. They have focused only on the bit depth (and its 96 dB theoretical signal-to-noise ratio!) and ignored the jitter contribution. If all you are doing is reading a complete file, which has no time component, then bits are in fact bits. But an SPDIF or similar signal DOES have a critical time component, and that is why it is not in fact purely digital.
For those of you who don’t much care, and just want your music to sound good, it gets both better and worse 🙂
There are two ways to send a digital signal between a source and a DAC. The traditional method is an interface called SPDIF. It’s the little yellow RCA jack on the back of your CD transport. The problems with SPDIF are that a) it is synchronous, b) the source is the master clock, and c) the timing (clock) is derived from the signal itself. So a CD player, with a likely cheap clock, carried over an interface that is not clock-friendly, determines your jitter. When you do something logical like buy a fancy DAC to make your cheap CD player sound better, you get the jitter of the cheap CD player — and, as we noted above, that’s half the story.
USB is the other (prominent) way to provide an input to a DAC. USB has its own issues but is typically better than SPDIF. The reason is that USB is NOT synchronous. It just throws bits at the DAC, into a buffer, and lets the DAC re-time the whole thing, reading the bits out in lock step with its own clock. And herein lies the rub: you are now only as good as that clock. If you re-time the signal before the USB input, all that re-clocking is lost – the bits are just tossed into the buffer to be stored (for a few milliseconds, but stored nonetheless). So re-clockers accomplish nothing.
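The buffer-and-re-time idea can be sketched in a few lines. This is a toy model, not any real DAC’s firmware: samples arrive in irregular bursts (standing in for jittery upstream delivery), sit in a FIFO, and are read out one per tick of the DAC’s own steady clock — which is exactly why upstream re-clocking is wasted effort here.

```python
from collections import deque

class AsyncDacBuffer:
    """Toy model of an asynchronous USB DAC input: samples arrive with
    irregular timing, but are read out on the DAC's own steady clock,
    so upstream arrival jitter no longer matters."""

    def __init__(self):
        self.fifo = deque()

    def receive(self, samples):
        # Arrival timing is irrelevant: samples are simply stored.
        self.fifo.extend(samples)

    def clock_out(self):
        # Called once per tick of the DAC's local clock.
        # Returns 0 (silence) on buffer underrun.
        return self.fifo.popleft() if self.fifo else 0

dac = AsyncDacBuffer()
dac.receive([10, 20, 30])               # bursty, "jittery" delivery
out = [dac.clock_out() for _ in range(4)]
print(out)                              # [10, 20, 30, 0]
```

The only timing that survives this buffer is the DAC’s own clock — which is the article’s point: with asynchronous USB, that local clock is the whole ballgame.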
There are more problems – primarily related to electrical noise – but I think they are less severe than jitter, and they are certainly another topic. These can be overcome by good practices: isolating the USB output, or the input, and powering that “clean” side from a clean power supply. Like I do.
I hope this has shown you that sound differences from transports and digital signals are neither snake oil nor mysterious, only annoying. I will make only one recommendation: make sure that SPDIF connection is a true 75-ohm cable. It need not be a fancy audiophile cable. It can be Amazon Basics. It can be cheap Schiit (I think they call them that). But it must be 75Ω.
Now if we could only fix crummy digital mastering, but that’s out of our control.
All the best,
CEO Sonogy Research LLC