Integrated Amplifiers vs. Separates: Trade-offs and benefits.

I recently watched a thread on an audiophile discussion site descend into silliness when a member dared to ask “why would anyone want an integrated amp?” and offered some reasons pro and con, all factual. We can always debate assertions, but these represented his opinion of the trade-offs. Actually, they were not bad.

From what I could see, few people had any idea of the trade-offs, and simply recounted good or bad experiences, as if “my Honda was a great car” justified the superiority of front-wheel drive. A more justified conclusion would be that Honda makes a fine product, which happens to be FWD, and that FWD certainly can be good for everyday sedans. Nothing more.

“Why would anyone want separates in the first place?”

There are pluses and minuses.  On the minus side, separates are far less convenient – bigger, more wires, etc. And, as we will get to, they cost more for the same quality. At the same time, separates provide some advantages as we reach for better and better sound, and they provide more modularity, especially if the owner might change other components – such as speakers; either as part of the hobby, or as finances and space permit. Separates allow one to tailor the amp to the speakers: powerful amps for inefficient speakers, for example, solid state for low-impedance speakers, or myriad other reasons. Without going down a rat-hole, I’ll note that the amp-speaker interface is a difficult one, and many amps sound good with some speakers but not with others. I’ve worked hard to design products that are relatively speaker-agnostic, but it’s not totally possible. For example, if I could be absolutely certain you would never need more than 25 watts, I could deliver a sonically wonderful, pure class-A amp with all the other advantages that I believe make Sonogy unique. But that 25-watt limit would rule out many speakers and speaker-room-music combinations.

Separates can be better, and all things equal, often are; but at a much higher price. Some of the clear advantages of separates are:

  • isolation of low level and high level circuitry and power supplies
  • ability to place circuitry and wiring more flexibly to reduce noise
  • space to build multiple, specialized power supplies
  • room to optimize the location of heat sinks, etc.
  • modularity

Now, all of these can be overcome in a cost-no-object, impossible-to-lift-and-live-with integrated amp. But no one wants that product (do you?). So, compromises will exist.

The two big benefits of an integrated amp are cost and simplicity. For any given level of quality, an integrated amp will be simpler and cost less to make and ship. The high end community likes to sweep this truth under the table, but everything is a compromise between what we want to do and what we can afford to do – in design, mechanicals, parts quality and quantity, etc. Brochures are full of words like “ultimate” and “without compromise,” but this is flatly untrue. The very existence of $35,000 monoblocks proves the point – everything (else?) is a trade-off, and every designer wishes he or she could spend a little more money on componentry and build.

“An integrated amp reduces spend on several very costly parts that don’t contribute directly (although they may indirectly) to sound quality”

The biggest savings and simplification come from building and wiring one chassis, not two.  The big-ticket items in almost anything are the chassis, heat sinks, transformer, and trim. Electronics are sometimes less expensive than one imagines. The cardboard shipping box, in fact, can cost a huge amount.  So these things add up when you need two, not one (or three, if you have a separate phono stage too…).  So I can make a $3000 integrated amp that sounds as good as a $4-5-6000 (or more) preamp and amp pair; that’s great value.   However, an integrated amp has one drawback: it must fit your power and speaker load needs.

90% of the time it will. As you can guess, my everyday amp is an old prototype of a commercial Sonogy product. It’s not terribly powerful (about 70 wpc into 8 ohms), but it is unconditionally stable and can drive difficult loads (each channel has 8 x 15-amp transistors and can simply dump current momentarily). I have never had a pair of speakers that it could not drive loudly in a large room. This says, to me, that a really good 40W amp would do well for the vast majority of audiophiles. It might fail in pro applications or for a college dorm dance night, but that’s not today’s discussion, really. My point is that we over-value huge power, forgetting that doubling perceived volume requires TEN TIMES the power (perceived volume — decibels — is logarithmic) – so unless you plan to jump from 40W to 400W, why bother?  On the other hand, a truly well-designed amp that is unconditionally stable and drives high current can punch well above its weight.
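A quick sketch of that arithmetic, using the common rule of thumb that a perceived doubling of loudness corresponds to roughly +10 dB (the specific wattages are just the examples from the text):

```python
import math

def db_gain(power_ratio):
    """Power ratio expressed in decibels: 10 * log10(P2 / P1)."""
    return 10 * math.log10(power_ratio)

# Going from 40 W to 400 W is a 10x power ratio...
print(db_gain(400 / 40))   # 10.0 dB, roughly "twice as loud" perceptually

# ...while merely doubling the power buys surprisingly little:
print(db_gain(80 / 40))    # ~3.0 dB, audible but a modest step up
```

In other words, going from 40W to 80W, or even 150W, buys far less apparent loudness than the spec sheet suggests.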

Too often, integrated amps *as a category* are judged by the typical examples produced, most of which are either budget designs, “high feature” AV designs, or simply uninspired.   Even the recent crop of popular integrated units from big names are mostly built around re-badged Asian kit platforms.  Yep, many are.  But if we set out with high ideals, there is no reason we cannot solve some of the problems one by one, make a truly GREAT product, and still have savings in simplicity, packaging and cost. Chief among these opportunities is the power supply compromise. I’ve done a couple of integrated designs, one as a contract, and I simply did not make that compromise – easy as that. Spend the money, build multiple idealized supplies. QED. In fact, I have a project underway that may (or may not) become a future integrated amp product, and it will only be compromised by a modest power rating (and, of course, by the fact that I can’t put that pesky transformer in its own small box…. or can I?).

So the integrated amp recipe is valid – reduce cost, hopefully with a less-than-proportional reduction in sound quality.

What will be lost in a truly high-end integrated design?

  1. flexibility to have different power output choices
    –> so it only works for 9 out of 10 people 🙂
  2. chassis isolation of the low-level from high-level circuitry (a big deal IMNSHO)
    –> This can be solved, but it takes lots of effort

At the end of the day, most people would be well served by a truly great 30-50W integrated amp. More money could then be spent on DACs/speakers/vinyl reproducers, where the differences are larger and the laws of physics sometimes conspire to make bigger better. And yes, an amp manufacturer just told you to put more money into speakers. I’m an idiot.

G

Why an integrated amp? And what have we been up to for the past 24 months?

Integrated amps “don’t get no respect” — or at least they don’t get the respect they deserve.  There are many reasons for this, beginning with the fact that HiFi is, for many, a hobby unto itself as well as a way to enjoy beautiful music. The journey is part of the enjoyment, and separate components allow for experimentation, “tuning” and custom rigs.

But individual components have quite a few drawbacks as well.  Separate preamps and amps, first of all, are far more costly for exactly the same circuitry and performance.  It’s little known that the chassis is usually the single costliest part of any component.  Add lots of extra jacks, power cords, etc., and the extra cost associated with separate amps and preamps continues to rise.  Now, let’s move on to those various wires.  Assuming that wires can only make things worse, and that the best wires are simply, truly transparent, an integrated amp brings two great benefits:  1) there are no interconnects, and 2) you save, potentially, $100s or even $1000s by not buying them. I will stay out of the subjective minefield that is costly interconnects.

Finally, for use in real living environments, as opposed to dedicated music rooms/HiFi museums, an integrated amp is smaller, simpler, and likely much more acceptable to a non-audiophile spouse — and to this end we will concentrate on a package that is sized and styled to complement – or at least fit in well with – a traditional design aesthetic.  For the record, in my living room I’m on the side of said theoretical wives.  A smaller, simpler component probably fits in much better, so long as it sounds absolutely spectacular.  And it can.

There are many more advantages, along with the one big disadvantage:  you can’t mix and match exactly what you need, like monoblocks, or a higher-powered amp to drive inefficient speakers in a large room.  On the downside, you must make do with the single size, or maybe two power levels, that a manufacturer like Sonogy produces.  Yet this is less of an issue than imagined. Consider that power rises exponentially with perceived volume and you will realize that a doubling of volume requires not twice the power, but ten times the power; say 40W to 400W, all other things held equal. Yep, it’s true; so that 40W amp, if well designed, might do more than you imagine.  Sadly, few are really all that well designed.

Sonogy is out to change all that, and initial feedback from reviewers, bloggers, dealers, and audiophiles is very, very good. And we are subjecting modestly priced equipment to demanding environments, like $35,000 speaker systems: sink or swim, little product.

Ugly yet beautiful!  Interior of the prototype preamp, used to prove in most of the integrated amp technologies in a simple form factor.

Another beauty of doing an integrated amp is that it has made us reconsider, from first principles in many cases, nearly every component in the HiFi chain… or most of them anyway, since we are still in the early stages of tackling the DAC. DACs, however, are not YET always part of an integrated amp, although a built-in DAC is becoming a great feature (again, if done well).  Along our journey, Sonogy has therefore either designed from scratch or optimized all of the following:

  1. Preamp stage
  2. Power amp stage
  3. RIAA / Phono amplifier (MC and MM, switchable)
  4. Headphone Amplifier
  5. Logic and control (yes Virginia, there will be remote control….)
  6. Optimization of all the associated power supplies, which, I would argue, matter more than the circuitry itself.  Just to make a point, in this integrated amp there will be FOUR power supplies.

We plan to cover each of these in its own blog entry.   The interesting implication is that Sonogy has completed new, next-generation designs not only for an integrated amp, but for an entire series of components that can be built, in the words of Mercedes-Benz, “to a standard, not to a price” – and yet, for the performance we will deliver, we expect that price to be extraordinarily attractive.

Watch this blog for more, as we have the opportunity and time to complete an overview of each development effort, possibly with pictures of pre-prototypes and design notes that the tech geeks among us might appreciate 🙂

Grant

 

 

Sonogy – Next Generation R&D

Ahh, another trade-off.

I can spend time actually DOING R&D or communicating it here. But I must remember that sharing that information is part of what this hobby, and a successful business, is about. So, I will endeavor to post interesting information on the research and development we perform, as it happens.  The good news is “I am way behind on my blog” – meaning that much has progressed on the R&D front over the past two-plus years.  The bad news is, well, I am well behind sharing it here. 🙂

Follow our progress to learn what Sonogy is doing on everything from:

  • basic product R&D
  • testing new and unique ways of achieving superb sound at a reasonable price
  • features and convenience not normally associated with the high end

These are all things I want, and trust they appeal to many of you.

I’ll try to keep blogs to a readable length, and make this post “Sticky” as an introduction.

Enjoy!

 

Digital Audio Compression – a little more insight

For those who don’t work in the field, digital audio coding, and the associated practice of compression, can be mysterious and confusing.  For now I want to avoid the rat-hole of high definition and simply assume that most music – whether you think it’s great or awful – is recorded in CD quality.  By the way, CD quality can be pretty darn good, although it often is not.  But that’s the fault of recording, mastering, etc.

The basics of CD quality are that it is:

— 2 channels

— 44,100 samples per second

— 16 bits per sample (meaning 2 to the 16th power shades of musical gray, or 65,536)

Multiply all this out and you have 1.411 Mbps plus overhead in “RAW” format, no compression.
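Here is that arithmetic spelled out, using only the figures above (a trivial sketch):

```python
channels = 2
sample_rate = 44_100       # samples per second
bit_depth = 16             # bits per sample

levels = 2 ** bit_depth                       # 65,536 "shades of musical gray"
bitrate = channels * sample_rate * bit_depth  # bits per second, before overhead

print(levels)   # 65536
print(bitrate)  # 1411200 -> about 1.41 Mbps
```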

As audiophiles, many of us have a low opinion of compression. We have heard 128 kbps MP3s on our iPods and pronounced them unacceptable.  That is true, but it’s also misleading.  Why?

  1. It’s more than 10:1 compression!  That’s a lot.
  2. Most of us hear these on poor-quality internal DACs and amplifiers, via the analog jack.

But let’s get back to compression.  First, there are two kinds of compression: lossless and lossy. Lossless does not change the digital data one “bit”.  A good example is a ZIP file, which makes your Excel spreadsheet (or whatever) smaller but preserves all the data.  This is done by mathematics that eliminates redundancy (like strings of zeros) or otherwise re-codes the data – without removing any of it. Lossy formats, on the other hand, DO remove information, and therefore musical accuracy. Some algorithms are better than others; MP4 (AAC) is about twice as efficient as MP3, for example.
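As a toy illustration of the lossless idea: the sketch below is simple run-length coding of repeated values, not how ZIP or FLAC actually work, but the principle of squeezing out redundancy without discarding anything is the same. The sample values are made up.

```python
def rle_encode(samples):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1] = (s, encoded[-1][1] + 1)
        else:
            encoded.append((s, 1))
    return encoded

def rle_decode(encoded):
    """Reverse the process exactly; no information is lost."""
    return [value for value, count in encoded for _ in range(count)]

silence_then_note = [0, 0, 0, 0, 0, 0, 712, 713, 713, 0, 0]
packed = rle_encode(silence_then_note)
assert rle_decode(packed) == silence_then_note  # bit-for-bit identical
print(packed)  # [(0, 6), (712, 1), (713, 2), (0, 2)]
```

Real lossless codecs use far cleverer prediction and entropy coding, but the decode always reproduces the original samples exactly.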

FLAC (Free Lossless Audio Codec) and ALAC (Apple Lossless Audio Codec) are the dominant lossless systems. Each can compress CD audio “about” 2:1, or to roughly 700 kbps.  The result depends on how much redundancy exists in the music and may be larger or smaller – after all, it is LOSSLESS, not driven to some arbitrary target bitrate.  When the process is reversed you have CD audio, no more, no less.  It should be sonically transparent. Although some claim to be able to hear it, this is unlikely. Most probably they are hearing something else, or imagining a difference. I cannot hear the difference on a VERY revealing system.

AAC (m4a or MP4) and MP3 dominate lossy compression.  Each can operate at many bitrates, from 96 kbps (total, both channels) to 384 kbps, or in special circumstances even more.  MP3, by far the worst, is most often used because it is the “least common denominator” — supported by everything.  We lose.  The important thing to realize is that there is a HUGE difference between 128 kbps MP3 and 384 kbps MP3 in terms of quality.  At 384 it’s only about 2:1 compression beyond what can be achieved with ALAC or FLAC.  And I have heard great recordings in m4a at 384 kbps sound superb – try “Ripple” on for size if you doubt me, but do it on a great digital system (I played it on iTunes on a MacBook Pro, through BitPerfect (a $10 app), over galvanically isolated USB, re-clocked to nanosecond jitter, into a franken-DAC that began life as an MSB Full Nelson, with 96 kHz up-sampling).

I am not arguing that compression is desirable in the high end – only that it needs to be understood in a broader context.  In fact, I plan another blog in which I’ll share some findings from when I was working with the JPEG and MPEG standards groups (while employed by Bell Communications Research Inc., aka “Bellcore”) and related projects in the late 1980s and early 1990s – with some really surprising results.

In short, I find the poor recording and mastering practices evident especially in many rock/pop recordings, and more than a few classical recordings, to be far more detrimental and nasty sounding than relatively mild AAC compression. Ditto the effects of jitter on the digital signal (see my existing blog on the evils of jitter).

Digital is complex. It is frustrating.  And yet it is misunderstood, and very early in its development.  I believe it has huge potential if we clear away the confusion and focus on finding solutions to the real problems.  So rip your music lossless. If you have to compress, dig into the expert settings (they are there in everything from iTunes on up) and rip at the highest bitrate setting. Hard drives are cheap – enjoy the music.

Grant

CEO Sonogy Research, LLC

Jitter, or “why digital audio interfaces are analog”

Confused by the title?  Most people probably are, and that’s the point.  We constantly hear that “digital is perfect” and “there cannot be differences between transports,” etc. We hear this from engineers, computer scientists and armchair experts. All three are wrong, but the engineers really ought to know better.

Let’s start with some basics.  Most musical instruments, from the human voice to a guitar or piano, are analog. Our ears are analog.  And the sound waves between  the two must be analog.  God did it. Don’t argue with God.

Digital is a storage method.  It can only occur in the middle of this chain, with sound converted to digital and then back.  The goal is 100% transparency – or, more accurately, transparency so good we cannot tell the difference.  While that sounds like a cop-out, it’s not. Analog records are also intended to be 100% transparent, and they fail miserably. CD, DSD, or whatever need only fail less to be an improvement.  My opinion is that, done right, digital DOES fail less and is potentially superb. It’s that word “potentially” that trips us up.

While there are many points along the chain where we can lose fidelity, I want to talk about one in particular: jitter.  I want to talk about jitter for two reasons:

  1. It has a huge impact on sound quality in real life systems today.
  2. No one talked about it until recently, and very few understand what it is or why it’s a problem.

To understand jitter, first we need to understand CD playback. I will use the CD example simply because I have to pick one and it is the most common high-end format. CD music is digitized by measuring the music signal in very tiny increments. An analogy would be pixels on your screen, and CD uses a lot of horizontal “pixels” (samples) — 44,100 samples every second. The height, or “voltage,” of each sample is represented by a number we debate endlessly: the bit depth. CDs use 16 bits, which means 65,536 shades of gray.

     
Illustration of the height and spacing of music samples;  courtesy: Wikimedia.org.

But there is another characteristic that is equally important to sound quality. In fact, mathematically it is part of the same calculation, and yet nobody talks about it. That characteristic is the time between samples. Think about height and time like a staircase; each step has a height and a tread depth — the two together determine the steepness.   Similarly, the analog output of “pulse code modulation” (which CD is) is determined by the height (limited to 2^16, or 65,536, levels) and the time between samples. That time is assumed to be precisely 1/44,100 of a second. But we live in an imperfect world, and that fraction of a second varies a bit.  The variation, which is random, is called jitter.
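To get a feel for the magnitude, here is a back-of-envelope sketch (my own simplification, not a rigorous jitter-noise model): it treats the jitter as a fixed timing error on a full-scale 20 kHz sine wave and multiplies it by the sine’s maximum slew rate.

```python
import math

def worst_case_error_db(freq_hz, timing_error_s):
    """Peak amplitude error from a timing error on a full-scale sine,
    approximated as (max slew rate) * (timing error) = 2*pi*f*dt,
    expressed relative to full scale in dB."""
    error = 2 * math.pi * freq_hz * timing_error_s
    return 20 * math.log10(error)

print(worst_case_error_db(20_000, 1e-9))     # ~ -78 dB with 1 ns of jitter
print(worst_case_error_db(20_000, 100e-12))  # ~ -98 dB with 100 ps
```

With the 16-bit theoretical noise floor sitting near -96 dB, it is easy to see why nanosecond-class timing errors are not “too small to matter.”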

Because it is random, jitter is not harmonically related to the music, and it is therefore, in musical terms, dissonant (or lousy sounding).  So while bits are in fact bits, there is much more on the interface between the transport and a DAC than bits.  There is also jitter and noise – and noise causes jitter.

Giant caveat here:  jitter only matters at one place – at the DAC itself (the chip, or the discrete resistor array).  Jitter at the transport, on your Ethernet, etc., does not matter on its own – so long as the signal is completely and cleanly re-clocked right at the DAC chip.  That said, there are many ways to interface to the DAC, and some accomplish this cleanup better than others.  So it gets murky.

Any engineer who tells you that the digital input signal cannot impact sound quality has failed to take into consideration fully one-half of the data necessary to re-create the waveform. They have focused only on the bit depth (and its 96 dB theoretical signal-to-noise ratio!) and ignored the jitter contribution.  If all you are doing is reading a complete file, which has no time component, then bits are in fact bits.  But an SPDIF or similar signal DOES have a critical time component, and that is why it is not, in fact, purely digital.

For those of you who don’t much care, and just want your music to sound good, it gets both better and worse 🙂

There are two ways to send a digital signal between a source and a DAC.  The traditional method is an interface called SPDIF.  It’s the little yellow RCA jack on the back of your CD transport. The problems with SPDIF are that a) it is synchronous, b) the source is the master clock, and c) the timing (clock) is derived – and therefore a CD player, with likely a cheap clock, carried on an interface that is not clock friendly, determines jitter.  So when you do something logical like buy a fancy DAC to make your cheap CD player sound better, you get the jitter of the cheap CD player and, as we noted above, that’s half the story.

USB is the other (prominent) way to provide an input to a DAC.  USB has its own issues but is typically better than SPDIF.  The reason is that USB is NOT synchronous.  It just throws bits at the DAC, into a buffer, and lets the DAC re-time the whole thing, reading the bits out in lock step with its own clock.  And herein lies the rub: you are now only as good as that clock.  If you re-time the signal before the USB input, all that re-clocking is lost – the bits are just tossed into the buffer to be stored (for a few milliseconds, but stored nonetheless).  So re-clockers accomplish nothing.

There are more problems – primarily related to electrical noise – but I think they are less severe than jitter, and they are certainly another topic.  These can be overcome by good practices: isolating the USB output, or the input, and powering that “clean” side from a clean power supply. Like I do.

I hope this has shown you that sound differences from transports and digital signals are neither snake oil nor mysterious, only annoying.  I will make only one recommendation: make sure that SPDIF connection is a true 75-ohm cable. It need not be a fancy audiophile cable. It can be Amazon Basics. It can be cheap Schiit (I think they call them that).  But it must be 75Ω.

Now if we could only fix crummy digital mastering, but that’s out of our control.

All the best,

 

Grant

CEO Sonogy Research LLC

High-res Digital Music vs. “Redbook CD” – a quick overview

I’m getting whiplash from polarized — and shallow — opinions in the high end world.

In digital music specifically, I’m bothered by the hardened, and often fact-free, opinions of both audiophiles and engineers – the latter of whom ought to know better.

As a design engineer and a true audiophile and music lover, I’m a rare bird, sitting in both camps.   I have learned (I don’t argue with reality) that things don’t always sound as they measure, and furthermore I understand rational reasons for this (beginning with incomplete measurements).  For this post I’ll try to avoid the quagmire of subjective thresholds and simply ask, “where are the differences and what is possible?”

I’ll turn up the contrast. At one extreme are many who believe digital is fatally flawed, always has been, and cannot be cured.  At the other end we have engineers who say “all is well, and if the bits are right, it’s perfect.” This is factually (technically) incorrect.  I’ll touch on only one small aspect of why here.

I don’t want to boil the ocean. I only want to address the question of whether the Redbook CD format is good enough for even highly discriminating music lovers and revealing systems, and if so, whether high-res files and recordings can simultaneously sound better. I’ll touch on related topics in future posts: digital interface signals and their contribution to audio quality, and why SPDIF and DSD, among others, are part analog.

My personal opinion is that, done perfectly (not in this world), Redbook — 16-bit, 44.1 k-sample, linear PCM encoding — is theoretically equal to, and likely superior to, any analog we are likely to experience – $5-10k turntables and all.  The problem is, Redbook digital is rarely (ok, never) done perfectly.   The “flaw” in the Redbook standard, again in my opinion, is that the sampling frequency chosen — for practical/cost reasons — makes it very hard for both studios and consumer playback equipment to perform ideal A->D and D->A conversion. These are analog processes, folks.

The biggest problem in the standard itself is the 44,100-samples-per-second rate (don’t confuse this with 44 kHz analog signals). This sampling rate was chosen to be more than 2X the highest audible frequency of about 20,000 Hz.  Per Shannon’s math and Nyquist’s specific theorem, one must sample at **more than** 2X a frequency in order to faithfully reproduce a smooth, distortion-free “X Hz” tone – and all frequencies below it.  Really, it can be perfect – but there’s a catch.  If you have ANY — **ANY** — signals above the 22.05 kHz Nyquist point that get into the recording path, they alias and play havoc with the recording, interfering down into the audio band.  Plus, these sorts of non-musically-related distortions are particularly annoying, leading in part to “digital glare”.
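A quick sketch of the fold-back arithmetic, with made-up input frequencies, shows where ultrasonic leakage lands after sampling at 44.1 kHz:

```python
def alias_frequency(f_in_hz, sample_rate_hz=44_100):
    """Where a tone appears after sampling: fold it back into 0..Nyquist."""
    f = f_in_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

print(alias_frequency(25_000))  # 19,100 Hz: an ultrasonic tone lands in the audible band
print(alias_frequency(43_000))  # 1,100 Hz: even worse, right in the midrange
```

And because the folded tone has no harmonic relationship to the music, it is exactly the kind of distortion our ears find most objectionable.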

That’s one of the measurement flaws.  All distortions are not created equal, sonically.  Yep, there’s good distortion and bad distortion, or at least bad and worse.  This is understood well in music theory.  A Boesendorfer or Steinway concert grand piano is valued for its consonant harmonic distortions.  So are (some) tubes.  So distortion can be pleasant, or at least not unpleasant.  Digital aliasing is not in that group – it’s just nasty. As is “slewing” distortion – and any odd-order, high-order harmonics.  Back to the sampling frequency: to rid ourselves of aliasing nastiness, we must filter out 100% of that ultrasonic stuff — the stuff above our cut-off frequency of 20 kHz.

Ok, but I said it could be done. It can.  In theory. The problem is, to get rid of everything above 20,000 Hz before it can fold back into the audio band, the standard only leaves us about 4,000 Hz for filters. And good-sounding, phase-coherent filters typically work by roughly halving the signal every OCTAVE, not within the small fraction of an octave the standard leaves us.  Bottom line #1: the filters used can be nasty.  Bottom line #2: they are not 100% perfect, so we typically get at least some aliasing. Maybe not much, but some. Note this is only ONE problem in the standard.  But rejoice: there are real, workable solutions, and they don’t begin with throwing away CD (16-bit/44.1k Redbook).
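To put rough numbers on how narrow that transition band is (a sketch; the exact requirement depends on whether you insist the filter be fully down by the 22.05 kHz Nyquist point, or only by roughly 24 kHz, above which content folds back under 20 kHz):

```python
import math

passband_edge = 20_000              # top of the audible band, Hz
nyquist = 44_100 / 2                # 22,050 Hz
folds_below_20k = 44_100 - 20_000   # 24,100 Hz: content above this aliases into 0-20 kHz

def octaves(f_low, f_high):
    """Width of the band between two frequencies, measured in octaves."""
    return math.log2(f_high / f_low)

print(round(octaves(passband_edge, nyquist), 2))          # ~0.14 octave
print(round(octaves(passband_edge, folds_below_20k), 2))  # ~0.27 octave
```

Either way, the filter must go from passing everything to rejecting everything within a fraction of an octave, compared with the gentle octave-by-octave slopes of benign analog filters.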

And this, in my worthless opinion, is why high-res files (24/96, etc.) sound better. There is WAY more headroom to work with for the filtering in the studio, and our home players have more room too.  Furthermore, with 24 bits, engineers can miss their levels by a few dB and it all works out.  And they can edit and make digital copies and still have perfection in the 16 or 18 most significant bits – which is still way better, on paper, than your mondo turntable – or mine (one of my collection is a Logic DM101 with a Syrinx arm and a Grado Reference; the other a Linn triple-play, if you care).

So we should quit worrying about format errors, and do two things:

1. Encourage studios to do the best job possible.  Think they do? Listen to most rock, then listen to an old Verve two-track recording. ’nuff said.

2. Buy home equipment that gets the signal processing right.  That’s another blog, but by this I mean low noise, low jitter, fast filter and buffer amps, and great power supplies.  Just like I built.  Trust me, it works.

I hope you found this useful.  When I have time to organize a complex subject, I’ll tackle why the digital player and interface can make a difference. After all, bits are bits. It’s true… but that signal isn’t (just) bits. Intrigued?

Grant

CEO Sonogy Research LLC

“Bitperfect” – huh?

After a rather long hiatus from audio technology, I have been re-immersing myself in the field — especially with regard to the evolution of digital formats and streaming.  An odd word kept coming up – “bitperfect” – commonly used, almost never defined. What the heck? Of course bits are perfect. The problems are all analog.

I’ll not go down this rat-hole today, but suffice it to say that digital audio signals have analog characteristics to them that have direct impact on the reconstruction of the analog wave.  More on THAT later.

So what is “bitperfect,” and what’s imperfect about much digital (computer) audio?

I’ll oversimplify.  Most of this has to do with how volume is controlled in computer audio.  One would think that once in the digital domain, manipulation – for example, turning down the volume – would be easy and without distortion.  In theory it can be, but in reality one would be wrong. The vast majority of music is coded initially as “Redbook” – CD format with a resolution of 16 bits, or “65,536 shades of gray” – which is pretty darned good and, IMNSHO, NOT where the problems in CD audio lie.  But if we simply do volume-control multiplication (like make it half as big) on the 16-bit words, we slowly lose resolution (think through the math; it’s true). If this doesn’t make sense, think about an extreme example: we digitally “turn down” the volume 99.99-something percent of the way and are left with only three digital levels – zero, one and two.  This is two-bit resolution and will sound like absolute crap. That’s a technical term.  For a comparison, if you can, turn your monitor to 4 or 8 bits of color and look at the screen.  Yuk.  ‘Nuf said, I hope.
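Here is a minimal sketch of that resolution loss (hypothetical attenuation settings; it just scales a full-scale 16-bit sample the way a naive digital volume control would, and counts how many magnitude bits are still doing useful work):

```python
import math

FULL_SCALE = 2 ** 15 - 1   # largest positive code in signed 16-bit audio (32767)

def attenuate(sample, attenuation_db):
    """Scale an integer sample down and round back to an integer,
    as a naive digital volume control does."""
    gain = 10 ** (-attenuation_db / 20)   # amplitude (voltage) gain
    return int(round(sample * gain))

for db in (0, 24, 48, 72, 90):
    peak = attenuate(FULL_SCALE, db)
    bits_left = max(1, math.ceil(math.log2(peak + 1)))  # magnitude bits still in use
    print(f"-{db:2d} dB: peak code {peak:5d}, ~{bits_left} magnitude bits left")
```

By the time the slider has taken off around 90 dB, only a handful of codes remain, which is exactly the “three levels” situation described above.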

You can see the numbers in a presentation by ESS Technology here:

http://esstech.com/files/3014/4095/4308/digital-vs-analog-volume-control.pdf

To get it right we need to do two things:

  1. do our math at higher resolution, for example 24 or 32 bits, so we can maintain full 16-bit resolution,
  2. **AND** directly convert these higher-resolution words to analog (meaning a 32-bit conversion process) so we don’t truncate that resolution.

Doing the math in 32 bits is pretty simple. Yes, we’d need some code to convert it and do floating-point math, and we’d need to temporarily save the much bigger buffered file, but that’s easy for machines that edit photos in Photoshop.  The problem comes next: we need to convert this 32-bit word into a squiggly AC analog music voltage.  Problem: we have a 16-bit D/A chip.  And so do all the interfaces (SPDIF, AES/EBU, carrier pigeons).

You can read more about it, but the bottom line is this: in 99% of all cases, and 100% of all PC/MAC/LINUX cases, you should never use the digital volume control – that convenient little slider.  Just say no.  Set the volume to full and send the output to your DAC or networked digital player – and let ALL the bits get there, to be converted to a nice, clean, high-res music signal.  Then you can attenuate it with good, old-fashioned resistors.

(note: if you are just playing MP3 files through the sound card to your earbuds, none of this really matters)

So “bitperfect” is a word that we should never have had to invent, nor explain. It comes from shortcuts made in commercial music players.

Fortunately, most high-res players with real aspirations know this and take care of it. JRiver, Roon, etc. are all bitperfect. Sorry to leave out many others.

I’ll add that for Macs there is a surprisingly good app that simply takes your iTunes library and hijacks the signal, delivering it without manipulation – in other words, “bitperfect” – and it costs $10.  It’s called… BitPerfect for Mac.

Enjoy!

 

Grant

CEO, Sonogy Research, LLC

 

Applying MANO to Change the Economics of our Industry –
A Promising TMForum Catalyst (Dec 2015)

Appledore Research Group has been outspoken on the importance of automation and optimization in the Telco Cloud. We have outlined its importance, and the mechanism to minimize both CAPEX and OPEX in recent research. Our belief is that this kind of optimization depends on three critical technologies:

  1. Analytics to collect data and turn it into useful information
  2. Policy-driven MANO to allow for significant flexibility within well-defined constraints, and
  3. Algorithms capable of identifying the most cost effective solutions, within the constraints (location, performance, security, etc.) enforced by the policies

Here’s an excerpt from recent ARG research outlining the process:
Policy flow chart from that research.

Until now, we have seen relatively little action and innovation in the industry to pursue these goals – but here’s an interesting project that’s right on point. I want to share an exciting TMForum Catalyst, one that investigates the economic power of NFV and asks, “how, in practice…?”

That is not a typo. I did say “exciting” “catalyst” and “TMForum” in the same sentence. I realize that standards and management processes are not usually the stuff that makes your heart beat faster; but if you care about our industry’s commercial future (and like innovative thinking), this one’s different.

The premise is simple: the flexibility inherent in the “Telco Cloud,” underpinned by NFV and SDN, makes it possible and feasible to consider economic factors when deciding how to instantiate and allocate resources across data centers. This Catalyst, involving Aria Networks, Ericsson, NTT Group, TATA and Viavi, set out to demonstrate this capability, along with a realistic architecture and contributions back to the TMF’s Frameworks construct.

To me, this is exciting. It says we can use the “MANO+” environment to drive down costs, and possibly even, over time, to create a “market” for resources such that high quality, low cost resources flourish while more marginal ones are further marginalized. This goes straight to the economics, competitiveness, and profitability of our industry and deserves serious attention.

This catalyst team appears well balanced in this regard, with each player bringing expertise in one or more of those critical areas, and one of the leading operators driving the cloud transformation guiding the objectives.

Ericsson summed up the challenge and the objective as follows:

“This TM Forum catalyst project intends to bridge the gap between OSS/BSS and the data silos in finance systems and data center automation controls to enable the kind of dynamic optimization analytics needed to achieve business-agile NFV orchestration.” – Ravi Vaidyanathan, Ericsson Project Lead

At the moment the industry is understandably focused on making NFV and MANO work – even simply. We must all walk before we try to run. Yet it’s very rewarding and encouraging to see the industry not only attempt to run, but think about how far it can run. Step #1 in any journey is a destination; hats off to this team for picking a worthy one.

By the way, this team won a deserved award for most important contributions to the TM Forum’s standards. They deserve it for really thinking!

Grant Lenahan
Partner and Principal Analyst
Appledore Research Group
grant@appledorerg.com