Archive for Telecom Blogs

Applying MANO to Change the Economics of our Industry –
A Promising TMForum Catalyst (Dec 2015)

Appledore Research Group has been outspoken on the importance of automation and optimization in the Telco Cloud. In recent research we have outlined its importance, and the mechanisms to minimize both CAPEX and OPEX. Our belief is that this kind of optimization depends on three critical technologies:

  1. Analytics to collect data and turn it into useful information
  2. Policy-driven MANO to allow for significant flexibility within well-defined constraints, and
  3. Algorithms capable of identifying the most cost-effective solutions, within the constraints (location, performance, security, etc.) enforced by the policies

Here’s an excerpt from recent ARG research outlining the process:

Until now, we have seen relatively little action and innovation in the industry to pursue these goals – but here’s an interesting project that’s right on point. I want to share an exciting TMForum Catalyst; one that investigates the economic power of NFV and asks, “how, in practice…?”

That is not a typo. I did say “exciting,” “catalyst,” and “TMForum” in the same sentence. I realize that standards and management processes are not usually the stuff that makes your heart beat faster; but if you care about our industry’s commercial future (and like innovative thinking), this one’s different.

The premise is simple: the flexibility inherent in the “Telco Cloud” – underpinned by NFV and SDN – makes it feasible to consider economic factors when deciding how to instantiate and allocate resources across data centers. This catalyst, involving Aria Networks, Ericsson, NTT Group, TATA and Viavi, set out to demonstrate this capability, along with a realistic architecture and contributions back to the TMF’s Frameworx construct.

To me, this is exciting. It says we can use the “MANO+” environment to drive down costs, and possibly even, over time, to create a “market” for resources such that high-quality, low-cost resources flourish while more marginal ones are further marginalized. This goes straight to the economics, competitiveness, and profitability of our industry and deserves serious attention.

This catalyst team appears well balanced in this regard: each player brings expertise in one or more of those critical areas, and one of the leading operators driving cloud transformation is guiding the objectives.

Ericsson summed up the challenge and the objective as follows:

“This TM Forum catalyst project intends to bridge the gap between OSS/BSS and the data silos in finance systems and data center automation controls to enable the kind of dynamic optimization analytics needed to achieve business-agile NFV orchestration.” – Ravi Vaidyanathan, Ericsson Project Lead

At the moment the industry is understandably focused on making NFV and MANO work – even simply. We must all walk before we try to run. Yet it’s very rewarding and encouraging to see the industry not only attempt to run, but think about how far it can run. Step #1 in any journey is choosing a destination; hats off to this team for picking a worthy one.

By the way, this team won a deserved award for most important contributions to the TM Forum’s standards. They deserve it for really thinking!

Grant Lenahan
Partner and Principal Analyst
Appledore Research Group

The Rise of Policy in Network Management:
Seductive Opportunities Along with Complex Risks

author: Grant Lenahan

The role of policy is about to expand rapidly, projecting a little-understood area – mostly associated with the operation of real-time routers – into the domain of management. It’s a great boon, but it will demand re-thinking both what policy is and what “OSS and BSS” are. Success will demand a well-defined plan, executed in a series of clearly defined steps.

Policy has been with us since the relatively early days of the Internet, when the IETF defined “Policy Decision Points” and “Policy Enforcement Points” – or PDPs and PEPs. Until recently, policy has been used only in very specific instances: AAA and edge routers, and “flow-based charging” in 3G and 4G mobile networks, where 3GPP defined the derivative “PCRF” and “PCEF” functions.

The bottom line is that policy will quickly expand from relatively few use cases, to handling a wide range of network configuration tasks, all based on some key questions:

  • Who is the user, and what priority does that user have?
  • What is the product/service, or plan, and what parameters are demanded, possibly by SLA?
  • What is the network condition? Is it congested? Empty?
  • What are the technical and economic feasibility limits we must work within?
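To make the four questions concrete, here is a minimal sketch in Python of how a policy decision point might combine them into a single placement choice. All names, thresholds, and data structures are hypothetical illustrations, not any standard PDP or PCRF interface:

```python
# Illustrative sketch only: names and thresholds are hypothetical,
# not drawn from any real PDP/PCRF implementation.
from dataclasses import dataclass

@dataclass
class Request:
    user_priority: int       # Who is the user? (higher = more important)
    sla_max_latency_ms: int  # What does the product/SLA demand?

@dataclass
class Site:
    name: str
    utilization: float       # Network condition: 0.0 (empty) .. 1.0 (congested)
    latency_ms: int          # Technical feasibility limit
    cost_per_unit: float     # Economic feasibility limit

def choose_site(req: Request, sites: list) -> str:
    """Pick the cheapest site that satisfies the SLA and congestion policy."""
    # Policy constraint: high-priority users avoid sites above 80% load;
    # every request must meet its SLA latency bound.
    limit = 0.8 if req.user_priority >= 5 else 0.95
    feasible = [s for s in sites
                if s.latency_ms <= req.sla_max_latency_ms
                and s.utilization <= limit]
    if not feasible:
        raise RuntimeError("no site satisfies policy constraints")
    # Within the policy constraints, optimize for economics.
    return min(feasible, key=lambda s: s.cost_per_unit).name

sites = [Site("dc-east", utilization=0.9, latency_ms=10, cost_per_unit=1.0),
         Site("dc-west", utilization=0.4, latency_ms=25, cost_per_unit=0.7)]
print(choose_site(Request(user_priority=7, sla_max_latency_ms=30), sites))
# → dc-west (dc-east is excluded by the congestion policy)
```

The structure mirrors the questions above: user and product set the constraints, network condition and feasibility limits filter the candidates, and economics breaks the tie.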

Policy is already being defined to control many attributes in SDN and NFV – scale, reliability, bandwidth, security, and location (geographic or datacenter), among others. Elements of a policy model are being discussed in various industry groups, from ETSI NFV (MANO) to the TMForum (DEN-ng, ZOOM). But this is the dry “how?”; let’s discuss the exciting “what?”.

The real excitement begins when we understand that policy, combined with analytics and real-time (MANO-style) orchestration, can implement real-time, all-the-time optimization of networks. While that may sound risky, such feedback loops have long been used in military and commercial guidance systems, in machine control, and in myriad other control systems. In academia, the underlying ideas are known as “control theory”.

Imagine a data center that approaches congestion and, through analytics driving new policy rules, automatically moves demand to a lightly used data center – improving performance and averting capital spend; quite the happy outcome. Or consider analytics that correlate a set of security breaches with specific parameters, and close the loophole by changing the policies that define those parameters. SDN, SON, NFV, and “3rd Network”-based MEF services can all benefit from such dynamic and far-reaching policy.
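As a toy illustration of the congestion scenario, here is one step of such a feedback loop sketched in Python. Everything here is hypothetical – the metric names, the thresholds, and the damping factor – and a real system would act through MANO/OSS interfaces rather than a dictionary:

```python
# Illustrative control loop: collect metrics (analytics), decide a
# correction (optimization), emit revised policy (policy control).
# All names and thresholds are hypothetical.
def control_loop_step(metrics: dict, policy: dict) -> dict:
    """One iteration of the analytics -> optimization -> policy cycle."""
    new_policy = dict(policy)
    util = metrics["utilization"]
    # Optimization step: steer demand away from congestion, but damp the
    # adjustment (factor 0.5) to reduce the risk of oscillation -- the
    # classic control-theory concern.
    if util > policy["congestion_threshold"]:
        overshoot = util - policy["congestion_threshold"]
        new_policy["offload_fraction"] = min(
            1.0, policy["offload_fraction"] + 0.5 * overshoot)
    elif util < policy["congestion_threshold"] - 0.2:
        # Hysteresis: only wind the offload back once well below threshold.
        new_policy["offload_fraction"] = max(
            0.0, policy["offload_fraction"] - 0.05)
    return new_policy

policy = {"congestion_threshold": 0.8, "offload_fraction": 0.0}
policy = control_loop_step({"utilization": 0.9}, policy)
# offload_fraction rises by 0.5 * (0.9 - 0.8), i.e. to roughly 0.05
```

The damping and hysteresis are deliberate: without them, a loop like this can chase its own corrections and oscillate, which is exactly the stability concern raised below.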

Discussing each is beyond the scope of this blog, but I’d like to set the stage for future dives into several elements of policy. In preparation, let’s consider the control-theory flow: from information collection (analytics), to determining the corrective action (optimization), to issuing the revisions (policy control and, possibly, orchestration). This conceptually simple – yet operationally complex – idea can fundamentally change the economics, flexibility and operation of networks. In my opinion, it is essential to deriving the greatest benefit from virtualized networks.

[Figure: “Policy Circle” – the analytics → optimization → policy-control feedback loop]

Before we begin a new hype cycle, though, consider the challenges and risks. This level of automation will be difficult to deploy and tune. Policy conflicts must be managed; autonomous systems must be tested and trusted (“I wouldn’t do that, Dave”); and instability must be controlled (all control systems can oscillate). Success will likely come from a series of incremental steps, each of which adds – and tests – a layer of automation, and will therefore take years to complete. But the greatest benefit will go to those who build, brick by brick, toward a well-understood goal or vision.

Stay tuned for future installments touching on specific areas of policy in tomorrow’s network.


Dynamic, Self-Optimizing VNFs:
Overview of an Innovative TMForum Catalyst

author: Grant Lenahan

Here’s a link to a short, to-the-point explanation of how several major industry players, sponsored by AT&T and under the auspices of the TMForum, are looking at how policy, virtualization, analytics and “orchestration” combine to usher in a new world of dynamic optimization. In the not-too-distant future we may have networks smart enough to heal themselves, scale themselves, and optimize costs and resource utilization. Of course, the devil is in the details, and policy is still poorly understood in the world of management (OSS, BSS, etc.) software. While the functions denoted by “FCAPS” remain as important as ever, the methods are changing rapidly. Or maybe I should say, “must change rapidly if we want to succeed.”

Enjoy, and many thanks to RCR and the TMForum for making this possible.