
Please see my blogs on Appledoreresearch.com!

My continuing blogs will be on the Appledore Research Group website.
ARG is becoming THE acknowledged expert in managing virtualized and hybrid telecom networks and services.

My technology, music, audio and driving related rants will remain here!

— Grant

Applying MANO to Change the Economics of our Industry –
A Promising TMForum Catalyst (Dec 2015)

Appledore Research Group has been outspoken on the importance of automation and optimization in the Telco Cloud. In recent research we have outlined its importance and the mechanisms for minimizing both CAPEX and OPEX. Our belief is that this kind of optimization depends on three critical technologies:

  1. Analytics to collect data and turn it into useful information
  2. Policy-driven MANO to allow for significant flexibility within well-defined constraints, and
  3. Algorithms capable of identifying the most cost-effective solutions, within the constraints (location, performance, security, etc.) enforced by the policies

Here’s an excerpt from recent ARG research outlining the process:
[Figure: policy flow chart from ARG research]
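
To make that three-part flow concrete, here is a minimal sketch of how analytics, policy, and a cost-optimization algorithm might fit together. It is purely illustrative: the metric values, policy constraints, and candidate data centers are all hypothetical, not drawn from any particular MANO implementation.

    # Illustrative only: a toy "analytics -> policy -> optimization" pass.
    # Every name and number here is hypothetical.

    CANDIDATE_SITES = [
        {"name": "dc-east", "cost_per_hour": 1.20, "latency_ms": 12, "secure": True},
        {"name": "dc-west", "cost_per_hour": 0.85, "latency_ms": 48, "secure": True},
        {"name": "dc-edge", "cost_per_hour": 2.10, "latency_ms": 4,  "secure": False},
    ]

    # 2. Policy: well-defined constraints within which the optimizer may act freely.
    POLICY = {"max_latency_ms": 50, "require_secure": True}

    def analyze(samples):
        """1. Analytics: turn raw data into useful information."""
        return {"avg_utilization": sum(samples) / len(samples)}

    def cheapest_compliant(sites, policy):
        """3. Algorithm: the most cost-effective option inside the constraints."""
        feasible = [s for s in sites
                    if s["latency_ms"] <= policy["max_latency_ms"]
                    and (s["secure"] or not policy["require_secure"])]
        return min(feasible, key=lambda s: s["cost_per_hour"]) if feasible else None

    info = analyze([0.62, 0.71, 0.80])
    if info["avg_utilization"] > 0.70:        # analytics says capacity is needed
        site = cheapest_compliant(CANDIDATE_SITES, POLICY)
        print("Scale out to", site["name"])   # -> dc-west, cheapest compliant site

Real deployments replace that min() with far richer optimization, but the shape – data in, constraints enforced, cheapest compliant action out – is the point.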

Until now, we have seen relatively little action or innovation in the industry in pursuit of these goals – but here’s an interesting project that’s right on point. I want to share an exciting TMForum Catalyst; one that investigates the economic power of NFV and asks, “how, in practice…?”

That is not a typo. I did say “exciting,” “catalyst,” and “TMForum” in the same sentence. I realize that standards and management processes are not usually the stuff that makes your heart beat faster; but if you care about our industry’s commercial future (and like innovative thinking), this one’s different.

The premise is simple: the flexibility inherent in the “Telco Cloud”, underpinned by NFV and SDN, makes it feasible to consider economic factors when deciding how to instantiate and allocate resources across data centers. This catalyst, involving Aria Networks, Ericsson, NTT Group, TATA and Viavi, set out to demonstrate this capability, along with a realistic architecture and contributions back to the TMF’s Frameworx construct.

To me, this is exciting. It says we can use the “MANO+” environment to drive down costs, and possibly even, over time, to create a “market” for resources such that high quality, low cost resources flourish while more marginal ones are further marginalized. This goes straight to the economics, competitiveness, and profitability of our industry and deserves serious attention.

This catalyst team appears well balanced in this regard: each player brings expertise in one or more of those critical areas, and one of the operators leading the cloud transformation is guiding the objectives.

Ericsson summed up the challenge and the objective as follows:

“This TM Forum catalyst project intends to bridge the gap between OSS/BSS and the data silos in finance systems and data center automation controls to enable the kind of dynamic optimization analytics needed to achieve business-agile NFV orchestration.” – Ravi Vaidyanathan, Ericsson Project Lead

At the moment the industry is understandably focused on making NFV and MANO work – even simply. We must all walk before we try to run. Yet it’s very rewarding and encouraging to see the industry not only attempt to run, but think about how far it can run. Step #1 in any journey is a destination; hats off to this team for picking a worthy one.

By the way, this team won a well-deserved award for the most important contribution to the TM Forum’s standards. They deserve it for really thinking!

Grant Lenahan
Partner and Principal Analyst
Appledore Research Group
grant@appledorerg.com

The Rise of Policy in Network Management:
Seductive Opportunities Along with Complex Risks

author: Grant Lenahan

The role of policy is about to expand rapidly, projecting a little-understood area, mostly associated with the operation of real-time routers, into the domain of management. It’s a great boon, but will demand re-thinking both what we think policy is, and what we think “OSS and BSS” are. Success will demand a well-defined plan, executed in a series of clearly defined steps.

Policy has been with us since the relatively early days of the Internet, when the IETF defined “Policy Decision Points” and “Policy Enforcement Points” – or PDPs and PEPs. Used only in very specific instances, policy has been limited to AAA/edge routers and, in 3G and 4G mobile networks, to flow-based charging, where 3GPP defined the derivative “PCRF” and “PCEF”.

The bottom line is that policy will quickly expand from relatively few use cases to handling a wide range of network configuration tasks, all based on some key questions (a toy sketch follows the list):

  • Who is the user, and what priority does that user have?
  • What is the product/service, or plan, and what parameters are demanded, possibly by SLA?
  • What is the network condition? Is it congested? Empty?
  • What are the technical and economic feasibility limits we must work within?
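
To see how those four questions might become machine-readable rules, here is a toy sketch. Every field name and threshold is hypothetical, and real policy engines (PCRF-class or otherwise) are far richer; this only shows the shape of a decision.

    # Toy policy decision over the four questions above. Illustrative only.

    def decide(request):
        user    = request["user"]     # who is the user, and what priority?
        plan    = request["plan"]     # what product/plan, and what SLA?
        network = request["network"]  # what is the network condition?
        limits  = request["limits"]   # technical/economic feasibility bounds

        # Never grant more than is technically feasible.
        mbps = min(plan["sla_mbps"], limits["max_mbps"])

        # Under congestion, only premium users keep their full SLA rate.
        if network["congested"] and user["priority"] != "premium":
            mbps *= 0.5

        return {"grant_mbps": mbps}

    print(decide({
        "user":    {"priority": "standard"},
        "plan":    {"sla_mbps": 100},
        "network": {"congested": True},
        "limits":  {"max_mbps": 150},
    }))  # -> {'grant_mbps': 50.0}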

Policy is already being defined to control many attributes in SDN and NFV – scale, reliability, bandwidth, security, and location (geographic or data center) among others. Elements of a policy model are being discussed in various industry groups, from ETSI/MANO to the TMForum (DEN-ng, ZOOM). But this is the dry “how?”; let’s discuss the exciting “what?”.

The real excitement begins when we understand that policy, combined with analytics and real-time (MANO-style) orchestration, can implement real-time, all-the-time, optimization of networks. While scary, these sorts of feedback loops have long been used in military and commercial guidance systems, in machine control, and in myriad other control systems. In fact, the basic ideas are called, in academia, “control theory”.

Imagine a data center that approaches congestion and, through analytics driving new policy rules, automatically moves demand to a lightly used data center – improving performance and averting capital spend; quite the happy outcome. Or consider analytics that correlate a set of security breaches with specific parameters and close the loophole by changing the policies that define those parameters. SDN, SON, NFV, and “3rd Network” based MEF services can all benefit from such dynamic and far-reaching policy.

Discussing each is beyond the scope of this blog, but I’d like to set the stage for future dives into several elements of policy. In preparation, let’s consider that control-theory flow: from information collection (analytics), to determining the corrective action (optimization), to issuing the revisions (policy control and possibly orchestration). This simple, yet complex, concept can fundamentally change the economics, flexibility and operation of networks. In my opinion, it is essential to deriving the greatest benefit from virtualized networks.
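
As a minimal sketch of that loop, assuming hypothetical metrics, thresholds, and a toy “move demand” action in place of real analytics, optimization, and orchestration systems, consider the code below. Note the deadband and small step size: they are exactly the damping that keeps a control loop from oscillating.

    # Toy closed loop: collect -> decide -> act. All numbers hypothetical.
    TARGET = 0.70    # policy: keep utilization near 70%
    DEADBAND = 0.10  # damping: intervene only well above the target

    util = {"dc-a": 0.90, "dc-b": 0.40}

    def collect():
        """Analytics stand-in: observe current utilization."""
        return dict(util)

    def decide(observed):
        """Optimization stand-in: route demand from hot sites to the coldest."""
        hot = [dc for dc, u in observed.items() if u > TARGET + DEADBAND]
        coldest = min(observed, key=observed.get)
        return [(dc, coldest) for dc in hot if dc != coldest]

    def act(moves):
        """Policy/orchestration stand-in: shift demand in small steps."""
        for src, dst in moves:
            util[src] -= 0.05   # small steps are themselves a damping mechanism
            util[dst] += 0.05

    for _ in range(20):          # in a real network this loop never ends
        act(decide(collect()))

    print(util)  # dc-a is pulled back inside the policy band; dc-b absorbs the load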

[Figure: the policy control circle]

Before we begin a new hype cycle, though, consider the challenges and risks. This level of automation will be difficult to deploy and tune. Policy conflicts must be managed; autonomous systems must be tested and trusted (“I wouldn’t do that, Dave”); and instability must be controlled (all control systems can oscillate). Success will likely come from a set of incremental steps, each of which adds – and tests – a layer of automation, and will therefore take years to complete. But the greatest benefit will go to those who build, brick by brick, toward a well-understood goal or vision.

Stay tuned for future installments touching on specific areas of policy in tomorrow’s network.

Grant

Dynamic, Self-Optimizing VNFs:
Overview of an Innovative TMForum Catalyst

author: Grant Lenahan

Here’s a link to a short, to-the-point explanation of how several major industry players, sponsored by AT&T and under the auspices of the TMForum, are looking at how policy, virtualization, analytics and “orchestration” combine to usher in a new world of dynamic optimization. In the not-too-distant future we may have networks smart enough to heal themselves, scale themselves, and optimize costs and resource utilization. Of course, the devil is in the details, and policy is still poorly understood in the world of management (OSS, BSS, etc.) software. While the functions denoted by “FCAPS” remain as important as ever, the methods are changing rapidly. Or maybe I should say, “must change rapidly if we want to succeed”.

Enjoy, and many thanks to RCR and the TMForum for making this possible.

http://inform.tmforum.org/featured/2014/12/video-ericsson-drive-toward-virtualization/

-Grant

Hybrid, Virtualized Networks Require Modern, Hybrid Management

author: Grant Lenahan

It sounds obvious, but apparently it’s not.

I’ve been working closely with ETSI/MANO, the TMForum and its ZOOM project, and several thought-leading carriers implementing virtualization, and I’m observing a disturbing trend: a focus on the virtualization technology itself that omits the broader challenges of the management systems and processes that are absolutely critical to making it work.

The business value of virtualization, although delivered by new technology, is that it enables new flexibility, new business models, and infinite product packages, all at low incremental cost. Let’s set out some examples:

  • Services can be scaled infinitely, matching price-point and capacity to need and willingness to pay
  • Services can be turned on and off for any arbitrary time period
  • Bundles can be easily created, likely based on each buyer’s needs
  • Flows and network functions can be placed where capacity exists or where they can be delivered at the lowest cost consistent with SLA needs
  • … And many more

But these benefits do not occur magically; each depends on various management functions, implemented in OSS/BSS. Poor support; limited value.

No one is explicitly ignoring OSS/BSS, of course; it’s subtler. In ETSI the focus is on user stories and the technology that closely surrounds the VNFs. This is reasonable – people can’t concentrate on sufficient depth and breadth at the same time and make good progress – it’s just “focus”. The ETSI diagram below illustrates the point: lots of detail in the MANO domain, while OSS/BSS – dozens of important functions – are relegated to the top left corner. Within operators the focus tends to sit in segregated teams – again, good for concentrating expertise, but bad for an end-to-end view.

[Figure: ETSI NFV/MANO reference architecture]

Unintentionally, this kind of specialization has historically led to one of two outcomes – neither desirable:

  1. Creation of a shiny new stack for the new technology, which inevitably becomes a new silo that creates messy, hard-to-manage integration between the “old” and the “new” and stifles agility, and/or
  2. Forcing existing systems, some of which may not be up to the task, to support the new technology – inevitably poorly, and with a similar impact on flexibility and agility.

My point is simple, yet the implementation is subtle and complex – but ultimately very worthwhile. OSS and BSS are critical to realizing the benefits of virtualization and to monetizing this exciting technology. They must support the same transformational operational models that NFV, cloud and SDN do. And they must do so in an environment that continues to support many non-virtualized technologies, especially those in the distribution (access) network(s). We cannot separate them and focus only on the “new technology” (outcome 1), nor can we assume existing systems are up to the challenge (outcome 2); rather, we can only succeed when we manage the end-to-end business process – across domains – efficiently. In general, while many complex systems will remain, this means a re-think of the E2E architecture.
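
As a purely illustrative sketch of what that end-to-end, cross-domain management might look like architecturally (all class and method names here are hypothetical), consider a single activation flow driving both a virtualized domain and a legacy access domain through one common contract:

    # Illustrative sketch: one E2E flow spanning virtualized and legacy domains.
    # All names are hypothetical; real OSS/BSS integration is far more involved.
    from abc import ABC, abstractmethod

    class DomainManager(ABC):
        """Common management contract, whatever the underlying technology."""
        @abstractmethod
        def activate(self, service_id: str) -> str: ...

    class VirtualDomain(DomainManager):
        def activate(self, service_id: str) -> str:
            # e.g. ask a MANO orchestrator to instantiate and chain VNFs
            return service_id + ": VNFs instantiated via orchestrator"

    class LegacyAccessDomain(DomainManager):
        def activate(self, service_id: str) -> str:
            # e.g. drive a traditional access-provisioning system
            return service_id + ": access circuit provisioned via legacy OSS"

    def activate_end_to_end(service_id, domains):
        """The E2E process owns the sequence; each domain hides its technology."""
        for domain in domains:
            print(domain.activate(service_id))

    activate_end_to_end("svc-42", [LegacyAccessDomain(), VirtualDomain()])

The point is the shape, not the code: the end-to-end process owns the flow, while virtualized and non-virtualized domains alike are managed through one consistent contract rather than bolted-on silos.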

Virtualization is truly transformative, but the decisions we make over the next months and years will determine just how extensive and successful that transformation is. As I like to (half) joke: “if you can’t efficiently bill for it, it’s just a hobby.”

Food for thought! Watch this space for function-by-function examples in the near future – Grant
