There’s little debate in our industry that network virtualization (NFV/SDN) is the way forward for telecoms’ infrastructure. So why are adoption rates of NFV/SDN architectures still low?
That’s because two critical elements for realizing its benefits have been missing: a standardized way of automating NFV/SDN infrastructure (as opposed to a plethora of competing solutions), and a more viable way of rolling out virtualization technology. Amdocs has long recognized this challenge and has been actively contributing through organizations like the TM Forum to find solutions.
ONAP provides the answer to both.
Its introduction of an open-source approach into telecommunications – an industry traditionally steeped in proprietary solutions – is critical to the success of network virtualization. I’d go as far as to say that virtualization can’t really be justified without open source: its benefits will simply be too hard to realize.
Why can’t we just do what we’ve always done?
In theory, virtualization is a no-brainer. Like cloud, it benefits from low-cost, off-the-shelf hardware, while delivering greater agility in network management, service creation and provisioning. But because virtualization is a fundamental shift for our sector, the reality looks quite different.
It simply doesn’t make sense to deploy NFV/SDN networks in the traditional way, where operators specify their requirements and vendors provide proprietary solutions. The cost of developing, maintaining and constantly updating code for proprietary NFV/SDN products – including all customer-specific adaptations – would be prohibitive for any individual vendor. (Not to mention their operator clients, who would of course have to foot the hefty bill.)
Another issue with the traditional approach is that onboarding a new virtual network function (VNF) is a lengthy and costly process for the operator. It involves developing vendor guidelines, testing, and then going back and forth with the vendor to fix bugs until the VNF can finally be deployed.
As a service provider, you don’t want to go through – and pay for – this protracted process more often than you need to. It means that you’re now heavily tied in with your chosen vendor, can’t switch easily, and can’t viably build your network with solutions from multiple suppliers.
ONAP addresses all these issues, thanks to its underlying open-source philosophy and by building on industry agreements that have already been established.
Learning from the cloud
Our industry needs to adopt much more from cloud computing than just virtualization technology – we need to learn how it works and why open source has become so fundamental to its business model. Just think of the success of OpenStack, Cloud Foundry, KVM, Hadoop and Docker, and you can see that taking a collaborative approach – rather than one marked by competitive tension – has obvious benefits.
First, many hands make light work. With ONAP, industry giants like AT&T, China Mobile, Bell, Orange, Cisco and Amdocs are putting their weight – and their software engineers – behind the project. Bringing together a large community of developers means that the software can mature far more quickly than any individual organization could manage on its own.
With a large-enough share of telecom heavyweights – operators, vendors and integrators – joining the project, it will reach the critical mass needed to make ONAP the de facto industry standard for network automation. This will put an end to the product fragmentation that has curbed operators’ appetite for NFV/SDN technology, and simplify the work of standardization projects.
What’s more, onboarding can happen much faster with a standardized platform: a vendor can build and test a VNF for ONAP before it is validated and deployed by the operator. Any outstanding troubleshooting, fault finding and bug fixing will also become easier because the ONAP code is accessible to all parties.
For the same reason, ONAP will finally enable service providers to build multi-vendor networks, rather than being ‘locked in.’ It will also make their vendor relationships less ‘sticky’, enabling them to switch suppliers as they see fit.
What will it take to succeed?
Let’s be clear: moving our industry from its traditional vendor-centric model to an open-source one won’t happen overnight. The success of projects like ONAP will be critical to making this transformation happen.
The first priority will be to unite large swathes of the industry behind ONAP in order to elevate it to an industry standard.
Secondly, membership must not be a token gesture. Contributors must pull their weight, making sufficient resources available, both human and financial. Committing to ONAP early on has the added benefit of letting members drive and influence the development of the platform from the get-go, rather than becoming an ‘also ran’.
Thirdly, the project must be well-managed – a lot will depend on ONAP’s Governing Board and Technical Steering Committee. But while strong leadership is required, it mustn’t become too restrictive. If members think that too few of the suggestions they’re putting forward are making it onto the ONAP roadmap, this may affect their motivation to contribute.
The result could be a multiplication of side projects, which would lead us back to fragmentation and proprietary approaches. This must be avoided at all costs.
For more on this subject, don’t miss Eyal Felstaine’s presentation “Open source and the open network – strategies for success” at TM Forum Live, Tuesday 16 May.
This blog was previously published on TM Forum’s Inform.
This blog is part of our ONAP Insider series, which takes you behind the scenes, offering a more in-depth look at the workings of ONAP, how it is changing business models, simplifying network design and unlocking new business opportunities for service providers, content developers and end-users.