Traditionally, network management has been built around three main tasks: node management, node control (or configuration) and node monitoring. Conceptually, this approach is still valid, but it is clearly limited in scope to the bounded network domain (or simply ‘network’, in the sense of ‘administrative network domain’). Moreover, being so tightly coupled to the node and its technology makes it difficult to focus on network service provisioning and management. Things get even worse when considering inter-network services, which have to tackle inter-domain scenarios, user profiling and interfacing, and the distributed nature of present and future computing paradigms. It is precisely in the latter where, as a matter of fact and business, virtualisation has revolutionised IT service provisioning, materialising in Cloud Computing and making it even more evident that network management is not prepared for it.
One of virtualisation’s most valuable features is resource capability abstraction. Indeed, it has been the main enabler for IaaS business models in IT, permitting a clear separation between computing infrastructure services (legacy) and servicing the infrastructure itself (IaaS). But this complicates network management: not only were legacy network services already difficult to manage and automate, but network IaaS (aka NaaS) also requires a set of resource/technology abstraction functions that tend to be complex to deal with when considering the different layers in a network. Notwithstanding the complexity of the problem, network virtualisation and NaaS are seen as the means for letting networks become part of the future ICT world, where coordinated IT and network services allow flexible and dynamic ICT infrastructures to be deployed, even addressing energy efficiency and application awareness.
Needless to say, capability abstraction allows flexible manipulation of the resource. An appropriate functional decomposition of resource capabilities allows virtual instances of a resource to be created with the sufficient level of granularity. In turn, this allows the infrastructure provider to offer operable virtual resources (VRs) as a service, that is, configurable and monitorable VRs, as the basis for NaaS. The operator then has to deploy the control logic on top of them. One may argue this is not new: it has been happening for years, ever since the Hardware Abstraction Layer (HAL) was designed for some Operating Systems running on PCs. And it’s true: back in 1979, R. W. Watson and J. G. Fletcher were already discussing the feasibility of designing network architectures to support network operating systems (NOSs). A good analogy is HAL resources + device drivers in a PC OS compared to virtualised network resources + node controllers in networks (but then, aren’t Future Networks big-scale, distributed computer buses?). For this purpose, some research projects are creating tools to make network virtualisation easy to handle from a provider/operator perspective. A couple of European projects DANA contributes to in this area are EU Mantychore and EU GEYSERS. The former targets the deployment of a NaaS service over European National Research and Education Networks (NRENs) by means of the so-called OpenNaaS framework. This framework not only implements tools to model and expose capabilities of network resources, but also offers high-level services such as Bandwidth on Demand (BoD) and connectors to IT virtualisation tools (e.g. OpenNebula). The latter, GEYSERS, creates the so-called Logical Infrastructure Composition Layer (LICL), which hides the complexity of infrastructure virtualisation from the operator.
The LICL allows infrastructure providers to focus on the abstraction/management functions and to provide interfaces for control and monitoring, which are used by the operator.
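To make the idea of an operable VR more concrete, here is a minimal sketch of a virtual resource exposed through configuration and monitoring capabilities, loosely inspired by the resource/capability model described above. All class and method names are illustrative assumptions, not the actual OpenNaaS or LICL API.

```python
# Hypothetical sketch: a virtual resource (VR) exposed "as a service"
# through a set of capabilities. The provider decides which capabilities
# (and thus which level of granularity) to expose; the operator then
# deploys control logic on top of them.

class Capability:
    """A unit of functionality that can be attached to a virtual resource."""
    name = "capability"

class ConfigCapability(Capability):
    """Lets the operator configure the VR."""
    name = "config"
    def __init__(self):
        self.params = {}
    def set(self, key, value):
        self.params[key] = value

class MonitorCapability(Capability):
    """Lets the operator monitor the VR."""
    name = "monitor"
    def read(self):
        # A real implementation would poll the underlying device/driver.
        return {"status": "up"}

class VirtualResource:
    """An operable VR: configurable and monitorable, the basis for NaaS."""
    def __init__(self, resource_id, capabilities):
        self.resource_id = resource_id
        self.capabilities = {c.name: c for c in capabilities}
    def capability(self, name):
        return self.capabilities[name]

# Usage: a provider carves a virtual router out of a physical node and
# exposes only configuration + monitoring to the operator.
vr = VirtualResource("vrouter-1", [ConfigCapability(), MonitorCapability()])
vr.capability("config").set("bgp.asn", 65001)
print(vr.capability("monitor").read())  # {'status': 'up'}
```

The point of the sketch is the separation of concerns: the infrastructure provider owns the abstraction and the capability set, while the operator only ever touches the exposed interfaces.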
Nowadays, network equipment vendors are incorporating virtualisation capabilities into their products, of course not limited to VLANs, but also Virtual Routing and Forwarding (VRF), multiple Virtual Routers/Switches inside the same box, etc. This embedding process follows a natural evolution, but most operators are still not ready to take advantage of it. In some cases, it has been seen as a major drawback, since network boxes can become even more difficult to manage than before (e.g. when creating virtual networks inside a single box). In other cases, the difficulty lies in the differing approaches to network virtualisation vendors are taking, which in many cases are not fully compatible with each other, thus breaking the homogeneous resource access that HAL provides to OSs.
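The HAL/device-driver analogy suggests how that incompatibility could be papered over: a thin per-vendor “driver” restores a uniform interface on top of each vendor’s native virtualisation API. The sketch below illustrates the pattern; the two vendor APIs and all names are invented for illustration only.

```python
# Hypothetical sketch of the HAL/device-driver analogy: two vendors expose
# incompatible virtualisation APIs, and a per-vendor "node driver" restores
# a uniform interface for the operator's management logic.

class VendorABox:
    def create_vrf(self, vrf_name):             # vendor A's native call
        return f"A:{vrf_name}"

class VendorBBox:
    def add_virtual_router(self, ident, slot):  # vendor B's incompatible call
        return f"B:{ident}@{slot}"

class NodeDriver:
    """Common interface, analogous to an OS device driver over HAL."""
    def create_virtual_router(self, name):
        raise NotImplementedError

class VendorADriver(NodeDriver):
    def __init__(self, box):
        self.box = box
    def create_virtual_router(self, name):
        return self.box.create_vrf(name)

class VendorBDriver(NodeDriver):
    def __init__(self, box):
        self.box = box
    def create_virtual_router(self, name):
        return self.box.add_virtual_router(name, slot=0)

# The operator's management logic stays vendor-agnostic:
for driver in (VendorADriver(VendorABox()), VendorBDriver(VendorBBox())):
    print(driver.create_virtual_router("customer-42"))
```

Without such a driver layer, every management tool has to know every vendor’s quirks, which is exactly the fragmentation the paragraph above describes.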
To sum up, the track ahead is open for network virtualisation, and its implications for network management are yet to be fully considered. A number of stakeholders take part in this process, ranging from vendors to application providers, passing through infrastructure providers and operators, in their wide scope (both IT and network). Models already exist in the computing world, at different scales (HAL and device drivers in PCs; cloud computing and service middlewares…), which are indeed a big step ahead of network virtualisation and its management.
Keep on reading us!
Notes and References:
- Check my previous blogpost ‘On keywords related to cloud, services and virtualisation’ for further information.
- A proof can be found in the number of standards for network management that have appeared in recent years: TMF’s NGOSS TNA and eTOM, TMF’s IPsphere, ITU-T’s Y.2001/2007/2011 for the NGN architecture, ETSI’s TISPAN, etc.
- The cross-layer implications of virtualisation are a well-known matter of study that I will leave out of the scope of this post.
- Can you imagine buying a new PCI card for your PC from vendor A, which requires a software/hardware adaptation of PCI bus messaging because your motherboard has a slightly different implementation of PCI, made by vendor B?