SDN Is Evolving Far from Where It Began
For years, the Cisco Visual Networking Index has been predicting phenomenal growth in almost every aspect of networking. Every year, a new technology or evolving practice seems to accelerate the predictions. By 2020, the VNI predicts a doubling in the average speed of connectivity and in total IP traffic. In the past two years, the emergence of 5G and the advent of 4K UHD video streaming have made the predicted scenario seem even more likely.
In response, network architecture has been undergoing a rapid co-evolution, driven by enterprises' cloud and OTT service consumption habits and the rise of cloud-native development frameworks.
The result has been a delamination of the traditional service provider or IT network, where services are stripped from the network and presented as overlays. The delaminated service and transport are now evolving independently.
The underlying transport is moving toward utility—optimized for cost-per-bit and delivery of ever higher-density packet switching. The control plane for this network is also streamlining, moving from Unified MPLS toward simplification through the deployment of IPv6 and Segment Routing. Header metadata, or new meanings conferred on bits in existing headers, may be used to add intelligence and trigger desired network behaviors.
Services are proliferating as multiple logical overlays with centralized control, using architectures optimized for micro-segmentation, chaining and slicing. The evolution of the service overlay has borrowed conceptually from Software Defined Networking (SDN) and, arguably in small part, from Network Function Virtualization (NFV), but adapts and refines these concepts for commercialization. In these implementations, the roles of controllers and protocols have receded into utility middleware, while segmentation of the data shipped over the network, security, identity and operational telemetry have risen to prominence. On the Internet, the delaminated model is no longer a discussion point, but a reality. The service compositions themselves have often blown past the current state of VM-centric NFV implementations to fully cloud-native instantiations.
SDN introduced the “reactive network” concept six years ago, elevating observation and reaction in the operation of the network. This was often called “analytics”, but the controller’s contribution to telemetry was a small portion of what was needed for an end-to-end network and service view, and that portion was often trapped in a “pull” operational mode when the industry had gone full speed into “push”.
As the idea of streaming telemetry from multiple sources using multiple methods of analysis (big data) for real-time insight advanced, network data analytics emerged. The SDN controller design proved to be an inadequate framework for data science, and its role diminished to middleware for network control.
Data-centric views of operation (service control) now dominate, and the earlier SDN controller-centric, configuration-only views are not progressing further. What has been steadily emerging is a “stack” of interconnected, independent and re-usable elements to address operational needs.
A new model of centralized service control, far beyond SDN’s original flow-rule and configuration roots, is our new reality, where network control, orchestration and network data analytics are distinct functional layers of an operational stack.
There is a lesson in this motion, from the hyperbolic claims of early SDN to the present “stack,” in which orchestration, network control and network data analytics are separate layers and no longer co-joined in a less-developed view. Because successive waves of innovation will refine any problem’s solution, we will often be working in a space comprised of multiple practices and technologies. Any solution should be approached as a framework: modular, polyglot in specific functionalities, and imagined as part of a larger “stack” in anticipation of further advancements or re-use.
In this light, to build an analytics application as a product that binds to a specific closed framework for culturing data (e.g. static ingest, storage and query engines) would seem to be folly and may cause a repeat of the multiple-silo NMS past, but on a far larger scale. Common infrastructure is an enabler of multiple use cases and specific development teams. The infrastructure here should be an open source framework within this layer of the “stack”.
Rejoining our “reactive network” or “virtuous cycle” by leveraging network data analytics alone will not make services more affordable or efficient, nor will it address all of the challenges that come with explosive growth. The concepts underlying the technology and economics of service development and deployment continue to show room for efficiency-driven and security-conscious refinements that will advance the “stack” further.
Abstractions are already being introduced beyond containers where the resource basis of a service shifts beyond chains of micro-services to even smaller and more ephemeral forms—individual functions as a service (creating a new model of development and deployment that directly ties cost-to-use down to the level of a program function). Deployment options for these functions are already expanding beyond regional and centralized data centers to include a potentially large network edge computing environment (e.g. IoT or mobile devices themselves or proximal collection/interaction sites, macro or small sites).
A proliferation of new devices in our cities, factories, infrastructure and homes will put even greater stresses on our concepts of resource management, workload placement and security. The high speed, transactional nature, and diversity of this new environment will make traditional security demarcations ever more difficult to establish and maintain, leading to a “zero trust” environment.
Aspects of identity, as they relate to security, network control and resource management capabilities, will emerge as new-but-related layers or elements of our “stack”, combined with analytics and wrapped in a policy control layer to enable operation in this new environment.
Identity can refer to both humans and devices, and is not limited to an OAuth certification of identity in the authorization domain or other forms of implicit or explicit assertion. Identity will become an extended context that might include device type, location (from access point to geolocation), corporate role, time of day and other data that can be used to create a more dynamic security posture when coupled to the service, function or data asset being accessed.
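One way to picture “identity as extended context” is as a data structure evaluated at access time. This is a minimal sketch under assumed names (`IdentityContext`, `risk_tier`); the attributes mirror the ones listed above, and the posture rules are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IdentityContext:
    """Identity as a bundle of attributes, not just a principal name."""
    principal: str            # human user or device identifier
    device_type: str          # e.g. "managed-laptop", "iot-sensor"
    location: str             # from access point to geolocation
    corporate_role: str
    accessed_at: datetime = field(default_factory=datetime.now)

def risk_tier(ctx: IdentityContext) -> str:
    """Derive a coarse, dynamic security posture from the full context."""
    if ctx.device_type.startswith("iot"):
        return "restricted"          # constrained devices get least privilege
    if ctx.accessed_at.hour < 6 or ctx.accessed_at.hour > 22:
        return "step-up-auth"        # off-hours access needs more assurance
    return "standard"

ctx = IdentityContext("alice", "managed-laptop", "HQ-AP-12", "engineer",
                      datetime(2024, 5, 1, 14, 0))
print(risk_tier(ctx))  # standard
```

Because the posture is computed per access rather than assigned once at login, the same principal can land in different tiers as device, location or time of day change.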
Identity management faces challenges from the growing number of devices per administrator and the number of credential-framework choices. Some devices can have multiple identities and may require an automated credential provisioning or “bootstrapping” process and authorization by manufacturers. Enterprise departments and IT administrators will be challenged to strictly control device access to enterprise applications and device behavior on the enterprise network.
Security in the next wave of growth and innovation will be based on this identity-centric context at execution time.
To link these four infrastructure elements (compute, network, storage and security) with context (identity), we need a policy engine. This engine will make dynamic, proactive, context-based resource allocation, resource usage, and data access control decisions from multivariate inputs (e.g. identity, device type, time of day, reputation, the specific service/function/asset requested, telemetry, and enterprise or user templated default policy).
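A decision function of this kind can be sketched in a few lines. The rules and field names below are invented for illustration; a production engine would evaluate declarative policy templates against many more inputs, but the shape (multivariate context in, access decision out) is the same.

```python
def evaluate_policy(request: dict) -> str:
    """Combine multivariate context into an access decision:
    'allow', 'deny', or 'step-up' (require stronger assurance)."""
    if request.get("reputation", 0.0) < 0.3:
        return "deny"                       # low-reputation sources blocked outright
    sensitive = request.get("asset_class") == "sensitive"
    off_hours = not (8 <= request.get("hour", 12) <= 18)
    if sensitive and (off_hours or request.get("device") == "unmanaged"):
        return "step-up"                    # sensitive asset + elevated risk
    return "allow"

print(evaluate_policy({"reputation": 0.9, "asset_class": "sensitive",
                       "hour": 23, "device": "managed"}))  # step-up
```

The important design property is that the decision is recomputed per request, so the same user and asset can yield different outcomes as telemetry and context shift.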
In addition, the present compute, network and storage schedulers in orchestration stacks (bare metal, virtual machine or container) will need to be revisited and extended, as the required optimizations are more complex. Workloads and data will now be placed over a larger surface (edge, WAN and DC resources), potentially with new governance policy roles and rules embedded alongside the resources.
There is a need to establish an identity engine or broker, and frameworks for a key service system capable of adapting to multiple variants of key-establishment protocols and of managing, storing, protecting and wrapping keys to support policy-driven encryption for data access, content access and mobility.
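The “adapting multiple variants” requirement suggests a pluggable framework rather than a single protocol. The sketch below is illustrative only: the class and method names are invented, and the wrap operation is a toy HMAC-based stand-in, not a real key-wrapping algorithm (a production system would use AES Key Wrap or similar, with a hardware-backed store).

```python
import hashlib
import hmac
import os

class KeyService:
    """Toy key service with pluggable key-establishment variants."""

    def __init__(self):
        self._protocols = {}   # name -> key-establishment callable
        self._store = {}       # key_id -> raw key material

    def register(self, name, establish_fn):
        """Plug in one variant of a key-establishment protocol."""
        self._protocols[name] = establish_fn

    def establish(self, name, key_id):
        """Run the named protocol and store the resulting key."""
        self._store[key_id] = self._protocols[name]()
        return key_id

    def wrap(self, key_id, kek: bytes) -> bytes:
        """Toy wrapping: bind a stored key to a key-encryption key (KEK)."""
        return hmac.new(kek, self._store[key_id], hashlib.sha256).digest()

svc = KeyService()
svc.register("static-random", lambda: os.urandom(32))
svc.establish("static-random", "tenant-a")
print(len(svc.wrap("tenant-a", b"kek-material")))  # 32
```

The registry pattern is what matters here: new establishment protocols (or new wrapping schemes) slot in without changing the callers that request or wrap keys.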
Analytics can be used to provide this new policy tier of the stack with system insights, data access patterns and policy violation feedback for ongoing learning and refinement—a loop similar to the “virtuous cycle” loop associated with SDN (Figure 4).
Like Network Data Analytics and SDN before it, the architecture of this developing dynamic policy tier of the stack should be standards based with an open architecture to support innovation and a range of use cases.
Though analytics has emerged as a proper layer of its own, with open source frameworks to build on, we have no time for a victory lap. The stack is advancing again. This continual advancement of the stack will be on my mind, along with the many pieces we need to develop to operate at the next level of operationalization, prepare for 5G and IoT growth, and deploy and incorporate the next waves of innovation.