Programmability in current networks. Work on adaptive, programmable networks, capable of adapting to different operational environments and of supporting ever-changing application requirements, has been carried out for more than 20 years. Initial work on programmability can be roughly divided between the open signaling (OPENSIG) and active networks approaches. OPENSIG argued for a set of open, programmable network interfaces that gave external programs access to the internal state and control of network devices. Active networks advocated the dynamic deployment of network services at runtime, via special packets containing executable code and similar mechanisms.

The OPENSIG efforts crystallized into standards such as i) the IETF General Switch Management Protocol (GSMP), which enabled the partitioning of a label switch (ATM, Ethernet) into multiple “virtual switches” that external controllers could then manage via GSMP; ii) the IETF ForCES initiative, which defines a framework and associated protocols to standardize the information exchange between the control and forwarding planes of an IP network element; and iii) the IEEE P1520 standard project, which aimed at establishing a reference model for network APIs.

Although these technologies saw some deployments, they never gained the traction that the current Software Defined Networking (SDN) trend enjoys in the networking industry and research communities. SDN can be seen as a reincarnation of OPENSIG: it initially used the OpenFlow protocol as a means to control the forwarding of TCP, UDP, IP and Ethernet packets through network nodes from an external controller (in a similar way to GSMP).

A number of limitations have already been identified in the first wave of SDN technologies. The first is the limited flexibility of OpenFlow as the protocol between the controller and the device being controlled: since SDN does not specify a network architecture, the controller-device protocol should be able to define rules on arbitrary fields in order to be evolvable. Flexible interfaces for defining packet-parsing and header-matching mechanisms, such as P4, are being researched to address this issue. Another problem is the scalability and resiliency of centralized controllers; approaches to distributed SDN control are being investigated to improve resiliency and allow for per-domain controllers. Recursive SDN controllers are also foreseen, allowing network resources to be partitioned and controlled at more granular scopes.

What about RINA networks? RINA takes a different approach to network programmability, leveraging the fact that it is a working hypothesis for a general theory of computer networking. The functions performed by each layer can be divided into three categories of growing timescale and complexity: data transfer (forwarding/sending/receiving PDUs), data transfer control (flow and retransmission control) and layer management (enrollment of new IPCPs, routing, namespace management, flow allocation, resource allocation, IPCP authentication, application access control, security).

RINA uses the principle of separation of mechanism and policy to support programmability. Mechanism is the invariant, fixed part of DIF functions, while policy is the variable, programmable behavior of those functions, which can be adapted to the particular scenario where the DIF is operating. For example, all layer management functions use the same protocol (CDAP, the Common Distributed Application Protocol) to exchange information with their peers, but the objects exchanged over the protocol can vary. As another example, all DIFs have the same mechanism for packet forwarding, but the packet scheduling policy can be programmed.
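The mechanism/policy split can be illustrated with a minimal sketch: an invariant packet-scheduling mechanism whose behavior is selected by a pluggable policy object. The class and policy names below are illustrative assumptions, not part of any RINA specification.

```python
from abc import ABC, abstractmethod
from collections import deque

class SchedulingPolicy(ABC):
    """Policy: the variable, programmable part of the scheduler."""
    @abstractmethod
    def next_queue(self, queues):
        """Pick the queue to serve next, or None if all are empty."""

class FifoAcrossQueues(SchedulingPolicy):
    """Serve the first non-empty queue, in queue-index order."""
    def next_queue(self, queues):
        return next((q for q in queues if q), None)

class StrictPriority(SchedulingPolicy):
    """Queues ordered highest-priority last; serve the highest non-empty one."""
    def next_queue(self, queues):
        return next((q for q in reversed(queues) if q), None)

class PacketScheduler:
    """Mechanism: the invariant multiplexing machinery of the DIF."""
    def __init__(self, policy, n_queues=2):
        self.queues = [deque() for _ in range(n_queues)]
        self.policy = policy

    def set_policy(self, policy):
        # Hot-replacement: queued PDUs are untouched, only behavior changes.
        self.policy = policy

    def enqueue(self, pdu, queue_index):
        self.queues[queue_index].append(pdu)

    def dequeue(self):
        queue = self.policy.next_queue(self.queues)
        return queue.popleft() if queue else None
```

A scheduler built as `PacketScheduler(FifoAcrossQueues())` can later be switched to strict priority with `set_policy(StrictPriority())` at runtime, without draining its queues.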

Implementing new policies is the way to cope with new or unexpected requirements, rather than designing and implementing new full-fledged protocols. Hot-replacement of policies is possible and does not cause IPC service disruption (e.g. deallocation of application flows). Since in RINA all layers reuse the same extensible data transfer protocol (EFCP, the Error and Flow Control Protocol), there is no need to define mechanisms for generic header matching as in P4. Moreover, due to the distributed nature of layer management within each DIF, RINA has the potential to mitigate the resiliency and scalability issues of centralized approaches, since responsibilities can be split across all the IPCPs in a DIF. Centralized solutions are still possible, since RINA nodes can export the local IPCP RIBs to a remote entity, e.g. a central Manager. Layer management could therefore be implemented by the Manager remotely accessing the RIBs through the CDAP protocol.
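As a rough illustration of the centralized option, the following sketch models a Manager reading and writing objects in an IPCP's RIB through CDAP-style M_READ/M_WRITE operations. The message shape and the RIB object names are simplified assumptions for illustration, not the actual CDAP encoding or a real RIB schema.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CdapMessage:
    # Opcode names follow CDAP conventions (M_READ, M_READ_R, ...);
    # the message layout here is a deliberately simplified model.
    opcode: str
    obj_name: str
    obj_value: Any = None
    result: int = 0          # 0 = success, non-zero = error

class IPCProcess:
    """An IPCP exporting (part of) its local RIB to a remote Manager."""
    def __init__(self, name):
        self.name = name
        # Hypothetical RIB objects, keyed by object name.
        self.rib = {
            "/difmanagement/routing/fwdtable": {"neighbour-A": "port-1"},
            "/resourceallocation/queues/q0": {"length": 0},
        }

    def handle(self, request: CdapMessage) -> CdapMessage:
        if request.opcode == "M_READ":
            value = self.rib.get(request.obj_name)
            return CdapMessage("M_READ_R", request.obj_name, value,
                               0 if value is not None else -1)
        if request.opcode == "M_WRITE":
            self.rib[request.obj_name] = request.obj_value
            return CdapMessage("M_WRITE_R", request.obj_name, result=0)
        return CdapMessage(request.opcode + "_R", request.obj_name, result=-1)

class Manager:
    """A central entity performing layer management over remote RIBs."""
    def read(self, ipcp: IPCProcess, obj_name: str) -> Optional[Any]:
        reply = ipcp.handle(CdapMessage("M_READ", obj_name))
        return reply.obj_value if reply.result == 0 else None

    def write(self, ipcp: IPCProcess, obj_name: str, value: Any) -> bool:
        reply = ipcp.handle(CdapMessage("M_WRITE", obj_name, value))
        return reply.result == 0
```

In a real deployment the Manager and IPCPs would exchange these messages over a flow rather than a local method call; the point is that the same object-based read/write interface covers both distributed and centralized layer management.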

In contrast with SDN, programmability of the layer functions is not limited to packet forwarding; policies can be implemented for transmission control, flow control, resource scheduling, multiplexing, routing, authentication, encryption, etc. The recursive nature of RINA thus allows the DIF to be used as a basic building block at different scopes and with different policies. Since the same DIF API and mechanisms are reused, there is no need to introduce new protocols, and new layers can be added dynamically without requiring additional hardware/software support.
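The recursive use of the DIF building block can be sketched as follows; the DIF names, scopes and policy labels are hypothetical, chosen only to show that each instantiation of the same abstraction carries its own policy set.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DIF:
    """One layer instance: the same abstraction at every scope,
    configured with a different policy set each time."""
    name: str
    scope: str
    policies: Dict[str, str]
    lower_difs: List["DIF"] = field(default_factory=list)  # DIFs it runs over

    def depth(self) -> int:
        """Number of DIF levels in the stack below and including this one."""
        return 1 + max((d.depth() for d in self.lower_difs), default=0)

# A metropolitan DIF over two point-to-point DIFs, and a tenant DIF
# stacked on top -- added dynamically, with no new protocols involved.
p2p_a = DIF("p2p-A", "link", {"scheduling": "fifo", "retransmission": "off"})
p2p_b = DIF("p2p-B", "link", {"scheduling": "fifo", "retransmission": "off"})
metro = DIF("metro", "metropolitan",
            {"scheduling": "priority", "routing": "link-state"},
            [p2p_a, p2p_b])
tenant = DIF("tenant-1", "application",
             {"scheduling": "weighted-fair", "auth": "password"},
             [metro])
```

Each DIF in the stack reuses the same API and mechanisms; only the policy dictionaries differ between scopes.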

Do you want to learn more?