by Sumit Chawla, Rishee Basdeo, Michael D'Aniello, Tristen Payne, James Seibel, Sargam Sharma, Georgianna Shoga, Weng Bingliang, Chris Willis, John Day, Lou Chitkushev (Boston University)

Abstract. IoT holds great promise to improve the quality of life and enable new capabilities. However, this bright future is endangered by at least two factors: security and the complexity created by the proliferation of IoT protocols. Proliferation leads to unnecessary complexity, inefficiency, increased cost, and lost effectiveness. The usual argument is that IoT requirements are unique and therefore justify a unique protocol for each unique circumstance. But the power of IoT lies in the potential to integrate what these devices do and the data they collect in ways we have only begun to explore. We undertook to test this rationale of uniqueness and found it wanting. In fact, IoT protocols represent a rather elegant degenerate case of the general model of protocols we have uncovered over the past decade. That would seem to leave, as the only rationale for different protocols, their role as barriers to competition.

I. Introduction

Over the years we have seen technology transform exponentially, from computers that took up whole rooms, to workstations, to desktop computers, to holding a computer in the palm of our hands. This trend is continuing, as there is now the potential for a computer in practically any physical object. Consequently, one of the current topics in networking that is grabbing considerable attention is the Internet of Things (IoT). The scope of IoT is sometimes hard to nail down. One might start to believe that there is a reason for every object in the world to be “on the ’Net.”

The applications seem endless, from industry to healthcare to transportation to shipping to the home and beyond: thermostats, home security, RFIDs on packages, industrial control, health monitors, traffic lights, drones, and more. Along with the proliferation of IoT devices has come an almost equal proliferation of protocols to control or retrieve information from these things. Because IoT applications have limited functionality and operate under very tight constraints on cost, bandwidth, power, computing, and memory, there is a strong argument to include only the functionality that is absolutely necessary. Consequently, IoT protocols try to be as minimal as possible, cutting every possible corner. To understand them all, and why each should be used in a particular situation, is a daunting task.

So the question arises, are all of these protocols necessary? Could IoT be simpler? Obviously, different environments and different application domains present opportunities for simpler protocols, or the converse, present special problems to overcome.

On the surface, it would seem we have set ourselves an impossible task. IoT is primarily concerned with protocols that operate directly over the physical media; consequently, their design is strongly affected by the medium. Furthermore, IoT devices and applications are themselves highly constrained. All of this would seem to leave the least room for finding commonality. Different protocols also tend to lock the customer into one vendor’s (or set of vendors’) IoT product line while creating barriers to entry for competitors. Given the number of IoT “standards,” the practical effect is less to make devices interchangeable (the purported purpose of standards) than to create alliances of companies in competition with each other.

Considering all of this, at first glance there would not appear to be much chance of finding commonality without sacrificing the advantages of a hand-crafted protocol. Yet commonality has many benefits:

  • The customer can more easily mix and match devices from different vendors in their network.
  • Commonality should create economies of scale and reduce product costs faster.
  • The key to effective network management has always been commonality, commonality, commonality. Reducing the “parts count” is important to reducing the cost of operations. Furthermore, protocols (and therefore the products they are in) built to work together complement each other, are more efficient, make management simpler and more effective, and make changes more predictable, with fewer surprises.

Clearly, if we look for equivalence, where the protocols do precisely the same thing, the answer would be: not much. But looking for common functions that yield the same effect, the same service, though not necessarily in precisely the same way, might yield better results. While we are looking for commonality, there is another area of concern. With the very large number of IoT devices, both in units and in types of units, it would be advantageous if devices were as self-configuring or dynamically configurable as possible. Experience with both the Protocol-Id field in IPv4 and the Ethertype field in Ethernet indicates that administrative Registration Authorities should be avoided wherever possible. Over time, these registration authorities become an “archeology” of legacy products and protocols, where ranges of assigned numbers become strata and, worse, can never be reclaimed.

The rest of this paper is organized as follows. Section II introduces the model we use in our search for commonality. For brevity, Table 1 replaces Appendices A and B of the Technical Report version [1], which briefly introduce the 10 IoT technologies surveyed for this work. Section III puts the pieces together, and Section IV presents our conclusions.

II. Our Model of Protocols

A. The model itself

We will compare the existing IoT protocols to a canonical model of protocols developed in [2]. To analyze these protocols, two tools can be brought to bear: separating mechanism and policy, and distinguishing abstract and concrete syntax. Separating mechanism and policy is often used in operating systems, but applies very well to protocols. Distinguishing abstract and concrete syntax makes the protocol invariant with respect to syntax: the functions performed on the information are the same; only the lengths of the fields differ. The protocol can be specified with the abstract syntax and use different concrete syntaxes for different environments.
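This syntax invariance can be sketched in a few lines of Python. The field layouts below are illustrative, not taken from any specification: the protocol machine manipulates the abstract fields, while each environment supplies its own concrete encoding.

```python
import struct

# One abstract syntax (named fields), several concrete syntaxes.
# Each concrete syntax gives the wire layout and the seq-field width in bits.
# (Layouts are illustrative assumptions, not from any surveyed protocol.)
CONCRETE = {
    "iot": (struct.Struct(">BB"), 8),    # 1-byte address, 1-byte sequence number
    "wan": (struct.Struct(">IQ"), 64),   # 4-byte address, 8-byte sequence number
}

def encode(syntax: str, dst_addr: int, seq: int) -> bytes:
    fmt, seq_bits = CONCRETE[syntax]
    # Sequence arithmetic is done modulo the width of the field.
    return fmt.pack(dst_addr, seq % (1 << seq_bits))

def decode(syntax: str, pdu: bytes) -> tuple:
    fmt, _ = CONCRETE[syntax]
    return fmt.unpack(pdu)

# Same abstract PCI, two encodings; the procedures do not change.
assert decode("iot", encode("iot", 5, 260)) == (5, 4)    # 260 mod 256
assert decode("wan", encode("wan", 5, 260)) == (5, 260)
```

The point of the sketch is that `encode`/`decode` are the only syntax-dependent pieces; everything operating on the abstract fields is common.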

The former, when applied to data transfer protocols, yields a powerful common structure (see Figure 1). When analyzing data transfer protocols, there are several aspects that must be kept in mind. First, all protocols go through three phases: Enrollment, Allocation, and Data Transfer:
  • Enrollment creates sufficient state within the network to allow an instance of communication to be created, or how a Layer Process (LP) joins a layer.
  • Allocation creates the shared state to allocate and coordinate the resources to support the data transfer functions of an instance of communication.
  • Data Transfer provides the functions to support the transfer of data as requested.
Figure 1. Canonical model of protocols
Note that in Figure 1 the functions naturally cleave into three groups of increasing duty cycle and increasing computational complexity. Each group is decoupled from the others through a state vector. This is precisely what one wants to see in a strong system design.
Data Transfer has the fastest duty cycle and the simplest computation. It consists of SDU Protection, Delimiting, Ordering, and Lost and Duplicate Detection. It is decoupled through a shared state vector from Data Transfer Control, which consists of Retransmission (Ack) and Flow Control, the feedback mechanisms. Note that these two are largely independent: Data Transfer writes the state vector, while Data Transfer Control reads it, sends ack and flow-control information, and does retransmissions if needed. The only interaction with Data Transfer is to halt the sending of data when necessary; this requires a degree of synchronization.
Both are decoupled through more complex state from Layer Management, which is responsible for enrollment, resource allocation (including routing, if present), etc. The first two groups constitute a traditional Error and Flow Control Protocol, such as HDLC or TCP. Note that the Ethernet MAC protocol and UDP also fit this model, but with a null Data Transfer Control function. Layer Management, in contrast, operates on more complex objects and uses a Common Distributed Application Protocol to perform remote operations on objects to maintain a partially replicated database, e.g. routing updates.
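The decoupling of Data Transfer and Data Transfer Control through the state vector can be illustrated with a minimal Python sketch. All class and field names are illustrative assumptions, not part of the model's specification: the fast path writes the state vector, the feedback path only reads it and feeds back a single "halt" signal when the flow-control window closes.

```python
from dataclasses import dataclass

@dataclass
class StateVector:
    next_seq: int = 0      # written by Data Transfer (sender side)
    acked_up_to: int = -1  # updated as acks arrive
    credit: int = 4        # flow-control window granted by the peer

class DataTransfer:
    """Fast path: number and send PDUs; writes the state vector."""
    def __init__(self, sv: StateVector):
        self.sv = sv
        self.halted = False   # the one interaction with the control path
    def send(self, sdu: bytes):
        if self.halted:
            raise RuntimeError("flow control closed")
        pdu = (self.sv.next_seq, sdu)
        self.sv.next_seq += 1
        return pdu

class DataTransferControl:
    """Slower feedback path: reads the state vector, never touches the data."""
    def __init__(self, sv: StateVector, dt: DataTransfer):
        self.sv, self.dt = sv, dt
    def on_ack(self, seq: int, credit: int):
        self.sv.acked_up_to = max(self.sv.acked_up_to, seq)
        self.sv.credit = credit
        self.update()
    def update(self):
        in_flight = self.sv.next_seq - (self.sv.acked_up_to + 1)
        self.dt.halted = in_flight >= self.sv.credit
```

Note that the two classes share no methods, only the state vector: exactly the loose coupling the figure describes.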
Figure 2. Names in a data transfer layer
Reliability of data transfer can be ensured by a combination of error detection functions, e.g. CRCs or checksums and retransmission control. Generally, there are four identifiers required for a data transfer layer, as shown in Figure 2.
  • An identifier, often called an address, that identifies the LP. Addresses are unambiguous only within the scope of the layer, and are required only when the layer has more than two LPs.
  • If the LP is to support more than one flow at the same time, a connection-id is necessary to distinguish flows between the same two addresses; it need only be unambiguous between the two LPs named by the addresses. It is generally created by concatenating the connection-endpoint-ids (CEP-ids) of the source and destination. These CEP-ids are unambiguous within the LP, i.e. local to the LP.
  • Finally, a port-id is required to identify each instance of communication between the LP and the user of the layer. Port-ids are the only identifiers shared between the LP and a user of the LP and are unambiguous with respect to the (N)-LP.
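The relation among these identifiers can be made concrete with a small sketch. A minimal Python illustration, with field names assumed for this example only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_addr: int  # identifies the source LP; unambiguous within the layer
    dst_addr: int  # identifies the destination LP
    src_cep: int   # connection-endpoint-id, local to the source LP
    dst_cep: int   # connection-endpoint-id, local to the destination LP

    @property
    def connection_id(self):
        # The connection-id is the concatenation of the two CEP-ids; it only
        # needs to be unambiguous between the two LPs named by the addresses.
        return (self.src_cep, self.dst_cep)

# Port-ids are shared only between the LP and its user, never carried in
# the protocol: here, a local table mapping port-id -> flow state.
ports = {}
ports[7] = Flow(src_addr=1, dst_addr=2, src_cep=42, dst_cep=99)
```

The local `ports` table is the only place the port-id appears; nothing in the PDU carries it, consistent with the point below that identifiers exposed to the user should not appear in the protocol.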
Watson [3] concludes that port allocation and synchronization should be distinct. Subsequent work [4] shows that this yields better security: no identifier exposed to the user of a flow should be carried in the protocol. In addition, some protocols have a “protocol-id” field to identify the syntax of the user data, i.e. an encapsulated higher-level protocol. Such protocols generally lack an allocation phase and CEP-ids and are ill-formed; this kind of identifier is not required in a properly designed architecture.


B. The common IoT device

In the analysis of IoT protocols (see Table 1 or [1]), it was quickly recognized that many IoT devices (though probably not all) follow a common model. The survey included 10 IoT protocols: WirelessHART [5], ISA 100.11a [6], Zigbee [7], Z-Wave [8], DASH7 [9], Thread [10], Bluetooth LE [11], NFC [12], LoRaWAN [13], and LTE-A [14]. Most of these protocols support a single application, e.g. a thermostat or light controls, with commands that involve a small amount of data. The data transfer protocol is often a stop-and-wait protocol, either explicitly, e.g. 802.11, or implicitly created by a request/response command form. Most of these protocols use pre-assigned MAC addresses, although some do have an explicit enrollment procedure to assign an address or join a layer, e.g. 802.11. Since these protocols are so simple and generally point-to-point, layer management is minimal or non-existent.

The single application implies that multiple flows between the same two addresses do not occur, so explicit connection-ids (port-ids and CEP-ids) are not required, and enrollment (when present) is synonymous with allocation. The stop-and-wait nature of Data Transfer, along with being point-to-point, is sufficient to ensure reliable data transfer. Generally, one SDU is mapped to one PDU, and a checksum or CRC is present. All of the IoT protocols surveyed fit this configuration. Figure 3 depicts a network of sensors/actuators connected to an IoT management platform that follows the common IoT device model.
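The common IoT device model just described is small enough to sketch end to end. The following Python toy (device, command names, and framing are all illustrative assumptions, not from any surveyed specification) shows a single-application device with an implicitly stop-and-wait request/response exchange, one SDU per PDU, and a CRC on each PDU:

```python
import zlib

def make_pdu(payload: bytes) -> bytes:
    """One SDU -> one PDU: payload followed by a 4-byte CRC32."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_pdu(pdu: bytes):
    """Return the payload, or None if the CRC fails (drop; requester retries)."""
    payload, crc = pdu[:-4], pdu[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        return None
    return payload

class Thermostat:
    """Single-application device: a READ command returns the temperature."""
    temp_c = 21
    def handle(self, request: bytes) -> bytes:
        payload = check_pdu(request)
        if payload == b"READ":
            return make_pdu(str(self.temp_c).encode())
        return make_pdu(b"ERR")

# The request/response form is implicitly stop-and-wait: the requester sends
# one PDU and waits for the response (or a timeout) before sending another.
reply = Thermostat().handle(make_pdu(b"READ"))
assert check_pdu(reply) == b"21"
```

Nothing here requires connection-ids or explicit acknowledgements, which is exactly why the surveyed protocols can omit them.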

In the interest of improved security, and consistent with the position above that registration authorities should be avoided, addresses should, if possible, be assigned when a device joins the network, and the address field in the protocol should be large enough to accommodate the largest layer using it, and no larger.
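A minimal sketch of this policy, assuming Python and an illustrative `Layer` abstraction: addresses are handed out at join time, and the address field is sized to the largest expected layer rather than to a global registry.

```python
import math

class Layer:
    """Assigns addresses at join time; no external registration authority."""
    def __init__(self, max_members: int):
        # Address field just large enough for the largest expected layer.
        self.addr_bits = max(1, math.ceil(math.log2(max_members)))
        self.next_addr = 0

    def join(self) -> int:
        addr = self.next_addr
        assert addr < 2 ** self.addr_bits, "layer full"
        self.next_addr += 1
        return addr

home = Layer(max_members=200)  # a home network: 8-bit addresses suffice
assert home.addr_bits == 8
```

A real enrollment procedure would of course authenticate the joiner and reclaim addresses on departure; the sketch only shows the sizing principle.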

III. Putting it all together

To understand what all of this tells us, we need to return to the two concepts introduced earlier that allow us to take our pursuit of commonality further. First is the concept from operating systems of separating mechanism and policy. For example, the machinery for doing acknowledgements is common, but when to acknowledge is policy; a mechanism to detect corrupt data is common, but the particular error polynomial used is policy; and so on. Protocols have a small set of mechanisms; the variety comes from differences in policy. Second, the concept of distinguishing abstract and concrete syntax, found in ASN.1, makes the protocol invariant with respect to syntax: the protocol can be specified in one syntax and encoded in many others without modifying its procedures. This is especially useful for data transfer protocols, where the PCI or header information is used either as an index or hash into a table or array, or for simple calculations modulo the width of the field.
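The acknowledgement example can be made concrete with a short Python sketch (class and policy names are illustrative assumptions): the mechanism is fixed, and when to acknowledge is a pluggable policy.

```python
from typing import Callable, Optional

class AckMechanism:
    """Mechanism: track received sequence numbers, emit cumulative acks."""
    def __init__(self, ack_policy: Callable[[int, int], bool]):
        self.ack_policy = ack_policy  # policy decides *when* to ack
        self.highest_delivered = -1
        self.unacked = 0              # PDUs received since the last ack

    def on_pdu(self, seq: int) -> Optional[int]:
        self.highest_delivered = max(self.highest_delivered, seq)
        self.unacked += 1
        if self.ack_policy(self.unacked, self.highest_delivered):
            self.unacked = 0
            return self.highest_delivered  # cumulative ack
        return None

# Two policies, one mechanism:
ack_every_pdu   = lambda unacked, hi: True          # chatty, lowest latency
ack_every_other = lambda unacked, hi: unacked >= 2  # half the acks
```

Swapping `ack_every_pdu` for `ack_every_other` changes the protocol's behavior without touching a line of the mechanism, which is the whole point of the separation.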
These two concepts combine to argue that there can be a single data transfer protocol with different policies associated with a small number of mechanisms, each either present or not, using a small number of concrete syntaxes. To achieve this, we have created a protocol based on Watson’s delta-t [3] that utilizes both concepts.
Figure 3. A network of sensors/actuators connected to an IoT Manager, following the common IoT device model
It is also the case that there can be a single Application Protocol, because: 1) there are only six operations that can be performed remotely: create/delete, read/write, and start/stop; and 2) those operations are performed on objects outside the protocol. Hence there is nothing specific to the application that is part of the protocol, and therefore no requirement for another protocol. Again, the concept of abstract and concrete syntax can be used to yield different encodings for different environments.
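A minimal sketch of such a six-operation application protocol, assuming Python and an illustrative object model (the object names are invented for this example):

```python
class ObjectModel:
    """The device's objects; the protocol never knows what they mean."""
    def __init__(self):
        self.objects = {}

    def invoke(self, op: str, name: str, value=None):
        # The protocol's entire operation set: create/delete, read/write,
        # start/stop. Everything application-specific is in the objects.
        if op == "create":
            self.objects[name] = value
        elif op == "delete":
            del self.objects[name]
        elif op == "read":
            return self.objects[name]
        elif op == "write":
            self.objects[name] = value
        elif op in ("start", "stop"):
            self.objects[name + ".running"] = (op == "start")
        else:
            raise ValueError(f"not one of the six operations: {op}")

thermostat = ObjectModel()
thermostat.invoke("create", "setpoint", 20)
thermostat.invoke("write", "setpoint", 22)
assert thermostat.invoke("read", "setpoint") == 22
```

Supporting a new device means adding objects, not operations: the protocol itself never changes.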
It should be noted that it is far easier to make changes to an object model for an application than to make changes to an application protocol. With this characterization, the two protocols can be used with different policies and different concrete syntaxes to perform the same functions with the same efficiency and effectiveness.
There are some IoT applications where the nature of the environment, the application, or the technology makes enrollment either impractical or unnecessary, e.g. use of RFIDs in a warehouse. While MAC addresses or E-164 identifiers could be used in these situations, a product serial number is probably preferable.
The Common Protocol alluded to here can represent all of the IoT protocols that fit this model. Protocols with more elaborate flow or retransmission control are easily accommodated. Hence the IoT environment is a degenerate case of the canonical data transfer protocol model (called EFCP, the Error and Flow Control Protocol). No special cases occur: when aspects are not needed, their policies are null and the protocol is not affected. The only real reason for different protocols is as a barrier to competition.
Earlier, the paper mentioned that the other major problem confronting IoT is security. We would be remiss if we did not mention that the IPC Model brings a much simpler and more robust security environment to IoT, because the layer becomes a securable container [15]. How secure it is depends on the policies used. With explicit enrollment, all members of a layer are authenticated, and if the lower layers are not trusted, then all data passed to the layer below can be encrypted. By securing the layer as a whole, rather than each protocol in the layer, security becomes simpler, less costly, and much more robust.
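The two security policies just named, authenticate at enrollment and encrypt what is handed to an untrusted lower layer, can be sketched as follows. This is a toy illustration only: the key distribution, the XOR "cipher," and all names are assumptions, and a real deployment would use a vetted AEAD cipher and a proper key exchange.

```python
import hashlib
import hmac
import secrets

LAYER_KEY = secrets.token_bytes(16)  # provisioned out of band (assumption)

def enroll(device_id: bytes, proof: bytes) -> bool:
    """Admit a member only if it proves knowledge of the layer key."""
    expected = hmac.new(LAYER_KEY, device_id, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

def protect(pdu: bytes, trusted_lower_layer: bool) -> bytes:
    """Policy: encrypt everything passed below iff the lower layer is untrusted.
    Toy XOR stream cipher; assumes pdu <= 32 bytes. NOT real cryptography."""
    if trusted_lower_layer:
        return pdu
    keystream = hashlib.sha256(LAYER_KEY).digest()
    return bytes(b ^ k for b, k in zip(pdu, keystream))

proof = hmac.new(LAYER_KEY, b"sensor-17", hashlib.sha256).digest()
assert enroll(b"sensor-17", proof)
assert not enroll(b"sensor-17", b"\x00" * 32)
```

Note that both functions are properties of the layer, not of any one protocol in it, which is what makes the layer a securable container.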

IV. Conclusions

This analysis has found that there is not much difference across IoT in the data transfer protocols or the application protocols. Then what is special about IoT?

  • there is a wide variety of object models that the application protocol manipulates, though the models on each device are relatively simple,
  • there are a modest number of physical media technologies, where the choice of data transfer policies is mostly determined by the environment, and
  • there are potentially a very large number of IoT devices to be put on a network.

It would not be an exaggeration to repeat that last one: there is a potentially very, very large number of IoT devices that could be attached to any one network. In fact, scaling has been the problem most talked about. At a meeting in London in 2015, it was said that in 5 to 10 years there could be more devices connected to the Internet in the London area than on the entire Internet today. One can presume the same will be true of every major metropolitan area in the world. The general consensus of the meeting was that it was hard to see how to handle a network at that scale with current technology.

IoT devices tend to be quite simple: sensors and/or effectors performing relatively simple tasks. The power and promise of IoT lie not in the individual devices but in what they can do in concert. We have a glimmer of that potential now, but we have not even begun to imagine what can be accomplished with combinations of IoT devices. People will find creative and innovative new capabilities from combinations that have not yet occurred to anyone, and that would probably sound outrageous if they had. This is where we want to spend our energies, not on working around incompatibilities in protocols. As we have seen, limitations and incompatibilities like these can inhibit that innovation.

Given that, if IoT is to be successful, that one complicating factor, scale, is sufficient; the others should be minimized to the greatest degree possible. A common data transfer protocol and a common application protocol can go a long way toward making that the case. In fact, separating mechanism and policy, as in RINA, provides commonality without sacrificing flexibility. Because IoT networks are generally edge networks of limited scope (but high density of devices), the data transfer characteristics can be tailored to the owner’s requirements, and the recursive nature of the layer structure solves the scaling issues.

We must move to greater commonality, and RINA is really the only way to do that. Furthermore, it not only simplifies IoT networking by bringing simplicity and commonality; it does the same for the rest of the network, whether WAN, Cloud, or Data Center. And that commonality can be leveraged to provide even greater savings in both capital expenditures and operating costs.


[1] S. Chawla, R. Basdeo, M. D’Aniello, T. Payne, J. Siebel, S. Sharma, G. Shoga, W. Bingliang, C. Willis, and J. Day, “IoT or coping with the tribble syndrome,” BU Technical Report, June 2017.

[2] J. Day, Patterns in network architecture: A return to fundamentals. Pearson Education, 2007.

[3] R. Watson, “Mechanisms for a reliable timer-based protocol,” Computer Networks, vol. 2, pp. 271–290, 1978.

[4] G. Boddapati, J. Day, I. Matta, and L. Chitkushev, “Assessing the security of a clean-slate internet architecture,” Proceedings of the Network Protocols (ICNP), 2012 20th IEEE International Conference on, 2012.

[5] FieldComm Group, “HART communication protocol specification,” HCF SPEC-13 version 7.5, May 2013.

[6] International Society of Automation, “ANSI/ISA-100.11a-2011 wireless systems for industrial automation: process control and related applications,” ANSI/ISA standard, May 2011.

[7] Zigbee Alliance, “Zigbee specification,” Document 053474r20, September 2012.

[8] Silicon Labs, “Z-Wave specifications,” Z-Wave Specifications release package, September 2018.

[9] DASH7 Alliance, “DASH7 alliance protocol specification v1.1 – wireless sensor and actuator network protocol,” January 2017.

[10] Thread Group, “Thread 1.1 specification,” Thread 1.1, 2017.

[11] Bluetooth Special Interest Group (SIG), “Bluetooth mesh networking specifications,” Bluetooth specifications, July 2017.

[12] NFC Forum, “NFC logical link control protocol technical specification,” NFC Forum Technical Specifications.

[13] LoRa Alliance Technical Committee, “LoRaWAN specification v1.1,” 2017.

[14] 3GPP, “LTE Advanced Pro specifications,” 3GPP Release 13, June 2016.

[15] T. Ramezanifarkhani and P. Teymoori, “Securing the Internet of Things with Recursive InterNetwork Architecture (RINA),” International Conference on Computing, Networking and Communications, May 2018.