The need for network management has always been recognized. At the same time, it was seen both as overhead to selling equipment and as a facility to smooth over the equipment's shortcomings. Most datacomm networks in the 1970s and before were fairly small, often using equipment from a single vendor. Network management stations, then called network control, were sold as a loss leader: sell the razor cheap (or free), and they buy more blades. As the 70s progressed, networks were getting not only larger but more and more diverse, and multi-vendor networks were becoming common. A broader view of network management was needed.

All of this was coming together as the 70s neared their end and the US computing industry initiated the OSI effort in 1978. Since the industry had always sold network control, it recognized the importance of management from the beginning and adopted a new, more sophisticated-sounding term: network management, not always realizing that in these new packet networks it was qualitatively different. In OSI, it was given its own working group. At the same time, there were strong forces working against network management, causing the work to be slow getting off the ground, partly because it was a rather amorphous topic.

BBN had done such a good job providing network management for the ARPANET that many didn't realize how necessary it was. Consequently, the first rudimentary foray toward network management was ICMP around 1980, and the first real network management didn't come until 1988 with SNMP, which is considered below.

There had not been much of a consistent structure to network management, just lots of lists. Eventually, there was general agreement that there were five broad 'management applications,' the so-called FCAPS: Fault, Configuration, Accounting, Performance, Security. But there was really no operational model of how they worked together; at least it was five lists instead of one. It didn't take long for vendors to realize that network management standards were a major threat to their barriers against competitors: with management standards, network equipment could be more easily swapped out for a competitor's. The response was predictable.

The contributions (many from IBM) continually added more and more issues to the discussions of what these applications might be about. In this sort of environment, it was very hard to develop concrete proposals for standards, and IBM favored postponing getting too concrete until it was clearer where the work was going.[1] Of course, this was for all sorts of good reasons. In the early 1980s, IBM ran full-page ads in places like Scientific American showing how the original 5-layer SNA model really had seven layers and followed the OSI Reference Model. The ads went on to point out that while OSI did data transfer, it didn't do network management. Was IBM stonewalling? Some said so. It was quite obvious that whoever defined (and sold) network management said a lot about how the equipment in the network had to behave, and thus controlled the account.

In 1984, GM and Boeing, with NIST (National Institute of Standards and Technology, formerly NBS), had joined forces in an effort called MAP/TOP to develop factory and office automation based on the emerging LAN and OSI standards. In the fall of 1984, they visited the Motorola subsidiary where I worked, looking for ideas on network management. Earlier that year, we had begun work on network management for our LAN and multiplexer products, and in late spring I had developed a network management model, which we presented to them. They were enthusiastic about it and thought we were far ahead of everyone else. Our staff began attending IEEE 802.1 meetings to help them pull together the specifications. The reaction of the other companies was typical: "O, now I understand." Well, only superficially, but it got things moving. Within 18 months, IEEE 802.1 had an architecture and protocol completed, which were submitted fully formed into ISO as working drafts ready to be voted on. IBM never saw it coming.[2] They tried to stop it or delay it, but the proposals had too much support. (All of the companies that had participated in its development in 802, of course, voted for it, just as we had planned.) That broke the logjam and began in earnest the development of CMIP and its associated management standards. (It was the development of CMIP and MIBs that led to the realization that application protocols modify state external to the protocol and that there is only one application protocol.)

In some sense, that effort was relatively traditional in that it consisted of a Network Management System, modeled on the classic operations-center concept, collecting information from Agents in the devices in the network and sending commands to the Agents to modify their configuration, etc. The state of hardware at the time made it important to minimize the resources the Agents required. This flipped the traditional client/server concept by putting most of the load on the client rather than the server. In addition, quite early in the development of packet networks it had been recognized that some functions, like routing and congestion control, had to be automatic feedback mechanisms, in effect autonomic functions, because events in the network were happening too fast to put a human in the loop. The most that could be done was manage the network, not control it. The change in terminology was not just the adoption of a more sophisticated-sounding marketing term, but a real, practical shift in the nature of the problem.

The original insight in early summer of 1984 was that management was:

Monitor and Repair, but not Control.

To structure the idea further, I drew on my previous interest in the structure of nervous systems.[3] I posited that there were four planes of network management that parallel the nervous system. Progressing from the bottom up: sensors (peripheral), collecting raw data from the operation of the layers and the hardware; agents (hypothalamus), aggregating data to be uploaded; managers (cerebellum), the operational center of the network, where the aggregated data was further processed and analyzed for immediate issues; and coordination (cerebrum), where long-term projections, planning, and adjustments to operation were considered and evaluated. Aggregation of data increased moving up; cycle time increased moving down. The automatic feedback functions (routing and congestion control were the current examples) constituting Layer Management clearly corresponded to the autonomic nervous system.


In the early 90s, I gave a talk at Motorola Cellular on this network management model. The greybeards in the front row took issue with my claim, insisting they controlled their network. A young engineer in the back of the room piped up, "I think I know what he means." At the time, the entire UK cellular system was Motorola equipment. The engineer had babysat the system for 3 years. He proceeded to tell of a 6-week period when the number of switch crashes dropped off precipitously. They couldn't figure it out. They hadn't done anything: no new equipment, no new software releases, no configuration changes, nothing. Then they realized:

It had been the 6 weeks the operators were on strike!

Monitor and Repair, but NOT Control!


Collecting all of this data on these much larger networks created a new requirement: a database would be needed to make the data available, and network management placed special requirements on it. It also raised an issue that required more than a passing knowledge of databases. Management collects data on the parts of the equipment; in the database world, this is called a 'bill of materials' or 'parts-explosion' structure. The accepted wisdom at the time was that relational databases were the solution to all database problems.

Charlie Bachman, the inventor of the Entity-Relation database model (as well as the 7-layer OSI model), with whom I worked for several years, always said, 'You can't do a bill-of-materials structure in a relational database.' I would counter, 'You could, but who would want to!'[4] Consequently, our group adopted an E-R model database.[5] Meanwhile, every other vendor (DEC, HP, IBM, etc.) adopted a relational database for their network management system. They all encountered major problems, with some ripping out the database and resorting to indexed-sequential files.
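The difficulty Bachman was pointing at can be seen in miniature. A parts-explosion is a containment tree of unknown depth; the sketch below (in Python, with entirely hypothetical component names) walks it directly, the way a navigational or E-R database follows its links. In a relational schema the same data is one flat self-referencing table, and before recursive queries were added to SQL, retrieving the tree meant a self-join or round-trip per level, with the number of levels unknown in advance.

```python
# A parts-explosion ("bill of materials") is a recursive containment tree:
# a router contains a chassis, the chassis contains cards, cards contain
# ports, and so on to arbitrary depth. All names here are made up.
parts = {
    "router-1":  ["chassis-A"],
    "chassis-A": ["card-1", "card-2", "psu-1"],
    "card-1":    ["port-1", "port-2"],
    "card-2":    ["port-3"],
    "psu-1":     [],
    "port-1": [], "port-2": [], "port-3": [],
}

def explode(part, depth=0):
    """Walk the containment tree depth-first, yielding (depth, part)."""
    yield depth, part
    for child in parts[part]:
        yield from explode(child, depth + 1)

inventory = list(explode("router-1"))
```

A navigational store makes each step a pointer dereference; a relational engine of that era had to re-find each parent's children by value, level after level, which is where the measured slowdowns came from.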

The other big requirement for effective management was that lowering uncertainty and ensuring as much predictability as possible required: commonality, commonality, commonality. Some progress was made on this in the OSI standards, with MIB definitions for each layer and protocol, as well as common structures for the management systems and the management applications. We actually made quite a bit more progress within our product group, but upper management prevented us from submitting it to OSI. Our major insight was to separate what had to be common from what didn't: the networking aspects had to be common or nothing could communicate, but the hardware and software support aspects didn't have to be. For all intents and purposes, this eliminated the need for device- (or vendor-) specific translation. It also implied considerable progress in further architecting FCAPS, but you can't be everywhere in a large standards effort. In 1986, we fielded a full network management product with all of the FCAPS, including automatic configuration of devices. Of course, with RINA we have essentially maximized commonality in data transfer, but there are still things we will learn in layer management that will lead to more simplification, enabling more sophisticated management than can be done now.

In parallel with this, over the past three decades some have waxed eloquent about autonomic management, with some even believing it was all that was required. However, there has been a dearth of real results; somehow the papers all remain at 30,000 feet, with the rubber seldom finding the road. It might be true that autonomic management is all that is needed, as long as networks never get beyond the complexity of slime molds, sponges, and jellyfish. Beyond that, with the development of eyespots, there is a central nervous system. Just the act of observing the network implies some degree of centralization of information. Hence, there is some need for a homunculus.

[For completeness, we must mention what was happening in the Internet, although it did little to progress the state of the art and if anything was a drag on progress. In the late 80s, about the time the IEEE work was being moved to OSI, the IETF began to look at network management. There were two major proposals: a forward-looking object-oriented management protocol called HEMS, and a much more rudimentary protocol called SNMP. The IEEE work had already experimented with a minimal protocol like SNMP and had found it was too minimal: to accomplish anything, bare Reads and Writes (Set/Get) generated too many requests and responses, which caused too much traffic and too much delay. Consequently, this led OSI to develop the object-oriented protocol CMIP, which could operate on several objects at once with "scope and filter." HEMS was similar to CMIP, based on a PostScript-like model; although the initial version lacked a "scope and filter" operator, one could easily have been added. (As an example of the importance of scope and filter, suppose one wants to shut off any of 1000 flows through a router with queue lengths greater than X. SNMP would generate between 2000 and 4000 messages, while CMIP would generate 2.) After much debate in the IETF, SNMP was chosen over HEMS, even though its implementation was larger than either HEMS or CMIP.[6]
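The arithmetic behind that parenthetical example can be sketched as follows. This is a toy model of message counts only, with made-up queue lengths; it does not model the actual SNMP or CMIP encodings.

```python
# Task: shut off every flow (of 1000) whose queue length exceeds X.
# Fake, arbitrary queue lengths for illustration.
flows = [{"id": i, "qlen": (i * 7) % 50} for i in range(1000)]
X = 40

# SNMP-style: a Get request + response per flow just to read the queue
# length (2000 messages), then a Set request + response per matching
# flow, i.e. 2000-4000 messages total depending on how many match.
snmp_msgs = 0
to_shut = []
for f in flows:
    snmp_msgs += 2                 # Get request + response
    if f["qlen"] > X:
        to_shut.append(f["id"])
snmp_msgs += 2 * len(to_shut)      # Set request + response per match

# CMIP-style scope and filter: one request carrying the scope (all
# flows), the filter (qlen > X), and the operation, plus one reply.
cmip_msgs = 2
```

Whatever the exact encodings, the point survives the toy model: the per-object protocol scales its traffic with the number of objects, while scope and filter moves the selection into the agent and keeps the exchange constant.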

The IETF completely missed the importance of commonality, and by the time they figured it out, it was too late. Too many RFCs for MIBs had been generated for specific devices. They were going right back to the device-specific lists that were common prior to 1985. When they did recognize it, some participants merely took the OSI management standards and did a global replace on the terminology for the SNMP versions.

The situation was further complicated when SNMP was first approved: the router vendors took the tack that SNMP would be suitable for monitoring but not for configuration, because it was insecure. Strictly speaking, it was. However, SNMP was encoded with ASN.1 (from OSI). Instead, router vendors recommended using their own management software for configuration, which used Telnet and sent passwords in the clear. (Every PC in the world had Telnet. And even better, most routers had the default password! Needless to say, most computers in the world did not have ASN.1 compilers.) Amazingly, the IETF fell for this argument.[7] The importance of Account Control raises its head again.

The IETF immediately set off to produce SNMPv2, which would be secure. The original authors tried to force through a draft, which predictably blew up in their faces.[8] There ensued a decade-long confusion before something began to emerge. But by then the damage had been done: the state of network management was roughly back where it had been prior to the MAP/TOP effort in 1984.

As long as we are on the topic of bad ideas: IBM was a latecomer to all of this, having tried to hold the line with SNA for far too long. In the late 80s, they needed to make up for lost time (no time for a major implementation effort), make a play for market share (something they were always good at), and co-opt the network equipment vendors. They did all of that by introducing a "Manager of Managers." Sounds impressive, doesn't it!? And so like IBM.

Basically, the Manager of Managers consisted of a window for events (pretty trivial to implement),[9] a network map (you have to have a map!) showing the status of nodes, and windows for the command-line interface to each of the device-specific element managers. Sound pretty empty of substance? Did you ever know a 2nd-level manager that wasn't!? 😉 Of all the approaches floated in the 1980s, this is the one that both did the least and had the least chance of improving anything.

After the multiple debacles surrounding SNMP, network management was pretty much a desert for 20 years. There were a few attempted products, but they tended to focus on particular aspects. From time to time, there would be an effort to develop another standard, again focused on a specific aspect of management: a standard command-line interface,[10] YANG, NETCONF, etc. No one wanted to tackle the hard work that needed to be done, i.e., finding and imposing more commonality. Without more commonality, no real progress was possible. Needless to say, the equipment vendors didn't help much. The lack of commonality (as we have seen) is the basis of their account control. But creating that commonality ultimately requires fixing the flaws in the Internet itself.]

Meanwhile, given that the primary purpose of a router is to just move bits, routers were becoming more and more a commodity. The router vendors were confronted with how to put more value into them to get people to buy more routers, and more expensive routers. Deep Packet Inspection (DPI) was their first attempt, which had the side effect for providers of creating the Net Neutrality issue (which cost the providers dearly and still does). As that ran its course, there was the 'router table crisis' that led to carrier-grade NAT and other expensive boxes to solve the problems it created. More recently, the ploy has been 'machine learning,' essentially an admission that they don't understand networks well enough to do the necessary traffic engineering. All of these drove the Internet further from resolving its fundamental flaws and deeper into the ITU model it was already in.

A few years ago, large network owners, along with some academics, started making noise about a new management approach, Software Defined Networking (SDN), and pushing router vendors to develop it. (An odd choice of name: routers from the ARPANET to the present have always been defined by software!) Even stranger was a leading Princeton professor's claim that the purpose of the SDN architecture was to say as little about the network as possible.[11] Strange, I thought it would be precisely the opposite! To do effective management, you want to be able to say as much about the network as possible.

Remember: Commonality, Commonality, Commonality. SDN brought with it a whole new bevy of acronyms, none of which were built directly on the science of network traffic flows.

But what is SDN really? Believe it or not, it is a dressed-up version of IBM's 1980s Manager of Managers. Only now the Manager is called "Orchestration" (!) and the element managers are called domain managers, but their functionality is still the same. The difference is that rather than just providing a window for direct input to the element managers, Orchestration imposes an overlay of abstraction.[12] As one SDN expert tried to explain it to me, "[I]ts chief virtue is that it is specified at a level of abstraction that allows telecommunications network control and management [to] be specified logically and translated by separate mechanisms to the actual physical network resources." On the other hand, a Stanford professor and major SDN theorist claims there are no abstractions in networking. He may be right about the Internet. Given its origins as a Carnack architecture, the severe flaws introduced at the outset, and all of the 'enhancements' (actually patches and kludges) heaped on the 'Net over the last 40 years, it is probably true that there are no significant invariances (abstractions) to be found. Of course, this does not bode well for the success of SDN. Building on unstable ground yields an unstable edifice.

SDN proponents wax eloquent about the user defining how the user's data is to be handled. Letting the nature of the traffic be specified in terms the "user" understands is an admirable goal. Letting the user define how their traffic is to be handled is an entirely different matter. Can you imagine every network user trying to specify how their flows are to be handled? We know that users always think their traffic is the most important! Too many cooks will . . .[13] It is far better to take the traditional systems approach of presenting the user with a black box to which the characteristics of a flow are given, and let the black box, i.e., the network, determine how it might meet those requirements.
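A minimal sketch of that black-box approach, assuming an entirely hypothetical allocation interface (none of these names come from any real API): the application states what the flow needs, in its own terms, and the network alone decides how to meet it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowSpec:
    """What the flow needs, in terms the user understands.
    The user says nothing about paths, queues, or mechanisms."""
    max_loss_rate: float            # e.g. 1e-6
    max_delay_ms: Optional[float]   # None = no delay bound
    avg_bandwidth_bps: int

def allocate_flow(dest: str, spec: FlowSpec) -> Optional[int]:
    """Return a flow handle if the network can meet the spec, else None.
    How the spec is met (routing, scheduling, multiplexing) is entirely
    the network's business; this stand-in just checks a capacity bound."""
    feasible = spec.avg_bandwidth_bps <= 10_000_000   # invented limit
    return 42 if feasible else None

handle = allocate_flow("video-server", FlowSpec(1e-6, 100.0, 4_000_000))
```

The design point is the direction of the interface: requirements flow in, a yes/no (and a handle) flows out, and no mechanism leaks across the boundary for users to fight over.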

The other problem is worse: “Unfortunately because of proprietary vendor equipment and a multiplicity of element managers we have ended up with multiple NFV SDN Controller domains that are often not logical at all. And worse yet to resolve conflicts and to ‘standardize the functions of virtual networking (and) increase interoperability of software-defined networking (SDN) elements,’ the NFV Orchestrator has been created with a role that goes well beyond monitoring and policy resolution.”  This is actually worse than the IBM solution. At least with IBM’s solution 30 years ago, there was direct access to the element managers. This creates three potential but very real problems:

  1. The translations between Orchestration and the vendor-specific element managers are going to be fraught with problems. Anyone who has tried to write a device driver that mediated between the OS's view of a device and the actual device, or between a later version of a device and the actual device, knows this problem well. There will be a never-ending list of slight semantic differences that have been exploited, that don't quite map correctly, and that create numerous problems. Ultimately, it results in unexpected behavior and an inability to effectively manage a portion of the network.
  2. Worse, the manager has to treat the network element as an entity delivering a given set of QoS classes as its service. The problem is that the network element is managing different layers, each with flows of different QoS classes. There is multiplexing at each layer; hence the QoS of a flow at layer (N-1) is not the QoS of a flow at layer (N), but the QoS of the aggregate of specific (N)-flows, potentially of different QoS, multiplexed onto the (N-1)-flow. (There can be distinct advantages to multiplexing flows of different but complementary QoS.)
  3. As time has progressed, the SDN policy of centralizing routing has shifted to moving the “centralization” closer and closer to the routers. Centralizing an inherently decentralized problem always has the same predictable result. Ultimately, they will find that centralizing routing in the routers is the optimal trade-off. 😉

As noted earlier, SDN does not solve any of the fundamental flaws in the Internet architecture. SDN leaders contend that their intent is to provide the SDN overlay, then evolve the element managers to have greater and greater commonality, and eventually evolve the underlying Internet into some as-yet-unspecified new architecture. However, without a vision of what this new architecture looks like, of what the goal is, the more likely outcome is that the current architecture will strongly influence the SDN design (thereby limiting where it can evolve to), and the lack of abstractions, i.e., 'commonality,' will be manifested as either too little commonality or too much, i.e., a nailed-down implementation, which would inhibit innovation. As it is, the SDN products appearing on the market do not bode well: millions of lines of code, where there is strong evidence that far more can be accomplished with a tenth of that.

There is great emphasis on 'evolving.' This is supposed to allay customer concerns over disruptive technologies, especially ones that don't do anything, e.g., IPv6. However, what it is really about is maximizing revenue with a constant stream of enhancements, as well as providing cover for the fact that they really don't know where they are going.

A major aspect of all of this management is ultimately to allow QoS to be provided in these networks. Surely, no one would think that QoS can be solved by an overlay. That would be silly! It is apparent from basic systems design that for something like QoS to be effective, it has to be enforced at every layer, all the way down to the resource itself, the "wire" . . . . . . O, that's right. Princeton got a lot of funding from NSF to do just that. Sigh. To paraphrase the poet laureate of Great Britain: "You can get them to do anything if you put it to them right. The trouble is they try all the wrong ways first." So far, that sure looks like what they are doing! (Most have yet to realize that TCP congestion control thwarts any attempt to support QoS.)

Two of the root causes of this are 1) the field has ceased to act scientifically and behaves more like a craft tradition,[14] and 2) the Internet has always followed the beads-on-a-string ITU model, rather than the layered networking model. In the Internet, layers are simply modules in a system; the model is all about boxes and paths. There is no concept of the layer as distributed coordination, nor of layers having different scope. In the original networking model, layers have different scopes and consist of cooperating processes in different systems. It hasn't helped that Moore's Law has allowed us to ignore problems far too long, encouraging stop-gap measures to be applied continually until it is assumed the original work must have been right.

More recently, the pendulum has begun to swing back the other way with the usual tendency to go to the other extreme: everything should be totally decentralized. As with nervous systems, it is not all one or the other.  Let us take a more detailed look at network management with respect to decentralization.

There are two forms of management we are interested in: Network Management and Application (or DAF) Management. We will leave operating systems or systems management for another time.

Network Management is responsible for networking equipment and the DIFs they support. Since this involves real hardware, the domains of management are usually determined by who owns the equipment.

Application or DAF Management is responsible for managing the DAF and any DIFs uniquely required to support that DAF. As we move RINA down to the legacy Media Layers and eventually to the wire, the effectiveness and reach of network management will be greatly increased. At the same time, DAF Management will largely be a subset of DIF Management, with the exception of the application-specific parts.

As noted above, because events are happening too fast for a human to be in the loop, it has long been recognized that there must be some degree of autonomic management in the DIFs. This was referred to as Layer Management. It was clear that routing (managing resource allocation within the DIF) and congestion management are autonomic. The two operate at different time scales and are governed by policies based on the QoS-cubes supported by the DIF.

In the early work on network management architecture, some went a little overboard (HP, for one) with multiple levels of managers with overlapping management domains. There are no 2nd-level managers. There may be a hierarchy of subnets in a network, perhaps with their own managers, but the managers are always peers. The Coordination processes may prepare new input for managers, but they do not effect the changes in the network; they do not override managers.

Let us look at the opportunities for decentralized management:

  1. Management domains are generally determined by the owner of the equipment or some other logical aspect of the organization. For a large network, the network may be divided into multiple domains, each with its own manager. As just noted, even if the subnets managed by these domains are viewed hierarchically, e.g., region and backbone, the managers are all peers and may back each other up. While it can occur, it should not be assumed that a domain is homogeneous with respect to equipment or technology, nor is it likely to be.
  2. Any number of "managers" may observe (monitor) all or part of a network. This monitoring is targeted at the health of the network. But while management domains may overlap for monitoring, they may not overlap for modification. Modifying the network has to be a unique responsibility: only one management DAF can be responsible for modifying attributes in a given management domain.
  3. The Agents – A Management DAF consists of Management Applications and Agents. The Agents are the local members of the DAF in the processing system of each member of the DIF/DAF. Each Agent has in its domain the IPC Processes of the DIFs this Management DAF is responsible for. The Agent is analogous to the sphere in Flatland: it has access to the state of all IPCPs in its domain. While it is possible that there would be multiple Agents belonging to different Management DAFs in the same processing system, the more common case is one Agent per processing system.

There is potential here for a new form of autonomic behavior, drawing on the nervous-system analogy of ganglia.[15] Decentralized strategies have a tendency to be good at finding local optima but not global optima. There might be strategies where the Agents, or subsets of Agents, could improve the health of the network by optimizing across DIFs to achieve more global optima. Also, for some networks, it may be the case that portions of the network need to undergo complex configuration changes in real time. All of these might be better coordinated by a ganglion function operating autonomically among one or more Agents.

  4. Event Management – The sensors in the DIFs/DAFs provide the Agents with raw data; the Agents have direct access into the DIFs/DAFs. Agents aggregate the data and report it to Event Management. The primary function of EM is to maintain a log of the management data. (Potentially this is a classic use of blockchain, although it is not clear it requires that level of security. It is an open question whether, given the inherent security of the DAF/DIF structure, modification of the log is seen as a threat. But it would be an interesting 'recursive' use of blockchain.) This is how everyone else monitors the network or the distributed application. It could be distributed, with all Agents reporting to all monitors. (This sounds inefficient, depending on the volume of data. While it would distribute the load, it would eliminate subscriptions based on conditions across more than one Agent.)
  5. Configuration – is the unique responsibility of the network manager for the equipment in its domain. This appears to be a necessity. Coordination-level processes will develop new configurations or update old ones, and verify and test them. Here the decision process can be by consensus, although the activation of a configuration probably must be centralized. (See below.) For network management, some configuration changes may be triggered by well-defined events such as day/night, weekdays/weekends, holidays, special events (Super Bowl Sunday), disaster response, etc. Some of these may require a consensus decision, but most don't, other than to determine what the configuration changes are and the conditions for activating them. Once the configuration differences are known, activating them can be fully automated. (A configuration is a tree. By embedding directives to indicate the order, activating a configuration is basically a tree walk.)
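The parenthetical above, a configuration as a tree with embedded ordering directives, can be sketched as follows. The node names and the 'pre'/'post' directive are invented for the illustration; a real configuration would carry actual settings at each node.

```python
# Activating a configuration as a tree walk. Each node may carry a
# directive controlling order: 'pre' = apply this node's settings before
# its children (the default), 'post' = apply them after the children.
def activate(node, applied):
    """Walk the configuration tree, recording activation order."""
    order = node.get("order", "pre")
    if order == "pre":
        applied.append(node["name"])      # apply settings on the way down
    for child in node.get("children", []):
        activate(child, applied)
    if order == "post":
        applied.append(node["name"])      # or on the way back up
    return applied

config = {
    "name": "network", "order": "post",   # bring up the network last
    "children": [
        {"name": "router-1",
         "children": [{"name": "if-eth0"}, {"name": "if-eth1"}]},
        {"name": "router-2",
         "children": [{"name": "if-eth0/r2"}]},
    ],
}

sequence = activate(config, [])
# routers and their interfaces come first, the network-level node last
```

Once the differences between the old and new configuration are computed, replaying such a walk is entirely mechanical, which is why activation can be fully automated even when deciding on the configuration itself requires consensus.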

One interesting conjecture: can different parts of a large network sense the conditions for a configuration change and act independently without causing instabilities? There will, however, be issues to resolve about what happens at the boundaries if adjoining areas do not perceive the same conditions, etc.

  6. Fault Management – is actually a management DAF with a small management domain, i.e., the equipment (or parts thereof) being diagnosed and, hopefully, repaired remotely. This pretty much has to be a traditional centralized management system.
  7. Performance Management – Everyone has always known there was a need for Performance Management for analyzing performance data, but it was always a bit vague what precisely it was for or how it related to the operational model. Given that the behavior in the DIFs is governed by the policies of layer management (i.e., it is autonomic), and that the network is being managed, not controlled, it follows that performance data is collected and analyzed at the Coordination level to determine whether any of the autonomic policies in the DIFs need to be adjusted, changed, etc.

This will be a 'big data' problem. Large amounts of data collected in the network will have to be analyzed and interpreted. This information is then reviewed by all participants (or their representatives) to determine whether policies in the DIFs need to be modified, or whether there are upcoming events that will require changes in configuration and/or policies.

These decisions can be made by a consensus process of some sort. However, this level of decision-making will require a degree of expertise. The decisions being made here are not direct but indirect: they modify the management of the network, not its control. If any modifications are approved, they can be activated in the next configuration update. (Care will have to be given to the order in which this is done.)

  8. Accounting Management – is really a Coordination process. I have always contended that so-called 'accounting data' is actually performance data; it isn't accounting data until it is multiplied by dollars, etc. In other words, one should not have blinders on when collecting data that is used for accounting.

Basically, it would appear that Coordination is the center of decentralized management. It also appears that a considerable amount of the centralized functions can be made automatic. But fundamentally, the real effectiveness of network management has to rely on a strong foundation in the network architecture.

For application management, of course, there are the DIFs associated with the DAF. We assume that ISPs, etc., would tend to provide certain ‘vanilla’ QoS-cubes. (Some research is probably needed on what these would look like.) Then DAF designs might have DIFs that use these ‘vanilla’ services and augment them to be specific to the needs of their DAF. As for DAF management itself, I would foresee that the basic RIB, Configuration, Event, and Fault machinery would be largely the same. But we do need to explore what other DAF commonalities can be exploited.

[1] It is notable that in the OSI work, the head of delegation from most countries was from IBM.

[2] Because I was Rapporteur of the OSI Reference Model (as well as having non-standards responsibilities), I did not attend the 802 meetings so as not to attract attention to the effort, even though I was writing and editing many of the contributions we submitted. IBM’s focus in 802 was on token ring, not on architecture. It worked.

[3] In the early 1970s, on the assumption that networks were not as complex as the human nervous system, I had audited a course in invertebrate zoology to understand the range of complexity of nervous systems.

[4] It is analogous to ‘you could write a Java compiler for a Turing Machine, but who would want to!’

[5] One of our new hires from MIT kept wondering about relational, so we suggested doing some tests and found that in the best case the relational model was 19 times slower.

[6] Yeah, yeah, I know what you are thinking. What can I say? It was SIMPLE.

[7] Given how much the IETF detested ASN.1 and its apparent complexity, I would tease them that ‘wasn’t ASN.1 an encryption algorithm?’ Strictly speaking, SNMP may not have been secure, but it was much more secure than sending passwords on Telnet.  The IETF missed the point of ASN.1. It made protocols invariant with respect to syntax. That can be very powerful. But they couldn’t use that property.

[8] In consensus organizations, like standards committees, it is practically a theorem that trying to ram through a complete draft without even minor changes will blow up and fail. Recent experience in the US with the Republicans’ attempt to modify the Affordable Care Act was a classic repeat of SNMPv2 (and others I have observed). The result seems to hold over a very large range. See the Discourses on Livy.

[9] No one was ever fired for buying IBM.

[10] A major router vendor insisted it had to be simple enough to be implemented on the router. When I asked why, the answer was, so field engineers could use it. To which I replied, ‘You are still issuing your field engineers teletypes?’, knowing full well that a simple management system could reside on a PC.

[11] One theme that underlies the vast center of academic research is to never solve anything: never find general principles that would essentially solve an issue and thus eliminate topics for funding. IOW, don’t do science; propose instead widgets for specific problems, which generate other problems.

[12] It would be funny if it weren’t so sad: An enthusiastic SDN advocate once told us how with traditional network management, one had to send all of these commands to the devices to find out what was going on. With SDN, one merely queried the database! Much easier! I didn’t have the heart to ask him how all that information got in the database or how closely it reflected the current state of the device. 😉

[13] In any case, this is just an indication that they don’t know what to do.

[14] “Research” funding has largely become angel funding for those without the guts to do a start-up for real.

[15] In many invertebrates and vertebrates, there is a mass of nerves in another part of the body, separate from the brain, that is used for muscle coordination or other functions in a remote part of the body. (Dinosaurs were known for having ganglia near their hindquarters.)