Internet-Draft: Meditation on Connectivity
Author: Fjeldstrom
January 2026
Expires: 14 July 2026
This document analyzes how the Internet arrived at its current connectivity equilibrium. It does not propose new mechanisms or assign fault, but instead examines a series of historically contingent, locally rational adaptations (particularly the adoption of default-deny security boundaries alongside the pressures of global scaling) that collectively shaped today's reliance on higher-layer compensatory mechanisms. It clarifies how sustained operational triage and successful adaptation deferred reconsideration of endpoint semantics, and why the accumulated effects of those decisions are now observable at Internet scale.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on 14 July 2026.
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
This document adopts a diagnostic perspective, examining observable system behavior and known failure modes to reconstruct how the Internet reached its present connectivity equilibrium. It takes a reflective, historical stance rather than an architectural or prescriptive one. It does not seek to judge past decisions or argue that alternative choices should have been made. Instead, it asks how a series of necessary, locally rational responses to real problems interacted over time to shape the system that exists today.
The focus is on clarity rather than correction: understanding how pressures, trade-offs, and opportunity costs accumulated, and how those forces guided the Internet toward its present equilibrium.
This document is intended as contextual background for subsequent architectural analysis revisiting end-to-end semantics under contemporary operational conditions.
This document is intentionally descriptive and historical in nature. It does not argue that alternative architectural choices should have been made, nor does it propose mechanisms or remedies. Its purpose is to reconstruct how a sequence of locally rational responses to operational pressures interacted over time to produce the present connectivity equilibrium.
The central question explored here is straightforward to state but difficult to answer definitively: why was the architectural impact of default-deny security boundaries on endpoint semantics not revisited once those boundaries became widespread?
This is posed as a historical question, examined through experience, contemporaneous documents, and observable system behavior, rather than as a counterfactual or accusation.
Firewalls and related boundary-enforcing mechanisms emerged in response to immediate and visible risks. Early Internet hosts were fragile, trust relationships were poorly understood, and exposure carried tangible operational cost. Default-deny boundaries offered a practical and effective response, aligning with organizational realities and improving survivability.
At the time, this shift addressed a pressing problem. It enabled wider deployment and safer operation. Nothing about this response was careless or irrational; it was a necessary adaptation to the conditions of the era.
At nearly the same time, the Internet was transitioning from a research network into a global commercial infrastructure. This transition introduced urgent challenges: routing scalability, address exhaustion, congestion collapse, and operational survivability.
These issues were existential and demanded immediate, collective attention. They could not be deferred without risking systemic failure. In contrast, the semantic consequences of boundary enforcement were local, survivable, and compensable. This coincidence of pressures created an implicit triage, in which problems that threatened the Internet's existence were addressed first, while others were absorbed through adaptation.
The system adapted. Applications compensated for lost reachability. Protocols were extended or layered. Rendezvous services, relays, and traversal mechanisms emerged to preserve connectivity across restrictive boundaries.
These compensations were successful. They kept the network usable and services available. Over time, they became familiar, then expected, and eventually structural. What began as damage control hardened into infrastructure, reducing the immediate incentive to revisit deeper architectural implications.
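To make the shape of that adaptation concrete, the following sketch shows the fallback ladder that many applications converged on: attempt a direct connection, then rendezvous-assisted traversal, then a shared relay. It is a minimal illustration in Python; the names are hypothetical and the hard-coded outcomes are assumptions chosen to show the structure, not a description of any particular protocol or deployment.

   # Illustrative sketch of the compensatory "fallback ladder"; names and
   # outcomes are hypothetical and chosen only to show the structure.

   def try_direct(peer):
       # Unsolicited inbound traffic is typically dropped at a default-deny
       # boundary; assume the direct attempt fails in this sketch.
       return None

   def try_traversal(peer):
       # Rendezvous-assisted traversal: both sides send outbound packets at
       # once, hoping each boundary admits the other's return traffic.
       # Assume it also fails here, as it often does on restrictive paths.
       return None

   def try_relay(peer):
       # Shared relay infrastructure: effectively always reachable, at the
       # cost of concentrating traffic, state, and failure domains there.
       return "relayed path to " + peer

   def connect(peer):
       # What began as damage control is now the ordinary connection routine.
       for attempt in (try_direct, try_traversal, try_relay):
           path = attempt(peer)
           if path is not None:
               return path
       raise ConnectionError("no usable path to " + peer)

   print(connect("peer.example.net"))  # -> relayed path to peer.example.net

When the earlier rungs routinely fail, the relay ceases to be a fallback and becomes the ordinary path, which is the sense in which damage control hardened into infrastructure.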
The behaviors described in this document are not novel, nor are they specific to any single protocol, layer, or application domain. They are instances of a broader class of well-understood system failure modes that have been observed repeatedly across computing and networking history and named precisely because of their recurrence.
Terms such as thrashing, congestion collapse, broadcast storm, network meltdown, and brittleness were coined to describe characteristic system states that arise when feedback, load, and control mechanisms interact poorly under real-world conditions. These terms do not describe bugs in particular implementations; they describe invariant dynamics that appear when systems optimized for steady-state operation are subjected to bursty load, correlated demand, or the steady-state use of fallback mechanisms.
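A deliberately simplified model illustrates one such invariant dynamic. In the Python sketch below, a fixed sender population shares a bottleneck whose queueing delay grows with load and retransmits on a fixed timeout; every constant and functional form is an arbitrary assumption chosen for illustration, not a measurement of any real network.

   # Toy model of spurious-retransmission congestion collapse.
   # All constants and functional forms are illustrative assumptions.

   CAPACITY = 1000.0   # bottleneck capacity, packets per second
   RTO = 1.0           # fixed retransmission timeout, seconds
   BASE_RTT = 0.1      # base round-trip delay, seconds

   def queue_delay(load):
       # Delay grows sharply as offered load approaches capacity.
       utilization = min(load / CAPACITY, 0.999)
       return BASE_RTT / (1.0 - utilization)

   def copies_per_packet(delay):
       # Each elapsed timeout triggers one more (spurious) retransmission.
       return 1.0 + max(0.0, delay - RTO) / RTO

   def goodput(offered):
       # Fixed-point iteration: load -> delay -> duplicates -> load.
       copies = 1.0
       for _ in range(100):
           copies = copies_per_packet(queue_delay(offered * copies))
       delivered = min(offered * copies, CAPACITY)
       return delivered / copies  # only one copy per packet is useful

   for offered in (200, 400, 600, 800, 900, 950):
       print("offered %4d pkt/s -> goodput %7.1f pkt/s"
             % (offered, goodput(offered)))

Up to a threshold, goodput tracks offered load; slightly beyond it, duplicate traffic consumes the bottleneck and useful throughput collapses. The specific numbers are irrelevant; the cliff-shaped relationship between offered load and useful work is the invariant the terminology was coined to name.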
The relevance of these observations does not diminish with time. Like thermodynamics, they are empirical constraints on system behavior, not artifacts of obsolete technology. Faster links, larger buffers, virtualization, or higher-layer abstractions do not repeal them; they merely shift the layer at which they become visible.
The present reachability equilibrium exhibits the same structural features that characterize these earlier failure modes: compensatory mechanisms that were correct and effective in exceptional circumstances have become structural; control-plane authority and data-plane forwarding are misaligned; load concentrates onto shared fallback infrastructure; and failures become correlated, opaque, and difficult to localize. These are the same dynamics that historically produced meltdowns and collapses elsewhere in the stack.
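The concentration and correlation effects admit an equally simple illustration. The figures below are hypothetical and chosen only to show the relationship: sessions that cannot establish a direct path are spread evenly across a shared relay tier, so as direct reachability declines, both the load on that tier and the fraction of all sessions exposed to a single relay failure grow together.

   # Hypothetical figures, for illustration only: load and failure exposure
   # concentrate on shared fallback infrastructure as direct paths fail.

   SESSIONS = 1_000_000   # total concurrent sessions (assumed)
   RELAYS = 10            # shared relay instances, evenly load-balanced (assumed)

   for direct_success in (0.95, 0.80, 0.60, 0.40, 0.20):
       relayed = SESSIONS * (1.0 - direct_success)
       per_relay = relayed / RELAYS
       blast_radius = per_relay / SESSIONS  # sessions hit by one relay outage
       print("direct reachability %3.0f%%: %7.0f relayed sessions, "
             "one-relay outage affects %5.2f%% of all sessions"
             % (direct_success * 100, relayed, blast_radius * 100))

The point is not the numbers but the coupling: the same shift that concentrates load onto shared infrastructure also widens the correlated failure domain that a single outage can reach.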
Recognizing this pattern is not an exercise in hindsight or blame. It reflects the maturation of an exploratory system that successfully stabilized under pressure but deferred consolidation of its architectural lessons. The analysis in this document should therefore be understood as an application of long-standing systems knowledge to a layer where those constraints have quietly reasserted themselves, rather than as a response to a one-off or unprecedented failure.
Every architectural choice carries an opportunity cost. Addressing immediate, visible failures necessarily defers attention from other questions, even important ones. As long as compensatory mechanisms remained effective, the cost of reopening foundational assumptions outweighed the perceived benefit.
In this sense, the present system reflects not a single decision, but the accumulated opportunity costs of sustained triage under growth pressure.
Over time, the balance shifted: compensatory mechanisms became dominant, fallback paths became primary paths, and second-order effects, such as loss of locality, concentration of load, and correlated failure domains, became increasingly visible. The system did not regress; it settled. It reached an equilibrium defined by the constraints it had accumulated and the adaptations that successfully absorbed them.
From an architectural perspective, the system appears to occupy a metastable regime: locally stable under prevailing constraints, yet lacking strong restoring forces should those constraints shift. Whether this equilibrium persists or transitions depends primarily on external pressures (including economic, regulatory, operational, and security-related) whose interactions span too many degrees of freedom to admit meaningful prediction. Accordingly, this document does not speculate on future trajectories, but focuses on reconstructing the conditions and adaptations that produced the present state.
This meditation does not conclude that past decisions were wrong, nor that the present equilibrium is irrational. It suggests only that the conditions which once justified deferring certain architectural questions have changed. This document describes observable adaptations and equilibria; a companion analysis examines the underlying architectural invariants that produced them.
In other words, when a system model depicts a viable path that is consistently avoided, the discrepancy should be attributed to the model or the path, not to the actors responding rationally to observed constraints (a familiar example being the formation of pedestrian "desire paths"). Understanding how the system arrived here -- through necessity, triage, and successful adaptation -- is a prerequisite to deciding whether that equilibrium remains appropriate, or whether the forces that define it should now be reconsidered.
No answers are prescribed here. The intent is simply to make the question visible again, with the benefit of time and perspective.
This memo has no IANA actions.
This document is informational and introduces no new security considerations.