In this post I would like to briefly discuss an assumption that is deeply buried in many people’s minds when they think about computer networks: we tend to think of a computer network as a network of devices, as if it were the same as the early Public Switched Telephone Network (PSTN). This is a wrong assumption that leads to many misunderstandings; in reality, computer networks are networks of processes.
Let’s start by looking at what a computer is and by defining a process. A computer is a set of electronic systems tied together, capable of executing programs. An application process is the instantiation of a program executing in a computer, intended to accomplish some purpose. All computers have a special process that arbitrates access to the computer’s resources (CPU, memory, ...) and allows them to be shared among different applications. This special process is the Operating System (OS): special, but nonetheless a process.
Coming back to the main argument of this post, computer networks are not like the early PSTN: we don’t want two devices communicating (the two phone terminals at the ends of a cable), but two (or more) processes executing in the same or different computers. This is the key difference: it is not the computers that communicate, but the application processes executing inside them.
It looks like a trivial observation, but it is not. If we look at current networking technology, we find many hints that this model is being ignored:
- The distinction between “virtual networks” and “physical networks”: they are all networks of processes, in essence, distributed applications. Is the “TCP/IP stack” executing on a hypervisor any more “physical” than the one executing in a Virtual Machine’s Operating System? Of course not: they are all processes. They just belong to different distributed applications and have a different scope.
- The “network protocol stack” is always considered as having a different nature than the applications that make use of it. But this difference is arbitrary and in the eye of the beholder. Both the “network protocol stack” and any application in a computer are just application processes executing within that computer.
- We have an Internet “protocol suite” that focuses on “endpoints”, “hosts” and “interfaces”, instead of recognizing that what matters are the processes and the communication between them (a trivial consequence of taking the right view is realizing that networking is nothing more than distributed Inter-Process Communication, or IPC).
- Operating Systems have different APIs and mechanisms for local IPC (shared memory, pipes, ...) vs. distributed IPC (networking). Both provide the same service, communication between processes, so why should the APIs be different?
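The last point can actually be demonstrated with today’s tools: the BSD sockets API already gives local and networked IPC an identical calling convention, with only the address family differing. A minimal sketch in Python (the helper names `echo_once` and `echo_server` are mine, chosen for illustration):

```python
import os
import socket
import tempfile
import threading

def echo_server(server_sock):
    """Accept one connection and echo one message back."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def echo_once(family, address):
    """Run one echo exchange over the given socket family.

    Note that this code is identical for local (AF_UNIX) and
    distributed (AF_INET) IPC; only the address format differs.
    """
    server = socket.socket(family, socket.SOCK_STREAM)
    server.bind(address)
    server.listen(1)
    if family == socket.AF_INET:
        address = server.getsockname()  # pick up the OS-assigned port
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    client = socket.socket(family, socket.SOCK_STREAM)
    client.connect(address)
    client.sendall(b"hello")
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

# Local IPC: a Unix-domain socket, addressed by a filesystem path.
path = os.path.join(tempfile.mkdtemp(), "echo.sock")
local = echo_once(socket.AF_UNIX, path)

# Distributed IPC: a TCP socket, addressed by host and port (0 = any free port).
remote = echo_once(socket.AF_INET, ("127.0.0.1", 0))

print(local == remote == b"hello")
```

Two processes (here, two threads standing in for them) communicate through exactly the same verbs — `bind`, `listen`, `connect`, `sendall`, `recv` — whether they share a machine or not, which is precisely the uniformity the argument above asks for. Shared memory and pipes, by contrast, expose entirely different APIs for the same service.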
In conclusion, part of the problems associated with the complexity and shortcomings of our current computer networking technology stem from the fact that computer networks are often thought of as networks of devices instead of networks of processes. It would not have been surprising when computer networks were first introduced (after all, it was a paradigm shift compared to telephony), but that was 40 years ago. Isn’t it about time we took the right perspective?