To recap, the First Generation of wireless mesh products was typified by single-radio ad hoc systems that built peer-to-peer, self-forming, self-healing networks. MANETs (Mobile Ad hoc Networks) provided great flexibility, but typically low performance in terms of throughput and jitter (variation in delay), because a single radio must provide both service (connections to users) and backhaul (links between nodes).
The number of nodes possible in a First-Generation MANET has grown over the years, with recent developments pushing test deployments past 300 nodes. But the throughput and jitter limitations remain – with a single radio, they are a simple matter of physics. In effect, this is the equivalent of a hub-based wired network: collisions and contention will always be an issue.
The Second Generation arrived in the early 2000s in products like the first Tropos Networks offerings, which added a second backhaul radio to the service radio. This combination helped segment the collision and contention domains, but still had severe limitations in mobile networks, those with multiple hops, and those with higher demands for speed and low jitter and latency (voice, streaming video, etc.).
The ugly truth
I made a splash in 2004 with a controversial online posting on Dailywireless.org pointing out that larger networks with higher data demands would experience rapidly diminishing performance as the hop count grew in First- or Second-Generation wireless mesh networks. The Ugly Truth about Mesh Networks didn’t earn me a lot of friends in the industry, but it did set the stage for Meshdynamics’ MD4000 Third-Generation wireless mesh nodes (and a few other suppliers, to be sure).
Third-Generation wireless networks further segment the collision and contention domains by using multiple backhaul and service radios to create much more deterministic performance. It’s the equivalent of a true switch versus a hub in a wired topology. But the key topological difference from a wired switch is that Third-Generation wireless networks retain the flexibility to form, re-form, and adapt to any disruptions or interference. This provides the best of both worlds in many ways. Incorporating developing wireless standards and technologies such as MIMO (Multiple-Input and Multiple-Output) continues to increase the performance of Third-Generation systems.
My fundamental assumption, which proved correct, was that COTS radios would become cheaper and standardized – so that hardware was not an inherent limitation. But that would only be true if one could separate the networking techniques from the hardware of the radios. This is the direction I took, in essence creating independent radio “robots” within each node that could dynamically make decisions about routing and re-routing while maximizing performance. (Well, truly, nearly optimizing performance – more on that below). Abstracting radios (and wired links) as “channels” opened many productive paths of development, a number of which are embodied in the patent portfolio I am interested in sharing with interested System Developers and OEMs.
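The channel abstraction described above can be sketched in a few lines. This is an illustrative toy, not Meshdynamics code: the `Channel` and `RadioRobot` names, the cost metric, and the selection rule are all my own assumptions, meant only to show how routing decisions can be decoupled from the radio hardware beneath them.

```python
# Hypothetical sketch: abstracting each radio (or wired link) as a "channel",
# so per-node routing logic is independent of the underlying hardware.
from dataclasses import dataclass

@dataclass
class Channel:
    """A radio or wired link, reduced to the properties routing cares about."""
    name: str
    cost: float          # assumed metric, e.g. inverse link quality; lower is better
    usable: bool = True  # False while jammed or otherwise disrupted

class RadioRobot:
    """Per-node agent that picks the best usable channel for its uplink."""
    def __init__(self, channels):
        self.channels = list(channels)

    def best_uplink(self):
        # Re-evaluated whenever conditions change: this is the dynamic
        # routing/re-routing decision, made locally at the node.
        candidates = [c for c in self.channels if c.usable]
        return min(candidates, key=lambda c: c.cost, default=None)

robot = RadioRobot([Channel("5GHz-backhaul", cost=1.0),
                    Channel("2.4GHz-service", cost=3.0),
                    Channel("ethernet", cost=0.5, usable=False)])
print(robot.best_uplink().name)  # → 5GHz-backhaul (cheapest usable channel)
```

Because a wired port is just another `Channel`, the same decision logic covers mixed wired/wireless nodes – which is the productive abstraction the paragraph above describes.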
A lost generation
The Fourth Generation, in my humble opinion, took a wrong turn – at least in dealing with the most dynamic and demanding applications. Attempting to optimize wireless performance with a centralized controller, as exemplified by Aironet Wireless (Cisco Systems) and Aruba Networks (Hewlett-Packard), is a bit of a canard. This approach may work well in optimizing topologies in a relatively stable Enterprise network, but cannot survive in a challenging mobile and/or multi-hop outdoor network that must contend with disruptions, interference, and constant change.
Network? Computing? Yes.
In fact, my latest work involves leaving behind the idea of the network as an entity separate from the computing environment. In my view, the Fifth Generation of wireless networking conceives of the network as a control system being used to provide services to client computing, control, and analysis systems. Building on the Third-Generation radio “robot” technologies provides the dynamic networking and performance-optimizing foundation.
In addition, discrete applications may be hosted within the network nodes, offering localized control, analysis, and other functions. Variations on this theme have been referred to as “Fog Computing”; but to me, it’s really about abstracting the network as simply a service to dynamically deliver computing power as and where needed.
Massive networks will be brittle
This addresses a key issue that I believe will plague the future of huge and distributed networks like the emerging Internet of Things. Big Data and Expert Systems will be brittle, since current network and computing techniques will simply be enlarged and not fundamentally re-engineered. And all of these assume humans’ ability to centrally architect both the network and the information structure.
But that simply won’t work with the massive decentralized networks that will come next – the human architects can’t know what they can’t know! In my book Rethinking the Internet of Things, I argue that a Publish/Discover/Subscribe architecture is the only approach that can flexibly scale. The network component of that architecture involves the distribution of networking functionality and computing function.
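A minimal sketch of the Publish/Discover/Subscribe idea follows. This is not the architecture from the book, just my own toy illustration: publishers announce topics, and subscribers discover matching topics at runtime instead of being wired to them in advance – which is what lets the structure scale without a human architect enumerating every connection. The `PDSBus` class and topic naming are assumptions.

```python
# Illustrative sketch (assumed design, not from the book): a minimal
# in-process Publish/Discover/Subscribe registry.
from collections import defaultdict

class PDSBus:
    def __init__(self):
        self.topics = {}               # topic name -> latest published value
        self.subs = defaultdict(list)  # topic name -> subscriber callbacks

    def publish(self, topic, value):
        """Announce a topic (implicitly) and push its latest value."""
        self.topics[topic] = value
        for callback in self.subs[topic]:
            callback(value)

    def discover(self, prefix):
        """Find topics by prefix – no prior knowledge of publishers needed."""
        return [t for t in self.topics if t.startswith(prefix)]

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

bus = PDSBus()
bus.publish("sensor/temp/node7", 21.5)           # publisher announces itself
for topic in bus.discover("sensor/temp/"):       # subscriber discovers it
    bus.subscribe(topic, lambda v: print("update:", v))
bus.publish("sensor/temp/node7", 22.0)           # prints: update: 22.0
```

The discovery step is the key difference from plain publish/subscribe: neither side needs a centrally defined schema of who publishes what.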
The downside of all of this distribution and flexibility is the loss of the ability to centrally define and dictate topology. With the radio networking intelligence within individual nodes (and/or their resident applications) driving decisions about topologies and routing, how can the system ever be optimized?
Dynamic roles in nature
As so often, the answer can be seen in nature – in this case in a typical ant colony. Each individual worker ant performs a specific role, biased by the environment, the Queen’s pheromones, and the actions and pheromones of the worker’s nestmates. If a sudden event destroys part of the colony, workers that have been gathering food will shift their activity to compensate – serving in the nursery, perhaps. As the situation returns to normal, these ants may independently shift back to their earlier activity. From the outside, all appears optimized for colony survival. But in fact, it is only nearly optimized. Each individual ant is making her own choices, though all are biased to contribute to the well-being of the colony as a whole.
All well and good for ants, but what of networks? It’s true: my wireless radio “robots” may shift roles in response to changing situations – serving as a backbone node in some situations and as a branch extension in others, and moving freely from role to role. This poses a challenge for humans observing and managing a network made up of so many independent networking and computing entities.
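The ant analogy can be made concrete with a toy role-selection function. Everything here is an assumption for illustration – the role names, the scoring weights, and the `bias` parameter – but it captures the essential point: each node chooses its role from purely local information plus an operator-supplied bias, with no central controller dictating topology.

```python
# Hedged sketch of locally biased role selection, loosely modeled on the
# ant-colony analogy above. All names and weights are hypothetical.
def choose_role(uplink_quality, downstream_count, bias=0.0):
    """Return 'backbone' or 'branch' from purely local observations.

    uplink_quality:   0.0 (dead link) to 1.0 (excellent), observed locally
    downstream_count: child nodes currently relying on this node
    bias:             operator-supplied nudge toward backbone duty
                      (the analog of the Queen's pheromones)
    """
    score = uplink_quality + 0.5 * downstream_count + bias
    return "backbone" if score >= 1.0 else "branch"

# A node with a strong uplink and several children takes the backbone role...
print(choose_role(uplink_quality=0.8, downstream_count=3))  # → backbone
# ...and demotes itself when the uplink degrades and its children re-parent.
print(choose_role(uplink_quality=0.2, downstream_count=0))  # → branch
```

As with the ants, no node computes a global optimum; the network as a whole is only nearly optimized, yet it adapts to disruption without any central decision-maker.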
The answer is modeling – but not esoteric theoretical modeling. As described in my previous blog post, I have recreated the exact Artificial Intelligence code of the radio “robots” in an x86 platform. Not a simulation, not an approximation, the exact same code. Network managers may observe the decisions and activity of the network on the desktop before uploading a tuned and preferentially biased configuration to the nodes themselves.
Distribution of networking and application intelligence. Publish/Discover/Subscribe information architecture. Near-optimal topology decisions. And modeling to reflect it all. This is the Fifth Generation of wireless mesh networking – and I am eager to work with others who share this vision. Reach me via my LinkedIn page or website.