Monday, October 29, 2018

The Fifth Generation

Although technological change is a constant, I do believe that in retrospect discrete waves or generations may be discerned. This has certainly been true in wireless mesh networking. In fact, a great deal of the early success I experienced with my company Meshdynamics was due to providing one of the first products to define a new generation.

To recap, the First Generation of wireless mesh products was typified by single-radio ad hoc systems that built peer-to-peer, self-forming, self-healing networks. MANET (Mobile Ad-hoc Networks) provided great flexibility, but typically low performance in terms of throughput and jitter (variation in delay). This is because a single radio provides service (connections to users) and backhaul (links between nodes).

The number of nodes possible in a First-Generation MANET network has grown over the years, with recent developments demonstrating test networks of more than 300 nodes. But the throughput and jitter limitations remain, since they are a simple matter of physics with a single radio. In effect, this is the equivalent of a hub-based wired network – collisions and contention will always be an issue.

The Second Generation arrived in the early 2000s in products like the first Tropos Networks offerings, which added a dedicated backhaul radio alongside the service radio. This combination helped segment the collision and contention domains, but still had severe limitations in mobile networks, in networks with multiple hops, and in networks with higher demands for speed, low jitter, and low latency (voice, streaming video, etc.).

The ugly truth

I made a splash in 2004 with a controversial on-line posting in Dailywireless.org pointing out that larger networks with higher data demands would experience rapidly diminishing performance as the hop count grew in First- or Second-Generation wireless mesh networks. The Ugly Truth about Mesh Networks didn’t earn me a lot of friends in the industry, but it did set the stage for Meshdynamics’ MD4000 Third-Generation wireless mesh nodes (and, to be sure, those of a few other suppliers).


Third-Generation wireless networks further segment the collision and contention domains by using multiple backhaul and service radios to create much more deterministic performance. It’s the equivalent of a true switch versus a hub in a wired topology. But the key topological difference from a wired switch is that Third-Generation wireless networks retain the flexibility to form, re-form, and adapt to any disruptions or interference. This provides the best of both worlds in many ways. Incorporating developing wireless standards and technologies such as MIMO (Multiple-Input and Multiple-Output) continues to increase the performance of Third-Generation systems.

My fundamental assumption, which proved correct, was that COTS radios would become cheaper and standardized – so that hardware was not an inherent limitation. But that would only be true if one could separate the networking techniques from the hardware of the radios. This is the direction I took, in essence creating independent radio “robots” within each node that could dynamically make decisions about routing and re-routing while maximizing performance. (Well, truly, nearly optimizing performance – more on that below). Abstracting radios (and wired links) as “channels” opened many productive paths of development, a number of which are embodied in the patent portfolio I am interested in sharing with interested System Developers and OEMs.
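To make the “channels” idea concrete, here is a minimal sketch of how a node “robot” might score and choose among heterogeneous links once radios and wired links are abstracted behind a common interface. All names, fields, and weightings here are my own illustrative assumptions, not Meshdynamics code.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    """A generic link -- radio or wired -- behind one abstraction.

    Fields are hypothetical: link_quality and load would come from
    live measurements in a real node.
    """
    name: str
    link_quality: float  # 0.0 (unusable) .. 1.0 (perfect)
    load: float          # fraction of capacity currently in use

    def score(self) -> float:
        # Prefer high quality and low load; the weighting is illustrative.
        return self.link_quality * (1.0 - self.load)

def pick_uplink(channels: list[Channel]) -> Channel:
    """Each node independently picks its best uplink -- no central controller."""
    return max(channels, key=Channel.score)

channels = [
    Channel("radio-5GHz-ch36", link_quality=0.9, load=0.6),
    Channel("radio-5GHz-ch149", link_quality=0.8, load=0.1),
    Channel("wired-eth0", link_quality=1.0, load=0.95),
]
best = pick_uplink(channels)
```

Because the decision logic never touches radio hardware directly, swapping in a new COTS radio (or a wired link) only means adding another `Channel` implementation – the point of the abstraction.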

A lost generation

The Fourth Generation, in my humble opinion, took a wrong turn – at least in dealing with the most dynamic and demanding applications. Attempting to optimize wireless performance with a centralized controller, as exemplified by Aironet Wireless (Cisco Systems) and Aruba Networks (Hewlett-Packard), is a bit of a canard. This approach may work well in optimizing topologies in a relatively stable Enterprise network, but cannot survive in a challenging mobile and/or multi-hop outdoor network that must contend with disruptions, interference, and constant change.

Network? Computing? Yes.

In fact, my latest work involves leaving behind the idea of the network as an entity separate from the computing environment. In my view, the Fifth Generation of wireless networking conceives of the network as a control system being used to provide services to client computing, control, and analysis systems. Building on the Third-Generation radio “robot” technologies provides the dynamic networking and performance-optimizing foundation.

In addition, discrete applications may be hosted within the network nodes, offering localized control, analysis, and other functions. Variations on this theme have been referred to as “Fog Computing”; but to me, it’s really about abstracting the network as simply a service to dynamically deliver computing power as and where needed.

Massive networks will be brittle

This addresses a key issue that I believe will plague the future of huge and distributed networks like the emerging Internet of Things. Big Data and Expert Systems will be brittle, since current network and computing techniques will simply be enlarged and not fundamentally re-engineered. And all of these assume humans’ ability to centrally architect both the network and the information structure.

But that simply won’t work with the massive decentralized networks that will come next – the human architects can’t know what they can’t know! In my book Rethinking the Internet of Things, I argue that a Publish/Discover/Subscribe architecture is the only approach that can flexibly scale. The network component of that architecture involves distributing both networking and computing functionality.
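A toy sketch may help illustrate why Publish/Discover/Subscribe needs no central architect: publishers announce topics as they appear, and subscribers discover them at runtime by pattern rather than by a pre-agreed schema. The class, topic names, and wildcard convention below are all hypothetical illustrations, not the architecture from the book.

```python
import fnmatch
from collections import defaultdict

class PubDiscoverSub:
    """Toy Publish/Discover/Subscribe fabric.

    No central registry of what exists is designed up front; subscribers
    discover publishers at runtime by shell-style topic patterns.
    """
    def __init__(self):
        self.topics = {}               # topic -> latest published value
        self.subs = defaultdict(list)  # pattern -> callbacks

    def publish(self, topic, value):
        self.topics[topic] = value
        for pattern, callbacks in self.subs.items():
            if fnmatch.fnmatch(topic, pattern):
                for cb in callbacks:
                    cb(topic, value)

    def discover(self, pattern):
        """Find topics already being published that match a pattern."""
        return [t for t in self.topics if fnmatch.fnmatch(t, pattern)]

    def subscribe(self, pattern, callback):
        self.subs[pattern].append(callback)

fabric = PubDiscoverSub()
fabric.publish("site1/pump3/temperature", 71.5)
fabric.publish("site1/pump3/vibration", 0.02)

seen = []
fabric.subscribe("*/temperature", lambda t, v: seen.append((t, v)))
fabric.publish("site2/pump9/temperature", 68.0)
```

The key property is that `site2/pump9` was never architected in advance; the temperature subscriber still hears from it the moment it begins publishing.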

The downside of all of this distribution and flexibility is the loss of the ability to centrally define and dictate topology. With the radio networking intelligence within individual nodes (and/or their resident applications) driving decisions about topologies and routing, how can the system ever be optimized?

Dynamic roles in nature

As so often, the answer can be seen in nature – in this case in a typical ant colony. Each individual worker ant performs a specific role, biased by the environment, the Queen’s pheromones, and the actions and pheromones of the worker’s nestmates. If a sudden event destroys part of the colony, workers that have been gathering food will shift their activity to compensate – serving in the nursery, perhaps. As the situation returns to normal, these ants may independently shift back to their earlier activity. From the outside, all appears optimized for colony survival. But in fact, it is only nearly optimized. Each individual ant is making her own choices, though all are biased to contribute to the well-being of the colony as a whole.

All well and good for ants, but what of networks? It’s true: my wireless radio “robots” may shift roles in response to changing conditions, serving as a backbone node in some situations and as a branch extension in others – and moving freely from role to role. This poses a challenge for humans observing and managing a network made up of so many independent networking and computing entities.
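As a thought experiment, a node’s role choice might be expressed as a small rule over purely local observations – the thresholds, parameter names, and two-role simplification below are my own assumptions, not the actual MD4000 logic.

```python
from enum import Enum

class Role(Enum):
    BACKBONE = "backbone"  # relays traffic for many downstream nodes
    BRANCH = "branch"      # serves local clients at the network edge

def choose_role(downstream_count: int, uplink_quality: float,
                backbone_threshold: int = 3) -> Role:
    """Each node re-evaluates its own role from local observations only.

    No central controller is consulted; the threshold values here are
    purely illustrative.
    """
    if downstream_count >= backbone_threshold and uplink_quality > 0.5:
        return Role.BACKBONE
    return Role.BRANCH

# A node relaying for five neighbors over a solid link acts as backbone;
# if its uplink degrades, it independently falls back to a branch role.
role_busy = choose_role(downstream_count=5, uplink_quality=0.8)
role_degraded = choose_role(downstream_count=5, uplink_quality=0.3)
```

Like the ants, no node “knows” the colony-wide topology; near-optimal structure emerges from many such local, biased decisions.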

The answer is modeling – but not esoteric theoretical modeling. As described in my previous blog post, I have recreated the exact Artificial Intelligence code of the radio “robots” in an x86 platform. Not a simulation, not an approximation, the exact same code. Network managers may observe the decisions and activity of the network on the desktop before uploading a tuned and preferentially biased configuration to the nodes themselves.

Distribution of networking and application intelligence. Publish/Discover/Subscribe information architecture. Near-optimal topology decisions. And modeling to reflect it all. This is the Fifth Generation of wireless mesh networking – and I am eager to work with others who share this vision. Reach me via my LinkedIn page or website.

Wednesday, October 3, 2018

Critics from the Past – Smarter Simulation

The previous three posts described work done, starting in 2002, in building mesh networking products and devising a truly scalable architecture for the Internet of Things (IoT). One other area of interest dates back to 1992. The problem I was working on then was auto-programming robots: leveraging Artificial Intelligence-based reasoning tools to critique and collaboratively flesh out skeletal robotic assembly strategies.

The “Critics” system developed at that time first modeled the (virtual) robot environment, and then progressively refined it with (real) robot sensor feedback.

Today, simulation systems model large networks, and I was tasked with developing one such system to validate a wireless mesh network design of ours. I could have focused on either of two things: modeling the Finite State Machine of the wireless “robot” interactions, or simulating the RF channel and link quality of the RF transmitters. I needed both to do this project justice.

So we developed a framework to run, on an x86 desktop platform, exactly the same firmware image that runs on the embedded devices. Only the RF characteristics must be estimated for the model (the physical nodes are constantly measuring and analyzing real-world RF). The rest operates in the simulation exactly as in the networking devices themselves – and vice-versa.



New processors, new radios, different environments – all may easily be accommodated in the simulation. Our OEM and System Developer partners are assured that their next-generation Structured Mesh™ products will still work. MeshSuite™ does not "model" their embedded device software – it is their embedded software, running on a desktop (x86) target platform.

Straddling both worlds 

In addition, the software may be moved back and forth between simulation and the physical network. It only needs to be tuned in one environment or the other – ideally, they move together in lockstep. This is engendered by the Abstraction Layers built into both the networking device software and the simulation – the core networking strategies and State Machines don’t change. Different channels, different network goals, different embedded user applications – all are accommodated.
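The Abstraction Layer idea can be sketched as an interface boundary: the state machine is written against a channel interface and never learns whether real RF hardware or a simulated channel sits behind it. Class names, message strings, and the quality threshold below are illustrative assumptions, not MeshSuite™ internals.

```python
class ChannelIface:
    """Abstraction Layer boundary: the state machine below never knows
    whether it is talking to real RF hardware or a desktop simulation."""
    def send(self, frame: str) -> None:
        raise NotImplementedError
    def link_quality(self) -> float:
        raise NotImplementedError

class SimulatedChannel(ChannelIface):
    """Desktop stand-in: quality is estimated, frames are just recorded."""
    def __init__(self, quality: float):
        self.quality = quality
        self.sent: list[str] = []
    def send(self, frame: str) -> None:
        self.sent.append(frame)
    def link_quality(self) -> float:
        return self.quality

class MeshStateMachine:
    """The same unmodified logic runs on the embedded target and on x86;
    only the ChannelIface implementation differs."""
    def __init__(self, channel: ChannelIface):
        self.channel = channel
    def step(self) -> str:
        if self.channel.link_quality() < 0.4:
            self.channel.send("PROBE_FOR_NEW_PARENT")
            return "rerouting"
        self.channel.send("HEARTBEAT")
        return "steady"

# On hardware, a real RF-backed ChannelIface would be injected instead.
sm = MeshStateMachine(SimulatedChannel(quality=0.2))
state = sm.step()
```

Because `MeshStateMachine` depends only on the interface, a configuration tuned against the simulated channel can move to the physical network (and back) without touching the core networking strategy.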

Combined with the autonomy of the nodes, this gives us something like a Mars Rover situation: we can create a mission-level strategic plan for network tree topology and count on devices to execute those tactics to accomplish it without further ado. 

And we can run multiple different scenarios in parallel in the simulation without reconfiguring physical nodes for every test, rapidly prototyping the environment under different conditions, much as a genetic algorithm explores variations in parallel.

Ants and Finite State Automata

Once again, this approach is informed by Nature. Individual ants operate on a very simple “If … Then … Else” decision tree, biased by pheromones from the Queen and their nestmates – and it scales. They are all driven by the same very simple “Operating System”, so the overall actions of the colony are nearly optimized from an external view.

By developing the simulation and network operating software identically and basing it on collaborating Critics, we achieve both resilience and high performance – and can scale up or down. We have only scratched the surface of what may be accomplished with an interested OEM or System Developer partner. Reach me via my LinkedIn page or website.