Monday, October 29, 2018

The Fifth Generation

Although technological change is a constant, I do believe that in retrospect discrete waves or generations may be discerned. This has certainly been true in wireless mesh networking. In fact, a great deal of the early success I experienced with my company Meshdynamics was due to providing one of the first products to define a new generation.

To recap, the First Generation of wireless mesh products was typified by single-radio ad hoc systems that built peer-to-peer, self-forming, self-healing networks. MANETs (Mobile Ad hoc Networks) provided great flexibility, but typically low performance in terms of throughput and jitter (variation in delay). This is because a single radio must provide both service (connections to users) and backhaul (links between nodes).

The number of nodes possible in a First-Generation MANET has grown over the years, with recent developments demonstrating more than 300 nodes in test deployments. But the throughput and jitter limitations remain, since they are a simple matter of physics with a single radio. In effect, this is the equivalent of a hub-based wired network – collisions and contention will always be an issue.

The Second Generation arrived in the early 2000s in products like the first Tropos Networks offerings, which added a dedicated backhaul radio alongside the service radio. This combination helped segment the collision and contention domains, but severe limitations remained in mobile networks, in multi-hop deployments, and in applications with higher demands for speed, low jitter, and low latency (voice, streaming video, etc.).

The ugly truth

I made a splash in 2004 with a controversial online posting on Dailywireless.org pointing out that larger networks with higher data demands would experience rapidly diminishing performance as the hop count grew in First- or Second-Generation wireless mesh networks. The Ugly Truth about Mesh Networks didn’t earn me a lot of friends in the industry, but it did set the stage for Meshdynamics’ MD4000 Third-Generation wireless mesh nodes (and for a few other suppliers, to be sure).


Third-Generation wireless networks further segment the collision and contention domains by using multiple backhaul and service radios to create much more deterministic performance. It’s the equivalent of a true switch versus a hub in a wired topology. But the key topological difference from a wired switch is that Third-Generation wireless networks retain the flexibility to form, re-form, and adapt to any disruptions or interference. This provides the best of both worlds in many ways. Incorporating developing wireless standards and technologies such as MIMO (Multiple-Input and Multiple-Output) continues to increase the performance of Third-Generation systems.

My fundamental assumption, which proved correct, was that COTS radios would become cheaper and standardized – so that hardware was not an inherent limitation. But that would only be true if one could separate the networking techniques from the hardware of the radios. This is the direction I took, in essence creating independent radio “robots” within each node that could dynamically make decisions about routing and re-routing while maximizing performance. (More precisely, nearly optimizing performance – more on that below.) Abstracting radios (and wired links) as “channels” opened many productive paths of development, a number of which are embodied in the patent portfolio I am interested in sharing with System Developers and OEMs.
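To make the channel abstraction concrete, here is a minimal Python sketch of how a per-node radio “robot” might rank abstract channels and locally re-select its uplink. The class names and the scoring rule are illustrative only, not the shipping Meshdynamics algorithm.

```python
# Hypothetical sketch of "radios as channels": names (Channel, RadioRobot)
# and the scoring rule are illustrative, not the production algorithm.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Channel:
    """An abstract link: an 802.11 radio, a wired port, anything with a peer."""
    peer: str            # node reachable over this channel
    link_quality: float  # 0.0 (unusable) .. 1.0 (perfect), measured continuously
    hops_to_root: int    # path length advertised by the peer

class RadioRobot:
    """Per-node decision logic: chooses (and re-chooses) the best uplink."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.uplink: Optional[Channel] = None

    def evaluate(self, candidates: List[Channel]) -> Channel:
        # A nearly-optimal, locally made choice: weigh link quality against depth.
        best = max(candidates, key=lambda c: c.link_quality / (1 + c.hops_to_root))
        if best is not self.uplink:
            self.uplink = best   # re-route without consulting a central controller
        return best

robot = RadioRobot("node-7")
print(robot.evaluate([Channel("root", 0.6, 0), Channel("relay-3", 0.9, 1)]).peer)
```

The point of the abstraction is that nothing above the Channel layer cares whether the link is an 802.11 radio, a different band, or a wired port.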

A lost generation

The Fourth Generation, in my humble opinion, took a wrong turn – at least in dealing with the most dynamic and demanding applications. Attempting to optimize wireless performance with a centralized controller, as exemplified by Aironet Wireless (Cisco Systems) and Aruba Networks (Hewlett-Packard), is a bit of a canard. This approach may work well in optimizing topologies in a relatively stable Enterprise network, but cannot survive in a challenging mobile and/or multi-hop outdoor network that must contend with disruptions, interference, and constant change.

Network? Computing? Yes.

In fact, my latest work involves leaving behind the idea of the network as an entity separate from the computing environment. In my view, the Fifth Generation of wireless networking conceives of the network as a control system being used to provide services to client computing, control, and analysis systems. Building on the Third-Generation radio “robot” technologies provides the dynamic networking and performance-optimizing foundation.

In addition, discrete applications may be hosted within the network nodes, offering localized control, analysis, and other functions. Variations on this theme have been referred to as “Fog Computing”; but to me, it’s really about abstracting the network as simply a service to dynamically deliver computing power as and where needed.

Massive networks will be brittle

This addresses a key issue that I believe will plague the future of huge and distributed networks like the emerging Internet of Things. Big Data and Expert Systems will be brittle, since current network and computing techniques will simply be enlarged and not fundamentally re-engineered. And all of these assume humans’ ability to centrally architect both the network and the information structure.

But that simply won’t work with the massive decentralized networks that will come next – the human architects can’t know what they can’t know! In my book Rethinking the Internet of Things, I argue that a Publish/Discover/Subscribe architecture is the only approach that can flexibly scale. The network component of that architecture involves the distribution of networking functionality and computing function.
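As a rough sketch of what Publish/Discover/Subscribe means in practice, consider the Python fragment below: sources announce the attributes they offer, and consumers discover them by attribute rather than by any pre-configured address. The Broker class and attribute strings are my illustration for this post, not a specific product API.

```python
# Minimal Publish/Discover/Subscribe sketch (hypothetical names, not a product API).
from collections import defaultdict

class Broker:
    def __init__(self):
        self.publications = defaultdict(list)   # attribute -> list of sources

    def publish(self, source, attributes):
        # A source announces what it offers; nobody has to catalog it in advance.
        for attr in attributes:
            self.publications[attr].append(source)

    def discover(self, attribute):
        # Consumers find sources by what they publish, not by address.
        return list(self.publications[attribute])

broker = Broker()
broker.publish("sensor-42", ["temperature", "zone:loading-dock"])
broker.publish("sensor-77", ["temperature", "zone:cold-storage"])

# A consumer that never knew either sensor existed finds both and subscribes.
for source in broker.discover("temperature"):
    print("subscribe to", source)
```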

The downside of all of this distribution and flexibility is the loss of the ability to centrally define and dictate topology. With the radio networking intelligence within individual nodes (and/or their resident applications) driving decisions about topologies and routing, how can the system ever be optimized?

Dynamic roles in nature

As is so often the case, the answer can be seen in nature – in this case in a typical ant colony. Each individual worker ant performs a specific role, biased by the environment, the Queen’s pheromones, and the actions and pheromones of the worker’s nestmates. If a sudden event destroys part of the colony, workers that have been gathering food will shift their activity to compensate – serving in the nursery, perhaps. As the situation returns to normal, these ants may independently shift back to their earlier activity. From the outside, all appears optimized for colony survival. But in fact, it is only nearly optimized. Each individual ant is making her own choices, though all are biased to contribute to the well-being of the colony as a whole.

All well and good for ants, but what of networks? It’s true; my wireless radio “robots” may shift roles in response to changing situations: serving as a backbone node in some situations and as a branch extension in others – and moving freely from role to role. This poses a challenge for humans observing and managing a network made up of so many independent networking and computing entities.
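A toy example of the kind of locally biased role decision I mean is sketched below; the thresholds and the bias knob are illustrative values only, not anything taken from a shipping product.

```python
# Hedged sketch of role shifting: a node decides locally whether to act as a
# backbone relay or a branch extension, nudged by an operator-supplied bias.

def choose_role(uplink_quality, downstream_nodes, backbone_bias=0.5):
    """Return 'backbone' or 'branch' from purely local information."""
    # A strong uplink plus many dependents pushes the node toward relaying.
    load = min(downstream_nodes / 10.0, 1.0)
    score = backbone_bias * uplink_quality + (1 - backbone_bias) * load
    return "backbone" if score > 0.5 else "branch"

# After a disruption removes nearby relays, the same node may flip roles.
print(choose_role(uplink_quality=0.9, downstream_nodes=8))   # -> backbone
print(choose_role(uplink_quality=0.3, downstream_nodes=1))   # -> branch
```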

The answer is modeling – but not esoteric theoretical modeling. As described in my previous blog post, I have recreated the exact Artificial Intelligence code of the radio “robots” on an x86 platform. Not a simulation, not an approximation, the exact same code. Network managers may observe the decisions and activity of the network on the desktop before uploading a tuned and preferentially biased configuration to the nodes themselves.

Distribution of networking and application intelligence. Publish/Discover/Subscribe information architecture. Near-optimal topology decisions. And modeling to reflect it all. This is the Fifth Generation of wireless mesh networking – and I am eager to work with others who share this vision. Reach me via my LinkedIn page or website.

Wednesday, October 3, 2018

Critics from the Past – Smarter Simulation

The previous three posts described work done, starting in 2002, in building mesh networking products and devising a truly scalable architecture for the Internet of Things (IoT). One other area of interest dates back to 1992. The problem I was working on then was auto-programming robots: leveraging Artificial Intelligence-based reasoning tools to critique and collaboratively flesh out skeletal robotic assembly strategies.

The “Critics” system developed at that time first modeled the (virtual) robot environment, and then progressively refined it with (real) robot sensor feedback.

Today, simulation systems model large networks, and I was tasked with developing one such system to validate a wireless mesh network design of ours. There were two things I could have focused on: modeling the Finite State Machine of the wireless “robot” interactions, or simulating the RF channel and link quality of the RF transmitters. I needed both to do this project justice.

So we developed a framework to run, on an x86 desktop platform, exactly the same firmware image that runs on the embedded devices. Only the RF characteristics must be estimated for the model (the physical nodes are constantly measuring and analyzing real-world RF). The rest operates in the simulation exactly as in the networking devices themselves – and vice versa.
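Conceptually, the split looks something like the sketch below: the decision code is identical in both environments, and only the RF layer is swapped between a measured implementation on the node and an estimated one on the desktop. The class names and the crude channel model here are placeholders for illustration, not the actual MeshSuite code.

```python
# Sketch of "only the RF layer differs" between device and desktop.
# Class and method names are illustrative, not the actual MeshSuite code.
import random

class MeasuredRF:
    """On the embedded node: link quality comes from real radio measurements."""
    def link_quality(self, peer):
        raise NotImplementedError("would query the physical radio driver")

class EstimatedRF:
    """On the x86 desktop: link quality is estimated from a channel model."""
    def __init__(self, seed=1):
        self.rng = random.Random(seed)
    def link_quality(self, peer):
        return round(self.rng.uniform(0.4, 1.0), 2)   # crude stand-in model

def pick_uplink(rf, peers):
    # The decision logic never changes; only the `rf` object passed in differs.
    return max(peers, key=rf.link_quality)

print(pick_uplink(EstimatedRF(), ["node-a", "node-b", "node-c"]))
```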



New processors, new radios, different environments – all may easily be accommodated in the simulation. Our OEM and System Developer partners are assured that their next-generation Structured Mesh™ products will still work. MeshSuite™ does not "model" their embedded device software – it is their embedded software, running on a desktop (x86) target platform.

Straddling both worlds 

In addition, the software may be moved back and forth between simulation and the physical network. It only needs to be tuned in one environment or the other – ideally, they move together in lockstep. This is enabled by the Abstraction Layers built into both the networking device software and the simulation – the core networking strategies and State Machines don’t change. Different channels, different network goals, different embedded user applications – all are accommodated.

Combined with the autonomy of the nodes, this gives us something like a Mars Rover situation: we can create a mission-level strategic plan for network tree topology and count on devices to execute those tactics to accomplish it without further ado. 

And we can run multiple different scenarios in parallel in the simulation without reconfiguring physical nodes for every test, rapidly prototyping the environment under different conditions – as in genetic algorithms.

Ants and Finite State Automata

Once again, this approach is informed by Nature. Individual ants operate on a very simple “If … Then … Else” decision tree, biased by pheromones from the Queen and their nest mates – and it scales. They are all driven by the same very simple “Operating System”, so the overall actions of the colony are nearly optimized from an external view.

By developing the simulation and network operating software identically and basing it on collaborating Critics, we achieve both resilience and high performance – and can scale up or down. We have only scratched the surface of what may be accomplished with an interested OEM or System Developer partner. Reach me via my LinkedIn page or website.

Friday, September 28, 2018

Saying Controversial Things – Again!

When I wrote Rethinking the Internet of Things, I expected one proposal to be controversial. And it was.

I stated that the myriad devices making up the emerging Internet of Things (IoT) will be too dumb, cheap, and uncritical to justify the cost and power of equipping them all with IPv6. Howls of derision resulted.

But I have been through this before. Back in 2004 I wrote another controversial piece about the ugly truths of (single-radio) wireless mesh. Today, almost all mesh network products use (at least) dual-radio wireless switch stacks – even in home WiFi networks.

At the edge of industrial and other networks, the vast majority of devices will speak and listen in tiny bits of data – like ants. I developed the concept of "Chirp" networks – communications using tiny bits of data with minimal framing. And like Nature's lightweight pollen in springtime, no guaranteed delivery, either.

Digital “pollen” can be non-unique as well

These Chirps are self-classified. Chirps identify themselves in both public and private ways, to allow integration of data from a wide variety of sources. This identification might include type (moisture sensor, door lock, etc.), location, and so on.
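To give a feel for how small a chirp can be, here is a hypothetical framing sketch: one byte of public classification, one byte of private marker, a length byte, and the payload. The field sizes are my illustration for this post, not a published specification.

```python
# Hypothetical chirp framing: a few bytes of self-classification plus payload,
# no globally unique device address and no delivery guarantee.
import struct

def make_chirp(public_type, private_marker, payload: bytes) -> bytes:
    # 1 byte public class (e.g. 0x11 = moisture sensor), 1 byte private marker,
    # 1 byte payload length, then the payload itself.
    return struct.pack("BBB", public_type, private_marker, len(payload)) + payload

def parse_chirp(frame: bytes):
    public_type, private_marker, length = struct.unpack("BBB", frame[:3])
    return public_type, private_marker, frame[3:3 + length]

chirp = make_chirp(0x11, 0x02, b"\x1f")      # "moisture sensor, reading 31"
print(len(chirp), parse_chirp(chirp))        # 4 bytes total on the wire
```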



Because the size of the future IoT is well beyond our current comprehension, it won’t be possible for humans (or even machines!) to catalog every device or pre-configure preferred data sources. Instead, Publish/Discover/Subscribe will reveal useful data that no one could have known of or predicted.

At the edges of the network, the vast numerical majority of devices will simply speak and listen in tiny bits of data. And they will be designed with a basic trust in an IoT universe that propagates these messages to some sort of integration point, where the data may be interpreted for human consumption.

Also controversial – Chirps need not be uniquely addressed – nor do they need IPv6

Nobody confuses my grandfather Francis daCosta with me. Our lineage paths differ in our family's network tree. All that is necessary is the context of where we come from and where we have been.

Routing – back to trees

So. If the Chirps are so simple and non-uniquely addressed, how will big data integrators ever make sense of the cacophony?

The routing and other network intelligence come from a device I’ve called a Propagator in the Chirp-related patents and the book. It’s a straightforward derivative of our mesh node. The Propagators add the context of the data (location, lineage, et al) and the intelligence to the transmission (multicast bundling, pruning, and routing; addressing and IPv6 packetization; management; control loops; the list goes on …). Economies of scale stem from placing CPU cycles on mesh nodes, thus simplifying end devices. Rather than build a bulky IPv6 packet for every tiny squib of data, the Propagators spoof, trim, and package as necessary.
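In rough code terms, a Propagator behaves something like the sketch below: it accumulates chirps from end devices, attaches shared context (location, lineage), and forwards one bundled IPv6/UDP datagram toward an integrator. The JSON bundle format and the loopback address are placeholders for illustration, not the actual Propagator implementation.

```python
# Sketch of a Propagator bundling chirps uphill toward an integrator.
# The bundle format and the integrator address are placeholders.
import json, socket

class Propagator:
    def __init__(self, location, lineage, integrator=("::1", 9999)):
        self.location = location     # context added on behalf of simple end devices
        self.lineage = lineage       # path taken so far (tree ancestry)
        self.integrator = integrator
        self.buffer = []

    def receive_chirp(self, chirp_bytes: bytes):
        self.buffer.append(chirp_bytes.hex())

    def flush(self):
        # One IPv6/UDP packet carries many chirps plus their shared context,
        # instead of one bulky IPv6 packet per tiny squib of data.
        bundle = json.dumps({"loc": self.location,
                             "lineage": self.lineage,
                             "chirps": self.buffer}).encode()
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        sock.sendto(bundle, self.integrator)
        sock.close()
        self.buffer = []

p = Propagator(location="conveyor-3", lineage=["root", "propagator-7"])
p.receive_chirp(b"\x11\x02\x01\x1f")
p.flush()   # many tiny chirps, one packet with context attached
```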



In this three-tiered architecture (Integrators, Propagators, Devices) it is critical that deterministic paths are available to link them. Happily, the overlying structured tree topology I discussed earlier (and developed in 2002 for MeshDynamics mesh nodes) works perfectly well for Propagators. Nature tells us that trees scale, connecting trillions and trillions of cells in a networked path (leaves-to-roots and vice-versa) that doesn’t burden any cell with management of the whole.

I am interested in collaborating with larger, sophisticated OEMs and System Developers who share this vision of the full potential of a massively-scaled public/private Internet of Things. If these ideas intrigue you, let’s talk. Reach me via my LinkedIn page or website.

Thursday, September 20, 2018

Making Tree Topologies Dynamic

As I noted in my previous post, after years of developing higher-performance wireless networking products my focus is shifting in two ways. First, I am orienting my efforts toward working with OEMs, System Developers, System Integrators, and major agencies to integrate my software into their "things".

Second, I am refining my algorithms based on lessons from nature shared by my friend Byron Henderson, a marine biologist by training, which we began to explore in our book Rethinking the Internet of Things.

Trees and network switches

One of these lessons from nature involves trees. In 2002, my robotics and control system experience suggested that a tree-like branching structure would be the way to create a deterministic network architecture across a physical mesh of inter-working wireless nodes. Essentially, this is creating a switch-like topology from a physical mesh. But I also wanted networks to converge -- and more importantly, re-converge -- quickly and with more intention than conventional Spanning Tree Protocols allow. This required placing more independent intelligence in each node, as I’ll describe later.



With the recent emergence of new networking requirements, such as drone swarms and other mobile applications, I have been reflecting on trees in nature -- again. Tree-like branching structures have evolved multiple times and in varied lineages -- in organisms as diverse as giant oaks and the marine Gorgonian soft corals, colonial animals of the Order Alcyonacea.


Trees are a mathematically efficient way to organize living tissue (and other things) to maximize spread and coverage from a fixed connection (such as a root).

What if the branches could move?

But a tree-like structure has limitations in adapting to rapid and unpredictable changes in environments. We’ve all seen trees and shrubs growing at odd angles to try to reach the sunlight when another tree or building shades the plant. But a tree can only adapt to environmental changes to a limited degree, as the branching structure is already set. It’s not possible for the organism to disconnect some branches and reconnect them elsewhere to optimize for the current situation.

Returning to networking, some of my recent algorithm work has aimed to enhance the efficiency and “tune-ability” of tree topologies, with even more rapid re-convergence as nodes move and/or the environment changes. In these algorithms I have wanted to minimize latency and (especially) jitter to support real-time Publish/Subscribe applications.

So I built in the capacity to bias and tune the network topology to optimize for a flexible variety of factors, including hop count, link cost, bandwidth, and end-to-end delay. But the fundamental architectural decision I made early on -- the one enabling these refinements -- is distributing the networking intelligence to every node. In essence, I freed up every “branch” to make its own decision on how, where, and when to connect -- or reconnect.

This is accomplished by having each node maintain an awareness of its adjacent nodes and potential connections (usually radios and channels). The tuning and biasing take place on top of this foundation, which nicely separates the two functions I wish to optimize: rapid convergence for immediate adaptation, and sophisticated capacity for performance optimization.
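To illustrate the biasing idea, a node might score candidate parents in the tree with a weighted cost like the one below and re-parent to the cheapest. The weights and the cost formula are examples of the concept only, not the production algorithm.

```python
# Illustrative parent-selection bias: weights are tuning knobs, lower cost wins.

def path_cost(candidate, weights):
    """Combine locally measured metrics into a single comparable cost."""
    return (weights["hops"]      * candidate["hop_count"] +
            weights["link"]      * candidate["link_cost"] +
            weights["delay"]     * candidate["end_to_end_delay_ms"] -
            weights["bandwidth"] * candidate["bandwidth_mbps"])

# Bias toward low delay/jitter for a real-time Publish/Subscribe deployment.
weights = {"hops": 1.0, "link": 0.5, "delay": 2.0, "bandwidth": 0.1}

candidates = [
    {"name": "A", "hop_count": 2, "link_cost": 3, "end_to_end_delay_ms": 4, "bandwidth_mbps": 50},
    {"name": "B", "hop_count": 1, "link_cost": 6, "end_to_end_delay_ms": 9, "bandwidth_mbps": 80},
]
parent = min(candidates, key=lambda c: path_cost(c, weights))
print("re-parent to", parent["name"])   # -> A under this bias
```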

If the applications you are working on demand real-time performance in complex networking environments, I look forward to discussing how we might work together -- perhaps in the shade of a tree. 😁

Please connect via LinkedIn or my website -- and thanks for your time.

Tuesday, September 11, 2018

Change is Natural, Collaboration is Next

After a long absence, I am returning to this blog because of some changes in personal philosophy and developments in the emerging Internet of Things and Networking spaces. I have been developing wireless mesh networking algorithms, software, and products for 16 years as MeshDynamics, drawing on my lifetime experience in real time embedded systems, robotics, and wireless networking to create technologies uniquely suited to demanding outdoor environments.

MeshDynamics software, for example, is especially well-suited to the most demanding outdoor environments requiring the highest performance over many hops, in motion, and/or for high throughput and low-latency applications like voice, video, and real-time command and control. Because my networking software is abstracted and isolated from the radio and other hardware, it may be optimized for use with any combination of radios, frequencies, and device configurations. Much of my software has been re-written based on open-source packages like OpenWRT to speed integration.

Integrating networking into others’ products

Post-2014, we shifted our emphasis from building products to providing source code licenses and working primarily with OEMs, Embedded Software Developers, System Developers, and major agencies to integrate our software into their devices and solutions. To this end, MeshDynamics has created an open-source-based suite of software modules, source code included, intended to be incorporated into "things": robot drone swarms, mesh nodes, Internet of Things hubs, etc.



We are now seeking partners ready to test this source code base for a fit with their own offerings, just as organizations as diverse as Sharp, the PGA Tour, mining OEMs, and the US Navy have used the software, now and in the past.

Emulating Nature's networks

Back in 2002, when I began architecting "wireless" switch stacks, I was developing algorithms based on my judgment that radios would become cheaper and that enterprise networking environments would become more complex. The last mile needed more than single-radio, MANET-based access points and obsolete hub-like mesh architectures.

This has proven true, but over the last few years I have realized that scaling to the large numbers and dynamic network configurations required by swarms of drones or self-driving cars, etc., represents an unprecedented challenge.

Unprecedented in traditional wireless networking -- but not in nature.

So in recent years I have been using the communication principles that have emerged over millennia in nature to inform my networking development. Some of this thinking is reflected in the book Rethinking the Internet of Things, which I wrote with the help of long-time friend Byron Henderson. We drew on our combined backgrounds in networking, robotics, embedded systems, and biology to describe an architecture for the IoT that builds on lessons from the way nature deals with copious tiny “signals” -- from pollen and birdsong on up. Industry interactions and developments in drone technology and Artificial/Augmented Intelligence are causing me to expand the biological approach to network topology once again.

Directed propagation

Metaphors by themselves can be misleading, but building on actual principles developed by nature over millions of years of evolution yields insights. The key driver of all biological existence is propagation – placing as many of an individual’s genes as possible into future generations. In that process, the environment exerts a pressure through natural selection that leads to the best-adapted individuals leaving more offspring. This creates the illusion of progress in evolution, as successive generations become better adapted to conditions over hundreds of thousands or millions of years. Sterile hybrids, such as mules, leave no offspring and thus are not refined by this environmental pressure.

Robotic drone swarms have a similar drive to propagate inherent in their design and programming. But this propagation is of data and information related to their mission. Adapting to their local physical and radio environments, they only survive and carry on their mission through communication (messaging) – with other devices in the swarm, and with command, control, and big data analysis functions at some distance.

Interconnected drone robots may adapt more quickly to their environment than living beings.
So the “generations” pass in seconds rather than millennia – but only if the communications paths are persistent and resilient, even reforming after interruptions. And the devices may learn and pass on information from the environment – a process mirroring human cultural evolution, which proceeds much more quickly than biological evolution can.

This concept of swarms of adaptive robotic individuals communicating wirelessly in a rapidly evolving topology is top-of-mind for me now as I develop new networking algorithms for use by OEMs, agencies, and System Developers. Demanding outdoor environments requiring mobility, low latency, large hop counts (as in mines, tunnels, or a long string of drones), and high throughput are the most likely to need these developing capabilities.

A delicate balance

A delicate balance is needed between individual autonomy and learning on one hand, and the ability to externally “bias” the network for better efficiency of aggregated devices on the other. Biological evolution similarly acts on individuals – but aggregations of individuals may better survive through common adaptations. This is seen in human society as well as in “super-organisms” such as ants and bees. Networks not inherently driven to learn, propagate, and evolve are the mules of the wireless world – and thus have no future.

Networking technologies have evolved: from the strict topologies of Token Ring to the shared backplanes of hubs, to dedicated switched ports, and now to wireless.

I believe that the next phase will be driven by independent but interconnected machines responding to environmental pressures and the "mission" bias to rapidly evolve their internal networking topology.

I am interested in talking with those who are intrigued by these ideas and wish to work together on developing solutions for dynamic networking environments of today and the future. Contact me via my website to start a conversation.